The present disclosure generally relates to the field of immersive technology applications in medical devices, and more particularly, the disclosure relates to a virtual reality system for a virtual robotic surgery environment in medical applications.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Robotic assisted surgical systems have been adopted worldwide to gradually replace conventional surgical procedures such as open surgery and laparoscopic surgical procedures. The robotic assisted surgery offers various benefits to a patient during surgery and during post-surgery recovery time. The robotic assisted surgery equally offers numerous benefits to a surgeon in terms of enhancing the surgeon's ability to precisely perform surgery, reducing fatigue, and providing a magnified, clear three-dimensional (3D) vision of a surgical site. Further, in a robotic assisted surgery, the surgeon typically operates a hand controller/master controller/surgeon input device/joystick at a surgeon console system to seamlessly receive and transfer complex actions performed by him/her, giving the perception that he/she is directly articulating the surgical tools/surgical instruments to perform the surgery. The surgeon operating the surgeon console system may be located at a distance from a surgical site or may be located within an operating theatre where the patient is being operated on.
The robotic assisted surgical systems may comprise multiple robotic arms aiding in conducting robotic assisted surgeries. The robotic assisted surgical system utilizes a sterile adapter/sterile barrier to separate a non-sterile section of the multiple robotic arms from the mandatorily sterile surgical tools/surgical instruments attached to one end of the multiple robotic arms. The sterile adapter/sterile barrier may include a sterile plastic drape that envelops the multiple robotic arms, and the sterile adapter/sterile barrier operably engages with the sterile surgical tools/surgical instruments in the sterile field.
For performing robotic assisted surgeries, training is required to be provided to surgeons, operation theater (OT) staff, and other assistants who directly and indirectly participate in these surgeries. One of the main challenges is that performing live surgery is not desirable unless the surgeon, OT staff, and others are completely familiar with, and trained on, all features and functions of the robotic surgical system. Another challenge is that gaining familiarity with the different features and functions of the robotic surgical system takes considerable time. Also, such training requires an ample investment of time, the creation of training modules, and a physical trainer.
Further, because the diagnostic scans of a patient are in 2D format, they are difficult to manipulate when diagnosing any anomalies. Moreover, surgeons may face difficulty in identifying the exact position and orientation of an organ during robotic assisted surgeries.
In light of the aforementioned challenges, there is a need for a solution that provides training to surgeons and OT staff for performing robotic assisted surgeries and resolves the issues described above.
Some or all of the above-mentioned problems related to providing training to the surgeons and OT staff are proposed to be addressed by certain embodiments of the present disclosure.
According to an aspect of the invention, there is disclosed a virtual reality system for simulating a virtual robotic surgery environment comprising one or more virtual robotic arms each coupled to a virtual surgical instrument at its distal end, a virtual operating table, and a virtual patient lying on top of the virtual operating table, whereby the one or more virtual robotic arms are arranged along the virtual operating table, the system comprising: an input device configured to receive an input from an operator; and a processor coupled to the input device and configured to: extract relevant data based on the received input, from a database stored on a server operably connected to the processor, wherein the server is configured to store a database including at least one of a diagnostic scan and patient details for one or more patients or a virtual tutorial for one or more robotic surgical procedures; render the relevant data on a stereoscopic display coupled to the processor; and manipulate the relevant data based on another input received from the operator and render the manipulated data on the stereoscopic display, to create a virtual robotic surgery environment.
According to another aspect of the invention, there is disclosed a method for simulating a virtual robotic surgery environment comprising one or more virtual robotic arms each coupled to a virtual surgical instrument at its distal end, a virtual operating table, and a virtual patient lying on top of the virtual operating table, whereby the one or more virtual robotic arms are arranged along the virtual operating table, the method comprising: receiving, using an input device, an input from an operator; storing, using a server, in a database at least one of a diagnostic scan and patient details for one or more patients or a virtual tutorial for one or more robotic surgical procedures; extracting, using a processor, relevant data based on the received input, from the database stored on the server; rendering, using the processor, the relevant data on a stereoscopic display coupled to the processor; manipulating, using the processor, the relevant data based on another input received from the operator; and rendering, using the processor, the manipulated data on the stereoscopic display.
According to an embodiment of the invention, the input device comprises at least one hand controller for each hand or any means to receive hand gestures of the operator.
According to another embodiment of the invention, the input device can be tracked using at least one of infra-red tracking, optical tracking using image processing, radio frequency tracking, or IMU sensor tracking.
According to yet another embodiment of the invention, the server comprises at least one of a local database or a cloud-based database.
According to yet another embodiment of the invention, each of the diagnostic scan and patient details of one or more patients and the virtual tutorial for one or more robotic surgical procedures comprises 2D/3D images and text.
According to yet another embodiment of the invention, the server is further configured to convert a 2D diagnostic scan into a 3D model using a segmentation logic.
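The disclosure does not specify the segmentation logic itself, but the idea of converting a stack of 2D diagnostic slices into a 3D model can be sketched with a crude threshold segmentation. Everything here (the function name, the synthetic "scan", the threshold value) is an illustrative assumption, not the disclosed implementation:

```python
import numpy as np

def segment_to_voxels(slices, threshold):
    """Stack 2D scan slices into a volume and threshold-segment it into a
    3D point cloud of voxel coordinates. This is a stand-in for the
    disclosed 'segmentation logic', which the disclosure does not specify."""
    volume = np.stack(slices, axis=0)   # shape: (num_slices, height, width)
    mask = volume > threshold           # binary segmentation of bright tissue
    return np.argwhere(mask)            # (N, 3) coordinates of segmented voxels

# Synthetic scan: three 4x4 slices with a bright 2x2 "organ" in the middle slice.
slices = [np.zeros((4, 4)) for _ in range(3)]
slices[1][1:3, 1:3] = 255.0
voxels = segment_to_voxels(slices, threshold=128)
print(voxels.shape)  # (4, 3): four bright voxels, each as (slice, row, col)
```

In practice a production pipeline would use anatomical segmentation and surface extraction (e.g. marching cubes) rather than a single global threshold, but the data flow from 2D slices to a 3D representation is the same.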
According to yet another embodiment of the invention, storing the database including the diagnostic scan and patient details comprises: creating a database of a diagnostic scan and patient details of one or more patients; and modifying the database of one or more patients.
According to yet another embodiment of the invention, the diagnostic scan comprises various medical scans, including but not limited to an MRI scan, a CT scan, and the like, of one or more patients.
According to yet another embodiment of the invention, the patient details comprise at least one of a name, age, sex, or medical history of one or more patients.
According to yet another embodiment of the invention, storing the database including a virtual tutorial for one or more robotic surgical procedures comprises: creating a database of virtual tutorials for one or more robotic surgical procedures using one or more virtual surgical instruments in a virtual robotic surgery environment; and modifying the database of virtual tutorials.
According to yet another embodiment of the invention, the virtual tutorials of one or more robotic surgical procedures can be used to provide training to healthcare professionals.
According to yet another embodiment of the invention, extracting the relevant data from the stored database on the server comprises fetching at least one of a 3D model of diagnostic scan/patient details of one or more patients, or a virtual tutorial for one or more robotic surgical procedures, based on the received input.
According to yet another embodiment of the invention, the relevant data comprises an augmented 3D model or a 3D holographic projection, related to at least one of a diagnostic scan and patient details of one or more patients, or a virtual tutorial for one or more robotic surgical procedures.
According to yet another embodiment of the invention, rendering the relevant data comprises displaying the augmented 3D model on a stereoscopic display.
According to yet another embodiment of the invention, the rendered image can be projected on an external display.
According to yet another embodiment of the invention, the stereoscopic display is coupled to a virtual reality headset.
According to yet another embodiment of the invention, the 3D models of diagnostic scan and patient details of one or more patients can be stored on the server for safekeeping and reference.
According to yet another embodiment of the invention, the 3D model of a diagnostic scan can be manipulated to diagnose any anomalies in the diagnostic scan of one or more patients.
According to yet another embodiment of the invention, the 3D models of diagnostic scan and patient details of one or more patients can be used for training healthcare professionals.
According to yet another embodiment of the invention, the manipulated data comprises a modified version of the relevant data, generated based on the received input from the operator.
According to yet another embodiment of the invention, rendering the relevant data of a virtual tutorial for a selected robotic surgical procedure, based on the received input, comprises the following steps: positioning the virtual patient on the virtual operating table; placing virtual ports on the virtual patient; draping the virtual robotic arms; docking the virtual robotic arms in the virtual patient around the virtual operating table; selecting one or more virtual surgical instruments; practicing the selected surgical procedure using the virtual surgical instruments; undocking and storing the virtual robotic arms; practicing quick undocking of the virtual robotic arms in case of any adverse situation; and cleaning and sterilizing the virtual surgical instruments after the virtual surgical procedure.
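The ordered tutorial steps above can be sketched as a simple data structure that a tutorial engine might use to enforce the prescribed sequence. The step names and the `TutorialSession` class are illustrative assumptions, not part of the disclosure:

```python
# Ordered steps of the virtual tutorial workflow (hypothetical identifiers).
TUTORIAL_STEPS = [
    "position_virtual_patient",
    "place_virtual_ports",
    "drape_virtual_robotic_arms",
    "dock_virtual_robotic_arms",
    "select_virtual_instruments",
    "practice_selected_procedure",
    "undock_and_store_arms",
    "practice_quick_undocking",
    "clean_and_sterilize_instruments",
]

class TutorialSession:
    """Tracks progress and enforces that steps are completed in order."""

    def __init__(self):
        self.completed = []

    def complete(self, step: str) -> None:
        expected = TUTORIAL_STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected step {expected!r}, got {step!r}")
        self.completed.append(step)

    @property
    def finished(self) -> bool:
        return self.completed == TUTORIAL_STEPS

session = TutorialSession()
for step in TUTORIAL_STEPS:
    session.complete(step)
print(session.finished)  # True
```

Encoding the workflow as data rather than hard-coded screens would also make it easy to build the separate surgeon and OT-staff sessions mentioned elsewhere in the disclosure from subsets of the same step list.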
According to yet another embodiment of the invention, the processor is further configured to transmit the manipulated data to the server for storage in the database.
According to yet another embodiment of the invention, the augmented 3D model of the patient anatomy can be superimposed on the virtual patient to enable the surgeon to identify the exact position and orientation of an organ during actual surgery.
According to yet another embodiment of the invention, simulating the virtual robotic surgery environment is based on predetermined models for the virtual robotic arms, the virtual surgical instruments, the virtual operating table, and the virtual patient.
According to still another embodiment of the invention, separate sessions of the virtual tutorials for surgeons and OT staff can be designed using the virtual robotic surgery environment.
Other embodiments, systems, methods, apparatus, aspects, and features of the invention will become apparent to those skilled in the art from the following detailed description, the accompanying drawings, and the appended claims.
The summary above, as well as the following detailed description of the disclosure, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as illustrated therein being contemplated as would normally occur to one skilled in the art to which the disclosure relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof. Throughout the patent specification, a convention employed is that in the appended drawings, like numerals denote like components.
Reference throughout this specification to “an embodiment”, “another embodiment”, “an implementation”, “another implementation” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrase “in an embodiment”, “in another embodiment”, “in one implementation”, “in another implementation”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures preceded by “comprises . . . a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or additional devices or additional sub-systems or additional elements or additional structures.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The device, system, and examples provided herein are illustrative only and not intended to be limiting.
The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Further, the terms “sterile barrier” and “sterile adapter” have the same meaning and may be used interchangeably throughout the description.
Embodiments of the disclosure will be described below in detail with reference to the accompanying drawings.
Also, the surgeon/operator may be based at a remote location, in which case the console system (105) may be located in any room other than the robotic surgery environment, or the console system (105) may be operated from the remote location. The communication between the console system (105) and the robotic surgical system (100) may be implemented as either wired or wireless communication. The surgeons and OT staff/other assistants are required to be trained to perform these robotic assisted surgeries.
Further, the medical sector relies heavily on diagnostic scans, including but not limited to computerized tomography (CT) and magnetic resonance imaging (MRI) scans, for diagnosis. The CT and MRI scans allow doctors to analyze and study the internal parts of the body. Doctors and surgeons rely upon CT and MRI scans to help diagnose tumors and internal bleeding, or to check for internal damage. The CT and MRI scans are extremely important during surgical procedures as well. The CT scans show bones and organs, as well as detailed anatomy of glands and blood vessels. The CT scans are taken shortly before surgery to confirm the location of a tumor and establish the location of the internal organs. The CT and MRI scans are essentially a two-dimensional (2D) medium of information. The patient details comprise at least one of a name, age, sex, or medical history of one or more patients. These patient details of one or more patients and the virtual tutorial for one or more robotic surgical procedures comprise 2D/3D images and text. Due to the inherent 2D nature of the diagnostic scans, it is sometimes difficult to visualize a particular organ or tumor in 3D. For example, it is very difficult to visualize a tumor just by looking at the MRI scans. Further, it is difficult to visualize its size, orientation, and other characteristic traits.
A virtual reality system may be of great use in providing training to medical healthcare professionals, performing collaborative long-distance surgeries, and diagnosis of any anomalies in the diagnostic scan of one or more patients.
A virtual reality system for simulating a virtual robotic surgery environment is described herein. A virtual reality system (200) is illustrated in
The processor (206) is configured to extract relevant data (212) from the server (208) based on the received input from the operator (204). The relevant data (212) comprises at least one of a 3D model of a diagnostic scan and patient details of one or more patients, or a virtual tutorial for one or more robotic surgical procedures, based on the received input. The processor (206) then renders the relevant data (212) on a stereoscopic display (214). The relevant data (212) can be an augmented 3D model or a 3D holographic projection.
An external display (216) may be provided to display the relevant data (212). The external display (216) is adapted to display the virtual robotic surgery environment. The stereoscopic display (214) and the external display (216) may be in sync, to be able to display the same content. The stereoscopic display (214) can be coupled to a virtual reality headset.
The processor (206) renders these 3D models using the stereoscopic display (214) or the external display (216). These 3D models can also be viewed through virtual reality headsets for viewing the MRI/CT models in 3D. The processor (206) manipulates the relevant data (212) based on further inputs received from the operator (204) and renders the manipulated data on the stereoscopic display (214) or the external display (216). The manipulated data comprises a modified version of the relevant data (212), based on the received input from the operator (204). The manipulation of the 3D relevant data (212) helps in diagnosing any anomalies in the diagnostic scan of one or more patients.
The operator (204) now has the freedom to enlarge the 3D model, filter out the unwanted parts and focus on the organ of interest. The operator (204) can study the internal structure of the organ by either enlarging it or slicing the 3D hologram to view the internal structure. The 3D visualization will not only help doctors/surgeons in conducting diagnoses but also can be further used for training purposes. They will have the freedom to manipulate these 3D holographic projections in any way they want. They can move, rotate the holographic projections, and adjust the scale of the holographic projection. The created database (210) will contain all the 3D scans of the patient for safekeeping and reference. Whenever needed, the scans of a particular patient can be accessed and referred to.
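The manipulations described above (enlarging the model, rotating it, and slicing it to expose internal structure) can be illustrated with elementary vertex transforms. This is a minimal sketch assuming the 3D model is represented as NumPy arrays of vertices and triangular faces; the function names and the tetrahedron stand-in are assumptions, not the disclosed implementation:

```python
import numpy as np

def scale(verts: np.ndarray, factor: float) -> np.ndarray:
    """Uniformly enlarge or shrink the model about its centroid."""
    centroid = verts.mean(axis=0)
    return centroid + (verts - centroid) * factor

def rotate_z(verts: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rotate the model about the z-axis (e.g. driven by a hand-controller gesture)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return verts @ rot.T

def slice_faces(verts: np.ndarray, faces: np.ndarray, z_cut: float) -> np.ndarray:
    """Keep only faces lying entirely below a cutting plane, exposing
    the internal structure of the organ model."""
    below = (verts[faces][:, :, 2] < z_cut).all(axis=1)
    return faces[below]

# A unit tetrahedron as a stand-in for an organ model.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
enlarged = scale(verts, 2.0)          # centroid unchanged, extent doubles
cut = slice_faces(verts, faces, 0.5)  # only the base face lies below z = 0.5
print(len(cut))  # 1
```

A real holographic viewer would apply the same rigid transforms on the GPU each frame, but the geometry of the operations is exactly what these few lines show.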
Another type of immersive technology is augmented reality. In augmented reality, holographic projections are placed while keeping the surroundings the same as the actual ones. Yet another type of immersive technology is mixed reality. Mixed reality is the merging of real and virtual worlds to produce new environments and visualizations, where physical and digital objects co-exist and interact in real time. The holographic projections interact with the surroundings and the objects in them. For example, in mixed reality, a holographic object can be placed on a table like an actual object. It will recognize the table as a solid body and will not pass through it. In mixed reality, the holographic projections and the surroundings are interdependent. This makes holograms interactive, co-existing with the surroundings. The extended reality platform may include virtual reality, mixed reality, and augmented reality. The virtual reality headset can be, for example, an Oculus Quest 2, and the mixed reality headset can be, for example, a Microsoft HoloLens.
In an embodiment, a simulator may be used to get the surgeons and OT staff accustomed to various procedural and structural aspects of robotic surgery. The surgeons and OT staff should develop muscle memory when it comes to setting up the robotic surgical system. In one embodiment, the simulator will have training modules specifically for surgeon console, vision cart, and patient cart placement techniques. The simulator will go step-by-step and teach the surgeons and OT staff the method and process of placement for all three components. The simulated sessions will be designed in such a way that an entire surgical procedure will be simulated, and the doctors/surgeons will receive step-by-step instructions on all the activities conducted during the surgery. There may be separate tutorials for the surgeons and the OT staff. As the tasks performed by them will be different, they will receive separate as well as common training modules that will guide them to perform tasks in parallel. This way, during an actual surgical procedure, the OT staff and the surgeon will be able to perform their respective tasks collaboratively and smoothly.
The layout of the tutorials may be segregated into various steps. For example, the first step may be selection of the surgery. As each of the tutorial sessions will be surgery specific, the surgeons and OT staff will be given the option to select the surgery. Based on this selection, the rest of the surgical training sessions will be selected.
In the next step (706), the virtual robotic arms and virtual patient cart are draped. The entire draping procedure can be explained in detail. The robot needs to be placed in the draping position. The OT staff will be taken through a simulated draping process in which they will have to perform the entire draping procedure for each arm. As a warning, pop-up alerts/messages may be provided that highlight the possible places where the drape might potentially get stuck and tear. The OT staff will take into consideration all the guidelines and complete the draping procedure accordingly.
In the next step (708), the placement and docking of the virtual robotic arms in the virtual patient around the virtual operating table is done. The placement of the patient cart is surgery specific. Patient positioning also needs to be considered. The surgeons and OT staff will be taken through the entire process of virtual patient cart placement with step-by-step guidelines. The best practices and ideal steps will be displayed, and the OT staff will be trained. Then, they can practice by placing and docking the virtual patient carts in their respective locations and orientations based on the type of selected virtual surgical procedure and port placement.
The next step (710) is selection and placement of virtual surgical instruments. The selection and preparation of a virtual surgical instrument is done based on the selected surgical procedure. In this session, the OT staff and surgeons can select the virtual instruments that will be used during the selected virtual surgical procedure. Once the selection process is completed, they can practice handling and placement of virtual surgical instruments on the virtual robotic arms in step (712). By repeated practice of virtual instrument placement and removal, the surgeon/OT staff will develop muscle memory of the entire process and will find the placement and removal of the actual physical instruments easier.
The step (714) involves undocking and storage of the virtual robotic arms. Once the training session of the intra-operative procedure ends, the post-operative training session will include undocking and storage of virtual robotic arms. The surgeons and OT staff will be taken through the steps that are required to safely undock the patient cart arms. They will have a checklist type assistance that will highlight the steps they need to perform to undock the system. Next in step (716), as a contingency step, the OT staff and the surgeons also need to be trained on quickly undocking the virtual robotic arms in any adverse situation. For the surgery to be quickly converted, the surgeons and OT staff will be trained in a way that they can quickly react and perform the appropriate steps seamlessly to ensure patient safety.
Next step (718) will be cleaning and sterilization of virtual surgical instruments. Post-surgery, the surgical instruments undergo a thorough cleaning and sterilization process. This session will take the surgeons and OT staff through the process of cleaning and sterilizing the surgical instrument properly. They will be taken through each step one by one after which they will be able to properly clean and sanitize actual surgical instruments after an actual robotic surgery. Autoclaving procedure steps will also be explained, and practice runs will be conducted.
In one embodiment, once the virtual robotic arms of the virtual surgical system are undocked, the surgeons and OT staff will be taken through the steps for the proper storage of the entire robotic surgical system. They can practice undocking and storage procedures to get used to the system in step (716). In this way, training on troubleshooting and conversion of the surgery is achieved.
In an embodiment, the application of mixed reality for intra-operative procedures is described. The CT scans and MRI scans (DICOM files) can be converted into a 3D model. Using various segmentation techniques, the organ of interest can be segmented from the entire CT/MRI scan and converted into a 3D model. This 3D model can then be superimposed on the patient to give the surgeon a 3D view of the patient's anatomy and organ of interest. This will ensure the surgeon always knows the exact position and orientation of the organ. The mixed reality headset can identify an MRI scan image target using any image processing techniques and project the appropriate holographic model. This model can then be superimposed on the patient to find out the exact location and orientation of the organ of interest. The holographic projection of the organ of interest will have the exact size, anatomical structure, and characteristics of the patient's organ as it has been converted from his/her own MRI or CT scan.
In an embodiment, the mixed reality headset identifies the MRI scan as an image target and deploys the 3D holographic model on top of it. Once the model is deployed, the surgeon can manipulate this hologram and superimpose it on the patient on a 1:1 scale. As illustrated in
In one embodiment, the OT staff also rely on the vision cart 3D screen to view the feed of the endoscope. With the help of mixed reality glasses, the feed from the endoscope can be directly relayed on a virtual screen that they can place anywhere they find comfortable. The main purpose of the virtual screen is to reduce the neck strain and visibility issues that occur from looking at the vision cart screen for prolonged periods of time. The virtual screen will display the 3D view from the endoscope, ensuring the OT staff have the same view as the surgeon.
In an embodiment, application of mixed reality for surgical instrument assistance is described. The robotic surgical systems may have multiple endoscopic surgical instruments that are operated during a surgical procedure. These surgical instruments are inserted into the patient's body via cannulas. Each surgical instrument performs a unique function. There are multiple types of surgical instruments available, such as energy instruments, which may include monopolar instruments, bipolar instruments, and harmonic instruments. These instruments come under electrosurgical instruments. Electrosurgery is the application of a high-frequency, alternating-polarity electrical current to a biological tissue to cut, coagulate, desiccate, or fulgurate the tissue. Its benefits include the ability to make precise cuts with limited blood loss. Monopolar, bipolar, and harmonic are the three types of instruments used. In monopolar instruments, energy is passed from one jaw of the instrument, through the tissue, to a grounding pad attached to the patient. The tissue is then either cut or coagulated when energy is passed. In bipolar instruments, the energy is passed from one jaw of the instrument to the other via the tissue. The tissue is held between the two jaws, and energy is passed. Harmonic instruments make use of ultrasonic vibrations to cut tissue faster. Harmonic instruments are essentially bipolar instruments in which the second jaw passes ultrasonic vibrations through the tissue to cut it faster.
All the surgical instruments have a defined maximum number of uses. Once the maximum number of uses is reached, the instrument is no longer detected by the robotic surgical system. To ensure proper identification of instruments, each instrument also has a unique serial number. In one embodiment, unique information related to a particular virtual surgical instrument may be displayed on top of the virtual surgical instrument when selected by the operator (204) using either hand gestures or hand controllers (202). The surgical instruments that are required during a surgical procedure are prepped before the actual surgery as a pre-operative procedure. Maintaining a checklist of all instruments, together with their unique IDs, names, and types, is very difficult to manage manually. It is impractical for the OT staff to know the names, types, and other important information of the various separate instruments.
In an embodiment, when the surgeon/OT staff selects a virtual surgical instrument, all the important information will be displayed over the virtual surgical instrument in the form of a text box. This information can be used to confirm that the instruments being prepped for surgery are the required instruments, and that they are not expired instruments. The mixed reality headset will identify the unique ID on the selected virtual surgical instrument and, based on that, will gather related data from the database (210) stored on the server (208). This information will then be displayed over the virtual surgical instrument. The projection model needs to have the capability to detect multiple virtual surgical instruments at the same time and display the correct information on top of the respective virtual surgical instrument.
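A minimal sketch of the lookup described above, assuming a simple in-memory database keyed by serial number; the record fields, serial formats, and sample instruments are hypothetical and not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class InstrumentRecord:
    """Illustrative record; field names are assumptions, not from the disclosure."""
    serial: str
    name: str
    kind: str          # e.g. "monopolar", "bipolar", "harmonic"
    max_uses: int
    uses_so_far: int

    @property
    def usable(self) -> bool:
        # An instrument past its maximum number of uses is no longer accepted.
        return self.uses_so_far < self.max_uses

# Hypothetical database keyed by the unique serial number on each instrument.
DATABASE = {
    "MB-0042": InstrumentRecord("MB-0042", "Maryland Bipolar Forceps", "bipolar", 10, 3),
    "MC-0007": InstrumentRecord("MC-0007", "Monopolar Curved Scissors", "monopolar", 10, 10),
}

def overlay_text(serial: str) -> str:
    """Text the headset would display over the selected virtual instrument."""
    rec = DATABASE.get(serial)
    if rec is None:
        return f"Unknown instrument: {serial}"
    status = "OK" if rec.usable else "EXPIRED"
    return f"{rec.name} ({rec.kind}) uses {rec.uses_so_far}/{rec.max_uses} [{status}]"

print(overlay_text("MC-0007"))  # flags the expired instrument
```

Because the overlay is generated from the database record rather than typed by staff, the same lookup can confirm both that the prepped instrument is the required one and that it has uses remaining.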
In an embodiment, the integration of extended reality headsets will not only assist surgeons and OT staff in their procedures but also ensure maximum safety for patients. Having interactive holograms responding to their environments will assist OT staff and surgeons immensely. The main advantage of having a mixed reality headset in an operation theatre is its collaborative attribute. Procedures such as spatial collaboration can be conveniently done using extended reality headsets. Multiple surgeons and doctors from all around the world can join in on a surgical procedure via a platform called Dynamics 365. All of them will see the same feed and can interact with the holograms collaboratively. This takes telesurgical capabilities to a whole new level. With the successful integration and unification of mixed reality and minimally invasive surgical robotic systems, the surgical procedures carried out will be precise, fast, and reliable.
The proposed virtual reality system of the disclosure is advantageous, as it provides an economical solution for training compared to traditional methods of training that use cadavers or dummies in an OR environment. The virtual training modules of the present disclosure provide interactive content, which engages trainees visually and enables greater skill retention. Also, the proposed virtual reality system of the disclosure is future forward: the virtual reality environments are platform agnostic, so they can be used on cross-platform devices, making access to training easier. Further, at present there are no comprehensive training modules specific to robotic surgery in medical schools, but with the proposed virtual reality system of the disclosure, robotic surgery can be added to the curriculum, making global adoption easier.
Another major advantage of the proposed virtual reality system of the disclosure is the possibility of anatomy resizing. The 3D DICOM of a virtual patient can be resized to any dimensions, making surgical planning more approachable. Further, the anatomy of the virtual patient can be superimposed on a live patient, giving an x-ray-like vision without the necessity of constantly performing an actual X-ray or MRI intraoperatively. Moreover, with the likelihood of many web technologies being hosted on blockchain in the future, the patient data will remain secure.
The foregoing descriptions of exemplary embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiment was chosen and described in order to best explain the principles of the disclosure and its practical application, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but these are intended to cover the application or implementation without departing from the spirit or scope of the claims of the present disclosure.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person in the art, various working modifications may be made to the apparatus in order to implement the inventive concept as taught herein.
Number | Date | Country | Kind |
---|---|---|---|
202211033296 | Jun 2022 | IN | national |
This application is a national stage application of International Application No. PCT/IN2023/050543, filed on Jun. 9, 2023, which application claims priority from Indian patent application Ser. No. 202211033296, filed on Jun. 10, 2022.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IN2023/050543 | 6/9/2023 | WO |