Minimally invasive surgery (“MIS”) has become a prominent approach for many surgical procedures. As compared to traditional open surgery, which involves large incisions to gain direct access to the surgical site, MIS uses small incisions through which elongated surgical instruments are inserted to operate on the surgical site. This may result in surgical benefits, including shorter recovery times, smaller external scars, and reduced discomfort. To efficiently perform MIS, the surgeon should understand both the anatomy at the surgical site and the tool-tissue interaction required to operate on that site. Thus, before performing MIS, many surgeons train on box trainers with phantoms that replicate surgical sites, or on virtual reality (“VR”) simulators. However, current technology has several disadvantages. For example, box trainers lack realism in depicting a surgical site together with the surrounding cavity. Further, VR simulators lack realistic tool-tissue interaction in basic surgical tasks, such as cutting, suturing, and cauterizing. Improved systems and methods for surgical training are therefore needed.
In light of the disclosure herein and without limiting the disclosure in any way, in a first aspect of the present disclosure, which may be combined with any other aspect listed herein unless specified otherwise, a system for mixed reality surgical simulation is provided. The system includes a simulator, an optical tracking system, an input device, a visualization screen, and a simulation workstation, wherein the simulation workstation renders an augmented view on the visualization screen.
In accordance with a second aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the optical tracking system acquires tracking data and the simulator acquires video stream data.
In accordance with a third aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the input device transmits a plurality of user inputs to the simulation workstation.
In accordance with a fourth aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the simulation workstation includes a video module configured to receive video stream data, a tracking module configured to receive tracking data, and a user interfacing module configured to receive the plurality of user inputs.
In accordance with a fifth aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the simulation workstation further includes a core processing module configured to process video stream data, tracking data, and the plurality of user inputs, wherein the core processing module transmits video stream data, tracking data, and the plurality of user inputs to a graphical rendering module.
In accordance with a sixth aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the graphical rendering module renders an augmented view on the visualization screen.
In accordance with a seventh aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the simulator includes a scope system, a box tracking frame, a chroma background, an aperture configured to receive an instrument, a configuration table configured to receive a tissue, and an ambient light.
In accordance with an eighth aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the scope system includes a tracking frame.
In accordance with a ninth aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the simulator is a box simulator.
In light of the disclosure herein and without limiting the disclosure in any way, in a tenth aspect of the present disclosure, which may be combined with any other aspect listed herein unless specified otherwise, a method of mixed reality surgical simulation is provided. The method includes the steps of positioning a simulator in front of an optical tracking system; activating a simulation workstation, the optical tracking system, and a scope system, wherein a box tracking frame of the simulator is in a field of view of the optical tracking system; loading a virtual tissue model onto the simulation workstation; placing the virtual tissue model in a virtual environment based on the box tracking frame of the simulator; placing a virtual camera in the virtual environment, wherein the virtual camera matches a relative pose of a tracking frame of the scope system; configuring a plurality of parameters using an input device; placing a 3D-printed tissue on a configuration table of the simulator; inserting a surgical instrument into the simulator; and performing a surgical task.
In accordance with an eleventh aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the core processing module executes chroma keying on the video stream data to remove a background in the video stream data in real-time.
In accordance with a twelfth aspect of the present disclosure, which may be used in combination with any other aspect listed herein unless stated otherwise, the graphical rendering module places the video stream data with the background removed in the virtual environment at the frustum of the virtual camera and renders the view on a visualization screen.
Additional features and advantages of the disclosed method and apparatus are described in, and will be apparent from, the following Detailed Description and the Figures. The features and advantages described herein are not all-inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the figures and description. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and not to limit the scope of the inventive subject matter.
To efficiently perform MIS, a surgeon should understand both the anatomy at the surgical site and the tool-tissue interaction required to operate on the surgical site. Thus, for training, many surgeons use box trainers or VR simulators. However, current technology has several disadvantages. For example, box trainers lack realism in illustrating a surgical site, while VR simulators lack realistic tool-tissue interaction in basic surgical tasks. These disadvantages may lead to a suboptimal learning experience for surgeons. Thus, aspects of the present disclosure may address the above-discussed disadvantages in current training techniques.
The present disclosure generally relates to systems and methods for mixed reality training simulators. The mixed reality systems may be used to train specific procedures that are used in MIS. In an example embodiment, the system includes a simulator, a simulation workstation, an optical tracking system, an input device, and a visualization screen. The simulator may further include a scope system, a box tracking frame, a chroma background, and an aperture for inserting various instruments used in MIS. The simulator may also include a configuration table to hold 3D printed tissue and an ambient light. While in use, the scope system of the simulator acquires a video stream and the optical tracking system acquires tracking data. The tracking data and the video stream are transmitted to the simulation workstation. The input device is used to configure the settings of the simulation workstation. The simulation workstation processes the user input from the input device, the tracking data from the optical tracking system, and the video stream from the scope system to render an augmented view on the visualization screen.
In various embodiments, the simulation workstation is a laptop configured to run various software modules. In an example, the software modules are implemented in C++. Further, the graphical rendering may be performed using VTK, whereas the GUI may be implemented using Qt. The threaded implementation of the modules can be performed using Boost, and the simulation workstation can be realized on a standard PC with an integrated graphics processing unit. The optical tracking system may be implemented using a V120: Trio OptiTrack motion capture system by NaturalPoint, Inc. The tracking data can be processed using the OptiTrack software platform, which runs on the operating room workstation. The removal of the green background in the box simulator is done with a chroma key filter implemented using the OpenCV library.
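As an illustration of the chroma key step, the following is a minimal sketch in C++ using OpenCV, assuming a green chroma background; the HSV threshold values and the function name are hypothetical and would in practice be tuned through the chroma filter settings.

```cpp
// Minimal chroma-key sketch assuming a green background (illustrative only).
#include <opencv2/opencv.hpp>

cv::Mat removeChromaBackground(const cv::Mat& frameBGR)
{
    cv::Mat hsv, mask, foreground;

    // Work in HSV so the green hue can be thresholded largely independently of lighting.
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);

    // Pixels inside this (hypothetical) range are treated as chroma background.
    cv::inRange(hsv, cv::Scalar(35, 60, 60), cv::Scalar(85, 255, 255), mask);

    // Keep everything that is NOT background: the 3D printed tissue and the instruments.
    cv::bitwise_not(mask, mask);
    frameBGR.copyTo(foreground, mask);
    return foreground;
}
```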
The user may perform the surgical task on the 3D printed tissue within the simulator, which is continuously rendered as if it were immersed in a surgical field 210. At the end of the task, the user may remove and examine the operated tissue to report a score for the simulated task 212. At that point, the user can decide whether to re-run the training scenario 214. To perform the simulation task again, a new 3D printed tissue can be placed on the configuration table 206. However, if a second simulation task is not performed, the user may turn off the system, including the simulation workstation, the optical tracking system, and the scope system 216.
Namely, this procedure starts with 3D printing soft, deformable tissue models 308. The next step involves the careful placement of the 3D printed tissues onto the configuration table with a chroma background 310. The chroma background is a simple surface with a uniform color that is not present in any of the tissues; common chroma colors include green and blue. Once the chroma background and the tissues are in place, the configuration table is placed inside the simulator box 312. A box tracking frame is attached outside the simulator for registration purposes. A scope with another tracking frame is then inserted into the simulator. The virtual models are rendered in the virtual world based on the box tracking frame, and the real tissues are also rendered in the virtual world using the output from the scope camera 306. The feed from the camera is processed to remove the chroma background, leaving only the tissues and surgical instruments present in the virtual world. Both the box tracking frame and the scope tracking frame are used to register the virtual and 3D printed tissues in the virtual world. Rendering both virtual and real tissues in the same space may thereby provide a mixed reality system.
The system 400 may further include a tracking module 406. The tracking module 406 is configured to process data for the tracking frame (with a unique arrangement of retroreflective markers) that is attached to the scope. The optical tracking system continuously senses the poses (position and orientation) of the tracking frames and sends the tracking data stream to the tracking module 406. The tracking module 406 then processes the stream and computes the pose of the scope camera and the simulator. In an example embodiment, the scope camera's pose at time instant ‘t’ is represented by a 4×4 homogeneous transformation MScope(t), whereas the pose of the simulator is represented by MBox(t). MScope(t) and MBox(t) may be measured with respect to the coordinate system of the optical tracking system inside the training room and are fed to the core processing module 404.
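As an illustrative sketch of how these two poses may be combined, the following C++ fragment, assuming VTK's vtkMatrix4x4 is used for the 4×4 homogeneous transforms, expresses the scope camera pose in the coordinate frame of the simulator (box); the function name is hypothetical.

```cpp
// Express the scope camera pose in the simulator (box) frame (illustrative sketch).
#include <vtkMatrix4x4.h>
#include <vtkNew.h>

void scopePoseInBoxFrame(vtkMatrix4x4* mScope,      // M_Scope(t), tracker coordinates
                         vtkMatrix4x4* mBox,        // M_Box(t), tracker coordinates
                         vtkMatrix4x4* mScopeInBox)  // output: M_Box(t)^-1 * M_Scope(t)
{
    vtkNew<vtkMatrix4x4> mBoxInverse;
    vtkMatrix4x4::Invert(mBox, mBoxInverse.GetPointer());

    // M_ScopeInBox(t) = M_Box(t)^-1 * M_Scope(t)
    vtkMatrix4x4::Multiply4x4(mBoxInverse.GetPointer(), mScope, mScopeInBox);
}
```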
The system 400 may further include a tissue module 408. The tissue module 408 is configured to send the 3D meshes and their poses representing tissue models at the surgical site (operating field) to the core processing module 404. These meshes are specific to a simulation scene and are loaded into the simulation system with predefined poses MTissue[i] (where ‘i’ ranges from 0 to the number of tissue models).
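For illustration, a tissue model could be loaded and posed as in the following sketch, again using VTK; the STL file format, file path, and function name are assumptions rather than requirements of the disclosure.

```cpp
// Load one tissue mesh and attach its predefined pose M_Tissue[i] (illustrative sketch).
#include <vtkSmartPointer.h>
#include <vtkSTLReader.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkMatrix4x4.h>

vtkSmartPointer<vtkActor> loadTissueActor(const char* meshPath, vtkMatrix4x4* mTissuePose)
{
    // Read the 3D tissue mesh (STL is one common choice; the format is an assumption here).
    auto reader = vtkSmartPointer<vtkSTLReader>::New();
    reader->SetFileName(meshPath);
    reader->Update();

    auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(reader->GetOutputPort());

    // The predefined pose M_Tissue[i] places the mesh at the virtual surgical site.
    auto actor = vtkSmartPointer<vtkActor>::New();
    actor->SetMapper(mapper);
    actor->SetUserMatrix(mTissuePose);
    return actor;
}
```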
The system 400 may further include a core processing module 404. The core processing module 404 acts as the central hub for processing data in the simulation workstation. The module receives data from the user-interfacing module, video module, tracking module, and tissue module, and sends data to the graphical rendering module. The core processing module applies the chroma key filter to the video frame FSurgicalView(t) to segment and extract the 3D printed tissue and surgical instruments; in other words, chroma keying is performed. Registration of the segmented video frame with the 3D meshes fetched from the tissue module is then performed. The registration uses the MScope(t) and MBox(t) poses such that the segmented 3D printed tissue in the video frame aligns with the tissue models.
The system 400 may further include a graphical rendering module 410. The graphical rendering module 410 renders the 3D meshes along with the video frame (segmented 3D printed tissue and surgical instruments) onto the visualization screen, creating an immersive mixed reality environment. A virtual camera frustum is rendered at MScope(t) with the same configuration as the scope camera used in the simulator, as illustrated in the figures.
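A minimal sketch of placing the virtual camera at the scope pose with VTK's vtkCamera is shown below; the axis conventions (viewing direction taken from the third rotation column, up direction from the second) and the view angle are assumptions that would be matched to the actual scope camera.

```cpp
// Drive the virtual camera with the scope pose computed above (illustrative sketch).
#include <vtkCamera.h>
#include <vtkMatrix4x4.h>

void applyScopePoseToCamera(vtkCamera* camera, vtkMatrix4x4* mScopeInBox)
{
    // Camera position from the translation column of the 4x4 pose.
    const double px = mScopeInBox->GetElement(0, 3);
    const double py = mScopeInBox->GetElement(1, 3);
    const double pz = mScopeInBox->GetElement(2, 3);

    // Viewing direction from the third rotation column (assumed convention).
    const double dx = mScopeInBox->GetElement(0, 2);
    const double dy = mScopeInBox->GetElement(1, 2);
    const double dz = mScopeInBox->GetElement(2, 2);

    camera->SetPosition(px, py, pz);
    camera->SetFocalPoint(px + dx, py + dy, pz + dz);

    // Up direction from the second rotation column (assumed convention).
    camera->SetViewUp(mScopeInBox->GetElement(0, 1),
                      mScopeInBox->GetElement(1, 1),
                      mScopeInBox->GetElement(2, 1));

    camera->SetViewAngle(70.0);  // illustrative value; matched to the scope's field of view
}
```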
The system 400 may further include a GUI module 412. The GUI module 412 is used to alter the visualization settings, scope parameters, and chroma filter settings, and to set the tracking parameters for the tracking module. One aspect of objective assessment requires computing the movements of the tooltips (i.e., tooltip poses with respect to time) and, more generally, of the instrument.
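As one illustration of such an assessment metric, the sketch below computes the total path length of a tooltip from its tracked positions over time; the data structure and sampling are assumptions, since the disclosure only states that tooltip poses with respect to time are computed.

```cpp
// Total tooltip path length from a time series of tracked positions (illustrative sketch).
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

double tooltipPathLength(const std::vector<Point3>& positions)
{
    double length = 0.0;
    for (std::size_t i = 1; i < positions.size(); ++i) {
        const double dx = positions[i].x - positions[i - 1].x;
        const double dy = positions[i].y - positions[i - 1].y;
        const double dz = positions[i].z - positions[i - 1].z;
        length += std::sqrt(dx * dx + dy * dy + dz * dz);  // segment between consecutive samples
    }
    return length;
}
```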
Although the method has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. In particular, any of the various processes described above can be performed in alternative sequences and/or in parallel in order to achieve similar results in a manner that is more appropriate to the requirements of a specific application. It is therefore to be understood that the present disclosure can be practiced otherwise than specifically described without departing from the scope and spirit of the present embodiments. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. It will be evident to the person skilled in the art to freely combine several or all of the embodiments discussed here as deemed suitable for a specific application of the invention. Throughout this disclosure, terms like “advantageous”, “exemplary” or “preferred” indicate elements or dimensions which are particularly suitable (but not essential) to the invention or an embodiment thereof, and may be modified wherever deemed suitable by the skilled person, except where expressly required. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.
The present disclosure claims priority to U.S. Provisional Patent Application 63/364,986 titled “SYSTEM AND METHODS FOR MIXED REALITY SURGICAL SIMULATION” having a filing date of May 19, 2022, the entirety of which is incorporated herein.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/QA2023/050007 | 5/18/2023 | WO |
| Number | Date | Country |
|---|---|---|
| 63364986 | May 2022 | US |