This application relates to the use of computer vision to recognize anatomical features within a surgical site. In many procedures, it is necessary to track anatomical structures present within the surgical site. Some of those structures follow a path within the body; examples include ureters, ducts, blood vessels, and nerves.
Sometimes the entire path of the structure may not be visible in the endoscopic view at once. One or more portions of the path may be occluded by organs or other tissue layers. During the course of some procedures, the occluded portion(s) of the path may be exposed gradually by surgical dissection.
The concepts disclosed in this application aid the surgeon by helping to identify and track the path of an anatomical structure. This enhances the surgeon's awareness of structures that may only be differentiable via context clues such as their source or destination, and helps the surgeon undertake measures to avoid damaging fragile structures.
System
A system useful for performing the disclosed methods, as depicted in the figures, includes a camera 10, a computing unit 12, a display, and, optionally, one or more user input devices 16.
In still other implementations, this recognition and tracking is a component of a fully autonomous surgical procedure.
The camera 10 is one suitable for capturing images of the surgical site within a body cavity. It may be a 3D or 2D endoscopic or laparoscopic camera. Where it is desirable to use image data to detect movement or positioning of instruments or tissue in three dimensions, configurations allowing 3D data to be captured or derived are used (e.g., a stereo/3D camera, or a 2D camera with software and/or hardware configured to permit depth information to be determined or derived).
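Where depth must be derived rather than directly captured, stereo disparity is one common approach. The following is a minimal sketch, assuming rectified left/right frames and known intrinsics; the focal length and baseline values, and the function name, are illustrative placeholders rather than part of this disclosure:

```python
# Minimal sketch: deriving a depth map from rectified stereo endoscope frames.
# focal_px and baseline_mm are illustrative placeholder values.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_mm=4.0):
    """Return per-pixel depth in millimetres (0 where disparity is invalid)."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,   # search range; must be divisible by 16
        blockSize=5,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_mm / disparity[valid]   # Z = f * B / d
    return depth
```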
The computing unit 12 is configured to receive the images/video from the camera and input from the user input device(s). If the system is to be used in conjunction with a robot-assisted surgical system in which surgical instruments are maneuvered within the surgical space using one or more robotic components (e.g. robotic manipulators that move the instruments and/or camera, and/or robotic actuators that articulate joints, or cause bending, of the instrument or camera shaft) the system may optionally be configured so that the computing unit also receives kinematic information from such robotic components 18 for use in recognizing procedural steps or events as described in this application.
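As one illustration of how such inputs might be combined, the sketch below pairs each camera frame with the kinematic sample nearest in time; the data types and field names are assumptions for illustration only:

```python
# Illustrative sketch: pairing each endoscopic frame with the kinematic
# sample from the robotic components 18 that is nearest in time.
from dataclasses import dataclass

import numpy as np

@dataclass
class KinematicSample:
    timestamp: float        # seconds, controller clock (assumed synchronised)
    joint_positions: list   # manipulator/actuator joint readings

@dataclass
class FramePacket:
    timestamp: float        # seconds, camera clock
    image: np.ndarray       # BGR endoscopic frame

def nearest_kinematics(frame: FramePacket, samples: list) -> KinematicSample:
    """Select the kinematic sample closest in time to the given frame."""
    return min(samples, key=lambda s: abs(s.timestamp - frame.timestamp))
```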
An algorithm stored in memory accessible by the computing unit is executable to, depending on the particular application, use the image data to perform one or more of the functions described with respect to the below-described embodiments.
The system may include one or more user input devices 16. When included, a variety of different types of user input devices may be used alone or in combination. Examples include, but are not limited to, eye tracking devices, head tracking devices, touch screen displays, mouse-type devices, voice input devices, foot pedals, and switches. Various movements of an input handle used to direct movement of a component of a surgical robotic system may also be received as input (e.g., handle manipulation, joystick, finger wheel or knob, touch surface, or button press). Another form of input is manual or robotic manipulation of a surgical instrument having a tip or other part that is tracked using image processing methods when the system is in an input-delivering mode, so that it may function as a mouse, pointer, and/or stylus when moved in the imaging field. Input devices of the types listed are often used in combination with a second, confirmatory form of input device allowing the user to enter or confirm a selection (e.g., a switch, voice input device, button, or icon to press on a touch screen, as non-limiting examples).
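As an illustration of the tracked-instrument-as-pointer mode, the sketch below locates a tool tip by colour thresholding and returns its image coordinates; the HSV range and function name are assumptions, and a real system could use any suitable detector:

```python
# Illustrative sketch: locating a tracked tool tip so it can act as a
# pointer/stylus. The HSV range is an assumed example, not a real spec.
import cv2
import numpy as np

def track_tip(frame_bgr, hsv_lo=(35, 60, 60), hsv_hi=(85, 255, 255)):
    """Return (x, y) of the largest blob in the tip colour range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```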
The system is configured to perform one or more of the functions described below.
A first example is given in the context of a cholecystectomy, a procedure during which it is necessary for the surgeon to be aware of the cystic duct and the common bile duct. During cholecystectomy, the cystic duct is clipped and cut, but the common bile duct must not be cut. During the course of the procedure, the cystic duct is gradually exposed via dissection. The system uses computer vision to recognize the cystic duct, and an overlay marking it is generated as shown in the figures.
A second example relates to a hysterectomy or colorectal procedure. During these procedures, the surgeon wants to maintain an awareness of the location of the ureter to avoid inadvertent injury to it. However, the entire path of the ureter may not be visible at all times. In this case, the system displays overlays marking the portions of the ureter recognized by the system using computer vision, as shown in the figures.
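In both examples, the displayed overlay can be produced by alpha-blending the recognized pixels onto the live frame. A minimal sketch follows, assuming a trained segmentation model (represented by the hypothetical segment_structure call) supplies a binary mask:

```python
# Minimal sketch: alpha-blending a recognized structure onto the live frame.
# segment_structure is a hypothetical stand-in for the trained model.
import cv2
import numpy as np

def draw_overlay(frame_bgr, mask, colour=(0, 255, 255), alpha=0.4):
    """Blend a binary mask (H x W) onto the frame in the given colour."""
    tint = np.zeros_like(frame_bgr)
    tint[mask.astype(bool)] = colour
    return cv2.addWeighted(frame_bgr, 1.0, tint, alpha, 0.0)

# usage (model call is hypothetical):
# mask = segment_structure(frame_bgr, label="cystic_duct")
# show(draw_overlay(frame_bgr, mask))
```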
Pre-operative imaging may optionally be used to identify and tag structures, with live correlation then used during surgery to correlate those structures with the real-time endoscopic view.
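One way to perform such correlation is rigid point-set registration between landmarks tagged in the pre-operative images and the same landmarks located in the live view. A minimal Kabsch-style sketch, assuming paired 3-D landmark coordinates are available, is shown below:

```python
# Illustrative Kabsch-style sketch: find the rigid transform (R, t) mapping
# pre-operative landmark coordinates onto their live-view counterparts.
import numpy as np

def rigid_register(preop_pts, live_pts):
    """Minimise ||R @ p + t - q|| over paired (N, 3) point arrays."""
    p0, q0 = preop_pts.mean(axis=0), live_pts.mean(axis=0)
    H = (preop_pts - p0).T @ (live_pts - q0)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t
```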
With regard to the portions of the ureter or other path-like structure that cannot be detected by the system, the system predicts the path of the structure based on the detected portions and, optionally, other information known or learned by the system. The system displays the predicted path as an overlay on the endoscopic display to help the surgeon avoid inadvertent injury to the structure. This is illustrated in the figures.
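As one possible realization of the prediction step, a smooth parametric spline can be fitted through the centreline points of the detected portions so that the curve bridges the occluded gaps. This sketch assumes the detected points are available as an ordered array; all names are illustrative:

```python
# Illustrative sketch: bridging occluded gaps with a parametric B-spline
# fitted through detected centreline points (an ordered (N, 2) array, N >= 4).
import numpy as np
from scipy.interpolate import splprep, splev

def predict_path(detected_pts, n_samples=200, smoothing=5.0):
    """Return (n_samples, 2) points along a smooth curve through the inputs."""
    x, y = detected_pts[:, 0], detected_pts[:, 1]
    tck, _ = splprep([x, y], s=smoothing)    # fit a parametric spline
    u = np.linspace(0.0, 1.0, n_samples)     # resample densely along the curve
    xs, ys = splev(u, tck)
    return np.stack([xs, ys], axis=1)        # overlay-ready path points
```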
Referring again to the figures, with increased confidence or with user direction, the path connecting what is now believed or known to be the same structure(s), or at least connected structures, may be confirmed and tracked.
Although the paths shown in the figures are straight lines, the predicted path may have any shape, including straight lines, splines, arcs, or any combination thereof.
The system may make use of active contour (snake) models and their properties to define an acceptable path and potential connectivity criteria. Other anatomical landmarks recognized by the system, or identified to the system by the user, may be taken into account by the system in predicting pathways. Definition of pathways may also be performed with reference to other instruments. See, for example, commonly owned U.S. Ser. No. 16/733,147, "Guidance of Robotically Controlled Instruments Along Paths Defined with Reference to Auxiliary Instruments," incorporated by reference.
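As an example of applying a snake model here, a predicted path can be relaxed onto nearby image edges while its endpoints stay pinned to the detected portions. The parameter values below are illustrative only:

```python
# Illustrative sketch: relaxing a predicted (row, col) path onto nearby image
# edges with scikit-image's active contour; parameter values are examples.
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_path(frame_rgb, initial_path_rc):
    """Settle an initial path onto edges while keeping its endpoints fixed."""
    img = gaussian(rgb2gray(frame_rgb), sigma=3)
    return active_contour(
        img,
        initial_path_rc,
        alpha=0.01,                   # elasticity: penalises stretching
        beta=0.1,                     # rigidity: penalises bending
        w_edge=1.0,                   # attraction toward image edges
        boundary_condition="fixed",   # pin ends to the detected portions
    )
```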
With the paths predicted or identified, additional functions may optionally be performed.
Machine learning algorithms may be employed to help the system provide increasingly accurate recommendations over time, as the accuracy of its predictions is confirmed to the system and used to train the algorithms.
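A minimal sketch of such a feedback loop is given below; the model class and its fine_tune method are placeholders standing in for whatever training machinery is actually used:

```python
# Placeholder sketch of the feedback loop: confirmed predictions become
# labelled training pairs. TrainableSegmenter and fine_tune are hypothetical.
class TrainableSegmenter:
    """Stand-in for whatever segmentation model the system retrains."""
    def fine_tune(self, examples):
        ...  # update weights from (frame, confirmed_mask) pairs

model = TrainableSegmenter()
confirmed = []

def on_prediction_confirmed(frame, predicted_mask, batch_size=100):
    """Queue surgeon-confirmed predictions and retrain in modest batches."""
    confirmed.append((frame, predicted_mask))
    if len(confirmed) >= batch_size:
        model.fine_tune(list(confirmed))
        confirmed.clear()
```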
All patents and applications described herein, including for purposes of priority, are incorporated by reference.
Number | Date | Country
---|---|---
63088404 | Oct 2020 | US