There are various types of surgical robotic systems on the market or under development. Some surgical robotic systems use a plurality of robotic arms. Each arm carries a surgical instrument, or the camera used to capture images from within the body for display on a monitor. Other surgical robotic systems use a single arm that carries a plurality of instruments and a camera that extend into the body via a single incision. Each of these types of robotic systems uses motors to position and/or orient the camera and instruments and to, where applicable, actuate the instruments. Typical configurations allow two or three instruments and the camera to be supported and manipulated by the system. Input to the system is provided by a surgeon positioned at a master console, typically via input devices such as input handles and a foot pedal. Motion and actuation of the surgical instruments and the camera are controlled based on the user input. The image captured by the camera is shown on a display at the surgeon console. The console may be located patient-side, within the sterile field, or outside of the sterile field.
US Patent Publication No. 2010/0094312 describes a surgical robotic system in which sensors are used to determine the forces that are being applied to the patient by the robotic surgical tools during use. That application describes the use of a six degrees-of-freedom (6-DOF) force/torque sensor attached to a surgical robotic manipulator as a method for determining the haptic information needed to provide force feedback to the surgeon at the user interface. It describes a method of force estimation and a minimally invasive medical system, in particular a laparoscopic system, adapted to perform this method. As described, a robotic manipulator has an effector unit equipped with a 6-DOF (6-axis) force/torque sensor. The effector unit is configured for holding a minimally invasive instrument mounted thereto. In normal use, a first end of the instrument is mounted to the effector unit of the robotic arm and the opposite, second end of the instrument (e.g. the instrument tip) is located beyond an external fulcrum (a pivot point kinematic constraint) that constrains the motion of the instrument. In general, the fulcrum is located within an access port (e.g. the trocar) installed at an incision in the body of a patient, e.g. in the abdominal wall. A position of the instrument relative to the fulcrum is determined. This step includes continuously updating the insertion depth of the instrument, i.e. the distance between the (reference frame of the) sensor and the fulcrum. Using the 6-DOF force/torque sensor, a force and a torque exerted on the effector unit by the first end of the instrument are measured. Using the principle of superposition and the determined position, an estimate of the force exerted on the second end of the instrument is calculated. The forces are communicated to the surgeon in the form of tactile haptic feedback at the hand controllers of the surgeon console.
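By way of illustration only, and not as a statement of the '312 publication's actual algorithm, the following sketch shows how such a superposition-based estimate might be computed. It assumes a rigid shaft along the sensor z-axis, a frictionless trocar that applies only a lateral constraint force at the fulcrum, and known insertion geometry; all names and simplifications are the author's assumptions.

```python
import numpy as np

def estimate_tip_force(f_sensor, t_sensor, d_fulcrum, l_instrument):
    """Estimate the force at the instrument tip from a 6-DOF force/torque
    measurement at the instrument mount.

    Simplified static model (an assumption, not the '312 method): rigid
    shaft along the sensor z-axis, purely lateral constraint force at the
    fulcrum, no friction or inertial effects.
    """
    z = np.array([0.0, 0.0, 1.0])            # shaft axis in the sensor frame
    lever = l_instrument - d_fulcrum          # fulcrum-to-tip distance
    # Moment balance about the sensor origin:
    #   t_sensor = z x (d_fulcrum * f_fulcrum + l_instrument * f_tip)
    # Eliminating the unknown fulcrum force via f_sensor = f_fulcrum + f_tip
    # isolates the lateral tip-force component:
    residual = t_sensor - d_fulcrum * np.cross(z, f_sensor)
    f_tip_lateral = np.cross(residual, z) / lever
    # An ideal trocar transmits no axial force, so the axial component
    # measured at the sensor is attributed to the tip.
    f_tip_axial = f_sensor[2] * z
    return f_tip_lateral + f_tip_axial
```

Here f_sensor and t_sensor are 3-vectors expressed in the sensor frame, and d_fulcrum is the continuously updated sensor-to-fulcrum distance described above.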
Often in surgery there are tissues within the body cavity that the surgeon would like to avoid touching with the surgical instruments. Examples of such structures include the ureter, nerves, blood vessels, ducts, etc. The need to avoid certain structures arises in open surgery as well as in laparoscopic surgery, including minimally invasive gynecologic, colorectal, oncologic, pediatric, urologic, and thoracic procedures, among other minimally invasive procedures. The present application describes features and methods that improve on robotic systems by allowing control of the robotic system based on information about identified tissues or structures within the surgical field. These features and methods may also be used more generally to assist with or guide tasks.
The present application describes a system and method that make use of information provided to the system about the operative site, allowing the robotic surgical system to operate in a manner that avoids unintended contact between surgical instruments and certain tissues or structures within the body. These features and methods allow the system to track the identified structures or tissues and to predict whether an instrument is approaching unintended contact with a tissue or structure to be avoided. Such features and techniques can help protect delicate tissues by automatically controlling the robotic system to stop or prevent the unintended contact, and/or by giving the surgeon feedback about the imminence of the predicted contact so that the surgeon can avoid it. They may also be used more generally to assist with or guide tasks. In some cases, the system may be used to track other structures placed in the body, such as ureteral stents (which can help to mark the ureter so it may be avoided during the procedure) or colpotomy cups.
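As a minimal illustration of how such a prediction might be made (a sketch under assumed names, frames, and units, not a prescribed implementation), a time-to-contact check can compare the tip's closing speed toward a tracked structure against a warning horizon:

```python
import numpy as np

def time_to_contact(tip_pos, tip_vel, structure_pos, warn_seconds=1.0):
    """Return a time-to-contact estimate if the instrument tip is closing
    on a protected structure within the warning horizon, else None.

    A sketch: positions and velocities are 3-vectors in a common frame,
    assumed to come from the tracking described above.
    """
    offset = np.asarray(structure_pos) - np.asarray(tip_pos)
    dist = np.linalg.norm(offset)
    if dist < 1e-9:
        return 0.0                       # already in contact
    closing_speed = np.dot(tip_vel, offset / dist)
    if closing_speed <= 0.0:
        return None                      # moving away or tangentially
    ttc = dist / closing_speed
    return ttc if ttc <= warn_seconds else None
```

The returned value could drive either automatic motion limiting or feedback to the surgeon, as described above.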
Some embodiments described below also include the use of data generated using structured light techniques performed by illuminating the body cavity using structured light delivered from a trocar through which the surgical instrument is inserted into the body.
Structures or tissues that are identified and/or tracked may be ones that fluoresce, whether by autofluorescence, by use of a fluorescent agent such as indocyanine green (ICG), or by use of a dye such as methylene blue.
The surgical system may be of a type described in the Background, or any other type of robotic system used to maneuver surgical instruments at an operative site within the body.
At a high level, embodiments described in this application provide a method of controlling a robotic surgical system based on identified structures, such as those identified within an endoscopic camera image. Some implementations use additional data sources to provide anticipatory information. The system acquires data from one or more sources, processes that information, and, based on that information, provides output to the surgeon and/or performs some action with respect to the movement of the robotic system.
In disclosed embodiments, the information source may be an endoscopic image or a fluorescence image. Computer vision is applied to the image data to identify tissues or surgical instruments of interest. In some cases, some or all of the structures or tissues that are identified and/or tracked may be ones that fluoresce, whether by autofluorescence, by use of a fluorescent agent such as indocyanine green (ICG), or by use of a dye such as methylene blue, and that are detected using a fluorescence imaging system. In some cases, the system predicts the subsequent motion of the structures or instruments that the computer vision has identified in the image.
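By way of example, a minimal computer-vision sketch for the fluorescence case might threshold the near-infrared channel and locate the marked structure. The function and parameters below are illustrative assumptions (OpenCV 4 API); a deployed system would more likely use a trained segmentation model:

```python
import cv2

def segment_fluorescent_structure(nir_frame, threshold=60):
    """Locate a fluorescing structure (e.g., ICG-marked tissue) in a
    single-channel, 8-bit near-infrared fluorescence frame."""
    blurred = cv2.GaussianBlur(nir_frame, (7, 7), 0)        # suppress noise
    _, mask = cv2.threshold(blurred, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, mask                                   # nothing fluorescing
    largest = max(contours, key=cv2.contourArea)            # assume one structure
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None, mask
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])   # pixel coordinates
    return centroid, mask
```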
Some embodiments identify structures and provide control input to a robotic surgical system using a limited amount of information. In other embodiments, a richer set of information provides additional benefits, which may include a more responsive system and a system that is easier to use, among others. The invention may be implemented in a number of ways by incorporating various layers of information. These may include, but are not limited to, the following:
Endoscope Image Only
Endoscope Image + Motion Prediction on the Endoscope Image
Endoscope Image + Arm Information Only
Endoscope Image + Arm Information + Surgeon Input
Endoscope Image + Other Imaging Sources + Arm Information + Surgeon Input
In addition to those described herein, sources of information that may be used as input in the methods described here are found in the applicant's co-pending applications, each of which is incorporated herein by reference.
The system makes use of several data models, as shown in the accompanying figures.
Input of information into the data models is illustrated in the accompanying figures.
As discussed above, to aid the computer vision algorithm in image segmentation and to improve robustness, user input may be used to select or guide the algorithm. The user may be prompted to select the tip of the instrument, or to "click on the lighted ureter". This may be done with a mouse, a touchscreen, the hand controllers, or another input device. In some implementations, eye tracking is used to provide the user input.
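For illustration, a user-selected seed point might guide segmentation as in the following sketch; flood fill and the parameter values are illustrative stand-ins, not the prescribed algorithm:

```python
import cv2
import numpy as np

def segment_from_click(frame_bgr, seed_xy, tolerance=12):
    """Grow a region from a surgeon-selected seed point (e.g., a click on
    the lighted ureter). seed_xy is an integer (x, y) pixel coordinate."""
    h, w = frame_bgr.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)      # floodFill needs a 1-px border
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(frame_bgr, mask, seed_xy, 0,
                  loDiff=(tolerance,) * 3, upDiff=(tolerance,) * 3,
                  flags=flags)
    return mask[1:-1, 1:-1]                        # binary mask of the region
```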
While the embodiment described above relies on the endoscope image alone, additional sources of information may be incorporated, as in the embodiments described below.
In a second embodiment, shown schematically in the accompanying figures, motion prediction is applied to the endoscope image to anticipate the movement of tracked tissues and instruments.
In a third embodiment, shown in the accompanying figures, kinematic information from the robotic arms is used in combination with the endoscope image.
The information used by the system may be provided to the system or updated at different time intervals. For instance, an endoscopic camera image may be available at approximately 30 Hz or approximately 60 Hz, or, in some systems, at approximately 50 Hz. In contrast, the control loop and resultant information for a surgical robotic system may run at 250 Hz, 500 Hz, 1 kHz, or 2 kHz, as illustrated in the accompanying figures.
This disparity presents an opportunity to use higher-fidelity information, but it requires rectifying the timing of information coming from different sources.
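One simple way to perform this rectification (a sketch; a fielded system might instead use a Kalman filter or similar estimator) is to hold and extrapolate the most recent vision measurements at the control-loop rate:

```python
import numpy as np

class RateRectifier:
    """Bridge a slow vision pipeline (~30-60 Hz) and a fast control loop
    (250 Hz-2 kHz) by extrapolating the last two measurements."""

    def __init__(self):
        self.prev = None   # (timestamp, position) of the older measurement
        self.last = None   # (timestamp, position) of the newest measurement

    def update(self, t, position):
        """Call whenever a new vision measurement arrives."""
        self.prev, self.last = self.last, (t, np.asarray(position, float))

    def sample(self, t):
        """Call from the control loop at its own (faster) rate."""
        if self.last is None:
            return None
        if self.prev is None:
            return self.last[1]                   # zero-order hold
        (t0, p0), (t1, p1) = self.prev, self.last
        velocity = (p1 - p0) / (t1 - t0)
        return p1 + velocity * (t - t1)           # linear extrapolation
```

update() would be called from the slower vision pipeline, while sample() is called from the faster control loop on each control cycle.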
As discussed above, additional imaging sources may help to enhance the model of the environment. These imaging sources may be co-registered to or tracked by an optical tracking system, or by the robotic surgical system. These image sources may be static, or may be dynamic. Dynamic sources of imaging may include, but are not limited to: ultrasound, OCT, and structured light. Any combination of sources may be used to create a model of the anatomy, which then may be constructed as a deformable model that updates based on the live/real-time/near real-time imaging sources. This may update boundaries/tissue planes that should not be violated, for instance.
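As an illustration of how such a boundary might be enforced, the following sketch scales the commanded tip displacement as the tip nears a protected region; the point-cloud boundary representation, units, and thresholds are assumptions:

```python
import numpy as np

def gate_commanded_motion(tip_pos, commanded_step, boundary_points,
                          stop_dist=0.005, slow_dist=0.020):
    """Scale or block a commanded tip displacement near a protected boundary.

    A sketch: boundary_points is an (N, 3) sample of the deformable anatomy
    model; distances are in meters (assumed units and thresholds).
    """
    dists = np.linalg.norm(boundary_points - tip_pos, axis=1)
    d = dists.min()
    if d <= stop_dist:
        return np.zeros(3)                    # hard stop at the boundary
    if d < slow_dist:
        scale = (d - stop_dist) / (slow_dist - stop_dist)
        return commanded_step * scale         # taper speed on approach
    return commanded_step                     # unconstrained motion
```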
A source of structured light may be used to generate additional information in any of the embodiments described above. In some implementations, a source of structured light may be added to the trocar through which the surgical instrument is inserted into the body. This may be an optical element or series of optical elements, or a light source together with an optical element or series of optical elements. In some implementations, an external light source may be connected (by attachment, by simple proximity, by a fiber optic connector, etc.) to the component that provides the structured light.
In some implementations, the light source/optical element is outside the nominal circumference of the trocar, as shown in the accompanying figures.
In some implementations, the optical element and/or light source for providing the structured light may be on a sliding/movable element that moves along with the insertion of the instrument. This may allow the structured light source to be closer to the tissue or to maintain a constant/optimal distance.
In some implementations, a source of structured light may be integrated into the trocar.
In some implementations, part of the optical path may be the trocar lumen itself. In some implementations, part of the optical path may be features molded into the surface or structure of the trocar lumen. In alternative implementations, such features may be attached to, or machined, etched, or otherwise post-processed into, the surface or structure of the trocar lumen.
In some implementations, the trocar lumen structure may be over-molded onto optical elements.
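Whatever the mechanical integration, depth recovery from the projected pattern reduces to standard triangulation between the projector and the endoscope camera. The following sketch assumes a calibrated camera-projector baseline, with ray angles measured from that baseline; it is a statement of the basic geometry, not of any particular implementation:

```python
import numpy as np

def stripe_depth(baseline_m, cam_angle_rad, proj_angle_rad):
    """Triangulate the depth (perpendicular to the camera-projector
    baseline) of a point on a projected light stripe.

    Law-of-sines triangulation: the camera ray and the projected plane
    of light intersect at the observed surface point.
    """
    return (baseline_m * np.sin(proj_angle_rad) * np.sin(cam_angle_rad)
            / np.sin(proj_angle_rad + cam_angle_rad))
```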
A sequence of steps in an exemplary method for providing the illumination is illustrated in the accompanying figures.
As also referenced above, optical flow/motion algorithms may be used to provide predictive motion for tissue positions and/or tool positions. Based on this information, avoidance methods may be used and/or feedback given to the surgeon.
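As one illustration of such an algorithm, dense optical flow (shown here using OpenCV's Farneback implementation with typical default parameters) can extrapolate where a segmented structure will be a few frames ahead:

```python
import cv2
import numpy as np

def predict_structure_position(prev_gray, curr_gray, structure_mask,
                               horizon_frames=3):
    """Extrapolate a tracked structure's pixel position a few frames ahead
    using dense optical flow. Assumes a non-empty binary structure_mask."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.nonzero(structure_mask)            # pixels of the structure
    mean_flow = flow[ys, xs].mean(axis=0)          # average (dx, dy) per frame
    centroid = np.array([xs.mean(), ys.mean()])
    return centroid + horizon_frames * mean_flow   # predicted pixel position
```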
In an alternate embodiment, a source of structured light attached to the abdominal wall may be used. In some implementations, this source may be held magnetically, potentially with an external magnet or ferrous device outside the body.
Without limiting the scope of the claimed inventions, one system in which the features and methods described above may be utilized is described in US Published Application No. 2013/0030571 (the '571 application), which is owned by the owner of the present application and which is incorporated herein by reference. The '571 application describes a robotic surgical system that includes an eye tracking system. The eye tracking system detects the direction of the surgeon's gaze and enters commands to the surgical system based on the detected direction of the gaze.
A control unit 30 provided with the system includes a processor able to execute programs or machine executable instructions stored in a computer-readable storage medium (which will be referred to herein as “memory”). Note that components referred to in the singular herein, including “memory,” “processor,” “control unit” etc. should be interpreted to mean “one or more” of such components. The control unit, among other things, generates movement commands for operating the robotic arms based on surgeon input received from the input devices 17, 18, 21 corresponding to the desired movement of the surgical instruments 14, 15, 16.
The memory includes computer-readable instructions that are executed by the processor to perform the methods described herein, including the various modes of operation described herein for practice of the disclosed invention.
The inventions are not limited to the order of operations shown, and not every element shown is required; different combinations remain within the scope of the invention. Other variations include the use of transmitted spectral information to determine the depth of an identified structure.
All prior patents and applications referred to herein, including for purposes of priority, are incorporated herein by reference.
This application is a continuation of U.S. application Ser. No. 16/010,388, filed Jun. 15, 2018, which claims the benefit of U.S. Provisional Application No. 62/520,554, filed Jun. 15, 2017, and U.S. Provisional Application No. 62/520,552, filed Jun. 15, 2017.
Provisional Application:

Number | Date | Country
--- | --- | ---
62/520,554 | Jun. 2017 | US

Parent/Child Case Data:

Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 16/010,388 | Jun. 2018 | US
Child | 16/237,444 | | US