1. Field of Invention
The field of the currently claimed embodiments of this invention relates to systems and methods of acquiring and displaying information during surgical procedures, and more particularly to systems and methods of acquiring and displaying information that include visual tracking of surgical instruments and annotation of information displayed.
2. Discussion of Related Art
Currently, surgeons use a number of intraoperative diagnostic, surgical, and/or treatment devices for operating on small tissue regions, e.g., laparoscopic ultrasound and/or RF ablation devices. The surgeon observes the device's position relative to the anatomy through a video feed. In the case of diagnostic instruments, the surgeon has to note this position and the imaging results and track this location relative to the anatomy over time. Over time, the surgeon may want to revisit particular anatomy for an intervention or inspection. If the surgeon does not recall or is not sure about the location or the content of a prior diagnostic image associated with a particular area, the diagnostic device might have to be reintroduced to the operating field, which is time-consuming. In the case of surgical devices, the surgeon has to map the already-treated area relative to some visible landmarks. This is difficult when the treatment is inside the target tissue or does not alter the tissue's appearance. Missed, over-treated, or incorrect treatment locations are to be avoided. The surgeon may choose to inspect or treat a number of anatomical regions, which could be sparsely located, e.g., 10 landmarks. This adds to the cognitive load on the surgeon in an already challenging minimally invasive procedure. Furthermore, the anatomy may deform or change color, whether naturally or from the intervention itself, which adds to the difficulty of tracking the relevant landmarks and associated intraoperative information.
In the case of vitreoretinal surgery, for example, it has been very rare to interrogate the retina with intraocular imaging probes. However, with the development of new real-time intraoperative imaging modalities (GRIN lens endoscopes, spectroscopy, ultrasound, optical coherence tomography (OCT)), compatible probes, and multifunction instruments, new surgical techniques may be possible. These new technologies image tissue at very close distances and in very small volumes. The resulting data is sparsely located, requiring the surgeon to associate multiple scans or images with the corresponding anatomical locations in the microscope view, which adds significant cognitive load to the already challenging surgical task. This can become more difficult with anatomy altered by surgical manipulation, bleeding, biomarkers, or swelling, as well as inherent changes in the field of view, lighting methods and/or directions, and intraocular fluid conditions.
There thus remains the need for improved visual tracking and annotation systems and methods for surgical intervention.
A visual tracking and annotation system for surgical intervention according to some embodiments of the current invention has an image acquisition and display system arranged to obtain image streams of a surgical region of interest and of a surgical instrument proximate the surgical region of interest and to display acquired images to a user; a tracking system configured to track the surgical instrument relative to the surgical region of interest; a data storage system in communication with the image acquisition and display system and the tracking system; and a data processing system in communication with the data storage system, the image acquisition and display system and the tracking system. The data processing system is configured to annotate images displayed to the user in response to an input signal from the user.
A visual tracking and annotation method for surgical intervention according to some embodiments of the current invention includes acquiring an image of a surgical region of interest and of a surgical instrument proximate the surgical region of interest, tracking the surgical instrument relative to the surgical region of interest, and displaying the surgical region of interest and the surgical instrument. The displaying includes annotations added in response to an input signal from the user.
A computer-readable medium according to some embodiments of the current invention provides non-transient storage of software for visual tracking and annotation for surgical intervention which, when executed by a computer system, performs: processing image data of a surgical region of interest and of a surgical instrument proximate the surgical region of interest to provide an image of the surgical region of interest and of the surgical instrument; processing the image data to track the surgical instrument relative to the surgical region of interest; processing the image data and an input signal from a user to annotate the image; and displaying the image of the surgical region of interest and the surgical instrument with the annotations.
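To make the relationship among these components concrete, the following is a minimal sketch of one acquire/track/annotate/display cycle. All names here (AnnotationStore, locate_tip, and so on) are hypothetical illustrations rather than interfaces of the claimed embodiments, and OpenCV is assumed only as one possible rendering path.

```python
# Hypothetical sketch of the annotation pipeline; names and APIs are
# illustrative assumptions, not the claimed system's actual interfaces.
from dataclasses import dataclass, field

import cv2  # assumed available as one possible capture/display back end


@dataclass
class Annotation:
    position: tuple   # (x, y) location on the anatomy in image coordinates
    label: str        # e.g., a note or a key into stored sensor data


@dataclass
class AnnotationStore:
    items: list = field(default_factory=list)

    def add(self, position, label):
        self.items.append(Annotation(position, label))


def draw_overlay(frame, annotations):
    """Render each annotation as a marker plus label on the video frame."""
    for a in annotations:
        x, y = int(a.position[0]), int(a.position[1])
        cv2.circle(frame, (x, y), 6, (0, 255, 0), 2)
        cv2.putText(frame, a.label, (x + 8, y), cv2.FONT_HERSHEY_SIMPLEX,
                    0.4, (0, 255, 0), 1)
    return frame


def process_frame(frame, tracker, store, input_signal):
    """One cycle: track the instrument, annotate on user input, overlay."""
    tip_xy = tracker.locate_tip(frame)   # tracking system (hypothetical API)
    if input_signal is not None:         # pedal, voice, switch, ...
        store.add(tip_xy, str(input_signal))
    return draw_overlay(frame, store.items)
```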
Further objectives and advantages will become apparent from a consideration of the description, drawings, and examples.
Some embodiments of the current invention are discussed in detail below. In describing embodiments, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected. A person skilled in the relevant art will recognize that other equivalent components can be employed and other methods developed without departing from the broad concepts of the current invention. All references cited anywhere in this specification are incorporated by reference as if each had been individually incorporated.
Some embodiments of the current invention are directed to systems and methods for displaying surgically relevant data collected intraoperatively as a video augmented with overlay annotations. The intraoperative imaging and/or treatment information can be registered to corresponding anatomical landmark locations in the intraoperative video, and tracked and visualized in various representations over time in the video. In addition, external tool tracking systems and/or robots could also be included according to some embodiments of the current invention. In that case, they may need to be registered to the video system. The system according to some embodiments of the current invention can be used with any instrument visible in the surgical video field of view. In a simple case, the instrument can be a pointing device used to paint a region of interest, which is then tracked in subsequent video (telestration). In more complex cases, the instruments can be any intraoperative diagnostic and/or imaging devices, such as ultrasound, spectroscopy probes, oxygenation sensors, optical coherence tomography, confocal microscopy, endoscopes, GRIN endoscopes, nerve function measurement, or autofluorescence; or interventional treatment devices, such as RF liver ablation, lasers, electric stimulation, cryoablation, etc. The information from these devices can be linked to the location where it was collected or, in the case of treatment, where it was applied. The information can be displayed as a picture-in-picture on the same screen as the real-time surgical video feed or on a separate screen, for example.
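As one concrete illustration of keeping a painted annotation registered to its anatomical landmark over time, the sketch below propagates annotation points between video frames with sparse Lucas-Kanade optical flow. The patent text does not prescribe this (or any) particular tracking algorithm; it is an assumption chosen here for brevity.

```python
# Illustrative only: propagate annotation points frame-to-frame with
# Lucas-Kanade optical flow (one of many possible tracking choices).
import cv2
import numpy as np


def propagate_points(prev_gray, next_gray, points):
    """Move annotation points from one grayscale frame to the next."""
    pts = np.float32(points).reshape(-1, 1, 2)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None)
    # Points the tracker loses could be marked 'occluded' rather than
    # dropped, matching the annotation states discussed below.
    return [tuple(p.ravel()) for p, ok in zip(new_pts, status.ravel()) if ok]
```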
Individual annotations can be added to the set or removed by the surgeon. The annotations can have a number of states depending on their temporal nature: whether their associated anatomy is visible (e.g., occluded by tools), deformed by the procedure, or naturally changed by a certain amount. These states can be represented by color, intensity, visibility, or textual annotation. The annotations themselves can take the form of a point, a line, a region, a volume, a textual annotation, an image corresponding to the shape of the instrument, or the shape of the imaging area or volume. These annotations can create inherent relationships, whether spatial or contextual, based on landmarks, tissue types (underlying properties of the tissue being interrogated or treated, from the intraoperative imaging itself), or device type. Information from multiple devices can be annotated on the same video in some embodiments of the current invention.
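One possible data model for these annotation forms and states is sketched below; the enumeration values and field names are illustrative assumptions rather than a prescribed schema.

```python
# Hypothetical data model for annotation forms and states; names and
# values are assumptions made for illustration.
from dataclasses import dataclass, field
from enum import Enum, auto


class Form(Enum):
    POINT = auto()
    LINE = auto()
    REGION = auto()
    VOLUME = auto()
    TEXT = auto()
    INSTRUMENT_SHAPE = auto()   # image matching the instrument's shape


class State(Enum):
    VISIBLE = auto()    # associated anatomy currently in view
    OCCLUDED = auto()   # e.g., hidden behind a tool
    DEFORMED = auto()   # anatomy deformed by the procedure
    CHANGED = auto()    # natural change beyond a chosen threshold


@dataclass
class TrackedAnnotation:
    form: Form
    state: State
    geometry: list = field(default_factory=list)  # points defining the shape
    label: str = ""                               # optional textual note
    color: tuple = (0, 255, 0)                    # color can encode the state
```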
Some embodiments of the current invention can include user interfaces, such as pedals, computer mice, touch screens, or voice recognition input, or the input can be linked to the sensor activity level of the instruments being used. In some embodiments, gesture recognition of the instrument in the video can be included. Furthermore, tool position tracking in the video can allow the system to provide richer information from simple sensors according to an embodiment of the current invention. An embodiment of the current invention can include intraoperative OCT in which the system creates a B-Scan image from A-Scans and a corresponding pose estimation from a video tool tracking module. Further embodiments can extend such an approach to a volumetric, C-Scan-like information representation, for example.
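The B-Scan construction just described can be illustrated as follows: each one-dimensional axial A-Scan is placed at the lateral position the video tool tracker reported for the probe at that sample's timestamp. The interpolation scheme and array layout below are assumptions made for the sketch, not a specified implementation.

```python
# Sketch of A-Scan -> B-Scan fusion: order axial A-Scans by the tracked
# lateral probe position at each sample time. Layout choices are assumed.
import numpy as np


def assemble_b_scan(a_scans, scan_times, pose_times, lateral_positions):
    """a_scans: (N, depth) array of axial samples; times in seconds;
    lateral_positions: (M,) tracked probe coordinates along the scan path.
    pose_times must be increasing for np.interp."""
    # Time-synchronize: interpolate the tracked lateral position of the
    # probe tip to each A-Scan's acquisition timestamp.
    lateral = np.interp(scan_times, pose_times, lateral_positions)
    order = np.argsort(lateral)                # columns along the swept path
    b_scan = np.asarray(a_scans)[order].T      # depth x lateral image
    return b_scan, lateral[order]
```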
Some aspects of the current invention include, but are not limited to, the following:
1. Annotations are acquired relative to anatomy across image streams (video). We annotate the position on the anatomy (e.g., an organ) with sensor stream data.
2. “Information fusion”: Sensor data is time-series data that can be correlated over a video or a spatially tracked pose sequence relative to the anatomy. The sensor position moving over a region of interest can be tracked, and this time-synchronized tool position can be combined with the tool's sensor data stream to create an image over the video sequence, for example, an A-Scan stream transformed into B-Scan and/or M-Scan data.
3. Reviewing annotations can involve the user interacting with them. Simple interactions can involve selecting the “active” annotation by pointing at it with the tracked tool or via other input (voice, pedal, etc.); a minimal hit-test sketch follows this list. More complex interactions may indicate where the instrument is within the annotation itself.
4. Sensor streams can be correlated over multiple video image sequences.
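For aspect 3, a minimal version of selecting the “active” annotation by pointing could be a nearest-neighbor hit test around the tracked tool tip, as sketched below; the pixel threshold and names are arbitrary assumptions.

```python
# Illustrative hit test for picking the "active" annotation with the
# tracked tool tip; the 15-pixel radius is an arbitrary assumption.
import math


def pick_annotation(tool_tip_xy, annotations, radius_px=15.0):
    """Return the annotation nearest the tool tip, or None if none is close."""
    best, best_d = None, radius_px
    for ann in annotations:
        d = math.dist(tool_tip_xy, ann.position)  # ann.position: (x, y)
        if d <= best_d:
            best, best_d = ann, d
    return best
```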
The visual tracking and annotation system 10 can also include a user input device 20 that is in communication with the data processing system to provide the input signal according to some embodiments of the current invention. The input device 20 can be, but is not limited to, one or more pedals, keypads, switches, microphones, eye tracking systems or surgical instruments, according to some embodiments of the current invention.
In some embodiments of the current invention, the data processing system 18 can be configured to annotate images displayed to the user to include at least one of a position or a track of the surgical instrument.
In some embodiments of the current invention, the user input device 20 can be a surgical instrument that is suitable to be tracked by the tracking system. The surgical instrument can include a sensor system constructed and arranged to provide a data stream regarding localized portions of the surgical region of interest, and can be in communication with the data storage system such that data obtained by the surgical instrument can be saved for later retrieval. In some embodiments of the current invention, the sensor system can include at least one of an optical sensor system, an ultrasound sensor system, or a force-sensing system.
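One way such a data stream might be saved for later retrieval is to log each sensor sample together with the synchronized tool position, so prior measurements can be recalled when the surgeon revisits a location. The flat in-memory log below is purely an illustrative assumption about the data storage system.

```python
# Hypothetical storage sketch: pair each sensor sample with the tracked
# tool position so prior data can be recalled by revisiting a location.
import math


class SensorLog:
    def __init__(self):
        self.records = []   # list of (time, (x, y) pose, sample) tuples

    def record(self, t, tool_xy, sample):
        self.records.append((t, tool_xy, sample))

    def near_position(self, xy, radius_px):
        """All samples previously recorded within radius_px of position xy."""
        return [(t, pose, s) for t, pose, s in self.records
                if math.dist(pose, xy) <= radius_px]
```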
Surgical instruments according to some embodiments of the current invention can allow for simultaneous imaging and surgical intervention functionality integrated into a single instrument. Registration of the instrument to the optical sensor can be achieved by a reference portion of the instrument that is visible in the field of view of the optical sensor according to an embodiment of the current invention. Furthermore, multiple imaging probes can be integrated into the instrument for increased imaging volume, multiple imaging directions, increased resolution, or to provide other types of imaging for simultaneous multimodal imaging functionality according to other embodiments of the current invention. Multiple imaging point probes (multi-core fiber or multi-fiber bundle) can improve the registration of the tool tip to the optical sensor in some embodiments.
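As a rough illustration of this registration idea: if the optical sensor can detect the visible reference portion's position and orientation, and the tip's offset from that reference is known from the tool's geometry, the tip location follows from a rigid transform. The sketch below assumes such detections are available; it is not the patent's specified method.

```python
# Illustrative rigid-transform registration of the tool tip to the optical
# sensor's frame, assuming the reference portion's pose has been detected.
import numpy as np


def tip_in_sensor_frame(ref_position, ref_rotation, tip_offset_tool):
    """ref_position: (3,) reference point detected in sensor coordinates.
    ref_rotation: (3, 3) tool-frame orientation in sensor coordinates.
    tip_offset_tool: (3,) fixed tip offset in the tool's own frame."""
    return (np.asarray(ref_position)
            + np.asarray(ref_rotation) @ np.asarray(tip_offset_tool))
```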
A surgical instrument 100 according to an embodiment of the current invention is shown in
In the example of
Alternatively, or in addition to the OCT system illustrated as the optical sensor 114, the optical sensor 114 could be or include a visual imaging system. For example, the optical imaging system could include an optical fiber, or a bundle of optical fibers, to simultaneously image the reference portion 112 of the surgical tool 102 and the tissue 118 proximate or in contact with the distal end 106 of the surgical tool 102. In some embodiments, the surgical tool 102 can be a pick, for example, that is suitable for use in eye surgery. However, the general concepts of the current invention are not limited to the particular type of surgical tool. One can imagine a vast range of types of surgical tools that are suitable for surgical tool 102, such as, but not limited to, picks, tweezers, knives, light delivery devices, scissors, injectors, vitrectomy tools, or other microsurgery tools. The surgical instrument can be adapted to integrate into a robotic system, such as is illustrated by surgical system 200 in FIG. 5 or the hand-held robot 300 shown in
The following are a couple of examples of the use of a visual tracking and annotation system according to some embodiments of the current invention. These examples are provided for illustration and are not intended to define the broad scope of the current invention.
Vitreoretinal Surgery
The embodiments illustrated and discussed in this specification are intended only to teach those skilled in the art the best way known to the inventors to make and use the invention. In describing embodiments of the invention, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected. The above-described embodiments of the invention may be modified or varied, without departing from the invention, as appreciated by those skilled in the art in light of the above teachings. It is therefore to be understood that, within the scope of the claims and their equivalents, the invention may be practiced otherwise than as specifically described.
This application claims priority to U.S. Provisional Application No. 61/256,531, filed Oct. 30, 2009, the entire contents of which are hereby incorporated by reference, and is a U.S. national stage application under 35 U.S.C. § 371 of PCT/US2010/054988, filed Nov. 1, 2010, the entire contents of which are incorporated herein by reference.
This invention was made with Government support under Grant No. 1R01 EB 007969-01, awarded by the Department of Health and Human Services, NIH, and Grant No. EEC-9731478, awarded by the NSF. The U.S. Government has certain rights in this invention.