This disclosure relates to surgical systems, and more particularly, to systems for intraluminal navigation and imaging with depth of view and distance determination.
Knowledge of the location of a surgical tool relative to the internal anatomy is important to the successful completion of minimally invasive diagnostic and surgical procedures. An endoscope or bronchoscope is the simplest form of navigation, where a camera placed at the distal tip of a catheter is used to view the anatomy of the patient. Typically, the clinician uses their anatomic knowledge to recognize the current location of the bronchoscope. Near complex anatomic structures, the clinician may analyze pre-surgical and intraprocedural patient images derived from any of computed tomography (CT), including cone beam CT, magnetic resonance imaging (MRI), positron emission tomography (PET), fluoroscopy, or ultrasound scans to determine the location of the endoscope or a tool associated therewith. For many luminal and robotic approaches, stereoscopic imaging is either needed or beneficial to provide an adequate field of view (FOV) and an understanding of the depth of view (DOV) for the accurate placement of tools such as biopsy devices and ablation tools.
However, not all portions of the anatomy are amenable to the use of a two-camera (stereoscopic) solution. In many instances, a second camera requires too much space and limits the ability to use additional tools. These challenges can be particularly acute in the confined luminal spaces of the lung, esophagus, biliary ducts, and urinary tract, but they also arise in the relatively large lumens of the intestines and colon. Thus, improvements are needed to enable real-time depth of view determinations to be presented without requiring two cameras to produce stereoscopic views.
One aspect of the disclosure is directed to a method of assessing a depth of view of an image including: determining a position of a catheter in a luminal network; determining a position of a tool relative to the catheter in the luminal network; and acquiring an image data set. The method also includes analyzing the image data set to determine a diameter of the luminal network proximate the determined position of the tool, and displaying an image acquired by an optical sensor secured to the catheter. The method also includes displaying an indicator of a relative position of the catheter and tool in the image acquired by the optical sensor and an indicator of a position of a distal portion of the tool relative to a luminal wall of the luminal network. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
Implementations of this aspect of the disclosure may include one or more of the following features. The method further including displaying a distance of a closest point of the distal portion of the tool relative to the luminal wall. The method where the indicator includes at least two orthogonally measured distances of the distal portion of the tool relative to the luminal wall. The method where the image data set is a pre-procedure image data set. The method where the image data set is an intraprocedure image data set. The method where the position of the catheter is determined from data received from a sensor located in the catheter. The method where the position of the tool is determined from data received from a sensor located in the tool. The method where the sensors located in the catheter and in the tool are electromagnetic sensors. The method where the sensors located in the catheter and in the tool are inertial measurement units. The method where the sensors located in the catheter and in the tool are shape sensors. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
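In software terms, the claimed method reduces to gathering three quantities, two from the locating module and one from the registered image data set, and overlaying them on the camera image. The following is a minimal sketch of that flow; all function and parameter names are hypothetical, and the two query callables stand in for whatever image-analysis backend a particular system provides.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DepthOfViewInfo:
    tool_to_catheter_mm: float   # relative position of catheter and tool
    tool_to_wall_mm: float       # closest point of the tool tip to the luminal wall
    lumen_diameter_mm: float     # lumen diameter proximate the tool

def assess_depth_of_view(catheter_pos, tool_pos, lumen_diameter_at, wall_distance_at):
    """Gather the quantities the method displays over the camera image.

    catheter_pos / tool_pos: 3D positions reported by the locating module.
    lumen_diameter_at / wall_distance_at: callables that query the registered
    image data set (pre-procedure or intraprocedure) at a 3D position.
    """
    tool_to_catheter = float(np.linalg.norm(np.asarray(tool_pos) - np.asarray(catheter_pos)))
    return DepthOfViewInfo(
        tool_to_catheter_mm=tool_to_catheter,
        tool_to_wall_mm=float(wall_distance_at(tool_pos)),
        lumen_diameter_mm=float(lumen_diameter_at(tool_pos)),
    )

# Illustrative call with stand-in queries:
info = assess_depth_of_view(
    catheter_pos=(0.0, 0.0, 0.0),
    tool_pos=(0.0, 0.0, 12.0),
    lumen_diameter_at=lambda p: 8.0,
    wall_distance_at=lambda p: 2.5,
)
print(info)  # DepthOfViewInfo(tool_to_catheter_mm=12.0, tool_to_wall_mm=2.5, lumen_diameter_mm=8.0)
```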
A further aspect of the disclosure is directed to a system for depicting a depth of view (DOV) in an image including: a catheter including a first sensor configured for navigation in a luminal network and an optical sensor for generating an image; a tool including a second sensor configured to pass through a working channel in the catheter; a locating module configured to detect a position of the catheter and the tool; and an application stored on a computer readable memory and configured, when executed by a processor, to execute the steps of: registering data received from the first or second sensor with an image data set; analyzing the image data set to determine a diameter of a luminal network proximate the second sensor; and displaying the image generated by the optical sensor in combination with an indicator of a relative position of the catheter and tool and an indicator of a position of a distal portion of the tool relative to a luminal wall of the luminal network. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
Implementations of this aspect of the disclosure may include one or more of the following features. The system where the application executes a step of displaying a distance of a closest point of the distal portion of the tool relative to the luminal wall. The system where the application executes a step of displaying at least two orthogonally measured distances of the distal portion of the tool relative to the luminal wall. The system where the image data set is a pre-procedure image data set. The system where the image data set is an intraprocedure image data set. The system where the intraprocedure image data set is received from a fluoroscope. The system where the sensors located in the catheter and in the tool are electromagnetic sensors. The system where the sensors located in the catheter and in the tool are inertial measurement units. The system where the sensors located in the catheter and in the tool are shape sensors. The system where the sensors located in the catheter and in the tool are ultrasound sensors. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
Various aspects and features of the disclosure are described hereinbelow with reference to the drawings.
This disclosure is directed to systems and methods for navigating within a luminal network and determining the distance of a tool, observed in a field of view, from the camera. The disclosure is also directed to determining the distance of the tool from the walls of the lumen in which the tool is being navigated. In one embodiment, the system and method use data derived from sensors placed on the endoscope and the tools to measure the relative distance between them. Additionally, image processing of the luminal network can be conducted on pre-procedure or intra-procedure images, and the detected positions of the endoscope and the tools determined relative to the images. The diameter of the luminal network and the position of the endoscope or the tools relative to the boundary walls of the luminal network are determined and displayed in a live image from the endoscope. This depiction of relative distances of elements within a FOV enables assessment of the depth of view (DOV) of an image and of the tools and structures found therein. These and other aspects of the disclosure are described in greater detail below.
The system 100 includes a locating module 110, which receives signals from the catheter 104 and processes the signals to generate useable data, as described in greater detail below. A computer 112, including a display 114, receives the useable data from the locating module 110 and incorporates the data into one or more applications running on the computer 112 to generate one or more user interfaces that are presented on the display 114. Both the locating module 110 and the monitor 108 may be incorporated into or replaced by applications running on the computer 112 and images presented via a user interface on the display 114.
A further aspect of the disclosure is related to the use of linear endobronchial ultrasound (EBUS) and radial EBUS (REBUS) ultrasound sensors 210 described briefly above. In accordance with the ultrasound aspects of the disclosure, a linear EBUS sensor may be placed in the distal face of the catheter 104. As a result, forward-looking ultrasound images can be acquired as the catheter 104 is navigated towards the target. Additionally or alternatively, where the ultrasound sensors 210 are REBUS sensors, a 360-degree view surrounding the distal portion of the catheter 104 can be imaged. Whether REBUS or EBUS, the sensors 210 can be used much like optical sensors to identify fiducials. Further, the images generated by the ultrasound sensors 210 can be compared to virtual ultrasound images generated from pre-procedure CT or MRI images to assist in confirming the location of the ultrasound sensor 210 (and the catheter 104 therewith) while navigating towards the target.
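The disclosure does not specify how the live ultrasound frames are compared with the virtual frames rendered from CT or MRI; one standard choice is an image-similarity score such as normalized cross-correlation, evaluated over candidate sensor poses. The following is a sketch under that assumption, where render_virtual is a hypothetical renderer of virtual ultrasound views from the pre-procedure volume.

```python
import numpy as np

def ncc(live: np.ndarray, virtual: np.ndarray) -> float:
    """Normalized cross-correlation in [-1, 1] between two same-shape images."""
    a = (live - live.mean()) / (live.std() + 1e-8)
    b = (virtual - virtual.mean()) / (virtual.std() + 1e-8)
    return float((a * b).mean())

def most_likely_pose(live_frame, candidate_poses, render_virtual):
    """Pick the candidate sensor pose whose rendered virtual ultrasound image
    (from the pre-procedure CT/MRI) best matches the live EBUS/REBUS frame."""
    scores = [ncc(live_frame, render_virtual(pose)) for pose in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```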
There are known in the art a variety of pathway planning applications for pre-operatively planning a path through a luminal network such as the lungs or the vascular system. Typically, a pre-operative image data set, such as one acquired from a CT scan or an MRI scan, is presented to a user. The target identification may be automatic, semi-automatic, or manual, and allows for determining a pathway through patient P's airways to tissue located at and around the target. In one variation, the user scrolls through the image data set, which is presented as a series of slices of the 3D image data set output from the CT scan. By scrolling through the images, the user manually identifies targets within the image data set. The slices of the 3D image data set are often presented along the three axes of the patient (e.g., axial, sagittal, and coronal), allowing for simultaneous viewing of the same portion of the 3D image data set in three separate 2D images.
Additionally, the 3D image data set (e.g., acquired from the CT scan) may be processed and assembled into a three-dimensional CT volume, which is then utilized to generate a 3D model of patient P's airways by various segmentation and other image processing techniques. Both the 2D slice images and the 3D model may be displayed on a display 114 associated with computer 112. Using computer 112, various views of the 3D or enhanced 2D images may be generated and presented. The enhanced two-dimensional images may possess some three-dimensional capabilities because they are generated from the 3D image data set. The 3D model may be presented to the user from an external perspective view, an internal “fly-through” view, or other views. After identification of a target, the application may automatically generate a pathway to the target. In the example of lung navigation, the pathway may extend from the target to the trachea, for example. The application may either automatically identify the nearest airway to the target and generate the pathway, or the application may request that the user identify the nearest or desired proximal airway in which to start the pathway generation to the trachea. Once selected, the pathway plan, three-dimensional model, 3D image data set, and any images derived therefrom, can be saved into memory on the computer 112 and made available for use in combination with the catheter 104 during a procedure, which may occur immediately following the planning or at a later date.
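Because a segmented airway model is a tree, automatic pathway generation from the airway nearest the target back to the trachea can be as simple as following parent links. A toy sketch follows, assuming a hypothetical branch-ID representation of the segmented tree; real planning applications operate on richer centerline data.

```python
def pathway_to_trachea(parent_of: dict, start_branch: str) -> list:
    """Walk parent links from the airway nearest the target up to the trachea.
    parent_of maps each branch ID to its parent; the trachea maps to None."""
    path = [start_branch]
    while parent_of[path[-1]] is not None:
        path.append(parent_of[path[-1]])
    return path  # ordered from the target's airway up to the trachea

airways = {"trachea": None, "left main bronchus": "trachea", "LB10": "left main bronchus"}
print(pathway_to_trachea(airways, "LB10"))
# ['LB10', 'left main bronchus', 'trachea']
```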
Still further, without departing from the scope of the disclosure, the user may utilize an application running on the computer 112 to review the pre-operative 3D image data set, or 3D models derived therefrom, to identify fiducials in the pre-operative images or models. The fiducials are elements of the patient's physiology that are easily identifiable and distinguishable from related features, and of the type that could typically also be identified by the clinician when reviewing images produced by the optical sensor 206 during a procedure. As will be appreciated, these fiducials should lie along the pathway through the airways to the target. The identified fiducials, the target identification, and/or the pathway are reviewable on computer 112 prior to ever starting a procedure.
Though generally described herein as being formed pre-operatively, the 3D model, 3D image data set and 2D images may also be acquired in real time during a procedure. For example, such images may be acquired by a cone beam computed tomography (CBCT) device, or through reconstruction of 2D images acquired from a fluoroscope, without departing from the scope of the disclosure.
In a further aspect of the disclosure, the fiducials may be automatically identified by an application running on the computer 112. The fiducials may be selected based on the determined pathway to the target. For example, the fiducials may be the bifurcations of the airways encountered along the pathway.
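Continuing the sketch above with a complementary (again hypothetical) children map, that automatic selection can keep only the pathway branches at which the airway splits, since bifurcations are visually distinctive in both the camera image and the model.

```python
def bifurcation_fiducials(children_of: dict, pathway: list) -> list:
    """Return the branches along the planned pathway at which the airway
    bifurcates; these serve as natural, recognizable fiducials."""
    return [branch for branch in pathway if len(children_of.get(branch, [])) >= 2]

children = {"trachea": ["left main bronchus", "right main bronchus"],
            "left main bronchus": ["LB6", "LB10"]}
print(bifurcation_fiducials(children, ["LB10", "left main bronchus", "trachea"]))
# ['left main bronchus', 'trachea']
```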
Following the planning phase, where targets are identified and pathways to those targets are created, a navigation phase can be commenced. With respect to the navigation phase, the locating module 110 is employed to detect the position and orientation of a distal portion of the catheter 104. For example, if an EM sensor 205 is employed in catheter 104, the locating module 110 may utilize a transmitter mat 118 to generate an electromagnetic field in which the EM sensor 205 is placed. The EM sensor 205 generates a current when placed in the electromagnetic field; this signal is received by the locating module 110, and the position of the sensor 205 and catheter 104 is determined in either five or six degrees of freedom. To accurately reflect the detected position of the catheter 104 in the pre-procedure image data set (e.g., CT or MRI images) or 3D models generated therefrom, a registration process must be undertaken.
Registration of the patient P's location on the transmitter mat 118 may be performed by moving sensor 205 through the airways of the patient P. More specifically, data pertaining to locations of sensor 205, while the catheter 104 is moving through the airways, is recorded using transmitter mat 118 and locating module 110. A shape resulting from this location data is compared to an interior geometry of passages of the three-dimensional model generated in the planning phase, and a location correlation between the shape and the three-dimensional model based on the comparison is determined, e.g., utilizing the software on computer 112. In addition, the software identifies non-tissue space (e.g., air-filled cavities) in the three-dimensional model. The software aligns, or registers, an image representing a location of sensor 205 with the three-dimensional model and/or two-dimensional images generated from the three-dimensional model, based on the recorded location data and an assumption that the catheter 104 remains located in non-tissue space in patient P's airways.
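The shape-matching step above is stated functionally rather than algorithmically; one common realization is an iterative closest point (ICP) alignment of the recorded sensor locations against points sampled from the model's lumen, exploiting the stated assumption that the sensor stays in non-tissue space. A sketch under that assumption, not the disclosure's mandated algorithm:

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def register_shape_to_model(sensor_points, model_lumen_points, iterations=30):
    """ICP-style alignment of the swept sensor locations to points sampled from
    the airway lumen of the 3D model. Returns the sensor points expressed in
    model coordinates after convergence."""
    model = np.asarray(model_lumen_points, dtype=float)
    pts = np.asarray(sensor_points, dtype=float).copy()
    tree = cKDTree(model)
    for _ in range(iterations):
        _, nearest = tree.query(pts)      # closest lumen sample for each reading
        R, t = rigid_fit(pts, model[nearest])
        pts = pts @ R.T + t               # fold in the incremental transform
    return pts
```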
Though described herein with respect to EMN systems using EM sensor 205, the instant disclosure is not so limited and may be used in conjunction with IMU 202, shape sensor 204, optical sensor 206, or ultrasound sensor 210 or without sensors. Additionally, the methods described herein may be used in conjunction with robotic systems such that robotic actuators (not shown) drive the catheter 104 proximate the target.
The data displayed on the image 300 may be displayed at any time the distance between the catheter 104 and the tool 106 exceeds a predetermined distance. Alternatively, the data may be selectively enabled when desired; in this way, an overload of data in the image may be eliminated during those times when such data is not necessary, for example, when navigating the central airways while the target is located in the periphery of the lungs. The data may then automatically begin being displayed when the position of the catheter 104 or tool 106 comes within a predetermined distance of a target. Still further, the display of the data on image 300 may simply be selectively switched on and off as desired by the user.
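That gating logic is straightforward to express. A minimal sketch follows; the disclosure leaves the predetermined distances unspecified, so both thresholds here are illustrative values.

```python
import numpy as np

def should_show_overlay(catheter_pos, tool_pos, target_pos, user_toggle,
                        min_tool_extension_mm=2.0, near_target_mm=30.0):
    """Gate the on-image distance data as the passage describes: display when
    the tool extends past the catheter by more than a threshold, when either
    device is within a predetermined distance of the target, or whenever the
    user has switched the overlay on."""
    c, t, g = (np.asarray(p, dtype=float) for p in (catheter_pos, tool_pos, target_pos))
    extension = np.linalg.norm(t - c)
    near_target = min(np.linalg.norm(c - g), np.linalg.norm(t - g)) < near_target_mm
    return bool(user_toggle or extension > min_tool_extension_mm or near_target)
```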
At step 408, with the position of the tool 106 determined, an analysis can be made of an image data set, for example, the pre-procedure CT or MRI image data set which was acquired for planning the navigation to one or more targets. At step 408, the image data set can be analyzed to determine the diameter of the lumen in which the tool 106 is located. In addition, because the position of the tool 106 is known, and the pre-procedure images and 3D models have been registered to the patient, the proximity of the tool's detected position to a luminal wall 304 can also be determined. As noted above, this may be the closest point of the distal portion 302 of the tool 106 to the luminal wall 304 as well as an orthogonal distance.
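One convenient way to obtain both measurements from a segmented image data set is a Euclidean distance transform of the air (lumen) mask: each air voxel then carries its distance to the nearest wall, which gives the tool-tip clearance directly and, near the centerline, half the local lumen diameter. The following sketch makes that assumption (the disclosure does not prescribe a particular analysis) and assumes isotropic voxels for simplicity.

```python
import numpy as np
from scipy import ndimage

def wall_distance_mm(air_mask: np.ndarray, voxel_mm: float) -> np.ndarray:
    """For every air voxel, the distance to the nearest non-air voxel (the
    luminal wall), assuming isotropic voxels of size voxel_mm."""
    return ndimage.distance_transform_edt(air_mask) * voxel_mm

def tool_clearance_and_diameter(air_mask, voxel_mm, tip_ijk, radius=2):
    """Closest tool-tip-to-wall distance plus a local lumen-diameter estimate:
    near the centerline the wall distance equals the lumen radius, so twice
    the neighborhood maximum approximates the diameter at the tool's position."""
    dist = wall_distance_mm(air_mask, voxel_mm)
    i, j, k = tip_ijk
    neighborhood = dist[max(i - radius, 0):i + radius + 1,
                        max(j - radius, 0):j + radius + 1,
                        max(k - radius, 0):k + radius + 1]
    return float(dist[i, j, k]), 2.0 * float(neighborhood.max())
```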
At step 410, the distances determined in steps 406 and 408 may be displayed on an image acquired by optical sensor 206. The method 400 also provides for an optional step 412 of depicting an indication of the location of the distal portion 302 of the tool 106 on the luminal wall 304.
Though described in the context of a pre-procedure image data set, the method 400 is not so limited. As noted above, intraprocedural imaging may also be employed to generate data for display in the image 300 acquired by optical sensor 206. For example, cone beam computed tomography (CBCT) or 3D fluoroscopy techniques may be employed as well as other imaging technologies.
Where fluoroscope 116 is employed, the clinician may navigate the catheter 104 and tool 106 proximate a target. Once proximate the target, a fluoroscopic sweep of images may be acquired. This sweep is a series of images (e.g., video) acquired, for example, from about 15-30 degrees left of the AP position to about 15-30 degrees right of the AP position. Once acquired, the clinician may be required to mark one or more of the catheter 104, tool 106, or target 308 in one or more images. Alternatively, image processing techniques may be used to automatically identify the catheter 104, tool 106, or target 308. For example, an application running on computer 112 may be employed to identify pixels in the images having Hounsfield unit values that signify the density of the catheter 104 and tool 106. The last pixels before a transition to a less dense material may be identified as the distal locations of the catheter 104 and tool 106. This may require a determination that the pixels having the Hounsfield unit value indicating a high-density material extend in a longitudinal direction for at least some predetermined length. In some instances, the target 308 may also be identified based on its difference in Hounsfield unit value as compared to surrounding tissue. With the catheter 104 and tool 106 positively identified, a 3D volumetric reconstruction of the luminal network can be generated. The 3D volumetric reconstruction may then be analyzed using similar image processing techniques to identify those pixels having a Hounsfield unit value signifying the density of the airway wall 304. Alternatively, the image processing may seek those pixels having a Hounsfield unit value signifying air. In this process, all of the pixels having the density of air are identified until a change in density is detected. By performing this throughout the 3D volumetric reconstruction, the boundaries of the airway wall 304 can be identified. By identifying the airway wall, the diameter of the airway can be determined in the areas proximate the catheter 104 or tool 106. Further, the distance of the tool 106 from the airway wall may also be calculated. Accordingly, these additional data (the distance of the distal end 302 of the tool 106 from the catheter 104, the proximity of the tool 106 to the luminal wall 304, and an indicator 306 of the position of the tool relative to the luminal wall 304) can all be depicted on the image 300 generated by the optical sensor 206.
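A minimal sketch of the density-based identification described above; the Hounsfield cutoffs are illustrative assumptions (the disclosure names no values), and the tip heuristic mirrors the last-dense-pixel-before-a-transition idea.

```python
import numpy as np

AIR_HU_MAX = -800     # air reconstructs near -1000 HU; threshold is illustrative
TOOL_HU_MIN = 2000    # catheters/tools image far denser than any tissue

def segment_volume(volume_hu: np.ndarray):
    """Threshold the 3D reconstruction as the passage outlines: air voxels map
    the lumen (the wall begins where density rises), and very dense voxels
    flag the catheter/tool."""
    air_mask = volume_hu < AIR_HU_MAX
    tool_mask = volume_hu > TOOL_HU_MIN
    return air_mask, tool_mask

def distal_tip_ijk(tool_mask: np.ndarray, insertion_axis: int = 2):
    """Take the dense voxel furthest along the insertion axis as the distal
    tip, a simple stand-in for the longitudinal-extent check in the text."""
    voxels = np.argwhere(tool_mask)
    return tuple(voxels[np.argmax(voxels[:, insertion_axis])])
```

The air mask produced here can feed the distance-transform sketch shown earlier to recover the airway diameter and tool-to-wall clearance from the same reconstruction.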
With CBCT, similar processes as those described above with respect to the pre-procedure image data set (i.e., CT or MRI) can be employed. A 3D model depicting the airway may be generated, if desired. Regardless, image analysis, similar to that described above, can be undertaken to identify the catheter 104 and tool 106. Further, the image processing can determine the diameter of the luminal network in the area proximate the catheter 104 and tool 106. Still further, the position of the catheter 104 and tool 106 within the luminal network can be identified, including the proximity of the catheter 104 or tool 106 to the airway wall. Accordingly, these additional data (the distance of the distal end 302 of the tool 106 from the catheter 104, the proximity of the tool 106 to the luminal wall 304, and an indicator 306 of the position of the tool relative to the luminal wall 304) can all be depicted on the image 300 generated by the optical sensor 206.
In some instances, it may be difficult to determine the position of the tool 106 relative to the catheter 104 using the image processing techniques, primarily because the tool 106 passes through the catheter 104, making it difficult to determine where the catheter 104 ends. Accordingly, the sensor data from, for example, the EM sensors 205 located in the catheter 104 and tool 106 may be used to determine the relative position of the catheter 104 and the tool 106 as described above. Thus, the lumen diameter and the proximity of the tool 106 to the lumen wall may be determined from the intraprocedure images (CBCT or fluoroscopy), and the distance of the tool 106 relative to the catheter 104 can be determined from the sensor data. Accordingly, these additional data (the distance of the distal end 302 of the tool 106 from the catheter 104, the proximity of the tool 106 to the luminal wall 304, and an indicator 306 of the position of the tool relative to the luminal wall 304) can all be depicted on the image 300 generated by the optical sensor 206.
As a result of the processes described hereinabove, the image 300, with its indicators of the relative positions and distances of the catheter 104, tool 106, and luminal wall 304, can be presented to the clinician, enabling an assessment of the depth of view without requiring a second camera.
While several embodiments of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular embodiments.
Throughout this description, the term “proximal” refers to the portion of the device or component thereof that is closer to the clinician and the term “distal” refers to the portion of the device or component thereof that is farther from the clinician. Additionally, in the drawings and in the description above, terms such as front, rear, upper, lower, top, bottom, and similar directional terms are used simply for convenience of description and are not intended to limit the disclosure. In the description hereinabove, well-known functions or constructions are not described in detail to avoid obscuring the disclosure in unnecessary detail.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2021/057764 | 11/2/2021 | WO |

Number | Date | Country
---|---|---
63110268 | Nov 2020 | US