System and method for identifying, marking and navigating to a target using real time two dimensional fluoroscopic data

Information

  • Patent Grant
  • Patent Number
    11,341,692
  • Date Filed
    Wednesday, November 4, 2020
  • Date Issued
    Tuesday, May 24, 2022
Abstract
A system for facilitating identification and marking of a target in a fluoroscopic image of a body region of a patient, the system comprising one or more storage devices having stored thereon instructions for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target; and generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the virtual fluoroscopy image includes the target and the marking of the target, at least one hardware processor configured to execute these instructions, and a display configured to display to a user the virtual fluoroscopy image and the fluoroscopic 3D reconstruction.
Description
BACKGROUND
Technical Field

The present disclosure relates to the field of identifying and marking a target in fluoroscopic images, in general, and to such target identification and marking in medical procedures involving intra-body navigation, in particular. Furthermore, the present disclosure relates to a system, apparatus, and method of navigation in medical procedures.


Description of Related Art

There are several commonly applied medical methods, such as endoscopic procedures or minimally invasive procedures, for treating various maladies affecting organs including the liver, brain, heart, lung, gall bladder, kidney and bones. Often, one or more imaging modalities, such as magnetic resonance imaging (MRI), ultrasound imaging, computed tomography (CT) and fluoroscopy, as well as others, are employed by clinicians to identify and navigate to areas of interest within a patient and ultimately a target for treatment. In some procedures, pre-operative scans may be utilized for target identification and intraoperative guidance. However, real-time imaging may often be required to obtain a more accurate and current image of the target area. Furthermore, real-time image data displaying the current location of a medical device with respect to the target and its surroundings may be required to navigate the medical device to the target in a safer and more accurate manner (e.g., with minimal or no damage caused to other organs or tissue).


For example, an endoscopic approach has proven useful in navigating to areas of interest within a patient, and particularly so for areas within luminal networks of the body such as the lungs. To enable the endoscopic, and more particularly the bronchoscopic, approach in the lungs, endobronchial navigation systems have been developed that use previously acquired MRI data or CT image data to generate a three-dimensional (3D) rendering, model or volume of the particular body part, such as the lungs.


The resulting volume generated from the MRI scan or CT scan is then utilized to create a navigation plan to facilitate the advancement of a navigation catheter (or other suitable medical device) through a bronchoscope and a branch of the bronchus of a patient to an area of interest. A locating or tracking system, such as an electromagnetic (EM) tracking system, may be utilized in conjunction with, for example, CT data, to facilitate guidance of the navigation catheter through the branch of the bronchus to the area of interest. In certain instances, the navigation catheter may be positioned within one of the airways of the branched luminal networks adjacent to, or within, the area of interest to provide access for one or more medical instruments.


However, a three-dimensional volume of a patient's lungs, generated from previously acquired scans, such as CT scans, may not provide a basis sufficient for accurate guiding of medical instruments to a target during a navigation procedure. In certain instances, the inaccuracy is caused by deformation of the patient's lungs during the procedure relative to the lungs at the time of the acquisition of the previously acquired CT data. This deformation (CT-to-body divergence) may be caused by many different factors, for example: sedation versus no sedation, the bronchoscope changing the patient's pose and also pushing on the tissue, a different lung volume (the CT is typically acquired at inhale while navigation is performed during breathing), a different bed, a different day, etc.


Thus, another imaging modality is necessary to visualize such targets in real-time and enhance the in-vivo navigation procedure by correcting navigation during the procedure. Furthermore, in order to accurately and safely navigate medical devices to a remote target, for example, for biopsy or treatment, both the medical device and the target should be visible in some sort of a three-dimensional guidance system.


A fluoroscopic imaging device is commonly located in the operating room during navigation procedures. The standard fluoroscopic imaging device may be used by a clinician, for example, to visualize and confirm the placement of a medical device after it has been navigated to a desired location. However, although standard fluoroscopic images display highly dense objects, such as metal tools and bones, as well as large soft-tissue objects, such as the heart, small soft-tissue objects of interest, such as lesions, may be difficult to resolve in the fluoroscopic images. Furthermore, the fluoroscope image is only a two-dimensional projection. Therefore, an X-ray volumetric reconstruction may enable identification of such soft-tissue objects and navigation to the target.


Several solutions exist that provide three-dimensional volume reconstruction, such as CT and cone-beam CT, which are extensively used in the medical world. These machines algorithmically combine multiple X-ray projections from known, calibrated X-ray source positions into a three-dimensional volume in which, inter alia, soft tissues are more visible. For example, a CT machine can be used with iterative scans during a procedure to provide guidance through the body until the tools reach the target. This is a tedious procedure, as it requires several full CT scans, a dedicated CT room and blind navigation between scans. In addition, each scan requires the staff to leave the room due to high levels of ionizing radiation and exposes the patient to such radiation. Another option is a cone-beam CT machine, which is available in some operating rooms and is somewhat easier to operate, but is expensive and, like the CT, only provides blind navigation between scans, requires multiple iterations for navigation and requires the staff to leave the room. In addition, a CT-based imaging system is extremely costly, and in many cases not available in the same location as the location where a procedure is carried out.


Hence, an imaging technology, which uses standard fluoroscope devices, to reconstruct local three-dimensional volume in order to visualize and facilitate navigation to in-vivo targets, and to small soft-tissue objects in particular, has been introduced: US Patent Application No. 2017/035379 to Weingarten et al. entitled SYSTEMS AND METHODS FOR LOCAL THREE DIMENSIONAL VOLUME RECONSTRUCTION USING A STANDARD FLUOROSCOPE, the contents of which are incorporated herein by reference, US Patent Application No. 2017/035380 to Barak et al. entitled SYSTEM AND METHOD FOR NAVIGATING TO TARGET AND PERFORMING PROCEDURE ON TARGET UTILIZING FLUOROSCOPIC-BASED LOCAL THREE DIMENSIONAL VOLUME RECONSTRUCTION, the contents of which are incorporated herein by reference and U.S. patent application Ser. No. 15/892,053 to Weingarten et al. entitled SYSTEMS AND METHODS FOR LOCAL THREE DIMENSIONAL VOLUME RECONSTRUCTION USING A STANDARD FLUOROSCOPE, the contents of which are incorporated herein by reference.


In general, according to the systems and methods disclosed in the above-mentioned patent applications, a standard fluoroscope c-arm can be rotated, e.g., about 30 degrees, around a patient during a medical procedure, and a fluoroscopic 3D reconstruction of the region of interest is generated by a specialized software algorithm. The user can then scroll through the image slices of the fluoroscopic 3D reconstruction using the software interface to identify the target (e.g., a lesion) and mark it.


Such quick generation of a 3D reconstruction of a region of interest can provide real-time three-dimensional imaging of the target area. Real-time imaging of the target and of medical devices positioned in its area may benefit numerous interventional procedures, such as biopsy and ablation procedures in various organs, vascular interventions and orthopedic surgeries. For example, where navigational bronchoscopy is concerned, the aim may be to obtain accurate information about the position of a biopsy catheter relative to a target lesion.


As another example, minimally invasive procedures, such as laparoscopy procedures, including robotic-assisted surgery, may employ intraoperative fluoroscopy to increase visualization, e.g., for guidance and lesion locating, and to prevent unnecessary injury and complications. Employing the above-mentioned systems and methods for real-time reconstruction of fluoroscopic three-dimensional imaging of a target area and for navigation based on the reconstruction may benefit such procedures as well.


Still, it may not be an easy task to accurately identify and mark a target in the fluoroscopic 3D reconstruction, in particular when the target is a small soft-tissue object. Thus, there is a need for systems and methods for facilitating the identification and marking of a target in fluoroscopic image data, and in a fluoroscopic 3D reconstruction in particular, to consequently facilitate navigation to the target and improve the yield of the pertinent medical procedures.


SUMMARY

There is provided in accordance with the present disclosure, a system for facilitating identification and marking of a target in a fluoroscopic image of a body region of a patient, the system comprising: (i) one or more storage devices having stored thereon instructions for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target; and generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the virtual fluoroscopy image includes the target and the marking of the target; (ii) at least one hardware processor configured to execute said instructions; and (iii) a display configured to display to a user the virtual fluoroscopy image simultaneously with the fluoroscopic 3D reconstruction.


There is further provided in accordance with the present disclosure, a system for facilitating identification and marking of a target in a fluoroscopic image of a body region of a patient, the system comprising: (i) one or more storage devices having stored thereon instructions for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target; and generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the virtual fluoroscopy image includes the target and the marking of the target; (ii) at least one hardware processor configured to execute said instructions; and (iii) a display configured to display to a user the virtual fluoroscopy image and the fluoroscopic 3D reconstruction.


There is further provided in accordance with the present disclosure, a method for identifying and marking a target in an image of a body region of a patient, the method comprising using at least one hardware processor for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target; generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; and displaying to a user the at least one virtual fluoroscopy image simultaneously with the fluoroscopic 3D reconstruction on a display, thereby facilitating the identification of the target in the fluoroscopic 3D reconstruction by the user.


There is further provided in accordance with the present disclosure, a method for identifying and marking a target in an image of a body region of a patient, the method comprising using at least one hardware processor for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target; generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; and displaying to a user the at least one virtual fluoroscopy image and the fluoroscopic 3D reconstruction on a display, thereby facilitating the identification of the target in the fluoroscopic 3D reconstruction by the user.


There is further provided in accordance with the present disclosure, a system for navigating to a target area within a patient's body during a medical procedure using real-time two-dimensional fluoroscopic images, the system comprising: a medical device configured to be navigated to the target area; a fluoroscopic imaging device configured to acquire a sequence of 2D fluoroscopic images of the target area about a plurality of angles relative to the target area, while the medical device is positioned in the target area; and a computing device configured to: receive a pre-operative CT scan of the target area, wherein the pre-operative CT scan includes a marking of the target; generate at least one virtual fluoroscopy image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; generate a three-dimensional reconstruction of the target area based on the acquired sequence of 2D fluoroscopic images; display to a user the at least one virtual fluoroscopy image and the fluoroscopic 3D reconstruction simultaneously; receive a selection of the target from the fluoroscopic 3D reconstruction via the user; receive a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device with respect to the target based on the selections of the target and the medical device.


There is further provided in accordance with the present disclosure, a system for navigating to a target area within a patient's body during a medical procedure using real-time two-dimensional fluoroscopic images, the system comprising: a medical device configured to be navigated to the target area; a fluoroscopic imaging device configured to acquire a sequence of 2D fluoroscopic images of the target area about a plurality of angles relative to the target area, while the medical device is positioned in the target area; and a computing device configured to: receive a pre-operative CT scan of the target area, wherein the pre-operative CT scan includes a marking of the target; generate at least one virtual fluoroscopy image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; generate a three-dimensional reconstruction of the target area based on the acquired sequence of 2D fluoroscopic images; display to a user the at least one virtual fluoroscopy image and the fluoroscopic 3D reconstruction; receive a selection of the target from the fluoroscopic 3D reconstruction via the user; receive a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device with respect to the target based on the selections of the target and the medical device.


There is further provided in accordance with the present disclosure, a method for navigating to a target area within a patient's body during a medical procedure using real-time two-dimensional fluoroscopic images, the method comprising using at least one hardware processor for: receiving a pre-operative CT scan of the target area, wherein the pre-operative CT scan includes a marking of the target; generating at least one virtual fluoroscopy image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; receiving a sequence of 2D fluoroscopic images of the target area acquired in real-time about a plurality of angles relative to the target area, while a medical device is positioned in the target area; generating a three-dimensional reconstruction of the target area based on the sequence of 2D fluoroscopic images; displaying to a user the at least one virtual fluoroscopy image and the fluoroscopic 3D reconstruction simultaneously; receiving a selection of the target from the fluoroscopic 3D reconstruction via the user; receiving a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determining an offset of the medical device with respect to the target based on the selections of the target and the medical device.


There is further provided in accordance with the present disclosure, a method for navigating to a target area within a patient's body during a medical procedure using real-time two-dimensional fluoroscopic images, the method comprising using at least one hardware processor for: receiving a pre-operative CT scan of the target area, wherein the pre-operative CT scan includes a marking of the target; generating at least one virtual fluoroscopy image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; receiving a sequence of 2D fluoroscopic images of the target area acquired in real-time about a plurality of angles relative to the target area, while a medical device is positioned in the target area; generating a three-dimensional reconstruction of the target area based on the sequence of 2D fluoroscopic images; displaying to a user the at least one virtual fluoroscopy image and the fluoroscopic 3D reconstruction; receiving a selection of the target from the fluoroscopic 3D reconstruction via the user; receiving a selection of the medical device from the three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determining an offset of the medical device with respect to the target based on the selections of the target and the medical device.


There is further provided in accordance with the present disclosure, a computer program product comprising a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to: receive a pre-operative CT scan of the target area, wherein the pre-operative CT scan includes a marking of the target; generate at least one virtual fluoroscopy image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; receive a sequence of 2D fluoroscopic images of the target area acquired in real-time about a plurality of angles relative to the target area, while a medical device is positioned in the target area; generate a fluoroscopic three-dimensional reconstruction of the target area based on the sequence of 2D fluoroscopic images; display to a user the at least one virtual fluoroscopy image and the fluoroscopic three-dimensional reconstruction simultaneously; receive a selection of the target from the fluoroscopic three-dimensional reconstruction via the user; receive a selection of the medical device from the fluoroscopic three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device with respect to the target based on the selections of the target and the medical device.


There is further provided in accordance with the present disclosure, a computer program product comprising a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to: receive a pre-operative CT scan of the target area, wherein the pre-operative CT scan includes a marking of the target; generate at least one virtual fluoroscopy image based on the pre-operative CT scan, wherein the at least one virtual fluoroscopy image includes the target and the marking of the target; receive a sequence of 2D fluoroscopic images of the target area acquired in real-time about a plurality of angles relative to the target area, while a medical device is positioned in the target area; generate a fluoroscopic three-dimensional reconstruction of the target area based on the sequence of 2D fluoroscopic images; display to a user the at least one virtual fluoroscopy image and the fluoroscopic three-dimensional reconstruction; receive a selection of the target from the fluoroscopic three-dimensional reconstruction via the user; receive a selection of the medical device from the fluoroscopic three-dimensional reconstruction or the sequence of 2D fluoroscopic images; and determine an offset of the medical device with respect to the target based on the selections of the target and the medical device.


In another aspect of the present disclosure, the one or more storage devices have stored thereon further instructions for directing the user to identify and mark the target in the fluoroscopic 3D reconstruction.


In another aspect of the present disclosure, the one or more storage devices have stored thereon further instructions for directing the user to identify and mark the target in the fluoroscopic 3D reconstruction while using the virtual fluoroscopy image as a reference.


In another aspect of the present disclosure, the one or more storage devices have stored thereon further instructions for directing the user to identify and mark the target in two fluoroscopic slice images of the fluoroscopic 3D reconstruction captured at two different angles.


In another aspect of the present disclosure, the generating of the at least one virtual fluoroscopy image is based on Digitally Reconstructed Radiograph techniques.


In another aspect of the present disclosure, the generating of the at least one virtual fluoroscopy image comprises: generating virtual fluoroscope poses around the target by simulating a fluoroscope trajectory while scanning the target; generating virtual 2D fluoroscopic images by projecting the CT scan volume according to the virtual fluoroscope poses; generating a virtual fluoroscopic 3D reconstruction based on the virtual 2D fluoroscopic images; and selecting a slice image from the virtual fluoroscopic 3D reconstruction which comprises the marking of the target.


In another aspect of the present disclosure, the target is a soft-tissue target.


In another aspect of the present disclosure, the receiving of the fluoroscopic 3D reconstruction of the body region comprises: receiving a sequence of 2D fluoroscopic images of the body region acquired about a plurality of angles relative to the body region and generating the fluoroscopic 3D reconstruction of the body region based on the sequence of 2D fluoroscopic images.


In another aspect of the present disclosure, the method further comprises using said at least one hardware processor for directing the user to identify and mark the target in the fluoroscopic 3D reconstruction.


In another aspect of the present disclosure, the method further comprises using said at least one hardware processor for directing the user to identify and mark the target in the fluoroscopic 3D reconstruction while using the at least one virtual fluoroscopy image as a reference.


In another aspect of the present disclosure, the method further comprises using said at least one hardware processor for instructing the user to identify and mark the target in two fluoroscopic slice images of the fluoroscopic 3D reconstruction captured at two different angles.


In another aspect of the present disclosure, the system further comprises: a tracking system configured to provide data indicating the location of the medical device within the patient's body; and a display, wherein the computing device is further configured to: determine the location of the medical device based on the data provided by the tracking system; display the target area and the location of the medical device with respect to the target on the display; and correct the display of the location of the medical device with respect to the target based on the determined offset between the medical device and the target.


In another aspect of the present disclosure, the computing device is further configured to: generate a 3D rendering of the target area based on the pre-operative CT scan, wherein the display of the target area comprises displaying the 3D rendering; and register the tracking system to the 3D rendering, wherein the correction of the location of the medical device with respect to the target comprises updating the registration between the tracking system and the 3D rendering.


In another aspect of the present disclosure, the tracking system comprises: a sensor; and an electromagnetic field generator configured to generate an electromagnetic field for determining a location of the sensor, wherein the medical device comprises a catheter guide assembly having the sensor disposed thereon, and the determining of the location of the medical device comprises determining the location of the sensor based on the generated electromagnetic field.


In another aspect of the present disclosure, the target area comprises at least a portion of the lungs and the medical device is configured to be navigated to the target area through the airways' luminal network.


In another aspect of the present disclosure, the computing device is configured to receive the selection of the medical device by automatically detecting a portion of the medical device in the acquired sequence of 2D fluoroscopic images or three-dimensional reconstruction and receiving a user command either accepting or rejecting the detection.


In another aspect of the present disclosure, the computing device is further configured to estimate the pose of the fluoroscopic imaging device, while the fluoroscopic imaging device acquires each of at least a plurality of images of the sequence of 2D fluoroscopic images, and wherein the generating of the three-dimensional reconstruction of the target area is based on the pose estimation of the fluoroscopic imaging device.


In another aspect of the present disclosure, the system further comprises a structure of markers, wherein the fluoroscopic imaging device is configured to acquire a sequence of 2D fluoroscopic images of the target area and of the structure of markers, and wherein the estimation of the pose of the fluoroscopic imaging device while acquiring each image of the at least plurality of images is based on detection of a possible and most probable projection of the structure of markers, as a whole, on each image.


In another aspect of the present disclosure, the computing device is further configured to direct the user to identify and mark the target in the fluoroscopic 3D reconstruction while using the at least one virtual fluoroscopy image as a reference.


In another aspect of the present disclosure, the method further comprises using said at least one hardware processor for: determining the location of the medical device within the patient's body based on data provided by a tracking system; displaying the target area and the location of the medical device with respect to the target on a display; and correcting the display of the location of the medical device with respect to the target based on the determined offset between the medical device and the target.


In another aspect of the present disclosure, the method further comprises using said at least one hardware processor for: generating a 3D rendering of the target area based on the pre-operative CT scan, wherein the displaying of the target area comprises displaying the 3D rendering; and registering the tracking system to the 3D rendering, wherein the correcting of the location of the medical device with respect to the target comprises updating the registration between the tracking system and the 3D rendering.


In another aspect of the present disclosure, the receiving of the selection of the medical device comprises automatically detecting a portion of the medical device in the sequence of 2D fluoroscopic images or three-dimensional reconstruction and receiving a user command either accepting or rejecting the detection.


In another aspect of the present disclosure, the method further comprises using said at least one hardware processor for estimating the pose of the fluoroscopic imaging device while acquiring each of at least a plurality of images of the sequence of 2D fluoroscopic images, wherein the generating of the three-dimensional reconstruction of the target area is based on the pose estimation of the fluoroscopic imaging device.


In another aspect of the present disclosure, a structure of markers is placed with respect to the patient and the fluoroscopic imaging device such that each image of the at least plurality of images comprises a projection of at least a portion of the structure of markers, and wherein the estimating of the pose of the fluoroscopic imaging device while acquiring each image of the at least plurality of images is based on detection of a possible and most probable projection of the structure of markers as a whole on each image.


In another aspect of the present disclosure, the non-transitory computer-readable storage medium has further program code executable by the at least one hardware processor to: determine the location of the medical device within the patient's body based on data provided by a tracking system; display the target area and the location of the medical device with respect to the target on a display; and correct the display of the location of the medical device with respect to the target based on the determined offset between the medical device and the target.


Any of the above aspects and embodiments of the present disclosure may be combined without departing from the scope of the present disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects and embodiments of the present disclosure are described hereinbelow with references to the drawings, wherein:



FIG. 1 is a flow chart of a method for identifying and marking a target in a fluoroscopic 3D reconstruction in accordance with the present disclosure;



FIG. 2 is a schematic diagram of a system configured for use with the method of FIG. 1;



FIG. 3A is an exemplary screen shot showing a display of slice images of a fluoroscopic 3D reconstruction in accordance with the present disclosure;



FIG. 3B is an exemplary screen shot showing a virtual fluoroscopy image presented simultaneously with slice images of a fluoroscopic 3D reconstruction in accordance with the present disclosure;



FIG. 3C is an exemplary screen shot showing a display of a fluoroscopic 3D reconstruction in accordance with the present disclosure;



FIG. 4 is a flow chart of a method for navigating to a target using real-time two-dimensional fluoroscopic images in accordance with the present disclosure; and



FIG. 5 is a perspective view of one illustrative embodiment of an exemplary system for navigating to a soft-tissue target via the airways network in accordance with the method of FIG. 4.





DETAILED DESCRIPTION

The term “target”, as referred to herein, may relate to any element, biological or artificial, or to a region of interest in a patient's body, such as a tissue (including soft tissue and bone tissue), an organ, an implant or a fiducial marker.


The term “target area”, as referred to herein, may relate to the target and at least a portion of its surrounding area. The term “target area” and the term “body region” may be used interchangeably when the term “body region” refers to the body region in which the target is located. Alternatively or in addition, the term “target area” may also refer to a portion of the body region in which the target is located, all according to the context.


The terms “and”, “or” and “and/or” may be used interchangeably, while each term may incorporate the others, all according to the term's context.


The term “medical device”, as referred to herein, may include, without limitation, optical systems, ultrasound probes, marker placement tools, biopsy tools, ablation tools (e.g., microwave ablation devices), laser probes, cryogenic probes, sensor probes, and aspirating needles.


The terms “fluoroscopic image” and “fluoroscopic images” may refer to one or more 2D fluoroscopic images and/or to one or more slice images of a fluoroscopic 3D reconstruction, all in accordance with the term's context.


The terms “virtual fluoroscopic image” and “virtual fluoroscopic images” may refer to one or more virtual 2D fluoroscopic images and/or to one or more virtual fluoroscopy slice images of a virtual fluoroscopic 3D reconstruction or any other 3D reconstruction, all in accordance with the term's context.


The present disclosure is directed to systems, methods and computer program products for facilitating the identification and marking of a target by a user in real-time fluoroscopic images of a body region of interest generated via a standard fluoroscope. Such real-time fluoroscopic images may include two-dimensional images and/or slice images of a three-dimensional reconstruction. In particular, the identification and marking of the target in the real-time fluoroscopic data may be facilitated by using synthetic or virtual fluoroscopic data, which includes a marking or an indication of the target, as a reference. The virtual fluoroscopic data may be generated from previously acquired volumetric data, preferably such that it imitates real fluoroscopic data as closely as possible. Typically, the target is better shown in the imaging modality of the previously acquired volumetric data than in the real-time fluoroscopic data.


The present disclosure is further directed to systems and methods for facilitating the navigation of a medical device to a target and/or its area using real-time two-dimensional fluoroscopic images of the target area. The navigation is facilitated by using local three-dimensional volumetric data, in which small soft-tissue objects are visible, constructed from a sequence of fluoroscopic images captured by a standard fluoroscopic imaging device available in most procedure rooms. The constructed fluoroscopic-based local three-dimensional volumetric data may be used to correct a location of a medical device with respect to a target, or may be locally registered with previously acquired volumetric data. In general, the location of the medical device may be determined by a tracking system. The tracking system may be registered with the previously acquired volumetric data. A local registration of the real-time three-dimensional fluoroscopic data to the previously acquired volumetric data may then be performed via the tracking system. Such real-time data may be used, for example, for guidance, navigation planning, improved navigation accuracy, navigation confirmation, and treatment confirmation.


Reference is now made to FIG. 1, which is a flow chart of a method for identifying and marking a target in a 3D fluoroscopic reconstruction in accordance with the present disclosure. In a step 100, a CT scan and a fluoroscopic 3D reconstruction of a body region of a patient may be received. The CT scan may include a marking or an indication of a target located in the patient's body region. Alternatively, a qualified medical professional may be directed to identify and mark the target in the CT scan. In some embodiments, the target may be a soft-tissue target, such as a lesion. In some embodiments, the imaged body region may include at least a portion of the lungs. In some embodiments, the 3D reconstruction may be displayed to the user. In some embodiments, the 3D reconstruction may be displayed such that the user may scroll through its different slice images. Reference is now made to FIG. 3A, which is an exemplary screen shot 300 showing a display of slice images of a fluoroscopic 3D reconstruction in accordance with the present disclosure. Screen shot 300 includes a slice image 310, a scroll bar 320 and an indicator 330. Scroll bar 320 allows a user to scroll through the slice images of the fluoroscopic 3D reconstruction. Indicator 330 indicates the relative location of the slice image currently displayed, e.g., slice image 310, within the slice images constituting the fluoroscopic 3D reconstruction.


In some embodiments, the receiving of the fluoroscopic 3D reconstruction of the body region may include receiving a sequence of fluoroscopic images of the body region and generating the fluoroscopic 3D reconstruction of the body region based on at least a portion of the fluoroscopic images. In some embodiments, the method may further include directing a user to acquire the sequence of fluoroscopic images by manually sweeping the fluoroscope. In some embodiments, the method may further include automatically acquiring the sequence of fluoroscopic images. The fluoroscopic images may be acquired by a standard fluoroscope, in a continuous manner and about a plurality of angles relative to the body region. The fluoroscope may be swept manually, i.e., by a user, or automatically. For example, the fluoroscope may be swept along an angle of 20 to 45 degrees. In some embodiments, the fluoroscope may be swept along an angle of 30±5 degrees.


In some embodiments, the fluoroscopic 3D reconstruction may be generated based on tomosynthesis methods, and/or according to the systems and methods disclosed in US Patent Application No. 2017/035379 and U.S. patent application Ser. No. 15/892,053, as mentioned above and incorporated herein by reference. The CT scan may be generated via methods and systems known in the art. The CT scan is a pre-operative CT scan, i.e., a scan generated previously (not in real time), prior to the medical procedure for which the identification and marking of the target may be required.
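
As a rough, non-authoritative illustration of this kind of limited-angle reconstruction, the following Python/NumPy sketch combines a short sweep of 2D projections into a coarse local volume by unfiltered backprojection. The parallel-beam geometry, the function name and the uniform angle sampling are simplifying assumptions made only to convey the idea; the actual reconstruction described in the referenced applications uses calibrated cone-beam geometry and pose estimation.

```python
import numpy as np

def backproject_sweep(images, angles_deg, vol_shape):
    """Combine a short sweep of 2D projections into a coarse local volume by
    unfiltered backprojection (parallel-beam simplification; illustrative only).
    images: list of 2D arrays (rows ~ patient's long axis, cols ~ lateral).
    angles_deg: view angle of each image in the sweep, e.g., -15..+15 degrees."""
    nz, ny, nx = vol_shape
    vol = np.zeros(vol_shape, dtype=np.float32)
    zs = np.arange(nz) - nz / 2.0          # depth axis of the local volume
    xs = np.arange(nx) - nx / 2.0          # lateral axis of the local volume
    z_grid, x_grid = np.meshgrid(zs, xs, indexing="ij")
    for img, ang in zip(images, angles_deg):
        theta = np.deg2rad(ang)
        # Detector column that each (z, x) voxel projects onto at this angle.
        u = x_grid * np.cos(theta) + z_grid * np.sin(theta) + img.shape[1] / 2.0
        u = np.clip(np.round(u).astype(int), 0, img.shape[1] - 1)
        for iy in range(ny):               # image rows map directly to the y axis
            row = min(iy, img.shape[0] - 1)
            vol[:, iy, :] += img[row, u]
    return vol / max(len(images), 1)

# Usage sketch: slices of `vol` along its first axis play the role of the
# fluoroscopic 3D reconstruction slice images that the user scrolls through.
```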


In a step 110, at least one virtual fluoroscopy image may be generated based on the CT scan of the patient. The virtual fluoroscopy image can then include the target and the marking of the target, as indicated in the CT scan. The aim is to generate an image of the target, which includes a relatively accurate indication of the target, and which resembles the fluoroscopic type of images. A user may then use the indication of the target in the synthetic image to identify and mark the target in the real-time fluoroscopic volume (e.g., by identifying the target in one or more slice images). In some embodiments, the virtual fluoroscopy image may be of a type of 2D fluoroscopic image, e.g., a virtual 2D fluoroscopic image. In some embodiments, the virtual fluoroscopy image may be of a type of fluoroscopic 3D reconstruction slice image, e.g., a virtual slice image.


In some embodiments, the virtual 2D fluoroscopic image may be generated based on Digitally Reconstructed Radiograph techniques.
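
A minimal sketch of the DRR idea is shown below, assuming a parallel projection and a crude Hounsfield-to-attenuation mapping rather than the fluoroscope's true cone-beam geometry; the function name and constants are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import rotate

def simple_drr(ct_hu, view_angle_deg):
    """Toy Digitally Reconstructed Radiograph: rotate the CT volume to the
    requested view and integrate attenuation along the projection axis.
    Parallel projection and a rough HU-to-attenuation mapping are used here
    purely for illustration."""
    # Rotate about the volume's second axis (stand-in for the patient's long axis).
    rotated = rotate(ct_hu, view_angle_deg, axes=(0, 2), reshape=False, order=1)
    # Very rough conversion from Hounsfield units to relative attenuation.
    mu = np.clip(rotated + 1000.0, 0.0, None) / 1000.0
    raysum = mu.sum(axis=0)                        # line integrals (parallel beams)
    drr = np.exp(-raysum / (raysum.max() + 1e-6))  # Beer-Lambert-style contrast
    lo, hi = drr.min(), drr.max()
    return ((drr - lo) / (hi - lo + 1e-6) * 255).astype(np.uint8)
```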


In some embodiments, the virtual fluoroscopy slice image may be generated according to the following steps. In a first step, the received CT volume is aligned with the fluoroscopic 3D reconstruction. In a second step, an estimate of the pose of the fluoroscopic device at a selected position with respect to the target or patient, e.g., the AP (anteroposterior) position, while capturing the set of fluoroscopic images used to generate the fluoroscopic 3D reconstruction, is received or calculated. In a third step, a slice or slices of the CT scan volume that are perpendicular to the selected position and that include the target are generated. In a fourth step, the CT slice or slices are projected according to the estimated fluoroscope pose to obtain a virtual fluoroscopy slice image.


In some embodiments, generation of a virtual fluoroscopy slice image of the target area may include the following steps. In a first step, virtual fluoroscope poses around the target may be obtained. In some embodiments, the virtual fluoroscope poses may be generated by simulating a fluoroscope trajectory while the fluoroscope scans the target. In some embodiments, the method may further include the generation of the 3D fluoroscopic reconstruction, as described with respect to step 430 of FIG. 4. The estimated poses of the fluoroscopic device while capturing the sequence of fluoroscopic images used to generate the fluoroscopic 3D reconstruction may then be utilized. In a second step, virtual fluoroscopic images may be generated by projecting the CT scan volume according to the virtual fluoroscope poses. In a third step, a virtual fluoroscopic 3D reconstruction may be generated based on the virtual fluoroscopic images. In some embodiments, the virtual fluoroscopic 3D reconstruction may be generated using the same reconstruction method used to generate the fluoroscopic 3D volume, with adaptations. The resulting virtual fluoroscopic volume may then look more like the real fluoroscopic volume. For example, the methods of fluoroscopic 3D reconstruction disclosed in US Patent Application No. 2017/035379, US Patent Application No. 2017/035380 and U.S. patent application Ser. No. 15/892,053, as detailed above and herein incorporated by reference, may be used. In a fourth step, a slice image which includes the indication of the target may be selected from the virtual fluoroscopic 3D reconstruction.
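
The following sketch ties the four steps together, assuming hypothetical `project` and `reconstruct` callables (for example, the DRR and backprojection sketches above) and an illustrative 30-degree sweep; none of the parameter values or names are taken from the disclosure.

```python
import numpy as np

def simulate_sweep_poses(center_angle_deg=0.0, sweep_deg=30.0, n_views=60):
    """Step 1: simulate a fluoroscope trajectory as a short arc of view angles,
    roughly mimicking the C-arm sweep used for the real acquisition."""
    half = sweep_deg / 2.0
    return np.linspace(center_angle_deg - half, center_angle_deg + half, n_views)

def virtual_slice_with_target(ct_volume, target_voxel, project, reconstruct):
    """Steps 2-4: project the CT at the simulated poses, reconstruct a virtual
    fluoroscopic volume from those projections, and pick the slice containing
    the marked target so it can be shown to the user as a reference."""
    angles = simulate_sweep_poses()
    virtual_images = [project(ct_volume, a) for a in angles]               # step 2
    virtual_volume = reconstruct(virtual_images, angles, ct_volume.shape)  # step 3
    tz, ty, tx = target_voxel
    return virtual_volume[tz], (ty, tx)                                    # step 4
```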


In some embodiments, when marking of the target in a slice image of a fluoroscopic 3D reconstruction is desired, generating and using a virtual slice image as a reference may be more advantageous. In some embodiments, when marking of the target in a fluoroscopic 2D image is desired, generating and using a virtual fluoroscopic 2D image may be more advantageous.


In a step 120, the virtual fluoroscopy image and the fluoroscopic 3D reconstruction may be displayed to a user. The indication of the target in the virtual fluoroscopy image may then be used as a reference for identifying and marking the target in slice images of the fluoroscopic 3D reconstruction, thus facilitating the identification and marking of the target in the fluoroscopic 3D reconstruction. The identification and marking of the target performed by the user may then be more accurate. A user may use the virtual fluoroscopy image as a reference prior to the identification and marking of the target in the real-time fluoroscopic images and/or may use it after such identification and marking.


Various workflows and displays may be used to identify and mark the target while using virtual fluoroscopic data as a reference according to the present disclosure. Such displays are exemplified in FIGS. 3B and 3C. Reference is now made to FIG. 3B, which is an exemplary screen shot 350 showing a virtual fluoroscopy image 360 displayed simultaneously with fluoroscopic slice images 310a and 310b of a fluoroscopic 3D reconstruction in accordance with the present disclosure. Screen shot 350 includes a virtual fluoroscopy image 360, fluoroscopic slice images 310a and 310b, scroll bar 320 and indicator 330. Virtual fluoroscopy image 360 includes a circular marking 370 of a target. Fluoroscopic slice images 310a and 310b include circular markings 380a and 380b of the target, respectively, performed by a user. In some embodiments, the user may visually compare fluoroscopic slice images 310a and 310b and markings 380a and 380b with virtual fluoroscopy image 360 and marking 370 to verify markings 380a and 380b. In some embodiments, the user may use virtual fluoroscopy image 360 and marking 370 as a reference to mark fluoroscopic slice images 310a and 310b. In this specific example, two fluoroscopic slice images are displayed simultaneously. However, in other embodiments, only one fluoroscopic slice image, or more than two, may be displayed. In this specific example, the virtual fluoroscopy image is displayed in the center of the screen and the fluoroscopic slice images are displayed at the bottom of the screen. However, any other display arrangement may be used.


Reference is now made to FIG. 3C, which is an exemplary screen shot 355 showing a display of at least a portion of a 3D fluoroscopic reconstruction 365. Screen shot 355 includes the 3D reconstruction image 365, which includes at least a portion (e.g., a slice) of the 3D fluoroscopic reconstruction, delimited areas 315a and 315b, scroll bar 325, indicator 335 and button 375. Delimited areas 315a and 315b are areas designated for presenting the slice images of the portion of the 3D reconstruction shown in 3D reconstruction image 365 that were selected by the user (e.g., selected by marking the target in these slice images). Button 375 is captioned “Planned Target”. In some embodiments, once the user presses or clicks button 375, he or she is presented with at least one virtual fluoroscopy image showing the target and a marking of it to be used as a reference. Once button 375 is pressed, the display may change. In some embodiments, the display presented once button 375 is pressed may include virtual fluoroscopy images only. In some embodiments, the display presented once button 375 is pressed may include additional images, including slice images of the 3D reconstruction. Scroll bar 325 and indicator 335 may be used by the user to scroll through slices of at least the portion of the 3D reconstruction presented in 3D reconstruction image 365.


In some embodiments, the virtual fluoroscopy image and the fluoroscopic 3D reconstruction (e.g., a selected slice of the fluoroscopic 3D reconstruction) may be displayed to a user simultaneously. In some embodiments, the virtual fluoroscopy image and the fluoroscopic 3D reconstruction may be displayed in a non-simultaneous manner. For example, the virtual fluoroscopy image may be displayed in a separate alternative screen or in a pop-up window.


In an optional step 130, the user may be directed to identify and mark the target in the fluoroscopic 3D reconstruction. In some embodiments, the user may be specifically directed to use the virtual fluoroscopy image/s as a reference. In some embodiments, the user may be instructed to identify and mark the target in two fluoroscopic slice images of the fluoroscopic 3D reconstruction captured at two different angles. Marking the target in two fluoroscopic slice images may be required when the slice width is relatively large, such that additional data is needed to accurately determine the location of the target in the fluoroscopic 3D reconstruction. In some embodiments, the user may need or may be required to only identify the target and may be directed accordingly. In some embodiments, the target may be automatically identified in the fluoroscopic 3D reconstruction by a dedicated algorithm. The user may then be required to confirm, and optionally amend, the automatic marking using the virtual fluoroscopy image as a reference.
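
To illustrate why marks made in two slice images captured at different angles are sufficient to pin down the target location, the sketch below triangulates a 3D position from two (row, column) marks under a parallel-beam simplification; the coordinate conventions and the function name are assumptions made only for illustration.

```python
import numpy as np

def target_from_two_marks(mark1, angle1_deg, mark2, angle2_deg):
    """Triangulate a target position from two marks made at different view
    angles. Each mark is (row, col): the column constrains
    x*cos(theta) + z*sin(theta); the row gives the position along the
    rotation axis. The two angles must differ for the system to be solvable."""
    (v1, u1), (v2, u2) = mark1, mark2
    t1, t2 = np.deg2rad(angle1_deg), np.deg2rad(angle2_deg)
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    b = np.array([u1, u2], dtype=float)
    x, z = np.linalg.solve(A, b)      # lateral and depth coordinates
    y = 0.5 * (v1 + v2)               # position along the rotation axis
    return np.array([x, y, z])

# Example (hypothetical marks at +15 and -15 degrees):
# target_from_two_marks((120, 64.0), 15.0, (120, 55.0), -15.0)
```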


In some embodiments, the identification and marking of a target may be performed in one or more two-dimensional fluoroscopic images, i.e., fluoroscopic images as originally captured. One or more fluoroscopic images may then be received and displayed to the user instead of the fluoroscopic 3D reconstruction. The identification and marking of the target by a user may then be performed with respect to the received one or more fluoroscopic images.


In some embodiments, the set of two-dimensional fluoroscopic images (e.g., as originally captured), which was used to construct the fluoroscopic 3D reconstruction, may be additionally received (e.g., in addition to the 3D fluoroscopic reconstruction). The fluoroscopic 3D reconstruction, the corresponding set of two-dimensional fluoroscopic images and the virtual fluoroscopy image may be displayed to the user. The user may then select whether to identify and mark the target in one or more slice images of the fluoroscopic 3D reconstruction, in one or more of the two-dimensional fluoroscopic images, or in both.


Reference is now made to FIG. 2, which is a schematic diagram of a system 200 configured for use with the method of FIG. 1. System 200 may include a workstation 80, and optionally a fluoroscopic imaging device or fluoroscope 215. In some embodiments, workstation 80 may be coupled with fluoroscope 215, directly or indirectly, e.g., by wireless communication. Workstation 80 may include a memory or storage device 202, a processor 204, a display 206 and an input device 210. Processor or hardware processor 204 may include one or more hardware processors. Workstation 80 may optionally include an output module 212 and a network interface 208. Memory 202 may store an application 81 and image data 214. Application 81 may include instructions executable by processor 204, inter alia, for executing the method steps of FIG. 1. Application 81 may further include a user interface 216. Image data 214 may include the CT scan, the fluoroscopic 3D reconstructions of the target area and/or any other fluoroscopic image data and/or the generated one or more virtual fluoroscopy images. Processor 204 may be coupled with memory 202, display 206, input device 210, output module 212, network interface 208 and fluoroscope 215. Workstation 80 may be a stationary computing device, such as a personal computer, or a portable computing device such as a tablet computer. Workstation 80 may embed a plurality of computer devices.


Memory 202 may include any non-transitory computer-readable storage media for storing data and/or software including instructions that are executable by processor 204 and which control the operation of workstation 80 and in some embodiments, may also control the operation of fluoroscope 215. Fluoroscope 215 may be used to capture a sequence of fluoroscopic images based on which the fluoroscopic 3D reconstruction is generated. In an embodiment, memory or storage device 202 may include one or more storage devices such as solid-state storage devices such as flash memory chips. Alternatively, or in addition to the one or more solid-state storage devices, memory 202 may include one or more mass storage devices connected to the processor 204 through a mass storage controller (not shown) and a communications bus (not shown). Although the description of computer-readable media contained herein refers to a solid-state storage, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the processor 204. That is, computer readable storage media may include non-transitory, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media may include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information, and which may be accessed by workstation 80.


Application 81 may, when executed by processor 204, cause display 206 to present user interface 216. User interface 216 may be configured to present to the user the fluoroscopic 3D reconstruction and the generated virtual fluoroscopy image, as shown, for example, in FIGS. 3A and 3B. User interface 216 may be further configured to direct the user to identify and mark the target in the displayed fluoroscopic 3D reconstruction or any other fluoroscopic image data in accordance with the present disclosure.


Network interface 208 may be configured to connect to a network such as a local area network (LAN) consisting of a wired network and/or a wireless network, a wide area network (WAN), a wireless mobile network, a Bluetooth network, and/or the internet. Network interface 208 may be used to connect between workstation 80 and fluoroscope 215. Network interface 208 may be also used to receive image data 214. Input device 210 may be any device by means of which a user may interact with workstation 80, such as, for example, a mouse, keyboard, foot pedal, touch screen, and/or voice interface. Output module 212 may include any connectivity port or bus, such as, for example, parallel ports, serial ports, universal serial busses (USB), or any other similar connectivity port known to those skilled in the art.


Reference is now made to FIG. 4, which is a flow chart of a method for navigating to a target using real-time two-dimensional fluoroscopic images in accordance with the present disclosure. The method facilitates navigating to a target area within a patient's body during a medical procedure. The method utilizes real-time fluoroscopic-based three-dimensional volumetric data. The fluoroscopic three-dimensional volumetric data may be generated from two-dimensional fluoroscopic images.


In a step 400, a pre-operative CT scan of the target area may be received. The pre-operative CT scan may include a marking or indication of the target. Step 400 may be similar to step 100 of the method of FIG. 1.


In a step 410, one or more virtual fluoroscopy images may be generated based on the pre-operative CT scan. The virtual fluoroscopy images may include the target and the marking or indication of the target. Step 410 may be similar to step 110 of the method of FIG. 1.


In a step 420, a sequence of fluoroscopic images of the target area acquired in real time about a plurality of angles relative to the target area may be received. The sequence of images may be captured while a medical device is positioned in the target area. In some embodiments, the method may include further steps for directing a user to acquire the sequence of fluoroscopic images. In some embodiments, the method may include one or more further steps for automatically acquiring the sequence of fluoroscopic images.


In a step 430, a three-dimensional reconstruction of the target area may be generated based on the sequence of fluoroscopic images.


In some embodiments, the method further comprises one or more steps for estimating the pose of the fluoroscopic imaging device while acquiring each of the fluoroscopic images, or at least a plurality of them. The three-dimensional reconstruction of the target area may then be generated based on the pose estimation of the fluoroscopic imaging device.


In some embodiments, a structure of markers may be placed with respect to the patient and the fluoroscopic imaging device, such that each fluoroscopic image includes a projection of at least a portion of the structure of markers. The estimation of the pose of the fluoroscopic imaging device while acquiring each image may then be facilitated by the projections of the structure of markers on the fluoroscopic images. In some embodiments, the estimation may be based on detection of a possible and most probable projection of the structure of markers as a whole on each image.
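
A simplified sketch of such marker-based pose estimation is given below: candidate poses are scored by how well the projection of the whole marker structure matches the detected marker shadows, and the most probable candidate is kept. The single-angle parameterization, the grid search and the matching cost are illustrative assumptions, not the method of the referenced applications.

```python
import numpy as np

def project_marker_structure(markers_3d, angle_deg, detector_center):
    """Project the known 3D marker structure onto the detector for a candidate
    C-arm angle (parallel-beam simplification, illustrative geometry)."""
    t = np.deg2rad(angle_deg)
    u = markers_3d[:, 0] * np.cos(t) + markers_3d[:, 2] * np.sin(t)
    v = markers_3d[:, 1]
    return np.stack([v + detector_center[0], u + detector_center[1]], axis=1)

def estimate_view_angle(detected_2d, markers_3d, detector_center,
                        candidate_angles=np.arange(-45.0, 45.5, 0.5)):
    """Score each candidate pose by how well the projection of the whole marker
    structure matches the marker shadows detected in one fluoroscopic frame,
    and keep the most probable one. `detected_2d` is an (N, 2) array of
    detected marker centroids in image coordinates."""
    best_angle, best_cost = None, np.inf
    for ang in candidate_angles:
        proj = project_marker_structure(markers_3d, ang, detector_center)
        dists = np.linalg.norm(proj[:, None, :] - detected_2d[None, :, :], axis=2)
        cost = dists.min(axis=1).sum()   # nearest-detection distance per marker
        if cost < best_cost:
            best_angle, best_cost = ang, cost
    return best_angle, best_cost
```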


Exemplary systems and methods for constructing such fluoroscopic-based three-dimensional volumetric data are disclosed in the above commonly owned U.S. Patent Publication No. 2017/0035379, U.S. patent application Ser. No. 15/892,053 and U.S. Provisional Application Ser. No. 62/628,017, which are incorporated by reference.


In some embodiments, once the pose estimation process is complete, the projection of the structure of markers on the images may be removed by using well known methods. One such method is disclosed in commonly-owned U.S. Patent Application No. 62/628,028, entitled: “IMAGE RECONSTRUCTION SYSTEM AND METHOD”, filed on Feb. 8, 2018, to Alexandroni et al., the entire content of which is hereby incorporated by reference.


In a step 440, one or more virtual fluoroscopy images and the fluoroscopic 3D reconstruction may be displayed to a user. The display may be according to step 120 of the method of FIG. 1 and as exemplified in FIG. 3B. In some embodiments, one or more virtual fluoroscopy images and the fluoroscopic 3D reconstruction may be displayed to a user simultaneously. In some embodiments, the virtual fluoroscopy image and the fluoroscopic 3D reconstruction may be displayed in a non-simultaneous manner. For example, the virtual fluoroscopy image may be displayed in a separate screen or may be displayed, e.g., upon the user's request, instead of the display of the fluoroscopic 3D reconstruction.


In a step 450, a selection of the target from the fluoroscopic 3D reconstruction may be received from the user. In some embodiments, the user may be directed to identify and mark the target in the fluoroscopic 3D reconstruction while using the one or more virtual fluoroscopy images as a reference.


In a step 460, a selection of the medical device from the three-dimensional reconstruction or the sequence of fluoroscopic images may be received. In some embodiments, the receipt of the selection may include automatically detecting at least a portion of the medical device in the sequence of fluoroscopic images or the three-dimensional reconstruction. In some embodiments, a user command either accepting or rejecting the detection may also be received. In some embodiments, the selection may be received from the user. Exemplary automatic detection of a catheter in fluoroscopic images is disclosed in commonly-owned U.S. Provisional Application No. 62/627,911 to Weingarten et al., entitled "System And Method For Catheter Detection In Fluoroscopic Images And Updating Displayed Position Of Catheter", the contents of which are incorporated herein by reference.
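

Purely as an illustration of what automatic device detection might involve (the cited application describes the actual method), a radiopaque catheter typically appears as a dark, elongated region in a fluoroscopic frame. The crude sketch below thresholds the darkest pixels, keeps the largest connected component, and reports an extreme point of that component as a candidate distal tip; the percentile threshold and all names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def detect_catheter_tip(frame, percentile=2.0):
    """Very crude catheter detection in a single fluoroscopic frame.

    frame : 2D array where the radiopaque catheter appears darker than background.
    Returns (row, col) of a candidate distal tip, or None if nothing is found.
    """
    # Keep the darkest pixels as catheter candidates.
    mask = frame <= np.percentile(frame, percentile)
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    # The largest connected dark component is taken as the catheter.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    catheter = labels == (np.argmax(sizes) + 1)
    rows, cols = np.nonzero(catheter)
    # Report the lowest point of the component as a naive distal-tip guess.
    i = np.argmax(rows)
    return int(rows[i]), int(cols[i])
```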


In a step 470, an offset of the medical device with respect to the target may be determined. The determination of the offset may be based on the received selections of the target and the medical device.
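

In the simplest interpretation, the offset is the vector from the selected device position to the marked target position, expressed in physical units of the fluoroscopic 3D reconstruction. A minimal sketch with hypothetical names follows.

```python
import numpy as np

def device_to_target_offset(target_voxel, device_voxel, voxel_size_mm):
    """Offset of the target with respect to the medical device, in millimetres.

    target_voxel, device_voxel : (z, y, x) indices selected in the 3D reconstruction.
    voxel_size_mm              : physical voxel size along (z, y, x).
    """
    offset = (np.asarray(target_voxel, float) - np.asarray(device_voxel, float)) \
             * np.asarray(voxel_size_mm, float)
    return offset, float(np.linalg.norm(offset))

# Example: target 12 voxels away along x with 0.5 mm isotropic voxels.
offset, distance = device_to_target_offset((40, 60, 72), (40, 60, 60), (0.5, 0.5, 0.5))
print(offset, distance)   # [0. 0. 6.]  6.0
```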


In some embodiments, the method may further include a step for determining the location of the medical device within the patient's body based on data provided by a tracking system, such as an electromagnetic tracking system. In a further step, the target area and the location of the medical device with respect to the target may be displayed to the user on a display. In another step, the display of the location of the medical device with respect to the target may be corrected based on the determined offset between the medical device and the target.


In some embodiments, the method may further include a step for generating a 3D rendering of the target area based on the pre-operative CT scan. A display of the target area may then include a display of the 3D rendering. In another step, the tracking system may be registered with the 3D rendering. A correction of the location of the medical device with respect to the target based on the determined offset may then include the local updating of the registration between the tracking system and the 3D rendering in the target area. In some embodiments, the method may further include a step for registering the fluoroscopic 3D reconstruction to the tracking system. In another step and based on the above, a local registration between the fluoroscopic 3D reconstruction and the 3D rendering may be performed in the target area.
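

One possible way to apply such an offset locally, rather than globally, is to blend a translational correction into the existing registration with a weight that decays with distance from the target, so that the correction is full near the target and negligible elsewhere. The sketch below is only one illustrative interpretation of a local update; the Gaussian weighting, the decay radius, and the names are assumptions.

```python
import numpy as np

def locally_corrected_position(point, target, offset, radius_mm=30.0):
    """Apply a registration correction that fades out away from the target.

    point     : (3,) position reported by the tracking system, in model coordinates (mm).
    target    : (3,) target position in model coordinates (mm).
    offset    : (3,) correction vector determined from the fluoroscopic 3D reconstruction.
    radius_mm : distance over which the correction decays towards zero.
    """
    point = np.asarray(point, dtype=float)
    distance = np.linalg.norm(point - np.asarray(target, dtype=float))
    weight = np.exp(-(distance / radius_mm) ** 2)   # 1 at the target, ~0 far away
    return point + weight * np.asarray(offset, dtype=float)
```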


In some embodiments, the target may be a soft tissue target. In some embodiments, the target area may include at least a portion of the lungs, and the medical device may be configured to be navigated to the target area through the luminal network of the airways.


In some embodiments, the method may include receiving a selection of the target from one or more images of the sequence of fluoroscopic images, in addition to or instead of receiving a selection of the target from the fluoroscopic 3D reconstruction. The sequence of fluoroscopic images may then be displayed to the user in addition to, or instead of, the fluoroscopic 3D reconstruction, and the method of FIG. 4 should then be adapted accordingly.


A computer program product for navigating to a target using real-time two-dimensional fluoroscopic images is herein disclosed. The computer program product may include a non-transitory computer-readable storage medium having program code embodied therewith. The program code may be executable by at least one hardware processor to perform the steps of the method of FIG. 1 and/or FIG. 4.



FIG. 5 is a perspective view of one illustrative embodiment of an exemplary system for facilitating navigation to a soft-tissue target via the airways network in accordance with the method of FIG. 4. System 500 may be further configured to construct fluoroscopic-based three-dimensional volumetric data of the target area from 2D fluoroscopic images. System 500 may be further configured to facilitate approach of a medical device to the target area using Electromagnetic Navigation Bronchoscopy (ENB) and to determine the location of the medical device with respect to the target.


System 500 may be configured for reviewing CT image data to identify one or more targets, planning a pathway to an identified target (planning phase), navigating an extended working channel (EWC) 512 of a catheter assembly to a target (navigation phase) via a user interface, and confirming placement of EWC 512 relative to the target. One such EMN system is the ELECTROMAGNETIC NAVIGATION BRONCHOSCOPY® system currently sold by Medtronic PLC. The target may be tissue of interest identified by review of the CT image data during the planning phase. Following navigation, a medical device, such as a biopsy tool or other tool, may be inserted into EWC 512 to obtain a tissue sample from the tissue located at, or proximate to, the target.


As shown in FIG. 5, EWC 512 is part of a catheter guide assembly 540. In practice, EWC 512 is inserted into a bronchoscope 530 for access to a luminal network of the patient "P." Specifically, EWC 512 of catheter guide assembly 540 may be inserted into a working channel of bronchoscope 530 for navigation through a patient's luminal network. A locatable guide (LG) 532, including a sensor 544, is inserted into EWC 512 and locked into position such that sensor 544 extends a desired distance beyond the distal tip of EWC 512. The position and orientation of sensor 544 relative to the reference coordinate system, and thus the distal portion of EWC 512, within an electromagnetic field can be derived. Catheter guide assemblies 540 are currently marketed and sold by Medtronic PLC under the brand names SUPERDIMENSION® Procedure Kits or EDGE™ Procedure Kits, and are contemplated as useable with the present disclosure. For a more detailed description of catheter guide assemblies 540, reference is made to commonly-owned U.S. Patent Publication No. 2014/0046315, filed on Mar. 15, 2013, by Ladtkow et al., and U.S. Pat. Nos. 7,233,820 and 9,044,254, the entire contents of each of which are hereby incorporated by reference.


System 500 generally includes an operating table 520 configured to support a patient “P,” a bronchoscope 530 configured for insertion through the patient's “P's” mouth into the patient's “P's” airways; monitoring equipment 535 coupled to bronchoscope 530 (e.g., a video display, for displaying the video images received from the video imaging system of bronchoscope 530); a locating or tracking system 550 including a locating module 552, a plurality of reference sensors 554 and a transmitter mat coupled to a structure of markers 556; and a computing device 525 including software and/or hardware used to facilitate identification of a target, pathway planning to the target, navigation of a medical device to the target, and/or confirmation and/or determination of placement of EWC 512, or a suitable device therethrough, relative to the target. Computing device 525 may be similar to workstation 80 of FIG. 2 and may be configured, inter alia, to execute the methods of FIG. 1 and FIG. 4.


A fluoroscopic imaging device 510 capable of acquiring fluoroscopic or x-ray images or video of the patient "P" is also included in this particular aspect of system 500. The images, sequence of images, or video captured by fluoroscopic imaging device 510 may be stored within fluoroscopic imaging device 510 or transmitted to computing device 525 for storage, processing, and display, e.g., as described with respect to FIG. 2. Additionally, fluoroscopic imaging device 510 may move relative to the patient "P" so that images may be acquired from different angles or perspectives relative to patient "P" to create a sequence of fluoroscopic images, such as a fluoroscopic video. The pose of fluoroscopic imaging device 510 relative to patient "P" while capturing the images may be estimated via structure of markers 556 and according to the method of FIG. 4. The structure of markers is positioned under patient "P", between patient "P" and operating table 520, and may be positioned between patient "P" and a radiation source or a sensing unit of fluoroscopic imaging device 510. The structure of markers is coupled to the transmitter mat (both indicated 556) and positioned under patient "P" on operating table 520. Structure of markers and transmitter mat 556 are positioned under the target area within the patient in a stationary manner. Structure of markers and transmitter mat 556 may be two separate elements which may be coupled in a fixed manner or, alternatively, may be manufactured as a single unit. Fluoroscopic imaging device 510 may include a single imaging device or more than one imaging device. In embodiments including multiple imaging devices, each imaging device may be a different type of imaging device or the same type. Further details regarding imaging device 510 are described in U.S. Pat. No. 8,565,858, which is incorporated by reference in its entirety herein.


Computing device 525 may be any suitable computing device including a processor and storage medium, wherein the processor is capable of executing instructions stored on the storage medium. Computing device 525 may further include a database configured to store patient data, CT data sets including CT images, fluoroscopic data sets including fluoroscopic images and video, fluoroscopic 3D reconstruction, navigation plans, and any other such data. Although not explicitly illustrated, computing device 525 may include inputs, or may otherwise be configured to receive, CT data sets, fluoroscopic images/video and other data described herein. Additionally, computing device 525 includes a display configured to display graphical user interfaces. Computing device 525 may be connected to one or more networks through which one or more databases may be accessed.


With respect to the planning phase, computing device 525 utilizes previously acquired CT image data for generating and viewing a three-dimensional model or rendering of the patient's "P's" airways, enables the identification of a target on the three-dimensional model (automatically, semi-automatically, or manually), and allows for determining a pathway through the patient's "P's" airways to tissue located at and around the target. More specifically, CT images acquired from previous CT scans are processed and assembled into a three-dimensional CT volume, which is then utilized to generate a three-dimensional model of the patient's "P's" airways. The three-dimensional model may be displayed on a display associated with computing device 525, or in any other suitable fashion. Using computing device 525, various views of the three-dimensional model or enhanced two-dimensional images generated from the three-dimensional model are presented. The enhanced two-dimensional images may possess some three-dimensional capabilities because they are generated from three-dimensional data. The three-dimensional model may be manipulated to facilitate identification of the target on the three-dimensional model or two-dimensional images, and selection of a suitable pathway through the patient's "P's" airways to access tissue located at the target can be made. Once selected, the pathway plan, three-dimensional model, and images derived therefrom can be saved and exported to a navigation system for use during the navigation phase(s). One such planning software is the ILOGIC® planning suite currently sold by Medtronic PLC.
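

As a simplified illustration of generating such a three-dimensional model, the CT volume can be segmented (here crudely, by thresholding air-filled voxels) and a surface mesh extracted with marching cubes. Real airway segmentation is considerably more involved than this; the threshold, spacing, and names below are assumptions made only for the example.

```python
import numpy as np
from skimage import measure

def airway_surface_from_ct(ct_volume, air_threshold_hu=-950, spacing=(1.0, 1.0, 1.0)):
    """Extract a crude air-lumen surface mesh from a CT volume.

    ct_volume        : 3D array of Hounsfield units, shape (z, y, x).
    air_threshold_hu : voxels below this value are treated as air-filled lumen.
    spacing          : voxel spacing in mm along (z, y, x).
    """
    air_mask = (ct_volume < air_threshold_hu).astype(np.float32)
    # Marching cubes at the 0.5 iso-level of the binary mask yields a triangle mesh.
    verts, faces, normals, values = measure.marching_cubes(air_mask, level=0.5, spacing=spacing)
    return verts, faces
```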


With respect to the navigation phase, a six degrees-of-freedom electromagnetic locating or tracking system 550, e.g., similar to those disclosed in U.S. Pat. Nos. 8,467,589, 6,188,355, and published PCT Application Nos. WO 00/10456 and WO 01/67035, the entire contents of each of which are incorporated herein by reference, or other suitable system for determining location, is utilized for performing registration of the images and the pathway for navigation, although other configurations are also contemplated. Tracking system 550 includes a locating or tracking module 552, a plurality of reference sensors 554, and a transmitter mat 556 (coupled with the structure of markers). Tracking system 550 is configured for use with a locatable guide 532 and particularly sensor 544. As described above, locatable guide 532 and sensor 544 are configured for insertion through EWC 512 into a patient's “P's” airways (either with or without bronchoscope 530) and are selectively lockable relative to one another via a locking mechanism.


Transmitter mat 556 is positioned beneath patient "P." Transmitter mat 556 generates an electromagnetic field around at least a portion of the patient "P" within which the position of a plurality of reference sensors 554 and the sensor element 544 can be determined using tracking module 552. One or more of reference sensors 554 are attached to the chest of the patient "P." The six degrees of freedom coordinates of reference sensors 554 are sent to computing device 525 (which includes the appropriate software) where they are used to calculate a patient coordinate frame of reference. Registration is generally performed to coordinate locations of the three-dimensional model and two-dimensional images from the planning phase with the patient's "P's" airways as observed through the bronchoscope 530, and to allow for the navigation phase to be undertaken with precise knowledge of the location of the sensor 544, even in portions of the airway where the bronchoscope 530 cannot reach. Further details of such a registration technique and its implementation in luminal navigation can be found in U.S. Patent Application Pub. No. 2011/0085720, the entire content of which is incorporated herein by reference, although other suitable techniques are also contemplated.
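

For illustration, a patient frame of reference can be constructed from three or more reference sensor positions by taking one sensor as the origin and building an orthonormal basis from the vectors to the others (Gram-Schmidt). This is a generic construction sketched under that assumption, not necessarily how tracking system 550 computes the frame; the names are hypothetical.

```python
import numpy as np

def patient_frame(ref_positions):
    """Build a rigid patient coordinate frame from >= 3 reference sensor positions.

    ref_positions : (N, 3) sensor positions in tracking-system coordinates (mm),
                    assumed not to be collinear.
    Returns (origin, rotation), where the columns of rotation are the frame axes.
    """
    p = np.asarray(ref_positions, dtype=float)
    origin = p[0]
    x_axis = p[1] - origin
    x_axis /= np.linalg.norm(x_axis)
    # Gram-Schmidt: remove the x component from the second edge, then normalise.
    v = p[2] - origin
    y_axis = v - np.dot(v, x_axis) * x_axis
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)
    rotation = np.stack([x_axis, y_axis, z_axis], axis=1)
    return origin, rotation
```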


Registration of the patient's "P's" location on the transmitter mat 556 is performed by moving LG 532 through the airways of the patient "P." More specifically, data pertaining to locations of sensor 544, while locatable guide 532 is moving through the airways, is recorded using transmitter mat 556, reference sensors 554, and tracking module 552. A shape resulting from this location data is compared to an interior geometry of passages of the three-dimensional model generated in the planning phase, and a location correlation between the shape and the three-dimensional model based on the comparison is determined, e.g., utilizing the software on computing device 525. In addition, the software identifies non-tissue space (e.g., air-filled cavities) in the three-dimensional model. The software aligns, or registers, an image representing a location of sensor 544 with the three-dimensional model and/or two-dimensional images generated from the three-dimensional model, which are based on the recorded location data and an assumption that locatable guide 532 remains located in non-tissue space in the patient's "P's" airways. Alternatively, a manual registration technique may be employed by navigating the bronchoscope 530 with the sensor 544 to pre-specified locations in the lungs of the patient "P", and manually correlating the images from the bronchoscope to the model data of the three-dimensional model.
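

At its core, matching the recorded sensor path to the airway geometry of the model is a point-set registration problem. The sketch below is a textbook rigid iterative-closest-point (ICP) loop with a Kabsch/SVD step, shown only for illustration; it is not the registration algorithm of the cited publication, and in practice the correlation would also exploit the constraint that the locatable guide remains in non-tissue space. All names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(source, model, iterations=50):
    """Rigidly register recorded sensor locations (source) to airway model points.

    source : (N, 3) recorded sensor positions, in mm.
    model  : (M, 3) points sampled from the three-dimensional model, in mm.
    Returns (R, t) such that source @ R.T + t approximates the model.
    """
    source = np.asarray(source, dtype=float)
    model = np.asarray(model, dtype=float)
    tree = cKDTree(model)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iterations):
        # Pair every transformed source point with its nearest model point.
        _, idx = tree.query(src)
        matched = model[idx]
        # Best rigid transform for these pairs (Kabsch / SVD).
        src_c, mat_c = src - src.mean(0), matched - matched.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ mat_c)
        d = np.sign(np.linalg.det(U @ Vt))
        R_step = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
        t_step = matched.mean(0) - src.mean(0) @ R_step.T
        src = src @ R_step.T + t_step
        # Accumulate the overall transform.
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```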


Following registration of the patient "P" to the image data and pathway plan, a user interface is displayed in the navigation software which sets forth the pathway that the clinician is to follow to reach the target. One such navigation software is the ILOGIC® navigation suite currently sold by Medtronic PLC.


Once EWC 512 has been successfully navigated proximate the target as depicted on the user interface, the locatable guide 532 may be unlocked from EWC 512 and removed, leaving EWC 512 in place as a guide channel for guiding medical devices including without limitation, optical systems, ultrasound probes, marker placement tools, biopsy tools, ablation tools (i.e., microwave ablation devices), laser probes, cryogenic probes, sensor probes, and aspirating needles to the target.


A medical device may then be inserted through EWC 512 and navigated to the target or to a specific area adjacent to the target. A sequence of fluoroscopic images may then be acquired via fluoroscopic imaging device 510, optionally by a user and according to directions displayed via computing device 525. A fluoroscopic 3D reconstruction may then be generated via computing device 525. The generation of the fluoroscopic 3D reconstruction is based on the sequence of fluoroscopic images and the projections of structure of markers 556 on the sequence of images. One or more virtual fluoroscopic images may then be generated based on the pre-operative CT scan and via computing device 525. The one or more virtual fluoroscopic images and the fluoroscopic 3D reconstruction may then be displayed to the user on a display via computing device 525, optionally simultaneously. The user may then be directed to identify and mark the target while using the virtual fluoroscopic image as a reference. The user may also be directed to identify and mark the medical device in the sequence of 2D fluoroscopic images. An offset between the location of the target and the medical device may then be determined or calculated via computing device 525. The offset may then be utilized, via computing device 525, to correct the location of the medical device on the display with respect to the target, and/or to correct the registration between the three-dimensional model and tracking system 550 in the area of the target, and/or to generate a local registration between the three-dimensional model and the fluoroscopic 3D reconstruction in the target area.


System 500, or a similar version of it, in conjunction with the method of FIG. 4 may be used in various procedures other than ENB procedures, with the required obvious modifications, such as laparoscopy or robotic-assisted surgery.


The terms “tracking” or “localization”, as referred to herein, may be used interchangeably. Although the present disclosure specifically describes the use of an EM tracking system to navigate or determine the location of a medical device, various tracking systems or localization systems may be used or applied with respect to the methods and systems disclosed herein. Such tracking, localization or navigation systems may use various methodologies including electromagnetic, Infra-Red, echolocation, optical or imaging-based methodologies. Such systems may be based on pre-operative imaging and/or real-time imaging.


In some embodiments, a standard fluoroscope may be employed to facilitate navigation and tracking of the medical device, as disclosed, for example, in U.S. Pat. No. 9,743,896 to Averbuch. Such a fluoroscopy-based localization or navigation methodology may be applied in addition to or instead of the above-mentioned EM tracking methodology, e.g., as described with respect to FIG. 5, to facilitate or enhance navigation of the medical device.


From the foregoing and with reference to the various figure drawings, those skilled in the art will appreciate that certain modifications can also be made to the present disclosure without departing from the scope of the same.


Detailed embodiments of the present disclosure are disclosed herein. However, the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms and aspects. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.


While several embodiments of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended hereto.

Claims
  • 1. A system for intraluminal navigation comprising: a workstation configured for receiving pre-procedure images; and an application stored in a memory and configured for execution on a processor of the workstation, the application executing steps of: receiving images from a fluoroscopic sweep of a desired portion of a patient; generating a three-dimensional (3D) volumetric reconstruction from the received fluoroscopic images; receiving an indication of a position of a distal tip of a catheter in an image associated with the 3D volumetric reconstruction; receiving an indication of a position of a target in two fluoroscopic images associated with the 3D volumetric reconstruction, wherein the two fluoroscopic images are from two different angles of the fluoroscopic sweep; and displaying the 3D volumetric reconstruction and the catheter relative to the target in the 3D volumetric reconstruction.
  • 2. The system of claim 1, wherein the position of the target is identified in two slice images of the 3D volumetric reconstruction.
  • 3. The system of claim 2, wherein the two slice images are taken from two different angles of the 3D volumetric reconstruction.
  • 4. The system of claim 2, wherein the indication of the position of the distal tip of the catheter and the target is automatically generated by the application.
  • 5. A method of updating a registration between a patient and a pre-procedure three-dimensional (3D) model comprising: receiving images from a fluoroscopic sweep of a desired portion of a patient; generating a three-dimensional (3D) volumetric reconstruction from the received fluoroscopic images; receiving an indication of a position of a distal tip of a catheter in an image associated with the 3D volumetric reconstruction; receiving an indication of a position of a target in two fluoroscopic images associated with the 3D volumetric reconstruction, wherein the two fluoroscopic images are from two different angles of the fluoroscopic sweep; and displaying the 3D volumetric reconstruction and the catheter relative to the target in the 3D volumetric reconstruction.
  • 6. The method of claim 5, wherein the pre-procedure 3D model is derived from computed tomography (CT) images.
  • 7. The method of claim 6, wherein the position of the target is identified in two slice images of the 3D volumetric reconstruction.
  • 8. The method of claim 7, wherein the two slice images are taken from two different angles of the 3D volumetric reconstruction.
  • 9. A method for navigation of soft tissue of a patient comprising: receiving a signal representative of a location of a catheter within a body of a patient; displaying a position of the catheter in a three-dimensional (3D) model derived from pre-procedure images; receiving images from a fluoroscopic sweep of a desired portion of the patient; estimating a pose of a fluoroscopic imaging device for each fluoroscopic image captured in the fluoroscopic sweep; generating a three-dimensional (3D) volumetric reconstruction from the received fluoroscopic images; receiving an indication of the position of a distal tip of a catheter in an image associated with the 3D volumetric reconstruction; receiving an indication of a position of a target in two fluoroscopic images associated with the 3D volumetric reconstruction, wherein the two fluoroscopic images are from two different angles of the fluoroscopic sweep; and displaying the 3D volumetric reconstruction and the catheter relative to the target in the 3D volumetric reconstruction.
  • 10. The method of claim 9, wherein the images captured by the fluoroscopic sweep include representations of a plurality of markers located beneath the patient.
  • 11. The method of claim 10, wherein the representations of the markers in the fluoroscopic images enable pose estimation of the fluoroscopic imaging device.
  • 12. The method of claim 9 further comprising navigating a catheter to a position proximate the target in the 3D model.
  • 13. The method of claim 9, wherein the signal representative of a location of the catheter in the body of the patient is an electromagnetic signal.
  • 14. The method of claim 13, further comprising a step of registering the 3D model to the patient by determining a location of at least a portion of a patient's airways based on received electromagnetic signals.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. patent application Ser. No. 16/885,188 filed May 27, 2020, now allowed, which claims priority to U.S. patent application Ser. No. 16/022,222 filed Jun. 28, 2018, now U.S. Pat. No. 10,699,448, which claims priority to U.S. Provisional Application Ser. No. 62/526,798, filed on Jun. 29, 2017, the entire content of which is incorporated by reference herein. This application also claims priority to U.S. Provisional Application Ser. No. 62/641,777, filed on Mar. 12, 2018, the entire content of which is incorporated by reference herein. This application further claims priority to U.S. Provisional Application Ser. No. 62/628,017, filed on Feb. 8, 2018, the entire content of which is incorporated by reference herein. In addition, this application also claims priority to U.S. Provisional Application Ser. No. 62/570,431, filed on Oct. 10, 2017, the entire content of which is incorporated by reference herein.

US Referenced Citations (391)
Number Name Date Kind
4686695 Macovski Aug 1987 A
5042486 Pfeiler et al. Aug 1991 A
5057494 Sheffield Oct 1991 A
5251635 Dumoulin et al. Oct 1993 A
5321113 Cooper et al. Jun 1994 A
5376795 Hasegawa et al. Dec 1994 A
5383454 Bucholz et al. Jan 1995 A
5588033 Yeung Dec 1996 A
5622170 Schulz Apr 1997 A
5638819 Manwaring et al. Jun 1997 A
5647361 Damadian Jul 1997 A
5706324 Wiesent et al. Jan 1998 A
5744802 Muehllehner et al. Apr 1998 A
5772594 Barrick Jun 1998 A
5829444 Ferre et al. Nov 1998 A
5852646 Klotz et al. Dec 1998 A
5873822 Ferre et al. Feb 1999 A
5902239 Buurman May 1999 A
5909476 Wang et al. Jun 1999 A
5930329 Navab Jul 1999 A
5951475 Gueziec et al. Sep 1999 A
5963612 Navab Oct 1999 A
5963613 Navab Oct 1999 A
5980504 Sharkey et al. Nov 1999 A
6003517 Sheffield et al. Dec 1999 A
6038282 Wiesent et al. Mar 2000 A
6049582 Navab Apr 2000 A
6050724 Schmitz et al. Apr 2000 A
6055449 Navab Apr 2000 A
6081577 Webber Jun 2000 A
6092928 Mattson et al. Jul 2000 A
6118845 Simon et al. Sep 2000 A
6120180 Graumann Sep 2000 A
6139183 Graumann Oct 2000 A
6149592 Yanof et al. Nov 2000 A
6188355 Gilboa Feb 2001 B1
6236704 Navab et al. May 2001 B1
6243439 Arai et al. Jun 2001 B1
6285739 Rudin et al. Sep 2001 B1
6289235 Webber et al. Sep 2001 B1
6307908 Hu Oct 2001 B1
6314310 Ben-Haim et al. Nov 2001 B1
6317621 Graumann et al. Nov 2001 B1
6351513 Bani-Hashemi et al. Feb 2002 B1
6359960 Wahl et al. Mar 2002 B1
6382835 Graumann et al. May 2002 B2
6389104 Bani-Hashemi et al. May 2002 B1
6404843 Vaillant Jun 2002 B1
6424731 Launay et al. Jul 2002 B1
6470207 Simon et al. Oct 2002 B1
6484049 Seeley et al. Nov 2002 B1
6485422 Mikus et al. Nov 2002 B1
6490475 Seeley et al. Dec 2002 B1
6491430 Seissler Dec 2002 B1
6546068 Shimura Apr 2003 B1
6546279 Bova et al. Apr 2003 B1
6549607 Webber Apr 2003 B1
6580938 Acker Jun 2003 B1
6585412 Mitschke Jul 2003 B2
6662036 Cosman Dec 2003 B2
6666579 Jensen Dec 2003 B2
6697664 Kienzle et al. Feb 2004 B2
6707878 Claus et al. Mar 2004 B2
6714810 Grzeszczuk et al. Mar 2004 B2
6731283 Navab May 2004 B1
6731970 Schlossbauer et al. May 2004 B2
6768784 Green et al. Jul 2004 B1
6782287 Grzeszczuk et al. Aug 2004 B2
6785356 Grass et al. Aug 2004 B2
6785571 Glossop Aug 2004 B2
6801597 Webber Oct 2004 B2
6810278 Webber et al. Oct 2004 B2
6823207 Jensen et al. Nov 2004 B1
6851855 Mitschke et al. Feb 2005 B2
6856826 Seeley et al. Feb 2005 B2
6856827 Seeley et al. Feb 2005 B2
6865253 Blumhofer et al. Mar 2005 B2
6898263 Avinash et al. May 2005 B2
6912265 Hebecker et al. Jun 2005 B2
6928142 Shao et al. Aug 2005 B2
6944260 Hsieh et al. Sep 2005 B2
6956927 Sukeyasu et al. Oct 2005 B2
7010080 Mitschke et al. Mar 2006 B2
7010152 Bojer et al. Mar 2006 B2
7016457 Senzig et al. Mar 2006 B1
7035371 Boese et al. Apr 2006 B2
7048440 Graumann et al. May 2006 B2
7066646 Pescatore et al. Jun 2006 B2
7106825 Gregerson et al. Sep 2006 B2
7117027 Zheng et al. Oct 2006 B2
7129946 Ditt et al. Oct 2006 B2
7130676 Barrick Oct 2006 B2
7142633 Eberhard et al. Nov 2006 B2
7147373 Cho et al. Dec 2006 B2
7165362 Jobs et al. Jan 2007 B2
7186023 Morita et al. Mar 2007 B2
7233820 Gilboa Jun 2007 B2
7251522 Essenreiter et al. Jul 2007 B2
7327872 Vaillant et al. Feb 2008 B2
7343195 Strommer et al. Mar 2008 B2
7369641 Tsubaki et al. May 2008 B2
7426256 Rasche et al. Sep 2008 B2
7440538 Tsujii Oct 2008 B2
7467007 Lothert Dec 2008 B2
7474913 Durlak Jan 2009 B2
7502503 Bojer et al. Mar 2009 B2
7505549 Ohishi et al. Mar 2009 B2
7508388 Barfuss et al. Mar 2009 B2
7603155 Jensen et al. Oct 2009 B2
7620223 Xu et al. Nov 2009 B2
7639866 Pomero et al. Dec 2009 B2
7664542 Boese et al. Feb 2010 B2
7671887 Pescatore et al. Mar 2010 B2
7689019 Boese et al. Mar 2010 B2
7689042 Brunner et al. Mar 2010 B2
7693263 Bouvier et al. Apr 2010 B2
7711082 Fujimoto et al. May 2010 B2
7711083 Heigl et al. May 2010 B2
7711409 Keppel et al. May 2010 B2
7712961 Horndler et al. May 2010 B2
7720520 P et al. May 2010 B2
7725165 Chen et al. May 2010 B2
7734329 Boese et al. Jun 2010 B2
7742557 Brunner et al. Jun 2010 B2
7761135 Pfister et al. Jul 2010 B2
7778685 Evron et al. Aug 2010 B2
7778690 Boese et al. Aug 2010 B2
7787932 Vilsmeier et al. Aug 2010 B2
7804991 Abovitz et al. Sep 2010 B2
7831096 Williamson et al. Nov 2010 B2
7835779 Anderson et al. Nov 2010 B2
7844094 Jeung et al. Nov 2010 B2
7853061 Gorges et al. Dec 2010 B2
7877132 Rongen et al. Jan 2011 B2
7899226 Pescatore et al. Mar 2011 B2
7907989 Borgert et al. Mar 2011 B2
7912180 Zou et al. Mar 2011 B2
7912262 Timmer et al. Mar 2011 B2
7949088 Nishide et al. May 2011 B2
7950849 Claus et al. May 2011 B2
7991450 Virtue et al. Aug 2011 B2
8000436 Seppi et al. Aug 2011 B2
8043003 Vogt et al. Oct 2011 B2
8045780 Boese et al. Oct 2011 B2
8050739 Eck et al. Nov 2011 B2
8090168 Washburn et al. Jan 2012 B2
8104958 Weiser et al. Jan 2012 B2
8111894 Haar Feb 2012 B2
8111895 Spahn Feb 2012 B2
8126111 Uhde et al. Feb 2012 B2
8126241 Zarkh et al. Feb 2012 B2
8150131 Harer et al. Apr 2012 B2
8180132 Gorges et al. May 2012 B2
8195271 Rahn Jun 2012 B2
8200316 Keppel et al. Jun 2012 B2
8208708 Homan et al. Jun 2012 B2
8229061 Hanke et al. Jul 2012 B2
8248413 Gattani et al. Aug 2012 B2
8270691 Xu et al. Sep 2012 B2
8271068 Khamene et al. Sep 2012 B2
8275448 Camus et al. Sep 2012 B2
8306303 Bruder et al. Nov 2012 B2
8311617 Keppel et al. Nov 2012 B2
8320992 Frenkel et al. Nov 2012 B2
8326403 Pescatore et al. Dec 2012 B2
8335359 Fidrich et al. Dec 2012 B2
8340379 Razzaque et al. Dec 2012 B2
8345817 Fuchs et al. Jan 2013 B2
8374416 Gagesch et al. Feb 2013 B2
8374678 Graumann Feb 2013 B2
8423117 Pichon et al. Apr 2013 B2
8442618 Strommer et al. May 2013 B2
8467589 Averbuch et al. Jun 2013 B2
8515527 Vaillant et al. Aug 2013 B2
8526688 Groszmann et al. Sep 2013 B2
8526700 Isaacs Sep 2013 B2
8532258 Bulitta et al. Sep 2013 B2
8532259 Shedlock et al. Sep 2013 B2
8548567 Maschke et al. Oct 2013 B2
8565858 Gilboa Oct 2013 B2
8625869 Harder et al. Jan 2014 B2
8666137 Nielsen et al. Mar 2014 B2
8670603 Tolkowsky et al. Mar 2014 B2
8675996 Liao et al. Mar 2014 B2
8693622 Graumann et al. Apr 2014 B2
8693756 Tolkowsky et al. Apr 2014 B2
8694075 Groszmann Apr 2014 B2
8706184 Mohr et al. Apr 2014 B2
8706186 Fichtinger et al. Apr 2014 B2
8712129 Strommer et al. Apr 2014 B2
8718346 Isaacs et al. May 2014 B2
8750582 Boese et al. Jun 2014 B2
8755587 Bender et al. Jun 2014 B2
8781064 Fuchs et al. Jul 2014 B2
8792704 Isaacs Jul 2014 B2
8798339 Mielekamp et al. Aug 2014 B2
8827934 Chopra et al. Sep 2014 B2
8831310 Razzaque et al. Sep 2014 B2
8855748 Keppel et al. Oct 2014 B2
9001121 Finlayson et al. Apr 2015 B2
9001962 Funk Apr 2015 B2
9008367 Tolkowsky et al. Apr 2015 B2
9031188 Belcher et al. May 2015 B2
9036777 Ohishi et al. May 2015 B2
9042624 Dennerlein May 2015 B2
9044190 Rubner et al. Jun 2015 B2
9044254 Ladtkow et al. Jun 2015 B2
9087404 Hansis et al. Jul 2015 B2
9095252 Popovic Aug 2015 B2
9104902 Xu et al. Aug 2015 B2
9111175 Strommer et al. Aug 2015 B2
9135706 Zagorchev et al. Sep 2015 B2
9171365 Mareachen et al. Oct 2015 B2
9179878 Jeon Nov 2015 B2
9216065 Cohen et al. Dec 2015 B2
9232924 Liu et al. Jan 2016 B2
9262830 Bakker et al. Feb 2016 B2
9265468 Rai Feb 2016 B2
9277893 Tsukagoshi et al. Mar 2016 B2
9280837 Grass et al. Mar 2016 B2
9282944 Fallavollita et al. Mar 2016 B2
9375268 Long Jun 2016 B2
9401047 Bogoni et al. Jul 2016 B2
9406134 Klingenbeck-Regn et al. Aug 2016 B2
9445772 Callaghan et al. Sep 2016 B2
9445776 Han et al. Sep 2016 B2
9466135 Koehler et al. Oct 2016 B2
9743896 Averbuch Aug 2017 B2
9918659 Chopra et al. Mar 2018 B2
10004558 Long et al. Jun 2018 B2
10194897 Cedro et al. Feb 2019 B2
10373719 Soper et al. Aug 2019 B2
10376178 Chopra Aug 2019 B2
10405753 Sorger Sep 2019 B2
10478162 Barbagli et al. Nov 2019 B2
10480926 Froggatt et al. Nov 2019 B2
10524866 Srinivasan et al. Jan 2020 B2
10555788 Panescu et al. Feb 2020 B2
10569071 Harris et al. Feb 2020 B2
10603106 Weide et al. Mar 2020 B2
10610306 Chopra Apr 2020 B2
10638953 Duindam et al. May 2020 B2
10639114 Schuh et al. May 2020 B2
10674970 Averbuch et al. Jun 2020 B2
10682070 Duindam Jun 2020 B2
10702137 Deyanov Jul 2020 B2
10706543 Donhowe et al. Jul 2020 B2
10709506 Coste-Maniere et al. Jul 2020 B2
10772485 Schlesinger et al. Sep 2020 B2
10796432 Mintz et al. Oct 2020 B2
10823627 Sanborn et al. Nov 2020 B2
10827913 Ummalaneni et al. Nov 2020 B2
10835153 Rafii-Tari et al. Nov 2020 B2
10885630 Li et al. Jan 2021 B2
20020045916 Gray et al. Apr 2002 A1
20020122536 Kerrien et al. Sep 2002 A1
20020147462 Mair et al. Oct 2002 A1
20020163996 Kerrien et al. Nov 2002 A1
20020188194 Cosman Dec 2002 A1
20030013972 Makin Jan 2003 A1
20030088179 Seeley et al. May 2003 A1
20040120981 Nathan Jun 2004 A1
20050143777 Sra Jun 2005 A1
20050220264 Homegger Oct 2005 A1
20050245807 Boese et al. Nov 2005 A1
20050281385 Johnson et al. Dec 2005 A1
20060182216 Lauritsch et al. Aug 2006 A1
20060251213 Bernhardt et al. Nov 2006 A1
20060262970 Boese et al. Nov 2006 A1
20070276216 Beyar et al. Nov 2007 A1
20080045938 Weide et al. Feb 2008 A1
20080262342 Averbruch et al. Oct 2008 A1
20110085720 Averbuch Apr 2011 A1
20130279780 Grbic et al. Oct 2013 A1
20130303945 Blumenkranz et al. Nov 2013 A1
20140035798 Kawada et al. Feb 2014 A1
20140046175 Ladtkow et al. Feb 2014 A1
20140046315 Ladtkow et al. Feb 2014 A1
20150148690 Chopra et al. May 2015 A1
20150227679 Kamer et al. Aug 2015 A1
20150265368 Chopra et al. Sep 2015 A1
20160000303 Klein et al. Jan 2016 A1
20160005194 Schretter et al. Jan 2016 A1
20160019716 Huang et al. Jan 2016 A1
20160120522 Weingarten May 2016 A1
20160125605 Lee et al. May 2016 A1
20160157939 Larkin et al. Jun 2016 A1
20160183841 Duindam et al. Jun 2016 A1
20160192860 Allenby et al. Jul 2016 A1
20160206380 Sparks et al. Jul 2016 A1
20160287343 Eichler et al. Oct 2016 A1
20160287344 Donhowe et al. Oct 2016 A1
20160302747 Averbuch Oct 2016 A1
20170035379 Weingarten et al. Feb 2017 A1
20170035380 Weingarten et al. Feb 2017 A1
20170112571 Thiel et al. Apr 2017 A1
20170112576 Coste-Maniere et al. Apr 2017 A1
20170209071 Zhao et al. Jul 2017 A1
20170258418 Averbuch et al. Sep 2017 A1
20170265952 Donhowe et al. Sep 2017 A1
20170311844 Zhao et al. Nov 2017 A1
20170319165 Averbuch Nov 2017 A1
20180078318 Barbagli et al. Mar 2018 A1
20180144092 Flitsch et al. May 2018 A1
20180153621 Duindam et al. Jun 2018 A1
20180235709 Donhowe et al. Aug 2018 A1
20180240237 Donhowe et al. Aug 2018 A1
20180256262 Duindam et al. Sep 2018 A1
20180263706 Averbuch Sep 2018 A1
20180279852 Rafii-Tari et al. Oct 2018 A1
20180296283 Crawford et al. Oct 2018 A1
20180325419 Zhao et al. Nov 2018 A1
20180360342 Fuimaono Dec 2018 A1
20190000559 Berman et al. Jan 2019 A1
20190000560 Berman et al. Jan 2019 A1
20190008413 Duindam et al. Jan 2019 A1
20190038365 Soper et al. Feb 2019 A1
20190065209 Mishra et al. Feb 2019 A1
20190110839 Rafii-Tari et al. Apr 2019 A1
20190175062 Rafii-Tari et al. Jun 2019 A1
20190175799 Hsu et al. Jun 2019 A1
20190183318 Froggatt et al. Jun 2019 A1
20190183585 Rafii-Tari et al. Jun 2019 A1
20190183587 Rafii-Tari et al. Jun 2019 A1
20190192234 Gadda et al. Jun 2019 A1
20190209016 Herzlinger et al. Jul 2019 A1
20190209043 Zhao et al. Jul 2019 A1
20190216548 Ummalaneni Jul 2019 A1
20190239723 Duindam et al. Aug 2019 A1
20190239831 Chopra Aug 2019 A1
20190250050 Sanborn et al. Aug 2019 A1
20190254649 Walters et al. Aug 2019 A1
20190269470 Barbagli et al. Sep 2019 A1
20190269818 Dhanaraj et al. Sep 2019 A1
20190269819 Dhanaraj et al. Sep 2019 A1
20190272634 Li et al. Sep 2019 A1
20190298160 Ummalaneni et al. Oct 2019 A1
20190298451 Wong et al. Oct 2019 A1
20190320878 Duindam et al. Oct 2019 A1
20190320937 Duindam et al. Oct 2019 A1
20190336238 Yu et al. Nov 2019 A1
20190343424 Blumenkranz et al. Nov 2019 A1
20190350659 Wang et al. Nov 2019 A1
20190365199 Zhao et al. Dec 2019 A1
20190365479 Rafii-Tari Dec 2019 A1
20190365486 Srinivasan et al. Dec 2019 A1
20190380787 Ye et al. Dec 2019 A1
20200000319 Saadat et al. Jan 2020 A1
20200000526 Zhao Jan 2020 A1
20200008655 Schlesinger et al. Jan 2020 A1
20200030044 Wang et al. Jan 2020 A1
20200030461 Sorger Jan 2020 A1
20200038750 Kojima Feb 2020 A1
20200043207 Lo et al. Feb 2020 A1
20200046431 Soper et al. Feb 2020 A1
20200046436 Tzeisler et al. Feb 2020 A1
20200054399 Duindam et al. Feb 2020 A1
20200054408 Schuh et al. Feb 2020 A1
20200060771 Lo et al. Feb 2020 A1
20200069192 Sanborn et al. Mar 2020 A1
20200077870 Dicarlo et al. Mar 2020 A1
20200078023 Cedro et al. Mar 2020 A1
20200078095 Chopra et al. Mar 2020 A1
20200078103 Duindam et al. Mar 2020 A1
20200085514 Blumenkranz Mar 2020 A1
20200109124 Pomper et al. Apr 2020 A1
20200129045 Prisco Apr 2020 A1
20200129239 Bianchi et al. Apr 2020 A1
20200138514 Blumenkranz et al. May 2020 A1
20200138515 Wong May 2020 A1
20200142013 Wong May 2020 A1
20200155116 Donhowe et al. May 2020 A1
20200155232 Wong May 2020 A1
20200170623 Averbuch Jun 2020 A1
20200170720 Ummalaneni Jun 2020 A1
20200179058 Barbagli et al. Jun 2020 A1
20200188021 Wong et al. Jun 2020 A1
20200188038 Donhowe et al. Jun 2020 A1
20200205903 Srinivasan et al. Jul 2020 A1
20200205904 Chopra Jul 2020 A1
20200214664 Zhao et al. Jul 2020 A1
20200229679 Zhao et al. Jul 2020 A1
20200242767 Zhao et al. Jul 2020 A1
20200275860 Duindam Sep 2020 A1
20200297442 Adebar et al. Sep 2020 A1
20200315554 Averbuch et al. Oct 2020 A1
20200330795 Sawant et al. Oct 2020 A1
20200352427 Deyanov Nov 2020 A1
20200364865 Donhowe et al. Nov 2020 A1
20200383750 Kemp et al. Dec 2020 A1
20210000524 Barry et al. Jan 2021 A1
Foreign Referenced Citations (45)
Number Date Country
0013237 Jul 2003 BR
0116004 Jun 2004 BR
0307259 Dec 2004 BR
0412298 Sep 2006 BR
112018003862 Oct 2018 BR
1644519 Dec 2008 CZ
486540 Sep 2016 CZ
2709512 Aug 2017 CZ
2884879 Jan 2020 CZ
19919907 Nov 2000 DE
69726415 Sep 2004 DE
10323008 Dec 2004 DE
102004004620 Aug 2005 DE
0917855 May 1999 EP
1593343 Nov 2005 EP
1644519 Dec 2008 EP
3413830 Sep 2019 EP
3478161 Feb 2020 EP
3641686 Apr 2020 EP
3644885 May 2020 EP
3644886 May 2020 EP
3749239 Dec 2020 EP
PA03005028 Jan 2004 MX
PA03000137 Sep 2004 MX
PA03006874 Sep 2004 MX
225663 Jan 2005 MX
226292 Feb 2005 MX
PA03010507 Jul 2005 MX
PA05011725 May 2006 MX
06011286 Mar 2007 MX
246862 Jun 2007 MX
2007006441 Aug 2007 MX
265247 Mar 2009 MX
284569 Mar 2011 MX
9926826 Jun 1999 WO
9944503 Sep 1999 WO
0010456 Mar 2000 WO
0016684 Mar 2000 WO
0167035 Sep 2001 WO
0187136 Nov 2001 WO
2004081877 Sep 2004 WO
2005015125 Feb 2005 WO
2005082246 Sep 2005 WO
2009081297 Jul 2009 WO
2015101948 Jul 2015 WO
Non-Patent Literature Citations (12)
Entry
“Image-Based Bronchoscopy Navigation System Based on CT and C-arm Fluoroscopy”, Big Data Analytics in the Social and Ubiquitous Context : 5th International Workshop on Modeling Social Media, MSM 2014, 5th International Workshop on Mining Ubiquitous and Social Environments, MUSE 2014 and First International Workshop on Machine LE, No. 558, Mar. 29, 2014 (Mar. 29, 2014).
Extended European Search Report issued in European Application No. 18823602.0 dated Mar. 4, 2021.
Australian Examination Report No. 2 issued in Appl. No. AU 2016210747 dated Oct. 18, 2017 (4 pages).
Canadian Office Action issued in Appl. No. 2,937,825 dated Mar. 26, 2018 (4 pages).
CT scan—Wikipedia, the free encyclopedia [retrieved from Internet on Mar. 30, 2017]. published on Jun. 30, 2015 as per Wayback Machine.
Extended European Search Report from Appl. No. EP 16182953.6-1666 dated Jan. 2, 2017.
F. Natterer, The Mathematics of Computerized Tomography, Wiley, 1989.
G. Ramm and A.I. Katsevich, The Radon Transform and Local Tomography, CRC Press, 1996.
G.T. Herman and Attila Kuba, Discrete Tomography, Birhauser, 1999.
G.T. Herman et al., Basic Methods of Tomography and Inverse Problems, Hildger, 1987.
International Search Report and Written Opinion of the International Searching Authority issued in corresponding Appl. No. PCT/US2018/040222 dated Nov. 12, 2018 (16 pages).
Patcharapong Suntharos, et al., “Real-time three dimensional CT and MRI to guide interventions for congenital heart disease and acquired pulmonary vein stenosis”, Int. J. Cardiovasc. Imaging, vol. 33, pp. 1619-1626 (2017).
Related Publications (1)
Number Date Country
20210049796 A1 Feb 2021 US
Provisional Applications (4)
Number Date Country
62641777 Mar 2018 US
62628017 Feb 2018 US
62570431 Oct 2017 US
62526798 Jun 2017 US
Continuations (2)
Number Date Country
Parent 16885188 May 2020 US
Child 17089151 US
Parent 16022222 Jun 2018 US
Child 16885188 US