This invention relates generally to image-guided endoscopy and, in particular, to a system and method wherein real-time measurements of actual instrument movements are compared to precomputed insertion depth values based upon shape models, thereby providing continuous prediction of the instrument's location and orientation and technician-free guidance irrespective of adverse events.
Bronchoscopy is a procedure whereby a flexible instrument with a camera on the end, called a bronchoscope, is navigated through the body's tracheobronchial airway tree. Bronchoscopy enables a physician to perform biopsies or deliver treatment [39]. This procedure is often performed for lung cancer diagnosis and staging. Before a bronchoscopy takes place, a 3D multidetector computed tomography (MDCT) scan is created of the patient's chest consisting of a series of two-dimensional (2D) images [15, 38, 5]. A physician then uses the MDCT scan to identify a region of interest (ROI) he/she wishes to navigate to. ROIs may be lesions, lymph nodes, treatment delivery sites, lavage sites, etc. Next, either a physician plans a route to each ROI by looking at individual 2D MDCT slices or automated methods compute routes to each ROI [6, 8]. Later, during bronchoscopy, the physician attempts to maneuver the bronchoscope to each ROI along its pre-defined route. Upon reaching the planned destination, there is typically no visual indication that the bronchoscope is near the ROI, as the ROI often resides outside of the airway tree (extraluminal), while the bronchoscope is inside the airway tree (endoluminal). Because of the challenges in standard bronchoscopy, physician skill levels vary greatly, and navigation errors occur as early as the second airway generation [6, 31].
With the advances in computers, researchers are developing image-guided intervention (IGI) systems to help guide physicians during surgical procedures [11, 32, 37, 27]. Bronchoscopy-guidance systems are IGI systems that provide navigational instructions to guide a physician maneuvering a bronchoscope to an ROI [8, 4, 3, 24, 35, 14, 2, 9, 33, 30, 13, 36, 1]. In order to explain how these systems provide navigational instructions, it is necessary to formally define the elements involved. The patient's chest, encompassing the airway tree, vasculature, lungs, ribs, etc., makes up the physical space. During standard bronchoscopy, two different data manifestations of the physical space are created. The first data manifestation, referred to as the virtual space, is derived from the MDCT scan and consists of virtual-bronchoscopy (VB) renderings of the airway tree. Each VB rendering, referred to as IV, represents a view from a virtual camera CV.
The second data manifestation created during live bronchoscopy, referred to as the real space, consists of the bronchoscope camera's live stream of video frames depicting the real world from within the patient's airway tree. Each live video frame, referred to as IR, represents a view from the real camera CR.
To provide navigational instructions, the bronchoscopy-guidance system attempts to place CV in virtual space in an orientation roughly corresponding to CR in physical space. If a bronchoscopy-guidance system can do this correctly, the views, IV and IR, produced by CV and CR, are said to be synchronized. With synchronized views, the guidance system can then relate navigational information that exists in the virtual space to the physician, ultimately providing guidance to reach an ROI.
Currently, bronchoscopy guidance systems fall into two categories based on the synchronization method for IV and IR: 1) electromagnetic navigation bronchoscopy (ENB); and 2) image-based bronchoscopy [3, 24, 35, 14, 2, 9, 13, 36, 29, 34, 28, 26, 40]. ENB systems track the bronchoscope through the patient's airways by affixing an electromagnetic (EM) sensor to the bronchoscope and generating an EM field through the patient's body [2, 9, 36, 28, 40]. As the sensor is maneuvered through the lungs, the ENB system reports its position within the EM field in real time. Image-based bronchoscopy systems derive views from the MDCT data and compare them to live bronchoscopic video using image-based registration and tracking techniques [3, 24, 35, 14, 13, 29, 34, 28, 26]. In both cases, VB views are displayed to provide guidance. Both ENB and image-based bronchoscopy methods have shortcomings that prevent continuous robust synchronization. ENB systems suffer from patient motion (breathing, coughing, etc.), are sensitive to electromagnetic signal noise, and require expensive equipment. Image-based bronchoscopy techniques rely on the presence of adequate information in the bronchoscope video frames to enable registration. Oftentimes, video frames lack enough structural information to allow for image-based registration or tracking. For example, the camera CR may be occluded by blood, mucus, or bubbles. Other times, CR may be pointed directly at an airway wall. Because registration and tracking techniques are not robust to these events, an attending technician is required to operate the system.
This invention overcomes the drawbacks of electromagnetic navigation bronchoscopy (ENB) and image-based bronchoscopy systems by comparing real-time measurements of actual instrument movements to precomputed insertion depth values provided by shape models. The preferred methods implement this comparison in real-time, providing continuous prediction of the instrument's tip location and orientation. In this way, the invention enables technician-free guidance and continuous procedure guidance irrespective of adverse events.
A method of determining the location of an endoscope within a body lumen according to the invention comprises the step of precomputing a virtual model of an endoscope that approximates insertion depths at a plurality of view sites along a predefined path to a region of interest (ROI). A “real” endoscope is provided with a device such as an optical sensor to observe actual insertion depths during a live procedure. The observed insertion depths are compared in real time to the precomputed insertion depths at each view site along the predefined path, enabling the location of the endoscope relative to the virtual model to be predicted at each view site by selecting the view site with the precomputed insertion depth that is closest to the observed insertion depth. An endoluminal rendering may then be generated providing navigational instructions based upon the predicted locations. The lumen may form part of an airway tree, and the endoscope may be a bronchoscope.
The device operative to observe actual insertion depths may additionally be operative to observe roll angle, which may be used to rotate the default viewing direction at a selected view site. The method of Gibbs et al. may be used to predetermine the optimal path leading to an ROI. The method may further include the step of displaying the rendered predicted locations and actual view sites from the device. The virtual model may be an MDCT image-based shape model, and the precomputing step may allow for an inverse lookup of the predicted locations. The method may include the step of calculating separate insertion depths to each view site along the medial axes of the lumen, and the endoscope may be approximated as a series of line segments.
In accordance with certain preferred embodiments, the lumen is defined using voxel locations, and the method may include the step of calculating separate insertion depths to any voxel location within the lumen and/or approximating the shape of the endoscope to any voxel location within the lumen. The insertion depth to each view site may be calculated by summing distances along the lumen medial axes. The insertion depth to each voxel location within the lumen may be calculated by finding the shortest distance from a root voxel location to every voxel location within the lumen using Dijkstra's algorithm, or calculated by using a dynamic programming algorithm. The shape of the endoscope may be approximated using the lumen medial axes or through the use of Dijkstra's algorithm. The edge weight used in Dijkstra's algorithm may be determined using a dot product and the Euclidean distance between voxel locations within the lumen. If utilized, the dynamic programming function may include an optimization function based on the dot product between voxel locations within the lumen.
To overcome the drawbacks of ENB and image-based bronchoscopy systems, we propose a fundamentally different method. Our method compares real-time measurements of the bronchoscope movement to precomputed insertion depth values in the lungs provided by MDCT-image-based bronchoscope-shape models. Our method uses this comparison to provide a real-time, continuous prediction of the bronchoscope tip's location and orientation. In this way, our method then enables continuous procedure guidance irrespective of adverse events. It also enables technician-free guidance.
Let M be a 3D MDCT scan of the patient's airway tree N. While we focus on bronchoscopy, the invention is applicable to any procedure requiring guidance through a tubular structure, such as the colon or vasculature.
The airway tree N is segmented from M using the method of Graham et al. [10]. This results in a binary-valued volume

v(x, y, z) = 1, if voxel (x, y, z) lies inside N; 0, otherwise, (1)

representing a set of voxels Vseg, where v(x, y, z) ∈ Vseg ⇔ v(x, y, z) = 1.
Using the branching organ conventions of Kiraly et al., the centerlines of N can be derived using the method developed by Yu et al., resulting in a tree T = (V, B, P) [16, 41, 42]. V is a set of view sites {v1, . . . , vJ}, where J ≥ 1 is an integer. Each view site v = (x, y, z, α, β, γ), where (x, y, z) denotes v's 3D position in M and (α, β, γ) denotes the Euler angles defining the default view direction at v. Each v ∈ V is located on one of the centerlines of N. Therefore, V is referred to as the set of the airway tree's centerlines, and it represents the set of centralized axes that follow all possible navigable routes in N. B is a set of branches {b1, . . . , bk}, where each b = {vc, . . . , vi}, vc, . . . , vi ∈ V, and 0 ≤ c ≤ i. Each branch must begin either at the first view site at the origin of the organ, called the root site, or at a bifurcation. Each branch must end either at a bifurcation or at a terminating view site e. A terminating view site is any view site that has no children. P is a set of paths {p1, . . . , pm}, where each p consists of connected branches. A path must begin at the root site and end at a terminating view site e.
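The tree T = (V, B, P) maps naturally onto simple data structures. The sketch below is illustrative only; the class and field names are our own, not from the source.

```python
from dataclasses import dataclass, field

@dataclass
class ViewSite:
    """A view site v = (x, y, z, alpha, beta, gamma) on an airway centerline."""
    x: float
    y: float
    z: float
    alpha: float = 0.0  # Euler angles defining the default view direction
    beta: float = 0.0
    gamma: float = 0.0
    children: list = field(default_factory=list)  # indices of child view sites

@dataclass
class Branch:
    """A branch b = {v_c, ..., v_i}: view-site indices running from the root
    site or a bifurcation to the next bifurcation or a terminating site."""
    sites: list

@dataclass
class Tree:
    """T = (V, B, P): view sites, branches, and root-to-terminal paths."""
    V: list  # all view sites
    B: list  # branches
    P: list  # paths, each a list of connected branch indices

def is_terminating(tree, idx):
    """A terminating view site is any view site that has no children."""
    return not tree.V[idx].children
```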
The invention comprises two major aspects: 1) real-time measurement of the bronchoscope's movements by a sensor mounted outside the patient's body; and 2) precomputed MDCT-based bronchoscope-shape models that link those measurements to locations in the airway tree.
All virtual-endoscopy-driven IGI systems require a fundamental connection between the virtual space and physical space. In ENB-based systems, the connection involves a registration of the EM field in physical space to the 3D MDCT data representing virtual space. Image-based bronchoscopy systems draw upon some form of registration between the live bronchoscopic video of physical space and VB renderings devised from 3D MDCT-based virtual space. Our method uses a fundamentally different connection. Live measurements of the bronchoscope's movements through physical space, as made by a calibrated sensor mounted outside a patient's body, are linked to the virtual-space representation of the airway tree N.
The sensor tracks the bronchoscope surface that moves past the sensor. If the sensor is oriented correctly, the "Y" component (up-down) gives the insertion depth, while the "X" component (left-right) gives the roll angle.
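As a rough sketch of this measurement step, the conversion from raw sensor counts to insertion depth and roll angle might look as follows. The calibration constants (counts per millimeter and shaft circumference) are hypothetical values for illustration, not from the source.

```python
# Hypothetical calibration constants (not from the source): sensor counts per
# mm of shaft travel, and the shaft circumference used to convert lateral
# surface motion into a roll angle.
COUNTS_PER_MM = 40.0           # sensor resolution along either axis
SHAFT_CIRCUMFERENCE_MM = 18.0  # illustrative bronchoscope shaft circumference

def sensor_to_motion(dx_counts, dy_counts):
    """Convert raw sensor counts to (insertion-depth change in mm,
    roll-angle change in degrees).

    The Y component of surface motion past the sensor gives insertion depth.
    The X component is arc length traced on the shaft surface, so dividing by
    the circumference gives the fraction of a full revolution.
    """
    d_insertion_mm = dy_counts / COUNTS_PER_MM
    arc_mm = dx_counts / COUNTS_PER_MM
    d_roll_deg = 360.0 * arc_mm / SHAFT_CIRCUMFERENCE_MM
    return d_insertion_mm, d_roll_deg
```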
Because a bronchoscope is a torsionally-stiff, semi-rigid object, any roll measured along the shaft of the bronchoscope will propagate throughout the entire shaft [21]. Simply stated, if the physician rotates the bronchoscope at the handle, the tip of the bronchoscope will also rotate the same amount. This is what gives the physician control to maneuver the bronchoscope.
The measurement sensor sends the insertion depth and roll angle measurements to a prediction engine running in real time on a computer. An algorithm uses these measurements to predict a view site location and orientation. We now discuss bronchoscope models and how they can be used for calculating insertion depths to view sites.
Previous research by Kukuk et al. focused on modeling bronchoscopes to gain insertion-depth estimates for robotic planning [21, 23, 18, 22, 20, 19]. Kukuk's goal was to preplan a series of bronchoscope insertions, rotations, and tip articulations to reach a target. In doing so, the method calculates an insertion depth to points in an airway tree using a search algorithm. It models a bronchoscope as a series of rigid “tubes” connected by “joints.” A bronchoscope's shape is determined by the lengths and diameters of the tubes as well as how the tubes connect to each other. Each joint allows only a discrete set of possible angles between two consecutive tubes. Using a discrete set of possible angles reduces the search space to a finite number of solutions. However, the solution space grows exponentially as the number of tubes increases. In practice, the human airway-tree structure reduces the search space, and the algorithm can find solutions in a feasible time. However, the method cannot find a solution to any arbitrary location in the airways in a feasible time. Therefore, we use a different method for calculating a bronchoscope model, as explained next.
Similar to the method of Kukuk et al., our bronchoscope-model calculation is done offline to allow for real-time bronchoscope location prediction. The purpose of a bronchoscope model is to precompute and store insertion depths to every airway-tree view site so that later, during bronchoscopy, they may be compared to true insertion measurements provided by the sensor. Precomputation allows for an inverse lookup of the predicted location during a live bronchoscopy.
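The inverse lookup itself can be sketched as a nearest-depth search over the precomputed route depths. Since insertion depths increase monotonically along a route, a binary search suffices; the function names below are illustrative, not from the source.

```python
import bisect

def predict_view_site(route_depths, observed_depth):
    """Return the index of the view site along the preplanned route whose
    precomputed insertion depth (mm, sorted ascending) is closest to the
    observed insertion depth reported by the sensor.

    Depths increase monotonically along a route, so a binary search finds
    the nearest entry in O(log n) -- fast enough to run per sensor report.
    """
    i = bisect.bisect_left(route_depths, observed_depth)
    if i == 0:
        return 0
    if i == len(route_depths):
        return len(route_depths) - 1
    # Choose the closer of the two neighboring view sites.
    before, after = route_depths[i - 1], route_depths[i]
    return i if after - observed_depth < observed_depth - before else i - 1
```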
To begin our description of the bronchoscope model, consider an ordered list of 3D points {ua, ub, . . . , uk}, where each point is in Vseg, ua is the proximal end of the trachea, and uk is a view site. Connecting each consecutive pair of 3D points creates a list of connected line segments that define our bronchoscope model S(k):

S(k) = {uaub, ubuc, . . . , ujuk}. (2)

This representation of a bronchoscope approximates the bronchoscope shape when the bronchoscope tip is located at view site k. By converting each line segment into a vector ûx, we can compute the insertion depth to view site k as the sum of the segment lengths:

D(k) = Σx ∥ûx∥2, (3)

where x iterates through the list of ordered vectors and ∥ûx∥2 is the L2-norm of vector ûx. Using this method, we can calculate a separate insertion depth to each view site along the centerlines of all airway-tree branches.
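The insertion-depth computation amounts to summing the Euclidean lengths of consecutive segments; a minimal sketch, with an illustrative function name:

```python
import math

def insertion_depth(points):
    """Insertion depth to the last point of an ordered list of 3D points
    {u_a, u_b, ..., u_k}: the sum of the L2-norms of the vectors between
    consecutive points."""
    depth = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        depth += math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2)
    return depth
```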
Unlike the method of Kukuk, which uses 3D tubes connected by joints, we approximate a bronchoscope as a series of line segments that have diameter 0; i.e., S(k) technically models only the central axis of the real bronchoscope [21]. As this approximation unrealistically allows the bronchoscope model to touch the airway wall in the segmentation Vseg, we prefer to account for the non-zero diameter of the real bronchoscope in our bronchoscope-model calculation.
To do this, we first point out that the central axis of the real bronchoscope can only be as close as its radius r to the airway wall. To account for this, we erode the segmentation of N, Vseg, using the following equation:
V̂seg = Vseg ⊖ b, (4)

where b is a spherical structuring element having a radius r and ⊖ is the morphological erosion operation. In the eroded image V̂seg, if the bronchoscope model touches the airway wall, then the central axis of the bronchoscope is a distance r from the true airway wall.
V̂seg loses small branches that have a diameter less than 2r. Because we do not want to exclude any potentially plausible bronchoscope maneuvers, we force both the centerlines of small branches and all voxels along the line segments between any two consecutive view sites to be contained in V̂seg. Overriding the erosion ensures that we can calculate a bronchoscope model for every view site. Thus, V̂seg is redefined to include only the voxels that remain after the erosion and view-site inclusion.
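Assuming a voxelized binary segmentation, the erosion of equation (4) and the centerline override might be sketched as below. This is a brute-force implementation for clarity; a production system would use an optimized morphology routine, and the function names are ours.

```python
import numpy as np

def erode_with_sphere(V_seg, r):
    """Morphological erosion of a binary volume by a spherical structuring
    element of radius r (in voxels): a voxel survives only if every voxel
    within distance r of it is inside the segmentation."""
    offsets = [(dx, dy, dz)
               for dx in range(-r, r + 1)
               for dy in range(-r, r + 1)
               for dz in range(-r, r + 1)
               if dx * dx + dy * dy + dz * dz <= r * r]
    padded = np.pad(V_seg, r, mode="constant")  # outside the volume is background
    out = np.ones_like(V_seg)
    sx, sy, sz = V_seg.shape
    for dx, dy, dz in offsets:
        shifted = padded[r + dx : r + dx + sx,
                         r + dy : r + dy + sy,
                         r + dz : r + dz + sz]
        out &= shifted  # AND over the whole structuring element
    return out

def force_centerlines(V_eroded, centerline_voxels):
    """Override the erosion so small branches are not lost: re-insert all
    centerline (view-site) voxels into the eroded segmentation."""
    out = V_eroded.copy()
    for (x, y, z) in centerline_voxels:
        out[x, y, z] = 1
    return out
```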
As discussed below, we consider three methods for creating a bronchoscope model: (a) Centerline; (b) Dijkstra-based; and (c) Dynamic Programming.
The centerline model is the simplest bronchoscope model. The list of 3D points S(k), terminating at an arbitrary view site k, consists of all ancestor view sites traced back to the proximal end of the trachea. This method gives a rough approximation to a true bronchoscope, because the view sites never touch the walls of the segmentation, which is not the case with a real bronchoscope in N. Furthermore, a real bronchoscope does not bend around corners in the same manner as the centerlines can.
Dijkstra's shortest-path algorithm finds the shortest distance between two nodes in an arbitrary graph, where the distance depends on edge weights between nodes [17]. For computing a bronchoscope model, we use Dijkstra's algorithm as follows. First, the edge weight between two nodes, j and k, is defined as:
w(j,k)=wE(j,k)+wa(j,k), (5)
where j and k are voxels in V̂seg, wE(j,k) is the Euclidean distance between j and k, and wa(j,k) is the edge weight due to the angle between the incident vectors coming into voxels j and k. wE(j,k) is given by:

wE(j,k) = √( Σd=1,2,3 (jd − kd)² ), (6)

where kd is the dth coordinate of the 3D point k. wa(j,k) is given by:

wa(j,k) = β(1 − (ĵi · k̂i))^p, (7)

where ĵi is the normalized incident vector coming into voxel j, k̂i is the normalized incident vector coming into voxel k from j, (m · n) represents the dot product of vectors m and n, and β and p are constants.
These two weight terms serve different purposes. In the cost (5), wE(j,k) penalizes longer solutions, while wa(j,k) penalizes solutions where the bronchoscope model makes a sharp bend. This encourages solutions that put less stress on the bronchoscope.
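A sketch of the combined edge weight, using the β and p values reported later in the experiments; the function name and argument layout are our own.

```python
import math

BETA, P = 100.0, 3.5  # parameter values used in the experiments

def edge_weight(j, k, j_incident):
    """Edge weight w(j,k) = wE(j,k) + wa(j,k): a Euclidean-distance term plus
    a bend penalty based on the angle between incident vectors.

    j, k       : 3D voxel coordinates
    j_incident : normalized incident vector coming into j from its parent
    """
    diff = [kd - jd for jd, kd in zip(j, k)]
    wE = math.sqrt(sum(d * d for d in diff))  # Euclidean term
    k_incident = [d / wE for d in diff]       # normalized vector into k from j
    dot = sum(a * b for a, b in zip(j_incident, k_incident))
    wa = BETA * (1.0 - dot) ** P              # bend-penalty term
    return wE + wa
```

A straight continuation incurs no bend penalty, while a right-angle bend adds the full β term, steering Dijkstra's algorithm toward gently curving models.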
The incident vectors ĵi and k̂i in (7) are known during model computation, as Dijkstra's algorithm is greedy [17]. It greedily adds nodes to a set of confirmed nodes with known shortest distances. In our implementation, j is already in the set of known shortest-distance nodes.
Algorithms 1 and 2 detail our implementation of the Dijkstra-based bronchoscope model. Algorithm 1 computes a bronchoscope model for each view site in an airway tree and stores them in a data structure. Algorithm 2 extracts the bronchoscope model to a view site vs out of the data structure from Algorithm 1.
Because we are selecting discrete points to be members of the set of bronchoscope-model points, we have no guarantee that the line segment connecting these two points will remain in the segmentation at all times. The “Dist” function in Algorithm 1 checks if a line segment between two model points exits the segmentation, by stepping along the line segment at a small step size and ensuring that the nearest voxel to each step point is inside the segmentation.
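The containment check described above can be sketched as follows; the function and parameter names are illustrative, and the segmentation is abstracted as a membership test.

```python
import math

def segment_stays_inside(p0, p1, in_segmentation, step=0.25):
    """Check that the line segment p0 -> p1 never exits the segmentation:
    step along the segment at a small step size and verify that the nearest
    voxel to every sample point lies inside the segmentation.

    in_segmentation : callable taking an integer voxel coordinate (x, y, z)
                      and returning True if it is in the segmentation.
    """
    length = math.dist(p0, p1)
    n_steps = max(1, int(length / step))
    for i in range(n_steps + 1):
        t = i / n_steps
        sample = [a + t * (b - a) for a, b in zip(p0, p1)]
        voxel = tuple(round(c) for c in sample)  # nearest voxel to the sample
        if not in_segmentation(voxel):
            return False
    return True
```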
Dynamic programming (DP) algorithms find optimal solutions, based on an optimization function, for problems that have optimizable overlapping subproblems [17]. Before defining our use of DP for computing a bronchoscope model, it is necessary to recast the bronchoscope-model problem. Recall that S(k) is a list of connected line segments per (2). Similar to (3), we again represent a line segment as a vector. However, this time we represent each line segment by its end point, so that line segment ujuk is identified with voxel uk. The optimization function maximizes the minimum dot product encountered along the candidate model:

C(k, l) = max over t ∈ N(k) of min( C(t, l − 1), t̂i · k̂i ), (8)

where N(k) is a neighborhood about voxel k, l is the number of links used, k̂i is the normalized vector from t to k, and t̂i is the incident vector coming into voxel t from its parent voxel.
Using this method, we calculate an optimal bronchoscope model from the root site to every voxel in V̂seg. In the memoized DP framework, solutions are built from the "bottom up," and results are saved so later recalculation is not needed [17]. First, the DP algorithm determines the optimal solution to every voxel using only one link and an automatically generated unit vector coming into the root site, r̂i. The solution to an arbitrary voxel x ∈ V̂seg is simply the line segment from the root site to x. The algorithm stores the dot product between r̂i and the normalized vector from the root site to x in a 2D array that is indexed by x and the number of links used.
Next, the algorithm determines the optimal solution to every voxel using two links. To find the optimal solution using two links, the method uses the previously calculated data from the optimal solution with one link. The algorithm calculates the solution to an arbitrary voxel using two links by adding a link from each neighbor to x, providing several candidate bronchoscope models to voxel x. For each candidate bronchoscope model, the method next calculates the minimum dot product found for the solution with one link (from the 2D array) and the new dot product (created with the addition of the new link). Finally, the method chooses the bronchoscope model with the maximum of all the minimum dot products. This is akin to selecting the bronchoscope model whose sharpest angle is as straight as possible, given the segmentation. The same procedure is carried out for all other voxels. We store the maximum of the minimum values in the 2D array saving the best solution to each voxel. Solutions are built up to a user-defined number of links in this manner. The algorithm also maintains another 2D table that contains back pointers. This table indicates the parent of each voxel so that we can retrieve the voxels belonging to S(k).
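The steps above can be condensed into a memoized max-min dynamic program, holding a table of best minimum dot products and a back-pointer table per link count. This is a simplified sketch under our own naming; the real algorithm additionally restricts candidates to V̂seg and checks segment containment.

```python
def dp_models(voxels, neighbors, unit, root, max_links):
    """Max-min dynamic program for bronchoscope models (simplified sketch).

    best[l][k]   : largest achievable minimum dot product over any l-link
                   model from the root site to voxel k
    parent[l][k] : back pointer used to recover the model S(k)
    neighbors(k) : candidate predecessor voxels for k (the neighborhood)
    unit(a, b)   : normalized 3D vector pointing from voxel a to voxel b
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    r_i = (0.0, 0.0, 1.0)  # generated unit vector coming into the root site
    best = [{root: 1.0}]   # zero links: only the root, no bend yet
    parent = [{root: None}]
    for l in range(1, max_links + 1):
        best_l, parent_l = {}, {}
        for k in voxels:
            for t in neighbors(k):
                if t not in best[l - 1]:
                    continue  # no (l-1)-link model reaches t
                prev = parent[l - 1][t]
                t_i = unit(prev, t) if prev is not None else r_i
                cand = min(best[l - 1][t], dot(t_i, unit(t, k)))
                if cand > best_l.get(k, -2.0):  # keep the max of the minima
                    best_l[k], parent_l[k] = cand, t
        best.append(best_l)
        parent.append(parent_l)
    return best, parent
```

Tracing the `parent` table backwards from a view site recovers the voxels belonging to S(k), mirroring the back-pointer table described above.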
Algorithm 3 specifies the DP algorithm for computing all of the bronchoscope models for a given airway tree segmentation. Algorithm 4 shows how to trace backwards through the output of Algorithm 3 to retrieve a bronchoscope model leading to view site vs.
We implemented the bronchoscope tracking method for testing purposes. The computer-based prediction engine and the bronchoscope-model generation software were written in Visual C++ with MFC interface controls. We interfaced two computer mice to the computer. The first served as a standard computer mouse to interface to software. The second mouse was a Logitech MX 1100 wireless laser mouse that served as the measurement sensor. The measurement-sensor inputs were tagged as such so that its input could be identified separately from the standard computer-mouse inputs. The method ran on a computer with two 2.99 GHz processors and 16 GB of RAM, both for the precomputation of the bronchoscope models and for later real-time bronchoscope tracking. During tracking, every time the sensor provided a measurement, the tracking method invoked the prediction engine to predict a bronchoscope location using the most recent measurements.
We performed two tests. The first used a PVC-pipe setup to compare the accuracy of the three bronchoscope models for predicting a bronchoscope location, while the second test involved a human airway-tree phantom to test the entire real-time implementation. For both experiments, the Dijkstra-based model parameters were set as follows: β=100, p=3.5, neighborhood=25×25×25 cube (±12 voxels in all three dimensions). The DP model parameters were set as follows: neighborhood=25×25×25 cube, max number of line segments=60. Note that the optimal solutions for all view sites considered in our tests required fewer than the maximum allowed 60 line segments.
The PVC-pipe setup involved three PVC-pipe segments connected with two 90° bends, along with 26 screws inserted through the side of the complete PVC pipe.
Given this setup, the bronchoscope could be inserted to each screw location to compare a predicted bronchoscope tip location to the real known bronchoscope tip location. The test ran as follows:
1. Insert the bronchoscope into the PVC pipe to the first screw tip (location serves as a registration location), using the bronchoscopic video feed for guidance and verification.
2. Place tape around the bronchoscope shaft to mark the insertion depth to the first screw location.
3. Advance the bronchoscope to the next screw tip, as in step 1.
4. Place tape around the bronchoscope shaft to mark the insertion depth to the current screw tip location.
5. Repeat steps 3 and 4 until the last screw tip location is reached.
6. Remove the bronchoscope and manually measure the distance from the first tape mark to all other tape marks, providing a relative insertion depth to each screw tip location.
7. Run the prediction algorithm using manually measured insertion depths relative to the first screw for each of the three bronchoscope models.
8. Compute the Euclidean distance between the predicted locations and the actual screw tip location.
We repeated this test over three trials and averaged the results of the three trials (Table I). The centerline model performed the worst, while the DP model performed the best. On average, the DP model was off by <2 mm. The largest error occurred in PVC-pipe locations where we utilized the bronchoscope's articulating tip to get the bronchoscope to touch a screw; we detected an error of approximately 19 mm at the screw located just beyond the second 90° bend. Once we advanced the bronchoscope 2 cm beyond that location, to where the articulating tip was not heavily utilized, the error shrank to approximately 3 mm.
The second experiment evaluated the entire implementation. During this experiment, we maneuvered a bronchoscope through an airway-tree phantom. A third party constructed the phantom using airway-surface data we extracted from an MDCT scan (case 21405-3a). Thus, the phantom serves as the real physical space, while the MDCT scan serves as the virtual space. The experimental apparatus consisted of the phantom, the bronchoscope, the optical measurement sensor, and a manual angle-measurement apparatus.
Prior to the test, the bronchoscope shaft was covered with semi-transparent tape to give the optical sensor a less reflective surface to track. During the test, we inserted the bronchoscope to each tape mark, following a 75 mm preplanned route to a fictional ROI. The test ran as follows:
1. Insert the bronchoscope to the first tape mark to register the virtual space and the physical space. Record the roll angle using the manual angle-measurement apparatus.
2. Insert the bronchoscope to the next tape mark.
3. Record the three different bronchoscope predictions produced by the three different bronchoscope models.
4. Record the true insertion depth (known by multiplying the tape mark number by 3 mm) and the true roll angle of the bronchoscope (recorded from apparatus).
5. Remove the bronchoscope.
6. Repeat steps 1 through 5 inserting to each subsequent tape mark in step 2 until the target is reached.
We calculated errors using both the hand-made measurements (representing an error-free sensor) and the sensor measurements, providing four different sets of measurements. Error IH is the Euclidean distance between the predicted and true bronchoscope locations using the hand-made measurements. Error IIH is the Euclidean distance between the predicted bronchoscope location and the closest view site to the true bronchoscope location using hand-made measurements. Error IIH does not penalize our method for constraining the predicted location to the centerlines. These two errors quantify the performance of the method given a hypothetical, error-free sensor. The next two errors, IS and IIS, use the measurements provided by the sensor instead of the hand-made measurements, giving the overall error of the method. Table II shows errors IH and IIH, while Table III shows errors IS and IIS, evaluating the whole method.
Recording both hand-made measurements and the optical sensor measurements allowed us to determine how accurate the mouse sensor was. Table IV quantifies how far off the mouse sensor measurements were from the hand-made measurements during the phantom experiment.
The centerline model consistently overestimated the bronchoscopic insertion depth required to reach each view site. The Dijkstra-based model on average underestimated the required insertion depth. The insertion depth calculated from the DP solution tends to be between the other two models, indicating that it might be the best bronchoscope model for estimating an insertion depth to a location in the lungs among the three tested.
Tables II and III indicate that the accuracy of the bronchoscope location prediction using the DP model is within 2 mm of the true location on average. Given that an ROI has a typical size of roughly 10 mm or greater in diameter, an average error of only 2 mm in accuracy is acceptable for guiding a physician to ROIs. Furthermore, a typical airway branch is anywhere between 8 mm and 60 mm in length. In lower generations (close to trachea) the branch lengths tend to be longer, and in higher generations (periphery) they tend to be shorter. Thus, in airway branches, an error of only 2 mm is acceptable to prevent misleading views from incorrectly guiding a physician.
The PVC-pipe experiment excluded any error from the sensor, yet it resulted in higher Euclidean-distance errors on average than the phantom experiment, which included the error from all method components. This is because the PVC-pipe experiment involved navigating the bronchoscope up to a distance of 480 mm while, in the phantom experiment, the bronchoscope was only navigated up to 75 mm. Therefore, with less distance to travel, less error accumulated. Also, the path in the phantom experiment was relatively straight, while the path in the PVC-pipe experiment contained 90-degree angles.
To aid the physician in staying on the correct route to the ROI, the system provides directions that are fused onto the live bronchoscope view when the virtual space and the physical space are synchronized. Assuming that a physician can follow these directions, then the two spaces will remain synchronized. Detecting if and when a physician goes off the path is possible by generating candidate views down possible branches and comparing them to the bronchoscopic video [43].
We first select candidate locations by using the above-mentioned method to track the bronchoscope along two possible branches after a bifurcation, instead of just one route. This provides the system with two candidate bronchoscope locations. Next, we register the VB views generated from each possible branch to the live bronchoscopic video and then compare each VB view to the bronchoscopic video. This assigns a probability to each candidate view indicating whether it was generated from the real bronchoscope's location. We use Bayesian inference techniques to combine multiple probabilities, allowing the system to detect in real time which branch the physician maneuvered the bronchoscope into [43]. Near the end of either of the possible branches, the system selects the branch with the highest Bayesian inference probability as the correct branch. When the system detects that the bronchoscope is not on the optimal route to the ROI, the highlighted paths on the VB view are red instead of blue, and a traffic-light indicator signals the physician to retract the bronchoscope until the physician is on the correct route.
The system invokes this branch selection algorithm every x mm of bronchoscope insertion (default x=2 mm). In between invocation of this branch selection algorithm, the system generates VB views along the branch that currently has the highest Bayesian inference. The further the bronchoscope is inserted, the more refined the Bayesian inference probability becomes. Before a view is displayed to a physician, the system can register it to the current bronchoscope video in real time using the method of Merritt et al. [26, 43].
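The Bayesian fusion step can be sketched as repeated posterior updates over the candidate branches. The likelihoods below stand in for normalized VB-to-video similarity scores, and the function names are our own.

```python
def update_posterior(prior, likelihoods):
    """One Bayesian update over candidate branches.

    prior       : dict branch -> P(branch) before seeing the current frame
    likelihoods : dict branch -> P(frame | branch), e.g. a VB-to-video
                  image-similarity score normalized to (0, 1]
    """
    unnorm = {b: prior[b] * likelihoods[b] for b in prior}
    total = sum(unnorm.values())
    return {b: v / total for b, v in unnorm.items()}

def select_branch(frame_likelihoods, branches=("left", "right")):
    """Fuse evidence from a sequence of frames, starting from a uniform
    prior, and pick the branch with the highest posterior probability."""
    posterior = {b: 1.0 / len(branches) for b in branches}
    for lk in frame_likelihoods:
        posterior = update_posterior(posterior, lk)
    return max(posterior, key=posterior.get), posterior
```

As more frames arrive, the posterior sharpens, matching the observation that the inference becomes more refined the further the bronchoscope is inserted.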
Our method uses a sensor to measure movements made by the bronchoscope to predict where the tip of the bronchoscope is with high accuracy. This bronchoscope guidance method provides VB views that indicate where the physician is in the lungs. Encoded on these views are simple directions for the physician to follow to reach the ROI. If the physician can follow the directions, the bronchoscope will always stay on the correct path, providing continuous, real-time guidance, improving the success rate of bronchoscopic procedures. Furthermore, the system can signal the physician when they maneuver off the correct route.
This method is suited for more than just sampling ROIs during bronchoscopy. It could be useful for treatment delivery including fiducial marker planning and insertion for radiation therapy and treatment. The system, at a higher level, is suitable for thoracic surgery planning. While our system is implemented for use in the lungs, the methods presented are applicable to any application where a long thin device must be tracked along a preplanned route. Some examples include tracking a colonoscope through the colon and tracking a catheter through vasculature [7].
This application claims priority from U.S. Provisional Patent Application Ser. No. 61/439,529, filed Feb. 4, 2011, the entire content of which is incorporated herein by reference.
This invention was made with government support under NIH Grant Nos. R01-CA074325 and R01-CA151433 awarded by the National Cancer Institute. The government has certain rights in the invention.