The present invention relates to x-ray imaging, and more particularly, to needle tracking in 2D fluoroscopic image sequences.
In image guided abdominal interventions, needle tracking has important applications. Needle tracking provides the positions of needles in fluoroscopic images of the abdomen. Needle tracking can be used to compensate for breathing motion in fluoroscopic image sequences and to guide real-time overlaying of 3D images, which are acquired prior to an intervention. Needle detection and tracking in fluoroscopic image sequences is challenging due to the low signal to noise ratio of the fluoroscopic images, as well as the need for real-time speed and a high level of accuracy. Furthermore, different types of needles typically show a large variation in shape and appearance in fluoroscopic images, thus increasing the difficulty of implementing automatic needle tracking.
Since a needle is essentially a one-dimensional thin structure, tracking methods that use regional features, such as holistic intensity, textures, and color histograms, cannot track a needle well in fluoroscopic images. Active contour and level set based methods rely heavily on intensity edges, so they are easily attracted to image noise in fluoroscopic images. Considering the noise level in typical fluoroscopic images, conventional methods cannot deliver the desired speed, accuracy, and robustness for abdominal interventions. Accordingly, a robust, efficient, and accurate method of needle tracking is desirable.
The present invention provides a method and system for needle tracking in fluoroscopic image sequences. Embodiments of the present invention provide a hierarchical framework to continuously and robustly track a needle for image-guided interventions. Embodiments of the present invention can be used to track the motion of a needle for breathing motion compensation in abdominal interventions.
In one embodiment of the present invention, needle segments are detected in a plurality of frames of the fluoroscopic image sequence. The needle in a current frame of the fluoroscopic image sequence is then detected by tracking the needle from a previous frame of the fluoroscopic image sequence based on the needle position in the previous frame and the detected needle segments in the current frame. The needle can be initialized in a first frame of the fluoroscopic image sequence, for example, using an interactive needle detection method.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
The present invention relates to a method and system for needle tracking in fluoroscopic image sequences. Embodiments of the present invention are described herein to give a visual understanding of the needle tracking method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the object. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
Embodiments of the present invention provide a framework for detecting and tracking a needle for breathing motion compensation in abdominal interventions. In this framework, an interactive needle detection method can be used to initialize the needle position and shape at the beginning of an intervention. As subsequent images are acquired, the needle is continuously tracked, and the positions of the needle in the frames of the fluoroscopic image sequence are output and used for breathing motion compensation in the intervention. Embodiments of the present invention utilize a hierarchical framework in which, at each frame, an offline learned (trained) needle segment detector automatically identifies small segments of a needle and provides primitive features for subsequent tracking. Based on the identified segments, an affine tracking method robustly tracks the needle motion across successive frames.
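For purposes of illustration only, the following sketch outlines such a hierarchical tracking loop in Python. All names (track_sequence, detect_segments, track_affine) are illustrative placeholders rather than part of the described embodiments; the segment detector and the affine tracker are supplied as callables and are described in more detail below.

```python
# Illustrative sketch of the hierarchical tracking loop described above.
# All function and variable names are hypothetical placeholders.
from typing import Callable, List, Sequence
import numpy as np

def track_sequence(
    frames: Sequence[np.ndarray],
    initial_needle: np.ndarray,                                     # (N, 2) needle points from initialization
    detect_segments: Callable[[np.ndarray], np.ndarray],            # frame -> (M, 2) segment locations
    track_affine: Callable[[np.ndarray, np.ndarray], np.ndarray],   # (needle, segments) -> tracked needle
) -> List[np.ndarray]:
    """Track the needle through every frame after the first."""
    needles = [initial_needle]
    for frame in frames[1:]:
        segments = detect_segments(frame)              # offline-trained segment detector
        needle = track_affine(needles[-1], segments)   # align previous needle to the new segments
        needles.append(needle)
    return needles

# Trivial demo with dummy components (identity tracker, no detections).
if __name__ == "__main__":
    frames = [np.zeros((64, 64)) for _ in range(3)]
    init = np.stack([np.linspace(10, 50, 20), np.full(20, 32.0)], axis=1)
    result = track_sequence(frames, init,
                            detect_segments=lambda f: np.empty((0, 2)),
                            track_affine=lambda n, s: n)
    print(len(result), "tracked frames")
```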
At step 104, a needle is initialized in a first frame of the fluoroscopic image sequence. As used herein, the term “first frame” refers to any frame in a sequence of fluoroscopic images at which the needle tracking process is initialized. According to an exemplary embodiment, in order to ensure accuracy and robustness during interventions, an interactive needle detection method can be used to initialize the needle in the first frame of the fluoroscopic image sequence. The interactive needle detection can detect the needle in the frame in response to user inputs that constrain the search for the needle. For example, a user may select two points in the frame, each one at or near one end of the needle. The user can select the points by clicking on the points with a user input device, such as a mouse. If the initial needle detected based on the two points selected by the user is not satisfactory, additional points may be selected by the user to further constrain the needle detection method in order to obtain refined detection results. Such an interactive detection method is described in greater detail in Mazouer et al., “User-Constrained Guidewire Localization in Fluoroscopy,” Medical Imaging 2009: Physics of Medical Imaging, Proc. SPIE, Vol. 7258, pp. 72561K-72591K-9 (2009), which is incorporated herein by reference. Although an interactive needle detection method is described above, the present invention is not limited thereto, and a fully automatic needle detection method or fully manual needle detection may also be used to initialize the needle in the first frame of the fluoroscopic image sequence.
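For purposes of illustration only, a greatly simplified stand-in for such an initialization is sketched below. The actual user-constrained detection of Mazouer et al. searches for the needle curve between the user-selected points, whereas this sketch merely interpolates a straight polyline between two hypothetical clicks to form an initial template; all names are illustrative.

```python
# Simplified stand-in for interactive initialization: interpolate a straight
# polyline between two hypothetical user-selected endpoints. The real method
# detects the actual needle curve constrained by the clicked points.
import numpy as np

def init_needle_from_clicks(p_start, p_end, num_points=32):
    """Return a (num_points, 2) polyline between two user-selected endpoints."""
    p_start = np.asarray(p_start, dtype=float)
    p_end = np.asarray(p_end, dtype=float)
    t = np.linspace(0.0, 1.0, num_points)[:, None]
    return (1.0 - t) * p_start + t * p_end

initial_needle = init_needle_from_clicks((120, 200), (340, 260))
```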
Returning to
The appearance of a needle is difficult to distinguish in a fluoroscopic image due to the low signal to noise ratio. Traditional edge detectors and ridge detectors produce many false detections while missing thin needle parts. According to an embodiment of the present invention, a learning based needle segment detection method is used. This learning based needle segment detection method utilizes a needle segment detector that is trained offline using training data to detect needle segments in each frame of the fluoroscopic image sequence. Such a trained needle segment detector can identify weak needle segments in low quality fluoroscopic images.
According to an advantageous embodiment, the learning based needle detection method uses a probabilistic boosting tree (PBT) to train a needle segment detector based on annotated training data. PBT is a supervised learning method extended from the well-known AdaBoost algorithm, which combines weak classifiers into a strong classifier. PBT further extends AdaBoost into a tree structure and is able to model complex distributions of objects, which is desirable for handling different types of needles in fluoroscopy. According to an advantageous implementation, Haar features are extracted from the images as the features used in the PBT classifier. Haar features measure image intensity differences over many configurations of rectangular regions and are fast to compute. In order to train the needle segment detector, numerous needles are annotated in training fluoroscopic images. Segments of the annotated needles are cropped as positive training samples, and image patches outside the needles are used as negative training samples. The training samples are then used to train the PBT based needle segment detector offline.
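For purposes of illustration only, the following sketch shows such offline training on synthetic data. A probabilistic boosting tree implementation is not assumed to be available, so a boosted-tree classifier from scikit-learn stands in for the PBT, and simple box-difference responses stand in for the Haar features; all names and data are illustrative.

```python
# Sketch of offline training of a segment detector. A boosted-tree classifier
# stands in for the PBT; the features and training patches are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

PATCH = 16  # patch size in pixels

def haar_features(patch: np.ndarray) -> np.ndarray:
    """A few simple Haar-like responses (differences of box sums)."""
    h, w = patch.shape
    left, right = patch[:, : w // 2].sum(), patch[:, w // 2 :].sum()
    top, bottom = patch[: h // 2, :].sum(), patch[h // 2 :, :].sum()
    center = patch[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    return np.array([left - right, top - bottom, 2 * center - patch.sum()])

def make_patch(positive: bool, rng: np.random.Generator) -> np.ndarray:
    """Synthetic patch: noisy background, plus a faint dark line if 'positive'."""
    patch = rng.normal(0.0, 1.0, (PATCH, PATCH))
    if positive:
        patch[PATCH // 2, :] -= 2.0   # dark thin structure through the patch
    return patch

rng = np.random.default_rng(0)
X, y = [], []
for label in (1, 0):
    for _ in range(200):
        X.append(haar_features(make_patch(label == 1, rng)))
        y.append(label)
detector = GradientBoostingClassifier().fit(np.array(X), np.array(y))
```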
During online needle segment detection in an input image, such as a frame of the received fluoroscopic image sequence, the trained needle segment detector identifies whether a patch of the image belongs to a needle or to the background. The output of the PBT-based needle segment detector, denoted as P(x) for an image patch at the position x, is a combination of the outputs of a collection of learned weak classifiers Hk(x) with associated weights αk. The numeric output can be further interpreted as a probabilistic measurement of needle segments, as expressed in Equation (1):
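The referenced equation is not reproduced in this text. Assuming the standard two-class boosting formulation, such a combination can be written as the logistic mapping of the weighted sum of weak classifier responses (the exact expression used in the described embodiment may differ):

$$P(x) = \frac{\exp\!\Big(\sum_{k} \alpha_k H_k(x)\Big)}{\exp\!\Big(\sum_{k} \alpha_k H_k(x)\Big) + \exp\!\Big(-\sum_{k} \alpha_k H_k(x)\Big)} \qquad (1)$$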
An image patch at the position x can be classified as a needle segment if P(x) is greater than a threshold (e.g., P(x)>0.5), and classified as background if P(x) is not greater than the threshold. As illustrated in
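For purposes of illustration only, the following sketch scans a frame on a regular grid and keeps the locations whose score exceeds the threshold. The scoring function segment_prob is assumed to wrap the trained segment detector; all names are illustrative.

```python
# Illustrative thresholding of detector scores into needle-segment locations.
import numpy as np

def detect_needle_segments(image, segment_prob, stride=4, threshold=0.5):
    """Scan the frame on a grid; keep locations classified as needle segments."""
    rows, cols = image.shape
    points = []
    for r in range(0, rows, stride):
        for c in range(0, cols, stride):
            if segment_prob(image, (r, c)) > threshold:   # P(x) > 0.5 -> needle segment
                points.append((r, c))
    return np.array(points)

# Demo with a dummy scoring function that fires on dark pixels.
frame = np.ones((64, 64))
frame[32, :] = 0.0                                        # a dark horizontal "needle"
segment_points = detect_needle_segments(frame, lambda im, x: 1.0 - im[x])
print(segment_points.shape)
```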
Returning to
By using the detected needle segments as primitive features, a coarse-to-fine affine tracking method is utilized to detect a needle candidate with maximum posterior probability. This coarse-to-fine tracking is based on a variable bandwidth kernel-based density smoothing method, which provides effective and efficient needle tracking results.
The needle tracking is formalized in a probabilistic inference framework to maximize the posterior probability of a tracked needle given the fluoroscopic images. In this framework, a needle hypothesis at the t-th frame is deformed from the needle in a previous frame. The needle hypothesis at the t-th frame is denoted as Γt(x;u):
$$\Gamma_t(x; u) = T\big(\Gamma_{t-1}(x), u_x\big), \qquad (2)$$
where T is a needle shape transformation function and ux is the motion parameter. Γt-1(x) is the tracked (detected) needle at the previous frame, which acts as a template for tracking the needle at the t-th frame. For simplicity of notation, a needle candidate is denoted hereinafter as Γt(x). The posterior probability P(Γt(x)|Zt) can be expressed as:
$$P\big(\Gamma_t(x) \mid Z_t\big) \propto P\big(\Gamma_t(x)\big)\, P\big(Z_t \mid \Gamma_t(x)\big). \qquad (3)$$
The tracked needle Γ̂t(x) is estimated as the needle candidate that maximizes the posterior probability in Equation (3), i.e., the candidate Γt(x) for which P(Γt(x)|Zt) is largest.
In equation (3), P(Γt(x)) is a prior probability, which can be propagated from previous tracking results. The prior probability can be modeled as:
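The expression for this prior is not reproduced in this text; a form consistent with the kernel size σΓ and the template distance described below (the exact expression in the described embodiment may differ) is:

$$P\big(\Gamma_t(x)\big) \propto \exp\!\left(-\,\frac{D\big(\Gamma_t(x), \Gamma_{t-1}(x)\big)^2}{2\,\sigma_\Gamma^{2}}\right)$$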
where D(Γt(x),Γt-1(x)) is the average of the shortest distances from points on the candidate Γt(x) to the shape template Γt-1(x). A large kernel size σΓ can be selected to allow large needle movements between frames. The likelihood measurement model P(Zt|Γt(x)) is another component that plays a crucial role in achieving robust tracking results. Given a needle represented by N points Γt(x)={x1, x2, . . . , xN} that are interpolated from control points, the needle Γt(x) lies in an N-dimensional space, which makes the measurement model P(Zt|Γt(x)) difficult to represent. In order to simplify the model, measurement independence can be assumed along the needle, i.e., P(Zt|xi,Γt(x))=P(Zt|xi). Using this assumption, the measurement model P(Zt|Γt(x)) can be decomposed into measurements at individual needle points, expressed as:
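The decomposition itself is not reproduced in this text; consistent with the per-point measurements and per-point weights described below, it can be written as (the exact expression in the described embodiment may differ):

$$P\big(Z_t \mid \Gamma_t(x)\big) = \sum_{i=1}^{N} P\big(Z_t \mid x_i\big)\, P\big(x_i \mid \Gamma_t(x)\big)$$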
where P(Zt|xi) is the measurement at an individual point xi on the needle, and P(xi|Γt(x)) is the weight of that point on the needle. According to an advantageous implementation, such weights can be set equal to each other.
Embodiments of the present invention utilize multi-resolution affine tracking, which is based on a kernel-based smoothing method. Obtaining measurements at every point x in a frame is computationally expensive and prone to measurement noise at individual points. For example, measurements at points that are classified by the PBT-based needle segment detector as non-needle segments may not be reliable. The measurements can be made more robust and more efficient to compute by using kernel-based estimation (or smoothing). In the kernel-based estimation, measurements are made at a set of sampled locations xjs of an entire image. In an advantageous implementation, the sampled locations xjs are the points in a frame classified as needle segments by the trained needle segment detector. Based on Markov conditional independence, it can be assumed that observations at the sampling points xjs are independent of the un-sampled points xi, i.e., P(Zt|xi,xjs)=P(Zt|xjs). Therefore, the kernel-based measurement estimation can be expressed as:
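The estimation itself is not reproduced in this text; consistent with the Gaussian kernel defined below and with the references to equation (6) in the following paragraph, it can be written as (the exact expression in the described embodiment may differ):

$$P\big(Z_t \mid x_i\big) = \sum_{j} P\big(Z_t \mid x_j^{s}\big)\, P\big(x_j^{s} \mid x_i\big) = \sum_{j} P\big(Z_t \mid x_j^{s}\big)\, G_\sigma\big(x_j^{s}, x_i\big) \qquad (6)$$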
where P(xjs|xi)=Gσ(xjs,xi) is a Gaussian kernel with a bandwidth σ. The kernel-based measurement estimation can obtain smooth measurements in a neighborhood, reduce the computations necessary to calculate the measurements, and allow for multi-resolution tracking.
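For purposes of illustration only, the following sketch computes such a kernel-smoothed measurement at an arbitrary point from detector scores observed at sampled needle-segment locations. The normalization and all names are illustrative choices.

```python
# Illustrative kernel-based measurement smoothing: detector scores at sampled
# needle-segment locations are spread to any point with a Gaussian kernel.
import numpy as np

def smoothed_measurement(x, sample_points, sample_scores, bandwidth):
    """Estimate P(Z|x) at point x from scores at the sampled locations."""
    x = np.asarray(x, dtype=float)
    d2 = np.sum((np.asarray(sample_points, dtype=float) - x) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(np.sum(weights * sample_scores) / (np.sum(weights) + 1e-12))

# Demo: two detected segment locations with their detector probabilities.
samples = [(30.0, 40.0), (32.0, 44.0)]
scores = np.array([0.9, 0.7])
print(smoothed_measurement((31.0, 42.0), samples, scores, bandwidth=4.0))
```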
Affine tracking recovers the motion of the needle between two successive frames, i.e., ux=u=(c,r,θ,sc,sr), where c, r, and θ are the translation and rotation parameters, and sc and sr are scale factors in two directions. The affine tracking is formulated as determining the motion parameters that maximize the posterior probability, i.e., as maximizing E(u), as expressed below:
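The expression for E(u) is not reproduced in this text; combining the prior and the decomposed, kernel-smoothed likelihood from above, it can be written as (the exact expression in the described embodiment may differ):

$$\hat{u} = \arg\max_{u} E(u), \qquad E(u) = P\big(\Gamma_t(x;u)\big) \sum_{i=1}^{N} P\big(Z_t \mid x_i(u)\big)\, P\big(x_i(u) \mid \Gamma_t(x;u)\big)$$

where xi(u) denotes the i-th needle point after the affine transformation with parameters u.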
Tracking the affine motion can be efficiently implemented using variable bandwidths in the kernel-based measurement smoothing. For example, translation searching can be performed at multiple resolutions for each frame, with decreasing intervals {d1>d2> . . . >dT}. During the multi-resolution tracking, the corresponding bandwidth in equation (6) varies accordingly and is denoted as σi. Incrementally decreasing the bandwidth σi leads to a coarse-to-fine tracking scheme. For example, to search for the translation of a needle, a larger kernel bandwidth σi is used at coarse resolutions to avoid losing track of the needle due to the larger sampling intervals. At fine resolutions, a smaller kernel bandwidth is used to obtain finer tracking results. Rotation and scale searches are performed in a multi-resolution manner similar to the translation search. At coarse resolutions, the method searches a large range of rotations and scalings of the needle and uses a larger bandwidth σi. At fine resolutions, a small range of rotations and scalings is searched with a small bandwidth. In this method, the translation, rotation, and scaling factors are searched simultaneously. The kernel bandwidth in equation (6) is set proportional to the sampling interval, so the tracking results automatically adapt to different resolutions.
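For purposes of illustration only, the following sketch implements such a coarse-to-fine affine search: candidate motions of the previous needle are scored against kernel-smoothed detector responses, and the best motion is refined with progressively smaller search intervals and proportionally smaller kernel bandwidths. The search ranges, the bandwidth factor, and all names are illustrative choices rather than the exact method of the described embodiment.

```python
# Illustrative coarse-to-fine affine search over translation, rotation and scale.
import numpy as np

def apply_affine(points, u):
    """Apply u = (c, r, theta, sc, sr) about the needle centroid."""
    c, r, theta, sc, sr = u
    ctr = points.mean(axis=0)
    p = (points - ctr) * np.array([sc, sr])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return p @ rot.T + ctr + np.array([c, r])

def score(points, samples, sample_scores, bandwidth):
    """Average kernel-smoothed measurement over all needle points."""
    d2 = np.sum((points[:, None, :] - samples[None, :, :]) ** 2, axis=2)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(np.sum(w * sample_scores) / points.shape[0])

def coarse_to_fine_track(prev_needle, samples, sample_scores,
                         intervals=(8.0, 4.0, 2.0, 1.0)):
    best = np.zeros(5)
    best[3:] = 1.0                      # start from the identity motion
    for d in intervals:                 # decreasing intervals d_1 > ... > d_T
        bandwidth = 2.0 * d             # kernel bandwidth proportional to the interval
        candidates = []
        for dc in (-d, 0.0, d):
            for dr in (-d, 0.0, d):
                for dth in (-0.05 * d, 0.0, 0.05 * d):
                    for ds in (1.0 - 0.01 * d, 1.0, 1.0 + 0.01 * d):
                        u = best + np.array([dc, dr, dth, 0.0, 0.0])
                        u[3] = best[3] * ds
                        u[4] = best[4] * ds
                        candidates.append(u)
        best = max(candidates,
                   key=lambda u: score(apply_affine(prev_needle, u),
                                       samples, sample_scores, bandwidth))
    return best

# Demo: previous needle as a short polyline, detections shifted by (5, 3).
prev = np.stack([np.linspace(0, 30, 16), np.zeros(16)], axis=1)
detections = prev + np.array([5.0, 3.0])
det_scores = np.ones(len(detections))
print(coarse_to_fine_track(prev, detections, det_scores))
```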
As illustrated in
Returning to
The above-described methods for needle tracking in fluoroscopic image sequences may be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high level block diagram of such a computer is illustrated in
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 61/242,078, filed Sep. 14, 2009, the disclosure of which is herein incorporated by reference.