The present invention relates to segmenting a medical instrument in an ultrasound image and, more particularly, to dynamically performing the segmentation responsive to acquiring the image. Performing “dynamically” or in “real time” is interpreted in this patent application as completing the data processing task without intentional delay, given the processing limitations of the system and the time required to accurately measure the data needed for completing the task.
Ultrasound (US) image guidance increases the safety and efficiency of needle-guided procedures by enabling real-time visualization of the needle position within the anatomical context. The ability to use ultrasound methods such as electronic beam steering to enhance the visibility of the needle in ultrasound-guided procedures has become a significant competitive area in the past few years.
While real-time 3D ultrasound is available, 2DUS is much more widely used for needle-based clinical procedures owing to its greater availability and simpler visualization.
With 2DUS, it is possible to electronically steer the US beam in a lateral direction perpendicular to the needle orientation, producing strong specular reflections that enhance needle visualization dramatically.
Since the current orientation of the probe with respect to the needle is not typically known at the outset, the beam steering angle needed to achieve normal incidence on the needle is also unknown.
Also, visualization is difficult when the needle is not directly aligned within the US imaging plane and/or the background tissue contains other linear specular reflectors such as bone, fascia, or tissue boundaries.
In addition, artifacts arise in the image for various reasons, e.g., grating lobes from steering a linear array at large angles, and specular echoes from the above-mentioned linear reflectors and other specular reflectors that present a sharp attenuation change to ultrasound incident at, or close to, 90 degrees.
“Enhancement of Needle Visibility in Ultrasound-Guided Percutaneous Procedures” by Cheung et al. (hereinafter “Cheung”) discloses automatic segmenting of the needle in an ultrasound image and determining the optimum beam steering angle.
Problematically, specular structures that resemble a needle interfere with needle detection. Speckle noise and imaging artifacts can also hamper the detection.
The solution in Cheung is for the user to jiggle the needle, thereby aiding the segmentation based on difference images.
In addition, Cheung requires user interaction in switching among modes that differ as to the scope of search for the needle. For example, a user reset of the search scope is needed when the needle is lost from view.
Cheung segmentation also relies on intensity-based edge detection that employs a threshold having a narrow range of effectiveness.
What is proposed herein below addresses one or more of the above concerns.
In addition to the above-noted visualization difficulties, visualization is problematic when the needle is not yet deeply inserted into the tissue. Cheung's difficulty in distinguishing the needle from "needle-like" specular reflectors is exacerbated when only a small portion of the needle is detectable, as when the needle is just entering the field of view. In particular, Cheung applies a Hough transform to the edge-detection output of the ultrasound image. Specular structures competing with the needle portion may appear longer, especially at the onset of needle entry into the field of view. They may therefore accumulate more votes in the Hough transform and thereby be identified as the most prominent straight-line feature in the ultrasound image, i.e., as the needle.
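The voting behavior described above is easy to demonstrate. Below is a minimal sketch, not taken from Cheung, using scikit-image's hough_line on a synthetic binary frame: a long bone-like reflector out-votes a short needle segment, so a peak-picking detector reports the wrong structure.

```python
# Minimal sketch (not from the disclosure): illustrates why a longer specular
# structure out-votes a short needle segment in a Hough transform.
import numpy as np
from skimage.transform import hough_line

img = np.zeros((100, 100), dtype=bool)
img[50, 10:90] = True   # long "bone-like" specular reflector (80 px)
img[20, 10:25] = True   # short needle tip just entering the view (15 px)

accumulator, angles, dists = hough_line(img)
# The global maximum of the accumulator corresponds to the longer line,
# so a naive peak-picking detector reports the reflector, not the needle.
peak = np.unravel_index(np.argmax(accumulator), accumulator.shape)
print("strongest line: angle=%.2f rad, dist=%.1f px"
      % (angles[peak[1]], dists[peak[0]]))
```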
Yet, the clinical value of needle detection is questionable if, to determine the needle's pose, there is a need to wait until the needle is more deeply inserted. It would be better if the needle could be detected earlier in the insertion process, when the physician can evaluate its trajectory and change course without causing more damage and pain.
Reliable needle segmentation would allow automatic setting of the optimal beam steering angle, time gain compensation, and the image processing parameters, resulting in potentially enhanced visualization and clinical workflow.
In addition, segmentation and detection of the needle may allow fusion of ultrasound images with pre-operative modalities such as computed tomography (CT) or magnetic resonance (MR) imaging, enabling specialized image fusion systems for needle-based procedures.
A technological solution is needed for automatic needle segmentation that does not rely on the assumption that the needle is the brightest linear object in the image.
In an aspect of what is proposed herein, a classification-based medical image segmentation apparatus includes an ultrasound image acquisition device configured for acquiring, from ultrasound, an image depicting a medical instrument; and machine-learning-based-classification circuitry configured for using machine-learning-based-classification to, dynamically responsive to the acquiring, segment the instrument by operating on information derived from the image.
In sub-aspects or related aspects, US beam steering is employed to enhance the appearance of specular reflectors in the image. Next, a pixel-wise needle classifier trained from previously acquired ground truth data is applied to segment the needle from the tissue background. Finally, a Radon or Hough transform is used to detect the needle pose. The segmenting is accomplished via statistical boosting of wavelet features. The whole process of acquiring an image, segmenting the needle, and displaying an image with a visually enhanced and artifact-free needle-only overlay is done automatically and without the need for user intervention.
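By way of illustration only, the following sketch strings the described stages together in Python. The trained classifier clf and the helper wavelet_features() are hypothetical stand-ins (the feature extraction and classifier training are sketched further below); scikit-image supplies the line detection.

```python
# Hedged sketch of the described pipeline: pixel-wise classification of a
# beam-steered frame followed by Hough-based pose detection. wavelet_features()
# is a hypothetical stand-in for the Log-Gabor feature extraction; clf is a
# classifier already trained on ground-truth needle masks.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def segment_and_detect(clf, frame, wavelet_features):
    """Classify every pixel as needle/background, then fit a line to the map."""
    feats = wavelet_features(frame)                 # (H, W, n_features)
    h, w, n = feats.shape
    labels = clf.predict(feats.reshape(-1, n))      # pixel-wise needle classifier
    mask = labels.reshape(h, w).astype(bool)        # binary pixel map
    acc, angles, dists = hough_line(mask)           # line detection on the map
    _, best_angles, best_dists = hough_line_peaks(acc, angles, dists,
                                                  num_peaks=1)
    return mask, best_angles[0], best_dists[0]      # pose: angle + offset
```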
Validation on ex-vivo and clinical datasets shows enhanced detection in challenging cases where sub-optimal needle position and tissue artifacts cause intensity-based segmentation to fail.
Details of the novel, real time classification-based medical image segmentation are set forth further below, with the aid of the following drawings, which are not drawn to scale.
The apparatus 100 further includes machine-learning-based-classification circuitry 168 that embodies a boosted classifier 172, such as Adaboost™, which is the best-known statistical boosting algorithm.
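As one concrete, hedged realization of the boosted classifier 172, the snippet below trains scikit-learn's AdaBoostClassifier on pixel-wise wavelet features against ground-truth needle labels; decision stumps as weak learners and 200 boosting rounds are assumptions for illustration, not specifics from this disclosure.

```python
# Sketch of training the boosted pixel classifier from ground-truth data,
# using scikit-learn's AdaBoostClassifier (scikit-learn >= 1.2) as one
# concrete realization of statistical boosting.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_needle_classifier(X, y, n_rounds=200):
    """X: (n_pixels, n_features) wavelet features; y: 1 = needle, 0 = background."""
    stump = DecisionTreeClassifier(max_depth=1)     # classic AdaBoost weak learner
    clf = AdaBoostClassifier(estimator=stump, n_estimators=n_rounds)
    return clf.fit(X, y)
```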
For user interaction in monitoring via live imaging, the apparatus also includes a display 176 and user controls 180.
In the clinical procedure, the wavelets 204 are oriented incrementally in different angular directions as part of a sweep through an angular range, since 2D Log-Gabor filters can be oriented to respond to spatial frequencies in different directions. At each increment, a respective needle image 208 is acquired at the current beam angle 236; the oriented wavelet 204 is applied, thereby operating on the image, to derive information, i.e., Fi,x,y 212, from the image; and the above-described segmentation operates on the derived information. In the latter step, the boosted classifier 172 outputs a binary pixel map 240 Mx,y whose entries are apportioned between needle pixels and background pixels. Depending on the extraction mode chosen by the operator, or depending on the implementation, the needle portion of the map 240 can be extracted 244a and directly overlaid 252 onto a B-mode image, or a line detection algorithm such as a Radon transform or Hough transform (HT) 248 can be used to derive a position and angle of the needle 136. In the latter case, a fresh needle image can be acquired, the background then being masked out, and the resulting, extracted 244b "needle-only" image superimposed 256 onto a current B-mode image. Thus, the extraction mode can be set to "pixel map" or "ultrasound image." This choice is reflected in steps S432, S456 and S556, which are discussed further below.
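For illustration, a 2D Log-Gabor filter of the kind referred to above can be built in the frequency domain and steered to an orientation θ0; the parameter values below (center frequency, bandwidth ratio, angular spread) are assumptions chosen for a plausible sketch, not values from this disclosure.

```python
# Minimal sketch (assumed parameterization) of a steerable 2D Log-Gabor
# filter applied in the frequency domain; the filtered magnitude serves as
# one feature plane Fi,x,y per orientation.
import numpy as np

def log_gabor_response(img, theta0, f0=0.1, sigma_ratio=0.65, sigma_theta=0.4):
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1e-6                                    # avoid log(0) at DC
    theta = np.arctan2(fy, fx)
    # radial term: Gaussian on a log-frequency axis (the "Log" in Log-Gabor)
    radial = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    # angular term: Gaussian in orientation, with wrap-around handled
    dtheta = np.angle(np.exp(1j * (theta - theta0)))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    G = radial * angular
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * G))
```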
Subroutines callable for performing the clinical procedure are described below in exemplary implementations.
In a first subroutine 400, the wavelets 204 are oriented for the current beam angle 236 (step S404). The needle image 208 is acquired (step S408). The wavelets 204 are applied to the needle image 208 (step S412). The output is processed by the boosted statistical classifier 172 (step S416). The binary pixel map 240 is formed for the needle image 208 (step S420).
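A hedged sketch of the first subroutine 400 follows, mapping steps S404-S420 onto code. It reuses log_gabor_response() and a classifier trained as in the earlier sketches; acquire_needle_image() is a hypothetical scanner call, and offsetting the wavelet orientations by the beam angle is an assumption for illustration.

```python
# Hedged sketch of subroutine 400 (steps S404-S420). Assumes
# log_gabor_response() from the earlier sketch and a trained classifier clf;
# acquire_needle_image() is a hypothetical scanner interface.
import numpy as np

def subroutine_400(clf, acquire_needle_image, beam_angle, orientations):
    thetas = orientations + np.deg2rad(beam_angle)      # S404: orient wavelets
    frame = acquire_needle_image(beam_angle)            # S408: acquire image
    feats = np.stack([log_gabor_response(frame, t)      # S412: apply wavelets
                      for t in thetas], axis=-1)
    h, w, n = feats.shape
    labels = clf.predict(feats.reshape(-1, n))          # S416: boosted classifier
    return labels.reshape(h, w).astype(np.uint8)        # S420: binary pixel map
```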
In a second subroutine 410, the beam angle 236 is initialized to 5° (step S424). The first subroutine 400 is invoked (step S428), to commence at entry point "A".
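The following sketch of the second subroutine 410 initializes the beam angle to 5° and repeatedly invokes the first subroutine across a sweep. The step size, the upper bound, and the pixel-count criterion for choosing the best steering angle are assumptions for illustration only, not specifics from this disclosure.

```python
# Hedged sketch of subroutine 410: sweep the steering angle from 5° (S424),
# invoking subroutine_400 at each step (S428, entry point "A"). Step size,
# range, and the selection criterion are assumptions.
def subroutine_410(clf, acquire_needle_image, orientations,
                   start_deg=5.0, step_deg=5.0, max_deg=40.0):
    maps = {}
    angle = start_deg                                   # S424: initialize to 5°
    while angle <= max_deg:
        maps[angle] = subroutine_400(clf, acquire_needle_image,
                                     angle, orientations)
        angle += step_deg
    # choose the steering angle whose pixel map contains the most needle pixels
    best = max(maps, key=lambda a: maps[a].sum())
    return best, maps[best]
```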
The user notifications in steps S540 and S548 can be sensory, e.g., auditory, tactile or visual. For example, illuminations on a panel or display screen may, while in an “on” state, indicate that the respective mode of operation is active.
A needle-presence-detection mode 588 of operation corresponds, for example, to steps S512-S532.
A needle-insertion-detection mode 592 corresponds, for example, to steps S512 and S544-S552.
A needle visualization mode 596 corresponds, for example, to steps S556-S564. One can exit the needle visualization mode 596, yet remain in the needle-insertion-detection mode 592. If, at some time thereafter, the needle-insertion-detection mode 592 detects re-entry of the needle 136 into the field of view 116, the needle visualization mode 596 is re-activated automatically and without the need for user intervention. In the instant example, the needle-presence-detection mode 588 enables the needle-insertion-detection mode 592 and thus is always active during that mode 592.
The above modes 588, 592, 596 may be collectively or individually activated or deactivated by the user controls 180, and may each be incorporated into a larger overall mode.
Each of the above modes 588, 592, 596 may exist as an option of the apparatus 100, user-actuatable for example, or alternatively may be part of the apparatus without any option for switching off the mode.
It is the quality and reliability of the needle segmentation proposed herein above that enables the modes 588, 592, 596.
Although the proposed methodology can advantageously be applied in providing medical treatment to a human or animal subject, the scope of the present invention is not so limited. More broadly, techniques disclosed herein are directed to machine-learning-based image segmentation in vivo and ex vivo.
A classification-based medical image segmentation apparatus includes an ultrasound image acquisition device configured for acquiring, from ultrasound, an image depicting a medical instrument such as a needle; and machine-learning-based-classification circuitry configured for using machine-learning-based-classification to, dynamically responsive to the acquiring, segment the instrument by operating on information derived from the image. The segmenting can be accomplished via statistical boosting of parameters of wavelet features. Each pixel of the image is identified as "needle" or "background." The whole process of acquiring an image, segmenting the needle, and displaying an image with a visually enhanced and artifact-free needle-only overlay may be performed automatically and without the need for user intervention. The reliable needle segmentation affords automatic setting of the optimal beam steering angle, time gain compensation, and the image processing parameters, resulting in enhanced visualization and clinical workflow.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
For example, the needle-insertion-detection mode 592 is capable of detecting at least part of the needle 136 when as little as 7 millimeters of the needle has been inserted into the body tissue and, as mentioned herein above, when as little as 2.0 mm is within the ultrasound field of view.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. Any reference signs in the claims should not be construed as limiting the scope.
A computer program can be stored momentarily, temporarily or for a longer period of time on a suitable computer-readable medium, such as an optical storage medium or a solid-state medium. Such a medium is non-transitory only in the sense of not being a transitory, propagating signal, but includes other forms of computer-readable media such as register memory, processor cache and RAM.
A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
This application claims the benefit of U.S. Provisional Patent Application No. 61/918,912, filed on Dec. 20, 2013 which is hereby incorporated by reference herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2014/066411 | 11/28/2014 | WO | 00
Number | Date | Country
---|---|---
61918912 | Dec 2013 | US
62019087 | Jun 2014 | US