The present invention relates to detection of image features, and more particularly to feature detection in ultrasound images.
Ultrasound is a commonly used medical imaging modality. Compared to other medical imaging modalities, such as X-ray, magnetic resonance (MR), and positron emission tomography (PET), ultrasound has many advantages, as ultrasound is fast, portable, relatively low cost, and presents little risk to patients.
One limitation of ultrasound is image quality. Ultrasound images are often corrupted by speckle resulting from the coherent accumulation of random scattering in a resolution cell of the ultrasound beam. While the texture of the speckle does not correspond to any underlying structure, the local brightness of the speckle pattern is related to the local echogenicity of underlying scatterers. The speckle can have a detrimental effect on image quality and interpretability, and can cause inaccurate feature detection in ultrasound images.
Conventional approaches, such as the Canny edge detector, commonly detect features in images based on gradient operators. Often this is achieved by convolution of the image with a bandpass kernel K, which can be modeled as the derivative of a Gaussian function,

K(x) = -\frac{x}{\sigma^2} e^{-x^2/(2\sigma^2)},

where σ² is the variance. The gradient can then be defined as G_x = K * I and G_y = K^T * I, where I is the image and K is a 1D horizontal kernel. A feature map for identifying features in the image has a value equal to the gradient magnitude, F = \sqrt{G_x^2 + G_y^2}, for each pixel in the image. However, since the gradient of an image is sensitive to speckle, the speckle can adversely affect the feature map, leading to inaccurate feature detection. While increasing the variance may help to blur over the speckle, the effect of the speckle is often still apparent in feature maps generated using conventional methods. Furthermore, larger variances also blur edges in images, making actual image features more difficult to detect.
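For reference, the conventional detector described in the preceding paragraph can be sketched in a few lines. The following Python/NumPy rendering is illustrative only; the language, the function name, and the use of SciPy's Gaussian derivative filter as the kernel K are assumptions, not part of any described embodiment:

```python
import numpy as np
from scipy import ndimage

def gradient_feature_map(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Conventional feature map: gradient magnitude from derivative-of-Gaussian
    kernels, as in the Canny-style detector described above."""
    # order=1 along an axis convolves with the derivative of a Gaussian,
    # i.e., Gx = K * I and Gy = K^T * I for a 1D horizontal kernel K.
    gx = ndimage.gaussian_filter(image, sigma=sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(image, sigma=sigma, order=(1, 0))
    # F = sqrt(Gx^2 + Gy^2); a larger sigma blurs speckle but also blurs edges.
    return np.sqrt(gx**2 + gy**2)
```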
The present invention provides a method and system for feature detection in ultrasound images which is less sensitive to speckle. Embodiments of the present invention provide an information-theoretic method which utilizes models of the speckle distribution in ultrasound images to estimate the speckle distribution in various regions of an ultrasound image. By comparing estimates of the speckle distribution in neighboring regions of an ultrasound image, it is possible to detect salient features embedded in speckle.
In one embodiment of the present invention, speckle distributions are estimated in first and second windows on opposing sides of a pixel of an ultrasound image. A divergence value is calculated for the pixel between the estimated speckle distributions in the first and second windows. These steps may be performed for each pixel in the ultrasound image, and a feature map may be generated based on the divergence calculated between the estimated speckle distributions for each pixel. It is also possible that speckle distributions are estimated in third and fourth windows on opposing sides of each pixel in a different direction than the first and second windows. A divergence value can be calculated for each pixel between the estimated speckle distributions in the third and fourth windows and combined with the divergence value calculated for each pixel between the estimated speckle distributions in the first and second windows.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
FIG. 1 illustrates exemplary ultrasound image formation;
FIG. 2 illustrates a method for feature detection in ultrasound images according to an embodiment of the present invention;
FIG. 3(a) illustrates an exemplary cardiac ultrasound image;
FIG. 3(b) illustrates a feature map of the cardiac ultrasound image of FIG. 3(a);
FIG. 3(c) illustrates a feature map of the cardiac ultrasound image of FIG. 3(a);
FIGS. 4(a)-4(d) illustrate an effect of window size on exemplary results of an embodiment of the present invention;
FIGS. 5(a)-5(f) illustrate exemplary results of embodiments of the present invention compared with results of conventional methods; and
FIG. 6 illustrates a high level block diagram of a computer capable of implementing the present invention.
The present invention is directed to a method for feature detection in ultrasound images using an information-theoretic approach based on speckle intensity distribution models derived from physical models of ultrasound image formation.
where QI(x,y) is complex.
In order to produce a real image, envelope detection is performed by calculating the magnitude of the IQ image QI(x,y) using an absolute value operator 106, thus generating a magnitude IQ image M(x,y). The speckle in the magnitude IQ image M(x,y) can be modeled as having a Rayleigh distribution,

p(M(x,y)) = \frac{M(x,y)}{\sigma^2} \exp\left( -\frac{M^2(x,y)}{2\sigma^2} \right),   (2)

where M(x,y) is real. The Rayleigh distribution is described in detail in J. Goodman, Speckle Phenomena in Optics: Theory and Applications, Version 5.0, 2005, which is incorporated herein by reference.
Since the magnitude IQ image M(x,y) has a large dynamic range, the magnitude IQ image M(x,y) can be logarithmically transformed using a log operator 108 to generate an image I(x,y) = ln M(x,y) (Log Mag IQ image) suitable for display. The speckle in the image I(x,y) can be modeled as having a Fisher-Tippett (FT) distribution,

p(I(x,y)) = 2 \exp\Big( \big(2I(x,y) - \ln 2\sigma^2\big) - \exp\big(2I(x,y) - \ln 2\sigma^2\big) \Big).   (3)

The FT distribution is described in detail in O. Michailovich and A. Tannenbaum, "Despeckling of Medical Ultrasound Images," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, no. 1, January 2006, which is incorporated herein by reference.
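The two speckle models translate directly into density functions. The following minimal Python sketch mirrors Equations (2) and (3); the function names are illustrative:

```python
import numpy as np

def rayleigh_pdf(m: np.ndarray, sigma2: float) -> np.ndarray:
    """Equation (2): speckle model for the magnitude IQ image M(x,y)."""
    return (m / sigma2) * np.exp(-m**2 / (2 * sigma2))

def fisher_tippett_pdf(i: np.ndarray, sigma2: float) -> np.ndarray:
    """Equation (3): speckle model for the log image I(x,y) = ln M(x,y)."""
    t = 2 * i - np.log(2 * sigma2)
    return 2 * np.exp(t - np.exp(t))
```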
Given a region Ω in a Mag IQ image M(x,y), it is possible to estimate the Rayleigh distribution in the region Ω based on the intensities of the pixels in the region Ω. Since the speckle in M(x,y) is modeled as having a Rayleigh distribution, the estimation of the Rayleigh distribution in the region Ω is an estimation of the speckle in the region Ω. The Rayleigh distribution in the region Ω can be estimated by maximum likelihood using the log likelihood of Equation (2). The log likelihood of Equation (2) can be expressed as l(M(x,y),σ) = ∫_Ω ln p(M(x,y)) dx dy, such that:

l(M(x,y),\sigma) = \int_{\Omega} \left[ \ln M(x,y) - \ln \sigma^2 - \frac{M^2(x,y)}{2\sigma^2} \right] dx\, dy.   (4)

In order to determine a maximum likelihood (ML) estimator for the Rayleigh distribution, l(M(x,y),σ) can be differentiated with respect to σ and set equal to zero to determine the ML estimate of σ², such that:

\hat{\sigma}^2 = \frac{1}{2|\Omega|} \int_{\Omega} M^2(x,y)\, dx\, dy,   (5)

where |Ω| is the area of the region Ω. Thus, given a region Ω in M(x,y), the ML estimate of the parameter σ² can be calculated from the pixel intensities in the region Ω assuming a Rayleigh distribution, and used to estimate the speckle in the region Ω.
Given a region Ω in a Log Mag IQ image I(x,y), it is possible to estimate the FT distribution in the region Ω based on the intensities of the pixels in the region Ω. Since the speckle in I(x,y) is modeled as having an FT distribution, the estimation of the FT distribution in the region Ω is an estimation of the speckle in the region Ω. An ML estimate for the FT distribution is derived similarly to that of the Rayleigh distribution described above. Accordingly, the ML estimate of the parameter σ² of the FT distribution can be determined based on the log likelihood of Equation (3), such that:

\hat{\sigma}^2 = \frac{1}{2|\Omega|} \int_{\Omega} \exp\big(2I(x,y)\big)\, dx\, dy.   (6)

Thus, given a region Ω in I(x,y), the ML estimate of the parameter σ² can be calculated from the pixel intensities in the region Ω assuming an FT distribution, and used to estimate the speckle in the region Ω.
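Over a discrete window of pixels, the integrals in Equations (5) and (6) become simple sample moments. A minimal sketch, with illustrative function names:

```python
import numpy as np

def rayleigh_sigma2_ml(window_m: np.ndarray) -> float:
    """Equation (5): ML estimate of sigma^2 from Mag IQ pixels in a window,
    where the integral over the region becomes a mean over the pixels."""
    return float(np.mean(window_m**2) / 2)

def fisher_tippett_sigma2_ml(window_i: np.ndarray) -> float:
    """Equation (6): ML estimate of sigma^2 from Log Mag IQ pixels in a window."""
    return float(np.mean(np.exp(2 * window_i)) / 2)
```

Note that the two estimators agree when I(x,y) = ln M(x,y), since exp(2I) = M².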
Given two distributions, p and q, estimated from two different regions of an image, it is possible to measure the divergence or "distance" between the distributions p and q. For example, Kullback-Leibler (KL) divergence, or relative entropy, is an information-theoretic measure between two distributions p and q. KL divergence is described in detail in T. Cover and J. Thomas, Elements of Information Theory, New York: John Wiley and Sons, 1991, which is incorporated herein by reference. The relative entropy D(p∥q) measures the inefficiency of assuming that a distribution is q when the true distribution is p. The KL divergence between two distributions p and q is defined as:

D(p\|q) = \int p(x) \ln \frac{p(x)}{q(x)}\, dx.   (7)
In the KL divergence defined in Equation (7), we assume that

0 \cdot \ln \frac{0}{q} = 0 \quad \text{and} \quad p \cdot \ln \frac{p}{0} = \infty.   (8)

The KL divergence between two distributions p and q is asymmetric, that is, D(p∥q) ≠ D(q∥p). However, the KL divergence can be symmetrized using the J-divergence,

J(p, q) = D(p\|q) + D(q\|p).   (9)
The J-divergence can be thought of as a measure of “distance” between the probability distributions p and q.
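Although the embodiments below use closed-form expressions, Equations (7) and (9) can also be evaluated numerically for distributions represented as histograms. The following sketch assumes p and q are normalized histograms over the same bins; the function names are illustrative:

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """Equation (7), discretized: D(p||q) for normalized histograms p and q
    over common bins. Uses the convention 0*ln(0/q) = 0; if q = 0 where
    p > 0, the division yields inf, matching p*ln(p/0) = infinity."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def j_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """Equation (9): symmetrized KL divergence."""
    return kl_divergence(p, q) + kl_divergence(q, p)
```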
For an image M(x,y) (i.e., a Mag IQ image) that can be modeled locally with Rayleigh distributions, a distribution pM can be estimated in one window of pixels, and another distribution qM can be estimated in another window of pixels, such that:

p_M(M(x,y)) = \frac{M(x,y)}{\sigma_1^2} \exp\left( -\frac{M^2(x,y)}{2\sigma_1^2} \right), \qquad q_M(M(x,y)) = \frac{M(x,y)}{\sigma_2^2} \exp\left( -\frac{M^2(x,y)}{2\sigma_2^2} \right),

where M(x,y) is the intensity at pixel (x,y) in the magnitude IQ image and σ1² and σ2² are the parameters of the distributions pM and qM, respectively.
The J-divergence can be computed between the distributions pM and qM as a measure of how "different" the regions are. The derivation of the expression for D(pM∥qM) is shown below, and D(qM∥pM) can be similarly derived to obtain the J-divergence. In the derivation below, M is used to represent M(x,y) for convenience. Based on Equations (7)-(9), D(pM∥qM) can be expressed as:

D(p_M\|q_M) = \int_0^{\infty} \frac{M}{\sigma_1^2} \exp\left( -\frac{M^2}{2\sigma_1^2} \right) \ln\left[ \frac{\frac{M}{\sigma_1^2} \exp\left( -\frac{M^2}{2\sigma_1^2} \right)}{\frac{M}{\sigma_2^2} \exp\left( -\frac{M^2}{2\sigma_2^2} \right)} \right] dM.   (10)

By expanding the ln term in Equation (10), performing the integration, and simplifying, Equation (10) can be re-written as:

D(p_M\|q_M) = \ln \frac{\sigma_2^2}{\sigma_1^2} + \frac{\sigma_1^2}{\sigma_2^2} - 1.   (11)

Therefore, the J-divergence can be expressed as:

J(p_M, q_M) = \frac{\sigma_1^2}{\sigma_2^2} + \frac{\sigma_2^2}{\sigma_1^2} - 2,   (12)

where σ1² and σ2² are estimated parameters of the Rayleigh distributions determined from Equation (5).
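Equation (12) needs no numerical integration; a two-line sketch with an illustrative function name:

```python
def j_divergence_rayleigh(sigma2_1: float, sigma2_2: float) -> float:
    """Equation (12): J-divergence between two Rayleigh distributions
    with parameters sigma2_1 and sigma2_2 (both > 0)."""
    r = sigma2_1 / sigma2_2
    return r + 1.0 / r - 2.0
```

The expression is nonnegative and vanishes only when σ1² = σ2², so two windows containing statistically identical speckle produce no response.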
For an image I(x,y) (i.e., a Log Mag IQ image) that can be modeled locally with FT distributions, distributions pFT and qFT can be estimated in different windows of I(x,y) as FT distributions having parameters σ1² and σ2², respectively. The J-divergence can be computed between the distributions pFT and qFT as a measure of how "different" the windows are. The derivation of the expression for D(pFT∥qFT) is shown below, and D(qFT∥pFT) can be similarly derived to obtain the J-divergence. In the derivation below, I is used to represent I(x,y) for convenience. Based on Equations (3) and (7), D(pFT∥qFT) can be expressed as:

D(p_{FT}\|q_{FT}) = \int_{-\infty}^{\infty} \frac{e^{2I}}{\sigma_1^2} \exp\left( -\frac{e^{2I}}{2\sigma_1^2} \right) \ln\left[ \frac{\frac{e^{2I}}{\sigma_1^2} \exp\left( -\frac{e^{2I}}{2\sigma_1^2} \right)}{\frac{e^{2I}}{\sigma_2^2} \exp\left( -\frac{e^{2I}}{2\sigma_2^2} \right)} \right] dI.   (13)

By expanding the ln term in Equation (13), performing the integration, and simplifying, Equation (13) can be re-written as:

D(p_{FT}\|q_{FT}) = \ln \frac{\sigma_2^2}{\sigma_1^2} + \frac{\sigma_1^2}{\sigma_2^2} - 1.   (14)

Therefore, the J-divergence can be expressed as:

J(p_{FT}, q_{FT}) = \frac{\sigma_1^2}{\sigma_2^2} + \frac{\sigma_2^2}{\sigma_1^2} - 2,   (15)

where σ1² and σ2² are estimated parameters of the FT distributions determined from Equation (6).
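Since Equations (12) and (15) share the same closed form, the FT result can be spot-checked by Monte Carlo integration. The sketch below (sample sizes, seed, and parameter values are arbitrary assumptions) draws log-compressed speckle from two FT distributions and compares the empirical J-divergence to the closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
s1, s2 = 1.0, 3.0  # sigma_1^2 and sigma_2^2

# Log-transforming a Rayleigh envelope yields FT-distributed samples.
i1 = np.log(rng.rayleigh(scale=np.sqrt(s1), size=200_000))
i2 = np.log(rng.rayleigh(scale=np.sqrt(s2), size=200_000))

def ft_logpdf(i: np.ndarray, sigma2: float) -> np.ndarray:
    """Log of the FT density of Equation (3)."""
    t = 2 * i - np.log(2 * sigma2)
    return np.log(2) + t - np.exp(t)

# D(p||q) = E_p[ln p - ln q], estimated by sample means.
d_pq = np.mean(ft_logpdf(i1, s1) - ft_logpdf(i1, s2))
d_qp = np.mean(ft_logpdf(i2, s2) - ft_logpdf(i2, s1))

print(d_pq + d_qp)              # Monte Carlo J-divergence, approx. 1.333
print(s1 / s2 + s2 / s1 - 2.0)  # Equation (15) closed form: 1.333...
```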
As described above, speckle in a region of an image can be estimated using a Rayleigh distribution or an FT distribution, and the divergence between distributions estimated in two regions can be calculated using the J-divergence. According to embodiments of the present invention, the above-described operations can be used for feature detection in ultrasound images.
At step 202, for a pixel in the ultrasound image, windows adjacent to the pixel are defined. A window adjacent to a particular pixel is a region of the ultrasound image that includes at least one pixel adjacent to the particular pixel. According to a preferred embodiment of the present invention, adjacent windows can be defined on opposing sides of the pixel in first and second directions (e.g., the x and y directions). The first and second directions can be orthogonal to each other, such that the pixel is surrounded by adjacent windows. The windows can have any shape or size, but it is preferable that windows on opposing sides of the pixel have the same shape and size.
Returning to FIG. 2, at step 204, a speckle distribution is estimated in each of the windows defined at step 202. For a Mag IQ image M(x,y), the speckle in each window can be estimated as a Rayleigh distribution whose parameter σ² is calculated using Equation (5); for a Log Mag IQ image I(x,y), the speckle in each window can be estimated as an FT distribution whose parameter σ² is calculated using Equation (6).
At step 206, the divergence is calculated between the speckle distributions estimated in windows on opposing sides of the pixel. The calculated divergence is a measure of how “different” the opposing windows are. For example, the J-divergence can be calculated between the speckle distributions estimated in windows on opposing sides of the pixel using Equation (12) or (15) for Rayleigh distributions or FT distributions, respectively. The J-divergence is calculated for each set of opposing windows. For example, when windows are defined on opposing sides of the pixel in the x and y directions, a J-divergence Jx(x,y) is calculated between the speckle distributions estimated in the windows on opposing sides of the pixel in the x direction, and a J-divergence Jy(x,y) is calculated between the speckle distributions estimated in the windows on opposing sides of the pixel in the y direction.
At step 208, it is determined whether all of a plurality of pixels in the ultrasound image have been processed. If any of the plurality of pixels have not been processed, the method proceeds to a next pixel at step 210, and loops back to step 202 for the next pixel. If it is determined that all of the plurality of pixels have been processed, the method proceeds to step 212.
At step 212, a feature map of the ultrasound image is generated. The feature map is generated by assigning each pixel in the ultrasound image an intensity value based on the divergence calculated for that pixel. For example, if Jx(x,y) represents the J-divergence calculated for a pixel (x,y) in the x direction, and Jy(x,y) represents the J-divergence calculated for the pixel (x,y) in the y direction, the feature map FJ(x,y) can be defined as:
F_J(x,y) = \sqrt{J_x(x,y)^2 + J_y(x,y)^2}.   (16)
Accordingly, pixels in the ultrasound image at which a large divergence is detected between the speckle distributions estimated in the surrounding windows will have higher intensity values in the feature map FJ(x,y). Thus, the feature map FJ(x,y) highlights edges and other features in the ultrasound image. The feature map FJ(x,y) can be output to a display device in order to be displayed thereon.
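Steps 202 through 212 can be combined into a single routine. The following unoptimized Python sketch assumes the Rayleigh model on a Mag IQ image; the rectangular window geometry, the epsilon guard against zero-intensity windows, and all names are illustrative assumptions, and a practical implementation would vectorize the window sums (e.g., with integral images):

```python
import numpy as np

def feature_map_rayleigh(m: np.ndarray, half: int = 5) -> np.ndarray:
    """Feature map F_J of Equation (16) for a Mag IQ image m, using
    rectangular windows on opposing sides of each pixel."""
    h, w = m.shape
    fmap = np.zeros((h, w), dtype=float)

    def sigma2(window: np.ndarray) -> float:
        return np.mean(window**2) / 2 + 1e-12  # Equation (5), epsilon guard

    def jdiv(a: float, b: float) -> float:
        return a / b + b / a - 2.0              # Equation (12)

    for y in range(half, h - half):
        for x in range(half, w - half):
            # Step 202: windows on opposing sides of (x, y) in two directions.
            left  = m[y - half:y + half + 1, x - half:x]
            right = m[y - half:y + half + 1, x + 1:x + half + 1]
            above = m[y - half:y, x - half:x + half + 1]
            below = m[y + 1:y + half + 1, x - half:x + half + 1]
            # Steps 204-206: estimate the speckle in each window, then
            # calculate the divergence between opposing windows.
            jx = jdiv(sigma2(left), sigma2(right))
            jy = jdiv(sigma2(above), sigma2(below))
            # Step 212: combine the divergences per Equation (16).
            fmap[y, x] = np.hypot(jx, jy)
    return fmap
```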
FIG. 3(b) illustrates a feature map 304 of the cardiac ultrasound image 302 of FIG. 3(a).
Sensitivity of the feature detection method described above can be adjusted by varying the size of the windows. For example, increasing the window size may give a better statistical estimate of the distribution parameter within each window, but it also adjusts the scale of the features detected.
FIGS. 5(a)-5(f) illustrate exemplary results of embodiments of the present invention compared with results of conventional methods.
For comparison, the J-divergence can also be calculated from Parzen-windowed histograms of the windows, without estimating the speckle distributions in the windows, as illustrated in FIGS. 5(a)-5(f).
The above-described method for feature detection in ultrasound images can be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high level block diagram of such a computer is illustrated in FIG. 6.
Embodiments of the present invention are described herein to give a visual understanding of the feature detection method. It is to be understood that these embodiments may be performed within a computer system using data stored within the computer system. Accordingly, some steps of the method can occur as internal representations within the computer system.
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 60/745,042, filed Apr. 18, 2006, the disclosure of which is herein incorporated by reference.