This invention relates to acoustic imaging.
Improving the signal-to-noise ratio in acoustic imaging is of general interest for various applications, including medical imaging. One approach that has been considered for accomplishing this is coherence imaging, which exploits the difference in coherence typically exhibited by acoustic signals from a target versus electronic noise. However, conventional coherence-based techniques such as short-lag spatial coherence (SLSC) and coherent flow power Doppler (CFPD) undesirably provide non-uniform resolution. Accordingly, it would be an advance in the art to provide improved coherence-based acoustic imaging.
In this work, we provide acoustic imaging based on angular coherence (as opposed to spatial coherence). The target is insonified with collimated acoustic beams at several different incidence angles. The resulting images are processed to determine angular coherence by averaging over angle, and the angular coherence is then integrated over relatively small angular differences to provide the output angular coherence image. In cases where flow imaging is performed, multiple acquisitions are made, the images are first filtered to suppress signals from stationary features of the target, and the final flow image is computed by summing the squares of the angular coherence images (on a pixel-by-pixel basis).
Significant advantages are provided. Fast acquisition, uniform image resolution, and low noise are simultaneously provided. More specifically:
1. Compared to conventional short-lag spatial coherence (SLSC) and coherent flow power Doppler (CFPD) flow imaging methods (which utilize spatial coherence beamforming), this work provides uniformly high image resolution with noise reduction. Conventional SLSC and CFPD have a non-uniform resolution, which deteriorates with distance from the focus; this work provides uniformly high resolution over the entire image.
2. Compared to synthetic transmit focusing techniques utilized with conventional B-mode and flow imaging techniques, this work provides better suppression of electronic noise, and thus better image quality deep in tissue.
3. Compared to conventional synthetic-transmit-focusing with short-lag spatial coherence beamforming, this work provides higher power output and significantly reduces the acquisition and image reconstruction time.
This work is suitable for use with any application of acoustic imaging, including but not limited to medical imaging and non-destructive evaluation of industrial parts. It is expected to be especially useful in connection with ultrasound flow imaging of slow flow, of small blood vessels, and of vasculature deep in tissue in difficult-to-image patients (e.g., overweight or obese patients).
In an exemplary embodiment, a target is insonified with collimated acoustic beams at three or more distinct incidence angles, and a corresponding acoustic image is formed for each incidence angle. An angular coherence image is then provided by:
i) averaging the acoustic images vs. angle to estimate an angular coherence function at each spatial point of the acoustic images; and
ii) integrating the angular coherence function over a predetermined angular range to provide the angular coherence image. This angular coherence image can be provided as an output. A computational sketch of these two steps is given below.
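The following minimal sketch illustrates steps i) and ii) for a stack of complex beamformed images acquired at equally spaced transmit angles. The function name, the array layout, and the 30% lag-fraction default are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def angular_coherence_image(iq, lag_fraction=0.3, eps=1e-12):
    """Steps i) and ii): average the pairwise normalized coherence over angle,
    then sum it over small angular lags to form the output image.

    iq: complex array of shape (num_angles, nz, nx), one beamformed image
        per transmit angle (equal angular spacing assumed)."""
    num_angles = iq.shape[0]
    max_lag = max(1, int(round(lag_fraction * (num_angles - 1))))
    out = np.zeros(iq.shape[1:], dtype=float)
    for lag in range(1, max_lag + 1):
        a, b = iq[:num_angles - lag], iq[lag:]
        # pointwise normalized coherence of every image pair separated by `lag`
        r = (a * np.conj(b)) / (np.abs(a) * np.abs(b) + eps)
        out += np.real(r).mean(axis=0)   # step i): average vs. angle
    return out                           # step ii): summed over the small lags
```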
Here a collimated acoustic beam is an acoustic beam with substantially planar wavefronts at all locations within the field-of-view. The half divergence angle of such a beam in homogeneous media is smaller than or equal to three times the limit imposed by the diffraction of the acoustic aperture that is used to generate the beam. For a Gaussian beam with half width w and wavelength λ, this condition is approximately θ≤3λ/(πw), where θ is the half divergence angle. With inhomogeneous media, aberration may increase the half divergence angle.
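As a small numeric illustration of this criterion, the sketch below checks whether a beam's half divergence angle satisfies θ ≤ 3λ/(πw); the example values are arbitrary and not taken from this description.

```python
import math

def is_collimated(half_divergence_rad, wavelength_m, half_width_m):
    """True if the beam meets the stated criterion: theta <= 3*lambda/(pi*w)."""
    return half_divergence_rad <= 3.0 * wavelength_m / (math.pi * half_width_m)

# Example with arbitrary values: 5 MHz in soft tissue (c ~ 1540 m/s),
# 5 mm beam half width, 20 mrad half divergence angle.
print(is_collimated(0.02, 1540.0 / 5e6, 5e-3))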
The predetermined angular range is preferably less than or equal to 30% of a total angular range of the three or more distinct incidence angles.
The acoustic images can be provided as complex-valued functions of two spatial variables or as real-valued functions of two spatial variables. Complex-valued functions can be represented as having real and imaginary components in the averaging of the acoustic images vs. angle. Alternatively, complex-valued functions can be represented as having in-phase and quadrature components in the averaging of the acoustic images vs. angle. Real-valued functions can represent radio-frequency ultrasound signal intensity in the averaging of the acoustic images vs. angle. Radio-frequency signals in the context of ultrasound imaging are the ultrasound echoes recorded by the transducers as a function of time or depth.
Averaging the acoustic images vs. angle can further include spatial averaging over a predetermined spatial range. For example, if the acoustic imaging system provides an axial resolution, the spatial averaging can be done over an axial range substantially equal to the axial resolution. Similarly, if the acoustic imaging system provides a lateral resolution, the spatial averaging can be done over a lateral range substantially equal to the lateral resolution. As used herein, “substantially equal” means equal to within +/−10%.
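As a small illustration, the length of such a spatial averaging kernel can be derived from the system resolution and the sample spacing. The helper below is a hypothetical convenience, not part of the described method.

```python
def kernel_length(resolution_m, sample_spacing_m):
    """Number of samples spanning approximately one resolution cell,
    rounded to the nearest whole number of samples."""
    return max(1, int(round(resolution_m / sample_spacing_m)))

# Example: 0.3 mm axial resolution sampled every 0.02 mm -> 15-sample kernel.
print(kernel_length(0.3e-3, 0.02e-3))
```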
The three or more distinct incidence angles are preferably seven or more distinct incidence angles. The acoustic images can be 2-D brightness mode images. Alternatively, the acoustic images can be flow acoustic images that are filtered to suppress signals from stationary parts of the target. In such cases, it is preferred to perform angular coherence imaging for three or more acquisitions that are separated by a predetermined time delay, and to provide an output flow image by summing squares of the angular coherence image for each acquisition.
More specifically, flow imaging according to the present approach can be accomplished as follows (a computational sketch of this procedure appears after the list):
1) Plane waves with different transmit angles are emitted, each of which produces one acoustic image. The acoustic images produced in this step are denoted as (Angle 1, Acquisition 1), (Angle 2, Acquisition 1), (Angle 3, Acquisition 1), etc.
2) Wait for a fixed amount of time (e.g., 1 ms).
3) Repeat steps 1 and 2 at least two more times (at least three times in total). The images produced in this step are denoted as (Angle 1, Acquisition 2), (Angle 2, Acquisition 2), (Angle 3, Acquisition 2), and (Angle 1, Acquisition 3), (Angle 2, Acquisition 3), (Angle 3, Acquisition 3), etc.
4) Filter the acoustic images to remove stationary signals. The filtering is conducted among images produced with the same angle index but different acquisition indices. For example, Angle 1 images in all acquisitions, including (Angle 1, Acquisition 1), (Angle 1, Acquisition 2), (Angle 1, Acquisition 3), and so on, are filtered as one ensemble; and then Angle 2 images in all acquisitions; and so on. The result is one filtered flow image corresponding to each of the acquired acoustic images.
5) Produce one angular coherence image from the filtered images in Acquisition 1, including (Angle 1, Acquisition 1), (Angle 2, Acquisition 1), (Angle 3, Acquisition 1), and so on as described above. Then similarly produce one angular coherence image for each of the other acquisitions.
6) Sum the squares of the angular coherence images.
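A compact sketch of steps 1-6 follows. It assumes complex IQ images arranged as (acquisition, angle, depth, lateral) and reuses the angular_coherence_image helper from the earlier sketch. Simple mean removal across acquisitions stands in for the stationary-signal filter of step 4; any wall filter operating along the acquisition dimension could be substituted.

```python
import numpy as np

def flow_image(iq, lag_fraction=0.3):
    """iq: complex array of shape (num_acquisitions, num_angles, nz, nx)."""
    # step 4: filter each angle's ensemble across acquisitions
    # (mean removal is a stand-in for the clutter/wall filter)
    filtered = iq - iq.mean(axis=0, keepdims=True)
    # step 5: one angular coherence image per acquisition
    g = np.stack([angular_coherence_image(acq, lag_fraction)  # defined above
                  for acq in filtered])
    # step 6: sum of squares over acquisitions
    return np.sum(g ** 2, axis=0)
```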
Mathematical Development
To better appreciate the present invention, the following exemplary mathematical development is provided. The method can be regarded as including 4 major steps.
1. Tissue insonification with a synthetic transmit focusing technique, such as virtual source and plane wave synthetic aperture.
2. Next, the transmitted wave is backscattered by the tissue.
3. The process described in steps 1 and 2 is repeated with plane waves at M different angles into the tissue.
4. For the same point in each of the images produced from the different transmit angles α, the normalized coherence (i.e., a function that quantifies the similarity of the signals) of every pair of signals received at different plane-wave angles is computed as a function of the angle difference Δα = α1 − α2 (i.e., the coherence is computed across the transmit angles of f(x,y,z,α)). The resulting coherence R(x,y,z,Δα) is then averaged across the angles α to produce an averaged coherence function R̄(x,y,z,Δα).
For the computation of normalized coherence, various techniques can be used to produce similar results. First of all, instead of RF data, the complex IQ (in-phase and quadrature) data can be used as an alternative. Using IQ data, the computation can be represented as

R(x,y,z,Δα) = IQ(x,y,z,α1)·IQ*(x,y,z,α2) / (‖IQ(x,y,z,α1)‖·‖IQ(x,y,z,α2)‖), with Δα = α1 − α2,

where IQ(x,y,z,α) represents the complex IQ signal at location (x,y,z) with transmit angle α; IQ*(x,y,z,α) represents the complex conjugate of IQ(x,y,z,α); and ‖·‖ represents the l2 norm of the IQ signal.
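A direct transcription of this normalized IQ coherence for one pair of transmit angles is sketched below. The inputs may be single complex samples or small spatial kernels centered on (x,y,z), in which case the l2 norms are taken over the kernel; the function and argument names are illustrative.

```python
import numpy as np

def normalized_iq_coherence(iq1, iq2, eps=1e-12):
    """iq1, iq2: complex IQ sample(s) at one point (or a small kernel around
    it) acquired at transmit angles alpha1 and alpha2."""
    iq1 = np.atleast_1d(iq1)
    iq2 = np.atleast_1d(iq2)
    num = np.sum(iq1 * np.conj(iq2))
    den = np.linalg.norm(iq1) * np.linalg.norm(iq2) + eps
    return np.real(num / den)
```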
In implementations with discrete-time signals, various techniques can be used. For example, the normalized coherence can be computed for each discrete pair of angles and averaged over all pairs having the same angle difference Δα, in which the angular range is from −α0 to α0. The IQ signal IQ(x,y,z,α) can be replaced with the RF signal f(x,y,z,α) according to the previous description.
Alternatively, the average can be calculated as

R̄(x,y,z,Δα) = (1/N) Σ_{α1} IQ(x,y,z,α1)·IQ*(x,y,z,α1+Δα) / (‖IQ(x,y,z,α1)‖·‖IQ(x,y,z,α1+Δα)‖),

in which N represents the number of angles α1 between −α0 and α0 used in the computation. Additionally, a spatial kernel can be used in any of the implementations above. For example, using an axial kernel in the z dimension, the implementation becomes

R̄(x,y,z,Δα) = (1/N) Σ_{α1} [ Σ_{zi=z−z0}^{z+z0} IQ(x,y,zi,α1)·IQ*(x,y,zi,α1+Δα) / sqrt( Σ_{zi=z−z0}^{z+z0} |IQ(x,y,zi,α1)|² · Σ_{zi=z−z0}^{z+z0} |IQ(x,y,zi,α1+Δα)|² ) ],

in which the axial kernel length is 2z0, zi is the summation variable, and sqrt( ) represents the square root function. Other variants of this kernel-based computation can be formulated similarly.
Similar kernels in x and y dimensions can be used as well.
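The kernel-based average can be sketched as follows for a single pixel, assuming IQ data of shape (num_angles, nz) along the axial line through the pixel and an axial kernel of half-length z0 samples; the names and the boundary handling are illustrative choices.

```python
import numpy as np

def averaged_coherence(iq_line, z_index, z0, lag, eps=1e-12):
    """Average the kernel-normalized coherence over all angle pairs whose
    index difference is `lag` (lag must be smaller than num_angles)."""
    num_angles = iq_line.shape[0]
    lo, hi = max(0, z_index - z0), z_index + z0 + 1   # axial kernel bounds
    vals = []
    for a1 in range(num_angles - lag):
        w1 = iq_line[a1, lo:hi]
        w2 = iq_line[a1 + lag, lo:hi]
        num = np.sum(w1 * np.conj(w2))
        den = np.sqrt(np.sum(np.abs(w1) ** 2) * np.sum(np.abs(w2) ** 2)) + eps
        vals.append(np.real(num / den))
    return float(np.mean(vals))
```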
The pixel value of the resulting image point, g(x,y), is then calculated by integrating or summing the averaged normalized coherence function between 0 and 30% of the maximum difference between the angles:
g(x,y) = ∫_0^ρ R̄(x,y,Δα) d(Δα),

in which ρ ≈ A·Δαmax, where A is a fraction, usually between 0.01 and 0.3, representing the fraction of the aperture width or of the total angle encompassing all transmits.
The process is carried out for each pixel (x, y), and a B-mode image can be produced.
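A per-pixel sketch of this integration is shown below, assuming the averaged coherence has already been evaluated at discrete lags 1..max_lag and that A (defaulted here to 0.3) is the fraction discussed above.

```python
import numpy as np

def g_pixel(coherence_vs_lag, A=0.3):
    """coherence_vs_lag: averaged coherence at one pixel for lags 1..max_lag."""
    max_lag = len(coherence_vs_lag)
    cutoff = max(1, int(round(A * max_lag)))   # rho ~ A * max lag
    return float(np.sum(coherence_vs_lag[:cutoff]))
```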
The normalized angular coherence function for plane-wave transmits, APWT, can be expressed as

APWT(kΔp, kΔq) = Crx(kΔp, kΔq) / Crx(0, 0),   (8)

where Δp = p1 − p2, Δq = q1 − q2, Crx is the autocorrelation of the receive aperture function, k is the wave number, and p and q are normalized spatial frequencies (i.e., p and q are effectively angles).
The physical implication of Eq. (8) is that the cross-correlation function of the signals backscattered from a spatially incoherent, homogeneous medium under plane-wave transmits at different angles is proportional to the normalized autocorrelation of the receive aperture function. This can be considered an extension of the van Cittert-Zernike theorem.
The transmit angular spacing (kΔp, kΔq) in Eq. (8) can be expressed as fractions of the maximum angle sampled by the receive aperture, (ηp, ηq) = (kΔp/(kp_max), kΔq/(kq_max)), with 0 ≤ ηp, ηq ≤ 1. If the transmit angular spacing is greater than the maximum angle sampled by the receive aperture (i.e., ηp or ηq > 1), then APWT(ηp, ηq) = 0.
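To illustrate this van Cittert-Zernike-type statement, the short sketch below computes the normalized autocorrelation of a receive aperture function. A uniform (rect) aperture is assumed purely for illustration, giving a coherence curve that decreases linearly toward zero at the aperture width.

```python
import numpy as np

n_elements = 64
aperture = np.ones(n_elements)                    # receive aperture function
corr = np.correlate(aperture, aperture, mode="full")[n_elements - 1:]
coherence = corr / corr[0]                        # normalized autocorrelation
print(coherence[:5])  # decreases linearly from 1 toward 0 for a rect aperture
```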
In cases where flow imaging is performed, transmissions from the 17 angles or virtual elements are repeated multiple times, and the images g(x,y,i) from the acquisitions are summed using a power estimator,
P(x,y) = Σ_{i=1}^{N} g²(x,y,i),   (10)
in which g(x,y,i) is the angular coherence image produced from the ith acquisition, and N is the number of acquisitions. P(x,y) is the flow image.
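Eq. (10) corresponds directly to a sum of squares along the acquisition axis, as in this brief sketch (the array layout is an assumption).

```python
import numpy as np

def power_image(g_stack):
    """g_stack: angular coherence images g(x, y, i) stacked along axis 0,
    shape (num_acquisitions, nz, nx).  Returns P(x, y) per Eq. (10)."""
    return np.sum(g_stack ** 2, axis=0)
```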
In addition, both the B-mode image g(x,y) and the flow image P(x,y) can be computed using a “recursive” method. That is, the signals from the same angle or virtual element, but from the previous cycle, are updated with the values from the new transmission, and P(x,y) is recalculated, thus improving the frame rate and continuity of the image.
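One possible recursive update is sketched below: a rolling buffer keeps the most recent acquisitions, the oldest acquisition is replaced as each new one arrives, and the flow image is recomputed. The buffer handling and the reuse of the flow_image helper from the earlier sketch are illustrative choices, not requirements of the method.

```python
import numpy as np
from collections import deque

class RecursiveFlowImager:
    def __init__(self, num_acquisitions):
        self.buffer = deque(maxlen=num_acquisitions)  # holds IQ image stacks

    def update(self, new_acquisition_iq, lag_fraction=0.3):
        """new_acquisition_iq: complex images of shape (num_angles, nz, nx)."""
        self.buffer.append(new_acquisition_iq)        # drop oldest if full
        iq = np.stack(list(self.buffer))              # (n_acq, n_angles, nz, nx)
        return flow_image(iq, lag_fraction)           # defined in earlier sketch
```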
This application claims the benefit of U.S. provisional patent application 62/318,614, filed on Apr. 5, 2016, and hereby incorporated by reference in its entirety.
This invention was made with Government support under contracts EB015506 and HD086252 awarded by the National Institutes of Health. The Government has certain rights in the invention.