This application claims the benefit of U.S. Provisional Patent Application No. 60/701,635, filed Jul. 22, 2005, the entire disclosure of which is incorporated herein by reference.
This invention relates generally to ultrasound imaging systems and, more particularly, to adaptive optimization of ultrasound imaging system parameters by making use of stored channel data.
Medical ultrasound imaging systems need to support a set of imaging modes for clinical diagnosis. The basic imaging modes are timeline Doppler, color flow velocity and power mode, B-mode, and M-mode. In B-mode, the ultrasound imaging system creates two-dimensional images of tissue in which the brightness of a pixel is based on the intensity of the return echo. In color flow imaging mode, the movement of fluid (e.g., blood) or tissue can be imaged. Measurement of blood flow in the heart and vessels using the Doppler effect is well known. The phase shift of backscattered ultrasound waves can be used to measure the velocity of the moving tissue or blood. The Doppler shift may be displayed using different colors to represent speed and direction of flow. In the spectral Doppler imaging mode, the power spectrum of these Doppler frequency shifts is computed for visual display as velocity-time waveforms.
State-of-the-art ultrasound scanners may also support advanced or emerging imaging modes including contrast agent imaging, 3D imaging, spatial compounding, and extended field of view. Some of these advanced imaging modes involve additional processing of images acquired in one or more basic imaging modes, and attempt to provide enhanced visualization of the anatomy of interest.
A new trend in ultrasound technology development is the emergence of compact or portable ultrasound scanners that leverage the unceasing advances in system-on-a-chip technologies. It is anticipated that these compact scanners, though battery-operated, will support more and more of the imaging modes and functions of conventional cart-based scanners.
Regardless of physical size, ultrasound imaging systems comprise many subsystems. In a typical ultrasound imaging system, the main signal path includes the transducer, transmitter, receiver, image data processor(s), display system, master controller, and user-input system. The transducer, transmitter, and receiver subsystems are responsible primarily for the acquisition, digitization, focusing and filtering of echo data. The image data processor performs detection (e.g., echo amplitude for B-mode, mean velocity for color flow mode), and all subsequent pixel data manipulation (filtering, data compression, scan conversion and image enhancements) required for display. As used herein, the term pixel (derived from “picture element”) image data simply refers to detected image data, regardless of whether it has been scan converted into an x-y raster display format.
Conventional ultrasound systems generally require optimal adjustments of numerous system parameters involved in a wide range of system operations, from data acquisition and image processing to audio/video display. These system parameters can be divided into two broad categories: 1) user-selectable or adjustable; and 2) engineering presets. The former refers to all system parameters that the user can adjust via the user control subsystem. This includes imaging default parameters (e.g., gray map selection) that the user can program via the user control subsystem and store in system memory. In contrast, “engineering presets” refer to a wide range of system processing and control parameters that may be used in any or all of the major subsystems for various system operations, and are generally pre-determined and stored in system memory by the manufacturer. These may include control or threshold parameters for specific system control mechanisms and/or data processing algorithms within various subsystems.
The need to optimize both kinds of system parameters is a long-standing challenge in diagnostic ultrasound imaging, mainly because (1) the majority of sonographers or users often lack the time and/or training to properly operate a very broad range of user-controls for optimal system performance; and (2) engineering presets are usually pre-determined by the manufacturer based on “typical” or “average” system operating conditions including patient type (body size and fat/muscle composition), normal and abnormal tissue characteristics for various application types, and environmental factors (e.g., ambient light condition).
For a compact scanner, user-control design is particularly challenging because the space available on the console for imaging control keys can be very limited. This means that the overall user-control subsystem will be restricted and/or more difficult to use (e.g., accessing multiple layers of soft-key menus) compared to conventional cart-based scanners.
Another related challenge for all ultrasound scanners is ergonomics. Even for an expert sonographer who is proficient at using all of the available system controls, the repetitive hand motions required to scan with an ultrasound probe, and to adjust many control keys for each ultrasound examination protocol, are widely recognized as a source of repetitive stress injuries for sonographers.
Therefore, there is a need for more automated control of imaging parameters in ultrasound systems.
Embodiments of the present invention provide an ultrasound scanner equipped with an image data processing unit that can perform adaptive parameter optimization during image formation and processing. An exemplary embodiment employs a figure of merit scheme for optimizing one or more imaging parameters. The imaging parameters that may be optimized include system timing and control parameters as well as user-specified processing parameters. One of the key parameters is the speed of sound assumed by the imaging system for transmit and receive focusing. Other parameters include, for example, receive aperture, coherence factor, and the like.
In accordance with an aspect of the present invention, a method of processing data for generating an ultrasound image comprises obtaining channel data by digitizing ultrasound echo data produced by individual transducer elements on an ultrasound scanner in performing an image scan; storing the channel data in a memory; reconstructing an ultrasound image for each of a plurality of trial values of at least one parameter to be optimized using the stored channel data in the memory; evaluating the image quality of the reconstructed ultrasound image for each trial value of the at least one parameter; and determining the optimized value of the at least one parameter based on the evaluated image quality. The channel data may be digitized RF echo data, digitized baseband echo data, or in some other suitable form.
In some embodiments, evaluating the image quality comprises computing a figure of merit for the reconstructed ultrasound image for each trial value of the at least one parameter. The method may further comprise performing actual imaging using the optimized value of the at least one parameter to produce an image frame. The obtaining, storing, reconstructing, evaluating, and determining may be performed in real time. The at least one parameter includes one or more of: the sound speed being used to process the channel data to produce an image frame from the image scan; transmit control parameters for a transmitter of the ultrasound scanner to specify at least one of transmit waveform, aperture function, delay profile, and pulsed repetition frequency for one or more imaging modes; electronic array focusing parameters for a receiver of the ultrasound scanner to specify at least one of front end filter, front end gain, and receive aperture function as a function of time/depth and the time delay profiles for image reconstruction; or image processing parameters for an image data processor of the ultrasound scanner to specify at least one of display dynamic range, gray or color maps, and spatial/temporal filtering.
In specific embodiments, evaluating the image quality comprises selecting an image region for evaluation. Evaluating the image quality comprises computing one or more image focus quality parameters; and determining the optimized value comprises determining the optimal value of the at least one parameter to be optimized by comparing the one or more image focus quality parameters. The one or more image focus quality parameters are used to maximize an overall lateral spatial resolution of the selected image region due to improved focusing. Computing one or more image focus quality parameters comprises providing a focus quality spectrum.
In some embodiments, multiple sets of channel data are obtained, stored, and processed by the reconstructing, evaluating, and determining. The method further comprises computing a figure of merit for the reconstructed ultrasound image for each trial value of the at least one parameter, for each set of channel data; and combining the figures of merit for the multiple sets of channel data to provide a combined figure of merit.
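As a hedged illustration of the method summarized above, the sketch below loops over trial values, evaluates a figure of merit for the image reconstructed from each stored channel-data set, combines the figures of merit across data sets, and returns the best trial value. The function names and the use of a mean as the combining rule are assumptions for illustration only, not the actual implementation.

```python
import numpy as np

def optimize_parameter(channel_data_sets, trial_values, reconstruct, figure_of_merit):
    """Return the trial value with the best combined figure of merit.

    reconstruct(channel_data, value) -> image and figure_of_merit(image) -> float
    are supplied by the system; they are placeholders here."""
    combined = []
    for value in trial_values:
        # The same stored channel data is reused for every trial value.
        foms = [figure_of_merit(reconstruct(cd, value)) for cd in channel_data_sets]
        combined.append(np.mean(foms))  # one plausible way to combine across data sets
    return trial_values[int(np.argmax(combined))]
```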
In accordance with another aspect of the invention, an ultrasound system comprises a channel data memory to store channel data obtained by digitizing ultrasound echo data produced by an image scan; an image data processor configured to process the stored channel data in the memory to reconstruct an ultrasound image for each of a plurality of trial values of at least one parameter to be optimized; and a parameter optimization unit configured to evaluate an image quality of the reconstructed ultrasound image for each trial value of the at least one parameter, and to determine the optimized value of the at least one parameter based on the evaluated image quality.
In accordance with another aspect of the present invention, an ultrasound system comprises a channel data memory to store channel data obtained by digitizing ultrasound echo data produced by an image scan; an image data processor configured to process the stored channel data in the memory to reconstruct an ultrasound image for each of a plurality of trial values of at least one parameter to be optimized; and a parameter optimization unit including a parameter optimization program stored in a computer readable storage medium. The parameter optimization program includes code for evaluating an image quality of the reconstructed ultrasound image for each trial value of the at least one parameter, and code for determining the optimized value of the at least one parameter based on the evaluated image quality.
In subsequent SW beam formation and image processing, the raw channel data is reused, which provides a good match to current DSP memory and caching architectures, while the FPGA front-end and DSP implementation provide increased development flexibility. Furthermore, channel data storage enables offline Matlab processing of multiple frames of acquired I/Q channel data, which can be used to verify the SW system processing chain, to develop algorithms with real static and dynamic data, and as a research tool. The result of the processing is sent to a display unit 22 such as an LCD or CRT monitor.
Signal processing in the ultrasound scanner begins with waveform shaping and delay of the excitation pulses applied to each element of a transducer array to generate a focused, steered, and apodized pulsed wave that propagates into the body. Many characteristics of the transmitted acoustic pulse are adjusted in a manner closely linked with some adjustment in the receive signal processing, the simplest link being the setting of a particular imaging mode. For example, standard pulse shaping adjusts the pulse length for a given transmit firing depending upon whether the returned echoes are ultimately to be used for B-mode, pulsed Doppler, or color Doppler imaging. Equally critical is the center frequency of the pulse, which, for modern broadband transducers, can be set over a wide range appropriate to the part of the body being scanned. Some scanners also routinely shape the envelope of the pulse to improve the propagation characteristics of the resulting sound wave.
One of the adjustable parameters that affects image formation is the system-assumed sound speed. Echoes resulting from scattering of the sound by tissue structures are received by all elements within the transducer array. Processing of these echo signals routinely begins at the individual channel (element) level with the application of apodization functions, dynamic focusing and steering delays, and frequency demodulation, which reduces the cost of the subsequent processing. Knowledge of the speed of sound is important: typically a nominal sound speed of 1.54 mm/μs is assumed by most systems. The error between the actual and assumed sound speed is one source of aberration in the detected image and leads to defocusing and increased acoustic clutter noise. The present invention provides a technique for correcting aberration caused by sound speed errors by optimizing the system-assumed sound speed using the channel data. In addition, the optimized system-assumed sound speed used for the focusing and steering delays can be kept separate from the sound speed used for image display, so as to maintain image registration if desired, thus avoiding image contraction or expansion due to different system-assumed sound speeds. As a result, the transmit/receive focusing is altered based on the determined optimal sound speed, while the image scaling is maintained in spite of the sound speed change. The technique can also be used to optimize other imaging parameters in the scanner system including, for example, the receive delay profile, receive aperture, coherence factor, and the like.
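As a minimal sketch of the decoupling described above (the constant values and function name are illustrative assumptions), the focusing delays can be computed with the optimized sound speed while the time-to-depth mapping used for display keeps the original assumed value, so the displayed image is neither contracted nor expanded:

```python
C_DISPLAY_MM_PER_US = 1.54   # sound speed kept for display scaling
C_FOCUS_MM_PER_US = 1.48     # example optimized value used only for focusing delays

def display_depth_mm(sample_time_us, c=C_DISPLAY_MM_PER_US):
    # Depth shown to the user: r = c*t/2 with the display sound speed,
    # independent of the sound speed chosen for the receive delays.
    return c * sample_time_us / 2.0
```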
The parameter optimization block 220 performs parameter optimization using any desired approach. In an exemplary embodiment, a figure of merit scheme is employed. The scheme takes one or more inputs and calculates a figure of merit based upon estimated values of one or more parameters. The goal is to optimize the focus quality of the image by iteratively varying one or more imaging parameters according to the figure of merit scheme. Advantageously, the channel data is stored in the memory 206 and can be used repeatedly to calculate the figure of merit in the parameter optimization block 220 (see arrow 232). Other inputs may include, for example, beam formation data after beam forming in the digital processor 208 (arrow 234), acoustic data after image processing 210 (arrow 236), and scan converted data after scan conversion 212 (arrow 238). Based on the figure of merit calculation performed by the parameter optimization block 220, one or more parameters may be adjusted, including the speed of sound used in the digital processor 208 to process the data (arrow 242). Examples of other parameters that may be adjusted include image processing parameters for the digital processor 208 to specify at least one of display dynamic range, gray or color maps, and spatial/temporal filtering (arrow 242); transmit control parameters for the transmitter 202 to specify at least one of transmit waveform, aperture function, delay profile, and pulsed repetition frequency for one or more imaging modes (arrow 244); and electronic array focusing parameters for the receiver 204 to specify at least one of front end filter, front end gain, and receive aperture function as a function of time/depth and the time delay profiles for image reconstruction (arrow 246). A control logic module in the parameter optimization block 220 can be used to make these adjustments.
The parameter optimization procedure is summarized in the flow diagram of
The parameter optimization procedure is preferably automated and programmed into the hardware and/or software of the ultrasound scanner. The computer program is stored in a computer-readable storage medium and executed by a computer processor. The user can optimize the image by pressing an auto-optimization button of the ultrasound scanner. Real time processing is desirable but not required.
In the ultrasound scanner of
The parameter optimization block 220 is configured to perform image optimization functions. Specifically, based on pixel image data from the image data processor (digital processor 208 and scan converter 212), the parameter optimization block 220 can automatically adjust parameters in the transmitter 202, receiver 204, and/or image data processor directly through signal/data paths. By automating the user-controls based on actual image data, the efficiency, reproducibility, and ease of use of the ultrasound scanner can be significantly enhanced.
An example of an automatic sound speed correction (SSC) algorithm is used to illustrate the parameter optimization methodology. It is meant to be representative, but not exhaustive, of the capabilities of the adaptive parameter optimization method of the present invention.
The process of freezing live imaging and storing a frame of channel data is performed in block 401. In the image region selection block 402, the region for estimating the body sound speed is selected. Block 403 reconstructs the selected image region. Block 404 computes the focus quality parameters, which allow the SSC algorithm to estimate the body sound speed in a subsequent step. Block 405 involves storing the focus quality parameters. Block 406 generates the receive reference delay profiles from a sound speed specified in a search list; these profiles are used in constructing the reconstruction look-up table LUTR. The reconstruction look-up table LUTR contains receive beamformation parameters such as channel, range, and line dependent delay, phase, apodization profiles, gains, etc. Block 407 generates the reconstruction look-up table LUTR. Block 408 processes the focus quality parameters to determine the best estimate of the body sound speed for the region of the body selected. Blocks 409, 410, and 411 involve generating and saving the receive reference delay profiles using the estimated body sound speed, re-running the SSC algorithm to produce new timing parameters, generating the reconstruction look-up table LUTR, updating any GUI parameters needed, and finally, returning the user to live imaging. These blocks are discussed in greater detail below.
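A hedged, highly simplified sketch of how these blocks might be orchestrated is shown below; the callables are placeholders supplied by the system (region selection, delay generation, LUTR construction, reconstruction, focus quality computation, and sound speed estimation), not the actual implementation.

```python
def run_ssc(channel_data, trial_speeds, select_region, make_delays,
            build_lut, reconstruct, focus_quality, estimate_speed):
    """Simplified flow of blocks 401-411 for one stored frame of channel data."""
    region = select_region(channel_data)                                  # block 402
    qf = []
    for c in trial_speeds:                                                # repeat for all trial speeds
        lut = build_lut(make_delays(c), region, c)                        # blocks 406-407
        qf.append(focus_quality(reconstruct(channel_data, lut, region)))  # blocks 403-405
    c_opt = estimate_speed(trial_speeds, qf)                              # block 408
    lut_opt = build_lut(make_delays(c_opt), region, c_opt)                # blocks 409-411
    return c_opt, lut_opt                                                 # then resume live imaging
```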
In block 401, upon activation of the “optimize” button, or other suitable user control, the system will freeze imaging, leaving the last multi-mode frame processed, i.e., B/CD-mode, etc., frozen on the display. In one implementation, the digital signal processors will store/retain only the last frame's B-mode channel data in an external memory unit for sound speed reprocessing.
In block 402, the region of the image to be used by the SSC algorithm is controlled by parameters specified in the database. While the entire current image region can be used by the SSC algorithm to determine the optimal sound speed, pre-specified parameters place constraints on the region of the image utilized in order to aid the SSC algorithm and make it more robust, as well as to minimize computation time. The parameters could include, for example, minimum start depth, image width fraction, image range fraction, minimum number of lines, minimum number of range samples, etc.
For the minimum start depth, it has been found through experimentation on channel datasets obtained from phantoms as well as from the body that the region of the image shallower than about 5 mm does not demonstrate good sound speed specificity across a wide range of sound speeds. This is most likely because few receive channels are active at these depths for the transducers used, and because the timing error is cumulative, proportional to the round-trip time; however, this can vary based upon the transducer's actual geometry.
The image width fraction is used to exclude the edges of the image which may be generated from portions of the transducer which may not have good contact (note that this may be more useful for curved arrays). This can also be used to reduce the computation time for the LUTR, however, at the expense of possibly excluding potential target areas necessary for a good estimate of the sound speed by the SSC algorithm. The image range fraction is used to exclude the beginning and ending regions of the image, mainly to reduce computation time for the LUTR, however, at the expense of possibly excluding potential target areas necessary for a good estimate of the sound speed by the SSC algorithm. This is applied on the region prior to the application of the minimum start depth. The minimum number of lines is used, in conjunction with the other constraints, to limit how small the SSC region in the lateral direction can be and still produce accurate sound speed estimates. The minimum number of range samples is used, in conjunction with the other constraints, to limit how small the SSC region in the range direction can be and still produce accurate sound speed estimates.
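A hedged sketch of how such constraints might be applied is given below; the parameter names, default values, and index arithmetic are illustrative assumptions, not values taken from the system database.

```python
def select_ssc_region(depth_start_mm, depth_end_mm, n_lines, n_range_samples,
                      min_start_depth_mm=5.0, width_fraction=0.8,
                      range_fraction=0.8, min_lines=32, min_range_samples=64):
    """Return (line indices, range-sample indices) for the SSC region, or None
    if the constrained region is too small for a robust estimate."""
    span = depth_end_mm - depth_start_mm
    # Apply the image range fraction first, then enforce the minimum start depth.
    r_start = max(depth_start_mm + 0.5 * (1.0 - range_fraction) * span, min_start_depth_mm)
    r_end = depth_end_mm - 0.5 * (1.0 - range_fraction) * span
    # Apply the image width fraction symmetrically to exclude the image edges.
    keep_lines = int(round(width_fraction * n_lines))
    line_lo = (n_lines - keep_lines) // 2
    line_hi = line_lo + keep_lines
    # Convert depths to range-sample indices on the acquisition grid.
    samples_per_mm = n_range_samples / span
    samp_lo = int(round((r_start - depth_start_mm) * samples_per_mm))
    samp_hi = int(round((r_end - depth_start_mm) * samples_per_mm))
    if (line_hi - line_lo) < min_lines or (samp_hi - samp_lo) < min_range_samples:
        return None
    return (line_lo, line_hi), (samp_lo, samp_hi)
```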
In block 403, using the generated reconstruction LUTR, the selected SSC image region is reconstructed and I/Q data is produced. Note that the reconstructed range limits are determined by the current displayed range limits and the trial sound speed being used to evaluate the focus quality parameters. The line limits are determined by pre-specified constraints.
In block 404, involving the computation of image focus quality parameters, the SSC algorithm seeks to determine the optimal system sound speed which maximizes the overall lateral spatial resolution of the selected image region due to improved focusing, under the assumption that this system sound speed represents the best estimate for the overall body sound speed. There are many ways in which to determine the lateral spatial resolution, for example, imaging point and/or wire targets contained in a phantom. Quantifying the spatial resolution can be performed either in the lateral spatial domain directly, or in the spatial spectral domain. Because the body does not in general contain point targets, one must be content with determining and quantifying the lateral spatial resolution from point-like structures, speckle, contrast lesions, etc. Therefore the question becomes whether a technique can be found to extract the improvement in image quality and hence, lateral spatial resolution, from both phantoms and real-world body images as the system sound speed is varied. Through experimentation on both phantom images and body images across several imaging applications, it has been found that a reasonable measure of image quality improvement or degradation as the system sound speed is varied through modification of the receive delays consists of a) averaging lateral spatial spectra across range for each trial sound speed to produce a representative spatial spectrum (computed on a suitably detected I/Q image), and b) integrating each spectrum across a specified set of spatial frequencies to produce the total energy, a single number indicating the focus quality of the image for each trial sound speed, defined as Qf(c). The idea is that an overall increase in the spectral density (and hence energy) as the sound speed used for receive focusing is varied, corresponding to a broadening of the average lateral spatial spectrum, indicates better focusing and spatial resolution. This approach is useful only to the extent that an increase in the focus quality Qf(c) is well correlated with observed improvements in lateral spatial resolution and contrast.
Several different focus quality spectrum Qf(s,c) definitions were examined and compared to each other using both phantom images and body images. They were:
where zIQ is the reconstructed I/Q image region, Nr is the number of range samples averaged from starting range r1(c) to ending range r2(c)—both a function of the trial sound speed in order to ensure that the same collection of scatterers, i.e., image features, are analyzed (since the image will be contracted or expanded due to the mapping of time t into range r through sound speed c, i.e., r=ct/2), and where FFT{•} indicates a lateral spatial transform across usl (ultrasound line) with an appropriate amount of zero padding to a power of 2 number of lines, i.e., 256, 512, etc., and s is the normalized spatial frequency. This produces a spatial spectrum as a function of trial sound speed c as shown in
Prior to plotting each spectrum in dB, each has been normalized by its maximum value and in addition, the normalized spectrum computed for a sound speed of 1.54 mm/μs was subtracted (in dB) in order to see improvements or degradations from a 1.54 mm/μs sound speed image (since this is usually the standard sound speed used on most systems). Thus, the plots of
Looking at the differential spectra shown in
For each of these focus quality spectra Qf(s,c) definitions, the focus quality Qf(c) was computed by averaging Qf(s,c) in the log domain, i.e., averaging Qf
It can be seen from
Square of Sums:
and
where Ns is the number of samples from s=0 to s=smax.
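Since the focus quality spectrum definitions themselves are not reproduced in this text, the following is only a hedged sketch of one plausible computation consistent with the surrounding description: the I/Q region is envelope detected, lateral spatial spectra are computed per range row with zero padding, averaged across range, and the result is averaged in the log domain from s = 0 to smax to give Qf(c). All numerical choices are illustrative assumptions.

```python
import numpy as np

def focus_quality(z_iq, s_max=0.25, n_fft=256):
    """z_iq: reconstructed I/Q image region, shape (range samples, ultrasound lines)."""
    detected = np.abs(z_iq)                                        # suitably detected image (assumed: envelope)
    spectra = np.abs(np.fft.fft(detected, n=n_fft, axis=1)) ** 2   # lateral spectra, zero padded to n_fft lines
    qf_s = spectra.mean(axis=0)                                    # average across range -> Qf(s, c)
    s = np.fft.fftfreq(n_fft)                                      # normalized spatial frequency
    band = (s >= 0.0) & (s <= s_max)                               # Ns samples from s = 0 to s = s_max
    return float(np.mean(10.0 * np.log10(qf_s[band] + 1e-12)))     # Qf(c): log-domain average
```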
In block 405, after computation of the image focus quality parameters Qf
Block 406 involves generating a receive delay data set. Given a specified trial sound speed, the system needs to create channel, range, and line dependent receive delay profiles for the generation of the reconstruction LUTR. This can be computed in a variety of ways. For example, the receive delays can be computed by the digital processor based upon the transducer geometry, trial sound speed, and image point location. The receive delays can also be computed by interpolating in range, angle, and sound speed from sparsely computed reference delay data sets.
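As a hedged sketch of the first option (computing the delays from transducer geometry, trial sound speed, and image point location), with illustrative names, units, and a simple on-axis transmit path assumption:

```python
import numpy as np

def receive_delay_profile(elem_x_mm, point_x_mm, point_z_mm, c_trial_mm_per_us):
    """Two-way delay (us) from the transmit origin to each image point and back
    to each element. Inputs are 1-D arrays; output shape is (n_points, n_channels)."""
    dx = point_x_mm[:, None] - elem_x_mm[None, :]             # lateral offset, point to element
    rx_path = np.sqrt(dx ** 2 + point_z_mm[:, None] ** 2)     # element-to-point distance
    tx_path = point_z_mm[:, None]                             # assumption: on-axis transmit path
    return (tx_path + rx_path) / c_trial_mm_per_us            # scales with the trial sound speed
```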
In block 407, upon completion of generating channel, range, and line dependent receive delay profiles for the specified sound speed, the reconstruction LUTR needs to be generated for the current iteration. This involves several steps.
First, the range and line limits of the selected SSC image region need to be computed based upon the current imaging region and pre-specified parameters. The line limits are straightforward, constant throughout SSC iterations, and analogous to how the user zoom box line limits are determined. The range limits are more involved, however. They depend not only upon database specified parameters, but also upon the sound speeds used for data capture and SSC iteration. Thus, the range limits and hence, reconstruction LUTR, will change on each SSC iteration. The range limits for the reconstruction LUTR are computed in the following manner in order to ensure that the same set of image scatterers are analyzed for each distinct sound speed by the SSC algorithm. This also ensures that the image region ultimately viewed by the user after the SSC algorithm completes the optimization will be the same. Now for a given target return time t0, appearing at range r0=cacqt0/2, upon using a different sound speed for receive delay computation, the range at which the target will appear is given by
ri = cit0/2 = (ci/cacq)r0, a simple scaling based upon the ratio of the trial sound speed ci and the channel data acquisition sound speed cacq, as shown in
The starting and ending ranges scale by the same ratio, r1(ci) = (ci/cacq)r1(cacq) and r2(ci) = (ci/cacq)r2(cacq), and the number of range samples Nr in the reconstruction LUTR for a given trial sound speed ci follows from dividing this range interval by Δrgrid, the range grid sample spacing.
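A hedged sketch of this computation follows; the rounding convention for the sample count is an assumption.

```python
def lut_range_limits(r1_acq_mm, r2_acq_mm, c_trial, c_acq, dr_grid_mm):
    """Rescale the SSC range limits so the same scatterers are reconstructed at
    each trial sound speed, then count samples on the fixed range grid."""
    scale = c_trial / c_acq                  # r = c*t/2, so ranges scale with sound speed
    r1 = scale * r1_acq_mm
    r2 = scale * r2_acq_mm
    n_r = int((r2 - r1) / dr_grid_mm) + 1    # assumption: truncate, then include both endpoints
    return r1, r2, n_r
```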
The next step is to perform all other reconstruction LUTR computations as before, which involve computation of the channel, range, and line dependent parameters described previously, computed for the trial sound speed dependent range samples.
In block 408, once all the image focus quality parameters have been computed and stored for all the trial sound speeds specified (yes to step 414), the SSC algorithm will perform the following: 1) determine whether the parameters support a robust sound speed estimate, and 2) if so, compute an optimal sound speed estimate.
In the first step, the focus quality parameters computed, along with any pre-specified parameters, should allow the SSC algorithm to determine whether a robust sound speed estimate is possible. This can be accomplished by comparing de-normalized focus quality maximum Qf
In the second step, when it is determined that the focus quality parameters will produce a reliable optimal sound speed estimate, then the question becomes how to compute an optimal sound speed estimate. While selecting the sound speed location of the total focus quality QfT
where ΔQfThreshold
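The two sentences above are truncated in this text, so the following is only a loosely hedged illustration of one estimator consistent with them: rather than simply taking the trial sound speed at the peak of the total focus quality, a weighted average is formed over the trial speeds whose focus quality lies within a threshold (ΔQfThreshold, in dB) of that peak. The threshold value, the flatness check, and the weighting are assumptions.

```python
import numpy as np

def estimate_sound_speed(trial_speeds, qf_total_db, threshold_db=1.0):
    """Return a weighted-average sound speed estimate, or None if the focus
    quality curve is too flat to support a robust estimate."""
    c = np.asarray(trial_speeds, dtype=float)
    qf = np.asarray(qf_total_db, dtype=float)
    if qf.max() - qf.min() < threshold_db:
        return None                                    # no clear peak: decline to update
    near_peak = qf >= qf.max() - threshold_db          # speeds within the threshold of the peak
    weights = qf[near_peak] - (qf.max() - threshold_db)
    return float(np.sum(weights * c[near_peak]) / np.sum(weights))
```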
Block 409 generates the receive reference delay profiles using the estimated body sound speed. Block 410 saves the receive delay profiles. Block 411 involves re-running the STC algorithm to produce new timing parameters, generating the reconstruction look-up table LUTR, updating any GUI parameters needed, and finally, returning the user to live imaging.
There are other ways in which to compute the focus quality parameters. For example, averaging over normalized spatial frequency may be performed prior to averaging over range, producing a range dependent focus quality function Qf(r,c). The computation of the focus quality parameters Qf
There are other uses for the focus quality parameters. For example, the range variation in the focus quality Qf(r,c) can potentially be used to infer the depth dependence of sound speed. Another example is potentially using the focus quality variation with sound speed to determine the sound speed homogeneity of the body and alter imaging parameters to improve the image under these conditions, such as changing frequency, etc.
It is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
Number | Date | Country |
---|---|---|
60701635 | Jul 2005 | US |