The present invention relates to generating ultrasound volume rendered images at a higher rate than that at which the underlying 3D ultrasound data is acquired. In particular, the present invention relates to generating volume rendered images at higher rates by incorporating new ultrasound data into the 3D data set as soon as it is acquired and re-projecting at higher rates.
Three-dimensional ultrasound imaging, both single sweep (3D) and real-time (commonly known as 4D or Live 3D), is becoming more and more prevalent on modern ultrasound systems. Clinically, it is used in many applications, including OB (for example, for baby faces and for diagnosis of congenital defects), Cardiac (for example, for quantitative assessment of ejection fraction and for visualization of cardiac function), and others.
Real-time (Live) 3D involves acquisition and display of a full volume of data at a rate fast enough for the 3D display to show 3D rendered images or multiple slices at a clinically useful rate. Capture of 3D data for general imaging applications is done using motorized or 2D array transducers.
Ultrasound volume rendered images are generated by projecting a 3D data set onto a 2D surface. These images are typically generated at the same rate at which the underlying 3D ultrasound data is acquired, which is limited by acoustic propagation time and/or (for mechanical 3D probes) mechanical limitations. Most clinicians would prefer these rates to be higher.
Motorized acquisition is done by mechanically moving a 1D array under control of the Motor Controller and acquiring beam data. The probe is moved continuously, and scan lines (beams) are acquired during rotation or translation, either from the entire volume or only at sites where multiple 2D slice views are desired. The focal delays, weights and timing for these beams are set by the Front End Controller. Acquisition using 2D array transducers (X-Matrix) is done by steering beams electronically in both azimuth and elevation, again under the control of the Front End Controller and typically also a micro-beamformer within the 2D transducer itself. The RF beams so formed are then fed through a Signal Conditioning module, which typically performs various standard ultrasound signal processing operations such as envelope detection, compression, etc.
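As a rough illustration of the kind of processing performed by such a Signal Conditioning module, the following is a minimal sketch of envelope detection and log compression of beamformed RF scan lines; the function name, parameters and dynamic range are illustrative assumptions, not taken from any particular system.

```python
import numpy as np
from scipy.signal import hilbert

def condition_rf_lines(rf_lines, dynamic_range_db=60.0):
    """Envelope-detect and log-compress beamformed RF scan lines.

    rf_lines: 2D array, shape (num_lines, samples_per_line).
    Returns compressed amplitudes in [0, 1].
    """
    # Envelope detection via the analytic signal (Hilbert transform).
    envelope = np.abs(hilbert(rf_lines, axis=-1))

    # Log compression into the chosen dynamic range.
    envelope = envelope / (envelope.max() + 1e-12)
    db = 20.0 * np.log10(envelope + 1e-12)
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)

# Example: condition 128 simulated RF lines of 2048 samples each.
lines = condition_rf_lines(np.random.randn(128, 2048))
```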
The scan converter of the visualization software generates the volume or slice view frames by assembling the scan lines by position, as shown in
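To illustrate the positional assembly a scan converter performs, here is a minimal sketch for a fan (sector) sweep in which each acquired scan line is mapped onto a Cartesian grid by nearest-neighbour lookup; the geometry, function name and parameters are simplifying assumptions for illustration only.

```python
import numpy as np

def scan_convert_fan(lines, angles_rad, depths_mm, grid_shape=(256, 256)):
    """Nearest-neighbour scan conversion of a fan of scan lines into a
    Cartesian image. angles_rad is assumed sorted in ascending order.

    lines:      (num_lines, num_samples) detected amplitudes
    angles_rad: (num_lines,) steering angle of each line
    depths_mm:  (num_samples,) depth of each sample along a line
    """
    h, w = grid_shape
    max_depth = depths_mm[-1]
    # Cartesian coordinates of every output pixel (x lateral, z depth).
    z = np.linspace(0.0, max_depth, h)[:, None]
    x = np.linspace(-max_depth, max_depth, w)[None, :]
    r = np.sqrt(x**2 + z**2)          # range from the beam apex
    theta = np.arctan2(x, z)          # angle from the central beam axis

    # Map each output pixel onto an acquired line and sample index.
    line_idx = np.clip(np.searchsorted(angles_rad, theta), 0, len(angles_rad) - 1)
    samp_idx = np.clip((r / max_depth * (len(depths_mm) - 1)).astype(int),
                       0, len(depths_mm) - 1)
    image = lines[line_idx, samp_idx]
    image[r > max_depth] = 0.0        # blank pixels outside the scanned sector
    return image

# Example: 96 lines spanning +/-30 degrees, 512 samples to 150 mm depth.
img = scan_convert_fan(np.random.rand(96, 512),
                       np.linspace(-np.pi / 6, np.pi / 6, 96),
                       np.linspace(0.0, 150.0, 512))
```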
In most cases ultrasound 3D or 4D views, known as rendered views, are generated by projecting the entire volume of data along rays in the direction of a viewpoint onto a 2D plane. Controls can be manipulated to adjust the viewpoint direction, transparency and texture of the volume, as well as to trim and sculpt away outer regions to better view interior regions. The result is a “3D image”, which provides qualitative visualization of the volume. While specific implementations differ, volume rendering approximates the propagation of light (or ultrasound) through a semi-opaque volume. The basic steps of all volume-rendering algorithms consist of assigning colors and opacities to each sample in the volume, projecting the samples along linear rays onto a 2D image, and accumulating the samples projected along each ray. This process is shown in
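To make the projection-and-accumulation step concrete, here is a minimal sketch of front-to-back ray casting along one axis of a scan-converted volume; it assumes opacity is derived directly from sample intensity and is illustrative only, not the renderer of any particular system.

```python
import numpy as np

def volume_render(volume, opacity_scale=0.05):
    """Front-to-back compositing of a scalar volume along its first axis.

    volume: 3D array (depth, rows, cols) of intensities in [0, 1].
    Returns a 2D rendered image (rows, cols).
    """
    color = np.zeros(volume.shape[1:])
    remaining = np.ones(volume.shape[1:])    # transmittance still available per ray
    for sample in volume:                    # step along each ray, front to back
        alpha = np.clip(sample * opacity_scale, 0.0, 1.0)  # opacity from intensity
        color += remaining * alpha * sample                 # accumulate weighted sample
        remaining *= (1.0 - alpha)
        if remaining.max() < 1e-3:           # early ray termination
            break
    return color

# Example: render a random 64x128x128 volume viewed along its first axis.
img = volume_render(np.random.rand(64, 128, 128))
```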
One limitation of existing ultrasound systems operating in real-time 3D is that the volume rendered images are typically generated at the same rate at which the underlying 3D ultrasound data is acquired; that is, the visualization rate is the same as the acquisition rate. For a large field of view (especially in OB and General Imaging) and for an acceptable image quality, a very large number of acoustic scan lines must be acquired in order to adequately sample the volume, resulting in acquisition rates that may be as low as a few Hz. This is true even for a Matrix (i.e. 2D) array. Since the visualization rate is the same as the acquisition rate, this creates a problem for the user who is trying to interact, in real time, with the anatomy being visualized. One way to improve the volume rates is to acquire less data, but this sacrifices field of view, image quality, or both.
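As a rough illustration of why acquisition rates can fall to a few Hz, the following back-of-the-envelope calculation assumes a hypothetical 60 x 60 line volume and a 16 cm imaging depth, with round-trip acoustic propagation time setting the minimum time per scan line.

```python
# Hypothetical example: volume acquisition rate limited by acoustic propagation.
speed_of_sound_m_s = 1540.0           # nominal speed of sound in soft tissue
depth_m = 0.16                        # 16 cm imaging depth (assumed)
lines_per_volume = 60 * 60            # 60 x 60 scan lines per volume (assumed)

time_per_line_s = 2.0 * depth_m / speed_of_sound_m_s   # round trip per scan line
volume_time_s = lines_per_volume * time_per_line_s
print(f"time per line: {time_per_line_s * 1e6:.0f} us")   # ~208 us
print(f"volume rate:   {1.0 / volume_time_s:.1f} Hz")     # ~1.3 Hz
```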
It would be desirable to provide ultrasound volume images generated at a higher rate while avoiding the drawbacks of the aforementioned prior art.
Real-time spatial compounding (known as SonoCT at Philips), which involves averaging ultrasound data obtained from multiple, overlapping 2D images acquired from different angles, has a similar problem in that a large amount of acoustic data is required to generate one complete compounded image, so in effect the compounded frame rate is low. However, experience from SonoCT has shown that the user experience is much improved if the compounded images are updated as soon as any new information arrives; specifically, the compounded image is updated each time a new component frame (i.e. one steering angle) is acquired, as opposed to waiting for an entire compound sequence (see U.S. Pat. No. 6,126,599). Essentially, we are presenting the compounded images at the component frame rate instead of the compound frame rate, and these are similar to the images that would be obtained if one could interpolate perfectly between the truly independent compounded images. The user typically perceives the frame rate to be about 2× the actual compounded rate. Another advantage is reduced latency, since the user sees new information at the rate of the component frames instead of at the fully compounded image rate.
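A minimal sketch of this incremental-update idea is given below: the displayed compound image is refreshed each time a single component frame (one steering angle) arrives, by replacing only that angle's contribution to a running average. Frame registration and warping are omitted, and the class and parameter names are illustrative assumptions.

```python
import numpy as np

class IncrementalCompounder:
    """Keep the most recent frame from each steering angle and display their
    average, refreshed every time any one component frame is replaced."""

    def __init__(self, num_angles, frame_shape):
        self.frames = np.zeros((num_angles,) + frame_shape)

    def update(self, angle_index, new_frame):
        self.frames[angle_index] = new_frame   # replace that angle's contribution
        return self.frames.mean(axis=0)        # compounded image at component rate

# Example: 5 steering angles, 200x200 frames, display refreshed per component frame.
compounder = IncrementalCompounder(5, (200, 200))
for i in range(12):
    display = compounder.update(i % 5, np.random.rand(200, 200))
```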
Since volume projection is a very similar concept to the frame averaging used in SonoCT, these same benefits can be transferred to 3D volume rendered imaging by updating the rendered image as each component 2D slice is obtained, or at some other intermediate rate. The idea is to update the volume rendered image at a rate that is determined by clinical need and processing power, instead of at the 3D volume acquisition rate.
The present invention provides for generating ultrasound volume images at a higher rate by generating rendered images at the rate of the acquired 2D frames, or at some intermediate rate, rather than at the rate of the acquired 3D volumes.
a illustrates a conventional 3D scan conversion for a linear sweep;
b illustrates a conventional 3D scan conversion for a fan sweep;
Referring to the drawings,
Volume rendering at the 2D acquisition frame rate results in rendered volumes that have much image data in common (only 1 out of N 2D frames is new), so the successive rendered images look very similar; in practice, therefore, it is more likely that volume rendering will occur at a rate somewhere between the 2D acquisition rate (1/t) and the 3D acquisition rate (1/Nt). Volume rendering at the full 2D acquisition rate may also exceed the system processing resources, since volume rendering is quite processing intensive. Experience from SonoCT suggests that a volume rendering rate of around 2/(Nt), i.e. twice the acquired volume rate, may represent a good compromise.
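One possible way to express such an intermediate-rate policy is sketched below; the function and its parameters are hypothetical and simply trigger a re-projection a fixed number of times per acquired volume.

```python
def should_render(frames_acquired, frames_per_volume, renders_per_volume=2):
    """Trigger a re-projection 'renders_per_volume' times per acquired volume;
    renders_per_volume=2 corresponds to the 2/(Nt) compromise described above."""
    interval = max(1, frames_per_volume // renders_per_volume)
    return frames_acquired % interval == 0
```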
This concept requires a 3D volume buffer (10), as shown in
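Below is a minimal sketch of such a volume buffer, assuming a linear sweep of N parallel 2D frames in which each newly acquired frame simply overwrites the slice stored at its position, so that the volume (with slices of mixed acquisition age) can be re-projected at the chosen intermediate rate. The projection here is a stand-in average-intensity projection rather than a full renderer, and all names are illustrative assumptions.

```python
import numpy as np

class VolumeBuffer:
    """Hold the most recent 2D frame for each of the N slice positions so the
    whole volume can be re-projected whenever the display needs updating."""

    def __init__(self, num_slices, frame_shape):
        self.volume = np.zeros((num_slices,) + frame_shape)
        self.frames_acquired = 0

    def insert(self, slice_index, frame):
        self.volume[slice_index] = frame     # newest data replaces the oldest
        self.frames_acquired += 1

    def project(self):
        # Stand-in for a full volume renderer: average-intensity projection
        # along the sweep direction.
        return self.volume.mean(axis=0)

# Example: N = 40 slices of 128x128; re-project twice per swept volume (~2/(Nt)).
buf = VolumeBuffer(40, (128, 128))
render_interval = 40 // 2
for k in range(200):                          # simulated acquisition loop
    buf.insert(k % 40, np.random.rand(128, 128))
    if buf.frames_acquired % render_interval == 0:
        image = buf.project()                 # rendered view includes the latest frames
```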
Thus, the present invention provides a method and system for modifying the typical 3D volume rendering process so that rendered images are updated as each new 2D frame is acquired, or at some intermediate rate, rather than only once per acquired 3D volume.
One issue is the risk of “tears” between parts of the volume that have been acquired at different times. This can be mitigated by always projecting at, or close to, right angles to the 2D sweep direction, in which case any artifacts will be no worse than they would be in the projected views that would normally (i.e. at the acquisition volume rate) be displayed. On a Matrix array, this is easy to ensure for projections that are not directly along the beam axis since, in principle, 2D slices can be swept in any orientation as long as the apex of the beams is at the transducer.
The present invention can run on any ultrasound system that supports real-time 3D imaging and is therefore not limited to any one ultrasound system. By way of illustrative example and not limitation, the present invention can run on the following ultrasound systems: Philips iU22; Philips iE33; GE Logiq 9; GE Voluson; Siemens Antares; and Toshiba Aplio.
While presently preferred embodiments have been described for purposes of the disclosure, numerous changes in the arrangement of method steps and apparatus parts can be made by those skilled in the art. Such changes are encompassed within the spirit of the invention as defined by the appended claims.
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/IB06/54722 | 12/8/2006 | WO | 00 | 6/13/2008

Number | Date | Country
---|---|---
60/750,752 | Dec 2005 | US