The present invention relates to diagnostic ultrasound methods and systems. In particular, the present invention relates to methods and systems for visualizing ultrasound data sets on a model.
Numerous ultrasound methods and systems exist for use in medical diagnostics. Various features have been proposed to facilitate patient examination and diagnosis based on ultrasound images of the patient. For example, certain systems offer various techniques for obtaining volume rendered data. Systems have been developed to acquire information corresponding to a plurality of two-dimensional representations or image planes of an object for three-dimensional reconstruction and surface modeling.
Heretofore, quantitative object timing data has not been presented in association with areas of a surface model. In the past, ultrasound methods and systems were unable to present quantitative timing data with surface model rendering techniques.
A need exists for improved methods and systems that are able to implement surface model rendering techniques for the visualization of quantitative data.
An ultrasound method for visualization of quantitative data on a surface model is provided. The ultrasound method acquires ultrasound information from an object. The information acquired defines ultrasound images along at least first and second scan planes through the object and is stored in a buffer memory. The method then constructs a surface model of the object based on the ultrasound information. Timing information associated with local areas on the object is determined. The surface model and timing information are displayed with the timing information being positioned proximate regions of the surface model corresponding to local areas on the object.
In accordance with an alternative embodiment, an ultrasound system is provided that includes a probe to acquire ultrasound information from an object and a memory for storing the ultrasound information along at least first and second scan planes through the object. A processor for constructing a surface model of the object based on the ultrasound information is included. The processor determines timing information associated with local areas on the object. The system includes a display for presenting a surface model and the timing information, with the timing information being positioned proximate regions of the surface model corresponding to the local areas on the object.
The beamformer 110 delays, apodizes and sums each electrical signal with other electrical signals received from the array transducer 106. The summed signals represent echoes from the ultrasound beams or lines. The summed signals are output from the beamformer 110 to an RF processor 112. The RF processor 112 may generate in-phase and quadrature (I and Q) information. Alternatively, real value signals may be generated from the information received from the beamformer 110. The RF processor 112 gathers information (e.g., I/Q information) related to one frame and stores the frame information, with time stamp and orientation/rotation information, in an image buffer 114. Orientation/rotation information may indicate the angular rotation one frame makes with another. For example, in a tri-plane situation, whereby ultrasound information is acquired simultaneously for three differently oriented planes or views, one frame may be associated with an angle of 0 degrees, another with an angle of 60 degrees, and a third with an angle of 120 degrees. Thus, frames may be added to the image buffer 114 in a repeating order of 0 degrees, 60 degrees, 120 degrees, 0 degrees, 60 degrees, 120 degrees, and so on. The first and fourth frames in the image buffer 114 have a first common planar orientation, the second and fifth frames have a second common planar orientation, and the third and sixth frames have a third common planar orientation. Alternatively, in a biplane situation, the RF processor 112 may collect frame information and store the information in a repeating frame orientation order of 0 degrees, 90 degrees, 0 degrees, 90 degrees, and so on. The frames of information stored in the image buffer 114 are processed by the 2D display processor 116. Other acquisition strategies may include multi-plane variations of interleaving and frame rate decimation. The multi-plane acquisition may also be rotated to obtain higher spatial resolution by combining data from several heart beats.
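The repeating orientation order of frames in the image buffer 114 can be illustrated with a short sketch. The following Python fragment is a minimal illustration only; the Frame structure, field names, and angle sequence are assumptions for demonstration rather than elements of the embodiment.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    """One frame of echo data with its acquisition metadata."""
    iq_data: object        # stand-in for the I/Q samples of the frame
    time_stamp: float      # acquisition time of the frame
    orientation: float     # angular rotation of the plane, in degrees

class ImageBuffer:
    """FIFO buffer storing frames in a repeating orientation order."""
    def __init__(self, angles=(0.0, 60.0, 120.0)):
        self.angles = angles      # tri-plane: 0, 60, and 120 degrees
        self.frames = deque()
        self._next = 0            # index of the next angle in the cycle

    def push(self, iq_data, time_stamp):
        # Tag each incoming frame with the next angle in the repeating cycle.
        angle = self.angles[self._next]
        self.frames.append(Frame(iq_data, time_stamp, angle))
        self._next = (self._next + 1) % len(self.angles)

buf = ImageBuffer()
for t in range(6):
    buf.push(iq_data=None, time_stamp=0.01 * t)
# The first and fourth frames share one orientation, the second and
# fifth another, and the third and sixth a third, as described above.
print([f.orientation for f in buf.frames])  # [0.0, 60.0, 120.0, 0.0, 60.0, 120.0]
```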
The 2D display processors 116, 118, and 120 operate alternately and successively, in round-robin fashion, processing image frames from the image buffer 114. The display processors 116, 118, and 120 may have access to all of the data slices in the image buffer 114, but each is configured to operate only upon data slices having one angular orientation. For example, the display processor 116 may only process image frames from the image buffer 114 associated with an angular rotation of 0 degrees. Likewise, the display processor 118 may only process 60 degree oriented frames and the display processor 120 may only process 120 degree oriented frames.
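Continuing the buffer sketch above, the round-robin division of labor can be expressed as each display processor filtering the shared buffer for its own orientation. The function name and tolerance below are illustrative assumptions.

```python
def frames_for_processor(buffer, angle, tolerance=1e-6):
    """Yield only the frames whose planar orientation matches `angle`.

    Mirrors the described behavior: every display processor can see the
    whole buffer but operates only on frames of one angular orientation.
    """
    for frame in buffer.frames:
        if abs(frame.orientation - angle) < tolerance:
            yield frame

view_0   = list(frames_for_processor(buf, 0.0))    # cf. 2D display processor 116
view_60  = list(frames_for_processor(buf, 60.0))   # cf. 2D display processor 118
view_120 = list(frames_for_processor(buf, 120.0))  # cf. 2D display processor 120
```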
The 2D display processor 116 may process a set of frames having a common orientation from the image buffer 114 to produce a 2D image or view of the scanned object in a quadrant 126 of a computer display 124. The sequence of image frames played in the quadrant 126 may form a cine loop. Likewise, the display processor 118 may process a set of frames from the image buffer 114 having a common orientation to produce a second different 2D view of the scanned object in a quadrant 130. The display processor 120 may process a set of frames having a common orientation from the image buffer 114 to produce a third different 2D view of the scanned object in a quadrant 128.
For example, the frames processed by the display processor 116 may produce an apical 2-chamber view of the heart to be shown in the quadrant 126. Frames processed by the display processor 118 may produce an apical 4-chamber view of the heart to be shown in the quadrant 130. The display processor 120 may produce frames to form an apical long-axis (APLAX) view of the heart to be shown in the quadrant 128. All three views of the human heart may be shown simultaneously in real time in the three quadrants 126, 128, and 130 of the computer display 124.
A 2D display processor, for example the processor 116, may perform filtering of the frame information received from the image buffer 114, as well as processing of the frame information, to produce a processed image frame. Some forms of processed image frames may be B-mode data (e.g., echo signal intensity or amplitude) or Doppler data. Examples of Doppler data include color flow data, color power Doppler data, and Doppler tissue data. The display processor 116 may then perform scan conversion to map data from a polar to a Cartesian coordinate system for display on the computer display 124.
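Scan conversion from a polar (depth, beam angle) grid to a Cartesian raster can be sketched as follows. This is a minimal nearest-neighbour illustration assuming a symmetric sector; the grid sizes, sector width, and helper names are assumptions, not values from the embodiment.

```python
import numpy as np

def scan_convert(polar_frame, depths, angles_rad, out_shape=(256, 256)):
    """Map a frame sampled on a polar (depth, angle) grid onto a
    Cartesian raster using nearest-sample lookup."""
    h, w = out_shape
    max_depth = depths[-1]
    half_width = max_depth * np.sin(angles_rad[-1])
    x = np.linspace(-half_width, half_width, w)   # lateral position
    z = np.linspace(0.0, max_depth, h)            # depth below the probe
    X, Z = np.meshgrid(x, z)
    R = np.hypot(X, Z)                            # radial distance of each pixel
    TH = np.arctan2(X, Z)                         # beam angle of each pixel
    # Approximate nearest polar sample for each Cartesian pixel.
    ri = np.clip(np.searchsorted(depths, R.ravel()), 0, len(depths) - 1)
    ti = np.clip(np.searchsorted(angles_rad, TH.ravel()), 0, len(angles_rad) - 1)
    img = polar_frame[ri, ti].reshape(out_shape)
    # Blank pixels lying outside the imaged sector.
    img[(R > max_depth) | (np.abs(TH) > angles_rad[-1])] = 0.0
    return img

# Toy B-mode frame: 200 depth samples along 64 beams spanning +/- 30 degrees.
depths = np.linspace(0.0, 0.12, 200)             # metres
angles = np.linspace(-np.pi / 6, np.pi / 6, 64)  # radians
frame = np.random.rand(200, 64)                  # stand-in echo amplitudes
cartesian = scan_convert(frame, depths, angles)
```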
Optionally, a 3D display processor 122 may be provided to process the outputs from the other 2D display processors 116, 118, and 120. Processor 122 may combine the 3 views produced from 2D display processors 116, 118, and 120 to form a tri-plane view in a quadrant 132 of the computer display 124. The tri-plane view may show a 3D image, e.g. a 3D image of the human heart, aligned with respect to the 3 intersecting planes of the tri-plane. In one embodiment, the 3 planes of the tri-plane intersect at a common axis of rotation.
A user interface 134 is provided which allows the user to input scan parameters 136. The scan parameters 136 may allow the user to designate the number of planes desired in the scan. The scan parameters may also allow for adjusting the depth and width of the scan of the object for each of the planes of the tri-plane. When performing simultaneous acquisition of scan data from three planes, the beamformer 110, in conjunction with the transmitter 102, signals the array transducer 106 to produce ultrasound beams that are focused within and adjacent to the three planes that slice the scan object. The reflected ultrasound echoes are gathered simultaneously to produce image frames that are stored in the image buffer 114. As the image buffer 114 is being filled by the RF processor 112, it is being emptied by the 2D display processors 116, 118, and 120. The 2D display processors 116, 118, and 120 form the data for viewing as 3 views of the scan object in the corresponding computer display quadrants 126, 130, and 128. The display of the 3 views in the quadrants 126, 130, and 128, as well as the optional display of the combination of the 3 views in the quadrant 132, occurs in real time; that is, the scan data is used for display as soon as it becomes available.
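The real-time relationship between the RF processor filling the image buffer 114 and the display processors emptying it is a producer-consumer pattern. The following sketch is illustrative only, assuming a bounded queue and a sentinel value to mark the end of acquisition; it is not the embodiment's implementation.

```python
import queue
import threading
import time

image_buffer = queue.Queue(maxsize=32)   # stands in for the image buffer 114

def rf_processor():
    """Producer: fills the buffer with orientation-tagged frames (cf. RF processor 112)."""
    for i in range(9):
        angle = (0, 60, 120)[i % 3]
        image_buffer.put({"angle": angle, "time": time.time(), "data": None})
    image_buffer.put(None)               # sentinel: acquisition finished

def display_processors():
    """Consumer: empties the buffer as soon as frames become available,
    so the views are rendered in real time rather than in a batch."""
    while True:
        frame = image_buffer.get()
        if frame is None:
            break
        print(f"render {frame['angle']:3d} degree view")

threading.Thread(target=rf_processor).start()
display_processors()
```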
The probe 202 is moved, such as along a linear or arcuate path, while scanning a region of interest (ROI). At each linear or arcuate position, the probe 202 obtains scan planes 210. Alternatively, a matrix array transducer probe 202 with electronic beam steering may be used to obtain the scan planes 210 without moving the probe 202. The scan planes 210 are collected for a thickness, such as from a group or set of adjacent scan planes 210. The scan planes 210 are stored in the memory 212, and then passed to a volume scan converter 214. In some embodiments, the probe 202 may obtain lines instead of the scan planes 210, and the memory 212 may store lines obtained by the probe 202 rather than the scan planes 210; the volume scan converter 214 may likewise process lines rather than the scan planes 210. The volume scan converter 214 receives a slice thickness setting from a control input 216, which identifies the thickness of a slice to be created from the scan planes 210. The volume scan converter 214 creates a 2D frame from multiple adjacent scan planes 210. The frame is stored in slice memory 218 and is accessed by a surface rendering processor 220. The surface rendering processor 220 performs surface rendering upon the frame at a point in time by interpolating the values of adjacent frames. The output of the surface rendering processor 220 is passed to the video processor 222 and the display 224.

The position of each echo signal sample (voxel) is defined in terms of geometrical accuracy (i.e., the distance from one voxel to the next) and ultrasonic response (and values derived from the ultrasonic response). Suitable ultrasonic responses include gray scale values, color flow values, tissue velocity, strain rate, and angio or power Doppler information, and combinations thereof. B-mode data may be utilized to outline the model. The surface of the model is defined through surface rendering. Once the surface of the model is defined, quantitative information is then mapped onto the surface. The mapping operation may be achieved between frames by interpolation of adjacent frames or planes at different depths in first and second scan planes that intersect with one another along a common axis.
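One simple way to realize the interpolation of adjacent frames mentioned above is a linear blend between two neighbouring planes; the sketch below is a minimal illustration under that assumption.

```python
import numpy as np

def interpolate_between_planes(plane_a, plane_b, frac):
    """Linearly interpolate a virtual plane between two adjacent scan planes.

    `frac` in [0, 1] selects the position of the virtual plane, with
    frac=0 coinciding with plane_a and frac=1 with plane_b.
    """
    return (1.0 - frac) * plane_a + frac * plane_b

a = np.array([[0.0, 2.0], [4.0, 6.0]])   # toy voxel values in one plane
b = np.array([[1.0, 3.0], [5.0, 7.0]])   # toy voxel values in the adjacent plane
print(interpolate_between_planes(a, b, 0.25))  # values a quarter of the way to b
```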
At 304, acquired ultrasound information is stored in the data memory 212.
At 306, a model, for example a 3D surface model, a bull's eye model, or a heart mitral valve (MV) ring model, is constructed of the object based on the ultrasound information acquired. An outline of the object may be manually determined by mouse clicking on a series of points (or by mouse drawing of contours) in one of the planar images of the object at predefined times in the cyclical motion of the object. The mouse may be part of the user interface 134. A manual determination of the outline may be done in each of the three data planes. In an alternative embodiment, the outline of the object/landmark may be determined automatically within the data planes by the 3D display processor 122.
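Because the tri-plane views share a common axis of rotation, a contour traced in one planar image can be lifted into 3D by rotating its in-plane coordinates about that axis; the pooled points from all three planes then form a sparse cloud through which a surface can be fitted. The sketch below illustrates this under that geometric assumption; the coordinate conventions are assumptions for demonstration.

```python
import numpy as np

def contour_to_3d(points_2d, plane_angle_deg):
    """Lift a contour traced in one image plane into 3D.

    points_2d: (N, 2) array of (lateral, depth) coordinates clicked in the
    planar image. The plane is assumed to contain the common axis of
    rotation (the depth axis) and to be rotated about it by plane_angle_deg.
    """
    a = np.deg2rad(plane_angle_deg)
    lateral, depth = points_2d[:, 0], points_2d[:, 1]
    return np.stack([lateral * np.cos(a),   # x
                     lateral * np.sin(a),   # y
                     depth], axis=1)        # z along the axis of rotation

# Contours (e.g., mouse-clicked points) from the three tri-plane views.
contour = np.array([[-2.0, 1.0], [0.0, 0.5], [2.0, 1.0]])
cloud = np.vstack([contour_to_3d(contour, a) for a in (0.0, 60.0, 120.0)])
# `cloud` is a sparse 3D point cloud through which a surface model is fitted.
```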
At 308, quantitative object timing information, such as one of tissue velocity, displacement, strain, and strain rate, associated with local areas on the object is determined. The object timing information associated with the local areas is relative to a reference time for the local areas. In the example of the human heart, the reference time may be the QRS point in the heart cycle. The timing information defines a time from the reference time to when the local area reaches a particular state in a series of states through which the object cycles. Quantitative object timing information may be used to detect malfunctioning tissues of the object. For example, an area of tissue in the human heart may be found to lag surrounding tissue areas in the time from QRS to peak velocity. The lag in time to reach peak velocity may indicate the presence of diseased tissue.
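As a concrete illustration of timing information relative to a reference time, the fragment below computes the time from the QRS reference to peak tissue velocity for one local area. The trace and reference time are toy values, not data from the embodiment.

```python
import numpy as np

def time_to_peak(velocity_trace, times, t_qrs):
    """Time from the QRS reference to peak tissue velocity.

    velocity_trace: sampled tissue velocity for one local area.
    times: acquisition time of each sample (same length as the trace).
    t_qrs: time of the QRS reference point in the cycle.
    Only samples at or after the QRS are considered.
    """
    mask = times >= t_qrs
    peak_idx = np.argmax(velocity_trace[mask])
    return times[mask][peak_idx] - t_qrs

times = np.linspace(0.0, 0.8, 81)                 # one cardiac cycle, in seconds
velocity = np.sin(2 * np.pi * times / 0.8)        # toy tissue-velocity trace
print(time_to_peak(velocity, times, t_qrs=0.05))  # delay from QRS to peak velocity
```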
At 310, a model, e.g., a 3D surface model, bull's eye surface model, or mitral valve ring surface model, and the object timing information are displayed. The displayed timing information is positioned proximate regions of the model corresponding to the local areas on the object. The timing information may constitute at least one of color coding of, and vector indicia on, the regions of the model. Color coding with a range of colors indicating a range from normal to abnormal may be used to visualize the desired parameter of quantitative object timing data, e.g., time to peak tissue velocity or time to peak strain. Such color coding may visually identify asynchronous areas of tissue, for example in the heart, that are unhealthy.
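A minimal sketch of such color coding follows: a time-to-peak delay is mapped onto a green-to-red scale, green indicating early (normal) motion and red indicating delayed (abnormal) motion. The threshold values are illustrative assumptions only.

```python
import numpy as np

def delay_to_color(delay, normal=0.15, abnormal=0.40):
    """Map a time-to-peak delay in seconds onto an (R, G, B) color.

    Delays at or below `normal` map to pure green (early motion); delays
    at or above `abnormal` map to pure red (strongly delayed motion);
    intermediate delays blend between the two.
    """
    t = float(np.clip((delay - normal) / (abnormal - normal), 0.0, 1.0))
    return (t, 1.0 - t, 0.0)    # (red, green, blue), each component in [0, 1]

print(delay_to_color(0.10))  # (0.0, 1.0, 0.0): green, early motion
print(delay_to_color(0.30))  # reddish-orange blend, delayed motion
print(delay_to_color(0.50))  # (1.0, 0.0, 0.0): red, strongly delayed motion
```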
The geometrical surface model 408 may be color coded to present the quantitative object timing data determined at 308 and displayed at 310. For example, a portion 430 of the surface model 408 may be colored red-orange to indicate mapped tissue synchronization imaging (TSI) data for tissue with delayed motion, while the rest of the surface is colored green to indicate tissue with early motion.
Color coding is accomplished according to 308 and 310.
The tissue delay of different regions of interest may be compared to identify a degree of delay. For example, ROIs corresponding to segments within the left ventricle may be compared to identify the most delayed segment. Similarly, segments may be compared between the left and right ventricles.
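Identifying the most delayed segment reduces to taking the maximum over the per-segment delays; the sketch below assumes per-segment time-to-peak values have already been computed, and the segment names and delays shown are illustrative only.

```python
def most_delayed_segment(segment_delays):
    """Return the name of the segment with the largest time-to-peak delay.

    segment_delays: mapping of segment name -> delay in seconds.
    """
    return max(segment_delays, key=segment_delays.get)

lv_delays = {"septal": 0.12, "lateral": 0.31, "anterior": 0.18, "inferior": 0.22}
print(most_delayed_segment(lv_delays))  # 'lateral': candidate pacing-lead site
```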
The system and method described herein include automatic detection of peaks, zero-crossings or other features of tissue velocity, displacement, strain rate and strain data as a function of time. By processing only the image frames within the selected time interval, the processing time is shortened and the possibility of false positives, such as may occur when an incorrect peak is identified, is lowered. The system and method color code the delay of samples in the image in relation to the onset of the QRS, and present the data as a parametric image, both in live display and in replay. Thus, heart segments or other selected tissue with delayed motion may be more easily visualized than with other imaging modes. Therefore, patients who will respond favorably to cardiac resynchronization therapy (CRT) may be more easily selected, and the optimal position for the left ventricle pacing lead of a cardiac pacemaker may be located by identifying the most delayed site within the left ventricle. Furthermore, the effect of various pacemaker settings, such as AV-delay and VV-delay, may be studied to find the optimal settings.
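The benefit of restricting feature detection to a selected time interval can be illustrated as follows; the fragment detects the peak and zero-crossings of a trace only within the interval, which both shortens processing and avoids false positives from peaks outside it. Function names and the toy trace are assumptions for demonstration.

```python
import numpy as np

def features_in_interval(trace, times, t_start, t_end):
    """Detect the peak time and zero-crossing times of a trace,
    restricted to the interval [t_start, t_end]."""
    mask = (times >= t_start) & (times <= t_end)
    t, v = times[mask], trace[mask]
    peak_time = t[np.argmax(v)]
    # A sign change between consecutive samples marks a zero-crossing.
    crossings = t[:-1][np.signbit(v[:-1]) != np.signbit(v[1:])]
    return peak_time, crossings

times = np.linspace(0.0, 0.8, 161)
trace = np.sin(2 * np.pi * times / 0.4)   # toy tissue-velocity trace
print(features_in_interval(trace, times, 0.05, 0.35))
```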
Optionally, the model need not be a surface model. Instead, the model may be a splat rendering, a bull's eye model, a mitral valve ring model, and the like.
While the invention has been described in terms of various specific embodiments, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the claims.
This application claims priority to and the benefit of the filing date of U.S. Provisional Application No. 60/581,675 filed on Jun. 22, 2004 and which is hereby incorporated by reference in its entirety.