1. Field of the Invention
This invention relates generally to medical imaging systems. More specifically, this invention relates to high speed graphics processing, for example, for rendering and displaying ultrasound image data on a display.
2. Related Art
Doctors and technicians commonly employ medical imaging systems to obtain, display, and study anatomical images for diagnostic purposes. In ultrasound imaging systems, for example, a doctor may obtain heart images in an attempt to learn whether the heart functions properly. In recent years, these imaging systems have become very powerful, and often include high density ultrasound probes capable of obtaining high resolution images of a region of interest.
It would be beneficial in many instances for a doctor, using such probes, to view a rapid or real-time image sequence of a three dimensional region over a significant section of anatomy. However, preparing and displaying such images has typically been a time-consuming and difficult task for the imaging system. In order to prepare and display the images, the imaging system must analyze a vast amount of complex data obtained during the examination, determine how to render the data in three dimensions, and convert that data into a form suitable for the attached display.
As a result, imaging systems have typically spent a relatively large percentage of time and processing power to render and display images. In a sophisticated imaging system, such processing power could instead be applied to many other tasks, for example, presenting a more user-friendly interface and responding more quickly to commands. Furthermore, the amount of time and processing power required to render and display the images has limited the amount and sophistication of rendering and other display options that could be applied while still maintaining a suitable frame rate.
Therefore, there is a need for systems and methods that address the difficulties set forth above and others previously experienced.
In one embodiment, graphics processing circuitry for a medical imaging system includes a graphics processing unit, a system interface coupled to the graphics processing unit, and a graphics memory coupled to the graphics processing unit. The graphics memory holds an image data block, a vertex data block, and rendering plane definitions. The image data block stores image data entries for at least one imaging beam and the vertex data block stores vertex entries that define rendering shapes. The graphics processing unit accesses the image data entries and vertex entries to render a volume according to the rendering plane definitions.
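By way of illustration only, the following C sketch shows one possible organization of the data blocks described above; the field names, types, and sizes are assumptions made for the sake of the example, not limitations of the circuitry.

```c
/* Hypothetical layout of the graphics memory contents described above.
 * All names and types are illustrative assumptions. */
#include <stddef.h>

typedef struct {
    float x, y, z;   /* spatial location of the vertex                 */
    float u, v, w;   /* texture location: an index into the image data */
} VertexEntry;

typedef struct {
    unsigned char *image_data;      /* sample values for each imaging beam */
    size_t         image_size;
    VertexEntry   *vertex_data;     /* vertices defining rendering shapes  */
    size_t         vertex_count;
    float          plane_normal[3]; /* one rendering plane definition:     */
    float          plane_offset;    /* normal direction plus offset        */
} GraphicsMemoryLayout;
```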
Other systems, methods, features and advantages of the invention will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the imaging systems and methods. In the figures, like reference numerals designate corresponding parts throughout the different views.
The ultrasound system 100 includes a transmitter 102 that drives an image sensor, such as the ultrasound probe 104. The ultrasound probe 104 includes an array of transducer elements 106 that emit pulsed ultrasonic signals into a region of interest 108 (e.g., a patient's chest). In some examinations, the probe 104 may be moved over the region of interest 108, or the beamformer 114 may steer ultrasound beams, in order to acquire image information over the scan planes 110, 111 across the region of interest 108. Each scan plane may be formed from multiple adjacent beams (two of which are labeled 140, 142).
The transducer array 106 may conform to one of many geometries, as examples, a 1D, 1.5D, 1.75D, or 2D probe. The probe 104 is one example of an image sensor that may be used to acquire imaging signals from the region of interest 108. Other examples of image sensors include solid state X-ray detectors, image intensifier tubes, and the like. Structures in the region of interest 108 (e.g., a heart, blood cells, muscular tissue, and the like) back-scatter the ultrasonic signals. The resultant echoes return to the transducer array 106.
In response, the transducer array 106 generates electrical signals that the receiver 112 receives and forwards to the beamformer 114. The beamformer 114 processes the signals for steering, focusing, amplification, and the like. The resulting radio frequency (RF) signal passes through the RF processor 116 or a complex demodulator (not shown) that demodulates the RF signal to form in-phase and quadrature (I/Q) data pairs representative of the echo signals, or multiple individual values obtained from amplitude detection circuitry. The RF or I/Q signal data may then be routed directly to the sample memory 118.
The ultrasound system 100 also includes a signal processor 120 to coordinate the activities of the ultrasound system 100, including uploading beam data and rendering parameters to the graphics processing circuitry 138 as explained in more detail below. The graphics processing circuitry 138 stores beam data, vertex data, and rendering parameters that it uses to render image frames and output the display signals that drive the display 126. The display 126 may be, as examples, a CRT or LCD monitor, hardcopy device, or the like.
The signal processor 120 executes instructions out of the program memory 128. The program memory 128 stores, as examples, an operating system 130 for the ultrasound system 100, user interface modules, system operating parameters, and the like. In general, the signal processor 120 performs selected processing operations, chosen from the configured ultrasound modalities present in the imaging system 100, on the acquired ultrasound information. The signal processor 120 may process acquired ultrasound information in real time during a scanning session as the echo signals are received. Additionally or alternatively, the ultrasound information may be stored in the sample memory 118 during a scanning session and processed and displayed later after the examination is complete. In general, the ultrasound system 100 may acquire ultrasound image data at a selected frame rate (e.g., 5-50 2D or 3D images per second) and, by employing the graphics processing circuitry 138, coordinate display of derived 2D or 3D images at the same or a different frame rate on the display 126.
The probe 104 may be used in conjunction with techniques including scanning with a 2D array and mechanical steering of 1-1.75D arrays. The beamformer 114 may steer the ultrasound beams to acquire image data over the entire region of interest 108. As will be explained in more detail below, the probe 104 may acquire image data for a full volume around the region of interest 108, and transfer that data to the graphics processing circuitry 138 for rendering.
When the probe 104 moves, or the beamformer 114 steers firings, along a linear or arcuate path, the probe 104 scans the region of interest 108. At each linear or arcuate position, the probe 104 fires an ultrasound beam into the region of interest 108 to obtain image data for a scan plane 110, 111. Adjacent scan planes may be acquired in order to cover a selected anatomical thickness. An operator may set the thickness by operating the control input 134.
More generally, the probe 104 obtains image components to reconstruct a three dimensional volume. Thus, as one example, the probe 104 may obtain image components in the form of regular sector scan planes that are assembled to form the volume. However, the probe 104 and graphics processing circuitry 138 are not limited to sector scan planes. In general, the probe 104 and graphics processing circuitry 138 may instead obtain and operate on a wide range of image components, including scan planes of different shape, curved surfaces, and the like, to render a complete volume. Thus, although the explanation below refers, for purposes of illustration, to “scan planes”, the methods and systems are more generally applicable to image components that may be assembled to render a three dimensional volume. Further, the graphics processing circuitry 138 may cut away data from one side of a plane. Several such cut-away planes enable a user to remove unwanted volume data.
With regard to FIG. 2, the GPU 202 may be, for example, an NVidia GeForce3™ GPU, or another commercially available graphics processor that supports volume textures. The display interface 206 may be a red, green, blue (RGB) CRT display driver or a digital flat panel monitor driver, as examples. The display interface 206 takes image frames prepared by the GPU 202 that are stored in the frame memory 220 and generates the display control signals to display the image frames on a selected display. The system interface 204 provides a mechanism for communicating with the remainder of the imaging system 100. To that end, the system interface 204 may be implemented as a Peripheral Component Interconnect (PCI) interface, Accelerated Graphics Port (AGP) interface, or the like.
With regard next to FIG. 3, the array 300 includes beam data for four beams numbered zero (0) through three (3). Each beam includes 16 samples along its length, labeled 0 through 15. Each beam has a start point (e.g., the first sample for that beam) and an end point (e.g., a last sample for that beam). The array 300 includes a beam 0 start point 302 (0,0) and a beam 0 end point 304 (0,15), as well as a beam 1 start point 306 (1,0) and a beam 1 end point 308 (1,15). The array 300 also includes a beam 2 start point 310 (2,0) and a beam 2 end point 312 (2,15), as well as a beam 3 start point 314 (3,0) and a beam 3 end point 316 (3,15).
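By way of illustration only, the array 300 may be pictured as a simple two-dimensional array indexed by beam number and sample number; the sketch below (with an assumed sample type) shows how the (beam, sample) pairs above map directly onto array indices.

```c
/* Illustrative sketch of the beam data array 300: four beams of
 * sixteen samples each, indexed as samples[beam][sample]. The
 * (beam, sample) pairs in the text, e.g., the beam 0 start point
 * (0,0) and end point (0,15), map directly to these indices. */
#include <stdio.h>

#define NUM_BEAMS    4
#define NUM_SAMPLES 16

static unsigned char samples[NUM_BEAMS][NUM_SAMPLES];

int main(void)
{
    /* start and end points of beam 1, i.e., entries (1,0) and (1,15) */
    int beam1_start = samples[1][0];
    int beam1_end   = samples[1][NUM_SAMPLES - 1];
    printf("beam 1: start=%d end=%d\n", beam1_start, beam1_end);
    return 0;
}
```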
As will be described in more detail below, the GPU 202 may render and display ultrasound images by setting up the graphics memory 208 to define triangles (or other shapes that the GPU 202 can process) that form the image.
Note that the sequence of triangles 502-512 in the triangle strip 500 gives the appearance of an arc-shaped sector image for a scan plane. In general, a larger number of triangles (e.g., 512) may be employed to form a sector image that more closely conforms to any desired sector shape. The number of triangles employed is not limited by the number of ultrasound beams. Rather, a given beam may be considered a sector in its own right, and divided and rendered using many triangles. Since vertex coordinates in general may be stored as floating point numbers, it is possible to create these triangles by defining several start and end vertices per beam with sub-beam precision. The graphics hardware may then automatically interpolate between beams that are actually obtained by the beamformer 114.
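By way of illustration only, the following sketch builds such a strip: the number of strip segments may exceed the number of acquired beams, and the fractional beam coordinate u then falls between real beams with sub-beam precision, leaving the texture hardware to interpolate. All names and parameters here are assumptions for the example.

```c
/* Illustrative construction of a triangle strip approximating an
 * arc-shaped sector. Vertices alternate between the near (first
 * sample) and far (last sample) points of each real or virtual beam. */
#include <math.h>

typedef struct { float x, y; float u, v; } StripVertex;

/* Fills 2 * (num_segments + 1) vertices. */
void build_sector_strip(StripVertex *out, int num_segments,
                        float angle_min, float angle_max,
                        float r_near, float r_far)
{
    for (int i = 0; i <= num_segments; i++) {
        float t = (float)i / (float)num_segments;  /* 0..1 across sector  */
        float a = angle_min + t * (angle_max - angle_min);
        /* near point: first sample along the (possibly virtual) beam */
        out[2 * i]     = (StripVertex){ r_near * sinf(a), r_near * cosf(a),
                                        t, 0.0f };
        /* far point: last sample along the beam */
        out[2 * i + 1] = (StripVertex){ r_far * sinf(a), r_far * cosf(a),
                                        t, 1.0f };
    }
}
```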
While the graphics processing circuitry 138 may be employed to render and display a single scan plane composed of multiple triangles, the graphics processing circuitry 138 may also be employed to render a complete volume using the image data obtained by the probe 104 (e.g., multiple scan planes). When, for example, the scan planes are rendered from back to front (e.g., in order of depth, or distance from a specified viewplane), the graphics processing circuitry 138 generates a three dimensional volume image.
In one embodiment, the graphics processing circuitry 138 may employ alpha-blending (sometimes referred to as alpha compositing) during the volume rendering process. To that end, the signal processor 120 or the graphics processing circuitry 138 associates transparency data with each pixel in each scan plane. The transparency data provides information to the graphics processing circuitry 138 concerning how a pixel with a particular color should be merged with another pixel when the two pixels are overlapped. Thus, as scan planes are rendered from back to front, the transparency information in pairs of pixels will help determine the pixel that results as each new plane is overlaid on the previous result.
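By way of illustration, the merge rule described above corresponds to standard "over" compositing; the sketch below shows one possible OpenGL configuration for it, not necessarily the configuration employed by the GPU 202.

```c
/* Back-to-front alpha blending: each incoming pixel (source) is merged
 * with what the frame buffer already holds (destination) according to
 * the source transparency. Illustrative OpenGL setup. */
#include <GL/gl.h>

void enable_back_to_front_blending(void)
{
    glEnable(GL_BLEND);
    /* result = src.rgb * src.a + dst.rgb * (1 - src.a) */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* Planes must then be drawn in order of decreasing distance from
     * the viewplane for the composite to be correct. */
}
```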
For example, with regard to FIG. 6, each scan plane 602-606 is formed from multiple ultrasound beams. Each ultrasound beam will be associated with many sampling points taken along the beam. The sampling points for each beam (e.g., the start and end points) may be employed to define triangles for the GPU 202 to render.
Thus, for example, with regard to FIG. 7, the first scan plane 602 includes three ultrasound beams 702, 704, and 706. The beam 702 includes a start point 708 and an end point 710. The beam 704 includes the start point 708 and the end point 712. The beam 706 includes the start point 708 and the end point 714.
The first scan plane 602 will be approximated by two triangles 716 and 718. The adjacent triangles 716 and 718 share two common vertices. The two triangles 716 and 718 spread out in a triangle fan from the apex vertex. However, as illustrated above with regard to FIG. 5, a larger number of triangles may instead be employed to more closely approximate any desired sector shape.
Turning briefly to FIG. 9, as the GPU 202 renders a volume, the GPU 202 blends each plane or image component with the content held by the frame buffer 908. Optionally, the graphics memory 208 may also include a vertex data index 910.
A more detailed view of the parameters 1000 in the graphics memory 208 is shown in FIG. 10.
The GPU 202 employs the data in the beam data block 902 as texture memory. In other words, when the GPU 202 renders the triangles that form the image planes, the GPU 202 turns to the data in the beam data block 902 for texture information. As a result, the triangles are rendered with ultrasound imaging data as the applied texture, and the resultant images therefore show the structure captured by the imaging system 100.
Because it is a dedicated hardware graphics processor, the GPU 202 generates image frames at very high speed. The imaging system 100 may thereby provide very fast image presentation time to doctors and technicians working with the imaging system 100. Furthermore, with the GPU 202 performing the processing intensive graphics operations, the remaining processing power in the imaging system 100 is free to work on other tasks, including interacting with and responding to the doctors and technicians operating the imaging system 100.
The vertex data block 904 includes vertex entries 1004 that define rendering shapes (e.g., triangles, or other geometric shapes that the GPU 202 can manipulate). The vertex data entries 1004, for example, may specify triangle vertices for the GPU 202. Each vertex entry 1004 may include a spatial location for the vertex and a texture location for the vertex. The spatial location may be an x, y, z coordinate triple, to identify the location of the vertex in space. The spatial location may be provided by the beamformer 114 that controls and steers the beams.
The texture location may be a pointer into the beam data block 902 to specify the data value for that vertex. In one implementation, the texture location is expressed as a texture triple u, v, w that indexes the beam data block 902. More particularly, when the sample point values are conceptually organized along a u-axis, a v-axis, and a w-axis, the texture triple u, v, w specifies a point in the beam data block 902 from which the GPU 202 retrieves a sample point value for the vertex in question. The texture triples are stored, in general, as floating point numbers. Thus, sample points may be specified with sub-sample precision. When the selected GPU 202 supports tri-linear interpolation, the GPU 202 may then map interpolated texture values to the frame buffer 908 rather than selecting the closest sample from an ultrasound beam. As a result, the GPU 202 may generate smooth images even when the number of ultrasound beams in a 3D dataset is limited.
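By way of illustration, uploading the beam data block as a volume texture with linear filtering lets the hardware perform the tri-linear interpolation described above; the sketch below uses OpenGL 1.2 calls, and the dimension names are assumptions for the example.

```c
/* Illustrative upload of the beam data block as a volume (3D) texture.
 * GL_LINEAR filtering makes the GPU interpolate between sample points
 * when a floating point (u, v, w) triple falls between them.
 * Requires OpenGL 1.2 or later for glTexImage3D. */
#include <GL/gl.h>

GLuint upload_beam_volume(const unsigned char *beam_data,
                          int samples, int beams, int planes)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE,
                 samples, beams, planes, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, beam_data);
    return tex;
}
```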
In one implementation, the order of the vertices in the vertex data block 904 will specify a series of triangles in a geometric rendering shape, for example a triangle strip, triangle list, or triangle fan. To that end, the processor 120 may store the vertices in the vertex data block 904 such that each scan plane may be approximated by a series of triangles. Generally, a triangle strip is a set of triangles for which each triangle shares two vertices with a preceding triangle. The first three vertices define a triangle and then each additional vertex defines another triangle by using the two preceding vertices.
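By way of illustration, the expansion of a strip of vertices into triangles follows the fixed rule just described, sketched below.

```c
/* Illustrative decoding of a triangle strip: vertices v0, v1, v2, ...
 * expand into triangles (v0,v1,v2), (v1,v2,v3), (v2,v3,v4), and so on,
 * so every vertex after the first three adds one triangle that shares
 * two vertices with its predecessor. */
#include <stdio.h>

int main(void)
{
    int num_vertices = 6;   /* a strip of 6 vertices yields 4 triangles */
    for (int i = 2; i < num_vertices; i++)
        printf("triangle %d: v%d v%d v%d\n", i - 2, i - 2, i - 1, i);
    return 0;
}
```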
For the example shown above in FIG. 7, the signal processor 120 may store the vertex entries for the points 708-714 in an order that defines the triangles 716 and 718 as a triangle strip.
The GPU 202 retrieves the vertices from the vertex data block 904. As the GPU 202 renders the triangles, the GPU 202 applies texture to the triangles specified by the texture triples. In doing so, the GPU 202 retrieves sample point values from the beam data block 902 for the pixels that constitute each rendered triangle. Thus, while the vertex entries specify the boundary sample point values at the three vertices of a given triangle, the GPU 202 employs the data taken along each beam (away from the vertices) to render the area inside the triangle.
With regard to the rendering parameters 906, those parameters include a viewpoint definition 1006 and pixel rendering data 1008. The viewpoint definition 1006 specifies the rendering viewpoint for the GPU 202 and may be given by a point on an arbitrary plane, and a view plane normal to specify a viewing direction. Multiple viewpoint definitions (rendering plane definitions) 1006 may be provided so that the GPU 202 can render and display image frames drawn from multiple viewpoints, as an aid in helping a doctor or technician locate or clearly view features of interest.
Additionally, the vertex data index 910 may specify three or more sets of rendering geometries that the GPU 202 may employ to render the image components from back to front from any desired direction. Each set of rendering geometries defines, as examples, one or more rendering planes at a given depth or curved surfaces for the GPU 202. Each rendering plane may be specified using a vertex list interpreted as a triangle strip. The plane (or curved surface) along which the triangle strip lies defines the rendering plane or curved surface.
The rendering planes may be specified at any given angle with regard to the image components obtained. As examples, a first set of rendering geometries may be as described above with regard to sector planes (e.g., along each beam). A second set of rendering geometries may then be defined using rendering planes that are orthogonal to the first set of rendering planes (e.g., cutting across each beam at pre-selected sample points along the beams). A third set of rendering geometries may be employed when viewing the image components from a direction approximately parallel to the sector planes. In that instance, each rendering plane in the third set may have a different fixed distance to the center of a sector (with a viewpoint above the center of a sector).
With regard next to the pixel rendering data 1008, that data provides, for example, a lookup table 1016 that maps between beam data values and color or transparency values. As a result, the GPU 202 may correspondingly apply that color or transparency to a pixel rendered using a particular beam data value. Thus, for example, increasingly dark values may be given transparency levels that make the GPU 202 render them increasingly transparent, while increasingly bright values may be given transparency levels that make the GPU 202 render them increasingly opaque. As noted above, the GPU 202 employs the transparency values when performing alpha blending during rendering.
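By way of illustration only, the lookup table 1016 might resemble the following sketch, in which the alpha value rises linearly with sample brightness; an actual table would be tuned to the imaging application.

```c
/* Illustrative opacity lookup table: dark sample values map to
 * transparent pixels, bright values to opaque ones. The linear
 * ramp is only a sketch, not a clinically tuned mapping. */
typedef struct { unsigned char r, g, b, a; } RGBA;

void build_opacity_lut(RGBA lut[256])
{
    for (int v = 0; v < 256; v++) {
        lut[v].r = lut[v].g = lut[v].b = (unsigned char)v; /* grayscale */
        lut[v].a = (unsigned char)v;  /* alpha rises with brightness   */
    }
}
```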
In another implementation, the graphics memory 208 may also include a vertex data index 910. The vertex data index 910 includes one or more vertex index sets. Each vertex index set includes one or more pointers into the vertex data block 904. Each pointer may be, for example, an integer value specifying one of the vertices in the vertex data block 904. Each vertex index set thus specifies (in the same manner as explained above with regard to the vertex data block 904) a series of triangles in a geometric rendering shape for the GPU 202 to render.
Note that the GPU 202 may be instructed to mix two or more sets of beam data together at any given point. For instance, one dataset in the beam data block 902 may be B-mode (tissue) sample point values, while a second dataset in the beam data block 902 may be colorflow sample point values. One or more of the vertex entries may then specify two or more texture coordinates to be mixed. As an example, the vertex entry 1005 specifies two different texture coordinates (u, v, w) from which the GPU 202 will retrieve texture data when rendering that particular vertex.
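By way of illustration, such mixing may be pictured with OpenGL multitexturing, in which each texture unit receives its own (u, v, w) coordinate for the same vertex; the sketch below assumes texture names set up elsewhere and requires OpenGL 1.3.

```c
/* Sketch of mixing two beam datasets (e.g., B-mode and colorflow):
 * each texture unit is bound to one dataset, and each vertex supplies
 * a separate (u, v, w) triple per unit. Illustrative only. */
#include <GL/gl.h>

void bind_mixed_datasets(GLuint tissue_tex, GLuint flow_tex)
{
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, tissue_tex);   /* B-mode samples    */

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, flow_tex);     /* colorflow samples */
    /* Per vertex: glMultiTexCoord3f(GL_TEXTURE0, u0, v0, w0) and
     * glMultiTexCoord3f(GL_TEXTURE1, u1, v1, w1). */
}
```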
In this regard, one of the datasets in the beam data block 902 may store local image gradients. The GPU 202 may then perform hardware gradient shading as part of the rendering process. In certain images, gradient shading may improve the visual appearance of tissue boundaries. One or more lightsource definitions 1018 may therefore be provided so that the GPU 202 may determine local light reflections according to the local gradients. The lightsource definitions 1018 may include, as examples, spatial (e.g., x, y, z) positions for the light sources, as well as lightsource characteristics including brightness or luminosity, emission spectrum, and so forth.
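By way of illustration only, gradient shading of this kind may be pictured as below: the local gradient acts as a surface normal, and brightness follows its angle to the light. The central-difference gradient and the diffuse shading model are assumptions for the example, not the patent's implementation.

```c
/* Illustrative central-difference gradient and diffuse (Lambertian)
 * reflection. The caller keeps x, y, z strictly inside the volume,
 * and light[] is assumed to be a unit-length direction. */
#include <math.h>

float shade(const unsigned char *vol, int nx, int ny, int nz,
            int x, int y, int z, const float light[3])
{
    /* central differences approximate the local image gradient */
    float gx = (vol[(z*ny + y)*nx + x+1] - vol[(z*ny + y)*nx + x-1]) * 0.5f;
    float gy = (vol[(z*ny + y+1)*nx + x] - vol[(z*ny + y-1)*nx + x]) * 0.5f;
    float gz = (vol[((z+1)*ny + y)*nx + x] - vol[((z-1)*ny + y)*nx + x]) * 0.5f;

    float len = sqrtf(gx*gx + gy*gy + gz*gz);
    if (len == 0.0f)
        return 0.0f;                    /* flat region: no boundary */

    /* diffuse term: cosine of angle between gradient and light */
    float dot = (gx*light[0] + gy*light[1] + gz*light[2]) / len;
    return dot > 0.0f ? dot : 0.0f;
}
```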
Furthermore, the datasets in the beam data block 902, in conjunction with a dataset in the vertex data block 904 (or vertex data index 910) may define other graphics objects or image components. For example, the beam data block 902 and vertex data block 904 may store triangle strips that define an anatomical model (e.g., a heart ventricle). The anatomical model may then be rendered with the ultrasound image data to provide a view that shows the model along with the actual image data acquired. Such a view may help a doctor or technician locate features of interest, evaluate the scanning parameters employed when obtaining the image data, and so forth.
The graphics processing circuitry 138 may also be employed in stereoscopic displays. To that end, the signal processor 120 may command the GPU 202 to render a volume from a first viewing direction, and then render the volume from a slightly different viewing direction. The two renderings may then be displayed on the display 126. When viewed through stereoscopic or three dimensional viewing glasses, the stereoscopic display yields a very realistic presentation of the rendered volume. The viewing directions may be specified by the stereoscopic viewpoint definitions, two of which are labeled 1020 and 1022 in FIG. 10.
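By way of illustration only, one way to realize such a two-viewpoint rendering is quad-buffered OpenGL stereo, sketched below; the helper render_volume_from() is hypothetical, the angular offsets are assumptions, and quad-buffered stereo is only one of several possible display methods.

```c
/* Sketch of stereoscopic rendering: the volume is rendered twice from
 * slightly different viewing directions, once per eye. */
#include <GL/gl.h>

void render_volume_from(float azimuth_deg);   /* hypothetical helper */

void render_stereo_pair(float view_azimuth_deg, float eye_sep_deg)
{
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    render_volume_from(view_azimuth_deg - eye_sep_deg * 0.5f);

    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    render_volume_from(view_azimuth_deg + eye_sep_deg * 0.5f);
}
```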
With regard next to FIG. 11, the signal processor 120 first transfers acquired beam data into the beam data block 902. The signal processor 120 then prepares the vertex entries that define the triangles used to render an image component. For example, the vertex entries may specify triangle lists that define planes, curved surfaces, anatomical models, and the like. The signal processor 120 transfers the vertex entries to the vertex data block 904 (Step 1106). Similarly, the signal processor 120 prepares and transfers the vertex index sets described above into the vertex data index 910 (Step 1108).
In addition, the signal processor 120 may transfer the rendering parameters 906 into the graphics memory 208 (Step 1110). The rendering parameters include, as examples, viewpoint definitions, transparency lookup tables, light source definitions, stereoscopic viewpoints, and other pixel rendering information. Once the data and parameters have been transferred, the signal processor 120 may then initiate rendering of the three dimensional volume (Step 1112). To that end, for example, the signal processor 120 may send a rendering command to the GPU 202.
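By way of illustration only, the overall transfer-and-render sequence may be pictured as below; the helper names are hypothetical stand-ins for the actual transfers into the graphics memory 208.

```c
/* Sketch of the transfer-and-render sequence. All helper names are
 * hypothetical; each stands in for a transfer described above. */
void upload_beam_data(void);      /* beam samples -> beam data block 902 */
void upload_vertex_entries(void); /* vertices -> vertex data block 904   */
void upload_vertex_indices(void); /* index sets -> vertex data index 910 */
void upload_render_params(void);  /* viewpoints, LUTs, light sources     */
void issue_render_command(void);  /* start rendering of the volume       */

void prepare_and_render(void)
{
    upload_beam_data();
    upload_vertex_entries();   /* Step 1106 */
    upload_vertex_indices();   /* Step 1108 */
    upload_render_params();    /* Step 1110 */
    issue_render_command();    /* Step 1112 */
}
```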
While various embodiments of the invention have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of this invention.