This disclosure relates generally to an ultrasound imaging system and method for displaying a volume-rendered image and a planar image that are both colorized according to the same depth-dependent color scheme.
Conventional ultrasound imaging systems acquire three-dimensional ultrasound data from a patient and are then able to generate and display multiple types of images from the three-dimensional ultrasound data. For example, conventional ultrasound imaging systems may generate and display a volume-rendered image based on the three-dimensional ultrasound data and/or one or more planar images generated from the three-dimensional ultrasound data. The volume-rendered image is a perspective view of surfaces rendered from the three-dimensional ultrasound data, while the planar image is an image of a plane through the volume included within the three-dimensional ultrasound data. Users typically view a volume-rendered image to get an overview of an organ or structure and then view one or more planar images of slices through the volume in order to obtain more-detailed views of key portions of the patient's anatomy. Planar images generated from three-dimensional ultrasound data are very similar to images generated from conventional two-dimensional ultrasound modes, such as B-mode, where every pixel is assigned an intensity based on the amplitude of the ultrasound signal received from the location in the patient corresponding to that pixel.
Conventional ultrasound imaging systems typically allow the user to control the rotation and translation of the volume-rendered image. In a similar manner, conventional ultrasound imaging systems allow the user to control the position of the plane shown in any planar image through adjustments in translation and tilt. Additionally, ultrasound imaging systems typically allow the user to zoom in on specific structures and to view multiple planar images, each showing a different plane through the volume captured in the three-dimensional ultrasound data. Because of all the image manipulations that are possible on conventional ultrasound imaging systems, it is easy for users to become disoriented within the volume. Between the rotations and translations applied to the volume-rendered image and the adjustments, including translations, rotations, and tilts, applied to the planar images, it may be difficult for even an experienced clinician to remain oriented with respect to the patient's anatomy while manipulating the images.
For these and other reasons, an improved method and system for generating and displaying images generated from three-dimensional ultrasound data is desired.
The above-mentioned shortcomings, disadvantages, and problems are addressed herein, as will be understood by reading and understanding the following specification.
In an embodiment, a method of ultrasound imaging includes generating a volume-rendered image from three-dimensional ultrasound data, wherein the volume-rendered image is colorized with at least two colors according to a depth-dependent color scheme. The method includes displaying the volume-rendered image. The method includes generating a planar image from the three-dimensional ultrasound data, wherein the planar image is colorized according to the same depth-dependent color scheme as the volume-rendered image. The method also includes displaying the planar image.
In another embodiment, a method of ultrasound imaging includes generating a volume-rendered image from three-dimensional ultrasound data and applying a depth-dependent color scheme to the volume-rendered image. The method includes displaying the volume-rendered image after applying the depth-dependent color scheme to the volume-rendered image. The method includes generating a planar image of a plane that intersects the volume-rendered image, applying the depth-dependent color scheme to the planar image, and displaying the planar image after applying the depth-dependent color scheme to the planar image.
In another embodiment, an ultrasound imaging system includes a probe adapted to scan a volume of interest, a display device, a user interface, and a processor in electronic communication with the probe, the display device, and the user interface. The processor is configured to generate a volume-rendered image from three-dimensional ultrasound data acquired with the probe, apply a depth-dependent color scheme to the volume-rendered image, and display the volume-rendered image on the display device. The processor is configured to generate a planar image of a plane that intersects the volume-rendered image, apply the depth-dependent color scheme to the planar image, and display the planar image on the display device at the same time as the volume-rendered image.
Various other features, objects, and advantages of the invention will be made apparent to those skilled in the art from the accompanying drawings and detailed description thereof.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments that may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the embodiments. The following detailed description is, therefore, not to be taken as limiting the scope of the invention.
The ultrasound imaging system 100 also includes a processor 116 to process the ultrasound data and generate frames or images for display on a display device 118. The processor 116 may include one or more separate processing components. For example, the processor 116 may include a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU), or any other electronic component capable of processing inputted data according to specific logical instructions. Having a processor that includes a GPU may be advantageous for computation-intensive operations, such as volume-rendering, which will be described in more detail hereinafter. The processor 116 is in electronic communication with the probe 105, the display device 118, and the user interface 115. The processor 116 may be hard-wired to the probe 105, the display device 118, and the user interface 115, or the processor 116 may be in electronic communication through other techniques, including wireless communication. The display device 118 may be a flat-panel LED display according to an embodiment. The display device 118 may include a screen, a monitor, a projector, a flat-panel LED display, or a flat-panel LCD display according to other embodiments.
The processor 116 may be adapted to perform one or more processing operations according to a plurality of selectable ultrasound modalities on the ultrasound data. The processor 116 may also be adapted to control the acquisition of ultrasound data with the probe 105. The ultrasound data may be processed in real-time during a scanning session as the echo signals are received. For purposes of this disclosure, the term “real-time” is defined to include a process performed with no intentional lag or delay. An embodiment may update the displayed ultrasound image at a rate of more than 20 times per second. The images may be displayed as part of a live image. For purposes of this disclosure, the term “live image” is defined to include a dynamic image that is updated as additional frames of ultrasound data are acquired. For example, ultrasound data may be acquired even as images are being generated based on previously acquired data and while a live image is being displayed. Then, according to an embodiment, as additional ultrasound data are acquired, additional frames or images generated from the more-recently acquired ultrasound data are sequentially displayed. Additionally or alternatively, the ultrasound data may be stored temporarily in a buffer during a scanning session and processed in less than real-time in a live or off-line operation. Other embodiments of the invention may include multiple processors (not shown) to handle the processing tasks. For example, a first processor may be utilized to demodulate and decimate the ultrasound signal while a second processor may be used to further process the data prior to displaying an image. It should be appreciated that other embodiments may use a different arrangement of processors.
The processor 116 may be used to generate an image, such as a volume-rendered image or a planar image, from three-dimensional ultrasound data acquired by the probe 105. According to an embodiment, the three-dimensional ultrasound data includes a plurality of voxels, or volume elements. Each of the voxels is assigned a value or intensity based on the acoustic properties of the tissue corresponding to that particular voxel.
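By way of a non-limiting illustration, three-dimensional ultrasound data of this kind might be held in memory as a dense grid of voxel intensities. The grid dimensions and voxel spacing below are illustrative assumptions, not values taken from this disclosure:

```python
# A minimal sketch of a voxel grid for three-dimensional ultrasound data.
# Each voxel stores one scalar intensity derived from the echo amplitude
# at the corresponding location in the scanned volume.
import numpy as np

depth_voxels, rows, cols = 200, 160, 160   # illustrative grid size
voxel_spacing_mm = (0.4, 0.5, 0.5)         # assumed (z, y, x) spacing in mm

# 8-bit intensities based on the acoustic properties of the tissue at each
# voxel; a real system would fill this array from the beamformed echoes.
volume = np.zeros((depth_voxels, rows, cols), dtype=np.uint8)
```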
Referring to both the view plane 154 and the three-dimensional ultrasound data 150, the processor 116 may generate a volume-rendered image through a ray-casting technique. For example, the processor 116 may cast a ray from each of the pixels 163 in the view plane 154 through the three-dimensional ultrasound data 150, and the values of the voxels encountered along each ray may be combined in order to compute a value for the corresponding pixel.
In an exemplary embodiment, gradient shading may be used to generate a volume-rendered image in order to present the user with a better perception of depth regarding the surfaces. For example, surfaces within the three-dimensional ultrasound data 150 may be defined partly through the use of a threshold that removes data below or above a threshold value. Next, gradients may be defined at the intersection of each ray and the surface. As described previously, a ray is traced from each of the pixels 163 in the view plane 154 to the surface defined in the dataset 150. Once a gradient is calculated for each of the rays, the processor 116 may compute the amount of light reflected at each intersection based on the orientation of the gradient with respect to a light direction and may shade the corresponding pixel accordingly.
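As a sketch of how such gradient shading might work in practice, the following code casts one ray per view-plane pixel straight through the volume, takes the first voxel above a threshold as the surface, estimates the surface normal with a central-difference gradient, and applies Lambertian shading. The threshold, light direction, and function name are illustrative assumptions; the disclosure does not prescribe a particular implementation:

```python
# A minimal sketch of surface rendering with gradient shading, assuming an
# orthographic view plane aligned with the volume's z axis.
import numpy as np

def gradient_shaded_render(volume, threshold=80.0, light_dir=(-0.9, 0.3, 0.3)):
    """Render by casting one ray per view-plane pixel along +z.

    Returns a shaded grayscale image and a depth buffer holding, for each
    pixel, the depth (z index) at which the ray first met the surface.
    """
    light = np.asarray(light_dir, dtype=float)   # (z, y, x) order, toward the light
    light /= np.linalg.norm(light)
    nz, ny, nx = volume.shape
    shaded = np.zeros((ny, nx))
    depth_buffer = np.full((ny, nx), np.inf)

    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            for z in range(1, nz - 1):
                if volume[z, y, x] >= threshold:   # ray meets the surface
                    # Central differences approximate the gradient, whose
                    # direction serves as the surface normal for shading.
                    g = np.array([
                        float(volume[z + 1, y, x]) - float(volume[z - 1, y, x]),
                        float(volume[z, y + 1, x]) - float(volume[z, y - 1, x]),
                        float(volume[z, y, x + 1]) - float(volume[z, y, x - 1]),
                    ])
                    n = -g / (np.linalg.norm(g) + 1e-9)  # normal points out of the surface
                    # Lambertian term: brighter where the normal faces the light.
                    shaded[y, x] = max(0.0, float(np.dot(n, light)))
                    depth_buffer[y, x] = z               # remembered for colorization
                    break
    return shaded, depth_buffer
```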
According to all of the non-limiting examples of generating a volume-rendered image listed hereinabove, the processor 116 may use color in order to convey depth information to the user. Still referring to the volume-rendered image, the processor 116 may colorize the pixels according to a depth-dependent color scheme. For example, pixels representing surfaces within a first range of depths from the view plane 154 may be assigned a first color, and pixels representing surfaces within a second range of depths may be assigned a second color.
Still referring to the depth-dependent color scheme, the processor 116 may record the depth of the structure represented by each pixel in a depth buffer 117 while generating the volume-rendered image. The depths stored in the depth buffer 117 may then be used when colorizing additional images, such as planar images, according to the same depth-dependent color scheme.
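Continuing the sketch, a two-color depth-dependent scheme of the kind described above might be applied using the per-pixel depths from such a depth buffer. The specific colors and the boundary between the two depth ranges are illustrative assumptions:

```python
# A minimal sketch of a two-color depth-dependent color scheme, assuming
# per-pixel surface depths are available (as in the depth buffer 117).
import numpy as np

FIRST_COLOR = np.array([1.0, 0.6, 0.2])    # assumed color for near structures
SECOND_COLOR = np.array([0.2, 0.4, 1.0])   # assumed color for far structures
DEPTH_SPLIT = 100                          # assumed boundary between depth ranges

def colorize_by_depth(shaded, depth_buffer, split=DEPTH_SPLIT):
    """Tint a shaded grayscale image: pixels whose surface lies within the
    first range of depths get the first color, deeper pixels the second."""
    rgb = np.zeros(shaded.shape + (3,))
    near = np.isfinite(depth_buffer) & (depth_buffer <= split)
    far = np.isfinite(depth_buffer) & (depth_buffer > split)
    rgb[near] = shaded[near][:, None] * FIRST_COLOR
    rgb[far] = shaded[far][:, None] * SECOND_COLOR
    return rgb
```

Used together with the rendering sketch above, `shaded, depth = gradient_shaded_render(volume)` followed by `colorize_by_depth(shaded, depth)` would yield a colorized volume-rendered image.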
Optionally, embodiments of the present invention may be implemented utilizing contrast agents. Contrast imaging generates enhanced images of anatomical structures and blood flow in a body through the use of ultrasound contrast agents including microbubbles. After acquiring ultrasound data while using a contrast agent, the image analysis includes separating harmonic and linear components, enhancing the harmonic component, and generating an ultrasound image by utilizing the enhanced harmonic component. Separation of harmonic components from the received signals is performed using suitable filters. The use of contrast agents for ultrasound imaging is well known to those skilled in the art and will therefore not be described in further detail.
In various embodiments of the present invention, ultrasound data may be processed by other or different mode-related modules. The images are stored in memory, and timing information indicating a time at which each image was acquired may be recorded with each image. The modules may include, for example, a scan conversion module to perform scan conversion operations that convert the image frames from polar to Cartesian coordinates. A video processor module may be provided that reads the images from a memory and displays the images in real time while a procedure is being carried out on a patient. A video processor module may also store the images in an image memory, from which the images are read and displayed. The ultrasound imaging system 100 shown may be a console system, a cart-based system, or a portable system, such as a hand-held or laptop-style system, according to various embodiments.
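As a sketch of the scan conversion step mentioned above, the following resamples a polar (sample, beam) frame onto a Cartesian pixel grid by nearest-neighbor lookup. The sector angle, output size, and transducer placement are illustrative assumptions:

```python
# A minimal sketch of scan conversion from polar to Cartesian coordinates,
# assuming a sector probe at the top-center of the output image.
import numpy as np

def scan_convert(polar, sector_deg=60.0, out_size=(256, 256)):
    """polar: 2D array indexed by (sample along beam, beam angle)."""
    n_samples, n_beams = polar.shape
    h, w = out_size
    out = np.zeros(out_size)
    half = np.radians(sector_deg) / 2.0
    for row in range(h):
        for col in range(w):
            x = (col - w / 2.0) / (w / 2.0)   # lateral position, -1..1
            z = row / float(h)                # axial position, 0..1
            r = np.hypot(x, z)                # range from the transducer
            theta = np.arctan2(x, z)          # angle from the center beam
            if r <= 1.0 and -half <= theta <= half:
                # Nearest-neighbor lookup in the polar frame; a real system
                # would typically interpolate.
                s = int(r * (n_samples - 1))
                b = int((theta + half) / (2 * half) * (n_beams - 1))
                out[row, col] = polar[s, b]
    return out
```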
The screen shot 300 includes a volume-rendered image 302, a first planar image 304, a second planar image 306, and a third planar image 308.
Referring to the screen shot 300, the volume-rendered image 302 may be displayed at the same time as the first planar image 304, the second planar image 306, and the third planar image 308, each of which shows a different plane through the volume captured in the three-dimensional ultrasound data.
Referring now to an exemplary method of ultrasound imaging, the processor 116 generates a volume-rendered image, such as the volume-rendered image 302, from three-dimensional ultrasound data acquired with the probe 105. Then, at step 406, the processor 116 applies a depth-dependent color scheme to the volume-rendered image.
At step 408, the processor 116 displays a volume-rendered image, such as the volume-rendered image 302, on the display device 118. It should be noted that the volume-rendered image 302 is displayed after the processor 116 has applied the depth-dependent color scheme to the volume-rendered image at step 406. As such, the pixels in the volume-rendered image 302 are colorized according to the depths of the structures represented in each of the pixels. In the volume-rendered image 302, single hatching represents the regions colored the first color, which show structures within a first range of depths, and cross-hatching represents the regions colored the second color, which show structures within a second range of depths.
Still referring to the exemplary method, at step 410 the processor 116 generates a planar image, such as the first planar image 304, from the three-dimensional ultrasound data. The first planar image 304 is an image of a plane that intersects the volume from which the volume-rendered image 302 was generated.
Next, at step 412, the processor 116 applies the depth-dependent color scheme to at least a portion of the first planar image 304. The processor 116 colorizes the first planar image 304 by applying the same depth-dependent color scheme that was used to colorize the volume-rendered image 302. In other words, the same colors are associated with the same ranges of depths when colorizing both the volume-rendered image 302 and the first planar image 304. As with the volume-rendered image 302, the hatching and the cross-hatching represent the regions of the first planar image 304 that are colored the first color and the second color respectively. According to an embodiment, only the portions of the first planar image 304 within a first view port 309 are colored according to the depth-dependent color scheme. To colorize the first planar image 304, the processor 116 may access the depth buffer 117 in order to determine the depths of the structures associated with each of the pixels in the first planar image. Then, the processor 116 may colorize the first planar image based on the same depth-dependent color scheme used to colorize the volume-rendered image. That is, the processor 116 may assign the same first color to pixels showing structures that are within the first range of depths and the same second color to pixels showing structures within the second range of depths. The first view port 309 graphically shows the extent of the volume of data used to generate the volume-rendered image 302. In other words, the first view port 309 shows the intersection of the plane shown in the first planar image 304 and the volume from which the volume-rendered image 302 is generated. According to an embodiment, the user may manipulate the first view port 309 through the user interface 115 in order to alter the size and/or the shape of the volume of data used to generate the volume-rendered image 302. For example, the user may use a mouse or trackball of the user interface 115 to move a corner or a line of the first view port 309 in order to change the size and/or the shape of the volume used to generate the volume-rendered image 302. According to an embodiment, the processor 116 may generate and display an updated volume-rendered image in response to the change in volume size or shape indicated by the adjustment of the first view port 309. The updated volume-rendered image may be displayed in place of the volume-rendered image 302. For example, if the user were to make the first view port 309 smaller, the volume-rendered image would be regenerated using a smaller volume of data. Likewise, if the user were to make the first view port 309 larger, an updated volume-rendered image would be generated based on a larger volume of data. According to an embodiment, updated volume-rendered images may be generated and displayed in real-time as the user adjusts the first view port 309. This allows the user to quickly see the changes to the volume-rendered image resulting from adjustments of the first view port 309. The size and resolution of the three-dimensional ultrasound dataset used to generate the volume-rendered image, as well as the speed of the processor 116, determine how quickly the updated volume-rendered image can be generated and displayed. The updated volume-rendered image may be colorized according to the same depth-dependent color scheme as the volume-rendered image 302 and the first planar image 304.
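As a sketch of how the same scheme might carry over to a planar image, assume a slice taken along the volume's depth axis (volume[:, y0, :]), so that each image row corresponds directly to one depth; the colors and split reuse the illustrative values from the earlier sketch, and the slice orientation is itself an assumption:

```python
# A minimal sketch of colorizing a planar image with the same depth-dependent
# color scheme used for the volume rendering.
import numpy as np

FIRST_COLOR = np.array([1.0, 0.6, 0.2])    # same assumed scheme as before
SECOND_COLOR = np.array([0.2, 0.4, 1.0])
DEPTH_SPLIT = 100

def colorize_planar_slice(volume, y0, split=DEPTH_SPLIT):
    """Colorize the slice volume[:, y0, :], whose rows map directly to depth.

    Restricting the colorization to a sub-rectangle of the slice would give
    the view-port behavior described above.
    """
    plane = volume[:, y0, :].astype(float) / 255.0        # grayscale slice
    nz, nx = plane.shape
    depths = np.broadcast_to(np.arange(nz)[:, None], (nz, nx))
    near = (depths <= split)[..., None]                   # same split as the rendering
    rgb = np.where(near,
                   plane[..., None] * FIRST_COLOR,
                   plane[..., None] * SECOND_COLOR)
    return rgb
```

Because the split and colors are shared with the rendering sketch, a structure tinted the first color in this slice necessarily lies in the same range of depths as first-color structures in the volume-rendered image.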
Since the first planar image 304 is colorized according to the same depth-dependent color scheme as the volume-rendered image 302, it is very easy for a user to understand the precise location of structures shown in the first planar image 304. For example, since structures represented in the first color (represented by the single hatching) are known to lie within the first range of depths and structures represented in the second color (represented by the cross-hatching) are known to lie within the second range of depths, the user may quickly determine the depth of any structure shown in the first planar image 304 and relate that structure to the corresponding portion of the volume-rendered image 302.
At step 414, the planar image is displayed. The planar image may include the first planar image 304. According to an exemplary embodiment, the first planar image 304 may be displayed on the display device 118 at the same time as the volume-rendered image 302, as depicted in the screen shot 300.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.