This disclosure relates generally to a method and system for using a first image generated from volumetric ultrasound data to identify an acquisition target in order to then acquire additional ultrasound data of the acquisition target.
Many modern ultrasound imaging systems are capable of acquiring volumetric ultrasound data. Volumetric ultrasound data is typically very useful because it is often possible to generate an image from volumetric ultrasound data that includes all or a significant portion of an organ. Visualizing an image including an entire organ or a large portion of an organ is useful because it is easier for the user to remain oriented within the image. However, images generated from volumetric ultrasound data still suffer from several limitations. Specifically, images generated from volumetric ultrasound data covering a diagnostically useful field-of-view typically suffer from lower spatial resolution and lower temporal resolution than images generated from conventional two-dimensional ultrasound data. Conversely, the user may have to accept a much smaller field-of-view in order to increase the spatial resolution and the temporal resolution of the image generated from volumetric ultrasound data. Unfortunately, if a small field-of-view is selected, many of the benefits of using volumetric ultrasound data are negated.
Therefore, for these and other reasons, an improved method of ultrasound imaging and an improved ultrasound imaging system are desired.
In an embodiment, a method of ultrasound imaging includes displaying a first sequence of images generated from first ultrasound data, wherein the first ultrasound data includes volumetric ultrasound data. The method includes selecting an acquisition target from the first sequence of images and automatically configuring an acquisition parameter based on the selected acquisition target. The method includes implementing the acquisition parameter to acquire second ultrasound data of the acquisition target. The method includes displaying a second sequence of images generated from the second ultrasound data, wherein the second sequence of images is of a higher frame rate than the first sequence of images.
In another embodiment, a method of ultrasound imaging includes acquiring volumetric ultrasound data, displaying an image generated from the volumetric ultrasound data, and adjusting the position of an icon on the image to control a position of a plane. The method includes automatically configuring an acquisition parameter based on the position of the plane. The method includes implementing the acquisition parameter to acquire two-dimensional ultrasound data at the position of the plane and displaying a two-dimensional image generated from the two-dimensional ultrasound data.
In another embodiment, an ultrasound system includes a probe adapted to scan a volume of interest, a display device, and a processor in electronic communication with the probe and the display device. The processor is configured to control the probe to acquire first ultrasound data, where the first ultrasound data includes volumetric ultrasound data. The processor is configured to display a first image based on the first ultrasound data on the display device. The processor is configured to automatically configure an acquisition parameter based on the selection of an acquisition target in the first image. The processor is configured to implement the acquisition parameter to acquire second ultrasound data of the acquisition target, where the second ultrasound data is of higher temporal resolution than the first ultrasound data. The processor is configured to display an image generated from the second ultrasound data on the display device.
Various other features, objects, and advantages of the invention will be made apparent to those skilled in the art from the accompanying drawings and detailed description thereof.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments that may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the embodiments. The following detailed description is, therefore, not to be taken as limiting the scope of the invention.
The ultrasound imaging system 100 also includes a processor 116 to process the ultrasound data and generate frames or images for display on a display device 118. The processor 116 is in electronic communication with the probe 105 and the display device 118. The processor 116 may be hard-wired to the probe 105 and the display device 118, or they may be in electronic communication through other techniques, including wireless communication. The processor 116 may be adapted to perform one or more processing operations according to a plurality of selectable ultrasound modalities on the ultrasound data. The processor 116 may also be adapted to control the acquisition of ultrasound data with the probe 105. The ultrasound data may be processed in real-time during a scanning session as the echo signals are received. For purposes of this disclosure, the term “real-time” is defined to include a process performed with no intentional lag or delay. An embodiment may update the displayed ultrasound image at a rate of more than 20 times per second. The images may be displayed as part of a live image. For purposes of this disclosure, the term “live image” is defined to include a dynamic image that updates as additional frames of ultrasound data are acquired. For example, ultrasound data may be acquired even as images are being generated based on previously acquired data and while a live image is being displayed. Then, according to an embodiment, as additional ultrasound data are acquired, additional frames or images generated from more-recently acquired ultrasound data are sequentially displayed. Additionally or alternatively, the ultrasound data may be stored temporarily in a buffer (not shown) during a scanning session and processed in less than real-time in a live or off-line operation. Some embodiments of the invention may include multiple processors (not shown) to handle the processing tasks.
For example, a first processor may be utilized to demodulate and decimate the ultrasound signal while a second processor may be used to further process the data prior to displaying an image. It should be appreciated that other embodiments may use a different arrangement of processors.
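The live-image behavior described above can be sketched as a simple acquire-and-display loop with a temporary buffer. This is a minimal illustration only; the function names passed in (`acquire_frame`, `render`, `display`) are hypothetical placeholders, not part of the system described:

```python
from collections import deque

def run_live_imaging(acquire_frame, render, display, buffer_size=16):
    """Continuously acquire ultrasound frames, buffer them temporarily,
    and update the displayed image as each new frame arrives (a 'live image')."""
    frame_buffer = deque(maxlen=buffer_size)  # temporary buffer, as described above
    while True:
        raw = acquire_frame()       # echo data from the probe
        if raw is None:             # acquisition stopped
            break
        frame_buffer.append(raw)
        image = render(frame_buffer[-1])  # process the most recent frame
        display(image)                    # replace the previously displayed image
```

In a multi-processor arrangement like the one described, `render` could itself hand data from a demodulating/decimating first processor to a second processor for display processing.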
Still referring to
Optionally, embodiments of the present invention may be implemented utilizing contrast agents. Contrast imaging generates enhanced images of anatomical structures and blood flow in a body when using ultrasound contrast agents including microbubbles. After acquiring ultrasound data while using a contrast agent, the image analysis includes separating harmonic and linear components, enhancing the harmonic component and generating an ultrasound image by utilizing the enhanced harmonic component. Separation of harmonic components from the received signals is performed using suitable filters. The use of contrast agents for ultrasound imaging is well known by those skilled in the art and will therefore not be described in further detail.
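The passage above leaves the harmonic-separation filters unspecified. One widely used technique is pulse inversion, sketched below with a toy nonlinear tissue model (the model and numeric coefficients are illustrative assumptions, not taken from the text):

```python
def tissue_response(pulse):
    """Toy nonlinear tissue/contrast-agent model: a linear echo plus a
    small quadratic (second-harmonic) component."""
    return [0.8 * p + 0.1 * p * p for p in pulse]

def pulse_inversion(pulse):
    """Transmit a pulse and its inverse; summing the two received echoes
    cancels the linear component and leaves (twice) the even-harmonic
    component, which can then be enhanced and imaged."""
    echo_pos = tissue_response(pulse)
    echo_neg = tissue_response([-p for p in pulse])
    return [a + b for a, b in zip(echo_pos, echo_neg)]

pulse = [0.0, 1.0, 0.0, -1.0]
harmonic = pulse_inversion(pulse)
# Linear terms cancel (0.8*p + 0.8*(-p) = 0); quadratic terms add (2 * 0.1 * p^2)
```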
In various embodiments of the present invention, ultrasound information may be processed by other or different mode-related modules. A non-limiting list of modes includes: B-mode, color Doppler, power Doppler, M-mode, spectral Doppler, anatomical M-mode, strain, and strain rate. For example, one or more modules may generate B-mode, color Doppler, power Doppler, M-mode, anatomical M-mode, strain, strain rate, spectral Doppler images, combinations thereof, and the like. The images are stored in memory, and timing information indicating a time at which each image was acquired may be recorded with it. The modules may include, for example, a scan conversion module to perform scan conversion operations to convert the image frames from polar to Cartesian coordinates. A video processor module may be provided that reads the images from a memory and displays the images in real time while a procedure is being carried out on a patient. A video processor module may store the images in an image memory, from which the images are read and displayed. The ultrasound imaging system 100 shown may include a console system, a cart-based system, or a portable system, such as a hand-held or laptop-style system, according to various embodiments.
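The polar-to-Cartesian scan conversion performed by the scan conversion module can be sketched as follows. This is a nearest-neighbour sketch for brevity; practical scan converters interpolate between samples, and the geometry (probe at the top-centre of the image) is an assumption for illustration:

```python
import math

def scan_convert(polar_frame, num_x, num_y, max_depth, sector_half_angle):
    """Nearest-neighbour scan conversion of a polar frame
    (rows = depth samples, columns = beam angles) onto a Cartesian grid."""
    n_depth = len(polar_frame)
    n_beams = len(polar_frame[0])
    image = [[0.0] * num_x for _ in range(num_y)]
    for iy in range(num_y):
        for ix in range(num_x):
            # Cartesian position of this pixel, probe at top-centre
            x = (ix / (num_x - 1) - 0.5) * 2 * max_depth * math.sin(sector_half_angle)
            y = iy / (num_y - 1) * max_depth
            r = math.hypot(x, y)
            theta = math.atan2(x, y)  # angle from the central beam
            if r <= max_depth and abs(theta) <= sector_half_angle:
                # nearest polar sample for this Cartesian pixel
                ir = min(int(r / max_depth * (n_depth - 1) + 0.5), n_depth - 1)
                ib = min(int((theta + sector_half_angle) / (2 * sector_half_angle)
                             * (n_beams - 1) + 0.5), n_beams - 1)
                image[iy][ix] = polar_frame[ir][ib]
    return image
```

Pixels falling outside the acquired sector are left at zero, producing the familiar fan-shaped image.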
Referring to both
At step 204, the processor 116 generates an image from the first ultrasound data. According to an embodiment, the image may include a volume-rendered image. The term “volume-rendered image” is defined to include a two-dimensional representation of three-dimensional data. Typically, each sample point or voxel within the volume is assigned an opacity or weight. Then, through a technique such as ray-casting, each pixel is assigned a value based on a combination of the voxel values along a ray originating from a focal point. Other embodiments may use different techniques to generate volume-rendered images.
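The per-pixel compositing step of the ray-casting technique described above can be sketched as follows. This is a minimal front-to-back alpha-compositing sketch; a full renderer would cast one such ray per pixel and interpolate voxel samples along each ray:

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (value, opacity) voxel samples
    taken along one ray through the volume; returns the pixel value."""
    pixel = 0.0
    transmittance = 1.0  # fraction of light not yet absorbed along the ray
    for value, opacity in samples:
        pixel += transmittance * opacity * value
        transmittance *= (1.0 - opacity)
        if transmittance < 1e-3:  # early ray termination: ray is effectively opaque
            break
    return pixel
```

A fully opaque first sample hides everything behind it, while semi-transparent samples blend, which is what gives the volume-rendered image its depth cues.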
According to other embodiments, at step 204, other types of images may be generated from the volumetric data. For example, according to an embodiment the image may include a slice of volumetric ultrasound data. Those skilled in the art will appreciate that this type of image closely resembles an image generated from two-dimensional ultrasound data. According to an embodiment, a user may be able to select the slice or plane of the image that is generated at step 204. For example, a user may be able to adjust the position of a cut-plane through the volume in order to determine the anatomical structures or features included in the image. At step 206, the image is displayed on the display device 118.
If additional first ultrasound data is required at step 207, the method 200 returns to step 202, where additional first ultrasound data is acquired. The method 200 may iteratively cycle through steps 202, 204, 206, and 207 multiple times. According to an embodiment, the most recently acquired image may replace the image that was displayed during the previous iteration of steps 202, 204, 206, and 207. By cycling through steps 202, 204, 206, and 207 multiple times, the method 200 may result in the display of a first sequence of images. Collectively, the displaying of the first sequence of images in this manner is often referred to as displaying a live or real-time image.
Referring now to
The acquisition target may be selected in different ways at step 208 according to other embodiments. For example, the user may identify just a structure, such as the structure 224. According to an embodiment, the processor 116 (shown in
At step 210, the processor 116 automatically configures one or more acquisition parameters based on the acquisition target selected during step 208. The acquisition parameters are the settings that control the ultrasound data that will be acquired by the probe 105. The acquisition parameters control the ultrasound beams, which in turn control which portions of a patient's anatomy are imaged. For example, the acquisition parameters may control the position of the plane that is acquired when acquiring two-dimensional ultrasound data and the acquisition parameters may control the position and size of the volume that is acquired when acquiring volumetric ultrasound data. Non-limiting examples of acquisition parameters include: beam depth, beam steering angles, beam width, and beam spacing. The processor 116 configures the acquisition parameters in order to enable the acquisition of additional ultrasound data including the acquisition target that was identified during step 208.
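As one illustration of step 210, the beam depth and steering limits might be derived from a region around the selected acquisition target. The geometry, function name, and margin factor below are hypothetical assumptions, not the system's actual configuration logic:

```python
import math

def configure_acquisition(target_center, target_radius, margin=1.2):
    """Derive acquisition parameters (beam depth, steering-angle limits)
    that cover a spherical region around the selected acquisition target.
    target_center is (x, z) in metres relative to the probe face."""
    x, z = target_center
    r = target_radius * margin                 # pad the target slightly
    dist = math.hypot(x, z)                    # range from probe to target centre
    depth = dist + r                           # image deep enough to include the target
    center_angle = math.atan2(x, z)            # steer the sector toward the target
    half_width = math.asin(min(1.0, r / max(dist, r)))  # sector half-angle covering the target
    return {
        "beam_depth": depth,
        "steer_min": center_angle - half_width,
        "steer_max": center_angle + half_width,
    }
```

Restricting the steering angles and depth to just the target region is what permits the higher frame rates discussed below.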
At step 212, the processor 116 implements the acquisition parameters that were configured during step 210 in order to acquire second ultrasound data. According to an embodiment, the second ultrasound data may be of a smaller volume than the first ultrasound data. By acquiring a smaller volume of data, it is possible for the processor 116 to acquire data with higher temporal resolution and potentially higher spatial resolution as well. Higher temporal resolution data enables the user to view a live or dynamic image with a higher frame rate. Higher spatial resolution ultrasound data allows for the generation of higher resolution images. For example, images with higher spatial resolution allow the user to discern smaller details within the acquisition target. According to another embodiment, the second ultrasound data may include two-dimensional ultrasound data.
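The temporal-resolution benefit of acquiring a smaller volume follows directly from the round-trip time of sound: each transmit line must wait for echoes to return from the imaging depth, so frame rate is bounded by the inverse of (number of lines × time per line). A hedged arithmetic sketch, with line counts chosen only for illustration:

```python
SPEED_OF_SOUND = 1540.0  # m/s, typical assumed value for soft tissue

def max_frame_rate(num_lines, depth_m):
    """Upper bound on frame (or volume) rate: each transmit line must wait
    for the round trip to the imaging depth before the next line can fire."""
    time_per_line = 2.0 * depth_m / SPEED_OF_SOUND
    return 1.0 / (num_lines * time_per_line)

# A full volume (e.g. 64 x 64 = 4096 lines) versus a smaller targeted volume
full = max_frame_rate(64 * 64, 0.12)   # roughly 1.6 volumes per second
small = max_frame_rate(16 * 16, 0.12)  # roughly 25 volumes per second
```

Halving each lateral dimension of the acquired volume quarters the line count, so the second, targeted acquisition can run many times faster than the full-volume acquisition.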
At step 214, the processor 116 generates an image from the second ultrasound data. Then, at step 216, the processor 116 displays the image generated from the second ultrasound data on the display device 118. If additional second ultrasound data is desired at step 217, then the method returns to step 212. In a manner similar to that previously described with respect to steps 202, 204, 206, and 207, the method 200 may iteratively repeat steps 212, 214, 216, and 217 multiple times in order to generate and display a second sequence of images. Collectively, the second sequence of images forms a second live ultrasound image. According to an embodiment, the acquisition parameter configured during step 210 was selected in part to give the second sequence of images a higher frame rate than the first sequence of images. According to other embodiments, the individual images in the second sequence of images may also have higher spatial resolution than the individual images in the first sequence of images. If no additional second ultrasound data is desired at step 217, then the method 200 ends.
Referring to
Referring now to
According to an embodiment, a two-dimensional rendering of the plane may be displayed based on the volumetric data. The use of the two-dimensional rendering of the plane will be described hereinafter.
At step 258, the processor 116 configures an acquisition parameter based on the position of the plane as determined by the position of the icon 301 on the volume-rendered image 302. The acquisition parameters are configured in order to enable the acquisition of two-dimensional data including the first plane. The acquisition parameters determine the location from which the two-dimensional ultrasound data is acquired. The acquisition parameters are selected to enable the acquisition of two-dimensional ultrasound data from the desired plane within a subject. Examples of acquisition parameters include: beam depth, beam steering angle, beam width, beam spacing, and the like.
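As one illustration of step 258, the tilt of the icon 301 on the volume-rendered image might be translated into per-beam steering angles for the two-dimensional acquisition in the selected plane. The mapping, function name, and default values below are hypothetical assumptions for illustration only:

```python
import math

def plane_acquisition_params(icon_tilt_deg, num_beams=128,
                             sector_deg=75.0, depth_m=0.12):
    """Given the tilt of the plane icon on the volume rendering, build the
    acquisition parameters for a 2-D acquisition in that plane: every beam
    shares the plane's elevation tilt and fans across the azimuth sector."""
    elevation = math.radians(icon_tilt_deg)
    half = math.radians(sector_deg) / 2.0
    # evenly spaced in-plane steering angles from -half to +half
    azimuths = [-half + i * (2 * half) / (num_beams - 1) for i in range(num_beams)]
    return {
        "beam_depth": depth_m,
        "elevation_angle": elevation,   # fixed by the icon position
        "beam_azimuths": azimuths,      # one steering angle per transmit line
    }
```

An embodiment with a multi-line icon, as mentioned below, would simply produce one such parameter set per plane.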
At step 260, the processor 116 implements the acquisition parameters configured during step 258. As discussed hereinabove, the acquisition parameters may have been selected to enable the ultrasound imaging system 100 to acquire two-dimensional ultrasound data of a plane selected by positioning the icon 301. At step 262, the processor 116 displays a two-dimensional image on the display device 118. According to other embodiments, the acquisition parameters may be configured to enable the ultrasound imaging system to acquire ultrasound data for two or more planes. For example, other embodiments may have an icon with multiple lines, where each line represents a plane.
Referring to
As described hereinabove, in an embodiment a slice based on the volumetric ultrasound data may be displayed at the same time as a two-dimensional image based on two-dimensional ultrasound data. The user may compare the slice based on the volumetric ultrasound data to the two-dimensional image in order to confirm that the two-dimensional image contains the intended anatomical structure. According to an embodiment, the two-dimensional image may have better spatial resolution than the image generated from the volumetric ultrasound data, thus making the two-dimensional image more diagnostically useful. According to embodiments where a live volume-rendered image and a live two-dimensional image are displayed, the live two-dimensional image may exhibit higher temporal resolution than the live volume-rendered image. The higher temporal resolution allows the user to identify motion of the structure more accurately.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.