This application is based upon and claims the benefit of priority from prior Japanese Patent Applications No. 2011-022999, filed Feb. 4, 2011; and No. 2012-020678, filed Feb. 2, 2012, the entire contents of all of which are incorporated herein by reference.
Embodiments described herein relate generally to an ultrasonic diagnostic apparatus, an ultrasonic image processing apparatus, and an ultrasonic image acquisition method.
An ultrasonic diagnostic apparatus emits ultrasonic pulses generated by transducers provided in an ultrasonic probe into an object to be examined, and receives reflected ultrasonic waves generated by differences in acoustic impedance of the tissues of the object via the transducers, thereby acquiring biological information. This apparatus can perform real-time display of image data by the simple operation of bringing the ultrasonic probe into contact with the surface of the body, and hence is widely used for morphological diagnosis and functional diagnosis of various organs.
The above ultrasonic diagnostic apparatus is also used for image diagnosis of the circulatory system. For example, the apparatus measures a blood flow velocity in a specific region at a desired depth from the surface of the body by using a pulse Doppler method, calculates feature amounts associated with a blood flow, such as a PI (Pulsatility Index), an RI (Resistance Index), and an S/D ratio, and flow velocity index values, such as a maximum flow velocity value, a mean flow velocity value, and a minimum flow velocity value, and displays them in real time. The operator can quickly and visually recognize the blood flow state of the patient by observing the displayed blood flow indices.
However, since the conventional apparatus uses the pulse Doppler method for calculating various blood flow indices such as PI, RI, S/D, maximum values, average values, and minimum values, the blood flow indices which the apparatus can calculate are limited to local areas corresponding to one or two rasters. Therefore, the observer (e.g., a doctor) can visually recognize the blood flow in a local area quickly but cannot do so with respect to an area wider than a predetermined area (refer to
As described above, the blood flow indices which the conventional ultrasonic diagnostic apparatus can calculate are limited to those in a local area. Therefore, if the conventional ultrasonic diagnostic apparatus is used for measuring the blood flow velocity in the entire cervical vessel, a target blood vessel has to be first visualized in a long axis view and then the entire blood vessel has to be visually observed from one portion to another to detect an abnormality, while simultaneously moving the pulse-Doppler sampling position (the gate position) along the long axis of the blood vessel. This being so, a physical burden is imposed on a patient, and an operation burden is imposed on a doctor.
Under the above circumstances, the object is to provide an ultrasonic diagnostic apparatus, an ultrasonic image processing apparatus and an ultrasonic image processing method, which enable calculation of a blood-vessel feature amount for a wider area than before and which enable the observer to visually recognize the calculation result quickly and easily.
In general, according to one embodiment, an ultrasonic diagnostic apparatus includes a detection unit configured to detect a distribution of velocity information at each position in a predetermined area in an object over a predetermined interval by scanning the predetermined area with an ultrasonic wave, a calculation unit configured to calculate at least one feature amount based on at least one of a maximum flow velocity value, a minimum flow velocity value, and a mean flow velocity value at each position in the predetermined interval by using the velocity information at each position over the predetermined interval, and a display unit configured to display the feature amount in a predetermined form.
The embodiment will be described below with reference to the accompanying drawings. Note that the same reference numerals in the following description denote constituent elements having almost the same functions and arrangements, and a repetitive description will be made only when required.
The ultrasonic probe 12 is a device (probe) which transmits ultrasonic waves to an object, and receives reflected waves from the object based on the transmitted ultrasonic waves. The ultrasonic probe 12 has, on its distal end, an array of a plurality of piezoelectric transducers, a matching layer, a backing member, and the like. The piezoelectric transducers transmit ultrasonic waves in a desired direction in a scan area based on driving signals from the ultrasonic transmission unit 21, and convert reflected waves from the object into electrical signals. The matching layer is an intermediate layer which is provided for the piezoelectric transducers to make ultrasonic energy efficiently propagate. The backing member prevents ultrasonic waves from propagating backward from the piezoelectric transducers. When the ultrasonic probe 12 transmits an ultrasonic wave to an object P, the transmitted ultrasonic wave is sequentially reflected by the discontinuity surface of an acoustic impedance of an internal body tissue, and is received as an echo signal by the ultrasonic probe 12. The amplitude of this echo signal depends on an acoustic impedance difference on the discontinuity surface by which the echo signal is reflected. The echo produced when a transmitted ultrasonic pulse is reflected by a moving blood flow is subjected to a frequency shift depending on the velocity component of the moving body in the ultrasonic transmission/reception direction due to the Doppler effect.
Note that the ultrasonic probe 12 according to this embodiment may be a two-dimensional array probe (i.e., a probe having ultrasonic transducers arranged in the form of a two-dimensional matrix) or a probe which can acquire volume data, e.g., a mechanical 4D probe (i.e., a probe which can execute ultrasonic scanning while mechanically swinging an ultrasonic transducer array in a direction perpendicular to the array direction). Obviously, the ultrasonic probe 12 may be a one-dimensional array probe.
The input device 13 is connected to an apparatus body 11 and includes various types of switches, buttons, a trackball, a mouse, and a keyboard which are used to input, to the apparatus body 11, various types of instructions, conditions, an instruction to set a region of interest (ROI), various types of image quality condition setting instructions, and the like from an operator.
The monitor 14 displays morphological information and blood flow information in the living body as images based on video signals from the display processing unit 28.
The ultrasonic transmission unit 21 includes a trigger generation circuit, delay circuit, and pulser circuit (none of which are shown). The trigger generation circuit repetitively generates trigger pulses for the formation of transmission ultrasonic waves at a predetermined rate frequency fr Hz (period: 1/fr sec). The delay circuit gives each trigger pulse a delay time necessary to focus an ultrasonic wave into a beam and determine transmission directivity for each channel. The pulser circuit applies a driving pulse to the probe 12 at the timing based on this trigger pulse.
The ultrasonic transmission unit 21 has a function of instantly changing a transmission frequency, transmission driving voltage, or the like to execute a predetermined scan sequence in accordance with an instruction from the control processor 29. In particular, the function of changing a transmission driving voltage is implemented by a linear-amplifier-type transmission circuit capable of instantly switching its value or by a mechanism of electrically switching among a plurality of power supply units.
The ultrasonic reception unit 22 includes an amplifier circuit, A/D converter, delay circuit, and adder (none of which are shown). The amplifier circuit amplifies an echo signal received via the probe 12 for each channel. The A/D converter converts each analog echo signal into a digital echo signal. The delay circuit gives the digitally converted echo signals delay times necessary to determine reception directivities and perform reception dynamic focusing.
The adder then performs addition processing for the signals. With this addition, a reflection component from a direction corresponding to the reception directivity of the echo signal is enhanced to form a composite beam for ultrasonic transmission/reception in accordance with reception directivity and transmission directivity.
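The reception directivity formed by the delay circuit and the adder is essentially a delay-and-sum operation. The following is a minimal sketch of that operation in Python/NumPy, assuming digitized per-channel echo signals and non-negative integer sample delays; the function and variable names are illustrative and do not correspond to the actual circuit implementation.

```python
import numpy as np

def delay_and_sum(channel_signals: np.ndarray, delays_samples: np.ndarray) -> np.ndarray:
    """Sum per-channel echo signals after applying per-channel sample delays.

    channel_signals: shape (num_channels, num_samples), digitized echoes.
    delays_samples:  shape (num_channels,), non-negative delay (in samples)
                     chosen so that echoes from the focal direction align.
    """
    num_channels, num_samples = channel_signals.shape
    beam = np.zeros(num_samples)
    for ch in range(num_channels):
        d = int(delays_samples[ch])
        # Shift the channel by its delay, then accumulate; echoes arriving
        # from the steered/focused direction add coherently.
        beam[d:] += channel_signals[ch, :num_samples - d]
    return beam
```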
The B-mode processing unit 23 receives an echo signal from the reception unit 22, and performs logarithmic amplification, envelope detection processing, and the like for the signal to generate data whose signal intensity is expressed by a luminance level.
The blood flow detection unit 24 extracts a blood flow signal from the echo signal received from the ultrasonic reception unit 22, and generates blood flow data. In general, the blood flow detection unit 24 extracts a blood flow by CFM (Color Flow Mapping). In this case, the blood flow detection unit 24 analyzes the blood flow signal to obtain blood flow information such as mean velocities, variances, and powers as blood flow data at multiple points.
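CFM-type blood flow detection is commonly implemented with a lag-one autocorrelation (Kasai) estimator applied to the slow-time IQ ensemble at each depth. The sketch below shows one such estimate of mean velocity, variance, and power; it is an assumption about a typical realization, not a description of how the blood flow detection unit 24 is actually built, and the variance expression is one common approximation.

```python
import numpy as np

def kasai_estimates(iq_ensemble: np.ndarray, prf: float, f0: float, c: float = 1540.0):
    """Estimate mean velocity, variance, and power from a slow-time IQ ensemble.

    iq_ensemble: complex array of shape (num_pulses, num_depths) holding the
                 pulse-to-pulse (slow-time) samples at each depth.
    prf:         pulse repetition frequency [Hz].
    f0:          transmit center frequency [Hz].
    c:           assumed speed of sound [m/s].
    """
    # Lag-one autocorrelation along the slow-time axis, and the zero-lag power.
    r1 = np.mean(iq_ensemble[1:] * np.conj(iq_ensemble[:-1]), axis=0)
    r0 = np.mean(np.abs(iq_ensemble) ** 2, axis=0)
    mean_doppler = np.angle(r1) * prf / (2.0 * np.pi)   # mean Doppler shift [Hz]
    mean_velocity = mean_doppler * c / (2.0 * f0)       # axial velocity [m/s]
    # Frequency variance [Hz^2], one common approximation of the Kasai estimator.
    variance = (prf ** 2 / (2.0 * np.pi) ** 2) * 2.0 * (1.0 - np.abs(r1) / (r0 + 1e-12))
    return mean_velocity, variance, r0
```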
The raw data memory 25 generates B-mode raw data as B-mode data on ultrasonic scanning lines by using a plurality of B-mode data received from the B-mode processing unit 23. The raw data memory 25 also generates color raw data as blood flow data on ultrasonic scanning lines by using a plurality of blood flow data received from the blood flow detection unit 24. Note that, for the purpose of reducing noise or smoothly concatenating images, a filter may be inserted after the raw data memory 25 to perform spatial smoothing.
With the wide-area feature amount image generation function (to be described later), the feature amount calculation unit 26 receives blood flow information obtained by CFM over a predetermined interval from the raw data memory 25, and calculates a flow velocity index value and a feature amount associated with a blood flow at each position in the blood vessel under the control of the control processor 29. In this case, a flow velocity index value associated with a blood flow is, for example, a maximum value, a mean value, or a minimum value, and a feature amount associated with a blood flow is, for example, a PI, an RI, or an S/D ratio.
The image processing unit 27 generates B-mode image data, CFM image data, and volume data by using the B-mode raw data and color raw data received from the raw data memory 25. The image processing unit 27 performs predetermined image processing such as volume rendering, MPR (Multi Planar Reconstruction), and MIP (Maximum Intensity Projection). The image processing unit 27 generates a feature amount image in which different colors are assigned in accordance with feature amount values by using the feature amounts at the respective positions which are calculated by the feature amount calculation unit 26.
Note that, for the purpose of reducing noise or smoothly concatenating images, a two-dimensional filter may be inserted after the image processing unit 27 to perform spatial smoothing.
The display processing unit 28 executes various kinds of processes associated with a dynamic range, luminance (brightness), contrast, γ curve correction, RGB conversion, and the like for various kinds of image data generated/processed by the image processing unit 27.
The control processor 29 has the function of an information processing apparatus (computer) and controls the operation of the main body of this ultrasonic diagnostic apparatus. The control processor 29 reads out, from the storage unit 30, a dedicated program for implementing the wide-area feature amount image generation function (to be described later) and the like, expands the program in its own memory, and executes computation, control, and the like associated with each type of processing.
The storage unit 30 stores the dedicated program for implementing the wide-area feature amount image generation function, diagnosis information (patient ID, findings by doctors, and the like), a diagnostic protocol, transmission/reception conditions, a color table for assigning different colors in accordance with calculated feature amount values, and other data groups. The storage unit 30 is also used to store images in the image memory (not shown), as needed. It is possible to transfer data in the storage unit 30 to an external peripheral device via the interface unit 31.
The interface unit 31 is an interface associated with the input device 13, a network, and an external storage device (not shown). The interface unit 31 can transfer, via the network, data such as ultrasonic images and analysis results obtained by this apparatus to another apparatus.
The wide-area feature amount image generation function of the ultrasonic diagnostic apparatus 1 will be described next. This function serves to generate a feature amount image such that flow velocity index values and feature amounts associated with blood flows at the respective positions in the blood vessel are calculated using blood flow information obtained by CFM over a predetermined interval and different colors are assigned in accordance with the feature amount values. This makes it possible to quickly and easily observe feature amounts in a wide area as compared with the prior art.
The operator executes, via the input device 13, inputting of patient information and selection of an imaging mode, scan sequence, transmission/reception conditions, and the like for ultrasonically scanning a predetermined area in an object (step S1). In this case, the operator selects the CFM mode as an imaging mode, and inputs a sample volume, transmission voltage, and the like as transmission/reception conditions. The storage unit 30 automatically stores the input and selected pieces of information, conditions, and the like.
The operator brings the ultrasonic probe 12 into contact with the surface of the object at a desired position to execute ultrasonic scanning in the CFM mode for an area including a diagnosis region (a desired blood vessel in this case) as an area to be scanned. The echo signals acquired by ultrasonic scanning in the CFM mode are sent to the blood flow detection unit 24 via the ultrasonic reception unit 22. The blood flow detection unit 24 extracts blood flow signals by CFM, obtains blood flow information such as mean velocities, variances, and powers as blood flow data at multiple points, and generates blood flow velocity information (color data) for each frame. The raw data memory 25 generates color raw data for each frame by using a plurality of color data received from the blood flow detection unit 24 (step S2).
The feature amount calculation unit 26 receives blood flow information, of the blood flow information obtained by CFM, which corresponds to a predetermined interval from the raw data memory 25, and calculates a flow velocity index value and a feature amount at each position in the blood vessel (step S3).
The maximum velocity, minimum velocity, and mean velocity at a sample x and a raster y in the range from the first frame to the Nth frame are respectively defined as Vmax(x, y), Vmin(x, y), and Vmean(x, y). Furthermore, PI(x, y) and RI(x, y) at the sample x and the raster y in the range from the first frame to the Nth frame are respectively defined by equations (1) and (2) given below:
PI(x,y)=(Vmax(x,y)−Vmin(x,y))/Vmean(x,y) (1)
RI(x,y)=(Vmax(x,y)−Vmin(x,y))/Vmax(x,y) (2)
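Equations (1) and (2) translate directly into an element-wise computation over the stored flow velocity index values. The following is a minimal sketch, assuming Vmax, Vmin, and Vmean are held as NumPy arrays indexed by (sample, raster); the guard against division by zero is an illustrative addition.

```python
import numpy as np

def pulsatility_index(v_max: np.ndarray, v_min: np.ndarray, v_mean: np.ndarray) -> np.ndarray:
    """PI(x, y) = (Vmax(x, y) - Vmin(x, y)) / Vmean(x, y), per (sample, raster) position."""
    return (v_max - v_min) / np.where(v_mean != 0, v_mean, np.nan)

def resistance_index(v_max: np.ndarray, v_min: np.ndarray) -> np.ndarray:
    """RI(x, y) = (Vmax(x, y) - Vmin(x, y)) / Vmax(x, y), per (sample, raster) position."""
    return (v_max - v_min) / np.where(v_max != 0, v_max, np.nan)
```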
Upon receiving velocity information V(x, y, 1) (where x and y are natural numbers satisfying 1≦x≦400 and 1≦y≦200) of the first frame from the raw data memory 25, the feature amount calculation unit 26 temporarily stores the information in its memory.
Upon receiving velocity information V(x, y, 2) of the second frame from the raw data memory 25, the feature amount calculation unit 26 temporarily stores the information in its memory, and compares the information with the velocity information V(x, y, 1) of the first frame to calculate the maximum velocity Vmax(x, y), minimum velocity Vmin(x, y), and mean velocity Vmean(x, y) at the sample x and the raster y. In addition, the feature amount calculation unit 26 calculates PI(x, y) and RI(x, y) by using the obtained Vmax(x, y), Vmin(x, y), and Vmean(x, y) according to equations (1) and (2). The feature amount calculation unit 26 stores the acquired Vmax(x, y), Vmin(x, y), and Vmean(x, y) in a flow velocity index value storage unit 260, and also stores PI(x, y) and RI(x, y) in a feature amount storage unit 261.
Upon receiving velocity information V(x, y, 3) of the third frame from the raw data memory 25, the feature amount calculation unit 26 temporarily stores the information in its memory, and compares the velocity information V(x, y, 3) with the maximum velocity Vmax(x, y) obtained from the first and second frames. If V(x, y, 3) is larger than Vmax(x, y), the feature amount calculation unit 26 updates Vmax(x, y); if V(x, y, 3) is smaller, it keeps Vmax(x, y) unchanged. The feature amount calculation unit 26 updates Vmin(x, y) in the same manner, and calculates the mean velocity Vmean(x, y) of the first to third frames by using the velocity information of the first to third frames (or, equivalently, by updating the stored mean velocity Vmean(x, y) of the first and second frames with the velocity information V(x, y, 3) of the third frame). The feature amount calculation unit 26 also calculates PI(x, y) and RI(x, y) by using the obtained Vmax(x, y), Vmin(x, y), and Vmean(x, y) according to equations (1) and (2). The feature amount calculation unit 26 then stores the acquired Vmax(x, y), Vmin(x, y), and Vmean(x, y) in the flow velocity index value storage unit 260, and PI(x, y) and RI(x, y) in the feature amount storage unit 261.
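The compare-and-update procedure described for the second and third frames generalizes to a running per-position reduction over frames 1 to N. A minimal sketch follows, with the 400-sample by 200-raster dimensions taken from the example above; the class and attribute names are illustrative only.

```python
import numpy as np

class VelocityStats:
    """Running Vmax(x, y), Vmin(x, y), and Vmean(x, y) over successive frames."""

    def __init__(self, num_samples: int = 400, num_rasters: int = 200):
        self.v_max = np.full((num_samples, num_rasters), -np.inf)
        self.v_min = np.full((num_samples, num_rasters), np.inf)
        self.v_sum = np.zeros((num_samples, num_rasters))
        self.frames = 0

    def update(self, v_frame: np.ndarray) -> None:
        """Fold one frame of velocity information V(x, y, n) into the running statistics."""
        self.v_max = np.maximum(self.v_max, v_frame)  # keep the larger value per position
        self.v_min = np.minimum(self.v_min, v_frame)  # keep the smaller value per position
        self.v_sum += v_frame                         # accumulate for the mean
        self.frames += 1

    @property
    def v_mean(self) -> np.ndarray:
        return self.v_sum / max(self.frames, 1)
```

Feeding the frames V(x, y, 1) to V(x, y, N) into update() and then applying equations (1) and (2) to v_max, v_min, and v_mean corresponds to the contents described for the flow velocity index value storage unit 260 and the feature amount storage unit 261.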
Thereafter, the feature amount calculation unit 26 sequentially executes similar processing up to the Nth frame. As a result, the flow velocity index value storage unit 260 and the feature amount storage unit 261 store pieces of information like those shown in
Note that it is not necessary to calculate PI(x, y) and RI(x, y) and store (update) them in the feature amount storage unit 261 at the same timings as those for calculating Vmax(x, y), Vmin(x, y), and Vmean(x, y) and storing (updating) them in the flow velocity index value storage unit 260. For example, the apparatus may detect heartbeats based on biological information such as the blood flow information obtained by CFM and ECG information, calculate PI(x, y) and RI(x, y) with reference to the detected heartbeats (e.g., for each heartbeat), and store (update) them in the feature amount storage unit 261.
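When PI(x, y) and RI(x, y) are updated per heartbeat, one simple illustrative approach is to detect beat boundaries from an ECG trace and reduce the velocity frames within each beat. The threshold-crossing beat detector and the assumption of one ECG sample per image frame below are simplifications, not the heartbeat detection actually used by the apparatus.

```python
import numpy as np

def per_beat_indices(ecg: np.ndarray, threshold: float):
    """Return frame indices of upward threshold crossings (a very simplified R-peak detector)."""
    above = ecg >= threshold
    return [i for i in range(1, len(ecg)) if above[i] and not above[i - 1]]

def per_beat_pi_ri(v_frames: np.ndarray, beat_starts):
    """Compute PI and RI per heartbeat from per-frame velocities.

    v_frames:    shape (num_frames, num_samples, num_rasters).
    beat_starts: frame indices at which each heartbeat begins.
    """
    results = []
    bounds = list(beat_starts) + [v_frames.shape[0]]
    for b0, b1 in zip(bounds[:-1], bounds[1:]):
        beat = v_frames[b0:b1]
        v_max, v_min, v_mean = beat.max(axis=0), beat.min(axis=0), beat.mean(axis=0)
        pi = (v_max - v_min) / (v_mean + 1e-12)   # equation (1), per position
        ri = (v_max - v_min) / (v_max + 1e-12)    # equation (2), per position
        results.append((pi, ri))
    return results
```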
The image processing unit 27 then generates a wide-area feature amount image by using the acquired blood flow information (step S4). That is, in step S4 the image processing unit 27 generates a wide-area feature amount image in which each feature amount is represented by PI(x, y) by assigning different colors in accordance with the values of PI(x, y) calculated in step S3. The image processing unit 27 likewise generates a wide-area feature amount image in which each feature amount is represented by RI(x, y) by assigning different colors in accordance with the values of RI(x, y) calculated in step S3. The monitor 14 displays the generated wide-area feature amount images in a predetermined form after predetermined display processing is performed on the images (step S5).
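Assigning different colors in accordance with the calculated PI(x, y) or RI(x, y) values amounts to a lookup into a color table such as the one held in the storage unit 30. The following is a minimal sketch; the blue-to-red ramp and the display range parameters are illustrative placeholders, not the apparatus's actual color table.

```python
import numpy as np

def feature_to_color(feature: np.ndarray, f_lo: float, f_hi: float,
                     table: np.ndarray) -> np.ndarray:
    """Map per-position feature values (e.g., PI(x, y)) to RGB via a color table.

    feature: 2-D array of feature values.
    f_lo, f_hi: display range; values outside are clipped.
    table: shape (num_entries, 3) uint8 color table.
    """
    norm = np.clip((feature - f_lo) / (f_hi - f_lo), 0.0, 1.0)
    idx = np.round(norm * (len(table) - 1)).astype(int)
    return table[idx]  # shape (samples, rasters, 3) RGB image

# Illustrative 256-entry blue-to-red table (index 0 -> blue, index 255 -> red).
ramp = np.linspace(0, 255, 256).astype(np.uint8)
blue_to_red = np.stack([ramp, np.zeros_like(ramp), ramp[::-1]], axis=1)
```

For example, feature_to_color(pi_image, f_lo=0.5, f_hi=2.0, table=blue_to_red) would render an illustrative PI image; the range 0.5 to 2.0 is only a placeholder, not a clinically prescribed display range.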
The above wide-area feature amount image generation function can be variously modified. Typical modifications of this wide-area feature amount image generation function will be described below.
As shown in
The wide-area feature amount image generation function according to the second modification simultaneously displays wide-area feature amount images of a plurality of slices, as shown in
Note that the display form according to the second modification is effective, for example, when the operator wants to simultaneously observe a plurality of wide-area feature amount images acquired at different timings.
The wide-area feature amount image generation function according to the third modification spatially associates a plurality of wide-area feature amount images to display them as one composite image (also called a fusion image, concatenated image, combined image, or panorama image). It is possible to generate this composite image by, for example, calculating moving amounts from changes in image between B-mode frames and concatenating a plurality of wide-area feature amount images upon spatially associating them with each other.
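One possible way to spatially associate successive images, along the lines described above, is to estimate the in-plane shift between consecutive B-mode frames and paste each wide-area feature amount image onto a common canvas at its accumulated offset. The sketch below uses phase correlation for the shift estimate; the integer-pixel shifts, the function names, and the rule that later frames overwrite overlaps are assumptions, not the apparatus's actual composition algorithm.

```python
import numpy as np

def estimate_shift(frame_a: np.ndarray, frame_b: np.ndarray):
    """Estimate the (dy, dx) displacement of frame_b relative to frame_a by phase correlation."""
    cross = np.fft.fft2(frame_b) * np.conj(np.fft.fft2(frame_a))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap offsets into a signed range around zero.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return int(dy), int(dx)

def stitch(images, offsets) -> np.ndarray:
    """Paste same-sized feature images onto one canvas at their accumulated offsets."""
    h, w = images[0].shape
    ys = np.cumsum([0] + [o[0] for o in offsets])
    xs = np.cumsum([0] + [o[1] for o in offsets])
    canvas = np.zeros((h + ys.max() - ys.min(), w + xs.max() - xs.min()))
    for img, y, x in zip(images, ys - ys.min(), xs - xs.min()):
        canvas[y:y + h, x:x + w] = img  # later frames overwrite overlapping regions
    return canvas
```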
The wide-area feature amount image generation function according to the fourth modification displays calculated feature amount information and a calculated flow velocity index value as character information.
The wide-area feature amount image generation function according to the fifth modification displays calculated feature amount information in the form of a graph.
The wide-area feature amount image generation function according to the sixth modification specifies a desired position (e.g., a position where the PI value is large) on a wide-area feature amount image, and executes pulse Doppler processing upon automatically setting a sampling position at the specified position.
Assume that a wide-area feature amount image like that shown in
The wide-area feature amount image generation function according to the seventh modification displays, at once, feature amounts associated with a larger portion of the blood vessel by concatenating areas in which feature amounts have been measured (feature amount measurement areas).
In addition, it is possible to generate and display one composite image like that shown in
The feature amount calculation in step S3 can be executed for each heartbeat or for every plurality of heartbeats. This can be realized by performing the feature amount calculation described in step S3 based on the velocity information V(x, y, n) in each frame from the first frame to the nth frame over one heartbeat or a plurality of heartbeats (n is an integer satisfying 1≦n≦N). In the case of generating the concatenated images of
The ultrasonic diagnostic apparatus described above calculates feature amounts such as PI values associated with blood flows at the respective positions in the blood vessel by using blood flow information obtained by CFM over a predetermined interval, and assigns different colors in accordance with the calculated values, thereby generating and displaying a feature amount image. It is therefore possible to calculate feature amounts such as PI values over a wider area than with conventional pulse Doppler processing, and to display the result as a wide-area feature amount image. By observing the displayed wide-area feature amount image, the operator can quickly and easily recognize flow velocity index values and feature amounts over a wider blood vessel area than in the prior art.
The conventional ultrasonic diagnostic apparatus examines the entire blood vessel by moving a sampling position for pulse Doppler processing along the blood vessel, which takes much time. In contrast, when measuring a blood flow velocity in a cervical vessel ultrasonic examination, for example, the apparatus extracts a target blood vessel in a long axis view by using this wide-area feature amount image and executes screening. This makes it possible to quickly and easily determine the presence/absence of an abnormality. The operator can therefore finish the examination if there is no abnormality, and perform a detailed examination by moving a sampling position for pulse Doppler processing if there is an abnormality, thus improving examination efficiency.
Note that the present invention is not limited to the embodiment described above, and constituent elements can be modified and embodied in the execution stage within the spirit and scope of the invention.
(1) Each function associated with this embodiment can also be implemented by installing programs for executing the corresponding processing in a computer such as a workstation and expanding them in a memory. In this case, the programs which can cause the computer to execute the corresponding techniques can be distributed by being stored in recording media such as magnetic disks (floppy® disks, hard disks, and the like), optical disks (CD-ROMs, DVDs, and the like), and semiconductor memories.
(2) The blood flow information acquired in the CFM mode may be stored in advance, and this wide-area feature amount image may be generated and displayed afterward.
(3) The above embodiment has exemplified the case in which a wide-area feature amount image is generated and displayed by assigning different colors to corresponding positions in accordance with acquired feature amount values. However, this embodiment is not limited to this case. For example, it is possible to generate and display a wide-area feature amount image by assigning different colors to corresponding positions in accordance with acquired flow velocity index values.
(4) The above embodiment has explained an example of executing the wide-area feature amount image generation processing by generating a spatial distribution of velocity information using a series of signals obtained in an imaging mode for performing CFM.
However, this embodiment is not limited to this example. The wide-area feature amount image generation processing can also be executed by generating a spatial distribution of velocity information using a series of signals obtained in other imaging modes. For example, it may be executed by generating a spatial distribution of velocity information using a series of signals obtained in an imaging mode which executes Doppler processing on a series of signals obtained by a B-mode scan. Furthermore, instead of Doppler processing, a spatial distribution of velocity information may be generated by, for example, executing a high-speed B-mode scan in which the scanning range is limited and performing correlation processing (for example, speckle tracking processing) between frames of the obtained B-mode images. The present wide-area feature amount image generation processing can also be executed by using such a spatial distribution of velocity information.
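As one illustration of the correlation processing mentioned above, a simple block-matching form of speckle tracking between two consecutive B-mode frames can produce an in-plane velocity field. The block size, search range, frame interval, and pixel size below are placeholder values, not parameters of the apparatus.

```python
import numpy as np

def block_match_velocity(prev_frame: np.ndarray, next_frame: np.ndarray,
                         block: int = 16, search: int = 4,
                         frame_interval_s: float = 0.005,
                         pixel_size_mm: float = 0.2) -> np.ndarray:
    """Estimate an in-plane velocity field [mm/s] by block matching two B-mode frames."""
    h, w = prev_frame.shape
    vel = np.zeros((h // block, w // block, 2))
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = prev_frame[y0:y0 + block, x0:x0 + block].astype(float)
            best, best_dyx = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    cand = next_frame[y1:y1 + block, x1:x1 + block].astype(float)
                    ssd = np.sum((ref - cand) ** 2)   # sum of squared differences
                    if ssd < best:
                        best, best_dyx = ssd, (dy, dx)
            # Displacement (pixels) over one frame interval -> velocity [mm/s].
            vel[by, bx] = np.array(best_dyx) * pixel_size_mm / frame_interval_s
    return vel
```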
In addition, various inventions can be formed by proper combinations of a plurality of constituent elements disclosed in the above embodiments. For example, several constituent elements may be omitted from all the constituent elements disclosed in the above embodiments. Furthermore, constituent elements in the different embodiments may be properly combined.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms;
furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Number      | Date     | Country | Kind
------------|----------|---------|---------
2011-022999 | Feb 2011 | JP      | national
2012-020678 | Feb 2012 | JP      | national