The present invention relates to an apparatus for displaying a high contrast ultrasonic image with reduced speckle noise.
An ultrasonic diagnostic apparatus, which can noninvasively pick up an image of an affected area in real time, has been widely used as a monitoring tool for initial diagnosis, fetal diagnosis, and treatment at clinics. In recent years, portable ultrasonic diagnostic apparatuses have been developed, which facilitate emergency treatment because the apparatus can be carried to the patient at any place in or out of a hospital. In addition to the portable size of the apparatus, the easy operation of the ultrasonic diagnostic apparatus promotes a scalable network between a patient's house and a hospital; home medical care can therefore be expected in which the patient himself/herself operates the apparatus and transfers his/her own image to the hospital for a remote diagnosis.
An ultrasonic image can advantageously be picked up in a short period of time and displayed in real time, but it disadvantageously has a weak signal strength indicating a tissue structure relative to the noise level (a low S/N ratio), which often makes diagnosis from the image difficult. Thus, in order to improve the accuracy of diagnosis and the sharing of information between a doctor and a patient, a technology for displaying an image having more objective information in a manner easy to understand is desired.
An ultrasonic image has mottles called speckles, which are one of the factors that make the diagnosis from an image difficult. Spatial compound sonography is a method for reducing the speckles and displaying a high quality ultrasonic image. The speckles on a two dimensional ultrasonic image (B-mode image) are spatially standing signals which are generated when the signals reflected from minute scatterers dispersed in an object under inspection interfere with each other, and they have a spatial frequency which is lower than that of a signal indicating a structure of the object under inspection. As a result, when the image pickup surface is moved in the slice direction by a distance equal to the size of a speckle, the speckles change randomly on the image, while the structure of the object under inspection does not change significantly. If such images acquired at slightly different positions in the slice direction are added to each other, the signals from the structure of the object under inspection are enhanced at every addition, while the signals from the randomly changing speckles are smoothed. As a result, the S/N ratio between a speckle and the structure is improved, and a high contrast image having an enhanced S/N ratio is constructed.
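As a rough illustration of this compounding principle (not part of the original description), the following minimal sketch assumes a stack of already co-registered B-mode frames acquired at slightly different slice positions and simply averages them; the structural echoes reinforce each other while the randomly varying speckle is smoothed.

```python
import numpy as np

def compound_average(frames):
    """Average co-registered B-mode frames (spatial compounding sketch).

    frames: array-like of shape (n_frames, height, width) holding the
            amplitude data of frames taken at slightly shifted slice
            positions. Structure is nearly identical across frames and is
            preserved; speckle varies randomly and is smoothed, so the
            speckle-to-structure S/N improves with the number of frames.
    """
    frames = np.asarray(frames, dtype=np.float64)
    return frames.mean(axis=0)
```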
So far, a number of technologies for adding a plurality of images and extracting specific information therefrom have been reported, and a few of them will be explained below.
Patent Document 1 describes a technology in which a plurality of blood flow images (CFM: Color Flow Mapping) are stored in a memory, and an operator selectively removes, from among the displayed images, frames that are not to be used in an adding process, so that the remaining images are added to construct an image.
Patent Document 2 relates to a technology in which signals are received in an efficient and stable manner from a contrast medium after a small amount of the contrast medium is administered, so that the distribution of blood vessels can be clearly imaged. Imaging of blood vessel information requires crushing the contrast medium flowing in the blood with ultrasonic waves and constructing an image by receiving the strong nonlinear signals generated when the medium is crushed. Because strong ultrasonic waves are necessary to crush a contrast medium, there is a problem that quite different S/N ratios are obtained in the area where the transmission signals are focused and in the other areas, so that the contrast-medium image is non-uniform in the depth direction. To address this problem, in the technology described in Patent Document 2, the area where the transmission signals are focused is changed in the depth direction in a number of stages to construct a plurality of images, called a group of multi-shot images, on which an adding process is performed. In addition, a specific area of each frame of the group of multi-shot images is weighted before the adding process.
Patent Document 1: Japanese Patent No. 3447150
Patent Document 2: Japanese Patent Laid-Open No. 2002-209898
In constructing an image by performing an adding/subtracting process on a plurality of images to enhance specific information of the images, and in displaying the image in real time, problems occur such as a shift of the position of the subject, an increased processing time, and an increased memory requirement.
The technology described in Patent Document 1, which was made without consideration of any position shift of an object under inspection, can hardly be applied to a movable object under inspection. In addition, an operator selects the images to be used in the adding process, which makes it impossible to display the image in real time. The technology described in Patent Document 2 has another problem that, in constructing and displaying one image by the adding process, every frame to be added has to be stored in a memory once, so that the memory has to hold at least the number of frames required in the adding process.
One object of the present invention is to provide an ultrasonic diagnostic apparatus which displays a high contrast ultrasonic image with reduced speckles.
Another object of the present invention is to extract information which changes over time. In the present invention, a weighting is applied to a cumulative added image, and an operator can control the amount of the information changing over time that is extracted from the images to be used.
In order to achieve the above described objects, in the ultrasonic diagnostic apparatus of the present invention, a motion vector (stop-motion information) of an object under inspection, which is generated between a captured B-mode image and the B-mode image captured just before it, is detected, and based on the detected motion vector, an image transformation process is performed while an image addition process is performed with multiplication by a weight factor, so that an image in which an edge structure of the object under inspection is enhanced is displayed.
Now, typical examples of the structure of an ultrasonic diagnostic apparatus according to the present invention will be described below; in each of them, a captured B-mode image is added to a cumulative added image which is constructed from the B-mode images of the previous frames.
According to the present invention, a high contrast image in which an edge structure of tissues is enhanced and the speckle noise is reduced is obtained, and thereby an ultrasonic image having high visibility can be displayed.
Now, examples of the present invention will be explained in detail below with reference to the drawings.
In an ultrasonic diagnostic apparatus of the present example, a two dimensional ultrasonic image (hereinafter referred to as a B-mode image) is constructed by transmitting/receiving ultrasonic waves to and from an object under inspection. Measurement regions in which a motion vector of the object under inspection is detected are then defined on the B-mode image, and the motion vector generated between the currently captured B-mode image and the B-mode image of the next to the last frame is detected in every measurement region. Based on the motion vector, a cumulative added image is transformed (corrected), multiplied by a weight factor, and added to the captured B-mode image, so that a new cumulative added image is constructed and displayed on a displaying section in real time.
First, the structure of the apparatus of the present example will be explained with reference to the block diagram.
An ultrasonic transducer (hereinafter referred to as a transducer) 2 is configured with a plurality of piezo-electric devices which are arranged in parallel to each other. An analog signal is transmitted from a transmitting beamformer 3 to each of the piezo-electric devices through a D/A converter 4, which causes ultrasonic waves to be radiated to an object under inspection 1. The ultrasonic waves transmitted from the piezo-electric devices are electronically delayed by the transmitting beamformer 3 so as to be focused at a predefined depth. The transmitted waves are reflected within the object under inspection 1 and are received again at each of the piezo-electric devices in the transducer. The reflected echo received by each of the piezo-electric devices includes attenuated components which vary with the depth reached by the transmitted waves; these are corrected at a TGC (Time Gain Control) section 5, and the echo is then converted to a digital signal at an A/D converter 6 and transmitted to a receiving beamformer 7.
At the receiving beamformer 7, each signal is delayed by a time corresponding to the distance from the focal point to the corresponding piezo-electric device, and the result of adding the delayed signals is output. The focused ultrasonic waves are two dimensionally scanned to obtain a two dimensional distribution of the reflected echo from the object under inspection 1. The receiving beamformer 7 outputs an RF signal having a real part and an imaginary part, which is sent to an envelope detector 8 and a measurement region setting section 12. The signal sent to the envelope detector 8 is converted into a video signal and is interpolated between the scan lines by a scan converter 9, so as to construct a B-mode image, that is, two dimensional image data.
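For illustration only (the patent does not give an implementation), a minimal delay-and-sum receive beamforming sketch is shown below; the element positions, focal point, sampling rate, and speed of sound are assumed inputs, and transmit-path delays and apodization are omitted.

```python
import numpy as np

def delay_and_sum(rf, element_x, focus, fs, c=1540.0):
    """Simplified receive beamforming: delay each channel by the travel
    time from the focal point to its element, then sum the channels.

    rf:        (n_elements, n_samples) digitized echo signals
    element_x: (n_elements,) lateral element positions [m]
    focus:     (x, z) coordinates of the receive focal point [m]
    fs:        sampling frequency [Hz]
    c:         assumed speed of sound [m/s]
    """
    fx, fz = focus
    dist = np.sqrt((np.asarray(element_x) - fx) ** 2 + fz ** 2)
    delays = (dist - dist.min()) / c             # relative delays [s]
    shifts = np.round(delays * fs).astype(int)   # delays in samples
    n_samples = rf.shape[1]
    out = np.zeros(n_samples)
    for ch, s in enumerate(shifts):
        out[:n_samples - s] += rf[ch, s:]        # align each channel, then sum
    return out
```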
The constructed B-mode image is sent to a motion detector 10. At this point of time, the B-mode image of the frame one before the B-mode image arriving at the motion detector 10 is stored in an image memory #1 11. When the constructed B-mode image is the image of the first frame, the image is not processed at the motion detector 10 but passes through it to be input into the image memory #1 11. On the B-mode image stored in the image memory #1 11, measurement regions having the most appropriate size for detecting a motion vector are defined by a measurement region setting section 12, depending on the size of the structure of the object under inspection. After the definition of the measurement regions, the B-mode image is sent to the motion detector 10. At the motion detector 10, the B-mode image from the measurement region setting section 12 and the B-mode image from the scan converter 9 are used to detect a motion vector in every measurement region. A cross correlation method or a least square method is used to detect the motion vector. In a deformation unit 13, based on the motion vector detected in the motion detector 10, the cumulative added image read out from an image memory #2 14 is transformed. In the weight parameter unit 16, the acquired image and the cumulative added image are multiplied by a weight factor, and in the accumulation unit 13, the acquired image and the cumulative added image are added. The cumulative added image constructed in the accumulation unit 13 is once stored in the image memory #2 14 and then displayed on the displaying section 15. When a next adding process is performed, the cumulative added image stored in the image memory #2 14 is sent to the accumulation unit 13 via the weight parameter unit 16.
Next, the operation of the apparatus will be explained in accordance with the flowchart.
First, at Step 1, a B-mode image fn is constructed. When the constructed B-mode image fn is the first image (n=1), the image fn is stored in the image memory #1 11 (Step 2), and a next B-mode image is captured. When the image fn is the second or a later image, measurement regions are defined on the B-mode image fn−1 stored in the image memory #1 11 (Step 4), and a motion vector between the image fn and the image fn−1 is detected in every measurement region (Step 5). The cumulative added image of the images of the first to (n−1)st frames is then read in from the image memory #2 14 and is subjected to a transformation process based on the motion vector detected at Step 5 (Step 9). The cumulative added image subjected to the transformation process and the weighting process is then subjected to an adding process at the accumulation unit 13 using the image fn as a reference (Step 6). The new cumulative added image constructed as described above is stored in the image memory #2 14 (Step 7) and is displayed on the displaying section 15 (Step 8).
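The following sketch outlines this Step 1 to Step 9 loop. It is a simplified illustration, not the apparatus itself: `detect_motion`, `warp`, and `display` are hypothetical helpers standing in for the motion detector, the transformation based on the motion vector, and the displaying section, and the weight factors are arbitrary examples.

```python
import numpy as np

def process_stream(frames, detect_motion, warp, display, alpha=0.7, beta=0.3):
    """Per-frame flow: detect motion against the previous frame, transform
    the cumulative added image, weight it, add the new frame, store, display."""
    previous = None     # image memory #1: previous B-mode frame
    cumulative = None   # image memory #2: cumulative added image
    for fn in frames:                                # Step 1: B-mode image fn
        if previous is None:
            cumulative = fn.astype(np.float64)       # first frame: store as-is
        else:
            vector = detect_motion(previous, fn)     # Steps 4-5: motion vector
            warped = warp(cumulative, vector)        # Step 9: transform
            cumulative = alpha * warped + beta * fn  # Step 6: weighted addition
        previous = fn                                # Step 2: update memory #1
        display(cumulative)                          # Steps 7-8: store/display
    return cumulative
```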
An approach for defining the measurement regions at Step 4 will be explained below.
In the present invention, a plurality of measurement regions 24 are defined in the B-mode image (fn−1) 25, and the region which best matches each of the measurement regions 24 is searched for in the acquired B-mode image (fn) 21 using a cross correlation method or a least square method. Any motion within a single measurement region is reckoned as a rigid body motion without deformation, and the individual motion vectors obtained in the respective measurement regions are combined, so that the entire motion of the object under inspection, including deformation, is detected. The measurement regions are defined in the B-mode image (fn−1) 25 so that the regions used in the adding process with a weight factor, which will be explained later, are uniform; in that adding process, a large weight factor is given to the added image. If, on the contrary, the measurement regions were defined in the acquired B-mode image (fn) 21, the regions extracted from the added image would include an added region and a non-added region having speckles, and an adding process using such regions would cause the speckles to be carried over by the weight factor, which produces artifacts.
The signal components used in the detection can be generally classified into two types: low frequency components, for example at edge structures or boundaries between tissues of the object under inspection; and high frequency speckle components due to the interference between ultrasonic waves scattered by the minute scatterers dispersed in the object under inspection. In the present invention, these two types are not distinguished from each other, and measurement regions are defined all over the image to calculate motions. When a B-mode image is used to detect a motion between images, the high frequency speckle components not only enhance the accuracy of the motion vector detection, but also enable the detection of a motion at a site of tissue that has no characteristic structure. A measurement region should have a size larger than a speckle, which is the minimum element of a B-mode image, and is defined to have a size about twice to three times that of the speckle. For abdominal tissues such as a liver or a kidney, a measurement region of about 5×5 mm2 is defined.
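A sketch of how such a grid of measurement regions could be laid out and the per-region (rigid) motion vectors combined into a field that approximates the deforming motion is given below. The region size and the `match_region` callable (a per-region cross-correlation or least-squares search) are assumptions introduced here for illustration.

```python
import numpy as np

def motion_field(f_prev, f_cur, region_px, match_region):
    """Tile the previous B-mode image into measurement regions of
    region_px x region_px pixels (about two to three speckle widths,
    e.g. roughly 5 x 5 mm for abdominal tissue) and estimate one rigid
    motion vector per region.

    match_region(f_prev, f_cur, top, left, size) -> (dy, dx)
    Returns an array of shape (rows, cols, 2) of per-region vectors.
    """
    h, w = f_prev.shape
    rows, cols = h // region_px, w // region_px
    field = np.zeros((rows, cols, 2))
    for r in range(rows):
        for c in range(cols):
            top, left = r * region_px, c * region_px
            field[r, c] = match_region(f_prev, f_cur, top, left, region_px)
    return field
```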
Alternatively, measurement regions having different sizes for different tissue parts may be defined. A measurement region should have a smaller size to obtain a higher detection resolution of the motion vector, but in this case the number of regions naturally increases, which increases the processing time required for the detection. Therefore, a measurement region which includes a characteristic structure such as an edge of tissues is defined, in accordance with the spatial frequency in the region, to have a larger size than the other regions.
Next, a specific approach for the motion vector detection at Step 5 and the image adding process at Step 6 will be explained below.
The symbol | | in the equation 2 represents an absolute value. The detection of a motion vector using the images after a decimation process may include a detection error of ±1 pixel. To eliminate this error, a measurement region 56 is redefined in the image fn by moving the measurement region 51 from its original position by the motion vector V 57, and a search region 55 is redefined in the image fn−1 so as to be larger than the measurement region 56 by 1 to 2 pixels on each side (Step 6). The redefined measurement region 56 and the search region 55 are used to detect a motion vector V2 again, using an approach similar to that at Step 5 (Step 7). Through the above described processes, the motion vector to be corrected for in the adding process eventually has a value of ((2×V)+V2), where V is the motion vector V 57.
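Since the equations 1 and 2 are not reproduced in the text, the sketch below assumes a sum-of-absolute-differences cost (consistent with the absolute value mentioned above) and illustrates the two-stage estimate: a coarse vector V on 2:1 decimated images, then a small full-resolution refinement V2, combined as (2×V)+V2. For simplicity the refinement here searches the full-resolution image fn around the coarse estimate instead of exchanging the roles of fn and fn−1 as described above.

```python
import numpy as np

def sad_match(ref, img, top, left, rng):
    """Offset (dy, dx) within +/-rng minimizing the sum of absolute
    differences between ref and the patch of img at (top+dy, left+dx)."""
    sy, sx = ref.shape
    best, best_v = np.inf, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + sy > img.shape[0] or x + sx > img.shape[1]:
                continue
            cost = np.sum(np.abs(img[y:y + sy, x:x + sx] - ref))  # |.| cost
            if cost < best:
                best, best_v = cost, (dy, dx)
    return best_v

def coarse_to_fine_vector(f_prev, f_cur, top, left, size, search):
    """Coarse vector V on decimated images, then a +/-2 pixel refinement
    V2 at full resolution; the combined vector is (2*V) + V2."""
    p2 = f_prev[::2, ::2].astype(float)
    c2 = f_cur[::2, ::2].astype(float)
    ref2 = p2[top // 2:(top + size) // 2, left // 2:(left + size) // 2]
    vy, vx = sad_match(ref2, c2, top // 2, left // 2, search // 2)   # coarse V

    ref = f_prev[top:top + size, left:left + size].astype(float)
    v2y, v2x = sad_match(ref, f_cur.astype(float),
                         top + 2 * vy, left + 2 * vx, 2)             # fine V2
    return (2 * vy + v2y, 2 * vx + v2x)
```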
The adding process transforms the cumulative added image, which is constructed with the images from the first frame to the next to the last frame, and adds it to the captured image. The adding process is performed with the acquired image as a reference, whereby the tissues are positioned in the same relationship in the acquired image, the cumulative added image, and the image fn−1. After the detection of the motion vector, the cumulative added image of the first to (n−1)st frames is read in from the image memory #2 14 to the weight parameter unit 16 to be multiplied by a weight factor, and is then input to the accumulation unit 13 to be subjected to a transformation process using the motion vector ((2×V)+V2) detected in the motion detector. The weighting of the cumulative added image after the transformation process is carried out in the form expressed by the equation 3, which uses a weight factor (α, β), so as to construct a new cumulative added image.
When the equation 3 is expanded to indicate the factor for each of the frames constructing the cumulative added image, the equation 4 is obtained. The weight factor of each frame, which can be obtained from the equation 4, is an important parameter that determines the effect of the addition.
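The equations 3 and 4 themselves do not survive in the text above. A plausible reconstruction, assuming that $g_n$ denotes the cumulative added image after the $n$-th frame, $T$ the transformation based on the detected motion vector, and that the weight $\alpha$ is applied to the transformed cumulative added image and $\beta$ to the captured image $f_n$ (these symbols are introduced here only for illustration), is:

$$g_n = \alpha\,T(g_{n-1}) + \beta\,f_n \qquad \text{(cf. equation 3)}$$

$$g_n = \beta\,f_n + \alpha\beta\,T(f_{n-1}) + \alpha^{2}\beta\,T^{2}(f_{n-2}) + \cdots \qquad \text{(cf. equation 4)}$$

Here $T^{k}$ denotes the compounded transformations applied over $k$ frames; the expansion shows that the effective weight of each past frame decreases geometrically with $\alpha$, which is why $\alpha$ determines the strength of the adding effect.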
There are several possible methods for controlling the weight for the cumulative added image and controlling any residual error. In a first method, an operator manually adjusts a dial. When the operator moves the transducer by a considerable distance to search for a region of interest, the α value is set to be low so as to display an image similar to an ordinary B-mode image, and the α value is then increased for inspecting the region of interest. Also, when the operator wants to delete any residual error, the α value may be temporarily set to be low so as to reduce the artifact due to the detection error. In a second method, the α value is automatically controlled in accordance with the size of the detected motion vector. When the value c given by the equation 1 and the equation 2 is larger than a predetermined threshold, the α value is automatically decreased and the residual error is reduced; in other words, the second method is an automated version of the first method. In a third method, a refresh button (delete button) is provided on the diagnostic device or the transducer, which causes the added images to be deleted from the memory so that the adding process is performed again from the first frame. The button allows the operator to reduce any residual error to zero as needed. Pressing the refresh button causes the value α=0 to be used in the adding process. The refresh button may be provided separately from the dial which changes the α value.
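A minimal sketch of the second (automatic) method is shown below; the threshold and the two α values are illustrative assumptions, and cost_c stands for the matching cost c mentioned above.

```python
def choose_alpha(cost_c, threshold, alpha_normal=0.7, alpha_reduced=0.2):
    """Lower the weight of the cumulative added image when the matching
    cost c exceeds a threshold, i.e. when the motion estimate is
    unreliable, so that residual errors fade out quickly.
    Pressing the refresh button corresponds to using alpha = 0."""
    return alpha_reduced if cost_c > threshold else alpha_normal
```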
The added image may be displayed over the entire screen, or it may be displayed in another form as shown in the drawings.
A second example of the present invention will be explained below. The second example is characterized in that, during the motion detection process and the adding process shown in the first example, an added image of a certain number of frames is constructed in another memory, and when an operator uses the refreshing function, the added image of that certain number of frames can be displayed as the cumulative added image.
The structure of the apparatus will be explained below by way of the block diagram.
The number of frames for constructing a refreshing added image depends on the number of memories that can be loaded, but the present example will be explained below with five frames.
As in the case of the example 1, the image of the frame captured just before the current image is stored in the image memory #1 11 for the detection of a motion vector; in the example 2, however, four images in total, of two to five frames before the current image, are additionally stored in the refreshing image memory #1 17.
After the capture of images is started, the motion vector detection process and the image addition process proceed, and when the cumulative added image of the five frames is constructed at the accumulation unit 13, the cumulative added image passes through the image subtraction unit 19 and is stored in the refreshing image memory #2 20. The result of the motion vector detection for each image in the motion detector 10 is stored in the motion vector memory 18. As for the image data stored in the B-mode image memories at this stage, the image f5 is stored in the image memory #1 11 and the images f4, f3, f2, and f1 are stored in the refreshing image memory #1 17.
Next, when a new image f6 is captured into the accumulation unit 13 through the motion detector 10 and the cumulative added image of the first to sixth frames is constructed, the cumulative added image is stored in the image memory #2 14 and, at the same time, is input into the image subtraction unit 19. At this point of time, the image f5 is input from the image memory #1 11 into the refreshing image memory #1 17, and at the same time, the image f1 is output to the image subtraction unit 19. In the image subtraction unit 19, a subtraction process is performed to subtract the image f1 from the cumulative added image. Actually, the image f1 itself is not subtracted; rather, what is subtracted is the image f1 as it exists in the cumulative added image, that is, the image f1 after every process applied to it, including the transformation processes, from its own adding process up to the addition of the image f6. Since the information of the misalignment between images is stored in the motion vector memory 18, adding up this information forms the transformation history of a specified image. Thus, when all of the motion vectors detected in the motion detector 10 for the images f2 to f6 are added, and the image f1 is transformed based on the adding result, the image f1 is reconstructed as it is included in the cumulative added image. When the transformed image f1 is subtracted from the cumulative added image, an added image for refreshing, constructed with the five frames from the second to sixth frames, is obtained. This refreshing added image is stored in the refreshing image memory #2 20. Because the image f2 will be subtracted in the next procedure, the motion vector detected between the image f1 and the image f2 is deleted from the motion vector memory 18.
Through the above described steps, an added image of the five most recent frames up to the acquired image is constantly stored in the refreshing image memory #2 20. When the refresh function is activated, the refreshing added image is displayed on the displaying section 15 instead of the cumulative added image stored in the image memory #2 14, and the refreshing added image is stored in the image memory #2 14 as the new cumulative added image.
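A rough sketch of this bookkeeping is given below. The `warp` and `compose` helpers (applying a motion vector to an image and summing a list of motion vectors) are assumptions, and the weight factors applied during accumulation are omitted, so only the unweighted subtraction idea is illustrated.

```python
from collections import deque

def update_refresh_buffer(cumulative, frame_buffer, vector_memory, warp, compose):
    """Remove the oldest frame from the cumulative added image to keep a
    five-frame refreshing added image (second example, simplified).

    cumulative:    cumulative added image after adding the newest frame
    frame_buffer:  deque of older frames (refreshing image memory #1),
                   oldest frame at the left end
    vector_memory: motion vectors between consecutive frames, oldest first
                   (motion vector memory 18)
    warp(img, v):  assumed helper that transforms img by motion vector v
    compose(vs):   assumed helper that adds up a list of motion vectors
    """
    f_oldest = frame_buffer.popleft()          # e.g. image f1 leaves the buffer
    history = compose(vector_memory)           # transformation history of f1
    f_oldest_in_sum = warp(f_oldest, history)  # f1 as it exists in the sum
    refresh_image = cumulative - f_oldest_in_sum
    vector_memory.pop(0)                       # drop the f1-to-f2 vector
    return refresh_image

# Example usage (all inputs hypothetical):
# buffer = deque([f1, f2, f3, f4])
# refresh = update_refresh_buffer(g6, buffer, vectors, warp, compose)
```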
A third example of the present invention will be explained below. The third example is characterized in that the added image of the acquired and cumulative added images is divided into a plurality of units to construct unit images, and a weight factor is assigned to each unit image, so that an added image in which the adding effect is maintained and the residual error is reduced is displayed.
The structure of the apparatus is shown in the corresponding block diagram.
The step of detecting a motion vector by using the image fn input from an ultrasonic image constructing unit and the image fn−1 of one frame before the image fn is similar to those in the above examples 1 and 2. In the present example 3, however, every time an added image of five frames is constructed in the accumulation unit 13, the image is stored in the unit image memories 203 as a unit image. That is, for example, when the adding process for the 26th image f26 is finished, five unit images are stored in total in the unit image memories 203.
When the image f26 is input into the accumulation unit 13, the five unit images stored in the unit image memories 203 are read into a deformation unit 204, where a transformation process is performed based on the motion vector detected in the motion detector 10. The images subjected to the transformation process are input to the weight parameter unit 16 to be multiplied by weight factors together with the acquired image f26, and the adding process expressed by the equation 5 is performed on them in the accumulation unit 13. The adding process multiplies each of the five unit images and the captured image f26 by a predetermined weight factor; for example, when the weight factors are set to (0.02, 0.03, 0.1, 0.27, 0.325, 0.35), the distribution of the weight factors of the unit images relative to the added image is as shown in the corresponding graph.
When the 31st image f31 is captured, a newly constructed unit image is stored in the unit image memories 203 and the oldest existing unit image is deleted, so that the next adding process is performed on the remaining unit images and the captured image. Each unit image used in the adding process is subjected to the transformation process in the accumulation unit 13 and is then stored again in the unit image memories 203.
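The form of the equation 5 is not reproduced in the text; a minimal sketch of the per-unit weighted addition, using the example weight values quoted above and assuming the unit images have already been transformed into alignment with the captured frame, might look like this:

```python
import numpy as np

def add_unit_images(unit_images, captured,
                    weights=(0.02, 0.03, 0.1, 0.27, 0.325, 0.35)):
    """Weighted addition of the (already transformed) unit images and the
    captured frame. The first weight applies to the oldest unit and the
    last to the captured frame, so older information contributes less."""
    images = list(unit_images) + [captured]
    if len(images) != len(weights):
        raise ValueError("one weight is needed per unit image plus the captured frame")
    added = np.zeros_like(np.asarray(captured), dtype=np.float64)
    for w, img in zip(weights, images):
        added += w * np.asarray(img, dtype=np.float64)
    return added
```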
As described above, each unit is multiplied by a weight factor, which enables an automatic reduction of residual errors while the adding effect is maintained. In addition, the weight factor for a certain unit can be automatically controlled in accordance with the detection result of the motion vector; for example, a smaller weight factor can be given to a unit with a larger detection error.
In the explanation of the above example, the number of added frames is 30 and the number of images constructing a unit is five, but, as described in the example 1, adjusting the weight factor in accordance with the amount of motion of the object under inspection enables the adding effect and the residual error to be controlled as needed. Moreover, a smaller number of images per unit enables the weight factor to be controlled on a frame basis, which allows the adding effect to be controlled as needed and a specific frame with a large error to be removed from the adding process.
Number | Date | Country | Kind
---|---|---|---
2006-044649 | Feb 2006 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP2006/325721 | 12/25/2006 | WO | 00 | 11/10/2008