The present invention relates to an ultrasonic diagnostic device, and in particular to image processing of an ultrasonic image.
In an ultrasonic image obtained by transmitting and receiving ultrasonic waves, noise referred to as fogging (or an artifact) may appear, in particular near the ultrasonic probe. This fogging is considered to be generated due to, for example, multiple reflections and side lobes, and degrades the image quality of the ultrasonic image. Therefore, techniques for removing fogging from the ultrasonic image have been proposed.
For example, Patent Document 1 discloses an ultrasonic diagnostic device that suppresses a relatively slow-moving, fixed echo (such as fogging) with a filter for attenuating a particular frequency component using chronologically-received ultrasonic signals.
Patent Document 2 also discloses a method for improving the image quality of an ultrasonic image by applying multiresolution decomposition to the image.
However, in Patent Document 1, although a fixed echo is suppressed when, for example, a relatively low frequency component is attenuated as the particular frequency component, information of a site important for diagnosis, such as relatively slow-moving tissue including the myocardium at the end of ventricular diastole, may also be suppressed. Meanwhile, the multiresolution decomposition technology disclosed in Patent Document 2 is expected to be applied to ultrasonic images in various ways.
In view of the above-described background, the inventors of the present invention have continued research and development of the technology for reducing an image portion that appears in an ultrasonic image and is referred to as fogging or a stationary artifact. They especially focused on image processing applying multiresolution decomposition.
The present invention has been achieved in the process of that research and development, and the purpose of the present invention is to reduce an image portion of fogging or a stationary artifact that appears in an ultrasonic image, using multiresolution decomposition.
A preferable ultrasonic diagnostic device that serves the above purpose has a probe that transmits and receives ultrasonic waves; a transmitting and receiving section that obtains a received signal from the probe by controlling the probe; a resolution processing section that forms a plurality of resolution images, each having a different resolution, by resolution conversion processing of an ultrasonic image obtained based on the received signal; a reduction processing section that determines the degree of reduction in each portion of the image based on the plurality of resolution images; and an image forming section that forms an ultrasonic image subjected to reduction processing according to the degree of reduction in each portion of the image.
In a preferable embodiment, the reduction processing section estimates the degree of structure for each portion in the image based on a difference image between the plurality of resolution images, and determines the degree of reduction for each portion in the image based on the estimation result.
In a preferable embodiment, the reduction processing section estimates the degree of motion for each portion in the image based on images obtained over a plurality of time phases from at least one of the plurality of resolution images, and determines the degree of reduction for each portion in the image based on the estimation result.
In a preferable embodiment, the reduction processing section estimates the degree of structure and the degree of motion for each portion in the image, and determines, based on the degree of structure and the degree of motion, a subtraction component that defines the degree of reduction for each portion in the image; the image forming section forms an ultrasonic image from which the subtraction component is subtracted.
In a preferable embodiment, the reduction processing section subtracts an optimal luminance value determined based on a lowest luminance value in the ultrasonic image from a luminance value of each pixel, thereby generating a subtraction candidate component, and determines the subtraction component based on a subtraction weight and the subtraction candidate component, the subtraction weight being determined according to the degree of structure and the degree of motion.
In a preferable embodiment, the resolution processing section forms, as the plurality of resolution images, at least one high-resolution image and a plurality of low-resolution images; the reduction processing section determines the degree of reduction in each portion in the image based on the plurality of low-resolution images, and forms a low-resolution image component that has been subjected to reduction processing according to the degree of reduction; and the image forming section synthesizes a high-resolution image component obtained from the high-resolution image and the low-resolution image component, thereby forming an ultrasonic image.
The present invention reduces an image portion that appears in an ultrasonic image and is referred to as fogging, a stationary artifact, or the like, and preferably removes the image portion completely.
When the ultrasonic beam is scanned within the region including the diagnostic object and the transmitting and receiving section 12 collects echo data along each ultrasonic beam, that is, line data, an image processing section 20 forms ultrasonic image data based on the collected line data. The image processing section 20 forms, for example, image data of a B-mode image.
The image processing section 20 forms a plurality of resolution images, each having a different resolution, by resolution conversion processing of an ultrasonic image obtained based on a received signal, determines the degree of reduction in each portion of the image based on the plurality of resolution images, and forms an ultrasonic image that has been subjected to reduction processing according to the degree of reduction in each portion of the image. In forming the ultrasonic image (image data), the image processing section 20 suppresses stationary noise that appears in the ultrasonic image; in particular, noise such as that referred to as fogging (or an artifact) is reduced. To reduce noise such as fogging, the image processing section 20 has the functions of multiresolution decomposition, motion estimation, structure estimation, fogging removal, and image reconstruction. The image processing section 20 then forms, for example, a plurality of image data representing the heart, which is the diagnostic object, over a plurality of frames, and outputs the data to a display processing section 30.
The display processing section 30 performs, for example, coordinate conversion processing on the image data obtained from the image processing section 20, the coordinate conversion processing converting the image data from the ultrasonic scanning coordinate system to the image display coordinate system, and further adds a graphic image, for example, as necessary, thereby forming a display image including the ultrasonic image. The display image formed in the display processing section 30 is displayed on a display section 40.
Among the components (function blocks) shown in
In addition, except for the probe 10, the components shown in
The overall structure of the ultrasonic diagnostic device of
The myocardium portion shown in
Therefore, the image processing section 20, for example, calculates a standard deviation of the luminance value over the plurality of frames (time phases) for each pixel (assumed to have the coordinates (i, j)), and uses the standard deviation as an index for evaluating the degree of motion (amount of motion). In doing so, it becomes possible to distinguish between the myocardium portion and the fogging portion according to the strength of the degree of motion (the magnitude of the amount of motion).
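By way of illustration, a minimal sketch of this motion index in Python/NumPy is shown below; the frame stacking along the first array axis and the threshold value are assumptions for the example, not part of the disclosed embodiment.

```python
import numpy as np

def motion_index(frames: np.ndarray) -> np.ndarray:
    """Per-pixel standard deviation of the luminance value over time
    phases. frames has shape (T, H, W); the result has shape (H, W),
    and larger values indicate a larger amount of motion."""
    return frames.std(axis=0)

# Illustrative use: pixels whose luminance barely changes over the
# frames (small standard deviation) are candidates for fogging.
rng = np.random.default_rng(0)
frames = rng.random((8, 64, 64))   # hypothetical stack of 8 frames
motion = motion_index(frames)
low_motion = motion < 0.1          # illustrative threshold
```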
However, the myocardium portion contains portions having a relatively small degree of motion, for example, in the cardiac wall. Therefore, if only the degree of motion (amount of motion) is evaluated, the inside of the myocardium portion might not be identifiable as the myocardium portion.
Therefore, the image processing section 20 of the present ultrasonic diagnostic device further performs structure estimation using multiresolution decomposition, and distinguishes between the myocardium portion and the fogging portion in the ultrasonic image.
Further, it shows a low-resolution image Ex(Ex(Gn+2)) obtained by performing up-sampling processing twice on the low-resolution image Gn+2. The low-resolution image Ex(Ex(Gn+2)) has the same resolution as the low-resolution image Gn+2 and the same image size as the ultrasonic image Gn.
The image processing section 20 compares, for example, the ultrasonic image Gn with the low-resolution image Ex(Ex(Gn+2)) shown in
In the myocardium portion in the ultrasonic image, characteristics of the myocardium tissue (structure) including, for example, minute roughness on the tissue surface or in the tissue are represented. Therefore, for example, if a pixel located on the myocardium surface or in the myocardium is a target pixel, a relatively large luminance difference appears between the target pixel and its surrounding pixels in the ultrasonic image Gn, which has relatively high resolution.
In contrast, because the low-resolution image Ex(Ex(Gn+2)) is a dull (blurred) image compared to the ultrasonic image Gn due to the lowered resolution (down-sampling processing), a luminance difference between the target pixel and its surrounding pixels becomes smaller.
Therefore, the larger the luminance difference between the target pixel and the surrounding pixels in the ultrasonic image Gn, the more the target pixel in the low-resolution image Ex(Ex(Gn+2)) changes from the ultrasonic image Gn, and the larger the pixel value (luminance difference) in the difference image.
Thus, the image processing section 20 determines that the greater the pixel value of the difference image (luminance difference) becomes, the stronger the degree of structure (tissue).
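A minimal sketch of this structure estimation follows (Python with SciPy); the Gaussian low-pass filter, the 2:1 decimation, and the bilinear up-sampling standing in for Ex() are assumptions, since the embodiment does not fix these choices here. Image dimensions are assumed divisible by four.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def down2(img: np.ndarray) -> np.ndarray:
    """One pyramid step: 2-D low-pass filter, then 2:1 decimation."""
    return gaussian_filter(img, sigma=1.0)[::2, ::2]

def up2(img: np.ndarray) -> np.ndarray:
    """Ex(): up-sampling by a factor of 2 (bilinear interpolation)."""
    return zoom(img, 2, order=1)

def structure_estimate(g_n: np.ndarray) -> np.ndarray:
    """Difference between G_n and Ex(Ex(G_{n+2})): the difference is
    large where fine structure (e.g., myocardial texture) is lost by
    the blurring, and small in structureless regions such as fogging."""
    g_np2 = down2(down2(g_n))     # G_{n+2}
    blurred = up2(up2(g_np2))     # Ex(Ex(G_{n+2})), same size as G_n
    return np.abs(g_n - blurred)  # per-pixel luminance difference
```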
Further,
Unlike the myocardium portion (
As described in detail below, the image processing section 20 generates a subtraction component for subtracting (removing) the fogging based on the above-described structure estimation and motion estimation.
For example, as shown in
Then, the image processing section 20 calculates the weight used in subtraction from the structure estimation result and the motion estimation result.
With the above-described processing, the fogging portion having the small degree of structure and the small degree of motion is reduced or removed from the original image including the myocardium portion and the fogging portion so as to maintain the myocardium portion as much as possible and, more preferably, to maintain the myocardium portion completely. Next, a specific structure example of the image processing section 20 for implementing the above-described processing will be described.
The multiresolution decomposition section 31 creates a Gaussian pyramid of the input diagnostic image. The input diagnostic image is assumed to be G0, and the data of each layer generated in the multiresolution decomposition section 31 is assumed to be a Gn component (where n is an integer greater than or equal to 0).
Thus, the Gn component generated in the multiresolution decomposition section 31 in
In the specific example shown in
Although, in the specific example shown in
Further, although, in the above specific example, the decimation processing is performed after the two-dimensional low-pass filter is applied in the down-sampling section 3101 (
Further, in the low-pass filter (LPF) described below, the two-dimensional low-pass filter may be applied, or the one-dimensional low-pass filter may be applied in each dimension. Further, although in the above specific example, the structure in which the Gaussian pyramid processing is performed has been described as an example of the multiresolution decomposition section, the structure may be changed to a structure in which multiresolution decomposition is performed using, for example, discrete wavelet transformation, Gabor transformation, or a band-pass filter in the frequency domain.
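As one concrete realization of the Gaussian pyramid processing described above (a sketch in Python with SciPy; the filter sigma and the number of layers are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(g0: np.ndarray, layers: int) -> list:
    """Build G0..G_layers: each Gn+1 is Gn passed through a 2-D
    low-pass filter and decimated by 2 in each dimension."""
    pyramid = [g0]
    for _ in range(layers):
        lp = gaussian_filter(pyramid[-1], sigma=1.0)  # low-pass filter
        pyramid.append(lp[::2, ::2])                  # decimation
    return pyramid

# G0 is the input diagnostic image; G1, G2, ... have progressively
# lower resolution, halving the image size at every layer.
g = gaussian_pyramid(np.zeros((256, 256)), layers=4)
```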
Referring again to
Thus, the data of the layers created in the high frequency component calculation section 41 in
Although, in the above-described specific example, the G0 component to the G2 component have been input to the high frequency component calculation section 41 to obtain the L0 component and the L1 component, this specific example is not limiting, and, for example, Gn components of more layers may also be input to obtain more Ln components.
Further, although, in the above-described specific example, the structure in which the Laplacian pyramid processing is performed as an example of high frequency component calculation has been indicated, it may be changed to a structure in which the high frequency component is calculated using, for example, discrete wavelet transformation, Gabor transformation, or a band-pass filter in the frequency domain.
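A corresponding sketch of the Laplacian-pyramid-style calculation of the Ln components (again with bilinear up-sampling assumed for Ex()):

```python
from scipy.ndimage import zoom

def up2(img):
    """Ex(): up-sampling by a factor of 2 (bilinear interpolation)."""
    return zoom(img, 2, order=1)

def laplacian_components(pyramid: list) -> list:
    """L_n = G_n - Ex(G_{n+1}): the high frequency component of each
    layer, i.e., the detail removed between consecutive layers.
    Assumes even image dimensions at every layer."""
    return [pyramid[n] - up2(pyramid[n + 1])
            for n in range(len(pyramid) - 1)]
```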
Referring to
Although, in the above-described specific example, assuming that n=2, the structure estimation value Str2 is obtained by inputting the G2 component to the G4 component to the structure calculation section 51, this specific example is not limiting, and, for example, at least two components of the Gn components may be input to obtain the structure estimation value Str2.
Further, although, in the above-described specific example, the structure estimation value Str2 is obtained from the difference between the G2 component and the component obtained by up-sampling the G4 component twice, this specific example is not limiting, and the difference may be obtained using consecutive layers or layers that are further apart. Furthermore, the final structure estimation value Str2 may be calculated by also computing another difference, such as the difference between the G1 component and a component obtained by up-sampling the G3 component twice, and combining the structure estimation values respectively calculated from the two differences (the G2-G4 difference and the G1-G3 difference).
In S107, determination is made as to whether or not at least one of the multiplied values obtained in S104 and S106 is negative. If at least one value is negative, the process proceeds to S109; if not, the process proceeds to S108.
In S108, it is determined that the target point is not a zero cross, and the process proceeds to S113 without changing the difference value of the target point (pixel).
In S109, determination is made as to whether only one of the multiplied values obtained in S104 and S106 is negative. If only one of the multiplied values is negative, the process proceeds to S110, and if the two multiplied values are both negative, the process proceeds to S111. In S110, the average of the absolute values of the two points in the direction in which the multiplied value is negative is adopted as the value of the target point, and the process proceeds to S113.
In S111, the maximum inclination direction is selected by choosing the direction in which the absolute value of the multiplied value obtained in S104 or S106 is larger, and the process proceeds to S112. In S112, the average of the absolute values of the two points in the direction selected in S111 is adopted as the value of the target point, and the process proceeds to S113.
In S113, determination is made as to whether or not the values of all the target points have been determined. If so, the process ends; if not, the process returns to S102, and processing for the next target point is performed.
Although, in the above-described specific example, the difference values of the points adjacent to the target point vertically and horizontally have been obtained, this is not limiting; for example, a step of calculating difference values in an oblique (diagonal) direction may be provided to detect zero crosses in more directions. Further, although, in the above-described specific example, the comparison has been made for each direction, the maximum inclination direction may be calculated by obtaining the values of all adjacent points and performing, for example, principal component analysis. In zero cross removal, preferably, the average of the absolute values in the maximum inclination direction is used, but this is not limiting; for example, the average of the absolute values of the four adjacent points may also be used.
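One possible reading of the S101-S113 flow as code is sketched below (Python/NumPy); since the referenced figure is not reproduced here, the exact handling of boundary pixels is an assumption (they are simply left unchanged).

```python
import numpy as np

def zero_cross_values(diff: np.ndarray) -> np.ndarray:
    """For each interior target point, multiply the difference values
    of the vertically adjacent pair (S104) and of the horizontally
    adjacent pair (S106); a negative product marks a zero cross, and
    the target point takes the average of the absolute values of the
    pair in the (maximum inclination) direction."""
    out = diff.copy()
    h, w = diff.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = diff[i - 1, j] * diff[i + 1, j]    # vertical product
            hz = diff[i, j - 1] * diff[i, j + 1]   # horizontal product
            if v >= 0 and hz >= 0:
                continue                           # S108: not a zero cross
            vert = (abs(diff[i - 1, j]) + abs(diff[i + 1, j])) / 2
            horz = (abs(diff[i, j - 1]) + abs(diff[i, j + 1])) / 2
            if v < 0 and hz < 0:                   # S111-S112: both negative
                out[i, j] = vert if abs(v) >= abs(hz) else horz
            else:                                  # S110: only one negative
                out[i, j] = vert if v < 0 else horz
    return out
```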
Referring to
In S201, the multiG2 buffer is obtained. In S202, the head address of the oldest time phase t is obtained. In S203, the head address of the second oldest time phase t-1 is obtained. In S204, the entire data array of the time phase t-1 is copied to the data array of the time phase t. In S205, t is updated to t-1.
In S206, determination is made as to whether or not t=0 holds true. If t=0 holds true, the process proceeds to S207; if not, the process returns to S203 to copy the next time phase. In S207, the G2 component of the current frame is obtained. In S208, the G2 component of the current time phase is copied to the data array of t=0, and the updating of the multiG2 buffer ends.
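A sketch of this buffer update (Python/NumPy); replacing the per-address copying of S203-S206 with a single array shift is an assumption about an equivalent implementation, and the buffer depth is illustrative.

```python
import numpy as np

class MultiG2Buffer:
    """Holds the G2 components of the most recent time phases;
    index 0 is the newest phase and the last index the oldest."""
    def __init__(self, phases: int, shape: tuple):
        self.data = np.zeros((phases,) + shape)

    def update(self, g2: np.ndarray) -> None:
        # S203-S206: shift every stored phase one slot toward "older";
        # the oldest phase wraps around and is overwritten next.
        self.data = np.roll(self.data, 1, axis=0)
        # S207-S208: store the current frame's G2 component at t = 0.
        self.data[0] = g2

buf = MultiG2Buffer(phases=8, shape=(64, 64))  # illustrative sizes
buf.update(np.zeros((64, 64)))                 # called once per frame
```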
According to the specific example shown in
Further, by adopting Str2 as G2 in the specific example of
Referring to
The background subtraction section 71 calculates a fogging component included in the Gn component based on the estimation of the tissue motion and the estimation of the tissue structure, thereby calculating an nrGn component subjected to fogging reduction processing.
A subtraction component calculation section 101 calculates a subtraction component from the average image frameAve component calculated in the weight calculation section 81, the optimal luminance value base calculated in the optimal luminance value estimation section 91, and the subtraction weight weight calculated in the weight calculation section 81 and subjected to a low-pass filter (LPF) in an LPF section 12-3.
Preferably, the calculated subtraction component is subjected to a low-pass filter (LPF) in an LPF section 12-4 and smoothed in the space direction, and then smoothed in the time direction in an adjusting section 7101 based on the following equation.
diff_{i,j}^t = diffData_{i,j} × beta + diff_{i,j}^{t−1} × (1 − beta) [Equation 1]
diff: the subtraction component carried over up to the previous frame
diffData: the subtraction component calculated in the current frame
beta: a parameter
In doing so, the diagnostic image reconstructed by processing described below can suppress local subtraction and large luminance changes in the same pixel between frames, providing a diagnostic image with less sense of incongruity. The subtracter 13-4 subtracts the spatially and temporally smoothed subtraction component from the Gn component of the current frame stored in the multiGn buffer, thereby calculating an nrGn component from which the fogging is reduced.
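Equation 1 is a simple recursive blend in the time direction; as a sketch (Python/NumPy, with beta assumed to lie between 0 and 1):

```python
import numpy as np

def smooth_subtraction(diff_data: np.ndarray,
                       diff_prev: np.ndarray,
                       beta: float) -> np.ndarray:
    """Equation 1: diff_t = diffData * beta + diff_{t-1} * (1 - beta).
    A small beta updates the subtraction component slowly over frames,
    suppressing large luminance changes in the same pixel."""
    return diff_data * beta + diff_prev * (1.0 - beta)
```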
Although, in the above-described specific example, it is assumed that n=2, this specific example is not limiting. Further, although, in the above-described specific example, the adjusting section 7101 has performed a weighted addition of the subtraction component calculated in the current frame and the subtraction component updated up to the previous frame, all the data so far, or similar parameters, may also be stored to perform weighting as appropriate.
The nrGn component obtained in the background subtraction section 71 is input to the image reconstruction section 111. The subtraction component obtained in the background subtraction section 71 is also fed back to the background subtraction section 71 for the calculation of the next frame. The weight calculation section 81 calculates the average image frameAve component and the subtraction weight weight as an evaluation value representing an estimation value of the fogging.
The values calculated in the average value calculation section 8101, the variance value calculation section 8102, and the average value calculation section 8103 are subjected to the low-pass filters in LPF sections 12-5, 12-6, and 12-7, respectively. The data subjected to the low-pass filter in the LPF section 12-5 are output as the average image frameAve. The data subjected to the low-pass filter in the LPF sections 12-6 and 12-7 are also input to a weight determination section 8104.
Here, the calculation performed in the weight determination section 8104 will be described in more detail. The weight determination section 8104 calculates a weight weight that retains, among the subtraction candidate components obtained through processing described below, the components estimated to be fogging, and excludes the components not estimated to be fogging so that they do not become subtraction components. In other words, as an evaluation value representing the estimation value of the fogging, the subtraction weight weight is given as 0 ≤ weight ≤ 1. This is a normalized evaluation value indicating the "conspicuity" of the fogging, and, in order to calculate this evaluation value, the present embodiment obtains an evaluation value of the fogging using the motion and the structure as examples.
The fogging is noise that appears near the probe 10, has a small amount of motion, and has no structure. Therefore, preferably, the smaller the amount of motion and the weaker the structure of a component, the more likely the component is determined to be fogging, and the closer the weight is set to 1. In contrast, the larger the amount of motion or the stronger the structure, the more likely the component carries information of the myocardium or the like, and the closer the weight is set to 0.
Accordingly, the weight determination section 8104 calculates the subtraction weight weight based on the values calculated in the LPF sections 12-6 and 12-7, for example, using the method described below.
First, because the value calculated in the LPF section 12-6 is a value obtained by smoothing, for each pixel, the variance values using the plurality of frames, if this value is small, it is understood that, in that region, the luminance change was small, and the motion of the pixel was small. Thus, the weight for the motion can be calculated, for example, according to a reduction function in the following equation using a calculated value in the pixel (i, j) and a parameter gamma.
[Equation 2: a reduction function of the calculated value at the pixel (i, j), with the parameter gamma]
Next, because the value calculated in the LPF section 12-7 is a value obtained by smoothing, for each pixel, the structure estimation values using the plurality of frames, if this value is small, it is understood that, in that region, the structure is weak. Thus, the weight for the structure can be calculated, for example, according to a reduction function in the following equation using the calculated value in the pixel (i, j) and a parameter delta.
[Equation 3: a reduction function of the calculated value at the pixel (i, j), with the parameter delta]
The weight for the subtraction candidate component can be calculated, for example, according to a reduction function in the following equation using the weight for the motion in Equation 2 and the weight for the structure in Equation 3.
Although the above-described specific example has used a reduction function in which the weight becomes closer to 1 at a spot estimated to be fogging, a reduction function other than that of this specific example may also be used. In addition, although, in the present embodiment, the variance value calculation section 8102 and the average value calculation section 8103 have performed, for each pixel, calculations based on the values of the plurality of frames, the calculation may be performed using pixel data in a range given by a kernel size m×n (m ≥ 0, n ≥ 0), for example. Further, although, in the present embodiment, the weight for the motion has been calculated from the variance of the luminance value, the weight may also be calculated using an evaluation value used in block matching or the like, such as an SAD (Sum of Absolute Differences).
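Because the reduction functions of Equations 2 to 4 are not reproduced above, the sketch below substitutes exponential decays as one plausible choice (Python/NumPy); only the decreasing behavior, the parameters gamma and delta, and the range 0 ≤ weight ≤ 1 come from the text, while the exponential form and the product combination are assumptions.

```python
import numpy as np

def subtraction_weight(motion_var: np.ndarray,
                       structure: np.ndarray,
                       gamma: float,
                       delta: float) -> np.ndarray:
    """motion_var: smoothed per-pixel luminance variance (LPF 12-6);
    structure: smoothed structure estimation value (LPF 12-7).
    The weight approaches 1 where motion and structure are both small
    (fogging-like) and 0 where either is large (myocardium-like)."""
    w_motion = np.exp(-motion_var / gamma)      # assumed form of Eq. 2
    w_structure = np.exp(-structure / delta)    # assumed form of Eq. 3
    return w_motion * w_structure               # assumed form of Eq. 4
```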
Referring to
base = min(frameAve) × epsilon [Equation 5]
Although, in the present embodiment, the optimal luminance value base has been calculated by the above-described method, this is not limiting. Because the optimal luminance value base is a value for estimating the luminance value that should be held by a noise portion such as fogging, an arbitrary luminance value other than the optimal luminance value may, for example, be calculated automatically from the histogram of the image using a discriminant analysis method. Further, an arbitrary luminance value may be given by the user.
As such, by estimating the optimal luminance value, it is possible to control the subtraction candidate component obtained through processing described below, and to make an adjustment so as not to excessively reduce the luminance of the portion estimated to be fogging. In doing so, it is possible to prevent the diagnostic image reconstructed through the processing described below from giving a sense of incongruity.
Referring to
A conditioned multiplication section 10101 calculates a subtraction component diffData from the calculated subtraction candidate component and the subtraction weight weight. An adjustment section 10102 adjusts the obtained subtraction component diffData using, for example, a parameter alpha. The subtraction component calculation section 101 calculates, for each pixel (i, j), the subtraction component diffData based on the following equation, for example.
diffData_{i,j} = alpha × (frameAve_{i,j} − base) × weight_{i,j} [Equation 6]
In S304, because the subtraction candidate component is positive, the component is multiplied by the subtraction weight weight, thereby determining the subtracted value. In S305, because the subtraction candidate component is negative, the pixel has a luminance value lower than the optimal luminance value; the subtracted value is therefore set to 0 so that no subtraction is performed. In S306, determination is made as to whether or not the values of all the target pixels have been determined. If so, the process ends; if not, the process returns to S302 to determine the value of the next target pixel.
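Combining Equation 5, Equation 6, and the S304/S305 condition gives the following sketch (Python/NumPy; alpha and epsilon are parameters whose values are not specified above):

```python
import numpy as np

def subtraction_component(frame_ave: np.ndarray,
                          weight: np.ndarray,
                          alpha: float,
                          epsilon: float) -> np.ndarray:
    """frame_ave: the average image frameAve; weight: the subtraction
    weight in [0, 1]."""
    base = frame_ave.min() * epsilon        # Equation 5
    candidate = frame_ave - base            # subtraction candidate
    diff_data = alpha * candidate * weight  # Equation 6 (S304)
    diff_data[candidate < 0] = 0.0          # S305: no subtraction where
                                            # luminance is below base
    return diff_data
```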
Referring to
Thus, image data nrG0 from which the fogging has been reduced, and preferably removed, are obtained. The image data nrG0 have the same sample density and resolution as the image data input to the image processing section 20.
Further, although, in the above-described embodiment, the G0 component, the G1 component, the L0 component, the L1 component, and the nrG2 component from which the fogging is reduced have been obtained, this is not limiting, and more layers may be used. Furthermore, in the above embodiment, preferably, by performing the fogging reduction processing on the Gn component of a layer where n ≥ 1, and adding the Lk components (0 ≤ k ≤ n−1) while up-sampling the nrGn component from which the fogging is reduced, it is possible to reduce the "stickiness" observed with a simple filter and the like and to reconstruct a diagnostic image with less sense of incongruity.
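For the n = 2 case described above, the reconstruction can be sketched as follows (Python with SciPy; bilinear up-sampling is again assumed for Ex()):

```python
from scipy.ndimage import zoom

def up2(img):
    """Ex(): up-sampling by a factor of 2 (bilinear interpolation)."""
    return zoom(img, 2, order=1)

def reconstruct(nr_g2, l1, l0):
    """nrG1 = Ex(nrG2) + L1, then nrG0 = Ex(nrG1) + L0: the fogging-
    reduced low-resolution layer is up-sampled, and the stored high
    frequency components are added back layer by layer."""
    nr_g1 = up2(nr_g2) + l1
    return up2(nr_g1) + l0
```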
The image data nrG0 reconstructed in the image reconstruction section 111 are transmitted to the display processing section 30, which allows the display section 40 to display an ultrasonic image from which the fogging is reduced, and more preferably, an ultrasonic image from which the fogging is removed. Thus, for example, by reducing the fogging efficiently without greatly reducing the myocardium information, it is possible to display an ultrasonic image with good visibility (for example, a B-mode image).
Although, in the specific example shown in
Further, although, in the above-described variation (second embodiment), a structure in which only one feature calculation section 121 is provided in the image processing section 20 has been described, this is not limiting, and the number of feature calculation sections 121 may be increased according to the number of features desired to be used (three or more).
Although, in this variation (second embodiment), a structure in which only one feature data update section 6103 is provided in the data update section 61 has been described, this is not limiting, and the number of feature data update sections 6103 may be increased according to the number of features desired to be used (three or more).
Although, in this variation (second embodiment), a structure in which one multiFtr buffer is added as an input to the background subtraction section 71 has been described, this is not limiting, and the number of input buffers may be increased according to the number of features desired to be used (three or more). Further, in conjunction with this, the number of buffers input to the weight calculation section 81 may be increased according to the number of features desired to be used.
Although, in this variation (second embodiment), a structure in which one multiFtr buffer is added as an input to the weight calculation section 81 has been described, this is not limiting, and the number of input buffers may be increased according to the number of features desired to be used (three or more). In addition, in conjunction with this, the number of average value calculation sections 8105 and LPF sections 12-8 may be increased according to the number of features desired to be used (three or more).
For example, the feature storage section 131 stores in advance a feature of the fogging portion and a return value according to that feature. A feature of a structure important for diagnosis, such as the myocardium, and a return value according to that feature may also be stored. Therefore, by inputting the features calculated in the feature calculation section 121 to the feature storage section 131, the feature calculation section 121 can obtain return values according to the features. These return values are used as the feature estimation values Ftr.
Although, in this variation (third embodiment), a structure in which only one feature storage section 131 is provided in the image processing section 20 has been described, this is not limiting, and the number of feature storage sections 131 may be increased according to the number of features desired to be used (three or more). Further, the second embodiment and the third embodiment can be used together.
Although the image processing based on a two-dimensional image has been described above, fogging reduction processing for a three-dimensional image may also be performed. In the case of processing a three-dimensional image, preferably, the down-sampling section 3101 (
Further, the signals obtained from the transmitting and receiving section 12 may be subjected to processing such as detection and logarithmic transformation, then to fogging reduction in the image processing section 20, and subsequently to coordinate conversion processing in a digital scan converter. Naturally, the signals obtained from the transmitting and receiving section 12 may instead be subjected to fogging reduction in the image processing section 20 first and then to the processing such as detection and logarithmic transformation. Alternatively, the signals may be subjected to the coordinate conversion processing in the digital scan converter and then to fogging reduction in the image processing section 20.
Although the preferred embodiments of the present invention have been described, the above-described embodiments are merely examples in all respects and are not intended to limit the scope of the present invention. The present invention includes various variations without departing from the spirit of the present invention.
Priority Application: 2013-247181, filed November 2013, JP (national)
PCT Filing: PCT/JP2014/080703, filed November 13, 2014, WO