The present invention relates to an ultrasound imaging method and an ultrasound imaging apparatus that allow a tissue boundary to be clearly discerned when a living body is imaged using ultrasound waves.
In ultrasound imaging apparatuses used for medical diagnostic imaging, there is known a method for estimating a distribution of the elastic modulus of tissues based on the amount of change in small areas of a diagnostic image sequence (B-mode images), and converting the degree of stiffness into a color map for display. However, in the peripheral zone of a tumor, for instance, there may be no large difference in acoustic impedance or elastic modulus relative to the surrounding tissue, and in this situation the boundary between the tumor and the surrounding tissue cannot be discerned in either the diagnostic image sequence or the elasticity image.
Therefore, there is a method for obtaining a motion vector of each region in a diagnostic image by a block matching process applied to two chronologically different diagnostic image data items, and generating a scalar field image from the motion vectors. With this configuration, it is possible to discern tissue boundaries where neither the acoustic impedance nor the elastic modulus differs significantly from the surroundings.
However, in a region of the image data containing high noise, such as the marginal zone of signal penetration where echo signals become faint, an error vector may occur under the influence of noise when the motion vector is obtained, and this may deteriorate the discernibility of the boundary. Therefore, in Patent Document 1, when the motion vector is obtained, a degree of similarity of the image data is calculated between a region of interest and multiple regions serving as destination candidates of the region of interest, and a degree of reliability of the motion vector obtained for the region of interest is determined from the distribution of the degree of similarity. A motion vector with a low degree of reliability can then be removed, for example, and this may enhance the discernibility of the boundary.
PCT International Publication No. WO2011/052602
The method for discerning the tissue boundary by obtaining the motion vector, as described in Patent Document 1 and the like, needs two steps: first, obtaining the motion vector of each region on the image by the block matching process, and second, converting the motion vectors into scalars to generate a scalar field image.
An object of the present invention is to provide an ultrasound imaging apparatus which generates the scalar field image directly, without obtaining the motion vectors, so as to make the boundaries in a test subject discernible.
In order to achieve the object above, according to a first aspect of the present invention, the ultrasound imaging apparatus described in the following is provided. In other words, the ultrasound imaging apparatus incorporates a transmitter configured to transmit an ultrasound wave to an object, a receiver configured to receive an ultrasound wave coming from the object, and a processor configured to process the signal received by the receiver and generate images of at least two frames. The processor sets multiple regions of interest in one frame out of the at least two frames of images being generated, and sets, in one of the other frames, a search region wider than the region of interest for each of the multiple regions of interest. The processor sets, within the search region, multiple candidate regions each having a size corresponding to the region of interest, and obtains a norm between the pixel values of the region of interest and the pixel values of the candidate region, for each of the multiple candidate regions, thereby obtaining a norm distribution within the search region and generating a value (scalar value) representing a state of the norm distribution, as the pixel value of the region of interest that is associated with the search region.
According to the present invention, a value representing the state of the norm distribution in the search region is obtained. If there is a boundary, the norm indicates a low value along the boundary. With the configuration above, an image is generated assuming the value representing the state of the norm distribution (scalar value), as the pixel value of the region of interest being associated with the search region, and therefore, it is possible to generate an image showing the boundaries of a test subject, without generating a vector field.
The ultrasound imaging apparatus of the present invention is provided with a transmitter configured to transmit an ultrasound wave to an object, a receiver configured to receive the ultrasound wave coming from the object, and a processor configured to process a received signal from the receiver and generate images of at least two frames. The processor sets multiple regions of interest in one frame out of the two or more frames of images being generated, and sets, in one of the other frames, a search region wider than the region of interest for each of the multiple regions of interest. In the search region, there are provided multiple candidate regions, each having a size corresponding to the region of interest. The processor obtains the norm between the pixel values of the region of interest and the pixel values of the candidate region, for each of the multiple candidate regions, thereby obtaining the norm distribution within the search region, and generates an image assuming a value (scalar value) representing the state of the norm distribution as the pixel value of the region of interest that is associated with the search region. Here, it is also possible to calculate the norm by directly using the amplitude values or the phase values of the received signal in the region of interest, instead of the pixel values. Since logarithmic compression processing is applied to the pixel values, using the received signal directly reflects linear changes in the original signal more accurately, and a higher resolution may be achieved than with the pixel values.
If there is a boundary, the norm indicates a low value along the boundary. Therefore, an image is generated by assuming the value representing the state of the norm distribution (scalar value) as the pixel value of the region of interest that is associated with the search region, and accordingly, the ultrasound imaging apparatus of the present invention is allowed to generate an image representing the boundary of the test subject, without generating a vector field.
As the norm, the p-norm (also referred to as “power norm”) expressed by the following formula (1) may be employed.
It is to be noted here that Pm(i0, j0) represents the pixel value at a predetermined position (i0, j0) (e.g., the center position) within the region of interest, Pm+Δ(i, j) represents the pixel value at the position (i, j) within the candidate region, and p represents a predetermined real number.
It is desirable that p is a real number larger than 1.
As the value representing the state of the norm distribution (scalar value), statistics of the norm distribution may be employed. For example, it is possible to use a rate of divergence, defined as the difference between the minimum norm value and the average value of the norm values in the norm distribution within the search region. It is alternatively possible to use a coefficient of variation as the statistic, obtained by dividing the standard deviation of the norm values by their average, in the norm distribution within the search region.
As the value (scalar value) representing the state of the norm distribution, a value other than the statistics may be used. By way of example, a first direction and a second direction are obtained out of multiple directions centered on a specific region set within the search region: the first direction is the direction in which the average of the norm values of the candidate regions located along the direction becomes a minimum, and the second direction passes through the specific region and is orthogonal to the first direction. Then, it is possible to use the ratio or the difference between the average of the norm values of the candidate regions along the first direction and the average of the norm values of the candidate regions along the second direction, as the value representing the state of the norm distribution for the region of interest that is associated with the search region. On this occasion, the norm distribution within the search region may be subjected to enhancement in advance using a Laplacian filter, and the ratio or the difference may be obtained for the distribution after the enhancement.
Alternatively, a matrix representing the norm distribution within the search region may be generated, an eigenvalue decomposition process may be applied to the matrix to obtain an eigenvalue, and this eigenvalue may be used as the value (scalar value) representing the state of the norm distribution for the region of interest that is associated with the search region.
It is also possible to configure such that the processor further obtains the motion vector. By way of example, the processor selects, as a destination of the region of interest, the candidate region in which the norm value becomes minimum in the search region, and obtains the motion vector that connects the position of the region of interest and the position of the selected candidate region. The motion vector is generated for each of the multiple regions of interest, thereby generating a motion vector field. It is further possible for the processor to obtain, as a boundary norm value, the sum of the squared derivative of the y-component of the motion vector with respect to the x direction and the squared derivative of the x-component with respect to the y direction, for each of multiple specific regions set in the motion vector field, and to generate an image assuming the boundary norm value as the pixel value of the specific region.
If multiple regions of interest are set in a partially overlapping manner, it is possible to configure such that, upon calculating the norm for one region of interest, the processor stores the value obtained for the overlapping region in a lookup table in the storage region, and, upon calculating the norm for another region of interest, the processor reads the value from the lookup table and uses it. Similarly, if multiple candidate regions are set in a partially overlapping manner, it is also possible to store the value obtained for the overlapping region in the lookup table of the storage region, and the processor reads the value from the lookup table and uses it upon calculating the norm for another candidate region. These configurations may reduce the amount of computation.
It is further possible for the processor to generate multiple frames of images on a time-series basis, each image being generated assuming the value representing the state of the norm distribution as the pixel value, and to calculate the amount of information entropy for each frame. If the amount of information entropy is smaller than a predetermined threshold, it may be determined not to use the image as the image for displaying the frame. This configuration allows elimination of an abnormal image with a small amount of information entropy, enabling display of successive images with preferable visibility.
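By way of a non-limiting illustration, the entropy-based frame screening described above can be sketched in Python as follows (the Shannon entropy of the pixel-value histogram is assumed here as the "amount of information entropy", and the function names are hypothetical, not part of the embodiment):

```python
import math

def image_entropy(pixels):
    # Shannon entropy (in bits) of the pixel-value histogram of one
    # frame; a nearly uniform "abnormal" frame yields a small value
    flat = [v for row in pixels for v in row]
    counts = {}
    for v in flat:
        counts[v] = counts.get(v, 0) + 1
    n = len(flat)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def frames_to_display(frames, threshold):
    # keep only the frames whose entropy reaches the threshold
    return [f for f in frames if image_entropy(f) >= threshold]
```

A frame consisting of a single repeated pixel value has entropy 0 and is screened out by any positive threshold, which matches the elimination of abnormal images described above.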
It is also possible to generate an extraction image obtained by extracting the pixels whose value representing the norm distribution is equal to or larger than a predetermined value, and to display the extraction image superimposed on the B-mode image. Since a pixel whose value representing the norm distribution is equal to or larger than the predetermined value corresponds to a pixel indicating a boundary, the extraction image may be displayed only on the boundary part of the B-mode image. In order to define the predetermined value, a histogram may be generated of the value representing the state of the norm distribution and its frequency, with regard to the image generated assuming that value as the pixel value. The histogram is searched for a bell-shaped distribution, and the minimum value of the bell-shaped distribution may be used as the aforementioned predetermined value.
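One simplified reading of this threshold selection, in which the valley between the low-value mode (background) and the high-value mode (boundary) of the histogram is taken as the predetermined value, can be sketched as follows (the bin count and the valley search are assumptions made for this sketch only):

```python
def valley_threshold(values, bins=8):
    # histogram of the scalar values; the emptiest interior bin is
    # taken as the valley between the two bell-shaped modes, and its
    # lower edge is returned as the extraction threshold
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    valley = min(range(1, bins - 1), key=lambda k: counts[k])
    return lo + valley * width

def extract_boundary(pixels, threshold):
    # keep only pixels at or above the threshold (boundary pixels);
    # None marks pixels left transparent when superimposed on B-mode
    return [[v if v >= threshold else None for v in row] for row in pixels]
```

With scalar values clustered near zero for static regions and near a large value along boundaries, the returned threshold falls between the two clusters, so only boundary pixels survive the extraction.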
The ultrasound imaging apparatus according to another aspect of the present invention is provided with a transmitter configured to transmit an ultrasound wave to an object, a receiver configured to receive the ultrasound wave coming from the object, and a processor configured to process a received signal from the receiver and generate images of at least two frames. The processor sets multiple regions of interest in a distribution of the received signals corresponding to one frame, out of the received signals corresponding to the two or more frames of images. The processor sets a search region wider than the region of interest in another frame, for each of the multiple regions of interest. The processor sets, within the search region, multiple candidate regions each having a size corresponding to the region of interest. The processor obtains the norm between an amplitude distribution or a phase distribution in the region of interest and an amplitude distribution or a phase distribution in the candidate region, for each of the multiple candidate regions, thereby obtaining the norm distribution within the search region. The processor generates an image assuming the value representing the state of the norm distribution as the pixel value of the region of interest that is associated with the search region.
In addition, according to the present invention, an ultrasound imaging method is provided. In other words, the method transmits an ultrasound wave to an object, processes a received signal obtained by receiving the ultrasound wave coming from the object, and generates images of at least two frames. The method selects two frames from the images, and sets multiple regions of interest in one frame. The method sets a search region wider than the region of interest in the other frame, for each of the multiple regions of interest. The method sets multiple candidate regions, each having a size corresponding to the region of interest, within the search region. The method obtains the norm between the pixel values in the region of interest and the pixel values in the candidate region, for each of the multiple candidate regions, thereby obtaining the norm distribution within the search region. Then, the method generates an image assuming the value representing the state of the norm distribution as the pixel value of the region of interest that is associated with the search region.
According to the present invention, a program for ultrasound imaging is also provided. In other words, this program causes a computer to execute first to fifth steps. In the first step, two frames are selected from ultrasound images of at least two frames. In the second step, multiple regions of interest are set in one frame. In the third step, a search region wider than the region of interest is set in the other frame for each of the multiple regions of interest, and multiple candidate regions each having a size corresponding to the region of interest are set within the search region. In the fourth step, the norm between the pixel values in the region of interest and the pixel values in the candidate region is obtained for each of the multiple candidate regions, thereby obtaining the norm distribution within the search region. In the fifth step, an image is generated assuming the value representing the state of the norm distribution as the pixel value of the region of interest that is associated with the search region.
A specific explanation will be provided as to the ultrasound imaging apparatus according to one embodiment of the present invention.
The ultrasound probe 1, on which the ultrasound elements are arranged in a one-dimensional array, serves as a transmitter configured to transmit an ultrasound beam (an ultrasound pulse) to a living body. The ultrasound probe 1 also serves as a receiver configured to receive an echo signal (a received signal) reflected from the living body. Under the control of the control system 4, the transmit beamformer 3 outputs a transmit signal having a delay time in accordance with a transmit focal point, and the transmit signal is sent to the ultrasound probe 1 via the transmit-receive switch 5. The ultrasound beam is reflected or scattered within the living body and returns to the ultrasound probe 1, where it is converted into electrical signals, which are transferred to the receive beamformer 6 as the received signal via the transmit-receive switch 5.
The receive beamformer 6 is a complex beamformer that mixes two received signals which are out of phase by 90 degrees. The receive beamformer 6 performs dynamic focusing to adjust the delay time in accordance with the receive timing under the control of the control system 4, so as to output radio frequency signals corresponding to the real part and the imaginary part. The envelope detector 7 detects the radio frequency signals, and the signals are converted into video signals. The video signals are inputted into the scan converter 8, so as to be converted into image data (B-mode image data). The configuration described above is the same as that of a well-known ultrasound imaging apparatus. Further, in the present invention, it is also possible to implement the ultrasound boundary detection with a configuration that directly processes the RF signal.
In the apparatus of the present invention, the processor 10 implements the ultrasound boundary detection process. The processor 10 incorporates a CPU 10a and a memory 10b. The CPU 10a executes the program stored in the memory 10b in advance, thereby generating a scalar field image on which tissue boundaries in the test subject are detectable. With reference to
The parameter setter 11 performs a setting of parameters for the signal processing in the processor 10, and a setting for selecting an image for display in the synthesizer 12. An operator (a device operator) inputs those parameters from the user interface 2. As the parameters for the signal processing, for instance, it is possible to accept from the operator a setting of the region of interest on a desired frame m, and a setting of a search region on a frame m+Δ that is different from the frame m. As for the setting for selecting the image for display, for instance, it is possible to accept from the operator a selection of which of the following is displayed on a monitor: an image obtained by synthesizing an original image and a vector field image (or a scalar image), or a sequence of at least two images placed side by side.
The processor 10 calculates a p-norm distribution from the two frames thus extracted, and generates a scalar field image (step 24). The processor generates a synthesized image by superimposing the generated scalar field image on the B-mode image, and displays the synthesized image on the monitor 13 (step 27). It is further possible that, in the step 23, frames sequentially different on a time-series basis are selected as the desired frame, and the aforementioned steps 21 to 27 are repeated. The synthesized images are successively displayed, thereby displaying a moving picture made up of the synthesized images.
Next, as shown in
The processor 10 sets multiple candidate regions 33 within the search region 32, each candidate region having a size equal to the size of the ROI 31, as shown in
The processor 10 uses the brightness distribution Pm+Δ(i, j) of the pixels in the candidate region 33 and the brightness distribution Pm(i0, j0) in the ROI 31 to calculate the p-norm according to the aforementioned formula (1), and sets this p-norm as the p-norm value of the candidate region 33. In the formula (1), the p-th power of the absolute value of the difference is calculated between the brightness Pm(i0, j0) of the pixel at the position (i0, j0) in the ROI 31 and the brightness Pm+Δ(i, j) of the pixel at the position (i, j) in the candidate region 33 that is associated with the position (i0, j0). Then, the values of the p-th power are summed over all the pixels in the candidate region 33, and the sum is raised to the 1/p-th power; the result is the p-norm. As the p-value, a predetermined real value, or a value accepted from the operator via the parameter setter 11, may be employed. The p-value is not limited to an integer; it may be a decimal number.
The p-norm, which includes “p” as the power as shown in the aforementioned formula (1), is a value corresponding to the concept of distance, and it represents the similarity between the brightness distribution Pm(i0, j0) in the ROI 31 and the brightness distribution Pm+Δ(i, j) in the candidate region 33. In other words, if the brightness distribution Pm(i0, j0) in the ROI 31 is identical to the brightness distribution Pm+Δ(i, j) in the candidate region 33, the p-norm becomes zero. The larger the difference between the two brightness distributions, the larger the value becomes.
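By way of a non-limiting illustration, the p-norm calculation of the formula (1) can be sketched in Python as follows (the function name and the plain-list image representation are assumptions of this sketch, not part of the embodiment):

```python
def p_norm(roi, candidate, p=2.0):
    # formula (1): sum |Pm(i0, j0) - Pm+delta(i, j)|**p over all
    # associated pixel pairs, then take the 1/p-th power of the sum
    total = 0.0
    for roi_row, cand_row in zip(roi, candidate):
        for a, b in zip(roi_row, cand_row):
            total += abs(a - b) ** p
    return total ** (1.0 / p)

# identical brightness distributions give a p-norm of zero; the
# value grows as the two distributions diverge
roi = [[10.0, 20.0], [30.0, 40.0]]
```

With p = 2 this reduces to the Euclidean distance between the two brightness distributions; as noted above, a non-integer p is equally valid.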
The processor 10 calculates the p-norm value, as to all the candidate regions 33 in the search region (step 53). Accordingly, it is possible to obtain the p-norm distribution within the search region 32 that is associated with the ROI 31. The p-norm value thus obtained is stored in the memory 10b in the processor 10.
As shown in
As described above, the p-norm distribution differs depending on whether the ROI 31 is positioned in a static part of the test subject or on a sliding boundary, and the present invention utilizes this difference to create an image. Specifically, a statistic indicating the p-norm distribution in the search region 32 is obtained, and the obtained statistic is assumed as the scalar value of the ROI 31 that is associated with this search region (step 54). Any statistic is applicable, as long as it can represent the difference in the p-norm distribution between the static part and the boundary part. Here, the rate of divergence obtained by the formula (2) is used as the statistic:
[Formula 2]
Rate of Divergence ≡ (average of the p-norm values within the search region) − (minimum p-norm value)  (2)
Histograms in
As discussed above, in the step 54, the rate of divergence of the p-norm distribution (scalar value) is obtained. According to the scalar value, it is possible to indicate whether the ROI 31 is positioned in the static part or in the sliding part of the test subject.
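Under the reading given earlier, in which the rate of divergence is the difference between the average and the minimum of the p-norm values in the search region, a minimal illustrative sketch is (the function name is hypothetical):

```python
def divergence_rate(norms):
    # rate of divergence of one search region's p-norm distribution:
    # the gap between the mean and the minimum; a deep, isolated
    # minimum (typical of a sliding boundary) yields a large value,
    # while a flat distribution (static part) yields a value near zero
    mean = sum(norms) / len(norms)
    return mean - min(norms)

static_part = [5.0, 5.1, 4.9, 5.0, 5.2]  # flat distribution
boundary = [5.0, 1.0, 4.8, 5.1, 5.2]     # valley along a boundary
```

Converting this scalar into a brightness value for every ROI yields the scalar field image of step 56.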
The aforementioned steps 51 to 54 are repeated until the calculation has been carried out for all the ROIs 31 (step 55). The rates of divergence (scalar values) obtained for all the ROIs 31 are converted into image pixel values (e.g., brightness values), thereby generating an image (scalar field image) (step 56). According to the steps 51 to 56 described above, the scalar field image of the step 24 is generated.
As shown in
Further in the scalar field image as shown in
As a comparative example,
As discussed above, in the tensor field image (
The scalar field image and the B-mode image obtained in the present invention are displayed in a manner superimposing one on another as shown in
In the present embodiment, it is further possible to generate the vector field image, and display this vector field image, the scalar field image, and the B-mode image in a superimposed manner. In this case, the process in step 25 is performed after the step 24, as shown in
The vector field image being obtained, the scalar field image, and the B-mode image are displayed in a superimposed manner (step 26).
In the present embodiment, it is sufficient if the p-value of the p-norm in the formula (1) is a real number. However, a parameter survey may be conducted on the p-value using an appropriate variation width, with respect to a typical sample of the evaluation target, for instance. An optimum p-value may be set as a value which enables acquisition of a clear image with the fewest virtual images. In addition, it is desirable that the p-value is a real number larger than 1.
In the explanation above, the rate of divergence is obtained as the statistic representing the distribution of the p-norm in the search region 32, and the scalar field image is generated based on this value, but it is also possible to use a parameter other than the rate of divergence. By way of example, it is possible to use the coefficient of variation, defined by the following formula. It is a statistic obtained by normalizing the standard deviation by the average, and it represents the magnitude of variation in the distribution (i.e., how difficult it is to separate the minimum value).
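The coefficient of variation (standard deviation normalized by the average) can be sketched as follows (an illustrative sketch; the population standard deviation is assumed here):

```python
def coefficient_of_variation(norms):
    # std / mean of the p-norm distribution in one search region;
    # zero for a perfectly flat distribution, larger when the
    # distribution contains a pronounced valley
    mean = sum(norms) / len(norms)
    variance = sum((n - mean) ** 2 for n in norms) / len(norms)
    return (variance ** 0.5) / mean
```

Like the rate of divergence, this statistic can be assigned to each ROI as its scalar value when generating the scalar field image.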
In the second embodiment, if any virtual image occurs in the scalar field image obtained in the first embodiment, this virtual image may be removed, or the like. In other words, a degree of reliability of the image region is identified, and a region with a low reliability is removed, or the like, thereby eliminating the virtual image and enhancing the reliability of the entire image. This will be explained with reference to
Upon receiving an instruction for removing the virtual image from the operator, the processor 10 reads and executes a program for removing the virtual image, and operates as shown in the flow of
A histogram as shown in
Since the second embodiment enables elimination of the virtual image, it is possible to provide a scalar field image on which the boundary of the test subject can be discerned more clearly.
In the first embodiment, statistics (the rate of divergence or the coefficient of variation) of the p-norm distribution is obtained to generate an image. In the third embodiment, an image is generated from the p-norm distribution where a tissue boundary is discernible, through the use of a different method. This processing method will be explained with reference to
In the p-norm value distribution in the search region 32 as described in the first embodiment, the candidate regions 33 along a boundary in the test subject form a region of small p-norm values (a valley of p-norm values) along the boundary. Therefore, the distribution of p-norm values has the characteristic that the candidate regions 33 along the boundary indicate smaller values than the candidate regions 33 in the direction orthogonal to the boundary. The present embodiment generates an image by using this characteristic.
Firstly, the processor 10 executes the processing from the step 21 to the step 23 in
The processes of the steps 113 and 114 are performed as to each of the eight directions 151 respectively illustrated in the eight patterns as shown in
A direction 151 in which the average of the p-norm values becomes a minimum value is selected out of the eight predetermined directions 151 (step 115). Next, the direction 152 orthogonal to the selected direction 151 is provided, and an average of the p-norm values of the candidate regions 33 being positioned along the direction 152 is obtained (step 116). The directions 152 orthogonal to the eight directions 151 are as illustrated in
Since the candidate regions along the boundary of the search region 32 in the test subject have small p-norm values (valley), the ratio obtained in the step 117 becomes a larger value, compared to the ROI 31 that is not located on the boundary. Therefore, by assuming the ratio as the pixel value, it is possible to generate an image which allows clear discerning of the boundary.
In the present embodiment, the ratio of the averages of the p-norm values is used, but this is not the only example. It is also possible to employ other function values, such as the difference between the average of the p-norm values in the minimum direction 151 and the average of the p-norm values in the orthogonal direction 152.
In the explanation above, as shown in
In a candidate region 33 positioned at a boundary part of the test subject, the p-norm average value of the pixels in the direction along the boundary (the direction 151 along which the p-norm average value becomes minimum) is small, and the p-norm average value in the direction 152 orthogonal thereto is large; therefore, the ratio therebetween becomes a large value. On the other hand, in a candidate region 33 positioned in a homogeneous area other than the boundary, the p-norm average value in the direction 151 and that in the orthogonal direction 152 become equivalent, and therefore the ratio becomes nearly 1. When the ratio is calculated for the candidate regions 33 of the entire image of a target frame, the pixels of the candidate regions 33 with a large ratio correspond to the boundary part. Thus, by generating an image assuming the ratio as the pixel value of the central pixel of the candidate region 33, it is possible to generate an image which allows estimation of the boundary in units of pixels. Instead of the ratio, it is further possible to use another function value, such as the difference between the p-norm average value in the direction 151 having the minimum value and the p-norm average value in the direction 152 orthogonal thereto.
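The direction-based scalar described above can be sketched as follows (a reduced illustrative sketch using four directions through the center instead of the eight directions of the embodiment; the norm map is assumed to be a small grid of p-norm values, and the function name is hypothetical):

```python
def boundary_ratio(norm_map, ci, cj, radius=1):
    # average the norm values along each line through the centre,
    # take the direction with the smallest average (the "valley"
    # along a boundary), and return the orthogonal-direction average
    # divided by that minimum average; near 1 in homogeneous areas,
    # large on a boundary
    directions = {"h": (0, 1), "v": (1, 0), "d": (1, 1), "a": (1, -1)}
    orthogonal = {"h": "v", "v": "h", "d": "a", "a": "d"}
    avg = {}
    for name, (di, dj) in directions.items():
        vals = [norm_map[ci + k * di][cj + k * dj]
                for k in range(-radius, radius + 1)]
        avg[name] = sum(vals) / len(vals)
    valley = min(avg, key=avg.get)
    return avg[orthogonal[valley]] / avg[valley]

# a horizontal valley of small p-norm values versus a flat map
valley_map = [[5.0, 5.0, 5.0], [1.0, 1.0, 1.0], [5.0, 5.0, 5.0]]
flat_map = [[5.0] * 3 for _ in range(3)]
```

Replacing the ratio with the difference of the two averages, as mentioned above, changes only the final return expression.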
The fourth embodiment will be explained.
In the fourth embodiment, prior to subjecting the p-norm distribution in the search region 32 to the processing of
Specifically, the processes in the steps 21 to 23 in
Similarly, when the boundary of the test subject is obtained from the distribution of the pixel values within the candidate region 33 as explained in the latter half of the third embodiment, it is also possible that the Laplacian filter is applied to the pixel value distribution, subjecting the distribution to the enhancement. Thereafter, the p-norm average value or the ratio is obtained.
As the fifth embodiment, an explanation will be provided as to a processing method to generate an image in which the tissue boundary is discernible from the p-norm distribution by using an eigenvalue decomposition process.
Firstly, the processor 10 executes the processes in the steps 21 to 23 in
Here, “Nmn” represents the p-norm value obtained by the formula (1) as to the candidate regions 33 within the search region 32, and “m” and “n” indicate the positions of the candidate regions 33 within the search region 32.
The maximum eigenvalue or the linear combination of eigenvalues is obtained as the scalar value, as to all the ROIs 31, and a scalar field image is generated, assuming the scalar value as the pixel value (brightness value, or the like), similar to the step 56 in
As described above, according to the present embodiment, the scalar field image is generated by using the eigenvalue.
In the present embodiment, the maximum eigenvalue among the eigenvalues, or a linear combination of eigenvalues, is employed, but it is not limited to these examples. It is further possible to use one or more other eigenvalues.
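As one concrete, non-limiting way to obtain the dominant eigenvalue of the matrix N = [Nmn] without a linear-algebra library, power iteration may be sketched as follows (this sketch returns only the maximum-magnitude eigenvalue and assumes a square, non-zero norm matrix):

```python
def dominant_eigenvalue(matrix, iters=200):
    # power iteration on the square matrix of p-norm values Nmn over
    # the search region: repeatedly multiply a vector by the matrix,
    # normalize, and read off the Rayleigh quotient, which converges
    # to the eigenvalue of largest magnitude
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        lam = sum(v[i] * sum(matrix[i][j] * v[j] for j in range(n))
                  for i in range(n))
    return lam
```

The eigenvalue obtained for each ROI is then assigned as that ROI's pixel value, in the same manner as the statistics of the first embodiment.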
As the sixth embodiment, an explanation will be provided as to a method for generating a scalar field image that is capable of extracting a boundary based on a vector field, when a motion vector field image is generated by performing the process of the step 25 in
It is assumed that the motion vector field obtained in the step 25 of
Firstly, an explanation will be provided as to the case where a conventional strain tensor is obtained for the vector field in each of those models as described above, and the strain tensor is converted into a scalar field. The formula for obtaining the strain tensor is publicly known as described in the Patent Document 2, and it is defined by the following formula:
In the formula (5), the x-component of the motion vector is assumed as X, and the y-component thereof is assumed as Y.
The partial differential values expressed by the formula (5) are calculated, for instance, as a difference average of each of the vector components on both sides of the ROI 131. Specifically, they are calculated by the formula (6) as to each of the models in
[Formula 6]
(∂Y/∂x,∂X/∂y) (6)
By way of example, in the vector field of
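The difference-based partial derivatives above can be sketched with central differences over a motion vector field. The grid spacing, the axis convention, and the use of `numpy.gradient` in place of the embodiment's difference average over the ROI sides are assumptions for illustration.

```python
import numpy as np

def cross_derivatives(X, Y, dx=1.0, dy=1.0):
    """Central-difference estimates of dY/dx and dX/dy over a motion
    vector field, where X[i, j] and Y[i, j] hold the x- and
    y-components of the vector at row i (y direction) and column j
    (x direction)."""
    dY_dx = np.gradient(np.asarray(Y, dtype=float), dx, axis=1)
    dX_dy = np.gradient(np.asarray(X, dtype=float), dy, axis=0)
    return dY_dx, dX_dy
```

For a linear vector field the central differences are exact, so a field such as Y = 2x, X = 3y yields constant derivatives, which is convenient for checking the convention.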
In the present invention, the motion vector field is converted into the scalar field by using the scalar value defined by the following formula (7). Since the formula (7), like the formula (1), has a format that includes powers and the root of a power, it is referred to as the "boundary norm".
When the boundary norm as shown above is obtained as to each of the models in
The seventh embodiment will be explained.
In the seventh embodiment, upon setting multiple ROIs 31 in the step 51 of the
Firstly, as shown in
Next, the target ROI 31-1 is selected (step 163), and the candidate region 33 is selected within the search region 32 associated with the target ROI 31-1 (step 164). According to the following formula (8), whose p-th root corresponds to the formula (1), the p-norm sum is calculated for those pixels in the ROI 31-1 whose p-norm sum is not stored in the lookup table (i.e., the pixels not in the overlapping region 151-1) (step 165). It is to be noted that in the step 165, if the p-norm sum data of the overlapping region 151-1 has not been recorded yet, the p-norm sum is also calculated for the pixels in the overlapping region 151-1.
Next, the lookup table is referred to, and if the p-norm sum data of the pixels in the overlapping region 151-1 of the ROI 31-1 is stored there, it is read out. It is then added to the p-norm sum obtained in the step 165, and the p-th root of the sum is calculated, thereby obtaining the p-norm of the formula (1) (step 166). Accordingly, it is possible to obtain the p-norm value for the candidate region of the ROI 31-1. The p-norm value thus obtained is stored in the memory 10b.
If the p-norm sum calculated in the step 166 includes data of the overlapping region 151-1 not yet recorded in the lookup table, the p-norm sum of the overlapping region 151-1 is recorded in the lookup table (step 167). This is repeated for all the candidate regions within the search region 32 associated with the ROI 31-1, whereby a distribution of the p-norm values of the ROI 31-1 is obtained (step 168). After the distribution of p-norm values of the ROI 31-1 is obtained, the rate of divergence is obtained in the step 54 and set as the scalar value of the target ROI 31-1.
Next, the subsequent ROI 31-2 is selected (steps 162 and 163), and a candidate region is selected (step 164). According to the formula (8), whose p-th root corresponds to the formula (1), the p-norm sum is calculated for the pixels in the ROI 31-2 whose p-norm sum is not stored in the lookup table (i.e., the pixels not in the overlapping region 151-1) (step 165). Since the p-norm sum data of the pixels in the overlapping region 151-1 of the ROI 31-2 is already stored in the lookup table, it is read out, added to the p-norm sum obtained in the step 165, and the p-th root of the sum is calculated, thereby obtaining the p-norm of the formula (1) (step 166). Accordingly, the p-norm value for the candidate region of the ROI 31-2 is obtained with a small amount of computation, without recalculating the p-norm sum of the overlapping region 151-1.
The p-norm value thus obtained is stored in the memory 10b. The p-norm sum of the overlapping region 151-2 obtained in the calculation of the step 165 is recorded in the lookup table (step 167).
Repeating the processes in the above steps 163 to 168 for all the ROIs yields the distribution of the p-norm values (step 55). This eliminates the need to recalculate the overlapping regions 151, reducing the amount of computation.
In the present embodiment, for the case where adjacent ROIs 31 partially overlap, an explanation has been provided as to the configuration in which the overlapping region is defined and its p-norm sum is stored in the lookup table. Also in the case where adjacent candidate regions 33 partially overlap within the search region 32, an overlapping region may be defined and its p-norm sum stored in the lookup table, and this configuration may likewise reduce the amount of computation.
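The lookup-table scheme of steps 163 to 168 might be sketched as follows. The exact forms of formulas (1) and (8) are not reproduced in this text, so the sketch assumes the p-norm is (Σ|a−b|^p)^(1/p) over pixel differences between the ROI and the candidate region, with the overlapping-region partial sum (the formula (8) value) cached in a dictionary standing in for the lookup table; those are assumptions, as are all function and parameter names.

```python
import numpy as np

def pnorm_with_cache(roi, cand, overlap_cols, cache, key, p=2):
    """p-norm between an ROI and a candidate region, assumed here to
    be (sum |roi - cand|**p) ** (1/p).

    `overlap_cols` marks the pixel columns shared with the adjacent
    ROI; their partial p-norm sum is stored in `cache` under `key` so
    the next ROI can reuse it instead of recomputing (steps 165-167)."""
    diff = np.abs(np.asarray(roi, dtype=float)
                  - np.asarray(cand, dtype=float)) ** p
    non_overlap = [c for c in range(diff.shape[1]) if c not in overlap_cols]
    non_overlap_sum = diff[:, non_overlap].sum()
    if key in cache:                 # step 166: reuse the stored sum
        overlap_sum = cache[key]
    else:                            # steps 165/167: compute and record
        overlap_sum = diff[:, sorted(overlap_cols)].sum()
        cache[key] = overlap_sum
    return (non_overlap_sum + overlap_sum) ** (1.0 / p)
```

The reuse is valid whenever the adjacent ROI shares both the overlap pixels and, for the same displacement, the corresponding candidate-region pixels, which is the situation the embodiment describes.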
The eighth embodiment will be explained.
By executing any of the aforementioned first to seventh embodiments on successive frames, it is possible to generate a continuous image of the scalar field, or a continuous image of the vector field, obtained from the norm distribution, and display the continuous image on a time-series basis. On this occasion, an abnormal frame may occur in which an appropriate image fails to be generated for some reason. The eighth embodiment is directed to elimination of such an abnormal frame, allowing an appropriate continuous image to be displayed.
Since the abnormal frame is characterized in that the delineated area becomes extremely small, it is possible to discriminate between the abnormal frame and a normal frame by judging this point. In the present embodiment, whether the delineated area is large or small is determined according to the magnitude of the information entropy. The information entropy of the vector field image is defined by the following formula (9):
[Formula 9]
H = −ΣPx log Px − ΣPy log Py (9)
Here, Px represents the event probability of the x-component of the vector, and Py represents the event probability of the y-component of the vector. The information entropy H obtained by this formula is the combined entropy of the x-component and the y-component, representing the average information amount of the entire frame.
If the information entropy is calculated for the scalar field image obtained from the p-norm distribution or the like according to the first to seventh embodiments, only one variable exists on the right side of the formula (9).
Specifically, a threshold is set by the step 181 in
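The entropy-based screening of formula (9) might be sketched as follows, with the event probabilities Px and Py estimated from histograms of the vector components. The bin count, the threshold handling, and all names are illustrative assumptions.

```python
import numpy as np

def frame_entropy(vx, vy, bins=32):
    """Information entropy H of a motion vector field per formula (9):
    H = -sum Px log Px - sum Py log Py, with Px and Py estimated from
    histograms of the x- and y-components (an assumed estimator)."""
    def component_entropy(v):
        counts, _ = np.histogram(np.ravel(v), bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-np.sum(p * np.log(p)))
    return component_entropy(vx) + component_entropy(vy)

def is_abnormal_frame(vx, vy, threshold):
    """Flag a frame as abnormal when its entropy falls below the
    threshold: a very small delineated area concentrates the vector
    components into few values, giving a low average information
    amount for the frame."""
    return frame_entropy(vx, vy) < threshold
```

For a scalar field image, the same histogram estimate would be applied to the single pixel-value variable, matching the one-variable form of formula (9) noted above.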
The ninth embodiment will be explained.
In the first embodiment, as shown in
The present invention is applicable to a medical ultrasound diagnostic apparatus or treatment apparatus, and to a general apparatus that measures strain and/or misalignment using waves in general, including electromagnetic waves and ultrasound waves.
Number | Date | Country | Kind |
---|---|---|---|
2011-237670 | Oct 2011 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2012/069244 | 7/27/2012 | WO | 00 | 12/30/2014 |