The present invention relates to an image processing apparatus and an image processing method that enhance an input image by, for example, generating and adding high frequency components to an enlarged input image that is an enlargement of an original image, in order to obtain an output image with high perceived resolution, and to an image display apparatus using this image processing apparatus and method.
The present invention also enhances an image in such a way that, when an image including noise is input, an output image with high perceived resolution is obtained after the noise has been eliminated.
Images are generally reproduced and displayed after the image signals representing them have been subjected to appropriate image processing (hereinafter, an ‘image signal’ or ‘image data’ may be referred to simply as an ‘image’).
In the image processing apparatus disclosed in patent document 1, for example, following multiresolution decomposition, a desired frequency band is enhanced by specifying an enhancement coefficient for the image in the desired frequency band according to the image signal in a lower frequency band.
In the image processing apparatus in which an appropriate enhancement coefficient is specified for the image in the desired frequency band of the decomposed multiresolution image, however, for some input images the enhancement processing is inappropriate or inadequate and output images with proper picture quality cannot be obtained.
If an image that has been subjected to an enlargement process is input, for example, part of the frequency spectrum of the image before the enlargement processing folds over and appears as a fold-over component on the high-frequency side of the frequency spectrum of the input image. Simply enhancing the high frequency component is then inappropriate, because the fold-over component is enhanced. If the frequency band is limited so as to enhance only a frequency band excluding the fold-over component, however, then enhancement of the high-frequency side of the frequency spectrum must be avoided, and in consequence, the enhancement process is inadequate.
If a noise-eliminated image is input, the high-frequency side of the frequency spectrum has been eliminated by the noise elimination process. Attempts to extract the high-frequency component therefore fail, which may make it impossible to carry out adequate image enhancement processing.
When an image including noise is input, simply enhancing the high-frequency component is inappropriate because the noise included in the enhanced frequency band is also enhanced.
The image processing apparatus of the invention includes:
a first intermediate image generating means for generating a first intermediate image by extracting a component in a vicinity of a certain frequency band of an input image;
a second intermediate image generating means for generating a second intermediate image from the first intermediate image; and
an adding means for adding the input image and the second intermediate image.
According to the present invention, adequate image enhancement processing can be carried out even if the frequency spectrum of the input image includes a fold-over component on the high-frequency side, or does not include adequate high-frequency components.
The illustrated image processing apparatus includes a first intermediate image generating means 1, a second intermediate image generating means 2, and an adding means 4.
The first intermediate image generating means 1 generates an intermediate image D1 (the first intermediate image) by extracting a component in a vicinity of a particular frequency band from an input image DIN.
The second intermediate image generating means 2 generates an intermediate image D2 (the second intermediate image) by carrying out certain processing, which will be described later, on intermediate image D1.
The adding means 4 adds the input image DIN, the first intermediate image D1, and the second intermediate image D2. The image obtained as the resulting sum by the adding means 4 is output as an output image DOUT.
The high-frequency component image generating means 1A and the low-frequency component image generating means 1B form a band-pass filter means for extracting the component in the particular frequency band. Image D1B is output from the first intermediate image generating means 1 as intermediate image D1.
The operation of the image processing apparatus in the first embodiment of the invention will be described in detail below. Unless otherwise specified, Fn will denote the Nyquist frequency of the input image DIN in the description below. First the detailed operation of the first intermediate image generating means 1 will be described.
In the first intermediate image generating means 1, the high-frequency component image generating means 1A generates image D1A by extracting only the high-frequency component of the input image DIN at and above a first frequency Fd. In the illustrated example, the first frequency Fd is slightly less than Fn/2. The high-frequency component can be extracted by performing a high-pass filtering process. The high-frequency component is extracted in the horizontal direction and vertical direction separately. The high-frequency component image generating means 1A therefore includes a horizontal high-frequency component image generating means 1Ah for generating an image D1Ah by performing a horizontal high-pass filtering process on the input image DIN to extract only a horizontal high-frequency component at and above a first horizontal frequency, and a vertical high-frequency component image generating means 1Av for generating an image D1Av by performing a vertical high-pass filtering process to extract a vertical high-frequency component at and above a first vertical frequency; image D1A includes image D1Ah and image D1Av.
Next, in the first intermediate image generating means 1, the low-frequency component image generating means 1B generates an image D1B by extracting only the low-frequency component of image D1A at and below a second frequency Fe. In the illustrated example, the second frequency Fe is slightly greater than Fn/2. The low-frequency component can be extracted by performing a low-pass filtering process. The low-frequency component is extracted in the horizontal direction and the vertical direction separately. The low-frequency component image generating means 1B therefore includes a horizontal low-frequency component image generating means 1Bh for generating an image D1Bh by performing a horizontal low-pass filtering process on image D1Ah to extract a horizontal low-frequency component at and below a second horizontal frequency, and a vertical low-frequency component image generating means 1Bv for generating an image D1Bv by performing a vertical low-pass filtering process on image D1Av to extract a vertical low-frequency component at and below a second vertical frequency; image D1B includes image D1Bh and image D1Bv. Image D1B is output from the first intermediate image generating means 1 as intermediate image D1. Intermediate image D1 includes an image D1h corresponding to image D1Bh and an image D1v corresponding to image D1Bv.
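The two-stage band-pass extraction described above, a high-pass filtering process followed by a low-pass filtering process applied in the horizontal and vertical directions separately, can be sketched as follows. The 3-tap kernels and their cutoff placement are illustrative assumptions, not the actual filter coefficients of the apparatus.

```python
import numpy as np

def convolve_rows(img, kernel):
    """Apply a 1-D FIR kernel along the horizontal direction of each row."""
    pad = len(kernel) // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i, k in enumerate(kernel):
        out += k * padded[:, i:i + img.shape[1]]
    return out

# Illustrative 3-tap kernels (assumed, not from the document):
HIGH_PASS = np.array([-0.25, 0.5, -0.25])   # passes components near Fn
LOW_PASS  = np.array([0.25, 0.5, 0.25])     # attenuates components near Fn

def first_intermediate_horizontal(din):
    """Horizontal part of the first intermediate image (D1h):
    high-pass filtering (means 1Ah) followed by low-pass filtering
    (means 1Bh), leaving a component in a particular frequency band."""
    d1ah = convolve_rows(din, HIGH_PASS)   # extract the high-frequency component
    d1bh = convolve_rows(d1ah, LOW_PASS)   # keep only its lower-frequency part
    return d1bh
```

The vertical components D1Av and D1Bv would be obtained in the same way by applying the kernels along the columns (for example, to the transposed image).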
Next the detailed operation of the second intermediate image generating means 2 will be described.
In the second intermediate image generating means 2, first the non-linear processing means 2A generates image D2A by performing non-linear processing, which will be described later, on intermediate image D1. The non-linear processing is performed in the horizontal direction and vertical direction separately. The non-linear processing means 2A includes a horizontal non-linear processing means 2Ah for generating an image D2Ah by performing non-linear processing, which will be described later, on image D1h, and a vertical non-linear processing means 2Av for generating an image D2Av by performing non-linear processing, which will be described later, on image D1v; image D2A includes image D2Ah and image D2Av.
The operation of the non-linear processing means 2A will now be described in further detail. The horizontal non-linear processing means 2Ah and the vertical non-linear processing means 2Av included in the non-linear processing means 2A have the same structure. Here the horizontal non-linear processing means 2Ah performs processing in the horizontal direction, and the vertical non-linear processing means 2Av performs processing in the vertical direction.
The zero-crossing decision means 311h checks the pixel values in the input image DIN311h for changes in the horizontal direction. A point where the pixel value changes from positive to negative or from negative to positive is identified as a zero-crossing point, and the positions of the pixels preceding and following the zero-crossing point (the adjacently preceding and following pixels) are reported to the signal amplifying means 312h by a signal D311h. Preceding and following herein means the preceding and following positions in the sequence in which signals are supplied: the positions to the left and right when the pixel signals are supplied from left to right in the horizontal direction, or the positions above and below when the pixel signals are supplied from top to bottom in the vertical direction. The zero-crossing decision means 311h in the horizontal non-linear processing means 2Ah recognizes the pixels to the left and right of the zero-crossing point as the pixels preceding and following the zero-crossing point.
The signal amplifying means 312h identifies the pixels preceding and following the zero-crossing point (the adjacently preceding and following pixels) in accordance with signal D311h and generates a non-linear image D312h by amplifying the pixel values (increasing the absolute values) of only the pixels preceding and following the zero-crossing point. The amplification factor for the pixel values of the pixels preceding and following the zero-crossing point is a value greater than 1; the amplification factor for the pixel values of other pixels is 1.
The non-linear image D312h is output from the horizontal non-linear processing means 2Ah as image D2Ah.
The zero-crossing decision means 311v checks the pixel values in the input image DIN311v for changes in the vertical direction. A point where the pixel value changes from positive to negative or from negative to positive is identified as a zero-crossing point, and the positions of the pixels preceding and following the zero-crossing point (the adjacently preceding and following pixels) are reported to the signal amplifying means 312v by a signal D311v. The zero-crossing decision means 311v in the vertical non-linear processing means 2Av recognizes the pixels above and below the zero-crossing point as the pixels preceding and following the zero-crossing point.
The signal amplifying means 312v identifies the pixels preceding and following the zero-crossing point (the adjacently preceding and following pixels) from signal D311v and generates a non-linear image D312v by amplifying the pixel values (increasing the absolute values) of only the pixels preceding and following the zero-crossing point. The amplification factor for the pixel values of the pixels preceding and following the zero-crossing point is a value greater than 1, and the amplification factor for the pixel values of other pixels is 1.
The non-linear processing means 2A operates as described above.
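For a single horizontal line of pixel values, the zero-crossing detection performed by the zero-crossing decision means 311h and the amplification performed by the signal amplifying means 312h can be sketched as follows. The amplification factor of 2.0 is an illustrative assumption; the description only requires a factor greater than 1.

```python
import numpy as np

def nonlinear_process_row(row, gain=2.0):
    """Sketch of the horizontal non-linear processing (means 2Ah):
    find points where the value changes sign (zero-crossing points)
    along the row and amplify only the pixel values of the pixels
    adjacent to each crossing; all other pixels pass through with
    an amplification factor of 1."""
    row = np.asarray(row, dtype=float)
    out = row.copy()
    for x in range(len(row) - 1):
        # A zero-crossing lies between two pixels whose values differ in sign.
        if row[x] * row[x + 1] < 0:
            out[x] = row[x] * gain         # pixel preceding the crossing
            out[x + 1] = row[x + 1] * gain # pixel following the crossing
    return out
```

The vertical non-linear processing means 2Av would apply the same operation along each column.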
Next, in the second intermediate image generating means 2, the high-frequency component image generating means 2B generates image D2B by extracting only the high-frequency component of image D2A at and above a third frequency Ff. The high-frequency component can be extracted by performing a high-pass filtering process. In the illustrated example, the third frequency Ff is equal to Fn/2.
The high-frequency component of the image is extracted in the horizontal direction and the vertical direction separately. The high-frequency component image generating means 2B includes a horizontal high-frequency component image generating means 2Bh for generating an image D2Bh by performing a horizontal high-pass filtering process on image D2Ah to extract a horizontal high-frequency component at and above a third horizontal frequency, and a vertical high-frequency component image generating means 2Bv for generating an image D2Bv by performing a vertical high-pass filtering process on image D2Av to extract a vertical high-frequency component at and above a third vertical frequency; image D2B includes image D2Bh and image D2Bv.
Image D2B is output from the second intermediate image generating means 2 as intermediate image D2. Intermediate image D2 includes an image D2h corresponding to image D2Bh and an image D2v corresponding to image D2Bv.
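Taken together, the processing of the second intermediate image generating means 2 along one horizontal line can be sketched as follows; the gain and the 3-tap high-pass kernel are illustrative assumptions rather than the actual parameters of the apparatus.

```python
import numpy as np

def second_intermediate_row(d1_row, gain=2.0):
    """Sketch of the second intermediate image generation for one
    horizontal line: non-linear processing (means 2Ah) followed by a
    small high-pass filter (means 2Bh)."""
    row = np.asarray(d1_row, dtype=float)
    amplified = row.copy()
    for x in range(len(row) - 1):
        if row[x] * row[x + 1] < 0:            # zero-crossing between x, x+1
            amplified[x] = row[x] * gain
            amplified[x + 1] = row[x + 1] * gain
    # High-pass filtering (illustrative 3-tap kernel) keeps mainly the
    # newly generated component near the Nyquist frequency.
    pad = np.pad(amplified, 1, mode="edge")
    return -0.25 * pad[:-2] + 0.5 * pad[1:-1] - 0.25 * pad[2:]
```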
Next, the adding means 4 adds intermediate image D1 and intermediate image D2 to the input image DIN. Intermediate image D1 includes image D1h and image D1v, and intermediate image D2 includes image D2h and image D2v, so to add intermediate image D1 and intermediate image D2 means to add all of images D1h, D1v, D2h, and D2v to the input image DIN.
The addition process performed by the adding means 4 is not limited to simple addition; weighted addition may be performed instead. Each of the images D1h, D1v, D2h, and D2v may be amplified by a different amplification factor before being added.
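As a sketch, the weighted addition could take the following form, where the four amplification factors are illustrative:

```python
import numpy as np

def weighted_add(din, d1h, d1v, d2h, d2v, w=(1.0, 1.0, 1.0, 1.0)):
    """Sketch of the adding means 4 with weighted addition: each of the
    images D1h, D1v, D2h, and D2v is amplified by its own (illustrative)
    amplification factor before being added to the input image DIN."""
    w1h, w1v, w2h, w2v = w
    return din + w1h * d1h + w1v * d1v + w2h * d2h + w2v * d2v
```

Setting all weights to 1.0 reduces this to simple addition.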
An example in which the image processing apparatus in the first embodiment is utilized as part of an image display apparatus will now be described. The description will clarify the effects of the image processing apparatus in the first embodiment.
If the image size of the original image DORG supplied to the input terminal U0 is smaller than the image size of the monitor U3, the image enlarging means U1 outputs an image DU1 obtained by enlarging the original image DORG. The image can be enlarged by the bicubic method, for example.
The image processing apparatus U2 of the first embodiment outputs an image DU2 obtained by performing the processing described above on image DU1. Image DU2 is displayed on the monitor U3.
The operation and effects of the image enlarging means U1 will be described below on the assumption that the number of pixels in the original image DORG is half of the number of pixels in the monitor U3 in both the horizontal and vertical directions.
The horizontal zero insertion means U1A generates the image DU1A shown in
For the sake of simplicity, a single frequency axis is used in
First the frequency spectrum of the original image DORG will be described. The image input as the original image DORG is generally a natural image and its spectral intensity is concentrated around the origin of the frequency space. The frequency spectrum of the original image DORG accordingly resembles spectrum SPO in
Next the spectral intensity of image DU1A will be described. Image DU1A is generated by inserting a pixel having a pixel value of 0 for each pixel in the original image DORG in the horizontal direction. This process causes the frequency spectrum to fold over at the Nyquist frequency of the original image DORG. Because a spectrum SPM is generated by fold-over of the spectrum SPO at frequencies of ±Fn/2, the frequency spectrum of image DU1A is represented as shown in
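The fold-over caused by zero insertion can be observed numerically; in the following sketch (with an illustrative signal length and frequency), a zero is inserted after each sample of a low-frequency cosine and the magnitude spectrum is examined.

```python
import numpy as np

n = 8
x = np.cos(2 * np.pi * np.arange(n) / n)   # low-frequency cosine (bin 1 of n)

up = np.zeros(2 * n)
up[::2] = x                                # insert a zero after every sample

spec = np.abs(np.fft.fft(up))
# In the positive-frequency half, the original component remains at bin 1
# and a fold-over image appears at bin 7, mirrored about bin 4, which is
# the Nyquist frequency of the original signal (Fn/2 of the new signal).
peaks = np.flatnonzero(spec[: n + 1] > 1.0)
```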
Next the frequency response of the horizontal low-frequency component passing means U1B will be described. The horizontal low-frequency component passing means U1B is implemented by a low-pass filter, and its frequency response decreases as the frequency increases, as shown in
Finally, the frequency spectrum of image DU1B will be described. Image DU1B is obtained by performing a low-pass filtering process, with the frequency response shown in
Among the processing by the image enlarging means U1, the effects in the frequency domain of the processing performed by the vertical zero insertion means U1C and the vertical low-frequency component passing means U1D will not be described, but from the content of the processing it can be easily understood that the effects are as described with reference to
In the subsequent description, spectrum SP2 will be referred to as the fold-over component. The fold-over component appears on an image as a spurious signal or noise having relatively high frequency components. This type of noise or spurious signal includes overshoot, jaggies, ringing, and the like.
The effects of the image processing apparatus according to the first embodiment will now be described.
In
First the frequency spectrum of the input image DIN will be described. Because image DU1D is input as the input image DIN, the frequency spectrum of the input image DIN, shown in
Next the frequency response of high-frequency component image generating means 1A will be described. Since high-frequency component image generating means 1A is implemented by a high-pass filter, its frequency response decreases as the frequency decreases, as shown in
Next the frequency response of the low-frequency component image generating means 1B will be described. Since the low-frequency component image generating means 1B is implemented by a low-pass filter, its frequency response decreases as the frequency increases, as shown in
Next the frequency response of the first intermediate image generating means 1 will be described. Among the frequency components of the input image DIN, the frequency component in the low-frequency region RL1 shown in
The intermediate region RM1, which does not include a fold-over component generated through the insertion of pixels having pixel values of 0 into the original image DORG, occupies part of the region at and below the Nyquist frequency Fn/2 of the original image DORG.
Next the frequency spectrum of intermediate image D1 will be described. The intermediate image D1 shown in
A high-frequency component corresponding to the high-frequency region RH2 is generated in non-linearly processed image D2A, as described later.
The above operations and effects will now be described in further detail.
As shown in
That is, a change in sampling interval does not change the position of the zero-crossing point in the signal representing the high-frequency component near the edge, but as the sampling interval decreases (that is, as the resolution increases), the slope of the high-frequency component near the edge increases, and the positions of the points that give the local maximum and minimum values approach the zero-crossing point.
Like
The adding means 4 adds intermediate image D1 and intermediate image D2 to the input image DIN to generate the output image DOUT. As described earlier, intermediate image D1 is obtained by excluding the fold-over component from the high-frequency component of the input image DIN, and corresponds to the high-frequency component near the Nyquist frequency of the original image DORG, as shown in
Supplying a high-frequency component in the band at and above the Nyquist frequency of the original image DORG can increase the perceived image resolution, so the perceived image resolution can be improved even if only intermediate image D2 is added to the input image DIN in the adding means 4. That is, instead of adding both the first intermediate image D1 and the second intermediate image D2 to the input image DIN as in
In addition, in the image processing apparatus in the first embodiment, the first intermediate image generating means 1 and the second intermediate image generating means 2 perform image processing in the horizontal direction and the vertical direction in parallel. Accordingly, the effects described above can be obtained not just in the horizontal or vertical direction but in any direction.
Considered in the frequency domain, the image processing apparatus in the first embodiment generates an image D2B corresponding to a high-frequency component near the Nyquist frequency ±Fn of the input image DIN on the basis of a component of the input image DIN near ±Fn/2, half the Nyquist frequency of the input image DIN, which is the Nyquist frequency of the original image DORG (or in another particular frequency band), within the frequency band from the origin to Fn. Accordingly, even if the frequency component near the Nyquist frequency ±Fn has been lost in the input image DIN, a frequency component near the Nyquist frequency ±Fn can be supplied by image D2B.
The location used as the particular frequency band is not limited to the vicinity of ±Fn/2. The frequency band to be used can be changed by suitably changing the frequency responses of high-frequency component image generating means 1A and low-frequency component image generating means 1B.
In the description given above, image enlargement processing is given as an example in which a frequency component near the Nyquist frequency Fn is lost, but that is not the only cause of the loss of frequency components near the Nyquist frequency Fn in the input image DIN; noise elimination and various other causes can also be considered. Therefore, the use of the image processing apparatus in the embodiments is not limited to processing following an image enlargement process.
The image processing apparatus shown in
The first intermediate image generating means 1 is structured, for example, as shown in
The second intermediate image generating means 2 is structured, for example, as shown in
The intermediate image postprocessing means 3 generates an intermediate image D3 (the third intermediate image) in which processing described below has been carried out on the second intermediate image D2.
The adding means 4 adds the input image DIN, the first intermediate image D1, and the third intermediate image D3. The image obtained as the resulting sum by the adding means 4 is output as an output image DOUT.
The operation of the intermediate image postprocessing means 3 will be described in detail below.
First, the sign comparing means 3A in the intermediate image postprocessing means 3 compares the signs of the pixel values of intermediate image D1 and intermediate image D2, and outputs the results as signal D3A. The comparison of signs is carried out both on the pixels of image D1h and image D2h (on pixels in the same position in both images) and on the pixels of image D1v and image D2v (on pixels in the same position in both images). The intermediate image postprocessing means 3 includes a horizontal sign comparing means 3Ah and a vertical sign comparing means 3Av; the horizontal sign comparing means 3Ah compares the signs of the pixels in image D1h and image D2h and outputs the results as a signal D3Ah; the vertical sign comparing means 3Av compares the signs of the pixels in image D1v and image D2v and outputs the results as a signal D3Av. Signal D3Ah and signal D3Av are output as signal D3A.
The operation of the sign comparing means 3A will now be described in further detail.
The horizontal sign comparing means 3Ah compares the signs of the values of pixels with the same coordinates in image D1h and image D2h. The signs of pixel values D1h(1, 1) and D2h(1, 1) are compared, the signs of pixel values D1h(1, 2) and D2h(1, 2) are compared, and in general, the signs of pixel values D1h(x, y) and D2h(x, y) are compared, comparing the signs of pixel values at the same coordinates; agreement or disagreement of the signs of the pixel values of each pixel is tested, and the results are output as a signal D3Ah. The vertical sign comparing means 3Av likewise compares the signs of the values of pixels with the same coordinates in image D1v and image D2v. The signs of pixel values D1v(1, 1) and D2v(1, 1) are compared, the signs of pixel values D1v(1, 2) and D2v(1, 2) are compared, and in general, the signs of pixel values D1v(x, y) and D2v(x, y) are compared; agreement or disagreement of the signs of the pixel values of each pixel is tested, and the results are output as a signal D3Av. Signal D3Ah and signal D3Av are output from the sign comparing means 3A as signal D3A.
The sign comparing means 3A operates as described above.
The pixel value modifying means 3B generates image D3B by modifying the pixel values in intermediate image D2 on the basis of signal D3A. Of the pixel values in intermediate image D2, the pixel value modifying means 3B sets to zero those that signal D3A indicates differ in sign from the corresponding pixel values in intermediate image D1. Where the signs match, the pixel values of intermediate image D2 are output without change. This process is carried out on both image D2h and image D2v.
The pixel value modifying means 3B comprises a horizontal pixel value modifying means 3Bh and a vertical pixel value modifying means 3Bv.
The horizontal pixel value modifying means 3Bh receives image D2h and signal D3Ah and outputs an image D3Bh with pixel values that are set to zero when signal D3Ah indicates that the signs do not match, image D2h being output without change as image D3Bh when signal D3Ah indicates that the signs match.
The vertical pixel value modifying means 3Bv receives image D2v and signal D3Av and outputs an image D3Bv with pixel values that are set to zero when signal D3Av indicates that the signs do not match, image D2v being output without change as image D3Bv when signal D3Av indicates that the signs match.
Image D3Bh and image D3Bv are output from the pixel value modifying means 3B as image D3B.
Image D3B is then output from the intermediate image postprocessing means 3 as image D3. Image D3 includes an image D3h corresponding to image D3Bh and an image D3v corresponding to image D3Bv.
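The sign comparison by the sign comparing means 3A and the zeroing by the pixel value modifying means 3B together amount to a masking operation, sketched below for one image pair; the horizontal pair (D1h, D2h) and the vertical pair (D1v, D2v) are treated in the same way.

```python
import numpy as np

def postprocess(d1, d2):
    """Sketch of the intermediate image postprocessing means 3:
    where the pixel values of D1 and D2 agree in sign, D2 is passed
    through; where the signs differ, the value is set to zero."""
    d1 = np.asarray(d1, dtype=float)
    d2 = np.asarray(d2, dtype=float)
    signs_match = np.sign(d1) == np.sign(d2)   # sign comparing means 3A
    return np.where(signs_match, d2, 0.0)      # pixel value modifying means 3B
```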
Finally the operation of the adding means 4 will be described. As stated above, the adding means 4 adds the input image DIN, intermediate image D1, and intermediate image D3 together to generate the output image DOUT. The output image DOUT is output from the image processing apparatus as the final output image.
Intermediate image D1 includes image D1h and image D1v and intermediate image D3 includes image D3h and image D3v, so to add the input image DIN, intermediate image D1, and intermediate image D3 together means to add all of images D1h, D1v, D3h, and D3v to image DIN.
As stated in the first embodiment, the addition process performed by the adding means 4 is not limited to simple addition; weighted addition may be performed instead. Each of the images D1h, D1v, D3h, and D3v may be amplified by a different amplification factor before being added to the input image DIN.
An example in which the image processing apparatus in the second embodiment is utilized as part of an image display apparatus will now be described. The description will clarify the effects of the image processing apparatus in the second embodiment. Unless otherwise specified, Fn will denote the Nyquist frequency of the input image DIN in the description below.
The image display apparatus in which the image processing apparatus in the second embodiment is utilized displays an image corresponding to the original image DORG on a monitor U3 as illustrated in
The image enlarging means U1 operates as described with reference to
The image processing apparatus U2 of the second embodiment outputs an image DU2 obtained by performing the processing described above on image DU1. Image DU2 is displayed on the monitor U3.
The operation and effects of the first intermediate image generating means 1 and second intermediate image generating means 2 in the second embodiment are as described with reference to
As described in relation to the first embodiment, the non-linear processing means 2A in the second intermediate image generating means 2 has the function of generating a high-frequency component corresponding to the high-frequency region RH2, and the high-frequency component image generating means 2B has the function of extracting only the high-frequency component generated by the non-linear processing means 2A. Since image D2B is output as image D2, the second intermediate image generating means 2 can output an intermediate image D2 having a high-frequency component corresponding to sampling interval S1.
Local weighted averages of the pixel values of the non-linearly processed image D2A are calculated as the low-frequency component of the non-linearly processed image D2A. In the vicinity of region R2, although some of the pixel values are positive, most of them are zero, so the values of the low-frequency component are slightly greater than zero. Compared with the pixel values of the non-linearly processed image D2A, the values of the low-frequency component are slightly greater. Near coordinate P2, however, since the pixel values of the pixels at positions other than coordinate P2 in the non-linearly processed image D2A take smaller values than the pixel value of the pixel at coordinate P2, the values of the low-frequency component are less than the pixel values of the non-linearly processed image D2A. Consequently, the magnitude relationship between the pixel values of the non-linearly processed image D2A and the values of the low-frequency component reverses between the vicinity of region R2 and the vicinity of coordinate P2.
Local weighted averages of the pixel values of the non-linearly processed image D2A are calculated as the low-frequency component of the non-linearly processed image D2A. In the vicinity of region R1, although some of the pixel values are negative, most of them are zero, so the values of the low-frequency component are slightly less than zero (negative values with slightly greater absolute values). Compared with the pixel values of the non-linearly processed image D2A, the values of the low-frequency component are slightly less. Near coordinate P1, however, since the pixel values of the pixels at positions other than coordinate P1 in the non-linearly processed image D2A take greater values (negative values with slightly smaller absolute values) than the pixel value of the pixel at coordinate P1, the values of the low-frequency component are greater than the pixel values of the non-linearly processed image D2A. Consequently, the magnitude relationship between the pixel values of the non-linearly processed image D2A and the values of the low-frequency component reverses between the vicinity of region R1 and the vicinity of coordinate P1.
If the changes in the pixel values of intermediate image D1 at the same location are examined, there is only a single change from negative to positive. Accordingly, if intermediate image D1 and intermediate image D2 are compared, they are found to have opposite signs in the vicinities of region R1 and region R2.
The local minimum and local maximum values near the zero crossing point Z are maintained in the pixels in intermediate image D3 at the positions expressed by coordinates P1 and P2. This means that the high-frequency component generated by the second intermediate image generating means 2, corresponding to sampling interval S1, is preserved in intermediate image D3. The intermediate image postprocessing means 3 therefore has the effect of removing the disagreement in positive and negative sign with intermediate image D1 while preserving the high-frequency component corresponding to sampling interval S1 generated by the second intermediate image generating means 2.
To take region R1, for example, the values in intermediate image D1 are negative, so it has the effect of enhancing the edge by reducing the pixel values in the input image DIN, but the values in intermediate image D2 are positive, so the edge cannot be enhanced by adding intermediate image D2. Furthermore, if the values added by intermediate image D2 are greater than the values subtracted by intermediate image D1 (if the absolute values in intermediate image D2 are greater than the absolute values in intermediate image D1), then the pixel values in the vicinity of region R1 become slightly greater than the surrounding values, as shown in
To take region R2, for example, the values in intermediate image D1 are positive, so it has the effect of enhancing the edge by increasing the pixel values in the input image DIN, but the values in intermediate image D2 are negative, so the edge cannot be enhanced by adding intermediate image D2. Furthermore, if the values subtracted by intermediate image D2 are greater than the values added by intermediate image D1 (if the absolute values in intermediate image D2 are greater than the absolute values in intermediate image D1), then the pixel values in the vicinity of region R2 become slightly smaller than the surrounding values, as shown in
Disagreement of the positive/negative signs of intermediate image D1 and intermediate image D2 thus gives rise to unnatural brightness variations in the vicinity of an edge. An edge normally has a certain length in a certain direction such as the horizontal direction or vertical direction in the image, so the unnatural brightness variations also have a certain length over which they appear beside the edge; the result is that unnatural lines or patterns are seen near the edge.
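The sign-mismatch artifact described above can be illustrated numerically. The sketch below uses made-up 1-D pixel values (not taken from any embodiment) to show how a second correction of opposite sign and larger magnitude lifts a pixel above its flat neighbourhood, and how zeroing the mismatched values removes the overshoot.

```python
# Illustrative 1-D example with hypothetical values: a step edge, a first
# correction d1 that sharpens it, and a second correction d2 that
# disagrees with d1 in sign and is larger in absolute value.
din = [10.0, 10.0, 10.0, 50.0, 50.0]   # step edge in the input image
d1  = [ 0.0,  0.0, -4.0,  4.0,  0.0]   # darkens/brightens to sharpen
d2  = [ 0.0,  0.0,  6.0, -6.0,  0.0]   # opposite sign, larger magnitude

out = [p + a + b for p, a, b in zip(din, d1, d2)]
# The pixel just before the edge becomes brighter than its flat
# neighbours (10 - 4 + 6 = 12 > 10): an unnatural bright band.

# Zeroing the sign-mismatched values of d2 (the role of intermediate
# image D3) removes the overshoot while keeping the agreeing values.
same_sign = lambda a, b: (a > 0) == (b > 0) and (a < 0) == (b < 0)
d3 = [b if same_sign(a, b) else 0.0 for a, b in zip(d1, d2)]
out_fixed = [p + a + c for p, c, a in zip(din, d3, d1)]  # 10 - 4 + 0 = 6
```

With these values every element of d2 conflicts with d1, so d3 is all zeros and the edge is sharpened without the bright band.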
In other words, by adding intermediate image D1 and intermediate image D3 to the input image DIN in the adding means 4, it is possible to enhance the image without causing unnatural brightness variations.
As also stated above, intermediate image D1 is obtained by removing the fold-over component from the high-frequency component of the input image DIN, and corresponds to a high-frequency component in a vicinity of the Nyquist frequency of the original image DORG as shown in
In addition, in the image processing apparatus in the second embodiment, the first intermediate image generating means 1 and the second intermediate image generating means 2 perform image processing in the horizontal direction and the vertical direction in parallel. Accordingly, the effects described above can be obtained not just in the horizontal or vertical direction but in any direction.
Considered in the frequency domain, in the frequency band from the origin to Fn, the image processing apparatus in the second embodiment generates an image D2B corresponding to a high-frequency component near the Nyquist frequency ±Fn of the input image DIN on the basis of a component of the input image DIN near half the Nyquist frequency, ±Fn/2, corresponding to the Nyquist frequency of the original image DORG (or a component in a particular frequency band). Even if a frequency component near the Nyquist frequency ±Fn has been lost in the input image DIN, accordingly, a frequency component near the Nyquist frequency ±Fn can be supplied by image D2B.
The location used as the particular frequency band is not limited to the vicinity of ±Fn/2. The frequency band to be used can be changed by suitably changing the frequency responses of high-frequency component image generating means 1A and low-frequency component image generating means 1B.
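As a frequency-domain illustration of how a non-linearity can supply a lost high-frequency band, the sketch below squares a sinusoid at half the Nyquist frequency and verifies with a direct DFT that energy appears at the Nyquist frequency itself. Squaring is used only as a stand-in non-linearity; it is not the operation specified for the second intermediate image generating means 2.

```python
import math

N = 64
fn = N // 2                      # Nyquist bin for a length-N DFT
x = [math.cos(2 * math.pi * (fn // 2) * n / N) for n in range(N)]
y = [v * v for v in x]           # cos^2 = 1/2 + (1/2)cos(2*...)

def dft_mag(sig, k):
    """Magnitude of DFT bin k, computed by direct summation."""
    re = sum(v * math.cos(2 * math.pi * k * n / N) for n, v in enumerate(sig))
    im = sum(-v * math.sin(2 * math.pi * k * n / N) for n, v in enumerate(sig))
    return math.hypot(re, im)

before = dft_mag(x, fn)          # input has no energy at the Nyquist bin
after = dft_mag(y, fn)           # the squared signal clearly does
```

This is the mechanism in miniature: a memoryless non-linearity applied to a band around ±Fn/2 creates components around ±Fn, which a high-pass filter can then extract.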
In the description given above, image enlargement processing is given as an example in which frequency components near the Nyquist frequency Fn are lost, but enlargement is not the only cause of the loss of frequency components near the Nyquist frequency Fn in the input image DIN; noise elimination and various other causes can also be considered. Therefore, the use of the image processing apparatus in the second embodiment is not limited to processing following an image enlargement process.
The noise eliminating means U11 internally includes a low-frequency component passing means U11A; the noise eliminating means U11 eliminates noise included in the original image DORG by a method described later, and outputs a noise-eliminated image DU11 from which noise has been eliminated.
The image processing apparatus in the first embodiment, for example, may be used as the enhancement processing means U12. The enhancement processing means U12 then includes a first intermediate image generating means 1, a second intermediate image generating means 2, and an adding means 4, as illustrated, and carries out enhancement processing on the noise-eliminated image DU11 by a method described later to output an enhanced processed image DU12. The enhanced processed image DU12 is output to the exterior of the image processing apparatus in the third embodiment.
First, the operation of the noise eliminating means U11 will be described.
The noise eliminating means U11 internally includes a low-frequency component passing means U11A. The low-frequency component passing means U11A includes a low-pass filter. The noise eliminating means U11 uses the low-pass filter to eliminate noise from the original image DORG and outputs the noise-eliminated image DU11.
When there are irregularly oscillating components in an image, they generally appear as noise. These irregularly oscillating components usually have high frequencies, which are rejected by the low-pass filter. It is therefore possible to use the low-pass filter to eliminate noise.
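The following 1-D sketch illustrates this principle; the (0.25, 0.5, 0.25) kernel is an arbitrary illustrative choice, not the filter of the low-frequency component passing means U11A.

```python
# Minimal 1-D low-pass filtering: a short symmetric kernel attenuates
# the rapidly oscillating (high-frequency) part of the signal while
# passing the slowly varying part.
def low_pass(signal, taps=(0.25, 0.5, 0.25)):
    """Symmetric FIR low-pass filter with edge replication."""
    half = len(taps) // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(t * padded[i + j] for j, t in enumerate(taps))
            for i in range(len(signal))]

# A flat signal of level 100 with alternating-sign noise of amplitude 5:
noisy = [100 + (5 if n % 2 == 0 else -5) for n in range(8)]
clean = low_pass(noisy)
# The (+5, -5, +5, ...) oscillation sits at the Nyquist frequency, where
# a (0.25, 0.5, 0.25) kernel has zero response, so the interior samples
# return to the underlying level of 100.
```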
Next, the detailed operation of the enhancement processing means U12 will be described.
The enhancement processing means U12 has, for example, the same internal configuration as the image processing apparatus in the first embodiment, and operates in the same way. As the input image DIN in
The effects of the image processing apparatus according to the third embodiment are described below.
First, the effects of the noise eliminating means U11 will be described.
To simplify the notation in
First the frequency response of the low-frequency component passing means U11A will be described. Ideally, the frequency response of the low-frequency component passing means U11A is shown by the dotted line in
Next, the frequency spectrum of the original image DORG will be described. The image input as the original image DORG is generally a natural image and its spectral intensity is concentrated around the origin of the frequency space. The frequency spectrum of the original image DORG accordingly resembles spectrum SP11 in
Next the spectrum SP12 of the noise-eliminated image DU11 will be described. The noise-eliminated image DU11 is generated by performing a low-pass filtering process on the original image DORG by use of the low-frequency component passing means U11A, which has the frequency response shown by the solid line in
First, in the region in which the absolute frequency value is slightly less than Fc, since the frequency response of the low-frequency component passing means U11A has a value slightly less than 1, the spectrum SP12 of the noise-eliminated image DU11 has values slightly less than those of the spectrum SP11 of the original image DORG.
Next, in the region in which the absolute frequency value is greater than Fc, since the frequency response of the low-frequency component passing means U11A has a value slightly greater than zero (substantially zero), there is more spectral loss than in the region in which the absolute frequency value is near Fc. The intensity of the spectrum SP12 of the noise-eliminated image DU11 is accordingly even smaller than in the region in which the absolute frequency value is slightly less than Fc.
The frequency spectrum SP12 of the noise-eliminated image DU11 has been described above; of the frequency components of the original image DORG, the spectral intensity on the high-frequency side is weakened by the noise eliminating means U11. The noise eliminating means U11 thus regards components with absolute frequency values greater than Fc as noise and weakens their intensity. It is known, however, that the perceived resolution of an image is reduced when its spectral intensity on the high-frequency side is weakened. Accordingly, while the noise eliminating means U11 has the effect of eliminating noise, it also reduces the perceived resolution of the image.
The above are the effects of the noise eliminating means U11.
Next, the effects of the enhancement processing means U12 will be described. The effects of the enhancement processing means U12 are similar to the effects of the image processing apparatus in the first embodiment, but the input image DIN differs from the input image in the first embodiment, leading to differences that will be understood from the description below.
In
First the frequency spectrum of the input image DIN will be described. Because a low-pass filtering process with the frequency response indicated by the solid line in
Next the frequency response of high-frequency component image generating means 1A will be described. Since high-frequency component image generating means 1A is implemented by a high-pass filter, its frequency response decreases as the frequency decreases, as shown in
Next the frequency response of the low-frequency component image generating means 1B will be described. Since the low-frequency component image generating means 1B is implemented by a low-pass filter, its frequency response decreases as the frequency increases, as shown in
Next the frequency response of the first intermediate image generating means 1 will be described. Among the frequency components of the input image DIN, the frequency component in the low-frequency region RL1 shown in
Next the frequency spectrum of intermediate image D1 will be described. The first intermediate image D1 shown in
A high-frequency component corresponding to the high-frequency region RH2 is generated in non-linearly processed image D2A, as described later.
The above effects will now be described in further detail.
A comparison of
As shown in
Thus in the vicinity of an edge, the position of the zero-crossing point Z in the signals representing the high-frequency component does not change, but when a low-pass filtering process is applied and the perceived resolution is reduced, the gradient of the high-frequency component near the edge becomes more gradual, and the positions of the points giving the local maximum and minimum values retreat from the zero-crossing point Z.
The first intermediate image D1 and second intermediate image D2 are added to the input image DIN by the adding means 4.
As stated above, intermediate image D1 is obtained by extracting the component with absolute frequency values in a region in a vicinity of Fc from the original image DORG after its high-frequency component has been weakened by the low-frequency component passing means U11A, and corresponds to the component with absolute frequency values near Fc, as shown in
In addition, in the enhancement processing means U12, the first intermediate image generating means 1 and the second intermediate image generating means 2 perform image processing in the horizontal direction and the vertical direction in parallel, so the effects described above can be obtained not just in the horizontal or vertical direction but in any direction.
To summarize the above, in the image processing apparatus in the third embodiment, noise can be eliminated in the noise eliminating means U11, so even if enhancement processing is carried out in the enhancement processing means U12, the noise is not enhanced. Since the enhancement processing means U12 includes the non-linear processing means 2A, it is possible to add a high-frequency component that was weakened by the noise eliminating means U11, and the perceived image resolution can be increased by the enhancement processing means U12 even if a lowering of the perceived image resolution occurs in the noise eliminating means U11. In other words, image enhancement processing is possible without enhancing noise, and the lowering of perceived resolution that accompanies noise elimination does not occur.
The noise eliminating means U11 only has to eliminate noise by the use of a low-pass filter. A configuration may be considered in which a circuit is provided for estimating the amount of noise included in the original image DORG, and the low-pass filtering process in the low-frequency component passing means U11A can be controlled locally according to the output result of this circuit. An edge-preserving filter such as an E filter also eliminates noise by the use of a low-pass filter, and may be used as the noise eliminating means U11.
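As a concrete illustration of the edge-preserving idea, the sketch below implements a minimal 1-D ε-filter: each pixel is averaged only with neighbours whose difference from it is within ε, so low-amplitude noise is smoothed while large steps survive. The radius and ε value are illustrative choices, not parameters of the embodiment.

```python
# Minimal 1-D epsilon-filter: an edge-preserving low-pass filter.
def epsilon_filter(signal, eps, radius=1):
    out = []
    for i, center in enumerate(signal):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        # Average only the neighbours close in value to the center pixel.
        near = [v for v in signal[lo:hi] if abs(v - center) <= eps]
        out.append(sum(near) / len(near))
    return out

# Small noise (amplitude 2) around a large step edge (100 -> 200):
sig = [100, 102, 98, 100, 200, 202, 198, 200]
smoothed = epsilon_filter(sig, eps=10)
# The step between indices 3 and 4 survives; the +/-2 ripple shrinks,
# because pixels across the edge differ by far more than eps and are
# excluded from each other's averages.
```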
A configuration in which the image processing apparatus in the first embodiment is used as the enhancement processing means U12 has been described, but the image processing apparatus in the second embodiment may also be used as the enhancement processing means U12, with similar effects.
The image processing apparatus U10 in the third embodiment may be used as part of an image display apparatus as shown in
The intermediate image generating step ST1 includes, as shown in
The high-frequency component image generating step ST1A includes a horizontal high-frequency component image generating step ST1Ah and a vertical high-frequency component image generating step ST1Av, and the low-frequency component image generating step ST1B includes a horizontal low-frequency component image generating step ST1Bh and a vertical low-frequency component image generating step ST1Bv.
The intermediate image processing step ST2 includes, as shown in
The non-linear processing step ST2A includes a horizontal non-linear processing step ST2Ah and a vertical non-linear processing step ST2Av, and the high-frequency component image generating step ST2B includes a horizontal high-frequency component image generating step ST2Bh and a vertical high-frequency component image generating step ST2Bv.
The horizontal non-linear processing step ST2Ah includes, as shown in
First the operation of the intermediate image generating step ST1 will be described with reference to the flowchart in
In high-frequency component image generating step ST1A, the following processing is performed on an input image DIN input in an image input step, which is not shown. First, in the horizontal high-frequency component image generating step ST1Ah, a horizontal high-pass filtering process is performed to generate an image D1Ah by extracting a horizontal high-frequency component from the input image DIN. In the vertical high-frequency component image generating step ST1Av, a vertical high-pass filtering process is performed to generate an image D1Av by extracting a vertical high-frequency component from the input image DIN. High-frequency component image generating step ST1A thus generates an image D1A including image D1Ah and image D1Av from the input image DIN; this operation is equivalent to the operation performed by high-frequency component image generating means 1A.
In the low-frequency component image generating step ST1B, the following processing is performed on image D1A. First, in the horizontal low-frequency component image generating step ST1Bh, a horizontal low-pass filtering process is performed to generate an image D1Bh by extracting a horizontal low-frequency component from image D1Ah. In the vertical low-frequency component image generating step ST1Bv, a vertical low-pass filtering process is performed to generate an image D1Bv by extracting a vertical low-frequency component from image D1Av. The low-frequency component image generating step ST1B thus generates an image D1B including image D1Bh and image D1Bv from image D1A; this operation is equivalent to the operation performed by the low-frequency component image generating means 1B.
The first intermediate image generating step ST1 operates as described above, using image D1Bh as an image D1h, using image D1Bv as an image D1v, and outputting an intermediate image D1 including image D1h and image D1v. These operations are equivalent to the operations performed by the first intermediate image generating means 1.
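The horizontal path of step ST1 (a high-pass filtering process followed by a low-pass filtering process, which together pass a band of mid-to-high frequencies) can be sketched in 1-D as follows. The particular kernels are illustrative choices, not the coefficients of the embodiment; the vertical path would apply the same operations along columns.

```python
# 1-D sketch of the horizontal path of step ST1.
def convolve(signal, taps):
    half = len(taps) // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(t * padded[i + j] for j, t in enumerate(taps))
            for i in range(len(signal))]

def step_st1_horizontal(row):
    d1a = convolve(row, (-0.25, 0.5, -0.25))  # high-pass (step ST1Ah)
    d1b = convolve(d1a, (0.25, 0.5, 0.25))    # low-pass  (step ST1Bh)
    return d1b                                 # used as image D1h

row = [10.0] * 4 + [50.0] * 4                  # a step edge
d1h = step_st1_horizontal(row)
# d1h is zero in the flat regions and non-zero only around the edge:
# negative just before the step and positive just after it.
```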
Next, the operation of the second intermediate image processing step ST2 will be described with reference to the flowcharts in
First, in the non-linear processing step ST2A, the following processing is performed on intermediate image D1.
First, in the horizontal non-linear processing step ST2Ah, processing is performed according to the flowchart in
Next, in the vertical non-linear processing step ST2Av, processing is performed according to the flowchart in
The non-linear processing step ST2A operates as described above to generate an image D2A including images D2Ah and D2Av. The above operations are equivalent to the operations performed by the non-linear processing means 2A.
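The flowchart details of steps ST2Ah and ST2Av are not reproduced here, so the sign-preserving squaring below is only an illustrative stand-in for the non-linear operation, not the processing of the embodiment: a memoryless odd non-linearity keeps the sign of each pixel of intermediate image D1 while generating new high-frequency content for the following step ST2B to extract.

```python
# Illustrative stand-in non-linearity (NOT the embodiment's operation):
# signed squaring, an odd function, so sign(out) == sign(in).
def nonlinear(pixels):
    return [v * abs(v) for v in pixels]

d1h = [0.0, -2.5, -2.5, 2.5, 2.5, 0.0]
d2ah = nonlinear(d1h)            # e.g. -2.5 -> -6.25, sign preserved
```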
Next, in high-frequency component image generating step ST2B, the following processing is performed on image D2A.
First, an image D2Bh is generated by performing a horizontal high-pass filtering process on image D2Ah in the horizontal high-frequency component image generating step ST2Bh. The horizontal high-frequency component image generating step ST2Bh performs processing similar to that performed in horizontal high-frequency component image generating means 2Bh.
Next, an image D2Bv is generated by performing a vertical high-pass filtering process on image D2Av in the vertical high-frequency component image generating step ST2Bv. The vertical high-frequency component image generating step ST2Bv performs processing similar to that performed in vertical high-frequency component image generating means 2Bv.
High-frequency component image generating step ST2B operates as described above to generate an image D2B including image D2Bh and image D2Bv. These operations are equivalent to the operations performed by high-frequency component image generating means 2B.
The second intermediate image processing step ST2 operates as described above to output image D2B as intermediate image D2. Intermediate image D2 is output with image D2Bh as image D2h and image D2Bv as image D2v. The operations performed are equivalent to the operation of the second intermediate image generating means 2.
In the adding step ST4, the input image DIN, intermediate image D1, and intermediate image D2 are added together to generate the output image DOUT. Since intermediate image D1 includes image D1h and image D1v, and intermediate image D2 includes image D2h and image D2v, images D1h, D1v, D2h, and D2v are all added to the input image DIN in the adding step ST4. Images D1h, D1v, D2h, and D2v may simply be added to the input image DIN, or weighted addition may be performed. The output image DOUT is output as a final output image by the image processing method in the fourth embodiment. The operation performed in the adding step ST4 is equivalent to the operation of the adding means 4.
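The adding step ST4 can be sketched in 1-D as follows; the weight values are illustrative parameters for the weighted-addition variant, not values from the embodiment.

```python
# Sketch of adding step ST4: add each correction image to the input,
# optionally scaled by a per-image weight.
def adding_step(din, corrections, weights=None):
    """din: list of pixels; corrections: same-length correction images."""
    if weights is None:
        weights = [1.0] * len(corrections)   # simple (unweighted) addition
    out = list(din)
    for w, img in zip(weights, corrections):
        out = [p + w * c for p, c in zip(out, img)]
    return out

din = [10.0, 20.0, 30.0]
d1h, d1v = [1.0, 0.0, -1.0], [0.0, 2.0, 0.0]
d2h, d2v = [0.5, 0.0, -0.5], [0.0, 1.0, 0.0]
dout = adding_step(din, [d1h, d1v, d2h, d2v], weights=[1, 1, 0.5, 0.5])
# -> [11.25, 22.5, 28.75]
```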
The image processing method in the fourth embodiment operates as described above.
As is clear from the description, the image processing method in the fourth embodiment and the image processing apparatus in the first embodiment operate equivalently. The image processing method in the fourth embodiment therefore has the same effects as the image processing apparatus in the first embodiment. An image processed by the above image processing method can be displayed by the image display apparatus shown in
The first intermediate image generating step ST1 and the second intermediate image generating step ST2 are as described in relation to the fourth embodiment.
The intermediate image postprocessing step ST3 includes a sign comparison step ST3A and a pixel value modification step ST3B, as shown in
The sign comparison step ST3A includes a horizontal sign comparison step ST3Ah and a vertical sign comparison step ST3Av, and the pixel value modification step ST3B includes a horizontal pixel value modification step ST3Bh and a vertical pixel value modification step ST3Bv.
Next the operation of the intermediate image postprocessing step ST3 will be described according to the flow in
First, in the sign comparison step ST3A, the signs of the pixel values of intermediate image D1 and intermediate image D2 are compared. Since intermediate image D1 includes image D1h and image D1v, and intermediate image D2 includes image D2h and image D2v, the comparison of signs is carried out on both the pixels of image D1h and image D2h and the pixels of image D1v and image D2v. The signs of the pixels of image D1h and image D2h are compared in the horizontal sign comparison step ST3Ah, and the signs of the pixels of image D1v and image D2v are compared in the vertical sign comparison step ST3Av. Signals D3Ah and D3Av are output as a signal D3A indicating the results of the comparisons.
The horizontal sign comparison step ST3Ah thus carries out processing similar to the processing performed by the horizontal sign comparing means 3Ah, obtaining a signal D3Ah from intermediate image D1h and intermediate image D2h. The details of the operation of the horizontal sign comparison step ST3Ah are the same as for the horizontal sign comparing means 3Ah. The vertical sign comparison step ST3Av carries out processing similar to the processing performed by the vertical sign comparing means 3Av, obtaining a signal D3Av from intermediate image D1v and intermediate image D2v. The details of the operation of the vertical sign comparison step ST3Av are the same as for the vertical sign comparing means 3Av.
The pixel value modification step ST3B generates image D3B by modifying the pixel values in intermediate image D2 on the basis of signal D3A. Of the pixel values in intermediate image D2, the pixel value modification step ST3B sets to zero the pixel values determined in the sign comparison step ST3A to differ in sign from the pixel values in intermediate image D1. When the signs match, intermediate image D2 is output without change. This process is carried out on both image D2h and image D2v. In the horizontal pixel value modification step ST3Bh, the pixel values of the pixels in image D2h that differ in sign from image D1h are set to zero to generate an image D3Bh; in the vertical pixel value modification step ST3Bv, the pixel values of the pixels in image D2v that differ in sign from image D1v are set to zero to generate an image D3Bv. When the signs match, intermediate image D2h is output without change as intermediate image D3h in the horizontal pixel value modification step ST3Bh, and similarly intermediate image D2v is output without change as intermediate image D3v in the vertical pixel value modification step ST3Bv. Image D3Bh and image D3Bv are output from the pixel value modification step ST3B as image D3B.
Image D3B is then output from the intermediate image postprocessing step ST3 as image D3. Image D3 includes an image D3h corresponding to image D3Bh and an image D3v corresponding to image D3Bv.
The intermediate image postprocessing step ST3 operates as described above. These operations are equivalent to the operations performed by the intermediate image postprocessing means 3.
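The sign comparison and pixel value modification described above can be sketched in 1-D as follows; the treatment of zero-valued pixels as "matching" is an assumption of this sketch, since the embodiment does not spell out the zero case here.

```python
# 1-D sketch of steps ST3A and ST3B: pixels of D2 whose sign differs
# from the corresponding pixel of D1 are set to zero; matching pixels
# pass through unchanged.
def postprocess(d1, d2):
    def same_sign(a, b):
        return (a >= 0) == (b >= 0)   # assumption: zero counts as matching
    flags = [same_sign(a, b) for a, b in zip(d1, d2)]        # signal D3A
    return [b if ok else 0.0 for b, ok in zip(d2, flags)]    # image D3B

d1h = [-4.0, 4.0, 2.0, 0.0]
d2h = [ 6.0, 3.0, -1.0, 5.0]
d3h = postprocess(d1h, d2h)     # -> [0.0, 3.0, 0.0, 5.0]
```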
In the adding step ST4, the input image DIN, intermediate image D1, and intermediate image D3 are added together to generate the output image DOUT. Since intermediate image D1 includes image D1h and image D1v, and intermediate image D3 includes image D3h and image D3v, images D1h, D1v, D3h, and D3v are all added to the input image DIN in the adding step ST4. Images D1h, D1v, D3h, and D3v may simply be added to the input image DIN, or weighted addition may be performed. The output image DOUT is output as the final output image of the image processing method in the fifth embodiment. The operations performed in the adding step ST4 are equivalent to the operation of the adding means 4.
The image processing method in the fifth embodiment operates as described above.
As is clear from the description, the image processing method in the fifth embodiment and the image processing apparatus in the second embodiment operate equivalently. Therefore, the image processing method in the fifth embodiment has the same effects as the image processing apparatus in the second embodiment. An image processed by the above image processing method can be displayed by the image display apparatus shown in
The noise elimination step ST11 includes a low-frequency component passing step ST11A; the low-frequency component passing step ST11A carries out noise elimination by performing a low-pass filtering process on an original image DORG that is input in an image input step (not shown), and generates a noise-eliminated image DU11.
The noise elimination step ST11 operates as described above. These operations are equivalent to the operations performed by the noise eliminating means U11 in the third embodiment.
The first intermediate image generating step ST1 and second intermediate image generating step ST2 are as described in relation to the fourth embodiment. The noise-eliminated image DU11, however, is processed as the input image DIN.
In the adding step ST4, the noise-eliminated image DU11, the first intermediate image D1, and the second intermediate image D2 are added together to generate the output image DOUT. The output image DOUT is output as a final output image by the image processing method in the sixth embodiment. The operation performed in the adding step ST4 is equivalent to the operation of the adding means 4.
The enhancement processing step ST12 operates as described above; its operation is equivalent to that of the enhancement processing means U12.
The image processing method in the sixth embodiment operates as described above; as is clear from the description, its operations are equivalent to those of the image processing apparatus in the third embodiment. The image processing method in the sixth embodiment accordingly has the same effects as the image processing apparatus in the third embodiment.
In the above description the image processing method in the fourth embodiment is used as the enhancement processing step ST12, but the image processing method in the fifth embodiment may be used as the enhancement processing step ST12, with similar effects.
The image processing method in the sixth embodiment may be used as an image display method or as part of an image display method. For example, the image DU12 processed by the image processing method in the sixth embodiment may be generated in the image processing apparatus 10 shown in
Part or all of the processing of the image processing methods in the fourth to sixth embodiments may be carried out by software, that is, by a programmed computer. An exemplary image processing apparatus for that purpose is shown in
The image processing apparatus shown in
The CPU 11 operates in accordance with a program stored in the program memory 12, and implements the image processing methods in the fourth to sixth embodiments by carrying out the processing in the steps described with reference to
When the image processing apparatus in
When the image processing apparatus in
Number | Date | Country | Kind |
---|---|---|---|
2008-196277 | Jul 2008 | JP | national |
2008-271123 | Oct 2008 | JP | national |
2009-039273 | Feb 2009 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2009/003330 | 7/15/2009 | WO | 00 | 1/28/2011 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2010/013401 | 2/4/2010 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5717789 | Anderson et al. | Feb 1998 | A |
5912702 | Sudo | Jun 1999 | A |
6005983 | Anderson et al. | Dec 1999 | A |
6611627 | LaRossa et al. | Aug 2003 | B1 |
6724942 | Arai | Apr 2004 | B1 |
6735337 | Lee et al. | May 2004 | B2 |
6807316 | Enomoto | Oct 2004 | B2 |
7006686 | Hunter et al. | Feb 2006 | B2 |
7215821 | Schuhrke et al. | May 2007 | B2 |
7332909 | Schaffter et al. | Feb 2008 | B2 |
7382406 | Higuchi | Jun 2008 | B2 |
7426314 | Kimbell et al. | Sep 2008 | B2 |
7456873 | Kohashi et al. | Nov 2008 | B2 |
7538822 | Lee et al. | May 2009 | B2 |
7668389 | Kitamura et al. | Feb 2010 | B2 |
20020067414 | Tanji et al. | Jun 2002 | A1 |
20020126911 | Gindele et al. | Sep 2002 | A1 |
20060215796 | Lin | Sep 2006 | A1 |
Number | Date | Country |
---|---|---|
5-83602 | Apr 1993 | JP |
5-344386 | Dec 1993 | JP |
7-177386 | Jul 1995 | JP |
7-274035 | Oct 1995 | JP |
8-98154 | Apr 1996 | JP |
9-44651 | Feb 1997 | JP |
9-46554 | Feb 1997 | JP |
9-224186 | Aug 1997 | JP |
2002-269558 | Sep 2002 | JP |
2003-304506 | Oct 2003 | JP |
2006-252508 | Sep 2006 | JP |
2007-104446 | Apr 2007 | JP |
Entry |
---|
Shimura et al., “A Digital Image Enlarging Method without Edge Effect by Using the ε-Filter,” Technical Report of IEICE, vol. J86-A, No. 5, May 2003, pp. 540-551. |
Greenspan et al., “Image enhancement by non-linear extrapolation in frequency space”, Proceedings of the IS & T/SPIE Symposium on Electronic Imaging Science and Technology—Image and Video Processing II, vol. 2182, pp. 2-13, Feb. 1994. |
Greenspan et al.,“Image Enhancement by Nonlinear Extrapolation in Frequency Space”, IEEE Transactions on Image Processing, vol. 9, No. 6, pp. 1035-1048, Jun. 2000. |
Number | Date | Country | |
---|---|---|---|
20110211766 A1 | Sep 2011 | US |