This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2008-056338, filed on Mar. 16, 2008, the entire contents of which are incorporated herein by reference.
The present invention relates to an image processing apparatus configured to improve the image quality of image data and a method thereof.
The image quality of a blurred image may be sharpened by processing a portion where the luminance changes, such as the boundary of a photogenic subject or a character, that is, an edge area of the image. For example, in JP-A 2007-310837 and in "Reconstruction-based Super-resolution using self-congruency of images" (Technical Report of the Institute of Electronics, Information and Communication Engineers, December 2007, Vol. 107, No. 379, CS 2007-52, pp. 135-140) by Ida, Matsumoto, and Isogawa, whether or not each pixel is in an edge area is determined on a pixel-by-pixel basis by applying the Sobel operator (see, for example, "Image Processing Engineering" by Ryoichi Suematsu and Hironao Yamada, Corona K.K., first edition, October 2000, p. 105) to the entered image as shown in
There are two types of Sobel operators. A first type detects an edge in the horizontal direction by inspecting the luminance difference between the pixels above and below the pixel to be determined, as shown in
A second type detects an edge in the vertical direction by inspecting the luminance difference between the pixels to the left and to the right of the pixel to be determined.
In the document by Ida et al., the pixel to be determined is determined to be in an edge area when the sum of the absolute values of the outputs of the two Sobel operators in the horizontal and vertical directions is larger than a threshold value. For example, an edge detection result obtained by the Sobel operators for an edge area is as shown in
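For concreteness, the prior-art Sobel test can be sketched as follows. This is a minimal illustration, assuming an 8-bit grayscale image held in a NumPy array; the function name and the threshold value are ours, not taken from the cited documents.

```python
import numpy as np

# Standard 3x3 Sobel kernels: SOBEL_H responds to horizontal edges
# (luminance change between the rows above and below the pixel),
# SOBEL_V to vertical edges.
SOBEL_H = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.int64)
SOBEL_V = SOBEL_H.T

def is_sobel_edge(image, y, x, threshold=128):
    """Prior-art test: pixel (y, x) is treated as an edge pixel when the
    sum of the absolute responses of the two Sobel operators exceeds a
    threshold (the threshold value here is illustrative)."""
    patch = image[y - 1:y + 2, x - 1:x + 2].astype(np.int64)
    gh = int(np.sum(SOBEL_H * patch))
    gv = int(np.sum(SOBEL_V * patch))
    return abs(gh) + abs(gv) > threshold
```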
As described above, by using the Sobel operators, the edge areas of the image can be detected, and by applying image processing such as sharpening to the edge areas, the image quality is effectively improved.
However, brightness changes with only small differences in luminance are also detected, as shown in
Parts having many short edges extending in different directions are referred to as texture areas, and are not boundaries of a photogenic subject or characters. Therefore, these parts need not be sharpened, and in terms of throughput saving they are parts which are not desired to be detected as edge areas. However, there is a problem in that these parts are detected as edge areas.
In view of such a problem, it is an object of the invention to provide an image processing apparatus which is able to detect edge areas while excluding texture areas, and a method thereof.
According to embodiments of the present invention, there is provided an image processing apparatus including: an input unit configured to input an image; a pixel value setting unit configured to sequentially set respective pixel values in a template block including a pixel in the image; an arranging unit configured to arrange a plurality of reference blocks of the same shape and size as the template block in the image so as to surround the template block; an error calculating unit configured to obtain block matching errors between the respective pixel values of the plurality of reference blocks which surround the template block and the respective pixel values in the template block; and a determining unit configured to determine the pixel to be in an edge area when a minimum value of the block matching errors is a deviated value with respect to all the block matching errors, and not to be in the edge area when the minimum value is not a deviated value.
According to the embodiments of the invention, only the edge areas which are not texture areas are detected, so that useless processing is omitted.
Referring now to the drawings, an image processing apparatus 100 according to a first embodiment of the invention will be described.
(1) Configuration of Image Processing Apparatus 100
The image processing apparatus 100 includes memories 102 and 106, an image processing unit 104, a differencing unit 108, and an edge determining unit 110.
Respective functions of the image processing unit 104, the differencing unit 108, and the edge determining unit 110 may be realized by programs stored in a computer.
(2) Memory 102 and Memory 106
Image data 101 to be processed is entered from the outside via an image input unit and stored in the memory 102.
The image processing apparatus 100 processes the image stored in the memory 102 by setting the respective pixels as the pixel to be determined in sequence, for example, rightward from the upper left of the screen and row by row from top to bottom, and determines whether or not each pixel is in an edge area.
Therefore, a pixel value setting function of the image processing apparatus 100 reads N×N pixels including the pixel to be determined at the center as a template block 103 from the memory 102 and stores them in the memory 106.
An arranging function of the image processing apparatus 100 sets thirteen reference blocks in the image so as to surround the periphery of the template block including the pixel to be determined. A reference block 1 is arranged on the upper left of the template block, and a reference block 2 is a block moved rightward from the reference block 1 by one pixel. Reference blocks 3, 4, 5, and 6 (not shown) follow, and the block on the upper right of the template block corresponds to a reference block 7. From there, the image processing apparatus 100 sets reference blocks 8 to 13 in the image by moving downward by one pixel at a time. The reference block 1 in this example is displaced from the template block by three pixels leftward and three pixels upward; however, this displacement may be larger than three pixels (see
Since an edge generally extends in both directions, the reference blocks are set to surround the template block along a half round as shown in
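Under the arrangement just described, the relative positions of the thirteen reference blocks can be sketched as follows; a minimal illustration in which the function name and the (dy, dx) convention are ours:

```python
def reference_block_offsets(distance=3):
    """Offsets (dy, dx) of the 13 reference blocks relative to the
    template block: blocks 1-7 run along the top from upper left to
    upper right, and blocks 8-13 continue down the right side,
    covering a half round."""
    top = [(-distance, dx) for dx in range(-distance, distance + 1)]       # blocks 1-7
    right = [(dy, distance) for dy in range(-distance + 1, distance + 1)]  # blocks 8-13
    return top + right
```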
(3) Differencing Unit 108
The differencing unit 108 performs a function of an error calculating unit. The differencing unit 108 calculates a block matching error (hereinafter, referred to simply as “error”) 109 between the reference block 1 and the template block.
A sum of absolute values or a sum of squares of the differences between the pixel values at the same positions is used as the error 109. The differencing unit 108 then sends the error 109 to the determining unit 110.
In this manner, the differencing unit 108 switches among the reference blocks 1 to 13 in sequence and sends the respective errors 109 to the determining unit 110.
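A minimal sketch of this error calculation, assuming a grayscale NumPy image and the hypothetical `reference_block_offsets` helper from the sketch above; it uses the sum of absolute differences, and squaring the differences instead would give the sum-of-squares variant:

```python
import numpy as np

def block_matching_errors(image, y, x, n=3, distance=3):
    """Errors 109: sum of absolute differences (SAD) between the N x N
    template block 103 centred on the pixel to be determined (y, x)
    and each of the thirteen reference blocks. Assumes (y, x) lies far
    enough from the image border for every block to fit."""
    r = n // 2
    img = image.astype(np.int64)
    template = img[y - r:y + r + 1, x - r:x + r + 1]
    errors = []
    for dy, dx in reference_block_offsets(distance):
        ref = img[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
        errors.append(int(np.abs(ref - template).sum()))
    return errors
```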
(4) Distribution of Errors
When the template block and the reference block 1 are compared, five pixels out of the nine pixels differ significantly in luminance. Therefore, the error in this case takes a large value. The errors are large in most of the reference blocks (not shown), as in the case of the reference block 1. By contrast, the error in the reference block 5 is specifically small, because only the one pixel at the center of the nine pixels differs. For such an edge area, a graph of the errors calculated for the respective reference blocks is shown in
In a texture area, on the other hand, the luminance patterns of the template block and the respective reference blocks are all different, and there is no reference block having a specifically small error, as shown in a graph in
(5) Determining Unit 110
Utilizing such differences in the errors, the determining unit 110 determines whether or not the pixel to be determined is in an edge area.
More specifically, there are the following methods of determination; illustrative code for them follows the list.
In a first method, when the minimum value of the errors is a deviated value with respect to all the errors, in this case the thirteen errors, the pixel to be determined is determined to be in the edge area.
In a second method, assuming that the minimum value of the errors is M and the average of the errors is A, the pixel to be determined is determined to be in the edge area when the value of M/A is smaller than a threshold value.
In a third method, when the difference between the smallest error value and the second smallest error value is larger than a threshold value, the pixel to be determined is determined to be in the edge area.
In a fourth method, when the minimum value of the errors is smaller than the value of "average of the errors − coefficient × standard deviation", the pixel to be determined is determined to be in the edge area.
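The second through fourth methods might be sketched as follows; a minimal illustration in which the threshold values and the coefficient are illustrative assumptions, not values taken from the embodiment:

```python
import numpy as np

def is_edge_by_ratio(errors, threshold=0.25):
    """Second method: minimum error M over average error A versus a threshold."""
    errors = np.asarray(errors, dtype=np.float64)
    mean = errors.mean()
    return mean > 0 and errors.min() / mean < threshold

def is_edge_by_gap(errors, threshold=50.0):
    """Third method: gap between the smallest and second smallest errors."""
    smallest, second = np.sort(np.asarray(errors, dtype=np.float64))[:2]
    return second - smallest > threshold

def is_edge_by_deviation(errors, coefficient=2.0):
    """Fourth method: minimum below 'average - coefficient * standard deviation'."""
    errors = np.asarray(errors, dtype=np.float64)
    return errors.min() < errors.mean() - coefficient * errors.std()
```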
(6) Image Processing Unit 104
The image processing unit 104 performs image processing such as sharpening (see Suematsu et al.) on the periphery of the area determined to be the edge area according to the determination result 111.
The image processing unit 104 repeats the processing while setting the respective pixels in the screen as the pixel to be determined, and image data 105 having improved image quality is sent outward.
As for the order of the edge determination and the image processing, the image processing may be performed on the periphery of each pixel every time the pixel is determined, or may be performed altogether after all the pixels in the screen have been determined.
(7) Operation of Image Processing Apparatus 100
In Step 201, the image processing apparatus 100 sets a template block so as to include the pixel to be determined.
In Step 202, the differencing unit 108 calculates the errors between the respective reference blocks and the template block.
In Step 203, the determining unit 110 determines that the pixel to be determined is in the edge area when a certain reference block has an error specifically smaller than the other errors, and that it is not in the edge area when no such reference block exists.
In Step 204, the image processing unit 104 performs a predetermined image processing on the edge area.
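Putting Steps 201 to 203 together, the per-pixel loop might look as follows; a sketch reusing the hypothetical helpers from the earlier sketches (the fourth determination method is used here, but any of the four would do):

```python
import numpy as np

def detect_edge_map(image, n=3, distance=3):
    """Steps 201-203 for every pixel far enough from the border: True
    where the pixel to be determined is in an edge area. Texture areas
    yield no specifically small error and are therefore excluded."""
    h, w = image.shape
    margin = n // 2 + distance
    edge_map = np.zeros((h, w), dtype=bool)
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            errors = block_matching_errors(image, y, x, n, distance)
            edge_map[y, x] = is_edge_by_deviation(errors)
    return edge_map
```

Step 204 would then apply the sharpening only where the returned map is True.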
Referring now to the drawings, a second embodiment of the invention will be described.
The invention is also effective for In-frame Reconstruction-based Super-Resolution (see the document by Ida et al.), which sharpens images while increasing the resolution, that is, increasing the number of pixels or enlarging the scale, instead of performing a simple sharpening.
In the In-frame Reconstruction-based Super-Resolution, the respective pixels in the entered image are set as a target pixel in sequence, a plurality of corresponding points are searched for each target pixel along the edge area, and the pixel values of the target pixels are copied as sampled values at the corresponding points. After the density of the sampled points has been increased in this manner, the image is sharpened by a reconstruction process such as the POCS (Projection Onto Convex Sets) method or the MAP (Maximum A Posteriori) method.
The image processing unit 104 in the second embodiment is configured as follows.
The image processing apparatus 100 enters the image data 103 to a provisionally enlarging unit 1901 while performing the edge determination.
The provisionally enlarging unit 1901 increases the number of pixels in the image data 103 by, for example, Cubic Convolution Interpolation or Bilinear Interpolation, and sends the result to a memory 1909, where it is stored as provisionally enlarged image data (provisional high-resolution image data) 1902.
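As a rough stand-in for this step (a sketch only, since SciPy's cubic spline interpolation is not identical to Cubic Convolution Interpolation), the provisional enlargement might be written as:

```python
import numpy as np
from scipy.ndimage import zoom

def provisionally_enlarge(image, scale=2):
    """Provisional high-resolution image data 1902: the pixel count is
    increased by interpolation. Cubic spline interpolation (order=3)
    stands in here for the Cubic Convolution or Bilinear interpolation
    named in the text."""
    return zoom(image.astype(np.float64), scale, order=3)
```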
The image processing apparatus 100 enters the image data 103 to a corresponding point detecting unit 1903.
The corresponding point detecting unit 1903 sets the pixels in the image data 103 as target pixels one by one, searches for corresponding points in the same image with decimal (sub-pixel) accuracy, and sends positional information 1904 of the corresponding points to a memory 1905, where it is stored.
After the positional information of a number of corresponding points has been stored, the positional information 1906 of the corresponding points is sent from the memory 1905 to an image converting unit 1907.
The image converting unit 1907 calculates the errors of the sampled values of high-resolution image data 1908 on the basis of the pixel values of the target pixels in the image data 103 and the positions of the corresponding points in the positional information 1906. The image converting unit 1907 then overwrites the memory 1909 with high-resolution image data 1911 whose values are corrected so as to reduce these errors.
After having updated the high-resolution image several times in this manner, the image converting unit 1907 outputs the result to the outside as sharpened image data 1910.
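The correct-and-overwrite loop can be illustrated by the following much-simplified sketch. It assumes a crude observation model (each input pixel is the average of the scale×scale high-resolution pixels it covers) and ignores the corresponding-point positions 1906, so it shows only the spirit of the POCS/MAP-style reconstruction, not the actual unit 1907; it reuses the hypothetical `provisionally_enlarge` helper above.

```python
import numpy as np

def reconstruct(low_res, scale=2, iterations=5):
    """Simplified reconstruction loop: the provisional high-resolution
    image is repeatedly corrected so that sampling it back down
    reproduces the observed low-resolution pixel values."""
    low = low_res.astype(np.float64)
    h, w = low.shape
    high = provisionally_enlarge(low, scale)
    for _ in range(iterations):
        # Estimate each low-resolution sample from the current
        # high-resolution image (block average as the crude model).
        estimate = high.reshape(h, scale, w, scale).mean(axis=(1, 3))
        # Spread each sample error back over the pixels it came from.
        high += np.kron(low - estimate, np.ones((scale, scale)))
    return high
```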
In Step 904, the image processing unit 104 performs the process of the In-frame Reconstruction-based Super-Resolution on the edge areas.
In the document by Ida et al., the Sobel operator is used for the edge detection. In the second embodiment, by contrast, since edge areas other than texture areas can be detected, useless processing is not performed, and hence the sharpening of the entire image is achieved in a short time.
The In-frame Reconstruction-based Super-Resolution utilizes the property that a similar luminance pattern is present nearby, as does the edge determination of the second embodiment. Therefore, improvement of the image quality is ensured by confirming in advance that a similar pattern is present nearby.
In the second embodiment, it is estimated that the edge area extends in the direction from the position of the template block toward the reference block having the smallest error. Therefore, the determination result 111, including the direction of the corresponding edge area, is sent from the determining unit 110 to the corresponding point detecting unit 1903. The throughput for the search of the corresponding points is then saved by searching for the corresponding points only in the direction of the corresponding edge area.
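This shortcut might be sketched as follows, reusing the hypothetical `reference_block_offsets` helper from the first embodiment's sketches; the unit-vector convention is ours:

```python
import numpy as np

def edge_direction(errors, distance=3):
    """The edge is estimated to run from the template block towards the
    reference block with the smallest error, so corresponding points
    need only be searched along this unit direction (dy, dx)."""
    dy, dx = reference_block_offsets(distance)[int(np.argmin(errors))]
    norm = float(np.hypot(dy, dx))
    return dy / norm, dx / norm
```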
(Modification)
The invention is not limited to the embodiments shown above, and may be modified in various manners without departing from the scope of the invention.
As a first modification of the above-described embodiments, flat area detection may be performed for the entire screen before the edge determination as shown in
For example, the image processing apparatus 100 in the first modification divides the screen into larger blocks and calculates the variance of the luminance values in each block. A block is determined to be a flat area when the variance is smaller than a threshold value (see Step 501). Since a flat area need not be sharpened, the image processing apparatus 100 does not set the pixels in the portions determined to be flat areas as pixels to be determined, and performs the image processing on the other areas in the same manner as in the above-described embodiments (see Steps 802 to 805). Accordingly, the image processing apparatus 100 in the first modification saves the entire throughput by not performing the edge determination or the image processing there.
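A minimal sketch of this flat-area test, with an illustrative block size and variance threshold:

```python
import numpy as np

def flat_area_mask(image, block=16, threshold=25.0):
    """Step 501: divide the screen into larger blocks and mark a block
    as a flat area when the variance of its luminance values is below
    a threshold (block size and threshold are illustrative)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = image[y:y + block, x:x + block].astype(np.float64)
            mask[y:y + block, x:x + block] = tile.var() < threshold
    return mask
```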
In a second modification, the reference blocks may be arranged so as to surround the template block along a full circle. In the case of the half round, one specifically small error appears for an edge area as described above, whereas in the case of the full circle, two such errors generally appear.
Therefore, when the full circle is selected, a method of determining the edge area when the average value of the smallest and the second smallest errors is smaller than the average value of the remaining errors by more than a predetermined value is also effective as the edge determining method.
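Under that variant, the determination might be sketched as follows; the margin value is an illustrative assumption:

```python
import numpy as np

def is_edge_full_circle(errors, margin=50.0):
    """Full-circle variant: an edge normally produces two specifically
    small errors, so the average of the two smallest errors is compared
    against the average of the remaining ones."""
    e = np.sort(np.asarray(errors, dtype=np.float64))
    return e[2:].mean() - e[:2].mean() > margin
```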
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
2008-056338 | Mar 2008 | JP | national
U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
5454052 | Kojima | Sep 1995 | A |
6055335 | Ida et al. | Apr 2000 | A |
6535650 | Poulo et al. | Mar 2003 | B1 |
7428335 | Ida et al. | Sep 2008 | B2 |
7440614 | Ida et al. | Oct 2008 | B2 |
7657059 | Olson et al. | Feb 2010 | B2 |
7856137 | Yonezawa et al. | Dec 2010 | B2 |
20020051572 | Matsumoto et al. | May 2002 | A1 |
20030156301 | Kempf et al. | Aug 2003 | A1 |
20060188160 | Matsumoto et al. | Aug 2006 | A1 |
20060221090 | Takeshima et al. | Oct 2006 | A1 |
20070206860 | Ida et al. | Sep 2007 | A1 |
20070206875 | Ida et al. | Sep 2007 | A1 |
20070269137 | Ida et al. | Nov 2007 | A1 |
20080013784 | Takeshima et al. | Jan 2008 | A1 |
20080267533 | Ida et al. | Oct 2008 | A1 |
20100033584 | Watanabe | Feb 2010 | A1 |
20100303340 | Abraham et al. | Dec 2010 | A1 |
Publication Data

Number | Date | Country
---|---|---
20090226111 A1 | Sep 2009 | US