This patent application claims priority to Japanese patent application Nos. 2004-206408 filed on Jul. 13, 2004, and 2005-039603 filed on Feb. 16, 2005, in the Japanese Patent Office, the entire contents of which are hereby incorporated by reference.
The following disclosure relates generally to converting the resolution of an image using interpolation and displaying the converted image.
An existing display apparatus is usually provided with a function for converting the resolution of an image. For example, if an image has a resolution lower than the output resolution of the display apparatus, the resolution of the image may be increased using one of the known interpolation methods, such as the nearest neighbor method, the linear interpolation method, or the cubic convolution method. In addition, various other interpolation methods have recently been introduced, as described in Japanese Patent No. 2796900 (“the '900 patent”), patented on Jul. 3, 1998, for example.
The nearest neighbor method can be processed at high speed; however, it may generate jaggedness in the image. The linear interpolation method is more effective than the nearest neighbor method for generating a smooth image; however, it may lower the sharpness of the image, thus creating a blurred image. The cubic convolution method can provide higher image quality than the nearest neighbor or linear interpolation method; however, it requires a large reference range, which makes the calculation more complicated. Further, the cubic convolution method may enhance a noise component of the image. The method disclosed in the '900 patent can provide higher image quality than the nearest neighbor method with a relatively small reference range; however, the resulting image still suffers from jaggedness.
As described above, none of the known methods can generate an image that is both smooth and sharp without enhancing jaggedness in the image. Further, none of the known methods can generate a high-quality image while suppressing the amount of computation.
An exemplary embodiment of the present invention includes an apparatus, method, system, computer program and product, each capable of converting the resolution of an image using a first interpolation method, the method comprising the steps of: specifying an interpolated pixel to be added to the image; selecting a plurality of reference pixels from a vicinity of the interpolated pixel; obtaining a distance value for each of the reference pixels; extracting a pixel value for each of the reference pixels; generating a weighting factor for a target reference pixel selected from the plurality of reference pixels using the distance value and the pixel value of the target reference pixel; and adding the interpolated pixel having a pixel value determined by the weighting factor of the target reference pixel.
Another exemplary embodiment of the present invention includes an apparatus, method, system, computer program and product, each capable of converting a resolution of an image using an interpolation method, which is selected from a plurality of interpolation methods including the first interpolation method according to characteristics of the image.
Another exemplary embodiment of the present invention includes an apparatus, method, system, computer program and product, each capable of displaying an image having the resolution converted by using the first interpolation method or the selected interpolating method.
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
In describing the preferred embodiments illustrated in the drawings, specific terminology is employed for clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology selected and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner. Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views,
The image display apparatus 10 includes any kind of display apparatus capable of displaying an image according to image data 1, such as a CRT (cathode ray tube) display, LCD (liquid crystal display), PDP (plasma display panel), or a projector, for example.
As shown in
As shown in
The input data storage 4, which may be optionally provided, stores the image data 1, preferably on a unit basis. For example, the image data 1 may be stored on a pixel, line, or frame basis.
The resolution detector 2 detects the input resolution of the image data 1 using a clock signal, a horizontal synchronization signal, or a vertical synchronization signal, for example.
The coordinate selector 3 selects a coordinate for the input resolution (“input coordinate”), and a coordinate for the output resolution (“output coordinate”), respectively. In one example, the coordinate selector 3 may store a plurality of look-up tables (LUTs), each corresponding to a specific resolution. In another example, the coordinate selector 3 may generate a LUT based on the input or output resolution.
The resolution converter 5 converts the image data 1 from the input resolution to the output resolution by changing the density of pixels in the image data 1.
In one example, if the output resolution is lower than the input resolution, the resolution converter 5 may delete a number of pixels (“deleted pixels”) throughout the image data 1. The resolution converter 5 selects the deleted pixels from the image data 1 based on the input and output coordinates.
In another example, if the output resolution is higher than the input resolution, the resolution converter 5 may add a number of pixels (“interpolated pixels”) throughout the image data 1. The resolution converter 5 determines a portion, in the image data 1, to which each of the interpolated pixels is added based on the input and output coordinates. Further, the resolution converter 5 determines a pixel value of each of the interpolated pixels based on information contained in the image data 1 using various interpolation methods as described below.
The conversion data storage 6 stores various data including data used for resolution conversion.
The output data storage 7, which may be optionally provided, stores the processed image data 1 having the output resolution, and outputs the processed image data 1, preferably on a unit basis.
Referring now to
According to the first method, Step S100 specifies one of the interpolated pixels. For example, as shown in
Step S101 selects one or more reference pixels, which are originally provided in the image data 1, from a vicinity of the specified interpolated pixel. Step S101 further obtains a distance value for each of the reference pixels.
To select the reference pixels, the resolution converter 5 may calculate, for each of the interpolated pixels, a distance between the interpolated pixel and its neighboring pixel based on the input and output coordinates. The distance may be expressed in X and Y coordinate values. For example, if the interpolated pixel is positioned at the coordinate (X1, Y1), and its neighboring pixel is positioned at the coordinate (X2, Y2), the distance between the interpolated pixel and the neighboring pixel may be expressed in X and Y coordinate values (X1-X2) and (Y1-Y2). The calculated distance values are further stored in the conversion data storage 6 as a LUT. Using this LUT, the resolution converter 5 can select one or more reference pixels for each of the interpolated pixels in the image data 1. Further, the resolution converter 5 can obtain a distance value for each of the selected reference pixels from the LUT.
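For purposes of illustration only, the construction of such a distance LUT along one axis may be sketched in the following Python code. The linear mapping of output coordinates onto the input grid, the requirement that the output width be larger than 1, and the function name are illustrative assumptions, not limitations of the disclosure; an analogous table would be built for the Y axis.

```python
def build_distance_lut(in_w, out_w):
    # For each output (interpolated) x coordinate, record the index of the
    # nearest input pixel on the left and the fractional distance x1 to it.
    # Assumes out_w > 1 and a linear mapping of the output grid onto the
    # input grid.
    lut = []
    for xo in range(out_w):
        x = xo * (in_w - 1) / (out_w - 1)  # output coordinate in input units
        left = int(x)
        lut.append((left, x - left))       # (reference pixel index, distance)
    return lut
```

For example, converting a 3-pixel-wide line to 5 pixels yields fractional distances of 0 or 0.5 for each output pixel.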
In the example shown in
Step S102 obtains a pixel value for each of the reference pixels obtained in Step S101, for example, from the input data storage 4. In the example shown in
Step S103 obtains a difference value M, indicating a difference between a maximum value MAX and a minimum value MIN of the reference pixels. The maximum value MAX corresponds to the largest of the pixel values of the reference pixels. The minimum value MIN corresponds to the smallest of the pixel values of the reference pixels. The difference value M may be expressed with the equation: M=MAX−MIN.
Alternatively, the difference value M may be obtained by comparing the pixel value of a nearest reference pixel with the pixel value of each of the reference pixels other than the nearest reference pixel. The nearest reference pixel is a reference pixel having the smallest distance value. For example, in the example shown in
Step S104 determines whether the difference value M is equal to 0. If the difference value M is equal to 0 (“YES” in Step S104), that is, if the pixel values are the same for all the reference pixels, the operation proceeds to Step S108. If the difference value M is not equal to 0 (“NO” in Step S104), the operation proceeds to Step S105.
Step S108 uses one of the pixel values of the reference pixels as a pixel value of the interpolated pixel. In the example shown in
Step S105 calculates an average value AVE, which is the average of the pixel values of the reference pixels. In the example shown in
AVE=(a00+a01+a10+a11)/4.
Step S106 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE obtained in Step S105, and a normalization factor. In this exemplary embodiment, a maximum pixel value of the image data 1, which is 255, is used as the normalization factor.
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/255);
Z10=x1*y2*(1−|a10−AVE|/255);
Z01=x2*y1*(1−|a01−AVE|/255); and
Z11=x1*y1*(1−|a11−AVE|/255).
Step S107 calculates a pixel value of the interpolated pixel using the pixel values of the reference pixels. In this exemplary embodiment, each of the pixel values is weighted with the corresponding weighting factor obtained in Step S106.
In the example shown in
b=a00*Z00/(Z00+Z10+Z01+Z11)+a10*Z10/(Z00+Z10+Z01+Z11)+a01*Z01/(Z00+Z10+Z01+Z11)+a11*Z11/(Z00+Z10+Z01+Z11).
The above equation can be simplified as:
b=(Z00*a00+Z10*a10+Z01*a01+Z11*a11)/(Z00+Z10+Z01+Z11).
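For purposes of illustration only, Steps S101 through S108 of the first method may be sketched in the following Python code. The function name is illustrative; the sketch assumes 8-bit pixel values (normalization factor 255) and distance values normalized so that x2=1−x1 and y2=1−y1.

```python
def interpolate_first_method(a00, a10, a01, a11, x1, y1):
    # x1, y1: distance values from the interpolated pixel, normalized so
    # that the complementary distances are x2 = 1 - x1 and y2 = 1 - y1
    x2, y2 = 1.0 - x1, 1.0 - y1
    vals = (a00, a10, a01, a11)
    M = max(vals) - min(vals)        # difference value (Step S103)
    if M == 0:                       # all reference pixels equal (Step S108)
        return float(a00)
    ave = sum(vals) / 4.0            # average value AVE (Step S105)
    # weighting factors (Step S106); 255 is the normalization factor
    z00 = x2 * y2 * (1 - abs(a00 - ave) / 255)
    z10 = x1 * y2 * (1 - abs(a10 - ave) / 255)
    z01 = x2 * y1 * (1 - abs(a01 - ave) / 255)
    z11 = x1 * y1 * (1 - abs(a11 - ave) / 255)
    # pixel value b of the interpolated pixel (Step S107)
    return (z00 * a00 + z10 * a10 + z01 * a01 + z11 * a11) / (z00 + z10 + z01 + z11)
```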
Step S109 determines whether all interpolated pixels in the image data 1 have been processed. If all interpolated pixels have been processed (“YES” in Step S109), the operation ends, and the processed image data 1 is stored in the output data storage 7 to be displayed by the image display apparatus 10. If any interpolated pixel has not been processed (“NO” in Step S109), the operation returns to Step S100 to specify another interpolated pixel.
Using the first method, smoothness of an image may be increased as shown in
The operation using the second method is substantially similar to the operation using the first method, except for the calculation performed in Step S106.
According to the second method, Step S106 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE obtained in Step S105, and a normalization factor. In this exemplary embodiment, the difference value M obtained in Step S103 is used as the normalization factor.
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/M);
Z10=x1*y2*(1−|a10−AVE|/M);
Z01=x2*y1*(1−|a01−AVE|/M); and
Z11=x1*y1*(1−|a11−AVE|/M).
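For purposes of illustration only, the second-method weighting factors may be sketched in the following Python code. The sketch assumes the difference value M is nonzero (the case M=0 having been handled in Step S104) and distance values normalized as in the first method; the function name is illustrative.

```python
def weights_second_method(a00, a10, a01, a11, x1, y1):
    # Same as the first method, but normalized by the difference value M
    # instead of the maximum pixel value 255.
    x2, y2 = 1.0 - x1, 1.0 - y1
    vals = (a00, a10, a01, a11)
    M = max(vals) - min(vals)        # assumed nonzero (checked in Step S104)
    ave = sum(vals) / 4.0
    return (x2 * y2 * (1 - abs(a00 - ave) / M),
            x1 * y2 * (1 - abs(a10 - ave) / M),
            x2 * y1 * (1 - abs(a01 - ave) / M),
            x1 * y1 * (1 - abs(a11 - ave) / M))
```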
Using the second method, smoothness of an image may be increased as shown in
In this exemplary embodiment, the difference value M is used as the normalization factor. However, any value may be used as long as it reflects the pixel values of the reference pixels. For example, a value larger than the value (MAX−AVE), a value larger than the value (AVE−MIN), or a value smaller or larger than the difference value M by a predetermined value may be used.
Referring now to
The operation using the third method shown in
Step S205 calculates average values AVE1, each of which is the average of the pixel values of a pair of reference pixels that are diagonally opposite each other.
In the example shown in
Similarly, the second reference pixel A01 and the third reference pixel A10 make a pair of diagonally opposing pixels. Accordingly, the average value AVE12 of the reference pixels A01 and A10 can be calculated as follows: AVE12=(a01+a10)/2.
Step S206 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average values AVE1 obtained in Step S205, and a normalization factor. In this exemplary embodiment, a predetermined value larger than the maximum pixel value of the image data 1, which is 255, is used as the normalization factor.
In the example shown in
Z00=x2*y2*(1−|a00−AVE12|/256);
Z10=x1*y2*(1−|a10−AVE11|/256);
Z01=x2*y1*(1−|a01−AVE11|/256); and
Z11=x1*y1*(1−|a11−AVE12|/256).
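For purposes of illustration only, the third-method weighting factors may be sketched in the following Python code; as in the equations above, each reference pixel is compared with the average of the diagonal pair to which it does not belong. The function name is illustrative.

```python
def weights_third_method(a00, a10, a01, a11, x1, y1):
    x2, y2 = 1.0 - x1, 1.0 - y1
    ave11 = (a00 + a11) / 2.0    # average of the diagonal pair A00, A11
    ave12 = (a01 + a10) / 2.0    # average of the diagonal pair A01, A10
    # each pixel is compared with the average of the opposite diagonal pair;
    # 256 is the normalization factor
    return (x2 * y2 * (1 - abs(a00 - ave12) / 256),
            x1 * y2 * (1 - abs(a10 - ave11) / 256),
            x2 * y1 * (1 - abs(a01 - ave11) / 256),
            x1 * y1 * (1 - abs(a11 - ave12) / 256))
```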
Using the third method, sharpness of an image may be increased as shown in
Referring now to
The operation using the fourth method shown in
Step S303 obtains a difference value M1 based on the maximum value MAX and the minimum value MIN of the reference pixels. The difference value M1 is any value larger than the difference value M obtained in Step S103 of
Step S306 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE1 obtained in Step S205, and a normalization factor. In this exemplary embodiment, the difference value M1 obtained in Step S303 is used as the normalization factor.
In the example shown in
Z00=x2*y2*(1−|a00−AVE12|/M1);
Z10=x1*y2*(1−|a10−AVE11|/M1);
Z01=x2*y1*(1−|a01−AVE11|/M1); and
Z11=x1*y1*(1−|a11−AVE12|/M1).
Using the fourth method, sharpness of an image may be increased as shown in
In this exemplary embodiment, the difference value M1 is used as the normalization factor. However, any value may be used as long as it reflects the pixel values of the reference pixels. For example, the value of the normalization factor may be increased to improve smoothness of an image, as illustrated in
Referring now to
The operation using the fifth method shown in
Step S406 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, and a normalization factor. In this exemplary embodiment, a predetermined value larger than the maximum pixel value of the image data 1, which is 255, is used as the normalization factor.
In the example shown in
Z00=x2*y2*(1−|a00−a11|/256);
Z10=x1*y2*(1−|a10−a01|/256);
Z01=x2*y1*(1−|a01−a10|/256); and
Z11=x1*y1*(1−|a11−a00|/256).
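For purposes of illustration only, the fifth-method weighting factors may be sketched in the following Python code; the averaging step is omitted and each reference pixel is compared directly with its diagonally opposite pixel. The function name is illustrative.

```python
def weights_fifth_method(a00, a10, a01, a11, x1, y1):
    x2, y2 = 1.0 - x1, 1.0 - y1
    # no averages: each pixel is compared with its diagonally opposite
    # pixel; 256 is the normalization factor
    return (x2 * y2 * (1 - abs(a00 - a11) / 256),
            x1 * y2 * (1 - abs(a10 - a01) / 256),
            x2 * y1 * (1 - abs(a01 - a10) / 256),
            x1 * y1 * (1 - abs(a11 - a00) / 256))
```

When both diagonal pairs are equal, the factors reduce to the plain bilinear distance weights, which sum to 1.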
As shown in the above equations, instead of using the average value AVE1 as described with reference to Step S206 of
Using the fifth method, sharpness of an image may be increased as shown in
Referring now to
The operation using the sixth method shown in
Step S506 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, and a normalization factor. In this exemplary embodiment, the difference value M1 obtained in Step S303 is used as the normalization factor.
In the example shown in
Z00=x2*y2*(1−|a00−a11|/M1);
Z10=x1*y2*(1−|a10−a01|/M1);
Z01=x2*y1*(1−|a01−a10|/M1); and
Z11=x1*y1*(1−|a11−a00|/M1).
As shown in the above equations, instead of using the average value AVE1 as described with reference to Step S306 of
Using the sixth method, sharpness of an image may be increased as shown in
In this exemplary embodiment, the difference value M1 is used as the normalization factor. However, any value may be used as long as it reflects the pixel values of the reference pixels. For example, the value of the normalization factor may be increased to improve smoothness of an image, as illustrated in
Referring now to
The operation using the seventh method shown in
Step S605 selects a nearest reference pixel A, which is the reference pixel having the smallest distance value, from the reference pixels obtained in Step S101. In this exemplary embodiment, a distance value may be expressed in X and Y coordinate values.
According to the seventh method, Step S606 obtains a weighting factor for each of the reference pixels, other than the nearest reference pixel A using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE obtained in Step S105, and a normalization factor, in a substantially similar manner as described with reference to Step S106 of
In the example shown in
Z00=x2*y2;
Z10=x1*y2*(1−|a10−AVE|/255);
Z01=x2*y1*(1−|a01−AVE|/255); and
Z11=x1*y1*(1−|a11−AVE|/255).
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/255);
Z10=x1*y2;
Z01=x2*y1*(1−|a01−AVE|/255); and
Z11=x1*y1*(1−|a11−AVE|/255).
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/255);
Z10=x1*y2*(1−|a10−AVE|/255);
Z01=x2*y1; and
Z11=x1*y1*(1−|a11−AVE|/255).
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/255);
Z10=x1*y2*(1−|a10−AVE|/255);
Z01=x2*y1*(1−|a01−AVE|/255); and
Z11=x1*y1.
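For purposes of illustration only, the seventh method may be sketched in the following Python code. The nearest reference pixel, identified here by its largest distance weight, keeps its pure distance weight, while the other reference pixels are scaled by their closeness to the average value AVE. The function name and the dictionary representation are illustrative; tie-breaking between equally near pixels is not specified by the text and is left to the `max` call.

```python
def weights_seventh_method(a00, a10, a01, a11, x1, y1):
    x2, y2 = 1.0 - x1, 1.0 - y1
    pix = {'a00': a00, 'a10': a10, 'a01': a01, 'a11': a11}
    dist = {'a00': x2 * y2, 'a10': x1 * y2, 'a01': x2 * y1, 'a11': x1 * y1}
    ave = (a00 + a10 + a01 + a11) / 4.0
    # the nearest reference pixel has the smallest distance value and hence
    # the largest distance weight; it is left unscaled
    nearest = max(dist, key=dist.get)
    return {k: d if k == nearest else d * (1 - abs(pix[k] - ave) / 255)
            for k, d in dist.items()}
```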
Using the seventh method, sharpness of an image may be increased as shown in
The operation using the eighth method is substantially similar to the operation using the seventh method, except for the calculation performed in Step S606.
According to the eighth method, Step S606 obtains a weighting factor for each of the reference pixels other than the nearest reference pixel A using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE obtained in Step S105, and a normalization factor, in a substantially similar manner as described with reference to Step S106 of
In the example shown in
Z00=x2*y2;
Z10=x1*y2*(1−|a10−AVE|/M);
Z01=x2*y1*(1−|a01−AVE|/M); and
Z11=x1*y1*(1−|a11−AVE|/M).
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/M);
Z10=x1*y2;
Z01=x2*y1*(1−|a01−AVE|/M); and
Z11=x1*y1*(1−|a11−AVE|/M).
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/M);
Z10=x1*y2*(1−|a10−AVE|/M);
Z01=x2*y1; and
Z11=x1*y1*(1−|a11−AVE|/M).
In the example shown in
Z00=x2*y2*(1−|a00−AVE|/M);
Z10=x1*y2*(1−|a10−AVE|/M);
Z01=x2*y1*(1−|a01−AVE|/M); and
Z11=x1*y1.
Using the eighth method, sharpness of an image may be increased while keeping information regarding pixel values of an original image as shown in
In this exemplary embodiment, the difference value M is used as the normalization factor. However, any value may be used as long as it reflects the pixel values of the reference pixels. For example, a value larger than the value (MAX−AVE), a value larger than the value (AVE−MIN), or a value smaller or larger than the difference value M may be used.
Referring now to
The operation using the ninth method shown in
Step S706 obtains a weighting factor for each of the reference pixels other than the nearest reference pixel A using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average values AVE1 obtained in Step S205, and a normalization factor, in a substantially similar manner as described with reference to Step S206 of
In the example shown in
Z00=x2*y2;
Z10=x1*y2*(1−|a10−AVE11|/256);
Z01=x2*y1*(1−|a01−AVE11|/256); and
Z11=x1*y1*(1−|a11−AVE12|/256).
In the example shown in
Z00=x2*y2*(1−|a00−AVE12|/256);
Z10=x1*y2;
Z01=x2*y1*(1−|a01−AVE11|/256); and
Z11=x1*y1*(1−|a11−AVE12|/256).
In the example shown in
Z00=x2*y2*(1−|a00−AVE12|/256);
Z10=x1*y2*(1−|a10−AVE11|/256);
Z01=x2*y1; and
Z11=x1*y1*(1−|a11−AVE12|/256).
In the example shown in
Z00=x2*y2*(1−|a00−AVE12|/256);
Z10=x1*y2*(1−|a10−AVE11|/256);
Z01=x2*y1*(1−|a01−AVE11|/256); and
Z11=x1*y1.
Using the ninth method, sharpness of an image may be increased as shown in
Referring now to
The operation using the tenth method shown in
According to the tenth method, Step S806 obtains a weighting factor for each of the reference pixels other than the nearest reference pixel A using the pixel values obtained in Step S102, the distance values obtained in Step S101, and a normalization factor, in a substantially similar manner as described with reference to Step S406 of
In the example shown in
Z00=x2*y2;
Z10=x1*y2*(1−|a10−a01|/256);
Z01=x2*y1*(1−|a01−a10|/256); and
Z11=x1*y1*(1−|a11−a00|/256).
In the example shown in
Z00=x2*y2*(1−|a00−a11|/256);
Z10=x1*y2;
Z01=x2*y1*(1−|a01−a10|/256); and
Z11=x1*y1*(1−|a11−a00|/256).
In the example shown in
Z00=x2*y2*(1−|a00−a11|/256);
Z10=x1*y2*(1−|a10−a01|/256);
Z01=x2*y1; and
Z11=x1*y1*(1−|a11−a00|/256).
In the example shown in
Z00=x2*y2*(1−|a00−a11|/256);
Z10=x1*y2*(1−|a10−a01|/256);
Z01=x2*y1*(1−|a01−a10|/256); and
Z11=x1*y1.
Using the tenth method, sharpness of an image may be increased as shown in
Referring now to
The operation using any one of the eleventh to thirteenth methods shown in
According to the eleventh method, Step S906 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE obtained in Step S105, and a normalization factor. In this exemplary embodiment, the difference value M obtained in Step S103 is used as the normalization factor. Further, in this exemplary embodiment, the distance value is raised to the power of a multiplication value n. The multiplication value n is an arbitrary number larger than 1, preferably larger than 2.
In the example shown in
Z00=(x2*y2)^n*(1−|a00−AVE|/M);
Z10=(x1*y2)^n*(1−|a10−AVE|/M);
Z01=(x2*y1)^n*(1−|a01−AVE|/M); and
Z11=(x1*y1)^n*(1−|a11−AVE|/M).
Using the eleventh method, sharpness of an image may be increased as shown in FIG. 16C, when compared to the image of
The operation using the twelfth method is substantially similar to the operation using the eleventh method, except for the calculation performed in Step S906.
According to the twelfth method, Step S906 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE obtained in Step S105, and a normalization factor. In this exemplary embodiment, the difference value M obtained in Step S103 is used as the normalization factor. Further, in this exemplary embodiment, the pixel value is raised to the power of a multiplication value n. The multiplication value n is an arbitrary number larger than 1, preferably larger than 2.
In the example shown in
Z00=(x2*y2)*(1−|a00−AVE|/M)^n;
Z10=(x1*y2)*(1−|a10−AVE|/M)^n;
Z01=(x2*y1)*(1−|a01−AVE|/M)^n; and
Z11=(x1*y1)*(1−|a11−AVE|/M)^n.
Using the twelfth method, sharpness of an image may be increased while keeping smoothness of the image, as shown in
The operation using the thirteenth method is substantially similar to the operation using the eleventh method, except for the calculation performed in Step S906.
According to the thirteenth method, Step S906 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the distance values obtained in Step S101, the average value AVE obtained in Step S105, and a normalization factor. In this exemplary embodiment, the difference value M obtained in Step S103 is used as the normalization factor. Further, in this exemplary embodiment, the distance value and the pixel value are raised to the powers of multiplication values n and p, respectively. Each of the multiplication values n and p is an arbitrary number larger than 1, preferably larger than 2.
In the example shown in
Z00=(x2*y2)^n*(1−|a00−AVE|/M)^p;
Z10=(x1*y2)^n*(1−|a10−AVE|/M)^p;
Z01=(x2*y1)^n*(1−|a01−AVE|/M)^p; and
Z11=(x1*y1)^n*(1−|a11−AVE|/M)^p.
If the factor n is equal to the factor p, the above equations can be simplified as follows:
Z00=((x2*y2)*(1−|a00−AVE|/M))^n;
Z10=((x1*y2)*(1−|a10−AVE|/M))^n;
Z01=((x2*y1)*(1−|a01−AVE|/M))^n; and
Z11=((x1*y1)*(1−|a11−AVE|/M))^n.
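For purposes of illustration only, the eleventh through thirteenth methods may be sketched by a single Python function; setting p=1 gives the eleventh method, n=1 gives the twelfth method, and n=p gives the simplified thirteenth-method form. The sketch assumes the difference value M is nonzero; the function name is illustrative.

```python
def weights_power_method(a00, a10, a01, a11, x1, y1, n=1, p=1):
    # distance term raised to the power n, pixel-value term to the power p
    x2, y2 = 1.0 - x1, 1.0 - y1
    vals = (a00, a10, a01, a11)
    M = max(vals) - min(vals)        # assumed nonzero (checked in Step S104)
    ave = sum(vals) / 4.0
    return ((x2 * y2) ** n * (1 - abs(a00 - ave) / M) ** p,
            (x1 * y2) ** n * (1 - abs(a10 - ave) / M) ** p,
            (x2 * y1) ** n * (1 - abs(a01 - ave) / M) ** p,
            (x1 * y1) ** n * (1 - abs(a11 - ave) / M) ** p)
```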
Using the thirteenth method, sharpness of an image may be increased while keeping smoothness of the image, as shown in
In this exemplary embodiment, sharpness and smoothness of an image may be adjusted by changing the multiplication value n or p. For example, with an increased value n, a pixel value of an interpolated pixel may be influenced more by the pixel value of its nearest reference pixel. Accordingly, sharpness of the image may be increased. With an increased value p, a pixel value of an interpolated pixel may be influenced more by the average pixel value of the reference pixels. Accordingly, smoothness of the image may be increased.
According to any one of the above-described and other methods of the present invention, the resolution converter 5 may store calculation results in the conversion data storage 6. Alternatively, the image processing device 9 of
For example, an add value data storage 11 may be additionally provided to the image processing device 9, as shown in
Referring now to
The operation using the fourteenth method shown in
According to the fourteenth method, Step S1007 obtains an add value for each of the weighting factors obtained in Step S106 from the LUT stored in the add value data storage 11.
Step S1008 multiplies the weighting factor by the corresponding add value to obtain a multiplied weighting factor.
The operation using the fifteenth method is substantially similar to the operation using the fourteenth method, except for the calculation performed in Step S1008.
Step S1008 adds the add value to the weighting factor to obtain a modified weighting factor.
Using any one of the fourteenth and fifteenth methods, the processing speed of the resolution converter 5 may be increased.
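For purposes of illustration only, Steps S1007 and S1008 of the fourteenth and fifteenth methods may be sketched in the following Python code. The text does not specify the structure of the add value LUT, so a simple per-reference-pixel list of add values is assumed; the function and parameter names are illustrative.

```python
def apply_add_values(weights, add_values, multiply=True):
    # fourteenth method: multiply each weighting factor by its add value;
    # fifteenth method: add the add value instead (multiply=False)
    if multiply:
        return [w * a for w, a in zip(weights, add_values)]
    return [w + a for w, a in zip(weights, add_values)]
```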
In addition to the above-described methods including the first to fifteenth methods, the resolution converter 5 may perform any interpolation method according to the scope of this disclosure and appended claims. For example, elements, features, or functions of the above-described methods may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims. In operation, the resolution converter 5 may select at least one of the above-described and other methods of the present invention according to a user's preference, for example. Alternatively, the resolution converter 5 may select at least one of the above-described and other methods of the present invention according to entire or local image characteristics. To make a selection, the image processing device 9 may be additionally provided with a selector capable of selecting at least one of the above-described and other methods of the present invention according to a user's preference or characteristics of an image.
Further, in addition to any one of the above-described and other methods of the present invention, the resolution converter 5 may perform any one of the known interpolation methods, including the linear method, cubic convolution method, or the nearest neighbor method, for example. In operation, the resolution converter 5 may select at least one of the above-described and other methods of the present invention, and the known interpolation methods according to a user's preference. Alternatively, the resolution converter 5 may select at least one of the above-described and other methods of the present invention, and the known interpolation methods according to entire or local image characteristics. To make a selection, the image processing device 9 may be additionally provided with a selector capable of selecting at least one of the above-described and other methods of the present invention, and the known interpolation methods according to a user's preference or characteristics of an image.
Referring now to
The operation using the sixteenth method shown in
Step S1105 determines whether the difference value M obtained in Step S103 is equal to or larger than a predetermined selection value. If the difference value M is smaller than the selection value (“NO” in Step S1105), the operation proceeds to Step S1107. If the difference value M is equal to or larger than the selection value (“YES” in Step S1105), the operation proceeds to Step S105.
As described with reference to
As described above, if the difference value M of the reference pixels is smaller than the selection value, the resolution converter 5 assumes that variations in pixel values of the reference pixels are relatively small. Based on this characteristic, the linear method is selected, which is suitable for enhancing smoothness of the image. Examples of an image having small variations in pixel values include a picture image such as a photograph.
If the difference value M of the reference pixels is equal to or larger than the selection value, the resolution converter 5 assumes that variations in pixel values of the reference pixels are relatively large. Based on this characteristic, the twelfth method is selected, which is suitable for enhancing sharpness of the image. Examples of an image having large variations in pixel values include an image having a character or a line.
Step S1107 obtains a weighting factor for each of the reference pixels using the linear method. In the example shown in
Z00=x2*y2;
Z10=x1*y2;
Z01=x2*y1; and
Z11=x1*y1.
Step S906 obtains a weighting factor for each of the reference pixels using the twelfth method of the present invention. In the example shown in
Z00=(x2*y2)*(1−|a00−AVE|/M)^n;
Z10=(x1*y2)*(1−|a10−AVE|/M)^n;
Z01=(x2*y1)*(1−|a01−AVE|/M)^n; and
Z11=(x1*y1)*(1−|a11−AVE|/M)^n.
In this exemplary embodiment, the multiplication value n is set to 3; however, any number larger than 1, preferably larger than 2, may be used.
Step S107 calculates a pixel value of the interpolated pixel using the pixel values of the reference pixels. In this exemplary embodiment, each of the pixel values is weighted with the corresponding weighting factor obtained in Step S906 or S1107.
In the example shown in
b=(Z00*a00+Z10*a10+Z01*a01+Z11*a11)/(Z00+Z10+Z01+Z11).
Since the sum of the weighting factors (Z00+Z10+Z01+Z11) obtained by the linear method is 1, the above equation can be further simplified to:
b=Z00*a00+Z10*a10+Z01*a01+Z11*a11.
In the example shown in
b=(Z00*a00+Z10*a10+Z01*a01+Z11*a11)/(Z00+Z10+Z01+Z11).
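For purposes of illustration only, the selection of Step S1105 together with the two weighting branches may be sketched in the following Python code. The selection value of 64 and the multiplication value n=3 are illustrative choices, not values fixed by the text; the function name is likewise illustrative.

```python
def interpolate_sixteenth_method(a00, a10, a01, a11, x1, y1,
                                 selection_value=64, n=3):
    x2, y2 = 1.0 - x1, 1.0 - y1
    vals = (a00, a10, a01, a11)
    M = max(vals) - min(vals)        # difference value (Step S103)
    if M < selection_value:          # small variations: linear method (Step S1107)
        z = (x2 * y2, x1 * y2, x2 * y1, x1 * y1)
    else:                            # large variations: twelfth method (Step S906)
        ave = sum(vals) / 4.0
        z = ((x2 * y2) * (1 - abs(a00 - ave) / M) ** n,
             (x1 * y2) * (1 - abs(a10 - ave) / M) ** n,
             (x2 * y1) * (1 - abs(a01 - ave) / M) ** n,
             (x1 * y1) * (1 - abs(a11 - ave) / M) ** n)
    # pixel value b of the interpolated pixel (Step S107)
    return (z[0] * a00 + z[1] * a10 + z[2] * a01 + z[3] * a11) / sum(z)
```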
Using the sixteenth method, smoothness and sharpness of an image may be controlled according to local image characteristics, as shown in
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the seventeenth method shown in
Step S1205 determines whether the pixel value of each of the reference pixels is equal to either the maximum value MAX or the minimum value MIN of the reference pixels. If the pixel value of the reference pixel is equal to the maximum value MAX or the minimum value MIN (“YES” in Step S1205), the operation proceeds to Step S105. Otherwise (“NO” in Step S1205), the operation proceeds to Step S1107.
In this exemplary embodiment, if the pixel value of each of the reference pixels is equal to either the maximum value MAX or the minimum value MIN, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is a binary image. Based on this characteristic, the twelfth method is selected, which is suitable for enhancing sharpness of the image.
In this exemplary embodiment, if the pixel value of any one of the reference pixels is equal to neither the maximum value MAX nor the minimum value MIN, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is a multivalue image. Based on this characteristic, the linear method is selected, which is suitable for enhancing smoothness of the image.
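For purposes of illustration only, the test of Step S1205 may be sketched in the following Python code; the function name is illustrative.

```python
def is_binary_portion(vals):
    # True when every reference pixel value equals either the maximum
    # value MAX or the minimum value MIN of the reference pixels
    mx, mn = max(vals), min(vals)
    return all(v == mx or v == mn for v in vals)
```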
Using the seventeenth method, smoothness and sharpness of an image may be controlled according to local image characteristics, as shown in
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the eighteenth method shown in
Step S1304 defines a selection range, which may be used for selecting an interpolation method suitable for processing the specified interpolated pixel. The selection range may be defined based on a predetermined constant M2. The predetermined constant M2 may be any value; however, in this exemplary embodiment, it is defined based on the difference value M as illustrated in the following equation: M2=M/S, wherein S is any value larger than 2. Based on the predetermined constant M2, the selection range may be defined as a range that is larger than (MAX−M2) or smaller than (MIN+M2). The value MAX and the value MIN correspond to the maximum value and the minimum value of the reference pixels, respectively.
Step S1305 determines whether the pixel value of each of the reference pixels is within the selection range defined in Step S1304. If the pixel value of the reference pixel is within the selection range (“YES” in Step S1305), the operation proceeds to Step S105. If the pixel value of the reference pixel is out of the selection range (“NO” in Step S1305), the operation proceeds to Step S1107.
In this exemplary embodiment, if the pixel value of each of the reference pixels is within the selection range, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is an image having small variations in pixel values such as a gradation image. Based on this characteristic, the twelfth method is selected, which is suitable for enhancing sharpness of the image.
In this exemplary embodiment, if the pixel value of each of the reference pixels is out of the selection range, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is a multivalue image, or an image having large variations in pixel values. Based on these characteristics, the linear method is selected, which is suitable for enhancing smoothness of the image.
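The range test of Steps S1304 and S1305 can be sketched as follows; the helper name is hypothetical, and it assumes the difference value M equals MAX − MIN and uses S = 4 for illustration:

```python
def within_selection_range(values, S=4):
    # Selection range from Step S1304: values larger than MAX - M2
    # or smaller than MIN + M2, with M2 = M / S and S > 2.
    mx, mn = max(values), min(values)
    m2 = (mx - mn) / S
    return all(v > mx - m2 or v < mn + m2 for v in values)
```

All reference pixels clustered near the two extremes satisfy the test; a mid-range pixel value fails it and routes the operation to the linear branch.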
Using the eighteenth method, smoothness and sharpness of an image may be controlled according to local image characteristics, as shown in
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the nineteenth method shown in
Step S1115 determines whether the difference value M obtained in Step S103 is equal to or larger than a predetermined selection value. If the difference value M is smaller than the selection value (“NO” in Step S1115), the operation proceeds to Step S106. If the difference value M is equal to or larger than the predetermined selection value (“YES” in Step S1115), the operation proceeds to Step S906.
As described with reference to
As described above, if the difference value M of the reference pixels is smaller than the selection value, the resolution converter 5 assumes that variations in pixel values of the reference pixels are relatively small. Based on this characteristic, the second method is selected, which is suitable for enhancing smoothness of the image.
If the difference value M of the reference pixels is equal to or larger than the selection value, the resolution converter 5 assumes that variations in pixel values of the reference pixels are relatively large. Based on this characteristic, the twelfth method is selected, which is suitable for enhancing sharpness of the image.
Step S106 obtains a weighting factor for each of the reference pixels using the second method. In the example shown in
Z00=x2*y2*(1−|a00−AVE|/M);
Z10=x1*y2*(1−|a10−AVE|/M);
Z01=x2*y1*(1−|a01−AVE|/M); and
Z11=x1*y1*(1−|a11−AVE|/M).
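The second-method weighting factors above can be sketched as below; the helper name is hypothetical, and it assumes that x2 = 1 − x1 and y2 = 1 − y1 (pixel pitch normalized to 1) and that the difference value M is MAX − MIN + α with α > 0, by analogy with the later value M1:

```python
def second_method_weights(a00, a10, a01, a11, x1, y1, alpha=1.0):
    # x1, y1: distances from the interpolated pixel along the X and Y
    # axes; x2, y2 are the complementary distances (assumed pitch 1).
    x2, y2 = 1.0 - x1, 1.0 - y1
    values = (a00, a10, a01, a11)
    ave = sum(values) / 4.0                 # AVE from Step S105
    m = max(values) - min(values) + alpha   # assumed difference value M
    f = lambda a: 1.0 - abs(a - ave) / m    # deviation term 1 - |a - AVE|/M
    return (x2 * y2 * f(a00), x1 * y2 * f(a10),
            x2 * y1 * f(a01), x1 * y1 * f(a11))
```

When all four reference pixels are equal, the deviation terms are 1 and the factors reduce to plain bilinear weights.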
Step S107 calculates a pixel value of the interpolated pixel using the pixel values of the reference pixels. In this exemplary embodiment, each of the pixel values is weighted with the corresponding weighting factor obtained in Step S906 or S106.
In the example shown in
b=(Z00*a00+Z10*a10+Z01*a01+Z11*a11)/(Z00+Z10+Z01+Z11).
Using the nineteenth method, smoothness and sharpness of an image may be controlled according to local image characteristics, as shown in
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the twentieth method shown in
Step S1215 determines whether the pixel value of each of the reference pixels is equal to either the maximum value MAX or the minimum value MIN of the reference pixels. If the pixel value of the reference pixel is equal to the maximum value MAX or the minimum value MIN (“YES” in Step S1215), the operation proceeds to Step S906. Otherwise (“NO” in Step S1215), the operation proceeds to Step S106.
In this exemplary embodiment, if the pixel value of each of the reference pixels is equal to either the maximum value MAX or the minimum value MIN, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is a binary image. Based on this characteristic, the twelfth method is selected, which is suitable for enhancing sharpness of the image.
In this exemplary embodiment, if the pixel value of any one of the reference pixels is equal to neither the maximum value MAX nor the minimum value MIN, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is a multivalue image. Based on this characteristic, the second method is selected, which is suitable for enhancing smoothness of the image.
Using the twentieth method, smoothness and sharpness of an image may be controlled according to local image characteristics, as shown in
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the twenty-first method shown in
Step S1315 determines whether the pixel value of each of the reference pixels is within the selection range defined in Step S1304. If the pixel value of the reference pixel is within the selection range (“YES” in Step S1315), the operation proceeds to Step S906. If the pixel value of the reference pixel is out of the selection range (“NO” in Step S1315), the operation proceeds to Step S106.
In this exemplary embodiment, if the pixel value of each of the reference pixels is within the selection range, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is an image having small variations in pixel values, such as a gradation image. Based on this characteristic, the twelfth method is selected, which is suitable for enhancing sharpness of the image.
In this exemplary embodiment, if the pixel value of each of the reference pixels is out of the selection range, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is a multivalue image, or an image having large variations in pixel values. Based on this characteristic, the second method is selected, which is suitable for enhancing smoothness of the image.
Using the twenty-first method, smoothness and sharpness of an image may be controlled according to local image characteristics, as shown in
Further, in this exemplary embodiment, most of the steps illustrated in
As described above referring to any one of the sixteenth to twenty-first methods, the resolution converter 5 is capable of controlling sharpness and smoothness of an image. In another example, the resolution converter 5 may control information regarding pixel values of an original image, which may be used for determining a pixel value of an interpolated pixel.
Referring now to
Step S100 specifies one of the interpolated pixels. For example, as shown in
Step S101 selects one or more reference pixels, which are originally provided in the image data 1, from a vicinity of the specified interpolated pixel. Step S101 further obtains a distance value for each of the reference pixels. Thus, in the example shown in
Step S102 obtains a pixel value for each of the reference pixels obtained in Step S101. In the example shown in
Step S1401 obtains a distance value for each of the reference pixels, which is different from the distance value obtained in Step S101. In this exemplary embodiment, the resolution converter 5 calculates a direct distance value L, which is a direct distance between the interpolated pixel and each of the reference pixels, based on the distance values expressed in X and Y coordinates. As shown in
Step S1404 determines whether a pixel value of a nearest reference pixel is equal to a pixel value of at least one of the adjacent reference pixels. If the pixel value of the nearest reference pixel is equal to the pixel value of any one of the adjacent reference pixels (“YES” in Step S1404), the operation proceeds to Step S1408. Otherwise (“NO” in Step S1404), the operation proceeds to Step S1402.
In this exemplary embodiment, the nearest reference pixel corresponds to one of the reference pixels having the smallest direct distance value L. The adjacent reference pixel corresponds to one of the reference pixels adjacent to the specified interpolated pixel in the direction of the X or Y axis.
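The test of Step S1404 can be sketched as follows; the helper name and the index bookkeeping (which reference pixels count as adjacent) are assumptions:

```python
def nearest_equals_adjacent(values, distances, adjacent):
    # values:    pixel values of the reference pixels
    # distances: direct distance L from the interpolated pixel to each
    # adjacent:  indices of the pixels adjacent along the X or Y axis
    nearest = min(range(len(values)), key=lambda i: distances[i])
    return any(values[i] == values[nearest] for i in adjacent)
```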
If the pixel value of the nearest reference pixel is equal to the pixel value of any one of the adjacent reference pixels, the resolution converter 5 assumes that the reference pixels have the same or closer values, and further assumes that a portion having the reference pixels corresponds to a portion of a character or a symbol in the image data 1, for example. Based on these characteristics, the nearest neighbor method is selected, which is suitable for keeping pixel information of the original image.
If the pixel value of the nearest reference pixel is not equal to the pixel value of any one of the adjacent reference pixels, the resolution converter 5 assumes that the reference pixels have different pixel values, and further assumes that a portion having the reference pixels corresponds to a portion of a picture image or a diagonal line, for example. Based on these characteristics, the second method or any other method of the present invention is selected, which is suitable for enhancing smoothness of the image.
Step S1408 uses the pixel value of the nearest reference pixel as a pixel value of the interpolated pixel. In the example shown in
Step S1402 obtains a maximum value MAX and a minimum value MIN of the reference pixels. The maximum value MAX corresponds to a pixel value of the reference pixel having the largest pixel value. The minimum value MIN corresponds to a pixel value of the reference pixel having the smallest pixel value.
Step S1403 obtains a difference value M1 of the reference pixels based on the maximum value MAX and the minimum value MIN. In this exemplary embodiment, the difference value M1 may be expressed by the equation: M1=MAX−MIN+α, where α is any value larger than 0.
Step S105 calculates an average value AVE, which is the average of the pixel values of the reference pixels. In the example shown in
AVE=(a00+a01+a10+a11)/4.
Step S1406 obtains a weighting factor for each of the reference pixels using the pixel values obtained in Step S102, the direct distance values L obtained in Step S1401, the average value AVE obtained in Step S105, and a normalization factor. In this exemplary embodiment, the difference value M1 obtained in Step S1403 is used as the normalization factor.
In the example shown in
Z00=L11*(1−|a00−AVE|/M1);
Z10=L01*(1−|a10−AVE|/M1);
Z01=L10*(1−|a01−AVE|/M1); and
Z11=L00*(1−|a11−AVE|/M1).
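The distance and weighting computation of Steps S1401 through S1406 can be sketched as below; the helper name and the coordinate conventions (x1, y1 measured from the interpolated pixel toward a00, pixel pitch normalized to 1) are assumptions:

```python
import math

def method22_weights(a00, a10, a01, a11, x1, y1, alpha=1.0):
    # Direct distances L (Step S1401), Euclidean in the X-Y plane.
    x2, y2 = 1.0 - x1, 1.0 - y1
    L00 = math.hypot(x1, y1)
    L10 = math.hypot(x2, y1)
    L01 = math.hypot(x1, y2)
    L11 = math.hypot(x2, y2)
    values = (a00, a10, a01, a11)
    ave = sum(values) / 4.0                 # AVE (Step S105)
    m1 = max(values) - min(values) + alpha  # M1 = MAX - MIN + α (Step S1403)
    f = lambda a: 1.0 - abs(a - ave) / m1
    # Each pixel is paired with the distance to its diagonal opposite
    # (Step S1406), so nearer pixels receive larger weights.
    return (L11 * f(a00), L01 * f(a10), L10 * f(a01), L00 * f(a11))
```

Pairing each pixel with the opposite diagonal distance preserves the bilinear property that the weight of a reference pixel grows as the interpolated pixel approaches it.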
As shown in the above equations, in this exemplary embodiment, Step S1406 uses the second method described referring to
Step S107 calculates a pixel value of the interpolated pixel using the pixel values of the reference pixels. In this exemplary embodiment, each of the pixel values is weighted with the corresponding weighting factor obtained in Step S1406.
In the example shown in
b=a00*Z00/(Z00+Z10+Z01+Z11)+a10*Z10/(Z00+Z10+Z01+Z11)+a01*Z01/(Z00+Z10+Z01+Z11)+a11*Z11/(Z00+Z10+Z01+Z11).
The above equation can be simplified to:
b=(Z00*a00+Z10*a10+Z01*a01+Z11*a11)/(Z00+Z10+Z01+Z11).
Step S109 determines whether all interpolated pixels in the image data 1 have been processed. If all interpolated pixels have been processed (“YES” in Step S109), the operation ends to store the processed image data 1 in the output data storage 7 to be displayed by the display device 10. If all interpolated pixels have not been processed (“NO” in Step S109), the operation returns to Step S100 to specify another interpolated pixel.
Using the twenty-second method, smoothness and information regarding pixel values of an original image may be controlled according to local image characteristics, as shown in
In this exemplary embodiment, Step S1404 determines whether the nearest reference pixel has a pixel value equal to a pixel value of any one of the adjacent reference pixels. Alternatively, Step S1404 may determine whether a reference pixel diagonally opposite to the nearest reference pixel has a pixel value equal to a pixel value of any one of the adjacent reference pixels.
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the twenty-third method shown in
Step S1503 defines a selection range, which may be used for selecting an interpolation method suitable for processing the specified interpolated pixel. The selection range may be defined based on the pixel value of the nearest reference pixel. For example, referring to
Step S1504 determines whether a pixel value of any one of the adjacent reference pixels is within the selection range defined in Step S1503. In the example shown in
Using the twenty-third method, smoothness and information regarding pixel values of an original image may be controlled according to local image characteristics, as shown in
The twenty-third method can enhance sharpness of an image, especially when the image has gradation, as illustrated in
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the twenty-fourth method shown in
Step S1602 obtains a difference value M using a maximum value MAX and a minimum value MIN of the reference pixels. Alternatively, Step S1602 may obtain a difference value M by comparing the pixel value of the nearest reference pixel with the pixel values of the reference pixels other than the nearest reference pixel. For example, in the example shown in
Step S1603 defines a selection range. The selection range may be defined by a first constant M3 and a second constant M4. The first constant M3 may be any value; however, in this exemplary embodiment, it is determined based on the difference value M as illustrated in the following equation: M3=M/E, wherein E is any value equal to or larger than 2. Similarly, the second constant M4 may be any value; however, in this exemplary embodiment, it is determined based on the difference value M as illustrated in the following equation: M4=M/F, wherein F is any value equal to or larger than 2. Based on the first constant M3 and the second constant M4, the selection range may be defined as a range that is equal to or larger than (a1+M3) or equal to or smaller than (a1−M4). The value a1 corresponds to the pixel value of the nearest reference pixel.
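The range defined in Step S1603 and the test of Step S1604 can be sketched as follows; the helper name is hypothetical, and E = F = 2 is assumed for illustration:

```python
def adjacent_within_range(nearest_value, adjacent_values, M, E=2, F=2):
    # Selection range from Step S1603: values equal to or larger than
    # a1 + M3, or equal to or smaller than a1 - M4, with M3 = M / E
    # and M4 = M / F (E, F >= 2).
    m3, m4 = M / E, M / F
    return any(v >= nearest_value + m3 or v <= nearest_value - m4
               for v in adjacent_values)
```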
Step S1604 determines whether the pixel value of any one of the adjacent reference pixels is within the selection range defined in Step S1603. If the pixel value of the adjacent reference pixel is within the selection range (“YES” in Step S1604), the operation proceeds to Step S1408. If the pixel value of the adjacent reference pixel is out of the selection range (“NO” in Step S1604), the operation proceeds to Step S1403.
In this exemplary embodiment, if the pixel value of any one of the adjacent reference pixels is within the selection range, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is an image having small variations in pixel values such as a gradation image. Based on this characteristic, the nearest neighbor method is selected, which is suitable for keeping information regarding pixel values of an original image.
In this exemplary embodiment, if the pixel value of any one of the adjacent reference pixels is out of the selection range, the resolution converter 5 assumes that the original image, or at least the portion having the reference pixels, is a multivalue image, or an image having large variations in pixel values. Based on this characteristic, the second method or any other method is selected, which is suitable for enhancing smoothness of the image.
Using the twenty-fourth method, smoothness and information regarding pixel values of an original image may be controlled according to local image characteristics.
Further, in this exemplary embodiment, most of the steps illustrated in
Referring now to
The operation using the twenty-fifth method shown in
Using the twenty-fifth method, smoothness and information regarding pixel values of an original image may be controlled according to local image characteristics.
Further, in this exemplary embodiment, most of the steps illustrated in
In addition to the above-described methods including the sixteenth to twenty-fifth methods, the resolution converter 5 may perform any other interpolation method according to the scope of this disclosure and appended claims. For example, elements, features, or functions of the above-described methods may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.
Numerous additional modifications and variations are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the disclosure of this patent specification may be practiced in ways other than those specifically described herein.
For example, any one of the above-described and other methods of the present invention may be embodied in the form of a computer program. In one example, the image processing device 9 may be implemented as one or more conventional general purpose microprocessors and/or signal processors capable of performing at least one of the above-described and other methods of the present invention, according to one or more instructions obtained from any kind of storage medium. Examples of storage media include, but are not limited to, flexible disks, hard disks, optical discs, magneto-optical discs, magnetic tapes, nonvolatile memory cards, and ROM (read-only memory).
Alternatively, the present invention may be implemented by an ASIC (application specific integrated circuit), prepared by interconnecting an appropriate network of conventional component circuits, or by a combination thereof with one or more conventional general purpose microprocessors and/or signal processors programmed accordingly.
Number | Date | Country | Kind |
---|---|---|---
2004-206408 | Jul 2004 | JP | national |
2005-039603 | Feb 2005 | JP | national |
Number | Name | Date | Kind |
---|---|---|---
4578812 | Yui | Mar 1986 | A |
4789933 | Chen et al. | Dec 1988 | A |
5054100 | Tai | Oct 1991 | A |
5644661 | Smith et al. | Jul 1997 | A |
5930407 | Jensen | Jul 1999 | A |
6002812 | Cho et al. | Dec 1999 | A |
6005989 | Frederic | Dec 1999 | A |
6324309 | Tokuyama et al. | Nov 2001 | B1 |
6366694 | Acharya | Apr 2002 | B1 |
6812935 | Joe et al. | Nov 2004 | B1 |
6832009 | Shezaf et al. | Dec 2004 | B1 |
6903749 | Soo et al. | Jun 2005 | B2 |
6961479 | Takarada | Nov 2005 | B1 |
7054507 | Bradley et al. | May 2006 | B1 |
7286700 | Gondek et al. | Oct 2007 | B2 |
20020008881 | Riley et al. | Jan 2002 | A1 |
20030007702 | Aoyama et al. | Jan 2003 | A1 |
20030053687 | Beo et al. | Mar 2003 | A1 |
20030098945 | Sugimoto et al. | May 2003 | A1 |
20030222980 | Miyagaki et al. | Dec 2003 | A1 |
20040086193 | Kameyama et al. | May 2004 | A1 |
20060013499 | Namie et al. | Jan 2006 | A1 |
Number | Date | Country |
---|---|---
1318177 | Oct 2001 | CN |
9-252401 | Sep 1997 | JP |
2796900 | Jul 1998 | JP |
11-32209 | Feb 1999 | JP |
11-203467 | Jul 1999 | JP |
Number | Date | Country
---|---|---
20060013499 A1 | Jan 2006 | US |