Pursuant to 35 U.S.C. § 119(a), this application claims the benefit of earlier filing date and right of priority to Korean Application No. 116951/2004, filed Dec. 30, 2004, the contents of which are hereby incorporated by reference herein in their entirety.
The present invention relates to color interpolation, and more particularly, to color interpolation in a Bayer pattern color filter.
Generally, a digital photographing device such as a digital camera or digital camcorder uses a charge coupled device (CCD) that collects various information for each pixel in order to obtain a full-color image. At least three types of color data are required in order to display an image viewable by the human eye. A full-color image can be rendered based on pixel values for three independent colors, R, G, and B.
The CCD is a photographing device that converts an optical signal into an electric signal. A CCD may be a single-chip type or a multi-chip type. In the multi-chip type, each pixel receives three colors by using three chips having sensors reactive to the three colors (R, G, and B), respectively. In the single-chip type, each pixel receives only one color through a color filter array (CFA) having sensors reactive to each color. The most common pattern of the CFA is the Bayer pattern.
In the case of the multi-chip type, each color constituting one screen has information for the entire screen. Accordingly, it is possible to reconstruct the entire screen by using the colors. However, in the case of the single-chip type shown in
For example, because a B pixel or an R pixel has no green-detecting sensor with which to restore a green color value (G value), green is rendered by using information received by the green-detecting sensors of adjacent pixels. That is, a color interpolation algorithm for rendering the green color is used.
The related art color interpolation methods for the single-chip type Bayer pattern color filter include a bi-linear interpolation method, a color shift interpolation method, and an adaptive interpolation method using a brightness gradient. Each interpolation method is explained below.
It is assumed that a ratio between an R value and a G value adjacent to a pixel B8 to be restored is constant, and then an average for the ratio between the R value and the G value is calculated among the adjacent four pixels G2, G4, G12, and G14. As shown in the following formula 2, the calculated average value and the G value (G8) of the pixel (B8) are multiplied to each other thereby to restore the R value (R8).
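The color-ratio restoration described above can be sketched as follows. The function name and the list of neighbor (R, G) pairs passed in are illustrative assumptions, not the patent's notation; the sketch only captures the idea that an average R/G ratio of adjacent pixels is multiplied by the local G value.

```python
def restore_r_by_color_ratio(g_center, neighbors):
    """Restore the R value at a pixel (e.g., R8 at pixel B8) by assuming the
    local R:G ratio is constant, per the description of formula 2.
    `neighbors` is a list of (r, g) pairs from the adjacent pixels."""
    # Average of the R/G ratios over the adjacent pixels
    avg_ratio = sum(r / g for r, g in neighbors) / len(neighbors)
    # Multiply the average ratio by the G value of the pixel being restored
    return g_center * avg_ratio
```

For example, with neighbor ratios 1.1 and 1.2 and a local G value of 100, the restored R value is 115.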
α=abs[(B42+B46)/2−B44]
β=abs[(B24+B64)/2−B44] [Formula 3]
Once the vertical edge information α and the horizontal edge information β are obtained, it can be determined whether the color shift is less in the horizontal axis direction or in the vertical axis direction. If α is less than β, the color shift in the horizontal axis direction is less than the color shift in the vertical axis direction, and an average value between G43 and G45 is determined as G44. On the other hand, if α is greater than β, an average value between G34 and G54 is determined as G44. Also, if α is equal to β, an average value among the adjacent values G34, G43, G45, and G54 is determined as G44 (refer to formula 4).
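The edge-directed rule of formulas 3 and 4 can be sketched as follows. The dictionaries `b` and `g`, keyed by (row, column) in the patent's Bij/Gij index order, are hypothetical containers introduced for illustration.

```python
def restore_g_adaptive(b, g):
    """Restore G44 at pixel B44 using the edge information of formulas 3 and 4.
    `b` and `g` map (row, col) -> raw B and G values, respectively."""
    # Formula 3: alpha from horizontal B neighbors, beta from vertical ones
    alpha = abs((b[4, 2] + b[4, 6]) / 2 - b[4, 4])
    beta = abs((b[2, 4] + b[6, 4]) / 2 - b[4, 4])
    if alpha < beta:     # less color shift along the row: average horizontally
        return (g[4, 3] + g[4, 5]) / 2
    elif alpha > beta:   # less color shift along the column: average vertically
        return (g[3, 4] + g[5, 4]) / 2
    else:                # tie: average all four adjacent G values
        return (g[3, 4] + g[4, 3] + g[4, 5] + g[5, 4]) / 4
```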
When a G value (for example G44) has been restored, as shown in
The following formula 6 is used to obtain a vertical value and a horizontal value (Gdiff_V and Gdiff_H):

Gdiff_V=abs[G34−G54]

Gdiff_H=abs[G43−G45] [Formula 6]
The obtained horizontal and vertical values (Gdiff_H and Gdiff_V) are each compared with a threshold value, and G44 is restored according to one of the following four cases:

1. (Gdiff_V < threshold) and (Gdiff_H < threshold) → G44=(G34+G45+G54+G43)/4

2. (Gdiff_V ≥ threshold) and (Gdiff_H ≥ threshold) → G44=(G34+G45+G54+G43)/4

3. (Gdiff_V ≥ threshold) and (Gdiff_H < threshold) → G44=(G43+G45)/2

4. (Gdiff_V < threshold) and (Gdiff_H ≥ threshold) → G44=(G34+G54)/2
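The threshold comparison above can be sketched as follows. The pairing of conditions to cases is reconstructed from the result values (both differences small or both large yields the four-pixel average; a large vertical difference yields the horizontal average, and vice versa), so it is an inference rather than the patent's verbatim formula. The dictionary `g` keyed by (row, column) is an illustrative assumption.

```python
def restore_g_threshold(g, threshold):
    """Restore G44 by comparing vertical/horizontal G differences against a
    sensor-dependent threshold. `g` maps (row, col) -> raw G values."""
    gdiff_v = abs(g[3, 4] - g[5, 4])   # vertical difference (formula 6)
    gdiff_h = abs(g[4, 3] - g[4, 5])   # horizontal difference (formula 6)
    if gdiff_v < threshold and gdiff_h < threshold:    # case 1: flat region
        return (g[3, 4] + g[4, 5] + g[5, 4] + g[4, 3]) / 4
    if gdiff_v >= threshold and gdiff_h >= threshold:  # case 2: busy region
        return (g[3, 4] + g[4, 5] + g[5, 4] + g[4, 3]) / 4
    if gdiff_v >= threshold:   # case 3: vertical change is large, go horizontal
        return (g[4, 3] + g[4, 5]) / 2
    return (g[3, 4] + g[5, 4]) / 2   # case 4: horizontal change is large
```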
When the G value for every pixel has been restored by the above formulas, an R value and a B value are restored in the same manner as the interpolation method using a brightness gradient. The thresholds are set differently for each image sensor in order to optimize that sensor.
The interpolation methods can be broadly divided into a bi-linear interpolation method, a color correction interpolation method, and an interpolation method using a spatial correlation. The bi-linear interpolation method requires little calculation and is simple to implement. However, the method causes a zipper effect and a blurring phenomenon.
The color correction method comprises a color shift interpolation method and an interpolation method using a brightness gradient. The color correction method maintains a soft color by using a color difference and a color ratio. The color correction method is commonly provided in a camera due to its simple implementation and consistent color.
The interpolation method using a spatial correlation produces the best quality image by using a color difference and a shift ratio. However, the method is complicated to implement and requires heavy calculation. Also, since the method uses only upper/lower components and right/left components, a blurring phenomenon may result when a biased line or an edge exists in an actual image.
In the related art interpolation methods, when the calculation is simple, picture quality is reduced; when the calculation is complicated, picture quality is improved but calculation overhead is increased. Also, in the case of a diagonal or biased image, picture quality is reduced since only the upper, lower, right, and left components are considered, without consideration of the diagonal components.
A solution to the above problems is needed.
The foregoing and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
In accordance with one aspect of the invention, an interpolation method comprises calculating a similarity among a plurality of pixels of a first color positioned adjacent to a first pixel to be restored; calculating an average value associated with the first color of at least two pixels from among said plurality of pixels, the at least two pixels having a first level of similarity; restoring a first color value associated with the first color for the first pixel; and restoring values associated with a second color and a third color for the first pixel based on an interpolation using a brightness gradient.
In one embodiment, calculating the similarity comprises regularizing first color values of the plurality of pixels adjacent to the first pixel. In another embodiment, calculating the similarity comprises obtaining a similarity among pixels of the first color adjacent to the first pixel. Calculating the average value may comprise multiplying an arithmetic average value of at least two pixels having the first level similarity by the similarity value between said at least two pixels.
In a preferred embodiment, the plurality of pixels are arranged according to a Bayer pattern. The first pixel may be either a blue or a red pixel. The first color may be green; the second color red; and the third color blue, for example. In a preferred embodiment, the first level of similarity is the highest level of similarity.
In accordance with yet another embodiment, a method for interpolating pixel colors is provided. The method comprises obtaining at least two pixels of a first color having a first level of similarity, wherein the at least two pixels are positioned adjacent a first pixel to be restored; and calculating an average value of the at least two pixels for the first color, thereby restoring the value of the first pixel for the first color.
In one embodiment, a value associated with a second color for the at least two pixels is restored by an interpolation method using a gradient of a brightness. A value associated with a third color for the at least two pixels may be obtained by an interpolation method using a gradient of a brightness.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
Referring to
The similarity is a measure of how similar the color information of two pixels is, and is calculated, for example, by the following formula 7.
Where the similarity between a and b is denoted a≡b, with a,b ∈ [0, 1], ā=1−a, and b̄=1−b. In formula 7, the operator ‘→’ denotes a multi-valued implication and can be defined by various methods. In one embodiment, the Lukasiewicz implication is used, which is defined by the following formula 8.
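A minimal sketch of formulas 7 and 8 follows. The Lukasiewicz implication a→b = min(1, 1−a+b) is the standard definition; combining the two implications a→b and b→a by a minimum to form the similarity a≡b is an assumption here, since the exact body of formula 7 is not reproduced in the text.

```python
def lukasiewicz_implication(a, b):
    """Lukasiewicz implication a -> b = min(1, 1 - a + b) for a, b in [0, 1]
    (formula 8)."""
    return min(1.0, 1.0 - a + b)

def similarity(a, b):
    """Similarity a = b, sketched as the conjunction (minimum) of both
    implications; under Lukasiewicz this equals 1 - |a - b|."""
    return min(lukasiewicz_implication(a, b), lukasiewicz_implication(b, a))
```

For example, similarity(0.2, 0.5) evaluates to 0.7, and identical values give a similarity of 1.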
Referring to
The i and j are integers associated with each index, and bits denotes the number of bits with which the photographing device represents the value of each pixel. When the pixels are regularized by formula 9, the similarity between pixels is obtained by formula 7 (S20).
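The regularization step (S10) can be sketched as follows, assuming formula 9 divides each raw value by the sensor's maximum code value 2^bits − 1 to map it into [0, 1]; that divisor is an assumption, since the formula itself is not reproduced in the text.

```python
def regularize(value, bits):
    """Map a raw pixel value Gij to gij in [0, 1] by dividing by the
    assumed maximum code value 2**bits - 1 (a sketch of formula 9)."""
    return value / (2 ** bits - 1)
```

For an 8-bit sensor, regularize(255, 8) gives 1.0 and regularize(0, 8) gives 0.0.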
The following formula 10 is used in one embodiment to calculate the similarity between G34 and G43. The lower-case gij denotes the value obtained by regularizing Gij.
The following formula 11 is used to calculate a value of g34→g43 by using formula 8.
Each similarity among the four upper, lower, right, and left pixels adjacent to the pixel to be restored (for example, B44) is obtained by using formulas 7 and 8 (S30). Accordingly, the pixels having the highest similarity are obtained (S40).
That is, if a G value G44 of the pixel B44 is to be restored, the similarities among the adjacent pixels (G34, G43, G45, and G54) in the horizontal, vertical, and diagonal directions, namely G34≡G43, G43≡G45, G45≡G54, G54≡G34, G34≡G45, and G43≡G54, are respectively obtained. In one embodiment, the highest similarity may be obtained by the following formula 12.
Max[G34≡G43, G43≡G45, G45≡G54, G54≡G34, G34≡G45, G43≡G54] Formula 12
If two pixels (for example, G34 and G43) are determined to be highly similar, the two pixels (e.g., G34 and G43) have a higher possibility of being consistent with the G value of the pixel to be restored (B44). However, even if the two pixels have the highest similarity among the pairs, when the degree of similarity among the four adjacent pixels is very low overall, the two pixels may not actually be highly similar.
Accordingly, the similarity of the two pixels (e.g., G34 and G43) having the highest similarity (G34≡G43) is multiplied by the arithmetic average (½(G34+G43)) of the two pixels. The higher the similarity between the two pixels, the nearer the restored G value (G44) is to the two pixels; the lower the similarity between the two pixels, the farther the restored G value (G44) is from the two pixels.
Formula 13 is used to obtain the G value G44 of the B44 in case that the G34 and G43 have the highest similarity among the adjacent pixels.
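Steps S10 through S50 can be sketched end to end as follows. The dictionary of named neighbors, the 2^bits − 1 regularization divisor, and the minimum-of-implications similarity are illustrative assumptions carried over from the sketches above; only the overall structure (regularize, pick the most similar pair per formula 12, weight the pair's average by its similarity per formula 13) follows the text.

```python
from itertools import combinations

def restore_g44(neighbors, bits=8):
    """Restore G44 at pixel B44: regularize the four adjacent G values, find
    the pair with the highest similarity (formula 12), and weight the pair's
    arithmetic average by that similarity (formula 13).
    `neighbors` is a dict such as {'G34': ..., 'G43': ..., 'G45': ..., 'G54': ...}."""
    max_code = 2 ** bits - 1

    def sim(x, y):
        # Lukasiewicz-based similarity on regularized values
        a, b = x / max_code, y / max_code
        return min(min(1.0, 1.0 - a + b), min(1.0, 1.0 - b + a))

    # Formula 12: pick the most similar pair among the six pairings
    p, q = max(combinations(neighbors, 2),
               key=lambda pair: sim(neighbors[pair[0]], neighbors[pair[1]]))
    s = sim(neighbors[p], neighbors[q])
    # Formula 13: similarity-weighted arithmetic average of the chosen pair
    return s * (neighbors[p] + neighbors[q]) / 2
```

With neighbors {'G34': 100, 'G43': 100, 'G45': 20, 'G54': 180}, the most similar pair is G34 and G43 (similarity 1), so the restored value is their average, 100.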
When the ratio between an R value and a G value adjacent to a pixel to be restored (R:G) and the ratio between a B value and a G value adjacent to the pixel to be restored (B:G) are constant, the R value (Rij) and the B value (Bij) are obtained by the following formula 14.
Accordingly, the horizontal, vertical, and diagonal components are all considered, which reduces the blurring phenomenon when biased or edged portions are rendered. Additionally, the similarity among the pixels adjacent to a pixel to be restored is measured, and the average of the green values of the two pixels having the highest similarity is multiplied by the similarity therebetween. Using this weighted-sum method, in one embodiment, a clearer image can be obtained.
As the invention may be embodied in several forms without departing from the spirit or essential characteristics thereof, it should also be understood that the above-described embodiments are not limited by any of the details of the foregoing description, unless otherwise specified, but rather should be construed broadly within its spirit and scope as defined in the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
116951/2004 | Dec 2004 | KR | national |