This invention relates to techniques for processing images acquired by a color digital sensor and, more particularly, to a method of color interpolation that reconstructs each pixel of the acquired image from the intensity values generated by the single photosensitive elements of the sensor. Each element is made sensitive to one of the primary colors or base hues by a filter applied according to a certain spatial pattern (for instance the so-called Bayer pattern). Therefore, the output of a digital color sensor is a grey level value for each image pixel, depending on the filter applied to that particular pixel.
In order to reconstruct a color image, it is usually necessary to carry out an operation known as color interpolation (or demosaicing), which generates, through appropriate interpolation algorithms, the missing color values for each image pixel so as to obtain a triplet of base color values (RGB), or more values, one for each base hue. Numerous techniques for processing the data provided by a digital color sensor have been proposed. It is worth mentioning the following documents:
M. R. Gupta, T. Chen, "Vector Color Filter Array Demosaicing", SPIE Electronic Imaging 2001; R. Ramanath, W. E. Snyder, G. L. Bilbro, W. A. Sander, "Demosaicing Methods for Bayer Color Arrays", Journal of Electronic Imaging, vol. 11, n. 3, pages 306-315, July 2002; R. Kimmel, "Demosaicing: Image Reconstruction from Color CCD Samples"; R. Kakarala, Z. Baharav, "Adaptive Demosaicing with The Principal Vector Method", IEEE Transactions on Consumer Electronics, vol. 48, n. 4, pages 932-937, November 2002; B. E. Bayer, "Color Imaging Array", U.S. Pat. No. 3,971,065, July 1976; B. K. Gunturk, Y. Altunbasak, R. Mersereau, "Color Plane Interpolation Using Alternating Projections", IEEE Transactions on Image Processing, vol. 11, no. 9, pages 997-1013, September 2002; S. Smith, "Colour Image Restoration with Anti-Alias", EP 1,098,535, May 2001. Many known techniques preliminarily subdivide the image data stream generated by the digital color sensor into two or more channels, such as three channels in the case of a filtering based upon the RGB triplet of primary colors (red, green, blue). When the red component of a pixel is to be estimated, but only its green level has been acquired, it is necessary to consider the red pixels adjacent to the considered pixel, and likewise when the value of another missing color is to be estimated. Clearly, subdividing into different channels the grey level data generated by the digital color sensor and the successive operation of merging the values calculated for the primary colors or base hues with the known value of the primary color or base hue of the considered pixel imply an evident computational burden or, in the case of a hardware implementation, an increased circuit complexity.
The development of new consumer applications of digital photo-cameras and similar devices, for instance in cellular phones, in laptop (notebook) or hand-held computers, and in other devices for mobile communications, encourages the need to devise more effective and at the same time low cost techniques for processing images acquired by a digital color sensor. A particularly important factor is the low cost, because these techniques may desirably be used in devices economically accessible to individual consumers, and there is considerable competition in this field among manufacturers of these devices and their components.
It is an object of the invention to provide a new color interpolation algorithm that does not require any subdivision into channels of the grey level data generated by a digital color sensor. According to the algorithm of this invention, the effectiveness in terms of definition and color rendition of the processed picture may be higher than that obtained using known methods based on subdivision of the data into channels and on merging of the data into triplets or pairs of values of primary colors or complementary hues, respectively.
Basically, the method performs color interpolation of an image acquired by a digital color sensor that generates a grey level for each image pixel as a function of the filter applied to the sensor, by interpolating the values of the missing colors of each image pixel so as to generate triplets or pairs of values of primary colors (RGB) or complementary hues for each image pixel. The method may comprise calculating spatial variation gradients of primary colors or complementary hues for each image pixel and storing the directional variation information of primary color or complementary hue in look-up tables pertaining to each pixel.
The method may further comprise interpolating color values of each image pixel considering the directional variation information of the respective values of primary colors or complementary hues stored in the respective look-up tables of the pixel for generating the multiple distinct values for each image pixel.
According to a preferred embodiment, the spatial variation gradients may be calculated using differential filters, preferably Sobel operators, applied over the detected image pixels without subdividing them depending on their color. Preferably, the interpolation of the respective values of primary colors or base hues may be carried out using an elliptical Gaussian filter the orientation of which coincides with the calculated direction of the respective spatial gradient for the pixel being processed.
In order to compensate for the emphasis of the low spatial frequency components introduced by the interpolation, due to the fact that values of the same primary color or complementary hue of pixels adjacent to the processed pixel are used, the method may optionally include an additional enhancement operation of the high spatial frequency components. The invention further provides a simple and effective method for calculating the direction and amplitude of the spatial gradient at an image pixel. The methods of this invention may be implemented by a software computer program.
FIGS. 4a and 4b show the same picture filtered with a weighted-mode directional filter in accordance with the invention and without such a filter.
FIGS. 5a and 5b show two examples of frequency responses of directional filters at π/2 and 0 in accordance with the invention.
“Bayer pattern” filtering is based on the use of primary color filters: red (R), green (G) and blue (B). It is widely used in digital color sensors and for this reason the method of this invention is now described in detail referring to this application, though the same considerations apply mutatis mutandis for systems for acquiring color images based on filters of two or more complementary hues applied according to spatial patterns different from the Bayer pattern.
According to the invention, it is supposed that the red and blue channels, especially at low frequencies, be highly correlated with the green channel in a small interval, as shown in the example of
G_LPF(i,j) = R_LPF(i,j) + Δ_RG   (1)

wherein Δ_RG is an appropriate value that depends on the pixels that surround the considered pixel, and

G_LPF(i,j) = f1(i,j)   (2)

R_LPF(i,j) = f2(i,j)   (3)

wherein f1 and f2 are interpolation functions of the central pixel determined from the surrounding pixels.
This implies that the interpolation of, for instance, the missing green pixels may advantageously exploit the information coming from the red and blue channels. In practice, according to the invention, the missing pixels are interpolated without splitting the image into its component channels. This model, which uses differences between different channels (that may be seen as chromatic information), is capable of generating optimal results because the human eye is more sensitive to low frequency chromatic variations than to luminance variations.
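As a minimal numeric sketch of the inter-channel difference model of Eq. (1) (the function name and the way the neighbours are averaged are illustrative choices, not prescribed by the invention):

```python
def estimate_green_at_red(r_center, green_neighbors, red_neighbors):
    """Sketch of Eq. (1): estimate the missing green value at a red
    pixel by adding the locally estimated green-red offset (delta_rg)
    to the known red intensity of the central pixel."""
    delta_rg = (sum(green_neighbors) / len(green_neighbors)
                - sum(red_neighbors) / len(red_neighbors))
    return r_center + delta_rg

# The local green-red offset here is 122 - 100 = +22, so the
# green estimate at the red pixel is 100 + 22:
g_hat = estimate_green_at_red(100, [120, 124], [98, 102])  # -> 122.0
```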
In order to estimate the direction of an edge depicted in the picture, it is necessary to calculate in the CFA (Color Filter Array) image the variation along the horizontal and vertical directions. A preferred way of calculating these variations comprises using the well known Sobel filters along a horizontal x axis and a vertical y axis:

Sobel_x = | −1  0  +1 |     Sobel_y = | +1  +2  +1 |
          | −2  0  +2 |               |  0   0   0 |
          | −1  0  +1 |               | −1  −2  −1 |   (4)
So far, the Sobel filters have been used on the same channel (R or G or B), or on a same luminance channel. According to the invention, the Sobel filters are used directly on the combination of the three channels RGB. The demonstration that the Sobel filters can be usefully applied to a Bayer pattern is not simple and is given hereinafter.
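A minimal sketch of this idea in Python (NumPy only; the helper names are ours, not from the patent): the standard Sobel kernels are correlated directly with the raw CFA data, with no per-channel separation.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def correlate3x3(img, kernel):
    """3x3 correlation with edge replication, applied to the whole
    raw Bayer image at once (no channel splitting)."""
    p = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for du in range(3):
        for dv in range(3):
            out += kernel[du, dv] * p[du:du + img.shape[0],
                                      dv:dv + img.shape[1]]
    return out

# Toy CFA ramp: rows increase by 5, columns by 1.
cfa = np.arange(25, dtype=float).reshape(5, 5)
gx = correlate3x3(cfa, SOBEL_X)   # horizontal variation
gy = correlate3x3(cfa, SOBEL_Y)   # vertical variation
```

On this linear ramp the interior responses are constant (gx = 8, gy = −40), as expected for derivative filters.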
Let us consider a 3×3 Bayer pattern:

P = | G1  J2  G3 |
    | H4  G5  H6 |
    | G7  J8  G9 |   (5)

wherein G_i for i=1, 3, 5, 7, 9 are the intensities of the green pixels and H_i and J_i are respectively the intensities of the red and blue pixels. Considering Eq. (1), it is possible to approximate the intensity of the missing green pixels.
It is thus obtained that the channel G (corresponding to the considered central pixel in the example) is described by the following matrix P′:

P′ = | G1       J2+Δ1   G3      |
     | H4+Δ2    G5      H6+Δ3   |
     | G7       J8+Δ4   G9      |   (6)

wherein the Δ terms are the local offsets of Eq. (1) at the positions of the missing green pixels.
The convolution between the matrix P′ and the matrix that describes the Sobel filter along the vertical direction (Sobel_y) is:

P′·Sobel_y = G1 + 2(J2+Δ1) + G3 − G7 − 2(J8+Δ4) − G9   (7)
The unknown parameters are only Δ1 and Δ4. These two parameters are estimated in a small image portion, thus it may be reasonably presumed that they are almost equal to each other and that their difference is negligible. As a consequence, Eq. (7) becomes
P′·Sobel_y = G1 + 2J2 + G3 − G7 − 2J8 − G9   (8)
It may be immediately noticed that, by applying the Sobel filter directly on the Bayer pattern of Eq. (5), the same result given by Eq. (8) is obtained. Therefore, using this filter Sobel_y on a Bayer pattern is equivalent to calculating the corresponding spatial variation value along the y direction. By using the filter Sobel_x along the horizontal direction, another value P′·Sobel_x is obtained that provides the corresponding spatial variation value along the x direction. For each pixel of the Bayer pattern the values P′·Sobel_x and P′·Sobel_y are calculated. According to the method of the invention, these values are collected in look-up tables for estimating the direction of the reconstruction filter.
The orientation of the spatial gradient at a certain pixel is given by the following formula:

or(x,y) = arctan( (P′·Sobel_y) / (P′·Sobel_x) )   (10)

wherein P′·Sobel_x and P′·Sobel_y are the values filtered with the horizontal and vertical Sobel filters centered in the given pixel. The orientation or(x,y) is preferably quantized in k pre-defined directions, for instance those depicted in
If the calculated orientation value or(x,y) belongs to a fixed interval i, the orientation will be quantized by the value direction_i:

or(x,y) = { direction_i | direction_i ≤ or(x,y) < direction_{i+1} }   (11)
The square of the absolute value of the spatial gradient, mag(x,y), is calculated according to the following formula:

mag(x,y) = (P′·Sobel_x)² + (P′·Sobel_y)²   (12)
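The orientation quantization of Eq. (11) and the squared magnitude of Eq. (12) can be sketched as follows (the use of arctan2 folded into [0, π) and the uniform bin width π/k are our assumptions; the patent only prescribes k pre-defined directions):

```python
import numpy as np

def orientation_and_magnitude(gx, gy, k=4):
    """Per-pixel gradient orientation quantized into k directions
    (cf. Eq. (11)) and squared magnitude (cf. Eq. (12)), computed
    without square roots."""
    theta = np.arctan2(gy, gx) % np.pi          # fold into [0, pi)
    step = np.pi / k                            # assumed uniform bins
    direction = np.floor(theta / step).astype(int) % k
    mag = gx ** 2 + gy ** 2
    return direction, mag

# A purely horizontal gradient falls in bin 0; a vertical one in bin k/2.
d, m = orientation_and_magnitude(np.array([[3.0]]), np.array([[0.0]]))
```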
It is preferable to consider the square of the absolute value in order to avoid square roots, the calculation of which would slow down the method of the invention. For each calculated orientation of the spatial gradients, a new operator, purposely designed for the preferred embodiment of the method of the invention, hereinafter indicated as the "weighted-mode" (WM) operator, is used. This operator substantially provides an estimation of the predominant amplitude of the spatial gradient around the central pixel by performing the operations of: storing the amplitude values for each pixel around a central pixel according to the following formula

Acc(direction_i) = Σ_{u,v} mag(x+u, y+v), for or(x+u, y+v) = direction_i   (13)

wherein u = [−1, +1], v = [−1, +1], i ∈ [0, k−1], k ∈ ℕ and Acc is an array of k possible orientations;
calculating the predominant amplitude WM with the following formula:

WM = direction_j, with j = arg max_{i ∈ [0, k−1]} Acc(direction_i)   (14)
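The two WM steps above (accumulation over the 3×3 neighbourhood, then selection of the predominant bin) can be sketched as follows; the array layout and the bin count k are illustrative assumptions:

```python
import numpy as np

def weighted_mode(direction, mag, x, y, k=4):
    """WM sketch: accumulate the (squared) gradient amplitudes of the
    3x3 neighbourhood of pixel (x, y) into k orientation bins (the Acc
    array) and return the index of the predominant orientation."""
    acc = np.zeros(k)
    for u in (-1, 0, 1):
        for v in (-1, 0, 1):
            acc[direction[x + u, y + v]] += mag[x + u, y + v]
    return int(np.argmax(acc))

# Eight of nine neighbours vote for orientation 1, so WM returns 1
# even though the central pixel itself voted for orientation 0.
direction = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
mag = np.ones((3, 3))
wm = weighted_mode(direction, mag, 1, 1)  # -> 1
```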
The operator WM is a sort of morphological operator that enhances the edges. This operator is used for preventing the effects due to the presence of noise when estimating the spatial gradient. According to an embodiment of the invention, the interpolation filtering is carried out by means of an elliptical Gaussian filter described by the following formula:

F(x,y) = h · exp( −( x̃²/(2σ_x²) + ỹ²/(2σ_y²) ) )   (15)
wherein

x̃ = x cos(α) − y sin(α),
ỹ = x sin(α) + y cos(α),   (16)

and σ_x, σ_y are the standard deviations along the directions x and y, respectively, h is a normalization constant and α is the orientation angle, as shown in
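A sketch of such a rotated elliptical Gaussian kernel, built from the coordinate rotation of Eq. (16) (the odd kernel size and the unit-sum normalization, which fixes the constant h, are our choices):

```python
import numpy as np

def elliptical_gaussian(size, sigma_x, sigma_y, alpha):
    """Elliptical Gaussian kernel oriented along angle alpha (radians),
    with standard deviations sigma_x, sigma_y along the rotated axes;
    normalized so that the taps sum to 1."""
    r = (size - 1) / 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xt = x * np.cos(alpha) - y * np.sin(alpha)   # rotated coordinates
    yt = x * np.sin(alpha) + y * np.cos(alpha)
    g = np.exp(-(xt ** 2 / (2 * sigma_x ** 2)
                 + yt ** 2 / (2 * sigma_y ** 2)))
    return g / g.sum()

# With alpha = 0 and sigma_x > sigma_y the kernel smooths mostly along
# the x axis; rotating by pi/2 swaps the smoothing direction.
k0 = elliptical_gaussian(5, 2.0, 0.5, 0.0)
k90 = elliptical_gaussian(5, 2.0, 0.5, np.pi / 2)
```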
A low-pass component is calculated using a spectral formulation, as defined in the following formula:
Comp(l,m) = Mask(l,m)·F(l,m)·I(l,m)   (17)

wherein Mask(l,m) identifies the pixels of the spectral components (l,m) (red, green or blue), F is one of the above mentioned Gaussian filters and I is the input picture.
In order to further enhance the quality of the generated picture, it is possible to carry out the so-called "peaking" operation, which substantially comprises re-introducing in the filtered image the high frequency components that were lost during the low-pass filtering with the directional Gaussian filter. This operation will be described referring to the case in which the central pixel of the selection window of the Bayer pattern is a green pixel G. Being G_LPF the corresponding low-pass filtered value, the high frequency component Δ_Peak is

Δ_Peak = G − G_LPF   (18)

In practice, the high frequency component is obtained as the difference between the known effective value of the central pixel G and the corresponding low-pass filtered value G_LPF. The values of the other channels are then obtained by adding this component to the respective low-pass filtered values, for instance

H = H_LPF + Δ_Peak   (19)
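A minimal sketch of this peaking step (the function name and the restriction to two interpolated channels are illustrative; the patent text only fixes the relation Δ_Peak = G − G_LPF and its addition to the low-pass values):

```python
def peaking(g, g_lpf, h_lpf, j_lpf):
    """Peaking sketch: the high-frequency detail lost by the directional
    low-pass filter is measured on the known green sample of the central
    pixel and added back to each low-pass interpolated channel."""
    delta_peak = g - g_lpf                      # Delta_Peak = G - G_LPF
    return h_lpf + delta_peak, j_lpf + delta_peak

# Known green 130 vs. low-pass green 120 gives delta_peak = +10,
# which sharpens the interpolated red/blue values accordingly:
h, j = peaking(g=130.0, g_lpf=120.0, h_lpf=90.0, j_lpf=60.0)  # -> (100.0, 70.0)
```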
Various experiments have been carried out for comparing original images with images filtered with the method of this invention using directional filtering (DF), and with images interpolated with the known method IGP [7]. The method of this invention has been implemented with the just described "peaking" operation. The results are shown in FIGS. 7 to 10.
The color interpolation method of the invention may be incorporated in any process for treating digital Bayer images. Tests have shown that the method of this invention significantly limits the generation of false colors and/or zig-zag effects compared to known color interpolation methods. In order to further improve the quality of images obtained with any process for treating Bayer images including the method of the invention, any anti-aliasing algorithm may additionally be carried out.
Number | Date | Country | Kind |
---|---|---|---|
VA2004A000038 | Oct 2004 | IT | national |