1. Field of the Invention
The present invention generally relates to image processing, and more particularly to compensation of spectral mismatch to enhance WRGB demosaicking.
2. Description of Related Art
The Bayer color filter array (CFA) shown in
To determine the missing color components for each pixel of an image, many existing methods for WRGB demosaicking employ a linear model to relate the white (W) component of a pixel to its red (R), green (G), and blue (B) components. Since the readout value of a pixel is the integration of the product of the spectral sensitivity function of the pixel and the spectrum of the incident light, the linear model assumes that a perfect linear relationship exists between the spectral sensitivity functions of W, R, G, and B pixels. In practice, however, the assumption is not necessarily true.
In view of the foregoing, it is an object of embodiments of the present invention to propose a novel method that introduces a spectrally dependent offset into the linear model to compensate for the spectral mismatch and thereby reduce its effect on WRGB demosaicking.
According to one embodiment, an offset representing spectral mismatch is introduced to modify a linear model that relates a white (W) component of a pixel to red (R), green (G), and blue (B) components of the pixel. Readout component images are obtained from an image sensor with the WRGB CFA. An estimated offset is generated according to the readout component images, and a compensated component image is generated according to the estimated offset and a corresponding readout component image.
In the presence of spectral mismatch, the W component of a pixel cannot be expressed as a linear combination of the R, G, and B components. In the embodiment, the linear model is modified by introducing an offset IM(n) as follows:
IW(n)=αIR(n)+γIG(n)+βIB(n)+IM(n) (1)
where n∈ℤ² denotes the location of a pixel in an image, IW(n), IR(n), IG(n), and IB(n) denote the white, red, green, and blue component images, respectively, and α, γ, and β denote weights. The offset IM(n) is spectrally dependent as it is the result of spectral mismatch.
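As a minimal numerical sketch of the modified linear model in (1), the following assumes illustrative weight values and small random component images (the weights are not calibrated sensor data):

```python
import numpy as np

# Assumed weights for illustration only; real values would be sensor-calibrated.
alpha, gamma, beta = 0.3, 0.6, 0.2

rng = np.random.default_rng(0)
I_R = rng.random((4, 4))                    # latent red component image
I_G = rng.random((4, 4))                    # latent green component image
I_B = rng.random((4, 4))                    # latent blue component image
I_M = 0.01 * rng.standard_normal((4, 4))    # spectrally dependent offset

# Equation (1): white = weighted RGB sum plus the mismatch offset.
I_W = alpha * I_R + gamma * I_G + beta * I_B + I_M
```

Without the offset term, I_W would be exactly the weighted RGB sum; the offset captures the residual left by an imperfectly linear spectral relationship.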
Let λ denote wavelength and C∈{W, R, G, B}. According to the image formation model, a component image IC(n) is the integration of the product of the spectrum L(λ,n) of the incident light and the spectral sensitivity function SC(λ) of the image sensor. That is,
IC(n)=∫L(λ,n)SC(λ)dλ (2)
Representing the W, R, G, and B component images by (2), we may rewrite (1) as follows:
IM(n)=∫L(λ,n)SM(λ)dλ (3)
where
SM(λ)=SW(λ)−αSR(λ)−γSG(λ)−βSB(λ) (4)
In other words, the offset is considered an image formed by a sensor with a spectral sensitivity function SM(λ).
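The residual sensitivity of (4) and the resulting offset of (3) can be sketched numerically; the Gaussian curves below are assumptions standing in for measured spectral sensitivities, not real sensor data:

```python
import numpy as np

wl = np.linspace(400, 700, 301)             # wavelength grid (nm)

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy Gaussian sensitivities (assumptions for illustration).
S_R, S_G, S_B = gauss(600, 40), gauss(540, 40), gauss(460, 40)
# White is almost, but not exactly, a linear combination of R, G, B.
S_W = 0.3 * S_R + 0.6 * S_G + 0.2 * S_B + 0.02 * gauss(550, 120)

alpha, gamma, beta = 0.3, 0.6, 0.2
S_M = S_W - alpha * S_R - gamma * S_G - beta * S_B   # equation (4)

L = np.ones_like(wl)                         # flat incident spectrum
dwl = wl[1] - wl[0]
I_M = np.sum(L * S_M) * dwl                  # equation (3), Riemann-sum integral
```

Here S_M is exactly the non-linear part deliberately added to S_W, so the offset image is the "picture" taken by that residual sensitivity.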
It is observed from
|SM(λ)|≤μSW(λ) (5)
where μ denotes a constant, defined in (6), such that inequality (5) holds for all λ.
An upper bound on the absolute value of the offset can then be derived in terms of the white component image as follows:
|IM(n)|≤∫L(λ,n)|SM(λ)|dλ≤μ∫L(λ,n)SW(λ)dλ=μIW(n) (7)
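The bound in (7) can be checked numerically; the sensitivity curves and incident spectrum below are toy assumptions, with μ taken as the smallest constant satisfying (5):

```python
import numpy as np

wl = np.linspace(400, 700, 301)
dwl = wl[1] - wl[0]

# Toy strictly positive white sensitivity and a residual shaped by it (assumptions).
S_W = np.exp(-0.5 * ((wl - 550) / 80) ** 2) + 0.05
S_M = 0.03 * np.sin(wl / 30.0) * S_W

mu = np.max(np.abs(S_M) / S_W)              # smallest mu satisfying (5)
L = 1.0 + 0.5 * np.sin(wl / 50.0)           # toy nonnegative incident spectrum

I_M = np.sum(L * S_M) * dwl                 # offset image value, equation (3)
I_W = np.sum(L * S_W) * dwl                 # white readout value, equation (2)
```

Because L(λ,n) is nonnegative, pulling the absolute value inside the integral and applying (5) yields |I_M| ≤ μ·I_W, which the sketch confirms.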
According to the embodiment, the compensation method 300 primarily includes two steps: 1) generating an estimated offset according to readout component images (i.e., W, R, G, and B) (block 31); and 2) generating a compensated component image (e.g., W) according to the estimated offset and the corresponding readout component image (block 32).
Specifically, block 311 (
ĪC(n)=IC(n)FC(n) (8)
where C∈{W, R, G, B} and FC(n) denotes the sampling function of the CFA. The interpolated image ÎC(n) and the interpolation error EC(n) are obtained by
ÎC(n)=ĪC(n)*hC(n) (9)
EC(n)=IC(n)−ÎC(n) (10)
where, in the embodiment, hC(n) is a linear low pass filter and * denotes convolution. In block 312 (
EM(n)=αER(n)+γEG(n)+βEB(n)−EW(n) (11)
we have
ÎM(n)=ÎW(n)−αÎR(n)−γÎG(n)−βÎB(n)
=IM(n)+(αER(n)+γEG(n)+βEB(n)−EW(n))
=IM(n)+EM(n) (12)
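The sampling and interpolation steps in (8)–(10) can be sketched as follows, assuming a toy rectangular sampling pattern and a bilinear kernel as one possible choice of the linear low-pass filter hC(n):

```python
import numpy as np

rng = np.random.default_rng(1)
I_C = rng.random((8, 8))                    # latent component image

F_C = np.zeros((8, 8))
F_C[::2, ::2] = 1.0                         # toy CFA sampling function (assumed pattern)

I_bar = I_C * F_C                           # equation (8): readout (sampled) image

def conv2_same(img, k):
    """Minimal 'same'-size 2-D correlation; k is symmetric, so this equals convolution."""
    pr, pc = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((pr, pr), (pc, pc)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

# Equation (9): interpolate the sampled image with a linear low-pass filter.
h = np.outer([0.5, 1.0, 0.5], [0.5, 1.0, 0.5])   # bilinear kernel
I_hat = conv2_same(I_bar, h)                # interpolated image
E_C = I_C - I_hat                           # equation (10): interpolation error
```

At interior sample sites the bilinear kernel reproduces the readout value exactly, so the interpolation error there is zero; the error concentrates where values must be interpolated across edges.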
Now we discuss how to reduce the interpolation error in (11). Because the interpolation error is the difference between the latent image and a low-pass-filtered version of it, it can be considered a high-pass-filtered latent image. Thus, interpolation errors concentrate at sharp edges and can be effectively removed by a spatial filter such as a median filter (block 313 in
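A minimal sketch of such a spatial filter, assuming a 3×3 median with edge padding as one possible realization of D(·):

```python
import numpy as np

def median3x3(img):
    # One possible spatial filter D(.): 3x3 median with edge padding (assumption).
    p = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(p[i:i + 3, j:j + 3])
    return out

x = np.zeros((6, 6))
x[3, 3] = 5.0               # isolated spike, like interpolation error at a sharp edge
y = median3x3(x)            # the median removes the isolated spike
```

An isolated outlier occupies only one of the nine values in each neighborhood, so the median ignores it while leaving smooth regions untouched.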
Subsequently, in block 314 (
where x denotes the estimate of the offset for a pixel and k denotes an empirically predetermined constant. The function shown in
ĨM(n)=Q(D(ÎM(n)),μÎW(n)) (14)
where D(•) denotes median filtering.
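Since the function Q is defined only with reference to the figures, the following sketch assumes it clips the offset estimate to ±bound, which is consistent with the upper bound derived in (7):

```python
import numpy as np

def Q(x, bound):
    # Hypothetical clipping function (assumption): limit the median-filtered
    # offset estimate to the range implied by |I_M| <= mu * I_W in (7).
    return np.clip(x, -bound, bound)

est = np.array([-0.5, 0.02, 0.4])           # D(I_hat_M): filtered offset estimates
bound = np.array([0.1, 0.1, 0.1])           # mu * I_hat_W per pixel (toy values)
I_tilde_M = Q(est, bound)                   # equation (14)
```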
Subsequently, block 32 in
ĨW(n)=ĪW(n)−ĨM(n)FW(n)
=(IW(n)−ĨM(n))FW(n)
=(αIR(n)+γIG(n)+βIB(n)+ẼM(n))FW(n) (15)
where
ẼM(n)=IM(n)−ĨM(n) (16)
Because the residual ẼM(n) is close to zero in homogeneous areas and at isolated edges, and because demosaicking artifacts are less noticeable in high-frequency texture areas, we can assume that ẼM(n) equals zero in the demosaicking process and relate the compensated white pixel value to the latent R, G, and B pixel values by
ĨW(n)=(αIR(n)+γIG(n)+βIB(n))FW(n) (17)
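The compensation step of (15)–(17) can be sketched end to end; in this idealized case the estimate recovers the offset exactly, so the residual ẼM(n) vanishes and (17) holds (weights and sampling pattern are assumptions):

```python
import numpy as np

alpha, gamma, beta = 0.3, 0.6, 0.2          # assumed weights
rng = np.random.default_rng(2)
I_R, I_G, I_B = (rng.random((4, 4)) for _ in range(3))
I_M = 0.01 * rng.standard_normal((4, 4))    # latent offset
I_W = alpha * I_R + gamma * I_G + beta * I_B + I_M   # equation (1)

F_W = np.zeros((4, 4))
F_W[::2, 1::2] = 1.0                        # toy W sampling function
I_W_bar = I_W * F_W                         # readout W image

# Idealized estimate: I_tilde_M == I_M, so E_tilde_M in (16) is zero.
I_M_tilde = I_M
I_W_comp = I_W_bar - I_M_tilde * F_W        # equation (15)
```

With the offset removed, the compensated white samples obey the pure linear model of (17), which is exactly what the downstream demosaicking assumes.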
Subsequently, the multiplied readout image is subjected to offset compensation (block 300) as detailed in the previous embodiment. The compensated component images are then subjected to demosaicking (block 52) to produce demosaicked images.
Finally, in block 53, inverse pixel-wise multiplication (or division) is performed on the demosaicked image. The inverse pixel-wise multiplication in block 53 performs the inverse operation of the pixel-wise multiplication in block 51. In one exemplary embodiment, the inverse pixel-wise multiplication in block 53 may be carried out by a lookup table (LUT).
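A minimal sketch of such a LUT, assuming for illustration that block 51 multiplied every 8-bit pixel by a fixed gain of 0.5 (the actual gains are implementation details not given here):

```python
import numpy as np

# Hypothetical inverse-multiplication LUT (assumed gain of 0.5 in block 51).
gain = 0.5
lut = np.clip(np.round(np.arange(256) / gain), 0, 255).astype(np.uint8)

img = np.array([[10, 100], [60, 127]], dtype=np.uint8)
restored = lut[img]      # one table lookup per pixel replaces per-pixel division
```

Precomputing all 256 inverse values turns the per-pixel division into a single indexed read, which is why a LUT is attractive in hardware.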
Although specific embodiments have been illustrated and described, it will be appreciated by those skilled in the art that various modifications may be made without departing from the scope of the present invention, which is intended to be limited solely by the appended claims.