The invention relates to imaging devices and more particularly to automatic color balancing techniques for imaging systems.
Solid state imagers, including charge coupled devices (CCD), CMOS imagers and others, have been used in photo imaging applications. A solid state imager circuit includes a focal plane array of pixel cells, each one of the cells including a photosensor, which may be a photogate, photoconductor or a photodiode having a doped region for accumulating photo-generated charge. Each pixel cell has a charge storage region, formed on or in the substrate, which is connected to the gate of an output transistor that is part of a readout circuit. The charge storage region may be constructed as a floating diffusion region. In some imager circuits, each pixel cell may include at least one electronic device such as a transistor for transferring charge from the photosensor to the storage region and one device, also typically a transistor, for resetting the storage region to a predetermined charge level prior to charge transference.
In a CMOS imager, the active elements of a pixel cell perform the necessary functions of: (1) photon to charge conversion; (2) accumulation of image charge; (3) resetting the storage region to a known state; (4) transfer of charge to the storage region; (5) selection of a pixel cell for readout; and (6) output and amplification of a signal representing pixel charge. Photo charge may be amplified when it moves from the initial charge accumulation region to the storage region. The charge at the storage region is typically converted to a pixel output voltage by a source follower output transistor.
CMOS imagers of the type discussed above are generally known as discussed, for example, in U.S. Pat. Nos. 6,140,630, 6,376,868, 6,310,366, 6,326,652, 6,204,524 and 6,333,205, assigned to Micron Technology, Inc., which are hereby incorporated by reference in their entirety.
Color constancy is one of the characteristics of the human vision system. The human vision system is adept at discriminating colored objects under different lighting conditions. The color of an object looks substantially the same under vastly different types of natural and artificial light sources, such as sunlight, moonlight, incandescent light, fluorescent light, and candlelight. However, due to the change in the spectral power distribution of the illumination, the perceived lightness and color appearance of the scene will change. The human vision system does not remove the influence of the light source completely.
A possible explanation is that the human vision system does not function as an absolute colorimetric device. The perceived images contain interactions of light sources and object reflectance. Therefore, for a captured image from an imaging device to look natural, the influence of the light source must be preserved in a manner similar to the way the human vision system functions. For example, the reproduced sunset scene must look like a sunset scene. This hypothesis is supported by R. W. G. Hunt's observation that a more pleasing effect is often produced in color prints if they are so made that instead of the color balance being correct, in which gray is printed as gray, it is so adjusted that the whole picture integrates to gray. R. W. G. Hunt, “The Reproduction of Colour” §16.7. The gray world theory assumes that all of the colors in a picture should integrate, i.e., average, to gray. Accordingly, there is a need and desire for an imaging device that more accurately color balances a captured image.
The invention provides a color balancing method and apparatus by which an image's color balance under different illuminations can be more closely maintained. According to an exemplary embodiment of the invention, pixels from an input image having an imager color space, for example, a red-green-blue (RGB) color space, are sampled for gray world statistics. To avoid the effect of saturated regions, the pixels are pruned. If a predetermined percentage of the pixels are included in the gray world statistics, gain is then computed for each RGB channel with respect to a neutral white point. Channel gains are applied to the RGB image. This process creates a transformed color balanced image suitable for display.
The invention may be implemented to operate on analog or digital image data and may be executed in hardware, software, or a combination of the two.
The foregoing and other advantages and features of the invention will become more apparent from the detailed description of exemplary embodiments provided below with reference to the accompanying drawings in which:
In the following description, an exemplary imager color space will be described as an RGB color space; however, other color imaging protocols could be used in the invention, including, for example, a subtractive CMYK (cyan, magenta, yellow, black) color space.
The general processing flow of the invention is now explained with reference to the figures. Referring to
The color balance process of block 4 of
The process illustrated in
The accuracy of the gray world summary can be further improved by pruning, i.e., excluding obvious outliers, such as pixels near saturation, to compensate for fully saturated color objects. An exemplary pruning process is shown by the flowchart in
If the pixel passes the criteria listed above (step 20), the red, green, and blue values of the pixel are added to the grand total of each component respectively, R_SUM, G_SUM, and B_SUM, and a valid pixel count, COUNT, is incremented (step 30). If the selected pixel is not the last pixel for sampling (step 40), the next pixel is selected (step 10), and the pruning criteria determination is performed again (step 20). The same operational steps, as described above with reference to
Once it is determined that a selected pixel is the last pixel from an image for sampling (yes at step 40), the color balance system determines at step 50 whether COUNT is at least a predetermined percentage of the total number of image pixels sampled, for example, equal to or greater than a quarter of the total pixels sampled. If COUNT is equal to or greater than the predetermined percentage of the total number of image pixels sampled, then the color channel gains are deemed valid (step 60) and should be calculated in processing block 4b and applied to the image data, Ri, Gi, and Bi. Otherwise, the gain is deemed invalid and the color channels remain unchanged (step 70). It should be noted that step 50, i.e., determining whether COUNT is a predetermined percentage of the total pixels sampled, can occur after the channel gain is calculated, although it is more efficient to make the determination before the channel gain is calculated.
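The sampling, pruning, and validity check described in steps 10 through 70 can be sketched as follows. This is only an illustration: the near-saturation threshold and the function and parameter names are hypothetical, since the patent's exact pruning criteria appear in its figures and are not reproduced in this text.

```python
def gray_world_stats(pixels, sat_limit=250, min_fraction=0.25):
    """Accumulate R_SUM, G_SUM, B_SUM and COUNT over pruned pixels.

    pixels: iterable of (r, g, b) tuples (8-bit values assumed).
    sat_limit: a pixel with any channel at or above this value is
        pruned as a near-saturation outlier (hypothetical criterion).
    min_fraction: the statistics are deemed valid only if at least
        this fraction of sampled pixels survived pruning (step 50).
    """
    r_sum = g_sum = b_sum = count = total = 0
    for r, g, b in pixels:
        total += 1
        # Step 20: prune obvious outliers such as near-saturated pixels.
        if max(r, g, b) >= sat_limit:
            continue
        # Step 30: add the pixel's components to the running totals.
        r_sum += r
        g_sum += g
        b_sum += b
        count += 1
    # Steps 50/60/70: gains are valid only if enough pixels remain.
    valid = total > 0 and count >= min_fraction * total
    return (r_sum, g_sum, b_sum, count, valid)
```

In this sketch, an invalid result (too few surviving pixels) signals the caller to leave the color channels unchanged, as in step 70.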
The calculation of the channel gain in processing block 4b (
The chromaticity of the neutral white point is determined in processing block 4b. The determination can be automatically calculated as a spectral response of the image sensor, preset based on the user's preference, or a combination of the two. It should be appreciated that any known method for selecting a neutral white point may be used. The chromaticity of the neutral white point comprises three components: the red value, N_CR, the green value, N_CG, and the blue value, N_CB. In a preferred embodiment, the chromaticities of the three components sum to one. In other words, N_CR+N_CG+N_CB=1.
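A white point whose chromaticity components sum to one can be obtained by normalizing any target white RGB triple. The following minimal sketch (hypothetical function name) illustrates the N_CR+N_CG+N_CB=1 convention.

```python
def white_point_chromaticity(r, g, b):
    """Return (N_CR, N_CG, N_CB) normalized so that, as in the
    preferred embodiment, N_CR + N_CG + N_CB = 1."""
    total = r + g + b
    return (r / total, g / total, b / total)
```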
In a preferred embodiment, a system tuning parameter, G_BIAS, may be used in processing block 4b to fine tune the rendered image with respect to the color channel sensitivity of the image sensor. G_BIAS can be automatically calculated with respect to the color channel sensitivity of the image sensor, preset based on the user's preference, or a combination of the two. It should be appreciated that any known method for selecting a system tuning parameter may be used.
In an embodiment of the present invention, green channel gain, G_GAIN, is calculated as a function of the chromaticity of a neutral white point (N_CR, N_CG, N_CB), the chromaticity of the gray world summary (GW_CR, GW_CG, GW_CB), and an optional system tuning parameter (G_BIAS). Rather than independently calculating the red channel gain, the red channel gain is calculated as a function of the green channel gain. Finally, the blue channel gain is calculated as a function of the red and green channel gains. In one embodiment of the invention, the above mentioned parameters are used to calculate the gain for each color channel in processing block 4b as follows:
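Equations (1), (2), and (3) themselves are not reproduced in this text. As an illustration only, the following sketch implements a gray-world-style gain computation that follows the dependency order described above (green gain first from the two chromaticities and G_BIAS, then red from green, then blue); the specific formulas and names are assumptions, not the patent's actual equations.

```python
def channel_gains(n_cr, n_cg, n_cb, gw_cr, gw_cg, gw_cb, g_bias=1.0):
    """Hypothetical stand-ins for Equations (1)-(3): gains that map the
    gray world chromaticity (GW_CR, GW_CG, GW_CB) toward the neutral
    white point chromaticity (N_CR, N_CG, N_CB)."""
    # Green gain from the green chromaticities and the tuning bias.
    g_gain = g_bias * (n_cg / gw_cg)
    # Red gain expressed as a function of the green gain.
    r_gain = g_gain * (gw_cg / gw_cr) * (n_cr / n_cg)
    # Blue gain expressed as a function of the other gains.
    b_gain = g_gain * (gw_cg / gw_cb) * (n_cb / n_cg)
    return (r_gain, g_gain, b_gain)
```

With these stand-in formulas, applying the gains to the gray world averages moves their chromaticity to the neutral white point; when the gray world summary already matches the white point, all gains reduce to G_BIAS.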
In a preferred embodiment, recursive calculations of the channel gains are performed in processing block 4b, allowing for a more complete normalization of the channel gains. The recursive calculation consists of the calculations of Equations (1), (2), and (3) and the following recursive calculations, which can be repeated as many times as desired:
In a preferred embodiment, the channel gains calculated in either Equations (1), (2), and (3) or Equations (4), (5), and (6), now represented as R_GAIN, G_GAIN, B_GAIN for simplicity, are normalized in processing block 4b to minimize the variation of the luminance signal while maintaining the ratio between the color channel gains. Since the green channel constitutes the majority of the luminance signal, the variation of the luminance signal is minimized by setting the green channel gain to 1.0 and recalculating the red and blue channel gains while maintaining the ratios among the red, green, and blue channel gains. This is accomplished by:
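The normalization described above amounts to dividing each gain by the green gain, which can be sketched as follows (hypothetical function name):

```python
def normalize_gains(r_gain, g_gain, b_gain):
    """Set the green channel gain to 1.0 and rescale the red and blue
    gains so the ratios among the three channel gains are preserved,
    minimizing variation in the luminance signal."""
    return (r_gain / g_gain, 1.0, b_gain / g_gain)
```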
In a preferred embodiment, the red, green, and blue channel gains calculated in processing block 4b and described above are used to respectively adjust the signals Ri, Gi, and Bi in processing block 4 to produce signals Rb, Gb, and Bb. For single picture snapshot applications, the derived channel gains can be applied in the subsequent color processing stages. For video applications, where video data flow needs to be continuous, the new channel gains for each frame can be stored in processing block 4c and applied to the next frame. If a smoother transition is desired, a low pass filtering of the channel gains can be implemented by storing multiple past channel gains and performing a low pass filtering of the past and current channel gains. The low pass filtering can be implemented by a running mean or moving average filter, as shown in processing block 4c. It should be appreciated that any known low pass filtering method may be used.
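The moving-average low pass filtering of past and current channel gains can be sketched as follows; the class name and window size are hypothetical, and any other known low pass filter could be substituted.

```python
from collections import deque

class GainSmoother:
    """Moving-average low pass filter over the most recently stored
    channel gains, for smooth frame-to-frame transitions in video."""

    def __init__(self, window=4):
        # deque with maxlen automatically discards the oldest gains.
        self.history = deque(maxlen=window)

    def update(self, r_gain, g_gain, b_gain):
        """Store the current gains and return the per-channel average
        of the stored past and current gains."""
        self.history.append((r_gain, g_gain, b_gain))
        n = len(self.history)
        return tuple(sum(g[i] for g in self.history) / n for i in range(3))
```

Applied per frame, the smoothed gains change gradually even when the per-frame gray world statistics jump, which avoids visible color flicker in video.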
As is apparent from the above description, the invention achieves accurate color balance and is relatively easy to implement, allowing color balance to be achieved in a simple and efficient manner.
A sample and hold circuit 261 associated with the column driver 260 reads a pixel reset signal Vrst and a pixel image signal Vsig for selected pixels of the array 240. A differential signal (Vrst-Vsig) is produced by differential amplifier 262 for each pixel and is digitized by analog-to-digital converter 275 (ADC). The analog-to-digital converter 275 supplies the digitized pixel signals to an image processor 280 which forms and may output a digital image. The image processor 280 has a circuit (e.g., processor) that is capable of performing the color balance processing (
Alternatively, the color balance processing can be done on the analog output of the pixel array by a hardwired or logic circuit (not shown) located between the amplifier 262 and ADC 275 or on the digital image output of the image processor 280, in software or hardware, by another device.
System 1100, for example a camera system, generally comprises a central processing unit (CPU) 1102, such as a microprocessor, that communicates with an input/output (I/O) device 1106 over a bus 1104. Imaging device 300 also communicates with the CPU 1102 over the bus 1104. The processor-based system 1100 also includes random access memory (RAM) 1110, and can include removable memory 1115, such as flash memory, which also communicate with the CPU 1102 over the bus 1104. The imaging device 300 may be combined with a processor, such as a CPU, digital signal processor, or microprocessor, with or without memory storage on a single integrated circuit or on a different chip than the processor.
While preferred embodiments of the invention have been described and illustrated above, it should be understood that these are exemplary of the invention and are not to be considered as limiting. For example, although an exemplary embodiment has been described in connection with a CMOS image sensor, the invention is applicable to other electronic image sensors, such as CCD image sensors, for example. Additions, deletions, substitutions, and other modifications can be made without departing from the spirit or scope of the invention. Accordingly, the invention is not to be considered as limited by the foregoing description but is only limited by the scope of the appended claims.
The present application is a continuation of U.S. patent application Ser. No. 11/302,126 filed on Dec. 14, 2005 now U.S. Pat. No. 7,848,569, which is incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4602277 | Guichard | Jul 1986 | A |
5115327 | Ishima | May 1992 | A |
5274439 | Dischert et al. | Dec 1993 | A |
5489939 | Haruki et al. | Feb 1996 | A |
5907629 | Funt et al. | May 1999 | A |
6140630 | Rhodes | Oct 2000 | A |
6192152 | Funada et al. | Feb 2001 | B1 |
6204524 | Rhodes | Mar 2001 | B1 |
6310366 | Rhodes et al. | Oct 2001 | B1 |
6326652 | Rhodes | Dec 2001 | B1 |
6333205 | Rhodes | Dec 2001 | B1 |
6376868 | Rhodes | Apr 2002 | B1 |
6628830 | Yamazoe et al. | Sep 2003 | B1 |
6681042 | Weldy | Jan 2004 | B1 |
6754381 | Kuwata | Jun 2004 | B2 |
6774938 | Noguchi | Aug 2004 | B1 |
6850272 | Terashita | Feb 2005 | B1 |
6853748 | Endo et al. | Feb 2005 | B2 |
6906744 | Hoshuyama et al. | Jun 2005 | B1 |
6937774 | Specht et al. | Aug 2005 | B1 |
7162078 | Cheng | Jan 2007 | B2 |
7173663 | Skow et al. | Feb 2007 | B2 |
7227586 | Finlayson et al. | Jun 2007 | B2 |
7283667 | Takeshita | Oct 2007 | B2 |
7319483 | Park et al. | Jan 2008 | B2 |
7319544 | Sharman | Jan 2008 | B2 |
7336308 | Kubo | Feb 2008 | B2 |
7423779 | Shi | Sep 2008 | B2 |
7480421 | Henley | Jan 2009 | B2 |
7515181 | Terashita | Apr 2009 | B2 |
7961227 | Arakawa | Jun 2011 | B2 |
20030063197 | Sugiki | Apr 2003 | A1 |
20030151758 | Takeshita | Aug 2003 | A1 |
20040119854 | Funakoshi et al. | Jun 2004 | A1 |
20040120575 | Cheng | Jun 2004 | A1 |
20040202365 | Spaulding et al. | Oct 2004 | A1 |
20050068330 | Speigle et al. | Mar 2005 | A1 |
20050069200 | Speigle et al. | Mar 2005 | A1 |
20050069201 | Speigle et al. | Mar 2005 | A1 |
20050122408 | Park et al. | Jun 2005 | A1 |
20050219379 | Shi | Oct 2005 | A1 |
Number | Date | Country |
---|---|---|
60-500116 | Jan 1985 | JP |
11-331738 | Nov 1999 | JP |
2003-87818 | Mar 2003 | JP |
2004-328564 | Nov 2004 | JP |
2005-109705 | Apr 2005 | JP |
2005-167956 | Jun 2005 | JP |
Number | Date | Country | |
---|---|---|---|
20110037873 A1 | Feb 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11302126 | Dec 2005 | US |
Child | 12915665 | US |