Digital cameras include at least one camera sensor, such as, e.g., a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor. The camera sensor includes a plurality of photosensitive cells, each of which builds up or accumulates an electrical charge in response to exposure to light. The accumulated electrical charge for any given pixel is proportional to the intensity and duration of the light exposure, and is used to generate digital photographs.
One of the most challenging aspects of designing a compact high-resolution camera is the limitation on the overall volume of the camera. With a typical target height of less than 6 mm, very compact sensors must be used. These sensors require miniature pixel designs that exhibit reduced sensitivity, increased noise, increased color crosstalk, and increased color disparity. Such compact designs often exhibit excessive vignetting because light rays striking the center of the camera sensor, which may lie directly behind the camera lens, arrive nearly perpendicular to the sensor, whereas rays striking the edge of the sensor arrive at highly oblique angles.
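For illustration, the falloff from oblique incidence is often approximated by the classic cosine-fourth law. The following minimal sketch (an illustrative textbook model with assumed lens and sensor dimensions, not a correction method from this disclosure) shows the magnitude of the effect:

```python
import numpy as np

# Illustrative cos^4 vignetting model: relative illumination falls off
# with the fourth power of the cosine of the ray's angle of incidence.
def relative_illumination(x_mm, y_mm, focal_len_mm):
    """Relative illumination at image point (x, y) for a thin-lens model."""
    theta = np.arctan(np.hypot(x_mm, y_mm) / focal_len_mm)
    return np.cos(theta) ** 4

# Example with an assumed ~4.5 mm-diagonal sensor behind a 3 mm lens:
print(relative_illumination(0.0, 0.0, 3.0))   # 1.0 at the optical center
print(relative_illumination(1.8, 1.35, 3.0))  # ~0.41 at a corner (about 60% falloff)
```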
In addition to non-color-dependent vignetting, digital cameras may also exhibit color-dependent vignetting. For example, when an image of a uniformly illuminated neutral surface (e.g., a white wall) is captured, the resulting digital image may be undesirably tinted by pink, green, or blue hues. The exact color and shape of the tinted areas change with the illuminant type and the scene being photographed. There are many causes of these observed hue shifts, depending on the optical system, the sensor, the electronics, and their interactions.
FIG. 1a is a diagram showing the positional dependence of color shading at different locations along an imaging sensor.
FIG. 1b is a component diagram of an exemplary camera system.
Systems and methods are disclosed herein for correction of color-dependent and non-color-dependent vignetting of digital camera sensors. Because these effects vary spatially across the area of the sensor, image processing algorithms can be used to correct them. These algorithms may make use of a mathematical model to fit a correction mask (polynomial, elliptical, circular, and so forth) or may store the actual correction mask at a smaller resolution due to memory constraints.
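As a rough sketch of the two options just mentioned, the snippet below fits a radial polynomial gain mask to a flat-field frame and, alternatively, stores a block-averaged low-resolution mask. The function names, polynomial order, and downsampling factor are all illustrative assumptions:

```python
import numpy as np

def fit_radial_gain(flat, center, order=4):
    """Fit a polynomial gain-vs-radius correction mask (illustrative)."""
    h, w = flat.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - center[0], yy - center[1]).ravel()
    # Gain that maps each pixel's flat-field response to the center's.
    gain = (float(flat[center[1], center[0]]) / flat.astype(np.float64)).ravel()
    return np.polyfit(r, gain, order)  # evaluate later with np.polyval

def lowres_mask(flat, factor=16):
    """Alternative: store the mask at reduced resolution to save memory."""
    h, w = flat.shape
    blocks = flat[:h - h % factor, :w - w % factor].astype(np.float64)
    blocks = blocks.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return blocks.max() / blocks  # per-block gains; upsample when applying
```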
Some approaches to camera sensor correction assume that a single correction factor can be acquired from a flat-field image at a known reference color temperature. While this approach may provide sufficient correction for sensor and lens combinations that do not exhibit color crosstalk, or for sensors whose optical crosstalk does not vary with the wavelength of light reflected from different types of surfaces, these assumptions may not hold for ultra-compact devices (e.g., as used in mobile imaging devices). That is to say, a single linear multiplicative constant may not produce overlapping spectral responses, as will be described herein.
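For contrast, a minimal sketch of the single-factor flat-field approach described above, assuming one reference flat-field capture; the helper names are hypothetical:

```python
import numpy as np

def flat_field_gains(flat_field):
    """One multiplicative gain per pixel from a single flat-field frame
    captured at an assumed reference color temperature."""
    flat = flat_field.astype(np.float64)
    return flat.max() / flat  # brightest pixel gets gain 1.0

def apply_single_factor(raw, gains):
    # One linear multiplicative constant per pixel: as noted above, this
    # cannot align spectral responses that vary across the sensor.
    return raw.astype(np.float64) * gains
```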
Compact lens systems used for digital imaging devices are typically constructed of three to four lens elements and an infrared (IR) cutoff filter that limits the optical bandpass of the light transmitted through the lens. Such lenses have very steep ray angles, which cause two undesirable effects on the image: optical crosstalk and spectral crosstalk.
The area sensors used in many imaging devices generally include a mosaic of color filters arranged in a Bayer pattern. A Bayer pattern is constructed with one row of the sensor containing alternating red and green pixels (R and Gr), and the second row containing alternating blue and green pixels (B and Gb). Optical crosstalk occurs when light destined for a red, green, or blue pixel is collected by an adjacent pixel of a different color. The amount of hue shift caused by optical crosstalk changes along the horizontal and vertical axes of the imaging sensor. Optical crosstalk reduces the amount of light collected by each pixel and aberrates the color information used in processing the image. Accordingly, in an exemplary embodiment of the invention, a four-color spatially varying correction scheme is implemented when there is a difference in the spectral response of the green channels on the red row and the green channels on the blue row, as explained in more detail below.
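A minimal sketch of extracting the four Bayer channels from a raw mosaic, assuming an RGGB layout with red in the top-left position; other layouts only change the row/column offsets:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB mosaic into its four color planes (assumed layout)."""
    r  = raw[0::2, 0::2]   # red pixels on the red/green row
    gr = raw[0::2, 1::2]   # green pixels sharing a row with red
    gb = raw[1::2, 0::2]   # green pixels sharing a row with blue
    b  = raw[1::2, 1::2]   # blue pixels on the blue/green row
    return r, gr, gb, b

# Gr and Gb use the same green filter dye but, because of optical
# crosstalk from their differing neighbors, can show different spectral
# responses -- the reason for a four-color rather than three-color correction.
```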
Optical crosstalk can also be affected by the IR-cut filter, which limits the wavelengths of light captured by the image sensor. When light incident on the filter is not perpendicular to its coating surface, the cutoff wavelength of the filter's spectral transmittance shifts toward shorter wavelengths, and this shift varies spatially. The spatially-varying spectral transmittance causes a spatially varying hue shift across the sensor. The resulting unequal color separation of the color filters is sometimes referred to as spectral crosstalk.
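A common first-order model for an interference filter's angular blue shift is λ(θ) = λ₀·√(1 − (sin θ / n_eff)²). The sketch below uses assumed values (650 nm normal-incidence cutoff, effective index 1.7) purely for illustration:

```python
import numpy as np

def ircut_cutoff_nm(theta_deg, cutoff_normal_nm=650.0, n_eff=1.7):
    """Approximate blue shift of an IR-cut interference filter's cutoff
    at oblique incidence (assumed parameters, illustrative only)."""
    theta = np.radians(theta_deg)
    return cutoff_normal_nm * np.sqrt(1.0 - (np.sin(theta) / n_eff) ** 2)

# At a 30-degree chief ray angle near the sensor edge, the cutoff moves
# from 650 nm to about 621 nm, attenuating red unevenly across the image.
print(ircut_cutoff_nm(30.0))
```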
FIG. 1a is a diagram 10 showing the positional dependence of color shading at different locations along an imaging sensor.
FIG. 1b is a component diagram of an exemplary camera system 100. Although reference is made to a particular digital still-photo camera system 100, it is noted that the systems and methods described herein may be implemented with any of a wide range of sensors for any of a wide variety of applications (e.g., camera phones, digital cameras, video cameras, scanners, medical imaging, and other electronic imaging devices), now known or later developed.
There are many different types of image sensors that may be used in the exemplary camera system 100. One way to classify image sensors is by their color separation mechanism. Typical image sensors in a digital imaging system consist of a mosaic-type sensor over which is formed a filter array that includes the additive colors red, green, and blue. Each pixel of the sensor includes a corresponding red, green, or blue filter area arranged in a repeating two-line pattern: the first line contains alternating red and green pixels, and the second line contains alternating blue and green pixels. The separate color arrays formed by the pixels are then combined, after suitable processing, to create a full-color image.
Other mosaic color filter patterns are also possible. Embodiments of the invention may include color filters having cyan, magenta, yellow, and key (CMYK); red, green, blue, and teal (RGBT); red, white, blue, and green (RWBG); and so forth. In one variant of the mosaic sensor, a sensor containing color filters arranged in stripes across the array may be used. Another type of sensor relies on the phenomenon that different wavelengths of light penetrate silicon to different depths. This type of sensor may use an array of photosites, each of which consists of three vertically stacked photodiodes organized in a two-dimensional grid. In such an embodiment, each of the three stacked photodiodes responds to a different color of light, and signals from the three photodiodes are processed to form an image. The embodiments described herein will work with any of the above-described sensors.
Returning now to FIG. 1b, camera system 100 may also include an analog-to-digital converter ("A/D") 160. In digital cameras, the analog-to-digital converter 160 digitizes the analog signal from the camera sensor 150 and outputs it to a spatially-varying color correction module 162, which is connected to an image processing pipeline 170 and an exposure/focus/WB analysis module 164. The A/D 160 generates image data signals representative of the light 130 captured during exposure to the scene 145. The sensor controller 155 provides signals to the image sensor that may be implemented by the camera for auto-focusing, auto-exposure, pre-flash calculations, image stabilization, and/or detecting white balance, to name only a few examples.
The camera system 100 may be provided with an image processing pipeline or module 170 operatively associated with the sensor controller 155 and, optionally, with camera settings 180. The image processing module 170 may receive as input the image data signals from the spatially varying color correction module 162. The image processing module 170 may be implemented to perform various calculations or processes on the image data signals, e.g., for output on the display 190.
In an exemplary embodiment, the spatially varying color correction module 162 may be implemented to correct for defects in the digital image caused by optical crosstalk, spectral crosstalk, or changes in sensor spectral sensitivity. The spatially varying color correction module 162 may apply a correction factor to each pixel (or group of pixels) based on the location of the pixel or group of pixels on the camera sensor 150.
It is noted that the output of the camera sensor 150 may differ under various conditions due to any of a wide variety of factors (e.g., test conditions, light wavelength, altitude, temperature, background noise, sensor damage, zoom, focus, aperture, etc.). Anything that varies the optical behavior of the imaging system can affect color shading. Accordingly, in exemplary embodiments the sensor may be corrected "on-the-fly" for each digital image or at various times (e.g., in different seasons or geographic locations, or based on camera settings or user selections), instead of basing correction on an initial calibration of the camera sensor 150 by the research and development team or manufacturer. Exemplary embodiments for camera sensor correction can be better understood with reference to the exemplary camera sensor shown in FIG. 2.
In an interline CCD, every other column of a silicon sensor substrate is masked to form active photocells (or pixels) 200 and inactive areas adjacent each of the active photocells 200 for use as shift registers (not shown). Although n columns and i rows of photocells are shown, it is noted that the camera sensor 150 may include any number of photocells 200 (and corresponding shift registers). The number of photocells 200 (and shift registers) may depend on a number of considerations, such as, e.g., image size, image quality, operating speed, cost, etc.
During operation, the active photocells 200 become charged during exposure to light reflected from the scene. This charge accumulation (or “pixel data”) is then transferred to the shift registers after the desired exposure time, and may be read out from the shift registers.
In exemplary embodiments, the camera sensor may be sampled as illustrated by photocell windows 210a-i. For purposes of illustration, nine windows 210a-i are shown corresponding substantially to the corners, edges, and middle of the camera sensor.
The image can be described as having a width DimX and a height DimY. The spatial locations of the centers of the nine windows (left to right and top to bottom) may then be described using the following coordinates: (0, 0), (DimX/2, 0), (DimX, 0); (0, DimY/2), (DimX/2, DimY/2), (DimX, DimY/2); and (0, DimY), (DimX/2, DimY), (DimX, DimY).
The upper-left corner of the image has coordinates (0, 0). Each window 210a-i is approximately 100×100 pixels in this example. However, it is understood that any suitable window size may be implemented to obtain pixel data for the camera sensor, and the choice will depend at least to some extent on design considerations (e.g., available processing power, desired time to completion, etc.). For example, smaller windows (e.g., single-pixel windows) may be used for an initial calibration procedure, while larger windows may be used for on-the-fly data collection. In any event, the pixel data may be used to identify optical crosstalk and spectral crosstalk for individual pixels or groups of pixels, as explained in more detail with reference to FIGS. 3 and 5.
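A minimal sketch of sampling the nine windows under the assumptions above (RGGB mosaic, 100×100 windows, windows clamped to stay on the sensor near corners and edges); function names are hypothetical:

```python
import numpy as np

def window_means(raw, dim_x, dim_y, win=100):
    """Average each Bayer channel in nine windows across the sensor."""
    centers = [(x, y) for y in (0, dim_y // 2, dim_y - 1)
                      for x in (0, dim_x // 2, dim_x - 1)]
    stats = []
    for cx, cy in centers:
        # Clamp so the window stays on the sensor at corners and edges.
        # (For brevity this ignores Bayer phase alignment of x0/y0.)
        x0 = min(max(cx - win // 2, 0), dim_x - win)
        y0 = min(max(cy - win // 2, 0), dim_y - win)
        patch = raw[y0:y0 + win, x0:x0 + win].astype(np.float64)
        r, gr = patch[0::2, 0::2].mean(), patch[0::2, 1::2].mean()
        gb, b = patch[1::2, 0::2].mean(), patch[1::2, 1::2].mean()
        stats.append((r, gr, gb, b))
    return centers, stats
```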
After the desired exposure time, the pixel data may be transferred from the active photocells to the shift registers (not shown), read out, and analyzed, as shown in the plots 300, 310. For purposes of simplification and contrast, pixel data is shown plotted 300 for the upper-left corner of the sensor (e.g., window 210a in FIG. 2) and plotted 310 for the center of the sensor (e.g., window 210e in FIG. 2).
After the desired exposure time, the pixel data may be transferred from the active photocells to the shift registers (not shown), read out, and analyzed, as shown in the plots 500, 510. For purposes of simplification and contrast, pixel data is again shown plotted 500 for the upper-left corner of the sensor (e.g., window 210a in FIG. 2) and plotted 510 for the center of the sensor (e.g., window 210e in FIG. 2).
Instead, an M×N (e.g., 4×4, or larger depending on the number of colors) color correction matrix may be implemented for groups of pixels (or, assuming sufficient computational power and memory, for each pixel) in the image. An exemplary matrix is given as:

$$\begin{bmatrix} R_{corr} \\ Gr_{corr} \\ Gb_{corr} \\ B_{corr} \end{bmatrix} = \begin{bmatrix} K_{00} & K_{01} & K_{02} & K_{03} \\ K_{10} & K_{11} & K_{12} & K_{13} \\ K_{20} & K_{21} & K_{22} & K_{23} \\ K_{30} & K_{31} & K_{32} & K_{33} \end{bmatrix} \begin{bmatrix} R_{sensor} \\ Gr_{sensor} \\ Gb_{sensor} \\ B_{sensor} \end{bmatrix}$$
It is noted that Rsensor, Grsensor, Gbsensor, and Bsensor, and Rcorr, Grcorr, Gbcorr, and Bcorr in the example above are not unique and are not limited to single-pixel color values. Those skilled in the art will appreciate that any of the color plane representations can be used, such as, e.g., groups of locally-averaged pixel values.
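A short sketch of applying the 4×4 correction above to one four-channel sample (e.g., locally averaged pixel values); the names are illustrative:

```python
import numpy as np

def correct_channels(k_matrix, r, gr, gb, b):
    """Apply one 4x4 correction matrix to a four-channel sample."""
    sensor_vec = np.array([r, gr, gb, b], dtype=np.float64)
    r_c, gr_c, gb_c, b_c = k_matrix @ sensor_vec
    return r_c, gr_c, gb_c, b_c

# Example: an identity matrix leaves the channels unchanged; a real
# matrix mixes channels to cancel local optical and spectral crosstalk.
identity = np.eye(4)
print(correct_channels(identity, 0.8, 1.0, 0.98, 0.6))
```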
The spatially-varying color shading correction can be considered a process in which un-corrected color-channel data is input to a matrix, operated on by that matrix, and output as a color-shading-corrected data set. There are several cases in which the color-dependent vignetting can be corrected. In one case, the uncorrected sensor data is operated on by an M×N correction matrix that returns a corrected vector of color channels prior to the demosaic process. In another, the uncorrected data is first demosaiced and then operated on by a correction matrix that returns a color-shading-corrected vector of sensor values post-demosaic. In a final scenario, the uncorrected sensor values are demosaiced and corrected for spatially-varying color-dependent vignetting as part of the demosaic process itself. The correction can also be completed as part of a transformation from one color space to another, such as converting from sensor RGB to sRGB, YUV, or YCC, and so forth. In the case of including other color space conversions, one can use an exemplary matrix given as:

$$\begin{bmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{bmatrix} = \begin{bmatrix} K_{00} & K_{01} & K_{02} & K_{03} \\ K_{10} & K_{11} & K_{12} & K_{13} \\ K_{20} & K_{21} & K_{22} & K_{23} \end{bmatrix} \begin{bmatrix} R_{sensor} \\ Gr_{sensor} \\ Gb_{sensor} \\ B_{sensor} \end{bmatrix}$$
In an exemplary embodiment, the four colors R, Gr, Gb, and B describe the red, green-on-the-red-row, green-on-the-blue-row, and blue color channels, respectively, and K00 through K33 describe the correction coefficients. The number of color correction matrices may equal the actual image resolution, and the 4×4 matrix at each location converts the spectral response of each color plane at that spatial location to match the spectral response of the sensor at the center of the image.
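The document does not spell out how the coefficients are obtained; one plausible sketch, offered purely as an assumption, is a least-squares fit mapping each location's measured channel responses (over several calibration surfaces) onto the center's responses:

```python
import numpy as np

def fit_correction_matrix(local_resp, center_resp):
    """
    local_resp, center_resp: (num_surfaces, 4) arrays of channel means
    measured at a given window and at the image center for several
    calibration surfaces. Returns the 4x4 K mapping local responses onto
    center responses in the least-squares sense (an assumed procedure,
    not the patent's stated method).
    """
    k_t, *_ = np.linalg.lstsq(local_resp, center_resp, rcond=None)
    return k_t.T  # so that corrected_vector = K @ local_vector
```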
This approach may be incorporated into the procedure for finding the module spectral response without requiring additional calibration images. Although the color correction matrices and spectral responses are required from different spatial locations of the calibration images, the correction and calibration process in the current invention does not require an increase in the number of images; computation time increases, but not the number of calibration images needed. Traditional color shading and vignetting correction and color rendering are no longer needed, because such tasks are now part of the proposed spatially-varying M×N color correction. In an exemplary embodiment, this spatially-varying color correction could be combined with the transformation to other color spaces, such as sensor RGB to sRGB, sensor RGB to YUV, or sensor RGB to YCC. It will, however, be evident to those skilled in the art that various changes and modifications may also be made.
To simplify the proposed invention, one can choose to measure spatially varying spectral responses at a lower resolution. For example, nine equally-spaced windows, each with its own 4×4 matrix, may be implemented, and for pixels in between the nine windows, interpolation may be used to find the matrices at the other spatial locations. The choice of the number of spatial locations is a trade-off between color precision and computational and memory cost.
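A sketch of interpolating the nine stored matrices for an arbitrary pixel, using bilinear weights over the assumed 3×3 grid of sample locations; the layout matches the windows described earlier, and the names are illustrative:

```python
import numpy as np

def interp_matrix(k_grid, x, y, dim_x, dim_y):
    """
    k_grid: (3, 3, 4, 4) correction matrices at the nine sample locations.
    Bilinearly interpolate a 4x4 matrix for pixel (x, y) (illustrative).
    """
    # Map the pixel position to grid coordinates in [0, 2].
    gx, gy = 2.0 * x / (dim_x - 1), 2.0 * y / (dim_y - 1)
    ix, iy = min(int(gx), 1), min(int(gy), 1)   # grid cell indices
    fx, fy = gx - ix, gy - iy                   # fractional offsets
    return ((1 - fx) * (1 - fy) * k_grid[iy, ix]
            + fx * (1 - fy) * k_grid[iy, ix + 1]
            + (1 - fx) * fy * k_grid[iy + 1, ix]
            + fx * fy * k_grid[iy + 1, ix + 1])
```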
In order to convert any of the aforementioned sensors' color data into a full-color image, some form of pixel processing algorithm is required. In mosaic sensors, a demosaic algorithm is used, and as those skilled in the art will note, the spatially-varying color correction could be applied as part of the demosaic algorithm. In the case of sensors not requiring a demosaic algorithm, this step could be applied as part of the broader imaging task.
It is noted that the illustrations described above with reference to the figures are provided for purposes of illustration and are not intended to be limiting.
In operation 710, a spectral response is sampled for a plurality of color channels at different spatial locations on a sensor. In operation 720, a color correction matrix is applied at the different spatial locations in an image captured by the sensor. In operation 730, the spectral response at each spatial location is converted to match the spectral response of the sensor at a single reference location (e.g., at or substantially at the center, or another location) on the image.
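Tying the three operations together, a compact sketch under the earlier assumptions (RGGB layout, per-location matrices); matrix_at is a hypothetical callable, such as interp_matrix from the sketch above:

```python
import numpy as np

def correct_image(raw, matrix_at):
    """Operations 710-730 over an RGGB mosaic (illustrative sketch).
    matrix_at(x, y) returns the 4x4 correction matrix for a location."""
    h, w = raw.shape
    out = np.empty((h, w), dtype=np.float64)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            k = matrix_at(x, y)                         # operation 720
            vec = np.array([raw[y, x], raw[y, x + 1],   # R, Gr
                            raw[y + 1, x], raw[y + 1, x + 1]],  # Gb, B
                           dtype=np.float64)
            # Operation 730: convert this cell's response to match the
            # reference (e.g., center) spectral response.
            out[y, x], out[y, x + 1], out[y + 1, x], out[y + 1, x + 1] = k @ vec
    return out
```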
The operations shown and described herein are provided to illustrate exemplary implementations for camera sensor correction. The operations are not limited to the ordering shown. In addition, still other operations may also be implemented as will be readily apparent to those having ordinary skill in the art after becoming familiar with the teachings herein.
It is noted that the exemplary embodiments shown and described are provided for purposes of illustration and are not intended to be limiting. Still other embodiments are also contemplated for camera sensor correction.