The technology described in this document relates generally to image signal processing (ISP) methods and more particularly to green imbalance compensation in imaging devices.
Semiconductor image sensors are used to sense radiation that includes, for example, visible light. Complementary metal-oxide-semiconductor (CMOS) image sensors and charge-coupled device (CCD) sensors are widely used in digital cameras and mobile phone cameras. These sensors utilize an array of pixels located in a substrate, where the pixels include photodiodes and transistors. The pixels absorb radiation projected toward the substrate and convert the absorbed radiation into electrical signals.
The present disclosure is directed to an imaging device and a method of compensating for green pixel imbalance in an image captured by a pixel array. In an example method of compensating for green pixel imbalance in an image captured by a pixel array, a pixel output signal is obtained for each green pixel in a group of pixels of the image captured by the pixel array. The group of pixels includes a plurality of pixel arrays, where each pixel array of the plurality of pixel arrays includes (i) a first green pixel in a row that includes a red pixel, and (ii) a second green pixel in a row that includes a blue pixel. A green imbalance value is determined for the group of pixels based on the obtained pixel output signals. For each pixel array of the plurality of pixel arrays, a difference between the pixel output signal for the first green pixel and the pixel output signal for the second green pixel is calculated. An average of the calculated differences is determined, where the green imbalance value is equal to the average. The pixel output signal of at least one of the green pixels in the group of pixels is adjusted based on the green imbalance value.
In another example method of compensating for green pixel imbalance in an image captured by a pixel array, a pixel output signal for each green pixel in a group of pixels of the image captured by the pixel array is obtained. The group of pixels includes a plurality of non-overlapping pixel arrays. Each pixel array of the plurality of pixel arrays includes (i) a first green pixel in a row that includes a red pixel, and (ii) a second green pixel in a row that includes a blue pixel. A green imbalance value for the group of pixels is determined based on the obtained pixel output signals. For each pixel array of the plurality of pixel arrays, a difference between the pixel output signal for the first green pixel and the pixel output signal for the second green pixel is calculated. For each of the calculated differences, it is determined if the calculated difference is a result of image edge pixels in the group of pixels. The pixel output signal of at least one of the green pixels in the group of pixels is adjusted based on the green imbalance value.
In another example, an imaging device includes a pixel array and a processor for processing pixel output signals received from the pixel array. The processor is configured to obtain a pixel output signal for each green pixel in a group of pixels of an image captured by the pixel array. The group of pixels includes a plurality of pixel arrays, where each pixel array of the plurality of pixel arrays includes (i) a first green pixel in a row that includes a red pixel, and (ii) a second green pixel in a row that includes a blue pixel. The processor is also configured to, for each pixel array of the plurality of pixel arrays, calculate a difference between the pixel output signal for the first green pixel and the pixel output signal for the second green pixel. An average of the calculated differences is determined, where the average is equal to a green imbalance value for the group of pixels. The processor is further configured to adjust the pixel output signal of at least one of the green pixels in the group of pixels based on the green imbalance value.
In order to allow capturing of a color image, the array of pixels 100 utilizes a Bayer pattern. As illustrated in FIG. 1, the Bayer pattern includes rows 102 in which green pixels alternate with blue pixels and rows 104 in which green pixels alternate with red pixels.
The Bayer pattern shown in FIG. 1 thus includes two types of green pixels: green pixels located in the rows 102 with the blue pixels, and green pixels located in the rows 104 with the red pixels.
The use of the Bayer pattern color filter is intended to limit the color of light that is received by each pixel of the array 100. Specifically, the Bayer pattern color filter is intended to cause each pixel to receive light of a single color defined by a wavelength range of the pixel's associated filter (e.g., a red filter, a green filter, or a blue filter). However, due to the angle of incident light, a pixel often receives light of a color different from the intended color defined by the pixel's associated filter. For example, light may pass through a filter element of the Bayer filter at such an angle that the light strikes a neighboring pixel, despite the fact that the neighboring pixel is not associated with the filter element and is not intended to receive light of the color defined by the filter element. This condition is known as “cross-talk.” Cross-talk causes (i) green pixels positioned next to red pixels to receive an amount of red-filtered light, and (ii) green pixels positioned next to blue pixels to receive an amount of blue-filtered light. As a result of such cross-talk, the pixel array 100 has two different detection levels for the color green: the signal produced by the green pixels in the rows 102 with the blue pixels is not the same as the signal produced by the green pixels in the rows 104 with the red pixels. Ideally, the detection level for the color green is uniform across the array 100.
This difference in detection levels for the color green is known as “green-green imbalance” or “green imbalance” and causes unwanted effects and degraded image quality in captured images. Because green imbalance is caused at least in part by the angle of incident light impinging on the pixel array 100, it is location dependent, such that portions of the pixel array 100 experience a green imbalance that differs from that of other portions of the pixel array 100. With smaller pixel sizes and higher sensor resolutions, the undesirable effects of green imbalance increase.
According to examples described herein, green imbalance is corrected by (i) determining a green imbalance value for a portion of the pixel array 100 (e.g., a local neighborhood of pixels included in the pixel array 100), and (ii) digitally correcting an image captured by the pixel array 100 based on the determined green imbalance value. In an example, the portion of the pixel array 100 is a 4 pixel × 4 pixel group of pixels. Examples described herein further describe the consideration of image edge pixels in correcting green imbalance. When capturing an image with the pixel array 100, different detection levels for the color green (i.e., as between green pixels on rows with blue pixels and green pixels on rows with red pixels, as described above) result from both green imbalance and image edges. Thus, both the green imbalance and the image edge pixels are considered in the green imbalance correction methods described herein. The examples described herein are applicable to CMOS imaging devices, CCD imaging devices, and other imaging devices.
In the example of FIG. 2, green imbalance compensation is performed on a group of pixels 202 of an image captured by the pixel array, where the group of pixels 202 is a 4 pixel × 4 pixel group that includes a plurality of 2×2 pixel arrays.
In compensating for the green pixel imbalance, the sampling of pixel output signals is limited to the group of pixels 202, such that the compensation procedure described herein is a local neighborhood operation. The use of such a local neighborhood operation involving the group of pixels 202 is in contrast to alternative green balance procedures that consider only two pixel values (e.g., Gr3 and Gb3, or other combinations of green pixels within a single 2×2 pixel array). In the compensation procedure described herein, a green imbalance value for the group of pixels 202 is determined based on the pixel output signals produced by the green pixels of the group 202, and the green imbalance value is used to correct at least one green pixel output signal from the group of pixels 202.
Specifically, for each 2×2 pixel array of the plurality of 2×2 pixel arrays included in the group 202, a difference between the pixel output signal for the first green pixel (i.e., the green-red or Gr pixel) and the pixel output signal for the second green pixel (i.e., the green-blue or Gb pixel) is calculated. After calculating the differences for the 2×2 pixel arrays of the group 202, an average difference is determined by summing the differences and then dividing by the number of differences calculated. The green imbalance value for the group 202 is set equal to the average difference, and a pixel output signal of at least one of the green pixels in the group of pixels 202 is adjusted based on the green imbalance value. An example application of this procedure for adjusting the pixel output signal of the at least one of the green pixels in the group of pixels 202 is described below with reference to FIG. 3.
The group of pixels 202 includes a plurality of 2×2 pixel arrays, where each of the 2×2 pixel arrays includes (i) a first green pixel in a row that includes a red pixel (i.e., a green-red pixel), and (ii) a second green pixel in a row that includes a blue pixel (i.e., a green-blue pixel). In the example of FIG. 2, the first green pixels are labeled Gr0, Gr1, Gr2, and Gr3, and the second green pixels are labeled Gb0, Gb1, Gb2, and Gb3.
In the example of FIG. 3, at 302, a pixel output signal is obtained for each green pixel in a group of pixels of an image captured by a pixel array.
At 304, differences are calculated between pixel output signals for green pixels in each of the 2×2 pixel arrays of the group. Thus, for each of the 2×2 pixel arrays, a difference is taken between the output signals for the first green pixel and the second green pixel. With reference to the example of FIG. 2, the differences are calculated according to equations:
Delta0=Gr0−Gb0 (Eqn. 1)
Delta1=Gr1−Gb1 (Eqn. 2)
Delta2=Gr2−Gb2 (Eqn. 3)
Delta3=Gr3−Gb3 (Eqn. 4)
In an example, an estimated green imbalance value (Delta_est) between the pixels of interest, Gb0 and Gr3, is determined by taking an average of the calculated differences according to an equation:
Delta_est=(Delta0+Delta1+Delta2+Delta3)/4 (Eqn. 5)
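For illustration, the calculation of Equations 1-5 can be sketched in Python. The pixel layout assumed below (Gr and R pixels on the even rows of the 4×4 group, B and Gb pixels on the odd rows) is an assumption made for this sketch and may differ from the actual arrangement of the group 202.

```python
import numpy as np

def green_deltas(block):
    """Compute Delta_i = Gr_i - Gb_i for the four 2x2 pixel arrays of a
    4x4 group (Eqns. 1-4) and the estimated imbalance Delta_est (Eqn. 5).

    Assumes, for illustration, that even rows hold Gr/R pixels and odd
    rows hold B/Gb pixels.
    """
    block = np.asarray(block, dtype=np.float64)
    gr = block[0::2, 0::2].ravel()   # Gr0, Gr1, Gr2, Gr3
    gb = block[1::2, 1::2].ravel()   # Gb0, Gb1, Gb2, Gb3
    deltas = gr - gb                 # Delta0..Delta3 (Eqns. 1-4)
    delta_est = deltas.mean()        # Delta_est (Eqn. 5)
    return deltas, delta_est
```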
At 306, for each of the calculated differences, it is determined if the calculated difference is caused by edges in the group of pixels. With reference to the example of FIG. 2, the determination is made by comparing each calculated difference to predetermined positive and negative threshold values according to:
Delta_i=Gr_i−Gb_i (i=0, 1, 2, 3)
if Gr_i−Gb_i>Pos_threshold, then Delta_i=Pos_threshold
if Gr_i−Gb_i<Neg_threshold, then Delta_i=Neg_threshold (Eqn. 6)
In Equation 6 above, the differences between the pixel output signals for each of the four 2×2 pixel arrays are calculated according to Delta_i=Gr_i−Gb_i, where “i” is equal to 0, 1, 2, and 3. If the calculated difference Gr_i−Gb_i exceeds the positive threshold, then the calculated difference is considered to be an outlier caused by edges within the group 202, and the calculated difference is clamped to the positive threshold. Similarly, if the calculated difference Gr_i−Gb_i is less than the negative threshold, then the calculated difference is determined to be caused by edges within the group 202, and the calculated difference is clamped to the negative threshold. The upper and lower limit values defined by the positive and negative thresholds are predetermined values that are set via a calibration procedure that is described in greater detail below with reference to FIG. 4.
The use of the comparisons to the positive and negative thresholds reflects the fact that calculated differences Gr_i−Gb_i with unusually high values that exceed the threshold values are generally caused by edges within the group of pixels, rather than an actual green imbalance. Comparing the calculated differences to the thresholds and clamping the calculated differences, if necessary, reduces the error in the calculated differences. In an example, the positive threshold value is a positive number, and the negative threshold value is a negative number. In this example, if the calculated difference Gr_i−Gb_i is a positive value, then the calculated difference is compared to the positive threshold value, and if the calculated difference Gr_i−Gb_i is a negative value, then the calculated difference is compared to the negative threshold value. In the comparison, if the calculated difference has a magnitude that is greater than that of the threshold value, then the calculated difference is adjusted by setting the calculated difference equal to the threshold value. As described in further detail below, this adjusting of the calculated differences occurs prior to determining an average of the calculated differences.
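A minimal sketch of the clamping of Equation 6 follows, assuming the positive and negative thresholds have already been looked up for the relevant location of the pixel array:

```python
import numpy as np

def clamp_deltas(deltas, pos_threshold, neg_threshold):
    """Clamp each Delta_i to [Neg_threshold, Pos_threshold] (Eqn. 6) so
    that outlier differences caused by image edges do not skew the
    average computed in the next step."""
    return np.clip(np.asarray(deltas, dtype=np.float64),
                   neg_threshold, pos_threshold)
```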
At 308, an average of the calculated differences (after comparing the calculated differences to the threshold values and clamping the calculated differences, as necessary) is calculated to determine the green imbalance between the pixels of interest. As explained above, in the example of FIG. 2, the green imbalance value is determined according to an equation:
Green_imbalance=(Delta0+Delta1+Delta2+Delta3)/4 (Eqn. 7)
As explained above, in Equation 7, the calculated differences Delta0, Delta1, Delta2, and Delta3 have been adjusted, as necessary, based on the comparisons to the positive and negative thresholds. Thus, the Green_imbalance value of Equation 7 may differ from the Delta_est value of Equation 5.
At 310, the green imbalance between the pixels of interest within the group of pixels is corrected using the calculated green imbalance value. For the pixels of interest Gr3 and Gb0 of FIG. 2, the pixel output signals are adjusted according to equations:
Gr3_adjusted=Gr3−(Green_imbalance/2) (Eqn. 8)
Gb0_adjusted=Gb0+(Green_imbalance/2) (Eqn. 9)
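Continuing the sketch, Equations 7-9 could be implemented as follows; splitting the correction evenly between the two pixels of interest mirrors the half-imbalance subtraction and addition of Equations 8 and 9.

```python
import numpy as np

def correct_center_pixels(gr3, gb0, clamped_deltas):
    """Average the clamped differences to obtain the green imbalance
    (Eqn. 7), then split the correction between the two pixels of
    interest at the center of the group (Eqns. 8-9)."""
    green_imbalance = float(np.mean(clamped_deltas))  # Eqn. 7
    gr3_adjusted = gr3 - green_imbalance / 2.0        # Eqn. 8
    gb0_adjusted = gb0 + green_imbalance / 2.0        # Eqn. 9
    return gr3_adjusted, gb0_adjusted
```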
In the example of FIG. 4, a calibration procedure for determining the positive and negative threshold values is performed. At 402, a flat field image is obtained using the pixel array.
At 404, lens shading correction is performed on the flat field image to generate a corrected image. Acquiring an image via the pixel array can lead to situations where the image exhibits significant shading. In an example, the image is bright in the center, and the brightness decreases toward the edges of the image. In another example, the image is darker on the left side and lighter on the right side. The shading is caused by non-uniform illumination, non-uniform camera sensitivity, or dirt and dust on a lens surface. Lens shading correction is used to remove such effects from the image after the image has been acquired. After the lens shading correction, the green-red, green-blue, red, and blue channels of the corrected image are at an equal, flat level.
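One way to sketch such a lens shading correction is shown below. The Gaussian low-pass estimate of the shading profile is an assumed estimator chosen to keep the sketch short; a production ISP typically applies a calibrated per-channel gain grid instead.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lens_shading_correct(flat_field, sigma=32):
    """Estimate the low-frequency shading profile of a flat field image
    and divide it out, bringing all channels to an equal, flat level.
    The Gaussian smoothing is an assumed shading estimator."""
    img = np.asarray(flat_field, dtype=np.float64)
    shading = gaussian_filter(img, sigma=sigma)
    gain = shading.max() / np.maximum(shading, 1e-6)  # normalize to peak
    return img * gain
```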
At 406, the corrected image is divided into a plurality of locations. In an example, the corrected image is divided into a 64 × 64 matrix of locations. At 408, for each location of the plurality of locations, the green imbalance in the corrected image is measured and recorded. As explained above, green imbalance is location dependent, such that the measured green imbalance values vary over the plurality of locations in the corrected image. At 410, the positive and negative threshold values are set equal to the measured green imbalance values plus or minus a margin value. Thus, the positive and negative threshold values are location dependent and vary across the different locations of the image, resulting in an array of local green imbalance thresholds (i.e., a positive and a negative threshold for each location of the plurality of locations). If the Gr_i−Gb_i difference value exceeds the threshold, then the difference value is determined to be caused by an edge in the image, and the difference value is clamped to the local green imbalance threshold for the portion of the image under consideration.
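The per-location threshold calibration of steps 406-410 might be sketched as follows; the margin value and the Gr/Gb pixel positions assumed here are illustrative.

```python
import numpy as np

def calibrate_thresholds(corrected, margin, grid=64):
    """Divide the shading-corrected flat field into a grid of locations
    (e.g., 64 x 64), measure the green imbalance in each location, and
    set the local thresholds to the measurement plus/minus a margin."""
    h, w = corrected.shape
    bh, bw = h // grid, w // grid
    pos = np.empty((grid, grid))
    neg = np.empty((grid, grid))
    for r in range(grid):
        for c in range(grid):
            tile = corrected[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            gr_mean = tile[0::2, 0::2].mean()  # assumed Gr positions
            gb_mean = tile[1::2, 1::2].mean()  # assumed Gb positions
            imbalance = gr_mean - gb_mean      # measured at this location
            pos[r, c] = imbalance + margin
            neg[r, c] = imbalance - margin
    return pos, neg
```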
At 506, each Delta_i=Gr_i−Gb_i value is compared to predetermined positive and negative thresholds. If the Delta_i value exceeds either of the thresholds, the Delta_i value is clamped to the threshold that is exceeded. A calibration process for determining the positive and negative thresholds is detailed at steps 512, 514, and 516 of FIG. 5. At 512, a flat field image is obtained using the pixel array. At 514, lens shading correction is performed on the flat field image to generate a corrected image. At 516, calibration is performed to obtain an array of positive and negative threshold values. In an example, the calibration performed at 516 involves determining green imbalance values at various locations of the corrected image. The array of positive and negative threshold values is determined using the green imbalance values, thus causing a plurality of positive and negative threshold values to be defined across the pixel array. In an example, the array of positive and negative threshold values is set equal to the green imbalance values at the various locations of the corrected image plus or minus a margin value, as described above with reference to FIG. 4.
At 508, an average of the Delta_i values is computed. In an example where i=0, 1, 2, 3, the average is computed according to an equation, Delta=(Delta0+Delta1+Delta2+Delta3)/4. The green imbalance value for the group of pixels is equal to the computed average. At 510, at least one pixel output signal for the captured image is adjusted based on the green imbalance value for the group of pixels. In an example, the pixels of interest to be adjusted within the group of pixels are located at a center area of the group of pixels. With reference to FIG. 2, the pixels of interest are the green pixels Gb0 and Gr3, which are located at the center area of the group of pixels 202.
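Tying the sketches above together, the full correction for one 4×4 group might run as follows; the pixel values and thresholds are illustrative, and the functions are the hypothetical ones defined in the earlier sketches.

```python
import numpy as np

block = np.array([
    [100,  80, 102,  81],   # Gr, R, Gr, R   (assumed layout)
    [ 60,  96,  59,  95],   # B, Gb, B, Gb
    [101,  82, 103,  80],
    [ 61,  94,  62,  97],
])
deltas, _ = green_deltas(block)              # [4., 7., 7., 6.]
clamped = clamp_deltas(deltas, 10.0, -10.0)  # unchanged: within limits
gr3_new, gb0_new = correct_center_pixels(
    block[2, 2], block[1, 1], clamped)       # Gr3: 103 -> 100.0, Gb0: 96 -> 99.0
```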
The components of the imaging device 600 are configured to provide image acquisition and green imbalance correction as described herein. In providing the image acquisition, the optical sensing unit 602 includes a pixel array or other components used to form a complementary metal-oxide-semiconductor (CMOS) image sensor or charge-coupled device (CCD) image sensor. In providing the green imbalance correction, the image processing unit 604 includes one or more processors for processing image pixel output signals that are generated by the optical sensing unit 602. The one or more processors of the image processing unit 604 obtain the pixel output signals and perform procedures to adjust the pixel output signals as necessary for the green imbalance correction.
The data storage unit 606 and the memory are configured to hold persistent and non-persistent copies of computer code and data. The computer code includes instructions that, when accessed by the image processing unit 604, result in the imaging device 600 performing green imbalance correction operations as described above. The data includes data to be acted upon by the instructions of the code, and in an example, the data includes stored pixel output signals. The image processing unit 604 includes one or more single-core processors, multiple-core processors, controllers, or application-specific integrated circuits (ASICs), among other types of processing components. The memory includes random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), or double data rate RAM (DDR RAM), among other types of memory.
The data storage unit 606 includes integrated or peripheral storage devices, such as, but not limited to, disks and associated drives (e.g., magnetic, optical), USB storage devices and associated ports, flash memory, read-only memory (ROM), or non-volatile semiconductor devices, among others. In an example, the data storage unit 606 is a storage resource that is physically part of the imaging device 600, and in another example, the data storage unit 606 is accessible by, but not a part of, the imaging device 600. The input/output interface 608 includes interfaces designed to communicate with peripheral hardware (e.g., remote optical imaging sensors or other remote devices). In various embodiments, the imaging device 600 has more or fewer elements or a different architecture.
This written description uses examples to disclose the invention, including the best mode, and also to enable a person skilled in the art to make and use the invention. The patentable scope of the invention includes other examples. Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to carry out the methods and systems described herein.
The systems' and methods' data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
This disclosure claims priority to U.S. Provisional Patent Application No. 61/858,455, filed on Jul. 25, 2013, which is incorporated herein by reference in its entirety.