Method, apparatus, and system for selecting pixels for automatic white balance processing

Information

  • Patent Application
  • Publication Number
    20080049274
  • Date Filed
    August 24, 2006
  • Date Published
    February 28, 2008
Abstract
A method, apparatus, and system that perform a white balance operation. A selecting process is applied to each pixel considered for automatic white balance statistics to determine the distance from the selected pixel to a white curve defined in a white area corresponding to an image sensor.
Description

BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1a is a diagram of a white area for a specific image sensor plotted in a two-dimensional space.



FIG. 1b is a graphical representation of a white curve, within the white area identified in FIG. 1a, plotted in an (x, y) coordinate space.



FIG. 2 is a schematic block diagram of an imaging apparatus that performs an automatic white balance operation in accordance with an embodiment of the invention.



FIG. 3 is a flowchart of an image process method that includes an automatic white balance operation in accordance with an embodiment of the invention.



FIG. 4 is a flowchart of a selecting process in accordance with an embodiment of the invention.



FIGS. 5a and 5b are graphical representations of an example of a white curve for a specific image sensor and a pixel considered for automatic white balance statistics in accordance with an embodiment of the invention.



FIG. 6 is a block diagram of a system that includes an imaging apparatus, such as the apparatus shown in FIG. 2.





DETAILED DESCRIPTION OF THE INVENTION

In the following detailed description, reference is made to various specific embodiments in which the invention may be practiced. These embodiments are described with sufficient detail to enable those skilled in the art to practice them. It is to be understood that other embodiments may be employed, and that structural and logical changes may be made.


The term “pixel” refers to a picture element in an image. Digital data defining an image may, for example, include one or more color values for each pixel. For a color image, each pixel's values may include a value for each of a plurality of colors, such as red, green, and blue.



FIG. 2 illustrates an embodiment of an imaging apparatus 400 of the invention that includes an image sensor 404 and an image processing circuit 408. The imaging apparatus may be used, for example, in a digital camera. The image processing circuit 408 further includes an auto-white balance circuit (“AWB”) 414. The apparatus 400 also includes a lens 402 for directing light from an object to be imaged onto the image sensor 404, which has a pixel array that produces analog signals based on the viewed object. An analog-to-digital (A/D) converter 406 converts the analog pixel signals from the image sensor 404 into digital signals, which are processed by the image processing circuit 408 into digital image data. The output format converter/compression unit 410 converts the digital image data into an appropriate file format for output or display. The controller 412 controls the operations of the entire apparatus 400.


The image sensor 404 may be any type of solid state image sensor, including CMOS, CCD, and others. The image sensor 404 receives image information in the form of photons and converts that information into analog pixel electrical signals, which are subsequently provided to downstream processing circuits. In the imaging apparatus 400, the image sensor 404 provides electrical signals to the image processing circuit 408.


The image processing circuit 408 performs image processing on the digital signals received from analog-to-digital (A/D) converter 406. The image processing circuit can be implemented using logic circuits in hardware, or using a programmed processor, or by a combination of both. The image processing circuit 408 may include other circuits that perform pixel defect correction, demosaicing, image resizing, aperture correction, and correction for other effects or defects.


In an embodiment, the image processing circuit 408 outputs pixels having RGB channels representative of the image data from the red, green, and blue pixels of the image sensor 404. The image processing circuit 408 also converts the RGB image signals into a second set of data structures representing image pixel data in YUV format, which are likewise representative of the RGB data. YUV stores image data as Y—luminance (“luma”), U—blue chrominance (“blue chroma” or “Cb”), and V—red chrominance (“red chroma” or “Cr”).
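The RGB-to-YUV conversion step described above can be sketched as follows. The patent does not specify the conversion matrix; the ITU-R BT.601 luma weights used here are a standard choice, shown only for illustration, and the function name is hypothetical.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV (illustrative ITU-R BT.601 weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # Y: luminance ("luma")
    u = 0.492 * (b - y)                    # U: blue chrominance ("Cb")
    v = 0.877 * (r - y)                    # V: red chrominance ("Cr")
    return y, u, v
```

A neutral gray pixel (equal R, G, B) maps to zero chrominance, which is exactly the property the white balance computation exploits when plotting U and V in a two-dimensional space.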


The auto-white balance circuit 414 receives image data, in the form of the YUV data structures, and computes the correction values, if required, to perform white balancing. The auto-white balance circuit 414 provides the results of its computation to the image processing circuit 408. The white balance computation in the auto-white balance circuit 414 is performed using the YUV data structure because of the inherent properties of the YUV data structure. The YUV data structure breaks down an image into Y, the luminance values, and UV, which is essentially a two-dimensional representation of color, where the two color components are U and V (i.e., Cb and Cr). Any color can be expressed in terms of the two color components, which can be plotted in an x and y coordinate space.


The image processing circuit 408 receives information from the auto-white balance circuit 414 and provides gain information for the components of the YUV data structure, and thus makes appropriate adjustments to an image.



FIG. 3 is a flowchart illustrating an imaging process method 100 that includes a white balancing operation using a white point estimation in accordance with an embodiment described herein. As depicted in FIG. 3, after the image is obtained by the image sensor 404 (step 102), the auto-white balance circuit 414 performs a white balancing operation (steps 104-140). As part of the white balancing operation, a white point estimation (steps 112-128) is also executed by the auto-white balance circuit 414, in which one selecting criterion determines whether a selected pixel is within a white area of the image.


At step 104, a first pixel considered for automatic white balance statistics is selected from the captured image. Typically, the pixels considered for automatic white balance statistics are selected from the captured image in a set order, row by row and pixel by pixel within a row. Next, at step 112, it is determined whether the pixel is in the white area (described below in more detail). The acceptance of the pixel for automatic white balance statistics, depicted as a “Yes” response at step 112, depends on the selecting criterion (described below in more detail) used at step 112.


At step 120, the data for a selected pixel that meets the selecting criterion is saved, and the method 100 continues at step 124. On the other hand, if the selected pixel does not meet the selecting criterion at step 112, then the method 100 continues at step 124 without saving the pixel data. At step 124, a determination is made to decide if the selected pixel is the last pixel to be tested. If it is determined that the pixel is not the last pixel, the method 100 continues at step 128. Otherwise, the method 100 continues at step 136.


Steps 112-128 are repeated until it is determined at step 124 that the last pixel from the captured image has been selected. Once it is determined at step 124 that a selected pixel is the last pixel considered for automatic white balance statistics, then at step 136, the saved pixel data is used to obtain the automatic white balance statistics. At step 140, the automatic white balance statistics are then used by the auto-white balance circuit 414 to perform a white balancing operation.


If it is determined at step 124 that the selected pixel is not the last pixel to be considered for automatic white balance statistics, then at step 128, the next pixel is selected, and the method 100 continues at step 112.


Determining whether a pixel meets the selecting criterion used at step 112 includes estimating the distance from the selected pixel to a white curve, e.g., FIG. 1b, specified in the white area of the image sensor 404. FIGS. 4, 5a, and 5b illustrate an embodiment of the pixel selecting process used at step 112 (FIG. 3) that includes a relatively quick method of determining the shortest distance between the selected pixel and the white curve, which is specified by a piecewise linear curve.


Once the first pixel has been selected at step 104 of FIG. 3, the selected pixel is converted into the YUV data structure, as described above. The two color components, U and V, are plotted in the x and y coordinate space at step 202 (FIG. 4). At step 204, the coordinates defining a white curve within a white area of the image sensor, specified during manufacturing, are obtained from storage. The white curve is defined by a piecewise linear curve plotted in (x, y) coordinate space. FIGS. 5a and 5b illustrate an example of the white curve and the selected pixel plotted on a diagram in an (x, y) coordinate space.



FIG. 5a illustrates the white curve L50 having a set of nodes P1, P2, P3, P4 with associated coordinates (X1,Y1), (X2,Y2), (X3,Y3), (X4,Y4). Interval L12 is between nodes P1 and P2. Interval L23 is between nodes P2 and P3. Interval L34 is between nodes P3 and P4. The selected pixel being considered for white balance statistics is defined by node P0 having associated coordinates (X0,Y0).


Referring to FIGS. 5b and 4, at step 206, the distance between the selected pixel node P0 and each white curve node P1, P2, P3, P4 is calculated by applying the procedure DIST(A, B) as follows:

Dis1 = DIST(P0, P1);

Dis2 = DIST(P0, P2);

Dis3 = DIST(P0, P3); and

Dis4 = DIST(P0, P4);

where DIST(A, B) estimates the distance between nodes A and B. If node A has coordinates (Xa, Ya) and node B has coordinates (Xb, Yb), the equation defining the distance between these two nodes is DIST(A, B) = √((Xa−Xb)² + (Ya−Yb)²). Implementation of this equation might be difficult and costly; alternatively, the following estimation can be easily implemented. The estimation provides an acceptable error, less than 10%, in distance detection.


Applying the estimation to the illustrated example in FIG. 5a, we begin by calculating the distance between selected pixel node P0 and white curve node P1, wherein Dis1=DIST(P0, P1), as follows:


(1) Performing an initial determination:

dX = |X0 − X1|; dY = |Y0 − Y1|,

D = dX + dY;


(2) Calculating the Min and Max:

IF (dX > dY) then
   Min = dY;
   Max1 = dX;
   Max2 = dX / 2;
   Max4 = dX / 4;
   Max8 = dX / 8;
ELSE
   Min = dX;
   Max1 = dY;
   Max2 = dY / 2;
   Max4 = dY / 4;
   Max8 = dY / 8;

(3) Setting the distance based on these relationships:

IF (Min <= Max8)
   DIST = D;
IF (Max8 < Min <= Max4)
   DIST = D/2 + D/4 + D/8;
IF (Min > Max4)
   DIST = D/2 + D/4;
The DIST(A, B) procedure discussed above is repeated to calculate the distance between the selected pixel node P0 and each white curve node P1, P2, P3, P4. Accordingly, the DIST(A, B) procedure continues with Dis2 = DIST(P0, P2); Dis3 = DIST(P0, P3); and Dis4 = DIST(P0, P4).
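The three-part estimation above can be captured in one function. This is a sketch of the patent's Min/Max procedure; the function name and tuple representation of nodes are assumptions for illustration.

```python
def dist_estimate(a, b):
    """Approximate the distance between nodes a and b per steps (1)-(3).

    D = dX + dY is scaled down according to how elongated the (dX, dY)
    box is, avoiding a square root; the text states the error stays
    under roughly 10%.
    """
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    d = dx + dy                          # step (1): D = dX + dY
    mn, mx = (dy, dx) if dx > dy else (dx, dy)  # step (2): Min and Max
    if mn <= mx / 8:                     # nearly axis-aligned: D is close
        return d
    elif mn <= mx / 4:                   # moderately elongated: 7D/8
        return d / 2 + d / 4 + d / 8
    else:                                # near-diagonal: 3D/4
        return d / 2 + d / 4
```

For a purely horizontal offset such as (0, 0) to (3, 0), the estimate equals the true distance 3; for a diagonal offset like (0, 0) to (1, 1), it yields 1.5 against a true distance of about 1.414.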


At step 208, the two nodes on the white curve closest to the selected pixel node P0 define an interval on the white curve, which is selected for further evaluation. In the example shown in FIG. 5a, the two white curve nodes closest to the selected pixel node P0 are P1 and P2, so the white curve interval L12 between P1 and P2 is selected for further evaluation.



FIG. 5b shows a blow-up of the two nodes P1 and P2 with associated coordinates (X1,Y1) and (X2,Y2). At step 210, the interval L12 between the two nodes P1 and P2 is divided into four equal intervals. FIG. 5b illustrates the interval between the two nodes P1 and P2 having a set of sub-nodes P1, P5, P6, P7, P2 with associated coordinates (X1,Y1), (X5,Y5), (X6,Y6), (X7,Y7), (X2,Y2). The coordinates for sub-nodes P5, P6, and P7 can be defined as follows:






X
5
=X
1+deltaX/4;






X
6
=X
1+deltaX/2;






X
7
=X
1+deltaX/2+deltaX/4;






Y
5
=Y
2+deltaY/2+deltaY/4;






Y
6
=Y
2+deltaY/2; and






Y
7
=Y
2+deltaY/4,


where deltaX=X2−X1 and deltaY=Y1−Y2. FIG. 5b depicts the interval L12 between the two nodes P1 and P2 divided into four equal intervals L15, L56, L67, L72. Interval L15 is between nodes P1 and P5. Interval L56 is between nodes P5 and P6. Interval L67 is between nodes P6 and P7. Interval L72 is between nodes P7 and P2.
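The sub-node formulas above translate directly into code. The function name and tuple representation are assumptions; the arithmetic follows the patent's deltaX/deltaY definitions, including their opposite orientations (deltaX = X2−X1 but deltaY = Y1−Y2).

```python
def subdivide_interval(p1, p2):
    """Divide white-curve interval [P1, P2] into four equal sub-intervals
    (step 210), returning the five sub-nodes P1, P5, P6, P7, P2 of FIG. 5b."""
    x1, y1 = p1
    x2, y2 = p2
    dx = x2 - x1  # deltaX in the patent
    dy = y1 - y2  # deltaY in the patent (note the reversed orientation)
    p5 = (x1 + dx / 4, y2 + dy / 2 + dy / 4)
    p6 = (x1 + dx / 2, y2 + dy / 2)
    p7 = (x1 + dx / 2 + dx / 4, y2 + dy / 4)
    return [p1, p5, p6, p7, p2]
```

For instance, subdividing the interval from (0, 4) to (4, 0) yields the evenly spaced sub-nodes (1, 3), (2, 2), and (3, 1) between the endpoints.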


Next, at step 212, the distance between the selected pixel node P0 and each sub-node P1, P5, P6, P7, and P2 is calculated. FIG. 5b depicts the distances d1, d5, d6, d7 and d2 calculated between the selected pixel node P0 and each sub-node P1, P5, P6, P7, P2. The distances d1, d5, d6, d7 and d2 are computed as follows:

d1 = DIST(P0, P1);

d5 = DIST(P0, P5);

d6 = DIST(P0, P6);

d7 = DIST(P0, P7); and

d2 = DIST(P0, P2),


where the procedure DIST(A, B) calculates the distances d1, d5, d6, d7 and d2 by applying the same estimation as discussed above.


It should be appreciated by one skilled in the art that steps 208 through 212 can be performed repeatedly to further narrow the intervals on the white curve and find the sub-node closest to the pixel, thereby determining the shortest possible distance from the pixel to the white curve.
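One way this iterative narrowing of steps 208-212 could look is sketched below. This is an assumption-laden illustration: the function name, the `iterations` knob, and the use of exact Euclidean distance (instead of the patent's estimation) are all hypothetical choices, not specified by the patent.

```python
import math

def refine_shortest_distance(p0, p1, p2, iterations=3):
    """Repeat the subdivide-and-measure cycle of steps 208-212 to tighten
    the estimate of the pixel-to-curve distance on interval [p1, p2]."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    for _ in range(iterations):
        # Step 210: four equal sub-intervals of the current interval.
        subs = [(p1[0] + (p2[0] - p1[0]) * t, p1[1] + (p2[1] - p1[1]) * t)
                for t in (0, 0.25, 0.5, 0.75, 1)]
        # Step 212: distance from the pixel to every sub-node.
        d = [dist(p0, s) for s in subs]
        i = d.index(min(d))
        # Narrow to the neighborhood of the closest sub-node.
        p1, p2 = subs[max(i - 1, 0)], subs[min(i + 1, 4)]
    return min(dist(p0, s) for s in (p1, p2))
```

With the pixel at (2, 1) and an interval from (0, 0) to (4, 0), a few iterations converge toward the true perpendicular distance of 1.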


At step 214, the shortest distance among the distances d1, d5, d6, d7 and d2 is selected for the next evaluation. At step 216, the threshold distance TH, shown in FIGS. 5a and 5b, which was determined during manufacturing, is obtained from a storage area. At step 218, the selected shortest distance is compared with the threshold distance TH. If the selected shortest distance is less than the threshold distance TH, the pixel is determined to be within the white area of the image, and its data is saved (step 120) and used (step 136) for the white balance statistics, which are in turn used to perform a white balance operation (step 140). Otherwise, the pixel is determined to be outside the white area of the image and the pixel data is discarded.
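Putting steps 206-218 together gives a compact white-area test. This sketch makes two assumptions not spelled out in the patent: the two closest white-curve nodes are taken as an adjacent pair, and exact Euclidean distance stands in for the patent's estimation procedure; the function name is also hypothetical.

```python
import math

def pixel_in_white_area(p0, curve_nodes, threshold):
    """Steps 206-218: accept pixel node p0 when its shortest sub-node
    distance to the piecewise-linear white curve is below TH."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Steps 206-208: pick the adjacent node pair nearest the pixel.
    p1, p2 = min(zip(curve_nodes, curve_nodes[1:]),
                 key=lambda pair: dist(p0, pair[0]) + dist(p0, pair[1]))
    # Step 210: four equal sub-intervals -> sub-nodes P1, P5, P6, P7, P2.
    sub_nodes = [(p1[0] + (p2[0] - p1[0]) * t, p1[1] + (p2[1] - p1[1]) * t)
                 for t in (0, 0.25, 0.5, 0.75, 1)]
    # Steps 212-218: compare the shortest sub-node distance with TH.
    return min(dist(p0, s) for s in sub_nodes) < threshold
```

For a flat curve through (0, 0), (4, 0), (8, 0), the pixel (2, 1) sits a distance 1 from the curve, so it is accepted with TH = 1.5 and rejected with TH = 0.5.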


The selecting process described above is independent of the type of coordinates used to define the white curve. Furthermore, the selecting process does not require a large amount of resources for implementation due to the estimations used.



FIG. 6 shows an embodiment of a processor system 700, e.g., a camera system, which includes an imaging apparatus 400 (as constructed in FIG. 2) that uses the white balance statistics and balancing operation described above. Without being limiting, system 700 could instead be a computer system, scanner, machine vision system, vehicle navigation system, video telephone, surveillance system, auto focus system, star tracker system, motion detection system, image stabilization system, or other image acquisition or processing system.


System 700, for example a camera system, includes a lens 402 for focusing an image on the pixel array of the image sensor 404, and a central processing unit (CPU) 705, such as a microprocessor, which controls camera operation and communicates with one or more input/output (I/O) devices 710 over a bus 715. Imaging apparatus 400 also communicates with the CPU 705 over bus 715. The processor system 700 also includes random access memory (RAM) 720 and can include removable memory 725, such as flash memory, which also communicate with the CPU 705 over the bus 715. Imaging apparatus 400 may be combined with the CPU, with or without memory storage, on a single integrated circuit, or may reside on a different chip than the CPU.


The above description and drawings illustrate embodiments of the invention. Although certain advantages and embodiments have been described above, those skilled in the art will recognize that substitutions, additions, deletions, modifications and/or other changes may be made. Accordingly, the embodiments are not limited by the foregoing description but are only limited by the appended claims.

Claims
  • 1. A method for performing a white balance operation on image data, comprising: representing values of a pixel from the image data in a coordinate color space; representing coordinates of a white curve in a white area specified for an image sensor in the coordinate color space; determining two nodes on the white curve closest to the pixel, which define a white curve interval; determining the shortest distance from the pixel to each of a plurality of sub-nodes on the white curve interval; selecting the pixel for a white balance operation if the shortest distance is less than a predetermined threshold distance; and using data of a selected pixel in a white balance operation.
  • 2. The method of claim 1, wherein the color space corresponds to the YUV coordinate color space.
  • 3. The method of claim 1, further comprising determining a distance (DIST(A, B)) between a pixel node A and a node B on the white curve according to: DIST(A, B) = √((Xa−Xb)² + (Ya−Yb)²), wherein (Xa, Ya) and (Xb, Yb) represent the respective coordinates of nodes A and B in the coordinate color space.
  • 4. The method of claim 3, further comprising applying an estimation method to implement the DIST(A, B) equation, the estimation method comprising: performing an initial determination by calculating dX = |Xa−Xb|; dY = |Ya−Yb|, and D = dX + dY; calculating Min and Max by determining
  • 5. An image processing method, comprising: representing color values of each pixel in an image as coordinates of an x and y coordinate space; providing coordinates of a white curve in a white area specified in the x and y coordinate space; providing pixels from the image for testing; and for each pixel provided for testing: estimating first respective distances in the x and y coordinate space from the pixel to each of a plurality of points on the white curve; selecting two points on the white curve having the smallest distances to the pixel, the two points defining an interval on the white curve; estimating second respective distances in the x and y coordinate space from the pixel to a plurality of points on the interval; selecting the pixel for use in a white balance operation when a shortest distance among the second respective distances is less than a predetermined threshold distance; and performing a white balance operation on the image using selected pixels.
  • 6. The method of claim 5, wherein the x and y coordinates correspond to UV coordinates.
  • 7. An image processing method, comprising: selecting a pixel from a captured image; converting the pixel into coordinates in a two-dimensional color space; obtaining coordinates for a white curve in said two-dimensional color space, the white curve corresponding to an image sensor used to capture the image; calculating first respective distances between a node representing the coordinates of the selected pixel and a plurality of nodes representing coordinates on the white curve; selecting two of the white curve nodes that are closest to the selected pixel node; dividing an interval between the two closest white curve nodes into a predetermined number of sub-intervals having a set of sub-nodes; calculating second respective distances between the selected pixel node and each sub-node; selecting a shortest distance among the second respective distances to the selected pixel node; and saving data for the pixel if the selected shortest distance is less than a predetermined threshold.
  • 8. The method of claim 7, wherein the predetermined number of sub-intervals is four.
  • 9. The method of claim 8, further comprising using the saved data for a white balancing operation.
  • 10. An imaging device, comprising: a pixel array for capturing an image; and an image processing circuit for performing a white balancing operation, the image processing circuit comprising: a first circuit portion for: calculating first respective distances between a node of a selected pixel from the captured image and each node on a white curve defining a predetermined white area of the image sensor, where the nodes are plotted in a two-dimensional space; selecting two of the white curve nodes closest to the selected pixel node; dividing an interval between the two closest white curve nodes into multiple intervals having a set of sub-nodes; calculating second respective distances between the selected pixel node and each sub-node; selecting a distance determined to be the shortest distance among the second respective distances; and saving pixel data for a selected pixel if said selected distance is less than a predetermined value; and a second circuit portion for performing an automatic white balance operation using the saved pixel data.
  • 11. The imaging device of claim 10, wherein the first circuit portion of the image processing circuit defines coordinates of the white curve nodes in an x and y coordinate color space.
  • 12. The imaging device of claim 10, wherein the first circuit portion of the image processing circuit defines a coordinate of the pixel node in an x and y coordinate color space.
  • 13. An imaging system, comprising: a processor; and an imaging device coupled to the processor, the imaging device comprising: an image sensor for receiving an image and outputting an image signal that includes pixel data for each pixel of the image; and an image processor in communication with the image sensor for performing a white balancing operation on the image, the image processor comprising: means for defining a white curve as a piece-wise linear curve in a two-dimensional space, wherein the white curve is located in the white area, and the piece-wise linear curve is defined by a set of white curve nodes; means for defining the pixel by a pixel node in the two-dimensional space; means for calculating first respective distances between the selected pixel node and each white curve node; means for selecting two white curve nodes determined to be closest to the selected pixel node; means for defining an interval between the selected two white curve nodes by multiple sub-nodes; means for calculating second respective distances between the selected pixel node and each sub-node; means for selecting a shortest distance among the second respective distances; means for determining if the shortest distance is less than a predefined threshold distance; means for saving only pixels having a shortest distance which is less than the predefined distance to obtain automatic white balance statistics; and means for performing white balance on the image by applying the obtained automatic white balance statistics.
  • 14. A camera, comprising: a lens; an image sensor for receiving an image from the lens and outputting an image signal that includes pixel data for each pixel of the image; a memory for storing a white curve associated with said image sensor, the white curve being defined by multiple node coordinates in a coordinate color space; and an image processor for processing the image signal, the image processor being operable on each of at least some pixels of the image, to: select the pixel from the image, the pixel having a pixel node in the color space; obtain coordinates for the stored white curve; calculate first respective distances from the selected pixel node to each white curve node; select two of the white curve nodes determined to be closest to the selected pixel node; divide an interval between the two white curve nodes into four equal sub-intervals, the sub-intervals being defined in the color space by a set of sub-nodes; calculate second respective distances from the selected pixel node to each sub-node; and save a pixel which has a shortest distance among the second respective distances which is less than a threshold defining a boundary of a white area in the color space.
  • 15. The camera of claim 14, wherein the image processor is further operable to use saved selected pixels to obtain automatic white balance statistics, and to use the statistics in an automatic white balance operation on the image.
  • 16. The camera of claim 15, wherein the color space corresponds to the YUV domain.
  • 17. The camera of claim 16, wherein the coordinates for the color space correspond to UV components.