This invention is a method for processing data provided by an electronic image-capturing device that makes the device self-calibrating and allows it to determine precise colors of a segment or segments of the captured image.
An electronic image capture device (typically a digital camera) captures an image of a target area of unknown color on an object, along with reference colors that have been placed in its field of view in close proximity to the target area. Pre-determined color measurements have been made for the reference colors, with data reported as separate intensity values for each channel (typically red, green, and blue). These data are corrected using a four-step mathematical process that generates corrected intensity values in each channel for the target area, independently of the state of the imaging device and variations in the illumination of the target.
One use of the invention applies it to measure skin color or hair color to assist in the selection of health and beauty products. The device is placed against the skin or hair, and pixel values are collected from each channel from the target area and reference colors. These data are processed in software to determine the color of the target area. The software can then either report the color or perform further processing to identify the product with the best color match, recommend coloring products and processes to achieve a target color, or predict and/or simulate (present a visual representation of) the result when a particular product is selected.
Another use of the invention applies it to measure colors of a home décor product to assist in selection, make recommendations, and/or assess the compatibility of various products. The device is placed against the target product, and pixel values are collected from each channel from the target area and reference colors. These data are processed in software to determine the actual color of the target area. The device can then report the color information or perform further processing to, for example, suggest products of a closely matching color, report the level of similarity to a second color measured and processed in a similar way, assess compatibility with a second color measured in a similar way, and/or recommend other colors that would be compatible with the measured color.
Operational Overview
In operation, an image capture device (typically including a digital imaging chip, such as a color CMOS or CCD sensor or a monochrome sensor with external color filters, that sits under the lens 12 shown in
The image capture device, as used in one embodiment of this invention, captures an image of a set of fixed reference colors and an unknown color for which a measurement is desired (target region), as described in U.S. Published Application 2004/0179101, which is incorporated herein by reference. The fixed reference colors 14 (
Prior to using the device, a series of “Setup Steps” is conducted (typically in manufacturing) to profile the performance of each device as compared to other devices, a particular reference device or devices, and/or a standard, calculated, “predicted” performance. These Setup Steps yield data that capture the unique characteristics of each device. In operation, the device uses data generated by the Setup Steps to perform a series of mathematical corrections, herein referred to as “Operational Adjustments,” that process and correct the raw data from the image capture device.
A description of the Setup Steps is outlined in the flowchart shown as
Setup Step 1: Field Correction. Field correction compensates for differences in illumination and detector sensitivity from region to region. To perform this correction, the same color is placed in each reference color region. The imaging system should report each region with the same value c. However, because of differences in illumination and detector response, there will typically be a different measurement m_i for each of the i regions of interest. In certain embodiments it may not be required to place the same color in all the reference color regions, although this is the desirable approach.
The goal of the field correction step is to solve for a correction value x_i for each region of interest so that, after the correction value is applied, each region will measure the same value, as shown in equation 1.
c = x_i × m_i (1)
The constant c depends on the target color used in the field correction process. If, as is typical, a mid-tone gray color is chosen, c would be 0.5 for all channels.
To find the field correction values in one embodiment, the multiple reference colors 14 mentioned earlier are replaced in step 18 with a uniform, single color 20. This same color is also placed in the target area so that the target area and the regions of interest are the same. The device reports data (typically R, G, and B pixel values) from these regions, m_i, and uses equation 2 in step 22 to calculate the field correction array of x_i values, one for each channel in each region.

x_i = c / m_i (2)
Since lighting, shadows, lens anomalies, and the like can “corrupt” any one of the readings, it is important that each reference area and the target area be “equalized” to factor out any irregularities. For this reason, a single color (e.g., all reference colors the same) is read.
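By way of illustration only, the following is a minimal sketch of this field correction step in Python/NumPy, assuming hypothetical per-region, per-channel pixel averages and the mid-tone gray target (c = 0.5) mentioned above; all numeric values are invented for the example.

```python
import numpy as np

# Hypothetical mean pixel values (scaled 0..1) read from four regions of
# interest while a uniform mid-tone gray fills every region (steps 18/20).
m = np.array([
    [0.48, 0.47, 0.50],  # region 0: R, G, B
    [0.52, 0.51, 0.49],  # region 1
    [0.45, 0.46, 0.47],  # region 2
    [0.50, 0.50, 0.51],  # region 3
])

c = 0.5  # expected response for a mid-tone gray, per the text

# Equation 2: one correction factor per channel in each region.
x = c / m

# Equation 1 check: applying the corrections equalizes every region.
assert np.allclose(x * m, c)
```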
Setup Step 2: Level Correction. Level correction scales the region measurements to the neighborhood of the predicted reference responses. Because environmental conditions and other external influences (e.g., a low battery vs. a fresh battery) can change the performance of the electronics, a measurement of a color made on one occasion may not return the same values as a measurement of that exact same color made on another occasion. Scaling compensates for these changes in the device: readings from the black and white (or representative dark and light) references are recorded at a point in time, and these values define the “base state” of the device. Future readings can then be mathematically adjusted so that the range of values provided by the device under a new set of conditions is converted to values in the range the device reported in its base state.
The fixed references typically contain two reference colors for level correction. In one embodiment the level correction colors include a light or highly reflective sample and a dark or highly absorbing sample. There are several workable approaches to choosing the light and dark references, but in practice the scaling process will be successful as long as the scaling references approximate the lightest and darkest colors that the device will be expected to measure. As an alternative approach in another embodiment, a mid-range gray reference can be used as a single-point scaling reference, but the two-reference approach described above will typically better characterize changes in performance.
In Setup Step 2, the predicted responses 24 for the level correction references are determined in step 27. In one embodiment of the invention, the imaging system is designed to report a vector t, typically of device red, green, and blue responses, to a color stimulus. That color stimulus r is a vector of n spectral reflectance values. In one embodiment n is 31, which segments the visible light spectrum into 10 nm bands, but smaller or larger bands (and correspondingly larger or smaller values of n) are also possible. The process can be modeled in the following matrix/vector equation:
t = F^T D^T I^T L^T r (3)
where, in the case of an image capture device that reports red, green, and blue values: F is an n×3 matrix whose 3 columns are the n spectral transmittances for the red, green, and blue filters respectively; D is an n×n diagonal matrix of the detector quantum efficiency; I is an n×n diagonal matrix of the spectral transmittance of the optical system, including the lens and infrared cutoff filter; and L is an n×n diagonal matrix of the illumination spectral power distribution.
The spectral quantities D, I, and L in equation 3 can either be measured directly in step 25 or supplied by the manufacturer of the component in question, as shown in step 26: the imaging device in the case of D, the lens and infrared filter in the case of I, and the illumination source (a white LED in this embodiment) in the case of L. Those quantities may be for a specific device or an average component and, as such, may not represent any individual imaging system as constructed. This model of the imaging system is used in step 27 to calculate the predicted responses of the system to any color in the image it captures, and these predicted values 24 can be used to calculate corrections that compensate for the unique characteristics of an individual imaging system.
Predicted values of red, green, and blue intensity for the level correction references can be found using equation 3 and the spectral reflectance of the light and dark references, obtained from either a data sheet or measurement with a suitable instrument. Equation 3 yields vectors t_L and t_D, which contain the predicted red, green, and blue channel values for the light and dark references respectively.
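As an illustrative sketch of this predicted-response model (equation 3), the following Python/NumPy fragment uses synthetic placeholder spectra in place of the measured or manufacturer-supplied quantities of steps 25 and 26; every numeric value below is an assumption made for the example.

```python
import numpy as np

n = 31                         # 10 nm bands across the visible spectrum
wl = np.linspace(400, 700, n)  # band-center wavelengths in nm

# Gaussian stand-ins for the R, G, B filter transmittances (n x 3 matrix F).
F = np.stack([np.exp(-((wl - peak) / 40.0) ** 2)
              for peak in (610, 540, 460)], axis=1)
D = np.diag(np.full(n, 0.60))  # detector quantum efficiency
I = np.diag(np.full(n, 0.95))  # lens + IR-cutoff transmittance
L = np.diag(np.full(n, 1.00))  # illuminant spectral power (white LED)

def predict(r):
    """Equation 3 (transposes of the diagonal matrices omitted, since they
    have no effect): predicted device response t for spectral stimulus r."""
    return F.T @ D @ I @ L @ r

t_L = predict(np.full(n, 0.90))  # light, highly reflective reference
t_D = predict(np.full(n, 0.04))  # dark, highly absorbing reference
```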
As an alternative embodiment of the invention, rather than using predicted responses, R, G, and B intensity measurements for the light and dark references 26 can be acquired from one or more devices and used as reference readings 24 for level correction.
Setup Step 3: Color Correction. As noted earlier, the fixed references also include a set of m colors for color correction. The predicted values for these references are used in operation of the device to make color adjustments. In Setup Step 3, the predicted responses for the color correction references are computed in steps 28 and 29 in the same manner as that used to compute the predicted responses for the level correction references (steps 25 and 27). Using the spectral reflectance values of each of the m color correction references as color stimulus r (either from data sheets or measured) and equation 3, a red-green-blue image capture device yields a 3×m matrix Y that contains the predicted values 31.
As an alternative, rather than using predicted responses, R, G, and B intensity measurements can be acquired in step 30 from one or more devices and used as reference readings for color correction.
Setup Step 4: Calibration Predicted Values. Calibration samples are two colors that are measured by the device as targets to calculate a calibration correction in step 32. The two colors are typically a light or highly reflective sample and a dark or highly absorbing sample. Preferably, these samples should be different from the level correction reference colors that are part of the device (discussed in Setup Step 2), because calibrating performance using two additional colors will improve performance of the device; if necessary, however, the same light and dark reference colors could be used. If different colors are chosen as calibration samples, they should again be chosen to approximate the boundaries of the span of colors the device is intended to measure.
In Setup Step 4, in accordance with one embodiment, the color references are measured in step 32 and the predicted responses 34 for the calibration correction samples are computed in step 33 in a manner analogous to that used to compute the predicted responses 24 for the level correction references. Using a vector of n (typically 31, as discussed above) spectral reflectance values for each calibration correction target as color stimulus r (either from data sheets or measured) and equation 3 yields vectors c_L and c_D, which contain the predicted red, green, and blue channel values 34 for the light and dark calibration samples respectively.
Again, as an alternative embodiment of the invention, measurement data can be acquired in alternate step 35 from one or more devices and used as reference readings for calibration.
Setup Step 5: Calibration Using Light and Dark Targets. The light and dark calibration samples for which predicted values were derived in Setup Step 4 are measured by the device at 38 and 39 as if they were an unknown target. In normal operation, the device makes four successive adjustments to the raw data from an unknown target: field correction, level correction, color correction, and calibration correction, as described in the following section. For Setup Step 5, the raw data from the color references 40b and the level reference readings 40a are collected for each of the light and dark target colors 38/39. These data and the raw data from the light and dark calibration targets 40c are processed using only the first three of these adjustments in steps 42, 44, and 46. The field correction 22′ is applied to yield field-adjusted level reference, color reference, and light/dark color readings 43a, 43b, and 43c respectively. The level correction 24 is applied 24′ to the field-adjusted reference and light/dark target values 43b and 43c in step 44 to yield level-adjusted color and light/dark color readings 44b and 44c respectively. Based upon the predicted color values 31 and the level-adjusted reference values, a color correction matrix 80 is determined as discussed in detail in U.S. Published Application 2004/0179101.
The result 45 of these three adjustments is two vectors, x_L and x_D, which, in the typical case, contain the measured and adjusted red, green, and blue channel values for the light and dark calibration samples respectively. These data are used in operation of the device to make the fourth adjustment, the calibration correction.
Operational Adjustments
During operation, in step 48, “raw” data (i.e., unprocessed, uncorrected, and unadjusted) are collected from the target area 49c that contains the color for which a color measurement is desired, and also from the light and dark level references 49a and the reference color samples 49b. These data are used in conjunction with data collected and derived during the Setup Steps to make a series of adjustments to the raw target data provided by the imaging device, as described below.
The result of these adjustments is typically a set of red, green, and blue values for the unknown color in the target area that, for any given target color, remain largely consistent regardless of changes within the device, and largely consistent from one device to another. Optionally, using standard device characterization techniques, the resulting values can be further transformed to an industry-standard color space such as CIE XYZ or CIELAB.
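Where such a transform is applied, the final step is typically the standard CIE XYZ to CIELAB conversion. The sketch below shows only that standard conversion (the device characterization that produces the XYZ values is outside this example); the D65 white point and the sample values are assumptions for illustration.

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):  # D65 reference white
    """Standard CIE XYZ -> CIELAB conversion."""
    t = np.asarray(xyz, dtype=float) / np.asarray(white)
    f = np.where(t > (6 / 29) ** 3,
                 np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,       # L*
                     500 * (f[0] - f[1]),   # a*
                     200 * (f[1] - f[2])])  # b*

print(xyz_to_lab([41.24, 21.26, 1.93]))  # approximately the XYZ of sRGB red
```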
The adjustments are summarized in the flowchart shown in
Adjustment 1: Field Correction. In step 50, the field correction array 22 (an x_i for each region i) calculated in Setup Step 1 is applied using equation 4, where r_i is the raw data from the actual reference colors and m_i becomes the field corrected raw data, adjusted to correct for uneven illumination and detector response.
m_i = x_i × r_i (4)
Adjustment 2: Level Correction. In step 60, two of the field corrected measurements 61 from the individual device are represented by the vectors m_L and m_D, which contain the measured and field corrected red, green, and blue channel values for the light and dark references respectively. Combining these with the vectors t_L and t_D derived at 24 in Setup Step 2, which represent the predicted red, green, and blue channel values for the light and dark references respectively, all of the field corrected measurements m_i (62 and 64) can be level corrected at step 63 using the following equation.

x_i = t_D + (t_L - t_D) × (m_i - m_D) / (m_L - m_D) (5)
In equation 5, m_i is the field corrected instantaneous measurement for region i and x_i is the level corrected result. Each of the operations may be performed on a channel-by-channel basis (e.g., in a typical red-green-blue device, the R, G, and B channels are each calculated independently).
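A minimal sketch of this level correction in Python/NumPy, assuming the two-point linear rescale described above and hypothetical field corrected R, G, B readings (all values are invented for illustration):

```python
import numpy as np

def level_correct(m_i, m_L, m_D, t_L, t_D):
    """Equation 5, applied channel by channel: map the device's current
    light/dark readings (m_L, m_D) onto the predicted base-state values
    (t_L, t_D), rescaling the measurement m_i accordingly."""
    return t_D + (t_L - t_D) * (m_i - m_D) / (m_L - m_D)

m_L = np.array([0.82, 0.80, 0.78])  # light reference, current state
m_D = np.array([0.06, 0.05, 0.06])  # dark reference, current state
t_L = np.array([0.90, 0.88, 0.86])  # predicted light response (Setup Step 2)
t_D = np.array([0.04, 0.04, 0.05])  # predicted dark response (Setup Step 2)

x_i = level_correct(np.array([0.44, 0.40, 0.37]), m_L, m_D, t_L, t_D)
```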
Adjustment 3: Color Correction. In step 70, the field- and level-corrected measurements for the color correction references in a typical red-green-blue image capture device are represented by the 3×m matrix X, where m is the number of fixed reference colors as described earlier. The color correction calculation process yields a correction matrix B, as described in U.S. Published Application 2004/0179101, such that equation 6 is true, where Y is the matrix of predicted values 31 for the color correction references developed in Setup Step 3.
Y=BX (6)
There are several ways to solve for B. A common method, referred to as matrix linear regression and described in the aforementioned application, is shown in equation 7, where X^T is the transpose of the matrix X in equation 6.

B = YX^T[XX^T]^-1 (7)
Once the color correction matrix B is calculated in step 72, the matrix 73 is applied in step 74 to the readings obtained from the unknown color in the target area, as processed through adjustments 1 and 2, using the following equation.
x=Bu (8)
In equation 8, B is the color correction matrix derived in equation 7 above, u is the vector representing the (typically red, green, and blue) values for the unknown color in the target region, and x (obtained at 76) is the vector containing the color corrected result.
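A hedged sketch of Adjustment 3's regression (equation 7) and its application (equation 8) in Python/NumPy; the predicted matrix Y, the readings X, and the unknown u are synthetic stand-ins, and equation 7 is written in the form consistent with the 3×m convention of equation 6:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 12  # number of fixed color correction references (assumed)

Y = rng.uniform(0.05, 0.95, size=(3, m))    # predicted values (Setup Step 3)
X = Y + rng.normal(0.0, 0.02, size=(3, m))  # field- and level-corrected readings

# Equation 7: least-squares solution of Y = BX for the 3x3 matrix B.
B = Y @ X.T @ np.linalg.inv(X @ X.T)

# Equation 8: color-correct an unknown target reading u.
u = np.array([0.31, 0.52, 0.18])
x = B @ u
```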
Adjustment 4: Calibration Correction. The calibration correction is the process that scales the field-, level-, and color-corrected unknown target measurement to the neighborhood of the predicted calibration sample response. Similar to the scaling done in Setup Step 2, at 77 scaling is again used to compensate for changes in the range of values reported by the device once all other mathematical adjustments are made. This scaling proportionally adjusts the range of values currently reported by the device to values in the range the device reported in its base state. The color corrected measurements from the individual device, as captured in Setup Step 5, are represented by the vectors x_L and x_D, which contain the measured values 45 for the light and dark calibration samples respectively. With the predicted values for the calibration samples as calculated in Setup Step 4, represented by the vectors c_L and c_D, the color corrected measurement can be calibration corrected in step 78 using the following equation.

a = c_D + (c_L - c_D) × (x - x_D) / (x_L - x_D) (9)
In equation 9, x is the color corrected measurement of an unknown from equation 8, and a is the calibration corrected result. All of the operations are performed on a channel-by-channel basis (e.g., the typical R, G, and B channels are each calculated independently).
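Finally, a minimal sketch of the calibration correction, mirroring the level correction above; all vectors are hypothetical values of the kind produced in Setup Steps 4 and 5:

```python
import numpy as np

def calibration_correct(x, x_L, x_D, c_L, c_D):
    """Equation 9, channel by channel: rescale the color corrected value x
    so the device's adjusted light/dark calibration readings (x_L, x_D)
    land on their predicted values (c_L, c_D)."""
    return c_D + (c_L - c_D) * (x - x_D) / (x_L - x_D)

x_L, x_D = np.array([0.91, 0.89, 0.87]), np.array([0.05, 0.05, 0.06])  # Setup Step 5
c_L, c_D = np.array([0.92, 0.90, 0.88]), np.array([0.04, 0.04, 0.05])  # Setup Step 4

a = calibration_correct(np.array([0.47, 0.41, 0.36]), x_L, x_D, c_L, c_D)
```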
Illustrative Implementation
Although any color digital image capture device may be used, one embodiment is a still camera 99 with at least 240×320 resolution. Reference colors can be presented as a “doughnut-shaped” ring 100 (see
The image capture device/system and reference color set can be packaged in several hand-held configurations to facilitate capturing an image of the target area. In one configuration (a “closed” configuration), a light source may be provided to illuminate the target. In an “open” configuration, no independent light source is provided and ambient light provides the necessary illumination. A “stand-alone” configuration would include a processor that controls image capture and processes the resulting image, along with any of a variety of display and I/O components (e.g., LCD, touchscreen, keyboard, etc.) integrated into a single package, with either an external power source or a provision for internal batteries. A “peripheral” configuration would include only the image capture components, with all processing in a separate package connected by any of several forms of interconnection (wired, RF, IR, etc.). See exhibit 1 for examples.
Typical Operation
To take a measurement, the device is situated such that the color target to be measured is positioned in the imaging device's field of view in the measurement target region.
The device controller, when commanded to take a reading, will activate illumination, if the device is so equipped, capture the image or the relevant pixels from the image, and then deactivate the illumination. The controller performs the analysis and corrections of the image as described above, and then reports the result of the color measurement or processes it further according to the requirements of the application.
Having described the invention in detail and by reference to specific embodiments thereof, it will be apparent that numerous modifications and variations are possible without departing from the spirit and scope of the invention.
This application claims priority from U.S. Provisional Application No. 60/631,078 filed Nov. 23, 2004.
Number | Name | Date | Kind |
---|---|---|---
3971065 | Bayer | Jul 1976 | A |
4185920 | Suga | Jan 1980 | A |
4405940 | Woolfson et al. | Sep 1983 | A |
4564945 | Glover et al. | Jan 1986 | A |
4812904 | Maring et al. | Mar 1989 | A |
4831437 | Nishioka et al. | May 1989 | A |
4991007 | Corley | Feb 1991 | A |
5150199 | Shoemaker et al. | Sep 1992 | A |
5371538 | Widger | Dec 1994 | A |
5526285 | Campo et al. | Jun 1996 | A |
5537516 | Sherman et al. | Jul 1996 | A |
5760829 | Sussmeier | Jun 1998 | A |
5850472 | Alston et al. | Dec 1998 | A |
6069973 | Lin et al. | May 2000 | A |
6084983 | Yamamoto | Jul 2000 | A |
6205243 | Migdal et al. | Mar 2001 | B1 |
6369895 | Keeney | Apr 2002 | B1 |
6525819 | Delawter et al. | Feb 2003 | B1 |
6546119 | Ciolli et al. | Apr 2003 | B2 |
6580820 | Fan | Jun 2003 | B1 |
6594377 | Kim et al. | Jul 2003 | B1 |
6654048 | Barrett-Lennard et al. | Nov 2003 | B1 |
6944494 | Forrester et al. | Sep 2005 | B2 |
7102669 | Skow | Sep 2006 | B2 |
7136036 | O'Donnell | Nov 2006 | B2 |
7218358 | Chen et al. | May 2007 | B2 |
7233871 | Raymond et al. | Jun 2007 | B2 |
7336401 | Unal et al. | Feb 2008 | B2 |
7728845 | Holub | Jun 2010 | B2 |
20020012895 | Lehmann | Jan 2002 | A1 |
20020126328 | Lehmeier et al. | Sep 2002 | A1 |
20030020724 | O'Donnell | Jan 2003 | A1 |
20030071998 | Krupka et al. | Apr 2003 | A1 |
20030076498 | Pfister | Apr 2003 | A1 |
20030156118 | Ayinde | Aug 2003 | A1 |
20030169347 | Jenkins | Sep 2003 | A1 |
20030174886 | Iguchi et al. | Sep 2003 | A1 |
20040001210 | Chu et al. | Jan 2004 | A1 |
20040078299 | Down-Logan et al. | Apr 2004 | A1 |
20040136579 | Gutenev | Jul 2004 | A1 |
20040167709 | Smitherman et al. | Aug 2004 | A1 |
20040179101 | Bodnar et al. | Sep 2004 | A1 |
20040189837 | Kido | Sep 2004 | A1 |
20040264767 | Pettigrew | Dec 2004 | A1 |
20050018890 | McDonald et al. | Jan 2005 | A1 |
20050146733 | Lohweg et al. | Jul 2005 | A1 |
20070225560 | Avni et al. | Sep 2007 | A1 |
20080128589 | Drummond et al. | Jun 2008 | A1 |
Number | Date | Country |
---|---|---|
19633557 | Mar 1998 | DE |
5-289206 | Nov 1993 | JP |
2002-190959 | Jul 2002 | JP |
WO 03029766 | Apr 2003 | WO |
WO 2004018984 | Mar 2004 | WO |
Number | Date | Country
---|---|---
20060159337 A1 | Jul 2006 | US |
Number | Date | Country
---|---|---
60631078 | Nov 2004 | US |