This application is based on and claims priority from Korean Patent Application No. 10-2006-0047751, filed on May 26, 2006 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of the Invention
One or more embodiments of the present invention relate to a color reproduction technology, and more particularly, to a white balancing method, medium, and system.
2. Description of the Related Art
Though natural light is typically thought of as being white, in actuality the light may have an abundance of one or more wavelengths, giving the overall light a particular hue characterized by a color temperature, expressed in Kelvin (K). In general, since human vision automatically adjusts for such minor discrepancies, the cognitive difference perceived between colors is very insignificant even when light of a particular color temperature illuminates a scene. However, image pick-up devices, such as a camera or a camcorder, sense colors, in which color temperatures are reflected, as they are. Accordingly, if an illuminant is changed, images taken by the image pick-up device are tinged with different colors.
For example, since the color temperature of sunlight around noon on a sunny day is high, an image taken by an image pick-up device at that time will appear bluish on the whole. By contrast, since the color temperature of sunlight just after sunrise or just before sunset is low, an image taken by the image pick-up device then will appear reddish on the whole.
An auto white balancing (AWB) technique has been proposed to solve this problem by compensating for distortions of the color tone of an image when the image is biased toward any one of the red (R), green (G), and blue (B) components depending upon its color temperature.
In one example, an image pick-up device discussed in Japanese Patent Unexamined Publication No. 2002-290988 divides an object into a plurality of regions, detects chromaticity in every region having a luminance higher than a threshold value, and calculates a gain value to perform white balancing based on the detected chromaticity.
However, such white balancing techniques have a problem in that it is difficult to achieve consistent color reproduction, since results vary with the color or size of an object in the image even when images are taken under the same light source or illuminant.
Accordingly, the present invention has been made to solve the above-mentioned problems, with an aspect of one or more embodiments of the present invention being to improve the performance of color reproduction through a more stable illuminant estimation.
Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
To achieve the above and/or other aspects and advantages, embodiments of the present invention include a method with white balancing, including setting an illuminant detection region of an image based on an exposure integration time indicative of an amount of light collected for the image when the image was captured, and detecting an illuminant of the image by using data relative to the illuminant detection region in a color gamut of the image.
To achieve the above and/or other aspects and advantages, embodiments of the present invention include at least one medium including computer readable code to control at least one processing element to implement one or more embodiments of the present invention.
To achieve the above and/or other aspects and advantages, embodiments of the present invention include a system, including a setting unit to set an illuminant detection region of an image based on an exposure integration time indicative of an amount of light collected for the image when the image was captured, and a detection unit to detect an illuminant of the image by using data relative to the illuminant detection region in a color gamut of the image.
These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. Embodiments are described below to explain the present invention by referring to the figures.
In addition, white balancing operations illustrated in the accompanying figures may be performed by a white balancing system 100 including, for example, a setting unit 110, a detection unit 120, a stabilization unit 130, and a white balancing unit 140. Here, the setting unit 110 may set an illuminant detection region of an input image based on an exposure integration time (EIT) indicative of an amount of light collected for the image when the image was captured, for example, in operation S210, and the detection unit 120 may detect an illuminant of the image by using data relative to the illuminant detection region in a color gamut of the image, for example, in operation S220.
The EIT may also be provided together with an input image, either at the time of the photographing of the image or stored with or for the image for subsequent correction. For example, as a digital still camera attaches exposure information from the time of the photographing of the image, such as shutter speed or aperture value, to the photographed image as additional data, the EIT may be attached to the image file, or embedded in the image, as such additional data.
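As only an illustration, the following sketch shows one way such attached exposure information might be recovered in practice, assuming the capture device stores it as the standard EXIF ExposureTime tag and that the Pillow library is available; the patent itself does not prescribe any particular metadata format:

```python
from PIL import Image

EXIF_EXPOSURE_TIME = 33434  # standard EXIF tag id for ExposureTime

def read_exposure_time(path):
    """Return the recorded exposure time in seconds, or None if absent."""
    exif = Image.open(path).getexif()
    value = exif.get(EXIF_EXPOSURE_TIME)
    return float(value) if value is not None else None
```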
Meanwhile, the illuminant detection region represents a range of data to be used to detect an illuminant of the image, an example of which is shown in the accompanying figures.
The illuminant detected by the detection unit 120 may reflect distorted information, e.g., in accordance with a deviation between devices or an amount of data sampled for detecting the illuminant, such that the stabilization unit 130 may further correct the detected illuminant so as to correct the distorted information, for example, in operation S230.
The white balancing unit 140 may further perform white balancing on the image by use of the corrected illuminant, for example, in operation S240. Since there are diverse known techniques for performing the white balancing on the image, the detailed description thereof will be omitted herein.
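Although the detailed description is omitted above, one widely known technique for this final operation is a von Kries-style diagonal correction, in which each channel is scaled so that the estimated illuminant maps to neutral gray. The sketch below assumes a floating-point RGB image and an illuminant expressed as an (R, G, B) triple; it illustrates the general class of techniques, not the specific method of any embodiment:

```python
import numpy as np

def apply_white_balance(image, illuminant_rgb):
    """Scale each channel so the estimated illuminant maps to neutral gray.

    image: (H, W, 3) float array in [0, 1]; illuminant_rgb: the detected
    illuminant as an (R, G, B) triple.
    """
    illuminant = np.asarray(illuminant_rgb, dtype=float)
    gains = illuminant[1] / illuminant  # normalize so the green gain is 1
    return np.clip(image * gains, 0.0, 1.0)
```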
The example operation of setting the illuminant detection region for the input image can be performed by the setting unit 110, for example, as follows.
As described above, the setting unit 110 may set the illuminant detection region associated with the EIT. In an embodiment, by statistically analyzing the relationship between the EIT consumed when an image is taken and the point in the color gamut of the taken image at which the illuminant exists, any of candidate regions 410 through 440 in which the illuminant is highly likely to exist may be previously set in accordance with the EIT, as shown in the corresponding figure.
According to the embodiment shown in the corresponding figure, for example, the chromatic values of the illuminants observed for various EITs may be gathered on the chromaticity coordinates.
From such information, a modeling of a median chromaticity locus of the illuminant associated with the EIT can be performed. To this end, an average chromatic value of the illuminants corresponding to each EIT can be calculated, and a trend line 510 of the points representing each average chromatic value can be obtained. The obtained trend line can further be projected on the chromaticity coordinates to obtain the median chromaticity locus 520 of the illuminant, as shown in the corresponding figure.
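A minimal sketch of this modeling step follows, assuming illuminant chromaticities are expressed as (r, b) pairs and that a first-order trend line suffices; both the coordinate convention and the linear fit are assumptions made for illustration:

```python
import numpy as np

def fit_median_chromaticity_locus(eits, illuminant_rb):
    """Model the central illuminant point as a function of the EIT.

    eits: (N,) EIT recorded for each training image;
    illuminant_rb: (N, 2) measured illuminant chromaticities (r, b).
    Returns a callable mapping an EIT to a central illuminant point.
    """
    eits = np.asarray(eits, dtype=float)
    illuminant_rb = np.asarray(illuminant_rb, dtype=float)
    levels = np.unique(eits)
    # Average chromatic value of the illuminants observed at each EIT.
    means = np.array([illuminant_rb[eits == e].mean(axis=0) for e in levels])
    # Trend line 510: one first-order fit per chromaticity component.
    coeff_r = np.polyfit(levels, means[:, 0], 1)
    coeff_b = np.polyfit(levels, means[:, 1], 1)
    return lambda eit: np.array([np.polyval(coeff_r, eit),
                                 np.polyval(coeff_b, eit)])
```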
The illustrated median chromaticity locus 520 of the illuminant associated with the EIT may be previously set, and the setting unit 110 can then obtain the central illuminant point corresponding to the EIT of the input image from the median chromaticity locus 520 of the illuminant.
In an embodiment, once the central illuminant point is obtained, the setting unit 110 may set a given range based on the central illuminant point 530 as the illuminant detection region 540, e.g., a range extending within a first threshold distance and a second threshold distance of the central illuminant point 530, as shown in the corresponding figure.
The first threshold distance and the second threshold distance may be determined dynamically in accordance with the EIT. For example, if it is assumed that the chromaticity distribution range of the illuminant is narrow when the EIT is low and wide when the EIT is high, at least one of the first threshold distance and the second threshold distance may be altered in accordance with the EIT so as to reflect this observed tendency. Alternate tendencies may also be observed, depending on embodiment.
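As a hedged illustration, the region 540 could be realized as a window on the chromaticity coordinates around the central illuminant point 530, with the two threshold distances widening with the EIT to reflect the tendency described above; the linear growth rule and the parameter values below are assumptions, not values from the specification:

```python
def detection_region(central_rb, eit, base=0.02, slope=0.001):
    """Window on the chromaticity plane around the central illuminant point.

    base and slope are hypothetical parameters; here the first and second
    threshold distances simply widen linearly with the EIT.
    """
    r0, b0 = central_rb
    d1 = base + slope * eit  # first threshold distance (r direction)
    d2 = base + slope * eit  # second threshold distance (b direction)
    return (r0 - d1, r0 + d1, b0 - d2, b0 + d2)
```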
According to one embodiment, the setting unit 110 may additionally use a variance of the color gamut of the input image and the central point of the color gamut to set the illuminant detection region, as shown in the corresponding figures.
In order to obtain the variance of the color gamut, the setting unit 110 may select the illustrated threshold number of data 610-1 through 610-4 nearest, in order of proximity, to four reference points O, P, Q, and R on the chromaticity coordinates in the color gamut of the input image.
The four reference points, according to an embodiment of the present invention, may be located at the extremes of the chromaticity coordinates, as shown in the corresponding figure, for example.
As illustrated in the corresponding figure, the setting unit 110 may then obtain, for each reference point, a point having the mean chromaticity of the data selected for that reference point. After that, as further illustrated, the setting unit 110 may obtain a color gamut height 640 and a color gamut width 650 from the distances between these mean chromaticity points, for example.
Then, the setting unit 110 may determine whether the color gamut height 640 and the color gamut width 650 each fall within a given threshold range. If so, the setting unit 110 may use the illuminant detection region determined in accordance with the EIT as it is, since the input image may be considered to have a normal color distribution. However, if the color gamut height 640 or the color gamut width 650 is found to be outside of the threshold range, the input image can be understood to have an abnormal color distribution, since its color gamut is excessively wider or narrower than a normal case. In this instance, if only a portion of the color gamut of the input image were determined to be the illuminant detection region in accordance with the EIT, there would be a high possibility that the correct illuminant would not be detected. Accordingly, upon detecting that the color gamut height or the color gamut width is out of the threshold range, the setting unit 110 can set the entire color gamut of the input image as the illuminant detection region irrespective of the EIT, such as through conventional techniques.
Since the variance of the color gamut indicates the uniformity of the color gamut, embodiments of the present invention are not limited to the above described methods of calculating the variance of the color gamut. For example, the setting unit 110 may select the threshold number of data in the color gamut of the input image nearest to four reference points, and predict the variance of the color gamut by use of the distances between the points having the mean chromaticity of the selected data, as in the sketch below. Alternate methods are also available.
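The following sketch illustrates one such variance estimate along the lines described above, selecting a threshold number of samples nearest each reference point and measuring the gamut extent from the resulting mean chromaticity points; the corner placement of O, P, Q, and R and the pairing of the two distances with "height" and "width" are assumptions:

```python
import numpy as np

def gamut_extent(chroma, k=16):
    """Estimate the color gamut height and width of an input image.

    chroma: (N, 2) chromaticities of the image data; k: the threshold
    number of samples gathered nearest each reference point.
    """
    # Reference points O, P, Q, R, assumed at the chromaticity-plane extremes.
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    means = []
    for corner in corners:
        nearest = np.argsort(np.linalg.norm(chroma - corner, axis=1))[:k]
        means.append(chroma[nearest].mean(axis=0))  # mean chromaticity point
    o, p, q, r = means
    width = np.linalg.norm(p - o)   # color gamut width 650 (assumed pairing)
    height = np.linalg.norm(q - o)  # color gamut height 640 (assumed pairing)
    return height, width
```

If the returned height or width falls outside the threshold range, the entire color gamut would then be used as the illuminant detection region, as described above.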
Returning to the central point 660 of the color gamut, the setting unit 110 may also use the central point 660 to verify whether the illuminant detection region determined in accordance with the EIT is appropriate, for example.
For example, the point on the chromaticity coordinates at which the illuminant of the image exists can be modeled in accordance with the central point 660 of the color gamut, similar to the above method of modeling that point in accordance with the EIT. That is, the points at which the illuminant of the image can exist may be set as a desired number of regions on the chromaticity coordinates in accordance with the central point 660 of the color gamut.
Then, whether the use of the illuminant detection region determined by the EIT is appropriate can be determined by checking whether the region on the chromaticity coordinates corresponding to the central point 660 of the color gamut of the input image overlaps the illuminant detection region determined in accordance with the EIT. If the two regions are identical or sufficiently similar, the illuminant detection region determined by the EIT can be used as it is. Otherwise, the color gamut of the input image can be set as the illuminant detection region irrespective of the EIT.
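A minimal sketch of this verification, assuming both regions are axis-aligned windows on the chromaticity coordinates as in the earlier sketches:

```python
def regions_overlap(region_a, region_b):
    """Each region is an (r_min, r_max, b_min, b_max) window; True if the
    EIT-based detection region and the central-point region intersect."""
    return not (region_a[1] < region_b[0] or region_b[1] < region_a[0] or
                region_a[3] < region_b[2] or region_b[3] < region_a[2])
```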
An operation of detecting the illuminant may, thus, be performed by the detection unit 120, which may include, for example, a division unit 710, a comparative value determination unit 720, and an illuminant estimation unit 730.
The division unit 710 may divide the data contained in the illuminant detection region, e.g., as set by the setting unit 110 in the color gamut of an input image, into two groups on the basis of luminance.
In reference to the corresponding figure, an average luminance value and a median luminance value of the data contained in the illuminant detection region may first be calculated, for example.
Then, a threshold luminance value, to be used to divide the data in the illuminant detection region into two groups, may be calculated by using the average luminance value and the median luminance value, in operation S820. The threshold luminance value may be determined by a weighted sum of the average luminance value and the median luminance value of the data contained in the illuminant detection region, which may be expressed by the below Equation 1, for example.
Y_thresh = k · Y_median + (1 − k) · Y_avg        (Equation 1)
Here, Y_thresh is the threshold luminance value to be calculated, Y_median is the median luminance value, and Y_avg is the average luminance value. In addition, k may be a weighted value between 0 and 1, for example.
Once the threshold luminance value is calculated, the data in the illuminant detection region may be divided into two groups on the basis of the threshold luminance value, in operation S830. For example, the data having a luminance greater than the threshold luminance value among the data in the illuminant detection region may be classified into a first group, and the data having a luminance less than the threshold luminance value may be classified into a second group.
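Operations S820 and S830 can be summarized in a few lines; the sketch below implements Equation 1 directly, with k = 0.5 chosen arbitrarily for illustration:

```python
import numpy as np

def split_by_luminance(luminance, k=0.5):
    """Divide region data into two groups around the Equation 1 threshold.

    luminance: (N,) luminance values of the data in the illuminant
    detection region; returns boolean masks for the first (brighter)
    and second (darker) groups.
    """
    y = np.asarray(luminance, dtype=float)
    y_thresh = k * np.median(y) + (1.0 - k) * y.mean()  # Equation 1
    first_group = y > y_thresh
    return first_group, ~first_group
```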
The comparative value determination unit 720 may then determine a comparative illuminant, for example, through the following operations.
In reference to the corresponding figure, an average chromatic value and a median chromatic value of the data contained in the illuminant detection region may first be calculated, for example.
Then, a weighted average of the average chromatic value and the median chromatic value may be calculated, in operation S920, as expressed by the below Equation 2, for example.
Ch_w = m · Ch_avg + (1 − m) · Ch_median        (Equation 2)
Here, Ch_w is the weighted average to be calculated, Ch_avg is the average chromatic value, and Ch_median is the median chromatic value. In addition, m may be a weighted value between 0 and 1, for example.
An average of the chromatic values of the data contained in each of the two divided groups, e.g., as divided by the division unit 710, may then be calculated, in operation S930. Hereinafter, the average of the chromatic values of the data contained in the first group will be referred to as the first average, and the average of the chromatic values of the data contained in the second group will be referred to as the second average.
A difference value may further be calculated between the first average and the second average, in operation S940. Next, a comparative illuminant may be set, in operation S950, by using as input values the weighted average Ch_w, e.g., calculated in operation S920, the difference value, e.g., calculated in operation S940, and a standard illuminant (e.g., D65, D50, CWF, A, or Horizon, among others) of a device providing a corresponding image frame (e.g., a digital still camera including the white balancing system 100). In order to obtain the comparative illuminant, the below Equation 3 may be used, for example.
W_ref(r, b) = F2(F1(Ch_w, DEV_w), Ch_dist)        (Equation 3)
Here, W_ref(r, b) is a chromatic value of the comparative illuminant, Ch_w is the weighted average calculated in operation S920, for example, DEV_w is the standard illuminant, and Ch_dist is the difference value between the first average and the second average calculated in operation S940, for example. In addition, F1 may be a quadratic correlation function, and F2 may be a linear correlation function.
The correlation function F1 may reflect a correlation between the standard illuminants and Ch_w observed under each standard illuminant, serving as the substantive comparative estimating function for the point of the illuminant in the image, for example. The correlation function F2 may be a modeling function that considers Ch_dist in the standard illuminant locus function, compensating for the performance of the comparative illuminant estimation of F1, for example.
The orders of the functions F1 and F2 can be varied; setting F1 as a quadratic function and F2 as a linear function, respectively, is one example of optimizing complexity. A concrete embodiment of F1 and F2 can be understood through the below Equations 4 through 6, for example.
Σ(|DEV_w − α · Ch_w² − β · Ch_w − γ|) ≅ 0        (Equation 4)

F1 = α · Ch_w² + β · Ch_w + γ        (Equation 5)

F2 = θ · (F1(Ch_w, DEV_w) ± Ch_dist) + ζ        (Equation 6)
Here, in Equations 4 through 6, α, β, γ, θ, and ζ are real numbers, and may be determined as proper values based on experimental results. For example, α, β, and γ may preferably satisfy the relation in Equation 4, noting that alternative embodiments are equally available.
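Putting Equations 2, 5, and 6 together for a single chromatic component yields a short routine such as the following, where α, β, γ, θ, and ζ are assumed to have been fit offline against the device's standard illuminants per Equation 4, and the "+" branch of Equation 6 is chosen arbitrarily:

```python
import numpy as np

def comparative_illuminant(ch, bright, dark,
                           alpha, beta, gamma, theta, zeta, m=0.5):
    """Estimate one chromatic component of the comparative illuminant.

    ch: (N,) chromatic values of the data in the illuminant detection
    region; bright, dark: the boolean group masks of operation S830.
    """
    ch = np.asarray(ch, dtype=float)
    ch_w = m * ch.mean() + (1.0 - m) * np.median(ch)  # Equation 2
    ch_dist = ch[bright].mean() - ch[dark].mean()     # difference of S940
    f1 = alpha * ch_w ** 2 + beta * ch_w + gamma      # Equation 5 (fit per Eq. 4)
    return theta * (f1 + ch_dist) + zeta              # Equation 6, '+' branch
```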
Referring again to the detection unit 120, the illuminant estimation unit 730 may then estimate an initial illuminant of the image based on the comparative illuminant, for example.
With reference to the corresponding figure, the initial illuminant may reflect distorted information, e.g., deviating from a reference illuminant locus in accordance with a deviation between devices or an amount of data sampled for detecting the illuminant.
In order to compensate for this distortion phenomenon, the stabilization unit 130, for example, may stabilize the initial illuminant detected by the detection unit 120 based on the reference illuminant locus and the average chromatic value of the data in the illuminant detection region in the color gamut of the input image.
In reference to the corresponding figure, the stabilization unit 130 may calculate a weighted average of the chromatic value of the initial illuminant and the average chromatic value of the data contained in the illuminant detection region, as expressed by the below Equation 7, for example.
W_avg = N · W_i + (1 − N) · Ch_avg        (Equation 7)
Here, W_avg is the weighted average to be calculated, W_i is the chromatic value of the initial illuminant, e.g., as determined by the illuminant estimation unit 730, and Ch_avg is the average of the chromatic values of the data contained in the illuminant detection region, as also used in Equation 2. In addition, N may be a weighted value between 0 and 1, for example.
As shown in the corresponding figures, the stabilization unit 130 may then obtain the corrected illuminant from the weighted average W_avg and the reference illuminant locus, for example, and the white balancing unit 140 may perform the white balancing on the image by use of the corrected illuminant, as described above.
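A hedged sketch of the stabilization as a whole follows: Equation 7 blends the initial illuminant with the average chromaticity of the region data, after which the result is snapped to the nearest sampled point of the reference illuminant locus. The snap-to-locus step and the weight value are assumed concrete forms of the correction, since the specification here defers to the figures:

```python
import numpy as np

def stabilize_illuminant(w_initial, ch_avg, locus_points, n=0.7):
    """Blend the initial illuminant with Ch_avg (Equation 7), then snap the
    result to the nearest point of a sampled reference illuminant locus.

    w_initial, ch_avg: (r, b) chromaticity pairs; locus_points: (M, 2)
    samples of the reference illuminant locus. n = 0.7 is illustrative.
    """
    w_avg = (n * np.asarray(w_initial, dtype=float)
             + (1.0 - n) * np.asarray(ch_avg, dtype=float))  # Equation 7
    locus = np.asarray(locus_points, dtype=float)
    distances = np.linalg.norm(locus - w_avg, axis=1)
    return locus[np.argmin(distances)]  # corrected illuminant for unit 140
```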
In embodiments of the present invention, the term “unit” indicating a respective component of the white balancing system 100, for example, may be constructed as a module. Here, the term “module”, as used herein, means, but is not limited to, a software and/or hardware component, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), which performs certain tasks. A module may advantageously be configured to reside on an addressable storage medium and configured to execute on one or more processors. Thus, a module may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The operations provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules.
In addition to the above described embodiments, embodiments of the present invention can also be implemented through computer readable code/instructions in/on a medium, e.g., a computer readable medium, to control at least one processing element to implement any above described embodiment. The medium can correspond to any medium/media permitting the storing and/or transmission of the computer readable code.
The computer readable code can be recorded/transferred on a medium in a variety of ways, with examples of the medium including recording media, such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, or DVDs), and transmission media such as carrier waves, as well as through the Internet, for example. Thus, the medium may further be a signal, such as a resultant signal or bitstream, according to embodiments of the present invention. The media may also be a distributed network, so that the computer readable code is stored/transferred and executed in a distributed fashion. Still further, as only an example, the processing element could include a processor or a computer processor, and processing elements may be distributed and/or included in a single device.
In accordance with the above description, one or more embodiments of the present invention include a white balancing method, medium, and system in which color reproduction performance can be improved through more stable illuminant estimation.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Number | Date | Country | Kind
--- | --- | --- | ---
10-2006-0047751 | May 2006 | KR | national
Number | Name | Date | Kind |
--- | --- | --- | ---
5448502 | Kindo et al. | Sep 1995 | A |
5530474 | Takei | Jun 1996 | A |
5684359 | Yano et al. | Nov 1997 | A |
6359651 | Yokonuma | Mar 2002 | B1 |
6504952 | Takemura et al. | Jan 2003 | B1 |
6665434 | Yamaguchi | Dec 2003 | B1 |
6788813 | Cooper | Sep 2004 | B2 |
7352895 | Speigle et al. | Apr 2008 | B2 |
7728880 | Hung et al. | Jun 2010 | B2 |
20030020826 | Kehtarnavaz et al. | Jan 2003 | A1 |
20060078182 | Zwirn et al. | Apr 2006 | A1 |
20060176379 | Hyodo | Aug 2006 | A1 |
20070025718 | Mori et al. | Feb 2007 | A1 |
Number | Date | Country |
--- | --- | ---
05-068258 | Mar 1993 | JP |
05-083728 | Apr 1993 | JP |
2000-102030 | Apr 2000 | JP |
2002-290988 | Oct 2002 | JP |
2005-33609 | Feb 2005 | JP |
2005-109930 | Apr 2005 | JP |
2005-236375 | Sep 2005 | JP |
2006-033158 | Feb 2006 | JP |
Entry |
---
Japanese Office Action dated Sep. 4, 2009, issued during examination of corresponding Japanese Patent Application No. 2007-120869. |
Number | Date | Country
--- | --- | ---
20070285530 A1 | Dec 2007 | US