The present invention relates to the detection of the scene illuminant and the use of that detection to provide automatic scene balance correction in the digital photographic process.
To perform like the human visual system, imaging systems should automatically adapt to changing color casts in scene illumination. Simply put, white objects in a scene must be rendered as white, regardless of whether the scene illuminant was daylight, tungsten, fluorescent, or some other source. This process of automatic white adaptation is called “white balancing” and the corrective action determined by this adaptation mechanism is the white balance correction.
Automatic white balance algorithms employed in automatic printers, digital scanners, and digital cameras conventionally employ the digitized image information and related mathematical techniques to attempt to deduce from the image data the optimum white balance correction to be applied on a scene-by-scene basis. It is known that errors in automatic white balance correction occur when the algorithm is unable to differentiate between an overall color cast caused by the scene illuminant and an overall color bias due to the composition of the scene. It is desirable, therefore, to be able to differentiate a color cast due to scene illumination from a color bias due to scene composition. It is also known that white balance errors occur due to color temperature variations within a class of scene illuminant. Late-day direct sunlight imposes a yellowish color cast on a scene, while skylight on a cloudy day lends a bluish color cast. However, both sources are clearly daylight, yet they require substantially different white balance corrections. It is desirable, therefore, to also be able to account for scene illuminant color temperature variation when determining the white balance correction.
There are many methods described in the literature for determining the scene illuminant of a digital image. Some require special hardware at the time of image capture to make this determination. In commonly-assigned U.S. Pat. Nos. 4,827,119 and 5,037,198 a method of measuring scene illuminant temporal oscillations with the use of a dedicated sensor is described. Daylight will have no oscillation, while tungsten and fluorescent sources will fluctuate in output power due to the AC nature of their power supplies. The problem with any dedicated sensor approach is that it includes two separate data collection and processing paths, one for illuminant detection and another for actual image capture. This leads to the potential of the dedicated sensor path losing synchronization and calibration with respect to the main image capture path. Additionally, the relatively limited amount of information captured by a dedicated sensor can severely limit the robustness of the scene illuminant determination. In commonly-assigned U.S. Pat. Nos. 5,644,358 and 5,659,357 the image data (video input) is combined with a luminance input to perform illuminant classification. (The nature of the luminance input is never described.) Rather than determining an overall illuminant for the scene, a low-resolution version of the image is produced and each image element (or “paxel”) within the low-resolution image is individually classified into one of a number of possible scene illuminants. Statistics are performed on these paxel classifications to derive a compromise white balance correction. The problem with this approach is that no explicit attempt is made to uncouple the effects of scene illuminant color cast from the effects of scene composition. Instead, a complex series of tests and data weighting schemes are applied after the paxel classifications to try to reduce subsequent algorithm errors. Japanese Publication 2001-211458 teaches a method very similar to that described in commonly-assigned U.S. Pat. Nos. 5,644,358 and 5,659,357, and has the same problems.
There are many methods described in the literature for determining a color temperature responsive white balance correction of a digital image. In commonly-assigned U.S. Pat. Nos. 5,185,658 and 5,298,980 a method of measuring the scene illuminant's relative amounts of red (R), green (G), and blue (B) power with dedicated sensors is described. The white balance correction values are derived from the ratios of R/G and B/G, which are considered to be related to the color temperature of the scene illuminant. As with commonly-assigned U.S. Pat. Nos. 4,827,119 and 5,037,198, discussed above, the problem with any dedicated sensor approach is that it includes two separate data collection and processing paths, one for illuminant detection and another for actual image capture, and these two paths can get "out of step" with each other. In the above-referenced Japanese Publication 2001-211458, the illuminant classification step is further refined to represent a variety of subcategories within each illuminant class. In this way cooler and warmer color cast versions of the illuminant classes of daylight, tungsten, and fluorescent are determined. However, as stated before, there is no explicit method given for uncoupling illuminant color cast from scene composition variability and, as a result, a variety of involved statistical operations are required in an attempt to reduce algorithmic errors.
In commonly-assigned U.S. Pat. No. 6,133,983, Wheeler discloses a method, for optical printing, of setting the degree of color correction, i.e., a parameter used to determine the magnitude of the color balancing applied to photographic images, based on camera meta-data. In particular, Wheeler discloses using the scene-specific measurements of the scene light level, camera-to-subject distance, flash fire signal, and flash return signal to classify an image as having been captured under either a daylight or a non-daylight illuminant. It is stated that for images captured on daylight-balanced films there is no need to further distinguish among the non-daylight illuminants because the same white balance correction methodology works regardless. As a result, commonly-assigned U.S. Pat. No. 6,133,983 does not present a method for such subsequent illuminant discrimination. This approach fails when applied to imaging systems requiring further differentiation of non-daylight sources for accurate white balancing, or when any of the image meta-data (i.e., scene light level, camera-to-subject distance, flash fire signal, and flash return signal) are corrupt or missing. In particular, the method disclosed by Wheeler requires the scene light level to be a measured quantity.
In the conference paper "Usage of DSC meta tags in a general automatic image enhancement system" from the Proceedings of SPIE Vol. 4669, Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications III, Jan. 21, 2002, pages 259-267, the authors Moser and Schroder describe a method of scene analysis regarding the likelihood of a photographic scene having been influenced by an artificial illuminant light source. The method disclosed by Moser and Schroder uses the camera meta-data of F-number (f) and exposure time (t) to calculate a "pseudo" energy (pE) quantity for a digital image derived from a digital camera using the formula:

pE = log2(f^2/t)
Moser and Schroder then use the pE quantity to analyze digital images with regard to the likelihood of the scene illumination source and corresponding resultant color cast. While the method disclosed by Moser and Schroder is useful for analyzing digital images, as disclosed it is not accurate enough to produce consistent automatic white balance correction results for a practical digital enhancement system. This is principally due to the inherent relative, as opposed to absolute, nature of the “pseudo” energy quantity. Two digital cameras with substantially different energy requirements for producing acceptable images will have substantially different “pseudo” energy values for the same scene illumination conditions. Similarly, these same two digital cameras can produce identical “pseudo” energy values when producing digital images with substantially different scene illumination sources.
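By way of illustration only, the following minimal sketch computes such a "pseudo" energy from camera meta-data and demonstrates the relative-versus-absolute weakness discussed above; the function names are illustrative, and the base-2 logarithmic form follows the formula given above, which is itself a reconstruction from context.

```python
import math

def pseudo_energy(f_number: float, exposure_time_s: float) -> float:
    # pE = log2(f^2 / t): depends only on the exposure settings, not on
    # the sensor speed, so it is a relative rather than absolute quantity.
    return math.log2(f_number ** 2 / exposure_time_s)

# Two cameras with substantially different energy requirements can arrive
# at the same (f-number, exposure time) pair under very different
# illuminants, yielding identical "pseudo" energy values:
print(pseudo_energy(2.8, 1 / 30))  # fast camera, dim tungsten interior
print(pseudo_energy(2.8, 1 / 30))  # slow camera, overcast daylight
```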
The problem of automatically analyzing the scene illuminant and performing automatic white balance is also referred to as scene balance. Scene balance analysis typically includes some degree of exposure adjustment in addition to white balance adjustment.
In the prior art, illuminant estimation and balance analysis were performed on images having three color channels, most often red, green, and blue. With the development of image sensors having panchromatic and color pixels, a need exists to provide improved scene balance performance starting from an image having panchromatic and color pixels.
It is an object of the present invention to produce an automatically scene balanced digital color image from a digital image having panchromatic and color pixels.
This object is achieved by a method of providing an enhanced image including color and panchromatic pixels, comprising:
(a) using a captured image of a scene that was captured by a two-dimensional sensor array having both color and panchromatic pixels;
(b) providing an image having paxels in response to the captured image so that each paxel has color and panchromatic values;
(c) converting the paxel values to at least one luminance value and a plurality of chrominance values; and
(d) computing scene balance values from the luminance and chrominance values to be applied to an uncorrected image having color and panchromatic pixels that is either the captured image of the scene or an image derived from the captured image of the scene and using the computed scene balance values to provide an enhanced image including color and panchromatic pixels.
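As a non-normative illustration of step (b), a paxelized image can be formed by block-averaging the captured image. The 8-by-8 block size, and the assumption that the color filter array data have already been separated into four full-resolution (R, G, B, P) channels, are choices made for this sketch only.

```python
import numpy as np

def paxelize(rgbp: np.ndarray, block: int = 8) -> np.ndarray:
    """Block-average an H x W x 4 (R, G, B, P) image into paxels so that
    each paxel carries color and panchromatic values (step (b) above)."""
    h, w, c = rgbp.shape
    h -= h % block  # crop to a whole number of blocks
    w -= w % block
    view = rgbp[:h, :w].reshape(h // block, block, w // block, block, c)
    return view.mean(axis=(1, 3))
```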
It is an advantage of the present invention that using panchromatic and color pixels produces improved scene balance correction for digital color images. This improvement includes increased accuracy of the scene balance corrections or decreased sensitivity of the scene balance corrections to image noise.
In the following description, a preferred embodiment of the present invention will be described in terms that would ordinarily be implemented as a software program. Those skilled in the art will readily recognize that the equivalent of such software can also be constructed in hardware. Because image manipulation algorithms and systems are well known, the present description will be directed in particular to algorithms and systems forming part of, or cooperating more directly with, the system and method in accordance with the present invention. Other aspects of such algorithms and systems, and hardware or software for producing and otherwise processing the image signals involved therewith, not specifically shown or described herein, are selected from such systems, algorithms, components and elements known in the art. Given the system as described according to the invention in the following materials, software not specifically shown, suggested or described herein that is useful for implementation of the invention is conventional and within the ordinary skill in such arts.
Still further, as used herein, the computer program is stored in a computer readable storage medium, which can include, for example: magnetic storage media such as a magnetic disk (such as a hard drive or a floppy disk) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine readable bar code; solid state electronic storage devices such as random access memory (RAM) or read only memory (ROM); or any other physical device or medium employed to store a computer program.
Before describing the present invention, it facilitates understanding to note that the present invention is preferably used on any well-known computer system, such as a personal computer. Consequently, the computer system will not be discussed in detail herein. It is also instructive to note that the images are either directly input into the computer system (for example by a digital camera) or digitized before input into the computer system (for example by scanning an original, such as a silver halide film).
Referring first to the computer system for implementing the present invention, the computer system 110 includes a microprocessor-based unit 112 for receiving and processing software programs and for performing other processing functions. A display 114 is electrically connected to the microprocessor-based unit 112 for displaying user-related information associated with the software.
A compact disk-read only memory (CD-ROM) 124, which typically includes software programs, is inserted into the microprocessor-based unit 112 for providing a way of inputting the software programs and other information to the microprocessor-based unit 112. In addition, a floppy disk 126 can also include a software program, and is inserted into the microprocessor-based unit 112 for inputting the software program. The compact disk-read only memory (CD-ROM) 124 or the floppy disk 126 can alternatively be inserted into an externally located disk drive unit 122 which is connected to the microprocessor-based unit 112. Still further, the microprocessor-based unit 112 is programmed, as is well known in the art, for storing the software program internally. The microprocessor-based unit 112 can also have a network connection 127, such as a telephone line, to an external network, such as a local area network or the Internet. A printer 128 can also be connected to the microprocessor-based unit 112 for printing a hardcopy of the output from the computer system 110.
Images are displayed on the display 114 via a personal computer card (PC card) 130, such as a PCMCIA card (as it was formerly known, based on the specifications of the Personal Computer Memory Card International Association), which contains digitized images electronically embodied in the PC card 130. The PC card 130 is ultimately inserted into the microprocessor-based unit 112 for permitting visual display of the image on the display 114. Alternatively, the PC card 130 is inserted into an externally located PC card reader 132 connected to the microprocessor-based unit 112. Images are also input via the compact disk 124, the floppy disk 126, or the network connection 127. Any images stored in the PC card 130, the floppy disk 126, or the compact disk 124, or input through the network connection 127, are obtained from a variety of sources, such as a digital camera (not shown) or a scanner (not shown). Images are also input directly from a digital camera 134 via a camera docking port 136 connected to the microprocessor-based unit 112, directly from the digital camera 134 via a cable connection 138 to the microprocessor-based unit 112, or via a wireless connection 140 to the microprocessor-based unit 112.
In accordance with the invention, the algorithm is stored in any of the storage devices heretofore mentioned and applied to images in order to automatically scene balance the images.
Returning to the operation of the preferred embodiment, the color and panchromatic paxel values are converted to a luminance value (Y) and three chrominance values (C1, C2, C3). These computations are widely interpreted as luminance (Y), the color temperature axis (C1), the green-magenta axis (C2), and the white-green axis (C3). It will be apparent to one skilled in the art that other computations can be used to produce different YCCC values that would still be applicable to the preferred embodiment.
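The conversion can be expressed as a 4-by-4 matrix applied to each paxel. The coefficients below are illustrative assumptions chosen only to match the stated interpretations of Y, C1, C2, and C3; they are not values taken from this disclosure.

```python
import numpy as np

# Rows: Y, C1, C2, C3; columns: R, G, B, P. Assumed coefficients.
M = np.array([
    [ 0.25,  0.25,  0.25, 0.25],  # Y : overall luminance
    [ 1.0,   0.0,  -1.0,  0.0 ],  # C1: color-temperature (red-blue) axis
    [ 1.0,  -2.0,   1.0,  0.0 ],  # C2: green-magenta axis
    [-1/3,  -1/3,  -1/3,  1.0 ],  # C3: white-green axis
])

def paxels_to_yccc(paxels: np.ndarray) -> np.ndarray:
    """Convert an H x W x 4 array of (R, G, B, P) paxels to YCCC."""
    return paxels @ M.T
```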
The (r, g, b, p) values so computed are the log RGBP scene balance values 238. The log RGBP target values, (rT, gT, bT, pT), are specified to produce correct exposure and white balance adjusted pixel values for an 18% scene reflectance gray region in the enhanced RGBP CFA image 220. The scene balance adjustments are computed as the differences between the target values and the computed values:

rA = rT - r
gA = gT - g
bA = bT - b
pA = pT - p

The (rA, gA, bA, pA) values are the RGBP scene balance values 216. They are applied as additive corrections in the log domain:

r' = r + rA
g' = g + gA
b' = b + bA
p' = p + pA

where (r, g, b, p) are the log RGBP CFA image 246 values, (rA, gA, bA, pA) are the RGBP scene balance values 216, and (r', g', b', p') are the corresponding values of the enhanced image.
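A minimal sketch of this log-domain balancing step follows; the use of base-2 logarithms and the specific target value log2(0.18) for all four channels are assumptions made for the example.

```python
import numpy as np

# Assumed log RGBP target values (rT, gT, bT, pT) for an 18% gray patch.
LOG_TARGETS = np.full(4, np.log2(0.18))

def apply_scene_balance(log_means: np.ndarray, log_cfa: np.ndarray) -> np.ndarray:
    """Compute (rA, gA, bA, pA) = target values minus computed values, then
    add them to the log RGBP CFA image; an additive shift in the log
    domain corresponds to a multiplicative gain in the linear domain."""
    adjustments = LOG_TARGETS - np.asarray(log_means)  # (rA, gA, bA, pA)
    return log_cfa + adjustments
```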
A scene brightness value BV is computed from the camera meta-data according to the APEX relationship

BV = TV + AV - SV

where TV is the time value, TV = log2(1/t) for exposure time t; AV is the aperture value, AV = log2(f^2) for F-number f; and SV is the speed value corresponding to the ISO speed setting of the camera.
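For illustration, the brightness value can be computed directly from typical camera meta-data; the mapping SV = log2(ISO/3.125) is the standard APEX convention and is used here as an assumption about the speed value.

```python
import math

def brightness_value(f_number: float, exposure_time_s: float, iso: float) -> float:
    """APEX scene brightness: BV = TV + AV - SV."""
    tv = math.log2(1.0 / exposure_time_s)  # time value
    av = math.log2(f_number ** 2)          # aperture value
    sv = math.log2(iso / 3.125)            # speed value (APEX convention)
    return tv + av - sv

# Bright daylight lands near BV 10; dim interiors near BV 2-4.
print(brightness_value(16.0, 1 / 125, 100))  # ~10.0
```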
A first illuminant score Z1 is computed by the expression

Z1 = a1BV + a2C1 + a3

where a1, a2, and a3 are predetermined constants. A second illuminant score Z2 is computed by the expression

Z2 = b1BV + b2C2 + b3

where b1, b2, and b3 are predetermined constants. A third illuminant score Z3 is computed by the expression

Z3 = c1BV + c2C3 + c3

where c1, c2, and c3 are predetermined constants.
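A minimal sketch of the scoring step follows. The coefficient values are placeholders assumed for the example; in practice such constants would be determined empirically, for instance by regression over images of known illuminant.

```python
def illuminant_scores(bv, ch1, ch2, ch3):
    """Linear illuminant scores from the brightness value BV and the
    chrominance values C1, C2, C3 (placeholder coefficients)."""
    a1, a2, a3 = 0.8, 1.2, -6.0   # assumed values
    b1, b2, b3 = 0.5, 1.0, -3.0   # assumed values
    c1, c2, c3 = 0.4, 1.5, -2.0   # assumed values
    z1 = a1 * bv + a2 * ch1 + a3
    z2 = b1 * bv + b2 * ch2 + b3
    z3 = c1 * bv + c2 * ch3 + c3
    return z1, z2, z3
```

The three scores can then be compared, for example by thresholding, to select among candidate illuminant classes.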
The illuminant scores Z1, Z2, and Z3 are then used to classify the scene illuminant and to determine the corresponding scene balance correction to be applied.
The scene balance algorithms disclosed in the preferred embodiments of the present invention can be employed in a variety of user contexts and environments. Exemplary contexts and environments include, without limitation, wholesale digital photofinishing (which involves exemplary process steps or stages such as film in, digital processing, prints out), retail digital photofinishing (film in, digital processing, prints out), home printing (home scanned film or digital images, digital processing, prints out), desktop software (software that applies algorithms to digital prints to make them better or even just to change them), digital fulfillment (digital images in, from media or over the web; digital processing; images out, in digital form on media, in digital form over the web, or printed on hard-copy prints), kiosks (digital or scanned input, digital processing, digital or scanned output), mobile devices (e.g., a PDA or cell phone that is used as a processing unit, a display unit, or a unit to give processing instructions), and as a service offered via the World Wide Web.
In each case, the scene balance algorithms can stand alone or can be components of a larger system solution. Furthermore, the interfaces with the algorithms, e.g., the scanning or input, the digital processing, the display to a user (if needed), the input of user requests or processing instructions (if needed), and the output, can each be on the same or different devices and physical locations, and communication between the devices and locations can be via public or private network connections or media-based communication. Where consistent with the foregoing disclosure of the present invention, the algorithms themselves can be fully automatic, can have user input (i.e., be fully or partially manual), can have user or operator review to accept or reject the result, or can be assisted by metadata (metadata that is user supplied, supplied by a measuring device (e.g., in a camera), or determined by an algorithm). Moreover, the algorithms can interface with a variety of workflow user interface schemes.
The scene balance algorithms disclosed herein in accordance with the invention can have interior components that utilize various data detection and reduction techniques (e.g., face detection, eye detection, skin detection, flash detection).
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications are effected within the spirit and scope of the invention.
Reference is made to commonly assigned U.S. patent application Ser. No. 11/341,206, filed Jan. 27, 2006 by James E. Adams, Jr. et al, entitled "Interpolation of Panchromatic and Color Pixels", the disclosure of which is incorporated herein by reference.