Determining the color of light that illuminates a scene and correcting an image to account for the lighting effect is referred to as the “color constancy problem,” and is a consideration for many imaging applications. For example, digital cameras may use a color constancy algorithm to detect the illuminant(s) for a scene, and make adjustments accordingly before generating a final image for the scene. The human eye is sensitive to imperfections. Therefore, the performance of any color constancy algorithm has a direct effect on the perceived capability of the camera to produce quality images.
a-b are photographs illustrating example output based on ranking color correction processes.
a-d are photographs illustrating example output based on ranking color correction processes.
Imaging devices, such as digital cameras, may use illuminant detection process(es) to enhance color reproduction in the images. The performance of such processes contributes to the overall image quality of the imaging device. But because no single process has been shown to be significantly better than another process under all possible lighting conditions, more than one process may be implemented in an imaging device to determine the scene illuminant and make adjustments accordingly. Example processes include, but are not limited to, CbyC, BV Qualification, Gray World, Max RGB, and Gray Finding.
But implementing multiple processes presents another challenge: how can the results from different processes be combined to give the desired output, particularly when different processes may give different results under the same lighting conditions? An ad-hoc or heuristic approach may be used, for example, relying on the experience of human “experts.” But these approaches are still error-prone.
The systems and methods described herein disclose a new approach and framework in which different processes are ranked during use or “on the fly.” An example uses the same image that is being analyzed, and each algorithm influences the outcome (e.g., the “voting power” of the algorithm is adjusted) based on the ranking of the algorithm. This approach is based on subimage analysis, and may be used with any of a wide variety of underlying processes on any of a wide variety of cameras or other imaging technologies, both now known and later developed. Another benefit is the ability to increase the statistical sample size by using “sub-image” analysis. In other words, this approach is similar to capturing multiple images of the same scene (without having to actually capture a plurality of images), which increases the statistically meaningful sample size and allows a better decision to be made based on the larger sample set.
Exemplary image sensor 130 may be implemented as a plurality of photosensitive cells, each of which builds-up or accumulates an electrical charge in response to exposure to light. The accumulated electrical charge for any given pixel is proportional to the intensity and duration of the light exposure. Exemplary image sensor 130 may include, but is not limited to, a charge-coupled device (CCD), or a complementary metal oxide semiconductor (CMOS) sensor.
Camera 100 may also include image processing logic 140. In digital cameras, the image processing logic 140 receives electrical signals from the image sensor 130 representative of the light 120 captured by the image sensor 130 during exposure to generate a digital image of the scene 125. The digital image may be stored in the camera's memory 150 (e.g., a removable memory card).
Shutters, image sensors, memory, and image processing logic, such as those illustrated in
Camera 100 may also include a photo-editing subsystem 160. In an exemplary embodiment, photo-editing subsystem 160 is implemented as machine readable instructions embodied in program code (e.g., firmware and/or software) residing in computer readable storage and executable by a processor in the camera 100. The photo-editing subsystem 160 may include color correction logic 165 for analyzing and correcting for color in the camera 100.
Color correction logic 165 may be operatively associated with the memory 150 for accessing a digital image (e.g., a pre-image) stored in the memory 150. For example, the color correction logic 165 may read images from memory 150, apply color correction to the images, and write the image with the applied color correction back to memory 150 for output to a user, for example, on a display 170 for the camera 100, for transfer to a computer or other device, and/or as a print.
Before continuing, it is noted that the camera 100 shown and described above with reference to
For purposes of illustration, a set of images of the scene being photographed could be captured by “switching” a lens from wide angle to telephoto under constant conditions. In practice, however, multiple images are not taken using different lenses, because the scene conditions may change between image capture sessions. For example, the lighting, lens quality, and/or the camera angle may change if different lenses are used at different times to photograph the scene.
Instead, a single image is captured and stored in memory. Then, the subimage generator 210 crops portions from within the same image to obtain a set of subimages for the image. The set of images includes the same data as though the image had been taken using a wide angle lens to capture the main image, and then the subimages had been taken of the same scene using a telephoto lens. Using subimage generator 210, the image and the subimages are based on the same conditions (both scene and camera conditions).
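The cropping performed by the subimage generator can be sketched as follows. This is a minimal illustration, assuming the image is a simple nested list of pixel rows; the function name and parameters are not identifiers from the original system.

```python
# A hedged sketch of subimage generation: crop overlapping windows from
# within a single image, so every subimage shares the same scene and
# camera conditions as the main image.
def generate_subimages(image, crop_fraction=0.5, steps=3):
    """Slide a crop window across the image in equal steps and collect
    the resulting subimages."""
    height, width = len(image), len(image[0])
    crop_h = int(height * crop_fraction)
    crop_w = int(width * crop_fraction)
    subimages = []
    for i in range(steps):
        for j in range(steps):
            # Position the crop window for this step.
            top = (height - crop_h) * i // max(steps - 1, 1)
            left = (width - crop_w) * j // max(steps - 1, 1)
            subimages.append([row[left:left + crop_w]
                              for row in image[top:top + crop_h]])
    return subimages

# Example: a 4x4 "image" of pixel tuples yields 9 overlapping 2x2 crops.
img = [[(r, c, 0) for c in range(4)] for r in range(4)]
crops = generate_subimages(img, crop_fraction=0.5, steps=3)
```

Each crop is then processed as if it were an independently captured image, which is how the single exposure stands in for a plurality of images.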
Next, the image processing module 220 may be implemented to process the set of images including the subimages for the image. For example, processing the set of images may include applying a color correction process to the set of images and obtaining results. This may be repeated using different color correction processes to obtain results for each of a plurality of color correction processes. The results from applying each of the color correction processes are then analyzed across the set of images, and the degree of influence each color correction process is allowed to have (e.g., the “vote” of each color correction process) is based on the results of the color correction processes for each of the applications of the color correction processes. Using a set of images results in more consistent results from each of the processes, which enables the system to better “understand the scene” being photographed.
In addition, image processing module 220 may use a plurality of color correction processes, now known and/or later developed. Example processes include, but are not limited to, CbyC, BV Qualification, Gray World, Max RGB, and Gray Finding. Image processing is not limited to use with any particular type of color correction processes. The performance of each color correction process is evaluated, for example, using statistical analysis.
In an example, some or all of the color correction processes are used to process the subimages, just as those processes would ordinarily process the overall image itself. The results of each process are analyzed to identify information, such as a mean and variance across the sub-image set. For purposes of illustration, this information is designated herein as F. The final result can then be determined using a function, designated herein as f(W, F), where W is a set of parameters. An example of this determination is shown for purposes of illustration by the following pseudocode:
R = ΣWiRi / ΣWi
Wi = (C − Vi) * Wvi
Ei = abs(Ri − R′i) / R′i
E = ΣEi
Minimize(E)
In the above pseudocode, R is the result and E is the error. It is also noted that “temperature” as used herein refers to color temperature. Color temperature is commonly defined such that lower Kelvin (K) ratings indicate “warmer” or more red and yellow colors in the light illuminating the scene. Higher Kelvin ratings indicate “cooler” or more blue colors in the light illuminating the scene.
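The weighted vote in the pseudocode can be sketched in a few lines. This is an illustrative interpretation, assuming Ri is each process's color-temperature estimate and Vi its variance across the subimage set; the constant C and the sample values are assumptions chosen for demonstration, not values from the original system.

```python
# A sketch of R = sum(Wi*Ri)/sum(Wi) with Wi = (C - Vi)*Wvi: processes
# whose results vary widely across the subimages get a smaller vote.
def combine_estimates(results, variances, wv, c=1.0):
    """Combine per-process estimates Ri into a single result R."""
    weights = [(c - v) * w for v, w in zip(variances, wv)]
    weights = [max(w, 0.0) for w in weights]  # a process may get no vote at all
    total = sum(weights)
    if total == 0:
        # Fall back to a plain average if every weight vanishes.
        return sum(results) / len(results)
    return sum(w * r for w, r in zip(weights, results)) / total

# Three processes estimate the scene illuminant (in Kelvin); the second
# is inconsistent across the subimages, so its vote is reduced.
estimate = combine_estimates(results=[5200.0, 6500.0, 5400.0],
                             variances=[0.1, 0.9, 0.2],
                             wv=[1.0, 1.0, 1.0])
```

With these assumed inputs the combined estimate stays close to the two consistent processes rather than being pulled toward the outlier.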
While the value of W can be determined manually based on human experience, in another example the optimal values for W are determined automatically using machine learning and optimization technologies. Given a labeled dataset (e.g., output from processing each of the subimages using the color correction processes), machine learning and optimization technologies find an optimum value of W so that the final result R has minimal error E for the dataset. If the dataset is reasonable in size and content, the system yields better overall performance.
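The optimization of W can be sketched with a coarse grid search standing in for the machine learning and optimization technologies mentioned above. This is a minimal sketch under assumed inputs: the dataset format, grid values, and function names are illustrative, not from the original system.

```python
from itertools import product

def total_error(w, dataset):
    """E = sum over labeled images of abs(R - R')/R', where R is the
    weighted vote of the per-process estimates."""
    e = 0.0
    for estimates, truth in dataset:
        r = sum(wi * ri for wi, ri in zip(w, estimates)) / sum(w)
        e += abs(r - truth) / truth
    return e

def optimize_w(dataset, n_processes, grid=(0.0, 0.5, 1.0)):
    """Pick the W on a coarse grid that minimizes E for the dataset."""
    best_w, best_e = None, float("inf")
    for w in product(grid, repeat=n_processes):
        if sum(w) == 0:
            continue  # at least one process must keep a vote
        e = total_error(w, dataset)
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

# Two processes with labeled ground truth (Kelvin); the first tracks the
# true temperature closely, so the search gives it the voting power.
data = [([5000.0, 7000.0], 5000.0), ([4100.0, 6000.0], 4000.0)]
w, err = optimize_w(data, n_processes=2)
```

In practice a gradient-based or constraint-programming optimizer would replace the grid search, but the objective being minimized is the same E.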
The ranking module 230 may then be used to rank color correction processes across the subimages. The amount or degree of influence each color correction process contributes to the final color correction process is based on the ranking. That is, the color correction process “votes” based on how well the color correction process performs for the particular scene being photographed.
It is noted that in some examples, a color correction process may have little or even no influence at all. In other examples, a single color correction process may have most or all of the voting power. But in many cases, a plurality of color correction processes will be used to varying extents to apply color correction to the image being photographed.
Using multiple color correction processes enables better color correction in the final image. The rendering module 240 is then used to apply color correction to an image based on the ranking of the color correction processes.
A framework based on constraint programming was developed. In this example, 161 photos with RAW format were captured. The dataset was divided into two sets, images 1-100 and images 101-161. The first set was used for training, and the second set was used for measuring the errors.
The error (E) in each entry of Table 1 is a mean absolute percentage error (MAPE). As can be seen in Table 1, each of the six known color correction processes (CCP1-CCP6) had higher error rates when used individually, when compared to the combined color correction process (CCCP) implementing the color correction ranking process described herein.
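The MAPE metric used for the entries of Table 1 can be computed as follows. The sample values below are illustrative only, not the patent's measurements.

```python
# Mean absolute percentage error: the mean of abs(prediction - truth)/truth,
# expressed as a percentage.
def mape(predicted, actual):
    errors = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return 100.0 * sum(errors) / len(errors)

# e.g., illuminant temperature estimates vs. labeled ground truth (Kelvin)
error = mape([5200.0, 6300.0, 4000.0], [5000.0, 6000.0, 4100.0])
```

A lower MAPE for the combined process (CCCP) than for any individual process (CCP1-CCP6) is what Table 1 reports.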
Accordingly, the color correction ranking process may be implemented as a system (e.g., in a digital camera) to rank any of a wide variety of different color correction processes during image capture or “on the fly,” based on the same image that is being captured and analyzed. Then each process's voting power may be adjusted based on the corresponding ranking. No prior knowledge of the scene being photographed or the conditions is needed.
a-b are photographs illustrating example output based on ranking color correction processes. In this example, a scene was photographed under mixed illumination. The mixed illuminants included inside lighting from lamps inside the room, and outside lighting from sunlight shining through the window.
It is noted that the systems and methods described herein may be implemented under any of a wide variety of lighting conditions. For example, different lighting conditions may exist inside a room even if there is no outside lighting. Such would be the case where both an incandescent light and a fluorescent light are used in or near the scene being photographed. In addition, different output from various light sources may also create a mixed illumination effect.
In this example,
b shows the output from a digital camera which implemented the ranking color correction processes described herein. The difference between the cameras used to take the two photographs shown in
The output shown by the photograph in
a-d are photographs illustrating example output based on ranking color correction processes. In this example, a scene illuminated by a single source (e.g., outdoors in the sunlight) was photographed from different angles. Current illuminant detection processes can be sensitive to angle and/or other conditions. For example, the same camera can give quite different color results even when the camera view of the scene is only changed slightly.
It is noted that the systems and methods described herein may be implemented under any of a wide variety of conditions. For example, conditions that affect color determination and correction may include, but are not limited to, camera angle (also referred to as “angle of approach”), optical/digital zoom level, and lens characteristics.
In this example, the photographs shown in
The photographs shown in
The camera used to take the photographs shown in
In operation 510, subimages of an image are processed using a plurality of color correction processes. In an example, the subimages may include both wide angle crops and telephoto crops of the image. In another example, all of the subimages are crops from the same image.
In operation 520, color correction processes are ranked across the subimages. In operation 530, color correction is applied to the image based on the ranking of the color correction processes.
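Operations 510-530 can be sketched as a single pipeline. The helper names (crop_subimages, apply_correction) and the variance-based ranking rule are hypothetical stand-ins for illustration, not identifiers or the exact ranking criterion from the original system.

```python
def rank_and_correct(image, processes, crop_subimages, apply_correction):
    """Run the three operations: process subimages, rank, apply."""
    # Operation 510: process subimages with each color correction process.
    subimages = crop_subimages(image)
    results = {name: [proc(s) for s in subimages]
               for name, proc in processes.items()}

    # Operation 520: rank processes across the subimages; here, a process
    # whose estimates are more consistent (lower variance) ranks higher.
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)
    ranking = sorted(results, key=lambda name: variance(results[name]))

    # Operation 530: apply color correction to the image based on the ranking.
    return apply_correction(image, ranking, results)
```

The caller supplies the actual color correction processes and the correction step; the pipeline only fixes the order of the three operations.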
The operations shown and described herein are provided to illustrate example implementations of ranking color correction processes. It is noted that the operations are not limited to the ordering shown. Still other operations may also be implemented.
In an example, further operations may include ranking the color correction processes based on results from processing the subimages by the color correction processes. Further operations may also include ranking the color correction processes based on statistical analysis of results from processing the subimages by the color correction processes.
In another example, ranking the color correction processes is based on a function f(W, F), where F is the results from processing the subimages by each color correction process, and W is a set of parameters. Further operations may include optimizing W using machine learning. Further operations may also include determining W from a labeled dataset.
Still further operations may include ranking the color correction processes for use in color correction of the image using the image being analyzed for color correction. Further operations may also include adjusting voting power of each of the correction processes for use in color correction based on the ranking.
It is noted that the examples shown and described are provided for purposes of illustration and are not intended to be limiting. Still other examples are also contemplated.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2011/043578 | 7/11/2011 | WO | 00 | 1/8/2014 |