This application is a U.S. National Stage Application under 35 U.S.C. 371 of International Application PCT/EP2017/057222, filed Mar. 27, 2017, which was published in accordance with PCT Article 21(2) on Oct. 5, 2017, in English, and which claims the benefit of European Patent Application No. 16305372.1 filed Mar. 30, 2016, each of which is incorporated herein by reference in its entirety.
This invention pertains to the field of detection of over-exposed or saturated regions in an image. Such a detection is notably used before color correction, for instance for image restoration or for conversion to higher dynamic range.
Capturing a wide dynamic range scene with a standard low dynamic range camera can result in saturated or over-exposed images. As most image or video content is generally coded in a low or standard dynamic range ("SDR"), it generally contains saturated and/or over-exposed regions. Meanwhile, display devices are now becoming available that can reproduce wider color gamut, higher dynamic range and higher resolution images or video content. Therefore, in order to use the full capabilities of such display devices when displaying SDR content, a color correction should generally be applied to this content to recover the detail and information lost in its saturated or over-exposed regions.
A first step of such a correction generally comprises an identification of these over-exposed regions. For example, in an SDR image the colors of which are represented by color coordinates, each of which represents a different color channel and is coded on 8 bits, all color coordinate values above 255 are clipped to 255. Commonly, 235 is used as a fixed saturation threshold for detecting saturated colors and the associated pixels in 8-bit images. One reason for this value of 235 may be that, above it, the response of the sensor of the camera that captured the SDR image is no longer linear. In most usual color correction methods used to recover detail lost in over-exposed regions of content, the identification of over-exposed regions is based on exceeding a fixed saturation threshold in at least one of the different color channels, for instance the usual R, G and B channels. Given such a saturation threshold, all pixels of an image the colors of which have at least one color channel with a value higher than the saturation threshold are considered over-exposed and form the saturated or over-exposed regions of this image.
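This prior-art fixed-threshold detection can be sketched as follows (a minimal NumPy illustration; the function name is illustrative and not from the original):

```python
import numpy as np

def saturated_mask_fixed(img, thr=235):
    """Prior-art detection: a pixel is flagged as over-exposed when ANY of
    its R, G or B values exceeds a single fixed threshold (commonly 235
    for 8-bit content). `img` is an array whose last axis holds R, G, B."""
    return np.any(np.asarray(img) > thr, axis=-1)
```

The returned boolean mask marks the saturated or over-exposed regions of the image.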
Using such a fixed saturation threshold value to detect over-exposed regions can however be problematic. In a typical camera sensor, adjacent elements are generally covered by different red, green and blue colored filters corresponding respectively to the R, G and B channels of the image data delivered by the camera. As such, different elements of the camera sensor might not always receive the same amount of light and might not reach their maximum capacity at the same time, leading to different behaviors in the red, green and blue channels of the image data delivered by the camera. This is particularly the case if the light in the scene is not white. Therefore, using a fixed saturation threshold for all three color channels of an RGB image may lead to a wrong detection of over-exposed regions and to an incorrect subsequent color correction.
An object of the invention is to avoid the aforementioned drawbacks.
For this purpose, the subject of the invention is a method of detection of saturated pixels in an image the colors of which are represented by color coordinates corresponding to different color channels, the method comprising detecting pixels the colors of which have at least one color coordinate, corresponding to one of said color channels, that is greater than a saturation threshold for said color channel, wherein said saturation thresholds for said color channels depend respectively on color coordinates representing an illuminant of said image.
This method advantageously limits wrong detections of saturated pixels, notably because it takes into account the effect of the illuminant of the scene.
In a first variant, saturation thresholds for said color channels are respectively equal to color coordinates representing said illuminant.
In a second preferred variant, saturation thresholds for said color channels are obtained by scaling color coordinates representing said illuminant into scaled color coordinates such that these scaled color coordinates are included into a fixed range of saturation values.
Said scaling may be performed without change of hue, or, preferably, keeps constant ratios between color coordinates (rw, gw, bw) representing said illuminant (ILL).
An object of the invention is also a module for detection of saturated pixels in an image, the colors of which are represented by color coordinates corresponding to different color channels, said module comprising at least one processor being configured for:
An object of the invention is also a color correction device for the correction of colors of an image, wherein said colors are represented by color coordinates corresponding to different color channels, comprising such a module for detection of saturated pixels. "Color correction device" means here any device configured to change the colors of an image, including a change in dynamic range. Such a change of colors can be implemented for instance for image restoration or for conversion to a higher dynamic range, notably prior to a step of restoring lost details in saturated regions of the image.
An object of the invention is also a computer-readable medium containing a program for configuring a digital computer to perform the method of detection of saturated pixels in an image as described above.
The invention will be more clearly understood on reading the description which follows, given by way of non-limiting example and with reference to
It is to be understood that the method of detection of saturated pixels according to the invention can be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof, notably a combination of hardware and software forming a module for detection of saturated pixels in an image. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a computer comprising any suitable architecture. Preferably, the computer is implemented on a platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces. The platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as a display unit and additional data storage unit.
The module for detection of saturated pixels in an image can notably be part of a color correction device. Such a color correction device may notably be configured for the restoration of images or for the conversion of images into higher dynamic range. According to exemplary and non-limitative embodiments, such a color correction device may be included in a mobile device; a communication device; a game device; a tablet (or tablet computer); a laptop; a still image camera; a video camera; an encoding chip; a still image server; and a video server (e.g. a broadcast server, a video-on-demand server or a web server).
The functions of the various elements shown in
A description will now be given below of a main embodiment of a method of detection of saturated pixels of an image of a scene, the colors of which are represented in a given color space by a set of RGB color values, wherein each of these color values corresponds to a different color channel.
As illustrated on
As an example of implementation of this first step, one can more specifically use the method of Jun-yan Huo, Yi-lin Chang, Jing Wang, and Xiao-xia Wei described in the article entitled “Robust automatic white balance algorithm using gray color points in images”, in Consumer Electronics, IEEE Transactions on, 52(2):541-546, 2006. Such a method comprises the following sub-steps:
where a_i, b_i and L_i represent the color of a pixel i in the CIELab color space, and where t=0.3 in the proposed implementation.
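This first step can be sketched as follows (a hedged reading of the gray-point criterion of Huo et al. and of the mean of Eq. 1; the CIELab conversion is assumed to be done elsewhere, and the array and function names are illustrative):

```python
import numpy as np

def estimate_illuminant(rgb, lab, t=0.3):
    """Gray-point illuminant estimation, as understood from the text:
    pixels whose CIELab chromaticity is small relative to their lightness,
    i.e. (|a_i| + |b_i|) / L_i < t, are treated as near-gray, and the
    illuminant (rw, gw, bw) is the mean RGB value over those pixels.
    `rgb` and `lab` are N x 3 arrays of per-pixel values."""
    L, a, b = lab[:, 0], lab[:, 1], lab[:, 2]
    gray = (np.abs(a) + np.abs(b)) / np.maximum(L, 1e-6) < t
    return rgb[gray].mean(axis=0)
```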
As a variant of this example of implementation of the first step, pixels having saturated (or clipped) color values, i.e. values at or above a threshold thr, are not considered when computing the mean RGB value. Pixels having colors that are already reaching saturation are not likely to correctly represent the white point of the image and should therefore not be included in the computation. To achieve this, Eq. 1 above can be rewritten as
As such, only pixels i having color values I_i below thr are included in the computation of the mean RGB value of Eq. 2, as it is assumed that any color value above the saturation threshold thr is likely to be clipped. In this variant, there are no separate high and low thresholds as in the scaling described further below; thr is simply the saturation threshold, and it can be set to the same value as the upper limit thr of that fixed range.
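This variant can be sketched by adding the clipping exclusion to the gray-point mean (again a hedged reading of Eq. 2, with illustrative names):

```python
import numpy as np

def estimate_illuminant_unclipped(rgb, lab, t=0.3, thr=235):
    """Gray-point mean excluding clipped pixels, as understood from the
    text: a pixel with any channel at or above thr is assumed clipped and
    is left out of the mean that estimates the illuminant (rw, gw, bw)."""
    L, a, b = lab[:, 0], lab[:, 1], lab[:, 2]
    gray = (np.abs(a) + np.abs(b)) / np.maximum(L, 1e-6) < t
    unclipped = np.all(rgb < thr, axis=1)
    return rgb[gray & unclipped].mean(axis=0)
```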
In a first variant of detection of saturated pixels in the image, the color coordinates rw, gw and bw of the estimated illuminant are retained as threshold values thr, thg and thb for the R, G and B color channels respectively. With thr=rw, thg=gw and thb=bw, a color of the image having r, g and b color coordinates is considered saturated if r>thr and/or g>thg and/or b>thb.
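The detection rule of this first variant can be sketched as a per-channel comparison (illustrative NumPy code, not from the original):

```python
import numpy as np

def saturated_mask_per_channel(img, th):
    """A pixel is saturated when any channel exceeds its OWN threshold:
    r > thr and/or g > thg and/or b > thb, with th = (thr, thg, thb)
    taken here directly from the illuminant estimate (rw, gw, bw)."""
    return np.any(np.asarray(img) > np.asarray(th), axis=-1)
```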
But the color coordinates rw, gw, bw representing the color of an illuminant as estimated from an SDR image very often have values that are too low to be used directly as saturation thresholds for over-exposure or saturation detection. As a matter of fact, using very low saturation threshold values could lead to the misdetection of some well-exposed pixels. That is why a second variant of detection of saturated pixels in the image is preferred, which is illustrated on
As an example of this scaling, the threshold values thr, thg and thb for the R, G and B color channels are obtained by shifting the color coordinates rw, gw, bw of the estimated illuminant into a fixed range [thl, thr], where thl and thr are the lower and upper limits of possible over-exposure threshold intensities/luminances. The values thl and thr are notably set by the user. In the implementation below, performed in a context in which RGB color values are encoded over 8 bits, thl=175 and thr=235.
If min(Wrgb) is the minimum coordinate among color coordinates rw, gw, bw of the illuminant, if max(Wrgb) is the maximum coordinate among color coordinates rw, gw, bw, then a threshold vector th having thr, thg and thb as coordinates is defined as follows:
Eqs. 3 and 4 each affect only the elements of th that are below thl or above thr, respectively. As such, these equations are independent.
If for example we find that min(Wrgb)=rw, i.e. the red value, then Eq. 3 would become
Note that in the 2nd and 3rd lines above, we still subtract rw from thl, since rw was found to be the minimum element of Wrgb. If any of the elements rw, gw, bw is greater than or equal to thl, then this element remains unchanged.
Similarly, for Eq. 4, if we find that max(Wrgb)=gw for instance, i.e. the green value, then Eq. 4 effectively becomes
Again, if any of the elements rw, gw, bw is less than or equal to thr, then this element remains unchanged.
Equation 3 means that threshold values thr, thg and thb used for the detection of saturated colors are obtained by a same translation [thl−min(Wrgb)] of all color coordinates rw, gw, bw of the illuminant.
Equation 4 means that threshold values thr, thg and thb used for the detection of saturated colors are obtained by a same homothety of all color coordinates rw, gw, bw of the illuminant, using a homothety ratio
which is greater than 1.
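The range adjustment described around Eqs. 3 and 4 can be sketched as follows. This is a hedged reading of the elided equations, reconstructed from the surrounding prose (common translation by thl − min(Wrgb) for elements below thl; common scaling, by a ratio greater than 1 used as a divisor, for elements above thr); the exact equation forms are assumptions:

```python
import numpy as np

def illuminant_to_thresholds(w_rgb, th_low=175.0, th_high=235.0):
    """Shift/scale the illuminant (rw, gw, bw) into the fixed range
    [thl, thr] (here th_low, th_high) to obtain per-channel thresholds.
    Elements below th_low are all translated by the same offset
    (th_low - min); elements above th_high are all divided by the same
    ratio max / th_high (> 1). The two rules touch disjoint elements,
    so they are applied independently to the original coordinates."""
    w = np.asarray(w_rgb, float)
    lo, hi = w.min(), w.max()
    th = np.where(w < th_low, w + (th_low - lo), w)      # Eq. 3, as understood
    th = np.where(w > th_high, w * (th_high / hi), th)   # Eq. 4, as understood
    return th
```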
In a third step illustrated on
In a variant of the above scaling, this scaling is partially performed in a perceptually uniform color space (such as the LCh color space) instead of being entirely performed in a device-dependent RGB color space as above.
In a first sub-step of this variant, an intermediate threshold vector th′ is calculated by scaling the color Wrgb of the illuminant such that the green component gw of this illuminant is scaled into thr as follows:
The green component is preferably chosen for this scaling sub-step because it contributes more to the luminance of the image than the red and blue components. But it is also possible to use the red or the blue component of the illuminant for this scaling.
If any of the color components th′r, th′g and th′b of the intermediate threshold vector th′ exceeds the maximum code value, here 255, then, after conversion of these components th′r, th′g and th′b representing the threshold color in the RGB color space into components th′L, th′C and th′h representing the same color in the LCh color space, the chroma component th′C is scaled down to a reduced value th″C = k·th′C such that none of the color components th″r, th″g and th″b, resulting from the conversion back to the RGB color space of the components th′L, th″C and th′h, exceeds the maximum value of 255, where k is as close as possible to 1 and less than 1. Such a value of k can be found by iteration. The final threshold color th″ is the color represented in the RGB color space by the components th″r, th″g and th″b.
Because the above chroma scaling is performed without change of hue, this variant of scaling advantageously ensures that the hue of the illumination point of the image is not changed, so that any inadvertent hue change is avoided in the corrections of saturated/clipped areas performed after detecting saturated pixels in the image. Since the illumination point is adjusted by scaling its chroma component in the CIE LCh color space, it becomes less and less saturated while advantageously preserving its hue.
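The iterative chroma reduction of this variant can be sketched as follows. This is only an illustrative implementation under stated assumptions: sRGB with D65 white is assumed for the RGB↔Lab conversions (the patent's color space is device-dependent and unspecified), scaling chroma C by k at constant L and hue is implemented equivalently by scaling the Lab components a and b by k, and the iteration step for k is arbitrary:

```python
import numpy as np

# Assumed sRGB (D65) linear-RGB -> XYZ matrix; WHITE is the XYZ of RGB (1,1,1).
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = M @ np.ones(3)

def rgb_to_lab(rgb):
    """sRGB (0..255) -> CIELab via the standard sRGB gamma and CIE f()."""
    c = np.asarray(rgb, float) / 255.0
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    t = M @ lin / WHITE
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
    return np.array([116*f[1] - 16, 500*(f[0] - f[1]), 200*(f[1] - f[2])])

def lab_to_rgb(lab):
    """CIELab -> sRGB (0..255); the exact inverse of rgb_to_lab."""
    L, a, b = lab
    fy = (L + 16) / 116
    f = np.array([fy + a/500, fy, fy - b/200])
    t = np.where(f > 6/29, f**3, 3*(6/29)**2 * (f - 4/29)) * WHITE
    lin = np.clip(np.linalg.solve(M, t), 0, None)
    c = np.where(lin <= 0.0031308, 12.92*lin, 1.055*lin**(1/2.4) - 0.055)
    return c * 255.0

def reduce_chroma(th_rgb, max_val=255.0):
    """Find k < 1, as close to 1 as possible, such that scaling the chroma
    of th_rgb (at constant lightness and hue) brings all RGB components
    at or below max_val; k is found by simple descending iteration."""
    L, a, b = rgb_to_lab(th_rgb)
    k = 1.0
    while k > 0:
        rgb = lab_to_rgb((L, k*a, k*b))
        if np.all(rgb <= max_val):
            return rgb
        k -= 0.01  # shrink chroma slightly and retry
    return lab_to_rgb((L, 0.0, 0.0))  # fallback: fully desaturated
```

A finer step, or a bisection on k, would give a k closer to 1 at higher cost.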
The method and module for detecting saturated pixels in an image of a scene which has been described above limit advantageously wrong detection of saturated pixels, notably because this detection takes into account the effect of the illuminant of the scene.
Although the illustrative embodiments of the invention have been described herein with reference to the accompanying drawing, it is to be understood that the present invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the invention. All such changes and modifications are intended to be included within the scope of the appended claims. The present invention as claimed therefore includes variations from the particular examples and preferred embodiments described herein, as will be apparent to one of skill in the art.
While some of the specific embodiments may be described and claimed separately, it is understood that the various features of embodiments described and claimed herein may be used in combination.
Number | Date | Country | Kind |
---|---|---|---|
16305372 | Mar 2016 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2017/057222 | 3/27/2017 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2017/167700 | 10/5/2017 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5231504 | Magee | Jul 1993 | A |
8026953 | Lammers et al. | Sep 2011 | B2 |
8144216 | Shiraishi | Mar 2012 | B2 |
8154560 | Kurokawa et al. | Apr 2012 | B2 |
8331665 | Cordes et al. | Dec 2012 | B2 |
8830350 | Yamada et al. | Sep 2014 | B2 |
9171215 | Kanou et al. | Oct 2015 | B2 |
20020163525 | Liao et al. | Nov 2002 | A1 |
20040246336 | Kelly, III | Dec 2004 | A1 |
20050123211 | Wong | Jun 2005 | A1 |
20060146193 | Weerasinghe et al. | Jul 2006 | A1 |
20120200589 | Min | Aug 2012 | A1 |
20140375849 | Komatsu | Dec 2014 | A1 |
20150063690 | Chiu | Mar 2015 | A1 |
20160191802 | Martinello | Jun 2016 | A1 |
20160350900 | Barron | Dec 2016 | A1 |
Number | Date | Country |
---|---|---|
101714340 | Aug 2012 | CN |
2006303559 | Nov 2006 | JP |
2006333313 | Dec 2006 | JP |
2010062920 | Mar 2010 | JP |
2010103700 | May 2010 | JP |
2010161557 | Jul 2010 | JP |
2015092301 | May 2015 | JP |
2015092643 | May 2015 | JP |
101136345 | Apr 2012 | KR |
2006102386 | Aug 2007 | RU |
WO 2005117454 | Dec 2005 | WO |
WO 2013108493 | Jul 2013 | WO |
2015113881 | Aug 2015 | WO |
Entry |
---|
Gijsenij et al., “Computational Color Constancy: Survey and Experiments”, IEEE Transactions on Image Processing, vol. 20, No. 9, Sep. 2011, pp. 2475-2489. |
Huo et al., “Robust Automatic White Balance Algorithm using Gray Color Points in Images”, IEEE Transactions on Consumer Electronics, vol. 52, No. 2, May 2006, pp. 541-546. |
Guo et al., “Correcting Over-Exposure in Photographs”, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, California, USA, Jun. 13, 2010, pp. 515-521. |
Didyk et al., “Enhancement of Bright Video Features for HDR Displays”, Eurographics Symposium on Rendering 2008, vol. 27, No. 4, (2008), 10 pages. |
Masood et al., “Automatic Correction of Saturated Regions in Photographs using Cross-Channel Correlation”, Pacific Graphics 2009, vol. 28, No. 7, (2009), 9 pages. |
Khan et al., “Compensation for Specular Illumination in Acne Patients Images Using Media Filter”, 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS 2014), Kuala Lumpur, Malaysia, Jun. 3, 2014, 4 pages. |
Fu et al., “Correcting Saturated Pixels in Images”, Digital Photography VIII, Proceedings of SPIE-IS&T Electronic Imaging, vol. 8299, No. 1, Jan. 22, 2012, 11 pages. |
JP2015092301—English Translation. |
JP2010062920—English Translation. |
JP2006333313—English Translation. |
JP2015092643A—English Translation. |
JP2006303559 A—English Abstract, Abstract Only. |
Number | Date | Country | |
---|---|---|---|
20190114749 A1 | Apr 2019 | US |