This application claims priority from European Patent Application No. 16305571.8, entitled “METHOD TO DETERMINE CHROMATIC COMPONENT OF ILLUMINATION SOURCES OF AN IMAGE”, filed on May 17, 2016, the content of which is incorporated herein by reference in its entirety.
This invention concerns the computation of the hue of a white point in each segmenting area of an image, notably when there is a plurality of illuminants. This invention is more generally related to white balancing of multi-illuminated color images.
Existing automatic methods for computing white balance of an image for multiple light sources illuminating this image generally analyze this image locally in order to find local white points and propagate these local white points to the other pixels of the image.
For instance, in the article entitled “Color constancy and non-uniform illumination: Can existing algorithms work?”, by Michael Bleier, Christian Riess, Shida Beigpour, Eva Eibenberger, Elli Angelopoulou, Tobias Tröger, and André Kaup, published in Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pages 774-781, the image is divided into patches and a single white point is computed for each patch. In this document, patches are computed using super-pixel segmentation. A smoothing step may take place to ensure that there are no sharp discontinuities between patches when the computed white points are applied to the image.
In the article entitled “Multi-illuminant estimation with conditional random fields”, by Shida Beigpour, Christian Riess, Joost van de Weijer, and Elli Angelopoulou, published in Image Processing, IEEE Transactions on, 23(1): pages 83-96, 2014, it is proposed to cluster the locally computed white points using K-means clustering to determine a small set of dominant white point colors. Then these white point colors are propagated to the rest of the image using an optimization scheme that encourages image patches to obtain a white point close to the local white point estimate as well as its neighboring patches. The resulting illumination mixture map can be further filtered using for instance a Gaussian filter to remove artifacts.
The above methods can thus detect more than a single illuminant in an image, but within each patch or super-pixel, filtering is applied in an isotropic manner. In all the above cases, the output of the described algorithms is a mixture map of illumination that is the same size as the image. This map is filtered with a smoothing step to avoid discontinuities between adjacent patches, but this smoothing may lead to disturbing color halos across image edges.
An object of the invention is to propose an advantageous method to determine the chromatic component of illumination sources of an image, comprising:
Preferably, said semantic segmenting method is such that, within each segmenting area, approximately constant reflectance, approximately constant indirect lighting and mostly one illuminating source can be assumed.
This means that the segmentation of the image of a scene is based on two key ideas. First, such a segmentation means that nearby pixels within the same segmenting area of the image correspond to elements of this scene that are likely to belong to the same surface of the same object of this scene, and therefore correspond to elements that have similar material properties and thus similar reflectance. It means that, in most cases, color variations between such nearby pixels are likely to be due to directional illumination variations.
Second, such a segmentation means that color variations between such nearby pixels are likely to be due to the same illumination source. In other words, it means that, if the scene is illuminated by multiple light sources, we are likely to find variations around a few different colors across the image, corresponding to the colors of the illumination sources. For instance, if a red illumination source is present, we are likely to find variations (i.e. gradients) along the red component.
Preferably, said pixels between which representative color variations of a segmenting area are computed comprise control pixels distributed along directions crossing said segmenting area and passing through a centroid of said segmenting area.
Preferably, said chromatic similarity criterion is computed between the chromatic components (a, b) of said representative color variations.
Preferably, the similarity weight between two representative color variations that is used for the chrominance clustering step is a decreasing function of a chromatic distance between these two representative color variations.
Preferably, said opponent color space is the CIELab color space.
Preferably, said clustering uses a spectral clustering method.
Preferably, said computing or determining of a chromatic/principal direction uses a Principal Component Analysis of the representative color variations of the chrominance cluster.
An object of the invention is also a method of color grading an image comprising:
Such a method advantageously simplifies the color grading of images through an automatic estimation of the multiple light sources in this image. Thanks to this method, the influence of the illumination and of the reflectance properties of the objects in the scene can be separated, and content from disparate illuminating sources can be modified to attain a consistent color appearance.
An object of the invention is also a method of white balancing an image comprising:
An object of the invention is also an apparatus for the determination of the chromatic component of illumination sources of an image comprising a processor configured for implementing the above method.
An object of the invention is also an apparatus for color grading an image comprising a processor configured for implementing the above method.
An object of the invention is also an apparatus for white balancing an image comprising a processor configured for implementing the above method.
An object of the invention is also an electronic device comprising such an apparatus. Such an electronic device may be notably an image capture device such as a camera, an image display device such as a TV set, a monitor, a head mounted display, or a set top box or a gateway. Such an electronic device may also be a smartphone or a tablet.
An object of the invention is also a computer program product comprising program code instructions to execute the steps of the above method, when this program is executed by a processor.
The invention will be more clearly understood on reading the description which follows, given by way of non-limiting example and with reference to the appended figures in which:
It will be appreciated by those skilled in the art that flow charts presented herein represent conceptual views of illustrative circuitry embodying the invention. They may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. The functions of the various elements shown in the figures may be provided through the use of hardware capable of executing software in association with appropriate software. Such hardware capable of executing such software generally includes a processor, a controller, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
The invention may notably be implemented by any device capable of implementing white balance of an image or color grading of an image. Therefore, the invention can be notably implemented in an image capture device such as a camera, an image display device such as a TV set, a monitor, a head mounted display, or a set top box or a gateway. The invention can also be implemented in a device comprising both an image capture device and an image display device, such as a smartphone or a tablet. All such devices comprise hardware capable of executing software that can be adapted in a manner known per se to implement the invention.
An image being provided to such a device, we will now describe in reference to
In this first step, using a semantic segmentation method, the image is segmented as illustrated on
In this main embodiment, as an example of such a semantic segmentation method, the superpixel-based segmentation method described by Duan, Liuyun, and Florent Lafarge in “Image partitioning into convex polygons”, published in Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, 2015, is used. This method creates a Voronoi partitioning of the image into a plurality of segmenting areas that both follows the structure within this image and samples color gradient information well, i.e. captures the variations of colors along different spatial directions crossing the image or these segmenting areas. When using this method, each superpixel forms a segmenting area.
Alternative segmentation methods can also be used, as long as they are consistent with the following properties for all elements of each segmenting area which is obtained: approximately constant reflectance, approximately constant indirect lighting and mainly one illuminating source. More precisely, in each segmenting area which is obtained:
The number of segmenting areas obtained by this segmenting step can be controlled by a parameter ε which can be manually set. A higher value leads to more segmenting areas, which follow the structure of the image more accurately, while a lower value leads to fewer segmenting areas and is faster to compute. An example of the segmentation of the image 1 (a) is shown on
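By way of illustration, this segmentation step can be prototyped as follows. This is a minimal sketch in Python, assuming SLIC superpixels from scikit-image as a stand-in for the cited convex-polygon partitioning of Duan and Lafarge; the file name and parameter values are arbitrary, with n_segments playing the role of the parameter ε.

```python
# Minimal sketch of the segmentation step, using SLIC superpixels from
# scikit-image as a stand-in for the cited Voronoi/convex-polygon method.
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic

image = imread("scene.png")[..., :3] / 255.0  # RGB values in [0, 1]

# n_segments plays the role of the parameter epsilon of the text: a higher
# value yields more (smaller) segmenting areas that follow the image
# structure more closely, at a higher computational cost.
labels = slic(image, n_segments=400, compactness=10.0, start_label=0)
print(f"{labels.max() + 1} segmenting areas")
```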
It is known that the color I(p) of a pixel p is given as a product between its reflectance R(p) and its shading S(p) such that:
I(p)=R(p)*S(p) (1)
As such a shading corresponds to the illumination of this pixel, equation (1) can be further expanded to distinguish between direct illumination Ldirect(p) of this pixel and its indirect illumination Lindirect(p), such that we have:
I(p)=R(p)*(Ldirect(p)+Lindirect(p)) (2)
For any nearby pixels p, q of the same segmenting area, from the above specific properties of the image segmentation, we know that R(p)≅R(q), Lindirect(p)≅Lindirect(q) and that these pixels p, q are mostly shaded by the same illumination source having a color LRGB(p) for instance given in the RGB color space of the image. According to these properties, the 1D color variation along the direction pq of the image space is mainly due to variation of the direct lighting and can therefore be expressed as follows:
ΔI(p,q)=R(p)*ΔLdirect(p,q) (3)
In other words, the value of the color variation between two nearby pixels p and q of the same segmenting area is representative of potential changes in direct illumination between them:
ΔI(p,q)=R(p)*|Ldirect(p)−Ldirect(q)| (4)
The intensity of the common illumination illuminating these two nearby pixels p and q depends on the orientation of the surface of objects at the points P and Q corresponding to these pixels. Equation (4) can then be further rewritten as follows:
ΔI(p,q)=R(p)*|LRGB*(n(p)·L)−LRGB*(n(q)·L)| (5)
ΔI(p,q)=R(p)*|LRGB*(n(p)·L−n(q)·L)| (6)
where n(p) is the unit vector normal to the surface at point P, n(q) is the unit vector normal to the surface at point Q of the scene, and L is the direction of this common illumination having the color LRGB(p), which is the same direction at point P and at point Q.
Then, within each segmenting area cn of the image, different color variations are computed using Equation (6) along different directions pq crossing this segmenting area cn. These different color variations computed for the same segmenting area are representative of this segmenting area.
As illustrated on
Color variation values are computed along these crossing directions for all three RGB components of the colors. All these RGB color variations computed in image space define a cloud of color variations in RGB color space. Each point of this cloud of the RGB color space corresponds for instance to an RGB color variation computed between two pixels of a same segmenting area of the image space, these two pixels belonging to a direction crossing this segmenting area.
Any other method to sample the pixels of a segmenting area between which color variation values are computed can be used instead.
Each segmenting area of the image can then be represented by as many RGB color variation points as there are control pixels defined in this segmenting area.
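The sampling of such control pixels can be sketched as follows; the helper name control_pixels and the sampling parameters are illustrative, not part of the described method.

```python
import numpy as np

def control_pixels(mask, n_directions=4, n_per_direction=2):
    """Sample control pixels of one segmenting area along straight
    directions crossing the area and passing through its centroid.
    `mask` is a boolean image selecting the pixels of the area."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                  # centroid of the area
    radius = max(np.ptp(ys), np.ptp(xs)) / 2.0 + 1.0
    pixels = []
    for k in range(n_directions):
        theta = np.pi * k / n_directions           # crossing direction
        for t in np.linspace(-radius, radius, 2 * n_per_direction + 1):
            y = int(round(cy + t * np.sin(theta)))
            x = int(round(cx + t * np.cos(theta)))
            if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
                pixels.append((y, x))
    return pixels
```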
To simplify the computation of the next step, it is preferred to have only one representative RGB color variation point for each segmenting area. Such a single representative RGB color variation of a segmenting area is computed as a function of all the different RGB color variations computed for this segmenting area. In the present embodiment, for each color channel R, G and B, this single representative color variation is computed as the maximum of the different color variations computed for this color channel. As a variant, this single representative color variation can be computed as the average or the median of the different color variations computed for this color channel.
When computing a single representative color variation for each segmenting area, all single representative RGB color variations define a cloud of representative color variations in RGB color space.
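A hedged sketch of this per-area reduction follows, reusing the control_pixels helper sketched above; variations are taken here between each control pixel and the pixel nearest to the centroid, and np.mean or np.median can be substituted for np.max to obtain the variants mentioned.

```python
def representative_variation(image, mask, reduce=np.max):
    """One representative RGB color variation for a segmenting area:
    per-channel reduction (maximum by default) of the absolute color
    differences between the sampled control pixels and the centroid."""
    pix = control_pixels(mask)
    if not pix:
        return np.zeros(3)
    ys, xs = np.nonzero(mask)
    cy, cx = int(round(ys.mean())), int(round(xs.mean()))
    center = image[cy, cx]  # assumes the centroid pixel lies in the area
    # |Delta I(p, q)| of Equation (6), one triplet per control pixel
    deltas = np.abs(np.array([image[y, x] - center for (y, x) in pix]))
    return reduce(deltas, axis=0)  # one value per R, G, B channel
```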
Within any segmenting area of the image, whether a representative color variation as computed above is due to a reflectance change or to a lighting change cannot be disambiguated. But, if the color variations are mainly due to a single illuminating light for one similar reflectance, the set of these color variations captured in image space will define a 3D line within an opponent color space. An opponent color space is preferred over RGB color space, due to its ability to separate luminance information from chrominance information.
Estimating these 3D lines for an image shaded by different illuminants in the presence of different reflectances requires partitioning the 3D cloud of color variations representative of all segmenting areas into different chrominance clusters. For computing performance reasons, only one representative color variation will be used for each segmenting area in the implementation below, but the same implementation can be used if more than one representative color variation is used for each segmenting area.
For such a partition of the 3D cloud of representative color variations, the RGB components of these representative color variations are converted into components representing the same color variations in an opponent color space separating chrominance from luminance, and then these color variations are grouped according to a similarity of their chrominance components within this opponent color space.
In a preferred implementation, the CIELab space is used as an opponent color space separating chrominance from luminance, and known color space conversion formulas are used for the above conversion.
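For instance, using scikit-image for the conversion (a sketch; all_reps is assumed to be the list of per-area representative RGB variations computed as above, with values in [0, 1]):

```python
import numpy as np
from skimage.color import rgb2lab

# all_reps: list of (3,) representative RGB color variations, one per
# segmenting area (assumed computed as sketched above, values in [0, 1])
reps = np.clip(np.stack(all_reps), 0.0, 1.0)

# rgb2lab operates on image-shaped arrays, hence the dummy spatial axis
reps_lab = rgb2lab(reps[np.newaxis, :, :])[0]  # columns: L, a, b
L, a, b = reps_lab[:, 0], reps_lab[:, 1], reps_lab[:, 2]
```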
Through this chrominance grouping or clustering step, the cloud of representative color variations is divided into different chrominance clusters such that each chrominance cluster groups representative color variations having chromatic similarities. It means that the similarity weight between two representative color variations that is used for this chrominance clustering step should be a decreasing function of a chromatic distance between these two representative color variations. It means also that this clustering step does not take into account the luminance components of the representative color variations, but only their chromatic components, generally named a and b in the Lab color space. A chromatic similarity value Simhue between two points m, n of the cloud of representative color variations is computed from the chromatic components am, an and bm, bn of respectively these two points m and n, using for instance the following similarity function:
where σ is a normalization constant defined according to the opponent color space. In CIELab, we have for instance set σ=4.
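The similarity function itself is not reproduced in this text. A standard choice consistent with the stated requirements (a function of the chromatic components only, decreasing with chromatic distance, with a normalization constant σ) is a Gaussian kernel, sketched below; the exact formula of the described method may differ.

```python
import numpy as np

def sim_hue(am, bm, an, bn, sigma=4.0):
    """Chromatic similarity between two representative color variations
    m and n. A Gaussian kernel over the (a, b) distance is assumed here;
    it is decreasing in chromatic distance and ignores luminance."""
    d2 = (am - an) ** 2 + (bm - bn) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Affinity matrix over all N representative variations, with a and b the
# (N,) arrays of chromatic components obtained above
W = sim_hue(a[:, None], b[:, None], a[None, :], b[None, :])
```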
In this chrominance clustering step, no spatial distance between segmenting areas represented by the representative color variations is involved.
Any other clustering approach using a chromatic similarity measure between two representative color variations is suitable to partition the cloud of color variations into chrominance clusters.
A spectral clustering method is preferably used for such clustering, because such a method can automatically determine the appropriate number of chrominance clusters needed according to the color variations cloud, therefore avoiding the need for a user parameter. The article entitled “Normalized Cuts and Image Segmentation”, published in August 2000 by Jianbo Shi and Jitendra Malik in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8, gives an example of such a spectral clustering method applied to segmenting areas of an image. Alternative clustering methods can be used instead, such as a simpler k-means clustering with a user-defined value of k for the number of clusters.
Using a spectral clustering method applied to representative color variations, the following three sub-steps are for instance implemented:
Then, the first eigenvalues are ordered in increasing order, until the ratio between two subsequent eigenvalues exceeds a threshold τeigen, set to 0.98 in this implementation. Note that the smallest eigenvalue is ignored. The output of this sub-step provides the number k of chrominance clusters that are necessary to sufficiently describe the representative color variations in the cloud.
To assign a chrominance cluster to each representative color variation from the cloud, a matrix U of dimension N×l is built, where l is the number of most representative eigenvectors (as determined above) and N is the number of representative color variations considered within the cloud. This matrix is built such that the eigenvectors are its columns. K-means clustering is then applied on the rows of U using the number of clusters k determined in the previous sub-step.
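A compact sketch of these sub-steps follows, assuming the normalized graph Laplacian of the chromatic affinity matrix W built above; SciPy's eigh and kmeans2 stand in for implementation details the text does not specify.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

# Normalized graph Laplacian of the chromatic affinity matrix W
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
Lap = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt

evals, evecs = eigh(Lap)  # eigenvalues in increasing order

# Eigengap rule of the text: scan the eigenvalues in increasing order,
# ignoring the smallest one, until the ratio of two subsequent
# eigenvalues exceeds tau_eigen = 0.98
tau_eigen = 0.98
k = 1
for i in range(1, len(evals) - 1):
    k = i
    if evals[i] / (evals[i + 1] + 1e-12) > tau_eigen:
        break

# K-means on the rows of U, whose columns are the k most representative
# eigenvectors (one row per representative color variation)
U = evecs[:, 1:k + 1]
_, cluster_of = kmeans2(U, k, minit="++", seed=0)
```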
At the end of this 3rd step, whether or not a spectral clustering method is used, each representative color variation is grouped by similarity of chrominance.
Due to the semantic segmenting method used to segment the image, it has been shown above that all pixels of a segmenting area from which representative color variations are computed have approximately constant reflectance, approximately constant indirect lighting and are mostly illuminated by a single illumination source. As shown above, representative color variations are represented in the Lab color space by luminance variations and by hue variations. The chromatic components of a representative color variation correspond to this hue variation.
To find the hue of illumination specific to each chrominance cluster, it will now be assumed that, within each chrominance cluster, the strongest luminance variations are mainly due to variations of light intensity of a common illumination source. It means that the representative color variations having the highest luminance variation components in a same chrominance cluster are oriented along a direction representative of the hue of the illumination source of the chrominance cluster. Determining this direction makes it possible to separate the influence of the illumination from that of the reflectance properties of the objects in the scene. Then, the chromatic components ai, bi of the intersection of this representative direction with the plane of maximum luminance (in the case of CIELab, L=100) are considered as the hue of this illumination source.
In other words, the representative color variations of a same chrominance cluster that have the highest luminance variation component are assumed to be distributed along a direction representative of the hue of the illumination source of this chrominance cluster, and likely have the lowest chromatic variations, showing a somewhat constant hue along this direction.
To find such a representative direction in each chrominance cluster, a Principal Component Analysis can advantageously be performed on the representative color variations of this chrominance cluster i, so as to determine in the Lab color space a direction exhibiting the strongest variation in the luminance component of these representative color variations. Since the chrominance cluster data are defined in an opponent color space (here CIELab), they are three-dimensional data. As such, the Principal Component Analysis performed on the representative color variations of a chrominance cluster provides three principal components, each representing a vector from the mean of this chrominance cluster towards a direction defined within the opponent color space. The vector corresponding to the strongest variation in the luminance component determines the direction to consider.
Then, the chromatic components ai, bi of the intersection of this direction with the plane of maximum luminance L=100 provide the hue of the illumination source common to all segmenting areas represented by the different representative color variations of this chrominance cluster. Globally, it means that, based on Equation (6) above, from an analysis of a collection of different pairs of nearby pixels within the image, sufficient color information can be obtained to rebuild the variation of each illumination source of this image and therefore to estimate the hue of the different illumination sources illuminating this image.
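This step can be sketched as a minimal implementation of the PCA-based direction search; cluster_hue and lab_points are illustrative names.

```python
import numpy as np

def cluster_hue(lab_points):
    """Hue (a_i, b_i) of the illumination source of one chrominance
    cluster. lab_points: (M, 3) array of the cluster's representative
    color variations in CIELab order (L, a, b)."""
    mean = lab_points.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(lab_points, rowvar=False))
    # Of the three principal components, keep the one along which the
    # luminance component L varies the most
    strength = np.sqrt(np.maximum(evals, 0.0)) * np.abs(evecs[0, :])
    v = evecs[:, np.argmax(strength)]
    # Intersect the line mean + t * v with the maximum-luminance plane
    # L = 100 (assumes v is not orthogonal to the luminance axis)
    t = (100.0 - mean[0]) / v[0]
    return mean[1] + t * v[1], mean[2] + t * v[2]
```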
Application of the Determination of the Chromatic Component ai, bi of the Illumination Sources Si for Each Cluster i of Segmenting Areas of an Image for the Color Grading of this Image:
It is well known to apply data concerning the illumination of an image for the color grading of this image. U.S. Pat. No. 7,688,468 (CANON) discloses for instance a method that predicts final color data viewed under a final illuminant from initial color data viewed under an initial illuminant.
Using the chromatic components ai, bi of an illumination source Si as determined above for a chrominance cluster of segmenting areas of an image, such a color grading of this image can for instance be performed as follows, here in the context of processing in the CIELab color space:
The color graded image that is obtained can be finally converted back to the RGB space for display, using any existing color gamut mapping method if necessary to ensure that RGB values do not exceed the target display gamut.
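The detailed grading steps are not reproduced in this text; the sketch below illustrates one plausible operation consistent with the description, shifting the chromatic components of the pixels of each cluster from the estimated source hue (ai, bi) toward a user-chosen target hue. All names are illustrative.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def regrade(image_rgb, area_labels, cluster_of_area, hues, target_hues):
    """Shift the chromatic components of every pixel by the difference
    between a target hue and the estimated source hue of its cluster.
    hues[i] and target_hues[i] are (a_i, b_i) pairs for cluster i;
    cluster_of_area maps each segmenting area label to its cluster."""
    lab = rgb2lab(image_rgb)
    for area, i in cluster_of_area.items():
        mask = area_labels == area
        lab[mask, 1] += target_hues[i][0] - hues[i][0]
        lab[mask, 2] += target_hues[i][1] - hues[i][1]
    return lab2rgb(lab)  # gamut mapping may still be needed for display
```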
The above embodiments show that the method of determination of the chromatic components of the illumination sources of an image as described above advantageously allows the colors of illumination of an image to be modified without any prior knowledge of the scene geometry or the illumination configuration.
Application of the Determination of the Chromatic Component ai, bi of the Illumination Sources Si for Each Cluster i of Segmenting Areas of an Image for the Automatic White Balancing of this Image:
The above section related to background art mentions existing automatic methods for computing white balance of an image. The automatic determination of the chromatic component ai, bi of the illumination sources Si for each cluster i of segmenting areas of an image as described above can be advantageously used for computing white balance of an image, notably by pushing these chromatic components ai, bi towards the achromatic point [a=0,b=0].
Such a white balance is for instance obtained as follows according to a first embodiment:
In a second embodiment of such an automatic white balancing application, we take the chromatic components ai, bi of the illumination sources Si for each cluster i of segmenting areas of the image as determined above. For each segmenting area, we define a local correction for each crossing direction between control pixels and the centroid pixel, obtained using Equation (6) above. This local correction for each crossing direction takes as parameters:
a. the illumination source Si for this segmenting area, defined by ai, bi;
b. the color variations along this crossing direction, defined by ap, bp.
We estimate the adjustment values δap, δbp performing this correction such that δap=−ai and δbp=−bi only if vec(ap, bp)·vec(ai, bi)>0; otherwise, δap=0 and δbp=0.
Then:
We then get a white-balanced image.
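A minimal sketch of this local correction rule (the function name is illustrative):

```python
def local_correction(ap, bp, ai, bi):
    """Adjustment values of the second embodiment: cancel the illuminant
    chroma (a_i, b_i) along a crossing direction only when the local
    color variation (a_p, b_p) points the same way as the illuminant,
    i.e. when their dot product is positive."""
    if ap * ai + bp * bi > 0:
        return -ai, -bi
    return 0.0, 0.0
```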
In a third embodiment of such an automatic white balancing application, the user can aid the process by clicking on an area of the image that represents a white surface (e.g. a white wall or paper). In this embodiment, the chromatic adjustment values δai and δbi of the first embodiment above are computed according to this constraint. This ensures that a specific white surface is accurately white balanced, and acts as a stronger constraint compared to the first embodiment.
The above embodiments show that the method of determination of the chromatic components of the illumination sources of an image as described above advantageously allows scenes under complex mixed illumination to be white balanced automatically, whereas methods of the prior art require adding scribbles to provide information to the white balancing algorithm.
It should also be noted that the method of determination of the chromatic components of the illumination sources of an image as described above could also be used directly in an Augmented Reality application, to provide an estimation of the illumination of the real scene for accurately lighting the synthetic objects that might be added to the scene.
Globally, the method of determination of the chromatic components of the illumination sources of an image as described above can advantageously be implemented in real time, for instance on mobile devices (e.g. a tablet), to modify or to white-balance photographs on the fly.
Although the illustrative embodiments of the invention have been described herein with reference to the accompanying drawings, it is to be understood that the present invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims. The present invention as claimed therefore includes variations from the particular examples and preferred embodiments described herein, as will be apparent to one of skill in the art.
While some of the specific embodiments may be described and claimed separately, it is understood that the various features of embodiments described and claimed herein may be used in combination.
Number | Date | Country | Kind
16305571.8 | May 2016 | EP | regional