The present invention relates to a method for simulating hair using variable colorimetry and to a device for implementing said method.
Simulation methods are known today that allow a person to visualize himself with a new hairstyle before the hairstylist actually cuts and/or colors the hair.
The patent application EP 2 124 185 thus describes a method allowing fine detection of the contour of a hairstyle in order to offer a person a simulation of his face with a different hairstyle.
The application WO 9821695 describes a hairstyle simulation method in which a hairstyle type can be chosen from a database by a person, and then geometric transformations are applied to the chosen hairstyle type in order to obtain a hairstyle that is able to be superimposed plausibly on an initial photograph of the person. Using this method, it is likewise possible to modify the color of the hairstyle in order to make it compliant with the color of the initial hairstyle of the person, allowing plausibility to be further increased.
The present application describes a method for simulating hair using variable colorimetry that allows the new color that a person's hair will have after application of a lotion of his choice, according to his present or future hairstyle type, to be simulated in real time, in detail and at reasonable technical cost at a consumer point of sale.
According to one aspect of the invention, the invention relates to a method for simulating hair using variable colorimetry comprising:
From the prior creation of a database comprising colorimetric conversion matrices associated with various hairstyle images of varying hairstyle type and hair type, it is possible to create an avatar that superimposes a realistic reconstructed hairstyle image on the face of the person wanting to simulate a change of hairstyle, a reference hairstyle image and a colorimetric conversion matrix being associated with the reconstructed hairstyle image. It is thus possible to apply to the reconstructed hairstyle image the transformation associated with a hair lotion of his choice, making it possible to implement a real time simulation at the point of sale.
According to one variant, each hairstyle exhibits a given hairstyle type and hair type, and, for each hairstyle image, said associated colorimetric conversion matrix allows the conversion of the colorimetric value of each pixel of said hairstyle image in relation to the colorimetric value of the corresponding pixel of a reference hairstyle image in the same hairstyle type.
Advantageously, said conversion matrix is a square matrix and the colorimetric value of a pixel is its component according to the colors red, green and blue (RGB component).
According to one variant, the colorimetric conversion matrix per lotion allows the conversion of the colorimetric value of each pixel of a reference hairstyle image into the colorimetric value of the corresponding pixel of the same reference hairstyle image after application of said lotion to given hair of the hair type from which the reference hairstyle comes.
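The application of such a conversion matrix to an image can be sketched as follows. This is an illustrative Python sketch using numpy; the single shared 3×3 matrix, the image size and the example values are assumptions for illustration, not details taken from the invention:

```python
import numpy as np

def apply_conversion_matrix(image, matrix):
    """Apply a 3x3 colorimetric conversion matrix to every pixel.

    image  : (H, W, 3) float array of RGB values in [0, 1]
    matrix : (3, 3) matrix mapping reference RGB to converted RGB
    """
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)      # flatten to (H*W, 3)
    converted = pixels @ matrix.T      # apply the matrix per pixel
    return np.clip(converted, 0.0, 1.0).reshape(h, w, 3)

# Illustrative reference patch and a matrix that darkens red and boosts blue
reference = np.full((2, 2, 3), 0.5)
m = np.array([[0.8, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.2]])
result = apply_conversion_matrix(reference, m)
```

In practice the invention associates one such matrix (or one per zone, as described later) with each hairstyle image or lotion; the sketch shows only the pixel-wise conversion itself.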
The step of creating the avatar may be automatic or semiautomatic. According to one variant, the reconstructed hairstyle image is obtained by applying the conversion matrix associated with the lifelike hairstyle image to a reference hairstyle image from the database in the same type as the lifelike hairstyle image. The reconstructed hairstyle image is thus faithful in terms of type and color to the hairstyle image of the person. According to another variant, the reconstructed hairstyle image is obtained by applying the conversion matrix associated with the lifelike hairstyle image to a reference hairstyle image from said database in a different type than that of said lifelike hairstyle image. This variant can be implemented when the person wishes to visualize himself with a hairstyle of different type, for example.
According to one variant, the method according to the first aspect moreover comprises, for a given reference hairstyle, the prior recording, in said first database, of a given number of reference hairstyle images according to various shot parameters and the recording of one or more conversion matrices, each linked to one of said parameters and allowing conversion of the colorimetric values of each of the pixels of a reference image associated with said parameter on the basis of those of each of the pixels of a reference image defined as principal reference image.
It is thus possible to take account of the shot parameters of the image capture for the person at the point of sale and to possibly correct shot parameters of the reference images.
According to one variant, the method according to the first aspect moreover comprises, at the time of the prior recording of said first database and/or of said second database, a step of simplification of the colorimetric conversion matrices comprising the definition of colorimetric conversion matrices by zones of the image. This simplification step allows the finesse of the simulation to be suited to the calculation capabilities available at the point of sale and thus allows the best compromise to be found over the quality of the simulation.
According to one variant, the colorimetric conversion matrices of said hairstyle images each form part of a set of metadata associated with each of said hairstyle images. Advantageously, the metadata, multi-format, will moreover be able to comprise attributes of the hairstyle image, including shot parameters, for example. The recording in the form of metadata allows the size of the databases to be reduced.
According to a second aspect of the invention, the invention relates to a device for simulating hair using variable colorimetry comprising:
According to one variant, said storage means are arranged in a server situated at a distance from the calculation unit, the device moreover comprising a remote connection between said server and the calculation unit. Thus, it is not necessary to have the storage means at the point of sale and, in the event of a plurality of points of sale, the databases can be updated in a centralized manner for all of the points of sale.
Other advantages and features of the invention will emerge upon reading the description, which is illustrated by the following figures:
In a first step S10, high-resolution digital images of various hairstyle types (A, B) belonging to various persons (100A, 100B) are acquired. Hairstyle type is generally understood to mean the arrangement of the hair (loose hair, short hair, hair with bangs, hair put up in a bun, plaited hair, etc.) and possibly its texture (stiffness, suppleness, shine, curls, etc.). Each image constitutes a reference image for the hairstyle type. These images are taken for a given color of the hair, for example a light color, for example a blonde type, which will constitute the reference color. According to one variant, the reference images of the various hairstyle types are taken under normalized shot conditions, these conditions being notably lighting, shot distance, shot angle, etc. According to another variant, for a given hairstyle type, several images will be able to be taken under various shot conditions. It will thus be possible to define a principal reference image for the hairstyle type among these reference images.
In a step S11, a set of data comprising, besides the colorimetric value of each pixel of the image, parameters linked to the shot conditions and descriptive parameters for the image (for example the resolution thereof) is extracted from these images, all of these data forming attributes of the image that are recorded in the form of metadata associated with each image, these metadata being able to be in multimedia form. For example, the metadata may comprise texts, images, texture forms, photographs, etc.
In the event of several reference images being taken under various shot conditions for a given hairstyle type, the metadata of the various reference images will moreover be able to comprise one or more conversion matrices linked to each of the shot parameters, and allowing conversion of the colorimetric values of each of the pixels of the reference image on the basis of those of each of the pixels of the image defined as principal reference image. Thus, by way of example, in the example from
In a step S12, high-resolution digital images of persons 111A, 112A, 113A, 114A etc. having a hairstyle in a given hairstyle type, for example A, with hair of different type, for example, various natural colors, are produced. This step is reproduced for each of the hairstyle types. For each hairstyle type, it will be possible to record M images that each correspond to a color, for example, of the order of ten or so or one hundred or so natural colors according to the desired objective in the exemplary embodiment in question, which are respectively indicated ChA1, ChA2, etc. for the hairstyle type A and ChB1, ChB2, etc. for the hairstyle type B in
A step S13 then allows, for each given hairstyle type and each given natural color, extraction of a colorimetric conversion matrix corresponding to a given natural color. The conversion matrix allows the conversion of the colorimetric value of each pixel of the image of the hairstyle (for example ChA1) on the basis of the colorimetric value of the corresponding pixel of the image of the reference hairstyle of the same hairstyle type (or principal reference hairstyle), for example ChA0. It is possible, in some cases, for geometric transformation to be necessary between two hairstyle images of the same type, the images being acquired from two different persons. In this case, the correspondence of the pixels from one image to the other is understood following application of said geometric transformation. Thus, each image also has associated metadata that may comprise, besides the colorimetric value of each pixel of the image, the colorimetric conversion matrix and possibly parameters linked to the shot conditions and descriptive parameters of the image.
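One possible way to extract such a conversion matrix from a pair of corresponding images is a least-squares fit over matching pixels. This is a hypothetical Python sketch (the invention does not prescribe a particular estimation method); it assumes the pixel correspondence has already been established, with any required geometric transformation already applied:

```python
import numpy as np

def estimate_conversion_matrix(reference, target):
    """Least-squares estimate of the 3x3 matrix M such that
    target_pixel ≈ M @ reference_pixel for corresponding pixels.

    reference, target : (N, 3) arrays of corresponding RGB values.
    """
    # Solve reference @ X = target in the least-squares sense; M = X.T
    x, *_ = np.linalg.lstsq(reference, target, rcond=None)
    return x.T

# Synthetic check: recover a known matrix from generated pixel pairs
rng = np.random.default_rng(0)
ref = rng.random((500, 3))
true_m = np.array([[0.9, 0.05, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.05, 0.0, 1.1]])
tgt = ref @ true_m.T
est = estimate_conversion_matrix(ref, tgt)
```

With real photographs the fit would be over noisy data, so the recovered matrix is only an approximation; the synthetic data here merely illustrate the mechanics.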
At the conclusion of this step, a first database 101 is obtained that stores a set of metadata that are each associated with a hairstyle image and notably comprise the associated colorimetric conversion matrix/matrices.
Advantageously, it is possible to simplify the colorimetric conversion matrices associated with the hair types and/or the conversion matrices corresponding to the various shot parameters, specifically with the aim of subsequently simplifying the calculations for simulating the hair of a person, as will be described later. It is thus possible to reconstruct all the colorimetric variants of a hairstyle/of hair solely on the basis of its reference image, and by applying a model for recomposing the lights, colors and geometries of the shots across the colorimetric conversion matrices.
Advantageously, the database will be able to be enriched as a function of time with new images corresponding to associated new hairstyle types and/or natural colors, with, each time, an update of the metadata associated with the various images, and notably an update of the colorimetric conversion matrices.
In a first step S20, various hair lotions are applied to various persons having varied hairstyle types and natural colors and high-resolution digital images of the hairstyles are acquired following application of the lotion. By way of example, in the example in
In a step S21, for each image resulting from the application of the lotion, metadata associated with the images are extracted and then colorimetric conversion matrices are calculated and recorded (step S22) per lotion that allow the conversion of the colorimetric value of each pixel of the image of the hairstyle following application of the lotion (for example 121A) on the basis of the colorimetric value of the corresponding pixel of the reference hairstyle in the same hairstyle type (or principal reference hairstyle), for example ChA0. This calculation and recording step can be carried out for various hair lotions applied to various hairstyle types using a method equivalent to that described in step S20 above, as shown in
Advantageously, it is possible, during a step S23, to apply a statistical simplification of the transformation matrices obtained, allowing definition, for each hair lotion, of a single simplified transformation matrix, or a limited number of transformation matrices. By way of example, it will be possible to state that the transformation matrix associated with a given hair lotion is similar to a set of hairstyle types and/or a set of hair colors, allowing limitation, per lotion, of the number of transformation matrices.
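Such a statistical simplification might, by way of illustration, be implemented as a greedy grouping of matrices that are mutually close, each group then sharing a single representative matrix. This is a hypothetical Python sketch; the Frobenius-distance criterion and the tolerance value are assumptions, not specified by the invention:

```python
import numpy as np

def group_similar_matrices(matrices, tolerance=0.05):
    """Greedily group matrices: any matrix within Frobenius distance
    `tolerance` of an existing representative shares that representative.

    Returns (representatives, assignment) where assignment[i] is the
    index of the representative used for matrices[i].
    """
    representatives = []
    assignment = []
    for m in matrices:
        for i, rep in enumerate(representatives):
            if np.linalg.norm(m - rep) <= tolerance:
                assignment.append(i)
                break
        else:
            representatives.append(m)
            assignment.append(len(representatives) - 1)
    return representatives, assignment

base = np.eye(3)
mats = [base, base + 0.01, base * 2.0]   # two near-identical, one distinct
reps, assign = group_similar_matrices(mats, tolerance=0.1)
```

The effect is the one described above: per lotion, a large set of per-hairstyle or per-color matrices collapses to a limited number of representative transformation matrices.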
The second database 102 recording the associated colorimetric conversion matrix/matrices for each hair lotion is then obtained.
Advantageously, in the step of extracting the colorimetric conversion matrices associated with the hair types (S13) and/or in the step of extracting the colorimetric conversion matrices applied to the hair lotions (S22), simplifications will be able to be provided for the various matrices. Strictly speaking, the colorimetric conversion matrices are square matrices that are also called “look up tables” or LUT, making it possible to provide the RGB (“red green blue”) component of each pixel of the final image on the basis of the RGB component of the corresponding pixel of the reference image. Each pixel therefore has an associated colorimetric conversion matrix. The simplifications aim to simplify the matrices by defining zones of the image, or masks, in which the colorimetric features of the image are sufficiently homogeneous to be able to define identical colorimetric conversion matrices for all of the pixels of the zone. The simplifications likewise aim to simplify the conversion matrices by defining zones that are sufficiently homogeneous to be able to define diagonal conversion matrices, or matrices that are simplified in relation to the initial matrices. The masks defined in this manner are part of the attributes of the image and the metadata associated with the images will be able to include successions of pairs (zone/LUT) that will be able to be recorded in the databases.
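The zone/LUT pairs described above can be sketched as a list of (mask, matrix) pairs, each mask selecting a homogeneous zone whose pixels all share one simplified (here diagonal) conversion matrix. This is an illustrative Python sketch; the zone shapes and matrix values are hypothetical:

```python
import numpy as np

def apply_zone_luts(image, zone_luts):
    """Apply per-zone simplified conversion matrices.

    image     : (H, W, 3) float RGB array in [0, 1]
    zone_luts : list of (mask, matrix) pairs, where mask is an (H, W)
                boolean array and matrix a 3x3 (often diagonal) LUT.
    """
    out = image.copy()
    for mask, matrix in zone_luts:
        out[mask] = np.clip(image[mask] @ matrix.T, 0.0, 1.0)
    return out

img = np.full((2, 2, 3), 0.5)
top = np.array([[True, True], [False, False]])
zones = [(top, np.diag([1.2, 1.0, 1.0])),    # warm the top zone
         (~top, np.diag([1.0, 1.0, 1.2]))]   # cool the bottom zone
result = apply_zone_luts(img, zones)
```

Recording only the succession of (zone, LUT) pairs, rather than one full matrix per pixel, is what allows the database size and the point-of-sale calculation load to be kept small.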
The procedures for choosing and selecting a certain number of zones (step S31) may be very diverse. Image editing professionals currently use zone definition tools, either using geometric contour selection means or using methods linked to color histograms; they then produce the transformations that they desire on these zones. By way of example, the patent application WO98/21695 cited previously uses a method for selecting zones by means of histograms.
Within one and the same chosen zone, the simplification of the LUT involves firstly making linearity assumptions in the logarithmic color space in one dimension (that is to say, independently of the RGB color channels) in order to apply a calculation that is very commonly performed in editing tools, of the type Rout = (GAIN*Rin + LIFT)^GAMMA, where the indicated parameters have the following meanings: LIFT adjusts dark colors, GAIN adjusts light colors and GAMMA adjusts intermediate tones. The parameters LIFT, GAIN and GAMMA of said linear model are calculated per RGB color channel using a statistical method minimizing a dispersion cost (for example the variance). Alternatively, it is also possible to provide even finer simplification by looking, still within the chosen zone, for the matrix parameters that bring in the influence of neighboring channels ("cross talk") in order better to take account of the second-order effects between channels that the linear model cannot capture. This may prove to be of particular benefit if there are large shade contrasts on certain hairstyles, the eye being sensitive to the effects of relative color induced between visually close zones of substantially different colors.
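The per-channel grading calculation can be sketched directly from the formula above. This is a minimal Python sketch; the clamping to [0, 1] before exponentiation is an assumption made so that the power is always defined on normalized values:

```python
def lift_gain_gamma(value, lift, gain, gamma):
    """Per-channel grade of the type out = (gain * in + lift) ** gamma.

    value is a normalised channel intensity in [0, 1]; the intermediate
    result is clamped to [0, 1] before the power is applied.
    """
    graded = gain * value + lift
    graded = min(max(graded, 0.0), 1.0)
    return graded ** gamma

# gain brightens/darkens linearly; gamma reshapes the mid-tones
out = lift_gain_gamma(0.25, lift=0.0, gain=2.0, gamma=1.0)
mid = lift_gain_gamma(0.5, lift=0.0, gain=1.0, gamma=2.0)
```

Applied per RGB channel with channel-specific parameters, this reproduces the one-dimensional simplified LUT described above; the cross-talk refinement would add off-channel terms that this sketch deliberately omits.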
Secondly, once the zones have been selected and the simplified LUTs obtained, a statistical measurement calculation of the dispersion over the whole of the image between reality and the result from the simplified LUTs is performed (step S33). The result of this calculation is called the “level of precision” of this first selection of zones.
The process above is then repeated (step S34) by selecting a different number of zones (either smaller or larger) and by comparing the levels of precision found at the end of the process. Thus, by virtue of successive iterations, the operator will end up by finding (step S35) the features best suited to the hairstyle type for which the simplification has been made and to the model of use at the point of sale with the corresponding required level of precision. It is these features (associated zones Zi/conversion matrices set) that will then be recorded in the database of colorimetric metadata 101 that the aim is to set up (step S36).
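The precision-level measurement and the iteration over zone counts described in steps S33 to S35 might be sketched as follows. This is a hypothetical Python sketch: RMS error is assumed as the dispersion measure, and the candidate simplified images are supplied ready-made rather than computed from actual zone selections:

```python
import numpy as np

def precision_level(real, simplified):
    """Dispersion between the real image and the simplified-LUT result,
    measured here as root-mean-square error over all pixels/channels."""
    return float(np.sqrt(np.mean((real - simplified) ** 2)))

def pick_zone_count(real, candidates, required_precision):
    """candidates : dict {zone_count: image produced with that many zones}.
    Return the smallest zone count meeting the required precision."""
    for n in sorted(candidates):
        if precision_level(real, candidates[n]) <= required_precision:
            return n
    return max(candidates)  # fall back to the finest segmentation

# Synthetic candidates: more zones -> smaller residual error
real = np.full((4, 4, 3), 0.5)
candidates = {1: real + 0.2, 4: real + 0.05, 16: real + 0.01}
best = pick_zone_count(real, candidates, required_precision=0.06)
```

Preferring the smallest zone count that meets the required precision mirrors the compromise sought at the point of sale between simulation quality and available calculation capability.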
A similar simplification process can be applied to the conversion matrices for creating the colorimetric metadata associated with the hair lotions in the second database 102.
For simplifying the conversion and/or transformation matrices, variants of the process that is described above are possible.
By way of example, an editing professional can work on the image of the reference hairstyle in successive operations using the tool he has available (for example Photoshop®), with the aim of using these operations to get as close as possible to the target image of the hairstyle to be reproduced by means of colorimetric transformations on the image of the reference hairstyle. The operations with all the features thereof, notably the chosen zones and the associated LIFT, GAIN, GAMMA coefficients, are then put on a list of metadata, a list (“color decision list”) describing a series of the colorimetric metadata that is then exported to the database associated with the target image. The level of precision obtained by the manual operations thus performed by the professional is likewise calculated.
According to another example, mathematical tools for mass classification of data are used, such as those used for evaluations of credit risks or financial markets, in order to perform all of the operations above, including the successive reiterations, more or less automatically. The zones are then chosen on the basis of a geometric or colorimetric algorithm, systematically and according to certain criteria, for example criteria of geometric shapes with an increasing surface area or criteria of choice of histograms for shades of gray (cf. patent application WO98/21695 cited previously). Furthermore, a classification algorithm uses the "level of precision" criterion and/or any other criterion deemed relevant, for example the number of electronic operations necessary for processing at the point of sale. This allows identification of the best compromise between number of zones and sought-after precision and subsequent recording thereof in the database.
In a first step S40, a digital image 400 of the person is acquired. This image can be recorded using lightweight technical means, which are accessible at the point of sale, consumer or semi-professional (digital camera, data processing tools, smartphones or tablets), and is not necessarily a high-resolution image.
In a step S41, the hair 401 and the face 402 without the hair are cropped from the image 400. The hair 401 can be extracted using known means, for example those described in the patent application EP2124185 cited previously.
In a step S42, the first database 101 is searched for the hairstyle closest to the photographed one, both in terms of hairstyle type and in terms of color. This step may comprise processing of the initial image 400 of the hairstyle, by applying a pre-established model associated with an editing tool in order to characterize said hairstyle a first time approximately with a few simple parameters predefined by the system (shapes, principal colors, identified textures). It is then possible to begin automatically extracting, from said first database 101, hairstyles and hair that satisfy the initial simple approximate criteria found previously, by means of automatic comparison of similarity between the features of the hairstyle/hair photographed and those extracted, and then to display a few images that respond to optimization of this comparison on a screen. Manual selection of the hairstyle/hair for which the user perceives the similarity as being the best can then be carried out if necessary.
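The automatic similarity comparison in this step might, by way of illustration, rank database entries by distance between feature vectors summarizing the simple parameters mentioned (shapes, principal colors, textures). The following Python sketch reuses the source's image labels (ChA1, ChA2, ChB1), but the feature vectors and the Euclidean-distance criterion are assumptions for illustration only:

```python
import numpy as np

def top_matches(query, database, k=3):
    """Rank database entries by similarity to the query feature vector.

    query    : (d,) feature vector (e.g. principal colors, shape params)
    database : dict {image_name: (d,) feature vector}
    Returns the k closest image names, nearest first.
    """
    names = list(database)
    dists = [np.linalg.norm(query - database[n]) for n in names]
    order = np.argsort(dists)
    return [names[i] for i in order[:k]]

db = {"ChA1": np.array([0.9, 0.1, 0.1]),
      "ChA2": np.array([0.5, 0.4, 0.3]),
      "ChB1": np.array([0.2, 0.2, 0.8])}
best = top_matches(np.array([0.85, 0.15, 0.1]), db, k=2)
```

The few best-ranked images would then be displayed on screen, with manual selection by the user as a final step, exactly as described above.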
In step S43, the metadata (zoning masks, textures, colors, shapes, etc.) associated with the image of the hairstyle/hair determined in the preceding step are then extracted from said first database. It is then possible (step S44) to reconstruct a hairstyle from the reference image corresponding to the hairstyle type identified for the person and by applying conversion tables (“LUT”) or pairs (masks/LUT) that are present in said metadata associated with the image of the hairstyle selected from the database. This hairstyle is associated with the face 402 previously recorded for the person in order to form the reconstructed image 410 (step S45).
According to one variant, a piece of equipment and a software application provide the person with HMI (human-machine interface) tools that vary a finite number of characterization parameters for the hairstyle/hair of said person, on the basis of his initial reconstructed avatar. This involves an independent module that can supplement the composition/calibration tools that can currently be found on the market, for example. This module can then either be sold as a supplement to these tools (such as Photoshop®), or sold separately to interface with these applications as a linked module ("plug-in").
According to one variant, it will be possible to simulate the depiction of the hairstyle/hair in various light environments beyond the reference environment of the database, as supplementary elements for assessing the effect of the chosen lotion.
Although described using a certain number of detailed exemplary embodiments, the method according to the invention comprises various variants, modifications and improvements that will be obvious to a person skilled in the art, on the understanding that these various variants, modifications and improvements are part of the scope of the invention, as defined by the claims that follow.
Number | Date | Country | Kind
---|---|---|---
1159396 | Oct 2011 | FR | national

Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/EP2012/070689 | 10/18/2012 | WO | 00 | 4/18/2014