The present disclosure generally relates to systems and methods for multi-tiered generation of a face chart using, for example, machine-learning techniques.
In accordance with one embodiment, a computing device obtains an image depicting a user's face. The computing device identifies one or more regions in the image depicting skin of the user and generates a skin mask. The computing device predicts a skin tone of the user's face depicted in the image and populates the skin mask according to the predicted skin tone. The computing device defines feature points corresponding to facial features on the user's face depicted in the image and extracts pre-defined facial patterns matching facial features depicted in the image. The computing device inserts the extracted pre-defined facial patterns into the skin mask based on the feature points and generates a hair mask identifying one or more regions in the image depicting hair of the user. The computing device extracts a hair region depicted in the image of the user based on the hair mask and inserts the hair region on top of the skin mask to generate a face chart.
Another embodiment is a system that comprises a memory storing instructions and a processor coupled to the memory. The processor is configured by the instructions to obtain an image depicting a user's face. The processor is further configured to identify one or more regions in the image depicting skin of the user and generate a skin mask. The processor is further configured to predict a skin tone of the user's face depicted in the image and populate the skin mask according to the predicted skin tone. The processor is further configured to define feature points corresponding to facial features on the user's face depicted in the image and extract pre-defined facial patterns matching facial features depicted in the image. The processor is further configured to insert the extracted pre-defined facial patterns into the skin mask based on the feature points and generate a hair mask identifying one or more regions in the image depicting hair of the user. The processor is further configured to extract a hair region depicted in the image of the user based on the hair mask and insert the hair region on top of the skin mask to generate a face chart.
Another embodiment is a non-transitory computer-readable storage medium storing instructions to be implemented by a computing device. The computing device comprises a processor, wherein the instructions, when executed by the processor, cause the computing device to obtain an image depicting a user's face. The processor is further configured by the instructions to identify one or more regions in the image depicting skin of the user and generate a skin mask. The processor is further configured by the instructions to predict a skin tone of the user's face depicted in the image and populate the skin mask according to the predicted skin tone. The processor is further configured by the instructions to define feature points corresponding to facial features on the user's face depicted in the image and extract pre-defined facial patterns matching facial features depicted in the image. The processor is further configured by the instructions to insert the extracted pre-defined facial patterns into the skin mask based on the feature points and generate a hair mask identifying one or more regions in the image depicting hair of the user. The processor is further configured by the instructions to extract a hair region depicted in the image of the user based on the hair mask and insert the hair region on top of the skin mask to generate a face chart.
Other systems, methods, features, and advantages of the present disclosure will be apparent to one skilled in the art upon examining the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims.
Various aspects of the disclosure are better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, with emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
The subject disclosure is now described with reference to the drawings, where like reference numerals are used to refer to like elements throughout the following description. Other aspects, advantages, and novel features of the disclosed subject matter will become apparent from the following detailed description and corresponding drawings.
The present disclosure is directed to systems and methods for multi-tiered generation of face charts that capture the characteristics of an individual's facial features more accurately than conventional configurations, thereby facilitating selection and application of the most suitable cosmetic products for the individual. Face charts may be used by makeup artists to design looks using various cosmetics based on the specific characteristics of a user's face. A face chart that accurately captures those characteristics is therefore essential. A system for performing multi-tiered generation of a face chart is described below, followed by a discussion of the operation of the components within the system.
A face chart constructor 104 is executed by a processor of the computing device 102 and includes an image capture module 105, a facial feature analyzer 106, and a layer aggregator 116. The facial feature analyzer 106 generates different layers of the face chart where each layer captures characteristics relating to different aspects of the user's face, as described in more detail below. The facial feature analyzer 106 includes a skin mask module 108, a hair mask module 110, a skin tone predictor 112, and a facial features module 114. The layer aggregator 116 is configured to combine all the layers generated by the facial feature analyzer 106 and generate a final face chart 118 of the user.
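The layered structure above can be sketched as follows. The class and method names mirror the components of the face chart constructor 104 but are hypothetical stand-ins; the internals are placeholders for the machine-learning modules described later, not the actual implementations.

```python
import numpy as np

class FacialFeatureAnalyzer:
    """Generates one layer of the face chart per facial aspect (skin, hair, etc.)."""

    def skin_layer(self, image: np.ndarray) -> np.ndarray:
        # Placeholder: a blank layer the size of the input image.
        return np.zeros_like(image)

    def hair_layer(self, image: np.ndarray) -> np.ndarray:
        # Placeholder for the hair-mask module's output.
        return np.zeros_like(image)

class LayerAggregator:
    """Combines the per-aspect layers into the final face chart."""

    def combine(self, layers: list[np.ndarray]) -> np.ndarray:
        chart = layers[0].copy()
        for layer in layers[1:]:
            # Later layers overwrite earlier ones wherever they are non-empty.
            mask = layer.any(axis=-1)
            chart[mask] = layer[mask]
        return chart

def build_face_chart(image: np.ndarray) -> np.ndarray:
    analyzer = FacialFeatureAnalyzer()
    layers = [analyzer.skin_layer(image), analyzer.hair_layer(image)]
    return LayerAggregator().combine(layers)
```

The key design point the sketch illustrates is that each module produces an independent layer, and the aggregator resolves overlaps by layer order rather than by blending.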
The image capture module 105 is configured to obtain digital images 101 of a user's face. For some embodiments, the image capture module 105 is configured to cause a camera of the computing device 102 to capture an image 101 or a video of the user of the computing device 102.
Referring back to
The images obtained by the image capture module 105 may be encoded in any of a number of formats including, but not limited to, JPEG (Joint Photographic Experts Group) files, TIFF (Tagged Image File Format) files, PNG (Portable Network Graphics) files, GIF (Graphics Interchange Format) files, BMP (bitmap) files or any number of other digital formats. The video may be encoded in formats including, but not limited to, Motion Picture Experts Group (MPEG)-1, MPEG-2, MPEG-4, H.264, Third Generation Partnership Project (3GPP), 3GPP-2, Standard-Definition Video (SD-Video), High-Definition Video (HD-Video), Digital Versatile Disc (DVD) multimedia, Video Compact Disc (VCD) multimedia, High-Definition Digital Versatile Disc (HD-DVD) multimedia, Digital Television Video/High-definition Digital Television (DTV/HDTV) multimedia, Audio Video Interleave (AVI), Digital Video (DV), QuickTime (QT) file, Windows Media Video (WMV), Advanced System Format (ASF), Real Media (RM), Flash Media (FLV), an MPEG Audio Layer III (MP3), an MPEG Audio Layer II (MP2), Waveform Audio Format (WAV), Windows Media Audio (WMA), 360 degree video, 3D scan model, or any number of other digital formats.
With reference to
With reference to
With reference to
With reference to
In the example shown in
The processing device 202 may include a custom made processor, a central processing unit (CPU), or an auxiliary processor among several processors associated with the computing device 102, a semiconductor based microprocessor (in the form of a microchip), a macroprocessor, one or more application specific integrated circuits (ASICs), a plurality of suitably configured digital logic gates, and so forth.
The memory 214 may include one or a combination of volatile memory elements (e.g., random-access memory (RAM) such as DRAM and SRAM) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM). The memory 214 typically comprises a native operating system 216, one or more native applications, emulation systems, or emulated applications for any of a variety of operating systems and/or emulated hardware platforms, emulated operating systems, etc. For example, the applications may include application specific software that may comprise some or all of the components of the computing device 102 displayed in
In accordance with such embodiments, the components are stored in memory 214 and executed by the processing device 202, thereby causing the processing device 202 to perform the operations/functions disclosed herein. For some embodiments, the components in the computing device 102 may be implemented by hardware and/or software.
Input/output interfaces 204 provide interfaces for the input and output of data. For example, where the computing device 102 comprises a personal computer, these components may interface with one or more input/output interfaces 204, which may comprise a keyboard or a mouse, as shown in
In the context of this disclosure, a non-transitory computer-readable medium stores programs for use by or in connection with an instruction execution system, apparatus, or device. More specific examples of a computer-readable medium may include by way of example and without limitation: a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), and a portable compact disc read-only memory (CDROM) (optical).
Reference is made to
Although the flowchart 300 of
At block 310, the computing device 102 obtains an image depicting a user's face. At block 320, the computing device 102 identifies one or more regions in the image depicting skin of the user and generates a skin mask. For some embodiments, the computing device 102 identifies the one or more regions in the image depicting the user's skin and generates the skin mask by executing a machine-learning algorithm based on other images of the user.
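The skin mask of block 320 is generated by a machine-learning model in the disclosure; the following is only an illustrative stand-in that uses a crude RGB color-range heuristic to mark likely skin pixels, showing the shape of the output (a boolean mask over the image).

```python
import numpy as np

def skin_mask(image: np.ndarray) -> np.ndarray:
    """Return a boolean mask of likely skin pixels in an RGB image.

    A trained segmentation model would produce this mask in practice;
    the threshold rule below is a rough placeholder for illustration.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # Very rough skin heuristic: red channel dominant, moderate brightness.
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
```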
At block 330, the computing device 102 predicts a skin tone of the user's face depicted in the image and populates the skin mask according to the predicted skin tone. For some embodiments, the computing device 102 predicts the skin tone of the user's face depicted in the image of the user by executing a machine-learning algorithm based on other images of the user and other individuals.
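Block 330's prediction comes from a model trained on many images; as a minimal sketch, the mean color of the masked skin pixels can stand in for the predicted tone, and populating the mask amounts to filling the masked region of a blank canvas with that color.

```python
import numpy as np

def populate_skin_tone(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill the masked region of a blank canvas with a predicted skin tone.

    The per-channel mean over the skin pixels is an illustrative proxy for
    the model-predicted tone, not the disclosure's actual prediction method.
    """
    tone = image[mask].mean(axis=0).astype(np.uint8)
    canvas = np.zeros_like(image)
    canvas[mask] = tone
    return canvas
```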
At block 340, the computing device 102 defines feature points corresponding to facial features on the user's face depicted in the image. For some embodiments, the computing device 102 defines the feature points corresponding to the facial features on the user's face depicted in the image by utilizing a convolutional neural network.
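The interface of the landmark step can be sketched as below. A convolutional neural network would regress these coordinates from the image; the fixed normalized positions here are hypothetical placeholders that merely show the mapping from normalized landmarks to pixel coordinates.

```python
import numpy as np

# Illustrative normalized feature-point layout (fractions of width, height).
TEMPLATE_POINTS = {
    "left_eye":  (0.35, 0.40),
    "right_eye": (0.65, 0.40),
    "nose":      (0.50, 0.55),
    "mouth":     (0.50, 0.72),
}

def feature_points(image: np.ndarray) -> dict[str, tuple[int, int]]:
    """Map normalized template landmarks to pixel coordinates for this image.

    A CNN-based detector would replace the fixed template in practice.
    """
    h, w = image.shape[:2]
    return {name: (int(x * w), int(y * h))
            for name, (x, y) in TEMPLATE_POINTS.items()}
```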
At block 350, the computing device 102 extracts pre-defined facial patterns matching facial features depicted in the image. For some embodiments, the pre-defined facial patterns may include, but are not limited to, an eye, a mouth, a nose, or an eyebrow. At block 360, the computing device 102 inserts the extracted pre-defined facial patterns into the skin mask based on the feature points.
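Inserting a pre-defined pattern at a feature point (block 360) reduces to pasting a small pattern array into the skin-mask canvas centered on the point. The helper below is a simplified sketch: it assumes the pattern fits entirely inside the canvas, whereas a production version would clip at the edges.

```python
import numpy as np

def insert_pattern(canvas: np.ndarray, pattern: np.ndarray,
                   center: tuple[int, int]) -> None:
    """Paste a pre-defined facial pattern (e.g., an eye sketch) into the
    canvas, centered on a feature point given as (x, y). Mutates canvas."""
    ph, pw = pattern.shape[:2]
    cx, cy = center
    top, left = cy - ph // 2, cx - pw // 2
    canvas[top:top + ph, left:left + pw] = pattern
```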
At block 370, the computing device 102 generates a hair mask identifying one or more regions in the image depicting hair of the user. For some embodiments, the computing device 102 generates the hair mask identifying the one or more regions in the image depicting the user's hair by executing a machine-learning algorithm based on other images of the user.
At block 380, the computing device 102 extracts a hair region depicted in the image of the user based on the hair mask and inserts the hair region on top of the skin mask to generate a face chart. For some embodiments, the computing device 102 inserts the hair region on top of the skin mask to generate the face chart by extracting the hair region depicted in the image of the user and inserting the extracted hair region on top of the skin mask. As an alternative, the computing device 102 inserts the hair region on top of the skin mask to generate the face chart by inserting a sketch drawing of the user's hair on top of the skin mask. For some embodiments, the computing device 102 generates the face chart by inserting a background into the face chart or superimposing the skin mask on the background. For some embodiments, the background is extracted from the image of the user's face. Thereafter, the process in
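The first alternative of block 380 — extracting the hair pixels via the hair mask and laying them over the skin-mask chart — can be sketched as a masked copy, with the hair mask assumed to be given by the earlier step:

```python
import numpy as np

def composite_hair(skin_chart: np.ndarray, image: np.ndarray,
                   hair_mask: np.ndarray) -> np.ndarray:
    """Overlay the user's hair region onto the skin-mask chart.

    Wherever hair_mask is True, the pixel from the original image
    replaces the chart pixel; elsewhere the chart is left unchanged.
    """
    chart = skin_chart.copy()
    chart[hair_mask] = image[hair_mask]
    return chart
```

The sketch-drawing alternative would substitute a stylized hair layer for `image[hair_mask]`; either way, the hair layer sits on top of the skin layer in the final chart.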
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are included herein within the scope of this disclosure and protected by the following claims.
This application claims priority to, and the benefit of, U.S. Provisional Patent Application entitled, “Personalized Face Chart Generator,” having Ser. No. 63/381,204, filed on Oct. 27, 2022, which is incorporated by reference in its entirety.