The present application relates generally to systems and methods for determining the apparent age of a person's skin. More specifically, the present application relates to the use of image processing techniques and one or more convolutional neural networks to more accurately determine the age of a consumer's skin.
Skin is the first line of defense against environmental insults that would otherwise damage sensitive underlying tissue and organs. Additionally, skin plays a key role in the physical appearance of a person. Generally, most people desire younger, healthy looking skin. And to some, the tell-tale signs of skin aging such as thinning skin, wrinkles, and age spots are an undesirable reminder of the disappearance of youth. As a result, treating the signs of skin aging has become a booming business in youth-conscious societies. Treatments range from cosmetic creams and moisturizers to various forms of cosmetic surgery.
While a wide variety of cosmetic skin care products are marketed for treating skin conditions, it is not uncommon for a consumer to have difficulty determining which skin care product they should use. For example, someone with skin that appears older than their chronological age may require a different product or regimen compared to someone with more youthful looking skin. Thus, it would be desirable to accurately determine the apparent age of a person's skin.
Numerous attempts have been made to determine a person's apparent skin age by analyzing an image of the person (e.g., a “selfie”) using a computer model/algorithm. The results provided by the computer model can then be used to provide a consumer with a skin profile (e.g., skin age, moisture level, or oiliness) and/or a product recommendation. Past attempts at modeling skin age have relied on facial macro features (eyes, ears, nose, mouth, etc.) as a primary factor driving the computer model/prediction. However, macro-feature based systems may not adequately utilize other skin appearance cues (e.g., micro features such as fine lines, wrinkles, and pigmentation conditions) that drive age perception for a consumer, which can lead to a poor prediction of apparent skin age.
Other past attempts to model skin age and/or skin conditions utilized cumbersome equipment or techniques (e.g., stationary cameras, microscopes, cross-polarized light, specular reflectance, and/or spatial frequency analysis). Thus, it would be desirable to provide consumers with a convenient-to-use and/or mobile system that analyzes skin so that the consumer can receive product and/or skin care regimen recommendations.
Accordingly, there is still a need for an improved method of conveniently determining the apparent age of a person's skin, which can then be used to help provide a customized skin care product or regimen recommendation.
Disclosed herein are systems and methods for determining an apparent skin age of a person and providing customized skin care product recommendations to a user. The systems and methods utilize a computing device to process an image depicting the person's face and then analyze the processed image using a convolutional neural network (“CNN”). During processing, the face of the person is identified in the image and facial macro features are masked. Determining the apparent skin age may include identifying at least one pixel that is indicative of skin age and utilizing the at least one pixel to provide the apparent skin age. Based on the analysis by the CNN and, optionally, other data provided by a user, the system can determine an apparent skin age of the person and/or provide a skin care product or skin care regimen for the person.
A variety of systems and methods have been used in the cosmetics industry to provide customized product recommendations to consumers. For example, some well-known systems use a macro feature-based analysis in which one or more macro features commonly visible in a photograph of a person's face (e.g., eyes, ears, nose, mouth, and/or hair) are detected in a captured image such as a digital photograph or “selfie” and compared to a predefined definition. However, macro-feature based analysis systems may not provide a suitably accurate indication of apparent skin age. Conventional micro feature based systems can employ cumbersome equipment or techniques, which may not be suitable for use by the average consumer.
It has now been discovered that masking facial macro-features and analyzing facial micro-features with a convolutional neural network (“CNN”) can provide a suitably accurate determination of a person's apparent skin age. The CNN based image analysis system can be configured to use relatively little image pre-processing, which reduces the dependence of the system on prior knowledge and predetermined definitions and reduces the computer memory and/or processing power needed to analyze an image. Consequently, the present system demonstrates improved generalization compared to conventional macro-feature-based image analysis systems, which may lead to better skin care product or regimen recommendations for a consumer who uses the system.
“About,” as used herein, modifies a particular value by referring to a range equal to the particular value, plus or minus twenty percent (+/−20%) or less (e.g., less than 15%, 10%, or even less than 5%).
“Apparent skin age” means the age of a person's skin calculated by the system herein, based on a captured image.
“Convolutional neural network” is a type of feed-forward artificial neural network where the individual neurons are tiled in such a way that they respond to overlapping regions in the visual field.
“Coupled,” when referring to various components of the system herein, means that the components are in electrical, electronic, and/or mechanical communication with one another.
“Disposed” means an element is positioned in a particular place relative to another element.
“Image capture device” means a device such as a digital camera capable of capturing an image of a person.
“Joined” means configurations whereby an element is directly secured to another element by affixing the element directly to the other element, and configurations whereby an element is indirectly secured to another element by affixing the element to intermediate member(s) that in turn are affixed to the other element.
“Macro features” are relatively large bodily features found on or near the face of a human. Macro features include, without limitation, face shape, ears, eyes, mouth, nose, hair, and eyebrows.
“Masking” refers to the process of digitally replacing at least some of the pixels disposed in and/or proximate to a macro feature in an image with pixels that have an RGB value closer to or the same as the RGB values of pixels disposed in a region of interest.
“Micro features” are relatively small features commonly associated with aging skin and/or skin disorders found on the face of a human. Micro features include, without limitation, fine lines, wrinkles, dry skin features (e.g., skin flakes), and pigmentation disorders (e.g., hyperpigmentation conditions). Micro features do not include macro features.
“Person” means a human being.
“Region of interest” or “RoI” means a specifically bounded portion of skin in an image or image segment where analysis by a CNN is desired to provide an apparent skin age. Some nonlimiting examples of a region of interest include a portion of an image depicting the forehead, cheek, nasolabial fold, under-eye area, or chin in which the macro features have been masked.
“Segmenting” refers to dividing an image into two or more discrete zones for analysis.
“Target skin age” means a skin age that is a predetermined number of years different from the apparent skin age.
“User” herein refers to a person who uses at least the features provided herein, including, for example, a device user, a product user, a system user, and the like.
The systems and methods herein utilize a multi-step (e.g., 2, 3, 4, or more steps) approach to determine the apparent skin age of a person from an image of that person. By using a multi-step process, rather than a single-step process in which the CNN processes and analyzes a full-face image, the CNN can focus on the features that drive age perception (e.g., micro features), which reduces the computing power needed to analyze the image and reduces the bias that macro features may introduce into the system.
In a first step, processing logic stored in a memory component of the system causes the system to perform one or more (e.g., all) of the following: identify a face in the image for analysis, normalize the image, mask one or more (e.g., all) facial macro-features on the identified face, and segment the image for analysis. The processing steps may be performed in any order, as desired. The processed image is provided to a convolutional neural network as one or more input variants for analysis. The results of the CNN analysis are used to provide an apparent skin age of each segment and/or an overall skin age for the entire face.
The mobile computing device 102 may be a mobile telephone, a tablet, a laptop, a personal digital assistant and/or other computing device configured for capturing, storing, and/or transferring an image such as a digital photograph. Accordingly, the mobile computing device 102 may include an image capture device 103 such as a digital camera and/or may be configured to receive images from other devices. The mobile computing device 102 may include a memory component 140a, which stores image capture logic 144a and interface logic 144b. The memory component 140a may include random access memory (such as SRAM, DRAM, etc.), read only memory (ROM), registers, and/or other forms of computing storage hardware. The image capture logic 144a and the interface logic 144b may include software components, hardware circuitry, firmware, and/or other computing infrastructure. The image capture logic 144a may facilitate capturing, storing, preprocessing, analyzing, transferring, and/or performing other functions on a digital image of a user. The interface logic 144b may be configured for providing one or more user interfaces to the user, which may include questions, options, and the like. The mobile computing device 102 may also be configured for communicating with other computing devices via the network 100.
The remote computing device 104 may also be coupled to the network 100 and may be configured as a server (or plurality of servers), personal computer, mobile computer, and/or other computing device configured for creating, storing, and/or training a convolutional neural network capable of determining the skin age of a user by locating and analyzing skin features that contribute to skin age in a captured image of the user's face. For example, the CNN may be stored as logic 144c and 144d in the memory component 140b of a remote computing device 104. Commonly perceived skin flaws such as fine lines, wrinkles, dark (age) spots, uneven skin tone, blotchiness, enlarged pores, redness, yellowness, combinations of these and the like may all be identified by the trained CNN as contributing to the skin age of the user.
The remote computing device 104 may include a memory component 140b that stores training logic 144c, analyzing logic 144d, and/or processing logic 144e. The memory component 140b may include random access memory (such as SRAM, DRAM, etc.), read only memory (ROM), registers, and/or other forms of computing storage hardware. The training logic 144c, analyzing logic 144d, and/or processing logic 144e may include software components, hardware circuitry, firmware, and/or other computing infrastructure. Training logic 144c facilitates creation, training, and/or operation of the CNN. Processing logic 144e causes the image received from the mobile computing device 102 (or other computing device) to be processed for analysis by the analyzing logic 144d. Image processing may include macro feature identification, masking, segmentation, and/or other image alteration processes, which are described in more detail below. Analyzing logic 144d causes the remote computing device 104 to analyze the processed image to provide an apparent skin age, product recommendation, etc.
In some instances, a training computing device 108 may be coupled to the network 100 to facilitate training of the CNN. For example, a trainer may provide one or more digital images of a face or skin to the CNN via the training computing device 108. The trainer may also provide information and other instructions (e.g., actual age) to inform the CNN which assessments are correct and which assessments are not correct. Based on the input from the trainer, the CNN may automatically adapt, as described in more detail below.
The system 10 may also include a kiosk computing device 106, which may operate similarly to the mobile computing device 102, but may also be able to dispense one or more products and/or receive payment in the form of cash or electronic transactions. Of course, it is to be appreciated that a mobile computing device 102, which also provides payment and/or product dispensing, is contemplated herein. In some instances, the kiosk computing device 106 and/or mobile computing device 102 may also be configured to facilitate training of the CNN. Thus, the hardware and software depicted and/or described for the mobile computing device 102 and the remote computing device 104 may be included in the kiosk computing device 106, the training computing device 108, and/or other devices. Similarly, the hardware and software depicted and/or described for the remote computing device 2104 in
It should also be understood that while the remote computing device 104 is depicted in
In a first step of the image analysis process herein, the present system receives an image containing at least one face of a person and prepares the image for analysis by the CNN. The image may be received from any suitable source, such as, for example, a smartphone comprising a digital camera. It may be desirable to use a camera capable of producing at least a one megapixel image and electronically transferring the image to one or more computing devices that can access suitable image processing logic and/or image analyzing logic.
Once the image is received, the processing logic identifies the portion(s) of the image that contain a human face. The processing logic can be configured to detect the human face(s) present in the image using any suitable technique known in the art, such as, for example, color and/or color contrast techniques, removal of monochrome background features, edge-based techniques that use geometric models or Hausdorff distance, weak cascade techniques, or a combination of these. In some instances, it may be particularly desirable to use a Viola-Jones type of weak cascade technique, as described by Paul Viola and Michael Jones in the International Journal of Computer Vision, 57(2), 137-154, 2004.
In some instances, an image received by the present system may contain more than one face, but a user may not want to analyze all of the faces in the image. For example, the user may only want to analyze the face of the person seeking advice related to a skin care treatment and/or product. Thus, the present system may be configured to select only the desired image(s) for analysis. For example, the processing logic may select the dominant face for analysis based on the relative position of the face in the image (e.g., center), the relative size of face (e.g., largest “rectangle”), or a combination of these. Alternatively or additionally, the present system may query the user to confirm that the face selected by the processing logic is correct and/or ask the user to select one or more faces for analysis. Any suitable user interface technique known in the art may be used to query a user and/or enable the user to select one or more faces present in the image.
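By way of illustration only, the following is a minimal sketch of the detection and dominant-face selection steps described above, using OpenCV's bundled Haar cascade (a Viola-Jones-style weak cascade). OpenCV is not named in this document, and selecting the largest detected rectangle is only one of the selection criteria mentioned above.

```python
import cv2

# Viola-Jones-style weak cascade detection via OpenCV's bundled Haar cascade (illustrative only).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_dominant_face(image_bgr):
    """Return the (x, y, w, h) rectangle of the dominant (largest) detected face, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])  # largest rectangle treated as the dominant face
```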
Once the appropriate face(s) is selected for further processing, the processing logic detects one or more facial landmarks (e.g., eyes, nose, mouth, or portions thereof), which may be used as anchor features (i.e., reference points that the processing logic can use to normalize and/or segment the image). In some instances, the processing logic may create a bounding box that isolates the face from the rest of the image. In this way, background objects, undesirable macro features, and/or other body parts that are visible in the image can be removed. The facial landmarks of interest may be detected using a known landmark detection technique (e.g., Viola-Jones or a facial shape/size recognition algorithm).
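One way to sketch the landmark-detection and bounding-box step is shown below, using the dlib library and its 68-point shape predictor; neither dlib nor the model file is named in this document, and both are assumptions made only for illustration.

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed external model file

def face_landmarks_and_box(gray_image):
    """Detect facial landmarks (anchor features) and a bounding box isolating the face."""
    rect = detector(gray_image, 1)[0]  # first detected face
    shape = predictor(gray_image, rect)
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    box = (rect.left(), rect.top(), rect.right(), rect.bottom())
    return landmarks, box
```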
Facial segmentation may be performed, for example, by a tasks-constrained deep convolutional network (TCDCN) or other suitable technique, as known to those skilled in the art. Segmenting the facial image allows the analyzing logic to provide an apparent age for each segment, which can be important because some segments are known to impact overall skin age perception more than others. Thus, each segment may be weighted to reflect the influence that segment has on the perception of skin age. In some instances, the processing logic may cause the system to scale the segmented image such that the full height of the facial image (i.e., the distance from the bottom of the chin to the top of the forehead) does not exceed a particular value (e.g., between 700 and 800 pixels, between 700 and 750 pixels, or even about 716 pixels).
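The scaling and segment-weighting steps described above might be sketched as follows. The segment names, weights, and landmark inputs are illustrative assumptions only; this document does not specify particular weight values.

```python
import cv2

# Assumed segment weights, for illustration only.
SEGMENT_WEIGHTS = {"forehead": 0.25, "under_eye": 0.25, "cheek": 0.20,
                   "nasolabial": 0.15, "chin": 0.15}

def scale_to_face_height(image, chin_y, forehead_y, max_height=716):
    """Rescale the image so the chin-to-forehead distance does not exceed max_height pixels."""
    face_height = abs(chin_y - forehead_y)
    if face_height <= max_height:
        return image
    scale = max_height / face_height
    new_size = (int(image.shape[1] * scale), int(image.shape[0] * scale))  # (width, height)
    return cv2.resize(image, new_size, interpolation=cv2.INTER_AREA)

def overall_apparent_age(segment_ages):
    """Combine per-segment apparent ages using weights reflecting each segment's influence."""
    return sum(SEGMENT_WEIGHTS[name] * age for name, age in segment_ages.items())
```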
It is important to prevent facial macro features from contaminating the skin age analysis by the CNN. If the facial macro features are not masked, the CNN may learn to predict the skin age of a person from macro feature cues rather than micro feature cues such as fine lines and wrinkles, which are known to be much more influential on how people perceive skin age. This can be demonstrated by digitally altering an image to remove facial micro features such as fine lines, wrinkles, and pigmentation disorders, and observing that the apparent age provided by the system does not change. Masking may occur before and/or after the image is segmented and/or bounded. In the present system, masking may be accomplished by replacing the pixels in a facial macro feature with pixels that have a uniform, non-zero (i.e., not black), non-255 (i.e., not white) RGB value. For example, it may be desirable to replace the pixels in the macro feature with pixels that have the median RGB value of the skin in the region of interest. It is believed, without being limited by theory, that by masking the facial macro features with uniformly colored or otherwise nondescript pixels, the CNN will learn to predict age using features other than the macro features (e.g., facial micro features such as fine lines and wrinkles). Masking herein may be accomplished using any suitable masking means known in the art, such as, for example, Matlab® brand computer software.
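A minimal NumPy sketch of this masking step is shown below. The boolean masks identifying the macro feature and the surrounding region-of-interest skin are assumed inputs (e.g., produced by the landmark and segmentation steps above).

```python
import numpy as np

def mask_macro_feature(image, feature_mask, roi_mask):
    """Replace pixels inside a macro-feature mask with the median RGB value
    of the surrounding region-of-interest skin (uniform, nondescript pixels)."""
    out = image.copy()
    median_rgb = np.median(image[roi_mask], axis=0).astype(image.dtype)  # per-channel median of RoI skin
    out[feature_mask] = median_rgb
    return out
```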
Even when masking the facial macro features as described above, a sophisticated convolutional neural network may still learn to predict skin age based on “phantom” macro features. In other words, the neural network may still learn to recognize differences in the patterns of median RGB pixels because the patterns generally correspond to the size and/or position of the masked facial macro feature. The CNN may then apply the pattern differences to its age prediction analysis. To avoid this problem, it is important to provide more than one input variant (e.g., 2, 3, 4, 5, 6, or more) of the processed image to the CNN. By varying how the masked macro features are presented to the CNN, it is believed, without being limited by theory, that the CNN is less likely to learn to use differences in the median RGB pixel patterns to predict skin age.
In some instances, it may be desirable to select only a portion of a particular region of interest for analysis by the CNN. For example, it may be desirable to select a patch of skin disposed in and/or around the center of the region of interest, and scale the selected skin patch to a uniform size. Continuing with this example, the largest rectangle of skin-only area may be extracted from the center of each region of interest and rescaled to a 256 pixel×256 pixel skin patch.
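A simplified sketch of this patch-extraction step is shown below; it crops a square patch centered on the skin mask and rescales it to 256×256 pixels, rather than computing the true largest skin-only rectangle.

```python
import cv2
import numpy as np

def center_skin_patch(image, skin_mask, size=256):
    """Crop a square patch centered on the skin region of interest and rescale it to size x size."""
    ys, xs = np.nonzero(skin_mask)
    cy, cx = int(ys.mean()), int(xs.mean())  # centroid of the skin pixels
    half = min(cy, cx, skin_mask.shape[0] - cy, skin_mask.shape[1] - cx)
    patch = image[cy - half:cy + half, cx - half:cx + half]
    return cv2.resize(patch, (size, size), interpolation=cv2.INTER_AREA)
```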
The systems and methods herein use a trained convolutional neural network, which functions as an in silico skin model, to provide an apparent skin age to a user by analyzing an image of the skin of a person (e.g., facial skin). The CNN comprises multiple layers of neuron collections that use the same filters for each pixel in a layer. Using the same filters for each pixel in the various combinations of partially and fully connected layers reduces memory and processing requirements of the system. In some instances, the CNN comprises multiple deep networks, which are trained and function as discrete convolutional neural networks for a particular image segment and/or region of interest.
The CNN herein may be trained using a deep learning technique that allows the CNN to learn what portions of an image contribute to skin age, much in the same way as a mammalian visual cortex learns to recognize important features in an image. For example, the CNN may be trained to determine locations, colors, and/or shade (e.g., lightness or darkness) of pixels that contribute to the skin age of a person. In some instances, the CNN training may involve using mini-batch stochastic gradient descent (SGD) with Nesterov momentum (and/or other algorithms). An example of utilizing a stochastic gradient descent is disclosed in U.S. Pat. No. 8,582,807.
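For illustration, a minimal training step using mini-batch SGD with Nesterov momentum is sketched below in PyTorch (which is not named in this document). The network architecture, hyperparameters, and dummy training batch are assumptions, not details taken from the source.

```python
import torch
import torch.nn as nn

class SkinAgeCNN(nn.Module):
    """Hypothetical age-regression CNN for 256x256 skin patches."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 64 * 64, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = SkinAgeCNN()
criterion = nn.MSELoss()
# Mini-batch SGD with Nesterov momentum, as referenced above.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, nesterov=True)

# Dummy supervised batch stands in for real training images with known (predetermined) ages.
images = torch.rand(8, 3, 256, 256)
ages = torch.randint(20, 70, (8,)).float()

optimizer.zero_grad()
loss = criterion(model(images).squeeze(1), ages)
loss.backward()
optimizer.step()  # one mini-batch SGD step with Nesterov momentum
```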
In some instances, the CNN may be trained by providing an untrained CNN with a multitude of captured images to learn from. In some instances, the CNN can learn to identify portions of skin in an image that contribute to skin age through a process called supervised learning. “Supervised learning” generally means that the CNN is trained by analyzing images in which the age of the person in the image is predetermined. Depending on the accuracy desired, the number of training images may vary from a few images to a multitude of images (e.g., hundreds or even thousands) to a continuous input of images (i.e., to provide continuous training).
The systems and methods herein utilize a trained CNN that is capable of accurately predicting the apparent age of a user for a wide range of skin types. To generate an apparent age, an image of a region of interest (e.g., obtained from an image of a person's face), or a portion thereof, is forward-propagated through the trained CNN. The CNN analyzes the image or image portion and identifies skin micro features in the image that contribute to the predicted age of the user (“trouble spots”). The CNN then uses the trouble spots to provide an apparent skin age for the region of interest and/or an overall apparent skin age.
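A sketch of this forward-propagation step, in which each processed region-of-interest patch is passed through the trained CNN to obtain an apparent age per region, might look like the following; `roi_patches` is an assumed mapping of region names to preprocessed tensors and does not come from this document.

```python
import torch

@torch.no_grad()
def predict_apparent_ages(model, roi_patches):
    """Forward-propagate each region-of-interest patch through the trained CNN
    and return an apparent skin age per region."""
    model.eval()
    return {name: model(patch.unsqueeze(0)).item() for name, patch in roi_patches.items()}
```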
In some instances, an image inputted to the CNN may not be suitable for analysis, for example, due to occlusion (e.g., hair covering a portion of the image, shadowing of a region of interest). In these instances, the CNN or other logic may discard the image prior to analysis by the CNN or discard the results of the CNN analysis prior to generation of an apparent age.
In some instances, the present system may determine a target skin age (e.g., the apparent age of the person minus a predetermined number of years (e.g., 10, 9, 8, 7, 6, 5, 4, 3, 2, or 1 year)) or use the actual age of the person. The system may cause the target age to be propagated back to the original image as a gradient. The absolute values of a plurality of channels of the gradient may then be summed for at least one pixel and scaled from 0 to 1 for visualization purposes. The scaled pixel values indicate which pixels contribute most (and least) to the determination of the skin age of the user. Each scaling value (or range of values) may be assigned a color or shade, such that a virtual mask can be generated to graphically represent the scaled values of the pixels. In some instances, the CNN analysis, optionally in conjunction with habits and practices input provided by a user, can be used to help provide a skin care product and/or regimen recommendation.
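The gradient-based visualization described above resembles a saliency map. A minimal sketch is given below; the squared-error objective used to propagate the target age back to the image is an assumption, as the document only states that the target age is propagated back as a gradient.

```python
import torch

def saliency_mask(model, image, target_age):
    """Propagate a target age back to the input image and scale the per-pixel
    gradient magnitude to 0-1 so it can be rendered as a virtual mask."""
    x = image.detach().clone().unsqueeze(0)  # (1, 3, H, W)
    x.requires_grad_(True)
    loss = ((model(x) - target_age) ** 2).mean()  # assumed objective
    loss.backward()
    grad = x.grad[0].abs().sum(dim=0)  # sum absolute values across the color channels
    return (grad - grad.min()) / (grad.max() - grad.min() + 1e-8)  # scale to 0-1 for visualization
```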
The memory component 2240b may store operating logic 2242, processing logic 2244b, training logic 2244c, and analyzing logic 2244d. The training logic 2244c, processing logic 2244b, and analyzing logic 2244d may each include a plurality of different pieces of logic, each of which may be embodied as a computer program, firmware, and/or hardware, as an example. A local communications interface 2246 is also included in
The processor 2230 may include any processing component operable to receive and execute instructions (such as from a data storage component 2236 and/or the memory component 2240b). As described above, the input/output hardware 2232 may include and/or be configured to interface with the components of
The network interface hardware 2234 may include and/or be configured for communicating with any wired or wireless networking hardware, including an antenna, a modem, a LAN port, wireless fidelity (Wi-Fi) card, WiMax card, Bluetooth™ module, mobile communications hardware, and/or other hardware for communicating with other networks and/or devices. From this connection, communication may be facilitated between the remote computing device 2204 and other computing devices, such as those depicted in
The operating logic 2242 may include an operating system and/or other software for managing components of the remote computing device 2204. As discussed above, the training logic 2244c may reside in the memory component 2240b and may be configured to cause the processor 2230 to train the convolutional neural network. The processing logic 2244b may also reside in the memory component 2240b and be configured to process images prior to analysis by the analyzing logic 2244d. Similarly, the analyzing logic 2244d may be utilized to analyze images for skin age prediction.
It should be understood that while the components in
Additionally, while the remote computing device 2204 is illustrated with the training logic 2244c, processing logic 2244b, and analyzing logic 2244d as separate logical components, this is also an example. In some embodiments, a single piece of logic may cause the remote computing device 2204 to provide the described functionality.
In some instances, at least some of the images and other data described herein may be stored as historical data for later use. As an example, tracking of user progress may be determined based on this historical data. Other analyses may also be performed on this historical data, as desired.
The dimensions and values disclosed herein are not to be understood as being strictly limited to the exact numerical values recited. Instead, unless otherwise specified, each such dimension is intended to mean both the recited value and a functionally equivalent range surrounding that value. For example, a dimension disclosed as “40 mm” is intended to mean “about 40 mm.”
Every document cited herein, including any cross referenced or related patent or application and any patent application or patent to which this application claims priority or benefit thereof, is hereby incorporated herein by reference in its entirety unless expressly excluded or otherwise limited. The citation of any document is not an admission that it is prior art with respect to any invention disclosed or claimed herein or that it alone, or in any combination with any other reference or references, teaches, suggests or discloses any such invention. Further, to the extent that any meaning or definition of a term in this document conflicts with any meaning or definition of the same term in a document incorporated by reference, the meaning or definition assigned to that term in this document shall govern.
While particular embodiments of the present invention have been illustrated and described, it would be obvious to those skilled in the art that various other changes and modifications can be made without departing from the spirit and scope of the invention. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of this invention.
Number | Name | Date | Kind |
---|---|---|---|
4276570 | Burson et al. | Jun 1981 | A |
5850463 | Horii | Dec 1998 | A |
5983120 | Groner et al. | Nov 1999 | A |
6556196 | Blanz et al. | Apr 2003 | B1 |
6571003 | Hillebrand et al. | May 2003 | B1 |
6619860 | Simon | Sep 2003 | B1 |
6734858 | Attar et al. | May 2004 | B2 |
6761697 | Rubinstenn et al. | Jul 2004 | B2 |
6959119 | Hawkins et al. | Oct 2005 | B2 |
7200281 | Zhang et al. | Apr 2007 | B2 |
7362886 | Rowe | Apr 2008 | B2 |
7634103 | Rubinstenn et al. | Dec 2009 | B2 |
8014589 | del Valle | Sep 2011 | B2 |
8077931 | Chatman | Dec 2011 | B1 |
8094186 | Fukuoka et al. | Jan 2012 | B2 |
8254647 | Nechyba | Aug 2012 | B1 |
8391639 | Hillebrand et al. | Mar 2013 | B2 |
8401300 | Jiang et al. | Mar 2013 | B2 |
8425477 | Mou | Apr 2013 | B2 |
8491926 | Mohammadi et al. | Jul 2013 | B2 |
8520906 | Moon | Aug 2013 | B1 |
8550818 | Fidaleo et al. | Oct 2013 | B2 |
8582807 | Yang | Nov 2013 | B2 |
8625864 | Goodman | Jan 2014 | B2 |
8666770 | Maes et al. | Mar 2014 | B2 |
8725560 | Aarabi | May 2014 | B2 |
9013567 | Clemann | Apr 2015 | B2 |
9189679 | Yamazaki | Nov 2015 | B2 |
20010037191 | Furuta et al. | Nov 2001 | A1 |
20030065255 | Giacchetti et al. | Apr 2003 | A1 |
20030065589 | Giacchetti et al. | Apr 2003 | A1 |
20030198402 | Zhang | Oct 2003 | A1 |
20040122299 | Nakata | Jun 2004 | A1 |
20040170337 | Simon et al. | Sep 2004 | A1 |
20040213454 | Lai et al. | Oct 2004 | A1 |
20040223631 | Waupotitsch | Nov 2004 | A1 |
20060023923 | Geng | Feb 2006 | A1 |
20060257041 | Kameyama et al. | Nov 2006 | A1 |
20060274071 | Bazin | Dec 2006 | A1 |
20070052726 | Wright | Mar 2007 | A1 |
20070053940 | Huang et al. | Mar 2007 | A1 |
20070070440 | Li et al. | Mar 2007 | A1 |
20070071314 | Bhatti | Mar 2007 | A1 |
20070104472 | Quan | May 2007 | A1 |
20070229498 | Matusik et al. | Oct 2007 | A1 |
20080080746 | Payonk | Apr 2008 | A1 |
20080089561 | Zhang | Apr 2008 | A1 |
20080194928 | Bandic | Aug 2008 | A1 |
20080212894 | Demirli | Sep 2008 | A1 |
20080316227 | Fleury et al. | Dec 2008 | A1 |
20090003709 | Kaneda | Jan 2009 | A1 |
20090028380 | Hillebrand | Jan 2009 | A1 |
20090245603 | Koruga | Oct 2009 | A1 |
20100068247 | Mou | Mar 2010 | A1 |
20100172567 | Prokoski | Jul 2010 | A1 |
20100185064 | Bandic | Jul 2010 | A1 |
20100189342 | Parr et al. | Jul 2010 | A1 |
20100329525 | Goodman | Dec 2010 | A1 |
20110016001 | Schieffelin | Jan 2011 | A1 |
20110064331 | Andres del Valle | Mar 2011 | A1 |
20110116691 | Chung | May 2011 | A1 |
20110158540 | Suzuki | Jun 2011 | A1 |
20110196616 | Gunn | Aug 2011 | A1 |
20110222724 | Yang | Sep 2011 | A1 |
20110249891 | Li | Oct 2011 | A1 |
20110300196 | Mohammadi | Dec 2011 | A1 |
20120223131 | Lim | Sep 2012 | A1 |
20120253755 | Gobel | Oct 2012 | A1 |
20120300049 | Clemann | Nov 2012 | A1 |
20120325141 | Mohammadi | Dec 2012 | A1 |
20130013330 | Guerra | Jan 2013 | A1 |
20130029723 | Das | Jan 2013 | A1 |
20130041733 | Officer | Feb 2013 | A1 |
20130079620 | Kuth et al. | Mar 2013 | A1 |
20130089245 | Yamazaki | Apr 2013 | A1 |
20130094780 | Tang et al. | Apr 2013 | A1 |
20130158968 | Ash et al. | Jun 2013 | A1 |
20130169621 | Mei et al. | Jul 2013 | A1 |
20130271451 | Tong | Oct 2013 | A1 |
20130325493 | Wong et al. | Dec 2013 | A1 |
20140089017 | Klappert et al. | Mar 2014 | A1 |
20140099029 | Savvides | Apr 2014 | A1 |
20140201126 | Zadeh | Jul 2014 | A1 |
20140209682 | Gottwals et al. | Jul 2014 | A1 |
20140211022 | Koh et al. | Jul 2014 | A1 |
20140219526 | Linguraru et al. | Aug 2014 | A1 |
20140226896 | Imai | Aug 2014 | A1 |
20140270490 | Wus et al. | Sep 2014 | A1 |
20140304629 | Cummins et al. | Oct 2014 | A1 |
20140323873 | Cummins et al. | Oct 2014 | A1 |
20140334723 | Chatow | Nov 2014 | A1 |
20150045631 | Pederson | Feb 2015 | A1 |
20150099947 | Qu | Apr 2015 | A1 |
20150178554 | Kanaujia et al. | Jun 2015 | A1 |
20150310040 | Chan | Oct 2015 | A1 |
20150339757 | Aarabi | Nov 2015 | A1 |
20160062456 | Wang | Mar 2016 | A1 |
20160162728 | Arai et al. | Jun 2016 | A1 |
20160219217 | Williams | Jul 2016 | A1 |
20160255303 | Tokui | Sep 2016 | A1 |
20160292380 | Cho | Oct 2016 | A1 |
20160314616 | Su | Oct 2016 | A1 |
20160330370 | Ghosh | Nov 2016 | A1 |
20170032178 | Henry | Feb 2017 | A1 |
20170039357 | Hwang | Feb 2017 | A1 |
20170178058 | Bhat | Jun 2017 | A1 |
20170246473 | Marinkovich | Aug 2017 | A1 |
20170270348 | Morgana et al. | Sep 2017 | A1 |
20170270349 | Polania Cabrera et al. | Sep 2017 | A1 |
20170270350 | Maltz et al. | Sep 2017 | A1 |
20170270593 | Sherman | Sep 2017 | A1 |
20170270691 | Maltz et al. | Sep 2017 | A1 |
20170272741 | Maltz et al. | Sep 2017 | A1 |
20170294010 | Shen | Oct 2017 | A1 |
20170308738 | Zhang | Oct 2017 | A1 |
20180276869 | Matts | Sep 2018 | A1 |
20180276883 | D'Alessandro | Sep 2018 | A1 |
20180352150 | Purwar | Dec 2018 | A1 |
20190035149 | Chen | Jan 2019 | A1 |
Number | Date | Country |
---|---|---|
1870047 | Nov 2006 | CN |
101556699 | Oct 2009 | CN |
104504376 | Apr 2015 | CN |
1297781 | Apr 2003 | EP |
1030267 | Jan 2010 | EP |
1813189 | Mar 2010 | EP |
1189536 | Mar 2011 | EP |
2728511 | May 2014 | EP |
2424332 | Sep 2006 | GB |
2007050158 | Mar 2007 | JP |
20140078459 | Jun 2014 | KR |
WO200076398 | Dec 2000 | WO |
2003049039 | Jun 2003 | WO |
2006005917 | Jan 2006 | WO |
2007044815 | Apr 2007 | WO |
WO2007051299 | May 2007 | WO |
WO2008003146 | Jan 2008 | WO |
WO2008086311 | Jul 2008 | WO |
WO2009100494 | Aug 2009 | WO |
WO2011109168 | Sep 2011 | WO |
2011146321 | Nov 2011 | WO |
2013104015 | Jul 2013 | WO |
2014122253 | Aug 2014 | WO |
WO2015017687 | Feb 2015 | WO |
WO201588079 | Jun 2015 | WO |
WO2017029488 | Feb 2017 | WO |
Entry |
---|
Jagtap et al., Human Age Classification Using facial Skin Aging Features and Artificial Neural Network, Cognitive Systems Research vol. 40 (2016), pp. 116-128 (Year: 2016). |
Y. Fu, G. Guo, and T. S. Huang, “Age synthesis and estimation via faces: A survey,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 32, No. 11, pp. 1955-1976, 2010. |
B. Tiddeman, M. Burt, and D. Perrett, “Prototyping and transforming facial textures for perception research,” Computer Graphics and Applications, IEEE, vol. 21, No. 5, pp. 42-50, 2001. |
D. M. Burt and D. I. Perrett, “Perception of age in adult Caucasian male faces: Computer graphic manipulation of shape and colour information,” Proceedings of the Royal Society of London. Series B: Biological Sciences, vol. 259, No. 1355, pp. 137-143, 1995. |
A. Lanitis, C. J. Taylor, and T. F. Cootes, “Toward automatic simulation of aging effects on face images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, No. 4, pp. 442-455, Apr. 2002. |
Z. Liu, Z. Zhang, and Y. Shan, “Image-based surface detail transfer,” Computer Graphics and Applications, IEEE, vol. 24, No. 3, pp. 30-35, 2004. |
E. Patterson, K. Ricanek, M. Albert, and E. Boone, “Automatic representation of adult aging in facial images,” in Proc. IASTED Int'l Conf. Visualization, Imaging, and Image Processing, 2006, pp. 171-176. |
T. J. Hutton, B. F. Buxton, P. Hammond, and H. W. Potts, “Estimating average growth trajectories in shape-space using kernel smoothing,” Medical Imaging, IEEE Transactions on, vol. 22, No. 6, pp. 747-753, 2003. |
D. Dean, M. G. Hans, F. L. Bookstein, and K. Subramanyan, “Three-dimensional Bolton-Brush Growth Study landmark data: ontogeny and sexual dimorphism of the Bolton standards cohort,” 2009. |
J. H. Langlois and L. A. Roggman, “Attractive faces are only average,” Psychological science, vol. 1, No. 2, pp. 115-121, 1990. |
Y. H. Kwon and N. da Vitoria Lobo, “Age classification from facial images,” in Computer Vision and Pattern Recognition, 1994. Proceedings CVPR'94., 1994 IEEE Computer Society Conference on, 1994, pp. 762-767. |
P. A. George and G. J. Hole, “Factors influencing the accuracy of age estimates of unfamiliar faces,” Perception—London-, vol. 24, pp. 1059-1059, 1995. |
I. Pitanguy, F. Leta, D. Pamplona, and H. I. Weber, “Defining and measuring aging parameters,” Applied Mathematics and Computation, vol. 78, No. 2-3, pp. 217-227, Sep. 1996. |
Y. Wu, P. Kalra, and N. M. Thalmann, “Simulation of static and dynamic wrinkles of skin,” in Computer Animation'96. Proceedings, 1996, pp. 90-97. |
P. N. Belhumeur, J. P. Hespanha, and D. Kriegman, “Eigenfaces vs. fisherfaces: Recognition using class specific linear projection,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 19, No. 7, pp. 711-720, 1997. |
M. J. Jones and T. Poggio, “Multidimensional morphable models,” in Computer Vision, 1998. Sixth International Conference on, 1998, pp. 683-688. |
I. Pitanguy, D. Pamplona, H. I. Weber, F. Leta, F. Salgado, and H. N. Radwanski, “Numerical modeling of facial aging,” Plastic and reconstructive surgery, vol. 102, No. 1, pp. 200-204, 1998. |
V. Blanz and T. Vetter, “A morphable model for the synthesis of 3D faces,” in Proceedings of the 26th annual conference on Computer graphics and interactive techniques, 1999, pp. 187-194. |
L. Boissieux, G. Kiss, N. M. Thalmann, and P. Kalra, Simulation of skin aging and wrinkles with cosmetics insight. Springer, 2000. |
T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Transactions on pattern analysis and machine intelligence, vol. 23, No. 6, pp. 681-685, 2001. |
Y. Bando, T. Kuratate, and T. Nishita, “A simple method for modeling wrinkles on human skin,” in Computer Graphics and Applications, 2002. Proceedings. 10th Pacific Conference on, 2002, pp. 166-175. |
M. R. Gandhi, “A method for automatic synthesis of aged human facial images,” McGill University, 2004. |
A. Lanitis, C. Draganova, and C. Christodoulou, “Comparing different classifiers for automatic age estimation,” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 34, No. 1, pp. 621-628, 2004. |
S. R. Coleman and R. Grover, “The anatomy of the aging face: volume loss and changes in 3-dimensional topography,” Aesthetic surgery journal, vol. 26, No. 1 suppl, pp. S4-S9, 2006. |
Y. Fu and N. Zheng, “M-face: An appearance-based photorealistic model for multiple facial attributes rendering,” Circuits and Systems for Video Technology, IEEE Transactions on, vol. 16, No. 7, pp. 830-842, 2006. |
X. Geng, Z.-H. Zhou, Y. Zhang, G. Li, and H. Dai, “Learning from facial aging patterns for automatic age estimation,” in Proceedings of the 14th annual ACM international conference on Multimedia, 2006, pp. 307-316. |
N. Ramanathan and R. Chellappa, “Modeling age progression in young faces,” in Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, 2006, vol. 1, pp. 387-394. |
C. J. Solomon, S. J. Gibson, and others, “A person-specific, rigorous aging model of the human face,” Pattern Recognition Letters, vol. 27, No. 15, pp. 1776-1787, 2006. |
K. Ueki, T. Hayashida, and T. Kobayashi, “Subspace-based age-group classification using facial images under various lighting conditions,” in Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on, 2006, p. 6-pp. |
A. M. Albert, K. Ricanek Jr, and E. Patterson, “A review of the literature on the aging adult skull and face: Implications for forensic science research and applications,” Forensic Science International, vol. 172, No. 1, pp. 1-9, 2007. |
X. Geng, Z.-H. Zhou, and K. Smith-Miles, “Automatic age estimation based on facial aging patterns,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 29, No. 12, pp. 2234-2240, 2007. |
K. Scherbaum, M. Sunkel, H.-P. Seidel, and V. Blanz, “Prediction of Individual Non-Linear Aging Trajectories of Faces,” in Computer Graphics Forum, 2007, vol. 26, pp. 285-294. |
J. Suo, F. Min, S. Zhu, S. Shan, and X. Chen, “A multi-resolution dynamic model for face aging simulation,” in Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, 2007, pp. 1-8. |
Y. Fu and T. S. Huang, “Human age estimation with regression on discriminative aging manifold,” Multimedia, IEEE Transactions on, vol. 10, No. 4, pp. 578-584, 2008. |
G. Guo, Y. Fu, C. R. Dyer, and T. S. Huang, “Image-based human age estimation by manifold learning and locally adjusted robust regression,” Image Processing, IEEE Transactions on, vol. 17, No. 7, pp. 1178-1188, 2008. |
F. Jiang and Y. Wang, “Facial aging simulation based on super-resolution in tensor space,” in Image Processing, 2008. ICIP 2008. 15th IEEE International Conference on, 2008, pp. 1648-1651. |
U. Park, Y. Tong, and A. K. Jain, “Face recognition with temporal invariance: A 3d aging model,” in Automatic Face & Gesture Recognition, 2008. FG'08. 8th IEEE International Conference on, 2008, pp. 1-7. |
N. Ramanathan and R. Chellappa, “Modeling shape and textural variations in aging faces,” in Automatic Face & Gesture Recognition, 2008. FG'08. 8th IEEE International Conference on, 2008, pp. 1-8. |
B. Guyuron, D. J. Rowe, A. B. Weinfeld, Y. Eshraghi, A. Fathi, and S. Iamphongsai, “Factors contributing to the facial aging of identical twins,” Plastic and reconstructive surgery, vol. 123, No. 4, pp. 1321-1331, 2009. |
G. Mu, G. Guo, Y. Fu, and T. S. Huang, “Human age estimation using bio-inspired features,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, 2009, pp. 112-119. |
N. Ramanathan, R. Chellappa, and S. Biswas, “Computational methods for modeling facial aging: A survey,” Journal of Visual Languages & Computing, vol. 20, No. 3, pp. 131-144, 2009. |
U. Park, Y. Tong, and A. K. Jain, “Age-invariant face recognition,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 32, No. 5, pp. 947-954, 2010. |
J. Suo, S.-C. Zhu, S. Shan, and X. Chen, “A compositional and dynamic model for face aging,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 32, No. 3, pp. 385-401, 2010. |
K. Sveikata, I. Balciuniene, and J. Tutkuviene, “Factors influencing face aging. Literature review,” Stomatologija, vol. 13, No. 4, pp. 113-115, 2011. |
J. P. Farkas, J. E. Pessa, B. Hubbard, and R. J. Rohrich, “The science and theory behind facial aging,” Plastic and Reconstructive Surgery—Global Open, vol. 1, No. 1, pp. e8-e15, 2013. |
J. Gatherwright, M. T. Liu, B. Amirlak, C. Gliniak, A. Totonchi, and B. Guyuron, “The Contribution of Endogenous and Exogenous Factors to Male Alopecia: A Study of Identical Twins,” Plastic and reconstructive surgery, vol. 131, No. 5, p. 794e-801e, 2013. |
U.S. Appl. No. 62/547,196, filed Aug. 18, 2017, Ankur (NMN) Purwar. |
All Office Actions, U.S. Appl. No. 15/414,002. |
All Office Actions, U.S. Appl. No. 15/414,095. |
All Office Actions, U.S. Appl. No. 15/414,147. |
All Office Actions, U.S. Appl. No. 15/414,189. |
All Office Actions, U.S. Appl. No. 15/414,305. |
All Office Actions, U.S. Appl. No. 15/465,166. |
All Office Actions, U.S. Appl. No. 15/993,950. |
All Office Actions, U.S. Appl. No. 15/993,973. |
Andreas Lanitis, Comparative Evaluation of Automatic Age-Progression Methodologies, EURASIP Journal on Advances in Signal Processing, vol. 2008, No. 1, Jan. 1, 2008, 10 pages. |
Beauty.AI Press Release, PRWeb Online Visibility from Vocus, Nov. 19, 2015, 3 pages. |
Chen et al., Face Image Quality Assessment Based on Learning to Rank, IEEE Signal Processing Letters, vol. 22, No. 1 (2015), pp. 90-94. |
Crete et al., The blur effect: perception and estimation with a new no-reference perceptual blur metric, Proc. SPIE 6492, Human Vision and Electronic Imaging XII, 2007, 12 pages. |
Dong et al., Automatic age estimation based on deep learning algorithm, Neurocomputing 187 (2016), pp. 4-10. |
Finlayson et al., Color by Correlation: A Simple, Unifying Framework for Color Constancy, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, No. 11, Nov. 2001, pp. 1209-1221. |
Fu et al., Learning Race from Face: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, No. 12, Dec. 1, 2014, pp. 2483-2509. |
Gong et al., Quantification of Pigmentation in Human Skin Images, IEEE, 2012, pp. 2853-2856. |
Gray et al., Predicting Facial Beauty without Landmarks, European Conference on Computer Vision, Computer Vision—ECCV 2010, 14 pages. |
Guodong Guo et al., A framework for joint estimation of age, gender and ethnicity on a large database, Image and Vision Computing, vol. 32, No. 10, May 10, 2014, pp. 761-770. |
Huerta et al., A deep analysis on age estimation, Pattern Recognition Letters 68 (2015), pp. 239-249. |
Hyvarinen et al., A Fast Fixed-Point Algorithm for Independent Component Analysis of Complex Valued Signals, Neural Networks Research Centre, Helsinki University of Technology, Jan. 2000, 15 pages. |
Hyvarinen et al., A Fast Fixed-Point Algorithm for Independent Component Analysis, Neural Computation, 9:1483-1492, 1997. |
International Search Report and Written Opinion of the International Searching Authority, PCT/US2017/023334, dated May 15, 2017, 12 pages. |
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/023042, dated Jun. 6, 2018. |
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/023219, dated Jun. 1, 2018, 13 pages. |
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/035291, dated Aug. 30, 2018, 11 pages. |
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/035296, dated Oct. 17, 2018, 17 pages. |
Jagtap et al., Human Age Classification Using Facial Skin Aging Features and Artificial Neural Network, Cognitive Systems Research vol. 40 (2016), pp. 116-128. |
Konig et al., A New Context: Screen to Face Distance, 8th International Symposium on Medical Information and Communication Technology (ISMICT), IEEE, Apr. 2, 2014, pp. 1-5. |
Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks, part of Advances in Neural Information Processing Systems 25 (NIPS 2012), 9 pages. |
Levi et al., Age and Gender Classification Using Convolutional Neural Networks, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015, pp. 34-42. |
Mathias et al., Face Detection Without Bells and Whistles, European Conference on Computer Vision, 2014, pp. 720-735. |
Ojima et al., Application of Image-Based Skin Chromophore Analysis to Cosmetics, Journal of Imaging Science and Technology, vol. 48, No. 3, May 2004, pp. 222-226. |
Sun et al., Statistical Characterization of Face Spectral Reflectances and Its Application to Human Portraiture Spectral Estimation, Journal of Imaging Science and Technology, vol. 46, No. 6, 2002, pp. 498-506. |
Sung Eun Choi et al., Age face simulation using aging functions on global and local features with residual images, Expert Systems with Applications, vol. 80, Mar. 7, 2017, pp. 107-125. |
Tsumura et al., Image-based skin color and texture analysis/synthesis by extracting hemoglobin and melanin information in the skin, ACM Transactions on Graphics (TOG), vol. 22, Issue 3, Jul. 2003, pp. 770-779. |
Viola et al., Robust Real-Time Face Detection, International Journal of Computer Vision 57(2), 2004, pp. 137-154. |
Wang et al., Combining Tensor Space Analysis and Active Appearance Models for Aging Effect Simulation on Face Images, IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, vol. 42, No. 4, Aug. 1, 2012, pp. 1107-1118. |
Wang et al., Deeply-Learned Feature for Age Estimation, 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 534-541. |
Wu et al., Funnel-Structured Cascade for Multi-View Face Detection with Alignment-Awareness, Neurocomputing 221 (2017), pp. 138-145. |
Xiangbo Shu et al., Age progression: Current technologies and applications, Neurocomputing, vol. 208, Oct. 1, 2016, pp. 249-261. |
Yi et al., Age Estimation by Multi-scale Convolutional Network, Computer Vision—ACCV 2014, Nov. 1, 2014, pp. 144-158, 2015. |
Yun Fu et al., Age Synthesis and Estimation via Faces: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, No. 11, Nov. 1, 2010, pp. 1955-1976. |
Number | Date | Country | |
---|---|---|---|
20180350071 A1 | Dec 2018 | US |
Number | Date | Country | |
---|---|---|---|
62513186 | May 2017 | US |