The amount of accessible content is ever expanding, particularly in areas such as fashion and beauty. For example, content showing beauty looks, makeup applications, trends, celebrity looks, etc. can seem never-ending. Despite this abundance, it can be difficult to identify the specific products shown in such content and to determine how those products may appear when used by the consumers of the content.
As is set forth in greater detail below, embodiments of the present disclosure are generally directed to systems and methods for identifying a look (e.g., a beauty aesthetic) in a content item (e.g., an image, a video, etc.) and rendering the identified look on a content item associated with a user of the content (e.g., a live-feed video of the user, a selfie, an existing image, etc.). For example, a user can provide and/or identify (e.g., via a client device) a content item, which can include an image with a visual representation of at least a portion of a body part (e.g., a face, a hand, toes, nails, hair, etc.) having a look/beauty aesthetic that may be achieved or created through the application of one or more beauty products (e.g., lipstick, eyeshadow, mascara, eyeliner, foundation, concealer, powder, blush, eye pencil, nail polish, toe polish, hair dye, hair style, hair extensions, etc.). Alternatively, the content item can include a visual representation of a beauty product itself. Embodiments of the present disclosure can process the content item to extract parameters associated with the beauty products in the content item and identify beauty products that may be used to achieve the same or a similar look/beauty aesthetic. For example, the parameters extracted from the content item can include a color, a gloss, an amount of application, an opacity, a glitter, a glitter size, a glitter density, a shape of application, an intensity, etc. of the applied beauty products. Based on the extracted parameter(s), one or more beauty products (or similar beauty products) that may be used to create the look/beauty aesthetic, or a similar look/beauty aesthetic, can be identified. The identified beauty products can then be rendered on a content item associated with the user (e.g., a live-feed video of the user, a "selfie," etc.) so that the user can assess the appearance of the beauty products on him/herself (e.g., on his/her face, hands, hair, nails, etc.) in that content item. According to aspects of the present disclosure, any such content items associated with a user are preferably not stored, saved, and/or otherwise maintained by the computer system implementing embodiments of the present disclosure.
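The disclosure does not prescribe a particular schema for the extracted parameters, but one plausible way to organize them is a simple record per region of interest. The following Python sketch is illustrative only; every field name and default is an assumption, not a detail taken from the disclosure.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BeautyProductParameters:
    # All field names are illustrative assumptions; the disclosure lists
    # these parameter types but does not prescribe a schema.
    product_type: str                          # e.g., "lipstick", "eyeshadow"
    color_rgb: Tuple[int, int, int]            # dominant applied color
    gloss: float                               # 0.0 (matte) to 1.0 (high gloss)
    opacity: float                             # 0.0 (sheer) to 1.0 (opaque)
    intensity: float = 0.0
    glitter: bool = False
    glitter_size: Optional[float] = None       # e.g., mean particle size in pixels
    glitter_density: Optional[float] = None    # e.g., particles per unit area
    application_shape: Optional[str] = None    # e.g., "winged", "gradient"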
Embodiments of the present disclosure can also identify and present additional content items to the user based on a user provided content item. For example, a corpus of additional content items (e.g., images, videos, etc.), each having an image with a corresponding visual representation of a look/beauty aesthetic, can be stored and maintained in a data store. Similar to the processing and extraction of parameters from the user provided content item, each additional content item of the corpus can be processed to extract certain parameters associated with the look/beauty aesthetic, beauty products, etc. presented therein. After the parameters have been extracted, they can be associated and stored with the respective additional content items. The stored parameters can then be used to identify additional content items of interest. For example, the parameters extracted from the user provided content item can be compared against the parameters extracted from the corpus of additional content items to identify additional content items having looks/beauty aesthetics that may be similar, related in certain aspects, contrasting, recommended, alternative, etc. The looks/beauty aesthetics presented in the additional content items can also be rendered on a user content item, such as a live-feed streaming video of the user or a selfie, so that the user can assess the appearance of the beauty products identified in the additional content items on his/her face. According to aspects of the present disclosure, any such content items associated with a user are preferably not stored, saved, and/or otherwise maintained by the computer system implementing embodiments of the present disclosure.
According to certain aspects of the present disclosure, processing the corpus of additional content items can also include identifying a skin tone associated with the look/beauty aesthetic presented in each of the additional content items. This can allow a user to filter or choose to see only looks/beauty aesthetics associated with a certain skin tone. To associate a skin tone with each additional content item, a dominant skin tone can be extracted from each of the additional content items. The dominant skin tone can then be processed and compared to various predetermined thresholds to classify each additional content item into one of a plurality of categories based on the identified dominant skin tone. A user can then opt to be presented only additional content items having looks/beauty aesthetics associated with a certain skin tone.
Although embodiments of the present disclosure are described primarily with respect to lipstick, embodiments of the present disclosure may be applicable to any other types of looks/beauty aesthetics and/or beauty products, such as, for example, eyeshadow, mascara, eyeliner, foundation, concealer, powder, blush, eye pencil, nail polish, toe polish, hair dye, hair style, hair extensions, etc.
According to embodiments of the present disclosure, users 101-1, 101-2, and 101-3 may, via client devices 102-1, 102-2, and 102-3, identify and/or provide to content system 110 a content item having an image with a visual representation of a look/beauty aesthetic of interest. This can include, for example, any content (e.g., images, videos, etc.) that may include an image of at least a portion of a body part (e.g., a face, a hand, toes, nails, hair, etc.) having a look/beauty aesthetic. Alternatively, users 101-1, 101-2, and 101-3 may, via client devices 102-1, 102-2, and 102-3, provide a content item of a beauty product itself. According to aspects of the present disclosure, the content can include an image generated using one or more cameras of the client devices 102-1, 102-2, or 102-3, an image from memory of client devices 102-1, 102-2, or 102-3, an image stored in a memory that is external to client devices 102-1, 102-2, or 102-3, an image provided by content system 110, and/or an image from another source or location. The look/beauty aesthetic presented in the content item can include the application of one or more beauty products and/or styles (e.g., lipstick, eyeshadow, mascara, eyeliner, foundation, concealer, powder, blush, eye pencil, nail polish, toe polish, hair dye, hair style, hair extensions, hair perm, etc.) which, when combined, can contribute to the overall beauty aesthetic presented in the content item.
From the content item provided by the user, content system 110 can extract certain parameters associated with the look/beauty aesthetic and/or beauty product presented therein and, based on the extracted parameters, identify beauty products (or similar beauty products) that may be used to achieve the same or a similar look/beauty aesthetic as the one presented in the content item. Content system 110 can also render the identified beauty product on a content item associated with the user (e.g., a live-feed streaming video, a selfie, an existing image, etc.) of any of users 101-1, 101-2, and 101-3 on client devices 102-1, 102-2, and 102-3, respectively. For example, after a beauty product has been identified, a user may opt to see how the beauty product may appear on his/her face. Accordingly, the user may request that the beauty product be rendered on a content item presenting a visual representation of the user. For example, the user can activate a camera associated with the client device and initiate a live-feed streaming video, capture a selfie, select an existing image, etc. of him/herself. Content system 110 can then provide parameters to client device 102-1, 102-2, or 102-3 to facilitate rendering of the beauty product on the content item (e.g., the live-feed streaming video, the selfie, the existing image, etc.) locally on the client device. Alternatively, the content item can be provided to content system 110, and content system 110 can render the beauty product on the face of the user in the content item and transmit it back to the client device in substantially real-time.
For example, the content item provided by any of users 101-1, 101-2, and 101-3 via client devices 102-1, 102-2, and 102-3 may include an image with a visual representation of a face having lipstick and eyeshadow applied thereon. Content system 110 can utilize a face detection algorithm to detect features, regions of interest, landmarks, etc. presented in the image. For example, in an image of a face having lipstick and eyeshadow applied thereon, the detection algorithm can identify the lips and the eyes as regions of interest. According to certain aspects of the present disclosure, the detection and identification of regions of interest can be performed by a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.). Alternatively, other approaches can be utilized, such as image segmentation algorithms, edge detection algorithms, etc. Areas in the image not identified as regions of interest can then be removed from the image, so as to crop the identified regions of interest (e.g., the lips and the eyes) from the remainder of the image. For each remaining region of interest, certain parameters associated with the beauty product applied in that region can be extracted. For example, in the lip region, a color, a gloss, an opacity, etc. of the lipstick shown in the image can be extracted. In connection with the eye regions, a color, a shape, an opacity, an amount, etc. associated with the eyeshadow can be extracted. Other parameters, such as, for example, a glitter, a glitter size, a glitter density, a shape of application, a shimmer, an intensity, etc. can also be extracted. For example, each pixel in the regions of interest can be processed to extract parameters such as gloss, red-green-blue (RGB) values, etc. Additionally, the extracted parameters can vary for each type of beauty product applied in the region of interest.
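To make the per-pixel extraction concrete, the following Python sketch computes simple color, gloss, and opacity proxies for a lip region. It assumes a boolean lip mask has already been produced by an upstream face-landmark or segmentation step; the specific proxies (highlight fraction for gloss, mean saturation for opacity) are illustrative choices, not values prescribed by the disclosure.

import numpy as np

def extract_lip_parameters(image_rgb: np.ndarray, lip_mask: np.ndarray) -> dict:
    """Extract simple color/gloss/opacity proxies from the lip region.

    image_rgb: HxWx3 uint8 image; lip_mask: HxW boolean mask assumed to be
    produced by an upstream face-landmark or segmentation step.
    """
    pixels = image_rgb[lip_mask].astype(np.float32)       # N x 3 lip pixels
    mean_color = pixels.mean(axis=0)                      # dominant color estimate
    luminance = pixels @ np.array([0.299, 0.587, 0.114])  # per-pixel brightness
    # Gloss proxy: fraction of pixels that look like specular highlights,
    # i.e., much brighter than the region's median brightness.
    gloss = float((luminance > 1.5 * np.median(luminance)).mean())
    # Opacity proxy: mean color saturation relative to a neutral (gray) lip.
    saturation = (pixels.max(axis=1) - pixels.min(axis=1)) / (pixels.max(axis=1) + 1e-6)
    opacity = float(saturation.mean())
    return {"color_rgb": mean_color.round().astype(int).tolist(),
            "gloss": gloss, "opacity": opacity}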
Alternatively, the content item provided by any of users 101-1, 101-2, and 101-3 via client devices 102-1, 102-2, and 102-3 may include an image with a visual representation of a beauty product itself. Content system 110 can detect the visual representation of the product and identify the portion of the product that is applied (e.g., removing areas showing packaging, etc.) as the region of interest. According to certain aspects of the present disclosure, the detection and identification of the region of interest can be performed by a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.). Alternatively, other approaches can be utilized, such as image segmentation algorithms, edge detection algorithms, etc. Areas in the image not identified as regions of interest can then be removed from the image, so as to crop the identified region of interest from the remainder of the image. For the remaining region of interest, certain parameters associated with the beauty product can be extracted. For example, for a content item displaying a lipstick, a color, a gloss, an opacity, etc. of the lipstick shown in the image can be extracted. Alternatively, for an eyeshadow, a color, an opacity, etc. associated with the eyeshadow can be extracted. Other parameters, such as, for example, a glitter, a glitter size, a glitter density, a shimmer, an intensity, etc. can also be extracted. For example, each pixel in the region of interest can be processed to extract parameters such as gloss, red-green-blue (RGB) values, etc. Additionally, the extracted parameters can vary for each type of beauty product.
After the parameters have been extracted, they can be compared against stored parameter values associated with stored beauty products. Based on the comparison of the parameters extracted from the content item with the stored parameters, a corresponding beauty product that may produce the same or a similar look/beauty aesthetic as shown in the content item can be identified. The identified beauty product can include, for example, the beauty product presented in the content item that was used to achieve the look presented therein. Alternatively and/or in addition, the identified beauty product may include beauty products that can provide the same, or a substantially similar, look/beauty aesthetic as that presented in the content item. Optionally, content system 110 can render the identified beauty products on a live-feed streaming video of any of users 101-1, 101-2, or 101-3 on client devices 102-1, 102-2, or 102-3. For example, content system 110 can receive, from any of client devices 102-1, 102-2, or 102-3, a streaming live-feed video of user 101-1, 101-2, or 101-3, and can then render any of the identified beauty products on that video so that the user can assess the appearance of the identified beauty products on his/her face in the live-feed streaming video.
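As one way to implement this comparison, the extracted parameters and the stored product parameters can be treated as vectors and ranked by distance. The sketch below is a minimal illustration; the catalog entries, the parameter layout (R, G, B, gloss, opacity), and the weights are all hypothetical.

import numpy as np

# Hypothetical catalog: each stored product has the same parameter layout
# as the extracted parameters (here: R, G, B, gloss, opacity).
catalog = {
    "matte_red_501":  np.array([176, 31, 46, 0.05, 0.95]),
    "gloss_pink_212": np.array([228, 112, 158, 0.70, 0.55]),
    "satin_nude_104": np.array([201, 141, 122, 0.35, 0.70]),
}

def match_products(extracted: np.ndarray, top_k: int = 3,
                   weights: np.ndarray = np.array([1, 1, 1, 50, 50])) -> list:
    """Rank catalog products by weighted Euclidean distance to the extracted
    parameter vector. The weights rescale gloss/opacity (0..1 ranges) to be
    commensurate with 0..255 color channels; the values are illustrative."""
    scored = [(name, float(np.linalg.norm((vec - extracted) * weights)))
              for name, vec in catalog.items()]
    return sorted(scored, key=lambda s: s[1])[:top_k]

# e.g., an extraction for a deep red, fairly matte, opaque lipstick:
print(match_products(np.array([180, 40, 50, 0.1, 0.9])))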
Content system 110 can also include a data store which can store and maintain a corpus of additional content items 120 which can include additional content items 120-1, 120-2, 120-3, and 120-4 and can be provided to users 101-1, 101-2, and 101-3 via client devices 102-1, 102-2, and 102-3. For example, content system 110 can provide the additional content items 120-1, 120-2, 120-3, and 120-4 to users 101-1, 101-2, and 101-3 via client devices 102-1, 102-2, and 102-3 as recommended content items that may include looks/beauty aesthetics that may be similar, be related in certain aspects, be a contrasting look, be a recommended look, be an alternative look, etc. to the look/beauty aesthetic presented in the content item provided by user 101-1, 101-2, or 101-3. Similar to the content item that may be provided by a user, each additional content item 120-1, 120-2, 120-3, and 120-4 of the corpus of additional content items 120 can also include, for example, any content (e.g., images, videos, etc.) that may include an image having a visual representation of at least a portion of a body part (e.g., a face, a hand, toes, nails, hair, etc.) with a look/beauty aesthetic. The look/beauty aesthetic can be composed of the application of one or more beauty products and/or styles (e.g., lipstick, eyeshadow, mascara, eyeliner, foundation, concealer, powder, blush, eye pencil, nail polish, toe polish, hair dye, hair style, hair extensions, hair perm, etc.) which, when combined, can contribute to an overall beauty aesthetic. Further, as with the content item provided by user 101-1, 101-2, or 101-3, each additional content item 120-1, 120-2, 120-3, and 120-4 of the corpus of additional content items 120 can be or may already have been processed to extract certain parameters associated with the look/beauty aesthetic presented therein. For example, additional content items 120-1, 120-2, 120-3, and 120-4 can be processed to extract parameters (e.g., a color, a shape, an opacity, an amount, a glitter, a glitter size, a glitter density, a shape of application, a shimmer, an intensity, etc.) associated with areas of interest in each respective additional content item 120-1, 120-2, 120-3, and 120-4 to identify a corresponding beauty product (e.g., lipstick, eyeshadow, mascara, eyeliner, foundation, concealer, powder, blush, eye pencil, nail polish, toe polish, hair dye, hair style, hair extensions, hair perm, etc.) presented in each respective additional content item 120-1, 120-2, 120-3, and 120-4. The extracted parameters can be associated and stored with each respective additional content item 120-1, 120-2, 120-3, and 120-4.
In providing additional content items 120-1, 120-2, 120-3, and 120-4 to users 101-1, 101-2, or 101-3, the parameters extracted from the user provided content item can be compared against the parameters extracted from the additional content items 120-1, 120-2, 120-3, and 120-4. The comparison can be made to identify looks/beauty aesthetics that may be similar, be related in certain aspects, be a contrasting look, be a recommended look, be an alternative look, etc. to the look/beauty aesthetic presented in the content item provided by user 101-1, 101-2, or 101-3. The identified additional content items 120-1, 120-2, 120-3, and 120-4 can then be provided to users 101-1, 101-2, or 101-3 on client devices 102-1, 102-2, or 102-3. Further, the looks/beauty aesthetics presented in the additional content items 120-1, 120-2, 120-3, and 120-4 provided to users 101-1, 101-2, or 101-3 can also be rendered on a live-feed streaming video or a selfie of the user so that the user can assess the appearance of the beauty products on his/her face in the live-feed streaming video or the selfie.
Optionally, according to certain aspects of the present disclosure, the processing of the additional content items 120-1, 120-2, 120-3, and 120-4 can include identifying a dominant skin tone associated with the look/beauty aesthetic presented in each of the additional content items 120-1, 120-2, 120-3, and 120-4 and classifying each additional content item 120-1, 120-2, 120-3, and 120-4 by skin tone to facilitate filtering of the additional content items 120-1, 120-2, 120-3, and 120-4 based on a desired skin tone. According to certain implementations, each additional content item 120-1, 120-2, 120-3, and 120-4 can be classified into one of four different skin tone classes (e.g., light, medium-light, medium-dark, and dark). Alternatively, any number of classification classes can be utilized depending on the granularity of classification that is desired (e.g., 2 classes, 3 classes, 4 classes, 5 classes, 6 classes, 7 classes, 8 classes, 9 classes, 10 classes, etc.). In classifying each additional content item 120-1, 120-2, 120-3, and 120-4 by skin tone, threshold values can be determined for each class of skin tone being utilized. As described further herein, the threshold values can be based on color space values, and the color space values for each additional content item 120-1, 120-2, 120-3, and 120-4 can be compared to the thresholds to classify each additional content item 120-1, 120-2, 120-3, and 120-4 into one of the prescribed classes. According to certain aspects of the present disclosure, the threshold values for each class can be determined using a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.).
In determining the skin tone, content system 110 can first identify the regions in each additional content item 120-1, 120-2, 120-3, and 120-4 that correspond to skin tone. Accordingly, for additional content items 120-1, 120-2, 120-3, and 120-4 that include a representation of a face, content system 110 can utilize a face detection algorithm to detect features, regions of interest, landmarks, etc. presented in each additional content item 120-1, 120-2, 120-3, and 120-4 to determine the pixels corresponding to the face shown in the additional content item 120-1, 120-2, 120-3, and 120-4. According to certain aspects of the present disclosure, the detection and identification of regions of interest can be performed by a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.). Alternatively, other approaches can be utilized, such as image segmentation algorithms, edge detection algorithms, etc. The remainder of the visual representation can be removed from the content item, so as to isolate the identified regions of interest (e.g., the skin regions of the face).
Alternatively, in additional content items 120-1, 120-2, 120-3, and 120-4 that do not include a visual representation of a face, a hue, saturation, value (HSV) model can be applied to ascertain the pixels that may correspond to skin tone in the additional content item 120-1, 120-2, 120-3, and 120-4. For example, this may be used in images of hands, nails, toes, feet, etc. Optionally, prior to the detection of pixels corresponding to skin tone, pre-processing of the image can be performed (e.g., to correct for exposure, etc.) to compensate for over-exposed or under-exposed images. Then, once the pixels corresponding to skin tone have been identified (e.g., either by facial detection or application of an HSV model), the RGB value for each identified pixel can be determined. The RGB values can then be used to determine a median RGB value, or a clustering algorithm can be applied, to obtain relevant RGB values for the skin tone regions of each additional content item 120-1, 120-2, 120-3, and 120-4. The RGB values can then be transformed into a color space, such as a lightness (L), A, B color space (LAB), where L can represent a lightness (e.g., on a scale from 0-100, with 0 corresponding to black and 100 corresponding to white), A can represent a red-green coloration, and B can represent a blue-yellow coloration. The LAB values for each additional content item 120-1, 120-2, 120-3, and 120-4 can then be utilized to classify each additional content item 120-1, 120-2, 120-3, and 120-4. For example, the classifications can be determined based on a lightness value in the LAB model, or based on a lightness value and the B value, which together can be used to determine an individual typology angle (ITA). The predetermined thresholds for each skin tone classification class can be based on a lightness value or an ITA value, and the lightness and/or ITA value for each additional content item 120-1, 120-2, 120-3, and 120-4 can be compared against the threshold values to classify each additional content item 120-1, 120-2, 120-3, and 120-4. The skin tone classification can then be associated and stored with each respective additional content item 120-1, 120-2, 120-3, and 120-4.
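For illustration, the sketch below converts a representative skin-tone RGB value to LAB with scikit-image and classifies it by ITA, where ITA = arctan((L - 50)/B) expressed in degrees. The four class labels mirror the example classes above, but the specific ITA breakpoints are assumptions; the disclosure leaves the threshold values to the implementation (e.g., determined by a trained network).

import numpy as np
from skimage.color import rgb2lab

# Assumed ITA breakpoints for the four example classes; the disclosure does
# not prescribe these values.
ITA_THRESHOLDS = [(41.0, "light"), (28.0, "medium-light"),
                  (10.0, "medium-dark"), (-np.inf, "dark")]

def classify_skin_tone(median_rgb: np.ndarray) -> str:
    """Classify a representative skin-tone RGB value by its ITA in LAB space."""
    # rgb2lab expects floats in [0, 1]; reshape the value to a 1x1 "image".
    lab = rgb2lab(median_rgb.reshape(1, 1, 3) / 255.0)[0, 0]
    L, b = lab[0], lab[2]
    # arctan2 matches arctan((L-50)/b) for the typical b > 0 skin region
    # and stays well-behaved when b is near zero.
    ita = np.degrees(np.arctan2(L - 50.0, b))
    for threshold, label in ITA_THRESHOLDS:
        if ita >= threshold:
            return label
    return "dark"

print(classify_skin_tone(np.array([224, 172, 105])))  # sample medium skin tone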
In view of the skin tone classifications, user 101-1, 101-2, or 101-3 can specify a skin tone, and only additional content items 120-1, 120-2, 120-3, and 120-4 having the skin tone specified by the user may be presented to the user. According to one aspect, a user's skin tone can be detected (in a manner similar to that described with respect to associating a skin tone with the additional content items) and only additional content items 120-1, 120-2, 120-3, and 120-4 having the detected skin tone may be presented to the user. Preferably, because of the potentially sensitive nature of this data, there may be circumstances in which a user's selection of a skin tone and/or the detection of a user's skin tone should not be stored, saved, and/or otherwise retained by content system 110, so as to protect the privacy of users.
According to certain exemplary embodiments, content item 200 can be provided by a user as a user provided content item of interest. According to embodiments of the present disclosure, a user provided content item can be processed to extract parameters associated with the overall look/beauty aesthetic shown in content item 200. For example, each of lips 212, eye regions 214, eyebrows 216, and hair 218 can be detected and isolated as a region of interest. The detection and identification of these regions of interest (e.g., lips 212, eye regions 214, eyebrows 216, and hair 218) can be obtained by utilizing facial detection algorithms. Alternatively and/or in addition, the detection and identification of the regions of interest can be obtained by utilizing image segmentation algorithms, edge detection algorithms, etc. or a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.). According to aspects of the present disclosure, content item 200 can include an image of another body part (e.g., hands, feet, fingernails, toenails, etc.) and/or a portion of a body part (e.g., a portion of a face, etc.).
After each region of interest has been isolated, certain parameters associated with the beauty product applied in each respective region of interest can be extracted. For example, for lips 212, a color, a gloss, an opacity, etc. of the lipstick applied to lips 212 in image 202 of content item 200 can be extracted. For eye region 214, a shape of application, a color, an opacity, a shimmer, etc. of the eyeshadow applied to eye region 214 in image 202 of content item 200 can be extracted. For eyebrows 216, a color, an opacity, a shimmer, etc. of the eyebrow pencil applied to eyebrows 216 in image 202 of content item 200 can be extracted. For hair 218, a color, highlights, a perm, a shimmer, etc. of the hairstyle applied to hair 218 appearing in image 202 of content item 200 can be extracted. In extracting the various parameters, each pixel in each of the regions of interest can be processed to extract parameters such as gloss, red-green-blue (RGB) values, etc. Other parameters, such as a glitter, a glitter size, a glitter density, a shape of application, a shimmer, an intensity, etc. can also be extracted. Additionally, the extracted parameters can vary for different regions of interest and/or for each type of beauty product that may be applied in the image shown in the content item.
After the parameters have been extracted for each region of interest, a user may opt to view a rendering of one or more of the beauty products applied to achieve the look/beauty aesthetic presented in content item 200 on a content item associated with him/herself (e.g., a live-feed video of him/herself, a selfie, etc.). The user can use a camera associated with a client device (e.g., client device 102-1, 102-2, or 102-3) to capture a streaming live-feed video of him/herself, capture a selfie, or provide an existing image, and the parameters extracted for each selected beauty product (e.g., the lipstick applied on lips 212, the eyeshadow applied on eye regions 214, the eyebrow pencil applied on eyebrows 216, and/or the hair style applied to hair 218) can be used to generate a rendering of the beauty product onto the user's face in the content item. This can allow the user to see how the selected looks/beauty aesthetics may appear on him/her. For example, the user can select to have the lipstick applied to lips 212 in content item 200 rendered on his/her own lips in a content item of him/herself.
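One simple way to realize such a rendering is to alpha-blend the extracted product color over the corresponding region of the user's frame, modulated by the original luminance so that lip texture shows through. The Python sketch below is a minimal illustration under that assumption; a production renderer would also model gloss highlights, glitter, and lighting.

import numpy as np

def render_lipstick(user_rgb: np.ndarray, lip_mask: np.ndarray,
                    color_rgb, opacity: float) -> np.ndarray:
    """Alpha-blend an extracted lipstick color over the user's lip region.

    user_rgb: HxWx3 uint8 frame (e.g., one live-feed video frame or a selfie);
    lip_mask: HxW boolean mask from an upstream landmark/segmentation step;
    color_rgb, opacity: parameters extracted from the source content item.
    """
    out = user_rgb.astype(np.float32)
    target = np.array(color_rgb, dtype=np.float32)
    # Keep the original per-pixel luminance so the lip texture is preserved.
    lum = out[lip_mask] @ np.array([0.299, 0.587, 0.114])
    shaded = target * (lum / max(lum.mean(), 1e-6))[:, None]
    # The extracted opacity drives a flat blend between original and product.
    out[lip_mask] = (1.0 - opacity) * out[lip_mask] + opacity * shaded
    return np.clip(out, 0, 255).astype(np.uint8)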
Alternatively and/or in addition to generating a rendering of the selected beauty products based on the extracted parameters, embodiments of the present disclosure can also identify specific beauty products that may create the same or a similar look/beauty aesthetic as the beauty products applied in the user supplied content item, so that they can be provided as a recommendation to the user. Continuing the exemplary embodiment in which a user provides and/or identifies content item 200 as a content item of interest, after the parameters have been extracted for each region of interest, one or more beauty products may be identified that may create the same or a similar look/beauty aesthetic as the beauty products shown in content item 200. For example, the parameters extracted for the lipstick applied on lips 212 can be used to identify one or more corresponding lipsticks that may recreate the look/beauty aesthetic, or a similar look/beauty aesthetic, of the lipstick applied on lips 212 and presented in content item 200. Similarly, the parameters extracted for the eyeshadow applied on eye regions 214, the eyebrow pencil applied on eyebrows 216, and/or the hair style applied to hair 218 can be used to identify one or more corresponding eyeshadows, eyebrow pencils, and/or hair styles, respectively, that may recreate the respective looks/beauty aesthetics, or similar looks/beauty aesthetics, presented in content item 200.
In addition to identifying individual beauty products, embodiments of the present disclosure can also identify additional content items (e.g., from additional content items 120-1, 120-2, 120-3, and 120-4) having an image with an overall look/beauty aesthetic based on the user provided content item 200 to provide as a recommendation to the user. According to certain exemplary embodiments, a plurality of additional content items (e.g., additional content items 120-1, 120-2, 120-3, and 120-4) can be stored and maintained in a data store (e.g., as a corpus of additional content items 120). Similar to the processing of user provided content item 200, each of the additional content items can be processed to extract certain parameters associated with the look/beauty aesthetic and/or beauty products presented in each additional content item. For example, each additional content item can be processed to detect and isolate regions of interest (e.g., lips, eye regions, cheeks, hair, fingernails, toenails, etc.). The detection and identification of these regions of interest can be obtained by utilizing facial detection algorithms. Alternatively and/or in addition, the detection and identification of the regions of interest can be obtained by utilizing image segmentation algorithms, edge detection algorithms, etc., or a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.).
After each region of interest has been isolated, certain parameters associated with the beauty product(s) applied in each respective region of interest can be extracted. For example, a color, a gloss, an opacity, a shape of application, a shimmer, highlights, a perm, an amount of application, a glitter, a glitter size, a glitter density, an intensity, etc. can be extracted from the various regions of interest depending on the type of beauty product that may have been applied in each respective region of interest. To extract the various parameters, each pixel in each of the regions of interest can be processed to extract parameters such as gloss, red-green-blue (RGB) values, etc. These extracted parameters can be associated and stored with each respective additional content item. Accordingly, similar to the identification of an individual beauty product, the parameters extracted from content item 200 can be compared against the stored parameters associated with each respective additional content item. The comparison can then identify additional content items with similar parameters that may produce the same or a similar overall look/beauty aesthetic as the look/beauty aesthetic presented in content item 200. Alternatively, the extracted parameters may be compared with the stored parameters to identify related, complementary, alternative, contrasting, etc. looks/beauty aesthetics relative to the look/beauty aesthetic presented in content item 200.
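This look-level comparison can be sketched as a per-region similarity aggregated over the regions two items share. The following Python example is one possible formulation; the region names, parameter vectors, and cosine-similarity scoring are assumptions, not details taken from the disclosure.

import numpy as np

def rank_similar_looks(query_params: dict, corpus: dict, top_k: int = 5) -> list:
    """Rank stored additional content items by look similarity.

    Each look is summarized as a dict mapping region name -> parameter vector
    (a hypothetical schema). Items missing regions the query has are penalized.
    """
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    results = []
    for item_id, item_params in corpus.items():
        shared = query_params.keys() & item_params.keys()
        if not shared:
            continue
        score = np.mean([cosine(query_params[r], item_params[r]) for r in shared])
        score *= len(shared) / len(query_params)  # penalize missing regions
        results.append((item_id, float(score)))
    return sorted(results, key=lambda s: s[1], reverse=True)[:top_k]

# e.g., a query with lip and eye parameter vectors vs. stored items:
query = {"lips": np.array([176, 31, 46, 0.1, 0.9]),
         "eyes": np.array([90, 60, 120, 0.4, 0.5])}
corpus = {"120-1": {"lips": np.array([180, 40, 50, 0.1, 0.8])},
          "120-2": {"lips": np.array([228, 112, 158, 0.7, 0.5]),
                    "eyes": np.array([95, 65, 115, 0.4, 0.5])}}
print(rank_similar_looks(query, corpus))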
Optionally, according to certain aspects of the present disclosure, the processing of the additional content items can also include identifying a dominant skin tone associated with the look/beauty aesthetic presented in each of the additional content items and classifying each additional content item by skin tone to facilitate filtering of the additional content items based on a desired skin tone. According to certain implementations, each additional content item can be classified into one of four different skin tone classes (e.g., light, medium-light, medium-dark, and dark). Alternatively, any number of classification classes can be utilized depending on the granularity of classification that is desired (e.g., 2 classes, 3 classes, 4 classes, 5 classes, 6 classes, 7 classes, 8 classes, 9 classes, 10 classes, etc.). In classifying each additional content item by skin tone, threshold values can be determined for each class of skin tone being utilized. As described further herein, the threshold values can be based on color space values, and the color space values for each additional content item can be compared to the thresholds to classify each additional content item into one of the prescribed classes. According to certain aspects of the present disclosure, the threshold values for each class can be determined using a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.).
In determining the skin tone, the regions in each additional content item that correspond to skin tone can first be identified. Accordingly, for the additional content items that include a representation of a face, a face detection algorithm can be utilized to detect features, regions of interest, landmarks, etc. presented in each additional content item to determine the pixels corresponding to the face presented in the respective additional content item. According to certain aspects of the present disclosure, the detection and identification of regions of interest can be performed by a trained network (e.g., a trained classifier, a machine learning system, a deep learning system, a trained neural network, etc.). Alternatively, other approaches such as image segmentation algorithms, edge detection algorithms, etc. can also be utilized.
Alternatively and/or in addition, for additional content items that do not include a visual representation of a face, an HSV model can be applied to ascertain the pixels that may correspond to skin tone in the respective additional content item. For example, this may be used in images of hands, nails, toes, feet, etc. Optionally, prior to the detection of pixels corresponding to skin tone (e.g., through facial detection or application of an HSV model), pre-processing of the image can be performed (e.g., to correct for exposure, etc.) to compensate for over-exposed or under-exposed images. After the pixels corresponding to skin tone have been identified (e.g., either by facial detection or application of an HSV model), the RGB value for each identified pixel can be determined. According to certain aspects of the present disclosure, the RGB values for each pixel can then be used to determine a median RGB value for the skin tone regions of each additional content item. Alternatively, clustering algorithms can be applied to extract relevant RGB values. The RGB values can then be transformed into a color space, such as a lightness (L), A, B color space (LAB), where L can represent a lightness value (e.g., on a scale from 0-100, with 0 corresponding to black and 100 corresponding to white), A can represent a red-green coloration, and B can represent a blue-yellow coloration. The LAB values for each additional content item can then be utilized to classify each additional content item. For example, the classifications can be determined based on a lightness value in the LAB model, or based on a lightness value and the B value, which together can be used to determine an ITA associated with each additional content item. The predetermined thresholds for each skin tone classification class can be based on a lightness value or an ITA value, and the lightness and/or ITA value for each additional content item can be compared against the threshold values to classify each additional content item. The skin tone classification can then be associated and stored with each respective additional content item.
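As a concrete illustration of the HSV step, the sketch below selects candidate skin pixels with OpenCV and returns their per-channel median RGB. The HSV bounds are a common rough heuristic rather than values given by the disclosure, and the code assumes the exposure pre-processing described above has already been applied.

import cv2
import numpy as np

def median_skin_rgb(image_rgb: np.ndarray) -> np.ndarray:
    """Find skin-tone pixels with an HSV heuristic and return their median RGB.

    image_rgb: HxWx3 uint8 image in RGB channel order. The HSV bounds below
    are a rough, commonly used heuristic and are assumptions of this sketch.
    """
    hsv = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2HSV)  # OpenCV hue range: 0-179
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper).astype(bool)
    if not mask.any():
        raise ValueError("no skin-tone pixels found")
    return np.median(image_rgb[mask], axis=0)  # per-channel median RGB value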
According to embodiments of the present disclosure, a user can specify a target skin tone in connection with any additional content items that may be provided to the user as an identified and/or recommended look/beauty aesthetic, such that only additional content items classified with the skin tone specified by the user are presented to the user.
Regarding the processes 600, 700, and 800 described above with respect to
Of course, while these routines and/or processes include various novel features of the disclosed subject matter, other steps (not listed) may also be included and carried out in the execution of the subject matter set forth in these routines, some of which have been suggested above. Those skilled in the art will appreciate that the logical steps of these routines may be combined or may comprise multiple steps. Steps of the above-described routines may be carried out in parallel or in series. Often, but not exclusively, the functionality of the various routines is embodied in software (e.g., applications, system services, libraries, and the like) that is executed on one or more processors of computing devices, such as the computing device described in regard
As suggested above, these routines and/or processes are typically embodied within executable code blocks and/or modules comprising routines, functions, looping structures, selectors and switches such as if-then and if-then-else statements, assignments, arithmetic computations, and the like that, in execution, configure a computing device to operate in accordance with the routines/processes. However, the exact implementation in executable statements of each of the routines is based on various implementation configurations and decisions, including programming languages, compilers, target processors, operating environments, and the linking or binding operation. Those skilled in the art will readily appreciate that the logical steps identified in these routines may be implemented in any number of ways and, thus, the logical descriptions set forth above are sufficiently enabling to achieve similar results.
While many novel aspects of the disclosed subject matter are expressed in routines embodied within applications (also referred to as computer programs), apps (small, generally single or narrow purposed applications), and/or methods, these aspects may also be embodied as computer executable instructions stored by computer readable media, also referred to as computer readable storage media, which are articles of manufacture. As those skilled in the art will recognize, computer readable media can host, store and/or reproduce computer executable instructions and data for later retrieval and/or execution. When the computer executable instructions that are hosted or stored on the computer readable storage devices are executed by a processor of a computing device, the execution thereof causes, configures and/or adapts the executing computing device to carry out various steps, methods and/or functionality, including those steps, methods, and routines described above in regard to the various illustrated implementations. Examples of computer readable media include, but are not limited to: optical storage media such as Blu-ray discs, digital video discs (DVDs), compact discs (CDs), optical disc cartridges, and the like; magnetic storage media including hard disk drives, floppy disks, magnetic tape, and the like; memory storage devices such as random-access memory (RAM), read-only memory (ROM), memory cards, thumb drives, and the like; cloud storage (i.e., an online storage service); and the like. While computer readable media may reproduce and/or cause to deliver the computer executable instructions and data to a computing device for execution by one or more processors via various transmission means and mediums, including carrier waves and/or propagated signals, for purposes of this disclosure, computer readable media expressly excludes carrier waves and/or propagated signals.
Regarding computer readable media,
As will be appreciated by those skilled in the art, the memory 1004 typically (but not always) includes both volatile memory 1006 and non-volatile memory 1008. Volatile memory 1006 retains or stores information so long as the memory is supplied with power. In contrast, non-volatile memory 1008 is capable of storing (or persisting) information even when a power supply is not available. Generally speaking, RAM and CPU cache memory are examples of volatile memory 1006 whereas ROM, solid-state memory devices, memory storage devices, and/or memory cards are examples of non-volatile memory 1008.
As will be further appreciated by those skilled in the art, the processor 1002 executes instructions retrieved from the memory 1004, from computer readable media, such as computer readable media 908 of
Further still, the illustrated computing system 1000 typically also includes a network communication interface 1012 for interconnecting this computing system with other devices, computers and/or services over a computer network, such as network 108 of
The exemplary computing system 1000 further includes an executable task manager 1018. As described, task manager 1018 can include beauty aesthetic extractor 1020, skin tone extractor 1022, beauty aesthetic rendering engine 1024, and additional content identification engine 1026. Task manager 1018 may be operable to deliver content to devices, receive information from devices, and/or perform one or more of the routines 600, 700, and/or 800. In some implementations, the content system may provide additional content items 1016 from a digital content items store 1014 for presentation on a user device. As discussed above, the content system may also determine which potential advertisements to provide for presentation to a user on a user device.
It should be understood that, unless otherwise explicitly or implicitly indicated herein, any of the features, characteristics, alternatives or modifications described regarding a particular implementation herein may also be applied, used, or incorporated with any other implementation described herein, and that the drawings and detailed description of the present disclosure are intended to cover all modifications, equivalents and alternatives to the various implementations as defined by the appended claims. Moreover, with respect to the one or more methods or processes of the present disclosure described herein, including but not limited to the flow charts shown in
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey in a permissive manner that certain implementations could include, or have the potential to include, but do not mandate or require, certain features, elements and/or steps. In a similar manner, terms such as “include,” “including” and “includes” are generally intended to mean “including, but not limited to.” Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular implementation.
The elements of a method, process, or algorithm described in connection with the implementations disclosed herein can be embodied directly in hardware, in a software module stored in one or more memory devices and executed by one or more processors, or in a combination of the two. A software module can reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, a DVD-ROM or any other form of non-transitory computer-readable storage medium, media, or physical computer storage known in the art. An example storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The storage medium can be volatile or nonvolatile. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” or “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain implementations require at least one of X, at least one of Y, or at least one of Z to each be present.
Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
Language of degree used herein, such as the terms “about,” “approximately,” “generally,” “nearly” or “substantially” as used herein, represent a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms “about,” “approximately,” “generally,” “nearly” or “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and within less than 0.01% of the stated amount.
Although the invention has been described and illustrated with respect to illustrative implementations thereof, the foregoing and various other additions and omissions may be made therein and thereto without departing from the spirit and scope of the present disclosure.
This application is a continuation application of U.S. patent application Ser. No. 16/906,088, which was filed on Jun. 19, 2020, and is hereby incorporated by reference herein in its entirety.
Relationship | Number | Date | Country
Parent | 16/906,088 | Jun. 19, 2020 | US
Child | 17/564,004 | | US