1. Field of the Invention
One or more embodiments of the invention described herein pertain to the field of computer systems. More particularly, but not by way of limitation, one or more embodiments of the invention enable the rendering of a computer graphical user interface for the selection of options from option groups. In at least one context the graphical user interface provides users with screen elements that enable the efficient selection of context appropriate colors that are applied to an image in a corresponding screen region.
2. Description of the Related Art
Computer systems have made longstanding use of various systems for enabling users to affect the color of what is displayed on the computer screen. Early programs allowed a user to draw shapes and text in rudimentary form. While the earliest incarnations of these graphic design programs functioned only in black and white, color options were added as computers developed. These early applications, for example PC Paintbrush for Microsoft's Disk Operating System (DOS), created in 1985, allowed a user to select a drawing color from a palette. The user was presented with a variety of colors to choose from in rowed boxes, where different shades of color were represented by a mix of pixels using a 16-color Enhanced Graphics Adapter and compatible display.
As graphical programs continued to develop, more color options became available, and truer representation of the color spectrum developed. A number of formats for color selection exist, including the rowed format carried forward from early graphics programs into later versions, such as Microsoft's Paint for Windows XP. Customization of these color selectors allows a user to move a point through a large square that contains certain colors, providing access to the millions of shades in between. The user may create a desired color by selecting hue, saturation and luminosity values for a color. While some vary, most color selection user interfaces use this method for color selection. Less commonly, a hexadecimal value associated with a color that is recognized by Hypertext Markup Language (HTML) browsers may be entered to specify a desired color.
Color selection interfaces, however, generally lack an ability to allow a user to select a color based on the availability of the color and/or the appropriateness of a color in a given situation. Color selection interfaces that allow for the grouping of colors into palettes often do so in a fashion that is arbitrary with respect to the shades themselves. For example, many group all reds together into a single palette.
The interfaces provided for color selection are generally formulaic and vary little from program to program. Almost all involve a process requiring a large amount of user experimentation to create the desired color based on manual selection of the hue, saturation and luminosity (HSL) values or Red, Green, Blue (RGB) values. Others simply provide a limited supply of colors. These interfaces also lack the ability to allow a user to select from a wide variety of named, context-appropriate colors, which is necessary in an interface for selecting colors for transference to a photographic image. For example, if a user picks a shade of orange from an HSL color selection interface, the interface is unable to provide a name for the color that would enable said user to go to a local hardware store and purchase a matching paint color with which to paint a house.
Interfaces for color selection also lack a coherent and ordered method of tracking recent selections within the interface, while maintaining the context appropriate application of these selections as described above.
For at least the limitations described above there is a need for a computer graphical user interface for the selection of options from option groups in various contexts such as the one described in further detail throughout this document.
One or more embodiments of the invention are directed to a graphical user interface for the selection of options from option groups. By way of example and not by limitation, the graphical user interface described herein provides users with an arrangement of screen elements that enables the user to make color choices within a particular context and apply the selected colors to an image.
In one or more embodiments of the invention, a graphical user interface may enable a user to see the results of applying makeup to an image of a person's face. This interface may offer multiple tabs to enable a user to select a particular section of a person's face to which the user chooses to apply makeup. The interface may enable a user to select the color palette of the makeup to be applied through a circular region with a series of option group tabs. Each option group tab has a group of associated colors. The outer portion of the circular region, or the flywheel, may have multiple color segments. The color segments are arranged so that adjacent color segments may be perceived to be most similar. The inner portion of the circle may present the history of the colors previously chosen and may have an uncolored center circle, and a series of uncolored circles surrounding the center circle.
In one or more embodiments of the invention, when a user selects an option group tab, a new group of color segments may be presented in the flywheel portion of the circle. As the user moves and holds the cursor above a colored segment, the size of that segment may expand to allow the user to see the color more clearly. When the user clicks on a color segment, the selected section of the person's face changes to the color of the color segment. In addition, the selected color fills the center circle. When the user selects a different color, the center circle may be filled with the newly selected color and one of the circles surrounding the center circle may then be filled with the previously selected color. As the user selects additional colors, the center circle and the circles surrounding it present the history of the previously selected color choices.
In the example described here, the graphical user interface components and the methods enabling the graphical user interface components to operate as described are illustrated via a virtual makeover. Thus, while not limited solely to such an implementation, one or more embodiments of the invention are directed to providing users with a graphical user interface that enables the users to apply virtual makeup to an image. Users may, for instance, upload a picture of themselves and use the graphical user interface components described herein to apply virtual makeup to the image. Hence in at least one embodiment of the invention, users utilize the graphical user interface components to make color choices about various color shades of makeup such as foundation, concealer, blush, eye-shadow, eye-liner, mascara, lipstick, lip liner, lip gloss, and contact lenses. Color choices are made by a user as to what color of makeup to apply, and the system renders the chosen color to the image.
The user's color choices and the context within which the choices were made are retained in a recent selection screen region of the interface. The method described here is not limited as to the type of computer it may run upon and may for instance operate on any generalized computer system that has the computational ability to execute the methods described herein and can display the results of the users' choices on a display means. The computer typically includes at least a keyboard, a display device such as a monitor, and a pointing device such as a mouse. The computer also typically comprises a random access memory, a read only memory, a central processing unit and a storage device such as a hard disk drive. In some embodiments of the interface, the computer may also comprise a network connection that allows the computer to send and receive data through a computer network such as the Internet. The invention may be embodied on mobile computer platforms such as cell phones, Personal Digital Assistants (PDAs), kiosks, game consoles or any other computational device.
The term option as used here refers to a choice that is selectable by a user for application of said option to a subject. For instance, in one embodiment, the options are colors associated with facial makeup that are applied to a subject image, and the context for the options provided depends on the part of the image the color is to be applied to. In one or more embodiments of the invention the system is able to dynamically generate the options presented to the user and may, for instance, determine what colors to lay out on the fly based on the user's selection. The number of colors and the particular colors to be displayed are generally dictated by user choice. If a user asks to see only a certain type, category or subcategory of makeup, the system renders the color choices based on the user's input. For instance, if the user selects only organic, SPF foundations, the system supporting the graphical user interface obtains the makeup that is classified as such, determines the colors to be displayed using corresponding color data associated with those makeup choices and renders the choices within the graphical user interface for selection by the user.
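The dynamic option generation described above may be sketched, by way of non-limiting illustration, as a simple filter over a product catalog. The function name, the attribute keys ("category", "organic", "spf") and the sample color values below are assumptions for illustration only, not part of any embodiment:

```python
# Illustrative sketch: derive the colors to display from the subset of
# catalog products matching the user's criteria. All field names are assumed.

def colors_for_selection(products, **criteria):
    """Return the display colors for products matching every criterion."""
    matches = [p for p in products
               if all(p.get(k) == v for k, v in criteria.items())]
    return [p["color"] for p in matches]

catalog = [
    {"category": "foundation", "organic": True,  "spf": True,  "color": "#E8C4A0"},
    {"category": "foundation", "organic": False, "spf": True,  "color": "#D9A97C"},
    {"category": "lipstick",   "organic": True,  "spf": False, "color": "#B03050"},
]

# A user who asks only for organic, SPF foundations sees only matching colors.
print(colors_for_selection(catalog, category="foundation", organic=True, spf=True))
```

The flywheel would then be populated with whatever colors the filter returns, so the displayed options track the catalog rather than a fixed palette.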
The visual presentation used within the graphical user interface to present color options to the user can vary depending upon which embodiment of the invention is implemented. In at least one embodiment of the invention a circular color flywheel type interface is utilized to present the color choices. The colors displayed on the color flywheel show the user what color choices are available for application to the image. The user may change the colors displayed on the color flywheel by defining or selecting a new option group. Each option group has an associated collection of colors and when the option group is active, the associated colors are displayed on the color flywheel. In the center portion of the color flywheel information about the operations and color choices already made by the user is displayed using a collection of circles. The center most circle indicates which color is active and presently applied to the associated image and the various circles surrounding this active circle depict what other colors have been or may be applied to the image. One advantage of using a color flywheel to implement one or more embodiments of the invention is that the colors on the color flywheel can be determined on the fly as was mentioned above and will be more fully described throughout this document.
The method described herein in the context of a graphical user interface enables users to visually glance at a history view as the history data is collected and gathered based upon user selections within a specific set of option groups. As new options are selected, the interface presents and distinguishes between the currently applied and selected option as well as options that have been recently selected through the use of dynamic graphical representation. A number of items are kept in the history, and this number can be as large or as small as is called for in any implementation of an embodiment of the invention. The interface moves in sequence from the most recently selected to the earliest selected option. Upon the interface history being filled, the earliest selected option is removed from the list to make room for new selections. One or more embodiments of the invention are also able to recognize a new selection by a user that is already in the recent history and has not been removed, and are able to re-sort the history instead of duplicating the history entry and taking up a second history position on the interface.
The interface is able to retain its history through navigation. As a user navigates through option groups that are associated with the same subject area of application, the history persists, allowing selections by a user to be retained and referred back to even between option groups. A new history is created once the user navigates to a new subject area, and the graphical representation of the history is blanked. However, should a user choose to return to the previous subject area, the history for that area would once again become available. The history may also persist across user sessions.
In one embodiment used for example where the interface is configured to present makeup options for application to an image of a face, a user selecting a set of lipstick colors for application to the face would see their most recent selections represented on the recently selected history regions of the interface. Were the user to choose to select eye shadow colors instead, the interface would be repopulated with options appropriate to eye shadows, and a new history would be created based on those selections. Navigating back to the lipstick options would repopulate both the interface with options and the previous history.
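The per-subject-area persistence described above may be sketched as a mapping from context keys to history lists. The class name and the string context keys ("lipstick", "eye shadow") below are illustrative assumptions only:

```python
# Illustrative sketch: each subject area keeps its own selection history,
# so navigating away and back restores the earlier selections.

class HistoryStore:
    def __init__(self):
        self._histories = {}   # context key -> list of selections
        self.active = None

    def switch(self, context):
        """Navigate to a subject area; its history (possibly empty) becomes active."""
        self.active = context
        return self._histories.setdefault(context, [])

    def record(self, selection):
        """Append a selection to the active subject area's history."""
        self._histories[self.active].append(selection)

store = HistoryStore()
store.switch("lipstick")
store.record("#B03050")
store.record("#C25060")
store.switch("eye shadow")       # new subject area: blank history
store.record("#7080C0")
lip = store.switch("lipstick")   # returning restores the earlier history
print(lip)                       # ['#B03050', '#C25060']
```

The same store could be serialized to disk to persist histories across user sessions, as the text contemplates.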
Context sensitivity with respect to the operations to be applied to the image is incorporated into one or more embodiments of the invention. For instance, when an image of a person is obtained by the system, the system processes the image using facial detection algorithms to identify the various parts of a person's face such as the eyes, nose, mouth, skin, cheeks, chin, hair, eyebrows, ears, or any other anatomically identifiable feature. This facial information is stored by the system for later use. When the user is later working on applying color changes to the uploaded image, the information obtained about the image, such as where the eyes are located, what part of the image is skin, and where the lips are located, is used to present a limited set of operations to the user based on the context within which a particular set of tasks may be applied. The system is configured to present operations to the user that are relevant to the location on the image where the user has positioned the cursor or has otherwise selected. When the cursor is located over the eye, for instance, a mouse click presents operations that are eye specific. When the cursor is positioned over the lips the operations are lip specific (e.g., relate to the application of lipstick, lip liner, or lip gloss) and when the cursor is over a relevant portion of the skin the operations presented coincide with the location of the cursor (e.g., relate to foundation or concealer). The location within the image where context sensitive menus such as the ones described are presented depends upon the particular face within the image and is driven by the system making use of an automatic facial detection algorithm or by having the user manually identify key anatomical features of the face.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The above and other aspects, features and advantages of the invention will be more apparent from the following more particular description thereof, presented in conjunction with the following drawings wherein:
A graphical user interface for the selection of options from option groups and method for implementing the same will now be described. In the example given here the invention is described in the context of a graphical user interface component that is used to enable users to select and apply virtual makeup to an image. In the following exemplary description numerous specific details are set forth in order to provide a more thorough understanding of embodiments of the invention. It will be apparent, however, to an artisan of ordinary skill that the present invention may be practiced without incorporating all aspects of the specific details described herein. In other instances, specific features, quantities, or measurements well known to those of ordinary skill in the art have not been described in detail so as not to obscure the invention. Readers should note that, although examples of the invention are set forth herein, the invention is not limited to the specific examples given in that the claims, and the full scope of any equivalents, are what define the invention.
One or more embodiments of the invention are implemented in the context of a graphical user interface for selection of options from option groups and methods related to the same. This disclosure relies in part on the novel programming algorithms and user interface elements discussed in two co-pending U.S. Patent applications filed on the same day and owned by the same assignee. These applications are entitled “SYSTEM AND METHOD FOR CREATING AND SHARING PERSONALIZED VIRTUAL MAKEOVER,” Ser. No. ______, filed 17 Mar. 2009, hereinafter known as the “Personalized Makeovers” co-pending patent application and “METHOD OF MONETIZING ONLINE PERSONALIZED BEAUTY PRODUCT SELECTIONS”, Ser. No. ______, filed 17 Mar. 2009, hereinafter, the “Monetizing” co-pending patent application. These patent applications are hereby incorporated by reference into this specification.
Interface Enabling Dynamic Layout of Options or Color Within a Group for Application to an Image
In the figure depicted as an example of one or more embodiments of the invention, a color flywheel type format is used to display the available color choices to the user. It will be apparent, however, to those of ordinary skill in the art that any geometric shape, or combination of shapes, can be used as an alternative to a circle; hence, while some of the advantages described herein result from the circular color flywheel format, other shapes such as an ellipse, a triangle, or a polygon are feasible to implement. In one or more embodiments of the invention, a circular format may consistently generate the same interface regardless of the number of colors within the grouping. In one or more embodiments of the invention, a square format may be employed such that the size of the segments may change to accommodate more colors.
As shown in
In the example given at
Currently selected screen region 104, which in the example depicted here is located in the middle of the user interface, allows for the display of the currently selected option. Screen elements around this middle currently selected screen region 104 contain a history of color values identifying recent selections made by the user, as shown in this example at 105-112.
Upon activation of the interface the user is presented with options dynamically generated or preloaded by the application. Once a user enters the application, the option group selection unit is populated at 101 with options the system determined to be relevant. This determination is made by performing a lookup of the options (e.g., color values, etc.) associated with a specific option group. In the examples provided, the options provided in the Option Selector device of
A generalized interface for making use of the graphical user interface depicted in
The color options presented to a user are therefore provided based on the context in which they are to be applied. In the makeup example the palettes and colors presented to the user relate to eye makeup, foundations, or other makeup-type applications. Outside the context of the makeup example described herein, the methods described here may be applied to other situations where there is a grouping that has options for application to an image, for example the planned decoration of a home where colors are applied based on a desired 'theme' or 'look'. Applying colors to an image is not the only embodiment of the invention, as textures, patterns, text, fonts, and any other form of graphically represented information can be applied as option groups and options for this interface.
As previously mentioned, parts of the interface comprise a history region. Circular region 103 depicts the history interface in accordance with one or more embodiments of the invention, where 104 through 112 represent specific positions in the history, ranging from the most currently selected option, through a number of recently selected options. Upon selection of an option (e.g. color value) from the interface (e.g., color flywheel) and the application of this option to the associated subject image, the selected option is shown at currently selected screen region 104, which is used to represent the ‘currently applied’ selection. In keeping with our provided example embodiment, a lipstick color selected by the user and applied to the subject image would then display on the applied image as well as in current selected screen region 104.
Arrangement and Ordering of Colors
The flywheel is part of a graphical user interface where users select a color from a finite number of colors, typically between 50 and 200 colors. Rather than displaying colors in an arbitrary order, it may be preferable for the colors to be ordered in a fashion where successive colors appear similar and where families of colors are proximal. In the virtual makeover application, it is necessary to layout colors on the flywheel in real-time because the selection of colors to be presented to a user may vary over time (e.g., the collection of products being offered by merchants may change, or a user may only want to see colors of lipstick from her two favorite brands).
Colors lie in a 3-dimensional space, often specified in terms of the three primaries red, green and blue (RGB) or by their Hue, Saturation and Luminance (HSL). In mapping colors to the flywheel, we are projecting colors from a 3-D space onto a 1-D circle. Unlike the standard color wheel, which is 2-dimensional and where colors are typically organized in polar coordinates with the hue defining the angle and the saturation defining the radius, the flywheel data structure may be, for example, a one-dimensional sorted circular doubly linked list of colors. Rather than performing this dimensionality reduction in a data-independent way (e.g., projecting to the flywheel using only Hue), the projection may be data-dependent, using the set of input colors to determine the projection. This is important since the colors being displayed in a flywheel for certain applications (e.g., display of cosmetics colors) may not be uniformly distributed through the color space, but are often highly clustered. For example, lipsticks may contain many red-toned colors, but very few blue, yellow and green tones.
To assign a discrete set of N colors to N positions on a flywheel display, we define a cost function

Cost = sum over i from 1 to N of <color(i), color(i+1)>, with color(N+1) taken to be color(1),
where color(i) is the color assigned to the i-th position on the flywheel, and where <x,y> denotes the distance in color space between colors x and y. For the subsequent method, the distance between colors could be any metric. In one implementation, the metric may be the square of the Euclidean distance in the RGB color space. Finding the assignment of colors to positions that minimizes this cost function is a combinatorial optimization problem. Finding the global minimum of this cost function is computationally expensive, and so a greedy algorithm can be used which is fast, but is not guaranteed to find the global minimum. A greedy algorithm estimates the solution of a problem by making the locally optimal choice at each stage with the hope of finding the global optimum. In one or more embodiments of the invention, a color palette has multiple colors. A circular doubly linked data structure has nodes which may contain, for example, information describing each color's coordinates in color space. A greedy algorithm may be employed to sort the circular doubly linked data structure, re-ordering the colors so that colors perceived to be most similar are adjacent. Once the greedy algorithm completes the sorting process, the flywheel display may then be populated with the color in the first node filling the first screen region in the flywheel display, the color in the second node filling the second screen region, and so forth.
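The data structure and cost function described above may be sketched as follows. The class and function names are illustrative assumptions; the metric is the squared Euclidean RGB distance suggested above:

```python
# Illustrative sketch: a circular doubly linked list of colors, plus the
# adjacency cost that the greedy sort tries to minimize.

class ColorNode:
    def __init__(self, rgb):
        self.rgb = rgb                 # (r, g, b) coordinates in color space
        self.prev = self.next = self   # links; wired up by make_ring()

def make_ring(colors):
    """Link a list of RGB triples into a circular doubly linked list."""
    nodes = [ColorNode(c) for c in colors]
    n = len(nodes)
    for i, node in enumerate(nodes):
        node.next = nodes[(i + 1) % n]
        node.prev = nodes[(i - 1) % n]
    return nodes[0]

def dist(a, b):
    """Squared Euclidean distance in RGB space."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def total_cost(head):
    """Sum of <color(i), color(i+1)> around the ring, wrapping at the end."""
    cost, node = 0, head
    while True:
        cost += dist(node.rgb, node.next.rgb)
        node = node.next
        if node is head:
            break
    return cost
```

Any other color metric (e.g., a perceptual distance) could be substituted for `dist` without changing the rest of the structure.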
At block 151, variable integer "n" is set to 1. At block 152, the Local Cost is calculated for colors n−1, n, n+1, and n+2. The Local Cost is the change in total cost that would result from swapping color(n) and color(n+1): it is calculated by adding the distance in color space between color(n−1) and color(n+1) to the distance in color space between color(n) and color(n+2), then subtracting the distance in color space between color(n−1) and color(n) and subtracting the distance in color space between color(n+1) and color(n+2). In one or more embodiments of the invention, the calculation may be executed in a tangible memory medium or on a microprocessor-based computer, for example.
At block 153, the value of the Local Cost may be considered. Should the value of the Local Cost be less than zero ("0"), the flow may divert to block 154. Should the value of the Local Cost not be less than zero, the flow will divert to block 155. At block 154, the order of color(n) and color(n+1) may be swapped. At block 155, the variable integer n is incremented by one. At block 156, the value of "n" is compared to that of "N." When the value of "n" exceeds the value of "N," the process flow is diverted to block 158. When the value of "n" does not exceed the value of "N," the process flow is diverted to block 157. At block 157, if the accumulated time for the process exceeds a timeout value, the process flow is diverted to block 158. If the accumulated time for the process does not exceed a timeout value, the process flow is diverted to block 152.
At block 158, the flywheel display may be displayed. The first screen region may be filled with color(1), the second screen region with color(2), and so forth, for example. In one or more embodiments of the invention, the flywheel may be displayed on a display monitor, for example.
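The flow of blocks 151-158 may be sketched as follows. For clarity this sketch operates on a plain Python list with wrapping indices rather than the linked structure, and it repeats passes until no swap lowers the cost or the timeout expires (one reasonable reading of the loop back to block 152); these details, and the function names, are assumptions:

```python
import time

def dist(a, b):
    """Squared Euclidean distance in RGB space, as in the implementation above."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def greedy_sort(colors, timeout=1.0):
    """Greedy pass of blocks 151-158: swap adjacent colors whenever doing so
    lowers the total adjacency cost around the ring."""
    colors = list(colors)
    N = len(colors)
    deadline = time.monotonic() + timeout      # block 157's timeout check
    improved = True
    while improved and time.monotonic() < deadline:
        improved = False
        for n in range(N):                     # blocks 152-156, indices wrap
            a = colors[(n - 1) % N]
            b = colors[n]
            c = colors[(n + 1) % N]
            d = colors[(n + 2) % N]
            # Change in cost if b and c were swapped: edges gained minus lost.
            local_cost = (dist(a, c) + dist(b, d)) - (dist(a, b) + dist(c, d))
            if local_cost < 0:                 # blocks 153-154: swap improves
                colors[n], colors[(n + 1) % N] = c, b
                improved = True
    return colors
```

Because each accepted swap strictly lowers the total cost, the pass terminates; it finds a local, not necessarily global, minimum, as the text notes.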
Color Selection History
As previously mentioned, the user interface has the ability to populate option groups on a context-dependent basis. In the makeup example, color palettes and the associated colors are populated based on the specific area of the subject image that colors are intended to be applied to. In this, and any other embodiments of the invention, the circular region 103 is capable of persistently remembering selected options during navigation of other palettes. So if a user were to pick three colors from one palette under ‘lipstick,’ subsequently change palettes and pick three more colors, all six would be displayed as recent selections in the history. This persistence is capable of surviving during navigation of other areas of the interface. As previously mentioned the palette and option group tab 101 and flywheel display segments 102a through 102ff are repopulated with new options upon the selection of a new detail. The navigation to a new region, for example ‘lip liner,’ would create a new ‘history’ to repopulate, and previous selections from the lipstick would no longer display in circular region 103. A new history would be created based on user selections under lip liner. However, should the user return to the lipstick detail, the palette and option group tab 101 and flywheel display segments 102a through 102ff would be repopulated, and the previous history associated with that detail would once again be displayed. This allows for a user to navigate through extensive options and sub options, while keeping the histories for each section separate and persistent through navigation without requiring separate instances of the interface for each detail being used. It will be apparent to the reader that the use of histories that are connected to each group of palettes and color groups can be applied to any embodiment where multiple option groups are available depending on the context of selection to be made. 
For example, wall colors retained in a history would be persistent and remembered through navigation of other areas of home decoration such as wallpaper options.
The interface is also able to recognize whether an option being selected by a user is already in the recent history (recently selected screen regions 105-112), and is able to reorder the selections within the history region to represent the new selection order without duplicating the selection in the history or unnecessarily dropping a selection from recently selected screen region 112.
While the following paragraph may appear duplicative, the intention of this paragraph and accompanying figure is to illustrate a higher level of abstraction at which this invention can operate, with a number of options, option groups and history interfaces dependent on the context in which the invention is applied. At step 301 the interface obtains the option group selections and populates flywheel display segments 102a through 102ff with the options associated with the selected option group. At step 302, the selection of an option from flywheel display segment 102a applies said option to the associated subject, and displays the selected option, or its relevant representation, in the 'currently selected' screen region of 104. At step 303, the method then proceeds through decisions relating to the history region of the interface. At step 304 the interface determines whether a history needs to be built, or whether it already exists. A history will need to be built if more than one option has been selected from at least one of the option groups provided as described above. In the event that a history does not exist and is required, at steps 305 through 306 the interface can perform the application of a newly selected option to the associated subject, representing the currently selected option in the currently selected region of 104 while moving the last selected option to the recently selected region of 105. In the event that a history already exists, a further decision must be made depending on whether the history regions have already been filled with options. This decision occurs at step 307. Steps 308 and 309 occur in the event that the history has not already been filled, and perform the application of a newly selected option to the subject, rotating the option history around circular region 103 to make room for a new addition to the list.
Steps 310 through 312 occur in the event that the history region has already been filled, and perform the same function as above but instead remove the earliest option from the list in order to free up a position. For example, if recently selected screen regions 105-112 were filled with option selections and a user selects a new option, the new option is displayed in currently selected screen region 104 as currently applied, while the last selected option moves to 105. All of the other recently selected options rotate around, with the option occupying recently selected screen region 112 being removed from the list to make room. At the end of the decisions at 306, 309 or 312, the interface then loops the decision process.
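The history behavior of steps 301-312 may be sketched as a fixed-capacity most-recently-used list, with the recent slots corresponding to regions 105-112 and the current slot to region 104. The class name and the use of a bounded deque are illustrative assumptions:

```python
from collections import deque

class SelectionHistory:
    """Sketch of the history rotation: a current slot (region 104) plus a
    bounded list of recent selections, newest first (regions 105-112)."""

    def __init__(self, capacity=8):
        self.current = None                    # region 104
        self.recent = deque(maxlen=capacity)   # regions 105-...; oldest drops off

    def select(self, option):
        if option == self.current:
            return                             # re-selecting the applied option
        if option in self.recent:
            self.recent.remove(option)         # reorder instead of duplicating
        if self.current is not None:
            self.recent.appendleft(self.current)  # rotate history around
        self.current = option
```

The bounded deque drops its oldest entry automatically when full, mirroring the removal of the option in region 112; the membership check implements the no-duplication reordering described above.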
As touched on above,
Context Sensitive Menus
When an image is provided to the system, the system processes the image to identify the various features within the image. This is achieved using facial recognition or other image processing algorithms, or the features are located manually by the user. Once the features within an image are located, the system is then able to take actions based on the user input. The image in this example is a face, which has been divided into sections for the application of makeup. The eyes, mouth, and skin have been identified as areas for application, and assigned as 'hotspots' within the image. These 'hotspots' allow the user to directly interact with the image being modified and apply options relevant to the hotspot. For instance, blush is applied to the cheeks and lipstick to the lips.
The location of each hotspot is determined by the system once the facial features are identified by the system or user. The eyes, lips, cheeks, eyelashes or any other facial features have an associated set of actions. These hotspots, which may differ from image to image, are based on the facial or image recognition system identifying the features. In some cases the user confirms or adjusts the first attempt the system makes at identifying the features. In one or more embodiments of the invention, when the user clicks the mouse over a hotspot, the system presents actions that can be performed on the part of the image associated with the hotspot. In one or more embodiments of the invention, commands that can be performed on the part of the image associated with the hotspot may be activated by other means, such as a user touching a touch screen or through the use of a light pen, for example. This provides the user with context sensitive menus that are based on different parts of the image having been given a feature classification by the system. An eyelid is thus identified as such, as is every other feature within the image.
The system uses this feature information to present commands to the user that are relevant to the feature. This means that a user, using an activation command, most commonly a left click on a mouse pointer device, is able to access a menu wherein the menu relates to the area being clicked. In one or more embodiments of the invention, commands are presented when a user clicks the right or other buttons on a computer mouse. In this example, right clicking on the area of an image that was identified as being eyes presents an eye-related context menu. The user may then select one of the operations on the menu and perform direct manipulation of the image. A right click on a part of the image that has no specific identifier would present the user with options that are applicable to the entire image. These context sensitive menus, and the options they provide, are associated with the particular image upon that image being processed and shown to the user through the interface provided.
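As an illustrative sketch only, the context sensitive menu behavior above can be modeled as a hit test over identified hotspot regions, with a fall-back to whole-image options when the click lands on no identified feature. The bounding boxes, menu labels, and region names below are hypothetical:

```python
# Hypothetical hotspot table: feature name, bounding box (x0, y0, x1, y1),
# and the context menu that feature exposes.
HOTSPOTS = [
    ("eyes",   (120, 80, 220, 120),  ["Apply eye shadow", "Apply eyeliner"]),
    ("lips",   (140, 200, 200, 240), ["Apply lipstick", "Apply lip liner"]),
    ("cheeks", (90, 130, 250, 190),  ["Apply blush"]),
]

# Options applicable to the entire image when no feature is hit.
WHOLE_IMAGE_MENU = ["Apply foundation", "Apply tanner"]

def context_menu(x, y):
    """Return the feature name and menu for the hotspot under (x, y);
    clicks outside every identified region present whole-image options."""
    for name, (x0, y0, x1, y1), menu in HOTSPOTS:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name, menu
    return "image", WHOLE_IMAGE_MENU
```

A right click over the eye region thus yields the eye-related menu, while a click on unclassified background yields options applicable to the entire image.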
Look History/Saving
In the examples given, a number of beauty products can be applied to an image to create a personalized makeover. The products are applied to the specific areas of the image to which they pertain: lipstick and lip liner to the lip regions of the image, eye shadow and liner to the eyes, and so on until a full set of makeup has been created by the user in this personalized makeover. This completed set of selections made by a user and applied to the subject image is then a complete “look.” The term look is used herein as a noun to describe an overall style and appearance of a subject, such as a person, personal fashion, or interior decoration. The term makeover is used as a noun to describe the creation of a new look to improve attractiveness in a subject, conform to societal fashions, or simply modify the appearance. As such, giving a subject image a makeover through this interface creates a new ‘look,’ which is made up of the options selected by a user through this interface and applied to the subject image. For example, this look could contain a specific style and color of eye shadow for application, a specific color and amount of foundation to apply, specific colors of blush and thickness of application, and so on until all desired makeups are specified and given color and quantity values within the look.
When users create a look by making various makeup choices and applying those choices to an image to arrive at a “look,” the look can be saved and later applied to a different image and/or the same image. Since a ‘look’ is the collection of options selected by a user from the option selection interface, the ‘look’ can be saved and stored independently of the subject image used to produce it. Also, ‘looks’ can be predetermined and applied to any number of subject images. For example, if a user wished to apply a ‘high fashion look’ to their own face, the user can provide their own image at 601, which is processed by the system (further described and referenced in the “Personalized Makeovers” copending patent application). In processing, the areas of the image are identified to correspond with the mouth, eyes, skin areas and other parts of the image. The high fashion ‘look’ contains data relating to the beauty products used to create the look, such as a specific brand and color of eye shadow, lipstick and so on. This data is then applied over the subject image to create the look. In other embodiments, the look could, in the context of interior decoration, contain furniture types with an associated style, wall color and texture with associated RGB color values and/or a texture image, and door types. It will be apparent to the reader that the concept of a saved look can relate to any number of options, selected from option groups, associated with a subject and presented to the user applied to the subject.
Alternatively, looks can be created by a user, subsequently saved, and selected by another user for application to a new subject image. In this, the image shown at
The information that defines a look is stored in a data structure that defines the color values and region information associated with each item that is needed to create the look. The color values are those the user selected while creating the look and the region information defines what part of the image the colors should be applied against. The various features identified during facial detection are used to define the regions for application of the color values. This information is saved and can later be applied to any image even if the look was created using a different image.
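As an illustrative sketch only, the data structure described above might be organized as follows; the class names, field names, and example values are assumptions, not the claimed format:

```python
from dataclasses import dataclass, field

@dataclass
class LayerSpec:
    """One item in a look: a product's color values and the feature
    region (from facial detection) the color should be applied against."""
    region: str             # e.g. "lips", identified or user-confirmed
    color: tuple            # RGB values selected while creating the look
    opacity: float = 1.0    # amount/thickness of application

@dataclass
class Look:
    """A saved look: independent of any subject image, since only
    region names and color values are stored."""
    name: str
    layers: list = field(default_factory=list)

    def add(self, region, color, opacity=1.0):
        self.layers.append(LayerSpec(region, color, opacity))

# Because the look stores regions rather than pixel coordinates, it can
# later be applied to any image in which the same features are detected.
high_fashion = Look("high fashion")
high_fashion.add("lips", (180, 30, 60), 0.8)
high_fashion.add("eyelids", (60, 40, 90), 0.5)
```

The key design point mirrored here is the separation of the look from the subject image: the region names act as the bridge between saved color values and the features detected in whatever image the look is applied to.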
A method for applying a look to an image in accordance with one or more embodiments of the invention is shown in
The look is applied to the image as layers over the image, using the order generally used during the application of makeup; foundation, for instance, is applied before blush. In one or more embodiments of the invention the process of applying a makeover is as follows: a makeover is represented as a base image and a set of layers that may follow the layering of real-life makeup application. Rendering proceeds by applying the lowest layers up through the highest layers. The low layers would include concealer and foundation; blush might be applied on top of this; a hair image layer sits at the top, with reasonable choices being made by the system about which layer is best to apply next. The makeup associated with the look has an applied area with a defined style. This can be represented as a mask, which might be binary or might be an alpha mask containing numerical values. The mask shape may be predefined, and it might be warped by a user. For example, multi-toned eye shadow may be represented by three masks, one for each of three different eye shadow products, and these will be rendered in order. The mask shapes are generally different. The user may modify the masks so as to, for example, fall on the crease of the eye and extend to the eyebrow.
The mask for applying products like foundation or tanners or bronzers might use an automatically constructed mask based on automatic skin detection as described elsewhere. It may use a predefined mask whose shape and location are referenced to automatically or manually detected face features.
A rendering process for each layer takes the partially rendered image to that point and renders the new layer to the region defined by the mask. The output of applying that layer will be a function of the makeup properties, including its color, transparency, luster, gloss and metal content, and the underlying image color. There are different ways to model how each type of makeup is applied. One method is called alpha blending. Others use the separation of specular from diffuse reflectance and apply colors to these components. Others use shape models of the face to determine specular components. Once a layer is applied, it is not modified by layers above.
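As an illustrative sketch of the alpha blending method mentioned above (the simplest of the models listed), a layer can be composited onto the partially rendered image pixel by pixel. The dictionary-based image representation below is a simplification for illustration:

```python
def blend_pixel(base, makeup, alpha):
    """Standard alpha blend of one pixel: the output is a weighted mix
    of the makeup color and the underlying image color."""
    return tuple(round(alpha * m + (1 - alpha) * b)
                 for b, m in zip(base, makeup))

def render_layer(image, mask, color):
    """Render one layer onto a partially rendered image.  `image` maps
    (x, y) -> RGB; `mask` maps (x, y) -> alpha in [0, 1] (a binary mask
    is the special case where alpha is 0 or 1).  Pixels outside the
    mask, and earlier layers beneath it, are left untouched."""
    for xy, alpha in mask.items():
        if xy in image and alpha > 0:
            image[xy] = blend_pixel(image[xy], color, alpha)
    return image

# A half-transparent red swatch applied over a skin-toned pixel:
img = {(0, 0): (200, 180, 170), (1, 0): (200, 180, 170)}
render_layer(img, {(0, 0): 0.5}, (255, 0, 0))
# pixel (0, 0) moves halfway toward red; pixel (1, 0) is outside the
# mask and remains unchanged
```

Because each call blends against the image as rendered so far, stacking calls in layer order reproduces the rule that a layer, once applied, is not modified by layers above it.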
The masks and their positions, and the product parameters (color, transparency, luster, gloss, metal content, and flecks) are called a look. This look may be saved to a disk or other storage device. The look may be loaded and applied to an original image. The look may be applied to a second image of the same person. For this to be effective, the underlying facial features from the first image must be brought into correspondence with the second image. This correspondence induces a transformation of the masked regions. The foundation and bronzer layer would still use the automatically detected skin mask. The subsequent layers would be transformed according to the correspondence.
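As an illustrative sketch of the correspondence described above, a mapping from the first image's facial landmarks to the second image's can induce a transformation of every saved mask position. The uniform scale-plus-translation fit below is a deliberate simplification; a real system might fit a full affine or non-rigid warp, and the landmark coordinates are hypothetical:

```python
def correspondence_transform(src_pts, dst_pts):
    """Estimate a simple scale-plus-translation mapping from the facial
    landmarks of the first image (src) to those of the second (dst).
    This is a simplification: it matches centroids and overall extent
    rather than fitting a full affine or thin-plate-spline warp."""
    sx = [p[0] for p in src_pts]; sy = [p[1] for p in src_pts]
    dx = [p[0] for p in dst_pts]; dy = [p[1] for p in dst_pts]
    src_c = (sum(sx) / len(sx), sum(sy) / len(sy))
    dst_c = (sum(dx) / len(dx), sum(dy) / len(dy))
    src_span = max(max(sx) - min(sx), max(sy) - min(sy))
    dst_span = max(max(dx) - min(dx), max(dy) - min(dy))
    scale = dst_span / src_span

    def transform(point):
        # Re-center on the source centroid, scale, then place relative
        # to the destination centroid.
        return (dst_c[0] + scale * (point[0] - src_c[0]),
                dst_c[1] + scale * (point[1] - src_c[1]))
    return transform

# Landmarks detected on both images (illustrative values):
warp = correspondence_transform(
    src_pts=[(100, 100), (200, 100), (150, 180)],
    dst_pts=[(110, 120), (310, 120), (210, 280)])
# Every point of a saved mask is mapped through `warp` before the layer
# is rendered on the second image; the skin-detection mask for
# foundation and bronzer is instead recomputed on the new image.
```

This mirrors the distinction drawn above: feature-anchored masks are transformed by the correspondence, while automatically detected skin masks are simply re-detected.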
Fast rendering can be accomplished using rendering at multiple resolutions and backgrounding. Each layer is rendered in turn at low resolution, then at higher resolutions, until the full-resolution rendering completes. In an interactive setting such as a makeup editor, a user may be modifying the makeover (mask positions) faster than the rendering can terminate at full resolution. With multi-resolution rendering with backgrounding, if a mouse event causes a movement of a layer, the rendering will terminate at whatever resolution it has reached and recommence at the new position starting at the lowest resolution. This provides a high degree of interactivity. As the defined ‘looks’ can be considered separate from but still associated with a subject presented to the system, it is possible for the same subject to have multiple looks applied to it and viewed at a glance. Conversely, it is also possible for the same look to be applied to multiple images.
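As an illustrative sketch of the coarse-to-fine scheme above, rendering can proceed through increasing resolutions and abandon the current makeover the moment an interruption (such as a mask being dragged) is observed. The resolution ladder and the `render_pass` stand-in are assumptions for illustration:

```python
def render_pass(layers, res):
    """Placeholder for the real per-resolution renderer: here it just
    records the resolution and how many layers were composited."""
    return (res, len(layers))

def render_progressive(layers, resolutions=(64, 256, 1024),
                       interrupted=lambda: False):
    """Coarse-to-fine rendering with backgrounding.  Each pass renders
    all layers at the next higher resolution; if `interrupted()` reports
    a pending user edit, the remaining passes are abandoned so the
    caller can restart at the lowest resolution with the new mask
    positions."""
    completed = []
    for res in resolutions:
        if interrupted():
            break  # a mouse event moved a layer: restart from scratch
        completed.append(render_pass(layers, res))
    return completed

passes = render_progressive(["foundation", "blush", "eye shadow"])
# Uninterrupted, all three resolutions complete, ending at full
# resolution; an interrupt after the first pass leaves only the
# low-resolution preview, which is what keeps the editor interactive.
```

The low-resolution pass completing quickly is what guarantees the user always sees an approximate result, even while dragging.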
Computer System Aspect
One or more embodiments of the invention may be implemented in the form of one or more computer programs that, when executed in computer memory, may cause one or more computer processors to initiate the methods and processes described herein. The files assembled to make up the software program may be stored on one or more computer-readable media and retrieved when needed to carry out the programmed methods and processes described herein. Within the scope of a computer-implemented embodiment of the invention, readers should note that one or more embodiments of the invention may comprise computer programs, data and other information further comprising, but not limited to: sets of computer instructions, code sequences, configuration information, data and other information in any form, format or language usable by a general purpose computer or other data processing device, such that when such a computer or device contains, is programmed with, or has access to said computer programs, the data and other information transform said general purpose computer into a machine capable of performing the methods and processes described herein, and specifically such as those described above.
Various embodiments of the invention may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, computer-readable media or any combination thereof. The term “article of manufacture” (or alternatively, “computer program product”) as used herein is intended to encompass a computer program of any form accessible from any computer-readable device, carrier or media. In addition, the software in which various embodiments are implemented may be accessible through a transmission medium, such as, for example, from a server over a network. The article of manufacture in which the program is implemented may also employ transmission media, such as a network transmission line and/or wireless transmission media. Those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the invention.
A computer-readable medium suitable to provide computer readable instructions and/or computer readable data for the methods and processes described herein may be any type of magnetic, optical, electrical or other storage medium including disk, tape, CD, DVD, flash drive, thumb drive, storage card, distributed storage or any other memory device, location, approach or other storage medium or technique known to those of skill in the art.
In one or more embodiments of the invention, the methods described herein may not be limited as to the type of computer they run upon and may, for instance, operate on any generalized computer system that has the computational ability to execute the methods described herein and can display the results of the user's choices on one or more display devices. Display devices appropriate for providing interaction with the invention described herein include, but are not limited to, computer monitors, cell phones, PDAs, televisions, or any other form of computer-controllable output display. As used herein, a computer system refers to, but is not limited to, any type of computing device, including its associated computer software, data, peripheral devices, communications equipment and any required or desired computers that may achieve direct or indirect communication with a primary computing device.
In one or more embodiments of the invention, a general-purpose computer may be utilized to implement one or more aspects of the invention. In one or more embodiments of the invention, the computer may include various input and output means, including but not limited to a keyboard or other textual input devices, a display device such as a monitor or other display screen, and a pointing device and/or user selection indicator such as a mouse, keypad, or touch screen, or other input/output devices known to those of skill in the art. The general purpose computer described herein may include one or more banks of random access memory, read only memory, and one or more central processing unit(s). The general purpose computer described herein may also include one or more data storage device(s) such as a hard disk drive, or other computer readable media discussed above. An operating system that executes within the computer memory may provide an interface between the hardware and software. The operating system may be responsible for managing, coordinating and sharing of the limited resources within the computer. Software programs that run on the computer may be executed by the operating system to provide the program of the invention with access to the resources needed to execute. In other embodiments the program may run stand-alone on the processor to perform the methods described herein.
In one or more embodiments of the invention, the method(s) described herein, when loaded on or executing through or by one or more general purpose computer(s) described above, may transform the general purpose computer(s) into a specially programmed computer able to perform the method or methods described herein. In one or more embodiments of the invention, the computer-readable storage medium(s) encoded with computer program instructions that, when accessed by a computer, may cause the computer to load the program instructions to a memory accessible to it, thereby creating a specially programmed computer able to perform the methods described herein.
The specially programmed computer of the invention may also comprise a connection that allows the computer to send and/or receive data through a computer network such as the Internet or other communication network. Mobile computer platforms such as cellular telephones, Personal Desktop Assistants (PDAs), other hand-held computing devices, digital recorders, wearable computing devices, kiosks, set top boxes, games boxes or any other computational device, portable, personal, real or virtual or otherwise, may also qualify as a computer system or part of a computer system capable of executing the methods described herein as a specially programmed computer.
Main Memory 906 may provide a computer readable medium for accessing and executing stored data and applications. Display Interface 908 may communicate with Display Unit 910, which may be utilized to display outputs to the user of the specially-programmed computer system. Display Unit 910 may comprise one or more monitors that may visually depict aspects of the computer program to the user. Main Memory 906 and Display Interface 908 may be coupled to Communication Infrastructure 902, which may serve as the interface point to Secondary Memory 912 and Communication Interface 924. Secondary Memory 912 may provide additional memory resources beyond Main Memory 906, and may generally function as a storage location for computer programs to be executed by Processor 907. Either fixed or removable computer-readable media may serve as Secondary Memory 912. Secondary Memory 912 may comprise, for example, Hard Disk 914 and Removable Storage Drive 916 that may have an associated Removable Storage Unit 918. There may be multiple sources of Secondary Memory 912, and systems of the invention may be configured as needed to support the data storage requirements of the user and the methods described herein. Secondary Memory 912 may also comprise Interface 920, which serves as an interface point to additional storage such as Removable Storage Unit 922. Numerous types of data storage devices may serve as repositories for data utilized by the specially programmed computer system of the invention. For example, magnetic, optical or magneto-optical storage systems, or any other available mass storage technology that provides a repository for digital information, may be used.
Communication Interface 924 may be coupled to Communication Infrastructure 902 and may serve as a conduit for data destined for or received from Communication Path 926. A Network Interface Card (NIC) is an example of the type of device that, once coupled to Communication Infrastructure 902, may provide a mechanism for transporting data to Communication Path 926. Computer networks such as Local Area Networks (LAN), Wide Area Networks (WAN), wireless networks, optical networks, distributed networks, the Internet or any combination thereof are some examples of the type of communication paths that may be utilized by the specially programmed computer system of the invention. Communication Path 926 may comprise any type of telecommunication network or interconnection fabric that can transport data to and from Communication Interface 924.
To facilitate user interaction with the specially programmed computer system of the invention, one or more Human Interface Devices (HID) 930 may be provided. Examples of HIDs that enable users to input commands or data to the specially programmed computer of the invention comprise a keyboard, mouse, touch screen devices, microphones or other audio interface devices, motion sensors or the like, as well as any other device able to accept any kind of human input and in turn communicate that input to Processor 907 to trigger one or more responses from the specially programmed computer of the invention.
While
One or more embodiments of the invention are configured to enable the specially programmed computer of the invention to take the input data given and transform it into an interface enabling dynamic layout of options or color within a group for application to an image, arrangement and ordering of colors, color selection history, context sensitive menus, and look history and saving, by applying one or more of the methods and/or processes of the invention as described herein. Thus the methods described herein are able to transform the raw input data provided to the system of the invention into a resulting output of the system using the specially programmed computer as described.
One or more embodiments of the invention are configured to enable a general-purpose computer to take one or more color palettes, and the color choices associated with each color palette, from memory and transform and display the graphical user interface component on Display Unit 910, for example. The user, through interaction with Human Interface Device 930, enters a selection of a region of a subject image. Processor 907 receives the selection of a region of a subject image, transforms the color palette and color choices data, and transmits the information to Display Unit 910 for display. The user may interact with the computer system through Human Interface Device 930 and may select a second color choice, which causes Processor 907 to process the information and transmit a signal to the graphical user interface on Display Unit 910.
In one or more embodiments of the invention, when a user selects an identified region of a subject image, Processor 907 transmits data to Display Unit 910 and enables the user to see context sensitive menus that are associated with the specific identified region of the subject image.
In one or more embodiments of the invention, Processor 907 requests the color records for a particular color palette. Processor 907 then calculates the arrangement of the colors in a color flywheel display through the use of a greedy algorithm. Processor 907 then transmits the signals to Display Unit 910, where the arrangement may be displayed to a user.
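As an illustrative sketch only, one plausible greedy arrangement (the specification does not fix the exact algorithm) starts from an arbitrary color and repeatedly places the nearest remaining color next, so that similar shades sit adjacent on the flywheel. The distance metric and palette values below are assumptions:

```python
def color_distance(a, b):
    """Squared Euclidean distance between two RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def arrange_flywheel(colors):
    """Greedy ordering of a palette for a color flywheel display:
    begin with the first color, then repeatedly append the remaining
    color nearest to the last one placed."""
    remaining = list(colors)
    wheel = [remaining.pop(0)]
    while remaining:
        nearest = min(remaining,
                      key=lambda c: color_distance(wheel[-1], c))
        remaining.remove(nearest)
        wheel.append(nearest)
    return wheel

palette = [(255, 0, 0), (0, 0, 255), (250, 10, 10), (10, 10, 250)]
wheel = arrange_flywheel(palette)
# the two reds end up adjacent, as do the two blues
```

Greedy nearest-neighbor ordering is cheap enough to recompute whenever the palette changes, which suits an interactive display; it trades global optimality for that speed.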
In one or more embodiments of the invention, Human Interface Device 930 accepts a user's input in which the user may select a first identified region of a subject image to apply a first color, and a second identified region of a subject image to apply a second color. Processor 907 may process the metadata describing the first color and second color and store that information as a “look” in Main Memory 906, for example.
While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.
This application claims priority to three U.S. Provisional Patent Applications, all filed on Mar. 17, 2008, and all co-owned by the same assignee. These applications are entitled “SYSTEM AND METHOD FOR CREATING AND SHARING PERSONALIZED VIRTUAL MAKEOVERS,” Ser. No. 61/037,323, “GRAPHICAL USER INTERFACE FOR SELECTION OF OPTIONS FROM OPTION GROUPS AND METHODS RELATING TO SAME,” Ser. No. 61/037,319, and “METHOD OF MONETIZING ONLINE PERSONALIZED BEAUTY PRODUCT SELECTIONS,” Ser. No. 61/037,314. These provisional patent applications are hereby incorporated by reference in their entirety into this specification.
Number | Date | Country
---|---|---
61037323 | Mar 2008 | US
61037319 | Mar 2008 | US
61037314 | Mar 2008 | US