Systems and methods for generating a 3-D model of a user for a virtual try-on product

Information

  • Patent Grant
  • Patent Number: 10,147,233
  • Date Filed: Tuesday, April 28, 2015
  • Date Issued: Tuesday, December 4, 2018
Abstract
A computer-implemented method for generating a three-dimensional (3-D) model of a user is described. A plurality of images of a user are obtained. An angle of view relative to the user pictured in at least one of the plurality of images is calculated. It is determined whether the calculated angle of view matches a predetermined viewing angle. Upon determining the calculated angle of view matches the predetermined viewing angle, at least one of the plurality of images is selected.
Description
BACKGROUND

The use of computer systems and computer-related technologies continues to increase at a rapid pace, and this increased use has influenced advances in computer-related technologies. Computer systems have become an integral part of the business world and of the activities of individual consumers, and they have opened up an entire industry of Internet shopping. In many ways, online shopping has changed the way consumers purchase products. For example, a consumer may want to know what they will look like in and/or with a product. The webpage for a particular product may show a photograph of a model with that product. However, users may want to see more accurate depictions of themselves in relation to various products.


SUMMARY

According to at least one embodiment, a computer-implemented method for generating a three-dimensional (3-D) model of a user is described. A plurality of images of a user may be obtained. An angle of view relative to the user pictured in at least one of the plurality of images may be calculated. It may be determined whether the calculated angle of view matches a predetermined viewing angle. The predetermined viewing angle may include a plurality of evenly spaced 10-degree rotation steps. Upon determining the calculated angle of view matches the predetermined viewing angle, at least one of the plurality of images may be selected.


In one embodiment, a real-time image of the user may be displayed while obtaining the plurality of images of the user. A guideline may be displayed in relation to the displayed real-time image of the user. A cross-correlation algorithm may be performed to track a feature between two or more of the plurality of images of the user and to determine a 3-D structure of the user. A 3-D model of the user may be generated from the detected features of the user. Texture coordinate information may be generated from the determined 3-D structure of the user. The texture coordinate information may relate a two-dimensional (2-D) coordinate of each selected image to a 3-D coordinate of the 3-D model of the user. At least one geometry file may be generated to store data related to a 3-D structure, wherein each at least one geometry file comprises a plurality of vertices corresponding to a universal morphable model.


In some configurations, a coefficient may be calculated for each generated geometry file based on the determined 3-D structure of the user. Each generated geometry file may be combined linearly based on each calculated coefficient to generate a polygon mesh of the user. Each selected image may be applied to the generated polygon mesh of the user according to the generated texture coordinate information.


A computing device configured to generate a three-dimensional (3-D) model of a user is also described. The device may include a processor and memory in electronic communication with the processor. The memory may store instructions that are executable by the processor to obtain a plurality of images of a user, calculate an angle of view relative to the user pictured in at least one of the plurality of images, determine whether the calculated angle of view matches a predetermined viewing angle, and upon determining the calculated angle of view matches the predetermined viewing angle, select at least one of the plurality of images.


A computer-program product to generate a three-dimensional (3-D) model of a user is also described. The computer-program product may include a non-transitory computer-readable medium that stores instructions. The instructions may be executable by a processor to obtain a plurality of images of a user, calculate an angle of view relative to the user pictured in at least one of the plurality of images, determine whether the calculated angle of view matches a predetermined viewing angle, and upon determining the calculated angle of view matches the predetermined viewing angle, select at least one of the plurality of images.


Features from any of the above-mentioned embodiments may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the instant disclosure.



FIG. 1 is a block diagram illustrating one embodiment of an environment in which the present systems and methods may be implemented;



FIG. 2 is a block diagram illustrating another embodiment of an environment in which the present systems and methods may be implemented;



FIG. 3 is a block diagram illustrating one example of a model generator;



FIG. 4 is a block diagram illustrating one example of an image processor;



FIG. 5 illustrates an example arrangement for capturing an image of a user;



FIG. 6 is a diagram illustrating an example of a device for capturing an image of a user;



FIG. 7 illustrates an example arrangement of a virtual 3-D space including a depiction of a 3-D model of a user;



FIG. 8 is a flow diagram illustrating one embodiment of a method for generating a 3-D model of a user;



FIG. 9 is a flow diagram illustrating one embodiment of a method for applying an image of a user to a polygon mesh model of the user;



FIG. 10 is a flow diagram illustrating one embodiment of a method for displaying a feedback image to a user; and



FIG. 11 depicts a block diagram of a computer system suitable for implementing the present systems and methods.





While the embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the instant disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.


DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

The systems and methods described herein relate to virtually trying on products. Three-dimensional (3-D) computer graphics are graphics that use a 3-D representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering two-dimensional (2-D) images. Such images may be stored for viewing later or displayed in real-time. A 3-D space may include a mathematical representation of a 3-D surface of an object. A 3-D model may be contained within a graphical data file. A 3-D model may represent a 3-D object using a collection of points in 3-D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3-D models may be created by hand, algorithmically (procedural modeling), or scanned, such as with a laser scanner. A 3-D model may be displayed visually as a two-dimensional image through a process called 3-D rendering, or used in non-graphical computer simulations and calculations. In some cases, the 3-D model may be physically created using a 3-D printing device.
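For illustration only, a 3-D model of the kind described above may be sketched as a collection of vertices connected by triangular faces. The following minimal Python example (the tetrahedron data and helper name are invented for this sketch, not taken from the disclosure) shows such a structure together with a typical per-face computation used during rendering.

```python
import numpy as np

# Four points in 3-D space (XYZ coordinates) forming a tetrahedron.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

# Geometric entities connecting the points: each row lists three vertex indices
# that form one triangle of the mesh.
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])

def face_normals(verts: np.ndarray, tris: np.ndarray) -> np.ndarray:
    """Unit normal of each triangle; normals are commonly needed when the mesh
    is later rendered as a 2-D image."""
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

print(face_normals(vertices, faces))
```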


A device may capture an image of the user and generate a 3-D model of the user from the image. A 3-D polygon mesh of an object may be placed in relation to the 3-D model of the user to create a 3-D virtual depiction of the user wearing the object (e.g., a pair of glasses, a hat, a shirt, a belt, etc.). This 3-D scene may then be rendered into a 2-D image to provide the user a virtual depiction of the user in relation to the object. Although some of the examples used herein describe articles of clothing, and specifically the virtual try-on of a pair of glasses, it is understood that the systems and methods described herein may be used to virtually try on a wide variety of products. Examples of such products may include glasses, clothing, footwear, jewelry, accessories, hair styles, etc.



FIG. 1 is a block diagram illustrating one embodiment of an environment 100 in which the present systems and methods may be implemented. In some embodiments, the systems and methods described herein may be performed on a single device (e.g., device 102). For example, a model generator 104 may be located on the device 102. Examples of devices 102 include mobile devices, smart phones, personal computing devices, computers, servers, etc.


In some configurations, a device 102 may include a model generator 104, a camera 106, and a display 108. In one example, the device 102 may be coupled to a database 110. In one embodiment, the database 110 may be internal to the device 102. In another embodiment, the database 110 may be external to the device 102. In some configurations, the database 110 may include polygon model data 112 and texture map data 114.


In one embodiment, the model generator 104 may enable a user to initiate a process to generate a 3-D model of the user. In some configurations, the model generator 104 may obtain multiple images of the user. For example, the model generator 104 may capture multiple images of a user via the camera 106. For instance, the model generator 104 may capture a video (e.g., a 5 second video) via the camera 106. In some configurations, the model generator 104 may use polygon model data 112 and texture map data 114 to generate a 3-D representation of a user. For example, the polygon model data 112 may include vertex coordinates of a polygon model of the user's head. In some embodiments, the model generator 104 may use color information from the pixels of multiple images of the user to create a texture map of the user. In some configurations, the model generator 104 may generate and/or obtain a 3-D representation of a product. For example, the polygon model data 112 and texture map data 114 may include a 3-D model of a pair of glasses. In some embodiments, the polygon model data 112 may include a polygon model of an object. In some configurations, the texture map data 114 may define a visual aspect (e.g., pixel information) of the 3-D model of the object such as color, texture, shadow, or transparency.


In some configurations, the model generator 104 may generate a virtual try-on image by rendering a virtual 3-D space that contains a 3-D model of a user and a 3-D model of a product. In one example, the virtual try-on image may illustrate the user with a rendered version of the product. In some configurations, the model generator 104 may output the virtual try-on image to the display 108 to be displayed to the user.



FIG. 2 is a block diagram illustrating another embodiment of an environment 200 in which the present systems and methods may be implemented. In some embodiments, a device 102-a may communicate with a server 206 via a network 204. Examples of networks 204 include local area networks (LAN), wide area networks (WAN), virtual private networks (VPN), wireless networks (using 802.11, for example), cellular networks (using 3G and/or LTE, for example), etc. In some configurations, the network 204 may include the Internet. In some configurations, the device 102-a may be one example of the device 102 illustrated in FIG. 1. For example, the device 102-a may include the camera 106, the display 108, and an application 202. It is noted that in some embodiments, the device 102-a may not include a model generator 104. In some embodiments, both a device 102-a and a server 206 may include a model generator 104 where at least a portion of the functions of the model generator 104 are performed separately and/or concurrently on both the device 102-a and the server 206.


In some embodiments, the server 206 may include the model generator 104 and may be coupled to the database 110. For example, the model generator 104 may access the polygon model data 112 and the texture map data 114 in the database 110 via the server 206. The database 110 may be internal or external to the server 206.


In some configurations, the application 202 may capture multiple images via the camera 106. For example, the application 202 may use the camera 106 to capture a video. Upon capturing the multiple images, the application 202 may process the multiple images to generate result data. In some embodiments, the application 202 may transmit the multiple images to the server 206. Additionally or alternatively, the application 202 may transmit to the server 206 the result data or at least one file associated with the result data.


In some configurations, the model generator 104 may process multiple images of a user to generate a 3-D model of the user. The model generator 104 may render a 3-D space that includes the 3-D model of the user and a 3-D polygon model of an object to render a virtual try-on 2-D image of the object and the user. The application 202 may output a display of the user to the display 108 while the camera 106 captures an image of the user.



FIG. 3 is a block diagram illustrating one example of a model generator 104-a. The model generator 104-a may be one example of the model generator 104 depicted in FIGS. 1 and/or 2. As depicted, the model generator 104-a may include a scanning module 302, an image processor 304, and a display module 306.


In some configurations, the scanning module 302 may obtain a plurality of images of a user. In some embodiments, the scanning module 302 may activate the camera 106 to capture at least one image of the user. Additionally, or alternatively, the scanning module 302 may capture a video of the user.


In some embodiments, the image processor 304 may process an image of the user captured by the scanning module 302. The image processor 304 may be configured to generate a 3-D model of the user from the processing of the image. Operations of the image processor 304 are discussed in further detail below.


In some configurations, the display module 306 may display a real-time image of the user on a display (e.g., display 108) while obtaining the plurality of images of the user. For example, as the camera 106 captures an image of the user, the captured image of the user may be displayed on the display 108 to provide visual feedback to the user. In some embodiments, the display module 306 may display a guideline on the display in relation to the displayed real-time image of the user. For example, one or more guidelines may provide a visual cue to the user. For instance, a guideline may provide a visual cue of the direction in which the user should be holding the device 102 (e.g., a tablet computing device in landscape or portrait mode). Additionally, a guideline may provide a visual leveling cue to assist the user in keeping the device relatively level, or in the same plane, while the user pans or rotates the device 102 around him- or herself. Additionally, a guideline may provide a visual depth cue to assist the user in keeping the device at relatively the same depth (e.g., at arm's length) while the user pans or rotates the device 102 around the user.



FIG. 4 is a block diagram illustrating one example of an image processor 304-a. The image processor 304-a may be one example of the image processor 304 illustrated in FIG. 3. As depicted, the image processor 304-a may include a viewpoint module 402, a comparison module 404, a selection module 406, and a cross-correlation module 408. Additionally, the image processor 304-a may include a texture mapping module 410, a geometry module 412, a coefficient module 414, a linear combination module 416, and an application module 418.


In some configurations, the viewpoint module 402 may calculate an angle of view relative to the user pictured in at least one of the plurality of images. For example, the viewpoint module 402 may determine that in one image of the user's head, the user held the device 102 10-degrees to the left of center of the user's face. The comparison module 404 may determine whether the calculated angle of view (e.g., 10-degrees to the left of center of the user's face) matches a predetermined viewing angle. In some embodiments, the predetermined viewing angle includes a plurality of evenly spaced 10-degree rotation steps. For example, a head-on image showing the user facing the camera directly may be selected as a viewing angle reference point, or 0-degrees. The next predetermined viewing angles in either direction may include +/−10-degrees, +/−20-degrees, +/−30-degrees, and so forth, in 10-degree increments. Thus, the comparison module 404 may determine that an image depicting the user holding the device 102 10-degrees to the left of center of the user's face matches a predetermined viewing angle of +10-degrees (or −10-degrees).
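As a non-limiting sketch of the viewpoint and comparison steps, the calculated angle of view can be snapped to the nearest 10-degree step and treated as a match only when it falls within some tolerance of that step. The tolerance value and function name below are assumptions made for illustration; they are not specified by the disclosure.

```python
STEP_DEGREES = 10.0       # predetermined, evenly spaced rotation steps
TOLERANCE_DEGREES = 2.0   # assumed acceptance window around each step

def match_predetermined_angle(angle_of_view: float,
                              step: float = STEP_DEGREES,
                              tol: float = TOLERANCE_DEGREES):
    """Return the matched step (e.g., -20, -10, 0, +10, ...) or None if no match."""
    nearest_step = round(angle_of_view / step) * step
    if abs(angle_of_view - nearest_step) <= tol:
        return nearest_step
    return None

# A frame estimated at 9.3 degrees left of center matches the +10-degree step,
# while a frame at 14.8 degrees falls between steps and is not selected.
print(match_predetermined_angle(9.3))   # 10.0
print(match_predetermined_angle(14.8))  # None
```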


In some embodiments, upon determining the calculated angle of view matches the predetermined viewing angle, the selection module 406 may select at least one of the plurality of images. For example, the selection module 406 may select an image for further processing. The cross-correlation module 408 may perform a cross-correlation algorithm to track a feature between two or more of the plurality of images of the user. For example, the image processor 304-a, via the cross-correlation module 408, may perform template matching. Additionally, or alternatively, the image processor 304-a, via the cross-correlation module 408, may perform a structure from motion algorithm to track features in the images of the user. From the detected features of the user, the image processor 304-a may construct a 3-D model of the user.
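A minimal sketch of cross-correlation feature tracking is shown below, assuming grayscale patches stored as NumPy arrays. It implements zero-mean normalized cross-correlation template matching, which is one concrete form the template matching mentioned above may take; the function names are illustrative.

```python
import numpy as np

def normalized_cross_correlation(patch: np.ndarray, template: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def track_feature(image: np.ndarray, template: np.ndarray) -> tuple:
    """Slide the template over the image and return the (row, col) of the best
    match, i.e., the tracked feature's location in this frame."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = normalized_cross_correlation(image[y:y + th, x:x + tw], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos
```

In practice, the matched feature positions across frames would feed the structure-from-motion step that recovers the user's 3-D structure.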


In some configurations, the texture mapping module 410 may generate texture coordinate information from the determined 3-D structure of the user. The texture coordinate information may relate a two-dimensional (2-D) coordinate (e.g., UV coordinates) of each selected image to a 3-D coordinate (e.g., XYZ coordinates) of the 3-D model of the user.
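Purely as an illustrative sketch, texture coordinates of this kind can be produced by projecting each 3-D vertex of the user model into a selected image with a known (or estimated) camera and normalizing the pixel location into UV space. The pinhole intrinsics and vertex values below are placeholders and are not taken from the disclosure.

```python
import numpy as np

def project_to_uv(vertices_xyz: np.ndarray,
                  camera_matrix: np.ndarray,
                  image_width: int,
                  image_height: int) -> np.ndarray:
    """Map Nx3 camera-space vertices to Nx2 UV coordinates for one selected image."""
    # Pinhole projection: pixel_homogeneous = K @ [X, Y, Z]^T, then divide by depth.
    pixels_h = (camera_matrix @ vertices_xyz.T).T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]
    # Normalize pixel coordinates into the [0, 1] texture space (UV).
    return pixels / np.array([image_width, image_height], dtype=float)

# Illustrative intrinsics: 800-pixel focal length, principal point at image center.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
verts = np.array([[ 0.05, -0.02, 0.60],
                  [-0.04,  0.03, 0.65]])
print(project_to_uv(verts, K, 640, 480))
```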


In one embodiment, the geometry module 412 generates at least one geometry file to store data related to a 3-D structure. Each at least one geometry file may include a plurality of vertices corresponding to a universal morphable model. For instance, each geometry file may include a different generic model of a user, where each model depicts a user with certain features and characteristics. For example, one geometry file may include a polygon mesh depicting characteristics typical of a male-looking face. Another geometry file may include a polygon mesh depicting characteristics of a female-looking face, and so forth.


In some configurations, the coefficient module 414 calculates a coefficient for each generated geometry file based on the determined 3-D structure of the user. The linear combination module 416 may combine linearly each generated geometry file based on each calculated coefficient to generate a polygon mesh of the user. In other words, each coefficient may act as a weight to determine how much each particular geometry file affects the outcome of linearly combining each geometry file. For example, if the user is a female, then the coefficient module 414 may associate a relatively high coefficient (e.g., 1.0) to a geometry file that depicts female characteristics, and may associate a relatively low coefficient (e.g., 0.01) to a geometry file that depicts male characteristics. Thus, each geometry file may be combined linearly, morphing a 3-D polygon mesh to generate a realistic model of the user based on the 3-D characteristics of the user calculated from one or more captured images of the user. The application module 418 may apply each selected image to the generated polygon mesh of the user according to the generated texture coordinate information, resulting in a 3-D model of the user.
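The coefficient-weighted combination can be sketched as below, assuming every geometry file contributes a vertex array of the same shape (one array per morphable-model template). The toy "wide" and "narrow" templates and the coefficient values are invented for illustration only.

```python
import numpy as np

def blend_geometry(vertex_sets, coefficients) -> np.ndarray:
    """Linearly combine N vertex arrays (each V x 3) into one morphed V x 3 mesh,
    with each coefficient acting as the weight of its geometry file."""
    stacked = np.stack(vertex_sets)                  # shape (N, V, 3)
    weights = np.asarray(coefficients, dtype=float)
    weights = weights / weights.sum()                # normalize so the weights sum to 1
    return np.tensordot(weights, stacked, axes=1)    # weighted sum over the N templates

# Two toy "geometry files": a wider face and a narrower face (three vertices each).
wide_face   = np.array([[ 1.2, 0.0, 0.0], [-1.2, 0.0, 0.0], [0.0, 1.0, 0.2]])
narrow_face = np.array([[ 0.9, 0.0, 0.0], [-0.9, 0.0, 0.0], [0.0, 1.1, 0.1]])

# A relatively high coefficient for the narrow template dominates the blended mesh.
print(blend_geometry([wide_face, narrow_face], [0.1, 1.0]))
```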



FIG. 5 illustrates an example arrangement 500 for capturing an image 504 of a user 502. In particular, the illustrated example arrangement 500 may include the user 502 holding a device 102-b. The device 102-b may include a camera 106-a and a display 108-a. The device 102-b, camera 106-a, and display 108-a may be examples of the device 102, camera 106, and display 108 depicted in FIGS. 1 and/or 2.


In one example, the user 502 holds the device 102-b at arm's length with the camera 106-a activated. The camera 106-a may capture an image 504 of the user and the display 108-a may show the captured image 504 to the user 502 (e.g., a real-time feedback image of the user). In some configurations, the camera 106-a may capture a video of the user 502. In some embodiments, the user may pan the device 102-b around the user's face to allow the camera 106-a to capture a video of the user from one side of the user's face to the other side of the user's face. Additionally, or alternatively, the user 502 may capture an image of other areas (e.g., arm, leg, torso, etc.).



FIG. 6 is a diagram 600 illustrating an example of a device 102-c for capturing an image 602 of a user. The device 102-c may be one example of the device 102 illustrated in FIGS. 1 and/or 2. As depicted, the device 102-c may include a camera 106-b, a display 108-b, and an application 202-a. The camera 106-b, display 108-b, and application 202-a may each be an example of the respective camera 106, display 108, and application 202 illustrated in FIGS. 1 and/or 2.


In one embodiment, the user may operate the device 102-c. For example, the application 202-a may allow the user to interact with and/or operate the device 102-c. In one embodiment, the application 202-a may allow the user to capture an image 602 of the user. For example, the application 202-a may display the captured image 602 on the display 108-b. In some cases, the application 202-a may permit the user to accept or decline the image 602 that was captured.



FIG. 7 illustrates an example arrangement 700 of a virtual 3-D space 702. As depicted, the 3-D space 702 of the example arrangement 700 may include a 3-D model of a user's head 704. In some embodiments, the 3-D model of the user's head 704 may include a polygon mesh model of the user's head, which may be stored in the database 110 as polygon model data 112. The polygon model data 112 of the 3-D model of the user may include 3-D polygon mesh elements such as vertices, edges, faces, polygons, surfaces, and the like. Additionally, or alternatively, the 3-D model of the user's head 704 may include at least one texture map, which may be stored in the database 110 as texture map data 114.



FIG. 8 is a flow diagram illustrating one embodiment of a method 800 for generating a 3-D model of a user. In some configurations, the method 800 may be implemented by the model generator 104 illustrated in FIGS. 1, 2, and/or 4. In some configurations, the method 800 may be implemented by the application 202 illustrated in FIG. 2.


At block 802, a plurality of images of a user may be obtained. At block 804, an angle of view relative to the user pictured in at least one of the plurality of images may be calculated. At block 806, it may be determined whether the calculated angle of view matches a predetermined viewing angle. In some configurations, the predetermined viewing angle includes a plurality of rotation steps. As explained above, in some configurations, the predetermined viewing angle includes a plurality of evenly spaced 10-degree rotation steps. At block 808, upon determining the calculated angle of view matches the predetermined viewing angle, at least one of the plurality of images may be selected. Upon determining the calculated angle of view does not match the predetermined viewing angle, the method returns to block 804.
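The flow of blocks 802 through 808 can be summarized, strictly as an illustrative sketch, as a loop over captured frames that keeps at most one image per predetermined viewing angle. Here, estimate_angle_of_view is a hypothetical stand-in for whatever pose-estimation routine supplies the calculated angle, and the step size and tolerance are assumptions.

```python
def select_frames(frames, estimate_angle_of_view, step=10.0, tol=2.0):
    """Return a mapping {matched step in degrees: frame} for frames whose calculated
    angle of view matches one of the predetermined, evenly spaced viewing angles."""
    selected = {}
    for frame in frames:                                # block 802: obtained images
        angle = estimate_angle_of_view(frame)           # block 804: calculate angle of view
        nearest = round(angle / step) * step            # block 806: compare to the steps
        if abs(angle - nearest) <= tol and nearest not in selected:
            selected[nearest] = frame                   # block 808: select the image
    return selected
```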



FIG. 9 is a flow diagram illustrating one embodiment of a method 900 for applying an image of a user to a polygon mesh model of the user. In some configurations, the method 900 may be implemented by the model generator 104 illustrated in FIGS. 1, 2, and/or 4. In some configurations, the method 900 may be implemented by the application 202 illustrated in FIG. 2.


At block 902, a cross-correlation algorithm may be performed to track features in the images of the user to determine a 3-D structure of the user. At block 904, texture coordinate information may be generated from the determined 3-D structure of the user. As explained above, the texture coordinate information may relate a 2-D coordinate (e.g., UV coordinates) of each selected image to a 3-D coordinate (e.g., XYZ coordinates) of the 3-D model of the user.


At block 906, at least one geometry file may be generated to store data related to a 3-D structure. As explained above, each at least one geometry file may include a plurality of vertices corresponding to a universal morphable model. At block 908, a coefficient for each generated geometry file based on the determined 3-D structure of the user may be calculated. At block 910, each generated geometry file may be combined linearly based on each calculated coefficient to generate a polygon mesh of the user. At block 912, each selected image may be applied to the generated polygon mesh of the user according to the generated texture coordinate information.



FIG. 10 is a flow diagram illustrating one embodiment of a method 1000 for displaying a feedback image to a user. In some configurations, the method 1000 may be implemented by the model generator 104 illustrated in FIGS. 1, 2, and/or 4. In some configurations, the method 1000 may be implemented by the application 202 illustrated in FIG. 2.


At block 1002, a real-time image of the user may be displayed on a display (e.g., display 108) while obtaining the plurality of images of the user. As explained above, as the camera 106 captures an image of the user, the captured image of the user may be displayed on the display 108 to provide visual feedback to the user. At block 1004, a guideline may be displayed on the display in relation to the displayed real-time image of the user. One or more guidelines may provide a visual cue to the user while an image is being captured.



FIG. 11 depicts a block diagram of a computer system 1100 suitable for implementing the present systems and methods. The depicted computer system 1100 may be one example of a server 206 depicted in FIG. 2. Alternatively, the system 1100 may be one example of a device 102 depicted in FIGS. 1, 2, 5, and/or 6. Computer system 1100 includes a bus 1102 which interconnects major subsystems of computer system 1100, such as a central processor 1104, a system memory 1106 (typically RAM, but which may also include ROM, flash RAM, or the like), an input/output controller 1108, an external audio device, such as a speaker system 1110 via an audio output interface 1112, an external device, such as a display screen 1114 via display adapter 1116, serial ports 1118 and 1120, a keyboard 1122 (interfaced with a keyboard controller 1124), multiple USB devices 1126 (interfaced with a USB controller 1128), a storage interface 1130, a host bus adapter (HBA) interface card 1136A operative to connect with a Fibre Channel network 1138, a host bus adapter (HBA) interface card 1136B operative to connect to a SCSI bus 1140, and an optical disk drive 1142 operative to receive an optical disk 1144. Also included are a mouse 1146 (or other point-and-click device, coupled to bus 1102 via serial port 1118), a modem 1148 (coupled to bus 1102 via serial port 1120), and a network interface 1150 (coupled directly to bus 1102).


Bus 1102 allows data communication between central processor 1104 and system memory 1106, which may include read-only memory (ROM) or flash memory (neither shown), and random access memory (RAM) (not shown), as previously noted. The RAM is generally the main memory into which the operating system and application programs are loaded. The ROM or flash memory can contain, among other code, the Basic Input-Output system (BIOS) which controls basic hardware operation such as the interaction with peripheral components or devices. For example, a model generator 104-b to implement the present systems and methods may be stored within the system memory 1106. The model generator 104-b may be one example of the model generator 104 depicted in FIGS. 1, 2, and/or 3. Applications resident with computer system 1100 are generally stored on and accessed via a non-transitory computer readable medium, such as a hard disk drive (e.g., fixed disk 1152), an optical drive (e.g., optical drive 1142), or other storage medium. Additionally, applications can be in the form of electronic signals modulated in accordance with the application and data communication technology when accessed via network modem 1148 or interface 1150.


Storage interface 1130, as with the other storage interfaces of computer system 1100, can connect to a standard computer readable medium for storage and/or retrieval of information, such as a fixed disk drive 1152. Fixed disk drive 1152 may be a part of computer system 1100 or may be separate and accessed through other interface systems. Modem 1148 may provide a direct connection to a remote server via a telephone link or to the Internet via an internet service provider (ISP). Network interface 1150 may provide a direct connection to a remote server via a direct network link to the Internet via a POP (point of presence). Network interface 1150 may provide such connection using wireless techniques, including digital cellular telephone connection, Cellular Digital Packet Data (CDPD) connection, digital satellite data connection or the like.


Many other devices or subsystems (not shown) may be connected in a similar manner (e.g., document scanners, digital cameras and so on). Conversely, all of the devices shown in FIG. 11 need not be present to practice the present systems and methods. The devices and subsystems can be interconnected in different ways from that shown in FIG. 11. The operation of at least some of the computer system 1100 such as that shown in FIG. 11 is readily known in the art and is not discussed in detail in this application. Code to implement the present disclosure can be stored in a non-transitory computer-readable medium such as one or more of system memory 1106, fixed disk 1152, or optical disk 1144. The operating system provided on computer system 1100 may be MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, Linux®, or another known operating system.


Moreover, regarding the signals described herein, those skilled in the art will recognize that a signal can be directly transmitted from a first block to a second block, or a signal can be modified (e.g., amplified, attenuated, delayed, latched, buffered, inverted, filtered, or otherwise modified) between the blocks. Although the signals of the above described embodiment are characterized as transmitted from one block to the next, other embodiments of the present systems and methods may include modified signals in place of such directly transmitted signals as long as the informational and/or functional aspect of the signal is transmitted between blocks. To some extent, a signal input at a second block can be conceptualized as a second signal derived from a first signal output from a first block due to physical limitations of the circuitry involved (e.g., there will inevitably be some attenuation and delay). Therefore, as used herein, a second signal derived from a first signal includes the first signal or any modifications to the first signal, whether due to circuit limitations or due to passage through other circuit elements which do not change the informational and/or final functional aspect of the first signal.


While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered exemplary in nature since many other architectures can be implemented to achieve the same functionality.


The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.


Furthermore, while various embodiments have been described and/or illustrated herein in the context of fully functional computing systems, one or more of these exemplary embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer-readable media used to actually carry out the distribution. The embodiments disclosed herein may also be implemented using software modules that perform certain tasks. These software modules may include script, batch, or other executable files that may be stored on a computer-readable storage medium or in a computing system. In some embodiments, these software modules may configure a computing system to perform one or more of the exemplary embodiments disclosed herein.


The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the present systems and methods and their practical applications, to thereby enable others skilled in the art to best utilize the present systems and methods and various embodiments with various modifications as may be suited to the particular use contemplated.


Unless otherwise noted, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” In addition, for ease of use, the words “including” and “having,” as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.” In addition, the term “based on” as used in the specification and the claims is to be construed as meaning “based at least upon.”

Claims
  • 1. A computing device configured to generate a three-dimensional (3-D) model of a user, comprising: a processor; and memory in electronic communication with the processor, wherein the memory is configured to store a plurality of geometry files and instructions, the instructions being executable by the processor to: obtain a plurality of images of the user, wherein the images of the user have one or more angles of view; generate the 3-D model of the user using the plurality of images, the instructions to generate comprising instructions to: select one or more physical characteristics of the user from the plurality of images of the user; calculate a respective coefficient for each of the plurality of geometry files based on the one or more physical characteristics of the user; track the one or more physical characteristics of the user across at least two images of the plurality of images of the user, based on the plurality of geometry files and an algorithm; and combine the plurality of geometry files based on at least one of the respective coefficients to create the 3-D model to represent the user; and generate a combined image of the user together with a product, the combined image comprising the 3-D model of the user and a rendered image of a 3-D model of the product.
  • 2. The device of claim 1, wherein the instructions to generate the 3-D model of the user further comprise instructions to provide a universal morphable model; and instructions to adjust the universal morphable model based on the respective coefficients of each of the plurality of geometry files.
  • 3. The device of claim 1, wherein the one or more physical characteristics of the user correspond to at least one of the plurality of images of the user; wherein the instructions to select the one or more physical characteristics of the user comprise: instructions to detect a feature of the user; and instructions to track the detected feature of the user between at least two of the plurality of images of the user.
  • 4. The device of claim 1, wherein the processor is further configured to calculate an angle of view of a camera used to capture the plurality of images relative to the user.
  • 5. The device of claim 3, wherein the processor is further configured to perform a cross-correlation algorithm to track the detected feature.
  • 6. The computing device of claim 1, wherein each of the plurality of geometry files comprises a different polygon mesh model with different features.
  • 7. The computing device of claim 6, wherein the plurality of geometry files further comprises at least one first polygon mesh depicting a first set of facial characteristics and at least one second polygon mesh depicting a second set of facial characteristics.
  • 8. The computing device of claim 1, further comprising instructions stored in the memory, the instructions being executable by the processor to linearly combine the plurality of geometry files based on each respective coefficient.
  • 9. The computing device of claim 8, wherein an effect of each of the geometry files on the 3-D model of the user is weighted, based on each respective coefficient.
  • 10. The computing device of claim 6, wherein each polygon mesh model comprises at least one of a plurality of vertices and a plurality of texture coordinates.
  • 11. The computing device of claim 1, further comprising a universal morphable model, wherein the universal morphable model comprises at least one of a plurality of vertices and a plurality of texture coordinates corresponding to at least one of the geometry files.
  • 12. The computing device of claim 1, wherein the instructions being executable by the processor to select one or more physical characteristics of the user further comprise instructions executable by the processor to: determine a 3-D structure of the user from the plurality of images; and select one or more physical characteristics of the user from the determined 3-D structure of the user.
  • 13. The computing device of claim 1, wherein the algorithm to track the one or more physical characteristics comprises at least one of a cross-correlation algorithm and a structure-from-motion algorithm.
  • 14. The computing device of claim 1, wherein the processor is further configured to determine whether an angle of view of a camera used to capture an image of the plurality of images relative to the user is one of a predetermined set of viewing angles.
  • 15. A method comprising: obtaining a plurality of images of a user, wherein the images of the user have one or more angles of view; generating, by at least one processor, a 3-D model of the user using the plurality of images, the generating comprising: selecting one or more physical characteristics of the user from the plurality of images of the user; accessing a plurality of geometry files stored in a memory; calculating a respective coefficient for each of the plurality of geometry files based on the one or more physical characteristics of the user; tracking the one or more physical characteristics of the user across at least two images of the plurality of images of the user, based on the plurality of geometry files and an algorithm; and combining the plurality of geometry files based on at least one of the respective coefficients to create the 3-D model of the user; and generating, by the at least one processor, a combined image of the user together with a product, the combined image comprising the generated 3-D model of the user and a rendered image of a 3-D model of the product.
  • 16. The method of claim 15, wherein the generating the 3-D model of the user further comprises providing a universal morphable model; and adjusting the universal morphable model based on at least one of the respective coefficients of at least one of the plurality of geometry files.
  • 17. The method of claim 15, wherein the one or more physical characteristics of the user correspond to at least one of the plurality of images of the user, wherein selecting the one or more physical characteristics of the user comprises: detecting a feature of the user.
  • 18. The method of claim 15, wherein each of the plurality of geometry files comprises a different generic polygon mesh model with different features.
  • 19. The method of claim 18, wherein the plurality of geometry files include at least one polygon mesh depicting a first set of facial characteristics and at least one polygon mesh depicting a second set of facial characteristics.
  • 20. The method of claim 15, further comprising the processor linearly combining the plurality of geometry files based on each respective coefficient.
  • 21. The method of claim 20, further comprising the processor weighting, based on the respective coefficients, an effect of each of the geometry files on the 3-D model of the user.
  • 22. The method of claim 15, wherein selecting one or more physical characteristics of the user further comprises: determining a 3-D structure of the user from the plurality of images; and selecting one or more physical characteristics of the user from the determined 3-D structure of the user.
  • 23. The method of claim 15, wherein the algorithm to track the one or more physical characteristics comprises at least one of a cross-correlation algorithm and a structure-from-motion algorithm.
  • 24. The method of claim 15, wherein the processor is further configured to determine whether an angle of view of a camera used to capture an image of the plurality of images relative to the user is one of a predetermined set of viewing angles.
  • 25. A computer-program product comprising a non-transitory computer-readable medium storing instructions thereon, the instructions being executable by the processor to: obtain a plurality of images of a user, wherein the images of the user have one or more angles of view; generate a 3-D model of the user using the plurality of images, the instructions to generate comprising instructions to: select one or more physical characteristics of the user from the plurality of images of the user; calculate a respective coefficient for each of a plurality of geometry files, based on the one or more physical characteristics of the user; track the one or more physical characteristics of the user across at least two images of the plurality of images of the user, based on the plurality of geometry files and an algorithm; and combine the plurality of geometry files based on the respective coefficients to create the 3-D model to represent the user; and generate a combined image of the user with a product, the combined image comprising the 3-D model of the user and a rendered image of a 3-D model of the product.
  • 26. The computer-program product of claim 25, wherein the instructions to generate the 3-D model of the user further comprises instructions being executable by the processor to: provide a universal morphable model; and adjust the universal morphable model based on the respective coefficients of each of the plurality of geometry files.
  • 27. The computer-program product of claim 26, wherein the instructions to select one or more physical characteristics are executable by the processor to: detect a feature of the user.
  • 28. The computer-program product of claim 27, wherein the instructions to generate the 3-D model are further executable by the processor to: linearly combine the plurality of geometry files based on each respective coefficient; and weight an effect of each of the geometry files on the 3-D model to represent the user.
  • 29. The computer-program product of claim 25, wherein the instructions to select one or more physical characteristics of the user further comprise instructions executable by the processor to: determine a 3-D structure of the user from the plurality of images; and select one or more physical characteristics of the user from the determined 3-D structure of the user.
  • 30. The computer-program product of claim 25, wherein the algorithm to track the one or more physical characteristics comprises at least one of a cross-correlation algorithm and a structure-from-motion algorithm.
  • 31. The computer-program product of claim 25, wherein the processor is further configured to determine whether an angle of view of a camera used to capture an image of the plurality of images relative to the user is one of a predetermined set of viewing angles.
RELATED APPLICATIONS

This application claims priority as a continuation of U.S. patent application Ser. No. 13/774,983, entitled SYSTEMS AND METHODS FOR GENERATING A 3-D MODEL OF A USER FOR A VIRTUAL TRY-ON PRODUCT, filed Feb. 22, 2013, which claims priority to U.S. Application No. 61/650,983, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on May 23, 2012; and U.S. Application No. 61/735,951, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on Dec. 11, 2012, each of which is incorporated herein in its entirety by this reference.

20110208493 Altheimer et al. Aug 2011 A1
20110211816 Goedeken et al. Sep 2011 A1
20110227923 Mariani et al. Sep 2011 A1
20110227934 Sharp Sep 2011 A1
20110229659 Reynolds Sep 2011 A1
20110229660 Reynolds Sep 2011 A1
20110234581 Eikelis et al. Sep 2011 A1
20110234591 Mishra et al. Sep 2011 A1
20110249136 Levy Oct 2011 A1
20110262717 Broen et al. Oct 2011 A1
20110279634 Periyannan et al. Nov 2011 A1
20110292034 Corazza et al. Dec 2011 A1
20110293247 Bhagavathy et al. Dec 2011 A1
20110304912 Broen et al. Dec 2011 A1
20120002161 Altheimer et al. Jan 2012 A1
20120008090 Altheimer et al. Jan 2012 A1
20120013608 Ahn et al. Jan 2012 A1
20120016645 Altheimer et al. Jan 2012 A1
20120021835 Keller et al. Jan 2012 A1
20120038665 Strietzel Feb 2012 A1
20120075296 Wegbreit et al. Mar 2012 A1
20120079377 Goossens Mar 2012 A1
20120082432 Ackley Apr 2012 A1
20120114184 Barcons-Palau et al. May 2012 A1
20120114251 Solem et al. May 2012 A1
20120121174 Bhagavathy et al. May 2012 A1
20120130524 Clara et al. May 2012 A1
20120133640 Chin et al. May 2012 A1
20120133850 Broen et al. May 2012 A1
20120147324 Marin et al. Jun 2012 A1
20120158369 Bachrach et al. Jun 2012 A1
20120162218 Kim et al. Jun 2012 A1
20120166431 Brewington et al. Jun 2012 A1
20120170821 Zug et al. Jul 2012 A1
20120176380 Wang et al. Jul 2012 A1
20120183202 Wei et al. Jul 2012 A1
20120183204 Aarts et al. Jul 2012 A1
20120183238 Savvides et al. Jul 2012 A1
20120192401 Pavlovskaia et al. Aug 2012 A1
20120206610 Wang et al. Aug 2012 A1
20120219195 Wu et al. Aug 2012 A1
20120224629 Bhagavathy et al. Sep 2012 A1
20120229758 Marin et al. Sep 2012 A1
20120256906 Ross et al. Oct 2012 A1
20120263437 Barcons-Palau et al. Oct 2012 A1
20120288015 Zhang et al. Nov 2012 A1
20120294369 Bhagavathy et al. Nov 2012 A1
20120294530 Bhaskaranand et al. Nov 2012 A1
20120299914 Kilpatrick et al. Nov 2012 A1
20120306874 Nguyen et al. Dec 2012 A1
20120307074 Bhagavathy et al. Dec 2012 A1
20120313955 Choukroun Dec 2012 A1
20120314023 Barcons-Palau et al. Dec 2012 A1
20120320153 Barcons-Palau et al. Dec 2012 A1
20120323581 Strietzel et al. Dec 2012 A1
20130027657 Esser et al. Jan 2013 A1
20130033482 Luisi Feb 2013 A1
20130070973 Saito et al. Mar 2013 A1
20130088490 Rasmussen Apr 2013 A1
20130135579 Krug et al. May 2013 A1
20130187915 Lee et al. Jul 2013 A1
20130201187 Tong Aug 2013 A1
20130271451 Tong Oct 2013 A1
20150286857 Kim Oct 2015 A1
Foreign Referenced Citations (73)
Number Date Country
2196280 Feb 1996 CA
10007705 Sep 2001 DE
0092364 Oct 1983 EP
359596 Mar 1990 EP
444902 Jul 1995 EP
0994336 Apr 2000 EP
1011006 Jun 2000 EP
1136869 Sep 2001 EP
1138253 Oct 2001 EP
1231569 Aug 2002 EP
1450201 Aug 2004 EP
1728467 Dec 2006 EP
1154302 Aug 2009 EP
2535887 Dec 2012 EP
2615583 Jul 2013 EP
2955409 Jul 2011 FR
2966038 Apr 2012 FR
2449855 Dec 2008 GB
2003345857 Dec 2003 JP
2004272530 Sep 2004 JP
2005269022 Sep 2005 JP
20000028583 May 2000 KR
200000051217 Aug 2000 KR
20040097200 Nov 2004 KR
20080086945 Sep 2008 KR
20100050052 May 2010 KR
1993000641 Jan 1993 WO
1996004596 Feb 1996 WO
1997040342 Oct 1997 WO
1997040960 Nov 1997 WO
1998013721 Apr 1998 WO
1998027861 Jul 1998 WO
1998027902 Jul 1998 WO
1998035263 Aug 1998 WO
1998052189 Nov 1998 WO
1998057270 Dec 1998 WO
1999056942 Nov 1999 WO
1999064918 Dec 1999 WO
2000000863 Jan 2000 WO
2000016683 Mar 2000 WO
2000045348 Aug 2000 WO
2000049919 Aug 2000 WO
2000062148 Oct 2000 WO
2000064168 Oct 2000 WO
2001023908 Apr 2001 WO
2001032074 May 2001 WO
2001035338 May 2001 WO
2001061447 Aug 2001 WO
2001067325 Sep 2001 WO
2001074553 Oct 2001 WO
2001078630 Oct 2001 WO
2001088654 Nov 2001 WO
2002007845 Jan 2002 WO
2002041127 May 2002 WO
2003079097 Sep 2003 WO
2003084448 Oct 2003 WO
2007012261 Feb 2007 WO
2007017751 Feb 2007 WO
2007018017 Feb 2007 WO
2008009355 Jan 2008 WO
2008009423 Jan 2008 WO
2008135178 Nov 2008 WO
2009023012 Feb 2009 WO
2009043941 Apr 2009 WO
2010039976 Apr 2010 WO
2010042990 Apr 2010 WO
2011012743 Feb 2011 WO
2011095917 Aug 2011 WO
2011134611 Nov 2011 WO
2011147649 Dec 2011 WO
2012051654 Apr 2012 WO
2012054972 May 2012 WO
2012054983 May 2012 WO
Non-Patent Literature Citations (50)
Entry
Blanz, A Morphable Model for the Synthesis of 3D Faces, 2009, URL: cgm.cs.ntust.edu.tw/A9409004/paper/970416/Morphable%20Model.ppt, pp. 1-27.
Yun Ge, 3D Novel Face Sample Modeling for Face Recognition, Journal of Multimedia, vol. 6, No. 5, Oct. 2011 (Year: 2011).
Piotraschke, Automated 3D Face Reconstruction from Multiple Images Using Quality Measures, CVPR 2016 (IEEE Xplore), pp. 3418-3427 (Year: 2016).
Supplementary European Search Report from European Patent Application No. EP13793686, dated Mar. 14, 2016.
Supplementary European Search Report from European Patent Application No. EP13793957, dated Mar. 15, 2016.
Blanz, Volker et al., “A Morphable Model for the Synthesis of 3D Faces,” Computer Graphics Proceedings, Siggraph 99, New York, NY, Aug. 8, 1999, pp. 187-194.
“Ray-Ban's Virtual Glasses Fitting Shop Window—Augmented by Activ'screen and Total Immersion,” URL:https://www.youtube.com/watch?v=JFhployxc6Y, Sep. 30, 2010, 1 page.
Shan, Ying et al., “Model-Based Bundle Adjustment with Application to Face Modeling,” Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV), Vancouver, British Columbia, Canada, Jul. 7-14, 2001, and International Conference on Computer Vision, Los Alamitos, California, IEEE Comp. Soc., US, vol. 2, pp. 644-651.
Fua, P., "Using Model-Driven Bundle-Adjustment to Model Heads from Raw Video Sequences," Computer Vision, The Proceedings of the Seventh IEEE International Conference, Kerkyra, Greece, Sep. 20-27, 1999, vol. 1, pp. 46-53.
3D Morphable Model Face Animation, http://www.youtube.com/watch?v=nice6NYb_WA, Apr. 20, 2006.
Visionix 3D iView, Human Body Measurement Newsletter, vol. 1., No. 2, Sep. 2005, pp. 2 and 3.
Blaise Aguera y Arcas demos Photosynth, May 2007. Ted.com, http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html.
ERC Technology Leads to Eyeglass "Virtual Try-on" System, Apr. 20, 2012, http://showcase.erc-assoc.org/accomplishments/microelectronic/imsc6-eyeglass.htm.
PCT International Search Report for PCT International Patent Application No. PCT/US2012/068174, dated Mar. 7, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042504, dated Aug. 19, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042509, dated Sep. 2, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042514, dated Aug. 30, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042517, dated Aug. 29, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042512, dated Sep. 6, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042529, dated Sep. 17, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042525, dated Sep. 17, 2013.
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042520, dated Sep. 27, 2013.
Tracker, Tracker Help, Nov. 2009.
Sinha et al., GPU-based Video Feature Tracking and Matching, http://frahm.web.unc.edu/files/2014/01/GPU-based-Video-Feature-Tracking-And-Matching.pdf, May 2006.
Dror et al., Recognition of Surface Reflectance Properties from a Single Image under Unknown Real-World Illumination, IEEE, Proceedings of the IEEE Workshop on Identifying Objects Across Variations in Lighting: Psychophysics & Computation, Dec. 2011.
Simonite, 3-D Models Created by a Cell Phone, Mar. 23, 2011, url: http://www.technologyreview.com/news/423386/3-d-models-created-by-a-cell-phone/.
English Machine translation of KR20000028583. May 25, 2000.
English Machine translation of KR200000051217. Aug. 16, 2000.
English Machine translation of KR 20080086945. Sep. 29, 2008.
English Machine translation of KR 20100050052. May 13, 2010.
English abstract and English machine translation of the specification and claims of WO1997040960. Nov. 6, 1997.
English Machine translation of WO2011134611. Nov. 3, 2011.
English Machine translation of WO2008135178. Nov. 13, 2008.
English Machine translation of WO 2008009423. Jan. 24, 2008.
English Machine translation of WO2008009355. Jan. 24, 2008.
English translation of abstract of WO2007018017. Feb. 15, 2007.
English translation of abstract of WO 2007012261. Feb. 1, 2007.
English abstract and English machine translation of the specification and claims of JP 2005269022. Sep. 29, 2005.
English abstract and English machine translation of the specification and claims of JP 2004272530. Sep. 30, 2004.
Fidaleo, Model-Assisted 3D Face Reconstruction from Video, AMFG'07 Analysis and Modeling of Faces and Gestures Lecture Notes in Computer Science vol. 4778, 2007, pp. 124-138.
Garcia-Mateos, Estimating 3D facial pose in video with just three points, CVPRW '08 Computer vision and Pattern Recognition Workshops, 2008.
English abstract and English machine translation of the specification and claims of JP 2003345857. Dec. 5, 2003.
English abstract and English machine translation of the specification and claims of DE 10007705. Sep. 6, 2001.
English abstract and English machine translation of the specification and claims of FR 2966038. Apr. 20, 2012.
English abstract and English machine translation of the specification and claims of EP 1450201. Aug. 25, 2004.
English abstract and English machine translation of the specification and claims of EP0359596. Mar. 21, 1990.
English abstract and English machine translation of the specification and claims of WO 1993000641. Jan. 7, 1993.
English abstract of WO 2001067325. Sep. 13, 2001.
English abstract of WO 2001078630. Oct. 25, 2001.
English abstract of WO 2003084448. Oct. 16, 2003.
Related Publications (1)
Number Date Country
20150235428 A1 Aug 2015 US
Provisional Applications (2)
Number Date Country
61650983 May 2012 US
61735951 Dec 2012 US
Continuations (1)
Number Date Country
Parent 13774983 Feb 2013 US
Child 14698655 US