Systems and methods for adjusting a virtual try-on

Information

  • Patent Grant
  • Patent Number
    9,286,715
  • Date Filed
    Monday, February 25, 2013
  • Date Issued
    Tuesday, March 15, 2016
Abstract
According to at least one embodiment, a computer-implemented method for generating a virtual try-on is described. A first model is obtained. The first model includes a first set of attachment points. A second model is obtained. The second model includes a first set of connection points. The first model and the second model are combined. Combining the first and second models includes matching the first set of attachment points with the first set of connection points. An image is rendered based on at least a portion of the combined first and second models.
Description
BACKGROUND

The use of computer systems and computer-related technologies continues to increase at a rapid pace. This increased use of computer systems has influenced the advances made to computer-related technologies. Indeed, computer devices have increasingly become an integral part of the business world and the activities of individual consumers. Computing devices may be used to carry out several business, industry, and academic endeavors.


In various situations, advances in technology may allow activities that could only be done in person to be done virtually (e.g., online). For example, online shopping has enabled customers to browse huge inventories of products without leaving the comfort of their own home. While the online shopping experience has allowed customers to seamlessly compare, analyze, and purchase certain products, purchasing clothing and other types of personal (e.g., personalized) accessories presents additional challenges. Typical brick-and-mortar clothing and accessory stores provide dressing rooms, mirrors, and other services to help the customer select items to purchase. It may be desirable to bring these types of services to the online shopping experience.


SUMMARY

According to at least one embodiment, a computer-implemented method for generating a virtual try-on is described. A first model is obtained. The first model includes a first set of attachment points. A second model is obtained. The second model includes a first set of connection points. The first model and the second model are combined. Combining the first and second models includes matching the first set of attachment points with the first set of connection points. An image is rendered based on at least a portion of the combined first and second models.


In one embodiment, an adjustment command may be received. In some cases, the combined first and second models may be adjusted based on the adjustment command.


In one example, the first model may additionally include a second set of attachment points. In this example, the combined first and second models may be adjusted by matching the second set of attachment points with the first set of connection points. In another example, the second model may additionally include a second set of connection points. In this example, the combined first and second models may be adjusted by matching the first set of attachment points with the second set of connection points. In yet another example, the first model may additionally include a second set of attachment points and the second model may additionally include a second set of connection points. In this example, the combined first and second models may be adjusted by matching the second set of attachment points with the second set of connection points.


In some cases, receiving the adjustment command may include receiving a touch input. In one example, the first model may be a three-dimensional model of a user. In one instance, the three-dimensional model of the user may be a morphable model. In one example, the first set of attachment points may include a nose point and at least one ear point.


In one example, the second model may be a three-dimensional model of glasses. In some cases, the first set of connection points may include a nose connection point and at least one earpiece connection point. In one embodiment, the combined first and second models may be a modeled virtual try-on.


A computing device configured to generate a virtual try-on is also described. The computing device includes a processor and memory in electronic communication with the processor. The computing device further includes instructions stored in the memory, the instructions being executable by the processor to obtain a first model, the first model comprising a first set of attachment points, obtain a second model, the second model comprising a first set of connection points, combine the first model and the second model, and render an image based on at least a portion of the combined first and second models. Combining the first and second models includes instructions executable by the processor to match the first set of attachment points with the first set of connection points.


A computer-program product to generate a virtual try-on is additionally described. The computer-program product may include a non-transitory computer-readable medium that stores instructions. The instructions may be executable by a processor to obtain a first model, the first model comprising a first set of attachment points, obtain a second model, the second model comprising a first set of connection points, combine the first model and the second model, and render an image based on at least a portion of the combined first and second models. Combining the first and second models includes instructions executable by the processor to match the first set of attachment points with the first set of connection points.


Features from any of the above-mentioned embodiments may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the instant disclosure.



FIG. 1 is a block diagram illustrating one embodiment of an environment in which the present systems and methods may be implemented;



FIG. 2 is a block diagram illustrating another embodiment of an environment in which the present systems and methods may be implemented;



FIG. 3 is a block diagram illustrating one example of a virtual try-on module;



FIG. 4 is a block diagram illustrating one example of an attachment module;



FIG. 5 is a diagram illustrating one example of a modeled try-on;



FIG. 6 is a diagram illustrating one example of attachment points on the three-dimensional model of the user;



FIG. 7 is a diagram illustrating one example of a three-dimensional model of a pair of glasses;



FIG. 8 is a diagram illustrating another example of attachment points on the three-dimensional model of the user;



FIG. 9 is a diagram illustrating one example of a three-dimensional model of a pair of glasses;



FIG. 10 is a diagram illustrating an example of attachment points on the three-dimensional model of the user;



FIG. 11 is a diagram illustrating an example of a modeled try-on;



FIG. 12 is a diagram illustrating an example of a rendering box for rendering a portion of a modeled try-on;



FIGS. 13-22 illustrate various examples of a virtual try-on using the systems and methods described herein;



FIG. 23 is a flow diagram illustrating one example of a method to generate a virtual try-on;



FIG. 24 is a flow diagram illustrating one example of a method to adjust a virtual try-on; and



FIG. 25 depicts a block diagram of a computer system suitable for implementing the present systems and methods.





While the embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the instant disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.


DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Different users may wear/use the same item differently. For example, some users may prefer that a pair of glasses sits close to their face (towards the base of their nose), while other users may prefer that a pair of glasses sits away from their face (towards the tip of their nose). Furthermore, some users may prefer that a pair of glasses sits so that the temples (e.g., arms) slide horizontally by the ears, while other users may prefer that the temples are angled so that the temples sit above the ears. Naturally, there may be an infinite number of ways that a user may prefer to wear a pair of glasses (or any other product). Therefore, it may be desirable to allow a user to manipulate or otherwise adjust the way a user virtually tries-on a pair of glasses (or any other product).


In some cases, a virtual try-on may be generated by modeling the virtual try-on in a three-dimensional space and then rendering one or more images based on the modeled virtual try-on. In one example, the modeled virtual try-on may be generated by interfacing a three-dimensional model of a user with a three-dimensional model of a product. For instance, a three-dimensional model of a user's face/head and a three-dimensional model of a pair of glasses may be interfaced together to generate a modeled virtual try-on of a user trying-on a pair of glasses. This modeled virtual try-on may then be used to render one or more images of the user virtually trying-on the pair of glasses. Although the example of a user virtually trying-on a pair of glasses is used hereafter, it is understood that a user may virtually try-on any product using the systems and methods described herein.


The positioning of the glasses in the virtual try-on may be determined based on the way the three-dimensional model of the pair of glasses is interfaced (e.g., positioned) with respect to the three-dimensional model of the user's face/head in the modeled virtual try-on. For example, if the modeled virtual try-on interfaces the three-dimensional model of the glasses with the three-dimensional model of the user's face/head so that the glasses sit towards the tip of the nose, then the one or more images rendered based on the modeled virtual try-on may illustrate the virtually tried-on glasses in a position where the glasses sit towards the tip of the nose. Therefore, adjusting the way that the three-dimensional model of the glasses and the three-dimensional model of the user's face/head are interfaced in the modeled virtual try-on may adjust the way that the one or more images render the virtual try-on.


In some cases, a user may adjust a virtual try-on through a user interface. For example, a user may use an input command (e.g., touch commands, sliders, etc.) to adjust the positioning of the virtually tried-on glasses. In some cases, the modeled virtual try-on is adjusted (one or more of the three-dimensional model of the glasses and the three-dimensional model of the user's face/head is repositioned, for example) based on the input command. At least a portion of the modeled virtual try-on may be used to render one or more images of the adjusted virtual try-on.


Turning now to the figures, FIG. 1 is a block diagram illustrating one embodiment of an environment 100 in which the present systems and methods may be implemented. In some embodiments, the systems and methods described herein may be performed on a single device (e.g., device 105). For example, the systems and methods described herein may be performed by a virtual try-on module 115 that is located on the device 105. Examples of device 105 include mobile devices, smart phones, personal computing devices, computers, servers, etc.


In one embodiment, a device 105 may include the virtual try-on module 115, a camera 120, and a display 125. In one example, the device 105 may be coupled to a database 110. The database 110 may be internal to the device 105. Additionally or alternatively, the database 110 may be external to the device 105. The database 110 may include model data 130 and/or product data 135.


In one example, the virtual try-on module 115 may enable a user to virtually try-on a pair of glasses in a preferred position. The virtual try-on module 115 may obtain a three-dimensional model of a user (based on the model data 130, for example). The three-dimensional model of the user may include one or more sets of attachment points. In one example, each set of attachment points may correspond to a different position in which the glasses may be worn. The virtual try-on module 115 may also obtain a three-dimensional model of a pair of glasses (based on the product data 135, for example). The three-dimensional model of the glasses may include one or more sets of connection points. In one example, each set of connection points may correspond to the points of connection when the glasses are worn in a particular position. In another example, each set of connection points may correspond to a different way that the glasses may be adjusted to fit a user's head.
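
For illustration only, the following non-limiting sketch (in Python, with class and field names that are assumptions rather than elements of the described systems) shows one way the model data 130 and product data 135 might be organized, with each named set of attachment or connection points corresponding to one wearing position:

```python
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class UserModel:
    """Three-dimensional model of a user's face/head (e.g., a morphable model)."""
    vertices: np.ndarray                                   # (V, 3) vertex positions
    # Each named set of attachment points corresponds to one wearing position,
    # e.g., {"default": [vertex indices...], "tip_of_nose": [...]}.
    attachment_point_sets: Dict[str, List[int]] = field(default_factory=dict)


@dataclass
class GlassesModel:
    """Three-dimensional model of a pair of glasses."""
    vertices: np.ndarray                                   # (V, 3) vertex positions
    # Each named set of connection points corresponds to one way the glasses
    # contact the face (nose pad and earpiece contact vertices).
    connection_point_sets: Dict[str, List[int]] = field(default_factory=dict)
```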


An initial position may be selected. In one example, the initial position may be determined based on stored (in database 110, for example) position information. In one example, the position information may correspond to a default initial position. In another example, the position information may correspond to a preselected position. The virtual try-on module 115 may generate a modeled try-on by combining the three-dimensional model of the user and the three-dimensional model of the glasses. In some cases, combining the three-dimensional model of the user and the three-dimensional model of the glasses includes matching the selected connection points with the selected attachment points. As noted previously, the position of the three-dimensional model of the glasses in the modeled try-on may be based on the set of attachment points used to attach the three-dimensional model of the glasses to the three-dimensional model of the user.
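
As one illustrative way to match a selected set of connection points with a selected set of attachment points (reusing the UserModel and GlassesModel sketches above), a rigid least-squares fit such as the Kabsch algorithm could place the three-dimensional model of the glasses onto the three-dimensional model of the user; the specific fitting method shown here is an assumption, not a requirement of the described systems:

```python
import numpy as np


def combine_models(user: "UserModel", glasses: "GlassesModel",
                   attachment_set: str, connection_set: str) -> np.ndarray:
    """Rigidly place the glasses so that the selected connection points
    coincide (in a least-squares sense) with the selected attachment points.
    Returns the transformed glasses vertices."""
    A = user.vertices[user.attachment_point_sets[attachment_set]]        # (k, 3) targets
    B = glasses.vertices[glasses.connection_point_sets[connection_set]]  # (k, 3) sources

    # Kabsch: find rotation R and translation t minimizing ||R*B + t - A||.
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (B - cb).T @ (A - ca)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb

    return (glasses.vertices @ R.T) + t
```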


The virtual try-on module 115 may provide a virtual try-on experience by rendering one or more images of a virtual try-on based on at least a portion of the modeled try-on. In some cases, the one or more rendered images may be displayed via the display 125.


Additionally or alternatively, the virtual try-on module 115 may enable a user to adjust the position of a virtually tried-on pair of glasses (during the virtual try-on experience, for example). In one example, a modeled try-on may include the three-dimensional model of the user and the three-dimensional model of the glasses combined such that a first set of attachment points is matched with a first set of connection points. The virtual try-on module 115 may receive adjustment information (a touch input, for example) indicating that the position of the glasses and/or the way the glasses are worn on the face should be adjusted. In this example, the virtual try-on module 115 may adjust the position of the glasses and/or the way the glasses are worn based on the adjustment information. In one example, a second set of attachment points and/or a second set of connection points may be selected based on the adjustment information. The virtual try-on module 115 may then generate an adjusted modeled try-on by combining the three-dimensional model of the user and the three-dimensional model of the glasses with the selected set of attachment points matched with the selected set of connection points. The virtual try-on module 115 may then render one or more images of a virtual try-on based on at least a portion of the adjusted modeled try-on.


In some cases, the three-dimensional model (e.g., morphable model) of the user may be obtained (e.g., received, generated, etc.) based on the model data 130. In one example, the model data 130 may include a three-dimensional model for the user. In another example, the model data 130 may include morphable model information. The morphable model information may include one or more average models (e.g., caricatures) that may be combined (in a linear combination, for example) based on a set of coefficients (corresponding to the user) to produce a morphable model of the user. In various situations, the three-dimensional model of the user may be generated based on one or more images of a user that were captured via the camera 120.
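
A minimal sketch of the linear combination described above, assuming the average models share vertex topology (array shapes and names are illustrative):

```python
import numpy as np


def morph_user_model(average_models: np.ndarray, coefficients: np.ndarray) -> np.ndarray:
    """Produce a user-specific model as a linear combination of stored average
    models (model data 130).

    average_models: (M, V, 3) stack of M average face meshes with shared topology.
    coefficients:   (M,) per-user blending weights (e.g., fit from camera images).
    """
    # Weighted sum over the model axis; vertex correspondence is preserved, so
    # attachment point indices remain valid on the morphed mesh.
    return np.tensordot(coefficients, average_models, axes=1)
```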


In some cases, the three-dimensional model of the glasses (e.g., a pair of glasses) may be obtained (e.g., received, generated, etc.) based on the product data 135. In one example, the product data 135 may include one or more three-dimensional models of glasses and/or product information for generating one or more three-dimensional models of glasses. The three-dimensional model of the user and the three-dimensional model of the glasses may each be scaled models (that are scaled based on the same scaling standard, for example).



FIG. 2 is a block diagram illustrating another embodiment of an environment 200 in which the present systems and methods may be implemented. In some embodiments, a device 105-a may communicate with a server 210 via a network 205. Examples of networks 205 include local area networks (LAN), wide area networks (WAN), virtual private networks (VPN), cellular networks (using 3G and/or LTE, for example), etc. In some configurations, the network 205 may be the internet. In some configurations, the device 105-a may be one example of the device 105 illustrated in FIG. 1. For example, the device 105-a may include the camera 120, the display 125, and an application 215. It is noted that in some embodiments, the device 105-a may not include a virtual try-on module 115.


In some embodiments, the server 210 may include a virtual try-on module 115-a. The virtual try-on module 115-a may be one example of the virtual try-on module 115 illustrated in FIG. 1. In one embodiment, the server 210 may be coupled to the database 110. For example, the virtual try-on module 115-a may access the model data 130 in the database 110 via the server 210. The database 110 may be internal or external to the server 210.


In some configurations, the application 215 may capture one or more images via the camera 120. For example, the application 215 may use the camera 120 to capture one or more images of a user. In one example, the application 215 may transmit the captured one or more images to the server 210 (for processing and analysis, for example). In another example, the application 215 may process and/or analyze the one or more images and then transmit information (e.g., a selected set of images, set of coefficients, model data, etc.) to the server 210.


In some configurations, the virtual try-on module 115-a may obtain the one or more images and/or the information and may generate a modeled try-on based on the one or more images and/or the information as described above and as will be described in further detail below. In one example, the virtual try-on module 115-a may transmit one or more rendered images (based on a modeled try-on) to the application 215. In some configurations, the application 215 may obtain the one or more rendered images and may output one or more images of a virtual try-on to be displayed via the display 125.



FIG. 3 is a block diagram 300 illustrating one example of a virtual try-on module 115-b. The virtual try-on module 115-b may be one example of the virtual try-on modules 115 illustrated in FIG. 1 or 2. The virtual try-on module 115-b may include an attachment module 305, a rendering module 310, an adjustment module 315, and a display module 320.


In one embodiment, the attachment module 305 may combine a three-dimensional model of a user and a three-dimensional model of glasses by matching a set of attachment points on the three-dimensional model of the user with a set of connection points on the three-dimensional model of the glasses. The set of attachment points and/or the set of connection points may be selected based on a default position, a pre-selected position, and/or adjustment information. In one example, combining the three-dimensional model of the user and the three-dimensional model of the glasses generates a modeled try-on. Details regarding the attachment module 305 are described below.


In one embodiment, the rendering module 310 may render one or more images for a virtual try-on based on the modeled try-on. In one example, the modeled try-on may be a pixel depth map that represents the geometry and the color, texture, etc., associated with each three-dimensional model. In this example, one or more images may be rendered by determining (and capturing) the visible pixels corresponding to a particular point of view of the modeled try-on. In some cases, the rendering may be limited to the addition of the three-dimensional model of the glasses and the addition of the interactions (e.g., shadows, reflections, etc.) between the three-dimensional model of the glasses and the three-dimensional model of the user. This may allow one or more rendered images to be overlaid onto one or more images of the user to create the virtual try-on experience. Since the three-dimensional model of the glasses and the interactions as a result of the addition of the three-dimensional model of the glasses may affect only a portion of the modeled try-on, the rendering module 310 may limit the rendering to a portion of the modeled try-on. For example, the rendering may be limited to the portion corresponding to the three-dimensional model of the glasses and interactions (e.g., shadows, reflections, etc.) between the three-dimensional model of the glasses and the three-dimensional model of the user. In one scenario, a three-dimensional rendering box may be the portion of the modeled try-on that is rendered. An example of a three-dimensional rendering box is described below.
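
One way to determine the visible pixels for a particular point of view, sketched below, is a per-pixel depth comparison between a rendered glasses layer and the user layer; the array shapes and the use of NumPy are assumptions for illustration:

```python
import numpy as np


def composite_glasses_layer(user_rgb, user_depth, glasses_rgb, glasses_depth):
    """Overlay a rendered glasses layer onto an image of the user by keeping,
    at each pixel, whichever surface is nearer to the camera (a z-buffer test).

    All inputs are assumed rendered from the same viewpoint: *_rgb are
    (H, W, 3) arrays and *_depth are (H, W) arrays with np.inf where the layer
    is empty. Shadows and reflections would be baked into the glasses layer."""
    glasses_in_front = glasses_depth < user_depth            # (H, W) boolean mask
    return np.where(glasses_in_front[..., None], glasses_rgb, user_rgb)
```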


In one embodiment, the adjustment module 315 may receive an input (touch input, for example) indicating a request to adjust the position of the glasses. Upon receiving the adjustment request, the adjustment module 315 may determine whether the requested adjustment corresponds to a possible position. Upon determining that the requested adjustment corresponds to a possible position, the adjustment module 315 may provide the adjustment request to the attachment module 305. The attachment module 305 may select a set of attachment points and/or a set of connection points that corresponds to the requested adjustment and may generate an adjusted modeled try-on as described previously. The rendering module 310 may then render one or more images based on the updated modeled try-on.


In one embodiment, the display module 320 may display the rendered one or more images (via the display 125, for example). In one example, the display module 320 may display a frontal view of a virtual try-on and/or a profile view of the virtual try-on. Examples of the displayed frontal view and displayed profile view are described below. In some cases, the display module 320 may receive touch inputs (e.g., swipes, drags, selections, etc.) via the display 125. In some cases, the display module 320 may determine if the touch input is received with respect to the frontal view and/or if the touch input is received with respect to the profile view. In one example, a vertical swipe in the frontal view slides the glasses to various positions up or down the nose, and a vertical swipe in the profile view tips the glasses to various positions. Examples are shown below.
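
A hedged sketch of how a vertical swipe might be routed to the appropriate adjustment depending on whether it is received in the frontal view or the profile view (the dictionary keys and index conventions are assumptions):

```python
def handle_touch(view: str, direction: str, position: dict,
                 n_nose_points: int = 5, n_ear_points: int = 3) -> dict:
    """Route a vertical swipe: in the frontal view it slides the glasses along
    the nose, in the profile view it tilts the temples by the ears."""
    if view == "frontal":
        # Nose points indexed 0..n-1 from the top of the nose downward, so a
        # swipe up selects a point further up the nose (see FIG. 6).
        step = -1 if direction == "up" else 1
        position["nose"] = min(max(position["nose"] + step, 0), n_nose_points - 1)
    elif view == "profile":
        # Ear points indexed 0..n-1 upward from EP1, the lowest usable point,
        # so a swipe down past EP1 results in no adjustment (see FIG. 8).
        step = 1 if direction == "up" else -1
        position["ears"] = min(max(position["ears"] + step, 0), n_ear_points - 1)
    return position
```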



FIG. 4 is a block diagram 400 illustrating one example of an attachment module 305-a. The attachment module 305-a may be one example of the attachment module 305 illustrated in FIG. 3. In one embodiment, the attachment module 305-a may include a user model obtaining module 405, a glasses model obtaining module 410, an attachment point determination module 415, a position determination module 420, and a combining module 425.


The user model obtaining module 405 may obtain a three-dimensional model (e.g., morphable model) of a user based on the model data 130. The glasses model obtaining module 410 may obtain a three-dimensional model of a pair of glasses based on the product data 135.


The attachment point determination module 415 may identify one or more sets of attachment points on the three-dimensional model of the user. The attachment point determination module 415 may also identify one or more sets of connection points on the three-dimensional model of the glasses.


The position determination module 420 may determine a position to be used when generating the modeled try-on. In some cases, the determined position may correspond to a default position. In one example, the default position may correspond to a set of attachment points that are used by a majority of users when wearing glasses (the position that a majority of users prefer, for example). In the case that the user has previously adjusted the glasses to a custom position (e.g., pre-selected position), the attachment points corresponding to the custom position may be used as the determined position. In some cases, the position determination module 420 may determine a position based on a current position (e.g., the default position or a preselected position) and received adjustment information. In one example, the position determination module 420 may select a set of attachment points (e.g., a position) corresponding to the adjusted position. In some cases, the selected set of attachment points may be saved as the preselected position. In the case that the glasses are adjusted with respect to the face (in the same position, for example), then the position determination module 420 may determine the set of connection points that should be connected to the corresponding set of attachment points. As described with respect to the default position, preselected position, and adjusted position (e.g., new preselected position), a set of connection points may correspond to a default set of connection points, a preselected set of connection points, or an adjusted set of connection points (e.g., a new set of preselected connection points).
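
For illustration, the position-determination logic described above might look like the following sketch, where the stored-position format and names are assumptions:

```python
from typing import Optional

DEFAULT_POSITION = {"nose": 1, "ears": 0}   # e.g., nose point N2 with ear points EP1a/EP1b


def determine_position(user_id: str, position_store: dict,
                       adjustment: Optional[dict] = None) -> dict:
    """Pick the attachment points to use for a try-on: start from the user's
    preselected position if one was saved, otherwise the default position;
    apply any received adjustment information; and save the result as the new
    preselected position."""
    position = dict(position_store.get(user_id, DEFAULT_POSITION))
    if adjustment:
        for key, delta in adjustment.items():        # e.g., {"nose": +1}
            position[key] = max(0, position[key] + delta)
    position_store[user_id] = position               # persist as the preselected position
    return position
```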


The combining module 425 may combine the three-dimensional model of the user with the three-dimensional model of the glasses by matching the selected set of connection points with the selected set of attachment points. As a result, the combining module 425 may generate a modeled virtual try-on that positions the glasses in a consistent position and/or allows the position of the glasses to be adjusted based on a user's preference. The boundaries (e.g., surfaces) of the three-dimensional model of the user and the three-dimensional model of the glasses are defined and enforced. As a result, the combination of the three-dimensional model of the user and the three-dimensional model of the glasses is a non-interfering combination (there is no interference into the boundaries of the models, for example).



FIG. 5 is a diagram 500 illustrating one example of a modeled try-on. The modeled try-on may include the three-dimensional model of the user 515 and the three-dimensional model of the glasses 530. In one example, the three-dimensional model of the user 515 may include a nose 520, a right ear 535, and a left ear 525. In some configurations, the three-dimensional model of the glasses 530 may be positioned on the face of the three-dimensional model of the user 515 so that the three-dimensional model of the glasses 530 attaches to the nose 520 and regions around the left ear 525 and the right ear 535. In one example, the modeled try-on, which is a three-dimensional model (a three-dimensional depth map, for example), may be illustrated in a frontal view 505 and a profile view 510. Although the following examples utilize a frontal view 505 and a profile view 510 of the three-dimensional model of the user 515, it is understood that various other angles (e.g., perspectives) may be used.



FIG. 6 is a diagram 600 illustrating one example of attachment points on the three-dimensional model of the user 515-a. The three-dimensional model of the user 515-a may be an example of the three-dimensional model of the user 515 illustrated in FIG. 5. In one example, the three-dimensional model of the user 515-a may include a plurality of nose points (e.g., attachment points) 605 along the nose 520. Although five nose points 605 (e.g., N1, N2, N3, N4, N5) are shown in the present example, it is understood that more or fewer nose points 605 may be used.


As noted previously, the three-dimensional model of the user 515-a may be a morphable model 515-a. In one example, the plurality of nose points 605 may correspond to particular points on the morphable model 515-a. As a result of being tied to specific points on the morphable model 515-a, each nose point 605 may correspond to the same point on the morphable model 515-a regardless of the shape or size of the user's face. For instance, if the user has a larger nose 520, then the nose points 605 will be spread further apart, and if the user has a smaller nose 520, then the nose points 605 may be squished closer together. As a result, the nose point N1 605 on a larger nose 520 may correspond to nose point N1 605 on a smaller nose 520.
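
Because the nose points are tied to fixed vertices of the morphable model, looking them up is a simple index operation on the morphed mesh; the vertex indices below are placeholders, not values from the model data 130:

```python
import numpy as np

# Hypothetical vertex indices for nose points N1-N5 on the morphable model.
NOSE_POINT_VERTEX_IDS = [1021, 1187, 1342, 1498, 1655]


def nose_point_positions(morphed_vertices: np.ndarray) -> np.ndarray:
    """Look up the 3-D coordinates of the nose points on a morphed face mesh.

    Because the indices are fixed, a larger nose simply spreads the returned
    points further apart; no per-user re-annotation is needed."""
    return morphed_vertices[NOSE_POINT_VERTEX_IDS]        # (5, 3)
```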


In one example, a touch sensor 620 may be associated with the frontal view 505. The touch sensor 620 may be used to adjust which nose point 605 the three-dimensional model of the glasses should be positioned at. In one example, a user may slide a three-dimensional model of the glasses up/down 610 and in/out 615 along the nose by swiping or dragging a finger up/down on the touch sensor 620. In one example, the default nose point 605 may correspond to nose point N2 605. In this example, sliding the touch sensor 620 up may select a nose point 605 up the nose 520 (nose point N1 605, for example) and sliding the touch sensor 620 down may select a nose point 605 down the nose 520 (nose point N3 605, for example). It is noted that although the nose points 605 appear to go vertically up/down 610 as illustrated in frontal view 505, the nose points 605 actually go up and in/down and out 615 as illustrated in the profile view 510.



FIG. 7 is a diagram 700 illustrating one example of a three-dimensional model of a pair of glasses 530-a. The three-dimensional model of the glasses 530-a may be an example of the three-dimensional model of the glasses 530 illustrated in FIG. 5. In one embodiment, the three-dimensional model of the glasses 530-a may include a right earpiece 710 (for contacting the right ear 535, for example), a right eyepiece 715, a bridge 720 (for contacting the nose 520, for example), a left eyepiece 725, and a left earpiece 730 (for contacting the left ear 525, for example). In one example, the three-dimensional model of the glasses 530-a may include a plurality of possible nose connection points (e.g., NC1, NC2, NC3, NC4) 705. Depending on the nose pad configuration of the three-dimensional model of the glasses 530-a and/or the width of the nose 520, the three-dimensional model of the glasses 530-a may connect with the nose 520 at different nose connection points 705. In some cases, the three-dimensional model of the glasses 530-a may be adjusted to account for differences in nose connection points 705. It is noted that each nose connection point 705 may be matched to and connected with one of the nose points 605.



FIG. 8 is a diagram 800 illustrating another example of attachment points on the three-dimensional model of the user 515-b. The three-dimensional model of the user 515-b may be an example of the three-dimensional model of the user 515 illustrated in FIG. 5 or 6. In one example, the three-dimensional model of the user 515-b may include a plurality of ear points (e.g., attachment points) 805, 810 at and above the ears 525, 535. Although three ear points 805 (e.g., EP1, EP2, EP3) are shown in the present example, it is understood that more or fewer ear points 805, 810 may be used.


As noted previously, the three-dimensional model of the user 515-b may be a morphable model 515-b. In one example, the plurality of ear points 805, 810 may correspond to particular points on the morphable model 515-b. As a result of being tied to specific points on the morphable model 515-b, each ear point 805, 810 will correspond to the same point on the morphable model 515-b regardless of the shape or size of the user's face. For instance, if the user has a larger head, then the ear points 805, 810 will be spread further apart, and if the user has a smaller head, then the ear points 805, 810 may be squished closer together. As a result, the ear points EP1a 810 and EP1b 805 on a larger head may correspond to ear points EP1a 810 and EP1b 805 on a smaller head. This may allow the three-dimensional model of the glasses to be positioned properly and consistently regardless of the size of the user's head.


In one example, a touch sensor 820 may be associated with the profile view 510. The touch sensor 820 may be used to adjust which ear point 805 the three-dimensional model of the glasses should be positioned at. In one example, a user may tilt a three-dimensional model of the glasses so that the temples rotate up and forward/down and back 830 and up and in/down and out 815, 825 by the ears 525, 535 by swiping or dragging a finger up/down on the touch sensor 820. In one example, the default ear points 805 may correspond to ear points EP1a 810, EP1b 805. In this example, sliding the touch sensor 820 up may select an ear point 805 up the side of the head (ear point EP2a 810, EP2b 805, for example) and sliding the touch sensor 820 down may not result in an adjustment. Typically, ear points EP1a 810 and EP1b 805 correspond to the lowest that the earpieces 710, 730 may go due to the connection between the ear 525 and the head. It is noted that although the ear points 805 appear to go up and forward/down and back 830 as illustrated in profile view 510, the ear points 805, 810 go up and in/down and out 825, 815.



FIG. 9 is a diagram 900 illustrating one example of a three-dimensional model of a pair of glasses 530-b. The three-dimensional model of the glasses 530-b may be an example of the three-dimensional model of the glasses 530 illustrated in FIG. 5 or 7. In one embodiment, each earpiece (e.g., left earpiece 730) may include a plurality of earpiece connection points (e.g., connection points) 905. For example, the three-dimensional model of the glasses 530-b may include a plurality of possible earpiece connection points (e.g., EC1, EC2, EC3, EC4, EC5) 905. Depending on the way the three-dimensional model of the glasses is positioned and/or situated on the three-dimensional model of the user, the three-dimensional model of the glasses 530-b may connect with the ear 525, 535 and/or head at different earpiece connection points 905. In some cases, the three-dimensional model of the glasses 530-b may be adjusted to account for differences in earpiece connection points 905. It is noted that each earpiece connection point 905 may be matched to and connected with one of the ear points 805, 810.



FIG. 10 is a diagram 1000 illustrating an example of attachment points on the three-dimensional model of the user 515-c. The three-dimensional model of the user 515-c may be an example of the three-dimensional model of the user 515 illustrated in FIG. 5, 6 or 8. In this example, the three-dimensional model of the user 515-c may include the nose points 605 and the ear points 805, 810 as described previously. In some configurations, a combination of nose points 605 and ear points 805, 810 may be used to define a set of attachment points (e.g., a position). In one example, a default position may correspond to nose point N2 605 and ear points EP1a 810, EP1b 805. If the touch sensor 820 associated with the profile view 510 is used to adjust the tilt (tilt forward, for example) of a three-dimensional model of glasses, then the set of attachment points (for the preselected position, for example) may correspond to nose point N2 605 and ear points EP2a 810, EP2b 805. Similarly, if the touch sensor 620 associated with the frontal view 505 is used to adjust how far (slide down, for example) the three-dimensional model of the glasses slides down the nose 520, then the set of attachment points (for this position, for example) may correspond to nose point N3 605 and ear points EP2a 810, EP2b 805. As a result, the three-dimensional model of the user 515-c and a three-dimensional model of a pair of glasses may be combined in numerous different positions based on various combinations of attachment points (and/or connection points, for example).
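
One illustrative way to represent such a combined set of attachment points (a position) is sketched below; the names mirror the examples above and are otherwise assumptions:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TryOnPosition:
    """A named set of attachment points: one nose point plus a pair of ear
    points, adjusted independently by the frontal-view and profile-view touch
    sensors."""
    nose_point: str          # e.g., "N2"
    ear_points: str          # e.g., "EP1" (applied to both EP1a and EP1b)


DEFAULT = TryOnPosition(nose_point="N2", ear_points="EP1")
TILTED_FORWARD = TryOnPosition(nose_point="N2", ear_points="EP2")
SLID_DOWN_NOSE = TryOnPosition(nose_point="N3", ear_points="EP2")
```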



FIG. 11 is a diagram 1100 illustrating an example of a modeled try-on. In this example, the three-dimensional model of the user 515-c and the three-dimensional model of the glasses 530-c may be combined based on a selected position. The three-dimensional model of the glasses 530-c may be an example of the three-dimensional model of the glasses 530 illustrated in FIG. 5, 7, or 9. In this example, a nose point (N2, for example) 605 may be matched with a nose connection point (NC4, for example) 705, and ear points (EP1a 810, EP1b 805, for example) 805, 810 may be matched with an earpiece connection point (EC3, for example) 905. As a result, the modeled try-on may be a modeled try-on with the glasses in a specific (and reproducible) position. As discussed previously, the touch sensor 620 associated with the frontal view 505 and the touch sensor 820 associated with the profile view 510 may allow the position (and situation, for example) of the three-dimensional model of the glasses 530-c to be adjusted.



FIG. 12 is a diagram 1200 illustrating an example of a rendering box 1205 for rendering a portion of a modeled try-on. As described previously, the modeled try-on may include the three-dimensional model of the user 515-c and a three-dimensional model of a pair of glasses 530-c. Regardless of the way the three-dimensional model of the glasses 530-c is positioned with respect to the three-dimensional model of the user 515-c, the three-dimensional model of the glasses 530-c may cover only a portion of the three-dimensional model of the user 515-c. The various interactions resulting from the combination of the three-dimensional model of the user 515-c and the three-dimensional model of the glasses 530-c (e.g., shadows, reflections, etc.) may also only cover a portion of the three-dimensional model of the user 515-c. As a result, the rendering may be limited to the portion of the modeled try-on that includes the visible portions of the three-dimensional model of the glasses 530-c and the visual interactions as a result of the addition of the three-dimensional model of the glasses 530-c. In one example, the rendering box 1205 may represent the portion of the modeled try-on that is to be rendered. As illustrated in this example, the rendering box 1205 may be a three-dimensional box. It is noted that reducing the area that needs to be rendered may reduce computations and increase efficiency. This may be particularly beneficial when adjustments are made (so that the adjustments may be rendered and reflected in an image in real time, for example).
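
A minimal sketch of computing such a rendering box from the placed glasses model, assuming an axis-aligned box and an arbitrary margin for shadows and reflections (the margin value is an assumption, in model units):

```python
import numpy as np


def rendering_box(glasses_vertices: np.ndarray, margin: float = 0.02) -> tuple:
    """Compute an axis-aligned three-dimensional box enclosing the placed
    glasses plus a small margin; only this region of the modeled try-on needs
    to be re-rendered after an adjustment. Returns (min_xyz, max_xyz)."""
    lo = glasses_vertices.min(axis=0) - margin
    hi = glasses_vertices.max(axis=0) + margin
    return lo, hi
```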



FIGS. 13-22 illustrate various examples of a virtual try-on using the systems and methods described herein. In these examples, the three-dimensional model of the user and the three-dimensional model of the glasses have been combined based on a matching of a set of selected connection points to a set of selected attachment points (based on an initial position or an adjusted position, for example). In these examples, one or more images may be rendered based on the resulting modeled try-on. The rendered images may then be used to provide a virtual try-on experience (that is displayed via the display 125, for example).



FIG. 13 is a diagram 1300 illustrating an example of a device 105-b that is providing a virtual try-on experience. The device 105-b may be an example of the device 105 illustrated in FIG. 1 or 2. In one example, the display 125 may display one or more images to provide a virtual try-on experience. In one example, the virtual try-on experience may include an image of a user 1315 that has been rendered (or overlaid with a portion of a rendered image, for example) to show the user virtually trying-on a pair of glasses 1330. In one example, the image of the user 1315 may correspond to an image of the user that does not include the virtually tried-on glasses 1330. The image of the user may include a nose 1320 and one or more ears 1325. In the virtual try-on experience, the position of the glasses 1330 may correspond to the position of the three-dimensional model of the glasses in the modeled try-on.


In this example, the display 125 may display a frontal view 1305 of a virtual try-on. In one example, the display 125 may include a touch sensor 620-a that allows a user to adjust the position of the glasses 1330 (as described previously, for example). In one example, the display 125 may be a touch screen display and the touch sensor 620-a for adjusting the frontal view 1305 may be anywhere within the portion of the display 125 that is displaying the frontal view 1305 (in this case, the entire display). This interface may allow a user to adjust the position of the glasses (the position along the nose, for example) by simply swiping or sliding the glasses 1330 up or down the nose 1320. The attachment points are shown for illustrative purposes only (indicating the possible positions of adjustment along the nose 1320). The possible attachment points may not typically be shown. FIG. 14 is a diagram 1400 that illustrates the result of a touch input used to adjust the glasses 1330 (using the touch sensor 620-a, for example).



FIG. 15 is a diagram 1500 illustrating an example of a device 105-b that is providing a virtual try-on experience. The example illustrated in FIGS. 15-16 is similar to the example shown in FIGS. 13-14 except that in this example, the display 125 may display the virtual try-on experience in the profile view 1310. The profile view 1310 may more fully illustrate the relationship between the temples and earpieces of the glasses 1330 and the user's 1315 head and ear 1325. The attachment points are shown for illustrative purposes only (indicating the possible positions of adjustment along the head by the ear 1325). The possible attachment points may not typically be shown. In one example, the display 125 may include a touch sensor 820-a that allows a user to adjust the position of the glasses 1330 (as described previously, for example). In one example, the display 125 may be a touch screen display and the touch sensor 820-a may be used to adjust the position of the glasses 1330 in the profile view 1310. In one example, the touch sensor 820-a may be used to adjust the tilt of the glasses 1330 from anywhere within the portion of the display 125 that is displaying the profile view 1310 (in this case, the entire display). This interface may allow a user to adjust the position of the glasses (the tilt of the glasses 1330, for example) by simply swiping or sliding the temple or earpiece of the glasses 1330 up or down along the side of the head by the ear 1325. FIG. 16 is a diagram 1600 that illustrates the result of a touch input used to adjust the glasses 1330 (using the touch sensor 820-a, for example).



FIG. 17 is a diagram 1700 illustrating an example of a device 105-b that is providing a virtual try-on experience. The example illustrated in FIGS. 17-18 is similar to the example shown in FIGS. 13-14 except that in this example, the display 125 may display the virtual try-on experience in both the frontal view 1305 and the profile view 1310 simultaneously. As described previously, the touch sensor 620-a may adjust the position of the glasses along the nose. In some cases, the touch sensor 620-a may be anywhere within the frontal view 1305, but not in the profile view 1310. The attachment points are shown for illustrative purposes only (indicating the possible positions of adjustment along the nose 1320). The possible attachment points may not typically be shown. FIG. 18 is a diagram 1800 that illustrates the result of a touch input used to adjust the glasses 1330 (using the touch sensor 620-a, for example).



FIG. 19 is a diagram 1900 illustrating an example of a device 105-b that is providing a virtual try-on experience. The example illustrated in FIGS. 19-20 is similar to the example shown in FIGS. 15-16 except that in this example, the display 125 may display the virtual try-on experience in both the frontal view 1305 and the profile view 1310 simultaneously. As described previously, the touch sensor 820-a may adjust the position of the glasses along the head by the ear 1325. In some cases, the touch sensor 820-a may be anywhere within the profile view 1310, but not in the frontal view 1305. The attachment points are shown for illustrative purposes only (indicating the possible positions of adjustment along the head by the ear 1325). The possible attachment points may not typically be shown. FIG. 20 is a diagram 2000 that illustrates the result of a touch input used to adjust the glasses 1330 (using the touch sensor 820-a, for example).



FIG. 21 is a diagram 2100 illustrating an example of a device 105-b that is providing a virtual try-on experience. The example illustrated in FIGS. 21-22 is similar to the examples shown in FIGS. 13-20 except that in this example, the display 125 may display the virtual try-on experience in both the frontal view 1305 and the profile view 1310 simultaneously. In this example, adjustments along the nose 1320 may be made in the frontal view 1305 via the touch sensor 620-a and adjustments of the tilt of the glasses 1330 may be made in the profile view 1310 via the touch sensor 820-a. In one example, the touch sensor 620-a may be anywhere within the frontal view 1305, but not in the profile view 1310, and the touch sensor 820-a may be anywhere within the profile view 1310, but not in the frontal view 1305. The attachment points are shown for illustrative purposes only (indicating the possible positions of adjustment along the nose 1320 and/or along the head by the ear 1325). The possible attachment points may not typically be shown. As a result of the touch sensor 620-a and the touch sensor 820-a being available simultaneously, the position of the glasses 1330 with respect to the nose 1320 and the ears 1325 may be adjusted simultaneously. FIG. 22 is a diagram 2200 that illustrates the result of a touch input used to adjust the glasses 1330 (using both the touch sensor 620-a and the touch sensor 820-a, for example).



FIG. 23 is a flow diagram illustrating one example of a method 2300 to generate a virtual try-on. In some configurations, the method 2300 may be implemented by the virtual try-on module 115 illustrated in FIG. 1, 2, or 3. At block 2305, a first model may be obtained. The first model may include a first set of attachment points. At block 2310, a second model may be obtained. The second model may include a first set of connection points. At block 2315, the first model and the second model may be combined. For example, the first model and the second model may be combined by matching the first set of attachment points with the first set of connection points. At block 2320, an image may be rendered based on at least a portion of the combined first and second models.
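
Tying the earlier sketches together, method 2300 might be exercised as in the following illustrative function, where render_region is a hypothetical renderer standing in for block 2320 (not an API described in this document):

```python
def generate_virtual_try_on(user, glasses,
                            attachment_set="default", connection_set="default"):
    """Sketch of method 2300: the two models stand in for blocks 2305-2310,
    combine_models (sketched earlier) for block 2315, and render_region for
    block 2320."""
    placed_glasses = combine_models(user, glasses, attachment_set, connection_set)
    box = rendering_box(placed_glasses)
    return render_region(user, placed_glasses, box)   # hypothetical renderer
```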


Thus, the method 2300 may allow for generating a virtual try-on. It should be noted that the method 2300 is just one implementation and that the operations of the method 2300 may be rearranged or otherwise modified such that other implementations are possible.



FIG. 24 is a flow diagram illustrating one example of a method 2400 to adjust a virtual try-on. In some configurations, the method 2400 may be implemented by the virtual try-on module 115 illustrated in FIG. 1, 2, or 3. At block 2405, a first model may be obtained. The first model may include a first set of attachment points. At block 2410, a second model may be obtained. The second model may include a first set of connection points. At block 2415, the first model and the second model may be combined. For example, the first model and the second model may be combined by matching the first set of attachment points with the first set of connection points. At block 2420, a first image may be rendered based on at least a portion of the combined first and second models. At block 2425, an adjustment command may be received. In one example, the adjustment command may be a touch input made with respect to a virtual try-on experience. At block 2430, the combined first and second models may be adjusted based on the adjustment command. At block 2435, a second image may be rendered based on at least a portion of the adjusted combined first and second models.
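
Method 2400 extends the same pipeline with an adjustment step; in the sketch below, which reuses the earlier sketches, the {"nose": +1}-style adjustment command and the attachment_set_for lookup are illustrative assumptions:

```python
def adjust_virtual_try_on(user, glasses, adjustment_command,
                          position_store, user_id="user"):
    """Sketch of method 2400: render a first image (blocks 2405-2420), adjust
    the combined models based on the adjustment command (blocks 2425-2430),
    and render a second image (block 2435)."""
    first_image = generate_virtual_try_on(user, glasses)
    position = determine_position(user_id, position_store, adjustment_command)
    second_image = generate_virtual_try_on(
        user, glasses,
        attachment_set=attachment_set_for(position))  # hypothetical lookup into model data 130
    return first_image, second_image
```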


Thus, the method 2400 may allow for adjusting a virtual try-on. It should be noted that the method 2400 is just one implementation and that the operations of the method 2400 may be rearranged or otherwise modified such that other implementations are possible.



FIG. 25 depicts a block diagram of a computer system 2500 suitable for implementing the present systems and methods. For example, the computer system 2500 may be suitable for implementing the device 105 illustrated in FIG. 1, 2, or 13-22 and/or the server 210 illustrated in FIG. 2. Computer system 2500 includes a bus 2505 which interconnects major subsystems of computer system 2500, such as a central processor 2510, a system memory 2515 (typically RAM, but which may also include ROM, flash RAM, or the like), an input/output controller 2520, an external audio device, such as a speaker system 2525 via an audio output interface 2530, an external device, such as a display screen 2535 via display adapter 2540, a keyboard 2545 (interfaced with a keyboard controller 2550) (or other input device), multiple universal serial bus (USB) devices 2555 (interfaced with a USB controller 2560), and a storage interface 2565. Also included are a mouse 2575 (or other point-and-click device) interfaced through a serial port 2580 and a network interface 2585 (coupled directly to bus 2505).


Bus 2505 allows data communication between central processor 2510 and system memory 2515, which may include read-only memory (ROM) or flash memory (neither shown), and random access memory (RAM) (not shown), as previously noted. The RAM is generally the main memory into which the operating system and application programs are loaded. The ROM or flash memory can contain, among other code, the Basic Input-Output system (BIOS) which controls basic hardware operation such as the interaction with peripheral components or devices. For example, the virtual try-on module 115-c to implement the present systems and methods may be stored within the system memory 2515. Applications (e.g., application 215) resident with computer system 2500 are generally stored on and accessed via a non-transitory computer readable medium, such as a hard disk drive (e.g., fixed disk 2570) or other storage medium. Additionally, applications can be in the form of electronic signals modulated in accordance with the application and data communication technology when accessed via interface 2585.


Storage interface 2565, as with the other storage interfaces of computer system 2500, can connect to a standard computer readable medium for storage and/or retrieval of information, such as a fixed disk drive 2544. Fixed disk drive 2544 may be a part of computer system 2500 or may be separate and accessed through other interface systems. Network interface 2585 may provide a direct connection to a remote server via a direct network link to the Internet via a POP (point of presence). Network interface 2585 may provide such connection using wireless techniques, including digital cellular telephone connection, Cellular Digital Packet Data (CDPD) connection, digital satellite data connection, or the like.


Many other devices or subsystems (not shown) may be connected in a similar manner (e.g., document scanners, digital cameras, and so on). Conversely, all of the devices shown in FIG. 25 need not be present to practice the present systems and methods. The devices and subsystems can be interconnected in different ways from that shown in FIG. 25. The operation of a computer system such as that shown in FIG. 25 is readily known in the art and is not discussed in detail in this application. Code to implement the present disclosure can be stored in a non-transitory computer-readable medium such as one or more of system memory 2515 or fixed disk 2570. The operating system provided on computer system 2500 may be iOS®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, Linux®, or another known operating system.


While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered exemplary in nature since many other architectures can be implemented to achieve the same functionality.


The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.


Furthermore, while various embodiments have been described and/or illustrated herein in the context of fully functional computing systems, one or more of these exemplary embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer-readable media used to actually carry out the distribution. The embodiments disclosed herein may also be implemented using software modules that perform certain tasks. These software modules may include script, batch, or other executable files that may be stored on a computer-readable storage medium or in a computing system. In some embodiments, these software modules may configure a computing system to perform one or more of the exemplary embodiments disclosed herein.


The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the present systems and methods and their practical applications, to thereby enable others skilled in the art to best utilize the present systems and methods and various embodiments with various modifications as may be suited to the particular use contemplated.


Unless otherwise noted, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” In addition, for ease of use, the words “including” and “having,” as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.” In addition, the term “based on” as used in the specification and the claims is to be construed as meaning “based at least upon.”

Claims
  • 1. A computer-implemented method for generating a virtual try-on, the method comprising: obtaining, by a processor, a first model, the first model comprising a first set of two or more attachment points and a second set of one or more attachment points, wherein the first model comprises a three-dimensional model of a user's face; wherein the first set of attachment points includes at least an attachment point on a first facial feature on the model of a user's face and an attachment point on a second facial feature on the model of a user's face, and wherein the second set of attachment points includes a second attachment point on the first facial feature or on the second facial feature; obtaining, by the processor, a second model, the second model comprising a first set of connection points; combining, by the processor, the first model and the second model, wherein combining the first and second models comprises matching at least one of the first set of attachment points with at least one of the first set of connection points; and rendering, by the processor, an image based on at least a portion of the combined first and second models.
  • 2. The method of claim 1, further comprising: receiving an adjustment command; andadjusting the combined first and second models based on the adjustment command.
  • 3. The method of claim 2, wherein: adjusting the combined first and second models comprises matching the second set of attachment points with the first set of connection points.
  • 4. The method of claim 2, wherein: the second model further comprises a second set of connection points; and adjusting the combined first and second models comprises matching the first set of attachment points with the second set of connection points.
  • 5. The method of claim 2, wherein: the second model further comprises a second set of connection points; and adjusting the combined first and second models comprises matching the second set of attachment points with the second set of connection points.
  • 6. The method of claim 2, wherein receiving the adjustment command comprises receiving a touch input.
  • 7. The method of claim 6, wherein the three-dimensional model of the user comprises a morphable model.
  • 8. The method of claim 6, wherein the first set of connection points comprises a nosepiece connection point and at least one earpiece connection point; and wherein the first set of attachment points comprises a nose attachment point and an ear attachment point.
  • 9. The method of claim 6, wherein the second model comprises a three-dimensional model of glasses.
  • 10. The method of claim 9, wherein the first set of connection points comprises a nosepiece connection point and at least one temple point.
  • 11. The method of claim 9, wherein the combined first and second models comprises a modeled virtual try-on.
  • 12. The method of claim 1, wherein the first facial feature is an ear of the model of a user's face and the second facial feature is the nose of the model of a user's face; wherein the first set of attachment points includes at least a first ear attachment point and at least a first nose attachment point; and wherein the second set of attachment points includes at least one of a second nose attachment point relative to said nose or a second ear attachment point relative to said ear.
  • 13. The method of claim 12, wherein the first set of attachment points includes at least a first left ear attachment point, at least a first right ear attachment point, and at least a first nose attachment point; and wherein the second set of attachment points includes at least a second nose attachment point.
  • 14. The method of claim 13, wherein the first and second sets of attachment points together comprise two or more left ear attachment points, two or more right ear attachment points, and two or more nose attachment points.
  • 15. A computing device configured to generate a virtual try-on, comprising: a processor; memory in electronic communication with the processor; instructions stored in the memory, the instructions being executable by the processor to: obtain a first model, the first model comprising a first set of two or more attachment points and a second set of one or more attachment points, wherein the first model comprises a three-dimensional model of a user's face; wherein the first set of attachment points includes at least an attachment point on a first facial feature on the model of a user's face and an attachment point on a second facial feature on the model of a user's face, and wherein the second set of attachment points includes a second attachment point on the first facial feature or on the second facial feature; obtain a second model, the second model comprising a first set of connection points; combine the first model and the second model, wherein combining the first and second models comprises matching at least one of the first set of attachment points with at least one of the first set of connection points; and render an image based on at least a portion of the combined first and second models.
  • 16. The computing device of claim 15, wherein the instructions are further executable by the processor to: receive an adjustment command; and adjust the combined first and second models based on the adjustment command.
  • 17. The computing device of claim 16, wherein: the instructions to adjust the combined first and second models are further executable by the processor to match the second set of attachment points with the first set of connection points.
  • 18. The computing device of claim 16, wherein: the second model further comprises a second set of connection points; and the instructions to adjust the combined first and second models are further executable by the processor to match the first set of attachment points with the second set of connection points.
  • 19. The computing device of claim 16, wherein: the second model further comprises a second set of connection points; and the instructions to adjust the combined first and second models are further executable by the processor to match the second set of attachment points with the second set of connection points.
  • 20. The computing device of claim 16, wherein the instructions to receive the adjustment command are further executable by the processor to receive a touch input.
  • 21. A computer-program product for generating a virtual try-on, the computer-program product comprising a non-transitory computer-readable medium storing instructions thereon, the instructions being executable by a processor to: obtain a first model, the first model comprising a first set of two or more attachment points and a second set of one or more attachment points, wherein the first model comprises a three-dimensional model of a user's face; wherein the first set of attachment points includes at least an attachment point on a first facial feature on the model of a user's face and an attachment point on a second facial feature on the model of a user's face, and wherein the second set of attachment points includes a second attachment point on the first facial feature or on the second facial feature; obtain a second model, the second model comprising a first set of connection points; combine the first model and the second model, wherein combining the first and second models comprises matching at least one of the first set of attachment points with at least one of the first set of connection points; and render an image based on at least a portion of the combined first and second models.
  • 22. The computer-program product of claim 21, wherein the instructions are further executable by the processor to: receive an adjustment command; and adjust the combined first and second models based on the adjustment command.
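
To make the point matching recited in claims 1-5 concrete, the following is a minimal illustrative sketch and not the claimed implementation: a face model carries labeled sets of attachment points, a glasses model carries labeled sets of connection points, combining aligns points whose labels match, and an adjustment re-combines the models using a different attachment-point set. The class names, point labels, coordinates, and mean-offset alignment below are assumptions made only for this example.

    # Illustrative sketch only: hypothetical names and a simple mean-offset
    # alignment stand in for whatever matching a real system would use.
    from dataclasses import dataclass
    from typing import Dict, Tuple

    Point = Tuple[float, float, float]
    PointSet = Dict[str, Point]          # label -> 3-D coordinate


    @dataclass
    class FaceModel:                      # "first model": 3-D face
        attachment_sets: Dict[str, PointSet]


    @dataclass
    class GlassesModel:                   # "second model": glasses
        connection_sets: Dict[str, PointSet]


    def combine(face: FaceModel, glasses: GlassesModel,
                attachment_set: str = "default",
                connection_set: str = "default") -> Point:
        """Match connection points to attachment points that share a label and
        return the translation (mean residual) that places the glasses."""
        attach = face.attachment_sets[attachment_set]
        connect = glasses.connection_sets[connection_set]
        shared = sorted(attach.keys() & connect.keys())
        if not shared:
            raise ValueError("no matching attachment/connection labels")
        n = len(shared)
        return tuple(sum(attach[k][i] - connect[k][i] for k in shared) / n
                     for i in range(3))


    if __name__ == "__main__":
        face = FaceModel(attachment_sets={
            # First set of attachment points: nose bridge plus both ears.
            "default": {"nose": (0.0, 0.0, 10.0),
                        "left_ear": (-7.0, -1.0, 0.0),
                        "right_ear": (7.0, -1.0, 0.0)},
            # Second set: an alternate nose point, e.g. for sliding the
            # glasses up the nose in response to an adjustment command.
            "raised": {"nose": (0.0, 0.6, 10.2),
                       "left_ear": (-7.0, -1.0, 0.0),
                       "right_ear": (7.0, -1.0, 0.0)},
        })
        glasses = GlassesModel(connection_sets={
            # Nosepiece and earpiece (temple) connection points.
            "default": {"nose": (0.0, 0.0, 0.0),
                        "left_ear": (-7.0, 0.0, -10.0),
                        "right_ear": (7.0, 0.0, -10.0)},
        })
        print("initial placement:", combine(face, glasses))
        print("after adjustment: ", combine(face, glasses, "raised"))

Rendering is omitted from the sketch; in practice the returned translation (or a full rigid transform computed from the matched points) would be applied to the glasses geometry before an image of the combined models is rendered.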
RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application No. 61/650,983, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on May 23, 2012; and U.S. Provisional Application No. 61/735,951, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on Dec. 11, 2012, both of which are incorporated herein in their entirety by this reference.

Related Publications (1)
Number Date Country
20130321412 A1 Dec 2013 US
Provisional Applications (2)
Number Date Country
61650983 May 2012 US
61735951 Dec 2012 US