The use of computer systems and computer-related technologies continues to increase at a rapid pace. This increased use of computer systems has influenced the advances made to computer-related technologies. Indeed, computer systems have increasingly become an integral part of the business world and the activities of individual consumers. For example, computers have opened up an entire industry of internet shopping. In many ways, online shopping has changed the way consumers purchase products. However, in some cases, consumers may avoid shopping online. For example, it may be difficult for a consumer to know if they will look good in and/or with a product without seeing themselves in and/or with the product. In many cases, this challenge may deter a consumer from purchasing a product online. Therefore, rendering three-dimensional (3-D) scenes to improve the online shopping experience may be desirable.
According to at least one embodiment, a computer-implemented method for rendering virtual try-on products is described. A first render viewpoint of a virtual three-dimensional (3-D) space may be selected that includes a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh of an object. Polygons of the 3-D polygon mesh may be designated as backwards-facing polygons and front-facing polygons in relation to the first render viewpoint. A shadow texture map of the object may be applied to the 3-D model of the user. A transparency texture map of the object may be applied to a backwards-facing polygon of the 3-D polygon mesh of the object. A first color texture map of the object may be applied to the result of the application of the transparency texture map to the backwards-facing polygon. The virtual 3-D space may be rendered at the first render viewpoint. The transparency texture map of the object may be applied to a front-facing polygon of the 3-D polygon mesh of the object. The first color texture map of the object may be applied to the result of the application of the transparency texture map to the front-facing polygon. The virtual 3-D space may be rendered at the first render viewpoint.
In some embodiments, at least a portion of the 3-D polygon mesh of the object may be placed within a predetermined distance of at least one point on the 3-D model of the user.
In some embodiments, a shadow value of the object may be detected from a scan of the object. In some cases, a shadow texture map may be created from the detected shadow value. A 2-D coordinate of the shadow texture map may be mapped to a point on the 3-D model of the user and a value of the point on the 3-D model of the user may be multiplied by the shadow value.
In some embodiments, a transparency value of the object may be detected from a scan of the object. In some cases, a transparency texture map may be created from the detected transparency value. A 2-D coordinate of the transparency texture map may be mapped to a point on the 3-D model of the user and the 3-D polygon mesh of the object. A value of the point on the 3-D model of the user and the 3-D polygon mesh of the object may be multiplied by the transparency value.
In some embodiments, a first scanning angle of a scan of an object may be selected. The first scanning angle may correspond to the first render viewpoint. In some cases, a first color value of the object may be detected at the first scanning angle. A first color texture map may be created from the detected color value. A 2-D coordinate of the first color texture map may be mapped to a point on the 3-D model of the user and the 3-D polygon mesh of the object. The resultant value of multiplying the point on the 3-D model of the user and the 3-D polygon mesh of the object by the transparency value may be multiplied by the first color value.
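By way of a non-limiting illustration, the following sketch (in Python, with hypothetical function and variable names) shows the per-point arithmetic described above for a single normalized channel: the value of a point on the 3-D model of the user is multiplied by the mapped shadow value, that result is multiplied by the transparency value, and that result is multiplied by the color value. The specific numbers are examples only.

```python
# Minimal sketch (hypothetical helper name) of the per-point math described
# above: a point value on the 3-D model of the user is multiplied by the
# mapped shadow value, then by the transparency value, and that result is
# multiplied by the color value sampled from the color texture map.

def apply_texture_values(user_value, shadow_value, transparency_value, color_value):
    """Combine one channel of a point on the 3-D model with mapped texture values."""
    shadowed = user_value * shadow_value        # shadow texture map applied first
    blended = shadowed * transparency_value     # transparency texture map applied next
    return blended * color_value                # color texture map applied to the result

# Example: a skin pixel of 0.8, a light shadow (0.9), a 50% transparent lens,
# and a brown-ish color channel of 0.6.
print(apply_texture_values(0.8, 0.9, 0.5, 0.6))
```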
In some embodiments, a second render viewpoint of the virtual 3-D space may be selected. In some cases, a second scanning angle of a scan of an object may be selected. The second scanning angle may correspond to the second render viewpoint. A second color value of the object at the second scanning angle may be detected. A second color texture map from the detected second color value may be created. In some cases, the shadow texture map of the object may be applied to the 3-D model of the user at the second render viewpoint. The transparency texture map of the object may be applied to the backwards-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint. The second color texture map of the object may be applied to the result of the application of the transparency texture map to the backwards-facing polygon at the second render viewpoint. The transparency texture map of the object may be applied to the front-facing polygon of the 3-D polygon mesh of the object at the second render viewpoint. The second color texture map of the object may be applied to the result of the application of the transparency texture map to the front-facing polygon at the second render viewpoint. The virtual 3-D space may be rendered at the second render viewpoint.
In some embodiments, the 3-D polygon mesh of the object may be divided into two or more portions. An order to the portions of the divided 3-D polygon mesh of the object may be determined from furthest portion to closest portion relative to the determined render viewpoint of the virtual 3-D space.
In some cases, the present system may determine whether a portion of the 3-D polygon mesh of the object is visible in relation to the 3-D model of the user based on the determined render viewpoint. The 3-D polygon mesh of the object may be rendered from the furthest portion to the closest portion based on a visible portion of the 3-D polygon mesh of the object.
A computing device configured to scale a three-dimensional (3-D) model is also described. The device may include a processor and memory in electronic communication with the processor. The memory may store instructions that are executable by the processor to select a first render viewpoint of a virtual 3-D space. The virtual 3-D space may include a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh of an object. Additionally, the instructions may be executable by the processor to designate a first polygon of the 3-D polygon mesh of the object as a backwards-facing polygon in relation to the first render viewpoint, designate a second polygon of the 3-D polygon mesh of the object as a front-facing polygon in relation to the first render viewpoint, and apply a shadow texture map of the object to the 3-D model of the user. Additionally, the instructions may be executable by the processor to apply a transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object, apply a first color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon, and render the virtual 3-D space at the first render viewpoint.
A computer-program product to scale a three-dimensional (3-D) model is also described. The computer-program product may include a non-transitory computer-readable medium that stores instructions. The instructions may be executable by a processor to select a first render viewpoint of a virtual 3-D space. The virtual 3-D space comprises a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh of an object. Additionally, the instructions may be executable by the processor to designate a first polygon of the 3-D polygon mesh of the object as a backwards-facing polygon in relation to the first render viewpoint, designate a second polygon of the 3-D polygon mesh of the object as a front-facing polygon in relation to the first render viewpoint, and apply a shadow texture map of the object to the 3-D model of the user. Additionally, the instructions may be executable by a processor to apply a transparency texture map of the object to the backwards-facing polygon of the 3-D polygon mesh of the object, apply a first color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygon, and apply the transparency texture map of the object to the front-facing polygon of the 3-D polygon mesh of the object. Additionally, the instructions may be executable by the processor to apply the first color texture map of the object to the result of the application of the transparency texture map to the front-facing polygon and render the virtual 3-D space at the first render viewpoint.
Features from any of the above-mentioned embodiments may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the instant disclosure.
While the embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the instant disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
The systems and methods described herein relate to virtually trying on products. Three-dimensional (3-D) computer graphics are graphics that use a 3-D representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2-D images. Such images may be stored for viewing later or displayed in real-time. A 3-D space may include a mathematical representation of a 3-D surface of an object. A 3-D model may be contained within a graphical data file. A 3-D model may represent a 3-D object using a collection of points in 3-D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3-D models may be created by hand, algorithmically (procedural modeling), or scanned such as with a laser scanner. A 3-D model may be displayed visually as a two-dimensional image through a process called 3-D rendering, or used in non-graphical computer simulations and calculations. In some cases, the 3-D model may be physically created using a 3-D printing device.
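As a simple illustration of such a collection of points and geometric entities, the following sketch (Python, with illustrative field names that are not taken from this disclosure) holds a 3-D model as a list of vertices, a list of polygons defined by vertex indices, and a dictionary of associated texture maps.

```python
# A minimal sketch of how a 3-D model might be held in memory: a collection of
# points (vertices) in 3-D space connected into polygons, plus references to
# texture maps. The field names here are illustrative, not a prescribed format.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vertex = Tuple[float, float, float]   # (x, y, z) in the virtual 3-D space
Polygon = Tuple[int, int, int]        # indices into the vertex list (a triangle)

@dataclass
class Model3D:
    vertices: List[Vertex]
    polygons: List[Polygon]
    texture_maps: Dict[str, str] = field(default_factory=dict)  # e.g. "color", "shadow", "transparency"

# A single triangle standing in for a scanned or hand-built model.
triangle = Model3D(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], polygons=[(0, 1, 2)])
print(len(triangle.polygons), "polygon(s)")
```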
A virtual 3-D space may include a 3-D model of a user's face and a polygon mesh of a pair of glasses. The 3-D polygon mesh of the pair of glasses may be placed on the user to create a 3-D virtual depiction of the user wearing a properly scaled pair of glasses. This 3-D scene may then be rendered into a two-dimensional (2-D) image to provide the user a virtual depiction of the user wearing a certain style of glasses. Although many of the examples used herein describe the virtual try-on of glasses, it is understood that the systems and methods described herein may be used to virtually try on a wide variety of products. Examples of such products may include glasses, clothing, footwear, jewelry, accessories, hair styles, etc.
In some configurations, a device 102 may include a rendering module 104, a camera 106, and a display 108. In one example, the device 102 may be coupled to a database 110. In one embodiment, the database 110 may be internal to the device 102. In another embodiment, the database 110 may be external to the device 102. In some configurations, the database 110 may include polygon model data 112 and texture map data 114.
In one embodiment, the rendering module 104 may enable a user to virtually try on a pair of glasses. In some configurations, the rendering module 104 may obtain multiple images of a user. For example, the rendering module 104 may capture multiple images of a user via the camera 106. For instance, the rendering module 104 may capture a video (e.g., a 5 second video) via the camera 106. In some configurations, the rendering module 104 may use polygon model data 112 and texture map data 114 to generate a 3-D representation of a user. For example, the polygon model data 112 may include vertex coordinates of a polygon model of the user's head. In some embodiments, the rendering module 104 may use color information from the pixels of multiple images of the user to create a texture map of the user. In some configurations, the rendering module 104 may generate and/or obtain a 3-D representation of a product. For example, the polygon model data 112 and texture map data 114 may include a 3-D model of a pair of glasses. In some embodiments, the polygon model data 112 may include a polygon model of an object. In some configurations, the texture map data 114 may define a visual aspect (e.g., pixel information) of the 3-D model of the object such as color, texture, shadow, or transparency.
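For clarity, the following sketch suggests one possible organization of the polygon model data 112 and texture map data 114 within the database 110. The keys and file names are assumptions made for illustration; the single shadow and transparency maps and the per-angle color maps follow the description given later in this disclosure.

```python
# An assumed, illustrative layout for polygon model data 112 and texture map
# data 114 in database 110; the keys and file names are hypothetical.
polygon_model_data = {
    "user_head": {"vertices": [], "polygons": []},          # filled from images of the user
    "glasses_style_01": {"vertices": [], "polygons": []},   # filled from a scan of the object
}
texture_map_data = {
    "user_head": {"color": "user_head_color.png"},
    "glasses_style_01": {
        "shadow": "glasses01_shadow.png",                   # single shadow texture map
        "transparency": "glasses01_transparency.png",       # single transparency texture map
        "color": {angle: f"glasses01_color_{angle}.png"     # one color map per scanning angle
                  for angle in range(-70, 71, 10)},
    },
}
print(len(texture_map_data["glasses_style_01"]["color"]))   # 15 scanning angles
```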
In some configurations, the rendering module 104 may generate a virtual try-on image by rendering a virtual 3-D space that contains a 3-D model of a user and a 3-D model of a product. In one example, the virtual try-on image may illustrate the user with a rendered version of the product. In some configurations, the rendering module 104 may output the virtual try-on image to the display 108 to be displayed to the user.
In some embodiments, the server 206 may include the rendering module 104 and may be coupled to the database 110. For example, the rendering module 104 may access the polygon model data 112 and the texture map data 114 in the database 110 via the server 206. The database 110 may be internal or external to the server 206.
In some configurations, the application 202 may capture multiple images via the camera 106. For example, the application 202 may use the camera 106 to capture a video. Upon capturing the multiple images, the application 202 may process the multiple images to generate result data. In some embodiments, the application 202 may transmit the multiple images to the server 206. Additionally or alternatively, the application 202 may transmit to the server 206 the result data or at least one file associated with the result data.
In some configurations, the rendering module 104 may process multiple images of a user to generate a 3-D model of the user. In some configurations, the rendering module 104 may process a scan of an object to create a 3-D polygon model of the object. The rendering module 104 may render a 3-D space that includes the 3-D model of the user and the 3-D polygon model of the object to render a virtual try-on 2-D image of the object and the user. The application 202 may output the rendered virtual try-on image to the display 208 to be displayed to the user.
In some embodiments, the 3-D model of the user's head 304 may include a polygon model of the user's head, which may be stored in the database 110 as polygon data 112, and at least one texture map, which may be stored in the database 110 as texture map data 114. In some embodiments, the 3-D model of the glasses 306 may include a polygon model of the glasses, which may be stored in the database 110 as polygon data 112, and at least one texture map, which may be stored in the database 110 as texture map data 114. In some embodiments, the polygon model of the glasses may include front-facing polygons 312 and backwards-facing polygons 314. For example, those polygons that face the first rendering viewing angle 308 may be designated as front-facing polygons 312 and those polygons that do not face the first rendering viewing angle 308 may be designated as backwards-facing polygons 314.
In some embodiments, the 3-D model of the glasses 306 may be divided into multiple parts, as depicted in the accompanying drawings. For example, the 3-D model of the glasses 306 may be divided into separate parts such as the arms and the lens-and-frame portions.
In some embodiments, the rendering module 104 may determine whether a portion of the 3-D model of the glasses 306 is visible in relation to a render of the 3-D space 302 at a particular render viewpoint. For example, a portion of the 3-D model of the glasses 306 that is occluded by the 3-D model of the user's head 304 at a given render viewpoint may be excluded from the render.
In some embodiments, the rendering module 104-a may include a scanning module 402, a polygon mesh module 404, a texture mapping module 406, a hidden surface detection module 408, a blurring module 410, and an edge detection module 412. In one embodiment, the rendering module 104-a may be configured to select a first render viewpoint of a virtual 3-D space. A render viewpoint may be the point of view of a virtual 3-D space, and may be referred to as the view reference point (VRP). In other words, the render viewpoint may be the view a user would see were a user to gaze at a depiction of the 3-D space or 3-D scene from a certain point of view. Thus, theoretically an infinite number of render viewpoints are possible that involve the orientation of the 3-D space relative to the position of a point of view of the 3-D space. The virtual 3-D space may include a 3-D model of at least a portion of a user generated from an image of the user. For example, the virtual 3-D space may include a 3-D model of a user's head that is generated from one or more images of the user's head. The virtual 3-D space may also include a 3-D polygon mesh of an object. For instance, the virtual 3-D space may include a 3-D polygon mesh of a pair of glasses. The 3-D polygon mesh may include a collection of vertices, edges and surfaces that define the shape of a polyhedral object in 3-D computer graphics and modeling. The surface of the 3-D polygon mesh may include triangles, quadrilaterals, or other convex polygons. In some configurations, the rendering module 104-a may be configured to render the virtual 3-D space at a selected render viewpoint such as the first render viewpoint. In some embodiments, the rendering module 104-a may be configured to place or position at least a portion of the 3-D polygon mesh of the object within a predetermined distance of at least one point on the 3-D model of the user. For instance, the 3-D polygon mesh of the object may include a 3-D polygon mesh of a pair of glasses. The 3-D polygon mesh of the glasses may be placed within a predetermined distance of a 3-D model of the user's head. For example, a 3-D polygon mesh of a pair of glasses may be placed within a predetermined distance of a 3-D model of a user's head so as to make the 3-D polygon mesh of the glasses appear to be worn on the head of a 3-D model of the user.
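As an aid to visualizing a render viewpoint, the following sketch (Python, with illustrative names) represents a viewpoint as a position in the virtual 3-D space together with the unit direction from that position toward the scene; any such position and orientation constitutes one of the theoretically infinite render viewpoints.

```python
# A small sketch of representing a render viewpoint (view reference point):
# a position in the virtual 3-D space plus a direction toward the scene.
# Function and variable names are illustrative only.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def view_direction(viewpoint, target):
    """Unit vector pointing from the render viewpoint toward the scene."""
    return normalize(tuple(t - p for t, p in zip(target, viewpoint)))

# First render viewpoint: off to the left of the model, looking at the origin.
first_viewpoint = (-2.0, 0.0, 5.0)
print(view_direction(first_viewpoint, (0.0, 0.0, 0.0)))
```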
In some embodiments, the rendering module 104-a may be configured to select a second render viewpoint of the virtual 3-D space. For example, the rendering module 104-a may select a first render viewpoint that depicts a side-view, or profile of a 3-D model of a user's head wearing a 3-D model of a pair of glasses. The rendering module 104-a may select a second render viewpoint that depicts a frontal, head-on view of the 3-D model of the user's head wearing a 3-D model of the pair of glasses. In some configurations, the rendering module 104-a may be configured to render the virtual 3-D space at the first and second render viewpoints. Thus, the rendering module 104-a may render a side-view of the 3-D model of the user wearing the 3-D model of the pair of glasses (i.e., the first render viewpoint), and may render a head-on view where the 3-D depiction of the user's face is directly facing in the direction of the rendering of the 3-D space.
In some embodiments, the scanning module 402 may be configured to detect a shadow value of an object from a scan of the object. A shadow value of an object may include information about a shadow cast by the object captured from the scan of the object. For example, a pair of glasses may be scanned by a laser. From this laser scan the scanning module 402 may detect one or more values associated with a shadow cast by the object. For example, the scanning module 402 may detect a level of shadow cast by certain parts of a pair of glasses. The scanning module 402 may determine that the degree of shadow cast by an opaque segment of the pair of glasses is greater than the degree of shadow cast by the lens. Furthermore, the scanning module 402 may determine that the area directly behind the center of an arm of the glasses, running the length of the arm, casts a higher degree of shadow than the area behind the edges of the arm, where the shadow may gradually dissipate.
In some configurations, the scanning module 402 may be configured to detect a transparency value of an object from a scan of the object. A transparency value of an object may include information about the transparent nature of a portion of the object captured from the scan of the object. For example, the scanning module 402 may determine that a lens in a pair of glasses has a transparency value of 50%, meaning that 50% of the light that hits the surface of the lens is transferred through the lens and the other 50% of the light is reflected off the surface of the lens. The scanning module 402 may detect the 50% transparency as one transparency value associated with the scan of the glasses. Additionally, the scanning module 402 may determine that a portion of the frame of the pair of glasses has a transparency value of 0%, meaning that 100% of the light that hits the surface of the frame is reflected. The scanning module 402 may detect the 0% transparency as another transparency value associated with the scan of the glasses.
In one embodiment, the scanning module 402 may be configured to select a first scanning angle of a scan of an object. The first scanning angle may correspond to the first render viewpoint. Thus, scanning a pair of glasses at 30 degrees left of center of a pair of glasses may correspond to an image of a user taken at 30 degrees left of a center or head-on view of the user. In some embodiments, the scanning module 402 may be configured to detect a first color value from a scan of an object at the first scanning angle. A color value of an object may include information about a visual aspect of the object captured from the scan of the object. For example, the scanning module 402 may scan a pair of glasses with shiny red frames. Thus, the scanning module 402 may detect the red color of the frames as one color value associated with the scan of the glasses. Additionally or alternatively, the scanning module 402 may detect other visual aspects associated with the scanned frames such as the reflectivity of the frames and save the reflectivity as a value associated with the surface of the frames. In some configurations, the scanning module 402 may be configured to select a second scanning angle of a scan of an object. The second scanning angle may correspond to the second render viewpoint. The scanning module 402 may be configured to detect a second color value of an object at the second scanning angle. Thus, scanning a pair of glasses at 40 degrees left of the center or head-on view of a pair of glasses may correspond to a second image of a user taken at 40 degrees left of the center or head-on view of the user. Similar to the scan at the first angle, the scanning module 402 may detect visual aspects associated with the frames scanned at the second scanning angle such as the color and reflectivity of the frames and save the color and reflectivity as values associated with the surface of the frames.
In some embodiments, the hidden surface detection module 408 may be configured to determine whether a portion of the 3-D polygon mesh of the object is visible in relation to the 3-D model of the user based on the determined render viewpoint. The rendering of the 3-D space may include rendering the scene of the virtual 3-D space based on a visible portion of the 3-D polygon mesh of the object. For example, when the render viewpoint depicts the left side of the 3-D model of the user's head, portions of the 3-D polygon mesh of the object that are positioned to the right side of the 3-D model of the user's head would not be visible in the render. In other words, in some embodiments, the texture mapping module 406 does not apply one or more elements of the texture maps (i.e., shadow texture map, transparency texture map, and/or color texture map) to those portions of the 3-D polygon mesh of the object that would not be visible in the render due to the positioning of the 3-D model of the user relative to the selected render viewpoint. Thus, in some embodiments, the rendering module 104-a renders those portions of the 3-D polygon mesh of the object that are visible based on the determined render viewpoint.
In some embodiments, the polygon mesh module 404 may be configured to designate at least one polygon of the 3-D polygon mesh of the object as a backwards-facing polygon in relation to a render viewpoint. In some configurations, the polygon mesh module 404 may be configured to designate at least one polygon of the 3-D polygon mesh of the object as a front-facing polygon in relation to a render viewpoint. As explained above, the 3-D polygon mesh of the object may include a collection of vertices, edges and surfaces that define the shape of a polyhedral version of the object in a virtual 3-D space. Thus, the surface of a 3-D polygon mesh of a pair of glasses may include triangles, quadrilaterals, or other convex polygons. As with all 3-D objects, the surface of the 3-D polygon mesh of the pair of glasses may include polygons on six different surfaces. For example, the left arm of a pair of glasses may include top and bottom surfaces, left and right surfaces, and front and back surfaces in relation to a given render viewpoint. With a render viewpoint positioned to view the left side of a 3-D model of a user's head, the polygons of the outside surface of the left arm of a 3-D model of a pair of glasses worn on the 3-D model of the user's head would face the render viewpoint. The inside surface, the polygons facing the left side of the 3-D model of the user's face, would face away from the render viewpoint. Thus, with a render viewpoint positioned to view the left side of a 3-D model of a user's head, the polygon mesh module 404 may designate the polygons of the outside surface of the left arm of a 3-D model of a pair of glasses worn on the 3-D model of the user's head as front-facing polygons. Similarly, the polygon mesh module 404 may designate the inside polygons facing the left side of the 3-D model of the user's face as backwards-facing polygons.
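The disclosure does not prescribe a particular test for making this designation; one common convention, shown below as a minimal Python sketch with illustrative names, is to compare a polygon's surface normal (from a cross product of its edges) against the direction toward the render viewpoint, treating a non-negative dot product as front-facing.

```python
# A sketch (assumed convention) of designating polygons as front-facing or
# backwards-facing for a given render viewpoint: compute the polygon normal
# with a cross product and compare it against the direction toward the
# viewpoint. A non-negative dot product is treated here as front-facing.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(v0, v1, v2, viewpoint):
    """True if the triangle (v0, v1, v2) faces the render viewpoint."""
    edge1 = tuple(b - a for a, b in zip(v0, v1))
    edge2 = tuple(b - a for a, b in zip(v0, v2))
    normal = cross(edge1, edge2)
    to_viewpoint = tuple(p - a for a, p in zip(v0, viewpoint))
    return dot(normal, to_viewpoint) >= 0

print(is_front_facing((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 5)))   # True
print(is_front_facing((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -5)))  # False
```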
In some embodiments, the rendering module 104-a may be configured to determine an order to multiple portions of a divided 3-D polygon mesh of an object from the farthest portion to the closest portion relative to a determined render viewpoint of the virtual 3-D space. For example, with a render viewpoint of a left profile of a 3-D model of a user's head wearing a 3-D model of a pair of glasses, the rendering module 104-a may determine the polygon mesh of the left arm of the pair of glasses to be the closest portion of the 3-D polygon mesh of the glasses, followed by the left lens and frame and the right lens and frame. Thus, the rendering module 104-a may determine the polygon mesh of the right arm of the pair of glasses to be the farthest portion of the 3-D polygon mesh of the glasses. Upon determining the order of the parts of the 3-D polygon mesh of an object, in some embodiments, the rendering module 104-a may be configured to render the 3-D polygon mesh of the object from the farthest portion to the closest portion.
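A minimal sketch of this far-to-near ordering is shown below (Python, with assumed part names and a single representative center point per part standing in for the full polygon mesh).

```python
# A minimal sketch of ordering the parts of a divided 3-D polygon mesh from
# the farthest part to the closest part relative to a render viewpoint, so
# that the farthest geometry can be rendered first. The part names and the
# use of one representative center point per part are assumptions.
import math

def order_far_to_near(parts, viewpoint):
    """parts: mapping of part name -> representative center point."""
    return sorted(parts, key=lambda name: math.dist(parts[name], viewpoint), reverse=True)

glasses_parts = {
    "left_arm": (-1.5, 0.0, 0.0),
    "left_lens_and_frame": (-0.5, 0.0, 0.8),
    "right_lens_and_frame": (0.5, 0.0, 0.8),
    "right_arm": (1.5, 0.0, 0.0),
}
left_profile_viewpoint = (-5.0, 0.0, 0.0)
print(order_far_to_near(glasses_parts, left_profile_viewpoint))
# ['right_arm', 'right_lens_and_frame', 'left_lens_and_frame', 'left_arm']
```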
In some embodiments, the texture mapping module 406 may be configured to create a first color texture map from a detected first color value from a scan of the object at a first scanning angle. In some embodiments, the texture mapping module 406 may be configured to create a second color texture map from the detected second color value from a scan of the object at a second scanning angle. For example, the texture mapping module 406 may convert a color value detected from a scan of an object by the scanning module 402 into a 2-D image and store the color texture map 2-D image as texture map data in the database 110. In some embodiments, the texture map data 114 of the polygon mesh of the object may contain a color texture map for every angle at which the object is scanned. For example, with the user holding his or her head vertically, if the user's head is scanned in a pan around the user's head from −70 degrees to the side of the head-on view of the user's face to +70 degrees to the side of the head-on view of the user's face in 10 degree intervals, then the scan would include 15 reference viewpoints of the user's head, including a straight, head-on view of the user's face at 0 degrees. The scanning module 402 may then scan a pair of glasses from −70 degrees to +70 degrees to create 15 corresponding reference viewpoints of the glasses. Thus, in some embodiments, the texture mapping module 406 may create 15 color texture maps, one for each of the 15 corresponding reference viewpoints of the glasses. However, in some embodiments, the texture mapping module 406 may create a single shadow texture map and a single transparency map for the 15 corresponding reference viewpoints of the glasses. In some embodiments, the texture mapping module 406 may be configured to map a 2-D coordinate of the first color texture map to a point on the 3-D model of the user and a point on a 3-D polygon mesh of the object, which may be the same points associated with the application of the transparency texture map. Thus, in some configurations the texture mapping module 406 may be configured to multiply the result of multiplying the transparency texture map and the point on the 3-D model of the user and the 3-D polygon mesh of the object by the first color value. In other words, the texture mapping module 406 may first apply the transparency of the lens on a 3-D polygon mesh of a pair of glasses (i.e., merging the visible portion of the user with the transparent portion of the glasses) and then apply the color of the lens to that result.
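A short sketch of selecting among such per-angle color texture maps is shown below (Python); the range from −70 to +70 degrees in 10-degree steps follows the example above, while the file-naming scheme and the nearest-angle selection rule are assumptions made for illustration.

```python
# A sketch of choosing which of the per-angle color texture maps to use for a
# given render viewpoint: pick the scanned reference viewpoint whose angle is
# closest to the render angle. File names are hypothetical.
scan_angles = list(range(-70, 71, 10))          # 15 reference viewpoints
color_texture_maps = {angle: f"glasses_color_{angle:+d}.png" for angle in scan_angles}

def color_map_for_viewpoint(render_angle_degrees):
    nearest = min(scan_angles, key=lambda a: abs(a - render_angle_degrees))
    return color_texture_maps[nearest]

print(len(scan_angles))                  # 15
print(color_map_for_viewpoint(36.0))     # glasses_color_+40.png
```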
In some embodiments, the texture mapping module 406 may be configured to apply a shadow texture map of an object to a 3-D model of a user. As explained above, the shadow texture map may be created from the shadow values detected from a scan of the object, and applying it may include mapping a 2-D coordinate of the shadow texture map to a point on the 3-D model of the user and multiplying the value of that point by the shadow value.
In some configurations, the texture mapping module 406 may be configured to apply a transparency texture map of the object to backwards-facing polygons of the 3-D polygon mesh of the object. Applying the transparency values of backwards-facing triangles before front-facing triangles allows the portions of the 3-D polygon mesh of the object and the 3-D model of the user that would normally be viewable through a transparent section of the mesh (i.e., the lenses) to be rendered before the portions of the mesh that sit in front of them. For example, with a render viewpoint from the left of the user, a portion of the back of the frames of the 3-D polygon mesh of a pair of glasses may be visible through the lens. Rendering that portion of the back of the frames before the front portion allows that back portion to be visible through the lens following a rendering of the 3-D space.
In some embodiments, the texture mapping module 406 may be configured to apply a first color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygons. In some embodiments, the texture mapping module 406 may be configured to apply a transparency texture map of the object to front-facing polygons of the 3-D polygon mesh of an object. The texture mapping module 406 may be configured to apply a first color texture map of the object to the result of the application of the transparency texture map to the front-facing polygons. The rendering module 104-a may then render the 3-D space at the first render viewing angle. For example, the backwards-facing polygons of the lens may be applied to combine the value of a pixel of the 3-D model of a user with the value of the lens directly in front of that pixel of the 3-D model of the user. Combining the pixel with the transparency value renders the lens as being transparent so that the portion of the user behind the lens is seen in the render. Having applied the transparency value to the 3-D model of the user, the texture mapping module 406 may apply the color texture map to the same point. In other words, if the lens is a brown lens, the color texture map may include color information of the brown lens. Thus, the texture mapping module 406 may apply the brown color to the same point on the 3-D model of the user where the transparency texture map was applied. The process may then be repeated for the same point on the 3-D model of the user with the front-facing polygons of the 3-D polygon mesh of the object, resulting in a rendered brown transparent lens through which the 3-D model of the user's eye may be seen once rendering completes.
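The following sketch (Python, single pixel, single channel) illustrates the back-then-front pass order for a point behind a lens, using the multiply-through combination of transparency and color described in this disclosure; actual blending equations may differ.

```python
# A minimal sketch of the two-pass order described above for a single pixel
# behind a lens: the backwards-facing lens surface is blended over the value
# of the 3-D model of the user first, and the front-facing lens surface is
# blended over that result. The multiply-through blend and the numbers below
# are illustrative assumptions.
def blend_surface(underlying_value, transparency_value, color_value):
    return underlying_value * transparency_value * color_value

eye_pixel = 0.8                  # value of the user's eye behind the lens
lens_transparency = 0.5          # 50% of light passes through the lens
lens_color = 0.6                 # brown-ish channel value from the color texture map

after_back_pass = blend_surface(eye_pixel, lens_transparency, lens_color)
after_front_pass = blend_surface(after_back_pass, lens_transparency, lens_color)
print(after_back_pass, after_front_pass)
```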
In some embodiments, the texture mapping module 406 may be configured to apply the shadow texture map of the object to a 3-D model of the user at the second render viewpoint. The texture mapping module 406 may be configured to apply the transparency texture map of the object to backwards-facing polygons of the 3-D polygon mesh of the object at the second render viewpoint and then apply the second color texture map of the object to the result of the application of the transparency texture map to the backwards-facing polygons at the second render viewpoint. In some embodiments, the texture mapping module 406 may be configured to apply the transparency texture map of an object to front-facing polygons of the 3-D polygon mesh of the object at the second render viewpoint and then apply the second color texture map of the object to the result of the application of the transparency texture map to the front-facing polygons at the second render viewing angle. The rendering module 104-a may then render the 3-D space at the second render viewing angle.
In some embodiments, the blurring module 410 may be configured to determine a first level and a second level of blur accuracy. For example, applying a blurring effect to a portion of the rendered 3-D space with a relatively high accuracy may require a correspondingly high amount of processing time. Attempting to apply the blurring effect with relatively high accuracy while the render viewpoint of the 3-D space is modified may introduce a lag in the rendering of the 3-D space. On the other hand, applying a blurring effect to a portion of the rendered 3-D space with a relatively low accuracy may require a correspondingly low amount of processing time, permitting a real-time rendering of the 3-D space with a blurring effect without introducing lag. In some configurations the blurring module 410 may be configured to determine a first level and a second level of blur intensity. In other words, in some embodiments, a relatively low level of blur may be applied to the entire rendered depiction of the object, whereas a relatively high level of blur may be applied to the edges of the rendered depiction of the object. For instance, the blurring module 410 may apply a relatively high level of a blurring effect to the edges of a rendered pair of glasses and a relatively low level of a blurring effect to the glasses overall. Thus, the blurring module 410 may be configured to apply the first level of blur accuracy at the first level of blur intensity to the rendered depiction of the object. In some embodiments, the edge detection module 412 may be configured to detect an edge of the rendered depiction of the object. The blurring module 410 may be configured to apply the first level of blur accuracy at the second level of blur intensity to the rendered depiction of the object. In some embodiments, upon receiving a user input to adjust the render viewpoint, the blurring module 410 may be configured to apply the second level of blur accuracy to the rendered depiction of the object.
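By way of illustration, the following sketch (Python, with hypothetical names) captures the described policy: a low-intensity blur over the rendered depiction of the object as a whole, a higher-intensity blur along detected edges, and a lower-accuracy blur while the render viewpoint is being adjusted to avoid lag.

```python
# An illustrative sketch of the blur policy described above. The selection
# logic and return values are assumptions standing in for the blurring and
# edge detection modules.
def choose_blur(region, viewpoint_is_changing):
    accuracy = "low" if viewpoint_is_changing else "high"   # avoid lag while rotating
    intensity = "high" if region == "edges" else "low"      # stronger blur on edges
    return accuracy, intensity

print(choose_blur("edges", viewpoint_is_changing=False))    # ('high', 'high')
print(choose_blur("overall", viewpoint_is_changing=True))   # ('low', 'low')
```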
In some configurations, the systems and methods described herein may be used to facilitate rendering a virtual try-on shopping experience. For example, a user may be presented with a pair of glasses (e.g., for the first time) via a rendered virtual try-on image that illustrates the pair of glasses on the user's face, thus, enabling a user to shop for glasses and to see how the user looks in the glasses (via the virtual try-on) simultaneously.
At block 702, a render viewpoint of a virtual 3-D space may be selected. The virtual 3-D space may include a 3-D model of at least a portion of a user generated from an image of the user and a 3-D polygon mesh of an object. At block 704, a first polygon of the 3-D polygon mesh of the object may be designated as a backwards-facing polygon in relation to the render viewing angle. At block 706, a second polygon of the 3-D polygon mesh of the object may be designated as a front-facing polygon in relation to the render viewing angle.
At block 708, a shadow texture map of the object may be applied to the 3-D model of the user at the render viewing angle. At block 710, a transparency texture map of the object may be applied to the backwards-facing polygon of the 3-D polygon mesh of the object at the render viewing angle. At block 712, a first color texture map of the object may be applied to the result of the application of the transparency texture map to the backwards-facing polygon.
At block 714, the transparency texture map of the object may be applied to the front-facing polygon of the 3-D polygon mesh of the object at the render viewing angle. At block 716, the first color texture map of the object may be applied to the result of the application of the transparency texture map to the front-facing polygon. At block 718, the virtual 3-D space may be rendered at the render viewing angle. At block 720, a determination may be made whether there is another viewing angle to render. If it is determined that there is another viewing angle to render, then the method 700 returns to block 702.
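For reference, the following sketch (Python, with trivial stand-in helpers that merely record the order of operations) traces the block order of method 700 for each render viewing angle; it illustrates sequencing only and is not an implementation of the rendering steps.

```python
# A high-level, runnable sketch of the per-viewpoint flow of method 700. The
# helper functions are stand-ins (assumptions) that only record the order of
# operations; they are not the actual rendering steps.
def designate_polygons(mesh, viewpoint):      # blocks 704, 706
    return ("backwards-facing polygons", "front-facing polygons")

def apply_map(scene, target, texture_map):    # stands in for blocks 708-716
    return scene + [f"{texture_map} -> {target}"]

def render(scene, viewpoint):                 # block 718
    return f"render at {viewpoint}: " + "; ".join(scene)

def render_try_on(viewpoints, object_mesh):
    images = []
    for viewpoint in viewpoints:                                       # block 702, loop via 720
        back, front = designate_polygons(object_mesh, viewpoint)
        scene = apply_map([], "3-D model of the user", "shadow map")   # block 708
        scene = apply_map(scene, back, "transparency map")             # block 710
        scene = apply_map(scene, back, "color map")                    # block 712
        scene = apply_map(scene, front, "transparency map")            # block 714
        scene = apply_map(scene, front, "color map")                   # block 716
        images.append(render(scene, viewpoint))                        # block 718
    return images

for image in render_try_on(["side view", "head-on view"], "glasses mesh"):
    print(image)
```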
At block 802, a shadow value of an object may be detected from a scan of the object. At block 804, a shadow texture map may be created from the detected shadow value. At block 806, a 2-D coordinate of the shadow texture map may be mapped to a point on the 3-D model of the user. At block 808, a value of the point on the 3-D model of the user may be multiplied by the shadow value.
At block 902, a transparency value of an object may be detected from a scan of the object. At block 904, a transparency texture map may be created from the detected transparency value. At block 906, a 2-D coordinate of the transparency texture map may be mapped to a point on the 3-D model of the user. At block 908, a value of the point on the 3-D model of the user may be multiplied by the transparency value.
At block 1002, a scanning angle of a scan of an object may be selected. The scanning angle may correspond to a render viewing angle of a 3-D polygon mesh of the object. At block 1004, a color value of an object may be detected from a scan of the object. At block 1006, a color texture map may be created from the detected color value. At block 1008, a 2-D coordinate of the color texture map may be mapped to a point on the 3-D model of the user. At block 1010, a value of the point on the 3-D model of the user may be multiplied by the color value. At block 1012, a determination may be made whether there is another scanning angle to process. If it is determined that there is another scanning angle to process, then the method 1000 returns to block 1002.
At block 1102, the 3-D polygon mesh of the object may be divided into multiple parts. At block 1104, an order may be determined to the multiple parts of the divided 3-D polygon mesh of the object from furthest part to closest part relative to the determined render viewing angle of the virtual 3-D space.
At block 1106, it is determined which portions of the 3-D polygon mesh of the object are visible in relation to the 3-D model of the user based on the determined render viewing angle. At block 1108, the 3-D polygon mesh of the object is rendered from the furthest part to the closest part based on the determined visible portions of the 3-D polygon mesh of the object.
At block 1202, a first level and a second level of blur accuracy may be determined. At block 1204, a first level and a second level of blur intensity may be determined. At block 1206, the first level of blur accuracy may be applied at the first level of blur intensity to the rendered depiction of the object.
At block 1208, an edge of the rendered depiction of the object may be detected. At block 1210, the first level of blur accuracy may be applied at the second level of blur intensity to the detected edges of the rendered depiction of the object. At block 1212, upon receiving a user input to adjust the render viewing angle, the second level of blur accuracy is applied to the rendered depiction of the object.
Bus 1312 allows data communication between central processor 1314 and system memory 1316, which may include read-only memory (ROM) or flash memory (neither shown), and random access memory (RAM) (not shown), as previously noted. The RAM is generally the main memory into which the operating system and application programs are loaded. The ROM or flash memory can contain, among other code, the Basic Input-Output system (BIOS) which controls basic hardware operation such as the interaction with peripheral components or devices. For example, the rendering module 104-c to implement the present systems and methods may be stored within the system memory 1316. Applications (e.g., application 202) resident with computer system 1310 are generally stored on and accessed via a non-transitory computer readable medium, such as a hard disk drive (e.g., fixed disk 1344) or other storage medium. Additionally, applications can be in the form of electronic signals modulated in accordance with the application and data communication technology when accessed via interface 1348.
Storage interface 1334, as with the other storage interfaces of computer system 1310, can connect to a standard computer readable medium for storage and/or retrieval of information, such as a fixed disk drive 1344. Fixed disk drive 1344 may be a part of computer system 1310 or may be separate and accessed through other interface systems. Network interface 1348 may provide a direct connection to a remote server via a direct network link to the Internet via a POP (point of presence). Network interface 1348 may provide such connection using wireless techniques, including digital cellular telephone connection, Cellular Digital Packet Data (CDPD) connection, digital satellite data connection, or the like.
Many other devices or subsystems (not shown) may be connected in a similar manner (e.g., document scanners, digital cameras, and so on). Conversely, all of the devices shown need not be present to practice the present systems and methods.
Moreover, regarding the signals described herein, those skilled in the art will recognize that a signal can be directly transmitted from a first block to a second block, or a signal can be modified (e.g., amplified, attenuated, delayed, latched, buffered, inverted, filtered, or otherwise modified) between the blocks. Although the signals of the above described embodiment are characterized as transmitted from one block to the next, other embodiments of the present systems and methods may include modified signals in place of such directly transmitted signals as long as the informational and/or functional aspect of the signal is transmitted between blocks. To some extent, a signal input at a second block can be conceptualized as a second signal derived from a first signal output from a first block due to physical limitations of the circuitry involved (e.g., there will inevitably be some attenuation and delay). Therefore, as used herein, a second signal derived from a first signal includes the first signal or any modifications to the first signal, whether due to circuit limitations or due to passage through other circuit elements which do not change the informational and/or final functional aspect of the first signal.
While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered exemplary in nature since many other architectures can be implemented to achieve the same functionality.
The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
Furthermore, while various embodiments have been described and/or illustrated herein in the context of fully functional computing systems, one or more of these exemplary embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer-readable media used to actually carry out the distribution. The embodiments disclosed herein may also be implemented using software modules that perform certain tasks. These software modules may include script, batch, or other executable files that may be stored on a computer-readable storage medium or in a computing system. In some embodiments, these software modules may configure a computing system to perform one or more of the exemplary embodiments disclosed herein.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the present systems and methods and their practical applications, to thereby enable others skilled in the art to best utilize the present systems and methods and various embodiments with various modifications as may be suited to the particular use contemplated.
Unless otherwise noted, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” In addition, for ease of use, the words “including” and “having,” as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.” In addition, the term “based on” as used in the specification and the claims is to be construed as meaning “based at least upon.”
This application claims priority to U.S. Provisional Application No. 61/650,983, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on May 23, 2012; and U.S. Provisional Application No. 61/735,951, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on Dec. 11, 2012, each of which is incorporated herein in its entirety by this reference.
20030110099 | Trajkovic et al. | Jun 2003 | A1 |
20030112240 | Cerny | Jun 2003 | A1 |
20040004633 | Perry et al. | Jan 2004 | A1 |
20040090438 | Alliez et al. | May 2004 | A1 |
20040217956 | Besl et al. | Nov 2004 | A1 |
20040223631 | Waupotitsch et al. | Nov 2004 | A1 |
20040257364 | Basler | Dec 2004 | A1 |
20050053275 | Stokes | Mar 2005 | A1 |
20050063582 | Park et al. | Mar 2005 | A1 |
20050111705 | Waupotitsch et al. | May 2005 | A1 |
20050128211 | Berger et al. | Jun 2005 | A1 |
20050162419 | Kim et al. | Jul 2005 | A1 |
20050190264 | Neal | Sep 2005 | A1 |
20050208457 | Fink et al. | Sep 2005 | A1 |
20050226509 | Maurer et al. | Oct 2005 | A1 |
20060012748 | Periasamy et al. | Jan 2006 | A1 |
20060017887 | Jacobson et al. | Jan 2006 | A1 |
20060067573 | Parr et al. | Mar 2006 | A1 |
20060127852 | Wen | Jun 2006 | A1 |
20060161474 | Diamond et al. | Jul 2006 | A1 |
20060212150 | Sims, Jr. | Sep 2006 | A1 |
20060216680 | Buckwalter et al. | Sep 2006 | A1 |
20070013873 | Jacobson et al. | Jan 2007 | A9 |
20070104360 | Huang et al. | May 2007 | A1 |
20070127848 | Kim et al. | Jun 2007 | A1 |
20070160306 | Ahn et al. | Jul 2007 | A1 |
20070183679 | Moroto et al. | Aug 2007 | A1 |
20070233311 | Okada et al. | Oct 2007 | A1 |
20070262988 | Christensen | Nov 2007 | A1 |
20080084414 | Rosel et al. | Apr 2008 | A1 |
20080112610 | Israelsen et al. | May 2008 | A1 |
20080136814 | Chu et al. | Jun 2008 | A1 |
20080152200 | Medioni et al. | Jun 2008 | A1 |
20080162695 | Muhn et al. | Jul 2008 | A1 |
20080163344 | Yang | Jul 2008 | A1 |
20080170077 | Sullivan et al. | Jul 2008 | A1 |
20080201641 | Xie | Aug 2008 | A1 |
20080219589 | Jung et al. | Sep 2008 | A1 |
20080240588 | Tsoupko-Sitnikov et al. | Oct 2008 | A1 |
20080246759 | Summers | Oct 2008 | A1 |
20080271078 | Gossweiler et al. | Oct 2008 | A1 |
20080278437 | Barrus et al. | Nov 2008 | A1 |
20080278633 | Tsoupko-Sitnikov et al. | Nov 2008 | A1 |
20080279478 | Tsoupko-Sitnikov et al. | Nov 2008 | A1 |
20080280247 | Sachdeva et al. | Nov 2008 | A1 |
20080294393 | Laake et al. | Nov 2008 | A1 |
20080297503 | Dickinson et al. | Dec 2008 | A1 |
20080310757 | Wolberg et al. | Dec 2008 | A1 |
20090010507 | Geng | Jan 2009 | A1 |
20090040216 | Ishiyama | Feb 2009 | A1 |
20090123037 | Ishida | May 2009 | A1 |
20090129402 | Moller et al. | May 2009 | A1 |
20090132371 | Strietzel et al. | May 2009 | A1 |
20090135176 | Snoddy et al. | May 2009 | A1 |
20090135177 | Strietzel et al. | May 2009 | A1 |
20090144173 | Mo et al. | Jun 2009 | A1 |
20090153552 | Fidaleo et al. | Jun 2009 | A1 |
20090153553 | Kim et al. | Jun 2009 | A1 |
20090153569 | Park et al. | Jun 2009 | A1 |
20090154794 | Kim et al. | Jun 2009 | A1 |
20090184960 | Carr et al. | Jul 2009 | A1 |
20090185763 | Park et al. | Jul 2009 | A1 |
20090219281 | Maillot | Sep 2009 | A1 |
20090279784 | Arcas et al. | Nov 2009 | A1 |
20090296984 | Nijim et al. | Dec 2009 | A1 |
20090304270 | Bhagavathy et al. | Dec 2009 | A1 |
20090310861 | Lang et al. | Dec 2009 | A1 |
20090316945 | Akansu | Dec 2009 | A1 |
20090316966 | Marshall et al. | Dec 2009 | A1 |
20090324030 | Frinking et al. | Dec 2009 | A1 |
20090324121 | Bhagavathy et al. | Dec 2009 | A1 |
20100030578 | Siddique et al. | Feb 2010 | A1 |
20100134487 | Lai et al. | Jun 2010 | A1 |
20100138025 | Morton et al. | Jun 2010 | A1 |
20100141893 | Altheimer et al. | Jun 2010 | A1 |
20100145489 | Esser et al. | Jun 2010 | A1 |
20100166978 | Nieminen | Jul 2010 | A1 |
20100179789 | Sachdeva et al. | Jul 2010 | A1 |
20100191504 | Esser et al. | Jul 2010 | A1 |
20100198817 | Esser et al. | Aug 2010 | A1 |
20100209005 | Rudin et al. | Aug 2010 | A1 |
20100277476 | Johanson et al. | Nov 2010 | A1 |
20100293192 | Suy et al. | Nov 2010 | A1 |
20100293251 | Suy et al. | Nov 2010 | A1 |
20100302275 | Saldanha et al. | Dec 2010 | A1 |
20100329568 | Gamliel et al. | Dec 2010 | A1 |
20110001791 | Kirshenboim et al. | Jan 2011 | A1 |
20110025827 | Shpunt et al. | Feb 2011 | A1 |
20110026606 | Bhagavathy et al. | Feb 2011 | A1 |
20110026607 | Bhagavathy et al. | Feb 2011 | A1 |
20110029561 | Slaney et al. | Feb 2011 | A1 |
20110040539 | Szymczyk et al. | Feb 2011 | A1 |
20110043540 | Fancher et al. | Feb 2011 | A1 |
20110043610 | Ren et al. | Feb 2011 | A1 |
20110071804 | Xie | Mar 2011 | A1 |
20110075916 | Knothe et al. | Mar 2011 | A1 |
20110096832 | Zhang et al. | Apr 2011 | A1 |
20110102553 | Corcoran et al. | May 2011 | A1 |
20110115786 | Mochizuki | May 2011 | A1 |
20110148858 | Ni et al. | Jun 2011 | A1 |
20110157229 | Ni et al. | Jun 2011 | A1 |
20110158394 | Strietzel | Jun 2011 | A1 |
20110166834 | Clara | Jul 2011 | A1 |
20110188780 | Wang et al. | Aug 2011 | A1 |
20110208493 | Altheimer et al. | Aug 2011 | A1 |
20110211816 | Goedeken et al. | Sep 2011 | A1 |
20110227923 | Mariani et al. | Sep 2011 | A1 |
20110227934 | Sharp | Sep 2011 | A1 |
20110229659 | Reynolds | Sep 2011 | A1 |
20110229660 | Reynolds | Sep 2011 | A1 |
20110234581 | Eikelis et al. | Sep 2011 | A1 |
20110234591 | Mishra et al. | Sep 2011 | A1 |
20110249136 | Levy | Oct 2011 | A1 |
20110262717 | Broen et al. | Oct 2011 | A1 |
20110267578 | Wilson | Nov 2011 | A1 |
20110279634 | Periyannan et al. | Nov 2011 | A1 |
20110292034 | Corazza et al. | Dec 2011 | A1 |
20110293247 | Bhagavathy et al. | Dec 2011 | A1 |
20110304912 | Broen et al. | Dec 2011 | A1 |
20110306417 | Sheblak et al. | Dec 2011 | A1 |
20120002161 | Altheimer et al. | Jan 2012 | A1 |
20120008090 | Altheimer et al. | Jan 2012 | A1 |
20120013608 | Ahn et al. | Jan 2012 | A1 |
20120016645 | Altheimer et al. | Jan 2012 | A1 |
20120021835 | Keller et al. | Jan 2012 | A1 |
20120038665 | Strietzel | Feb 2012 | A1 |
20120075296 | Wegbreit et al. | Mar 2012 | A1 |
20120079377 | Goosens | Mar 2012 | A1 |
20120082432 | Ackley et al. | Apr 2012 | A1 |
20120114184 | Barcons-Palau et al. | May 2012 | A1 |
20120114251 | Solem et al. | May 2012 | A1 |
20120121174 | Bhagavathy et al. | May 2012 | A1 |
20120130524 | Clara et al. | May 2012 | A1 |
20120133640 | Chin et al. | May 2012 | A1 |
20120133850 | Broen et al. | May 2012 | A1 |
20120147324 | Marin et al. | Jun 2012 | A1 |
20120158369 | Bachrach et al. | Jun 2012 | A1 |
20120162218 | Kim et al. | Jun 2012 | A1 |
20120166431 | Brewington et al. | Jun 2012 | A1 |
20120170821 | Zug et al. | Jul 2012 | A1 |
20120176380 | Wang et al. | Jul 2012 | A1 |
20120177283 | Wang et al. | Jul 2012 | A1 |
20120183202 | Wei et al. | Jul 2012 | A1 |
20120183204 | Aarts et al. | Jul 2012 | A1 |
20120183238 | Savvides et al. | Jul 2012 | A1 |
20120192401 | Pavlovskaia et al. | Aug 2012 | A1 |
20120206610 | Wang et al. | Aug 2012 | A1 |
20120219195 | Wu et al. | Aug 2012 | A1 |
20120224629 | Bhagavathy et al. | Sep 2012 | A1 |
20120229758 | Marin et al. | Sep 2012 | A1 |
20120256906 | Ross et al. | Oct 2012 | A1 |
20120263437 | Barcons-Palau et al. | Oct 2012 | A1 |
20120288015 | Zhang et al. | Nov 2012 | A1 |
20120294369 | Bhagavathy et al. | Nov 2012 | A1 |
20120294530 | Bhaskaranand | Nov 2012 | A1 |
20120299914 | Kilpatrick et al. | Nov 2012 | A1 |
20120306874 | Nguyen et al. | Dec 2012 | A1 |
20120307074 | Bhagavathy et al. | Dec 2012 | A1 |
20120314023 | Barcons-Palau et al. | Dec 2012 | A1 |
20120320153 | Barcons-Palau et al. | Dec 2012 | A1 |
20120321128 | Medioni et al. | Dec 2012 | A1 |
20120323581 | Strietzel et al. | Dec 2012 | A1 |
20130027657 | Esser et al. | Jan 2013 | A1 |
20130070973 | Saito et al. | Mar 2013 | A1 |
20130088490 | Rasmussen et al. | Apr 2013 | A1 |
20130187915 | Lee et al. | Jul 2013 | A1 |
20130201187 | Tong et al. | Aug 2013 | A1 |
20130271451 | Tong et al. | Oct 2013 | A1 |
Number | Date | Country |
---|---|---|
10007705 | Sep 2001 | DE |
0092364 | Oct 1983 | EP |
0359596 | Mar 1990 | EP |
0994336 | Apr 2000 | EP |
1011006 | Jun 2000 | EP |
1136869 | Sep 2001 | EP |
1138253 | Oct 2001 | EP |
0444902 | Jun 2002 | EP |
1450201 | Aug 2004 | EP |
1728467 | Dec 2006 | EP |
1154302 | Aug 2009 | EP |
2966038 | Apr 2012 | FR |
2449855 | Dec 2008 | GB |
2003345857 | Dec 2003 | JP |
2004272530 | Sep 2004 | JP |
2005269022 | Sep 2005 | JP |
20000028583 | May 2000 | KR |
200000051217 | Aug 2000 | KR |
20040097200 | Nov 2004 | KR |
20080086945 | Sep 2008 | KR |
20100050052 | May 2010 | KR |
WO 9300641 | Jan 1993 | WO |
WO 9604596 | Feb 1996 | WO |
WO 9740342 | Oct 1997 | WO |
WO 9740960 | Nov 1997 | WO |
WO 9813721 | Apr 1998 | WO |
WO 9827861 | Jul 1998 | WO |
WO 9827902 | Jul 1998 | WO |
WO 9835263 | Aug 1998 | WO |
WO 9852189 | Nov 1998 | WO |
WO 9857270 | Dec 1998 | WO |
WO 9956942 | Nov 1999 | WO |
WO 9964918 | Dec 1999 | WO |
WO 0000863 | Jan 2000 | WO |
WO 0016683 | Mar 2000 | WO |
WO 0045348 | Aug 2000 | WO |
WO 0049919 | Aug 2000 | WO |
WO 0062148 | Oct 2000 | WO |
WO 0064168 | Oct 2000 | WO |
WO 0123908 | Apr 2001 | WO |
WO 0132074 | May 2001 | WO |
WO 0135338 | May 2001 | WO |
WO 0161447 | Aug 2001 | WO |
WO 0167325 | Sep 2001 | WO |
WO 0174553 | Oct 2001 | WO |
WO 0178630 | Oct 2001 | WO |
WO 0188654 | Nov 2001 | WO |
WO 0207845 | Jan 2002 | WO |
WO 0241127 | May 2002 | WO |
WO 03079097 | Sep 2003 | WO |
WO 03084448 | Oct 2003 | WO |
WO 2007012261 | Feb 2007 | WO |
WO 2007017751 | Feb 2007 | WO |
WO 2007018017 | Feb 2007 | WO |
WO 2008009355 | Jan 2008 | WO |
WO 2008009423 | Jan 2008 | WO |
WO 2008135178 | Nov 2008 | WO |
WO 2009023012 | Feb 2009 | WO |
WO 2009043941 | Apr 2009 | WO |
WO 2010039976 | Apr 2010 | WO |
WO 2010042990 | Apr 2010 | WO |
WO 2011012743 | Feb 2011 | WO |
WO 2011095917 | Aug 2011 | WO |
WO 2011134611 | Nov 2011 | WO |
WO 2011147649 | Dec 2011 | WO |
WO 2012051654 | Apr 2012 | WO |
WO 2012054972 | May 2012 | WO |
WO 2012054983 | May 2012 | WO |
Entry |
---|
PCT International Search Report for PCT International Patent Application No. PCT/US2012/068174, mailed Mar. 7, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042504, mailed Aug. 19, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042509, mailed Sep. 2, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042514, mailed Aug. 30, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042517, mailed Aug. 29, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042512, mailed Sep. 6, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042529, mailed Sep. 17, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042525, mailed Sep. 17, 2013. |
PCT International Search Report for PCT International Patent Application No. PCT/US2013/042520, mailed Sep. 27, 2013. |
Tracker, Tracker Help, Nov. 2009. |
3D Morphable Model Face Animation, http://www.youtube.com/watch?v=nice6NYb_WA, Apr. 20, 2006. |
Visionix 3D iView, Human Body Measurement Newsletter, Vol. 1, No. 2, Sep. 2005, pp. 2 and 3. |
Blaise Aguera y Arcas demos Photosynth, May 2007. Ted.com, http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html. |
ERC Technology Leads to Eyeglass "Virtual Try-on" System, Apr. 20, 2012, http://showcase.erc-assoc.org/accomplishments/microelectronic/imsc6-eyeglass.htm. |
U.S. Appl. No. 13/775,785, filed Feb. 25, 2013, Systems and Methods for Adjusting a Virtual Try-On. |
U.S. Appl. No. 13/775,764, filed Feb. 25, 2013, Systems and Methods for Feature Tracking. |
U.S. Appl. No. 13/774,995, filed Feb. 22, 2013, Systems and Methods for Scaling a Three-Dimensional Model. |
U.S. Appl. No. 13/774,985, filed Feb. 22, 2013, Systems and Methods for Generating a 3-D Model of a Virtual Try-On Product. |
U.S. Appl. No. 13/774,983, filed Feb. 22, 2013, Systems and Methods for Generating a 3-D Model of a User for a Virtual Try-On Product. |
U.S. Appl. No. 13/774,978, filed Feb. 22, 2013, Systems and Methods for Efficiently Processing Virtual 3-D Data. |
U.S. Appl. No. 13/774,958, filed Feb. 22, 2013, Systems and Methods for Rendering Virtual Try-On Products. |
U.S. Appl. No. 13/706,909, filed Dec. 6, 2012, Systems and Methods for Obtaining a Pupillary Distance Measurement Using a Mobile Computing Device. |
Sinha et al., GPU-based Video Feature Tracking and Matching, http://frahm.web.unc.edu/files/2014/01/GPU-based-Video-Feature_Tracking-And_Matching.pdf, May 2006. |
Dror et al., Recognition of Surface Reflectance Properties from a Single Image under Unknown Real-World Illumination, IEEE, Proceedings of the IEEE Workshop on Identifying Objects Across Variations in Lighting: Psychophysics & Computation, Dec. 2011. |
Simonite, 3-D Models Created by a Cell Phone, Mar. 23, 2011, url: http://www.technologyreview.com/news/423386/3-d-models-created-by-a-cell-phone/. |
Fidaleo, Model-Assisted 3D Face Reconstruction from Video, AMFG'07 Analysis and Modeling of Faces and Gestures, Lecture Notes in Computer Science, Vol. 4778, 2007, pp. 124-138. |
Garcia-Mateos, Estimating 3D facial pose in video with just three points, CVPRW '08 Computer Vision and Pattern Recognition Workshops, 2008. |
Number | Date | Country |
---|---|---|
20130314410 A1 | Nov 2013 | US |
Number | Date | Country |
---|---|---|
61650983 | May 2012 | US |
61735951 | Dec 2012 | US |