Buying eyewear can be a daunting task for many individuals given the sheer number of frame styles available. Many individuals only purchase one or two pairs of eyewear for daily use. Because the eyewear is often worn during all waking hours, the frames must be both comfortable and visually appealing to the wearer. As a result, eyewear manufacturers provide many selections corresponding to all conceivable design preferences and budgets.
Opticians and other physical stores, outlets, and boutiques maintain physical inventories of frames for consumers to try on before purchasing. However, storage space and showroom space are typically limited, restricting the number of frames that are available to try on and purchase. Additionally, consumers may not have the time or desire to physically browse through significant quantities of frames, pick them up, try them on in front of a mirror, and replace them before repeating the process innumerable times.
Virtual try on technology exists that allows a user to see an image of themselves on a display that will superimpose eyewear onto the user's face. However, conventional virtual try on technology does not provide the user with a realistic experience that adequately simulates the user's appearance with the virtual eyewear properly positioned and secured to the user's face. Rather, using existing virtual try on technology, when the user moves his or her head too fast, the virtual frames often float out of place or become unglued from the proper positioning on the user's face.
Moreover, conventional virtual try on experiences often do not properly place the eyewear on the user's image. The precise dimensions of an individual's face, including the three-dimensional measurements and positioning of the individual's eyes, nose, and ears in relation to one another, have a profound effect on the positioning and fit of eyewear. The same eyewear may rest in different places on different individuals, creating significant differences in the fit and resulting appearance of the eyewear on each of the individuals' faces. Existing virtual try on systems simply superimpose images of eyewear on a user's face according to a fixed position of the user's eyes or other specific attribute, which does not accurately depict the actual positioning of the physical eyewear on the user's face according to the precise dimensions and three-dimensional aspects of the user's features.
Accordingly, there is a need for improved systems and methods that address these and other needs.
It should be appreciated that this Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to be used to limit the scope of the claimed subject matter.
According to a first aspect of the disclosure, a computer-implemented method provides an eyewear visualization experience. The method includes receiving an image of a user's face that is positioned a distance from a first side of a viewing surface of a visualization device. The image of the user's face is used to create a three-dimensional map of the user's face. The three-dimensional map includes a positioning of features of the user's face according to three-dimensional measurements associated with the features of the user's face. An image of eyewear is received and used with the three-dimensional map of the user's face to determine an eyewear placement position. The eyewear placement position corresponds to a positioning of a visualized image of the user's face on the first side of the viewing surface. The image of the eyewear is displayed on a display that is positioned a distance from a second side of the viewing surface. The image of the eyewear is displayed at the eyewear placement position on the visualized image of the user's face.
According to another aspect, an eyewear visualization system is provided. The eyewear visualization system includes a visualization device, at least one camera, a display and a processor. The visualization device has a viewing surface that is at least partially reflective and at least partially transmissive, with a first side and a second side opposite the first side. The camera is configured to capture an image of a user's face that is positioned a distance from the first side of the viewing surface. The display is positioned adjacent to and a distance from the second side of the viewing surface. The processor is communicatively coupled to the camera and the display and is operative to receive the image of the user's face from the camera and use the image to create a three-dimensional map of the image. The three-dimensional map includes a positioning of the features of the user's face according to three-dimensional measurements associated with the features. The processor is further operative to receive an image of the eyewear and use the image and the three-dimensional map of the image of the user's face to determine an eyewear placement position. The eyewear placement position corresponds to a positioning of a reflected image of the user's face on the first side of the viewing surface. The processor displays the image of the eyewear on the display at the eyewear placement position.
According to yet another aspect, an eyewear visualization system is provided. The system includes a visualization device, at least one camera, a display and a processor. The visualization device has a screen with a first side and a second side opposite the first side. The camera is configured to capture an image of a user's face that is positioned a distance from the first side of the screen. The display is positioned adjacent to and a distance from the second side of the screen. The processor is communicatively coupled to the camera and the display and is operative to receive the image of the user's face from the camera. The processor uses the image to create a three-dimensional map of the image of the user's face. The three-dimensional map includes a positioning of the features of the user's face according to three-dimensional measurements associated with the features. The processor receives an image of the eyewear and uses the image and the three-dimensional map of the image of the user's face to determine an eyewear placement position corresponding to a positioning of a reflected image of the user's face on the first side of the screen. The processor displays the image of the eyewear on the display at the eyewear placement position.
The features, functions, and advantages that have been discussed can be achieved independently in various embodiments of the present disclosure or may be combined in yet other embodiments, further details of which can be seen with reference to the following description and drawings.
Various embodiments of systems and methods for providing an enhanced eyewear fit visualization experience are described below. In the course of this description, reference will be made to the accompanying drawings, which are not necessarily drawn to scale.
Various embodiments will now be described more fully hereinafter with reference to the accompanying drawings. It should be understood that the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout.
Overview
Eyewear is an extremely personal product that may be as unique as the person wearing it, often reflecting personality or lifestyle traits of the wearer. People prefer different colors, styles, materials, weights, and price points, and there are virtually unlimited choices of eyewear to select from. When a person buys new eyewear, he or she typically visits a brick-and-mortar business and browses through tens or hundreds of pairs of frames before selecting a subset of those frames to try on. After selecting promising frames, the person usually stands in front of a mirror and tries the eyewear on to determine whether the look and feel are desirable. This process is repeated many times before the choices are narrowed down to the eyewear to be purchased. While any particular eyewear boutique or business may showcase hundreds of eyewear selections, the potential number of selections is limited by inventory space.
Utilizing the concepts and technologies described herein, these challenges are overcome through the use of an eyewear fit visualization system. According to various embodiments, the eyewear fit visualization system provides the user with a screen or mirror on which the user may view his or her real-time image. The eyewear fit visualization system then provides a realistic depiction of any selected eyewear in place over the user's image to simulate the user's image wearing the selected eyewear. The fit of the eyewear is precise due to a three-dimensional mapping of the user's face created by the eyewear fit visualization system. This three-dimensional map of the user's face, coupled with three-dimensional maps of the eyewear stored within one or more associated databases, provides detailed measurements of the user's face and corresponding eyewear that allow the system described herein to accurately calculate contact points of the eyewear with the user's face. Using these contact points, the system realistically displays the eyewear in the precise position on the displayed or reflected image of the user's face to show the user how he or she will look in the eyewear. The system further monitors movement of the user to smoothly track the corresponding movement of the eyewear being displayed so that the virtual frames do not float out of place or become unglued from the proper positioning on the user's face.
According to various embodiments, the eyewear fit visualization system described herein utilizes a visualization device with a viewing surface that is partially reflective and partially transmissive. A display is positioned behind the visualization device on the opposite side from the first side that is viewed by the user. The user is able to see his or her reflection in the first side of the viewing surface. Images displayed on the display are also visible from the first side of the viewing surface. In this manner, images of eyewear that are precisely displayed according to a configuration and positioning of the user's image being reflected in the viewing surface are visible in place on the user's face as if being worn by the user. Three-dimensional measurement and mapping techniques are utilized to locate an accurate eyewear placement position on the user's image. Any movement of the eyewear image to match corresponding movement of the user is produced by modification of the image being displayed and/or forward or backward movement of the display with respect to the viewing surface.
According to other embodiments, the eyewear fit visualization system described herein utilizes a screen with a display positioned behind the screen. The system utilizes a camera to provide a real-time image of the user on the display. Three-dimensional measurement and mapping techniques are used to determine contact points of the selected eyewear with the user's facial features. Images of the eyewear are provided with the real-time image of the user. The eyewear image is provided at the accurate eyewear placement position on the user's face according to the determined contact points. The system smoothly modifies the image of the eyewear according to the real-time movement of the user.
Exemplary Technical Platforms
As will be appreciated by one skilled in the relevant field, the present systems and methods may be, for example, embodied as a computer system, a method, or a computer program product. Accordingly, various embodiments may be entirely hardware or a combination of hardware and software. Furthermore, particular embodiments may take the form of a computer program product stored on a computer-readable storage medium having computer-readable instructions (e.g., software) embodied in the storage medium. Various embodiments may also take the form of Internet-implemented computer software. Any suitable computer-readable storage medium may be utilized including, for example, hard disks, compact disks, DVDs, optical storage devices, and/or magnetic storage devices.
Various embodiments are described below with reference to block diagram and flowchart illustrations of methods, apparatuses (e.g., systems), and computer program products. It should be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by a computer executing computer program instructions. These computer program instructions may be loaded onto a general purpose computer, a special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
The computer instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any suitable type of network, including but not limited to: a local area network (LAN); a wide area network (WAN), such as the Internet; and/or a cellular network.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture that is configured for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process (e.g., method) such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Example System Architecture
The one or more networks 115 may include any of a variety of types of wired or wireless computer networks such as the Internet (or other WAN), a private intranet, a mesh network, a public switched telephone network (PSTN), and/or any other type of network (e.g., a network that uses Bluetooth or near field communications to facilitate communication between computing devices). The communication link between the one or more remote computing devices 154 and the eyewear fit visualization server 120 may be, for example, implemented via a local area network (LAN) or via the Internet (or other WAN).
In particular embodiments, the eyewear fit visualization server 120 may be connected (e.g., networked) to other computing devices in a LAN, an intranet, an extranet, and/or the Internet as shown in
As shown in
The processing device 202 represents one or more general-purpose or specific processing devices such as a microprocessor, a central processing unit (CPU), or the like. More particularly, the processing device 202 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device 202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 202 may be configured to execute processing logic 226 for performing various operations and steps discussed herein.
The eyewear fit visualization server 120 may further include a network interface device 208. The eyewear fit visualization server 120 may also include a video display unit 210 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alpha-numeric input device 212 (e.g., a keyboard), a cursor control device 214 (e.g., a mouse), a signal generation device 216 (e.g., a speaker), and a data storage device 218.
The data storage device 218 may include a non-transitory computing device-accessible storage medium 230 (also known as a non-transitory computing device-readable storage medium, a non-transitory computing device-readable medium, or a non-transitory computer-readable medium) on which is stored one or more sets of instructions (e.g., the eyewear fit visualization module 300) embodying any one or more of the methodologies or functions described herein. The one or more sets of instructions may also reside, completely or at least partially, within the main memory 204 and/or within the processing device 202 during execution thereof by the eyewear fit visualization server 120—the main memory 204 and the processing device 202 also constituting computing device-accessible storage media. The one or more sets of instructions may further be transmitted or received over the network 115 via the network interface device 208.
While the computing device-accessible storage medium 230 is shown in an exemplary embodiment to be a single medium, the term “computing device-accessible storage medium” should be understood to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computing device-accessible storage medium” should also be understood to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computing device and that causes the computing device to perform any one or more of the methodologies of the present invention. The terms “computing device-accessible storage medium” and like terms should accordingly be understood to include, but not be limited to, solid-state memories, optical and magnetic media, etc.
Exemplary Visualization Devices
As shown in
With this viewing surface 306, the eyewear fit visualization system 100 presents an image of eyewear in a position on the display 308 that corresponds with the precise eyewear placement position 322 on the visualized image 320 of the user's face on the first side 312 of the viewing surface 306. Because the image of the eyewear provided on the display 308 at the second side 314 of the viewing surface 306 is brighter than the light reflected on the first side 312 of the viewing surface 306 at the eyewear placement position, the user 304 sees the eyewear on the visualized image 320 as if being worn by the user 304. With the visualization device 302, the user 304 may “wear” any quantity and type of eyewear without moving from the visualization device 302.
As will be described in greater detail below, the eyewear fit visualization system 100 determines the eyewear placement position 322, including the size and orientation of the image of the eyewear to display at the eyewear placement position, utilizing the distance of the user 304 from the first side 312 of the viewing surface 306 and the distance of the display 308 from the second side 314 of the viewing surface 306. According to one embodiment, the distance between the display 308 and the second side 314 of the viewing surface 306 is fixed such that the display 308 is adjacent or proximate to the viewing surface 306. In these embodiments, the eyewear fit visualization module 300 controls the characteristics of the displayed image of the eyewear (e.g., brightness, size, orientation) via the lighting elements in the display 308. For example, as the user 304 changes the distance between the user 304 and the viewing surface 306, the eyewear fit visualization module 300 detects the change and alters the lighting elements in the display 308 to compensate for any size and orientation changes in the user's face and maintain the image of the eyewear at the eyewear placement position 322.
According to alternative embodiments, the eyewear fit visualization module 300 controls the characteristics of the displayed image of the eyewear (e.g., brightness, size, orientation) via movement of the display 308 to alter the distance between the display 308 and the second side 314 of the viewing surface 306. For example, as the user 304 increases the distance between the user 304 and the viewing surface 306, the eyewear fit visualization module 300 detects the change and triggers a corresponding rearward movement of the display 308 away from the viewing surface 306 via the movement mechanism 310 to compensate for any size and orientation changes in the user's face and maintain the image of the eyewear at the eyewear placement position 322. The movement mechanism 310 may include tracks, rails, or any mechanism used in conjunction with an actuator or other motor to facilitate movement of the display 308 with respect to the viewing surface 306. It should be understood that the movement mechanism 310 is not limited to translational movement of the display 308 toward and away from the viewing surface 306, but may include any suitable device operative to facilitate rotational and/or directional movement of the display 308 in three dimensions.
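The distance-based compensation described in the preceding two paragraphs can be reasoned about with simple mirror geometry. The following is a minimal sketch, assuming the reflected face appears behind the viewing surface at roughly the user's standoff distance and that the goal is to keep the eyewear's angular size consistent from the user's viewpoint; the function name and parameters are illustrative and not part of the disclosure.

```python
def eyewear_display_scale(user_distance_mm: float, display_offset_mm: float) -> float:
    """Scale factor for the rendered eyewear relative to its real-world size.

    A flat, partially reflective surface places the user's virtual (reflected)
    image behind the surface at about the same distance the user stands in
    front of it.  The rendered eyewear, however, sits at the display's offset
    behind the surface.  Matching angular size from the user's eye gives a
    similar-triangles ratio (illustrative model, not the disclosed algorithm).
    """
    eye_to_virtual_face = user_distance_mm + user_distance_mm  # user -> surface -> reflected face
    eye_to_display = user_distance_mm + display_offset_mm      # user -> surface -> display panel
    return eye_to_display / eye_to_virtual_face


# Example: a user 600 mm from the surface with the display 50 mm behind it
# would draw the eyewear at roughly 54% of its physical width.
print(eyewear_display_scale(600.0, 50.0))
```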
Exemplary System Platform
As noted above, a system, according to various embodiments, is adapted to provide an eyewear visualization experience. Various aspects of the system's functionality may be executed by certain system modules, including the eyewear fit visualization module 300. The eyewear fit visualization module 300 is discussed in greater detail below.
Eyewear Fit Visualization Module
At operation 506, the eyewear fit visualization module 300 receives or retrieves an image of eyewear that the user would like to visualize wearing. This eyewear may be selected by the user from a total group of available eyewear, or the eyewear fit visualization module 300 may provide the user with a subset of eyewear according to the user's eyewear size category (e.g., small, medium, or large), which is determined from the points of contact between the eyewear and the user's face using the methods described herein. The image of the eyewear may be stored along with any number of eyewear image files in the eyewear databases 140, or within a storage medium within the visualization device 302. The image of the eyewear may contain a 3D map of the eyewear similar to the 3D map of the user's face. Additionally or alternatively, the image of the eyewear may contain eyewear data that corresponds to all applicable measurements of the eyewear.
The routine 500 continues to operation 508, where the eyewear fit visualization module 300 determines the eyewear placement position 322. As described above, the eyewear placement position 322 is the location on the visualized image 320, which is displayed or reflected on the viewing surface 306, at which the image of the eyewear will be shown to simulate the wearing of the eyewear by the user 304. To determine the eyewear placement position 322, the eyewear fit visualization module 300 utilizes the 3D map of the eyewear and the 3D map of the image of the user's face to identify the contact points of the eyewear to the user's face. Based on the contact points of the eyewear to the user's face, the eyewear fit visualization module 300 may determine the exact location on the display 308 to provide the image of the eyewear according to the size, position, and orientation of the visualized image 320 of the user's face.
At operation 510, the eyewear fit visualization module 300 displays the eyewear image at the eyewear placement position 322. According to various embodiments, the eyewear fit visualization module 300 may continue to monitor movement of the user 304 using the camera 310. When movement is detected at operation 512, the routine 500 returns to operation 508 and continues as described above. In this manner, the eyewear fit visualization module 300 is capable of providing a realistic simulation of the eyewear being worn by the user as the eyewear remains in place at the accurately determined eyewear placement position 322 as the user moves in three-dimensional space.
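Read as pseudocode, the loop over operations 506-512 might look like the following sketch. The module, camera, and display helpers (create_face_map, determine_placement, movement_detected, and so on) are hypothetical names standing in for the functionality described above, not the actual implementation.

```python
import time

def run_fit_visualization(module, camera, display, poll_interval_s=0.033):
    """Hypothetical driver for the routine 500 loop described above."""
    face_map = module.create_face_map(camera.capture())            # 3D map of the user's face
    eyewear = module.load_selected_eyewear()                       # operation 506
    while True:
        placement = module.determine_placement(face_map, eyewear)  # operation 508
        display.show(eyewear, placement)                           # operation 510
        time.sleep(poll_interval_s)
        frame = camera.capture()
        if module.movement_detected(face_map, frame):              # operation 512
            face_map = module.update_face_map(frame)               # loop back to operation 508
```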
Determining Proper Eyewear Fit
Limitations of Conventional Technology
Conventional computer vision technology is used to identify borders and features, or points, of a user's face. These points create a map that can be used to compare against other facial maps for facial recognition purposes. However, conventional facial recognition techniques commonly identify only 64 points, with 128 points being the maximum. Sixty-four points, and even 128 points, are insufficient to accurately render eyewear frames on a person's face or to provide eyewear fit information. Generally, 64 points will provide points around a person's eyes, eyebrows, nose ridge, and lips. When trying to accurately fit a pair of glasses on a person's face, having additional points on the person's cheek, brow, forehead, sides of the head, and ears is useful. Having brow and cheek points ensures that the lens of the eyewear does not rest on the cheeks instead of resting on the user's nose. A person's ears provide one of the more problematic areas to fit. The size and positioning of ears, the tops of the ears, and the area behind the ears are significant since the temples of the eyewear rest on or proximate to the ears. Consequently, establishing points around a user's ears when mapping the user's face for fit purposes is useful in providing a realistic fit simulation.
Solution for Projecting a 2D Image into a 3D Volume
The eyewear fit visualization system 100 provides a uniquely accurate fit of the eyewear images to the user's face using three-dimensional mapping techniques to acquire and utilize precise measurements of the user's facial features. To achieve this accuracy, a camera 310 is used to create an image of the user's face. According to some embodiments, the camera 310 takes a two-dimensional image and transforms the two-dimensional image into a three-dimensional map of the user's face. The three-dimensional map acts as a topographical map of the user's face that provides very precise dimensions of the user's facial features with respect to one another. Embodiments for creating these 3D maps will be described below with respect to
The various embodiments described herein for creating the 3D maps of the user's face and corresponding facial features utilize still or video images from the camera 310. Accordingly, it should be noted that the camera 310 may include any type of still or video camera for creating an image of the user's face. For example, the camera 310 may include a forward-facing camera such as those used on a smartphone, desktop or laptop computer, or tablet computing device. According to alternative embodiments described below, the camera 310 is a depth sensing camera having two camera lenses that are spaced apart. A depth sensing camera allows the eyewear fit visualization system 100 to compare the images from the adjacent camera lenses to determine the depth of the user's face and corresponding facial features. These depth measurements are then used to determine the measurements of the various facial features with respect to one another to create the 3D map of the user's face.
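As one way to picture how two spaced-apart lenses yield depth, the standard stereo relation depth = focal_length × baseline / disparity can be applied per pixel. The sketch below assumes a rectified disparity map has already been computed (for example, by a block-matching algorithm); it is illustrative only and not the disclosed method.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Convert a per-pixel disparity map into depth in millimeters.

    Assumes rectified images from the two lenses, disparity in pixels, and
    zero or negative values marking invalid matches.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth_mm = np.full(disparity.shape, np.nan)
    valid = disparity > 0
    depth_mm[valid] = focal_length_px * baseline_mm / disparity[valid]
    return depth_mm
```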
Whether using still, video, or depth camera images, the images of the user's face captured by the camera 310 are used to map the features of the user's face to create a representation that accurately positions the facial features of the user in a three-dimensional space. This 3D map is similar to a topographical map of the user's face, positioning the facial features of the user's face with respect to one another on a granular level, providing dimensions within a few millimeters of accuracy. The 3D mapping and subsequent processing is performed by the eyewear fit visualization module 300 to convert images to 3D point clouds, locate facial features, annotate the images with respect to the facial features, and identify the contact points of specific eyewear on the particular user's face, which will now be described in greater detail.
According to one embodiment, the image data collection process begins with a three-dimensional head scan using the camera 310. This process may involve a short (e.g., approximately 20 seconds) video of the user's head, prompting the user 304 to rotate the user's head approximately 180 degrees starting with one ear facing the camera 310 and rotating until the other ear is facing the camera 310. Images may then be split out from the video, such as every frame being split to create a number of images corresponding to the various head poses during the 180-degree rotation. The eyewear fit visualization module 300 creates a point cloud from the image scan.
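Splitting the short head-rotation video into individual frames is straightforward with a generic video library; the sketch below uses OpenCV as one possibility (the library choice and the step parameter are assumptions, not part of the disclosure).

```python
import cv2  # OpenCV, one common choice for reading video frames

def extract_frames(video_path, step=1):
    """Yield frames from the head-scan video, one per `step` frames."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            yield frame  # each frame corresponds to one head pose in the rotation
        index += 1
    capture.release()
```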
From the point cloud 602, the various facial features are located. The ears, and specifically the ear holes, may be located first and used as an anchor from which the nose is located. According to one embodiment, the eyewear fit visualization module 300 searches for the ear hole. To do so, as seen in
The elevation decrease is due to the projection of the ear structure from the side of the user's head. Any decrease in elevation (e.g., from the surface inward due to the ear hole or from the ear down to the head) would indicate the probability that the area within the defined grid contains a facial feature. A significant change in the topography of the head surface inward, such as is the case with the ear hole, would create a substantial disorganization of the normal vectors. Consequently, a location on the user's facial point cloud that has a significant elevation decrease and a significant disorganization of the normal vectors would indicate a probability that the structure within the grid is an ear. Once the ear hole is identified, the system can trace lines emanating out from the center and trace the outline of the ear. When reaching the boundary of the ear, there is a decrease in elevation from the earlobe and outermost ear border to the head. In this manner, a detailed ear map can be created, identifying all major parts of the ear. Once both ears are found and mapped, the nose is easily found, as it is positioned between the ears and will have a significant elevation increase from the surface of the face. The eyes and other points of interest may be found in similar manners.
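One way to turn the two cues just described (an elevation decrease and disorganized normal vectors within a grid cell) into a concrete test is to score each cell and keep the maximum. The sketch below assumes each cell provides its points (with elevation in the z column) and unit surface normals; the scoring and weighting are illustrative assumptions rather than the disclosed algorithm.

```python
import numpy as np

def ear_candidate_score(cell_points, cell_normals):
    """Score a grid cell of the side-of-head point cloud as an ear candidate.

    Combines the elevation drop within the cell (surface pulling inward
    toward the ear hole) with how poorly the cell's unit normals agree with
    their mean direction (normal "disorganization").
    """
    elevations = np.asarray(cell_points, dtype=np.float64)[:, 2]
    elevation_drop = float(elevations.max() - elevations.min())

    normals = np.asarray(cell_normals, dtype=np.float64)
    mean_normal = normals.mean(axis=0)
    mean_normal /= np.linalg.norm(mean_normal) + 1e-9
    disorganization = float(1.0 - (normals @ mean_normal).mean())

    return elevation_drop * disorganization  # highest-scoring cell is the likely ear hole
```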
Once the points of interest corresponding to the user's facial features are known, the points are downsized and regularized, which is shown in view 902 of
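Downsizing and regularizing the point cloud can be done, for example, with a simple voxel-grid average, which both thins the points and evens out their spacing. This is a generic sketch under that assumption, with an illustrative voxel size.

```python
import numpy as np

def voxel_downsample(points, voxel_size_mm=2.0):
    """Average points that fall in the same voxel to thin and regularize the cloud."""
    points = np.asarray(points, dtype=np.float64)
    keys = np.floor(points / voxel_size_mm).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```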
Looking now at the view 1002 of
Utilizing this mapping data corresponding to the user's face and facial features, 3D map data associated with an eyewear selection may be retrieved from the one or more eyewear databases 140. The eyewear data is compared and analyzed with respect to the facial image data to predict contact points of the eyewear to the user's face. Knowing the exact dimensions of the eyewear in three-dimensional space, and knowing the precise location, shape, and size of the user's facial features (e.g., ears, nose, eyes, cheeks, brows) and corresponding dimensional data, the eyewear fit visualization module 300 can determine the exact locations where portions of the eyewear will contact the user's various facial features. Based on these locations, the eyewear fit visualization module 300 can virtually fit the eyewear to the user's face and provide a corresponding representation of the eyewear at this determined eyewear placement position on the reflection or display of the user's image, as described in further detail below.
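Once both 3D maps share a common coordinate frame, predicting contact points can be framed as a nearest-neighbor search between the eyewear's points and the face's points. The brute-force sketch below assumes millimeter coordinates and an illustrative contact tolerance; it is not the disclosed algorithm.

```python
import numpy as np

def predict_contact_points(eyewear_points, face_points, contact_tol_mm=1.0):
    """Return (eyewear_index, face_index, distance) for points within tolerance."""
    eyewear_points = np.asarray(eyewear_points, dtype=np.float64)
    face_points = np.asarray(face_points, dtype=np.float64)
    contacts = []
    for i, point in enumerate(eyewear_points):
        dists = np.linalg.norm(face_points - point, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= contact_tol_mm:
            contacts.append((i, j, float(dists[j])))
    return contacts
```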
The 3D mapping data associated with the eyewear and of the user's face will additionally allow the eyewear fit visualization module 300 to determine whether the eyewear will fit and how much pressure the user will experience at the determined contact points. The eyewear fit visualization system 100 may provide this information to the user via the visualization device 302. In order to estimate fit and to provide proper scale for the eyewear frames, the eyewear fit visualization module 300 utilizes a reference measurement. According to one embodiment, a user may be asked to place a credit card or other object of known dimensions on his or her forehead when an image is taken. According to an alternative embodiment, the transverse measurement of the user's eye is used as a reference. The transverse measurement of the eye is approximately 24.2 mm and only differs by approximately 1-2 mm from person to person. Using the eye as a reference measurement minimizes the actions required of the user (e.g., holding a credit card to their head) while providing an accurate reference within an error range of only +/−2 mm.
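The eye-based reference measurement amounts to a single scale factor applied to the face map. A minimal sketch, assuming the eye's transverse width has been measured in the unscaled model's units:

```python
import numpy as np

EYE_TRANSVERSE_MM = 24.2  # typical transverse eye measurement noted above (+/- ~2 mm)

def scale_face_map(points, eye_width_model_units):
    """Rescale the face map to millimeters using the eye as a reference length."""
    scale = EYE_TRANSVERSE_MM / eye_width_model_units
    return np.asarray(points, dtype=np.float64) * scale
```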
During a fit determination, if the eyewear fit visualization module 300 determines that the eyewear will contact the user's brow area or cheek, or if the pressure applied at the determined contact points exceeds a predetermined threshold, the eyewear fit visualization module 300 may provide a notification to the user via the visualization device 302 that the selected eyewear is not compatible or may be uncomfortable when worn by the user. According to one embodiment, the eyewear fit visualization module 300 determines a user's eyewear size category according to the plurality of contact points of the eyewear to the user's face, selects a subset of eyewear for presentation to the user according to the eyewear size category, and provides the subset of eyewear to the user for visualization. Doing so effectively narrows down the number of frames that the user may virtually wear to only those that properly fit the user.
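The fit determination and size-category filtering described above could be summarized along the following lines; the thresholds, category cutoffs, and message wording are illustrative assumptions.

```python
def fit_feedback(touches_brow, touches_cheek, max_pressure, pressure_threshold):
    """Return a notification string for the visualization device (illustrative)."""
    if touches_brow or touches_cheek:
        return "Selected eyewear may rest on the brow or cheek and is not recommended."
    if max_pressure > pressure_threshold:
        return "Selected eyewear may be uncomfortable at the indicated contact points."
    return "Selected eyewear appears compatible with your facial measurements."


def size_category(temple_to_temple_mm):
    """Map a face-width measurement to an eyewear size category (assumed cutoffs)."""
    if temple_to_temple_mm < 135:
        return "small"
    if temple_to_temple_mm < 145:
        return "medium"
    return "large"
```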
Alternative Solution for Projecting a 2D Image into a 3D Volume
According to one embodiment, the camera 310 is a depth sensing camera having two camera lenses that are spaced apart. A depth sensing camera allows the eyewear fit visualization system 100 to compare the images from the adjacent camera lenses to determine the depth of the user's face and corresponding facial features. These depth measurements are then used to determine the measurements of the various facial features with respect to one another to create the 3D map of the user's face. Using a depth sensing camera allows the point cloud to be generated in real time as the user moves around. The techniques discussed above may be used to identify the nose, ears, and other facial features. These points may be used to generate the bounding box and to downsample and regularize the point cloud before determining the eyewear placement position 322. With the precise measurements from the depth sensing camera, reference measurements from the transverse measurement of the eye or from an object of known dimensions are not necessary.
Many modifications and other embodiments of the invention will come to mind to one skilled in the art to which this invention pertains, having the benefit of the teaching presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for the purposes of limitation.