Digital centration is a process used in the eyewear industry to obtain the optical measurements needed to ensure that the prescription lenses in a pair of eyeglasses provide optimal visual performance, comfort, and prescription accuracy. The process typically involves using specialized equipment, such as a digital pupillometer, to measure the position of the wearer's pupils relative to an eyeglasses frame, and then using this information to determine the correct position of the lenses in the frame.
Digital centration typically requires several key optical measurements as follows:
Interpupillary Distance or Binocular Pupil Distance (PD)—measures the distance between the centers of the pupils, to ensure that the lenses are positioned correctly in front of the eyes. There are two types of PD: (1) Distance (or Far) PD refers to the distance between the pupils when a wearer is viewing a distant object. Eye doctors, e.g. optometrists, use Distance PD to create distance vision glasses. (2) Near PD refers to the distance between the pupils, measured on the spectacle plane, when a wearer is looking at a near object, such as the page of a book or a computer screen when reading. PD may also be used non-specifically, i.e. as a general measurement that is not specific to either Distance PD or Near PD, such as in the case of self-measurement, described hereinbelow, which provides only a rough measure.
Monocular Pupil Distance (MPD)—measures the distance between the center of a pupil and the bridge of the nose, to ensure that the optical center of each lens is aligned with the center of each pupil. There are two MPD values, one for the right pupil and one for the left pupil; thus, Distance PD is the sum of the two MPD values.
Segment height (or fitting height)—refers to the vertical distance from the bottom of the lens to the top of the bifocal or progressive segment, to ensure that the reading portion of the lens is located in the correct position for optimal vision correction.
Vertex distance (VD), also known as Back Vertex Distance (BVD)—refers to the distance between the back surface of the lens and the front surface of the eye of a wearer, when the eyeglasses are being worn, to ensure that the effective power of the lens is correct. Vertex distance is typically in the range of 10-16 mm.
WD refers to the working distance, which is the distance from eye 3 to the spectacle plane SP. WD is typically in the range of 35-40 cm.
Spectacle plane (SP), as used herein, refers to a plane that contacts the inner, or back, lens surface of the glasses, i.e. the surface closest to the eyes.
Distance c is the distance from the front surface of the cornea to the center of rotation of the eye. This value is normally assumed to be 13.5 mm.
The vertex distance, i.e. the distance from the front surface of the cornea to the spectacle plane typically ranges from 10-16 mm.
These measurements are usually taken by a qualified optician or eye doctor in a clinic. Online eyewear sellers use different methods to obtain these measurements from customers who are buying at home, including:
Self-measurement—Some online eyewear sellers provide detailed instructions on how to measure PD and/or MPD at home using a ruler or other measuring tools. Customers can then provide these measurements when placing their order.
Smartphone app—Some online eyewear sellers allow customers to use a smartphone app to take pictures of their face. The photos are then analyzed, and key facial features are extracted. To obtain an accurate scale, the photo often needs to be taken with a credit card, or another object of known size, placed or held next to the eyes. Many smartphones, such as some versions of the APPLE IPHONE, have depth sensors, which can be used to obtain the measurements, thus eliminating the need for a credit card.
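The credit-card scaling described above can be sketched as follows. This is an illustrative sketch, not part of the disclosed system; the function and variable names are hypothetical. It relies on the fact that an ISO/IEC 7810 ID-1 credit card is 85.60 mm wide, so the card's pixel width in the photo yields a millimeters-per-pixel scale.

```python
# Illustrative sketch: converting a pixel distance measured in a photo to
# millimeters using a reference object of known size. An ISO/IEC 7810 ID-1
# credit card is 85.60 mm wide. Names are hypothetical.

CARD_WIDTH_MM = 85.60  # standard ID-1 card width

def pixels_to_mm(measured_px: float, card_width_px: float) -> float:
    """Scale a pixel distance to millimeters via the reference card."""
    mm_per_px = CARD_WIDTH_MM / card_width_px
    return measured_px * mm_per_px

# Example: the card spans 428 px in the photo and the pupils are 310 px apart.
pd_mm = pixels_to_mm(310.0, 428.0)
```

The same conversion applies to any reference object of known physical width.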
While these methods can be effective, they are not always as accurate as measurements taken in person by a qualified optician or eye doctor. For example, for accurate prescription lens fitting, the errors in MPD and segment height measurements should be within 1 mm; however, self-measuring PD with a ruler can introduce errors ranging up to 4 mm, according to the results of at least one study.
Another challenge for home measurement is that segment height and vertex distance can only be measured with the eyeglasses on, so they are usually omitted from at-home measurements.
As a result, some online eyewear sellers offer a “try before you buy” option, allowing customers to test the eyeglasses for a certain period and return them if they are not satisfied. In such cases, inaccurate measurements may result in poor prescription fitting and higher return rates.
For these reasons, a simple digital centration method that allows a customer to purchase eyewear online and which obtains all the necessary optical measurements accurately by taking pictures of the customer's face, and without requiring a customer to hold a credit card or another external object near their face, is desirable.
The invention is a method and computer system that processes a single photo of the face of a subject wearing a pair of eyeglasses with a known frame size to compute optical measurements for digital centration.
The system uses advanced computer vision and image processing technology to detect facial features, pupil centers, and frame contours in the photo. Given the known frame size of the eyeglasses as a reference, PD, MPD and segment heights of the wearer can be estimated with an average error of less than 1 mm.
While the method described requires only a single photo as input, it can also process multiple photos of the same subject to improve accuracy.
The invention includes a method for computing centration measurements based on a digital photo, including receiving a digital photo of the face of a person wearing glasses, where the glasses have two lenses of equal size and a visible frame that surrounds each lens, where the wearer is facing the device that captured the photo, obtaining information about the frame size of the glasses, detecting facial landmarks for the face, extracting a bounding box for each of the two lenses, and computing a set of centration measurements that can be provided to a glasses manufacturer.
Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following drawings. In the drawings, like reference numerals refer to like parts throughout the various figures unless otherwise specified.
The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
The invention now will be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments by which the invention may be practiced. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Among other things, the invention may be embodied as methods, processes, systems, or devices. The following detailed description is, therefore, not to be taken in a limiting sense.
As used herein the following terms have the meanings given below:
Glasses, prescription glasses, pair of glasses are used interchangeably herein to refer to a pair of glasses with prescription lenses made specifically for the wearer.
Wearer or user—refers to a person who wears a pair of glasses and has a photo taken; the measurements are computed based on a photo of the wearer wearing the glasses.
The system and method described herein pertain to the computation of measurements of the wearer's eyes which will subsequently be used to create a pair of prescription glasses for the wearer. The system and method do not include the details of the manufacturing process that creates the prescription glasses using the computed measurements.
Measurement glasses 4 are used herein only as a reference with a known frame size to compute optical measurements for digital centration. These may be an existing pair of prescription glasses owned by wearer 2 or they may be provided to wearer 2 for the specific purpose of taking photo 8.
Photo 8 is a digital photo. If camera 6 is an analog device and takes an analog photo, e.g. a film negative, film slide, or print, then it is scanned to a digital format, resulting in photo 8 in digital format.
Photo 8 in digital format is provided to a digital centration analyzer 10. This is typically a commercially available computer such as a WINDOWS personal computer, or a smartphone such as an IPHONE from APPLE COMPUTER that runs customized digital centration analysis software. It can also be a server computer or a cloud service, i.e. a service that is available across the Internet.
Digital centration analyzer 10 receives, or maintains, information about measurement glasses 4 and camera 6 and receives photo 8. Digital centration analyzer 10 analyzes photo 8, as described hereinbelow, and provides centration measurements 12 to prescription glasses manufacturer 14, which creates a pair of prescription glasses 16.
It may be appreciated that parts of digital centration system 1 may be provided by a glasses manufacturer 14 or by a company or organization that provides a digital centration service as a standalone business.
At step 305 digital photo 8 of the wearer is received. The input data includes: (1) photo 8 of wearer 2 where wearer 2 is wearing measurement glasses 4, (2) optionally, the focal length of the camera used to take the photo. The focal length is typically embedded in the digital photo.
At step 310 the frame width of measurement glasses 4 is obtained. There are several ways to compute the frame width; also, in some cases, the frame width can be provided.
Generally, focal length and frame width data may be stored or received, or, in the case of frame width, computed based on information extracted from photo 8.
At step 315 facial landmarks are detected. Facial landmarks typically include pupil centers and may also include iris contours, nose and lip identification. As described hereinbelow, a commercially available landmark detector may be used at this step.
At step 320 a frame region is extracted from photo 8.
At step 325 the two lens contours of measurement glasses 4 are detected inside the extracted frame region.
At step 330 the detected contours are rectified to correct for any perspective distortion, resulting in symmetrical contours.
Next, at step 335 a bounding box is determined for each rectified contour. It may be appreciated that the term bounding box, as used herein, refers to a set of values which can be used to define a left bounding box that encapsulates the left lens of the glasses and a right bounding box that encapsulates the right lens of the glasses. For example, since a bounding box is a rectangle, it can be defined by three points, in this case pixel locations in a digital image, e.g. an upper right corner, a lower right corner and an upper left corner.
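The three-corner representation above can be illustrated with a small sketch. This is not part of the disclosure; the function name and corner ordering are illustrative. Given three corners of a (possibly rotated) rectangle, the fourth corner follows from the parallelogram rule, and the box width and height follow from the corner distances.

```python
# Minimal sketch of a bounding box defined by three pixel corners, as
# described above. Names and corner ordering are illustrative.
import math

def complete_box(ul, ur, lr):
    """Given upper-left, upper-right, and lower-right corners of a rectangle,
    return the lower-left corner plus the box width and height."""
    # Parallelogram rule: the fourth corner is ul + lr - ur.
    ll = (ul[0] + lr[0] - ur[0], ul[1] + lr[1] - ur[1])
    width = math.dist(ul, ur)
    height = math.dist(ur, lr)
    return ll, width, height

# Axis-aligned example: a 100 x 40 px box with upper-left corner at (10, 20).
ll, w, h = complete_box((10, 20), (110, 20), (110, 60))
```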
At step 340 centration measurements 12 are computed. These measurements typically include (1) near PD and segment height, and (2) distance PD and MPD. In other embodiments, different or alternative measurements may be computed, such as those illustrated hereinbelow in
Generally, method 300 determines a bounding box that encloses each lens region or contour of measurement glasses 4 and then computes a number of centration measurements 12 that can be used to manufacture a custom-fitted pair of glasses. In certain embodiments, one or more of steps 320, 325 and 330 are performed to improve the accuracy of the resulting centration measurements prior to determining the bounding boxes. However, there may be cases where one or more of steps 320, 325, and 330 are skipped, combined, or performed in a different order. For example, in certain embodiments one or more of steps 320, 325, 330 are combined or performed by a machine learning model, such as a convolutional neural network (CNN), which has been extensively trained and performs one or more of these steps as part of model execution. While such embodiments are not explicitly addressed herein, generally, use of a machine learning model or other algorithms that rearrange the order or eliminate the need to individually perform one or more of steps 320, 325 and 330 is within the scope and spirit of the subject invention. Each of the steps in method 300 is now described in greater detail.
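The overall flow of method 300 can be sketched as a processing pipeline. All function names below are placeholders for the steps described above, not an actual API, and the optional steps are shown unconditionally for brevity.

```python
# High-level sketch of method 300. Every called function is a placeholder
# for the corresponding step described in the text.

def method_300(photo, focal_length=None):
    frame_width = obtain_frame_width(photo)            # step 310
    landmarks = detect_facial_landmarks(photo)         # step 315
    frame_region = extract_frame_region(photo)         # step 320 (optional)
    contours = detect_lens_contours(frame_region)      # step 325 (optional)
    contours = rectify_contours(contours, landmarks)   # step 330 (optional)
    boxes = bounding_boxes(contours)                   # step 335
    return compute_centration(boxes, landmarks,        # step 340
                              frame_width, focal_length)
```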
There are several requirements for obtaining a satisfactory photo 8. Photo 8 should be of a person, wearer 2, who is directly facing the camera with minimal head tilt or rotation. Wearer 2 should be wearing a pair of measurement glasses, such as measurement glasses 4, preferably with visible rims, since rimless frames or transparent rims may not work. As discussed hereinbelow, the method first estimates near PD. Thus, to approximate a reading condition, wearer 2 should be looking at, i.e. focusing on, the camera when photo 8 is taken. The capturing distance, i.e. the distance from the lens of camera 6 to the face of wearer 2, should be roughly 30-40 cm. This distance is approximately the same as the working distance, or the working distance plus vertex distance, but does not have to be a precise value. Lighting conditions should be adequate to allow facial features, pupils, and the eyeglasses lens contours to be detectable by a digital image sensor, such as the sensors included in commercially available photographic equipment and mobile phones.
If the focal length of the photo is known, it can be used for computing the working distance, the distance between the camera and the spectacle plane as described with reference to
There are several ways to obtain the frame width of measurement glasses 4. As illustrated in
Thus, if a known pair of measurement glasses 4 is worn in photo 8 and the frame size numbers are known, then the frame width is easily computed. If measurement glasses 4 are not known, e.g. wearer 2 simply uses a pair of glasses that he/she already possesses, then wearer 2 can be prompted to (1) take a photo of the frame size numbers as they appear on the inside of the frame temple and (2) upload the photo that shows the frame size numbers.
Alternatively, a database of many known eyeglass frames can be used to obtain the frame size numbers. Each entry in the database stores the frame's shape contour and its corresponding frame size numbers. When a measurement is needed, photo 8 is processed to detect the frame shape, which is used to probe the database to see if there is a match. If a match is found the corresponding frame size numbers are retrieved from the database and the frame width is computed as described above.
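The database lookup described above can be sketched as follows. The database layout and the matching metric are assumptions for illustration only: here contours are compared with a simple scale-normalized shape distance, and the best match below a threshold yields the stored frame size numbers.

```python
# Illustrative sketch of probing a frame database with a detected contour.
# The metric (normalized point-to-point distance) and record layout are
# assumptions, not part of the disclosure.
import math

def normalize(contour):
    """Center a contour and scale it to unit RMS radius."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    pts = [(x - cx, y - cy) for x, y in contour]
    scale = math.sqrt(sum(x * x + y * y for x, y in pts) / n) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def shape_distance(a, b):
    """Mean point-to-point distance between two equal-length contours."""
    return sum(math.dist(p, q) for p, q in zip(normalize(a), normalize(b))) / len(a)

def lookup(detected, database, threshold=0.05):
    """Return the frame size numbers of the closest stored contour, or None."""
    best = min(database, key=lambda e: shape_distance(detected, e["contour"]))
    if shape_distance(detected, best["contour"]) <= threshold:
        return best["size_numbers"]
    return None

db = [{"contour": [(0, 0), (4, 0), (4, 2), (0, 2)], "size_numbers": (52, 18, 140)}]
match = lookup([(0, 0), (2, 0), (2, 1), (0, 1)], db)  # same shape, half the scale
```

Because the comparison is scale-normalized, the apparent size of the frame in the photo does not affect the match; a real system would also need rotation handling and robust point correspondence.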
Additionally, the frame width can be obtained by measuring the physical frame with a ruler.
Facial landmark detection is a well-established digital image processing method to detect key facial features, such as points around the eyes, nose, lips, iris and pupils in a photo. Many such methods exist already, and they are sufficiently robust to perform the processing of step 315 of method 300 even in the presence of eyeglasses. As used herein, facial landmark detection is used to extract pupil centers and iris contours.
One such facial landmark detection method is provided by a commercially available toolkit named FACE MESH by MEDIAPIPE. Further information about FACE MESH, as well as the toolkit itself, can be obtained from GOOGLE'S GITHUB website at the URL https://github.com/google/mediapipe/blob/master/docs/solutions/face_detection.md. FACE MESH is based on a face detector technology referred to as BLAZEFACE, which is described in an article entitled “BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs” by V. Bazarevsky et al. The entirety of this paper is hereby incorporated by reference.
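As one possible realization of step 315, pupil-center extraction with MediaPipe Face Mesh can be sketched as below. This is a sketch under stated assumptions, not the disclosed implementation: with `refine_landmarks=True` the model emits iris landmarks, and indices 468 and 473 are commonly used as the right- and left-iris centers. Landmarks arrive in normalized [0, 1] coordinates and must be scaled to pixels.

```python
# Sketch of pupil-center extraction with MediaPipe Face Mesh. Landmark
# indices 468/473 (iris centers) and the overall flow are assumptions based
# on the toolkit's documented behavior, not part of the disclosure.

def to_pixels(norm_x, norm_y, img_w, img_h):
    """Convert a normalized Face Mesh landmark to pixel coordinates."""
    return norm_x * img_w, norm_y * img_h

def detect_pupil_centers(image_bgr):
    """Return ((right_x, right_y), (left_x, left_y)) in pixels, or None."""
    import mediapipe as mp  # deferred so the helper above stays importable
    h, w = image_bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         refine_landmarks=True) as mesh:
        result = mesh.process(image_bgr[:, :, ::-1])  # Face Mesh expects RGB
        if not result.multi_face_landmarks:
            return None
        lms = result.multi_face_landmarks[0].landmark
        return (to_pixels(lms[468].x, lms[468].y, w, h),
                to_pixels(lms[473].x, lms[473].y, w, h))
```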
In certain embodiments, a mask of the region of photo 8 that exactly covers the frame of measurement glasses 4 is extracted. While this step is not required, it is desirable to separate the glasses frame region from the face to make the subsequent lens contour detection, performed at step 325, easier and more robust.
Recent advances in AI-based semantic image segmentation, including the Segment Anything Model (SAM), for which information can be found at https://segment-anything.com/, enable automatic segmentation of an image by creating a series of image masks that classify pixels by their semantic labels (such as glasses). This material is hereby incorporated by reference in its entirety. Using such technology, it is possible to obtain a precise sub-image that includes the frame by extracting only the pixels that belong to the glasses mask.
In one embodiment, SAM is used to create a sub-image that defines a frame region mask for measurement glasses 4. It may be appreciated that when SAM is used, step 315, detection of facial landmarks, is not performed, since SAM uses machine learning to directly extract a frame region mask.
At step 325 image edge detection is used to detect the edges of each of the two lens contours within the frame region mask area of photo 8. As used herein, a lens contour is a vector of connected 2D points. Each point has XY coordinates giving its position in the image, and each contour forms a closed loop that traces the edge of the lens. In other embodiments, a contour may be defined differently, for example as a series of pixels or a series of Bezier curves.
One such method for detecting the contours of lenses is disclosed in an article entitled “Eyeglasses Lens Contour Extraction from Facial Images Using an Efficient Shape Description” by D. Borza et al., available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3859084/. This article is hereby incorporated by reference in its entirety.
Another method for performing lens contour detection is disclosed in an article entitled “Magic Glasses: From 2D to 3D” by X. Yuan et al., available at https://liuyebin.com/liuyebin_files/glasses.pdf. This article is hereby incorporated by reference in its entirety.
In certain embodiments, one or the other of the above-cited methods is used to detect the two lens contours of the frame included in photo 8. Both have been used in various embodiments and have yielded acceptable results.
For accurate measurements, the two lens contours detected in the preceding step should be symmetrical, and not have any perspective distortion. Since photo 8 is taken with an unknown head rotation, the frame surface may not be parallel to the imaging plane and may include perspective distortion. For this reason, step 330 rectifies or eliminates any perspective distortion.
One way to correct for the perspective distortion is to identify corner points in the detected contours, assuming they are planar, and use them to construct a homography matrix that transforms the contours to a rectified space so that the two contours are symmetrical. The homography matrix transforms the glasses based on the lens corner points to view where the corner points are symmetric with respect to a vertical axis (referred to as axis Y in
Since the lens contour corner detection may contain errors, in certain embodiments the above two processes, contour detection and rectification, are repeated iteratively until the error (the symmetry difference between the two sets of contour points) is within a target threshold.
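The homography construction used in the rectification step can be sketched with the standard direct linear transform (DLT), estimating a 3x3 matrix from four corner correspondences (detected corners mapped to their symmetric target positions). This is a minimal numpy sketch, not the disclosed implementation; a production system would typically use a library routine such as OpenCV's findHomography.

```python
# Sketch of homography estimation from point correspondences via the
# standard DLT, as one way to realize the rectification described above.
import numpy as np

def homography_from_points(src, dst):
    """Solve for H (3x3, up to scale) mapping src[i] -> dst[i], 4+ points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1]            # null-space vector = flattened H
    return (h / h[-1]).reshape(3, 3)

def apply_h(h_mat, pt):
    """Apply a homography to a single 2D point."""
    x, y, w = h_mat @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Example: map a skewed quadrilateral onto a symmetric rectangle.
src = [(0, 0), (10, 1), (10, 6), (0, 5)]
dst = [(0, 0), (10, 0), (10, 5), (0, 5)]
H = homography_from_points(src, dst)
```

Once H is found, applying `apply_h` to all landmark and contour points moves every coordinate into the rectified image space, as described in the following step.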
Once an acceptable transformation matrix is found, it is applied to all the facial landmarks, pupil landmarks, and lens contour points so that all the coordinates are in the rectified image space. As previously discussed, rectification step 330 is not performed in certain embodiments. However, in embodiments where step 330 is performed, subsequent use of the term “image space,” as well as references to distances, measurements and points in the image, refer to the rectified image. Examples of cases where rectification may not be required include when photo 8 is known to have been taken by a professional photographer or in an environment that reliably generates front-facing photos of wearer 2 with no head tilt or rotation.
Distance PD is related to near PD according to Equation 4, below:
In this embodiment, 13.5 mm is the assumed distance between the cornea front surface and the center of eye. In other embodiments, this distance may be measured or computed.
Working distance can be computed if the focal length of the camera is known. Otherwise, the working distance can be provided as input or is set to be an expected distance, such as 35 cm. It should be noted that the working distance is only used to convert near PD to distance PD and does not need to be very exact. An error of 5 cm in working distance estimate translates to around 0.5 mm conversion error. Working distance is computed according to Equation 5, below:
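Since Equations 4 and 5 are not reproduced in this text, the sketch below uses assumed forms that are consistent with the surrounding description: Equation 4 is modeled as the similar-triangles relation between the eye rotation centers (13.5 mm plus the vertex distance behind the spectacle plane) and the convergence point at the working distance, and Equation 5 as the pinhole-camera distance estimate from focal length and frame width. All names and the default vertex distance are illustrative assumptions.

```python
# Hedged sketch of the near-PD-to-distance-PD conversion and working-distance
# estimate. The formulas are assumptions consistent with the text, not the
# disclosed Equations 4 and 5.

EYE_ROTATION_OFFSET_MM = 13.5  # assumed cornea-to-rotation-center distance

def distance_pd(near_pd_mm, working_dist_mm, vertex_dist_mm=12.0):
    """Convert near PD (measured on the spectacle plane) to distance PD."""
    # The eyes rotate about centers (vertex distance + 13.5 mm) behind the
    # spectacle plane while converging on a point at the working distance.
    offset = vertex_dist_mm + EYE_ROTATION_OFFSET_MM
    return near_pd_mm * (working_dist_mm + offset) / working_dist_mm

def working_distance(focal_px, frame_width_mm, frame_width_px):
    """Pinhole estimate of the camera-to-spectacle-plane distance."""
    return focal_px * frame_width_mm / frame_width_px

# Example: near PD of 60 mm at a 350 mm working distance.
d_pd = distance_pd(60.0, 350.0)
```

Under these assumed forms, a 50 mm error in working distance shifts the converted distance PD by roughly 0.5 mm, consistent with the sensitivity noted above.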
MPD is computed from PD by using the ratio between the left and right pupils' horizontal distances to the center of the frame bridge, which is the midpoint of the two lens contour boxes.
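The MPD split described above can be sketched as follows; the function and variable names are illustrative. PD is divided between the two eyes in proportion to each pupil's horizontal pixel distance from the frame bridge center, i.e. the midpoint of the two lens bounding boxes.

```python
# Minimal sketch of splitting PD into right and left MPD using the ratio of
# each pupil's horizontal distance to the bridge center. Names are
# illustrative, not part of the disclosure.

def split_mpd(pd_mm, right_pupil_x, left_pupil_x, bridge_x):
    """Return (right MPD, left MPD) in millimeters."""
    right_px = abs(bridge_x - right_pupil_x)
    left_px = abs(left_pupil_x - bridge_x)
    total = right_px + left_px
    return (pd_mm * right_px / total, pd_mm * left_px / total)

# Example: pupils at x = 210 px and x = 330 px, bridge center at x = 272 px.
r_mpd, l_mpd = split_mpd(62.0, 210.0, 330.0, 272.0)
```

By construction, the two MPD values sum to the PD, matching the definition given earlier.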
The final output from DCS 1, referred to as centration measurements 12, typically includes distance PD, MPD, near PD, segment heights, working distance, and vertex distance. Typically, prescription glasses manufacturer 14 specifies which centration measurements it requires, and these may include all of the aforementioned measurements 12 or slightly different measurements without departing from the scope and spirit of the invention.
Upon reading this disclosure, those of skill in the art will appreciate that while particular embodiments and applications have been illustrated and described herein, the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
Number | Date | Country
---|---|---
63458225 | Apr 2023 | US