The present disclosure relates to a method for estimating the position of the optical center of an ophthalmic lens. The present disclosure also relates to a system for implementing such a method.
In order to be able to duplicate an ophthalmic lens, it is necessary to know the optical parameters defining the correction applied by the lens, and in particular the lens power.
Measuring the lens power of an ophthalmic lens is a complex operation usually performed by an eye care professional (also referred to as an ECP) thanks to a lensmeter.
Document U.S. Pat. No. 10,775,266 B2 discloses methods and systems for testing eyeglasses using a background object.
Document WO 2021/140204 A1 discloses a method making it possible to automatically retrieve the optical power of an ophthalmic lens, in particular using a smartphone equipped with a camera and displaying predetermined patterns and a mirror.
However, in order to perform an accurate automatic measurement of the optical power, any user must perform this measurement at the optical center of the lens.
It is a technical challenge to be able to guide the user to position the smartphone in front of the mirror such that the pattern displayed by the smartphone is at the right position with respect to the measured lens. The objective is to keep the measurement process for the user very simple and seamless.
In particular, there is a need for estimating the position of the optical center of the lens.
An object of the disclosure is to overcome the above-mentioned gaps of the prior art.
To that end, the disclosure provides a method for estimating a position of an optical center of an ophthalmic lens mounted on a frame, according to claim 1.
Thus, the only contribution of the consumer or low-skilled ECP is to use an image capture device, which is much easier to do than using a lensmeter.
In the method according to the disclosure, the steps of detecting the position of the lens, obtaining a plurality of dimensional parameters and determining the estimated position of the optical center of the lens may be done semi-automatically, so that the usability and the accuracy/reproducibility are much higher.
Thus, the method according to the disclosure may be used for example for manufacturing a duplicate of an ophthalmic lens in a very simple and convenient manner for the consumer, without the need to spend time in an optician's shop.
In an embodiment, the method comprises obtaining a plurality of dimensional parameters of the frame and further comprises obtaining an estimated pupillary distance, determining the estimated position of the optical center being based on the detected lens position, the plurality of frame dimensional parameters and the estimated pupillary distance.
In that embodiment, the estimated pupillary distance is either an exact value, measured on a wearer of the ophthalmic lens, or an approximate value, obtained from a statistical model.
In an embodiment, detecting the lens position comprises using a neural network.
In an embodiment, detecting the lens position comprises using an image processing algorithm.
In an embodiment, the plurality of lens dimensional parameters comprises at least one of a dimension corresponding to a lens width and a dimension corresponding to a lens height.
In that embodiment, the method further comprises extracting the lens width and the lens height from a bounding box that is the smallest rectangle containing the lens.
In that embodiment and where detecting the lens position comprises using a neural network, the neural network may provide the bounding box in real time.
In an embodiment, the plurality of frame dimensional parameters comprises at least a dimension corresponding to a frame bridge width.
In that embodiment, the method further comprises obtaining the frame bridge width as a statistical estimate based on a collection of frames.
In an embodiment, obtaining the plurality of dimensional parameters, either of the frame, or of the lens, comprises using a neural network, or an image processing algorithm, or a database using a reference code on an arm of the frame, or a statistical model.
To the same end as mentioned above, the present disclosure also provides a system for implementing a method for estimating a position of an optical center of an ophthalmic lens mounted on a frame, according to claim 12.
In an embodiment, the mobile device is a smartphone and the image capture device is a smartphone camera.
To the same end as mentioned above, the present disclosure further provides a computer program product according to claim 14.
To the same end as mentioned above, the present disclosure further provides a non-transitory information storage medium according to claim 15.
As the advantages of the system, of the computer program product and of the information storage medium are similar to those of the method, they are not repeated here.
The system, the computer program product and the information storage medium are advantageously configured for executing the method in any of its execution modes.
For a more complete understanding of the description provided herein and the advantages thereof, reference is now made to the brief descriptions below, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
In the description which follows, although making and using various embodiments are discussed in detail below, it should be appreciated that the present description provides many inventive concepts that may be embodied in a wide variety of contexts. Embodiments discussed herein are merely representative and do not limit the scope of the disclosure. It will also be obvious to one skilled in the art that all the technical features that are defined relative to a process can be transposed, individually or in combination, to a device and, conversely, all the technical features relative to a device can be transposed, individually or in combination, to a process; the technical features of the different embodiments may be exchanged or combined with the features of other embodiments.
The terms “comprise” (and any grammatical variation thereof, such as “comprises” and “comprising”), “have” (and any grammatical variation thereof, such as “has” and “having”), “contain” (and any grammatical variation thereof, such as “contains” and “containing”), and “include” (and any grammatical variation thereof such as “includes” and “including”) are open-ended linking verbs. They are used to specify the presence of stated features, integers, steps or components or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps or components or groups thereof. As a result, a method, or a step in a method, that “comprises”, “has”, “contains”, or “includes” one or more steps or elements possesses those one or more steps or elements, but is not limited to possessing only those one or more steps or elements.
An ophthalmic lens according to the present disclosure may be a spectacle lens, a contact lens, an instrument lens or any other kind of lens used in ophthalmology or optics. For instance, it may be a corrective lens having a power of the sphere, cylinder, axis, addition and/or prism type. The lens power can be defined as the inverse of the focal distance of the lens. The lens may be a single vision lens having a constant power, or it may be a progressive lens having variable power, or it may be a bi-focal or a tri-focal lens.
If the lens is a single vision lens, the power of the lens is the power at the optical center of the lens, i.e. the point of the lens where light is not deviated when it goes through the lens.
If the lens is a progressive lens, the focal distance varies all along the lens, including far vision, intermediate vision and near vision areas. The power of the progressive lens comprises the power at a far vision point, the power at a near vision point and the power distribution in the lens.
If the lens is a bi-focal (respectively tri-focal) lens, the focal distance varies between the two (respectively three) different areas of the lens. The power of the bi-focal or tri-focal lens comprises the power in each of those areas.
As shown in
By way of non-limiting example, the at least one optical parameter may be the optical center or any of the optical parameters defining the power of the lens, i.e. any of the optical parameters contained in the prescription of the lens, namely, sphere and/or cylinder and/or axis and/or addition and/or prism.
Using an image capture device and a source pattern comprising first and second patterns, the general principle of the method disclosed in document WO 2021/140204 A1 consists in getting to know the source pattern and the deformation of the source pattern seen through the lens and analyzing the data relating to the source pattern and its deformation to retrieve the at least one optical parameter of the lens.
The implementation of that principle involves at least one processor, which may be comprised in a fixed and/or mobile device, such as in a fixed or portable computer and/or in a smartphone and/or in the cloud.
The implementation also involves a fixed or mobile device equipped with the above-mentioned image capture device.
The mobile device may be a smartphone.
The image capture device may be comprised in the smartphone, i.e. the smartphone may be equipped with a camera. As a variant, the image capture device may be a separate device.
In addition to the at least one processor, the mobile device and the image capture device, various combinations or configurations of elements are possible, provided at least one of the elements is able to show or display or reflect patterns and at least one of the elements is able to capture images.
The mobile device may be combined either with a reflection device, such as a mirror, or with a non-portable or fixed or portable computer. It is to be noted that, in the present disclosure, the “portable computer” may be a laptop, or a tablet, or any other type of portable computer.
As to the source pattern, it may be a two-dimensional (2D) or three-dimensional (3D) source pattern, available as an object, or printed on a piece of paper, or displayed on a screen.
The source pattern may be a 3D gauge the dimensions of which are known, or a credit card, or a target printed on A4 paper, or a target the size of which is known in pixels, displayed on the screen of a computer, smartphone or tablet.
By way of non-limiting example, the image capture device may be a 2D camera or a 3D scanner, with or without other sensors embedded into it, such as a telemeter and/or a gyroscope: it may be a 2D camera of a smartphone or tablet, or a combination of a high definition 2D camera with a 3D sensor, such as for example one or more TOF (Time Of Flight) sensors (i.e. sensors emitting light towards the source pattern which then reflects it, the distance between the source pattern and the TOF sensor being deduced from the light travelling time) or structured-light sensors (i.e. sensors projecting fringes or other known patterns towards the source pattern the deformation of which is analyzed by the sensor). The image capture device may or may not comprise additional hardware, such as a holder.
The image capture device is located at a first position and captures an image of the first and second patterns.
Namely, the method disclosed in document WO 2021/140204 A1 is based on the use of the source pattern, a part of which is seen by the image capture device through the lens (the first part) and another part of which is seen directly by the image capture device (the second part).
During the following steps 12 and 14, from the image of the first and second patterns obtained at step 10, a first set of data is obtained from the first pattern seen through the lens by the image capture device (step 12) and a second set of data is obtained from the second pattern seen outside the lens by the image capture device (step 14). Step 12 may be carried out before or after or at the same time as step 14.
Then, during step 16, by using the first and second data and taking into account, in a manner detailed hereafter, relative positions, i.e. positions with respect to each other, of the image capture device, the lens and the first and second patterns, the at least one optical parameter is retrieved, in a manner also detailed below.
Relative positions of the image capture device, the lens and the first and second patterns, i.e. positions, with respect to each other, of the image capture device, the lens and the first and second patterns, may be obtained partially or totally by using the second set of data.
A rough estimate and a refined estimate of the at least one optical parameter may be obtained, namely:
As shown in
There are several ways of detecting the frame.
The detection of the frame may comprise obtaining a model of a frame from a database where a plurality of kinds of frames have been stored and defined.
As a variant, the process of detecting the frame may comprise obtaining information on the frame, and then finding the frame in the image of the first and second patterns.
For example, the process of detecting the frame may comprise obtaining information on the frame by using the image capture device 26 located at a second position for taking a picture of the frame. To that end, the frame may be put on a plane surface such as a table, with the at least one lens in contact with the plane surface. In that case, the detection of the frame will also comprise obtaining a model from the image of the frame taken as described with reference to
If the source pattern (i.e. the first and second patterns 20) is not known (because for example the source pattern is not displayed on the screen of the smartphone), a picture of the first and second patterns 20 in front of the reflection device 28 (shown in
A picture of the first and second patterns together with a reference object (e.g. a credit card) might be captured.
As a variant, a picture of the first and second patterns might be captured with a camera with known focal length and pixel size, together with camera-to-pattern distance information provided by a sensor (e.g. a telemeter).
As another variant, a picture of the first and second patterns might be captured with a 3D camera.
If the source pattern is already known in pixels and the resolution and dimensions of the smartphone 24 are known, the source pattern is already known in the coordinate system of the image capture device 26, so that it is not necessary to take a picture of the source pattern with the image capture device 26 located at the first position or at the third position.
The obtaining of (i) the first and second sets of data, (ii) relative positions of the image capture device, the lens and the first and second patterns and (iii) the rough and refined estimates of the at least one optical parameter, will be described below in detail in a particular embodiment of the method, in which, in step 10, (a) the obtaining the image of the first pattern comprises reflecting, by a reflection device, the first pattern before it is seen by the image capture device through the lens and (b) the obtaining the image of the second pattern comprises reflecting, by the reflection device, the second pattern before it is seen by the image capture device outside the lens. Namely, it may be assumed that the step of seeing by the image capture device is carried out after the step of reflecting by the reflection device.
As shown in
The patterns 20 can be seen by the image capture device 26 thanks to a reflection device 28 which, by way of non-limiting example, is a mirror. The lens 30 (or a frame, if any, on which the lens 30 is mounted) is located between the image capture device 26 and the reflection device 28 so that the front surface of the lens 30 is tangent to the reflection device 28 at a contacting point P. The first and second patterns 20 and the image capture device 26 are oriented towards the lens 30.
For stability, the mirror may be put on a wall, in a vertical position, or on a table, in a horizontal position.
The above-mentioned general principle may be implemented as follows:
The coordinate system of the source pattern is constrained to the coordinate system of the camera of the smartphone 24, as they are physically bound to one another. The coordinate system of the frame is partially known, because the frame is held in contact with the reflection device 28.
Obtaining the first set of data, the second set of data and retrieving the at least one optical parameter implies calculations, which may be done through an algorithm fully embedded in the smartphone or running in a remote computer, or via an application (API) available on the cloud, or with a combination of elements embedded in the smartphone and elements available in a remote computer and/or on the cloud. Such a combination makes it possible to optimize the volume of data transferred and the computation time needed.
The above-mentioned calculations are detailed below in the configuration of
The calculations may be done in two parts.
The first part of the calculations is based on the points of the source pattern outside the lens 30. It is related to the second set of data mentioned previously.
In brief, in the first part of the calculations, the points outside the lens are used for determining the relative positions (including orientation and location) between the mirror and the camera i.e. the positions of the mirror and camera with respect to each other, by using ray tracing and running an optimization algorithm.
The first part of the calculations is described below in more detail.
In the following, a coordinate system R is a system uniquely determining the position of points or other geometric elements in a Euclidean space, by assigning a set of coordinates to each point. The set of coordinates used below is (x, y, z), referring to three axes X, Y, Z that are orthogonal to each other. Qobject, R is an object point expressed in the coordinate system R.
As the front surface of the lens 30 is tangent to the mirror, it is assumed that the coordinate system Rlens of the lens 30 is the same as the coordinate system Rmirror of the mirror: Rlens=Rmirror. The coordinate system of the smartphone 24 (also referred to as the “device”) is denoted Rdevice and the coordinate system of the image capture device 26 (also referred to as the “camera”) is denoted Rcamera. Since the source pattern (displayed on the screen of the device) and the image capture device 26 are physically on the same device, which is the smartphone 24, the transformation between Rdevice and Rcamera is known.
The transformation between Rmirror and Rdevice is calculated. This fully determines the transformation between Rcamera and Rlens:
Kcamera->lens = Kmirror->lens ∘ Kdevice->mirror ∘ Kcamera->device,
where Kmirror->lens is the identity.
It is assumed that the physical dimensions of the screen 22 are known, so that the same notation is used for the object points which may be obtained in pixel units and for referring to the corresponding 3D points in Rdevice.
Images of the Qobject points are invariant with respect to rotations around the Z axis of the mirror and translations along the X and Y axes of the mirror. Therefore, only rotations around the X and Y axes of the mirror and translations along the Z axis of the mirror can be retrieved using the reflected points of the source pattern.
Let us define the following change of coordinate system from the device to the mirror, denoted Kdevice->mirror.
The function Reflect: (QRmirror, CRmirror)->ZRmirror, calculates the image of an object point QRmirror on the mirror, seen by the camera point CRmirror, after being reflected on the mirror. The result ZRmirror, as well as the object point QRmirror and the camera point CRmirror, are expressed in the coordinate system Rmirror of the mirror.
This is illustrated by
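By way of illustration, a minimal Python sketch of such a reflection function is given below. It assumes that, in Rmirror, the mirror is the plane z = 0 and that Reflect returns the point where the line of sight from the camera to the virtual (mirror-symmetric) image of Q crosses that plane; the function name and these conventions are assumptions of this sketch, not a definitive implementation.

```python
import numpy as np

def reflect(q_mirror: np.ndarray, c_mirror: np.ndarray) -> np.ndarray:
    """Illustrative Reflect(Q_Rmirror, C_Rmirror) -> Z_Rmirror.

    Assumes the mirror is the plane z = 0 in R_mirror. The virtual image
    of Q is its mirror-symmetric point Q' = (qx, qy, -qz); the returned
    point Z is where the line of sight C -> Q' crosses the mirror plane.
    """
    q_virtual = q_mirror * np.array([1.0, 1.0, -1.0])  # mirror symmetry about z = 0
    direction = q_virtual - c_mirror
    t = -c_mirror[2] / direction[2]                    # parameter at which z = 0
    return c_mirror + t * direction

# Example: object point 30 cm in front of the mirror, camera slightly off-axis.
z_point = reflect(np.array([0.02, 0.05, 0.30]), np.array([0.0, 0.0, 0.25]))
```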
Let us define ZRcamera (QRdevice,θx,θy,tz) as the image point of the object point Q expressed in the coordinate system Rdevice of the device, as seen by the camera after reflection on the mirror. The rotation and translation parameters show the dependence of the image on the orientation of the device with respect to the mirror. The result ZRcamera is expressed in the coordinate system Rcamera of the camera. The following formula makes explicit how the ZRcamera image point is calculated:
The 3D image point ZRcamera can then be projected onto the 2D camera plane, in pixel units, using an appropriate camera model. A well-known pinhole camera model that factors in radial and tangential distortions may be used. Projection of a 3D image point onto the camera plane may use some camera parameters such as intrinsic parameters fx, fy, cx, cy and distortion parameters (see below). Such parameters may be obtained in many different manners, such as camera calibration, as detailed below.
The camera model is defined as follows for an object point (x,y,z) expressed in the coordinate system Rcamera of the camera:
This model requires that the camera be accurately calibrated, so that fx, fy, cx, cy, k1, k2 and k3 are known precisely.
However, a camera model different from the one described above (either more complex, or simpler) may be used, depending on the degree of precision necessary for a particular use of the method.
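By way of illustration, a minimal Python sketch of such a projection is given below; it implements a standard pinhole model with radial distortion terms k1, k2, k3 only (tangential terms omitted for brevity) and is not necessarily the exact model used.

```python
import numpy as np

def project_to_pixels(point_camera, fx, fy, cx, cy, k1=0.0, k2=0.0, k3=0.0):
    """Project a 3D point (x, y, z), expressed in R_camera, onto the 2D
    camera plane in pixel units, using a pinhole model with radial
    distortion (tangential distortion omitted in this sketch)."""
    x, y, z = point_camera
    xn, yn = x / z, y / z                      # normalized image coordinates
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    u = fx * xn * radial + cx
    v = fy * yn * radial + cy
    return np.array([u, v])
```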
The orientation of the device with respect to the mirror can then be estimated by minimizing the following cost function Jorientation:
where m is an integer greater than or equal to 1.
The above equation can be solved by using any non-linear least-squares algorithm, such as the Gauss-Newton or the Levenberg-Marquardt algorithm.
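A possible implementation sketch of this minimization, using SciPy's Levenberg-Marquardt solver, is shown below. The helper predict_pixel (composing the reflection and the camera projection defined above) and the data arrangement are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, q_points_device, q_pixels_observed, camera, predict_pixel):
    """Stacked 2D reprojection errors over the m points seen outside the lens."""
    theta_x, theta_y, t_z = params
    errors = []
    for q_device, pix_obs in zip(q_points_device, q_pixels_observed):
        pix_pred = predict_pixel(q_device, theta_x, theta_y, t_z, camera)
        errors.extend(pix_pred - pix_obs)
    return np.asarray(errors)

# Minimizing J_orientation with a non-linear least-squares algorithm
# (method='lm' selects Levenberg-Marquardt):
# result = least_squares(residuals, x0=np.array([0.0, 0.0, 0.3]), method='lm',
#                        args=(q_points_device, q_pixels_observed, camera, predict_pixel))
# theta_x_opt, theta_y_opt, t_z_opt = result.x
```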
In brief, ray-tracing and optimization make it possible to know the relative positions of the mirror, frame and lens in the camera coordinate system, that is to say the positions of the mirror, frame and lens with respect to each other in the camera coordinate system.
The second part of the calculations is based on the points of the source pattern seen by the image capture device 26 through the lens 30.
In brief, in the second part of the calculations, the points inside the lens and the relative positions (including orientation and distance) between mirror and camera which were determined in the first part of the calculations are used, by running an optimization algorithm and using ray tracing, for:
The second part of the calculations is described below in more detail.
In order to estimate the power of the lens 30, the observed magnification may be used for calculating the linear magnification, as follows:
where tz* is the distance from the device to the mirror (which is approximately the same as the distance from the device to the lens 30), obtained previously.
In order to obtain the linear magnification, the area magnification is first calculated, by forming the convex hull of the refracted and reflected object points, denoted respectively
The linear magnification is taken as the square root of the area magnification, which then gives an estimate of the power of the lens 30.
This is a rough estimate, because it does not account for astigmatism and the paraxial approximation is implied in the lens power formula, which means that the rays are assumed to make a small angle with respect to the optical axis and enter the lens close to the optical center.
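A minimal Python sketch of this area-to-linear magnification computation is given below; the final rough power formula anticipates the thin-lens relation P = (M−1)/(M·d) derived further down, and the variable names are illustrative only.

```python
import numpy as np
from scipy.spatial import ConvexHull

def rough_power_estimate(p_image_px, p_object_px, distance_m):
    """Rough lens power from the area magnification of the pattern seen
    through the lens.

    p_image_px  : Nx2 refracted (observed) points, in pixels
    p_object_px : Nx2 corresponding object points, reprojected without the lens
    distance_m  : device-to-mirror distance tz* (approx. device-to-lens), in meters
    """
    # For 2D point sets, ConvexHull.volume is the enclosed area.
    area_magnification = ConvexHull(p_image_px).volume / ConvexHull(p_object_px).volume
    linear_magnification = np.sqrt(area_magnification)
    # Thin-lens relation (see the derivation below): P = (M - 1) / (M * d)
    return (linear_magnification - 1.0) / (linear_magnification * distance_m)
```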
In the following steps, an estimated physical lens is calculated, which will serve as a starting point for the optimization algorithm.
Using the estimated power, the most likely lens material is selected. A non-limiting example of classification regarding the refractive index (hereafter “Index”) of the material is given below:
Using the estimated power and the lens material, a spherical front surface Sfront is chosen, based on a compromise between esthetics and optical performance. This process is called base curve selection and is specific to each lens manufacturer. However, it is considered that the selected front surface will not vary greatly between manufacturers for a given prescription and a given material.
In that step, a center thickness for the lens is selected as well and is denoted e.
Then, a rear spherical surface is calculated in order to match the estimated power. This may be done by using a thin lens model:
where Index is the refractive index and FrontRadius and RearRadius are respectively the radii of the spherical front and rear surfaces.
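A Python sketch of this rear-radius calculation under the thin lens (lensmaker's) approximation is given below; it assumes the usual sign conventions, with radii in meters and power in diopters, and is only one possible reading of the thin lens model.

```python
def rear_radius_thin_lens(power_dpt, index, front_radius_m):
    """Rear surface radius matching the estimated power with a thin lens
    model: P = (Index - 1) * (1 / FrontRadius - 1 / RearRadius).
    Radii in meters, power in diopters; usual sign conventions assumed."""
    inv_rear = 1.0 / front_radius_m - power_dpt / (index - 1.0)
    return 1.0 / inv_rear

# Example: a +2.00 D lens in index 1.50 with a 6 D base curve
# (front radius = (1.50 - 1) / 6 m ≈ 0.0833 m) gives a rear radius of 0.125 m.
```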
A ray-tracing based least-squares optimization is performed in order to find a physical lens (hereafter “OptimalLens”) that yields the same image Pimage of the Pobject set of points as the one observed, i.e. Image (OptimalLens, Pobject)=Pimage.
As mentioned previously, the lens front surface is assumed to be tangent to the mirror at the contact point.
For an object point Pobject, Rmirror, let us define the function Propagate (Pobject, Rmirror, CRmirror,Lens)->WRmirror, which calculates the image of the object point Pobject, Rmirror seen by the camera point CRmirror, after being refracted by the lens 30 and reflected on the mirror, then refracted by the lens 30 once more, as shown in
At this point, the orientation and distance of the device (i.e. the smartphone 24) with respect to the mirror are known, thanks to the first part of the calculation described previously.
Using the same notation as before for the transformation from the coordinate system Rdevice of the device to the coordinate system Rmirror of the mirror, we have Kdevice->mirror (tx,ty)=Kdevice->mirror (θx*,θy*,tz*)+Tx(tx)+Ty(ty).
The previously estimated physical lens is used as a starting lens, which can then be optimized.
Namely, a toroidal rear surface will replace the spherical rear surface of the previously estimated physical lens, with both torus radii equal to the sphere radius at the beginning of the optimization process. The torus radii of the lens rear surface are denoted r1 and r2 and the torus axis of the lens rear surface is denoted a.
Given an object point Pobject, Rdevice expressed in the coordinate system Rdevice of the device, WRcamera is its image point seen through the lens 30, expressed in the coordinate system Rcamera of the camera:
WRcamera (Pobject,Rdevice, tx, ty, r1, r2, a) = (Kcamera->device ∘ Kdevice->mirror)⁻¹ ∘ Propagate (Kdevice->mirror Pobject,Rdevice, Kdevice->mirror CRdevice, lens)
The translation parameters tx and ty, which were left undetermined in the previous steps, intervene here in the device to mirror coordinate system transformation. The radii and axis parameters intervene in the lens definition.
Last, the cost function defining the least-squares problem, which has to be minimized in order to reconstruct a lens that yields the same image as the one observed, is defined as follows:
where n is an integer greater than or equal to 1.
The above optimization procedure may be applied to other configurations, as long as the changes of coordinate system between the camera, the source pattern and the lens are known. Of course, if no reflection device is involved and rays coming from the source pattern directly reach the camera, the propagation function would need to be adapted, by removing the reflection.
The optimal lens is calculated using the back vertex power formula in both torus meridians Srear,1 and Srear,2 of the lens rear surface:
Both meridians powers P1 and P2 are then obtained as follows:
where e is the center thickness.
It is assumed that P1<P2 (if this is not the case, those values are switched).
The cylinder value is P2-P1 in the positive cylinder convention.
The prescription axis is the torus axis a*, corrected using the rotation parameter θz, which is the angle between the X axis in the coordinate system Rcamera of the camera and the X axis in the coordinate system Rmirror of the mirror and which is obtained using the frame detected in the picture.
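An illustrative Python sketch of this last step is given below; it uses the standard back vertex power formula for each torus meridian and the positive-cylinder convention, with the front surface power, rear radii, center thickness and index taken from the optimized lens. Names and sign conventions are assumptions of this sketch.

```python
def back_vertex_powers(index, front_radius_m, r1_m, r2_m, e_m):
    """Back vertex power in both torus meridians of the rear surface.

    S_front = (n - 1) / FrontRadius, S_rear,i = (1 - n) / r_i,
    P_i = S_front / (1 - (e / n) * S_front) + S_rear,i
    """
    s_front = (index - 1.0) / front_radius_m
    s_front_eff = s_front / (1.0 - (e_m / index) * s_front)  # thickness-corrected front power
    p1 = s_front_eff + (1.0 - index) / r1_m
    p2 = s_front_eff + (1.0 - index) / r2_m
    if p1 > p2:                       # ensure P1 <= P2, as assumed in the text
        p1, p2 = p2, p1
    sphere = p1
    cylinder = p2 - p1                # positive cylinder convention
    return sphere, cylinder

# The prescription axis is the optimized torus axis a*, corrected by the
# rotation theta_z between the camera and mirror X axes (obtained from the
# detected frame).
```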
In brief, the method disclosed in document WO 2021/140204 A1:
At the end of such calculations, the power of the lens in the coordinate system Rframe of the frame is obtained.
It should be noted that it is not necessary to detect the frame for separating points that are within the lens and points that are outside the lens. Nevertheless, as frame detecting may be used for determining the cylinder axis of the lens if the lens is a single vision lens, frame detecting may also be used in that particular embodiment for securing separation of points within and outside the lens.
As described above, the method disclosed in document WO 2021/140204 A1 is based on the use of a source pattern, a part of which is seen directly by the image capture device and another part of which is seen by the image capture device through the lens. Identification of the source pattern is detailed below.
A feature matching algorithm may be run in order to group object points on a known source pattern with the corresponding image points on the picture taken by the image capture device. The frame contour may be used as a mask to separate points seen through the lens (Pobject, Pimage) from those that are only seen on the mirror (Qobject, Qimage).
Therefore, two sets of matching points are obtained: the pairs (Pobject,i, Pimage,i), 1≤i≤n, corresponding to points seen through the lens, and the pairs (Qobject,j, Qimage,j), 1≤j≤m, corresponding to points seen outside the lens, where n and m are integers greater than or equal to 1.
As an example, the first and second patterns 20 may comprise concentric black rings and white rings, as shown in
Such a source pattern may be used for both the lower and the upper parts of the screen 22, i.e. both for the first pattern and the second pattern. As another example, it may be used only for the upper part of the screen 22 and a QR-code may be used for the lower part of the screen 22.
Using an image processing algorithm, the four circles of that source pattern (two black circles and two white circles) may be extracted and each circle may be discretized into a predefined number of points.
On the picture taken by the image capture device 26, representing the deformed pattern, an ellipse is fitted to each circle.
In order to find the ellipses, a Region Of Interest (ROI) may be set on the picture to restrict the search area. The picture may be converted from RGB to grayscale (as shown in
Then, the projection of each discretized point is computed from the object circles with ray tracing through the simulated lens on the 2D plane of the ellipses, so as to obtain so-called projected points. An algorithm is then used in order to find the position of the closest point, on the corresponding ellipse, of each projected point (as shown in
Advantageously, only the two largest ellipses (shown on the left of
The gap in each dimension (x and y) between each point and its closest point on the ellipse is used for the optimization. The gap in each dimension between the center of each ellipse and the centroid of the corresponding projected points is also used, for example with a weight equal to the number of projected points. This makes it possible for the centroid of the projected points to converge at the same position as the center of the ellipses, as shown in
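A possible OpenCV-based Python sketch of the ellipse extraction and closest-point search described above is shown below; the thresholding method, the contour selection and the sampling density are illustrative choices, not part of the disclosed method.

```python
import cv2
import numpy as np

def fit_ellipses(picture_bgr, roi):
    """Fit ellipses to the ring contours found inside a Region Of Interest."""
    x, y, w, h = roi
    gray = cv2.cvtColor(picture_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    # cv2.fitEllipse needs at least 5 points per contour
    return [cv2.fitEllipse(c) for c in contours if len(c) >= 5]

def closest_point_on_ellipse(ellipse, point, n_samples=720):
    """Closest point, on a densely sampled ellipse, to a given projected point."""
    (cx, cy), (width, height), angle_deg = ellipse
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    a, b, phi = width / 2.0, height / 2.0, np.deg2rad(angle_deg)
    xs = cx + a * np.cos(t) * np.cos(phi) - b * np.sin(t) * np.sin(phi)
    ys = cy + a * np.cos(t) * np.sin(phi) + b * np.sin(t) * np.cos(phi)
    samples = np.column_stack([xs, ys])
    return samples[np.argmin(np.linalg.norm(samples - np.asarray(point), axis=1))]
```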
In any of the above-mentioned pattern configurations:
As shown in
In the “frame-learning” stage, to make it easier for the user to obtain an image of the frame, the image capturing may be guided e.g. by displaying centering and/or aligning marks on the screen 22 of the smartphone 24, so that the positioning of the frame on a table, preferably with a homogeneous background, is optimal (e.g. frame visible over the full width of the picture, centered and aligned with the horizontal axis). The above-mentioned marks may be red at the beginning and become green when predefined conditions are met.
If the smartphone 24 is equipped as usual with a gyroscope, such gyroscope may be used for alerting the user if the smartphone 24 is not positioned correctly (e.g. horizontal/parallel to the table).
The color of the background may then be detected by means of a histogram analysis of the colors. The background may then be extracted by a flood fill (also known as seed fill) algorithm. Then, the image of the frame may be binarized and morphology operators which are known per se may be applied to extract the part of the image corresponding to the frame and its mask, i.e. its contour. Any residual rotation of the image may be corrected by techniques known per se.
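A minimal OpenCV sketch of this background extraction and frame mask computation is given below; the histogram-based background color detection is replaced here by a simple corner seed, and the flood-fill tolerance and morphology kernel size are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_frame_mask(image_bgr, seed=(5, 5), tolerance=12, kernel_size=5):
    """Extract the frame mask from a picture of the frame on a homogeneous
    background: flood-fill the background from a corner seed, invert,
    then clean up with morphological opening and closing."""
    h, w = image_bgr.shape[:2]
    flood_mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill needs a 2-px border
    flooded = image_bgr.copy()
    cv2.floodFill(flooded, flood_mask, seed, (255, 255, 255),
                  (tolerance,) * 3, (tolerance,) * 3)
    background = flood_mask[1:-1, 1:-1] * 255         # 255 where the background was filled
    frame_mask = cv2.bitwise_not(background)          # frame = everything that is not background
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_OPEN, kernel)
    frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_CLOSE, kernel)
    return frame_mask
```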
Using the information on the frame (also referred to as a “model” of the frame) obtained in the “frame learning” stage, the spectacle frame contour can be detected in the picture taken with the help of the reflection device 28 during the frame detection stage: the scaling factor to apply to the model may be calculated based on the distance from the smartphone 24 to the frame. The lens to search (left or right) is known. The useful part of the frame may be extracted and the photo may be enlarged on the edges. A certain number of orientations of the model of the frame may be tested in order to find the best position of the frame by studying, for each angle, the correlation between the frame visible in the image of first and second patterns 20 and the model obtained during the “frame learning” stage. If the frame is detected, the best position (location and orientation) of the frame in the image is selected. The technique described above is also valid when the frame is partially visible in the image of the first and second patterns 20.
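The orientation search described above may be sketched as follows in Python, using normalized cross-correlation between the scaled frame model and the captured picture for a set of candidate angles; the correlation measure, angle range and scaling handling are illustrative choices.

```python
import cv2

def find_frame(picture_gray, model_mask, scale, angles_deg=range(-10, 11)):
    """Try several orientations of the scaled frame model and keep the one
    giving the best correlation with the picture."""
    template = cv2.resize(model_mask, None, fx=scale, fy=scale)
    h, w = template.shape[:2]
    best = (-1.0, None, None)                          # (score, top-left corner, angle)
    for angle in angles_deg:
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(template, rot, (w, h))
        result = cv2.matchTemplate(picture_gray, rotated, cv2.TM_CCOEFF_NORMED)
        _, score, _, location = cv2.minMaxLoc(result)
        if score > best[0]:
            best = (score, location, angle)
    return best
```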
The schematic view of
As will be seen below from the detailed description of those various embodiments, the computer 180 may be used for displaying the first and second patterns 20, while the smartphone 24 may be used for capturing images. In other words, in such a configuration, two electronic devices are involved.
On the other hand, in previously described examples of the method disclosed in document WO 2021/140204 A1 not involving any computer, but only the smartphone 24 and the reflection device 28, the smartphone 24 may be used both for displaying the first and second patterns 20 and for capturing images. In other words, in such a configuration, only one electronic device is involved. In addition, in such a configuration, the position of the frame 140 may be at least partially known by simply positioning the frame 140 in contact with the reflection device 28. Thus, the obtaining of relative positions of the reflection device 28, the frame 140 if any, and the image capture device 26 (i.e. positions of the reflection device 28, the frame 140 and the image capture device 26 with respect to each other) becomes simplified.
Therefore, the configuration combining the use of the smartphone 24 and the reflection device 28 is a simplified configuration in comparison with the configuration combining the use of the smartphone 24 and the computer 180.
According to one of those examples, the implementation of the method disclosed in document WO 2021/140204 A1 comprises:
For facilitating use of the system disclosed in document WO 2021/140204 A1 by any user not wearing his/her eyeglasses, automatic assistance to the user, also referred to as “user guidance” in the present disclosure, may be provided, as described previously in relationship with the “frame learning” stage.
Positioning the smartphone 24 may comprise following steps:
In addition, eyeglasses positioning may comprise following steps:
Smartphone positioning and eyeglasses positioning steps are detailed below.
For smartphone positioning, the user opens and runs an application available in the smartphone 24 while holding the smartphone 24 in front of a mirror with the smartphone screen facing the mirror.
Advantageously, the smartphone 24 has a predetermined tilt angle with respect to the mirror. This simplifies the user experience, as described below.
The smartphone gravitometer, which computes the earth's gravitational attraction on the three axes X, Y, Z, respectively pitch, roll and yaw axes of the smartphone 24 as shown in
For the smartphone 24, having a tilt angle amounts to having a predetermined part of the earth's gravitational attraction on the yaw axis Z and the rest of the attraction on the roll axis. Thus, the smartphone 24 is oriented with respect to the mirror in such a manner that the upper part of the smartphone 24 is closer to the mirror than the lower part of the smartphone 24. In other words, the smartphone is tilted forward.
In this embodiment, as shown in
By way of non-limiting example, the first fixed object 220 may be a colored geometric shape, e.g. a blue rectangle.
In addition, a first moving object 222 of a second predetermined color different from the first color and having a size smaller than or equal to the size of the first fixed object 220, is also displayed on the smartphone screen.
By way of non-limiting example, the first moving object 222 may be a geometric shape identical to the shape of the first fixed object 220, e.g. the first moving object 222 may be a white rectangle.
The first moving object 222 is moving according to the tilting of the smartphone 24. Having the first moving object 222 displayed inside the first fixed object 220, as shown in
As a variant, both first objects 220 and 222 could be moving with respect to each other, although this may be less ergonomic for the user.
For distance positioning, i.e. for ensuring that the smartphone 24 is at the right distance from the mirror, one or several distance positioning patterns 230 may be displayed at the bottom of the smartphone screen. If several distance positioning patterns 230 are displayed, e.g. two distance positioning patterns 230, they may be identical to each other.
As the position of the distance positioning patterns 230 on the smartphone screen may be fixed independently of the smartphone type, that position is known. This makes it possible to calculate the distance between two adjacent distance positioning patterns 230 at various distances from the mirror, so that the distance between the smartphone 24 and the mirror is known and it is known when the smartphone 24 is at an appropriate distance from the mirror, e.g. 30 cm, for performing the subsequent steps of the method.
A message, such as a mirrored character or character string, may be displayed on the smartphone screen in case the smartphone is too close to the mirror, in order to invite the user to move the smartphone backward.
Similarly, a message such as a mirrored character or character string may be displayed on the smartphone screen in case the smartphone is too far from the mirror, in order to invite the user to move the smartphone forward.
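Under a simple pinhole assumption, the smartphone-to-mirror distance can be estimated from the apparent spacing of the distance positioning patterns, as sketched below in Python. The factor of two (reflected screen seen at roughly twice the phone-to-mirror distance, screen and camera assumed approximately coplanar) and the numerical values are assumptions of this sketch, not the disclosed calibration.

```python
def phone_to_mirror_distance(real_spacing_mm, pixel_spacing_px, focal_length_px):
    """Estimate the smartphone-to-mirror distance from the apparent spacing
    of two adjacent distance positioning patterns seen in the mirror.

    Assumes a pinhole camera and that the screen and camera are roughly
    coplanar, so the reflected screen is seen at about twice the
    phone-to-mirror distance: pixel_spacing = f * real_spacing / (2 * d).
    """
    return focal_length_px * real_spacing_mm / (2.0 * pixel_spacing_px)

# Example: 40 mm between patterns observed 180 px apart with f = 2800 px
# gives d ≈ 311 mm, close to the recommended 30 cm.
```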
Adapting the brightness of the smartphone screen may be useful, because the light environment may vary. To that end, the following steps may be implemented.
During a loop, the smartphone screen brightness is first increased by small steps until it reaches a maximum predetermined value and is then decreased similarly by small steps until it reaches a minimum predetermined value.
As soon as all the distance positioning patterns 230 are detected in one picture during the loop, the loop stops and the smartphone screen brightness is adjusted by small steps until the mean color of the matched template 240 is in the range [120; 140] in the grayscale color space [0; 255].
Thus, the user will see the smartphone screen alternate between bright and dark during the loop and, as soon as all the distance positioning patterns 230 are detected, a predetermined sign such as a “stop” sign will be displayed and/or voice guidance will invite the user to stop moving the smartphone.
At such time, the smartphone 24 is tilted optimally and positioned at the right distance and optimal brightness has been reached.
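The brightness search described above may be sketched as the following control loop; the step size, the bounds and the hypothetical set_brightness, capture, detect_patterns and mean_gray helpers are illustrative assumptions.

```python
def adjust_brightness(set_brightness, capture, detect_patterns, mean_gray,
                      step=0.05, lo=0.2, hi=1.0):
    """Sweep the screen brightness up then down in small steps until all
    distance positioning patterns are detected, then fine-tune it until the
    mean gray level of the matched template falls in [120, 140]."""
    sweep = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    for brightness in sweep + sweep[::-1]:            # up, then back down
        set_brightness(brightness)
        picture = capture()
        if detect_patterns(picture):                  # all patterns found: stop the sweep
            for _ in range(20):                       # bounded fine-tuning
                g = mean_gray(picture)
                if 120 <= g <= 140:
                    break
                brightness += step if g < 120 else -step
                set_brightness(brightness)
                picture = capture()
            return brightness
    return None                                       # patterns never detected
```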
For detecting the eyeglasses frame 140 in the camera stream, an object detection and recognition model based on a neural network may be used in order to detect the lens 30 in the camera stream. By way of non-limiting example, a neural network of the Yolo v3-Tiny type may be used. The model is trained using a predetermined number of pictures of a frame against a mirror and pictures of a frame on a person's face.
The neural network returns the position and size, called Region Of Interest (ROI), of all the lenses it detects in the camera stream. For example, it may return at least two lenses, which correspond to the left and right lenses in the eyeglasses frame 140.
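Once the detector has returned its regions of interest, the lenses can be told apart simply by their horizontal position in the picture, as sketched below; the (x, y, w, h, confidence) box format is an assumption of this sketch, not the actual detector output.

```python
def pick_lens_rois(detections, keep=2):
    """Keep the most confident lens ROIs returned by the detector and order
    them by horizontal position in the picture (left-most box first).
    The (x, y, w, h, confidence) box format is assumed for illustration."""
    best = sorted(detections, key=lambda b: b[4], reverse=True)[:keep]
    return sorted(best, key=lambda b: b[0] + b[2] / 2.0)
```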
As shown in
The second fixed object 260 may for example be a colored geometric shape, e.g. a red circle.
In addition, a second moving object 262 of a second predetermined color different from the first color and having a shape and size equal to the shape and size of the second fixed object 260, is also displayed on the smartphone screen. It represents the lens 30 of which at least one optical parameter is to be retrieved.
If the second fixed object 260 is a red circle, the second moving object 262 may be for example a green circle.
The user is invited to move the smartphone 24 so that the second moving object 262 matches the second fixed object 260, which means that the frame 140 and the smartphone 24 are correctly positioned.
As a variant, both second objects 260 and 262 could be moving with respect to each other, although this may be less ergonomic for the user.
When the second moving object 262 matches the second fixed object 260, a predetermined object or message is displayed on the smartphone screen, so that the user knows that the frame 140 and smartphone 24 should not be moved. By way of non-limiting example, as a predetermined object, a white circle on a green background may be displayed.
At that time, pictures are automatically taken by the camera of the smartphone 24 for processing according to the method disclosed in document WO 2021/140204 A1 in order to retrieve the at least one optical parameter of the lens 30.
A variant of the step of obtaining of a rough estimate of the at least one parameter is described below, in an example where the at least one parameter is the lens power.
In that variant, the three following steps, detailed hereafter, are carried out:
It is noted that, in this variant, the relative positions of the camera, the lens and the pattern 20 are not used in this step, as detailed below.
By using the part of the pattern 20 that is seen by the camera only i.e. outside the lens 30 (for example the QR code of
Then, by using the part of the pattern 20 that is seen by the camera through the lens 30 (for example the circular target of
Then, the horizontal, vertical and diagonal magnifications for the lens 30, respectively denoted Mh, Mv and Md, are extracted as follows:
Mh=MLCh/MCh
Mv=MLCv/MCv
Md=MLCd/MCd
Step C: estimation of the lens power based on the estimation obtained at step A and the estimations obtained at step B
By using the estimated distance d obtained at step A and the magnifications Mh, Mv and Md obtained at step B, the lens power in the horizontal direction, denoted Power_h, the lens power in the vertical direction, denoted Power_v and the lens power in the diagonal direction, denoted Power_d are determined as follows:
Power_h=(Mh−1)/(Mh×d)
Power_v=(Mv−1)/(Mv×d)
Power_d=(Md−1)/(Md×d)
In more detail, the above formulas are obtained as follows, referring to
Magnifications M are defined as A′B′/AB=OA′/OA.
The power P is defined as (1/OA′)−(1/OA).
Thus, OA′=(1/(P+(1/OA))), which gives OA′/OA=(1/(P·OA+1)).
As OA=−d, M=1/(1−d·P).
As a result, P=(M−1)/(M·d).
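This rough estimation may be sketched as follows in Python, directly applying P = (M − 1)/(M·d) in each direction; the variable names are illustrative.

```python
def directional_powers(m_h, m_v, m_d, distance_m):
    """Rough lens powers in the horizontal, vertical and diagonal directions
    from the corresponding magnifications and the estimated camera-to-lens
    distance d (in meters): P = (M - 1) / (M * d)."""
    def power(m):
        return (m - 1.0) / (m * distance_m)
    return power(m_h), power(m_v), power(m_d)

# Example: Mh = 1.08 at d = 0.30 m gives Power_h ≈ +0.25 D.
```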
In the context of the above-described method disclosed in document WO 2021/140204 A1, it is necessary to guide the user to position the smartphone 24 such that the displayed pattern is at the right position with respect to the lens 30 of which it is desired to retrieve at least one optical parameter such as the lens power. This is the purpose of the method according to the present disclosure for estimating the position of the optical center of the lens 30 mounted on the frame 140. This method is described in detail below.
As shown in
In a particular embodiment, a neural network is used for such detection. By way of non-limiting example, a finely tuned neural network of the Yolo v3-Tiny type as mentioned above may be used.
During step 280, the position of the frame may also be detected in the camera stream.
In a particular embodiment, the neural network is capable of providing in real time a bounding box that is the smallest rectangle containing the image of the lens 30.
In an embodiment where the position of the frame is detected, the method according to the present disclosure is performed using the referential of the frame, taking the center of the referential at the center of the bounding box, assuming it is the center of the bridge.
Instead of using a neural network, an image processing algorithm may be used for detecting the lens position in the camera stream.
Following step 280, at a step 282, a plurality of dimensional parameters either of the frame, or of the lens, is obtained.
Step 282 may be performed by using a neural network, which may be the same as the one used for detecting the position of the lens 30 at step 280. When obtaining dimensional parameters of the frame 140, the neural network may detect the image of the frame in the camera stream and may provide a bounding box around the frame bridge.
As a variant, step 282 may be performed by using an image processing algorithm.
As another variant, step 282 may be performed by using a database using a reference code displayed on one of the arms of the frame 140.
As still another variant, step 282 may be performed by using a statistical model that provides average values, which may depend on the frame type that may be detected by a neural network.
The plurality of lens dimensional parameters may comprise a dimension corresponding to the lens width A and/or a dimension corresponding to the lens height B.
By way of non-limiting example, the lens width A and the lens height B may be extracted from the bounding box.
The plurality of frame dimensional parameters may comprise at least a dimension corresponding to the frame bridge width D. In addition, the plurality of frame dimensional parameters may also comprise the A size and/or the B size.
By way of non-limiting example, the frame bridge width D may be obtained as a statistical estimate based on a collection of frames.
Last, at a step 284, the estimated position of the optical center is determined based on the detected lens position and either the plurality of frame dimensional parameters, or the plurality of lens dimensional parameters, by estimating a relationship between the obtained dimensional parameters. By way of non-limiting example, in the coordinate system of the lens, a first order estimation of the optical center position may be A/2.
In a particular embodiment, for more accuracy of the estimation of the optical center position, the method according to the present disclosure comprises obtaining a plurality of dimensional parameters of the frame 140 and further comprises obtaining an estimated pupillary distance PD. In that embodiment, step 284 is based on the detected lens position, the plurality of frame dimensional parameters and the estimated pupillary distance PD. By way of non-limiting example, the estimated position of the optical center may be given by the position of the eye inside the lens (corresponding to the pupillary distance PD) in the referential of the bounding box. Thus, for the full frame, the estimation of the optical center position may be A×2+D−PD, centered on the bridge, and for one lens, it may be A+D/2−PD/2.
The estimated pupillary distance PD may be an exact value, measured on a wearer of the lens 30 either manually or by using a mobile application (API). As a variant, it may be an approximate value, obtained from a statistical model, in which case the value may depend on ethnicity.
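A Python sketch of this estimation, in the horizontal direction and in the referential of the detected bounding box, is given below; it implements the first-order A/2 estimate and the PD-based refinement A+D/2−PD/2, the latter being read here as an offset from the temporal edge of the lens box, which is an assumption of this sketch.

```python
def optical_center_x(lens_box_px, a_mm, d_mm=None, pd_mm=None):
    """Estimated horizontal position of the optical center inside the lens
    bounding box (x, y, w, h), in pixels.

    First-order estimate: the center of the lens, i.e. A/2.
    With the frame bridge width D and the pupillary distance PD available,
    the refinement A + D/2 - PD/2, interpreted here as the offset from the
    box edge at x (assumed to be the temporal side), is used instead.
    """
    x, _, w, _ = lens_box_px
    px_per_mm = w / a_mm                                 # scale given by the A size
    if d_mm is None or pd_mm is None:
        offset_mm = a_mm / 2.0                           # first-order estimate A/2
    else:
        offset_mm = a_mm + d_mm / 2.0 - pd_mm / 2.0      # PD-based refinement
    return x + offset_mm * px_per_mm

# Example: A = 50 mm, D = 18 mm, PD = 64 mm gives an offset of 27 mm,
# versus 25 mm for the first-order estimate.
```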
A system according to the disclosure, for implementing a method comprising the above-mentioned steps 280, 282 and 284 for estimating the position of the optical center of an ophthalmic lens mounted on a frame, comprises at least one processor and a mobile device equipped with the above-mentioned image capture device.
In a particular embodiment, the mobile device is a smartphone and the image capture device is a smartphone camera, as described above with reference to
In a particular embodiment, the method according to the disclosure is computer-implemented. Namely, a computer program product comprises one or more sequences of instructions that are accessible to a processor and that, when executed by the processor, cause the processor to carry out steps 280, 282 and 284 of the method as described above for estimating the position of the optical center of an ophthalmic lens mounted on a frame.
The sequence(s) of instructions may be stored in one or several non-transitory computer-readable storage medium/media, including a predetermined location in a cloud.
Although representative systems and methods have been described in detail herein, those skilled in the art will recognize that various substitutions and modifications may be made without departing from the scope of what is described and defined by the appended claims.
This application is the US national stage of PCT/EP2022/084922, filed Dec. 8, 2022 and designating the United States, which claims the priority of EP 21306739.0, filed Dec. 10, 2021. The entire contents of each foregoing application are incorporated herein by reference.