The disclosure relates to calibration method for determining at least one parameter of an image acquisition module of an electronic device.
Usually, a person wishing to have optical equipment visits an eye care practitioner.
The determination of the wearer's prescription and fitting data may require carrying out complex and time-consuming measurements. Such measurements usually require complex and costly equipment and qualified personnel to be carried out.
However, recent developments allow using an electronic device, such as a smartphone, to determine optical parameters of a person, such as the prescription or the fitting parameters of the wearer, or optical parameters of an optical device.
An example of the use of a portable electronic device to determine an optical parameter of a lens of eyewear adapted for a person is disclosed in WO 2019/122096.
The use of a portable electronic device to determine optical parameters requires knowing some of the characteristics of the portable electronic device.
The variety of portable electronic devices available requires a calibration protocol that is easy to implement and that allows determining whether a given device may be used to determine specific optical parameters, as well as the key characteristics of such a device that are required to determine the optical parameters.
The calibration method of the disclosure is an alternative to a characterization process that is usually done in a laboratory with specific metrological equipment. Such characterization process is often done as a conclusion of a manufacturing process of the electronic device and renewed regularly to maintain the precision of the device.
Such a characterization process requires specific metrological equipment and highly trained professionals and therefore may not be carried out on a large scale for a great variety of portable electronic devices.
Existing smartphone applications use many of the integrated hardware sensors to allow a simple and precise determination of parameters relating to the prescription of an optical device, for example lens fitting. Such applications are usually run on pre-qualified smartphones which have been individually calibrated in a laboratory. This calibration can be done on a single sample of a given model if the dispersion of the characteristic parameters is known to be low enough. Otherwise, the calibration needs to be done on each smartphone individually. This is particularly the case for smartphones running Android or Windows® operating systems. These operating systems are used on a broad number of smartphones, and these smartphones have different image acquisition module parameters.
This could also be extended to other portable electronic devices provided with an image acquisition module placed on the same side of a display screen.
Therefore, there is a need for a method for determining at least one parameter of the image acquisition module of an electronic device that can be easily implemented by an untrained user and for calibrating any portable electronic device without requiring the use of specific metrological equipment or requiring the presence of an eyecare professional or a trained professional.
One object of the present disclosure is to provide such a calibration method.
To this end, the disclosure relates to a method for determining at least one parameter of an image acquisition module of an electronic device, the electronic device having a display screen and the image acquisition module on the same side of the electronic device, the method comprising the following steps:
Advantageously, the method of determination of the disclosure is an assisted determination method. By providing indications to the user, the calibration method of the disclosure relies as little as possible on the user operating it and does not require any specific knowledge. Additionally, the method requires little effort from the user.
Advantageously, the method makes it possible to determine at least one parameter of an image acquisition module regardless of the type of electronic device, as long as the image acquisition module, for example a front camera, is placed on the same side of the electronic device as the display screen.
According to further embodiments of the method which can be considered alone or in combination:
Another object of the disclosure is a computer program product comprising one or more stored sequences of instructions which, when executed by a processing unit, are able to perform the parameter determining step of the method according to the disclosure.
The disclosure further relates to a computer program product comprising one or more stored sequences of instructions that are accessible to a processor and which, when executed by the processor, cause the processor to carry out at least the steps of the method according to the disclosure.
The disclosure also relates to a computer-readable storage medium having a program recorded thereon; where the program makes the computer execute at least the steps of the method of the disclosure.
The disclosure further relates to a device comprising a processor adapted to store one or more sequences of instructions and to carry out at least steps of the method according to the disclosure.
Non limiting embodiments of the disclosure will now be described, by way of example only, and with reference to the following drawings in which:
Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figure may be exaggerated relative to other elements to help improve the understanding of the embodiments of the present disclosure.
The disclosure relates to a method, for example at least partly implemented by computer means, for determining at least one parameter of an image acquisition module 12 of an electronic device 10.
The electronic device further comprises a display screen 14.
The electronic device 10 may be a smartphone or a personal digital assistant or a laptop or a webcam or a tablet computer.
The image acquisition module 12 is located on the same side of the electronic device 10 as the display screen 14. The image acquisition module 12 may typically be a camera.
In a preferential embodiment, the image acquisition module 12 comprises a lens.
The electronic device may be portable, and for example may further comprise a battery.
The electronic device may comprise processing means that may be used to carry out at least part of the steps of the method of determination according to the disclosure.
The method aims at determining parameters of the image acquisition module 12 of the electronic device 10.
The method comprises a first step S2 being an initialization step, wherein a first pattern 16 is displayed on the display screen 14.
The first pattern 16 comprises a first element 16a, a second element 16b and a third element 16c.
The first element 16a has a fixed location on the display screen. The second element 16b is movable over the display screen 14 based on the orientation of the electronic device 10.
An element having a fixed location means that said element remains static on the display screen 14 when the electronic device 10 is moved.
An element is considered to be movable, when the position of the element on the screen is dependent on the orientation of the electronic device 10. By rotating the electronic device 10, the movable element moves on the display screen.
The third element 16c has a given shape. By achieving a particular positioning of the second element 16b with respect to the first element 16a, the particular shape of the third element 16c is reproduced.
The method comprises a second step S4 being a positioning step, wherein the electronic device 10 is positioned in front of a mirror 18 (shown
Based on said positioning of the electronic device 10 with respect to the mirror 18, the content of the display screen 14 is reflected on the mirror and can be acquired by the image acquisition module 12, when desired.
The method comprises a third step S6 being an orientation step, wherein the electronic device 10 is oriented, with respect to the mirror 18, in a particular orientation such that the second element 16b of the first pattern 16 moves, based on a rotation of the electronic device 10 provided by the user, to reach a target position.
The target position is reached, when the positioning of the second element 16b with respect to the first element forms a shape identical to the particular shape of the third element 16c.
Advantageously, the third element 16c is displayed on the screen to help the user to rightfully position the second element 16b with respect to the first element 16a, so as to form together a shape identical to the third element 16c.
Advantageously, the third element 16c is displayed to help the user when orienting the electronic device and to show the shapes to be achieved by having the second element 16b moved with respect to the first element 16a.
The method comprises a fourth step S8 being an orientation confirmation step, wherein the electronic device is maintained in the particular orientation over a period of time.
For example, the period of time may be 1.5 s, preferably 1 s, even more preferably 0.5 s.
After the electronic device 10 has been maintained in position for the given period of time, the first pattern 16 is no longer displayed. Once the first pattern 16 has disappeared, a second pattern 20 is displayed.
The second pattern comprises a set of fourth elements 20a having fixed locations on the screen.
The second pattern may be a set of circular elements. The second pattern may be a set of square elements or rectangular elements or polygonal elements or triangular elements or star shape elements.
For circular elements, the reference point can be the center of the circular element.
For square elements or the rectangular elements, the reference point can be the intersection of the diagonals.
For triangular elements, the reference point can be the intersection of the medians, bisectors or the perpendicular bisectors.
For polygonal elements, the reference point can be the centroid of the polygon, which can be computed as the center of gravity of its vertices, or for example using the plumb line method or the balancing method.
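The vertex-based centroid mentioned above can be sketched as follows; this is an illustrative implementation (the function name is an assumption), computing the center of gravity of the polygon's vertices:

```python
# Sketch (not the disclosed implementation): reference point of a
# polygonal fourth element, taken as the center of gravity of its
# vertices, i.e. the mean of the vertex coordinates.

def vertex_centroid(vertices):
    """Return the centroid (mean position) of a list of (x, y) vertices."""
    n = len(vertices)
    cx = sum(x for x, _ in vertices) / n
    cy = sum(y for _, y in vertices) / n
    return cx, cy

# Example: a unit square, whose vertex centroid is its center.
print(vertex_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))  # → (0.5, 0.5)
```

Note that for non-regular polygons the vertex centroid differs from the area centroid; the text allows either, for example via the plumb line or balancing method.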
The second pattern may comprise fourth elements 20a having different shapes, for example a combination of circular and/or square and/or rectangular and/or polygonal and/or triangular and/or star shape elements.
A picture of the second pattern 20, seen through the mirror 18, is acquired by the image acquisition module 12.
The method comprises a fifth step S10 being a reference point determination step, wherein said set of fourth elements 20a of the second pattern 20 are detected on the acquired image. A reference point associated with each of said fourth elements 20a is determined.
Steps S2 to S10 are reiterated several times, wherein each time the position of the first element 16a of the first pattern 16 is different, resulting in different orientations of the electronic device 10 in the orientation step S6.
Finally, the method comprises a sixth step S12 that is an image acquisition module 12 parameter determination step. Based on the reference points of each element 20a of the set of fourth elements obtained during each orientation of the electronic device 10, the image acquisition module parameter is determined.
To determine the image acquisition module parameter value, the following parameters should be considered:
A given point is defined as Q=(XQ, YQ, ZQ) in a three-dimensional reference system R being attached to the image acquisition module 12.
The three-dimensional reference system R may be a three-dimensional reference system specific to the image acquisition module 12, for example centered on the lens of the image acquisition module.
A projection of a point Q=(XQ, YQ, ZQ) defined in R on an image acquired by the image acquisition module 12, having the two-dimensional reference system R2, is defined as (u,v)=Φ(Q) and is calculated by the following steps:
a. u′=αXN+2p1XNYN+p2(n+2XN2)
b. v′=αYN+2p2XNYN+p1(n+2YN2), and
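Steps a and b above correspond to the standard radial-plus-tangential distortion model. Assuming the symbol n denotes the squared radius of the normalized coordinates (written r2 below), the full projection Φ can be sketched as follows; the function name and structure are illustrative, not the disclosed implementation:

```python
# Hedged sketch of the projection Phi: normalized coordinates are
# distorted radially (k1, k2, k3) and tangentially (p1, p2), then
# mapped to pixels with the intrinsic parameters fx, fy, u0, v0.
# The squared radius r2 = XN^2 + YN^2 is assumed to be the quantity
# written n in steps a and b of the text.

def project(Q, fx, fy, u0, v0, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    XQ, YQ, ZQ = Q
    XN, YN = XQ / ZQ, YQ / ZQ                  # normalized coordinates
    r2 = XN**2 + YN**2                         # squared radius
    alpha = 1 + k1*r2 + k2*r2**2 + k3*r2**3    # radial distortion factor
    u_p = alpha*XN + 2*p1*XN*YN + p2*(r2 + 2*XN**2)  # step a
    v_p = alpha*YN + 2*p2*XN*YN + p1*(r2 + 2*YN**2)  # step b
    return fx*u_p + u0, fy*v_p + v0            # pixel coordinates (u, v)

# With no distortion, a point on the optical axis projects to (u0, v0).
print(project((0.0, 0.0, 1.0), 800, 800, 320, 240))  # → (320.0, 240.0)
```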
When performing Steps S2 to S10 according to the invention, N images Ik, with k=1, . . . , N of the second pattern 20 are acquired. The second pattern 20 comprises m points Pi=(Xi, Yi, Zi) with i=1, . . . , m, wherein Zi is constant. Xi and Yi are defined along orthogonal axis X and Y of a plane, wherein the plane is defined by the display screen 14 of the electronic device 10 (as shown in
In each image Ik, m reference points are acquired during the reference point determination step S10. One reference point is determined for each fourth element 20a.
The coordinates, in the two-dimensional reference system R2, of each of the m reference points on the image Ik are determined by the projection (uk,i, vk,i), with k=1, . . . , N and i=1, . . . , m.
Each image Ik acquired by the image acquisition module 12 may have a different number of reference points mk based on the number of fourth elements displayed on the display screen 14 and/or based on the number of fourth elements visible on the acquired image based on the orientation of the electronic device 10, induced by the degree of rotation of said electronic device with respect to the mirror 18.
For the simplicity of the disclosure, the number of reference points mk is kept identical among the different images Ik acquired by the image acquisition module 12.
For each image Ik, with k=1, . . . , N, the image formed by the reflection of the second pattern 20, displayed by the display screen 14, on the mirror 18 varies in the three-dimensional reference system R, resulting in different acquired images Ik. The position of the image of the second pattern 20 is defined by a rotation matrix Mk and a translation vector Tk.
The points Pi=(Xi, Yi, Zi), with i=1, . . . , m and with Zi=0, formed on the second pattern 20, for a given orientation of the electronic device 10, are then expressed in the three-dimensional reference system R by:
The projection of the points Qk,i, defined in the three-dimensional reference system R, in the two-dimensional reference system R2 of the image should correspond to the detected points:
As described in Burger, Wilhelm, “Zhang's Camera Calibration Algorithm: In-Depth Tutorial and Implementation”, 2016, a procedure makes it possible to calculate the parameters of the image acquisition module, such as the radial or tangential distortion coefficients k1, k2, k3, p1, p2 and the intrinsic parameters fx, fy, u0, v0.
Said radial distortion coefficients k1, k2, k3, of the distortion factor α, and tangential distortion coefficients p1, p2, of the distortion corrections u′ and v′, and the intrinsic parameters fx, fy, u0, v0 are derived from the position (uk,i, vk,i) of each reference point Qk,i in each of the images Ik acquired by the image acquisition module 12, and the points Pi=(Xi, Yi, Zi) with i=1, . . . , m, considering the following steps:
The calculated distortion coefficients may be the radial distortion coefficients k1, k2, k3 or the tangential distortion coefficients p1, p2.
The determination of the homography may involve a non-linear refinement.
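The per-image homography that Zhang's method starts from can be estimated linearly before any non-linear refinement. Below is a minimal sketch of this first stage using the direct linear transform (DLT); the function name and the synthetic round-trip check are assumptions for illustration, not the disclosed implementation:

```python
import numpy as np

# Illustrative sketch: DLT estimation of the homography H mapping
# planar pattern points (X, Y) to detected image points (u, v).
# Each correspondence contributes two rows to a homogeneous system
# A h = 0, solved via SVD (the null vector of A).

def estimate_homography(pts_src, pts_dst):
    """pts_src, pts_dst: (n, 2) arrays with n >= 4. Returns 3x3 H."""
    A = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        A.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        A.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                # fix the arbitrary scale

# Round trip: points mapped through a known H should recover it.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 0.0, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.3, 0.7]], dtype=float)
dst_h = (H_true @ np.c_[src, np.ones(len(src))].T).T
dst = dst_h[:, :2] / dst_h[:, 2:]
H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # → True
```

The non-linear refinement mentioned in the text then improves this linear estimate by minimizing reprojection error.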
In a preferred embodiment, an optimization algorithm is used in order to provide a better estimate of the parameters of the image acquisition module 12, such as the radial or the tangential distortion.
The optimization algorithm may be a Levenberg-Marquardt algorithm.
In said Levenberg-Marquardt algorithm, a cost function may be calculated taking into consideration the known second pattern 20 comprising the m points Pi=(Xi, Yi, Zi) with i=1, . . . , m and the known detected reference points (uk,i, Vk,i), with k=1, . . . , N and, i=1, . . . , m
The extrinsic parameters are exclusive to each acquired image Ik. Therefore, the vector ω comprises 9 parameters defined by the intrinsic and distortion coefficients, as well as 6×N parameters (3 parameters for each rotation matrix Mk and 3 parameters for each translation vector Tk), with N being the number of images acquired by the image acquisition module 12.
Given the parameters vector ω, the projection Φ can be calculated.
The vector ω comprises parameters ω0, . . . , ω8 corresponding to the intrinsic and distortion coefficients fx, fy, u0, v0, k1, k2, k3, p1, p2, the other parameters ω9, . . . , ω9+6(N-1)+5 corresponding to the extrinsic parameters linked to the rotations (rotation matrix Mk) and translations (translation vector Tk) of each image Ik.
The projection Φ can be calculated with the following steps:
Step 1: Extrinsic Parameters Determination for Each of the Images Ik, with k=1, . . . , N
A vector Rk(ω)=(ω9+6(k-1)+0, ω9+6(k-1)+1, ω9+6(k-1)+2), having 3 parameters, leads to the determination of the rotation matrix Mk, using the Euler-Rodrigues method.
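The expansion of a 3-parameter vector into a rotation matrix can be sketched with the standard Rodrigues formula (the vector is interpreted as axis times angle); the exact sub-steps of the disclosure may differ, and the function name is an assumption:

```python
import numpy as np

# Hedged sketch of the Euler-Rodrigues step: the 3-parameter vector Rk
# (axis-angle representation) is expanded into the rotation matrix Mk
# via R = I + sin(theta) K + (1 - cos(theta)) K^2, where K is the
# cross-product matrix of the unit rotation axis.

def rotation_from_vector(r):
    r = np.asarray(r, dtype=float)
    theta = np.linalg.norm(r)          # rotation angle
    if theta < 1e-12:
        return np.eye(3)               # no rotation
    k = r / theta                      # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])   # cross-product matrix of k
    return np.eye(3) + np.sin(theta)*K + (1 - np.cos(theta))*(K @ K)

# A pi/2 rotation about the Z axis sends the X axis onto the Y axis.
M = rotation_from_vector([0.0, 0.0, np.pi/2])
print(np.round(M @ np.array([1.0, 0.0, 0.0]), 6))  # → [0. 1. 0.]
```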
Said determination comprises the following sub-steps:
The translation vector Tk is determined by the following parameters of the vector ω,
For each of the acquired images Ik, with k=1, . . . , N, and each of the points Pi of the given acquired image, with i=1, . . . , m, the points Pi are defined in the three-dimensional reference system R based on the following equation:
For each of the acquired images Ik, with k=1, . . . , N, and each of the reference points Qk,i(ω) of the given acquired image, with i=1, . . . , m, the error is obtained based on the following equation:
For each of the acquired images Ik, with k=1, . . . , N, and each of the reference points Qk,i(ω) of the given acquired image, with i=1, . . . , m, the cost function is defined as:
The cost function J makes it possible to optimize Zhang's method, providing a second estimation of the parameters, such as the intrinsic parameters and the distortion coefficients.
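The structure of such a reprojection cost function can be sketched as follows. This is a simplified illustration (assumptions: a pinhole projection without distortion, and only the four intrinsic parameters as unknowns); in the disclosure, the Levenberg-Marquardt algorithm minimizes the full cost over all intrinsic, distortion, and extrinsic parameters:

```python
# Illustrative sketch of the cost function J minimized by
# Levenberg-Marquardt: the sum over images k and points i of the
# squared difference between detected points (u_ki, v_ki) and the
# projection of the pattern points through the current parameters.
# Simplification: pinhole projection, no distortion terms.

def project_pinhole(P, fx, fy, u0, v0):
    X, Y, Z = P
    return fx*X/Z + u0, fy*Y/Z + v0

def cost_J(params, pattern_pts, detected):
    """params = (fx, fy, u0, v0); pattern_pts, detected: lists per image."""
    fx, fy, u0, v0 = params
    J = 0.0
    for pts_k, det_k in zip(pattern_pts, detected):
        for P, (u, v) in zip(pts_k, det_k):
            up, vp = project_pinhole(P, fx, fy, u0, v0)
            J += (u - up)**2 + (v - vp)**2   # squared reprojection error
    return J

# Synthetic data generated with known parameters: J vanishes at the
# true parameters and grows when they are perturbed.
true = (800.0, 810.0, 320.0, 240.0)
pts = [[(0.1, 0.2, 1.0), (-0.3, 0.1, 1.2)]]
det = [[project_pinhole(P, *true) for P in pts[0]]]
print(cost_J(true, pts, det))                              # → 0.0
print(cost_J((790.0, 810.0, 320.0, 240.0), pts, det) > 0)  # → True
```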
The image acquisition module 12 comprises a camera having a lens.
The image acquisition module parameter may be the focal length of the lens of the camera.
The image acquisition module parameter may be a chromatism parameter of the lens of the image acquisition module.
The image acquisition module parameter may be a luminosity parameter of the lens of the image acquisition module.
The image acquisition module parameter may be a distortion coefficient of the lens of the camera.
The distortion coefficient may be radial distortion and/or tangential distortion and/or barrel distortion and/or pincushion distortion and/or decentering distortion and/or thin prism distortion.
The image acquisition module parameter may be the optical center of the lens of the camera.
Preferably, the steps S2 to S10 are reiterated at least nine times in order to have a robust value of the parameter of the acquisition module 12.
Said parameter value may be even more robust if further reiterations of the steps S0 to S10 are performed, for example more than ten, more than fifteen, or more than twenty iterations.
In each iteration, the user is requested, in the orientation step S6, solely to rotate the electronic device 10 about the pitch axis X (
No translation of the electronic device 10 with respect to the mirror 18 is requested in the orientation step S6, as it would not result in a different angular positioning of the electronic device 10 with respect to the mirror 18.
According to an embodiment, the steps S2 to S10 are repeated at least four times, and in each of these orientation steps S6, the electronic device is rotated at least according to one rotational degree of freedom.
According to an embodiment, the steps S2 to S10 are repeated at least four times, and in each of these orientation steps S6, the electronic device is rotated according to one rotational degree of freedom.
In an embodiment, the method may comprise an additional method step S0 being performed for each reiteration, starting from the second iteration.
The additional step S0 is a controlling step, wherein it is checked that the new position of the first element 16a of the first pattern leads to an orientation of the electronic device different from the orientations already achieved in the previous iterations of steps S2 to S10.
Namely, the controlling step S0 aims to display the first element 16a at a location of the display screen 14 different from those used in the initialization steps S2 of the previous iterations.
The first pattern 16 comprises a first element 16a, a second element 16b and a third element 16c.
The third element 16c comprises at least a first portion 16c1 and a second portion 16c2. The arrangement of said first and second portions 16c1, 16c2 corresponds to a particular positioning of the first element 16a with respect to the second element 16b.
Advantageously, the third element is displayed to help the user when orienting the electronic device and to show the shapes to be achieved when moving the second element 16b with respect to the first element 16a.
The displacement shown in
The displacement of the second element 16b on the display screen 14 is caused by the orientation of the electronic device. A sensor measures the degree of rotation and/or inclination of the electronic device and, based on the inclination measured by the sensor, a processor performs a translation of the second element 16b over the display screen 14.
The sensor might be an accelerometer and/or a gyroscope.
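The sensor-to-display mapping described above can be sketched as follows; the function name, the linear gain, and the clamping behavior are assumptions for illustration, not details from the disclosure:

```python
# Hedged sketch: the processor translates the movable second element
# 16b on screen proportionally to the pitch/roll angles reported by
# the accelerometer or gyroscope, clamped to the screen bounds.
# Gain and axis conventions are illustrative assumptions.

def element_position(center, pitch_deg, roll_deg, gain, screen_w, screen_h):
    """Map device tilt (degrees) to the pixel position of element 16b."""
    x = center[0] + gain * roll_deg    # roll moves the element sideways
    y = center[1] + gain * pitch_deg   # pitch moves it vertically
    x = min(max(x, 0), screen_w - 1)   # keep the element on screen
    y = min(max(y, 0), screen_h - 1)
    return x, y

# Tilting 10 degrees forward shifts the element 50 px down from center.
print(element_position((540, 960), 10, 0, 5, 1080, 1920))  # → (540, 1010)
```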
The first and the second elements 16a, 16b are considered to form a shape identical to the third element 16c if the second element 16b is positioned with respect to the first element 16a so as to make a form similar to that of the third element 16c, tolerating a margin of a few pixels, for example 1 pixel or 5 pixels.
The given margin of a few pixels may be greater than 1 and smaller than 10 pixels, preferably smaller than 5 pixels.
The shapes of the first, second and third elements shown in
In the embodiment illustrated in
In the orientation step S6, the user is requested to move the second element 16b, by rotating the electronic device 10, such that the arrangement between the first element 16a and the second element 16b is identical, within a margin of a few pixels, to the arrangement of the third and fourth half annular shapes 16c1, 16c2.
During the orientation step S6, the electronic device 10 is oriented in a particular orientation such that the second element 16b fully overlaps a portion of the third element, and more particularly a portion 16c1 of the third element 16c.
In an embodiment, the first element 16a and the second element 16b have different colors.
In an embodiment, the first portions 16c1, 16c2 of the third element 16c have different colors.
In a particular embodiment, the first element 16a has the same color as the first portion 16c1 of the third element. The second element 16b has the same color as the second portion 16c2 of the third element. And the first element 16a and the second element 16b have different colors.
The electronic device 10 comprises a top portion 10a and a bottom portion 10b.
In an embodiment, the top portion 10a of the electronic device is positioned above the bottom portion 10b in each occurrence of the positioning step S4 and orientation step S6. The electronic device 10 remains substantially vertical during each of the positioning step S4 and orientation step S6.
If the user rotates the electronic device 10 by any angle of rotation, for example 180°, about a yaw axis Z (shown in
Following the orientation confirmation step S8, a second pattern 20 is displayed on the display screen 14. The second pattern, comprising a set of fourth elements 20a, is a grid of circular elements.
The number of fourth elements 20a to be displayed depends on the size of the display screen 14 of the electronic device 10.
In an embodiment, the second pattern comprises at least two lines of two circular elements.
The
In the illustrative embodiment of
It is desired that each circular element is clearly spaced from the neighboring circular elements so that the border of said circular element can be correctly defined.
In an embodiment, each of the fourth elements 20a is spaced from one another by a given distance. Said given distance may be greater than or equal to 2 mm and lower than or equal to 3 cm, preferably greater than or equal to 5 mm and lower than or equal to 5 cm, and even more preferably greater than or equal to 8 mm and lower than or equal to 1.5 cm.
The circular elements can have different shapes.
In the embodiment illustrated in
In the embodiment illustrated in
The circular elements, being discs or annular elements, and the remaining portion of the display screen have different colors.
In an embodiment, the circular elements, being discs or annular elements, are black and the remaining portion of the display screen is white.
Advantageously, said pattern provides better blur management than a chessboard. In a chessboard, the adjacency of the black squares makes it difficult to determine the limits of each square in a precise manner.
In an embodiment, the circular elements, being discs or annular elements, are green, and the remaining portion of the screen is black.
In the embodiment illustrated in
In a more preferred embodiment, the disc and the annular elements have different colors.
In an even more preferred embodiment, the disc, the annular elements and the remaining portion of the display screen have three different colors.
The
In the
In the
Following said first orientation of the electronic device 10 with respect to the mirror 18, an image of the second pattern reflected on the mirror is acquired by the acquisition module.
The image may be acquired automatically by the image acquisition device.
Alternatively, the user is requested to take the picture manually.
In the
Following said second orientation of the electronic device 10 with respect to the mirror 18, an image of the second pattern reflected on the mirror is acquired by the acquisition module.
The image processing library OpenCV makes it possible to retrieve at least one intrinsic parameter of the acquisition device, as disclosed in the document “The Common Self-polar Triangle of Concentric Circles and Its Application to Camera Calibration”, by Haifei Huang, Hui Zhang and Yiu-ming Cheung. Said document discloses a camera calibration method consisting of the following steps:
The calibration method according to the invention provides better blur management than the OpenCV method mentioned above. The accuracy of the result, and as a consequence of the determination of at least one parameter of the image acquisition module 12, is strongly linked to the precision of the detection of these reference points.
High precision is crucial, mainly when blurry images are captured by the image acquisition module.
In order to improve the accuracy, it is preferable to improve the method of determination of the reference points of the circular elements, i.e. of each of the fourth elements 20a of the second pattern 20, and to use patterns that are less sensitive to blur.
Advantageously, a method involving the detection of circular elements, for each of the fourth elements 20a of the second pattern 20, and the determination of their reference points is more robust than determining the intersection of contrasting colors, for example the arrangement of black and white squares on a chessboard.
There are two ways to improve accuracy: the first is to improve the detection of the center of the pattern, the second is to use patterns that are less sensitive to blur.
The reference point determination step S10 is achieved with respect to the image acquired by the image acquisition module.
The reference point determination step S10 has two embodiments, depending on whether the set of fourth elements 20a is formed by discs or by annular elements.
In each of the embodiments relative to the reference point determination step S10, the OpenCV algorithm is solely used to identify the circular elements 20a of the second pattern 20.
The reference point determination step S10 comprises the following sub steps being performed for each of the fourth elements 20a of the second pattern:
In said embodiment, the reference point of each disc is formed by the center of the ellipse 22.
In order to further improve the accuracy of the reference point detection, the color of annular elements and their environment may be modified.
In order to easily find each ring, three colors can be used.
In this particular embodiment, the remaining portion of the display screen 14 not covered by fourth elements 20a is black. The annular element has a green color. And the central portion of the annular element, forming a disc, is blue or red.
The color of a pixel of a displayed image is defined by the following three color channels: R (red), G (green) and B (blue). Each pixel p(i,j) of the acquired image has a level of each RGB color between 0 and 255.
For example, Black is (0,0,0) and white is (255,255,255).
A green pixel is defined as follows (0, 255, 0).
And the image is composed of three matrices R(i,j), G(i,j), B(i,j).
A grey image is defined as grey(i,j)=min(R(i,j), G(i,j), B(i,j)). In the grey image, the circular elements 20a formed by annular elements are converted into discs.
Advantageously, using a grey image helps to find the locations of the fourth elements 20a.
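The grey-image step can be sketched directly from the definition grey(i,j)=min(R(i,j), G(i,j), B(i,j)). With real (non-saturated) camera captures, both the green annulus and its blue or red core have a non-zero minimum channel while the black background stays near zero, so each annular element shows up as a filled disc; the sample pixel values below are illustrative assumptions:

```python
import numpy as np

# Sketch of the grey-image construction grey(i,j) = min(R, G, B),
# applied per pixel across the three color channels of the
# acquired image.

def to_grey_min(rgb):
    """rgb: (h, w, 3) uint8 array; returns the per-pixel channel minimum."""
    return rgb.min(axis=2)

img = np.zeros((3, 3, 3), dtype=np.uint8)       # black background
img[1, 0] = (45, 200, 60)                       # green annulus pixel
img[1, 1] = (40, 60, 210)                       # blue core pixel
grey = to_grey_min(img)
print(grey[1, 0], grey[1, 1], grey[0, 0])       # → 45 40 0
```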
Then, the green channel is used in further image processing of the image acquired by the image acquisition module 12.
Advantageously, the green channel is used to enhance the contrast.
Following the use of the green channel, the detection of the annular element is enhanced.
From the grey image, a first approximation of the center of each disc is obtained, using for example an OpenCV function.
Then, for each annular element detected, the reference point is estimated using two ellipses relative to the approximated internal and external contours of the annular element. This method provides a better estimation of the reference point.
The reference point determination step S10 comprises the following sub steps being performed for each of the fourth elements 20a of the second pattern:
Preferably, the internal and the external contour determination steps S10b3 and S10b4 are performed thanks to an algorithm using the green channel, enhancing the contrast and helping to determine the internal and external contours of the green annular element.
In order to further improve the determination of the internal and the external contours of the annular element, an additional program can be executed to avoid outliers.
This algorithm consists in extracting the green annular element and determining the first ellipse 24a, corresponding to the external contour, and the second ellipse 24b, corresponding to the internal contour of the annular element. Following the determination of said ellipses, the mean square method is used to calculate, for each of the ellipses, its center and its radii along the semi-minor and semi-major axes.
Based on the centers of the two ellipses, the reference point can be acquired.
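The two-contour approach above can be sketched as follows. As a simplification, a least-squares (Kåsa) circle fit stands in for the mean-square ellipse fit; the function names and synthetic contour data are assumptions for illustration:

```python
import numpy as np

# Illustrative sketch: least-squares (Kasa) circle fit, used here as
# a simplified stand-in for the mean-square ellipse fit. The centers
# fitted to the internal and external contours of an annular element
# are combined into the element's reference point.

def fit_circle(pts):
    """Least-squares circle through (n, 2) contour points; returns center."""
    x, y = pts[:, 0], pts[:, 1]
    # (x-a)^2 + (y-b)^2 = r^2  rewritten as  2ax + 2by + c = x^2 + y^2
    A = np.c_[2*x, 2*y, np.ones(len(pts))]
    b = x**2 + y**2
    cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([cx, cy])

def reference_point(inner_pts, outer_pts):
    """Average of the centers fitted to the two contours."""
    return 0.5 * (fit_circle(inner_pts) + fit_circle(outer_pts))

# Synthetic annular element centered on (10, -4).
t = np.linspace(0, 2*np.pi, 60, endpoint=False)
inner = np.c_[10 + 3*np.cos(t), -4 + 3*np.sin(t)]
outer = np.c_[10 + 5*np.cos(t), -4 + 5*np.sin(t)]
print(np.round(reference_point(inner, outer), 6))  # → [10. -4.]
```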
When considering a second pattern comprising at least one square element, at least one triangle or at least one polygonal element, the first ellipse corresponds to an estimation of a circumscribed circle and the second ellipse to an estimation of an inscribed circle.
Based on the determination of the reference points of the set of fourth elements, at least one parameter of the acquisition module is derived. More specifically, the value of said at least one parameter of the acquisition module is determined.
According to an embodiment, a database may comprise parameters of the acquisition module provided by the manufacturer.
According to an embodiment, a database may comprise a determination of a value of at least one parameter of the acquisition module provided by a certified organization.
According to an embodiment, a database may store a determination of a value of at least one parameter of the acquisition module provided by a user achieving the method according to the invention.
In a more particular embodiment, the database may store a determination of a value of at least one parameter of the acquisition module provided by a plurality of users achieving the method according to the invention. The database may also comprise a parameter mean value, the parameter mean value corresponding to the average of the determined values of the at least one parameter of the acquisition module provided by the plurality of users achieving the method according to the invention.
The method according to the invention may comprise an additional step S14, shown in
A database comparison step S14, wherein the value of the parameter of the image acquisition module 12, determined in the parameter determination step S12, is compared to a value of said parameter stored in the database. The value of said parameter stored in the database is for example provided by the manufacturer, by a certified organization, by a user, or is an average of the determined values of the at least one parameter of the acquisition module provided by a plurality of users achieving the method according to the invention.
If the difference, in absolute value, between the value of the parameter of the image acquisition module 12 determined in the parameter determination step S12 and the value of said parameter stored in the database is smaller than or equal to 5%, for example smaller than or equal to 2%, of the value of said parameter stored in the database, the value determined in the parameter determination step S12 is confirmed.
If the difference is bigger than 5%, the user performing the method according to the invention is requested to reproduce the steps S2 to S12 at least one more time. Preferably, the steps S2 to S12 are reproduced until the difference, in absolute value, between the value of the parameter of the image acquisition module 12 determined in the parameter determination step S12 and the value of said parameter stored in the database is smaller than or equal to 5% of the value of said parameter stored in the database.
In a particular embodiment, the method according to the invention may not require at least nine reiterations of the steps S2 to S10, if the difference, in absolute value, between the value of the parameter of the image acquisition module 12 determined in the parameter determination step S12 and the value of said parameter stored in the database is smaller than or equal to 5% of the value of said parameter stored in the database.
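The acceptance criterion of the database comparison step can be sketched in a few lines; the thresholds come from the text above, while the function name is an assumption:

```python
# Sketch of the acceptance test in database comparison step S14:
# the determined parameter value is confirmed when it lies within
# a relative tolerance (5% by default, per the text) of the stored
# reference value; otherwise steps S2 to S12 are repeated.

def is_confirmed(determined, stored, tolerance=0.05):
    """True if |determined - stored| <= tolerance * |stored|."""
    return abs(determined - stored) <= tolerance * abs(stored)

print(is_confirmed(812.0, 800.0))   # → True  (1.5% deviation)
print(is_confirmed(850.0, 800.0))   # → False (6.25% deviation)
```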
The electronic device 10 is used to determine at least one of optical fitting parameters of a user, optical parameters of an optical lens, acuity parameters of a user.
The fitting parameters comprise:
The optical parameters of the lens comprise:
The disclosure has been described above with the aid of embodiments without limitation of the general inventive concept.
Many further modifications and variations will suggest themselves to those skilled in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the disclosure, that being determined solely by the appended claims.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
22305429.7 | Mar 2022 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2023/058342 | 3/30/2023 | WO |