The present disclosure relates to measurement technology, and more particularly, to an eye measurement system, an eye measurement method, and a computer-readable medium thereof.
In the prior art, before performing eye surgery or during diagnosis, clinicians conduct eye measurements manually. For example, a clinician sets up a camera in front of the patient, keeps the camera level with the patient's eyes, and maintains a fixed distance between the camera and the patient in order to take photographs of the patient's eyes. The clinician then manually obtains various eye measures from the patient's eye photographs.
As can be seen from the above, measuring a subject's eyes in this conventional manner is inconvenient and constrained by the environment and method of taking photographs; not only is it difficult to obtain suitable photographs, but man-made errors during photographing also render the manually conducted eye measurements inaccurate, which may lead to serious misjudgments by clinicians when making a diagnosis.
Therefore, how to conduct an eye measurement that obtains a subject's eye measures accurately and quickly without being restricted by the environment, so as to provide a clear diagnostic basis for clinicians, has become an urgent issue in the art.
In order to solve the above-mentioned conventional technical problems, the present disclosure provides an eye measurement system, which includes: a client device with a measurement application, configured for capturing a first eye image, a second eye image and a third eye image of a subject, or for selecting the first eye image, the second eye image and the third eye image of the subject from the client device by the measurement application; and a cloud processing device, communicatively connected to the client device, configured for receiving the first eye image, the second eye image and the third eye image of the subject from the measurement application, wherein the cloud processing device includes: a pre-processing module, configured for cropping the first eye image, the second eye image and the third eye image of the subject into a first eye orbit image, a second eye orbit image and a third eye orbit image, respectively, and superimposing the second eye orbit image onto the third eye orbit image to generate a superimposed eye orbit image; and an eye measure prediction module with a prediction model for calculating a predicted eye measure of the subject according to the first eye orbit image and the superimposed eye orbit image of the subject, wherein the cloud processing device sends the predicted eye measure of the subject back to the measurement application for supplying the predicted eye measure.
The present disclosure further provides an eye measurement method, comprising: capturing a first eye image, a second eye image and a third eye image of a subject by a client device with a measurement application, or selecting the first eye image, the second eye image and the third eye image of the subject from the client device by the measurement application; receiving the first eye image, the second eye image and the third eye image of the subject from the measurement application by a cloud processing device; cropping the first eye image, the second eye image and the third eye image of the subject into a first eye orbit image, a second eye orbit image and a third eye orbit image, respectively, by the cloud processing device, wherein the second eye orbit image is superimposed onto the third eye orbit image to generate a superimposed eye orbit image; using a prediction model by the cloud processing device to calculate a predicted eye measure of the subject based on the first eye orbit image and the superimposed eye orbit image; and sending the predicted eye measure of the subject back to the measurement application by the cloud processing device for supplying the predicted eye measure.
In the aforementioned embodiment, the first eye image is a photograph of the subject's left and right eyes viewing forward, the second eye image is a photograph of the subject's left and right eyes gazing up, and the third eye image is a photograph of the subject's left and right eyes gazing down.
In the aforementioned embodiment, the predicted eye measure includes an MRD1 (Margin Reflex Distance 1, a vertical distance between an upper eyelid margin and a corneal light reflex), an MRD2 (Margin Reflex Distance 2, a vertical distance between a lower eyelid margin and the corneal light reflex) and an LF (Levator Function), wherein the MRD1 and the MRD2 are calculated from the first eye orbit image, and the LF is calculated from the superimposed eye orbit image by the prediction model of the eye measure prediction module.
In the aforementioned embodiment, the pre-processing module uses a coordinate regression model to determine a position of a corneal light reflex of the first eye image, the second eye image and the third eye image of the subject, and crops the first eye image, the second eye image, and the third eye image into the first eye orbit image, the second eye orbit image and the third eye orbit image based on the position of the corneal light reflex.
In the aforementioned embodiment, the measurement application includes a data management module for viewing the subject's data and eye operation information, or for monitoring the subject's eye condition.
In the aforementioned embodiment, the measurement application includes a notification management module for receiving health-related information and notifications, managing the subject's groups, and displaying sent message history.
In the aforementioned embodiment, the prediction model is established based on an EfficientNet in combination with a SENet, the eye measure prediction module inputs a plurality of training images into the prediction model to perform deep learning, and the prediction model calculates the predicted eye measure of the subject based on the first eye orbit image and the superimposed eye orbit image after finishing the deep learning, wherein the SENet increases feature weights of significant features and reduces feature weights of invalid or insignificant features in the plurality of training images, such that the EfficientNet performs the deep learning based on adjusted feature weights and the plurality of training images.
The present disclosure further provides a computer-readable medium for use in a computer or computing device having a processor and/or a memory, wherein the computer or computing device executes, via the processor and/or the memory, the instructions of the computer-readable medium so as to perform the above-described eye measurement method.
As can be seen from the above, the eye measurement system, the eye measurement method and the computer-readable medium of the present disclosure receive, via the cloud processing device, the first eye image, the second eye image and the third eye image of the subject uploaded by clinicians or subjects using the measurement application in the client device. After the first eye image, the second eye image and the third eye image are pre-processed, the first eye orbit image and the superimposed eye orbit image of the patient are obtained, such that the prediction model can obtain the MRD1, the MRD2 and the LF of the patient based on the first eye orbit image and the superimposed eye orbit image, and provide the MRD1, the MRD2 and the LF of the patient to the clinicians as the basis for diagnosis. Therefore, compared with the prior art, the present disclosure is not confined to the places where the eye images are taken, and the prediction model is used to accurately obtain the subject's eye measures, thereby providing the clinicians with a clear basis for diagnosis.
The following describes the implementation of the present disclosure through embodiments, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. It should be noted that the structures, proportions, sizes, etc. shown in the accompanying drawings of this specification are provided in conjunction with the contents disclosed in the specification for the understanding and reading of those familiar with this field, and are not intended to limit the present disclosure. Any modification of structure, change of proportional relationship or adjustment of size that does not affect the effect and purpose of the present disclosure falls within the scope of the technical content disclosed herein. Meanwhile, terms such as "a," "first," "second," "upper" and "lower" quoted in this specification are for convenience of description only and are not intended to limit the scope of the present disclosure; changes or adjustments of their relative relationships, without substantial changes to the technical content, shall also be regarded as falling within the scope of implementation of the present disclosure.
In one embodiment, the client device 10 includes, but is not limited to, a smartphone, a personal computer, a notebook computer or another electronic device with computing capability, and the electronic device includes a processor, a display screen, a camera lens, etc. The measurement application 10a is executed via the processor of the client device 10, and the measurement application 10a can be an application (APP) installed on a smartphone. Furthermore, the cloud processing device 20 can be built in the same (or a different) server (such as a general-purpose server, a file server, a storage server, etc.) or an electronic device with a suitable computing mechanism, such as a computer, wherein each module in the client device 10 and the cloud processing device 20 can be software, hardware or firmware; if implemented as hardware, it can be a processing unit, processor, computer or server with data processing and computing capabilities; in the case of software or firmware, it may include instructions executable by a processing unit, processor, computer or server, and may be installed on the same hardware device or distributed across multiple hardware devices.
As shown in the drawings, the measurement application 10a of the client device 10 includes an account management module 101, a patient information management module 102 and a notification management module 103.
In one embodiment, when the user is a new user, the account management module 101 allows the user to click the "register" button in the login interface (as shown in the drawings) to register a new account.
In one embodiment, when the user forgets the password, the account management module 101 allows the user to click the "forgot password" button in the login interface (as shown in the drawings) to retrieve or reset the password.
In one embodiment, when the user needs to reset the password, the account management module 101 provides the password reset interface (as shown in the drawings) for the user to enter a new password.
In one embodiment, when the user modifies his/her basic personal information, the account management module 101 provides a data change interface (similar to the registration interface shown in the drawings).
As shown in the drawings, the measurement application 10a provides patient information management functions via the patient information management module 102.
In one embodiment, when the user is viewing his/her basic personal information, the patient information management module 102 allows the user to click the "account" button in menu A of the main page, and then the patient information management module 102 displays a basic personal information interface (as shown in the drawings).
In one embodiment, when the user inquires about his/her patients' information, the patient information management module 102 allows the user to click the "patient" button in menu A of the main page interface, and then the patient information management module 102 displays a patients' information interface (as shown in the drawings).
In one embodiment, after the user clicks the "Home follow-up" button in the patient information interface, the patient information management module 102 displays the preoperative and/or postoperative test results list of the user (i.e., the patient) (as shown in the drawings).
In one embodiment, when the user needs to perform eye monitoring, the patient information management module 102 allows the user to click the "Report" button in menu A of the main page, and then displays a main monitoring interface (as shown in the drawings).
Furthermore, after the user clicks "eye measurement" in the main monitoring interface, the patient information management module 102 displays a measurement sub-interface (as shown in the drawings).
In one embodiment, when measuring MRD1, the patient information management module 102 allows the user to click the "MRD1" button in the measurement sub-interface to display an MRD1 measurement interface (as shown in the drawings).
Furthermore, the MRD1 measurement interface allows the user to click the "Full-face" button and then choose to take photos of the user's full-face area (as shown in the drawings).
After the patient information management module 102 obtains the user's first eye image, it allows the user to click the "file input" button to display a data upload interface (as shown in the drawings) for uploading the first eye image to the cloud processing device 20.
In another embodiment, the measurement of MRD2 is substantially the same as the measurement of MRD1 in the above-mentioned embodiment. When measuring MRD2, the patient information management module 102 allows the user to click the "MRD2" button in the measurement sub-interface to display an MRD2 measurement interface (not shown), which allows the user to capture the first eye image of the left eye and/or the right eye with the client device 10, or to select the first eye image of the left eye and/or the right eye from the client device 10 (such as a smartphone). The first eye image is then sent to the patient information management module 102.
After the patient information management module 102 receives the user's first eye image, it allows the user to click the "file input" button to display a data upload interface (as shown in the drawings) for uploading the first eye image to the cloud processing device 20.
After the patient information management module 102 obtains the second eye image and the third eye image of the user, it allows the user to click the "file input" button to display a data upload interface (as shown in the drawings) for uploading the second eye image and the third eye image to the cloud processing device 20.
In one embodiment, after the cloud processing device 20 completes the measurement of the predicted values (i.e., MRD1, MRD2 and LF), the cloud processing device 20 sends an analysis report back to the measurement application 10a, and the patient information management module 102 presents the analysis report to the user via an analysis report interface (as shown in the drawings).
Furthermore, the patient information management module 102 allows the user to click the "upload and save report" button in the analysis report interface to upload the analysis report to the cloud processing device 20 for storage. A healthcare provider then uses another client device to obtain the analysis report from the cloud processing device 20 and, based on the analysis report, provides medical advice (such as explanations and suggestions from medical personnel and/or postoperative and home care instructions), which is transmitted via the cloud processing device 20 to the measurement application 10a in the client device 10 of the user. After that, the patient information management module 102 of the measurement application 10a displays the medical advice (as shown in the drawings).
As shown in the drawings, the notification management module 103 of the measurement application 10a provides the user with reminder notifications and health-related information.
Furthermore, when the user is a healthcare provider, the reminder notifications also include eye operation reminders, which cover preoperative and postoperative reminders for the healthcare provider, as shown in Table 2.
In another embodiment, when the user is a clinician or a clinic, the notification management module 103 allows the user to click the "Letter" icon on the notification interface to display a message sending interface (as shown in the drawings).
In one embodiment, referring to the drawings, the cloud processing device 20 includes a data transmission module 21, a pre-processing module 22 and an eye measure prediction module 23.
Furthermore, the pre-processing module 22 has a pupil coordinate regression model to determine the position of the corneal light reflex in the first eye image, the second eye image and the third eye image, respectively. After that, the pre-processing module 22 expands from the corneal light reflex position in the first eye image, the second eye image and the third eye image to automatically crop the images into the first eye orbit image, the second eye orbit image and the third eye orbit image (as shown in the drawings), respectively, and superimposes the second eye orbit image onto the third eye orbit image to generate the superimposed eye orbit image.
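By way of illustration, the following is a minimal sketch of how such cropping and superimposition could be implemented with OpenCV and NumPy. The crop half-sizes, the equal blending weights and the function names are assumptions for demonstration only; the disclosure does not specify these details of the pre-processing module 22.

```python
import cv2
import numpy as np

def crop_orbit(image: np.ndarray, reflex_xy: tuple, half_w: int = 128, half_h: int = 64) -> np.ndarray:
    """Crop a rectangular eye-orbit region centered on the corneal light reflex.
    The half-sizes of the crop window are illustrative assumptions."""
    x, y = reflex_xy
    h, w = image.shape[:2]
    return image[max(0, y - half_h):min(h, y + half_h),
                 max(0, x - half_w):min(w, x + half_w)]

def superimpose(second_orbit: np.ndarray, third_orbit: np.ndarray) -> np.ndarray:
    """Superimpose the up-gaze (second) orbit image onto the down-gaze (third)
    orbit image; the equal 0.5/0.5 blending weights are an assumption."""
    third_orbit = cv2.resize(third_orbit, (second_orbit.shape[1], second_orbit.shape[0]))
    return cv2.addWeighted(second_orbit, 0.5, third_orbit, 0.5, 0)
```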
In addition, the pupil coordinate regression model is trained by constructing a MobileNetV2 model, which enables the pupil coordinate regression model to learn from the first eye image, the second eye image and the third eye image to find the position of the corneal light reflex. In one embodiment, the eye measure prediction module 23 in the cloud processing device 20 includes a prediction model, and the prediction model is a deep learning model, for example, a convolutional neural network (CNN) model, wherein the eye measure prediction module 23 receives the first eye orbit image and the superimposed eye orbit image, and uses the prediction model to calculate the user's MRD1 and MRD2 based on the first eye orbit image, and LF based on the superimposed eye orbit image.
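Before turning to the prediction model, here is a hedged PyTorch sketch of the pupil coordinate regression model just described: a MobileNetV2 backbone whose classifier is replaced by a two-unit head regressing the (x, y) position of the corneal light reflex. The head design, the input size and the variable names are assumptions; only the use of MobileNetV2 comes from the disclosure.

```python
import torch
import torch.nn as nn
from torchvision import models

# MobileNetV2 backbone with the 1000-class classifier swapped for a 2-unit
# regression head predicting the (x, y) coordinates of the corneal light reflex.
pupil_regressor = models.mobilenet_v2(weights=None)
pupil_regressor.classifier[1] = nn.Linear(pupil_regressor.last_channel, 2)

# Example forward pass on a dummy eye image (batch of one, 3-channel, 224x224).
dummy_eye_image = torch.randn(1, 3, 224, 224)
reflex_xy = pupil_regressor(dummy_eye_image)  # shape (1, 2): predicted coordinates
```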
In one embodiment, the establishment of the prediction model of the eye measure prediction module 23 is described as follows.
1. Establishment of photographs and gold-standard measures (actual measures): a scale of 20×20 millimeters (mm) is placed on the dorsum of the subject's nose as a reference. Note that this scale is needed only for the gold-standard measurement, not for training the deep learning model or for determining the model's accuracy. Next, a photographing device (such as a smartphone or a camera) takes eye orbit photographs of the subject's bilateral eyes (with the subject standing or sitting; a total of 6 photos, covering viewing forward, upward gaze and downward gaze), with the shooting position about 20-30 centimeters (cm) away from the subject's eyes and the photographing device level with the subject's eyes, thereby simulating the distance between the patient and the doctor when MRD1, MRD2 and LF are manually measured with a hand-held ruler in the clinic.
Furthermore, the photographs of the eye orbits are enlarged on a computer so that a plurality of doctors can manually obtain the measures of MRD1, MRD2 and LF. The measures obtained by the plurality of doctors are averaged to respectively obtain the gold-standard measurement values (actual values) of MRD1, MRD2 and LF, and a plurality of labels are then generated from these gold-standard measurement values (actual values) to serve as input data for training the deep learning model. In addition, all MRD1 measures in cases of upper eyelid ptosis without a corneal light reflex (i.e., pupillary light reflex) are set to 0.
2. Establishment of the prediction model: the prediction model of the eye measure prediction module 23 includes SENet (Squeeze-and-Excitation Network) and a Convolutional Neural Network (CNN) model such as EfficientNet. In detail,
(1) Input whole-face images of the plurality of subjects and the plurality of labels to the eye measure prediction module 23, wherein the whole-face images of the plurality of subjects include primary-gaze (eyes viewing forward) images and images of the eyes gazing up and down. In one embodiment, the eye measure prediction module 23 carries out model training in mini-batch mode, e.g., inputting thirty-two training images at a time, wherein the mini-batch size is selected as the maximum number of images that can be processed given the memory consumption and the performance of the Graphics Processing Unit (GPU).
(2) The eye measure prediction module 23 confirms the whole-face images of the plurality of subjects (the file format can be .PNG) and the plurality of labels (the file format can be .CSV), and the plurality of labels are mapped to the whole-face images of the plurality of subjects.
(3) The eye measure prediction module 23 performs data pre-processing on the whole-face images of the plurality of subjects to crop them into the first eye orbit images, the second eye orbit images and the third eye orbit images of the plurality of subjects, respectively, and then superimposes the second eye orbit images onto the third eye orbit images to generate the superimposed eye orbit images of the plurality of subjects, so that the first eye orbit images and the superimposed eye orbit images of the plurality of subjects are used as a plurality of training images, wherein the first eye orbit images serve as the training input data for the prediction model to predict MRD1 and MRD2, and the superimposed eye orbit images serve as the training input data for the prediction model to predict LF. The pre-processing further includes the following steps (a minimal sketch of which is provided after this list):
(3-1) The eye measure prediction module 23 uses the bilinear interpolation method to adjust the plurality of training images to 256×256 pixels.
(3-2) The eye measure prediction module 23 performs a horizontal flip on the plurality of training images and rotates them randomly to increase the amount of data during training.
(3-3) The eye measure prediction module 23 uses five-fold cross-validation to estimate the performance of the model, dividing the plurality of training images into five equal parts, wherein four parts are used for training and one part is used for validation.
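A minimal sketch of steps (3-1) to (3-3), assuming a PyTorch/torchvision pipeline; the ±15° rotation range, the flip probability and the use of scikit-learn's KFold are illustrative assumptions not stated in the disclosure.

```python
from sklearn.model_selection import KFold
from torchvision import transforms

# (3-1) and (3-2): resize with bilinear interpolation to 256x256, then augment
# with a horizontal flip and a random rotation to enlarge the training data.
train_transform = transforms.Compose([
    transforms.Resize((256, 256), interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.RandomHorizontalFlip(p=0.5),   # flip probability is an assumption
    transforms.RandomRotation(degrees=15),    # rotation range is an assumption
    transforms.ToTensor(),
])

# (3-3): five-fold cross-validation -- four parts train, one part validates.
image_paths = [f"orbit_{i}.png" for i in range(100)]  # placeholder file names
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(image_paths)):
    print(f"fold {fold}: {len(train_idx)} training / {len(val_idx)} validation images")
```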
(4) The eye measure prediction module 23 utilizes the plurality of training images after pre-processing to train the prediction model, wherein the prediction model is established based on EfficientNet in combination with SENet; an architecture diagram illustrating the prediction model is shown in the drawings.
Furthermore, the eye measure prediction module 23 sets the dropout rate to 0.25-0.5 for regularization, and the learning rate is set based on cosine annealing and a one-cycle policy to adjust the step size of the prediction model training and thereby regulate the learning pace. Finally, the averaged output of the models integrating EfficientNet and SENet is used to obtain more accurate results and to minimize the deviation of the prediction errors, improving the prediction accuracy of the prediction model in the eye measure prediction module 23.
In one embodiment, the present disclosure uses a parameter optimization method to optimize the prediction model. For instance, the training process of the prediction model is optimized by the AdamW neural-network weight optimizer using weight decay and L2 regularization, wherein L2 regularization adds a penalty term composed of the squares of all weights in the prediction model to the loss function, controlled in combination with a specific hyperparameter. Furthermore, the loss function used in the present disclosure is the MSE (mean square error) loss, so that the prediction model described herein can be evaluated through the loss function.
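The following sketch shows how such a training configuration might look in PyTorch; the learning rate, weight-decay factor, step count and the stand-in model are illustrative assumptions, since the disclosure only specifies AdamW, MSE loss, a 0.25-0.5 dropout rate, and cosine-annealing/one-cycle scheduling.

```python
import torch
import torch.nn as nn

# Stand-in model; the actual prediction model is EfficientNet combined with SENet.
model = nn.Sequential(nn.Flatten(), nn.Dropout(p=0.25), nn.Linear(3 * 256 * 256, 2))

criterion = nn.MSELoss()  # MSE loss, as stated in the disclosure

# AdamW applies decoupled weight decay, playing the role of the penalty on the
# squared weights described above; lr and weight_decay values are assumptions.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

# One-cycle learning-rate policy with cosine annealing of the learning rate.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-2, total_steps=1000, anneal_strategy="cos")

# One illustrative training step on dummy data (mini-batch of 32, as in the text).
images = torch.randn(32, 3, 256, 256)
targets = torch.randn(32, 2)            # gold-standard measures, e.g., MRD1/MRD2
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```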
Further, the prediction model of the eye measure prediction module 23 is established by SENet in combination with EfficientNet, wherein EfficientNet is a fast and high-precision model whose architecture is scaled via the depth and width of the network and the resolution of the input image.
In one embodiment, increasing the resolution of the plurality of training images helps capture fine features and improves the accuracy of the model. Furthermore, increasing the depth of EfficientNet (that is, increasing the number of convolutional layers) allows the model to find various complex features in the inputted training images, handle more complex problems and generalize better; however, the deeper the network, the harder it is to train, and its effect on model accuracy diminishes as the depth increases. On the other hand, adjusting the width helps the model find fine-grained features and makes it easier to train; however, a model that is too wide has difficulty extracting higher-level features, and the gain in model accuracy quickly saturates with width (that is, the accuracy no longer improves).
In this regard, the eye measure prediction module 23 is based on EfficientNet in combination with SENet, which explicitly establishes the interdependence between features and channels and adopts a "feature reweighting" (feature recalibration) strategy. Specifically, through deep learning, SENet automatically learns the importance of each feature channel, enhances the useful features according to that importance, and suppresses features that are of little use to the current task. The core module of SENet thus learns the feature weights via the network and the loss, such that the feature weights of effective features are enlarged and the feature weights of invalid or insignificant features are reduced. The feature weights adjusted by SENet in this way are used to improve the accuracy of EfficientNet.
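To make this concrete, below is a minimal, self-contained PyTorch sketch of the SENet core module (a squeeze-and-excitation block) and one way it could be combined with an EfficientNet backbone and a regression head, per the architecture described in step (4). The reduction ratio of 16, the choice of EfficientNet-B0 and the head layout are assumptions; the disclosure does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn per-channel weights, enlarging weights of
    effective features and shrinking those of insignificant ones."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # "squeeze": global average pooling
        self.fc = nn.Sequential(              # "excitation": bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                     # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                    # recalibrate the feature map

class EyeMeasureNet(nn.Module):
    """EfficientNet backbone + SE reweighting + regression head (a sketch)."""
    def __init__(self, n_outputs: int = 2):   # 2 for (MRD1, MRD2); 1 for LF
        super().__init__()
        self.backbone = models.efficientnet_b0(weights=None).features  # 1280 channels
        self.se = SEBlock(1280)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(nn.Dropout(p=0.25), nn.Linear(1280, n_outputs))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.se(self.backbone(x))
        return self.head(self.pool(features).flatten(1))

# Sanity check: one forward pass on a dummy 256x256 orbit image.
print(EyeMeasureNet(2)(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 2])
```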
(5) The eye measure prediction module 23 completes the training of the prediction model.
Therefore, after the above-mentioned training, the prediction model of the eye measure prediction module 23 can accurately calculate MRD1 and MRD2 according to the first eye orbit image of the subject, and calculate LF according to the superimposed eye orbit image of the subject.
In one embodiment, the data transmission module 21 sends MRD1, MRD2 and LF of the user back to the measurement application 10a of the client device 10 for use in an analysis report interface (such as the analysis report interface described above).
In step S61, a doctor or patient uses the client device 10 with the measurement application 10a to capture the first eye image of the patient's left and right eyes (the image of the left and right eyes viewing forward), the second eye image (the image of the left and right eyes gazing up), and the third eye image (the image of the left and right eyes gazing down), or selects the stored first eye image, second eye image and third eye image of the patient from the client device 10.
In step S62, the measurement application 10a uploads the first eye image, the second eye image and the third eye image of the patient to a cloud processing device 20.
In step S63, the cloud processing device 20 receives the first eye image, the second eye image and the third eye image of the patient. The first eye image, the second eye image and the third eye image are pre-processed by being cropped into the rectangular first eye orbit image, second eye orbit image and third eye orbit image, and the second eye orbit image is superimposed onto the third eye orbit image to produce a superimposed eye orbit image.
In step S64, the cloud processing device 20 uses a prediction model to calculate MRD1 and MRD2 of the patient according to the first eye orbit image, and uses the prediction model to calculate LF according to the superimposed eye orbit image.
In step S65, the cloud processing device 20 sends the predicted measures such as MRD1, MRD2 and LF to the measurement application 10a of the client device 10, so as to present the predicted measures to the physician, enabling the physician to diagnose the patient based on the predicted measures.
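As a hedged, self-contained sketch of how steps S63 to S65 could be orchestrated on the cloud side, the following uses simple stand-ins for the pupil regressor, the cropping/superimposition helpers and the trained prediction models; all names and placeholder values are assumptions for illustration only.

```python
import numpy as np

# Stand-ins for the components sketched earlier; a real deployment would use
# the trained pupil regressor, the OpenCV helpers, and the trained models.
locate_reflex = lambda img: (img.shape[1] // 2, img.shape[0] // 2)  # assumed center
crop_orbit = lambda img, xy: img[max(0, xy[1] - 64):xy[1] + 64,
                                 max(0, xy[0] - 128):xy[0] + 128]
superimpose = lambda a, b: (a.astype(float) + b.astype(float)) / 2
mrd_model = lambda orbit: (3.5, 5.0)   # placeholder MRD1/MRD2 in mm
lf_model = lambda orbit: 12.0          # placeholder LF in mm

def predict_measures(first_img, second_img, third_img):
    """Steps S63-S64: crop each image around the corneal light reflex,
    superimpose the up-gaze and down-gaze orbits, then predict the measures."""
    orbits = [crop_orbit(img, locate_reflex(img))
              for img in (first_img, second_img, third_img)]
    superimposed = superimpose(orbits[1], orbits[2])
    mrd1, mrd2 = mrd_model(orbits[0])   # predicted from the forward-gaze orbit
    lf = lf_model(superimposed)         # predicted from the superimposed orbit
    # Step S65: this result is what the cloud sends back to the measurement app.
    return {"MRD1": mrd1, "MRD2": mrd2, "LF": lf}

images = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
print(predict_measures(*images))
```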
In addition, the present disclosure also discloses a computer-readable medium, which is applied to a computing device or computer having a processor (e.g., a CPU, GPU, etc.) and/or a memory and stores instructions, such that when the computing device or computer executes the instructions of the computer-readable medium via the processor and/or memory, the above-mentioned methods and steps are performed.
The following is an embodiment of the present disclosure, described with reference to the drawings.
In an embodiment, during clinical practice, the physician uses a smartphone (i.e., the client device 10) with the measurement application 10a to take a first eye image of the left and right eyes of a patient (as shown in the drawings), a second eye image of the left and right eyes gazing up, and a third eye image of the left and right eyes gazing down, and uploads the three eye images to the cloud processing device 20 via the measurement application 10a.
After the cloud processing device 20 receives the first eye image, the second eye image and the third eye image of the patient, the pre-processing module 22 in the cloud processing device 20 pre-processes them by cropping the first eye image, the second eye image and the third eye image into the first eye orbit image, the second eye orbit image and the third eye orbit image (as shown in the drawings), and superimposing the second eye orbit image onto the third eye orbit image to generate the superimposed eye orbit image.
Afterwards, the eye measure prediction module 23 in the cloud processing device 20 uses its prediction model to calculate MRD1 and MRD2 according to the patient's first eye orbit image, and calculates LF according to the patient's superimposed eye orbit image. The data transmission module 21 in the cloud processing device 20 sends the patient's MRD1, MRD2 and LF back to the measurement application 10a of the smartphone, so that the measurement application 10a displays the predicted measures of MRD1, MRD2 and LF via the analysis report interface (as shown in the drawings).
To sum up, the eye measurement system, the method and the computer-readable medium of the present disclosure receive, via the cloud processing device, the subject's first eye image (left eye and/or right eye viewing forward), second eye image (left eye and/or right eye gazing up), and third eye image (left eye and/or right eye gazing down) uploaded by a clinician or the subject himself/herself using the measurement application in the client device. After pre-processing by the cloud processing device, the first eye orbit image and the superimposed eye orbit image of the patient are obtained, so that the prediction model can obtain MRD1, MRD2 and LF according to the first eye orbit image and the superimposed eye orbit image, and provide MRD1, MRD2 and LF to the clinicians as a basis for diagnosis. Therefore, compared with the prior art, the present disclosure is not confined to the places where the eye images are taken, and the prediction model can accurately obtain the subject's eye measures, thereby providing doctors with a clear diagnostic basis.
In addition, the eye measurement system, the method and the computer-readable medium of the present disclosure have the following advantages or technical effects.
1. The present disclosure utilizes the deep learning prediction model to measure MRD1, MRD2 and LF on close-up eye images (such as eye orbit photos). In addition to recording the subject's eye state in detail, the prediction model improves the accuracy of measurement and reduces the detection time, thereby improving the efficiency of clinics.
2. Through the measurement application designed in the present disclosure, a device such as a smartphone can be used to obtain the predicted eye measures of the patient on any occasion. The patient's eye photos taken by the smartphone are transmitted to the cloud processing device, where the artificial intelligence deep learning module automatically analyzes the eye images to accurately assess the patient's eye state, and the obtained predicted measures (MRD1, MRD2 and LF values) are then sent back to the client device to provide the clinicians with a basis for diagnosis.
3. The measurement application designed by the present disclosure can help to record the patient's eye images, and receive reminders and health-related information provided by the healthcare providers, thereby improving the quality and efficiency of patient care.
4. The present disclosure provides users (such as patients, their family members or healthcare providers) with account management functions (such as login, password change, etc.), patient information management functions (e.g., patient information, eye monitoring, eye operation information, etc.) and notification management functions (e.g., reminder notifications, health-related information, etc.) via the user interface presented by the measurement application, giving users a good experience of the application and a complete set of application functions.
5. The present disclosure utilizes the pupil coordinate regression model to confirm the corneal light reflex position in the eye image, thereby automatically cropping the first eye image, the second eye image and the third eye image based on the corneal light reflex position to obtain the first eye orbit image, the second eye orbit image and the third eye orbit image, which can be used as the source of measurement, or as the input materials for the training of the prediction model.
6. The prediction model of the present disclosure is established based on EfficientNet in combination with SENet. Through SENet, the interdependence between features and channels is explicitly established to increase the feature weights of effective features and reduce the feature weights of invalid or insignificant features, thereby providing adjusted feature weights to EfficientNet. Therefore, EfficientNet can significantly improve its prediction accuracy based on the adjusted feature weights.
The above-mentioned embodiments are illustrations of the principles and effects of the present disclosure, and are not intended to limit the present disclosure. Any person skilled in the art can modify and change the above-mentioned embodiments without departing from the spirit and scope of the present disclosure. Therefore, the protection scope of the present disclosure should be as set forth in the claims of the present disclosure.
This application claims priority to U.S. Provisional Patent Application No. 63/294,924, filed in December 2021.