The present disclosure relates to a technique for estimating a recovery level of a patient.
While healthcare costs are putting pressure on national finances worldwide, the number of patients with cerebrovascular diseases in Japan stands at 1,115,000, with annual healthcare costs amounting to more than 1.8 trillion yen. The number of stroke patients is expected to increase as the birthrate declines and the population ages; however, medical resources are limited, and there is a strong need for operational efficiency not only in acute care hospitals but also in convalescent rehabilitation hospitals.
Because cerebral infarction can cause serious sequelae unless emergency transport and treatment are provided promptly after onset, it is important to detect it and take measures as early as possible while symptoms are mild. Approximately half of the patients with cerebral infarction will develop cerebral infarction again within 10 years, and the recurrence is likely to be of the same type as the first infarction. Therefore, there is also a strong need for early detection of signs of recurrence.
However, in order to measure a recovery level of a patient in a convalescent rehabilitation hospital, it is necessary for a medical professional to accompany the patient and conduct various tests, which is time-consuming and labor-intensive. Accordingly, the frequency of measuring the recovery level is reduced, feedback to patients and providers is lost, and patients become less motivated to rehabilitate, resulting in a reduced amount of rehabilitation and delayed review of inappropriate rehabilitation plans, which reduces the effectiveness of recovery. In addition, signs of recurrence are difficult for the patient to recognize on his or her own and often do not appear at the time of periodic examinations and consultations.
Patent Document 1 describes a more objective quantification of recovery status related to gait, based on a movement of a patient and eye movements while walking. Patent Document 2 describes the estimation of psychological states from features based on eye movements. Patent Document 3 describes determining reflexivity of the eye movements under predetermined conditions. Patent Document 4 describes estimating a recovery transition based on movement information quantified from data of a rehabilitation subject.
Traditionally, estimation of a recovery level of a patient has been conducted by having a medical professional or a specialist evaluate, visually or by palpation, the patient performing a given operation and thereby quantify the recovery status. It is also known to quantify a recovery status of a patient in a remote location by transmitting a video of movements of the patient and a human body posture analysis result as data, and allowing the medical professional or the specialist to visually evaluate the data. In addition, Patent Document 1 describes a medical information processing system which quantifies a recovery status by analyzing a manner in which a human body moves based on a video of a walking scene of the patient.
In order to estimate the recovery level using a traditional method, the patient needs to go to a hospital where the medical personnel and the specialist are available. However, many patients have difficulty going to the hospital for a variety of reasons. Transmitting patient data reduces the patient's hospital visits, but it requires a lot of time and effort from the medical staff and other professionals to visually evaluate the patient data. Moreover, a method of quantifying the recovery status based on the video of the walking scene does not require much effort from the medical personnel and the like, but it can only evaluate a patient who has recovered to a level where the patient can walk, and there is also the problem of a risk of falling when walking.
It is one object of the present disclosure to quantitatively estimate the recovery level without burdening the patient or the medical professional.
According to an example aspect of the present disclosure, there is provided a recovery level estimation device including:
According to another example aspect of the present disclosure, there is provided a method including:
According to a further example aspect of the present disclosure, there is provided a recording medium storing a program, the program causing a computer to perform a process including:
According to the present disclosure, it becomes possible to quantitatively estimate a recovery level without burdening a patient or a medical professional.
In the following, example embodiments will be described with reference to the accompanying drawings.
(Configuration)
The interface 11 exchanges data with the camera 2. The interface 11 is used when receiving the captured images D1 generated by the camera 2. Moreover, the interface 11 is used when the recovery level estimation device 1 transmits and receives data to and from a predetermined device connected by a wired or wireless communication.
The processor 12 corresponds to one or more processors each being a computer such as a CPU (Central Processing Unit), and controls the whole of the recovery level estimation device 1 by executing programs prepared in advance. The memory 13 is formed by a ROM (Read Only Memory) and a RAM (Random Access Memory). The memory 13 stores the programs executed by the processor 12. Moreover, the memory 13 is used as a working memory during executions of various processes performed by the processor 12.
The recording medium 14 is a non-volatile and non-transitory recording medium such as a disk-shaped recording medium or a semiconductor memory and is formed to be detachable with respect to the recovery level estimation device 1. The recording medium 14 records the various programs executed by the processor 12. When the recovery level estimation device 1 executes a recovery level estimation process, a program recorded in the recording medium 14 is loaded into the memory 13 and executed by the processor 12.
The display section 15 is, for instance, an LCD (Liquid Crystal Display) and displays the estimation recovery level or the like which indicates a result of estimating the recovery level of the patient. The display section 15 may display the task of the third example embodiment to be described later. The input section 16 is a keyboard, a mouse, a touch panel, or the like, and is used by an operator such as a medical professional or a specialist.
The recovery level estimation device 1 generates and updates a recovery level estimation model which learns a relationship between an eye movement feature of the patient and the recovery level by referring to eye movements. In detail, the recovery level estimation device 1 can be applied, for instance, to estimate the recovery level achieved by rehabilitation from sequelae caused by cerebral infarction. A learning algorithm may use any machine learning technique such as a neural network, an SVM (Support Vector Machine), logistic regression, or the like. In addition, the recovery level estimation device 1 estimates the recovery level by using the recovery level estimation model to calculate the estimation recovery level of the patient based on the eye movement feature of the patient.
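By way of a non-limiting illustration, the following sketch shows one possible realization of such a recovery level estimation model using an off-the-shelf support vector regressor; the function names and the feature layout are assumptions made for illustration only, and any of the learning algorithms mentioned above may be substituted.

```python
# Minimal sketch of a recovery level estimation model (hypothetical feature layout).
# Any regressor (neural network, SVM, linear model) could be used instead of SVR.
import numpy as np
from sklearn.svm import SVR

def train_recovery_model(eye_features: np.ndarray, recovery_labels: np.ndarray) -> SVR:
    """eye_features: (n_samples, n_features) eye movement features.
    recovery_labels: (n_samples,) correct recovery levels (e.g. BBS scores)."""
    model = SVR(kernel="rbf", C=1.0)
    model.fit(eye_features, recovery_labels)
    return model

def estimate_recovery_level(model: SVR, eye_feature: np.ndarray) -> float:
    """Return the estimation recovery level for one patient's eye movement feature."""
    return float(model.predict(eye_feature.reshape(1, -1))[0])
```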
The eye movement feature storage unit 21 stores the eye movement feature used as input data in training of the recovery level estimation model.
As illustrated in the drawings, the eye movement feature includes, for instance, eye vibration information concerning vibrations of the eyes, information concerning a bias of movement directions of the eyes, information concerning a misalignment of right and left movements, and visual field defect information concerning a visual field defect.
The recovery level correct answer information storage unit 23 stores correct answer information (correct answer label) used in the learning process of training the recovery level estimation model. In detail, the recovery level correct answer information storage unit 23 stores correct answer information for the recovery level for each eye movement feature stored in the eye movement feature storage unit 21. For the recovery level, for instance, a BBS (Berg Balance Scale), a TUG (Timed Up and Go test), a FIM (Functional Independence Measure), or the like can be arbitrarily applied.
The recovery level estimation model update unit 22 trains the recovery level estimation model using training data prepared in advance. Here, the training data include the input data and correct answer data. The eye movement feature stored in the eye movement feature storage unit 21 is used as the input data, and the correct answer information for the recovery level stored in the recovery level correct answer information storage unit 23 is used as the correct answer data. In detail, the recovery level estimation model update unit 22 acquires the eye movement feature from the eye movement feature storage unit 21, and acquires the correct answer information for the recovery level corresponding to the eye movement feature from the recovery level correct answer information storage unit 23. Next, the recovery level estimation model update unit 22 calculates the estimation recovery level of the patient based on the acquired eye movement feature by using the recovery level estimation model, and matches the calculated estimation recovery level with the correct answer information for the recovery level. After that, the recovery level estimation model update unit 22 updates the recovery level estimation model to reduce an error between the recovery level calculated by the recovery level estimation model and the correct answer information for the recovery level. The recovery level estimation model update unit 22 overwrites and stores the updated recovery level estimation model in which an estimation accuracy of the recovery level is improved, in the recovery level estimation model storage unit 24.
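As a non-limiting sketch of the update step described above, the following code reduces the error between the calculated estimation recovery level and the correct answer information by a single gradient step on a simple linear model; the variable names and the learning rate are assumptions for illustration and do not represent the actual implementation of the recovery level estimation model update unit 22.

```python
import numpy as np

def update_recovery_model(weights: np.ndarray, bias: float,
                          eye_feature: np.ndarray, correct_level: float,
                          learning_rate: float = 0.01):
    """One update of a simple linear recovery level estimation model."""
    estimated_level = float(weights @ eye_feature + bias)   # calculate estimation recovery level
    error = estimated_level - correct_level                 # match against the correct answer
    # Gradient step on the squared error so that the next estimate is closer to the label.
    weights = weights - learning_rate * error * eye_feature
    bias = bias - learning_rate * error
    return weights, bias, error ** 2
```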
The recovery level estimation model storage unit 24 stores the updated recovery level estimation model which is trained and updated by the recovery level estimation model update unit 22.
The image acquisition unit 25 acquires the captured images D1 which are obtained by imaging the eyes of the patient and supplied from the camera 2. Note that when the captured images D1 captured by the camera 2 are collected and stored in a database or the like, the image acquisition unit 25 may acquire the captured images D1 from the database or the like.
The eye movement feature extraction unit 26 performs a predetermined image process with respect to the captured images D1 acquired by the image acquisition unit 25, and extracts the eye movement feature of the patient. In detail, the eye movement feature extraction unit 26 extracts time series information of a vibration pattern of the eyes in the captured images D1 as the eye movement feature.
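One plausible, non-limiting way to extract such time series information is to track the pupil center in each captured image and record its frame-to-frame displacement; the threshold value and function name below are assumptions for illustration, and other image processes may equally be used.

```python
import cv2
import numpy as np

def extract_vibration_feature(frames: list) -> np.ndarray:
    """Return a time series of pupil-center displacements (a simple vibration pattern).
    Assumes the pupil is visible in every frame of the captured images D1."""
    centers = []
    for frame in frames:                      # frames: BGR images of one eye
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Isolate the dark pupil with a fixed threshold (value chosen for illustration).
        _, pupil = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(pupil)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    centers = np.asarray(centers)
    # Frame-to-frame displacement of the pupil center approximates the vibration pattern.
    return np.linalg.norm(np.diff(centers, axis=0), axis=1)
```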
The recovery level estimation unit 27 calculates the estimation recovery level of the patient based on the eye movement feature which the eye movement feature extraction unit 26 extracts, by using the recovery level estimation model. The calculated estimation recovery level is stored in the memory 13 or the like in association with the information concerning the patient. The alert output unit 28 refers to the memory 13, and outputs an alert for the patient to the display section 15 when the estimation recovery level of the patient deteriorates below a threshold value. In a case where a time period is set for the alert, the alert is output when the estimation recovery level of the patient deteriorates below the threshold value within the set time period.
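A minimal sketch of the alert decision, assuming the estimation recovery levels are stored per patient as timestamped values, is shown below; the function name and the default time period are illustrative assumptions.

```python
from datetime import datetime, timedelta

def should_alert(history, threshold: float, period: timedelta = timedelta(days=7)) -> bool:
    """history: list of (timestamp, estimation recovery level) stored for one patient.
    Returns True if the level fell below the threshold within the given time period."""
    now = datetime.now()
    recent_levels = [level for t, level in history if now - t <= period]
    return any(level < threshold for level in recent_levels)
```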
(Learning Process)
Next, the learning process by the recovery level estimation device 1 will be described.
First, the recovery level estimation device 1 acquires the eye movement feature from the eye movement feature storage unit 21, and acquires the correct answer information for the recovery level with respect to the eye movement feature from the recovery level correct answer information storage unit 23 (step S101). Next, the recovery level estimation device 1 calculates the estimation recovery level based on the acquired eye movement feature by using the recovery level estimation model, and matches the calculated estimation recovery level with the correct answer information for the recovery level (step S102). After that, the recovery level estimation device 1 updates the recovery level estimation model to reduce the error between the estimation recovery level calculated by the recovery level estimation model and the correct answer information for the recovery level (step S103). The recovery level estimation device 1 updates the recovery level estimation model so as to improve the estimation accuracy by repeating this learning process while changing the training data.
(Recovery Level Estimation Process)
Next, the recovery level estimation process by the recovery level estimation device 1 will be described.
First, the recovery level estimation device 1 acquires the captured images D1 obtained by capturing the eyes of the patient (step S201). Next, the recovery level estimation device 1 extracts the eye movement feature by an image process from the captured images D1 being acquired (step S202). Next, the recovery level estimation device 1 calculates the estimation recovery level based on the extracted eye movement feature by using the recovery level estimation model (step S203). The estimation recovery level is presented to the patient, the medical professional, and the like in any manner. Accordingly, it is possible for the recovery level estimation device 1 to estimate the recovery level of the patient based on the captured images D1 obtained by capturing the eyes even in the absence of the medical professional or the specialist, and thus it is possible to reduce a burden on the medical professional or the like. Moreover, since a daily recovery level can be estimated even in a seated position, the recovery level estimation device 1 can be applied to patients who have difficulty walking independently, without requiring hospital visits or incurring the risk of falling.
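By way of a non-limiting illustration, the following sketch combines steps S201 to S203 into a single function, assuming a feature extractor and a trained model with a scikit-learn style predict interface; the names and the fixed-length feature assumption are hypothetical.

```python
from typing import Callable, Sequence
import numpy as np

def estimate_from_images(frames: Sequence,                      # step S201: captured images D1
                         extract_feature: Callable[[Sequence], np.ndarray],
                         model) -> float:
    """Run the estimation process end to end for one patient."""
    feature = extract_feature(frames)                           # step S202: eye movement feature
    level = float(model.predict(feature.reshape(1, -1))[0])     # step S203: estimation recovery level
    return level
```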
Note that the recovery level estimation device 1 stores the calculated estimation recovery level in the memory 13 or the like for each patient, and outputs an alert for the patient to the display section 15 or the like when the estimation recovery level of the patient becomes worse than the threshold value.
As described above, according to the recovery level estimation device 1 of the first example embodiment, it is possible for the patient to easily and quantitatively measure the estimation recovery level daily at home or elsewhere, and to objectively visualize their daily recovery level. Therefore, it can be expected to increase an amount of rehabilitation due to improved patient motivation for the rehabilitation, and to improve a quality of rehabilitation through frequent revisions of a rehabilitation plan, thereby improving the effectiveness of recovery. In addition, it is possible to detect an abnormality such as a sign of a recurrent cerebral infarction at an early stage, without waiting for an examination or a consultation by the medical professional. Examples of industrial applications of the recovery level estimation device 1 include remote instruction and management of the rehabilitation, and the like.
(Configuration)
A recovery level estimation device 1x of the second example embodiment utilizes patient information concerning a patient, such as an attribute and a recovery record, in addition to an eye movement feature, in estimating a recovery level of the patient. Since a schematic configuration and a hardware configuration of the recovery level estimation device 1x are the same as those of the first example embodiment, the explanations thereof will be omitted.
The recovery level estimation device 1x of the second example embodiment generates and updates the recovery level estimation model which estimates the recovery level based on the eye movement feature and the patient information of the patient. The learning algorithm may use any machine learning technique such as the neural network, the SVM, the logistic regression, or the like. In addition, the recovery level estimation device 1x estimates the recovery level by using the recovery level estimation model to calculate the estimation recovery level of the patient based on the eye movement feature of the patient and the patient information.
The patient information storage unit 39 stores the patient information concerning the patient. The patient information includes, for instance, attributes such as a gender and an age, and previous recovery records of the patient including a history of the recovery level, a disease name, symptoms, rehabilitation contents, and the like. The patient information storage unit 39 stores the patient information in association with identification information for each patient.
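As a non-limiting illustration, the patient information may be held in a record such as the following; the field names are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PatientInfo:
    """Patient information stored in association with a patient ID (illustrative fields)."""
    patient_id: str
    gender: str
    age: int
    disease_name: str
    symptoms: List[str] = field(default_factory=list)
    recovery_level_history: List[float] = field(default_factory=list)   # e.g. past BBS scores
    rehabilitation_contents: List[str] = field(default_factory=list)
```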
The recovery level correct answer information storage unit 33 stores the correct answer information for each of respective recovery levels corresponding to combinations of the patient information and the eye movement feature.
The recovery level estimation model update unit 32 trains and updates the recovery level estimation model based on the training data prepared in advance. Here, the training data includes the input data and the correct answer data. In the second example embodiment, the eye movement features stored in the eye movement feature storage unit 31 and the patient information stored in the patient information storage unit 39 are used as the input data. The recovery level correct answer information storage unit 33 stores the correct answer information for the recovery level corresponding to each combination of the eye movement feature and the patient information, and the correct answer information is used as the correct answer data. In detail, the recovery level estimation model update unit 32 acquires the eye movement feature from the eye movement feature storage unit 31, and acquires the patient information from the patient information storage unit 39. Moreover, the recovery level estimation model update unit 32 acquires the correct answer information for the recovery level corresponding to the acquired patient information and the eye movement feature, from the recovery level correct answer information storage unit 33. Next, the recovery level estimation model update unit 32 calculates the estimation recovery level of the patient based on the eye movement feature and the patient information by using the recovery level estimation model, and matches the estimation recovery level with the correct answer information for the recovery level. After that, the recovery level estimation model update unit 32 updates the recovery level estimation model in order to reduce an error between the recovery level calculated by the recovery level estimation model and the correct answer information for the recovery level. The updated recovery level estimation model is stored in the recovery level estimation model storage unit 34.
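By way of a non-limiting illustration, the combination of the eye movement feature and the patient information used as the input data may be formed by concatenating both into a single vector, as in the following sketch; the encoding of the attributes is one possible choice made for illustration.

```python
import numpy as np

def build_input_vector(eye_feature: np.ndarray, gender: str, age: int,
                       last_recovery_level: float) -> np.ndarray:
    """Concatenate the eye movement feature with encoded patient information."""
    gender_code = 1.0 if gender == "female" else 0.0   # simple illustrative encoding
    patient_part = np.array([gender_code, float(age), last_recovery_level])
    return np.concatenate([eye_feature, patient_part])
```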
The recovery level estimation unit 37 retrieves the patient information of a certain patient from the patient information storage unit 39, and retrieves the eye movement feature of the certain patient from the eye movement feature extraction unit 36. Next, the recovery level estimation unit 37 calculates the estimation recovery level of the certain patient based on the eye movement feature and the patient information by using the recovery level estimation model. The calculated estimation recovery level is stored in the memory 13 or the like in association with the identification information of the certain patient.
Since the eye movement feature storage unit 31, the recovery level estimation model storage unit 34, the image acquisition unit 35, the eye movement feature extraction unit 36, and the alert output unit 38 are the same as in the first example embodiment, the explanations thereof will be omitted.
(Learning Process)
Next, the learning process by the recovery level estimation device 1x will be described.
First, the recovery level estimation device 1x acquires the patient information of a certain patient from the patient information storage unit 39, and acquires the eye movement feature of the patient from the eye movement feature storage unit 31 (step S301). Next, the recovery level estimation device 1x acquires the correct answer information of the recovery level for the patient information and the eye movement feature from the recovery level correct answer information storage unit 33 (step S302). Subsequently, the recovery level estimation device 1x calculates the estimation recovery level of the patient based on the eye movement feature and the patient information, and matches the estimation recovery level with the correct answer information for the recovery level (step S303). After that, the recovery level estimation device 1x updates the recovery level estimation model in order to reduce the error between the estimation recovery level calculated by the recovery level estimation model and the correct answer information of the recovery level (step S304). The recovery level estimation device 1x updates the recovery level estimation model so as to improve the estimation accuracy by repeating the learning process while changing the training data.
(Recovery Level Estimation Process)
Next, the recovery level estimation process by the recovery level estimation device 1x will be described.
First, the recovery level estimation device 1x acquires the captured images D1 obtained by capturing the eyes of the patient (step S401). Next, the recovery level estimation device 1x extracts the eye movement feature from the captured images D1 being acquired, by an image process (step S402). Subsequently, the recovery level estimation device 1x acquires the patient information of the patient from the patient information storage unit 39 (step S403). Next, the recovery level estimation device 1x calculates the estimation recovery level of the patient from the extracted eye movement feature and the acquired patient information by using the recovery level estimation model (step S404). After that, the recovery level estimation process is terminated. The estimation recovery level is presented to the patient, the medical professional, or the like in any manner.
Note that the recovery level estimation device 1x stores the calculated estimation recovery level in the memory 13 or the like for each patient, and outputs an alert for the patient to the display section 15 or the like when the estimation recovery level of the patient is worse than the threshold value.
As described above, according to the recovery level estimation device 1x of the second example embodiment, since the recovery level estimation model which estimates the recovery level based on the eye movement feature and the patient information is used, it is possible to estimate the recovery level in consideration of the individuality and features of each patient.
(Configuration)
A recovery level estimation device 1y of a third example embodiment presents a task in capturing eyes of a patient. The task corresponds to a predetermined condition or a task related to the eye movement. By presenting the patient with the task in a case of capturing images of the eyes, the recovery level estimation device 1y is capable of capturing images from which the eye movement feature necessary for estimating the recovery level is easily extracted.
Incidentally, unlike the first example embodiment and the second example embodiment, the recovery level estimation device 1y of the third example embodiment internally includes the camera 2. The interface 11, the processor 12, the memory 13, the recording medium 14, the display section 15, and the input section 16 are the same as those of the first example embodiment and the second example embodiment, and explanations thereof will be omitted.
By referring to the eye movement, the recovery level estimation device 1y generates and updates the recovery level estimation model which has been trained regarding a relationship between the eye movement feature and the recovery level. The learning algorithm may use any machine learning technique such as the neural network, the SVM, the logistic regression, or the like. Moreover, the recovery level estimation device 1y presents a task concerning the eye movement to the patient, and acquires the captured images D1 which capture the eyes of the patient to whom the task has been presented. Accordingly, the recovery level estimation device 1y estimates the recovery level by calculating the estimation recovery level of the patient from the eye movement feature of the patient based on the captured images D1 being acquired, by using the recovery level estimation model.
The task presentation unit 49 presents the task to the patient on the display section 15. The task is a predetermined condition or a task related to the eye movement, and may be arbitrarily set such as “viewing a predetermined image with variation”, “following a moving light spot with the eyes”, or the like, for instance.
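As a non-limiting illustration of the task of following a moving light spot with the eyes, the trajectory of the light spot presented on the display section 15 may be generated as in the following sketch; the duration, frame rate, amplitude, and frequency are illustrative assumptions.

```python
import numpy as np

def light_spot_trajectory(duration_s: float = 10.0, fps: int = 60,
                          amplitude_px: float = 200.0, freq_hz: float = 0.4):
    """Horizontal positions (pixels, relative to screen center) of a moving light spot
    that the patient is asked to follow with the eyes."""
    t = np.arange(0.0, duration_s, 1.0 / fps)
    x = amplitude_px * np.sin(2.0 * np.pi * freq_hz * t)   # smooth sinusoidal sweep
    return t, x
```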
The image acquisition unit 45 acquires the captured images D1 by capturing, with the camera 2 built into the recovery level estimation device 1y, the eyes of the patient moving in accordance with the task.
Note that since the eye movement feature storage unit 41, the recovery level estimation model update unit 42, the recovery level correct answer information storage unit 43, the recovery level estimation model storage unit 44, the eye movement feature extraction unit 46, the recovery level estimation unit 47, and the alert output unit 48 are the same as those in the first example embodiment, the explanations thereof will be omitted. Since the learning process by the recovery level estimation device 1y is the same as that in the first example embodiment, the explanations thereof will be omitted.
(Recovery Level Estimation Process)
Next, a recovery level estimation process by the recovery level estimation device 1y will be described.
First, the recovery level estimation device 1y presents the task to the patient using the display section 15 or the like (step S501). Next, the recovery level estimation device 1y captures the eyes of the patient to whom the task is presented, by the camera 2, and acquires the captured images D1 (step S502). In addition, the recovery level estimation device 1y extracts the eye movement feature from the captured images D1 which have been acquired, by the image process (step S503). Subsequently, the recovery level estimation device 1y calculates the estimation recovery level of the patient based on the extracted eye movement feature by using the recovery level estimation model (step S504). The estimation recovery level is presented to the patient, the medical professional, and the like in any manner. By presenting a predetermined task as described above, it is possible for the recovery level estimation device 1y to acquire the captured images D1 from which the eye movement feature is easily extracted.
Note that the recovery level estimation device 1y stores the calculated estimation recovery level in the memory 13 or the like for each patient, and outputs an alert for the patient to the display section 15 or the like when the estimation recovery level of the patient becomes worse than the threshold value.
Moreover, in the third example embodiment, for convenience of explanation, the recovery level estimation device 1y incorporates the camera 2 and presents the task on the display section 15. However, the present disclosure is not limited thereto, and the recovery level estimation device need not internally include the camera 2 and may instead be connected to an external camera 2 by a wired or wireless communication to exchange data. In this case, the recovery level estimation device 1y outputs the task for the patient to the camera 2, and acquires the captured images D1 which the camera 2 has captured.
Moreover, the recovery level estimation device 1y in the third example embodiment may use the patient information, similar to the recovery level estimation model described in the second example embodiment. Furthermore, the recovery level estimation device 1 in the first example embodiment and the recovery level estimation device 1x in the second example embodiment may present the task described in this example embodiment.
According to the recovery level estimation device 60 of the fourth example embodiment, based on the images obtained by capturing the eyes of the patient, it is possible to estimate the recovery level of the patient with a predetermined disease.
A part or all of the example embodiments described above may also be described as the following supplementary notes, but not limited thereto.
(Supplementary Note 1)
A recovery level estimation device comprising:
(Supplementary Note 2)
The recovery level estimation device according to supplementary note 1, wherein the eye movement feature includes eye vibration information concerning vibrations of the eyes.
(Supplementary Note 3)
The recovery level estimation device according to supplementary note 1 or 2, wherein the eye movement feature includes information concerning one or more of a bias of movement directions of the eyes and a misalignment of right and left movements.
(Supplementary Note 4)
The recovery level estimation device according to any one of supplementary notes 1 to 3, further comprising a task presentation means configured to present a task concerning eye movements, wherein the image acquisition means acquires the images of the eyes of the patient to whom the task is presented, and the eye movement feature extraction means extracts the eye movement feature in the task based on the images.
(Supplementary Note 5)
The recovery level estimation device according to supplementary note 4, wherein the eye movement feature includes visual field defect information concerning a visual field defect.
(Supplementary Note 6)
The recovery level estimation device according to supplementary note 1, further comprising a patient information storage means configured to store patient information concerning one or more of an attribute of the patient and previous recovery records of the patient, wherein the recovery level estimation means estimates a recovery level of the patient based on the patient information and the eye movement feature.
(Supplementary Note 7)
The recovery level estimation device according to supplementary note 1, further comprising an alert output means configured to output an alert in response to the recovery level of the patient becoming worse than a threshold value.
(Supplementary Note 8)
A method comprising:
(Supplementary Note 9)
A recording medium storing a program, the program causing a computer to perform a process comprising:
While the disclosure has been described with reference to the example embodiments and examples, the disclosure is not limited to the above example embodiments and examples. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the claims.
This application is a Continuation of U.S. application Ser. No. 18/278,959 filed on Aug. 25, 2023, which is a National Stage Entry of PCT/JP2021/025427 filed on Jul. 6, 2021, the contents of all of which are incorporated herein by reference, in their entirety.