The disclosure relates to a medical system. More particularly, the disclosure relates to a computer-aided medical system to generate a medical prediction based on symptom inputs.
Recently, the concept of a computer-aided medical system has emerged to facilitate self-diagnosis for patients. The computer-aided medical system may request patients to provide some information and then attempt to diagnose potential diseases based on the interactions with those patients. In some cases, the patients do not know how to describe their health conditions, or the descriptions provided by the patients may not be understandable to the computer-aided medical system.
The disclosure provides a medical system. The medical system includes an interaction interface and an analysis engine. The interaction interface is configured for receiving an initial symptom. The analysis engine communicates with the interaction interface. The analysis engine includes a prediction module. The prediction module is configured for generating symptom inquiries to be displayed on the interaction interface according to a prediction model and the initial symptom. The interaction interface is configured for receiving responses corresponding to the symptom inquiries. The prediction module is further configured to generate a result prediction according to the prediction model, the initial symptom and the responses.
In an embodiment, the prediction module is configured to generate a first symptom inquiry according to the prediction model and the initial symptom. The first symptom inquiry is displayed on the interaction interface. The interaction interface is configured to receive a first response corresponding to the first symptom inquiry. The prediction module is further configured to generate a second symptom inquiry according to the prediction model, the initial symptom and the first response. The second symptom inquiry is displayed on the interaction interface. The interaction interface is configured to receive a second response corresponding to the second symptom inquiry. The prediction module is configured to generate the result prediction according to the prediction model, the initial symptom, the first response and the second response.
In an embodiment, the medical system further includes a learning module configured for generating the prediction model according to training data. The training data includes known medical records. The learning module utilizes the known medical records to train the prediction model.
In an embodiment, the training data further include a user feedback input collected by the interaction interface, a doctor diagnosis record received from an external server or a prediction logfile generated by the prediction module. The learning module further updates the prediction model according to the user feedback input, the doctor diagnosis record or the prediction logfile.
In an embodiment, the result prediction comprises at least one of a disease prediction and a medical department suggestion matching the disease prediction, wherein the disease prediction comprises a disease name or a list of disease names ranked by probability.
In an embodiment, after the result prediction is displayed on the interaction interface, the interaction interface is configured to receive a user command in response to the result prediction. The medical system is configured to send a medical registration request corresponding to the user command to an external server.
In an embodiment, the prediction model includes a first prediction model generated by the learning module according to a Bayesian inference algorithm. The first prediction model includes a probability relationship table. The probability relationship table records relative probabilities between different diseases and different symptoms.
In an embodiment, the prediction model includes a second prediction model generated by the learning module according to a decision tree algorithm. The second prediction model includes a plurality of decision trees constructed in advance according to the training data.
In an embodiment, the prediction model includes a third prediction model generated by the learning module according to a reinforcement learning algorithm. The third prediction model is trained according to the training data to maximize a reward signal. The reward signal is positive or negative according to the correctness of a training prediction made by the third prediction model. The correctness of the training prediction is verified according to a known medical record in the training data.
The disclosure further provides a method for providing a disease prediction, which includes the following steps. An initial symptom is received. Symptom inquiries are generated according to a prediction model and the initial symptom. Responses are received corresponding to the symptom inquiries. A disease prediction is generated according to the prediction model, the initial symptom and the responses.
The disclosure further provides a non-transitory computer readable storage medium with a computer program to execute a method. The method includes the following steps. An initial symptom is received. Symptom inquiries are generated according to a prediction model and the initial symptom. Responses are received corresponding to the symptom inquiries. A disease prediction is generated according to the prediction model, the initial symptom and the responses.
It is to be understood that both the foregoing general description and the following detailed description are by examples, and are intended to provide further explanation of the invention as claimed.
The disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings as follows:
Reference will now be made in detail to the present embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
Reference is made to
In some embodiments, the medical system 100 is established with a computer, a server or a processing center. The analysis engine 120 can be implemented by a processor, a central processing unit or a computation unit. The interaction interface 140 can include an output interface (e.g., a display panel for displaying information) and an input device (e.g., a touch panel, a keyboard, a microphone, a scanner or a flash memory reader) for a user to type text commands, give voice commands or upload related data (e.g., images, medical records, or personal examination reports).
In some other embodiments, at least a part of the medical system 100 is established with a distributed system. For example, the analysis engine 120 is established by a cloud computing system. In this case, the interaction interface 140 can be a smart phone, which communicates with the analysis engine 120 wirelessly. The output interface of the interaction interface 140 can be a display panel on the smart phone. The input device of the interaction interface 140 can be a touch panel, a keyboard and/or a microphone on the smart phone.
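For illustration only, the following is a minimal sketch of how a smart-phone interaction interface might forward user input to a cloud-hosted analysis engine over HTTP. The endpoint URL, the JSON field names and the use of the `requests` library are assumptions introduced for this sketch and are not part of the disclosure.

```python
# Hypothetical sketch: a thin client on the interaction interface side that
# forwards one user utterance (initial symptom or answer) to a cloud-hosted
# analysis engine and returns the engine's reply. Endpoint and payload schema
# are assumed for illustration only.
import requests

ANALYSIS_ENGINE_URL = "https://example.com/analysis-engine/interact"  # assumed endpoint

def send_to_engine(session_id: str, user_text: str) -> dict:
    """Send the user's text and return the engine's next symptom inquiry or result prediction."""
    payload = {"session_id": session_id, "text": user_text}
    response = requests.post(ANALYSIS_ENGINE_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g., {"type": "inquiry", "question": "Do you cough?"}
```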
As shown in
Reference is further made to
In one embodiment, the training data includes a probability relationship table according to statistics of the known medical records TDi. An example of the probability relationship table is shown in Table 1.
The values in Table 1 represent the percentages of patients having the diseases listed across the top who also have the symptoms listed in the leftmost column. According to the probability relationship table shown in Table 1, 23 out of 100 Pneumonia patients have the symptom of coryza, and 43 out of 100 Pneumonia patients have the symptom of difficulty breathing. In this embodiment, the training data include a probability relationship between different symptoms and different diseases. In an example, the training data including the probability relationship table as shown in Table 1 can be obtained from data and statistics information from the Centers for Disease Control and Prevention (https://www.cdc.gov/datastatistics/index.html).
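As a concrete data-structure sketch, the probability relationship table can be held as a nested mapping from disease to per-symptom probabilities. Only the two Pneumonia values mentioned above come from the text; every other disease and number below is a placeholder for illustration and is not taken from Table 1.

```python
# Sketch of the probability relationship table: disease -> symptom -> P(symptom | disease).
# The two Pneumonia entries (coryza 0.23, difficulty breathing 0.43) follow the text;
# all other entries are placeholder values for illustration only.
PROBABILITY_TABLE = {
    "Pneumonia":    {"Coryza": 0.23, "Difficulty breathing": 0.43, "Cough": 0.70},
    "Otitis media": {"Coryza": 0.40, "Difficulty breathing": 0.05, "Cough": 0.30},
    "COPD":         {"Coryza": 0.10, "Difficulty breathing": 0.60, "Cough": 0.80},
}
```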
As shown in
As shown in
In an embodiment, the learning module 122 and the prediction module 124 can be implemented by a processor, a central processing unit, or a computation unit.
As shown in
In some embodiments, the patient may provide the initial symptom Sini (e.g., fever, headache, palpitation, difficulty sleeping). The prediction module 124 will generate a first symptom inquiry (e.g., including a question of one symptom or multiple questions of different symptoms) according to the initial symptom Sini. The first symptom inquiry is the first one of the symptom inquiries Sqry shown in
In some embodiments, the symptom inquiry Sqry can be at least one question to ask whether the patient experiences another symptom (e.g., "do you cough?") other than the initial symptom Sini. The patient responds to the first symptom inquiry through the interaction interface 140. The interaction interface 140 is configured to receive a first response from the user U1 corresponding to the first symptom inquiry. The interaction interface 140 will send the first response to the prediction module 124. The first response is the first one of the responses Sans shown in
After the patient responds to the first symptom inquiry, the prediction module 124 will generate a second symptom inquiry (i.e., the second one of the symptom inquiries Sqry) according to the initial symptom Sini and also the first response.
Similarly, the interaction interface 140 is configured to receive a second response from the user U1 corresponding to the second symptom inquiry. The interaction interface 140 will send the second response (i.e., the second one of the responses Sans) to the prediction module 124. After the patient responds to the second symptom inquiry, the prediction module 124 will generate a third symptom inquiry according to all previous symptoms (the initial symptom Sini and all previous responses Sans), and so on.
Each symptom inquiry is determined by the prediction module 124 according to the initial symptom Sini and all previous responses Sans.
After giving sequential symptom inquiries and receiving the responses from the patients, the prediction module 124 will generate the result prediction according to these symptoms (the initial symptom Sini and all the responses Sans). It is noticed that the medical system 100 in this embodiment will actively provide the symptom inquiries one by one to the user rather than passively waiting for symptom inputs from the user. Therefore, the medical system 100 may provide an intuitive interface for self-diagnosis to the user.
In some embodiments, the result prediction will be made when a predetermined number of inquiries (e.g., 6 inquiries) has been asked, when a predetermined time limitation (e.g., 15 minutes) is reached, and/or when a confidence level of the prediction made by the prediction module 124 exceeds a threshold level (e.g., 85%).
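A minimal sketch of the inquiry loop described above is given below, assuming hypothetical helper routines `select_next_inquiry`, `ask_user` and `predict` in place of the prediction module 124 and the interaction interface 140; only the three stopping criteria (inquiry count, time limit, confidence threshold) follow the text.

```python
# Hypothetical sketch of the iterative inquiry loop with the three stopping
# criteria mentioned above. select_next_inquiry / ask_user / predict are
# placeholders for the prediction module and interaction interface.
import time

MAX_INQUIRIES = 6               # predetermined number of inquiries
TIME_LIMIT_SECONDS = 15 * 60    # predetermined time limitation
CONFIDENCE_THRESHOLD = 0.85     # confidence threshold

def diagnose(initial_symptom, select_next_inquiry, ask_user, predict):
    evidence = [initial_symptom]              # initial symptom plus all responses so far
    start = time.monotonic()
    for _ in range(MAX_INQUIRIES):
        disease, confidence = predict(evidence)
        if confidence >= CONFIDENCE_THRESHOLD:
            break
        if time.monotonic() - start >= TIME_LIMIT_SECONDS:
            break
        inquiry = select_next_inquiry(evidence)        # e.g., "Do you cough?"
        evidence.append((inquiry, ask_user(inquiry)))  # record the user's yes/no response
    return predict(evidence)                  # final result prediction
```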
Besides the initial symptom(s) input, other information of the patient, such as a demographic information input (e.g., gender, age of the patient), a medical record input (e.g., blood pressure, SpO2, ECG, platelet, etc.), a psychological information input (e.g., emotion, mental status, etc.), and/or a gene input (e.g., DNA, RNA, etc.), can be provided to the prediction module 124.
This personal information can be taken into consideration while the prediction module 124 selects the symptom inquiry or makes the prediction. For example, when the gender of the patient is male, the prediction will avoid "Cervical Cancer" or/and "Obstetrics and Gynecology Department" and the symptom inquiry will avoid "Menstruation Delay". In some other embodiments, when the patient is an adult, the prediction will avoid "Newborn jaundice" or/and "Pediatric Department" and the symptom inquiry will avoid "Infant feeding problem".
The aforementioned embodiments are related to which disease or/and department the module should avoid predicting according to the personal information. However, the prediction module 124 and the analysis engine 120 are not limited thereto. In some other embodiments, the personal information is taken into consideration to adjust the weights or probabilities of different symptoms. The personal information may provide a hint or suggestion to increase/decrease the weight or probability of a specific type of symptom and/or the probability of the predicted diseases and/or department. In these embodiments, the prediction module 124 and the analysis engine 120 will evaluate or select the symptom inquiry and make the result prediction according to the combination of the initial symptom, the previous responses and/or the personal information together (e.g., the disease prediction PDT is determined according to a weighted consideration of a weight of 30% on the initial symptom, a weight of 40% on the previous responses and a weight of 30% on the personal information, or other equivalent weight distributions).
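One possible reading of the two paragraphs above is sketched below: a demographic filter that excludes inapplicable diseases or inquiries, and a weighted score that combines the initial symptom, the previous responses and the personal information with the 30%/40%/30% weights given as an example. The per-source scoring helpers and the filter rules (beyond the examples named in the text) are assumptions.

```python
# Sketch of the demographic filter and the weighted consideration described above.
# The three scoring helpers are hypothetical; only the example filter rules and
# the 30%/40%/30% weighting follow the text.
W_INITIAL, W_RESPONSES, W_PERSONAL = 0.3, 0.4, 0.3

def allowed(disease_or_inquiry, personal_info):
    """Exclude diseases/inquiries that cannot apply to this patient (example rules only)."""
    if personal_info.get("gender") == "male" and disease_or_inquiry in (
            "Cervical Cancer", "Menstruation Delay"):
        return False
    if personal_info.get("age", 0) >= 18 and disease_or_inquiry in (
            "Newborn jaundice", "Infant feeding problem"):
        return False
    return True

def disease_score(disease, initial_symptom, responses, personal_info,
                  score_from_initial, score_from_responses, score_from_personal):
    """Weighted combination of the three information sources for one candidate disease."""
    if not allowed(disease, personal_info):
        return 0.0
    return (W_INITIAL   * score_from_initial(disease, initial_symptom) +
            W_RESPONSES * score_from_responses(disease, responses) +
            W_PERSONAL  * score_from_personal(disease, personal_info))
```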
The prediction module 124 is utilized to help the patient and/or a doctor to estimate the health condition of the patient. The result prediction can be provided to the patient and/or the medical professionals. In an embodiment, the result prediction is displayed on the interaction interface 140, such that the user U1 can see the disease prediction or/and the medical department suggestion and decide to go to a hospital for further examinations and treatments. In another embodiment, the result prediction can also be transmitted to the external server 200, which can be a server of a hospital. The medical system 100 can generate a registration request to the external server 200 for making a medical appointment between the user U1 and the hospital. In addition, the result prediction, the initial symptom Sini and the responses Sans can be transmitted to the external server 200, such that the doctor in the hospital can evaluate the health condition of the user U1 faster.
In another embodiment, the training data utilized by the learning module 122 further include a user feedback input Ufb collected by the interaction interface 140. For example, after the result prediction is given by the medical system 100, the user can make a medical appointment with a hospital and get a diagnosis and/or a treatment from a medical professional (e.g., a doctor). Afterward, the interaction interface 140 will send a follow-up inquiry to check the correctness of the result prediction (e.g., the follow-up inquiry can be sent to the user three days or one week after the result prediction). The follow-up inquiry may include questions such as "how do you feel now", "did you go to the hospital after the last prediction", "does the doctor agree with our prediction" and some other related questions. The interaction interface 140 will collect the answers from the user as the user feedback input Ufb. The user feedback input Ufb will be sent to the learning module 122 to refine the prediction model MDL. For example, when the user feedback input Ufb includes an answer implying that the result prediction is not correct or the user does not feel well, the learning module 122 will update the prediction model MDL to decrease the probability (or weight) of the symptom inquiries or disease results related to the corresponding result prediction.
In another embodiment, the training data utilized by the learning module 122 further include a doctor diagnosis record DC received from an external server 200. For example, after the result prediction is given by the medical system 100, the user can make a medical appointment with a hospital and a medical professional (e.g., a doctor) can make an official diagnosis. The official diagnosis is regarded as the doctor diagnosis record DC, which can be stored in the external server 200 (e.g., a server of a hospital, where the server of the hospital includes a medical diagnosis database). Afterward, the medical system 100 will collect the doctor diagnosis record DC from the external server 200. The doctor diagnosis record DC will be sent to the learning module 122 to refine the prediction model MDL.
In another embodiment, the training data utilized by the learning module 122 further include a prediction logfile PDlog generated by the prediction module 124. For example, when the prediction module 124 gives the symptom inquiries to the user and one of the symptom inquiries always receives the same answer (e.g., the user always says yes in response to "do you feel tired"), that symptom inquiry is not effective. The prediction logfile PDlog includes a history of the symptom inquiries and the user's answers. The learning module 122 can refine the prediction model MDL according to the prediction logfile PDlog.
The learning module 122 further updates the prediction model MDL according to the user feedback input Ufb, the doctor diagnosis record DC or the prediction logfile PDlog.
The prediction module 124 may also generate a result prediction further including a treatment recommendation, such as a therapy recommendation, a prescription recommendation and/or a medical equipment recommendation, to the medical professionals such as doctors, therapists and/or pharmacists. Therefore, the medical professionals are able to perform treatment(s) on the patient according to the treatment recommendation along with their own judgments. The aforementioned treatment(s) include prescribed medication (e.g., antibiotics, medicine), prescribed medical devices (e.g., X-ray examination, nuclear magnetic resonance imaging examination), surgeries, etc.
After the disease prediction PDT or the medical department suggestion is displayed on the interaction interface 140, the interaction interface 140 is configured to receive a user command in response to the disease prediction PDT or the medical department suggestion. The medical system 100 is configured to send a medical registration request RQ corresponding to the user command to the external server 200.
The learning module 122 is able to collect activity logs (e.g., the initial symptom(s), related information of the patient, a history of the symptom inquiries and responses to the inquiries) from the prediction module 124, and the diagnosis results and/or the treatment results from medical departments (e.g., hospitals, clinics, or public medical records). The learning module 122 will gather/process the collected information and store the processed results, so as to update parameters/variables for refining the prediction model MDL utilized by the prediction module 124. In some embodiments, the collected diagnosis results and/or the treatment results are utilized to update the prediction model MDL.
In one embodiment, the prediction module 124 in
Reference is made to
In the Bayesian inference algorithm, the probability relationship table (as shown in Table 1) between different diseases and different symptoms is utilized to determine how to select the next inquiry.
When the prediction module 124 based on the Bayesian inference algorithm selects the next inquiry, the prediction module 124 will consider the initial symptom Sini, the previous responses Sans and the probability relationship table as shown in Table 1.
When the initial symptom is given, the scores for each possible symptom can be derived from the probability relationship table, i.e., Table 1, according to an impurity function. Table 3 demonstrates an example of one score lookup table with 7 symptoms when the initial symptom is “cough”.
In Table 3, the scores of these symptoms can be derived from an impurity function (e.g., the Gini impurity function or another equivalent impurity function) according to the probability relationship table, i.e., Table 1. An impurity function is a mapping from a probability distribution P = {p_i | 1 ≤ i ≤ N, Σ_i p_i = 1, p_i ≥ 0} to a non-negative real value which satisfies the following constraints (a), (b), (c) and (d), a compact form of which is given after this list:
(a) the function achieves its minimum value on P if there exists an i such that p_i = 1;
(b) the function achieves its maximum value on P if p_i = 1/N for all i;
(c) the function is symmetric with respect to the components p_i; and
(d) the function is smooth, i.e., differentiable everywhere.
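For reference, the domain of the impurity function and the Gini impurity named above can be written compactly as follows. The Gini impurity is a well-known function satisfying constraints (a) through (d) and is shown here as one standard example, not as the disclosure's required choice.

```latex
% Impurity function I over a probability distribution P = (p_1, \dots, p_N),
% with the Gini impurity as one standard example satisfying constraints (a)-(d).
\[
  I : \Big\{ (p_1,\dots,p_N) \;\Big|\; p_i \ge 0,\ \sum_{i=1}^{N} p_i = 1 \Big\}
      \to \mathbb{R}_{\ge 0},
  \qquad
  I_{\mathrm{Gini}}(P) = 1 - \sum_{i=1}^{N} p_i^{2}.
\]
```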
The above constraints imply that the value of the function will be smaller when the probability distribution is more concentrated (i.e., when most of the probability mass falls on a few outcomes). In order to reach a more certain prediction, the prediction module tends to pick the inquiry that leads to the smallest impurity function value after the inquiry is answered.
To achieve this, a score is calculated for each possible choice of inquiry. For each candidate, the score is determined by:
Score=“impurity function value before this inquiry”−“expected impurity function value after this inquiry”.
The score can be interpreted as the "gain" in impurity function value achieved by each inquiry. Therefore, the prediction engine tends to pick the inquiry with the maximum score (if the score is positive).
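The following is a sketch of this score computation, using the Gini impurity as the impurity function and a naive-Bayes-style posterior over the probability relationship table. It is an illustrative reading of the text, not necessarily the exact computation in the disclosure.

```python
# Sketch of the impurity-gain score used to pick the next symptom inquiry.
# prob_table maps disease -> symptom -> P(symptom | disease); prior maps
# disease -> prior probability; evidence maps symptom -> True/False answers.

def gini(posterior):
    return 1.0 - sum(p * p for p in posterior.values())

def posterior_over_diseases(prob_table, prior, evidence):
    """Posterior over diseases, assuming conditional independence of symptoms."""
    post = {}
    for disease, prior_p in prior.items():
        likelihood = prior_p
        for symptom, present in evidence.items():
            p = prob_table[disease].get(symptom, 0.0)
            likelihood *= p if present else (1.0 - p)
        post[disease] = likelihood
    total = sum(post.values()) or 1.0
    return {d: v / total for d, v in post.items()}

def inquiry_score(prob_table, prior, evidence, candidate_symptom):
    """Score = impurity before the inquiry - expected impurity after it."""
    post = posterior_over_diseases(prob_table, prior, evidence)
    before = gini(post)
    p_yes = sum(post[d] * prob_table[d].get(candidate_symptom, 0.0) for d in post)
    expected_after = 0.0
    for answer, p_answer in ((True, p_yes), (False, 1.0 - p_yes)):
        if p_answer <= 0.0:
            continue
        new_post = posterior_over_diseases(
            prob_table, prior, {**evidence, candidate_symptom: answer})
        expected_after += p_answer * gini(new_post)
    return before - expected_after
```

Under this sketch, the candidate symptom with the largest positive score is asked next, consistent with the selections described in the examples that follow.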
According to the scores given in Table 3, when the initial symptom is “cough”, the prediction module 124 based on the Bayesian inference algorithm will select “weakness” as the next symptom to inquire. This selection leads to the consequence that if the patient's response to “weakness” is positive, the Bayesian inference algorithm could distinguish Pneumonia from Otitis media and COPD.
When the initial symptom (and/or the previous responses) is different, the scores for each candidate symptom will be different accordingly. As another example, when the initial symptom provided is "Weakness", the scores for each candidate symptom are shown in Table 4.
According to the scores above in Table 4, when the initial symptom is “Weakness”, the prediction module 124 based on the Bayesian inference algorithm will pick “Difficulty breathing” as the next symptom to inquire. Consequently, if the patient's response is positive then the Bayesian engine could distinguish Pneumonia from Anemia and White blood cell disease.
Many selection criteria can be utilized in the Bayesian inference algorithm. For example, impurity-based selection criteria (information gain, Gini gain), normalization-based selection criteria (gain ratio, distance measure), binary metric selection criteria (twoing, orthogonality, Kolmogorov-Smirnov), continuous attribute selection criteria (variance reduction) and other selection criteria (permutation statistic, mean posterior improvement, hypergeometric distribution) are possible ways to implement the selection criteria based on the Bayesian inference algorithm.
Reference is made to
When the initial symptom is received, the prediction module 124 selects one decision tree from the constructed decision trees. Reference is further made to
As shown in
Table 5 shows embodiments in which different initial symptoms and different inquiry answers lead to different predictions in different decision trees.
As shown in
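For illustration, the following is a minimal sketch of how pre-constructed decision trees might be selected by initial symptom and then traversed with yes/no answers until a disease prediction is reached. The node layout and helper names are assumptions, not the disclosure's specific data structures.

```python
# Hypothetical sketch of the second prediction model: one pre-constructed
# decision tree per initial symptom, traversed by yes/no answers until a
# leaf (disease prediction) is reached. Node structure is assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    symptom: Optional[str] = None      # internal node: symptom to inquire about
    yes: Optional["Node"] = None       # child followed on a positive answer
    no: Optional["Node"] = None        # child followed on a negative answer
    disease: Optional[str] = None      # leaf node: predicted disease

def run_decision_tree(trees: dict, initial_symptom: str, ask_user) -> str:
    node = trees[initial_symptom]      # select the tree matching the initial symptom
    while node.disease is None:        # internal node: ask the next symptom inquiry
        answer = ask_user(f"Do you have {node.symptom}?")
        node = node.yes if answer else node.no
    return node.disease                # leaf reached: the result prediction
```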
Reference is made to
The reinforcement learning algorithm utilizes a training data set with known disease diagnoses and known symptoms to train the third prediction model MDL3. In the embodiment, the training data utilized by the reinforcement learning algorithm may include the probability relationship table according to statistics of the known medical records TDi as shown in Table 1. The known medical records TDi can be obtained from data and statistics information from the Centers for Disease Control and Prevention (https://www.cdc.gov/). In some embodiments, the training data utilized by the reinforcement learning algorithm may further include the user feedback input Ufb, the doctor diagnosis record DC or the prediction logfile PDlog to update the prediction model MDL, as discussed in the aforementioned embodiments. The reinforcement learning model is trained by performing a simulation of inputting the initial symptom(s) and the patient's responses to the symptom inquiries, and the reinforcement learning model will make a result prediction afterward. The learning module 122 uses the known disease diagnosis to verify the predicted disease. If the prediction is correct, the reinforcement learning algorithm increases a potential reward of the asked inquiries in the simulation. If the prediction is not correct, the potential reward of the asked inquiries remains the same or is decreased.
When the third prediction model MDL3 trained with the reinforcement learning algorithm selects the next inquiry, the third prediction model MDL3 tends to choose an optimal inquiry with the highest potential reward, so as to shorten the inquiry duration and elevate the preciseness of the prediction. Further details of the third prediction model MDL3 trained with the reinforcement learning algorithm are disclosed in the following paragraphs.
The third prediction model MDL3 trained with the reinforcement learning algorithm considers the diagnosis process as a sequential decision problem of an agent that interacts with a patient. There are a set of possible diseases and a set of possible symptoms. At each time step, the agent inquires about a certain symptom of the patient (e.g., the user U1). The patient then replies with a true or false answer to the agent indicating whether the patient suffers from the symptom. In the meantime, the agent can integrate user responses over time steps to revise subsequent questions. At the end of the process, the agent receives a scalar reward if the agent can correctly predict the disease, and the goal of the agent is to maximize the reward. In other words, the goal is to correctly predict the patient's disease by the end of the diagnosis process.
Based on the correctness of the prediction, the agent receives a reward signal (i.e., if the prediction is correct, the reward signal = 1; otherwise the reward signal = 0). The goal of training is to maximize the reward signal. On the other hand, the reinforcement learning model uses π(s_t | h_{1:t−1}, θ) to denote the policy function, where θ represents the set of parameters, s_t is one of the possible symptoms, t is the time step, and h_{1:t−1} is the sequence of the interaction history from time 1 to t−1. The parameter θ is learned to maximize the reward that the agent expects when the agent interacts with the patient.
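In standard reinforcement-learning notation, the training objective implied by the paragraph above can be written as maximizing the expected reward with respect to θ; the score-function (REINFORCE) gradient shown here is the usual estimator for such an objective and is given as background, not as the disclosure's specific update rule.

```latex
% Expected-reward objective for the policy parameters \theta, with the
% standard score-function (REINFORCE) gradient estimator as background.
\[
  J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ R \right],
  \qquad
  \nabla_\theta J(\theta)
    = \mathbb{E}_{\pi_\theta}\!\left[ R \sum_{t} \nabla_\theta
        \log \pi\!\left(s_t \mid h_{1:t-1}, \theta\right) \right].
\]
```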
The third prediction model MDL3 trained with the reinforcement learning algorithm can be described as a model that effectively combines the representation learning of medical concepts and policies in an end-to-end manner. Due to the nature of sequential decision problems, the third prediction model MDL3 trained with the reinforcement learning algorithm adopts a recurrent neural network (RNN) as a core ingredient of the agent. At each time step, the recurrent neural network accepts the patient's response into the network, integrates information over time in the long short-term memory (LSTM) units, and chooses a symptom to inquire about in the next time step. Last, the recurrent neural network predicts the patient's disease, indicating the completion of the diagnosis process.
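A minimal sketch of such an LSTM-based agent is given below, written in PyTorch purely as a convenient illustration. The layer sizes, the encoding of answers and the use of torch are assumptions; only the overall structure (integrate answers over time, emit a symptom-inquiry distribution and a disease prediction) follows the description above.

```python
# Hypothetical PyTorch sketch of the RNN agent: an LSTM cell integrates the
# patient's answers over time; the agent emits a distribution over the next
# symptom inquiry and a disease prediction. Sizes and encodings are assumed.
import torch
import torch.nn as nn

class DiagnosisAgent(nn.Module):
    def __init__(self, num_symptoms: int, num_diseases: int, hidden_size: int = 128):
        super().__init__()
        # step input: one-hot of the last inquired symptom plus its yes/no answer bit
        self.cell = nn.LSTMCell(num_symptoms + 1, hidden_size)
        self.inquiry_head = nn.Linear(hidden_size, num_symptoms)  # next symptom to ask
        self.disease_head = nn.Linear(hidden_size, num_diseases)  # final prediction

    def forward(self, answer_sequence):
        """answer_sequence: list of (one-hot tensor of the inquired symptom, bool answer)."""
        h = c = torch.zeros(1, self.cell.hidden_size)
        for symptom_onehot, answer in answer_sequence:
            answer_bit = torch.tensor([1.0 if answer else 0.0])
            step_input = torch.cat([symptom_onehot, answer_bit]).unsqueeze(0)
            h, c = self.cell(step_input, (h, c))       # integrate the response over time
        inquiry_probs = torch.softmax(self.inquiry_head(h), dim=-1)
        disease_probs = torch.softmax(self.disease_head(h), dim=-1)
        return inquiry_probs, disease_probs
```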
Reference is further made to
Reference is further made to
It is noticed that the step S830 and the step S840 in
In an embodiment, the step S830 and the step S840 in
It should be noted that, details of the method operation described above can be ascertained with reference to the embodiments described above, and a description in this regard will not be repeated herein.
As mentioned above, the computer-aided diagnosis engine requires the user to input an initial symptom, and the computer-aided diagnosis engine will generate proper inquiry questions according to the initial symptom (and the user's answers to previous inquiries). It is important to encourage the user to input a clear description of the initial symptom Sini.
Reference is further made to
As shown in
In an embodiment, the inquiry questions generated by the medical system will consider personal information of the user/patient. The personal information can include gender, age, a medical record (e.g., blood pressure, SpO2, ECG, platelet, etc.), psychological information (e.g., emotion, mental status, etc.) and/or gene information (e.g., DNA, RNA, etc.) of the patient. The personal information can be collected by the medical system. For example, when the personal information indicates the person is male, the medical system will not bring up the inquiry question "are you pregnant and experiencing any pregnancy discomfort". In other words, when the personal information indicates the gender of the patient is female, the symptom inquiry will avoid "Delayed Ejaculation". In some other embodiments, when the patient is an adult, the symptom inquiry will avoid "Infant feeding problem". When the patient is an infant, the symptom inquiry will avoid "Premature menopause". Similarly, the prediction generated by the medical system will also consider the personal information of the user/patient.
As shown in
Reference is further made to
Reference is further made to
In another embodiment, when the user selects a clinical section which is already full, the interaction system can provide a function to make the appointment automatically for the same doctor in the same time section (e.g., also on Monday morning) in the future. If the user accepts the automatic appointment, the medical system makes the appointment (e.g., the clinical section of Dr. Joe Foster on April 17, Monday morning) automatically for the user when the clinical section is open to accept online registration.
Reference is further made to
When the department suggestion is activated, the step S901 is executed, and the interaction interface 140 shows the system question to ask the user about the initial symptom. In addition, the interaction interface 140 may also provide the functional key in the step S902a to open a body map if the user doesn't know how to describe his/her feelings or conditions. Step S902b is executed to determine whether the functional key is triggered. When the functional key is triggered, the body map will be shown accordingly. Reference is further made to
When the user provides an answer in response to the system question, the medical system will try to recognize the answer provided by the user in the step S903. If the answer cannot be recognized by the medical system (e.g., the answer does not include any keyword which can be distinguished by the interaction system), the interaction interface 140 will show the body map in the step S904, such that the user can select a region where the symptom occurs from the body map. When the answer can be recognized by the medical system, the step S905 is executed to determine whether the keyword recognized in the answer includes a distinct symptom name matching one of the symptoms in the database. If the keyword in the answer includes the distinct name, the interaction system can set the initial symptom according to the distinct name in the step S906. If the keyword in the answer does not include a distinct name of a symptom, the medical system can provide a list of candidate symptoms according to the keyword in the step S907. Afterward, the medical system can set the initial symptom according to the symptom selected from the list of candidate symptoms in the step S908.
On the other hand, after the body map is shown in the step S904, step S909 is executed to receive a selected part on the body map. Step S910 is executed to show a list of candidate symptoms related to the selected part on the body map. Step S911 is executed to set the initial symptom according to the symptom selected from the list of candidate symptoms.
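A compact sketch of this initial-symptom resolution flow (steps S903 through S911) is given below. The helper functions for keyword recognition, database lookup, the body map and user selection are hypothetical placeholders; only the branching order follows the steps described above.

```python
# Hypothetical sketch of steps S903-S911: resolve the user's free-text answer
# into an initial symptom, falling back to the body map when the answer cannot
# be recognized. All helper functions are placeholders for illustration.
def resolve_initial_symptom(answer_text, recognize_keyword, find_symptom,
                            candidates_for_keyword, show_body_map,
                            candidates_for_body_part, ask_user_to_pick):
    keyword = recognize_keyword(answer_text)                 # S903
    if keyword is None:                                      # answer not recognized
        body_part = show_body_map()                          # S904, S909
        options = candidates_for_body_part(body_part)        # S910
        return ask_user_to_pick(options)                     # S911
    symptom = find_symptom(keyword)                          # S905: distinct symptom name?
    if symptom is not None:
        return symptom                                       # S906
    options = candidates_for_keyword(keyword)                # S907
    return ask_user_to_pick(options)                         # S908
```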
Based on the aforementioned embodiments, the medical system provides a way to guide the user in making an appointment, querying the medication and deciding which department to consult (and also other services). The medical system can guide the user to complete the procedures step-by-step. The user may be required to answer one question at a time or to answer some related questions step-by-step. The medical system may provide intuitive services related to medical applications.
Although the present invention has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims.
This application claims priority to U.S. Provisional Application Ser. No. 62/373,966, filed Aug. 11, 2016, and U.S. Provisional Application Ser. No. 62/505,135, filed May 12, 2017, which are herein incorporated by reference.