The present invention relates to a method of patient reporting based on a voice using artificial intelligence (AI) and an apparatus for performing the method. More particularly, the present invention relates to a method of patient reporting based on a voice using artificial intelligence (AI) for generating patient reports according to various purposes and an apparatus for performing the method.
With the development of various smart technologies, data of personal daily activities is recorded, and individual life can be efficiently managed on the basis of the recorded data. In the meantime, health-related data logging is attracting attention due to the increasing interest in healthcare. Many users have already been generating and utilizing various health-related data including data on exercise, diet, sleep, and the like through user devices such as smartphones, wearable devices, and the like. In the past, health-related data was generated and managed only by medical institutions, but now users have begun to generate and manage their own health-related data through user devices such as smartphones and wearable devices.
In many cases, health-related data logging is performed through a wearable device. A wearable device is a user device that is carried by or attached to a user. Due to the development of Internet of things (IoT) and the like, wearable devices are frequently used for collecting health-related data. A wearable device may collect a user's physical change information and surrounding data of the user through equipment and provide advice required for the user's healthcare on the basis of the collected data.
A user's health-related data may include a user biomarker, and research is ongoing on a method of making a medical prescription adaptively to a user on the basis of the user's health-related data.
As related art, there is Korean Patent No. 10-2425479.
An object of the present invention is to solve all of the above problems.
In addition, the present invention is directed to determining a user's disease state based on user voice data and generating a report on a user's state.
In addition, the present invention is also directed to providing patient information for various subjects (guardian, doctor, insurance company) only with a user's voice by generating a patient report based on user voice data.
According to an aspect of the present invention, there is provided a method of patient self-reporting based on a voice using artificial intelligence (AI), the method comprising: receiving, by a patient analysis device, patient analysis basic information; generating, by the patient analysis device, report basic information based on the patient analysis basic information; and generating, by a patient report generation device, a patient report based on the report basic information.
Meanwhile, the patient analysis basic information includes voice data, and the patient analysis device inputs the voice data to a voice data analysis AI engine and generates the report basic information.
Further, the patient analysis device may further include a query generating AI engine for acquiring necessary information from a user when the voice data analysis AI engine requires such information.
According to another aspect of the present invention, there is provided a patient report system for patient self-reporting based on a voice using artificial intelligence (AI), the system comprising: a patient analysis device configured to receive patient analysis basic information and generate report basic information based on the patient analysis basic information; and a patient report generating device configured to generate a patient report based on the report basic information.
Meanwhile, the patient analysis basic information includes voice data and the patient analysis device inputs the voice data to a voice data analysis AI engine and generates the report basic information.
Further, the patient analysis device may further include a query generating AI engine for acquiring necessary information from a user when the voice data analysis AI engine requires such information.
The above and other objects, features and advantages of the present invention will become more apparent to those of ordinary skill in the art by describing exemplary embodiments thereof in detail with reference to the accompanying drawings, in which:
The detailed description of the present invention will be made with reference to the accompanying drawings showing examples of specific embodiments of the present invention. These embodiments will be described in detail such that the present invention can be performed by those skilled in the art. It should be understood that various embodiments of the present invention are different but are not necessarily mutually exclusive. For example, a specific shape, structure, and characteristic of an embodiment described herein may be implemented in another embodiment without departing from the scope and spirit of the present invention. In addition, it should be understood that a position or arrangement of each component in each disclosed embodiment may be changed without departing from the scope and spirit of the present invention. Accordingly, there is no intent to limit the present invention to the detailed description to be described below. The scope of the present invention is defined by the appended claims and encompasses all equivalents that fall within the scope of the appended claims. Like reference numerals refer to the same or like elements throughout the description of the figures.
Hereinafter, in the present invention, a user is assumed to be a patient for convenience of description, but the user may be interpreted to include not only patients but also non-patients who have not been diagnosed with a specific disease, and these embodiments may also be included in the scope of the present invention.
Referring to the drawing, a patient report system may include a user device 100, a patient analysis device 120, a patient report generating device 140, and an information security device 160.
The user device 100 may be a device for transmitting patient analysis basic information, which is based on patient analysis, such as user voice data, to a patient analysis device.
The patient analysis device 120 may be implemented to generate report basic information for generating a patient report based on patient analysis basic information transmitted from a user. For example, the patient analysis device 120 may generate report basic information for generating reports for various subjects (e.g., doctor, guardian, insurance company) based on the user's voice information.
Patient report generating device 140 may be implemented to generate a patient report. The patient report generating device 140 may generate a patient report based on report basic information generated by the patient analysis device. For example, the patient report generating device 140 may generate a first patient report for a doctor, a second patient report for a guardian of the patient, and a third patient report for an insurance company.
The information security device 160 may be implemented to secure information generated from the user device 100, the patient analysis device 120, and the patient report generating device 140.
Referring to the drawing, the patient analysis device may include a voice data analysis AI engine 200.
The voice data analysis AI engine 200 may generate different types of report basic information for each report in processing the preprocessed voice data and generating the report basic information. For example, the voice data analysis AI engine 200 may generate first report basic information for a first patient report, second report basic information for a second patient report, and third report basic information for a third patient report.
In addition, the voice data analysis AI engine 200 may be divided into a voice data analysis AI engine (type 1) 210 for preferentially determining a user without a previous medical history record and a voice data analysis AI engine (type 2) 220 for a user with a previous medical history record.
Separate previous medical history data is not input to the voice data analysis AI engine (type 1) 210, and separate previous medical history data is additionally input to the voice data analysis AI engine (type 2) 220, so the report basic information may be generated by additionally considering the previous medical history data.
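The routing between the two engine types described above may be sketched as follows. This is an illustrative, non-limiting example; the function names and returned data shapes are assumptions, and the engine internals are placeholders rather than the disclosed engines themselves.

```python
# Illustrative sketch: route voice data to a type-1 or type-2 analysis
# engine depending on whether a previous medical history record exists.
# Engine internals are placeholders, not the disclosed AI engines.

def analyze_type1(voice_data):
    # Type-1 engine: no previous medical history is considered.
    return {"engine": "type1", "source": voice_data}

def analyze_type2(voice_data, history):
    # Type-2 engine: previous medical history is additionally considered.
    return {"engine": "type2", "source": voice_data, "history": history}

def generate_report_basic_info(voice_data, history=None):
    """Select the engine based on the presence of previous medical history."""
    if history:
        return analyze_type2(voice_data, history)
    return analyze_type1(voice_data)
```

In this sketch, the presence of a previous medical history record alone decides the engine; an actual implementation could also consult the result of the type-1 engine's preferential screening.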
In addition, according to an embodiment of the present invention, when there is information required by the voice data analysis AI engine 200, a query generating AI engine 230 for acquiring the necessary information from a user may be additionally included in the patient analysis device.
The query generating AI engine 230 may generate a query in consideration of 1) a part of the report for which it is difficult to fill in specific information based on the user voice data, 2) an answer whose reliability falls below an acceptable range when a report is generated from the user voice data, and the like.
That is, when generating a specific patient report (the first patient report, the second patient report, or the third patient report), in a case in which necessary information is missing, or the information is acquired but its reliability is low, the query generating AI engine 230 may generate a query about the corresponding part and transmit the generated query to the user device. The user may generate a response to the query as voice data and transmit the generated response through the user device.
That is, the query generating AI engine 230 may generate a query in consideration of a required report type, information required for the report, and reliability based on patient voice data, and may provide the generated query to a patient.
The voice data may be input again to the voice data analysis AI engine 200 and converted into the report basic information required for report generation.
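The query-generation loop described above may be expressed as the following sketch. The field names, the report-information structure, and the reliability threshold are assumptions for illustration only, not part of the disclosed embodiments.

```python
# Hypothetical sketch of the query-generation step: fields that are
# absent or below a reliability threshold trigger a natural-language
# query back to the user. Field names and the threshold are assumed.

RELIABILITY_THRESHOLD = 0.7  # assumed value

def find_incomplete_fields(report_info, required_fields):
    """Return fields that are missing or whose reliability is too low."""
    incomplete = []
    for field in required_fields:
        entry = report_info.get(field)
        if entry is None or entry["reliability"] < RELIABILITY_THRESHOLD:
            incomplete.append(field)
    return incomplete

def generate_queries(incomplete_fields):
    # One query per missing or unreliable field; the user's voice
    # response would be re-analyzed by the voice data analysis AI engine.
    return [f"Please tell us more about your {field}." for field in incomplete_fields]
```

The user's voice response to each query would then be fed back into the voice data analysis AI engine, as described above, until the required report basic information is complete.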
Referring to the drawing, the voice data analysis AI engine may include a voice data analysis AI engine (type 1) 310 and a voice data analysis AI engine (type 2) 320.
The voice data analysis AI engine (type 1) 310 may perform overall screening for diseases that can be determined based on user voice data, and may generate report basic information for determining the possibility of various diseases based on the overall screening.
The voice data analysis AI engine (type 2) 320 may generate report basic information for generating a patient report on a disease already possessed by a user based on the user voice data.
In the case of a user who has previously been confirmed to have a disease, the report basic information may be generated through the voice data analysis AI engine (type 1) 310 and the voice data analysis AI engine (type 2) 320.
In the case of a user who has not previously been confirmed to have a disease, the report basic information may be generated through the voice data analysis AI engine (type 1) 310.
That is, a patient report may be generated based on voice data alone, and for a user with a confirmed disease, a patient report based on the already possessed disease may additionally be generated.
The voice data analysis AI engine (type 1) 310 may receive patient analysis basic information (type 1) 315 in generating the patient analysis information. The patient analysis basic information (type 1) 315 may be generated by preprocessing sentences and words that may be sources for patient report generation, and adding tags of all diseases highly related to sentences and words to sentences and words. In this case, the sentences and words may be grouped based on tags for each of the plurality of diseases. For example, sentences and words related to migraine may be preprocessed in a form including tags for other diseases considering the possibility of other diseases (for example, arteriosclerosis) as well as migraine.
For the report related to migraine, sentences and words related to a tag for migraine may be grouped into a migraine report group, and the migraine report group may be input to the voice data analysis AI engine (type 1) 310 and used for report generation.
In this way, sentences and words related to each of the plurality of diseases may be grouped based on tags for each of the plurality of diseases and generated as a disease n report group, the disease n report group may be input to the voice data analysis AI engine (type 1) 310, and the voice data analysis AI engine (type 1) 310 may generate report basic information for a target disease requiring a report among a plurality of diseases (disease 1 to disease n).
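The tag-based grouping described above may be sketched as follows. This is an illustrative example only; the tag scheme and data shapes are assumptions rather than the disclosed preprocessing itself.

```python
# Hedged sketch: group preprocessed sentences into per-disease report
# groups by their attached disease tags. The tag scheme is assumed.

from collections import defaultdict

def group_by_disease_tags(tagged_sentences):
    """tagged_sentences: list of (sentence, [disease tags]) pairs.

    Returns a mapping disease -> report group (list of sentences).
    A sentence tagged with several diseases joins several groups.
    """
    groups = defaultdict(list)
    for sentence, tags in tagged_sentences:
        for tag in tags:
            groups[tag].append(sentence)
    return dict(groups)
```

A "disease n report group" in the description above corresponds to one value of the returned mapping, which would then be input to the voice data analysis AI engine (type 1) 310.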
The voice data analysis AI engine (type 2) 320 may receive patient analysis basic information (type 2) 325 in generating the patient analysis information. The patient analysis basic information (type 2) 325 may include data in which tags for sentences and words related to a target disease are added in order to report on the target disease that the user possesses. The voice data analysis AI engine (type 2) 320 may generate the report basic information on the target disease based on the patient analysis basic information (type 2) 325 including the tag for the target disease.
For example, first report basic information, second report basic information, and third report basic information may be generated according to the patient report type to be generated. The first report basic information, the second report basic information, and the third report basic information may be input to a patient report generation engine to generate a first patient report, a second patient report, and a third patient report. The first patient report may be a report for a doctor, the second patient report may be a report for a guardian, and the third patient report may be a report for an insurance company.
Referring to the drawing, a patient analysis basic sentence 420 may be extracted from textualized user voice data based on a disease vocabulary group (disease n) 400.
After the patient analysis basic sentence 420 is extracted, the patient analysis basic sentence 420 may be rearranged through the relationship between the patient analysis basic sentences 420.
The patient analysis basic sentence 420 may be classified according to symptoms based on a sub-disease vocabulary group 430 for each symptom included in the disease vocabulary group (disease n). For example, disease n may include symptom 1, symptom 2, and symptom 3, and the disease vocabulary group (disease n) 400 may include a sub-disease vocabulary group (symptom 1), a sub-disease vocabulary group (symptom 2), and a sub-disease vocabulary group (symptom 3) as the sub-disease vocabulary groups 430 for each symptom. The patient analysis basic sentence 420 may be classified into a patient analysis basic sentence group 440 for each symptom based on the sub-disease vocabulary group 430. For example, the patient analysis basic sentences may be classified into a patient analysis basic sentence (symptom 1), a patient analysis basic sentence (symptom 2), and a patient analysis basic sentence (symptom 3). The classified patient analysis basic sentences (symptom n) may be determined as the patient analysis basic sentence group (symptom n) 440.
The patient analysis basic sentence group (symptom n) 440 may be rearranged considering time series. When there is a word whose chronological order may be determined in a sentence, the patient analysis basic sentence (symptom n) may be rearranged within the patient analysis basic sentence group (symptom n) 440 in consideration of chronological order.
Patient analysis basic information (type 2) 450 may be generated based on the patient analysis basic sentence group (symptom n) 440 and the temporally rearranged patient analysis basic sentences (symptom n). Symptom data of a current user may be extracted based on the patient analysis basic sentences (symptom n), and the patient analysis basic information (type 2) 450 may be generated based on the user symptom data. In addition, when there is a relationship between symptoms, the patient analysis basic information (type 2) 450 may be generated in consideration of the relationship between the patient analysis basic sentences (symptom n) corresponding to different symptoms.
The patient analysis basic information (type 2) 450 generated in the above manner may be input to the voice data analysis AI engine (type 2) to generate the report basic information. The report basic information may be information in which the patient analysis basic information (type 2) 450 is classified according to detailed items to be reported.
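The symptom-wise classification and chronological rearrangement described above may be sketched as follows. The data shapes (sentences paired with an optional time marker, vocabulary as word sets) are assumptions for illustration only.

```python
# Illustrative sketch (assumed data shapes): classify patient analysis
# basic sentences into per-symptom groups using sub-disease vocabulary
# groups, then rearrange each group chronologically when a sentence
# carries a time marker (None = no chronological order determinable).

def classify_by_symptom(sentences, symptom_vocab):
    """sentences: list of (text, time_order-or-None) pairs.
    symptom_vocab: mapping symptom -> set of related words.
    """
    groups = {symptom: [] for symptom in symptom_vocab}
    for text, time_order in sentences:
        for symptom, vocab in symptom_vocab.items():
            # One sentence may correspond to several symptom groups.
            if any(word in text for word in vocab):
                groups[symptom].append((text, time_order))
    # Rearrange each group chronologically; unordered sentences go last.
    for symptom in groups:
        groups[symptom].sort(key=lambda s: float("inf") if s[1] is None else s[1])
    return groups
```

Each returned group plays the role of a patient analysis basic sentence group (symptom n) 440 in the description above.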
Referring to the drawing, a procedure for generating patient analysis basic information (type 1) 580 is illustrated.
In order to generate patient analysis basic information (type 1) 580, a disease determination procedure of primarily determining a user's disease may be performed.
The user voice data may be textualized. The textualized voice data may be divided into a plurality of sentences. In the case of the patient analysis basic information (type 1) 580, since there is no data on a previously confirmed user disease, a sentence highly related to the symptom vocabulary group set 500 among the plurality of sentences may first be classified as a patient analysis basic sentence 520.
Some of the plurality of sentences may be classified based on the symptom vocabulary group set 500 as the patient analysis basic sentences 520 in consideration of symptoms. The symptom vocabulary group set 500 may include a plurality of symptom vocabulary groups 530 in which vocabularies related to symptoms are grouped. The symptom vocabulary group 530 is a group for vocabularies highly related to each symptom and may be defined for various symptoms. When the symptom is a headache, words such as headache, migraine, and sore bones may be included in the corresponding symptom vocabulary group 530. One patient analysis basic sentence 520 may correspond to different symptom vocabulary groups 530.
In the present invention, the symptom vocabulary group 530 corresponding to the patient analysis basic sentence may be determined, and a prediction of the possibility of disease may be performed based on the determined symptom vocabulary group 530. For example, when the patient analysis basic sentence 520 corresponds to symptom vocabulary group a, symptom vocabulary group b, and symptom vocabulary group c, a disease that may have the symptom vocabulary group a, the symptom vocabulary group b, and the symptom vocabulary group c may be determined as the candidate prediction disease 540.
After the candidate prediction disease 540 is determined, a final prediction disease 560 may be determined based on the patient analysis basic sentence 520 corresponding to the symptom vocabulary groups 530. The final prediction disease 560 may be determined in consideration of the relationship between the basic patient analysis sentences, a degree of relation between words included in the patient analysis basic sentence and a specific disease, and severity of a specific symptom included in the patient analysis basic sentence, and the like.
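The candidate-disease prediction step described above may be sketched as follows. The disease-to-symptom table and the subset-coverage criterion are made-up illustrative assumptions; the actual determination may also weigh sentence relationships, word-disease relatedness, and symptom severity as stated above.

```python
# Hedged sketch: map each patient analysis basic sentence to symptom
# vocabulary groups, then select candidate diseases whose known symptom
# sets are covered by the observed symptoms. The disease-to-symptom
# table below is a fabricated example, not medical reference data.

DISEASE_SYMPTOMS = {
    "migraine": {"headache", "nausea"},
    "arteriosclerosis": {"dizziness", "headache"},
}

def match_symptoms(sentence, symptom_vocab):
    """Return the symptom groups a sentence corresponds to."""
    return {s for s, vocab in symptom_vocab.items() if any(w in sentence for w in vocab)}

def candidate_diseases(sentences, symptom_vocab):
    observed = set()
    for sentence in sentences:
        observed |= match_symptoms(sentence, symptom_vocab)
    # A disease is a candidate when all of its symptoms were observed.
    return [d for d, symptoms in DISEASE_SYMPTOMS.items() if symptoms <= observed]
```

A final prediction disease would then be chosen from the candidates using the additional criteria listed above (sentence relationships, relatedness, severity).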
When the final prediction disease is determined, a patient analysis basic sentence group (symptom n) 570 for the final prediction disease may be determined in the same manner as the procedure described above, and the patient analysis basic information (type 1) 580 may be generated based on the patient analysis basic sentence group (symptom n) 570.
When there is no candidate prediction disease 540, the patient analysis basic information (type 1) 580 may be generated based only on the patient analysis basic sentence 520 corresponding to a specific symptom.
The patient analysis basic information (type 1) 580 generated in the above manner may be input to the voice data analysis AI engine (type 1) to generate the report basic information. The report basic information may be information in which the patient analysis basic information (type 1) 580 is classified according to detailed items to be reported.
According to an embodiment of the present invention, when a collision occurs between patient analysis basic information, the reliability between the patient analysis basic information may be determined to process the collision. When a collision occurs, a report may be generated based on patient analysis basic information having relatively high reliability. The reliability of the patient analysis basic information may be determined in consideration of the degree of matching with the previous patient analysis basic information, the degree of overlap of the patient analysis basic information, and the like.
In addition, the patient analysis basic information equal to or less than a certain reliability may be set not to be included in a specific report in consideration of the reliability of the patient analysis basic information according to a report subject (for example, doctor, guardian, insurance company).
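The collision handling and per-subject reliability filtering described in the two paragraphs above may be sketched as follows. The reliability floors per report subject are assumed example values, not values given in the disclosure.

```python
# Illustrative sketch (assumed scoring): resolve a collision between two
# pieces of patient analysis basic information by reliability, and
# filter items below a per-report-subject reliability floor.

REPORT_FLOORS = {"doctor": 0.5, "guardian": 0.3, "insurance": 0.8}  # assumed values

def resolve_collision(info_a, info_b):
    """Keep the item with the higher reliability when two items conflict."""
    return info_a if info_a["reliability"] >= info_b["reliability"] else info_b

def filter_for_subject(items, subject):
    """Exclude items at or below the subject's reliability floor from the report."""
    floor = REPORT_FLOORS[subject]
    return [item for item in items if item["reliability"] >= floor]
```

In this sketch an insurance-company report applies the strictest floor, so low-reliability items reach a guardian's report but not the insurer's.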
Referring to the drawing, a method of determining a time series unit of patient analysis basic information is illustrated.
The patient analysis basic information for generating a patient report may be classified into patient analysis basic information (time series) 600 that is time series information and patient analysis basic information (non-time series) 610 that is non-time series information. The patient analysis basic information (time series) 600 may be information for generating a patient report by additionally considering previously received patient analysis basic information. The patient analysis basic information (non-time series) 610 may be information for generating a patient report in consideration of only the current patient analysis basic information.
In the case of the patient analysis basic information (time series) 600, it may be determined whether previous patient analysis basic information exists, and when it exists, the previous patient analysis basic information may be reflected when generating the current report. The time series unit of the patient analysis basic information (time series) 600 may be adaptively determined in consideration of report result accuracy and disease characteristics.
A minimum data unit for generating a report may be determined as a reference patient analysis time series unit 620 of the patient analysis basic information (time series) 600. The reference patient analysis time series unit 620 may be defined in consideration of disease characteristics. In the case of a specific disease, a time required to accumulate the patient analysis information for diagnosis or report may be defined, and based on the definition, a reference patient analysis time series unit 620 may be defined. In addition, the reference patient analysis time series unit 620 may be defined to have report result accuracy greater than or equal to a threshold value. The report result accuracy 630 may be determined in consideration of whether an item generated as a report matches an actual user's state.
The reference patient analysis time series unit 620 may be adjusted in consideration of the report result accuracy 630. In an embodiment of the present invention, the characteristics of the reference patient analysis time series unit 620 may be changed in consideration of the report result accuracy 630. For example, even when the amount of patient analysis basic information is relatively reduced due to the increase in the report result accuracy 630 due to the increase in the performance of the AI engine or the time series unit of the patient analysis basic information is reduced, if it is possible to generate report results with accuracy greater than or equal to the threshold value, the reference patient analysis time series unit 620 may be reduced. That is, the characteristics of the reference patient analysis time series unit 620 may be newly set based on the feedback on the report result.
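The feedback-driven adjustment of the reference patient analysis time series unit described above may be sketched as follows. The step size, the day-based unit, and the accuracy threshold are assumptions for illustration.

```python
# Hedged sketch: shrink the reference patient analysis time series unit
# while report result accuracy stays at or above a threshold, and grow
# it when accuracy falls below. Step size and units (days) are assumed.

ACCURACY_THRESHOLD = 0.9  # assumed value
STEP_DAYS = 7             # assumed value

def adjust_time_series_unit(unit_days, report_accuracy, min_days=7):
    """Return the new reference time series unit, in days."""
    if report_accuracy >= ACCURACY_THRESHOLD and unit_days - STEP_DAYS >= min_days:
        # Accuracy is sufficient: less accumulated data suffices.
        return unit_days - STEP_DAYS
    if report_accuracy < ACCURACY_THRESHOLD:
        # Accuracy is insufficient: accumulate more data per report.
        return unit_days + STEP_DAYS
    return unit_days
```

This mirrors the feedback loop above: as AI engine performance improves and accuracy rises, the unit may be reduced without falling below the disease-specific minimum.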
Referring to the drawing, a method of analyzing user voice data based on context data and out-of-context data is illustrated.
The preprocessing unit may extract the context data 700 and the out-of-context data 705 based on voice data. The context data may include information on characteristics generated in context, such as sentence completeness, word composition, and vocabulary. The out-of-context data may include information on out-of-context characteristics (for example, tone, pitch, a stuttering level, etc.).
The voice data analysis AI engines 750 and 760 may receive the context data 700 and the out-of-context data 705 and generate user symptom data. The user symptom data may include disease prediction data and disease monitoring data. For example, the prediction of the possibility of a user's migraine may be performed based on the voice data analysis AI engines 750 and 760, and migraine prediction data and migraine monitoring data may be provided as the user symptom data.
The context data 700 may include a plurality of lower context data, and the out-of-context data 705 may include lower out-of-context data.
In the present invention, two pieces of reference lower context data (first reference lower context data 710 and second reference lower context data 720) may exist in order to extract user's lower context data.
The first reference lower context data 710 may be lower context data commonly possessed by users having a critical ratio (e.g., 80%) or more among users having a specific disease. For example, the first reference lower context data 710 may be lower context data commonly appearing in users having dementia or migraine. The first reference lower context data 710 may be adaptively changed according to the accumulation of the user voice data. The first reference lower context data may also be set as a separate default value in consideration of a user's age, educational background, sex, and the like.
The second reference lower context data 720 may be lower context data other than the first reference lower context data 710. The second reference lower context data 720 may be lower context data observed in a specific user or a specific user group having a specific disease.
The user voice data may be analyzed based on the first reference lower context data 710 and the second reference lower context data 720, and the user symptom data may be generated based on the analysis result.
Similarly, in the present invention, two pieces of reference lower out-of-context data (first reference lower out-of-context data 730 and second reference lower out-of-context data 740) may exist in order to extract the user's lower out-of-context data.
The first reference lower out-of-context data 730 may be lower out-of-context data commonly possessed by users having a critical ratio (e.g., 80%) or more among users having a specific disease. For example, the first reference lower out-of-context data 730 may be lower out-of-context data commonly appearing in users having dementia or migraine. The first reference lower out-of-context data 730 may be adaptively changed according to the accumulation of the user voice data. The first reference lower out-of-context data 730 may also be set as a separate default value in consideration of a user's age, educational background, sex, and the like.
The second reference lower out-of-context data 740 may be lower out-of-context data other than the first reference lower out-of-context data 730. The second reference lower out-of-context data 740 may be lower out-of-context data observed in a specific user or a specific user group having a specific disease.
The user voice data may be analyzed based on the first reference lower out-of-context data 730 and the second reference lower out-of-context data 740, and the user symptom data may be generated based on the analysis result.
According to an embodiment of the present invention, the voice data analysis AI engine may be separately implemented as the voice data analysis AI engine (first reference) 750 for analyzing the first reference lower context data 710 and the first reference lower out-of-context data 730 and the voice data analysis AI engine (second reference) 760 for analyzing the second reference lower context data 720 and the second reference lower out-of-context data 740.
According to an embodiment of the present invention, in order to determine the first reference lower context data 710 and the first reference lower out-of-context data 730, user feature vectors (for example, vocabulary vector, sound feature vector, etc.) of all users having a specific disease may be located on the embedding plane. The user feature vector may be a vector extracted based on the user voice data.
As the user feature vectors of users having a specific disease (extracted from their user data or user voice data) accumulate, the user feature vectors commonly possessed on the embedding plane by users at or above the critical ratio change, and thus a vector corresponding to the first reference lower context data 710 and a vector corresponding to the first reference lower out-of-context data 730 may be changed. As described above, when the first reference lower context data 710 and the first reference lower out-of-context data 730 are changed according to the accumulation of the user feature vectors of all users having the specific disease, the pool of the training data of the voice data analysis AI engine (first reference) 750 and the voice data analysis AI engine (second reference) 760 may be changed and updated.
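The embedding-plane split between first-reference and second-reference data described above may be sketched as follows. Representing user feature vectors as binary indicator vectors is a simplifying assumption; an actual engine would use learned embeddings rather than 0/1 features.

```python
# Illustrative sketch (assumed representation): decide which feature
# dimensions count as first-reference (common to at least a critical
# ratio of users having a disease) versus second-reference, from binary
# user feature vectors. Real engines would use learned embeddings.

def split_reference_features(user_vectors, critical_ratio=0.8):
    """user_vectors: list of equal-length 0/1 feature vectors, one per user.

    Returns (first_reference, second_reference) lists of feature indices.
    """
    n_users = len(user_vectors)
    n_features = len(user_vectors[0])
    first_ref, second_ref = [], []
    for j in range(n_features):
        share = sum(vec[j] for vec in user_vectors) / n_users
        # A feature shared by >= critical_ratio of users is first-reference.
        (first_ref if share >= critical_ratio else second_ref).append(j)
    return first_ref, second_ref
```

Re-running this split as new user vectors accumulate corresponds to the update of the training-data pools of the first-reference and second-reference engines described above.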
The embodiments of the present invention described above may be implemented in the form of program instructions that can be executed through various computer units and recorded on computer readable media. The computer readable media may include program instructions, data files, data structures, or combinations thereof. The program instructions recorded on the computer readable media may be specially designed and prepared for the embodiments of the present invention or may be available instructions well known to those skilled in the field of computer software. Examples of the computer readable media include magnetic media such as a hard disk, a floppy disk, and a magnetic tape, optical media such as a compact disc read only memory (CD-ROM) and a digital video disc (DVD), magneto-optical media such as a floptical disk, and a hardware device, such as a ROM, a RAM, or a flash memory, that is specially made to store and execute the program instructions. Examples of the program instruction include machine code generated by a compiler and high-level language code that can be executed in a computer using an interpreter and the like. The hardware device may be configured as at least one software module in order to perform operations of embodiments of the present invention and vice versa.
While the present invention has been described with reference to specific details such as detailed components, specific embodiments and drawings, these are only examples to facilitate overall understanding of the present invention and the present invention is not limited thereto. It will be understood by those skilled in the art that various modifications and alterations may be made.
Therefore, the spirit and scope of the present invention are defined not by the detailed description of the present invention but by the appended claims, and encompass all modifications and equivalents that fall within the scope of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
10-2023-0102208 | Aug 2023 | KR | national |