The following generally relates to imputing an outcome attribute to a Personal Emergency Response Service (PERS) electronic record missing an outcome attribute using a structured situation string or unstructured case note text associated with the record.
The portable device 108 is configured to be carried by a user (i.e., a subscriber of the PERS 102), e.g., as part of a pendant on a necklace, supported on a wristband or belt, etc. The portable device 108 includes at least a wireless transmitter, which is activatable via actuation of a push button, voice detection, etc. The portable device 108 may also include a microphone and a speaker. The transmitter, when activated, wirelessly transmits a signal to the user site communication system 110, which causes the user site communication system 110 to “call” the call center communication system 112. It is understood that the user site communication system 110 may be located within the portable device 108 or separate from it.
Personnel at the call center 106 assess the call. Where the call is intentional and the user requires assistance, the personnel at the call center 106 may contact an informal responder (e.g., a neighbor, a family member, etc.), an emergency responder (e.g., an ambulance service, the police department, the fire department, etc.), etc. Information about each call is stored in a data structure/record 116 (e.g., as an electronic file in the storage device 114) that includes attribute fields for identifying at least a case identification, a type of the call, a situation/reason for the call, and an outcome of the call.
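The attribute fields just described can be sketched as a simple record type. This is an illustrative assumption only; the field names (`case_id`, `call_type`, `situation`, `outcome`) and the `CallRecord` class do not appear in the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallRecord:
    """Illustrative PERS call record; field names are assumptions."""
    case_id: str                # case identification
    call_type: str              # e.g., "incident", "test", "maintenance"
    situation: Optional[str]    # structured string or reference to a case note
    outcome: Optional[str]      # may be missing and later imputed

# A record whose outcome attribute field is not populated:
record = CallRecord(case_id="C-001", call_type="incident",
                    situation="FALL, NO INJURY", outcome=None)
```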
Jorn op den Buijs et al., “Predictive Modeling of Emergency Hospital Transport using Medical Alert Pattern Data: Retrospective Cohort Study,” iproc 2015; 1(1):e19, DOI: 10.2196/iproc.4772, indicates that this information has been used with predictive models to predict whether a user of the portable device 108 is at risk for emergency transport to a healthcare facility, and, if so, to notify a clinician of the user. In
The predictive models are trained and tested with the outcome attribute of records. However, the attribute fields in each record for each call from the user site 104 to the call center 106 are not always populated, and records without a value in the outcome attribute field cannot be used in training and/or testing of the predictive models. Unfortunately, this may negatively impact the ability of the predictive models to accurately predict users at risk of emergency transport, which may lead to less than optimal care and increased cost.
Aspects described herein address the above-referenced problems and others.
In one aspect, a computing system includes a memory device configured to store missing outcome instructions and a processor configured to execute the missing outcome instructions. The instructions cause the processor to: evaluate records stored on a storage device of a call center of a system, wherein the storage device is configured to store electronic records of calls from a user site to the call center, and each record includes at least a situation attribute field and an outcome attribute field; identify records of the stored records which include at least one of a value in the situation attribute field or case note text in a case note database, but no value in the outcome attribute field; and impute an outcome to the outcome attribute field of each identified record using at least one of the situation attribute and the case note text to predict the missing outcome.
In another aspect, a method includes identifying stored records, which include at least a situation attribute field and an outcome attribute field, that have at least one of a situation attribute and case note text in a case note database, and no outcome attribute, and determining whether the situation attribute field includes a structured situation string or a reference to a case note in response to the situation attribute field including a situation attribute. The method further includes performing a structured situation string analysis to impute an outcome to one or more of the records missing an outcome attribute in response to the situation attribute field including the structured situation string, and performing a case note analysis to impute an outcome to one or more of the records missing an outcome attribute in response to at least one of the situation attribute field including the reference to the case note and the case note database including the case note text. The method further includes employing records with outcome attributes and imputed outcome attributes to generate predictive models, and employing the predictive models to predict a health state of a user.
In another aspect, a computer readable storage medium is encoded with computer readable instructions. The computer readable instructions, when executed by a processor, cause the processor to: identify stored records, which include at least a situation attribute field and an outcome attribute field, that have at least one of a situation attribute and case note text in a case note database, and no outcome attribute, determine whether the situation attribute field includes a structured situation string or a reference to a case note in response to the situation attribute field including a situation attribute, perform a structured situation string analysis to impute an outcome to one or more of the records missing an outcome attribute in response to the situation attribute field including the structured situation string, perform a case note analysis to impute an outcome to one or more of the records missing an outcome attribute in response to at least one of the situation attribute field including the reference to the case note and the case note database including the case note text, employ records with outcome attributes and imputed outcome attributes to generate predictive models, and employ the predictive models to predict a health state of a user.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the embodiments and are not to be construed as limiting the invention.
The type attribute 300 identifies a type of a call, which can be welcome 302, test 304, maintenance 306, check-in 308, accidental 310, incident 312, etc. The situation attribute 400 either identifies a reason for the call via a structured situation string or specifies that there is a case note with unstructured narrative text for the call.
Returning to
The memory 208 stores data 214, such as records, and computer readable instructions 216. The processor 206 is configured to execute the computer readable instructions 216. The illustrated computer readable instructions 216 include a missing outcome instruction set or algorithm 218 and a predictive analytics instruction set or algorithm 220. As described in greater detail below, the missing outcome instruction set or algorithm 218 includes instructions, which, when executed by the processor 206, cause the processor 206 to at least identify records with a situation attribute or an associated case note but no outcome attribute, and impute an outcome to the outcome attribute field of each identified record using either the situation attribute or the case note, as available.
The predictive analytics instruction set or algorithm 220 includes instructions that cause the processor 206 to predict users at risk of emergency transport based on predictive models, which are trained and tested on the outcomes in the records, and to generate a notification, when needed, indicating that a user is at risk for emergency transport within a specified time period (e.g., 30 days). An example product that performs such an analysis is the CareSage predictive analytics engine, a product of Koninklijke Philips N.V., a company headquartered in the Netherlands. By also using imputed outcomes, the predictive analytics instruction set or algorithm 220 uses additional information that was not previously available, which can reduce patient risk, lower costs and/or improve patient outcomes.
At 602, stored call records are processed to identify, at least, records with a situation attribute (e.g., a string, a reference to a case note, and/or an associated case note text) but no outcome attribute. Act 602 is omitted where such records have already been identified. A more detailed example is provided below.
At 604, it is determined whether each of the identified records includes a structured situation string or a reference to a case note, and whether a case note database (e.g., in the storage device 114) has a case note corresponding to each record. For sake of brevity and explanatory purposes, the term “case note” herein refers to: 1) the case note referenced in the situation attribute; 2) the case note in the case note database; or 3) both the case note referenced in the situation attribute and the case note in the case note database.
Returning to
Returning to
At 608, one or more new predictive models are employed to predict the value for the missing outcome attribute based on the case notes. A more detailed example is provided below.
At 610, it is determined whether there is a single predictive model or multiple predictive models utilized to impute the missing outcome attribute.
If there is more than one predicted value, then at 612, the predicted values are combined or blended according to predetermined assessment rules to produce a single imputed outcome for the file. A more detailed example is provided below.
If there is only one predictive model, at 614, the model output value is the single imputed outcome for the record.
At 616, it is determined whether the imputed outcome (from act 606, 612 or 614) is to be confirmed. For example, this information can be in the LUT of
Returning to
If the imputed value is not to be confirmed or after confirmation, at 620, the record is populated with the single imputed outcome and stored.
At 622, it is determined whether there is another identified record.
If there is another record, then acts 604-622 are repeated for the record.
If there is no other identified record to process, at 624, the records with imputed outcomes can be stored, further processed, etc. For example, in one non-limiting instance, the records with imputed outcomes can be used along with the records with outcomes with predictive analytics to predict a health state of a user, e.g., users at risk of emergency transport and notify the clinical site 118, if needed, that a user(s) is at risk for emergency transport based on the predicted outcome.
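The overall flow of acts 602-624 can be sketched as follows. This is a minimal sketch under stated assumptions: records are dictionaries, and `string_model`, `note_models`, and `combine` are placeholder callables standing in for the structured situation string analysis, the case note predictive models, and the blending of act 612, respectively.

```python
def impute_outcomes(records, case_notes, string_model, note_models, combine):
    """Sketch of acts 602-624; all helper callables and key names are
    assumptions. `records` are dicts with "case_id", "situation", and
    "outcome" keys; `case_notes` maps case identifications to note text."""
    for rec in records:
        if rec["outcome"] is not None:
            continue                                    # 602: only missing outcomes
        note = case_notes.get(rec["case_id"])           # 604: look for a case note
        if note is not None:
            preds = [m(note) for m in note_models]      # 608: predict from the note
            # 610-614: one model -> its output; multiple models -> blend
            rec["outcome"] = preds[0] if len(preds) == 1 else combine(preds)
        elif rec["situation"] is not None:
            rec["outcome"] = string_model(rec["situation"])  # 606: structured string
    return records                                      # 620-624: populated records
```

A usage example with a dummy structured-string model: a record missing an outcome receives an imputed value, while a record that already has an outcome is left untouched.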
At 802, the stored records are processed to identify files of a predetermined case type (e.g., incident). For example, the computing system 204 receives a user input, via the input device 212, which indicates a case type of interest, and the processor 206 reads the value of the type attribute of each file and identifies the files that include a type attribute that is equal to the case type of interest—the predetermined case type.
At 804, each identified record is processed to determine whether the record includes an outcome attribute. For example, the processor 206 reads the value of the outcome attribute of a record and identifies whether the record includes an outcome attribute. This is repeated for the other identified records.
At 806, each of the records with no outcome attribute is processed to determine whether the record includes a structured situation string or a case note. For example, the processor 206 reads the value of the situation attribute of a record and searches the case note database, and identifies whether the record includes a structured situation string or a case note. For the latter, the processor 206 extracts the case identification from the record and searches the case note database (e.g., a table thereof) for the case identification. If the case identification is found in the case note database, the processor retrieves the corresponding case note text.
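The lookup at act 806 might be sketched as follows, assuming records are dictionaries keyed by attribute name and the case note database is a mapping from case identifications to note text; the function name and return shape are illustrative.

```python
def resolve_situation(record, case_note_db):
    """Act 806 sketch (names are assumptions): classify a record missing
    an outcome by its situation attribute and any stored case note."""
    # Search the case note database by the record's case identification.
    note = case_note_db.get(record["case_id"])
    if note is not None:
        return ("case_note", note)
    if record.get("situation"):
        return ("structured_string", record["situation"])
    return ("none", None)  # cannot be used for imputation
```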
Returning to
In one instance, records that include an outcome attribute are flagged as records that can be used for predictive analytics. Additionally, or alternatively, records that are missing an outcome attribute, a structured situation string and a case note are flagged as records that cannot be used for predictive analytics.
At 902, records with an outcome attribute and a case note are processed to determine a distribution of the l different outcomes, i.e., the l levels of the categorical variable. For example, using the six (6) outcome possibilities (i.e., l=6) from
At 904, it is determined whether the distribution of the l levels is well balanced. In one instance, a well-balanced distribution is one in which a difference between a greatest occurrence of an outcome and a least occurrence of an outcome satisfies a predetermined threshold. For example, where there are at least two levels (l≥2), the greatest occurrence is Y for the EMER ASSIST-TRANS level, the least occurrence is Z for the SUB-ASSISTED SELF level, and the predetermined threshold is T, the distribution is well balanced if |Y−Z|<T, and not well balanced otherwise.
If the l levels are well balanced, at 906, a number of new predictive models is set to one (i.e. N=1).
If the l levels are not well balanced, at 908, it is determined whether the l levels can be decomposed into n binary pairs (n<l). For example, the six (6) outcome possibilities from
If the l levels can be decomposed as such, at 910, the number of new predictive models is set to n (i.e. N=n).
If the l levels cannot be decomposed as such, at 912, the l levels are grouped into L (L<l) well balanced levels, and, at 914, the number of new predictive models is set to one (i.e. N=1).
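The decision in acts 904-914 can be sketched as follows; the function name, the `threshold` parameter, and the `binary_pairs` argument are assumed names for illustration, not part of the disclosure.

```python
from collections import Counter

def number_of_models(outcomes, threshold, binary_pairs=None):
    """Sketch of acts 904-914 (names are assumptions): decide how many
    new predictive models N to train for the l outcome levels."""
    counts = Counter(outcomes)
    # 904: well balanced if |Y - Z| < T for the most/least frequent levels
    if abs(max(counts.values()) - min(counts.values())) < threshold:
        return 1                      # 906: N = 1
    if binary_pairs:                  # 908-910: decompose into n binary pairs
        return len(binary_pairs)      # N = n
    return 1                          # 912-914: regroup into L balanced levels, N = 1
```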
At 916, the N predictive models are trained with a training set of case notes with a known outcome.
At 918, the trained N predictive models are tested with a test set of case notes with a known outcome.
At 920, the trained and tested N predictive models are used to predict values for the missing outcomes.
At 1002, blending rules are obtained.
Returning to
If P1 is equal to VALUE0 at 1004, at 1006, it is determined whether P2 is equal to VALUE0.
If P2 is equal to VALUE0 at 1006, at 1008, P1_VALUE0 and P2_VALUE0 are blended to create the single value P1_VALUE0-P2_VALUE0.
If P2 is not equal to VALUE0 at 1006, at 1010, P1_VALUE0 and P2_VALUE1 are blended to create the single value P1_VALUE0-P2_VALUE1.
If P1 is not equal to VALUE0 at 1004, at 1012, it is determined whether P2 is equal to VALUE0.
If P2 is equal to VALUE0 at 1012, at 1014, P1_VALUE1 and P2_VALUE0 are blended to create the single value P1_VALUE1-P2_VALUE0.
If P2 is not equal to VALUE0 at 1012, at 1016, P1_VALUE1 and P2_VALUE1 are blended to create the single value P1_VALUE1-P2_VALUE1.
At 1018, the blended value is imputed to the missing outcome.
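The blending of acts 1004-1016 amounts to a lookup in a table of predetermined rules keyed by the two predicted values. A minimal sketch follows, with an illustrative rule table whose blended values simply mirror the labels above; the actual predetermined assessment rules are not reproduced here.

```python
# Illustrative blending rules for two binary model outputs P1 and P2;
# each (P1, P2) pair maps to a single blended outcome value.
BLEND_RULES = {
    ("VALUE0", "VALUE0"): "P1_VALUE0-P2_VALUE0",
    ("VALUE0", "VALUE1"): "P1_VALUE0-P2_VALUE1",
    ("VALUE1", "VALUE0"): "P1_VALUE1-P2_VALUE0",
    ("VALUE1", "VALUE1"): "P1_VALUE1-P2_VALUE1",
}

def blend(p1, p2):
    """Acts 1004-1016 sketch: return the single blended value for P1, P2."""
    return BLEND_RULES[(p1, p2)]
```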
At 1202, it is determined, for each user with a record with an imputed outcome, if there is an electronic health record (EHR). In one example, this can be achieved through a PERS-EHR integration interface.
If there is an EHR for a user, at 1204, the imputed outcome is compared with the outcome in the EHR. In one instance, this includes analyzing the EHR to extract emergency room (ER) and/or hospital admissions information about the outcome and comparing the extracted information with the predicted outcome to confirm the predicted outcome.
If there is no EHR for the user, at 1206, a call script template is retrieved from a set of predefined templates. The call script retrieved depends on the case type and the predicted case outcome.
Returning to
At 1210, the imputed outcome is compared with the actual information.
At 1212, it is determined whether the imputed outcome is the same as the actual outcome.
If the predicted outcome is not the same as the actual outcome, at 1214, the record is updated with the actual outcome.
If the predicted outcome is the same as the actual outcome or after updating the imputed outcome, at 1216 the record is stored with the confirmed outcome.
A non-limiting example of using the approach described herein is described next in connection with
For this example, there are 5,329 records of type “incident.” The output of act 602 (
Of the remaining 720 (13.5%), 397 (7.5%) have a situation attribute but no case note, and 323 (6%) have a situation attribute that refers to a case note.
For this example, the blending rules shown in
The computing system 204 (
The following non-limiting example illustrates a benefit of imputing an outcome attribute as described herein. For this example, a predictive model is trained/validated with historic data and an outcome window, and performance is tested on a cohort without imputation and a cohort with imputation. From
In another example, the approach described herein is applied to records of the type incident with a partly missing outcome. An example of this category is all incident cases with the assigned outcome “Emer Assist—No Status”. No Status means that the call center 106 was unable to get the user's transport status upon calling back the user and/or responder. This may occur when the user or responder does not answer the call center follow-up call, the hospital does not confirm the user's admission, or the EMS dispatch center cannot provide transport information. In this example, of the 3,258 subscribers in the previous example, there are 904 (17%) incident cases with the outcome “Emer Assist—No Status,” which is shown in
In
The following describes an example to classify case outcomes based on the case notes (e.g., from a record and/or the case note database). In this example, first, separate case note fragments belonging to a single case are joined. These separate fragments result from a single case often involving different interactions with the user, even by different call centers. Case notes are then converted to lower case and stripped of commas and periods. Words are extracted from the case notes by splitting on white space. In one instance, the call center can use a standard list of abbreviations, e.g., “disp” means dispatch and “amb” means ambulance.
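The preprocessing just described might be sketched as follows; the function name and the two-entry abbreviation map are illustrative, and a real deployment would use the call center's standard abbreviation list.

```python
def normalize_case_note(fragments, abbreviations=None):
    """Sketch of the preprocessing above: join fragments belonging to a
    single case, lowercase, strip commas/periods, split on white space,
    and expand abbreviations (the map here is an illustrative subset)."""
    abbreviations = abbreviations or {"disp": "dispatch", "amb": "ambulance"}
    text = " ".join(fragments).lower().replace(",", " ").replace(".", " ")
    return [abbreviations.get(word, word) for word in text.split()]
```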
A table of the frequency of each word is generated and sorted in order of decreasing frequency. For each of the N most frequently occurring words, one-hot encoding can be used to indicate if this word was present in the case note. Thus, a large sparse matrix can be generated with one row per case and N columns for the words, where each cell is “1” if the corresponding word is present in the case note and “0” otherwise. This approach is also known as “bag-of-words”. In one instance, the number of words N can be varied from 5 to 500 to determine the optimal N required for classification of the case notes.
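A minimal standard-library sketch of this bag-of-words encoding follows; the function name and return shape are assumptions, and a production implementation would use a sparse matrix representation for large N.

```python
from collections import Counter

def bag_of_words(case_notes, n_words):
    """Sketch of the encoding above: one row per case, one column per
    top-N word, cell = 1 if the word is present in the case note."""
    # Frequency table sorted in order of decreasing frequency.
    freq = Counter(w for note in case_notes for w in note.split())
    vocab = [w for w, _ in freq.most_common(n_words)]
    # One-hot presence matrix (dense here for simplicity; sparse in practice).
    matrix = [[1 if w in note.split() else 0 for w in vocab]
              for note in case_notes]
    return vocab, matrix
```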
The set of case notes is randomly split into a training and test set using a 0.75/0.25 ratio. The training set is used to develop the case outcome classifier. The classifier(s) is developed using a machine learning technology, e.g., a boosted regression trees approach referred to as extreme gradient boosting. Due to its tree-based nature, this methodology allows for automatic selection of interactions between variables. Variable importance is determined according to a ‘gain’, which is a measure of the relative contribution of the corresponding variable to the predictive model, calculated by taking the improvement in accuracy brought by a variable to the branches it is on. The boosted regression model involves tuning hyperparameters of the learning algorithm, such as the number of trees, the maximum depth of the trees (defining the degree of interaction between variables), and the learning rate. This optimization can be achieved using 5-fold cross-validation on the training set, with the optimization metric determined by the AUC in the test fold.
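The boosted-trees learner itself (e.g., via a library such as `xgboost`) is not reproduced here, but the 5-fold split and the AUC optimization metric can be sketched with the standard library alone; both function names are assumptions.

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Sketch of the 5-fold split used for hyperparameter tuning:
    shuffle the n sample indices and deal them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    case is scored above a randomly chosen negative case (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```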
The separate test set is used to validate the performance of the classifiers, which is depicted in
In another embodiment, the approach described herein is applied to pre-populate case attributes while call center personnel are in a conversation with a subscriber and type a case note. In this embodiment, the predictive models analyze the case note text as the text is entered (i.e., in real time) and predict the values of different case attributes. These values are pre-filled in the record for the personnel, who can confirm them and proceed with closing out the case. In one instance, this leads to improved call center efficiency and quality of recorded data.
In another embodiment, the approach described herein is applied to analyze case notes typed by a sales representative and predict the probability that an inbound call is finalized with a new subscriber and no need for a call back.
The method(s) described herein may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium (which excludes transitory medium), which, when executed by a computer processor(s) (e.g., CPU, microprocessor, etc.), cause the processor(s) to carry out acts described herein. Additionally, or alternatively, at least one of the computer readable instructions is carried by a signal, carrier wave or other transitory medium, which is not computer readable storage medium.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2018/083070 | 11/30/2018 | WO | 00 |
Number | Date | Country
---|---|---
62594215 | Dec 2017 | US