The following generally relates to performance benchmarking and more particularly to context-based performance benchmarking.
A key performance indicator (KPI) can be used to evaluate a performance of individuals. For instance, a manager of a clinical department of a healthcare facility can utilize a KPI to evaluate a performance of a staff member of the clinical department. For example, a manager of an echocardiogram laboratory can use a KPI to evaluate a performance of individual sonographers with respect to performing echocardiograms. An example KPI in this instance is an average time duration to perform an echocardiogram.
However, the complexity of performing an echocardiogram varies not only with a sonographer’s performance but also with factors outside of the control of the sonographer, such as a patient-specific clinical context (e.g., inpatient versus outpatient, etc.) and/or a workflow context (e.g., equipment model, etc.). As a consequence, performance benchmarking of individual sonographers for performing echocardiograms is affected by the patient-specific clinical context and/or the workflow context, regardless of the performance of the sonographers.
As such, all else being equal, the same KPI for two different sonographers can differ based on the patient-specific clinical context and/or the workflow context. Thus, current approaches to performance benchmarking can lead to a biased evaluation with a less accurate interpretation of the individual’s performance, e.g., depending on the context. Hence, there is an unresolved need for another and/or an improved approach(es) for performance benchmarking.
Aspects described herein address the above-referenced problems and/or others. For instance, a non-limiting example embodiment described in greater detail below considers patient-specific clinical context and/or workflow context to determine a more accurate and meaningful KPI for performance benchmarking without such biases.
In one aspect, a system includes a digital information repository configured to store information about performances of individuals, including performances of an individual of interest. The system further includes a computing apparatus. The computing apparatus includes a memory configured to store instructions for a performance benchmarking engine trained to learn factors of the performances that impact key performance indicators independent of the individuals’ performance. The computing apparatus further includes a processor configured to execute the stored instructions for the performance benchmarking engine to determine a key performance indicator of interest (1010) for the individual of interest based at least in part on the information in the digital information repository about the performances of the individual of interest and the learned factors that impact the key performance indicator of interest.
In another aspect, a method includes obtaining information about performances of individuals, including performances of an individual of interest, from a digital information repository. The method further includes obtaining instructions for a performance benchmarking engine trained to learn factors of the performances that impact key performance indicators independent of the individuals’ performance. The method further includes executing the instructions to determine a key performance indicator of interest for the individual of interest based at least in part on the information in the digital information repository about the performances of the individual of interest and the learned factors that impact the key performance indicator of interest.
In another aspect, a computer-readable storage medium stores instructions that when executed by a processor of a computer cause the processor to: obtain information about performances of individuals, including performances of an individual of interest, from a digital information repository, obtain instructions for a performance benchmarking engine trained to learn factors of the performances that impact key performance indicators independent of the individuals’ performance, and execute the instructions to determine a key performance indicator of interest for the individual of interest based at least in part on the information in the digital information repository about the performances of the individual of interest and the learned factors that impact the key performance indicator of interest.
Those skilled in the art will recognize still other aspects of the present application upon reading and understanding the attached description.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the embodiments and are not to be construed as limiting the invention.
The illustrated computing apparatus 104 includes a processor 108 (e.g., a central processing unit (CPU), a microprocessor (µCPU), and/or other processor) and computer readable storage medium (“memory”) 110 (which excludes transitory medium) such as a physical storage device like a hard disk drive, a solid-state drive, an optical disk, and/or the like. The memory 110 includes instructions 112, including instructions for a performance benchmarking engine 114. The processor 108 is configured to execute the instructions for performance benchmarking.
The illustrated computing apparatus 104 further includes input/output (“I/O”) 116. In the illustrated embodiment, the I/O 116 is configured for communication between the computing apparatus 104 and the digital information repository(s) 106, including receiving data from and/or transmitting a signal to the digital information repository(s) 106. The digital information repository(s) 106 includes a physical storage device(s) that stores digital information. This includes local, remote, distributed, and/or other physical storage device(s).
A human readable output device(s) 120, such as a display, is in electrical communication with the computing apparatus 104. In one instance, the human readable output device(s) 120 is a separate device configured to communicate with the computing apparatus 104 through a wireless and/or a wire-based interface. In another instance, the human readable output device(s) 120 is part of the computing apparatus 104. An input device(s) 119, such as a keyboard, mouse, a touchscreen, etc., is also in electrical communication with the computing apparatus 104.
The performance benchmarking engine 114 includes trained artificial intelligence. As described in greater detail below, the performance benchmarking engine 114 is trained at least with data from the digital information repository(s) 106 to learn context that affects overall performance independent of an individual’s performance and then determines a KPI(s) for the individual with data from the digital information repository(s) 106 and factors from the context. In one instance, this provides more meaningful KPI-based performance benchmarking relative to an embodiment in which context is not considered, which can lead to a biased evaluation with a less accurate interpretation of the individual’s performance.
The computing apparatus 104 can be used for performance benchmarking in various environments. In one instance, the computing apparatus 104 is used for performance benchmarking in the clinical environment. In this environment, the performance benchmarking engine 114 considers patient-specific clinical context and/or workflow context. “Patient-specific clinical context” includes factors such as patient body mass index, age, type, diagnosis, length of hospital stay, and/or other factors. “Workflow context” includes factors such as equipment model, location of examination, operator, study type, clinician, and/or other factors.
The below is described with particular application to the clinical environment but is not limited thereto.
One or more of the above systems stores data in a structured format. An example structured report includes one or more of the following: 1) a header section with patient demographic information (e.g., patient name, patient age, patient height, blood pressure, etc.) and order information (e.g., ordering physician, study type, reason for study, medical history, etc.); 2) a section for documenting related personnel (e.g., ordering physician, technologists, diagnosing physician, etc.); 3) a section for documenting measurements and clinical findings; 4) a section for a conclusion to summarize and highlight certain findings; and/or 5) a section for billing. In one instance, the digital information repository(s) 106 stores information in a structured free-text report format. Additionally, or alternatively, the digital information repository(s) 106 stores each field in a structured database.
A clinical context extractor 302 extracts a clinical context 304 from the digital information repository(s) 106 using a clinical context extraction algorithm(s) 306. A workflow context extractor 308 extracts workflow context 310 from the digital information repository(s) 106 using a workflow context extraction algorithm(s) 312. For the structured free-text report format, the clinical context extraction algorithm(s) 306 and the workflow context extraction algorithm(s) 312 include algorithms such as a natural language processing (NLP) algorithm or the like to recognize the subheading of each item of information. For the structured database, the clinical context extraction algorithm(s) 306 and the workflow context extraction algorithm(s) 312 retrieve information through, e.g., a database query.
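By way of a non-limiting illustration, extraction from the structured free-text report format can be sketched in pure Python. The subheading names and report contents below are hypothetical, and a simple regular-expression match stands in for the NLP algorithm(s) referenced above:

```python
import re

def extract_context(report_text, subheadings):
    """Extract the value following each known subheading from a
    structured free-text report (one "Subheading: value" item per line)."""
    context = {}
    for heading in subheadings:
        # Match the subheading at the start of a line, case-insensitively.
        match = re.search(rf"^{re.escape(heading)}:\s*(.+)$",
                          report_text, flags=re.IGNORECASE | re.MULTILINE)
        if match:
            context[heading] = match.group(1).strip()
    return context

# Hypothetical structured free-text report.
report = """Patient Age: 67
Patient Class: Inpatient
Equipment Model: Cart-A
Reason for Study: Evaluate LV function"""

clinical = extract_context(report, ["Patient Age", "Patient Class"])
workflow = extract_context(report, ["Equipment Model", "Reason for Study"])
```

For the structured-database alternative, the same fields would instead be retrieved with a query against the corresponding columns.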
In one example, the factor(s) identifier 402 employs a decision tree to identify the factors that affect examination duration. The input to the decision tree includes the clinical context 304 and/or the workflow context 310. Examples of factors that would affect exam duration include patient age, patient weight, diastolic pressure, patient height, patient class, gender, reason for study, type of ultrasound cart, patient location, etc.
In one instance, the decision tree is trained as a classification problem to learn what factors determine whether the examination duration would last over or under a threshold time (e.g., 30 minutes). For this, the clinical context 304 and/or the workflow context 310 is divided into multiple classes. In each class, the expected examination duration would be a similar range regardless of the capabilities of sonographers. For example, the data can be classified into two groups, a first group that takes less than thirty minutes and a second group that takes more than thirty minutes.
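As a non-limiting sketch of a single split of such a decision tree, the snippet below labels hypothetical cases as over or under the threshold time and selects, by Gini impurity, the factor and cut value that best separate the two classes. The factor names, durations, and the single-split simplification are illustrative assumptions:

```python
def gini(labels):
    """Gini impurity of a set of binary labels (True = over threshold)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(cases, factors, threshold_min=30):
    """Find the factor and cut value that best separate cases into the
    'under' and 'over' examination-duration classes (one tree split)."""
    labels = [c["duration"] > threshold_min for c in cases]
    best = None
    for factor in factors:
        for cut in sorted({c[factor] for c in cases}):
            left = [l for c, l in zip(cases, labels) if c[factor] <= cut]
            right = [l for c, l in zip(cases, labels) if c[factor] > cut]
            # Weighted impurity of the two child nodes after the split.
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(cases)
            if best is None or score < best[0]:
                best = (score, factor, cut)
    return best  # (impurity, splitting factor, splitting condition value)

# Hypothetical cases: examination duration in minutes plus candidate factors.
cases = [
    {"age": 8,  "bmi": 22, "duration": 45},
    {"age": 9,  "bmi": 24, "duration": 40},
    {"age": 40, "bmi": 23, "duration": 25},
    {"age": 55, "bmi": 21, "duration": 20},
]
impurity, factor, cut = best_split(cases, ["age", "bmi"])
```

A trained tree repeats this selection recursively; the chosen factors and cut values are exactly the "splitting conditions" reported alongside the classification result.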
The output of the decision tree includes the classification result as well as clinical and/or workflow factors and splitting conditions used to make the classification. An example of such results is shown in
The output of the decision tree includes selected factors that would contribute to the classification and the division threshold for each of the selected factors. In
A less biased, fairer benchmarking can then be performed based on the results of the decision tree. For instance, the decision tree of
Such benchmarking is achieved through knowing only the classification results, without understanding how classification is performed by the algorithm. In other words, understanding a list of potential factors that would affect exam duration is not needed for performance benchmarking. However, providing such information would increase interpretability.
Other algorithms could also be applied. For example, a random forest could be used with the same inputs. The algorithm would predict the classification of each case and identify the important factors. An example of this is shown in
For each split on the tree, the algorithm identifies the factor (e.g., age) and the condition of the factor (e.g., age > 9.5) to split the dataset in order to achieve the best classification result. The criterion used in random forest to select the factor and the condition is based on impurity. A second axis 616 in
In another example, a statistical method could also be applied. For example, the correlation between each potential factor and examination duration can be utilized. With machine learning algorithms, the performance of the predictor is highly dependent on the input features. As such, an optional module allows a healthcare professional (e.g., a cardiologist, fellow, manager of an echocardiogram laboratory, etc.) to configure which indicators/features from the patient/study profiling would be relevant for prediction. This enables a scalable way to incorporate clinical insights to guide algorithm design.
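As a non-limiting illustration of the statistical approach, the Pearson correlation coefficient between a candidate factor and examination duration can be computed directly. The factor values and durations below are hypothetical:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical candidate factor values and examination durations (minutes).
bmi      = [20, 25, 30, 35, 40]
duration = [22, 26, 31, 34, 42]

r = pearson(bmi, duration)  # near +1 here: strong positive association
```

Factors whose correlation magnitude exceeds a configured cutoff could then be surfaced to the healthcare professional as candidate features for prediction.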
A first bar 714 at the first time duration range 708 includes a first portion 716 that represents a number of outpatients and a second portion 718 that represents a number of inpatients. A second bar 720 at the second time duration range 710 includes a first portion 722 that represents a number of outpatients and a second portion 724 that represents a number of inpatients. A third bar 726 at the third time duration range 712 includes a first portion 728 that represents a number of outpatients and a second portion 730 that represents a number of inpatients.
From the delineation between inpatients and outpatients,
From the delineation between equipment models,
From the delineation between contrast enhanced and non-contrast scans,
For comparison,
In general, the factors can be used to provide a clinical context to the situation. To do this, the data can be filtered according to the selected clinical and workflow factors and identified condition. Then a fair benchmarking could be achieved based on each subset of the filtered cohort. Additionally, or alternatively, data can be grouped based on the classification result, and comparisons can be performed accordingly. In another embodiment, the list of clinical and/or workflow factors can be grouped to derive a single comprehensive factor used in performance benchmarking, which may increase interpretability.
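The filtering-and-comparison step described above can be sketched as follows. In this non-limiting illustration, cases are grouped by selected factors (here, a hypothetical patient-class factor) so that average examination durations are only compared between sonographers within the same subgroup:

```python
from collections import defaultdict

def benchmark_by_subgroup(cases, group_keys):
    """Average examination duration per sonographer within each subgroup
    defined by the selected clinical and/or workflow factors."""
    buckets = defaultdict(list)
    for c in cases:
        subgroup = tuple(c[k] for k in group_keys)
        buckets[(subgroup, c["sonographer"])].append(c["duration"])
    return {key: sum(d) / len(d) for key, d in buckets.items()}

# Hypothetical cases: comparisons are only meaningful within one context.
cases = [
    {"sonographer": "A", "patient_class": "inpatient",  "duration": 40},
    {"sonographer": "B", "patient_class": "inpatient",  "duration": 44},
    {"sonographer": "A", "patient_class": "outpatient", "duration": 22},
    {"sonographer": "B", "patient_class": "outpatient", "duration": 20},
]
averages = benchmark_by_subgroup(cases, ["patient_class"])
```

Here sonographer A compares favorably on inpatients and sonographer B on outpatients, a distinction a single pooled average would obscure.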
One example would be to use multiple factors to determine a comprehensive factor measuring the amount of care required by a specific case. For example, case complexity could be a comprehensive factor, which is used to measure how difficult the case is to perform. For example, it is harder to scan an obese stroke patient than to scan a patient with normal BMI to evaluate left ventricular function. Here, the system can use multiple factors including BMI (indicating obesity), patient history (indicating stroke), and reason for study (to evaluate left ventricular function) to derive a comprehensive factor, case complexity. Benchmarking performance can then be based on complexity level. Evaluating the productivity per sonographer by comparing average exam duration for studies at the same complexity level is fair and meaningful.
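A minimal sketch of deriving such a comprehensive factor is shown below. The specific factor names, cutoffs, and weights are illustrative assumptions, not part of the disclosed embodiment:

```python
def case_complexity(case):
    """Derive a single 'case complexity' level from multiple factors.
    The factor names and scoring rules here are illustrative assumptions."""
    score = 0
    if case.get("bmi", 0) >= 30:                      # obese patient
        score += 1
    if "stroke" in case.get("history", ""):           # relevant medical history
        score += 1
    if case.get("reason") == "evaluate LV function":  # demanding study type
        score += 1
    return ["low", "medium", "high", "very high"][score]

# Hypothetical cases at opposite ends of the complexity scale.
easy = {"bmi": 23, "history": "", "reason": "routine"}
hard = {"bmi": 34, "history": "stroke", "reason": "evaluate LV function"}
```

Benchmarking would then compare average exam durations only among cases assigned the same complexity level.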
For explanatory purposes, the above included non-limiting examples for benchmarking sonographer performance, taking into account factors that are independent of the sonographer. However, it is to be understood that the approach herein can also be used for performance benchmarking of other KPIs. For example, the approach described herein can be used for comparing improvements in workflow efficiency when using different ultrasound models, e.g., to identify factors that would affect the workflow efficiency which are independent of a performance of an ultrasound scanner, e.g., patient complexity, sonographer experience, etc.
It is to be appreciated that the ordering of the acts in the method is not limiting. As such, other orderings are contemplated herein. In addition, one or more acts may be omitted, and/or one or more additional acts may be included.
A profiling step 1302 extracts relevant context from a digital data repository(s), as described herein and/or otherwise. For example, with particular application to the clinical environment, this may include extracting patient-specific clinical context and/or workflow context information from the digital information repository(s) 106.
An identifying factors step 1304 identifies factors from the extracted context that affect performance independent of the individual being evaluated, as described herein and/or otherwise. For example, for each KPI of interest, clinical and workflow factors 406 that affect performance independent of the individual under evaluation can be identified in the extracted relevant context.
A benchmarking step 1306 determines a KPI(s) for the individual based at least on the identified factors, as described herein and/or otherwise.
The above may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium, which, when executed by a computer processor(s), cause the processor(s) to carry out the described acts. Additionally, or alternatively, at least one of the computer readable instructions is carried out by a signal, carrier wave or other transitory medium, which is not computer readable storage medium.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
The word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2020/086089 | 12/15/2020 | WO |

Number | Date | Country
---|---|---
62951492 | Dec 2019 | US