The following relates generally to the medical device maintenance arts, medical imaging device maintenance arts, medical device maintenance visualization arts, and related arts.
Maintenance of medical imaging systems and other medical devices such as patient monitoring systems consists of multiple types of maintenance activities. In planned maintenance activities, a field service engineer (FSE) visits the hospital to oil, clean, calibrate, etc. the system at regular intervals (e.g., once or twice every year, with a frequency determined by the usage of the system, or dynamically scheduled based on remotely monitoring the condition of the system). In addition, there are corrective maintenance activities, which are initiated in reaction to an issue reported by the hospital. If the issue is severe, this may result in unplanned downtime of the system. The system may not be in operation until the issue is fixed, either remotely by a remote service engineer (RSE) or on site by an FSE. Unplanned downtime can lead to considerable costs for the hospital, as no examinations can be scheduled for some time. It can also lead to patient dissatisfaction, as examinations may have to be rescheduled to a later time.
In addition to the above-mentioned maintenance activities, predictive maintenance activities are used to avoid unplanned downtime. For various parts of a medical imaging system, predictive models have been developed that aim to predict when a part is likely to fail, so that the part can be replaced preventively before it fails. These predictive models may be constructed by using a machine learning algorithm that builds a predictive model based on a training set of historical cases. For a given subsystem/part p of a given system s, such a predictive model will estimate, for a given time window [t, t+w], the probability Pr(p,s,t) that p will fail in this time window. These estimated probabilities can then be used to determine whether it makes sense to preventively replace p in the coming week or weeks. The predictive models analyze log event data that the medical imaging system s produces. Log event data may contain sensor measurements as well as log events in the form of low-level error and warning messages.
Once a predictive model has been tested and performs at a sufficient level (considering the probability and cost of false positives as well as false negatives), it can be deployed to monitor many medical imaging systems in the field. The model can be run on recent log event data of each of the systems at regular intervals, e.g., once every hour or day, or it can be triggered dynamically by the availability of new data. If, for a system s, the model concludes that Pr(p,s,t) exceeds a certain threshold, it can raise an alert. Alternative strategies, such as a logged value exceeding a threshold at least k times within l successive time units, can also be used to raise an alert.
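As a purely illustrative sketch of the two alert-raising strategies described above (the threshold values, k, and l are hypothetical placeholders, not values from an actual deployment):

```python
# Hypothetical sketch of the alert-raising strategies described above.
# failure_probability stands in for a trained predictive model's output Pr(p, s, t);
# the threshold, k, and l values are illustrative placeholders.

def raise_alert_on_probability(failure_probability: float, threshold: float = 0.8) -> bool:
    """Raise an alert when the predicted failure probability exceeds a threshold."""
    return failure_probability > threshold

def raise_alert_on_repeated_exceedance(logged_values: list[float],
                                       value_threshold: float,
                                       k: int, l: int) -> bool:
    """Raise an alert when a logged value exceeds a threshold at least k times
    within l successive time units."""
    return any(
        sum(v > value_threshold for v in logged_values[i:i + l]) >= k
        for i in range(max(1, len(logged_values) - l + 1))
    )
```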
Specialized remote service engineers (RSEs) are trained to review the raised alerts, for example via a workstation computer that shows a ranked list of recent alerts. To each alert of type a, a priority P(a) is associated, so that all alerts raised by the multiple predictive models are simply ordered by priority. The RSEs typically consider the alerts in a top-down fashion, addressing the highest priority alerts first. Note that the type a of an alert is based on the predictive model, and likewise the priority P(a) is based on the parameters of the predictive model, such as accuracy, false positive rate (FPR), and estimated repair timeframe window size w.
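The existing priority-based ordering might look like the following sketch (the alert fields and priority values are hypothetical):

```python
# Hypothetical sketch of the conventional priority-based ordering: every alert
# of type a carries a priority P(a) derived from its predictive model's
# parameters, and the review queue is simply sorted by that priority.

alerts = [
    {"id": 1, "type": "tube_arcing", "priority": 3},
    {"id": 2, "type": "coil_failure", "priority": 1},
    {"id": 3, "type": "detector_drift", "priority": 2},
]

# Highest-priority alerts first (here, a lower number means a higher priority).
review_queue = sorted(alerts, key=lambda alert: alert["priority"])
```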
As the number of alerts increases, identifying high priority alerts that RSEs are well equipped to handle becomes a challenge. Some of the alerts are not interesting to the RSEs, or they require specialized knowledge to resolve. As a result, an important alert may fall out of sight and RSEs may fail to act in time. Consequently, customers may report an issue that could have been solved proactively had the RSEs had the time and skills required to resolve it. Moreover, alerts that are not reviewed and for which no follow-up steps have been taken (e.g., creating a service case) within a certain time window are removed from the RMW.
The following discloses certain improvements to overcome these problems and others.
In one aspect, a non-transitory computer readable medium stores one or more predictive models trained to generate maintenance alerts for medical devices of a fleet of medical devices based on machine log data received from the medical devices, historical maintenance alerts data including at least historical maintenance alerts generated by the one or more predictive models for the fleet of medical devices, and instructions readable and executable by at least one electronic processor to: train an alert ranking machine learning (ML) model to rank alerts of a queue of alerts using the historical maintenance alerts data; receive unresolved alerts for medical devices of the fleet from the one or more predictive models; generate a ranked list of the unresolved alerts allocated to a service engineer (SE) using the trained ranking ML model; and provide, on a display device accessible by the SE, the ranked list of the unresolved alerts allocated to the SE.
In another aspect, a non-transitory computer readable medium stores one or more predictive models trained to generate maintenance alerts for medical devices of a fleet of medical devices based on machine log data received from the medical devices, historical maintenance alerts data including at least historical maintenance alerts generated by the one or more predictive models for the fleet of medical devices; and instructions readable and executable by at least one electronic processor to: train an alert ranking machine learning (ML) model to rank alerts of a queue of alerts using the historical maintenance alerts data; receive unresolved alerts for medical devices of the fleet from the one or more predictive models; generate a global ranking of the unresolved alerts using the trained ranking ML model; allocate the unresolved alerts amongst a plurality of service engineers (SEs); order the unresolved alerts allocated to an SE in accordance with the global ranking of the unresolved alerts to generate a ranked list of the unresolved alerts allocated to that SE; and provide, on a display device accessible by the SE, the ranked list of the unresolved alerts allocated to that SE.
In another aspect, a non-transitory computer readable medium stores one or more predictive models trained to generate maintenance alerts for medical devices of a fleet of medical devices based on machine log data received from the medical devices, historical maintenance alerts data including at least historical maintenance alerts generated by the one or more predictive models for the fleet of medical devices, and instructions readable and executable by at least one electronic processor to: train an alert ranking machine learning (ML) model to rank alerts of a queue of alerts using the historical maintenance alerts data; receive unresolved alerts for medical devices of the fleet from the one or more predictive models; allocate the unresolved alerts amongst a plurality of service engineers (SEs); rank the unresolved alerts allocated to an SE using the trained ranking ML model to generate a ranked list of the unresolved alerts allocated to that SE; and provide, on a display device accessible by the SE, the ranked list of the unresolved alerts allocated to that SE.
One advantage resides in providing personalized alerts to RSEs for unresolved alerts.
Another advantage resides in providing a personalized list of alerts to corresponding RSEs to improve alert handling time and improve RSE engagement.
Another advantage resides in reduced downtime of medical devices.
Another advantage resides in providing personalized alerts to RSEs for unresolved alerts based on historical alert data.
A given embodiment may provide none, one, two, more, or all of the foregoing advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.
The disclosure may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the disclosure.
A disadvantage of existing workflows for handling maintenance alerts is that the distribution and ordering of alerts are not generally personalized. Similar alerts are presented to all RSEs. This leaves room for improvement. By presenting a personalized list of alerts to each of the RSEs as disclosed herein, the alert handling time and the RSE engagement improves. As a result, the resolution time of customers' issues improves.
The following relates to prioritizing or ranking alerts for individual remote service engineers (RSEs). In daily operation, machine logs received from imaging devices of a fleet of imaging devices are analyzed by diagnostic models, and the model outputs are scored to generate alerts relating to preventative maintenance tasks that should be performed. The alerts are allocated to RSEs on staff and are presented to the respective RSEs via a user interface such as a workstation computer.
Disclosed herein are approaches for ranking the alerts on an individualized basis using information on the alerts obtained from a case management database (referred to herein as “alert characteristics”), such as the predictive model that generated the alert, deadlines of the alerts, customer contract terms, customer satisfaction information (if available; e.g., a customer with low satisfaction may be ranked higher), the number of similar systems that the customer has (e.g., if the customer has several similar systems, then downtime for the system subject to the alert may be less critical), as well as modalities or system types for which an RSE has expertise, the RSE's overall experience, training, or the like. Notably, these latter alert characteristics are RSE specific. Probabilities (or other metrics) for ranking the alerts are computed for (alert, RSE) pairs based on the alert characteristics, and for a given RSE the alerts are ranked based on the computed probabilities.
In some embodiments, probabilities for the alerts are computed and then the alerts are allocated to RSEs and displayed ranked based on the probabilities. In other embodiments, the alerts are allocated to RSEs and then, on a per-RSE basis, the probabilities are computed, and the alerts allocated to that RSE are ranked. This latter approach can improve computational efficiency as only one probability need be computed for each alert, whereas the first embodiment requires computing for each alert the probabilities for all RSEs. On the other hand, the first embodiment provides the probabilities (or other metrics) prior to allocating the alerts to the RSEs, e.g. the probabilities can be computed for all (alert, RSE) pairs, and hence in the first embodiment the probabilities can be used to determine the allocation, e.g. by allocating alerts with high probabilities for one particular RSE to that RSE.
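The difference between the two orderings can be sketched as follows (a rough illustration; score(alert, rse) stands in for the trained ranking model's probability estimate, and the allocation rules are hypothetical placeholders):

```python
# Sketch contrasting the two embodiments. score(alert, rse) stands in for the
# trained ranking model's probability estimate for an (alert, RSE) pair.

def rank_then_allocate(alerts, rses, score):
    """First embodiment: score all (alert, RSE) pairs, allocate each alert to
    the RSE with the highest score, then rank each RSE's queue."""
    queues = {rse: [] for rse in rses}
    for alert in alerts:
        best_rse = max(rses, key=lambda rse: score(alert, rse))
        queues[best_rse].append(alert)
    return {rse: sorted(q, key=lambda a: score(a, rse), reverse=True)
            for rse, q in queues.items()}

def allocate_then_rank(alerts, rses, allocate, score):
    """Second embodiment: allocate alerts first (by any rule), then compute
    only one score per alert, for its allocated RSE."""
    queues = allocate(alerts, rses)  # e.g., round-robin or by modality
    return {rse: sorted(q, key=lambda a: score(a, rse), reverse=True)
            for rse, q in queues.items()}
```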
With reference to
As shown in
The service device 102 includes a display device 105 via which alerts generated by predictive failure models are displayed, optionally along with likely root cause and service action recommendation information if this is provided by the predictive model. The service device 102 also preferably allows the service engineer to interact with the servicing support system via at least one user input device 103 such as a mouse, keyboard, or touchscreen. The service device further includes an electronic processor 101 and non-transitory storage medium 107 (internal components which are diagrammatically indicated in
In illustrative
The non-transitory computer readable medium 127 stores machine log data 130 received from the medical device 120. The non-transitory computer readable medium 127 stores one or more predictive models 132 trained to generate maintenance alerts for the medical device 120 as part of a fleet of medical devices based on the machine log data 130 received from the medical device(s) 120. The non-transitory computer readable medium 127 also stores historical maintenance alerts data including at least historical maintenance alerts 134 generated by the one or more predictive models 132 for the fleet of medical devices 120. In some examples, the historical maintenance alerts data further includes information on the predictive models 132 that generated the respective historical maintenance alerts, deadlines of the respective historical maintenance alerts, customer contract terms associated with the medical devices of the respective historical maintenance alerts, and customer satisfaction information associated with the medical devices of the respective historical maintenance alerts.
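As a purely illustrative sketch, one record of the historical maintenance alerts data 134 might contain fields along the following lines (the field names and types are hypothetical assumptions, not an actual schema):

```python
# Hypothetical record layout for the historical maintenance alerts data 134;
# the field names are illustrative only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HistoricalAlert:
    alert_id: str
    device_id: str                 # medical device 120 that produced the machine log data 130
    predictive_model: str          # which of the predictive models 132 generated the alert
    deadline: date                 # deadline of the alert
    contract_terms: str            # customer contract terms for the device
    customer_satisfaction: Optional[float]  # satisfaction score, if available
    resolved_by: Optional[str]     # SE who handled the alert, if resolved
    resolution_days: Optional[float]  # time taken to resolve
```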
The non-transitory storage medium 127 also stores instructions executable by the electronic processor 113 of the backend server 111 to perform a method 200 of ranking and allocating the maintenance alerts generated by the predictive models 132 to RSEs (or, equivalently, to their corresponding workstations 102 into which the respective RSEs are logged).
With continuing reference to
At an operation 202, an alert ranking machine learning (ML) model 136 is trained to rank alerts 138 of a queue of alerts using the historical maintenance alerts data. To do so, as shown in
At an operation 204, unresolved alerts 144 for medical devices of the fleet are received from the predictive model(s) 132.
At an operation 206, ranked lists 146 of the unresolved alerts 144 are generated and allocated to SEs using the trained ranking ML model 142. To do so, the unresolved alerts 144 are allocated amongst a plurality of SEs, and the unresolved alerts 144 allocated to each SE are ranked using the trained ML model 142.
At an operation 208, the ranked list 146 of the unresolved alerts 144 allocated to each SE is shown on the display device 105 of the service device 102 accessible by the corresponding SE. Each SE receives the ranked list 146 of the unresolved alerts 144 allocated to that particular SE.
In one embodiment, the alerts 144 are ranked based on expertise data including modalities or system types of the one or more medical devices 120 for which each SE has expertise. To do so, alert-SE pairs are generated based on the expertise data. To generate the pairs, probabilities for each alert-SE pair are computed based on the historical alert data and the expertise data, and the alerts 144 are allocated to corresponding SEs based on the computed probabilities (for example, for display on a corresponding service device 102 operable by each SE).
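A minimal sketch of this embodiment is given below; the expertise table and the pair-probability function are hypothetical stand-ins for the trained ranking ML model 142, not an actual implementation:

```python
# Hypothetical sketch: compute a probability for each (alert, SE) pair from
# expertise data and allocate each alert to the SE with the highest probability.

expertise = {
    "se_1": {"MRI": 0.9, "CT": 0.3},
    "se_2": {"MRI": 0.2, "CT": 0.8},
}

def pair_probability(alert: dict, se: str) -> float:
    """Placeholder for the trained ranking model: here simply the SE's
    expertise score for the alert's modality."""
    return expertise[se].get(alert["modality"], 0.0)

def allocate(alerts: list) -> dict:
    queues = {se: [] for se in expertise}
    for alert in alerts:
        best_se = max(expertise, key=lambda se: pair_probability(alert, se))
        queues[best_se].append(alert)
    return queues

queues = allocate([{"id": 1, "modality": "MRI"}, {"id": 2, "modality": "CT"}])
```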
This embodiment is shown in more detail in
Another embodiment of the ranking operation 206 is shown in
In some embodiments, the historical maintenance alerts data further includes performance data of the plurality of SEs in resolving the historical maintenance alerts 134. The alert ranking ML model 142 is trained to rank the alerts 144 of the queue of alerts using the historical maintenance alerts data 134 including the performance data of the plurality of SEs, and the ranking of the unresolved alerts 144 allocated to the SE using the trained ranking ML model 142 is based in part on the performance data of the SE.
In other embodiments, the generation of the ranked list 146 of the unresolved alerts 144 allocated to the SE includes generating a global ranking of the unresolved alerts 144 using the trained ranking ML model 142. The unresolved alerts 144 are allocated amongst a plurality of SEs, and the unresolved alerts 144 allocated to the SE are ordered in accordance with the global ranking of the unresolved alerts 144.
The following describes the system 100 and the method 200 in more detail. The system 100 is configured to present a personalized ranking of alerts generated by the diagnostic models, tailored to individual RSEs based on their history and skills. The alert handling history and profile of the RSEs, together with the alert characteristics, are used as input to an algorithm that estimates the probability of an alert being reviewed by an RSE. Subsequently, the alerts with their corresponding probability estimates are partitioned by RSE. The alerts are then sorted in descending order of probability to provide a personalized list of alerts to each of the RSEs, which will later be presented in the RMW.
A personalized ranking engine can be embedded into an end-to-end proactive monitoring process, and takes as input alerts generated by diagnostic models, alert handling history of each RSE, the profile of each RSE, alert characteristics, and so forth.
Alerts generated by the diagnostic models using a scoring engine and historical data are provided to the ranking engine, where the alerts are partitioned and ordered. The RMW takes the output of the ranking engine, which is essentially a set of ordered alerts per RSE, and presents it. Assuming that an alert will appear in the ranking of only one RSE, one could think of optimizing some objective function that considers, for each (alert, RSE) pair (a, e), the probability that alert a is solved successfully by engineer e as well as the time that e requires to solve a.
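One way such an objective could be optimized is sketched below, using a standard assignment solver rather than any particular method prescribed by this disclosure; the success probabilities, solve times, and weighting are hypothetical inputs:

```python
# Rough sketch of optimizing an assignment objective over (alert, RSE) pairs.
# p_success[i][j] is a hypothetical probability that RSE j solves alert i, and
# solve_time[i][j] the expected time needed; the 0.1 weighting is arbitrary.
import numpy as np
from scipy.optimize import linear_sum_assignment

p_success = np.array([[0.9, 0.4],
                      [0.3, 0.8],
                      [0.6, 0.5]])   # 3 alerts x 2 RSEs
solve_time = np.array([[2.0, 5.0],
                       [6.0, 1.5],
                       [3.0, 4.0]])  # hours

# Utility to maximize: success probability penalized by normalized solve time.
utility = p_success - 0.1 * solve_time / solve_time.max()

# Note: this assigns at most one alert per RSE per round; with more alerts than
# RSEs, the assignment can be repeated on the remaining alerts.
alert_idx, rse_idx = linear_sum_assignment(utility, maximize=True)
```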
To avoid multiple RSEs simultaneously selecting the same alert, a single alert is assigned to one RSE. Alternatively, alerts can be moved from the queue of one RSE to the queue of another. Assuming round-the-clock service, RSEs will start and stop working over time. As such, the alerts in the queue of an RSE who stops working are redirected to the queues of other RSEs. Preferably this does not require additional time to handle the alerts. Additionally, if an RSE starts his/her working shift, then the alerts which he/she is specifically well-skilled to solve are redirected to his or her queue.
The ranking engine is built using an algorithm that takes a set of alerts generated by the diagnostic models. In addition, the RSEs' profiles and their corresponding alert handling histories, as well as alert characteristics, are provided as input to the engine. Examples of these inputs are the number of successfully resolved alerts, the average alert resolution time, the proportion of resolved alerts per modality, the number of resolved alerts for a similar part or subsystem, the success factor of previously handled alerts, the similarity of the new alert to previously resolved alerts, and so forth.
To create the ranking engine, the algorithm is trained using historical data. For that, data preparation is required to convert the input data into features the algorithm can take as an input. Moreover, experimenting with different techniques is required to select the right approach.
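As a rough sketch of such a training step, under the assumption that each historical (alert, RSE) record is labeled with whether the RSE actually reviewed the alert; the feature columns and the choice of a logistic-regression learner are illustrative assumptions, not prescribed by this disclosure:

```python
# Rough training sketch: historical (alert, RSE) records converted to feature
# vectors with a label indicating whether the alert was reviewed by that RSE.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [days_until_deadline, rse_resolved_similar_alerts,
#            rse_modality_match, customer_satisfaction]  (hypothetical features)
X_train = np.array([
    [2.0, 15, 1.0, 0.4],
    [7.0,  2, 0.0, 0.9],
    [1.0,  8, 1.0, 0.7],
    [5.0,  0, 0.0, 0.8],
])
y_train = np.array([1, 0, 1, 0])   # 1 = alert was reviewed by this RSE

ranking_model = LogisticRegression().fit(X_train, y_train)

# Estimated probability that a new (alert, RSE) pair results in a review.
p = ranking_model.predict_proba([[3.0, 10, 1.0, 0.5]])[0, 1]
```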
To describe the approach mathematically, let A={a1, a2, . . . , an} be a set of alerts, R={r1, r2, . . . , rm} be the available RSEs, and C={c1, c2, . . . , ck} be the alert characteristics. Every alert a∈A has k characteristics. Let a∈A be an alert with deadline da, and let p(a, r) be the probability that alert a is reviewed by RSE r∈R at da. p(a,r) can be calculated as p(a,r)=f(x(a, r, 1), x(a, r, 2), . . . , x(a, r, k)), where x(a, r, i) represents the features derived from the alert characteristics and the historical alert resolution of the RSEs. After calculating the probability for each RSE and partitioning over the RSEs, the probabilities are sorted in descending order and presented in the RMW.
The steps taken to produce the ranked alerts are as follows: (1) gather historical data of the RSEs; this data reflects the whole experience of each RSE in handling alerts; (2) fetch alerts generated by the diagnostic models that are due to be published in the RMW; these alerts have an alert creation day and a deadline; (3) get the values of x(a,r,i) (features) for each alert a, RSE r, and alert characteristic i; (4) calculate p(a,r) for all alerts and RSEs; and (5) sort p(a,r) in descending order for each RSE r.
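A compact sketch of steps (3) through (5), assuming a fitted model f (such as the hypothetical one trained above) and a placeholder feature-extraction helper:

```python
# Hypothetical sketch of steps (3)-(5): derive features x(a, r, i), calculate
# p(a, r) for every alert/RSE combination, then sort per RSE.
from collections import defaultdict

def rank_alerts(alerts, rses, extract_features, f):
    """extract_features(a, r) -> feature vector; f(features) -> p(a, r)."""
    ranked = defaultdict(list)
    for r in rses:
        scored = [(a, f(extract_features(a, r))) for a in alerts]
        ranked[r] = sorted(scored, key=lambda pair: pair[1], reverse=True)
    return ranked  # per-RSE lists of (alert, probability), descending
```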
Whenever an RSE opens the RMW, the ranking engine is triggered to create a list of alerts to publish in real time. The engine fetches the historical data and alert characteristics and then converts the data to features. Afterwards, for each alert, the engine calculates the probability that the alert is reviewed by an RSE. These probabilities are partitioned by RSE, sorted in descending order, and then published in the RMW.
At the start of each day, the newly arrived alerts and the alerts that are already in the queues of the RSEs are taken as a single set that must be redistributed and ranked over the available RSEs, considering which RSEs will be available in the next time period. In that way, one could also dynamically determine the probability that an alert will be handled in the coming period by a given RSE depending on the queue of alerts that will be presented to the RSE in the next period. This could be estimated by assuming that the RSE would handle the alerts in order of the proposed ranking and by using the time distribution that the given RSE would need to solve the alerts that precede the given alert in the given ranking.
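A simplified sketch of that dynamic estimate is given below, assuming the expected handling times of the alerts ahead in the proposed ranking are known (the time values and period length are hypothetical; a fuller treatment would use the full time distribution rather than expected values):

```python
# Simplified sketch: estimate whether a given alert will be handled by an RSE
# in the next period, assuming the RSE works through the proposed ranking in
# order and the expected handling times of preceding alerts are known.

def probability_handled_in_period(queue_ahead_hours: list,
                                  own_hours: float,
                                  period_hours: float) -> float:
    """Crude estimate: 1.0 if the cumulative expected time up to and including
    this alert fits within the period, else 0.0."""
    return 1.0 if sum(queue_ahead_hours) + own_hours <= period_hours else 0.0

p = probability_handled_in_period(queue_ahead_hours=[1.5, 2.0, 0.5],
                                  own_hours=1.0, period_hours=8.0)
```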
An additional embodiment is to show the personalized ranked and selected alerts directly to staff at hospitals, e.g., the biomedical engineers (also known as biomeds). The biomeds in hospitals are responsible for maximizing the efficiency of the systems at the facility to deliver the best level of patient care. In some cases, they are responsible for specific maintenance activities of the medical imaging systems. Selected alerts are ranked based on their expertise (often biomeds have limited knowledge or are less experienced than the service engineers of OEMs) and the service contract that the hospital has.
A non-transitory storage medium includes any medium for storing or transmitting information in a form readable by a machine (e.g., a computer). For instance, a machine-readable medium includes read only memory (“ROM”), solid state drive (SSD), flash memory, or other electronic storage medium; a hard disk drive, RAID array, or other magnetic disk storage media; an optical disk or other optical storage media; or so forth.
The methods illustrated throughout the specification may be implemented as instructions stored on a non-transitory storage medium and read and executed by a computer or other electronic processor.
The disclosure has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the exemplary embodiment be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Filing Document: PCT/EP2023/057029 | Filing Date: 3/20/2023 | Country: WO
Number: 63323689 | Date: Mar 2022 | Country: US