Embodiments generally relate to predictions based on temporally associated snapshots, and to generating predictions about future snapshots (e.g., completion of process steps).
The execution of clinical processes (e.g., radiological imaging) may vary significantly due to unforeseen conditions. This situation is aggravated by the fact that aspects (e.g., duration) of such clinical processes are difficult to predict, as no one participant has all the pertinent information or understands how to synthesize such information into a logical estimation of duration. Thus, predictions of such aspects are unreliable, resulting in an interrupted and discontinuous workflow. Consequently, scheduling, resource management, and patient management are sub-optimal and degrade the overall performance of an organizational unit.
Some embodiments include an event tracking system, comprising a network controller to receive event data from one or more of a sensor or transmitter, and a processor, a memory containing a set of instructions, which when executed by the processor, cause the event tracking system to access a snapshot stack associated with previous events, clone a portion of the snapshot stack, update a first snapshot of the cloned portion based on the event data to generate a modified portion, wherein the first snapshot is associated with the sensor, add the modified portion to the snapshot stack to generate an updated snapshot stack, and predict one or more future snapshots based on the updated snapshot stack.
Some embodiments include at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to access a snapshot stack associated with previous events, clone a portion of the snapshot stack, update a first snapshot of the cloned portion based on event data to generate a modified portion, wherein the first snapshot is associated with one or more of a sensor or transmitter, add the modified portion to the snapshot stack to generate an updated snapshot stack, and predict one or more future snapshots based on the updated snapshot stack.
Some embodiments include a method comprising accessing a snapshot stack associated with previous events, cloning a portion of the snapshot stack, updating a first snapshot of the cloned portion based on event data to generate a modified portion, wherein the first snapshot is associated with a sensor or transmitter, adding the modified portion to the snapshot stack to generate an updated snapshot stack, and predicting one or more future snapshots based on the updated snapshot stack.
The various advantages of the embodiments of the present disclosure will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
From a technical point of view, there is a need for a structured and flexible approach that translates detected physical events to provide information about the current state of a patient process and predict the timing of subsequent states. Thus, some embodiments include a “Snapshots-to-Snapshot” approach to solve the aforementioned problems. Events may be decomposed into a series of snapshots associated with timestamps. The embodiments herein determine patterns between the events to identify and predict future states. For example, some embodiments may generate a snapshot stack, and generate a predicted next snapshot based on the snapshot stack.
Turning now to
Vectorization may be utilized in text mining as a preprocessing step so that machine learning algorithms can be applied to various contexts and purposes (e.g., cluster documents into a number of groups, and to further extract topics from each group). In such contexts, vectorization is a process of conversion of a document into a numeric array, using the meaningful words in the collection of documents. Eventually, the collection of documents becomes a matrix, whose single row is one vectorized document. In the present example, vectorization may be the numeric conversion of statuses of concerned sensors and devices.
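By way of a non-limiting illustration, the vectorization described above may be sketched as follows, with each snapshot of sensor and device statuses converted into one numeric row of a matrix. All names and statuses below are hypothetical examples, not part of any claimed embodiment.

```python
# Illustrative sketch of vectorization: each "document" (here, a snapshot of
# sensor/device statuses) becomes one binary row over a shared vocabulary.

def build_vocabulary(documents):
    """Collect the distinct terms across all documents, in stable order."""
    vocab = []
    for doc in documents:
        for term in doc:
            if term not in vocab:
                vocab.append(term)
    return vocab

def vectorize(documents):
    """Convert each document into a binary presence vector over the vocabulary."""
    vocab = build_vocabulary(documents)
    matrix = [[1 if term in doc else 0 for term in vocab] for doc in documents]
    return vocab, matrix

# Each snapshot lists the active statuses of the concerned sensors and devices.
snapshots = [
    ["door=closed", "CT=idle"],
    ["door=open", "CT=idle"],
    ["door=closed", "CT=running"],
]
vocab, matrix = vectorize(snapshots)
# Each row of `matrix` is one vectorized snapshot.
```

In this sketch the collection of snapshots becomes a matrix whose single row is one vectorized snapshot, mirroring the document-to-matrix conversion described above.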
The first-T timestamped snapshots 102 may originate with (e.g., be captured by) sensors. For example, a first sensor may be a door sensor that senses a state of a door (e.g., whether a door to a CT scanner room is open or closed) and stores the state as the first timestamped snapshot St−n and other snapshots (e.g., T-2) of the first-T timestamped snapshots 102. A different sensor associated with the first sensor (e.g., one that tracks a same patient process or patient flow) may be a radiation sensor that senses a state of a CT scanner (e.g., whether the CT scanner is scanning) and stores the state as the T-2 timestamped snapshot St−2. The other snapshots of the first-T timestamped snapshots 102 may be similarly generated by different sensors or other connected IT systems. Thus, the sensor readings may be indicative of state data that is stored as the first-T timestamped snapshots 102.
In some embodiments, as will be explained below, the architecture 100 includes a network controller to receive event data from a plurality of sensors. The architecture 100 (e.g., which includes a controller that comprises hardware logic, configurable logic, and/or a computing device) may access a snapshot stack (e.g., the first-T timestamped snapshots 102) associated with previous events, clone a portion of the snapshot stack, update a first snapshot of the cloned portion based on the event data to generate a modified portion, where the first snapshot is associated with the first sensor, add the modified portion to the snapshot stack to generate an updated snapshot stack, and predict one or more future snapshots based on the updated snapshot stack.
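The clone-update-add sequence described above may be sketched as follows, assuming (hypothetically) that each snapshot is a dictionary of object states keyed by sensor name; the function name and data layout are illustrative only.

```python
import copy

def handle_event(snapshot_stack, sensor_name, new_state):
    """Clone a portion of the stack, update the snapshot associated with the
    reporting sensor, and add the modified portion back to the stack."""
    # Clone a portion of the snapshot stack (here, just the latest snapshot).
    portion = copy.deepcopy(snapshot_stack[-1])
    # Update the cloned snapshot based on the event data.
    portion[sensor_name] = new_state
    # Add the modified portion to generate the updated snapshot stack.
    snapshot_stack.append(portion)
    return snapshot_stack

stack = [{"door": "closed", "CT": "idle"}]
handle_event(stack, "door", "open")
# The earlier snapshot is preserved unchanged; the new one reflects the event.
```

Deep-copying the cloned portion ensures that updating the new snapshot does not retroactively alter the previous events recorded in the stack.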
Each of the first-T timestamped snapshots 102 may comprise readings from multiple sensors. For example, the T timestamped snapshot may include sensor readings from a patient monitor that monitors a position of the patient that is to undergo the CT scan, a door sensor that determines if a door to the CT room is closed or open, etc.
The architecture 100 may vectorize the first-T timestamped snapshots 102 and store first-T snapshot vectors into a matrix 104 and a vector of time deltas 106. The first-T snapshot vectors may be neural network embeddings. An embedding may be a mapping of a discrete variable to a vector of continuous numbers. The matrix 104 and the vector of time deltas 106 may be an underlying event model which represents and stores the state/changes of physical objects (e.g., doors, medical equipment, persons in a room, etc.) and/or inseparable collections thereof (e.g., operation of scanners within proximity, aggregated patient information, etc.) or process information (e.g., accumulated delay, utilization targets, etc.). Thus, the architecture 100 may translate events into state changes of the first-T snapshot vectors. The first-T snapshot vectors forming the matrix 104 and the vector of time deltas 106 may be a snapshot stack that stores time-stamped snapshots of an environment (e.g., a hospital environment).
Each row of the matrix 104 is a vectorized snapshot. Each value in the vector of time deltas 106 is the time difference of two consecutive snapshots. For example, the Tt-Tt−1 time delta may be the difference between a time at which the St timestamped snapshot (which corresponds to the T snapshot vector) was captured by a sensor (or sensors corresponding to the T snapshot vector), and a time at which a previous timestamped snapshot was captured by the sensor (or sensors corresponding to the T snapshot vector). Each specific time delta of the vector of time deltas 106 is stored in association with the first-T snapshot vector that corresponds to the specific time delta. For example, the Tt-Tt−1 time delta is stored in association with the T snapshot vector.
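The construction of the vector of time deltas 106 may be sketched as follows, using hypothetical timestamps for illustration; each delta is the difference between two consecutive snapshot timestamps and is associated with the later snapshot.

```python
from datetime import datetime

# Hypothetical snapshot capture times (illustrative values only).
timestamps = [
    datetime(2023, 2, 22, 8, 0),   # first snapshot
    datetime(2023, 2, 22, 8, 5),   # second snapshot
    datetime(2023, 2, 22, 8, 8),   # third snapshot
]

# Each entry is the time difference (in minutes) between two consecutive
# snapshots, stored in association with the later of the two.
time_deltas = [
    (timestamps[i] - timestamps[i - 1]).total_seconds() / 60.0
    for i in range(1, len(timestamps))
]
# time_deltas == [5.0, 3.0]
```

Note that the first snapshot has no associated delta, so the delta vector has one fewer entry than the matrix has rows.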
A neural network 108 may process the matrix 104 and the vector of time deltas 106. In some embodiments, the neural network 108 may be a Long Short-Term Memory (LSTM) Neural Network (NN). An LSTM is a recurrent network architecture that operates in conjunction with an appropriate gradient-based learning algorithm. For example, an LSTM NN may have an adept capability to learn from historical observations, detect the hidden patterns of time-related events, and predict future values in a sequence. The neural network 108 may have been previously trained. For example, the neural network 108 may include a machine learning algorithm that is trained with snapshots including object specific and aggregated events, and after training provides predictions on the next snapshot and respective specific object states.
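The gated recurrent mechanism of an LSTM may be illustrated by the following minimal sketch of a single one-dimensional LSTM cell step. The scalar weights here are toy values chosen for illustration; a practical embodiment would use an off-the-shelf LSTM layer with learned weight matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step: input, forget, and output gates plus a candidate value."""
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate value
    c = f * c_prev + i * g           # new cell state (long-term memory)
    h = o * math.tanh(c)             # new hidden state (short-term output)
    return h, c

# Toy weights; in practice these are learned from historical snapshot sequences.
weights = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                            "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 1.0]:  # a short input sequence (e.g., vectorized snapshots)
    h, c = lstm_step(x, h, c, weights)
```

The forget gate is what lets the cell retain or discard long-range information, which is why LSTMs are well suited to detecting hidden patterns in time-related event sequences.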
The neural network 108 may detect whether any patterns exist in the matrix 104 and the vector of time deltas 106. The neural network 108 may generate a predicted snapshot vector 110 at a future time Tt+1. The architecture 100 may de-vectorize the predicted snapshot vector into a predicted snapshot at time Tt+1. For example, the architecture 100 may translate the predicted snapshot into resources, activities, and processes, and integrate the translated information with a user-friendly interface to inform participants in the workflow.
In some embodiments, the architecture 100 may further take appropriate action based on the predicted snapshot at time Tt+1 112. For example, some embodiments may automatically adjust parameters of one or more systems based on the predicted snapshot at time Tt+1 112. For example, some embodiments may identify whether an appropriate action (e.g., adjust a temperature, notify other parties of time adjustments, etc.) may be taken based on the predicted snapshot at time Tt+1 112. Thus, in some embodiments, the architecture 100 further maps snapshots to process instance states that may be used for communicating process states to users. In some embodiments, the architecture 100 generates one or more of resource related information (e.g., when a CT scanner will be occupied or unoccupied) or an activity related interpretation (e.g., whether a room should be cleaned and/or an action to undertake) based on the updated snapshot stack. In some embodiments, the architecture 100 accesses a set of rules that links states of the updated snapshot stack to a resource of interest. In some embodiments, the architecture 100 interprets outputs of an activity that is updated by changes of one or more snapshots that contain information about resources. In some embodiments, a respective snapshot of the snapshot stack is updated in response to a change to a state of a physical object associated with the respective snapshot (e.g., a sensor may sense that the physical object is changed and adjust the snapshot stack accordingly).
In some embodiments, the architecture 100 generates the snapshot stack based on previous event data from the plurality of sensors, where the previous event data was previously received. The plurality of sensors may be associated with a hospital environment.
As a detailed example, consider a Snapshot Stack, such as the first-T snapshots 102, that includes n+1 snapshots: St−n, St-(n-1), St-(n-2), . . . , St−2, St−1, St, where these snapshots are associated with timestamps Tt−n, Tt-(n-1), Tt-(n-2), . . . , Tt−2, Tt−1, Tt, with St−n denoting the first snapshot with timestamp Tt−n, and St denoting the last snapshot with timestamp Tt. The subscripts t-n, t-(n-1), t-(n-2), . . . , t-2, t-1 denote retrospective points (e.g., points that occurred in the past). The subscript t denotes the current point in time. The neural network 108 may predict St+1 (e.g., the exact event of St+1) and the time difference between St and St+1.
The number of objects in the observed domain (e.g., the number of sensors or scanners) remains fixed throughout the snapshot stack, but the statuses of the objects are updated from the first to the last snapshot upon upcoming events that trigger changes of the objects. For example, if there is only a door sensor and a CT scanner in the domain, the snapshot stack may be initialized just before any CT exam takes place at the beginning of the day (e.g., 8:00 AM). Then, in the first snapshot St−n, some embodiments may initialize the status of the door to 'closed' and the CT scanner to 'idle', denoted by St−n={door=closed, CT=idle} with timestamp Tt−n=8:00 AM.
After five minutes, a patient and a nurse enter the scanner room; some embodiments then update the status of the door to 'open' while the CT scanner remains 'idle' in the 2nd snapshot St-(n-1)={door=open, CT=idle} with Tt-(n-1)=8:05 AM. After three minutes, the patient is placed on the table, the CT scanner is ready to be operated, and the nurse comes out of the scanner room leaving the door closed, so the 3rd snapshot becomes St-(n-2)={door=closed, CT=ready} with Tt-(n-2)=8:08 AM.
Two minutes later, the CT scanner starts scanning the patient; the status of the CT scanner is updated to 'running' and the door is 'closed', so the 4th snapshot is St-(n-3)={door=closed, CT=running} with timestamp Tt-(n-3)=8:10 AM. Suppose that after fifteen minutes the CT scanner completes all the scans; the 5th snapshot then becomes St={door=closed, CT=scans completed} with Tt=8:25 AM. At this point in time, it may be advantageous to determine at what time the patient will leave the scanner room. Thus, the neural network 108 may predict the time at which the patient will leave the room.
Thus, in this example, there is a Snapshot Stack of 5 snapshots, so n=4, with indices ranging from 0 to 4. Specifically, the snapshot stack is illustrated by the table below. Before feeding the neural network 108, embodiments vectorize each snapshot into a vector denoted by the matrix 104 and the vector of time deltas 106. For each of these vectors (except the 1st vector for n=0), embodiments may append the time difference to the preceding snapshot. The result of the neural network 108 is the 6th snapshot, St+1={door=open, CT=idle}, and the time difference to the 5th snapshot (e.g., 4 minutes). In this way, embodiments may predict in advance that the patient would leave the scanner room at 8:29 AM, and that the scanner room will be vacant and ready for the next exam.
A vectorization example based on the above example is now described. In this example, snapshots are constructed based on the states of the scanner room door and the CT scanner. The door has statuses "closed" and "open," and the CT scanner has statuses of "idle", "ready to run", "running" and "completed." Embodiments may compose a binary vector from values of 0 and 1 to encode a snapshot at one point of time, where 1 denotes the activeness or presence of one status. The vectors of the matrix 104 for Table I are illustrated below in Table II with the corresponding snapshot indices (e.g., the vector of snapshot index 1 of Table II corresponds to the entry of snapshot index 1 of Table I).
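The binary encoding of this example may be sketched as follows: one vector position per possible door or CT status, set to 1 when that status is active in the snapshot. This is an illustrative reconstruction of the encoding described above, not a definitive implementation.

```python
# One-hot style binary encoding of the door and CT scanner statuses.
DOOR_STATUSES = ["closed", "open"]
CT_STATUSES = ["idle", "ready to run", "running", "completed"]

def encode(door, ct):
    """Encode one snapshot as a binary vector over all door and CT statuses."""
    return ([1 if door == s else 0 for s in DOOR_STATUSES]
            + [1 if ct == s else 0 for s in CT_STATUSES])

# The five snapshots of the worked example (8:00 AM through 8:25 AM).
stack = [
    encode("closed", "idle"),          # 1st snapshot
    encode("open", "idle"),            # 2nd snapshot
    encode("closed", "ready to run"),  # 3rd snapshot
    encode("closed", "running"),       # 4th snapshot
    encode("closed", "completed"),     # 5th snapshot
]
# e.g., the 1st snapshot encodes as [1, 0, 1, 0, 0, 0]
```

Each six-element row corresponds to one row of the matrix 104 for this two-object domain.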
Moreover, triggering an automatic action based on the predicted result is possible, and may be based on software and hardware infrastructure. For example, continuing with the example above, suppose that the neural network predicts that, at 8:29 AM, the patient leaves the scanner room and the scanner room is vacant; two actions could be triggered automatically. Firstly, if the patient needs mobility assistance (e.g., a movable bed or wheelchair), an automatic notification could be sent out to notify the responsible nurse of the predicted meeting time with the patient. The nurse may thus be present at the predicted time with a movable bed or wheelchair to avoid patient waiting. Secondly, the message 'scanner room will be vacant at 8:29 AM' may be sent automatically to cleaning personnel, who come to disinfect and clean the scanner and the room without delay. In another example, an automatic cleaning process with robots may be actuated based on the indication that the scanner room is vacant, or automatic power saving features may be enacted, such as turning off all lights and unnecessary resources in the scanner room. These actions rely on a communication system and devices integrated into the infrastructure, which hosts applications of embodiments as described herein.
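The mapping from a predicted snapshot to triggered actions may be sketched as a simple rule set, as follows. The notification transport is stubbed out here and the message strings are hypothetical; a deployment would integrate with the hospital communication infrastructure described above.

```python
def actions_for(predicted, predicted_time, needs_mobility_assistance):
    """Map a predicted snapshot to the notifications it should trigger."""
    actions = []
    # Rule: room predicted vacant (door open, scanner idle) triggers follow-ups.
    if predicted.get("door") == "open" and predicted.get("CT") == "idle":
        if needs_mobility_assistance:
            actions.append(
                f"notify nurse: meet patient with movable bed or "
                f"wheelchair at {predicted_time}")
        actions.append(
            f"notify cleaning personnel: scanner room vacant at {predicted_time}")
    return actions

msgs = actions_for({"door": "open", "CT": "idle"}, "8:29 AM", True)
# msgs holds one nurse notification and one cleaning notification
```

Keeping the rules as data-driven checks on the predicted snapshot makes it straightforward to add further actions, such as actuating cleaning robots or power-saving features.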
Turning now to
The data class events 150 may be a snapshot model that is a Domain Object Model, in which objects represent real-world items or a collection thereof. Snapshot 160 includes a plurality of objects described below. Exemplary objects include medical equipment, such as an imaging system 152, as well as a door 154, a person 156, and a movable item 158 (e.g., radio-frequency coils of a magnetic-resonance imaging scanner). Each of these objects has a state that is pertinent to the state of a process or one or more of its activities and is stored as part of the snapshot 160. The object state is updated upon receiving an event from an attached source.
In this example, the snapshot 160 has a timestamp and an event that caused the creation of the snapshot 160. Thus, the snapshot 160 is generated in response to an event 162 being sensed. The snapshot 160 may include a movable object 164 that has a certain position that is tracked when the snapshot 160 is created. The movable object 164 may include a person 156 (at a specific orientation) and a movable item 158 (e.g., a movable surface coil, such as for an ankle, knee, or head for magnetic resonance (MR) imaging, that is in a specific state). The snapshot 160 includes an installation 166 that includes the door 154 at a state (e.g., open or closed), and the imaging system 152 that is at a protocol. Thus, the snapshot 160 may include diverse sensor readings that are all related to the event 162.
In some embodiments, a neural network architecture may generate a process instance state (e.g., a snapshot) based on the data class process definition 180. For example, the process instance state may contain information such as a resource state (e.g., the MRI scanner in MRI Bay 1 is currently performing scan 3 of 6 of protocol "Brain"). An activity-with-actions state indicates, for example, that the activity of MRI Image Acquisition is in the state where the patient has been positioned for scanning and scan 3 of 6 of protocol "Brain" is being performed. As an example, activity states may include that the MRI image process has completed patient registration and patient education, and is now conducting the activity MRI Image Acquisition, which is in the state of performing scan 3 of 6 of protocol "Brain."
In some embodiments, the snapshot training process 200 generates timestamps from time measurements from training event data (e.g., event data from various sources 202) associated with training events. In such embodiments, the snapshot training process 200 vectorizes the training event data into a plurality of vectors, stores the plurality of vectors in association with the timestamps into a matrix, detects patterns between the training events based on the plurality of vectors and the timestamps, and predicts the one or more future snapshots based on the patterns.
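The pattern-detection step above may be illustrated by the following simplified stand-in: whereas the described embodiment trains a neural network, the same idea can be sketched by averaging, over the training sequences, the time delta observed after each snapshot pattern. All names and values are hypothetical.

```python
from collections import defaultdict

def train(sequences):
    """sequences: lists of (snapshot_pattern, minutes_until_next_snapshot).
    Returns the mean observed delay following each pattern."""
    totals = defaultdict(lambda: [0.0, 0])
    for seq in sequences:
        for pattern, delta in seq:
            totals[pattern][0] += delta
            totals[pattern][1] += 1
    return {p: s / n for p, (s, n) in totals.items()}

def predict_delta(model, pattern):
    """Predict the time until the next snapshot for a given current pattern."""
    return model.get(pattern)

# Two training sequences: after (door=closed, CT=completed), the next event
# followed in 4 minutes in one exam and 6 minutes in another.
training = [
    [(("closed", "completed"), 4.0)],
    [(("closed", "completed"), 6.0)],
]
model = train(training)
```

A neural network generalizes beyond exact-match patterns, but this averaging sketch captures the same input/output contract: vectorized snapshots with time deltas in, a predicted next delta out.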
In the illustrated example, the neural network architecture 650 may include a network interface system 652, a communication system 654, and a sensor array interface 668. The sensor array interface 668 may interface with a plurality of sensors, for example, door sensors, switch sensors, imaging sensors, or other connected information technology (IT) systems. The sensor array interface 668 may interface with any type of sensor or event data transmitting system suitable for operations as described herein.
A snapshot generator 662 may receive data from the sensor array interface 668. The snapshot generator 662 may analyze events, generate snapshots, and predict a next snapshot. The predicted snapshot may be provided to a communication system 654 that communicates with one or more other computing devices.
The snapshot generator 662 may include a processor 662a (e.g., embedded controller, central processing unit/CPU) and a memory 662b (e.g., non-volatile memory/NVM and/or volatile memory). The memory 662b contains a set of instructions, which when executed by the processor 662a, cause the snapshot generator 662 to operate as described herein.
Further, the disclosure comprises additional examples as detailed in the following Examples below.
Example 1. An event tracking system, comprising:
Example 2. The event tracking system of Example 1, wherein the set of instructions, which when executed by the processor, cause the event tracking system to:
Example 3. The event tracking system of Example 1, wherein the set of instructions, which when executed by the processor, cause the event tracking system to:
Example 4. The event tracking system of Example 1, wherein the set of instructions, which when executed by the processor, cause the event tracking system to:
Example 5. The event tracking system of Example 1, wherein the set of instructions, which when executed by the processor, cause the event tracking system to:
Example 6. The event tracking system of Example 1, wherein the sensor or transmitter is associated with a hospital environment.
Example 7. The event tracking system of any one of Examples 1 to 6, wherein the set of instructions, which when executed by the processor, cause the event tracking system to:
Example 8. At least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to:
Example 9. The at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to:
Example 10. The at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to:
Example 11. The at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to:
Example 12. The at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to:
Example 13. The at least one computer readable storage medium of Example 8, wherein the sensor is associated with a hospital environment.
Example 14. The at least one computer readable storage medium of any one of Examples 8 to 13, wherein the instructions, when executed, cause the computing device to:
Example 15. A method comprising:
Example 16. The method of Example 15, further comprising:
Example 17. The method of Example 15, further comprising:
Example 18. The method of Example 15, further comprising:
Example 19. The method of Example 15, further comprising:
Example 20. The method of any one of Examples 15 to 19, further comprising:
The above described methods and systems may be readily combined together if desired. The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present disclosure can be implemented in a variety of forms. Therefore, while the embodiments of this disclosure have been described in connection with particular examples thereof, the true scope of the embodiments of the disclosure should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/EP2023/054427 | 2/22/2023 | WO |
| Number | Date | Country |
|---|---|---|
| 63315227 | Mar 2022 | US |