This disclosure relates generally to automated management systems and, more particularly, to methods and apparatus for healthcare team performance optimization and management.
A variety of economic, technological, and administrative hurdles challenge healthcare facilities, such as hospitals, clinics, doctors' offices, etc., to provide quality care to patients. Economic drivers, less skilled staff, fewer staff, scheduling constraints, and complicated equipment create difficulties for effective management of resources (e.g., nurses, doctors, technicians, treatment rooms, equipment, etc.) to be deployed at healthcare facilities.
Further, healthcare provider consolidations create geographically distributed hospital networks in which higher level managers of healthcare providers may not work in the same building or even the same geographical region as the hospitals they are managing. At the same time, increasingly large amounts of data capable of assisting higher level managers in the day to day tasks of resource scheduling have become available. However, computer systems to aid in the analysis of these increasingly large amounts of data have yet to be developed.
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
Methods, apparatus, and articles of manufacture to facilitate improved resource scheduling and management based on patient outcome data and patient/employee satisfaction data using machine learning techniques are disclosed herein.
Certain examples provide an example apparatus to facilitate improved resource scheduling and management using machine learning techniques. The apparatus includes a memory to store instructions, and a processor to be particularly programmed using the instructions to implement at least a score prediction engine to process data from the memory to determine a score for a previous resource composition, and predict, using machine learning techniques, a plurality of resource composition scores for a plurality of resource compositions, the plurality of resource compositions different from the previous resource composition, and an output generator to generate a plurality of results based on the plurality of resource composition scores, select a result from the plurality of results for interactive display, the selection based upon the plurality of resource composition scores, the result including at least one of a resource schedule, a resource substitution, or a resource ranking, output the result for interaction via an interface displayed with digital technology, the interface receiving an input to accept the result or modify the result, and in response to receiving the input via the interface, propagate at least one of the accepted result or the modified result to a scheduling server to, when the result includes the resource schedule, configure a first portion of the scheduling server with the result, when the result includes the resource substitution, reconfigure the first portion of the scheduling server with the result, and when the result includes the resource ranking, configure a second portion of the scheduling server with the result.
Certain examples provide an example computer readable medium to facilitate improved resource scheduling and management using machine learning techniques. The computer readable medium includes instructions that cause a machine to at least process data from a memory to determine a score for a previous resource composition, predict, using machine learning techniques, a plurality of resource composition scores for a plurality of resource compositions, the plurality of resource compositions different from the previous resource composition, generate a plurality of results based on the plurality of resource composition scores, select a result from the plurality of results for interactive display, the selection based upon the plurality of resource composition scores, the result including at least one of a resource schedule, a resource substitution, or a resource ranking, output the result for interaction via an interface displayed with digital technology, the interface receiving an input to accept the result or modify the result, and propagate at least one of the accepted result or the modified result to a scheduling server to reconfigure one or more scheduling properties included with the server when the input is received.
Certain examples provide an example method to facilitate improved resource scheduling and management using machine learning techniques. The method includes processing data from a memory to determine a score for a previous resource composition, predicting, using machine learning techniques, a plurality of resource composition scores for a plurality of resource compositions, the plurality of resource compositions different from the previous resource composition, generating a plurality of results based on the plurality of resource composition scores, selecting a result from the plurality of results for interactive display, the selection based upon the plurality of resource composition scores, the result including at least one of a resource schedule, a resource substitution, or a resource ranking, outputting the result for interaction via an interface displayed with digital technology, the interface receiving an input to accept the result or modify the result, and in response to receiving the input, propagating at least one of the accepted result or the modified result to a scheduling server to reconfigure one or more scheduling properties included with the server.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific examples that may be practiced. These examples are described in sufficient detail to enable one skilled in the art to practice the subject matter, and it is to be understood that other examples may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the subject matter of this disclosure. The following detailed description is, therefore, provided to describe an exemplary implementation and not to be taken as limiting on the scope of the subject matter described in this disclosure. Certain features from different aspects of the following description may be combined to form yet new aspects of the subject matter discussed below.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
While certain examples are described below in the context of medical or healthcare systems, other examples can be implemented outside the medical environment. For example, certain examples can be applied to non-medical resource management such as restaurant management, law firm management, sports management, etc.
In the field of hospital administration, a variety of economic, technological, and administrative hurdles challenge healthcare facilities, such as hospitals, clinics, doctors' offices, etc., to provide quality care to patients. Economic drivers, less skilled staff, fewer staff, scheduling constraints, and complicated equipment create difficulties for effective management of resources (e.g., nurses, doctors, technicians, treatment rooms, equipment, etc.) to be deployed at healthcare facilities.
Further, healthcare provider consolidations create geographically distributed hospital networks in which higher level managers of healthcare providers may not work in the same building or even the same geographical region as the hospitals they are managing. At the same time, increasingly large amounts of data capable of assisting higher level managers in the day to day tasks of resource scheduling have become available.
For example, more patient outcome data (e.g., patient never events (e.g., patient falls, death, wrong site surgery, bed sores, medication error, etc.), patient re-admits (e.g., number of re-admits, length of re-admits, frequency of re-admits, etc.), patient length of stay (e.g., number of total days, days in intensive care, number of therapy events during stay, etc.) is available to higher level managers than ever before. Additionally, digital satisfaction surveys have become more prevalently available in the hospital setting, providing higher level managers with yet another data stream to aid in efficiently and optimally managing hospital resources. However, determining correlations between these many data streams and the resources that are associated with the data streams can be an arduous if not impossible task for higher level managers.
A new solution, put forth by the methods, apparatus, and articles of manufacture disclosed herein, utilizes machine learning techniques to assist in hospital resource management.
Machine learning techniques, whether deep learning networks or other experiential/observational learning system, can be used to optimize results, locate an object in an image, understand speech and convert speech into text, and improve the relevance of search engine results, for example. While many machine learning systems are seeded with initial features and/or network weights to be modified through learning and updating of the machine learning network, a deep learning network trains itself to identify “good” features for analysis. Using a multilayered architecture, machines employing deep learning techniques can process raw data better than machines using conventional machine learning techniques. Examining data for groups of highly correlated values or distinctive themes is facilitated using different layers of evaluation or abstraction.
In the examples disclosed herein, the machine learning techniques are utilized to predict outcomes of various compositions of resources available to a hospital based upon previous outcomes of other compositions of resources available to the hospital. In some examples disclosed herein, the predicted outcomes can be utilized to generate a schedule for the resources available to the hospital (e.g., a schedule for nurses, a schedule for doctors, a schedule for equipment, a schedule for spaces, etc.). In other examples disclosed herein, the predicted outcomes can be utilized to determine allowable and/or optimal schedule substitutions in response to a scheduling event (e.g., a low census event, a high census event, a trade request, etc.). In yet other examples disclosed herein, the predicted outcomes can be utilized to determine a pool of the top resources available to the hospital (e.g., a high performing pool of the top nurses, a high performing pool of the top doctors, a high performing pool of the top pieces of equipment, etc.).
Turning now to the figures,
In some examples, each of the wearable 104, the wireless device 108, and the terminal 112 include, implement, or otherwise display a satisfaction survey 116. In the illustrated example of
In operation, any of the users 106, 110, 114A, 114B, and 114C can select one of the three (3) tiers of satisfaction 118A-C through the satisfaction survey 116. In response to a selection of one of the three (3) tiers of satisfaction 118A-C, the satisfaction survey 116 is further to append a timestamp and a device identifier to the resulting data point. Further, for the wearable 104 and the wireless device 108, an identity of a user (e.g., the user 106 for the wearable 104 and the user 110 for the wireless device 108, etc.) is known and/or can be determined (e.g., via facial recognition, fingerprint identification, retinal/iris scan, login, password, etc.) and can be appended to the data point.
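For illustration, the data point construction described above can be sketched as follows. This is a minimal sketch, not the disclosed implementation; the function name, tier labels, and field names are assumptions introduced for this example.

```python
import time

# Illustrative tier labels for the three (3) tiers of satisfaction 118A-C.
TIER_LABELS = {1: "unsatisfied", 2: "neutral", 3: "satisfied"}

def record_selection(tier, device_id, user_id=None, clock=time.time):
    """Build a satisfaction data point with an appended timestamp and
    device identifier; a user identity is appended when it is known."""
    if tier not in TIER_LABELS:
        raise ValueError("tier must be 1, 2, or 3")
    point = {
        "tier": tier,
        "label": TIER_LABELS[tier],
        "timestamp": clock(),      # appended timestamp
        "device_id": device_id,    # appended device identifier
    }
    if user_id is not None:        # known for, e.g., a wearable or wireless device
        point["user_id"] = user_id
    return point
```

In use, a wearable that knows its wearer would pass a `user_id`, while a shared terminal would omit it and resolve the identity separately.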
Alternatively, for the terminal 112, any one of the users 114A, 114B, and 114C can complete the satisfaction survey 116 via the terminal 112. As such, the identity of the user for the terminal 112 is determined such as based on a user identification number (ID), a login and password/passcode, a name entered at the terminal 112, etc. However, in some examples, based on a location of the terminal 112, the list of potential users may be limited. For example, if only the user 114A had access to the room of the user 114C, the terminal 112 can determine the user can only be one of the user 114A or the user 114C.
In some examples, the system 100 further includes a server 120 and a server 121. In the illustrated example of
Looking to the server 121, employee data can be utilized to identify, for example, employee qualifications (e.g., employee certifications, employee qualification per unit, employee title/role, etc.), employee scheduling (e.g., an employee's normal hours, preferred shifts, scheduled vacation, etc.), employee reviews, and/or any other employee statistics or aggregate information that can be determined from the data. In some examples, the server 121 can store clock in and clock out times for employees which can, in some examples, include a flag denoting whether employees were on time for a shift. In such examples, the on time flag can further be processed by the server 121 to determine an on time percentage for each employee (e.g., the employee was more than 5 minutes late 10% of shifts, the employee has an on time rating of 95%, etc.). Further, the server 121 can store employee caseloads (e.g., one or more patients seen by the employee, determined from data stored in the server 120).
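The on time percentage computation described above reduces to a simple ratio over the stored per-shift flags. A minimal sketch, assuming the flags are available as booleans (the function name is an assumption for this example):

```python
def on_time_percentage(shift_flags):
    """Percentage of recorded shifts for which the on-time flag was set.

    shift_flags: iterable of booleans, one per shift (True = on time).
    """
    flags = list(shift_flags)
    if not flags:
        return None  # no shifts recorded yet for this employee
    return 100.0 * sum(flags) / len(flags)
```

For example, an employee late for 1 of 20 shifts has an on time rating of 95%.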
In some examples, the system 100 further includes a computing terminal 122. In the illustrated example of
In the illustrated example of
The server 126, included in or otherwise implemented by the system 100, hosts resource scheduling data (e.g., the server 126 is a scheduling server, etc.) generated by the resource director 102. The scheduling data and scheduling properties hosted at the server 126 can include, but is not limited to, resource schedules, completed resource substitutions/switches, resource rankings, pools of high performing and low performing resources, and/or any other scheduling statistics or aggregate information. In some examples, the scheduling properties can be reconfigured (e.g., reconfigured scheduling properties) by an external input to the server 126. In some examples, the server 126 can further propagate the scheduling properties to one or more user devices.
The system 100 can further include the computing terminal 128. In the illustrated example of
Additionally, the system 100 includes the wireless device 130. In the illustrated example of
The system 100 can further include the wearable 132. In the illustrated example of
In the illustrated example of
The first screen 205 displays the current schedule of the user of the trade request interface 200a. For example, in the illustrated example of
In response to selecting the trade request input, the user is directed to the third screen 215 which, in the illustrated example of
In response to the user selecting one of the available shift trades, the user is directed to the fourth screen 220 and, in response to confirming the trade request, the user is further directed to the fifth screen 225.
The sixth screen 230, displayed to the user upon start-up of the trade acceptance interface 200b, displays a menu including a tab for viewing received shift trade requests. In the illustrated example of
Upon selecting the received shift trade requests tab, the seventh screen 235 is displayed to the user. The seventh screen 235 displays a list of received shift trade requests in addition to a list of distributed shift trade requests. In the illustrated example of
In response to selecting the received shift trade request, the user is directed to the eighth screen 240 which displays details of the shift trade request. In the illustrated example of
In response to the user selecting the accept input, the ninth screen 245 is displayed to the user. The ninth screen 245 allows the user to confirm the trade acceptance and, in response to the confirmation, the tenth screen 250 is displayed to the user. The tenth screen 250, in the illustrated example, is the same portion of the interface as the seventh screen 235. However, as the user previously accepted the one outstanding trade request shown on the seventh screen 235 at the eighth screen 240 and the ninth screen 245, the tenth screen 250 does not show any shift trade requests.
The communication manager 302, included in or otherwise implemented by the resource director 102, can at least one of transfer data to and receive data from at least one of the wearable 104, the wireless device 108, the terminal 112, the server 120, the server 121, the computing terminal 122, the server 126, the computing terminal 128, the wireless device 130, and the wearable 132 via at least one of the network 124 and the network 134.
In some such examples, the communication manager 302 can be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, and/or a PCI express interface. Further, the interface standard of the communication manager 302 is to at least one of match the interface of the network 124 and the network 134 or be converted to match the interface of the network 124 and the network 134.
The scheduling constraint storer 304, included in or otherwise implemented by the resource director 102, can store scheduling constraints as retrieved from the server 121. In some examples, the scheduling constraints stored by the scheduling constraint storer 304 can include, but are not limited to, hard constraints such as safe scheduling constraints (e.g., consecutive hour limits, maximum hour limits per seven day period, and maximum numbers of consecutive shifts); overtime rules (e.g., hours per day and hours per week); union rules (e.g., rotation of staff, hour limits, mandatory staff rotation, etc.); shift overlap conflicts; indicated availability (e.g., FMLA leave, scheduled PTO, scheduled training, and/or call-ins for sickness); and employee status (e.g., per diem, full time, part time, etc.). Additionally, the scheduling constraint storer 304 can store soft constraints such as scheduling preferences.
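Checking a proposed shift against hard constraints like those above can be sketched as follows. The specific limits (16 consecutive hours, 60 hours per seven day period) and field names are illustrative assumptions, not values from this disclosure:

```python
# Assumed illustrative limits, not values specified by the disclosure.
MAX_CONSECUTIVE_HOURS = 16
MAX_HOURS_PER_7_DAYS = 60

def violated_hard_constraints(employee, proposed_shift):
    """Return a list of violated hard constraints (empty list = schedulable)."""
    violations = []
    if employee["consecutive_hours"] + proposed_shift["hours"] > MAX_CONSECUTIVE_HOURS:
        violations.append("consecutive hour limit")
    if employee["hours_last_7_days"] + proposed_shift["hours"] > MAX_HOURS_PER_7_DAYS:
        violations.append("maximum hours per seven day period")
    if proposed_shift["date"] in employee["unavailable_dates"]:
        violations.append("indicated availability")  # e.g., PTO, FMLA, training
    return violations
```

Soft constraints such as scheduling preferences would instead contribute to a score rather than disqualify a shift outright.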
The outcome score storer 306, included in or otherwise implemented by the resource director 102, can store patient data received from the server 120. For example, the outcome score storer 306 can store patient never events (e.g., patient falls, death, wrong site surgery, bed sores, medication error, etc.), patient re-admits (e.g., number of re-admits, length of re-admits, frequency of re-admits, etc.), patient length of stay (e.g., number of total days, days in intensive care, number of therapy events during stay, etc.), and/or any other patient statistics or aggregate information that can be determined from the data. Further, based on data retrieved from the server 121, the outcome score storer 306 can store identities of employees (e.g., nurses, doctors, technicians, etc.) associated with (e.g., had an interaction with) the patient.
The satisfaction score storer 308, included in or otherwise implemented by the resource director 102, can store satisfaction data received from the satisfaction survey 116 that is included in, implemented by, and/or otherwise displayed on at least one of the wearable 104, the wireless device 108, and/or the terminal 112 via the network 124, as illustrated in
In some examples, based on patient data retrieved from the server 120 and caseload data retrieved from the server 121, other individuals associated with the user who completed the satisfaction survey 116 (e.g., a patient's doctor, a patient's nurse, a patient's family member, a doctor's assistant, a nurse's hospital tech, etc.) can be stored in conjunction with the satisfaction data.
The scheduling event storer 310, included in or otherwise implemented by the resource director 102, can store scheduling requests received from the computing terminal 122 via the network 124. For example, as illustrated in
At least one of the scheduling constraint storer 304, the outcome score storer 306, the satisfaction score storer 308, and the scheduling event storer 310 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). At least one of the scheduling constraint storer 304, the outcome score storer 306, the satisfaction score storer 308, and the scheduling event storer 310 can additionally or alternatively be implemented by one or more double data rate (DDR) memories such as DDR, DDR2, DDR3, mobile DDR (mDDR), etc. At least one of the scheduling constraint storer 304, the outcome score storer 306, the satisfaction score storer 308, and the scheduling event storer 310 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), etc. While in the illustrated example each of the scheduling constraint storer 304, the outcome score storer 306, the satisfaction score storer 308, and the scheduling event storer 310 is illustrated as a single database, any one of the scheduling constraint storer 304, the outcome score storer 306, the satisfaction score storer 308, and/or the scheduling event storer 310 can be implemented by any number and/or type(s) of databases. Further, the data stored in the scheduling constraint storer 304, the outcome score storer 306, the satisfaction score storer 308, and the scheduling event storer 310 can be in any format such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc.
The score prediction engine 312, included in or otherwise implemented by the resource director 102, detailed further in conjunction with
In some examples, the output generator 314, included or otherwise implemented by the resource director 102, generates an output based on data received from the score prediction engine 312. In general, the output generator 314 receives a plurality of scheduling options from the score prediction engine 312 and determines a portion of the plurality of scheduling options to present to a user, the portion selected based on one or more selected criteria. For example, the output generator 314 can receive 10,000 possible schedules from the score prediction engine 312 and select the 3 highest performing schedules to present to the user.
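Selecting the highest performing candidates from a large pool, as in the 10,000-schedule example above, is a top-k selection. A minimal sketch (the function name and pair layout are assumptions for this example):

```python
import heapq

def top_schedules(scored_schedules, k=3):
    """Select the k highest-scoring schedules from a large candidate set.

    scored_schedules: iterable of (score, schedule) pairs, e.g. the
    candidates received from the score prediction engine.
    Returns the pairs in descending score order.
    """
    return heapq.nlargest(k, scored_schedules, key=lambda pair: pair[0])
```

Using a heap-based selection avoids fully sorting all candidates when only a handful will be presented to the user.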
In some examples, the output generator 314 includes the example schedule optimizer 316, which can generate a resource schedule and is further detailed in conjunction with
In some examples, in response to at least one of the schedule optimizer 316, the substitution recommender 318, and the resource manager 320 completing generation of an output (e.g., a resource schedule, a resource substitution recommendation, a top resource pool, etc.), the output generator 314 is further to distribute the output along with a destination device tag for the output to the communication manager 302. Utilizing the destination device tag, the communication manager 302 is further to distribute the output to at least one of the server 126, the computing terminal 128, the wireless device 130, and/or the wearable 132 of
The resource director 102 further includes, in some examples, the example preference learning module 322. In some examples, based on inputs received in response to the outputs generated by the output generator 314, the preference learning module 322 determines the general preferences of one or more users (e.g., managers, hospital administration, etc.) when accepting/rejecting outputs of the resource director 102. For example, in response to the user consistently taking an action a predetermined percentage (e.g., 95%, 99%, etc.) of given opportunities, the preference learning module 322 can determine, and thereby notify, the resource director 102 that user feedback is not required in making similar decisions in the future.
For example, the preference learning module 322 can determine that the user always accepts schedule substitutions that result in a higher resource composition score and, in response, the resource director 102 no longer queries the user (e.g., manager) regarding substitutions that improve the overall resource composition score. In another example, the preference learning module 322 can determine the user consistently sends home a certain nurse during periods of low census. In such an example, the resource director 102 automatically sends the nurse home during the next period of low census.
In addition to acting automatically without user input as described above, the preference learning module 322 can also receive pre-defined user inputs regarding when it is not necessary to query the user prior to making a change. For example, the user can set the preference learning module 322 to accept all recommended schedules above a pre-determined score threshold.
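The query-skipping logic described above can be sketched as a simple acceptance-rate check. The 95% threshold and the minimum sample size are illustrative assumptions; the disclosure leaves the exact predetermined percentage configurable:

```python
# Assumed illustrative parameters, not values fixed by the disclosure.
AUTO_ACCEPT_THRESHOLD = 0.95  # predetermined percentage of opportunities
MIN_OBSERVATIONS = 20         # evidence required before automating

def requires_user_feedback(history):
    """history: list of booleans, True where the user accepted a similar
    recommendation. Returns False once the decision can be automated."""
    if len(history) < MIN_OBSERVATIONS:
        return True  # not enough evidence yet; keep querying the user
    acceptance_rate = sum(history) / len(history)
    return acceptance_rate < AUTO_ACCEPT_THRESHOLD
```

A pre-defined user input, such as a score threshold for auto-accepting schedules, would simply bypass this check for the matching decision type.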
In the example score prediction engine 312 of the example
An artificial neural network such as the neural network 402 is a computer system architecture model that learns to do tasks and/or provide responses based on evaluation or “learning” from examples having known inputs and known outputs. A neural network such as the neural network 402 features a series of interconnected nodes referred to as “neurons” or nodes. Input nodes are activated from an outside source/stimulus, such as input from the outcome score storer 306 and the satisfaction score storer 308. The input nodes activate other internal network nodes according to connections between nodes (e.g., governed by machine parameters, prior relationships, etc.). The connections are dynamic and can change based on feedback, training, etc. By changing the connections, an output of the neural network 402 can be improved or optimized to produce more accurate results. For example, the neural network 402 can be trained using information from one or more sources to map inputs to potential recommended schedules, etc.
Machine learning techniques, whether neural networks, deep learning networks, and/or other experiential/observational learning system(s), can be used to generate optimal results, locate an object in an image, understand speech and convert speech into text, and improve the relevance of search engine results, for example. Deep learning is a subset of machine learning that uses a set of algorithms to model high-level abstractions in data using a deep graph with multiple processing layers including linear and non-linear transformations. While many machine learning systems are seeded with initial features and/or network weights to be modified through learning and updating of the machine learning network, a deep learning network trains itself to identify “good” features for analysis. Using a multilayered architecture, machines employing deep learning techniques can process raw data better than machines using conventional machine learning techniques. Examining data for groups of highly correlated values or distinctive themes is facilitated using different layers of evaluation or abstraction.
For example, deep learning that utilizes a convolutional neural network (CNN) segments data using convolutional filters to locate and identify learned, observable features in the data. Each filter or layer of the CNN architecture transforms the input data to increase the selectivity and invariance of the data. This abstraction of the data allows the machine to focus on the features in the data it is attempting to classify and ignore irrelevant background information.
Deep learning operates on the understanding that many datasets include high level features which include low level features. While examining an image, for example, rather than looking for an object, it is more efficient to look for edges which form motifs which form parts, which form the object being sought. These hierarchies of features can be found in many different forms of data.
Learned observable features include objects and quantifiable regularities learned by the machine during supervised learning. A machine provided with a large set of well classified data is better equipped to distinguish and extract the features pertinent to successful classification of new data.
A deep learning machine that utilizes transfer learning can properly connect data features to certain classifications affirmed by a human expert. Conversely, the same machine can, when informed of an incorrect classification by a human expert, update the parameters for classification. Settings and/or other configuration information, for example, can be guided by learned use of settings and/or other configuration information, and, as a system is used more (e.g., repeatedly and/or by multiple users), a number of variations and/or other possibilities for settings and/or other configuration information can be reduced for a given situation.
An example deep learning neural network can be trained on a set of expert classified data, for example. This set of data builds the first parameters for the neural network, and this would be the stage of supervised learning. During the stage of supervised learning, the neural network can be tested to determine whether the desired behavior has been achieved.
Once a desired neural network behavior has been achieved (e.g., a machine has been trained to operate according to a specified threshold, etc.), the machine can be deployed for use (e.g., testing the machine with “real” data, etc.). During operation, neural network classifications can be confirmed or denied (e.g., by an expert user, expert system, reference database, etc.) to continue to improve neural network behavior. The example neural network is then in a state of transfer learning, as parameters for classification that determine neural network behavior are updated based on ongoing interactions. In certain examples, the neural network such as the neural network 402 can provide direct feedback to another process, such as the resource composition score engine 406, etc. In certain examples, the neural network 402 outputs data that is buffered (e.g., via the cloud, etc.) and validated before it is provided to another process.
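The supervised-then-transfer workflow above can be sketched with a deliberately tiny single-weight model (an assumption for readability; the disclosed network 402 is multilayered):

```python
class TinyModel:
    """A one-weight stand-in for the trained network, for illustration only."""
    def __init__(self, lr=0.1):
        self.w = 0.0
        self.lr = lr

    def predict(self, x):
        return 1 if self.w * x > 0.5 else 0

    def update(self, x, label):
        # gradient-style correction toward the expert-provided label
        error = label - self.w * x
        self.w += self.lr * error * x

def supervised_training(model, labeled_data, epochs=50):
    """Stage 1: build the first parameters from expert classified data."""
    for _ in range(epochs):
        for x, label in labeled_data:
            model.update(x, label)

def transfer_learning_step(model, x, expert_label):
    """Stage 2: during deployment, an expert confirms or denies a
    classification, and an incorrect classification updates the parameters."""
    if model.predict(x) != expert_label:
        model.update(x, expert_label)
```

A confirmed classification leaves the parameters untouched; a denied one triggers an update, matching the ongoing-interaction behavior described above.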
In the example of
In some examples, once the model is deployed to the resource composition score engine 406, the resource composition generator 404 can generate one or more possible combinations of resources (e.g., one or more possible staffing schedules) and distribute the possible combinations of resources to the resource composition score engine 406. Utilizing the model, the resource composition score engine 406 outputs one or more predicted outcome scores and satisfaction scores for the received combinations of resources and distributes these to the resource composition score accumulator 408, where the outcome scores and satisfaction scores are combined to determine one predicted score for each combination of resources. In some examples, the resource composition score accumulator 408 can feed back the highest scoring resource compositions to the resource composition generator 404, which can use this data as seed data to generate improved possible combinations of resources.
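The generate → score → accumulate → reseed loop described above resembles an evolutionary search. A minimal sketch under stated assumptions: compositions are modeled as permutable lists, mutation is a random swap, and `score_fn` stands in for the model-backed score engine and accumulator combined:

```python
import random

def evolve_compositions(seed, score_fn, generations=10, pool_size=20,
                        keep=5, rng=None):
    """Iteratively generate, score, and reseed resource compositions.

    seed: an initial composition (a list of resources); score_fn: returns
    one combined predicted score per composition (higher is better).
    """
    rng = rng or random.Random(0)
    pool = [list(seed)]
    for _ in range(generations):
        candidates = [list(c) for c in pool]  # keep parents so the best survives
        for comp in pool:
            for _ in range(pool_size // len(pool)):
                mutated = list(comp)  # generator step: permute a composition
                i, j = rng.randrange(len(comp)), rng.randrange(len(comp))
                mutated[i], mutated[j] = mutated[j], mutated[i]
                candidates.append(mutated)
        candidates.sort(key=score_fn, reverse=True)  # score + accumulate
        pool = candidates[:keep]  # feed back the highest scoring compositions
    return pool[0]
```

Because parents are retained in each generation, the best composition found so far is never discarded, so the final score is at least the seed's score.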
In some examples, at least one of the model deployed from the neural network 402 and resource composition scores calculated by the resource composition score engine 406 and the resource composition score accumulator 408 can be stored in the resource composition score storer 410, included in or otherwise implemented by the score prediction engine 312. In some examples, the resource composition score storer 410 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). The resource composition score storer 410 can additionally or alternatively be implemented by one or more double data rate (DDR) memories such as DDR, DDR2, DDR3, mobile DDR (mDDR), etc. The resource composition score storer 410 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), etc. While in the illustrated example the resource composition score storer 410 is illustrated as a single database, the resource composition score storer 410 can be implemented by any number and/or type(s) of databases. Further, the data stored in the resource composition score storer 410 can be in any format such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc.
The layer 520 is an input layer that, in the example of
Of the connections 530, 550, and 570, certain example connections 532, 552, 572 can be given added weight while other example connections 534, 554, 574 can be given less weight in the neural network 402. Input nodes 522-526 are activated through receipt of input data via inputs 512-516, for example. Nodes 542-548 and 562-568 of hidden layers 540 and 560 are activated through the forward flow of data through the network 402 via the connections 530 and 550, respectively. Node 582 of the output layer 580 is activated after data processed in hidden layers 540 and 560 is sent via connections 570. When the output node 582 of the output layer 580 is activated, the node 582 outputs an appropriate value based on processing accomplished in hidden layers 540 and 560 of the neural network 402.
For example, the inputs 512, 514, and 516 to the neural network 402 can include at least one of outcome scores and satisfaction scores for one or more previous resource compositions that flow into the input layer 520. From the input layer 520, the scores for the previous resource compositions are distributed to the hidden layer 540 (e.g., the layer is hidden as the outputs of the layer are not displayed in a user-facing manner) via the connections 530. In some examples, at the hidden layer 540, the nodes 542-548 can each consider different permutations of the resource compositions received via the connections 530. Further in such examples, the best performing permutations from the nodes 542-548 are further distributed to the hidden layer 560 via the connections 550 and the nodes 562-568 further permutate the best performing permutations. The nodes 562-568 calculate scores for the permutated resource compositions and distribute the scores to the output node 582. Once the output node 582 is activated, the output node 582 can determine a scoring model based on the permutated resource compositions and their corresponding scores received via the connections 570. Further, the output node 582 can distribute an output 590, wherein the output 590 can include the model including one or more correlation coefficients, the coefficients correlating scores to one or more resources and/or combinations of resources.
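The layer-by-layer forward flow described above (inputs 512-516 through hidden layers 540 and 560 to output node 582, with per-connection weights) can be sketched generically as below. The topology, the `tanh` activation, and all weight values are illustrative assumptions; the example does not reproduce the actual weights or activation functions of the neural network 402.

```python
import math

# Minimal forward-pass sketch of a small feed-forward network: activations
# flow from an input layer through hidden layers to an output node, with
# heavier-weighted connections contributing more to each node's activation.
# Topology, weights, and the tanh activation are illustrative assumptions.

def forward(inputs, weights_by_layer):
    """Propagate activations layer by layer; each inner list holds one
    node's incoming connection weights."""
    activations = inputs
    for layer_weights in weights_by_layer:
        activations = [
            math.tanh(sum(w * a for w, a in zip(node_weights, activations)))
            for node_weights in layer_weights
        ]
    return activations
```

With three inputs, two four-node hidden layers, and a single output node, `forward` returns a one-element list, mirroring the single activated output node 582.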
The constraint applicator 602, included in or otherwise implemented by the output generator 314, applies each of the known schedule constraints received from the scheduling constraint storer 304 to the resource compositions received from the score prediction engine 312. In some examples, the constraint applicator 602 may only apply the hard constraints received from the scheduling constraint storer 304. Additionally or alternatively, the constraint applicator 602 can apply both the hard constraints and the soft constraints received from the scheduling constraint storer 304.
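The hard-versus-soft filtering performed by the constraint applicator 602 can be sketched as a predicate filter. The function name and the representation of constraints as boolean predicates are assumptions for this example, not the disclosed API.

```python
# Illustrative sketch of hard- vs. soft-constraint application: hard
# constraints are always enforced; soft constraints are enforced only when
# requested. Representing constraints as predicates is an assumption.

def apply_constraints(compositions, hard_constraints, soft_constraints=(),
                      apply_soft=False):
    """Keep only compositions satisfying every hard constraint, and
    optionally every soft constraint as well."""
    active = list(hard_constraints) + (list(soft_constraints) if apply_soft else [])
    return [c for c in compositions if all(ok(c) for ok in active)]
```

Passing `apply_soft=False` mirrors the case in which the constraint applicator 602 applies only the hard constraints received from the scheduling constraint storer 304.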
The schedule generator 604, included in or otherwise implemented by the output generator 314, can generate a plurality of possible schedules based on the constraints applied by the constraint applicator 602 to the plurality of resource compositions. In some examples, the schedule generator 604 is an exhaustive generator that attempts to generate a majority of all possible schedules.
The schedule selector 606, included in or otherwise implemented by the output generator 314, selects a portion of the schedules generated by the schedule generator 604 to be distributed to a user of the system 100, via the communications manager 302, and to the schedule storer 608. In some examples, the portion of the schedules to be distributed is based on the predicted scores of the schedules as determined by the score prediction engine 312. For example, the top 3 schedules, ranked by score, can be distributed for viewing by the user and for storage. Additionally or alternatively, the schedule selector 606 can select a schedule based on the predicted scores and one or more preferences as determined by the preference learning module 322. For example, if one combination of resources marginally decreases the predicted score but has been consistently preferred by the user of the system 100, the schedule selector 606 can output the schedule including the typical user preference.
Additionally or alternatively, the schedule selector 606 can select one or more schedules to output based on only one of predicted outcome scores or satisfaction scores. Additionally, the schedule selector 606 can select one or more schedules to output based on an uneven weight of predicted outcome scores and satisfaction scores (e.g., outcomes weighted more, satisfaction weighted more, etc.). Additionally or alternatively, the schedule selector 606 can select one or more schedules to output based on a pre-defined preference of the user as distributed by the preference learning module 322. For example, the pre-defined preference of the user can be to prefer cost-saving schedules and the schedule selector 606 can, in response, output the highest performing schedule(s) that adhere to a pre-defined cost threshold.
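The selection policy described above — ranking schedules by a weighted blend of outcome and satisfaction scores, optionally under a cost-threshold preference — can be sketched as follows. The schedule fields (`outcome`, `satisfaction`, `cost`) and the blending formula are illustrative assumptions.

```python
# Hedged sketch of the schedule selector's ranking: a weighted blend of
# outcome and satisfaction scores, an optional cost-saving preference, and
# a top-N cutoff. All field names and weights are illustrative assumptions.

def select_schedules(schedules, outcome_weight=0.5, cost_threshold=None, top_n=3):
    """Return the top-N schedules by weighted score, optionally filtered
    to a pre-defined cost threshold (the cost-saving preference)."""
    candidates = [
        s for s in schedules
        if cost_threshold is None or s["cost"] <= cost_threshold
    ]
    candidates.sort(
        key=lambda s: outcome_weight * s["outcome"]
        + (1 - outcome_weight) * s["satisfaction"],
        reverse=True,
    )
    return candidates[:top_n]
```

Raising `outcome_weight` above 0.5 corresponds to weighting outcomes more heavily than satisfaction, as described above; the `cost_threshold` parameter models the pre-defined cost-saving preference distributed by the preference learning module 322.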
The schedule storer 608, included in or otherwise implemented by the output generator 314, can store the one or more schedules generated by the schedule selector 606 and/or the deployed schedule as selected by the user. In some examples, the schedule storer 608 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). The schedule storer 608 can additionally or alternatively be implemented by one or more double data rate (DDR) memories such as DDR, DDR2, DDR3, mobile DDR (mDDR), etc. The schedule storer 608 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), etc. While in the illustrated example the schedule storer 608 is illustrated as a single database, the schedule storer 608 can be implemented by any number and/or type(s) of databases. Further, the data stored in the schedule storer 608 can be in any format such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc.
In one operational example of the schedule optimizer 316, Nurse Manager Susan determines that she needs to generate a schedule for her unit. Per hospital regulations, the generated schedule needs to encompass the next six weeks. This task typically takes her 6-8 hours, and she often does not feel confident in one or more of the resource compositions included in the schedule as she has a limited quantity of time to consider all of the resource compositions that would be possible within the confines of the scheduling constraints provided to her. In such examples, Nurse Manager Susan can utilize the resource director 102 to instead generate multiple schedules that are possible within the confines of the scheduling constraints and include resource compositions with the highest predicted outcome scores and satisfaction scores.
For example, Nurse Manager Susan can initialize this routine with an input to a computing device (e.g., a wearable, a wireless device, a computing terminal, etc.) which is received by the resource director 102. The resource director 102, in response to receiving the request, generates one or more resource composition scores with the score prediction engine 312 which are sent to the schedule optimizer 316. The schedule optimizer 316 then applies scheduling constraints to the received resource compositions with the constraint applicator 602, generates one or more schedules satisfying the constraints with the schedule generator 604, and selects one or more of the generated schedules to present to Nurse Manager Susan based on predicted scores of the resource compositions included in the schedule generated by the score prediction engine 312. In some examples, each of the aforementioned actions are performed in real time (or substantially real time given data retrieval, processing, and transmission latency), and Nurse Manager Susan receives the selected schedules in the same computing session in which she requested the schedules.
Nurse Manager Susan can now select from one of the presented schedules using, for example, the scheduling recommendation system 129 displayed on the computing terminal 128. Additionally or alternatively, Nurse Manager Susan may like one of the presented schedules (e.g., a second presented schedule), but disagrees with one decision the resource director 102 made for the fourth day of the schedule (e.g., too many nurses scheduled). In such an example, Nurse Manager Susan can use the computing terminal 128 to modify the schedule, removing one of the scheduled nurses in the described example, and accept the schedule after completing the modification. Upon completion of accepting the schedule, Nurse Manager Susan can return to other tasks. While Nurse Manager Susan and her nursing team are described in the above example, the first operational example of the schedule optimizer 316 can be used for any team and/or department at the hospital. For example, Janitorial Staff Manager Noah can generate and/or modify a schedule for the Janitorial team utilizing the resource director 102.
In a second operational example of the schedule optimizer 316, Nurse Manager Susan again determines that she needs to generate a schedule for her unit. Per hospital regulations, the generated schedule needs to encompass the next six weeks. However, in this example, Nurse Manager Susan prefers to manually generate schedules, but she still does not have time to consider all of the resource compositions that would be possible within the confines of the scheduling constraints provided to her. In such examples, Nurse Manager Susan can utilize the resource director 102 to monitor her selections. In some examples, monitoring her selections can include utilizing the schedule optimizer 316 to notify Nurse Manager Susan when a resource composition exists that is associated with a higher prediction score (e.g., at least one of a higher outcome score and/or a higher satisfaction score) than a resource composition she had manually selected. In such an example, Nurse Manager Susan can accept or reject the modification suggested by the resource director 102 via the scheduling recommendation system 129 displayed on the computing terminal 128. Utilizing the resource director 102 in such a manner, Nurse Manager Susan can be confident that the resource compositions that she selected and/or were recommended by the resource director 102 can lead to positive patient outcomes and high satisfaction scores for employees and/or patients.
The event processor 702, included in or otherwise implemented by the substitution recommender 318, processes a scheduling event received from the scheduling event storer 310. In some examples, processing the scheduling event further includes determining a type of scheduling event (e.g., low census, high census, trade request, etc.) in addition to a severity of a scheduling event (e.g., the magnitude of low/high census, importance of trade request, etc.). In some examples, the event processor 702 is further to distribute characteristics of the scheduling event to the substitution generator 704.
The substitution generator 704, included in or otherwise implemented by the substitution recommender 318, determines a plurality of scheduling substitutions that satisfy the scheduling constraints received from the scheduling constraint storer 304 and remedy the characteristics of the scheduling event received from the event processor 702. For example, the substitution generator 704 can select a quantity of employees (e.g., nurses, doctors, etc.) to call in based on the magnitude of a high census event. Additionally or alternatively, the substitution generator 704 can select a quantity of employees to send home based on the magnitude of a low census event. Additionally or alternatively, the substitution generator 704 can determine one or more employees capable of fulfilling a trade request with the requesting employee. In each case, the identified employees, along with the characteristics of the scheduling event, are distributed to the substitution selector 706.
The substitution selector 706, included in or otherwise implemented by the substitution recommender 318, determines a portion of the employees identified (e.g., the employees identified to be called in, the employees identified to be sent home, the employees identified as potential trades, etc.) to be distributed to the user of the system 100 via the communications manager 302. For example, based on data received from the score prediction engine 312, the substitution selector 706, in response to a low census event, can select one or more employees with the lowest predicted scores as determined by the score prediction engine 312 to be sent home. Additionally or alternatively, in response to a high census event, the substitution selector 706 can select one or more employees with the highest predicted scores as determined by the score prediction engine 312 to be called in. Additionally or alternatively, in response to a trade request, the substitution selector 706 can select one or more employees with the best possible impact on the overall predicted resource composition score (e.g., the impact of the trade would impact the resource composition positively or least negatively) to be possible trade partners. In addition to using the predicted scores received from the score prediction engine 312, the substitution selector 706 can, in some examples, utilize other known management criteria (e.g., minimizing financial impact, etc.) when determining the portion of the employees identified by the substitution selector 706 to be distributed to the communication manager 302.
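The event-driven selection logic above — lowest-scored employees sent home on low census, highest-scored employees called in on high census — can be sketched as follows. The event-type strings and the score lookup are simplified assumptions for illustration.

```python
# Illustrative sketch of the substitution selector's logic for census
# events; the event-type strings and the per-employee score mapping are
# assumptions, not the disclosed data model.

def select_substitutions(event_type, employees, scores, count=1):
    """Pick employees to send home (lowest predicted scores) on low census
    or to call in (highest predicted scores) on high census."""
    ranked = sorted(employees, key=lambda e: scores[e])
    if event_type == "low_census":
        return ranked[:count]          # lowest predicted scores go home
    if event_type == "high_census":
        return ranked[-count:][::-1]   # highest predicted scores called in first
    raise ValueError(f"unhandled scheduling event: {event_type}")
```

A trade request would instead rank candidate trade partners by the predicted change to the overall resource composition score, but the ranking-and-slicing pattern is the same.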
The substitution storer 708, included in or otherwise implemented by the substitution recommender 318, can store the portion of employees identified by the substitution selector 706. Additionally or alternatively, in some examples, the substitution storer 708 can store a user selection of one or more of the employees of the portion of employees. In some examples, the substitution storer 708 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). The substitution storer 708 can additionally or alternatively be implemented by one or more double data rate (DDR) memories such as DDR, DDR2, DDR3, mobile DDR (mDDR), etc. The substitution storer 708 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), etc. While in the illustrated example the substitution storer 708 is illustrated as a single database, the substitution storer 708 can be implemented by any number and/or type(s) of databases. Further, the data stored in the substitution storer 708 can be in any format such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc.
In operational examples of the substitution recommender 318, Nurse Manager Tammy, in charge of the 3-West wing of a hospital, must routinely fill in scheduling gaps in her unit. This task, while not typically requiring a significant expenditure of time, must be done dynamically and on the fly, and Tammy often does not feel confident in the resources she selected to fill the scheduling gaps as she has a limited quantity of time to consider all of the resources that would be possible to fill the gaps within the confines of the scheduling constraints provided to her. In such examples, Nurse Manager Tammy can utilize the resource director 102 to instead dynamically generate multiple resource substitutions that are possible within the confines of the scheduling constraints and result in resource compositions with the highest predicted outcome scores and satisfaction scores.
For example, Nurse Manager Tammy, in some examples, is not required to initialize this process. Instead, the resource director 102 can automatically determine the presence of a scheduling event and automatically begin the process flow in response. The resource director 102, in response to receiving the scheduling event (e.g., a low census event, a high census event, a shift trade request, etc.), generates one or more resource composition scores with the score prediction engine 312 which are sent to the substitution recommender 318. The substitution recommender 318 then determines the severity of the scheduling event with the event processor 702, generates one or more resource substitutions satisfying the constraints with the substitution generator 704, and selects one or more of the substitutions to present to Nurse Manager Tammy with the substitution selector 706 based on predicted scores, determined by the score prediction engine 312, of the resource compositions created by the substitutions. In some examples, each of the aforementioned actions are performed dynamically, and Nurse Manager Tammy receives the recommended substitutions in real time (or substantially real time given data retrieval, processing, and transmission latency) compared to when the scheduling event was received by the resource director 102.
Nurse Manager Tammy can now select from one of the presented substitutions using, for example, the wearable 132.
In one operational example of the substitution recommender 318, the resource director 102 receives a high census event from the server 120 or a call-in (e.g., Nurse Michelle calls in sick for her shift occurring that day) from the server 121. In such an example, the resource director 102 presents one or more resources to call in. For example, the resource director 102 can present a recommendation to call in Nurse Aaron because his predicted score on the team is higher than Nurse Tim's. In other examples, the resource director 102 can present a recommendation to call in both Nurse Aaron and Nurse Tim because two resources are needed. In yet other examples, the resource director 102 can present a recommendation to try to call in Nurse Aaron first, Nurse Tim second, and Nurse Joy, whose predicted score is lower than both Aaron's and Tim's, third. In such an example, Nurse Manager Tammy can accept or reject any one of the recommendations suggested by the resource director 102 on the display of the wearable 132, for example. In response to Nurse Tammy making a selection, the selection is distributed to the server 126, where it is propagated to any employee affected by the substitution.
In a second operational example of the substitution recommender 318, the resource director 102 receives a low census event from the server 120. In such an example, the resource director 102 presents one or more resources to call off. For example, the resource director 102 can present a recommendation to call off Nurse Joy because her predicted score on the team is lower than that of Nurse Bill, who is also working the same shift. In other examples, the resource director 102 can present a recommendation to call off both Nurse Joy and Nurse Bill because the shift requires two fewer resources than are currently scheduled. In such an example, Nurse Manager Tammy can accept or reject any one of the recommendations suggested by the resource director 102 on the display of the wearable 132, for example. In response to Nurse Tammy making a selection, the selection is distributed to the server 126, where it is propagated to any employee affected by the substitution.
In a third operational example of the substitution recommender 318, the resource director 102 receives a shift trade request (e.g., the shift trade is requested by Nurse Florence) from the computing terminal 122. In such an example, the resource director 102 presents one or more resources to trade with Nurse Florence. For example, the resource director 102 can present a recommendation to trade Nurse Florence's shift with either Nurse Karlene's shift or Nurse Joy's shift the following day, as either shift trade improves the predicted score of each of their respective teams. In some examples, the resource director 102 can further present a more detailed analysis noting that a trade with Nurse Karlene would have a more significant effect on outcome scores whereas a trade with Nurse Joy would have a more significant effect on satisfaction scores. In such examples, Nurse Manager Tammy can accept or reject any one of the recommendations suggested by the resource director 102 on the display of the wearable 132, for example. In some examples, Nurse Manager Tammy can have a preference for satisfaction scores and selects the trade with Nurse Joy. Alternatively, Nurse Manager Tammy can have a preference for outcome scores and selects Nurse Karlene. In some examples, this preference can be stored in the server 126. In response to Nurse Tammy making a selection, the selection is distributed to the server 126, where it is propagated to any employee affected by the substitution.
In a fourth operational example of the substitution recommender 318, the resource director 102 receives a staffing imbalance (e.g., a Post Op unit is overstaffed and an ICU unit is understaffed, etc.). In such an example, the resource director 102 presents one or more resources to shift from the Post Op unit to the ICU unit. For example, the resource director 102 can present a recommendation to shift Nurse Joy from the Post Op unit to the ICU unit as it has the greatest effect on the predicted score for the ICU unit. Additionally, the resource director 102 can present a recommendation to shift Nurse Jim from the Post Op unit to the ICU unit as it has the least negative effect on the predicted score of the Post Op unit. In such examples, Nurse Manager Tammy can accept or reject any one of the recommendations suggested by the resource director 102 on the display of the wearable 132, for example. In some examples, Nurse Manager Tammy can have a preference for the receiving unit and selects Nurse Joy. Alternatively, Nurse Manager Tammy can have a preference for the shifting unit and selects Nurse Jim. In some examples, this preference can be stored in the server 126. In response to Nurse Tammy making a selection, the selection is distributed to the server 126, where it is propagated to any employee affected by the substitution.
The resource score sorter 802, included in or otherwise implemented by the resource manager 320, can utilize the predicted resource composition scores received from the score prediction engine 312 to sort a plurality of resources by score. For example, the resource score sorter 802 can sort the resources by overall score. Additionally or alternatively, the resource score sorter 802 can sort the resources by outcome score. Additionally or alternatively, the resource score sorter 802 can sort the resources by satisfaction score. In response to completing the sorting process, the resource score sorter 802 is further to distribute the sorted list of resources to the ranked resource generator 804.
The ranked resource generator 804, included in or otherwise implemented by the resource manager 320, ranks the list of resources received from the resource score sorter 802 in descending order of score (e.g., from a high score to a low score), the score defined by at least one of the overall scores, the outcome scores, and/or the satisfaction scores. In some examples, the ranked resource generator 804, in addition to generating the ranking of the resources, is further to generate one or more pools (e.g., strata) of resources based on their scores. For example, the ranked resource generator 804 can take the top strata of resources (e.g., the top 10%, top 25%, etc.) and generate a top performing pool (e.g., a float pool). Additionally or alternatively, the ranked resource generator 804 can take the bottom strata of resources (e.g., the bottom 10%, bottom 25%, etc.) and generate a low performing pool. In response to generating at least one of the ranked order of resources, the top performing pool, and the low performing pool, the ranked resource generator 804 can distribute at least one of these data sets to at least one of the communications manager 302, the commendation recommender 806, and the ranked resource storer 808.
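The ranking-and-stratification step above can be sketched as follows; the function name, the dictionary-based score mapping, and the default 25% strata are illustrative assumptions.

```python
# Illustrative sketch of ranking resources by predicted score and cutting
# top/bottom strata into pools (e.g., a float pool of top performers).
# Function name, score mapping, and default fractions are assumptions.

def build_pools(scores, top_fraction=0.25, bottom_fraction=0.25):
    """Rank resources high-to-low and split off top- and low-performing pools."""
    ranked = sorted(scores, key=lambda r: scores[r], reverse=True)
    top_count = max(1, int(len(ranked) * top_fraction))
    bottom_count = max(1, int(len(ranked) * bottom_fraction))
    return {
        "ranking": ranked,
        "float_pool": ranked[:top_count],    # top performing pool
        "low_pool": ranked[-bottom_count:],  # low performing pool
    }
```

Changing `top_fraction` from 0.25 to 0.10 corresponds to building the float pool from the top 10% rather than the top 25% of ranked resources.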
The commendation recommender 806, included in or otherwise implemented by the resource manager 320, can determine a commendation (e.g., financial compensation, a physical or virtual reward, etc.) to be provided to one or more resources based on at least one of their ranking or their presence in at least one of the top performing pool and/or the low performing pool. For example, the commendation recommender 806 can recommend financial compensation to scale as a function of the ranking of resources. Additionally or alternatively, the commendation recommender 806 can recommend a certain reward, either virtual or physical, to be awarded to one or more resources in the top performing pool.
The ranked resource storer 808, included in or otherwise implemented by the resource manager 320, can store at least one of the ranking of resources, the top performing pool of resources, and the low performing pool of resources received from the ranked resource generator 804. In some examples, the ranked resource storer 808 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). The ranked resource storer 808 can additionally or alternatively be implemented by one or more double data rate (DDR) memories such as DDR, DDR2, DDR3, mobile DDR (mDDR), etc. The ranked resource storer 808 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), etc. While in the illustrated example the ranked resource storer 808 is illustrated as a single database, the ranked resource storer 808 can be implemented by any number and/or type(s) of databases. Further, the data stored in the ranked resource storer 808 can be in any format such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc.
While an example implementation of the resource director 102 of
Utilizing the training data 910, the score prediction engine 312 predicts resource composition scores for one or more combinations (e.g., compositions) of resources for which no score data is present (e.g., the composition of resources has not previously occurred) and distributes the resource composition scores to the schedule optimizer 316 as predicted scores transmittal 912. In response to receiving the predicted scores transmittal 912, the schedule optimizer 316 queries the scheduling constraint storer 304 with a constraint request 914, the constraint request 914 to request one or more constraints for the time period of the resource schedule requested by the server 121, and the scheduling constraint storer 304 returns the requested constraints to the schedule optimizer 316 as schedule constraint transmittal 916. In response to receiving the schedule constraint transmittal 916, the schedule optimizer 316 applies the constraints received with the schedule constraint transmittal 916 to the predicted scores transmittal 912 to determine one or more recommended schedules for the time period requested by the server 121 and distributes the one or more recommended schedules to the communication manager 302 as recommended schedule transmittal 918.
In response to receiving the recommended schedule transmittal 918, the communication manager 302 distributes the recommended schedule transmittal 918 to the computing terminal 128 as recommended schedule transmittal 920. A user of the computing terminal 128 receives the recommended schedule transmittal 920 and, utilizing an interface of the computing terminal 128, accepts or modifies one of the recommended schedules included with the recommended schedule transmittal 920. In both cases, the input of the user is distributed to the schedule optimizer 316 via the communication manager 302 with a user input transmittal 922 and 924. In response to receiving the user input with the input transmittal 924, the schedule optimizer 316 applies the user modification to one of the recommended schedules or applies the recommended schedule. The schedule optimizer 316 then returns the selected schedule, the selected schedule corresponding to the time period requested previously by the server 121, to the server 126 via the communication manager 302 for deployment to the server 126 with a selected schedule transmittal 926 and 928.
Thus, certain examples transform patient information, patient outcome data, patient satisfaction data, and employee satisfaction data into resource scheduling information such as a resource schedule calculated to match a set of pre-determined or dynamic constraints and/or goals that results in the reconfiguration of one or more properties of the server 126. For example, as described above, using machine learning, such as the neural network 402, etc., a plurality of parameters, settings, etc., can be developed, monitored, and refined through deployment of various resource schedules and combinations of resources, for example. Using the neural network 402, for example, learning/training and testing can be facilitated before the resource scheduling system is deployed (e.g., in an internal or testing environment using previously acquired outcome score and satisfaction score data), while continued adjustment of parameters occurs “in the field” (e.g., using outcome score and satisfaction score data for schedules and/or resource compositions generated by the neural network 402) after the system has been deployed and activated for use, for example.
Utilizing the training data 1010, the score prediction engine 312 predicts resource composition scores for one or more combinations (e.g., compositions) of resources for which no score data is present (e.g., the composition of resources has not previously occurred) and distributes the resource composition scores to the substitution recommender 318 as predicted scores transmittal 1012. In response to receiving the predicted scores transmittal 1012, the substitution recommender 318 queries the scheduling constraint storer 304 with a constraint request 1014, the constraint request 1014 to request one or more constraints for the time period of the substitution requested by the computing terminal 122, and the scheduling constraint storer 304 returns the requested constraints to the substitution recommender 318 as constraint transmittal 1016. In response to receiving the constraint transmittal 1016, the substitution recommender 318 applies the constraints received from the constraint transmittal 1016 to the score predictions received from the predicted scores transmittal 1012 to determine one or more recommended schedule substitutions (e.g., one or more schedule trades, one or more resources to send home during low census, one or more resources to call in during high census, etc.) and distributes the one or more recommended schedule substitutions to the communication manager 302 as recommended substitution transmittal 1018.
In response to receiving the recommended substitution transmittal 1018, the communication manager 302 distributes the one or more recommended schedule substitutions included with the recommended substitution transmittal 1018 to the wearable 132, for example, as recommended substitution transmittal 1020. A user of the wearable 132 receives the recommended substitution transmittal 1020 and, utilizing an interface of the wearable 132, accepts one of the recommended schedule substitutions or modifies one of the recommended schedule substitutions included in the recommended substitution transmittal 1020. In both cases, the input of the user is distributed to the substitution recommender 318 via the communication manager 302 with a user input transmittal 1022 and 1024. In response to receiving the user input with the input transmittal 1024, the substitution recommender 318 applies the user modification to one of the recommended schedule substitutions or applies the user selected schedule substitutions. The substitution recommender 318 then distributes the schedule substitution to the server 126 via the communication manager 302, the schedule substitution to modify an existing resource schedule in the server 126 with a selected substitution transmittal 1026 and 1028.
Thus, certain examples transform patient information, patient outcome data, patient satisfaction data, employee satisfaction data, and a schedule substitution request into resource scheduling information such as a recommended schedule substitution calculated to match a set of pre-determined and/or dynamic constraints and/or goals that results in the reconfiguration of one or more properties of the server 126. For example, as described above, the system 100 can transform at least one of a notification of a schedule trade request, a low census event, and/or a high census event into a recommended schedule modification (e.g., a recommended shift trade, a recommended resource to send home or call in, etc.). Using machine learning, such as the neural network 402, etc., a plurality of parameters, settings, etc., can be developed, monitored, and refined through deployment of various resource schedules and combinations of resources, for example. Using the neural network 402, for example, learning/training and testing can be facilitated before the resource scheduling system is deployed (e.g., in an internal or testing environment using previously acquired outcome score and satisfaction score data), while continued adjustment of parameters occurs “in the field” (e.g., using outcome score and satisfaction score data for schedules and/or resource compositions generated by the neural network 402) after the system has been deployed and activated for use, for example.
Utilizing the training data 1110, the score prediction engine 312 predicts resource composition scores for one or more combinations (e.g., compositions) of resources for which no score data is present (e.g., the composition of resources has not previously occurred) and distributes the resource composition scores to the resource manager 320 as predicted scores transmittal 1112. In response to receiving the predicted scores transmittal 1112, the resource manager 320 determines one or more top performing resources (e.g., one or more resources with the highest prediction scores) and organizes the top performing resources as a “float pool” (e.g., a pool of resources to substitute and/or shift around based on need) based on the resource composition scores included with the predicted scores transmittal 1112. The resource manager 320 distributes the float pool to the communication manager 302 as a pool transmittal 1114. The communication manager 302 then distributes the float pool to at least one of the wireless device 130 for display to a user and the server 126 for storage and/or modification of a previous float pool stored in the server 126 via pool transmittal 1116 and pool transmittal 1118, respectively.
Thus, certain examples transform patient information, patient outcome data, patient satisfaction data, and employee satisfaction data into resource scheduling information such as a resource float pool calculated to match a set of pre-determined or dynamic constraints and/or goals that results in the reconfiguration of one or more properties of the server 126. For example, as described above, the system 100 can transform a request to update a pool of resources into a modification of the pool of resources. Using machine learning, such as the neural network 402, etc., a plurality of parameters, settings, etc., can be developed, monitored, and refined through deployment of various resource schedules and combinations of resources, for example. Using the neural network 402, for example, learning/training and testing can be facilitated before the resource scheduling system is deployed (e.g., in an internal or testing environment using previously acquired outcome score and satisfaction score data), while continued adjustment of parameters occurs “in the field” (e.g., using outcome score and satisfaction score data for schedules and/or resource compositions generated by the neural network 402) after the system has been deployed and activated for use, for example.
Flowcharts representative of example hardware logic or machine readable instructions for implementing the example resource director 102 of
As mentioned above, the example processes of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. can be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and C.
At block 1206, the resource director 102 receives scheduling event data from at least one of the server 120, the server 121, and the computing terminal 122 of
At block 1208, the resource director 102 receives scheduling constraint data from at least one of the server 121 and the computing terminal 122 of
In response to receiving one or more of the data items described in conjunction with blocks 1202-1208, processing proceeds to block 1210. At block 1210, further described in conjunction with
At block 1212, the resource director 102 determines whether a recommended schedule is requested. In some examples, the schedule can be automatically requested by the server 121. Additionally or alternatively, the schedule can be manually requested by a user (e.g., a manager, an administrator, a department head, etc.). In response to determining the recommended schedule is requested, processing proceeds to block 1214, further described in conjunction with
At block 1216, the resource director 102 determines whether a scheduling event was received. In some examples, the scheduling event can include a scheduling request that an employee input to the computing terminal 122. Additionally or alternatively, scheduling event data can include low census and/or high census data stored in the server 120. Additionally or alternatively, scheduling event data can include employee schedule preferences stored in the server 121. In some examples, the scheduling event may be received at the time of the scheduling event. For example, the server 120 may notify of low census/high census at the time the census event occurs. In another example, an employee may request time off from work. In some examples, the scheduling event can be retrieved from the scheduling event storer 310. In response to determining the scheduling event was received, processing transfers to block 1218, further described in conjunction with
At block 1220, the resource director 102 determines whether resource management data was requested. In some examples, resource management data can be requested by at least one of the server 121 and/or the server 126 in response to an upcoming review period and/or a quantity of time elapsed since the data was previously requested. Additionally or alternatively, resource management data can be manually requested by a user. In response to determining the resource management data was requested, processing transfers to block 1222, further described in conjunction with
At block 1224, the resource director 102 determines whether it is desired to continue monitoring the system 100. In response to determining the monitoring is desired, processing returns to block 1202 of the example program 1200. Conversely, in response to determining monitoring is no longer desired, the example program 1200 of
An example program that can be executed to predict resource composition scores based on received data (
At block 1304, the resource composition generator 404 generates one or more resource compositions for distribution to the resource composition score engine 406, the generated resource compositions different than the resource compositions used to train the neural network 402 at block 1302. In some examples, the resource composition generator 404 adaptively generates the one or more resource compositions based on scores previously generated by the resource composition score engine 406. For example, if a certain doctor continuously appears in the highest performing resource compositions, the resource composition generator 404 can include that certain doctor in a greater portion of the generated resource compositions. In response to distribution of the one or more resource compositions to the resource composition score engine 406, processing proceeds to block 1306.
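For illustration only, the following Python sketch shows one way such adaptive generation could bias sampling toward resources that keep appearing in the highest-scoring compositions. The function and resource names, the random sampling, and the bias factor are assumptions for the sketch, not the disclosed implementation of the resource composition generator 404.

```python
# Illustrative sketch only: bias composition generation toward resources
# (here, doctors) that repeatedly appear in top-scoring compositions, by
# weighting them more heavily in a random sampling pool. All names and
# the bias scheme are hypothetical assumptions.
import random

def generate_compositions(doctors, nurses, top_doctors, n=4, bias=3, seed=0):
    """Sample n nurse/doctor pairings, weighting top-performing doctors
    `bias` times as heavily as the rest of the population."""
    rng = random.Random(seed)  # seeded for reproducibility
    # Duplicate top performers in the sampling pool to raise their odds.
    weighted = doctors + [d for d in doctors if d in top_doctors] * (bias - 1)
    return [(rng.choice(weighted), rng.choice(nurses)) for _ in range(n)]

compositions = generate_compositions(
    ["doc_1", "doc_2"], ["nurse_1", "nurse_2"], top_doctors={"doc_1"}, n=10, seed=1)
```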
At block 1306, the resource composition score engine 406 utilizes the deployed model generated at block 1302 to predict scores for the resource compositions generated by the resource composition generator 404 at block 1304. For example, the scores can be predicted based upon a series of correlation coefficients (e.g., coefficients showing a dependence between two variables, the variables in this example being compositions of resources and outcome and/or satisfaction scores) included with the deployed model. So, for example, if the resource compositions used to train the neural network 402 include a first nurse and doctor pairing and a second nurse and doctor pairing, the resource composition score engine 406 can predict a score for a pairing of the first nurse and the second doctor based upon one or more correlation coefficients. In some examples, the scores can be predicted separately as predicted outcome scores and predicted satisfaction scores.
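As a rough illustration of this kind of cross-pairing prediction, the Python sketch below estimates a score for an unseen nurse/doctor pairing by averaging per-resource contributions inferred from previously scored pairings. The names, numeric values, and the simple averaging are illustrative assumptions, not the disclosed correlation-coefficient model itself.

```python
# Illustrative sketch only: predict a score for an unseen resource pairing
# from per-resource contributions learned on previously scored pairings.
# The averaging scheme stands in for the correlation-coefficient model.

def predict_pair_score(contributions, nurse, doctor):
    """Estimate a composition score by averaging each resource's contribution."""
    return (contributions[nurse] + contributions[doctor]) / 2.0

# Hypothetical contributions inferred from the training pairings.
contributions = {"nurse_1": 88.0, "nurse_2": 74.0,
                 "doctor_1": 92.0, "doctor_2": 80.0}

# Score an unseen composition: the first nurse paired with the second doctor.
score = predict_pair_score(contributions, "nurse_1", "doctor_2")  # 84.0
```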
In such examples, at block 1308, the resource composition score accumulator 408 combines the predicted outcome scores and predicted satisfaction scores into a composite predicted score. In some examples, the combination process further includes applying a weighting factor to at least one of the predicted outcome scores and the predicted satisfaction scores. In some examples, the weighting factors can be generated based on one or more user selected preferences as determined by the preference learning module 322. Additionally or alternatively, the weighting factors can be pre-determined values. In yet other examples, the weighting factors can be dynamically learned and updated by the neural network 402.
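The weighted combination at block 1308 can be pictured with the short sketch below. The fixed weighting factor here is an assumption for illustration; as described above, the factor may instead derive from user preferences via the preference learning module 322 or be learned by the neural network 402.

```python
# Illustrative sketch only: combine separate predicted outcome and
# satisfaction scores into one composite predicted score using a
# weighting factor. The 0.6 default is a hypothetical assumption.

def composite_score(outcome, satisfaction, outcome_weight=0.6):
    """Weighted combination of predicted outcome and satisfaction scores;
    the two weights sum to 1."""
    return outcome_weight * outcome + (1.0 - outcome_weight) * satisfaction

combined = composite_score(outcome=90.0, satisfaction=70.0, outcome_weight=0.6)
# combined is approximately 82.0 (0.6 * 90 + 0.4 * 70)
```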
In response to completion of the combination of scores into the singular predicted scores at block 1308, the resource composition score accumulator 408 can, at block 1310, store the predicted scores in the resource composition score storer 410 and, at block 1312, output the predicted scores to the output generator 314 for use by at least one of the schedule optimizer 316, the substitution recommender 318, and the resource manager 320 for further processing. In response to completion of block 1312, the example program 1210 of
An example program that can be executed to generate a schedule (
Once all applicable constraints are applied, the schedule generator 604 at block 1406 generates one or more resource schedules that satisfy each of the constraints applied at block 1404. Further at block 1406, the schedule generator 604 distributes the generated schedules to the schedule selector 606 and, at block 1408, the schedule selector 606 retrieves prediction scores from the score prediction engine 312.
Using the received schedules and the retrieved prediction scores associated with one or more resource compositions included in the received schedules, the schedule selector 606 is further to, at block 1410, select one of the received schedules based on which of the received schedules includes the highest prediction score for the plurality of resource compositions included in the schedule. In some examples at block 1410, the schedule selector 606 can additionally or alternatively utilize one or more business objectives to determine the selected schedule. In response to selection of the schedule, the schedule selector 606 can, at block 1412, store the selected schedule in the schedule storer 608 and, at block 1414, output the selected schedule to a display. For example, the schedule selector 606 can output the selected schedule to a display associated with the computing terminal 128 of
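One simple way to picture this selection step, offered only as a hypothetical sketch and not the claimed implementation of the schedule selector 606, is to sum the predicted scores of the compositions each candidate schedule contains and keep the maximum:

```python
# Illustrative sketch only: choose the candidate schedule whose resource
# compositions carry the highest total predicted score. Modeling schedules
# as tuples of composition identifiers is a hypothetical assumption.

def select_schedule(schedules, predicted_scores):
    """Return the schedule whose compositions have the greatest summed score."""
    return max(schedules, key=lambda sched: sum(predicted_scores[c] for c in sched))

predicted_scores = {"comp_a": 81.0, "comp_b": 77.0, "comp_c": 90.0}
schedules = [("comp_a", "comp_b"), ("comp_a", "comp_c")]
best = select_schedule(schedules, predicted_scores)  # ("comp_a", "comp_c")
```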
At block 1416, the schedule optimizer 316 determines whether or not the computing terminal 128 received a user input requesting modification of the selected schedule. For example, the user can switch the schedules of one or more employees through the computing terminal 128, thereby modifying one or more resource compositions included in the schedule. Additionally or alternatively, the user can reject each schedule presented on the computing terminal 128 and request a plurality of different schedule options. In response to determining the user did request a modification, processing proceeds to block 1418. Alternatively, in response to determining the user did not request a modification (e.g., the user accepted the selected schedule), the example program 1214 of
At block 1418, triggered in response to a user modification, the schedule selector 606 modifies the selected schedule per the user input and determines a score for the modified schedule based on the resource composition scores previously retrieved from the score prediction engine 312 at block 1408. In some examples, the schedule selector 606 can distribute a warning to the user if the modified schedule reduces the predicted score by a threshold amount. In response to determining the score for the modified schedule, processing proceeds to block 1420 where the schedule selector 606 outputs the modified schedule and an associated score to a display such as the display associated with the computing terminal 128 of
An example program that can be executed to determine a scheduling substitution (
At block 1504, the substitution generator 704 generates one or more schedule substitutions (e.g., a resource to shift, a resource to add, a resource to subtract, etc.) that satisfy each scheduling constraint (e.g., one or more hard constraints, one or more soft constraints, a combination of hard and soft constraints, etc.) received from the scheduling constraint storer 304. Further at block 1504, the substitution generator 704 distributes the generated schedule substitutions to the substitution selector 706 and, at block 1506, the substitution selector 706 retrieves prediction scores from the score prediction engine 312.
Using the received schedule substitutions and the retrieved prediction scores associated with one or more schedule substitutions, the substitution selector 706 is further to, at block 1508, select one of the received schedule substitutions based on which of the schedule substitutions is associated with the greatest prediction score. For example, the substitution selector 706 can add (e.g., call in) the resources with the greatest positive impact on the predicted score, subtract (e.g., send home) the resources with the greatest negative impact on the predicted score, and/or trade the resource that has the greatest positive cumulative impact on each of the two or more schedules modified for the trade. In some examples, at block 1508, the substitution selector 706 can additionally or alternatively utilize one or more business objectives to determine the selected schedule substitution. In response to selection of the schedule substitution, the substitution selector 706 can, at block 1510, output the selected substitution to a display. For example, the substitution selector 706 can output the selected substitution to a display associated with the wearable 132 of
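The impact-based ranking at block 1508 can be illustrated with the hypothetical sketch below, which compares each candidate substitution's predicted schedule score against a baseline. The data shapes and delta computation are assumptions for the sketch, not the disclosed substitution selector 706.

```python
# Illustrative sketch only: rank candidate schedule substitutions by their
# predicted score impact relative to the current (baseline) schedule.
# Candidate names and scores are hypothetical.

def best_substitution(candidates, baseline_score):
    """candidates maps substitution name -> predicted schedule score if applied.
    Return the substitution with the greatest positive impact."""
    return max(candidates, key=lambda s: candidates[s] - baseline_score)

candidates = {"call_in_nurse_2": 86.0,
              "trade_doctor_1": 83.0,
              "send_home_tech_3": 79.0}
choice = best_substitution(candidates, baseline_score=80.0)  # "call_in_nurse_2"
```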
At block 1512, the substitution recommender 318 determines whether or not a user input accepted or rejected the selected substitution. In some examples, the user may be a manager reviewing the score impact of the substitution. Additionally or alternatively, the user may be an employee deciding whether to accept the proposed trade through the trade acceptance interface 200b of
At block 1514, triggered in response to the user rejection, the substitution selector 706 selects a second schedule substitution with a next best prediction score (e.g., the second best resource to call in, the second best resource to send home, the second best resource to trade, etc.) based on the resource composition scores previously retrieved from the score prediction engine 312 at block 1506. In response to selecting the second schedule substitution, processing proceeds to block 1516 where the substitution selector 706 stores the second schedule substitution in the substitution storer 708. Additionally, at block 1518, the substitution selector 706 distributes the modification as a user preference (e.g., a resource removed based on the modification is non-preferred and a resource added based on the modification is preferred) for storage. In response to completion of block 1518, the example program 1218 of
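The fallback at blocks 1514-1518 can be sketched as follows. Selecting the highest-scoring candidate not yet rejected, and recording the rejection as a learned preference, are shown with hypothetical names and bookkeeping; the sketch is not the disclosed implementation.

```python
# Illustrative sketch only: after a rejection, fall back to the next-best
# substitution and note the rejected resource as non-preferred. The
# preference bookkeeping is a hypothetical assumption.

def next_best(candidates, scores, rejected, preferences):
    """Pick the highest-scoring candidate not yet rejected, recording each
    rejection as a non-preferred resource for later preference learning."""
    for r in rejected:
        preferences[r] = "non-preferred"
    remaining = [c for c in candidates if c not in rejected]
    return max(remaining, key=lambda c: scores[c])

scores = {"nurse_1": 90.0, "nurse_2": 85.0, "nurse_3": 80.0}
prefs = {}
fallback = next_best(["nurse_1", "nurse_2", "nurse_3"], scores,
                     rejected=["nurse_1"], preferences=prefs)
# fallback is "nurse_2"; prefs now marks nurse_1 as non-preferred
```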
An example program that can be executed to determine a resource ranking and/or pool (
At block 1604, a filtering technique is applied to the sorted plurality of resources determined at block 1602 by the ranked resource generator 804 to determine a pool of top resources. In some examples, the pool can be determined based on a threshold score (e.g., the pool consists of employees with a score of 90 or higher, etc.). Additionally or alternatively, the pool can be determined based on a threshold portion of a population of employees (e.g., the pool consists of the 25% of employees with the highest score, etc.). In response to completion of the pooling of resources, the ranked resource generator 804 can, at block 1606, store the pool of resources generated at block 1604 in the ranked resource storer 808. Additionally or alternatively at block 1606, the pool of resources can be stored in the server 126.
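The two pooling strategies described above, a score threshold or a top fraction of the ranked population, can be pictured with the following hypothetical sketch. Parameter names and example scores are assumptions; the sketch does not represent the disclosed ranked resource generator 804.

```python
# Illustrative sketch only: build a "float pool" of top resources either by
# a threshold score or by keeping the highest-scoring fraction of the
# population. Employee names and scores are hypothetical.

def float_pool(scores, threshold=None, top_fraction=None):
    """Return the pool of top resources, by threshold score or by the
    highest-scoring fraction of the population."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    if threshold is not None:
        return [r for r in ranked if scores[r] >= threshold]
    count = max(1, int(len(ranked) * top_fraction))
    return ranked[:count]

scores = {"emp_a": 95.0, "emp_b": 91.0, "emp_c": 72.0, "emp_d": 88.0}
pool_by_score = float_pool(scores, threshold=90.0)      # ["emp_a", "emp_b"]
pool_by_share = float_pool(scores, top_fraction=0.25)   # ["emp_a"]
```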
Additionally, at block 1608, the ranked resource generator 804 can provide the pooled resources to the commendation recommender 806. In some examples, resources can be commended (e.g., financial compensation, an award, etc.) as a function of respective resource scores. Additionally or alternatively, resources can be commended based on membership in one or more pools of resources. Upon completion of providing recommendations at block 1608, the example program 1222 of
The processor platform 1700 of the illustrated example includes a processor 1712. The processor 1712 of the illustrated example is hardware. For example, the processor 1712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor can be a semiconductor based (e.g., silicon based) device. In this example, the processor 1712 implements the example resource director 102 which can, in some examples, include or otherwise implement the example communication manager 302, the example score prediction engine 312, and the example output generator 314 which can, in some examples, include or otherwise implement the example schedule optimizer 316, the example substitution recommender 318, the example resource manager 320, and the example preference learning module 322.
The processor 1712 of the illustrated example includes a local memory 1713 (e.g., a cache). The processor 1712 of the illustrated example is in communication with a main memory including a volatile memory 1714 and a non-volatile memory 1716 via a bus 1718. The volatile memory 1714 can be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of random access memory device. The non-volatile memory 1716 can be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1714, 1716 is controlled by a memory controller.
The processor platform 1700 of the illustrated example also includes an interface circuit 1720. The interface circuit 1720 can be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 1722 are connected to the interface circuit 1720. The input device(s) 1722 permit(s) a user to enter data and/or commands into the processor 1712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.
One or more output devices 1724 are also connected to the interface circuit 1720 of the illustrated example. The output devices 1724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuit 1720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.
The interface circuit 1720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
The processor platform 1700 of the illustrated example also includes one or more mass storage devices 1728 for storing software and/or data. Examples of such mass storage devices 1728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
The machine executable instructions 1732 of
From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that facilitate the analysis of large data flows (including data from multiple servers and/or multiple user devices) available in a hospital system to streamline the scheduling/management of resources (e.g., doctors, nurses, technicians, equipment, hospital rooms, etc.) available in a hospital. This improvement can lead to an improvement in patient and employee satisfaction as well as an improvement in patient outcomes. The examples disclosed herein can, for example, utilize the large data flows to train a neural network which generates a prediction score model which can, in some examples, include one or more correlation coefficients. The model can be used, in some examples, to predict outcome scores for resource compositions which can in turn be used in conjunction with constraints applied to the compositions to generate schedules that a) satisfy the many scheduling constraints that exist in a hospital setting and b) show the highest predicted outcome and/or satisfaction scores. In other examples, the model can be used to facilitate dynamic and real-time schedule resource substitutions (e.g., an employee trade) that have a minimal negative impact (and in some cases, a positive impact) on predicted outcomes. In yet other examples, the model can be used to predict which resources (e.g., which nurses, which doctors, etc.) have the greatest impact on the hospital system and identify said resources in a “float pool.” Thus, certain examples improve schedule processing, feedback incorporation, and processor operation to drive configuration, adjustment, and processing of hospital systems.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.