As the need for healthcare rises, the time spent by caregivers in patient care becomes even more valuable. Caregivers are continually asked to become more efficient in providing that care. This can include requirements to see additional patients in a given amount of time. The result is additional pressure on the caregivers, as the healthcare system already works at a perceived high level of efficiency.
In general terms, the present disclosure relates to the use of audio and/or video by caregivers to increase efficiencies in patient care. Various aspects are described in this disclosure, which include, but are not limited to, the following aspects.
In one aspect, an example method for delivery of patient information through audio and/or video can include: capturing audio and/or video from a caregiver; receiving identification of a patient from the caregiver; receiving authorization to deliver the audio and/or video in association with providing care for the patient; and delivering the audio and/or the video.
In another aspect, an example method for initiating a workflow for a patient can include: receiving a trigger event; upon receiving the trigger event, monitoring for a command from a caregiver of the patient; and upon receiving the command, initiating the workflow associated with the command.
In yet another aspect, an example method for conducting a video conference associated with care of a patient can include: initiating the video conference on a first device; identifying at least one face associated with a caregiver on the video conference; receiving a trigger to transfer the video conference to a second device; authenticating the caregiver on the second device using the at least one face; and automatically transferring the video conference from the first device to the second device.
In another aspect, an example method for optimizing a video conference between a caregiver and a patient can include: initiating the video conference between the caregiver and the patient; determining that an aspect of the video conference needs to be optimized; and performing optimization of the aspect of the video conference.
In yet another aspect, an example method of estimating an amount of time before a resource is allocated for a video call between a caregiver and a patient can include: receiving a request for the resource associated with the video call; calculating an estimated wait time for the resource to be available for the video call; and presenting the estimated wait time to one or more of the caregiver and the patient.
The following drawing figures, which form a part of this application, are illustrative of the described technology and are not meant to limit the scope of the disclosure in any manner.
The present disclosure relates to the use of audio and/or video by caregivers to increase efficiencies in patient care. In general terms, audio and/or video is captured from a caregiver, and that audio and/or video is used to create greater efficiencies as the caregiver provides care to patients. Many different examples are provided below.
As shown in
In some examples, the remote caregivers 16 are medical specialists such as an intensivist, a neurologist, a cardiologist, a psychologist, and the like. In some further examples, a remote caregiver 16 is an interpreter/translator, or other kind of provider.
In certain examples, the virtual care management application 110 is installed on the devices 102, 104, 106, 108. Alternatively, the virtual care management application 110 can be a web-based or cloud-based application that is accessible on the devices 102, 104, 106, 108.
The virtual care management application 110 enables the caregiver 12 to provide acute care for the patient 14 by allowing the caregiver 12 to connect and consult with a remote caregiver 16 who is not physically located in the clinical care environment 10. Advantages for the patient 14 can include reducing the need to transfer the patient 14 to another clinical care environment or location, and minimizing patient deterioration through faster clinical intervention. Advantages for the caregiver 12 can include receiving mentorship and assistance with documentation and cosigning of medication administration. Advantages for the remote caregiver 16 can include allowing the remote caregiver 16 to cover more patients over a wider geographical area while working from a single, convenient location.
As shown in
The secondary device 104 can be a workstation such as a tablet computer, or a display monitor attached to a mobile stand that can be carted around the clinical care environment 10. The secondary device 104 can be shared with other caregivers in the clinical care environment 10. In some examples, the secondary device 104 can be a smart TV located in the patient's room that is configured to access the virtual care management application 110.
The primary and secondary devices 102, 104 are interchangeable with one another. For example, in some alternative examples the secondary device 104 can be a smartphone carried by the caregiver 12, and the primary device 102 can be a workstation such as a tablet computer, a display monitor attached to a mobile stand, or a smart TV.
The remote caregivers 16 can similarly use both a primary device 106 and a secondary device 108 that can each access the virtual care management application 110. In the example illustrated in the figures, the primary device 106 of the remote caregiver 16 is a laptop, a tablet computer, or a desktop computer, and the secondary device 108 is a smartphone. The primary and secondary devices 106, 108 are interchangeable such that in some examples the secondary device 108 can be a laptop, a tablet computer, or a desktop computer, and the primary device 106 is a smartphone that the remote care provider carries with them.
The consultations between the caregiver 12 and the remote caregivers 16 are managed across a communications network 20. As shown in the example of
A request from the caregiver 12 will go out to all remote caregivers 16 who have chosen to receive notifications for the request type and who are part of the health care system of the clinical care environment 10. Advantageously, the consultations between the caregiver 12 and the remote caregivers 16 are guided by the virtual care management application 110 to relieve the caregiver 12 of the burden of reaching out to multiple care providers for a consultation. Instead, a request from the caregiver is sent to a plurality of remote care providers, and the remote care provider who accepts first is connected to the caregiver who sent the request. This is achieved through a combination of routing logic with a user-activated interface. Advantageously, the virtual care management application 110 combines patient contextual data in a single application with communications and task management platforms.
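The first-accept routing described above can be sketched as follows. This is a minimal illustration only; the caregiver names, request types, and data structures are hypothetical and do not appear in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class RemoteCaregiver:
    name: str
    # Request types this remote caregiver has opted in to receive.
    subscribed_types: set = field(default_factory=set)

def route_request(request_type, remote_caregivers, acceptances):
    """Fan a request out to all subscribed remote caregivers; the first
    caregiver (in acceptance order) who accepts is connected to the
    requesting caregiver."""
    notified = [c for c in remote_caregivers if request_type in c.subscribed_types]
    for name in acceptances:                      # acceptances arrive in time order
        if any(c.name == name for c in notified):
            return name                           # first acceptor gets the consult
    return None                                   # no one accepted

caregivers = [
    RemoteCaregiver("Dr. A", {"neurology"}),
    RemoteCaregiver("Dr. B", {"neurology", "cardiology"}),
    RemoteCaregiver("Dr. C", {"cardiology"}),
]
# Dr. C responds first but is not subscribed to neurology requests,
# so the request is matched to the first subscribed acceptor.
winner = route_request("neurology", caregivers, ["Dr. C", "Dr. B", "Dr. A"])
print(winner)  # Dr. B
```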
Additionally, the virtual care management application 110 enables the remote caregivers 16 to cover multiple facilities within the health care system. Also, the virtual care management application 110 enables the remote caregivers 16 to select and change the types of notifications, request types, and facilities or units for which they will receive notifications and virtual care requests on their devices.
Additional details regarding the system 100 can be found in U.S. Patent Application No. 63/166,382 filed on Mar. 26, 2021, the entirety of which is hereby incorporated by reference.
As described further in the examples provided below, one or more of the devices 102, 104, 106, 108 can be used to capture audio and/or video from the caregiver 12 and/or the remote caregivers 16 to enhance the delivery of patient care.
For example, referring now to
In an alternative embodiment, the caregiver can be authenticated, at least in part, using a Real Time Locating System (RTLS). The RTLS can be used to locate and/or identify the caregiver. One non-limiting example of such an RTLS is described in U.S. patent application Ser. No. 17/111,075 filed on Dec. 3, 2020, the entirety of which is hereby incorporated by reference.
Next, an operation 204 requires the identification of the patient to whom the audio and/or video is directed. This can include a manual selection of a patient (e.g., by patient name or number) and/or automated selection of the patient based upon context (e.g., location or current assignment). As previously noted, an RTLS can also be used to locate and/or authenticate the patient.
Next, at operation 206 audio and/or video is captured from the caregiver, and at operation 208 the captured audio and/or video is used in patient care. The patient care can include many different aspects of patient care, including communication between the caregiver and other caregivers and/or the patient, workflow implementations, and the like. Each of the operations 206 and 208 will be described in more detail with respect to the various embodiments described below.
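The sequence of operations described above can be sketched as a simple pipeline. The callables and return values below are illustrative placeholders, not part of the disclosure.

```python
def deliver_patient_information(authenticate, identify_patient, capture, use_in_care):
    """Sketch of the method: each argument is a callable standing in for one
    operation (202 authenticate caregiver, 204 identify patient, 206 capture
    audio/video, 208 use in patient care). The method stops early if
    authentication or patient identification fails."""
    caregiver = authenticate()                  # operation 202
    if caregiver is None:
        return "authentication failed"
    patient = identify_patient(caregiver)       # operation 204 (manual or RTLS-based)
    if patient is None:
        return "no patient identified"
    media = capture(caregiver)                  # operation 206
    return use_in_care(patient, media)          # operation 208

result = deliver_patient_information(
    authenticate=lambda: "caregiver-12",
    identify_patient=lambda c: "patient-14",
    capture=lambda c: "handoff-video.mp4",
    use_in_care=lambda p, m: f"delivered {m} for {p}",
)
print(result)  # delivered handoff-video.mp4 for patient-14
```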
Upon signing in, the primary device 102 of the caregiver 12 is configured to capture audio and/or video from the caregiver. As is typical in mobile devices, the primary device 102 can include at least one microphone to capture the audio from the caregiver 12 and at least one camera to capture photographs and/or video from the caregiver 12.
In some examples provided herein, the audio and/or video from the caregiver 12 is captured and recorded. In other examples, the audio and/or video from the caregiver 12 is captured and delivered to another, such as the patient 14, to allow for a two-way communication between the caregiver 12 and the patient 14. Many of these configurations are described below.
Referring now to
In this example, the primary device 102 of the caregiver 12 is used to capture video from the caregiver 12 about that transition in care. As shown in
Referring now to
Referring now to
Next, at operation 606 the primary device 102 receives authorization to deliver the video, and at operation 608 the video is delivered.
In the example above, the video can be delivered to the patient 14 to provide the patient 14 with information about the care of the patient 14 during transition from the caregiver 12 to a subsequent caregiver. In other examples, the video can be delivered to the next caregiver and/or the patient of the caregiver.
In some examples, the primary device 102 can receive instructions from the caregiver 12 for routing and delivery of the video. For instance, the caregiver 12 can record a single video to be delivered to both the patient 14 and the next caregiver or record different videos for delivery to each.
In some examples, the virtual care management application 110 can automate the routing and delivery of the video to the appropriate parties.
For example, the virtual care management application 110 can be programmed to automate the delivery of the video to a chatroom associated with the patient. Additional details on these chatrooms are provided in U.S. patent application Ser. No. 17/453,273 filed on Nov. 2, 2021, the entirety of which is hereby incorporated by reference.
Additional details regarding delivery of messages, including the audio and/or video described herein, to patient families is provided in U.S. Patent Application No. 63/163,468 filed on Mar. 19, 2021, the entirety of which is hereby incorporated by reference.
Additional details regarding delivery of care instructions including the audio and/or video described herein, across different aspects of patient care within the system 100 (as well as possibly within the home of the patient) are provided in U.S. Patent Application No. 63/362,250 (Attorney Docket 14256.0060USP1) filed on Mar. 31, 2022, the entirety of which is hereby incorporated by reference.
There are various other aspects that can be associated with the capture of the audio and/or video from the caregiver. For instance, the audio and/or video can be transcribed to create a text version. In other examples, the audio can automatically be translated, especially if the patient 14 or the family speaks a different language. This can again be done in text or audio formats. Finally, the audio and/or video can be used for documentation purposes and captured in, for example, the Electronic Medical Record (EMR) associated with the patient.
In other examples, a prompt can automatically be provided to the caregiver 12 at desired intervals to capture the audio and/or video. For instance, when the caregiver 12 is getting ready to end a shift, the virtual care management application 110 can be programmed to automatically prompt the caregiver 12 to capture audio and/or video associated with the handoff. Similarly, when the caregiver 12 provides discharge instructions, the virtual care management application 110 can be programmed to automatically capture audio and/or video from the caregiver 12 associated with the discharge.
The delivery of the video can enhance the system 100 by allowing the caregiver 12 to deliver the information more efficiently to the various parties. For instance, the caregiver 12 may not be located in an area where the caregiver 12 can easily access the next caregiver or the patient 14, so delivering the video to the next caregiver or patient 14 is more efficient because the caregiver 12 does not need to locate the next caregiver or the patient 14. Further, the caregiver 12 can record and deliver multiple videos quickly, thereby allowing the caregiver 12 to deliver the required information more efficiently than having to walk around the care facility to greet each caregiver and patient individually. This can help to reduce the inefficiencies associated with the exchange of information and errors associated therewith. Other advantages are possible.
In addition to capturing audio and/or video for delivery to others, the system 100 can capture audio and/or video to initiate or modify existing workflows associated with the care of the patient 14. For instance, referring now to
In the examples provided herein, a workflow is one or more actions associated with the care of the patient 14. Examples of such workflows include prescribing a drug for a patient, initiating a ventilator for a patient, a consult (in-person or virtual), etc.
In these examples, a workflow can be initiated or modified based upon audio and/or video captured from the caregiver 12. For instance, referring to
Once a trigger is received, the primary device 102 of the caregiver 12 monitors or otherwise waits for and receives a command from the caregiver 12 at operation 704. The command can be verbalized by the caregiver 12, for instance: “Initiate ventilation”. In other scenarios, the command can be received through other methods, such as from a gesture by the caregiver 12.
Finally, at operation 706, the primary device 102 of the caregiver 12 initiates or modifies a given workflow based upon the command from the caregiver 12. For example, in the instance of the command “Initiate ventilation”, the primary device 102 can implement a ventilation workflow that gathers the necessary resources to ventilate the patient 14, including the ventilator, personnel to deliver and initiate the ventilation, and any other requirements for the ventilation workflow. Further, context associated with issuance of the command can be used.
For instance, the primary device 102 can be location-aware (e.g., Real-Time Locating Systems (RTLS)), so that when the caregiver 12 issues a command in a particular room, the workflow is initiated by the primary device 102 for the patient associated with that room. One example of a system using such an RTLS is disclosed in U.S. patent application Ser. No. 17/111,075 filed on Dec. 3, 2020, the entirety of which is hereby incorporated by reference.
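The trigger, command, and location-aware workflow initiation described in operations 704-706 can be sketched as follows. The room assignments, command phrases, and workflow actions are hypothetical examples, not taken from the disclosure.

```python
# Hypothetical room-to-patient mapping, as might be provided by an RTLS.
ROOM_ASSIGNMENTS = {"room-301": "patient-14"}

# Illustrative command-to-workflow table.
WORKFLOWS = {
    "initiate ventilation": lambda patient: f"ventilation workflow started for {patient}",
    "request consult":      lambda patient: f"consult requested for {patient}",
}

def handle_command(trigger_received, command, caregiver_room):
    """After a trigger (e.g., a wake phrase or gesture), match the command
    to a workflow and run it for the patient associated with the room the
    caregiver is in."""
    if not trigger_received:
        return None                             # device is not listening yet
    patient = ROOM_ASSIGNMENTS.get(caregiver_room)
    workflow = WORKFLOWS.get(command.lower())
    if patient is None or workflow is None:
        return None
    return workflow(patient)

print(handle_command(True, "Initiate ventilation", "room-301"))
# ventilation workflow started for patient-14
```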
Referring now to
The actions can be configurable and put together like building blocks to create the desired workflows. For instance, the workflows can be nested (see, e.g., workflow 804) and put together from existing actions to assist in their creation. In some examples, the workflows can be defined by the caregiver 12 and/or include pre-defined workflows defined for the system 100. Further, the caregiver 12 can use the interface 800 to modify the workflows as desired. Many configurations are possible.
Referring now to
For instance, if a workflow requires a specialty consult by the remote caregiver 16, the workflow can automatically initiate a call to the third party resource 902. The third party resource 902 can, in turn, manage connection of the caregiver 12 to the remote caregiver 16 at the clinical care environment 10 for a virtual consult. Many other configurations are possible.
In some examples, the workflows can provide updates to the various records associated with the patient 14. For instance, with the example relating to the ventilator, the workflow can update the chatroom associated with the patient 14 to indicate that ventilation has been ordered and also provide updates as the ventilator is delivered and initiated. The workflow can further highlight certain aspects of the entries in the chatroom that may be important or otherwise require action by the caregiver 12.
Although the examples provided discuss the initiation of a workflow, the examples can also be used to modify a workflow or stop a workflow. For instance, the trigger can be used to modify an existing workflow or provide input for the workflow. For example, if a workflow requires a particular parameter to execute, the workflow can receive that parameter from the caregiver through further input from the caregiver. Similarly, the input from the caregiver can be received to stop a workflow or substitute one workflow for another. Many configurations are possible.
As noted, video can also be captured along with or in place of audio to initiate or modify workflows. For instance, gestures rather than audio input can be received from the caregiver to initiate a particular workflow.
Referring now to
The meeting room screen 1000 can further include a window 1004 that displays a live video feed of the caregiver 12 acquired from the camera of the primary device 102. In such examples, the meeting room screen 1000 can provide a two-way video conference between the caregiver 12 and patient 14. The meeting room screen 1000 can include a video camera icon 1006 that the caregiver can select to turn on and off the camera of the primary device 102, and thereby allow or block the live video feed of the caregiver 12. The meeting room screen 1000 can also include a microphone icon 1008 that the caregiver 12 can select to turn off and on the microphone of the primary device 102, and thereby mute and unmute the caregiver 12. The meeting room screen 1000 can also include a hang up icon 1010 that the caregiver 12 can select to terminate the video conference with the patient 14.
To initiate such a conference, the caregiver 12 can be authenticated on the primary device 102 (e.g., through a password, biometrics, FOB/scanner, etc.). Upon authentication, the caregiver 12 can initiate the video conference with the patient 14 by selecting the patient from a list, selecting a specific room, etc. The patient 14 can communicate with a device located in the room of the patient 14 or possibly a personal device of the patient 14.
When this conference is happening between the caregiver 12 and the patient 14, the caregiver 12 and patient can discuss any desired topics, such as the care of the patient, changes in that care, etc. As the discussion is occurring, the caregiver 12 may wish to change the device used to conduct the conference.
For instance, the caregiver 12 may initiate the conference on the primary device 102 while the caregiver 12 is moving. The caregiver 12 may then reach a place where the caregiver 12 has another device that may be more conducive or easier to use, such as the secondary device 104. The secondary device 104 can be a display monitor attached to a mobile stand that can be carted around the clinical care environment 10. Upon reaching the secondary device 104, the video conference with the patient 14 can automatically be transferred to the secondary device 104 from the primary device 102 to allow the caregiver 12 more flexibility, such as not having to hold the primary device 102.
More specifically,
Next, at operation 1106, either the first device or a second device (e.g., the secondary device 104) receives a trigger to transfer the video conference to the second device. This trigger can be manual, such as through a request received from the caregiver 12 on the first device or the second device. The trigger can be automated, in that a prompt (e.g., a toast or other notification) is presented on the first device when the first device is within a specific distance of the second device, such as a few feet. In yet another example, the trigger can simply be entering the field of view of another camera, such as the camera on the secondary device 104.
In any event, when the transfer is initiated, the caregiver 12 is authenticated on the second device at operation 1108. In some examples, this authentication can happen automatically, such as by recognizing the face of the caregiver 12 on the second device using facial recognition. For instance, the caregiver 12 can simply present his or her face to the camera of the second device, and the second device can use the face to authenticate the caregiver 12. In one example, the first device uses facial recognition to identify the face or faces in the field of view of the first device. Upon one or more of those faces being identified in the field of view of a camera on the second device, the second device can automatically authenticate the face.
Finally, at operation 1110, the video conference is transferred to the second device upon authentication.
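The face-matching transfer in operations 1106-1110 can be sketched as a set intersection. Faces are modeled here as opaque identifiers for illustration; a real system would compare face embeddings, not strings.

```python
def transfer_conference(first_device_faces, second_device_faces):
    """Sketch of operations 1106-1110: when a face identified on the first
    device also appears in the second device's camera view, the caregiver
    is authenticated automatically and the conference is transferred."""
    matches = first_device_faces & second_device_faces
    if matches:
        return f"transferred; authenticated {sorted(matches)[0]}"
    return "authentication failed; conference stays on first device"

known_faces = {"caregiver-12"}                 # identified at operation 1104
print(transfer_conference(known_faces, {"caregiver-12", "visitor"}))
# transferred; authenticated caregiver-12
```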
A similar transition can occur should the caregiver 12 enter the room of the patient 14 while a video conference is occurring. For example, the caregiver 12 can initiate a video conference with the patient 14 as the caregiver 12 is enroute to the room of the patient 14. This allows the caregiver 12 to begin conveying information to the patient 14 even before the caregiver 12 arrives physically in the room, thereby increasing efficiency.
When the caregiver 12 reaches a close proximity to the patient 14 (e.g., 20 feet, 10 feet, 5 feet, or enters the room of the patient 14), the primary device 102 can be programmed to automatically end the video conference, since the caregiver 12 is now in physical proximity to the patient 14 and the video conference is no longer needed. For example, the primary device 102 can use location information (e.g., GPS, RTLS) or other data (e.g., RFID beacons) to determine that the caregiver 12 has entered the room of the patient 14 and automatically end the video conference.
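The proximity-based automatic ending of the conference can be sketched as a simple threshold check. The 10-foot default below is illustrative; the disclosure mentions several candidate distances (e.g., 20 feet, 10 feet, 5 feet) as well as entry into the patient's room.

```python
def should_end_conference(distance_ft, in_room, threshold_ft=10.0):
    """End the video conference automatically once the caregiver is close
    enough to the patient (per location data such as GPS, RTLS, or RFID
    beacons) that in-person communication can take over."""
    return in_room or distance_ft <= threshold_ft

print(should_end_conference(25.0, in_room=False))  # False: still en route
print(should_end_conference(8.0, in_room=False))   # True: within threshold
print(should_end_conference(30.0, in_room=True))   # True: entered the room
```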
The transitions can occur for multiple providers when the video conference involves more than two individuals. These examples help to automate the transitions associated with video conferencing between the caregiver 12 and the patient 14. Ideally, the transitions become less intrusive to both and provide a seamless ability for communication. Many other configurations are possible.
Referring now to
More specifically, the caregiver 12 can use a device, such as the primary device 102 and/or the secondary device 104, to conduct a video conference with the patient (or patients) 14, as described previously. In such a scenario, a display 1200 provides the video feed, and one or more microphones and speakers of the secondary device 104 allow the caregiver 12 to communicate with the patient 14.
During the video conference, the secondary device 104 or a server 1202 facilitating the video conference can be programmed to optimize the communication between the caregiver 12 and the patient 14. For instance, the server 1202 can automatically analyze the audio and/or video associated with the video conference and make recommendations or reconfigurations to optimize the video conference.
In this example, the server 1202 analyzes the speech of the caregiver 12 and makes recommendations to optimize the likelihood that the patient 14 can understand the caregiver. For instance, the server 1202 creates a pop-up window 1204 that provides recommendations to the caregiver 12, such as to slow the speed of their speech and better enunciate their spoken words. These recommendations can be created based upon an analysis of the audio feed from the secondary device 104 of the caregiver 12.
Further, the server 1202 can analyze the conditions for the patient 14 and provide recommendations and/or optimizations to the caregiver 12 and/or the patient 14. For instance, the server 1202 can generate a window 1206 that provides metrics associated with the video conference between the caregiver 12 and the patient 14, such as whether the patient is muted, the speaking rate, the volume, background noise, screen presence, and speech clarity.
If there are issues with any of the metrics, the server 1202 can provide recommendations to fix the possible issue and/or automatically do so. For instance, if the speech rate is too fast, the server 1202 can generate the pop-up window 1204 described above. Additional examples can include, without limitation, the following:
Further, if the server 1202 senses that the patient 14 is trying to talk (either through audio and/or video analysis showing the lips of the patient 14 moving) but is on mute, the server 1202 can indicate such to the caregiver 12 and/or the patient 14 or simply automatically unmute the patient 14. In addition, if the background noise increases, the sound from the speakers for the caregiver 12 and/or the patient 14 can be increased (and/or noise cancelation can be turned on or off). Further, if the face of the caregiver 12 and/or the patient 14 is not centered in the camera view, the server 1202 (or local device) can recenter the image as necessary to optimize the view. Many other configurations are possible.
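The metric-driven recommendations and automatic fixes described above can be sketched as a rules table. The metric names and threshold values are illustrative assumptions, not values from the disclosure.

```python
def conference_recommendations(metrics):
    """Map measured conference metrics to recommendations or automatic
    fixes, in the spirit of windows 1204/1206. Thresholds are illustrative."""
    actions = []
    if metrics.get("speaking_rate_wpm", 0) > 160:
        actions.append("recommend: slow down and enunciate")        # pop-up 1204
    if metrics.get("patient_muted") and metrics.get("patient_lips_moving"):
        actions.append("auto-fix: unmute patient")
    if metrics.get("background_noise_db", 0) > 60:
        actions.append("auto-fix: raise volume / toggle noise cancelation")
    if not metrics.get("face_centered", True):
        actions.append("auto-fix: recenter camera view")
    return actions

print(conference_recommendations({
    "speaking_rate_wpm": 180,
    "patient_muted": True,
    "patient_lips_moving": True,
    "background_noise_db": 45,
    "face_centered": True,
}))
# ['recommend: slow down and enunciate', 'auto-fix: unmute patient']
```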
In addition, the server 1202 can be programmed to optimize the language used for communication between the caregiver 12 and the patient 14. For instance, a language preference can be captured at the beginning of the video conference or language can be automatically detected during conversation on the video conference.
If the server 1202 identifies a disconnect between the language of the caregiver 12 and the patient 14, the server 1202 can either provide automatic translation of the language or request an interpreter.
If auto-translation is provided, once the languages are identified, voice and text can be automatically translated into the appropriate language using, for instance, artificial intelligence. The caregiver 12 and/or the patient 14 can indicate gaps in understanding (or translation issues) via a button 1208 on the display 1200. This will enable additional training of the algorithm as needed. Audio transcripts can also be sent to native-speaking auditors to ensure all of the details of the encounter are understood and properly translated.
If an interpreter is needed, the server 1202 can automatically request the interpreter and facilitate the conference of the interpreter with the existing video call between the caregiver 12 and the patient 14.
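The language-mismatch handling described above reduces to a small decision: translate automatically when possible, otherwise request an interpreter. The sketch below is illustrative; the language codes and availability flag are assumptions.

```python
def handle_language_mismatch(caregiver_lang, patient_lang, auto_translate_available):
    """When the declared or detected languages differ, either enable
    automatic translation or request a human interpreter."""
    if caregiver_lang == patient_lang:
        return "no action needed"
    if auto_translate_available:
        return f"auto-translate {caregiver_lang} <-> {patient_lang}"
    return "request interpreter"

print(handle_language_mismatch("en", "es", True))   # auto-translate en <-> es
print(handle_language_mismatch("en", "es", False))  # request interpreter
```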
As resources such as the interpreter are requested, it can be desirable to provide an indication to the caregiver 12 and/or the patient 14 regarding the availability of those resources. Referring now to
In this example, when an interpreter is requested, the server 1202 provides a pop-up window 1302 indicating that the resource has been requested and an estimated time for the resource to be available. In this instance, the server 1202 estimates that the interpreter will be available in 15 minutes. Although this example includes the interpreter as the resource, many other types of resources can be requested. For instance, a remote specialist in a particular area of medicine is an example of another type of resource that can be requested.
In order to provide the estimate, the server 1202 uses artificial intelligence, such as machine learning, to develop an algorithm to estimate when resources are likely to be available. The algorithm can look at one or more of the following when determining likely response times:
These are just some examples of the types of inputs that can be provided. The algorithm, as developed, looks at the request for the interpreter and provides an estimate of the amount of time until the interpreter is available.
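A simple historical-average estimator can stand in for the machine-learned model described above. The inputs, weights, and rounding below are illustrative assumptions, not the disclosed algorithm.

```python
def estimate_wait_minutes(past_waits, available_now, pending_requests):
    """Estimate minutes until a requested resource (e.g., an interpreter)
    is available, from recent wait-time history and current queue depth."""
    if available_now:
        return 0.0
    # Fall back to a nominal 15-minute wait when no history exists.
    base = sum(past_waits) / len(past_waits) if past_waits else 15.0
    return round(base * (1 + pending_requests), 1)   # deeper queue, longer wait

print(estimate_wait_minutes([10, 12, 14], available_now=False, pending_requests=0))
# 12.0
print(estimate_wait_minutes([10, 12, 14], available_now=True, pending_requests=3))
# 0.0
```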
A link is also provided in the window 1302 should the delay be excessive or otherwise unacceptable. If so, the caregiver 12 or the patient 14 can select the link to escalate the request. For instance, when a resource is requested, the initial request can be indicated as low priority (or escalated immediately based upon the context, such as type of resource, patient condition, etc.). Should the amount of time to wait for the resource be excessive, accessing the link will allow the caregiver 12 to raise the priority level of the request for the resource. This will escalate the request and allow the resource to be allocated more quickly. Many other configurations are possible.
The device 102, 104, 106, 108 can also include a mass storage device 1414 that is able to store software instructions and data. The mass storage device 1414 is connected to the processing unit 1402 through a mass storage controller (not shown) connected to the system bus 1420. The mass storage device 1414 and its associated computer-readable data storage media provide non-volatile, non-transitory storage for the device 102, 104, 106, 108.
Although the description of computer-readable data storage media contained herein refers to a mass storage device, it should be appreciated by those skilled in the art that computer-readable data storage media can be any available non-transitory, physical device or article of manufacture from which the device can read data and/or instructions. In certain embodiments, the computer-readable storage media comprises entirely non-transitory media. The mass storage device 1414 is an example of a computer-readable storage device.
Computer-readable data storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable software instructions, data structures, program modules or other data. Example types of computer-readable data storage media include, but are not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, or any other medium which can be used to store information, and which can be accessed by the device.
The device 102, 104, 106, 108 operates in a networked environment using logical connections to devices through the communications network 20. The device 102, 104, 106, 108 connects to the communications network 20 through a network interface unit 1404 connected to the system bus 1420. The network interface unit 1404 can also connect to additional types of communications networks and devices, including through Bluetooth, Wi-Fi, and cellular.
The network interface unit 1404 may also connect the device 102, 104, 106, 108 to additional networks, systems, and devices such as a digital health gateway, electronic medical record (EMR) system, vital signs monitoring devices, and clinical resource centers.
The device 102, 104, 106, 108 can also include an input/output unit 1406 for receiving and processing inputs and outputs from a number of peripheral devices. Examples of peripheral devices may include, without limitation, a camera 1422, a touchscreen 1424, speakers 1426, a microphone 1428, and similar devices used for voice and video communications.
The mass storage device 1414 and the RAM 1410 can store software instructions and data. The software instructions can include an operating system 1418 suitable for controlling the operation of the device 102, 104, 106, 108. The mass storage device 1414 and/or the RAM 1410 also store software instructions 1416 that, when executed by the processing unit 1402, cause the device to provide the functionality of the device 102, 104, 106, 108 discussed herein.
The various embodiments described above are provided by way of illustration only and should not be construed to be limiting in any way. Various modifications can be made to the embodiments described above without departing from the true spirit and scope of the disclosure.
Related Application: U.S. Provisional Application No. 63/269,802, filed Mar. 2022 (US).