The present disclosure relates to producing documentation relating to healthcare services and more particularly to efficiently producing documentation from information spoken by caregivers in a healthcare facility.
In a typical healthcare facility (e.g., a hospital), caregivers (e.g., nurses, doctors, etc.) provide services under a variety of pressures, including the need to provide prompt and timely care to many patients during a limited time frame and the need to provide customized care that takes into account information developed about a given patient, such as from previous visits to the patient’s room (e.g., on hospital rounds) or medical procedures (e.g., surgery) that may have been performed on the patient. However, given the fast-paced nature of providing healthcare services in a healthcare facility, it is difficult for a caregiver to fully document information learned about a patient during a recent interaction before moving on to another patient.
In the context of an operating room, a team of caregivers, such as surgeons, nurses, and anesthesiologists, cooperate in a carefully coordinated manner to perform a complex medical procedure that may involve many pre- and post-operative steps and the use of high-tech medical equipment. As such, the full attention of the caregivers is focused on performing the procedure. Accordingly, information that may have been developed during the course of the procedure, such as observations about the condition of the patient during the procedure, information about the settings of medical equipment used at different stages in the procedure, and actions that should be performed post-operatively, may not be accurately or completely retained by the caregivers after the operation is complete (i.e., when the information would typically be documented in an operation note). As a consequence, when another caregiver subsequently tends to the patient, that caregiver may not have access to all of the information that was developed from a previous interaction with the patient, as some portion of the information may not have been documented.
The present application discloses one or more of the features recited in the appended claims and/or the following features which, alone or in any combination, may comprise patentable subject matter:
According to an aspect of the present disclosure, a compute device may include circuitry configured to obtain, from a caregiver, voice data indicative of spoken information pertaining to a patient. The compute device may obtain the voice data in response to a determination that the caregiver is located in a room with the patient in a healthcare facility (e.g., based on information from a real time location tracking system). The circuitry may additionally be configured to produce, from the obtained voice data, textual data indicative of the spoken information. Further, the circuitry may be configured to provide the textual data to another device for storage or presentation. The caregiver may be associated with a first shift and, in some embodiments, the circuitry may be configured to determine that a change from the first shift to a second shift has occurred, determine that a second caregiver associated with the second shift is assigned to the patient, and provide, to the second caregiver and in response to the determination that the shift change has occurred, a notification of the textual data. The circuitry, in some embodiments, may be configured to determine that a second caregiver has entered a room associated with the patient, and provide, to the second caregiver and in response to the determination that the second caregiver has entered the room, a notification of the textual data.
In some embodiments, the circuitry of the compute device may be configured such that providing the notification includes providing the notification to a mobile compute device carried by the second caregiver. The circuitry, in some embodiments, may be configured to prompt the second caregiver to acknowledge that the textual data has been reviewed. Additionally or alternatively, the circuitry of the compute device may be configured to determine whether the second caregiver has reviewed the textual data within a predefined time period and provide, in response to a determination that the second caregiver has not reviewed the textual data within the predefined time period, a reminder to the second caregiver to review the textual data. In some embodiments, the circuitry of the compute device may be configured to determine an identity of the patient based on patient designation data provided by the caregiver or based on a determination that the caregiver is located in a room assigned to the patient. Additionally or alternatively, the circuitry may be configured to provide the textual data to a bedside display device.
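While the disclosure does not prescribe a particular implementation, the shift-change notification, acknowledgement, and reminder behavior described above can be illustrated with a short sketch. The following Python is a minimal, hypothetical example; the data structures, identifiers, and the 30-minute review window are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, Optional, Set

REVIEW_WINDOW = timedelta(minutes=30)  # assumed "predefined time period"

@dataclass
class Note:
    patient_id: str
    text: str
    reviewed_by: Set[str] = field(default_factory=set)       # acknowledged reviewers
    notified_at: Dict[str, datetime] = field(default_factory=dict)

def on_shift_change(note: Note, assignments: Dict[str, str], now: datetime) -> Optional[str]:
    """Notify the second-shift caregiver assigned to the note's patient."""
    caregiver = assignments.get(note.patient_id)
    if caregiver is not None and caregiver not in note.notified_at:
        note.notified_at[caregiver] = now
        send_notification(caregiver, f"New notes available for patient {note.patient_id}")
    return caregiver

def check_reminders(note: Note, now: datetime) -> None:
    """Remind any notified caregiver who has not acknowledged review in time."""
    for caregiver, notified in note.notified_at.items():
        if caregiver not in note.reviewed_by and now - notified > REVIEW_WINDOW:
            send_notification(caregiver, f"Reminder: review notes for patient {note.patient_id}")

def send_notification(caregiver_id: str, message: str) -> None:
    print(f"[to {caregiver_id}'s mobile device] {message}")  # stand-in for delivery

# Example wiring with invented identifiers:
note = Note("patient-120", "Patient tolerating fluids.")
on_shift_change(note, {"patient-120": "caregiver-132"}, datetime(2021, 8, 23, 19, 0))
check_reminders(note, datetime(2021, 8, 23, 19, 45))
```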
In some embodiments, the circuitry may be configured to display the textual data to the caregiver for review and editing before the textual data is provided to another device for storage or presentation. The caregiver may be one of multiple caregivers in an operating room in which a medical procedure is performed on the patient, and the circuitry may be further configured to determine, from the voice data, an identity of the caregiver that provided the spoken information from among the multiple caregivers in the operating room.
The circuitry of the compute device, in some embodiments, may be configured to produce the textual data using a machine learning model trained to convert speech to text. Additionally or alternatively, the circuitry of the compute device may be configured to correct one or more words in the textual data based on a context in which the one or more words were spoken. Further, the circuitry may be configured such that correcting one or more words based on a context in which the one or more words were spoken includes correcting the one or more words based on data indicative of a medical procedure being performed when the one or more words were spoken, a status of the patient when the one or more words were spoken, a determined location of the speaker, words previously spoken by the speaker, or one or more predefined words associated with predefined commands. In some embodiments, the circuitry may be configured to supplement the textual data with tag data indicative of a context of the textual data. The circuitry may also be configured such that supplementing the textual data with tag data includes supplementing the textual data with time stamp data indicative of times at which the spoken information was obtained. In some embodiments, the circuitry may be configured such that supplementing the textual data with tag data includes supplementing the textual data with caregiver identification data indicative of an identity of a speaker of the spoken information.
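Purely by way of illustration, context-based correction of transcribed words might resemble the following sketch, in which each recognized word is replaced by the closest term in a vocabulary selected for the medical procedure in progress. The vocabularies, the similarity cutoff, and the function names are assumptions, not the disclosed implementation.

```python
import difflib

# Hypothetical per-procedure vocabularies; a real system might derive context
# from the scheduled procedure, patient status, speaker location, or prior speech.
CONTEXT_VOCABULARY = {
    "cholecystectomy": ["cystic", "duct", "artery", "ligated", "laparoscopic"],
    "default": ["incision", "suture", "closure"],
}

def correct_words(text: str, procedure: str) -> str:
    """Replace each word with its closest match in the active context vocabulary."""
    vocab = CONTEXT_VOCABULARY.get(procedure, CONTEXT_VOCABULARY["default"])
    corrected = []
    for word in text.split():
        # difflib returns the nearest vocabulary entry above the similarity cutoff.
        match = difflib.get_close_matches(word.lower(), vocab, n=1, cutoff=0.6)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

print(correct_words("the sistic duct was ligated", "cholecystectomy"))
# -> "the cystic duct was ligated"
```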
In some embodiments, the circuitry may be configured such that supplementing the textual data with tag data includes supplementing the textual data with speaker location data indicative of a location of a speaking caregiver associated with the spoken information. Additionally or alternatively, the circuitry may be configured such that supplementing the textual data with tag data includes supplementing the textual data with speaker direction data indicative of a direction a speaking caregiver was facing when the spoken information was obtained. In some embodiments, the circuitry may be configured such that supplementing the textual data with tag data includes supplementing the textual data with a listing of equipment located in the operating room in which the medical procedure is performed on the patient. Additionally or alternatively, the circuitry of the compute device may be configured such that supplementing the textual data with tag data includes supplementing the textual data with data indicative of a type of medical procedure performed on the patient.
In some embodiments, the circuitry of the compute device may be configured such that supplementing the textual data with tag data includes supplementing the textual data with procedure stage data that may be indicative of a present stage of the medical procedure performed on the patient when the spoken information was obtained. The circuitry of the compute device, in some embodiments, may be configured such that supplementing the textual data with tag data includes supplementing the textual data with patient status data indicative of a status of the patient when the spoken information was obtained. Additionally or alternatively, the circuitry may be configured such that supplementing the textual data with tag data includes supplementing the textual data with equipment status data indicative of a status of equipment present in the room in which the medical procedure is performed.
In some embodiments, supplementing the textual data with tag data includes supplementing the textual data with tag data indicative of an incision site, an incision type, a location or diagram of incisions relative to each other, a size of a laparoscopic port used, an intra-operative finding, an identification of a pathology, stages of the medical procedure carried out from first incision to closure, ligation of one or more vessels, identification of an implant or prosthesis used in the medical procedure, excised tissue, anatomy notably identified, closure time, one or more materials used for closure, one or more intraoperative complications noted, one or more specimens obtained, blood loss, or one or more actions to be taken post-operatively. The circuitry may be additionally or alternatively configured to supplement the textual data with signature data that may be indicative of a signature and date associated with a caregiver who spoke the spoken information represented in the textual data.
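The tag data enumerated in the preceding paragraphs lends itself to a structured record. The following dataclass is one hypothetical way to organize a subset of those fields; every field name and type is an illustrative assumption rather than a required schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class OperativeNoteTags:
    """Illustrative container for tag data accompanying transcribed speech."""
    timestamp: Optional[datetime] = None          # when the information was spoken
    speaker_id: Optional[str] = None              # identity of the speaking caregiver
    speaker_location: Optional[str] = None        # where the speaker was in the room
    procedure_type: Optional[str] = None          # type of medical procedure
    procedure_stage: Optional[str] = None         # stage from first incision to closure
    incision_site: Optional[str] = None
    incision_type: Optional[str] = None
    implants: List[str] = field(default_factory=list)       # implants/prostheses used
    specimens: List[str] = field(default_factory=list)      # specimens obtained
    complications: List[str] = field(default_factory=list)  # intraoperative complications
    estimated_blood_loss_ml: Optional[int] = None
    postoperative_actions: List[str] = field(default_factory=list)
```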
In some embodiments, the circuitry of the compute device may be configured to provide the tag data to the other device for storage or presentation. The compute device, in some embodiments, may be part of a medical device used in the medical procedure on the patient. The circuitry may be configured such that providing the textual data to another device includes providing the textual data to at least one of an electronic medical records system, a device in a patient room, a device in an operating room, a personal computer, a device operating a web browser, a mobile device, an augmented reality presentation device, a projection device, or a wearable device. The circuitry may, in some embodiments, be configured to reduce ambient noise in the voice data.
In another aspect of the present disclosure, a method may include obtaining, by a compute device and from a caregiver, voice data indicative of spoken information pertaining to a patient, in response to a determination that the caregiver is located in a room with the patient in a healthcare facility (e.g., based on information from a real time location tracking system). The method may additionally include producing, by the compute device and from the obtained voice data, textual data indicative of the spoken information. Further, the method may include providing, by the compute device, the textual data to another device for storage or presentation. In some embodiments, the caregiver may be associated with a first shift and the method may further include determining, by the compute device, that a change from the first shift to a second shift has occurred, determining, by the compute device, that a second caregiver associated with the second shift is assigned to the patient, and providing, by the compute device, to the second caregiver and in response to the determination that the shift change has occurred, a notification of the textual data.
The method, in some embodiments, may additionally include determining, by the compute device, that a second caregiver has entered a room associated with the patient. Further, the method may include providing, by the compute device, to the second caregiver and in response to the determination that the second caregiver has entered the room, a notification of the textual data. The method may include providing the notification to a mobile compute device carried by the second caregiver. In some embodiments, the method includes prompting the second caregiver to acknowledge that the textual data has been reviewed. In some embodiments, the method includes determining, by the compute device, whether the second caregiver has reviewed the textual data within a predefined time period and providing, by the compute device and in response to a determination that the second caregiver has not reviewed the textual data within the predefined time period, a reminder to the second caregiver to review the textual data.
In some embodiments, the method includes determining, by the compute device, an identity of the patient based on patient designation data provided by the caregiver or based on a determination that the caregiver is located in a room assigned to the patient. The method may include providing, by the compute device, the textual data to a bedside display device. Additionally or alternatively, the method may include displaying, by the compute device, the textual data to the caregiver for review and editing before the textual data is provided to another device for storage or presentation. In some embodiments, the caregiver is one of multiple caregivers in an operating room in which a medical procedure is performed on the patient and the method may further include determining, by the compute device and from the voice data, an identity of the caregiver that provided the spoken information from among the multiple caregivers in the operating room. In some embodiments, the method additionally includes producing, by the compute device, the textual data using a machine learning model trained to convert speech to text.
The method may further include correcting, by the compute device, one or more words in the textual data based on a context in which the one or more words were spoken. Correcting one or more words based on a context in which the one or more words were spoken may include correcting one or more words based on data indicative of a medical procedure being performed when the one or more words were spoken, a status of the patient when the one or more words were spoken, a determined location of the speaker, words previously spoken by the speaker, or one or more predefined words associated with predefined commands. In some embodiments, the method includes supplementing the textual data with tag data indicative of a context of the textual data. Supplementing the textual data with tag data may include supplementing the textual data with time stamp data indicative of times at which the spoken information was obtained. Additionally or alternatively, supplementing the textual data with tag data may include supplementing the textual data with caregiver identification data indicative of an identity of a speaker of the spoken information.
In some embodiments, supplementing the textual data with tag data includes supplementing the textual data with speaker location data indicative of a location of a speaking caregiver associated with the spoken information, supplementing the textual data with speaker direction data indicative of a direction a speaking caregiver was facing when the spoken information was obtained, supplementing the textual data with a listing of equipment located in the operating room in which the medical procedure is performed on the patient, supplementing the textual data with data indicative of a type of medical procedure performed on the patient, and/or supplementing the textual data with procedure stage data indicative of a present stage of the medical procedure performed on the patient when the spoken information was obtained.
Supplementing the textual data with tag data, in some embodiments, may include supplementing the textual data with patient status data indicative of a status of the patient when the spoken information was obtained and/or supplementing the textual data with equipment status data indicative of a status of equipment present in the room in which the medical procedure is performed. In some embodiments, supplementing the textual data with tag data includes supplementing the textual data with tag data indicative of an incision site, an incision type, a location or diagram of incisions relative to each other, a size of a laparoscopic port used, an intra-operative finding, an identification of a pathology, stages of the medical procedure carried out from first incision to closure, ligation of one or more vessels, identification of an implant or prosthesis used in the medical procedure, excised tissue, anatomy notably identified, closure time, one or more materials used for closure, one or more intraoperative complications noted, one or more specimens obtained, blood loss, or one or more actions to be taken post-operatively.
The method may additionally or alternatively include supplementing, by the compute device, the textual data with signature data indicative of a signature and date associated with a caregiver who spoke the spoken information represented in the textual data. In some embodiments, the method includes providing, by the compute device, the tag data to the other device for storage or presentation. Providing the textual data to another device may include providing the textual data to at least one of an electronic medical records system, a device in a patient room, a device in an operating room, a personal computer, a device operating a web browser, a mobile device, an augmented reality presentation device, a projection device, or a wearable device. In some embodiments, the method includes reducing, by the compute device, ambient noise in the voice data.
In another aspect of the present disclosure, one or more machine-readable storage media may include instructions stored thereon. In response to being executed, the instructions may cause a compute device to obtain, from a caregiver, voice data indicative of spoken information pertaining to a patient. The instructions may cause the compute device to obtain the voice data in response to a determination that the caregiver is located in a room with the patient in a healthcare facility (e.g., based on information from a real time location tracking system). The instructions may further cause the compute device to produce, from the obtained voice data, textual data indicative of the spoken information. Additionally, the instructions may cause the compute device to provide the textual data to another device for storage or presentation. The caregiver may be associated with a first shift and, in some embodiments, the instructions may cause the compute device to determine that a change from the first shift to a second shift has occurred, determine that a second caregiver associated with the second shift is assigned to the patient, and provide, to the second caregiver and in response to the determination that the shift change has occurred, a notification of the textual data.
The instructions may, in some embodiments, cause the compute device to determine that a second caregiver has entered a room associated with the patient and provide, to the second caregiver and in response to the determination that the second caregiver has entered the room, a notification of the textual data. In some embodiments, providing the notification includes providing the notification to a mobile compute device carried by the second caregiver. The one or more instructions may also cause the compute device to prompt the second caregiver to acknowledge that the textual data has been reviewed. In some embodiments, the one or more instructions may cause the compute device to determine whether the second caregiver has reviewed the textual data within a predefined time period and provide, in response to a determination that the second caregiver has not reviewed the textual data within the predefined time period, a reminder to the second caregiver to review the textual data.
The one or more instructions may, in some embodiments, cause the compute device to determine an identity of the patient based on patient designation data provided by the caregiver or based on a determination that the caregiver is located in a room assigned to the patient. The one or more instructions may cause the compute device to provide the textual data to a bedside display device. The instructions may, in some embodiments, cause the compute device to display the textual data to the caregiver for review and editing before the textual data is provided to another device for storage or presentation. The caregiver may be one of multiple caregivers in an operating room in which a medical procedure is performed on the patient and the one or more instructions may additionally cause the compute device to determine, from the voice data, an identity of the caregiver that provided the spoken information from among the caregivers in the operating room.
In some embodiments, the one or more instructions additionally cause the compute device to produce the textual data using a machine learning model trained to convert speech to text. The instructions may additionally cause the compute device to correct one or more words in the textual data based on a context in which the one or more words were spoken. In correcting one or more words based on a context in which the one or more words were spoken, the instructions may cause the compute device to correct one or more words based on data indicative of a medical procedure being performed when the one or more words were spoken, a status of the patient when the one or more words were spoken, a determined location of the speaker, words previously spoken by the speaker, or one or more predefined words associated with predefined commands. The instructions may additionally or alternatively cause the compute device to supplement the textual data with tag data indicative of a context of the textual data.
In supplementing the textual data with tag data, the instructions may cause the compute device to supplement the textual data with time stamp data indicative of times at which the spoken information was obtained, supplement the textual data with caregiver identification data indicative of an identity of a speaker of the spoken information, supplement the textual data with speaker location data indicative of a location of a speaking caregiver associated with the spoken information, supplement the textual data with speaker direction data indicative of a direction a speaking caregiver was facing when the spoken information was obtained, and/or supplement the textual data with a listing of equipment located in the operating room in which the medical procedure is performed on the patient.
In some embodiments, in supplementing the textual data with tag data, the instructions may cause the compute device to supplement the textual data with data indicative of a type of medical procedure performed on the patient, supplement the textual data with procedure stage data indicative of a present stage of the medical procedure performed on the patient when the spoken information was obtained, supplement the textual data with patient status data indicative of a status of the patient when the spoken information was obtained, and/or supplement the textual data with equipment status data indicative of a status of equipment present in the room in which the medical procedure is performed. Additionally or alternatively, the instructions may cause the compute device to supplement the textual data with tag data indicative of an incision site, an incision type, a location or diagram of incisions relative to each other, a size of a laparoscopic port used, an intra-operative finding, an identification of a pathology, stages of the medical procedure carried out from first incision to closure, ligation of one or more vessels, identification of an implant or prosthesis used in the medical procedure, excised tissue, anatomy notably identified, closure time, one or more materials used for closure, one or more intraoperative complications noted, one or more specimens obtained, blood loss, or one or more actions to be taken post-operatively.
In some embodiments, the one or more instructions may cause the compute device to supplement the textual data with signature data indicative of a signature and date associated with a caregiver who spoke the spoken information represented in the textual data. The one or more machine-readable storage media may also have instructions embodied thereon that cause the compute device to provide the tag data to the other device for storage or presentation. In some embodiments, the instructions may cause the compute device to provide the textual data to at least one of an electronic medical records system, a device in a patient room, a device in an operating room, a personal computer, a device operating a web browser, a mobile device, an augmented reality presentation device, a projection device, or a wearable device. The instructions may also cause the compute device to reduce ambient noise in the voice data.
Additional features, which alone or in combination with any other feature(s), such as those listed above and/or those listed in the claims, may comprise patentable subject matter and will become apparent to those skilled in the art upon consideration of the following detailed description of various embodiments exemplifying the best mode of carrying out the embodiments as presently perceived.
The detailed description particularly refers to the accompanying figures in which:
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one of A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
Referring now to
As shown in
The patient care coordination system 180 may be embodied as any device(s) (e.g., one or more server compute devices) located on premises or remotely from the healthcare facility 110 (e.g., in a cloud data center) configured to enable communication among the caregivers at the healthcare facility 110, receive information from device(s) at the healthcare facility 110, and notify corresponding caregivers (e.g., caregivers assigned to a team associated with a particular patient to whom the information pertains) of the information. The electronic medical records (EMR) system 182 may be embodied as any device(s) (e.g., one or more server compute devices) located on premises or remotely from the healthcare facility 110 (e.g., in a cloud data center) configured to obtain electronic (e.g., digital) medical record data pertaining to patients, store the electronic medical record data (e.g., in one or more data storage devices), and provide the electronic medical record data (e.g., upon request) to an authenticated compute device (e.g., to a mobile compute device 140, 142) of a caregiver (e.g., a caregiver 130, 132).
Still referring to
In operation, the system 100, using one or more of the compute devices 140, 142, 150, 152, 154, 160, 162, 164, 170, 172, 174, 180, 182, 184, 186 described above, obtains voice data from one or more caregivers and converts the voice data into textual data to be stored and/or presented on an as-needed or as-requested basis. As such, the system 100 frees up the caregivers from the time-consuming task of manually entering textual notes pertaining to a patient during hospital rounds or in association with a medical procedure (e.g., surgical operation) performed on the patient. Furthermore, and as described in more detail herein, the system 100 may supplement the textual data with metadata (also referred to herein as tag data) indicative of contextual information associated with the textual data, such as identifiers of the caregivers who provided certain information (e.g., caregivers who spoke the information that has been converted to text), when the information was spoken, the patient to whom the information pertains, the location of the speaker when the information was spoken, the stage of the medical procedure during which the information was spoken, the settings of one or more devices (e.g., medical devices) at the time the information was spoken, and diagrams and/or other visual information (e.g., locations of incisions made during a surgery, etc.). As such, the system 100 provides a more complete record, with significantly greater efficiency, than conventional systems in which caregivers are relied on to recall and manually enter information pertaining to patients in the course of performing hospital rounds and/or during the course of performing surgeries or other medical procedures on patients. Moreover, and as described in more detail herein, the system 100 may determine to provide pertinent information to caregivers without their express request (e.g., upon a change in care teams assigned to one or more patients, or upon detecting that a caregiver has entered a room associated with a patient), to increase the likelihood that caregivers are equipped with pertinent information that could improve the care they provide to patients.
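Although the disclosure describes this operation at a system level, the capture, transcribe, tag, and distribute flow can be sketched in a few lines. Everything below, including the callable interfaces, key names, and trivial stand-in implementations, is an assumption for illustration only, not the disclosed implementation.

```python
from datetime import datetime, timezone

def process_utterance(transcribe, context, store_note, notify):
    """Sketch of the capture -> transcribe -> tag -> distribute flow."""
    text = transcribe(context["audio"])               # speech-to-text conversion
    tags = {
        "speaker": context.get("speaker"),            # who spoke the information
        "timestamp": datetime.now(timezone.utc),      # when it was spoken
        "patient": context.get("patient"),            # whom it concerns
        "procedure_stage": context.get("stage"),      # stage of the procedure
        "device_settings": context.get("devices"),    # settings of nearby devices
    }
    store_note(text, tags)                            # persist for later retrieval
    for caregiver in context.get("care_team", []):
        notify(caregiver, text)                       # proactive notification

# Example wiring with trivial stand-ins:
process_utterance(
    transcribe=lambda audio: audio.upper(),           # placeholder STT engine
    context={"audio": "bp stable", "speaker": "rn-1", "patient": "pt-7",
             "stage": "closure", "devices": {}, "care_team": ["rn-2"]},
    store_note=lambda text, tags: print("store:", text, tags),
    notify=lambda caregiver, text: print("notify", caregiver, ":", text),
)
```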
Referring now to
The compute engine 200 may be embodied as any type of device or collection of devices capable of performing various compute functions described below. In some embodiments, the compute engine 200 may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SoC), or other integrated system or device. Additionally, in the illustrative embodiment, the compute engine 200 includes or is embodied as a processor 202 and a memory 204. The processor 202 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 202 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the processor 202 may be embodied as, include, or be coupled to an FPGA, an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
The main memory 204 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. In some embodiments, all or a portion of the main memory 204 may be integrated into the processor 202. In operation, the main memory 204 may store various software and data used during operation such as voice data, textual data produced from the voice data, tag data indicative of contextual information associated with the textual data, patient medical record data, applications, libraries, and drivers.
The compute engine 200 is communicatively coupled to other components of the mobile compute device 140 via the I/O subsystem 206, which may be embodied as circuitry and/or components to facilitate input/output operations with the compute engine 200 (e.g., with the processor 202 and the main memory 204) and other components of the mobile compute device 140. For example, the I/O subsystem 206 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 206 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor 202, the main memory 204, and other components of the mobile compute device 140, into the compute engine 200.
The communication circuitry 208 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over a network between the mobile compute device 140 and another device 142, 150, 152, 154, 160, 162, 164, 170, 172, 174, 180, 182, 184, 186. The communication circuitry 208 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Wi-Fi®, WiMAX, Bluetooth®, cellular, Ethernet, etc.) to effect such communication.
The illustrative communication circuitry 208 includes a network interface controller (NIC) 210. The NIC 210 may be embodied as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the mobile compute device 140 to connect with another device 142, 150, 152, 154, 160, 162, 164, 170, 172, 174, 180, 182, 184, 186. In some embodiments, the NIC 210 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 210 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 210. In such embodiments, the local processor of the NIC 210 may be capable of performing one or more of the functions of the compute engine 200 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 210 may be integrated into one or more components of the mobile compute device 140 at the board level, socket level, chip level, and/or other levels.
Each data storage device 212 may be embodied as any type of device configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage device. Each data storage device 212 may include a system partition that stores data and firmware code for the data storage device 212 and one or more operating system partitions that store data files and executables for operating systems. Each audio capture device 214 may be embodied as any device or circuitry (e.g., a microphone) configured to obtain audio data (e.g., human speech) and convert the audio data to digital form (e.g., to be written to the memory 204 and/or one or more data storage devices 212). Each display device 216 may be embodied as any device or circuitry (e.g., a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, etc.) configured to display visual information (e.g., text, graphics, etc.) to a viewer (e.g., a caregiver or other user of the mobile compute device 140). Each image capture device 218 may be embodied as any device or circuitry (e.g., a camera) configured to obtain visual data from the environment and convert the visual data to digital form (e.g., to be written to the memory 204 and/or one or more data storage devices 212). Each peripheral device 220 may be embodied as any device or circuitry commonly found on a compute device, such as a keyboard, a mouse, or a speaker to supplement the functionality of the other components described above.
The compute devices 142, 150, 152, 154, 160, 162, 164, 170, 172, 174, 180, 182, 184, 186 may have components similar to those described in
In the illustrative embodiment, the compute devices 140, 142, 150, 152, 154, 160, 162, 164, 170, 172, 174, 180, 182, 184, 186 are in communication via a network 190, which may be embodied as any type of wired or wireless communication network, including local area networks (LANs) or wide area networks (WANs), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), cellular networks (e.g., Global System for Mobile Communications (GSM), Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), 3G, 4G, 5G, etc.), radio access networks (RANs), global networks (e.g., the internet), or any combination thereof, including gateways between various networks.
Referring now to
As indicated in block 306, the system 100 may determine the identity of the patient (e.g., the patient 120) based on patient designation data (e.g., the patient’s name, a room number of the patient, an identification number of the patient, etc.) provided by the caregiver (e.g., the caregiver 130). In doing so, and as indicated in block 308, the system 100 may determine the identity of the patient based on an identification of the patient provided by a compute device (e.g., the mobile compute device 140) used by the caregiver (e.g., the caregiver 130). For example, the mobile compute device 140 may receive the patient designation data through selection, by the caregiver 130, of the patient’s name on a touch screen of the mobile compute device 140 or may obtain the patient designation data through the audio capture device 214 if the caregiver 130 speaks the patient’s name.
In some embodiments, the system 100 may determine the identity of the patient based on a determined location of the caregiver (e.g., the caregiver 130), as indicated in block 310. For example, and as indicated in block 312, the system 100 may determine the identity of the patient based on a determination that the caregiver (e.g., the caregiver 130) is located in the patient’s room (e.g., the room 112). The system 100 may determine that the caregiver is located in the patient’s room (e.g., the room 112) based on location data obtained from a real time location tracking system (e.g., the location tracking system 186), as indicated in block 314. That is, the location data may indicate, for example, that a location tracking badge (e.g., an NFC tag) worn by the caregiver 130 has been detected in the room 112, where the patient 120 is located. As indicated in block 316, in making the determination of the identity of the patient, the system 100 may determine the room assigned to the patient based on admission, discharge, and transfer (ADT) data that associates patients with rooms in the healthcare facility 110. The ADT data may be provided by the ADT system 184 described with reference to
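As a purely illustrative sketch of blocks 310 through 316, resolving a patient's identity from the caregiver's tracked location might combine two lookups, one standing in for the location tracking system and one for the ADT system. The tables, identifiers, and function name below are hypothetical; the identifiers merely echo the reference numerals used above.

```python
from typing import Optional

# Hypothetical lookup tables standing in for the location tracking system and
# the ADT system; keys and values are invented for illustration.
BADGE_LOCATIONS = {"caregiver-130": "room-112"}      # badge detections (block 314)
ADT_ROOM_ASSIGNMENTS = {"room-112": "patient-120"}   # ADT data (block 316)

def identify_patient(caregiver_id: str) -> Optional[str]:
    """Resolve the patient assigned to the room where the caregiver's badge was detected."""
    room = BADGE_LOCATIONS.get(caregiver_id)
    return ADT_ROOM_ASSIGNMENTS.get(room) if room else None

assert identify_patient("caregiver-130") == "patient-120"
```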
In some embodiments, the method 300 may include obtaining voice data indicative of a medical procedure being performed on a patient, as indicated in block 318. For example, in the room 116 of
Referring now to
Referring now to
In other situations, in which multiple caregivers are detected in the room, the system 100 may limit the set of potential caregiver voice matches to those that are determined to be in the room (e.g., when performing a match based on voice biometric data). In some embodiments, the system 100 may determine an identity of a speaking caregiver represented in the voice data based on a determined position of each caregiver in the room when the caregiver spoke, as indicated in block 354. In doing so, and as indicated in block 356, the system 100 may determine the identity of each caregiver based on a comparison of speech volumes detected by each of multiple audio capture devices (e.g., data capture devices 164) in the room (e.g., the room 116). For example, if multiple microphones (e.g., data capture devices 164) are positioned at different locations in the room 116, the voice of a caregiver nearer to one of the microphones will be determined to be louder than the same voice detected by another microphone located farther away from the caregiver. As such, once a relative position of an identified caregiver in a room is determined (e.g., from voice biometric data and from a comparison of the volumes detected by multiple microphones at different locations in the room), the system 100 may ascribe, to the previously identified caregiver, other segments of voice data having similar differences in volume detected by the various microphones in the room.
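As a purely illustrative sketch of the volume-comparison heuristic of blocks 354 and 356, the following compares the root-mean-square level of the same utterance as captured by each room microphone and ascribes the segment to the caregiver previously identified at the loudest microphone's position. The microphone and caregiver identifiers are hypothetical.

```python
import math

def rms(samples):
    """Root-mean-square level of one microphone's copy of an utterance."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def nearest_microphone(segments_by_mic):
    """The microphone that heard the utterance loudest; per the heuristic
    above, the speaker is assumed to be nearest that microphone."""
    return max(segments_by_mic, key=lambda mic: rms(segments_by_mic[mic]))

def attribute_segment(segments_by_mic, speaker_by_position):
    """Ascribe the segment to the caregiver previously identified (e.g., via
    voice biometric data) at the loudest microphone's position."""
    return speaker_by_position.get(nearest_microphone(segments_by_mic))

# The same utterance as captured by two microphones at different locations.
segments = {"mic-a": [0.8, -0.7, 0.9], "mic-b": [0.2, -0.1, 0.2]}
print(attribute_segment(segments, {"mic-a": "surgeon-1", "mic-b": "nurse-1"}))
# -> surgeon-1
```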
Still referring to
Continuing the method 300, and referring now to block 368 of
The system 100 may also supplement the textual data with speaker direction data, which may be embodied as any data indicative of a direction a speaker (e.g., caregiver) was facing when the caregiver spoke a portion of the spoken information represented in the textual data. The direction may be expressed relative to another speaker, relative to one or more objects in the room, or relative to any other reference (e.g., geodetic north), as indicated in block 376. As indicated in block 378, the system 100 may supplement the textual data with a list of all caregivers that participated in a medical procedure to which the textual data pertains. Additionally or alternatively, the system 100 may supplement the textual data with a list of all medical devices present in the room in which the medical procedure was performed (e.g., from device identifiers reported by the medical devices themselves and/or based on spoken identifiers of the medical devices), as indicated in block 380. The system 100 may additionally or alternatively supplement the textual data with summary data indicative of the type of medical procedure that was performed, as indicated in block 382. Similarly, the system 100 may supplement the textual data with procedure stage data indicative of a stage of the medical procedure being performed when corresponding spoken information (e.g., represented by the textual data) was spoken, as indicated in block 384. The system 100 may also supplement the textual data with data indicative of a status of the patient when the spoken information was obtained, as indicated in block 386.
Referring now to
In some embodiments, as indicated in block 394, the system 100 may supplement the textual data with signature data, which may be embodied as any data indicative of a signature and date associated with the speaking caregiver(s) that provided the spoken information represented in the textual data. For example, the system 100 may add, to the textual data, the date that the spoken information was obtained (e.g., spoken by a corresponding caregiver and detected by the system 100) and a stored image of a handwritten signature of each corresponding caregiver. As indicated in block 396, the system 100 may provide the textual data to one or more devices for storage and/or presentation. In doing so, and as indicated in block 398, the system 100 may enable viewing and editing of the textual data prior to providing the textual data to other devices. For example, the system 100 may present the textual data to the caregiver who initially provided the spoken information (e.g., in the audio data) via the caregiver’s mobile compute device (e.g., mobile compute device 140, 142) or a nearby compute device (e.g., a presentation device 150, 152, 154) for review, editing, and confirmation of accuracy by the corresponding caregiver(s) prior to providing the textual data to other devices in the system 100. As indicated in block 400, in providing the textual data to one or more devices, the system 100 may additionally provide the tag data, discussed above, to the one or more devices for storage and/or presentation. Additionally or alternatively, and as indicated in block 402, the system 100 may provide the signature data, discussed above in block 394, to the one or more devices. The system 100 may provide the data (e.g., the textual data, the tag data, the signature data) to an electronic medical records system (e.g., the EMR system 182), as indicated in block 404.
Referring now to
In some embodiments, the system 100 may provide the data to a bedside display device (e.g., a presentation device 150, 152) to be presented to a subsequent caregiver (e.g., a caregiver assigned to the next shift), as indicated in block 416. In doing so, the system 100 may, in some embodiments, provide the data after the subsequent caregiver provides authentication data (e.g., proving the identity of the subsequent caregiver), as indicated in block 418. For example, and as indicated in block 420, the system 100 may provide the data after the subsequent caregiver provides a predefined personal identification number (PIN) verifying the identity of the subsequent caregiver.
Relatedly, the system 100 may provide a notification of the textual data to a replacement care team assigned to the corresponding patient, as indicated in block 422. In doing so, the system 100 may provide the notification when a shift change occurs, as indicated in block 424. As indicated in block 426, the system 100 may provide the notification when a caregiver (e.g., the subsequent caregiver) enters the room of the patient (e.g., as detected by the location tracking system 186). In some embodiments, the system 100 may prompt (e.g., through the caregiver’s mobile compute device 140, 142, a presentation device 150, 152, 154, etc.) a caregiver (e.g., the caregiver notified of the existence of the textual data) to acknowledge that the textual data (and any associated data, such as tag data) has been reviewed by the caregiver, as indicated in block 428. Relatedly, the system 100 may provide a reminder (e.g., through the caregiver’s mobile compute device 140, 142, a presentation device 150, 152, 154, etc.) to a caregiver to acknowledge that the textual data has been reviewed (e.g., after a predefined amount of time has elapsed since the notification was provided to the caregiver, prior to the performance of a scheduled medical procedure on the patient, etc.), as indicated in block 430. In the illustrative embodiment, the method 300 loops back to block 304 of
Referring now to
Further, the system 100 provides the audio data, combined with the time stamp data, to an engine to recognize different speakers, as indicated in block 914. The engine to recognize different speakers may be embodied as an algorithm to identify speakers based on voice biometric data (e.g., dominant frequencies known as formants), executed by corresponding hardware (e.g., a processor executing instructions, reconfigurable circuitry, application specific circuitry, etc.) in any of the devices of the system 100. Further, and as indicated in block 916, a voice to text engine (e.g., a voice to text algorithm executed by any device of the system 100) obtains the audio data and produces textual data from the audio data. In doing so, the system 100 may utilize contextual data, represented by block 918, relating to the speaker(s) associated with the audio data. The contextual data (e.g., corresponding to block 368 of
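The disclosure does not specify an implementation for the speaker-recognition engine of block 914. Purely as a toy stand-in, the sketch below matches a segment's strongest spectral peaks, a crude proxy for formants (which are typically estimated with techniques such as linear predictive coding), against enrolled voice profiles; the signals, names, and nearest-profile matching rule are all invented for illustration.

```python
import numpy as np

def dominant_frequencies(signal, rate, k=3):
    """Crude formant proxy: the k strongest frequency peaks of a segment."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return np.sort(freqs[np.argsort(spectrum)[-k:]])  # k loudest bins, ascending

def recognize_speaker(segment, rate, enrolled):
    """Return the enrolled profile nearest to the segment's spectral features."""
    features = dominant_frequencies(segment, rate)
    return min(enrolled, key=lambda name: np.linalg.norm(enrolled[name] - features))

# Synthetic "voices": two tones for one caregiver, a single tone for another.
rate = 8000
t = np.arange(0, 0.5, 1.0 / rate)
voice_a = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
voice_b = np.sin(2 * np.pi * 330 * t)
enrolled = {
    "caregiver-130": dominant_frequencies(voice_a, rate),
    "caregiver-132": dominant_frequencies(voice_b, rate),
}
print(recognize_speaker(voice_a, rate, enrolled))  # -> caregiver-130
```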
Additionally or alternatively, the voice to text engine represented by block 916 may utilize voice-related contextual tags (e.g., the tag data described with reference to block 392 of
Referring now to
The mobile compute device 1004, in the illustrative embodiment, receives a notification from the location tracking system 186 or the patient care coordination system 180 (e.g., the Voalte® Platform) that the caregiver 1002 has entered the patient’s room and, in response, displays, in the mobile application, information (e.g., data provided from the EMR system 182) pertaining to the patient in the room (e.g., the patient 120 in the room 112). In step 1006, the caregiver begins to report (e.g., verbally) on the patient and the system 100 (e.g., the mobile compute device 1004, which is similar to the mobile compute device 140) captures notes (e.g., the audio data, to be converted to textual data) via a microphone (e.g., the audio capture device 214). Subsequently, in step 1008, the system 100 (e.g., the mobile compute device 1004, 140, or another device in the system 100 that receives the audio data from the mobile compute device 1004, 140) converts the notes (e.g., the audio data) to textual data. Referring briefly to
Referring now to
While certain illustrative embodiments have been described in detail in the drawings and the foregoing description, such an illustration and description is to be considered as exemplary and not restrictive in character, it being understood that only illustrative embodiments have been shown and described and that all changes and modifications that come within the spirit of the disclosure are desired to be protected. There exist a plurality of advantages of the present disclosure arising from the various features of the apparatus, systems, and methods described herein. It will be noted that alternative embodiments of the apparatus, systems, and methods of the present disclosure may not include all of the features described, yet still benefit from at least some of the advantages of such features. Those of ordinary skill in the art may readily devise their own implementations of the apparatus, systems, and methods that incorporate one or more of the features of the present disclosure.
This application claims the benefit, under 35 U.S.C. § 119(e), of U.S. Provisional Patent Application No. 63/236,104, filed Aug. 23, 2021, the entirety of which is hereby expressly incorporated by reference herein.