The present disclosure relates generally to building management systems (BMS) and/or other systems and methods for monitoring and controlling building equipment such as various equipment that affect (e.g., improve) the air quality of a building space. More specifically, the present disclosure relates to air quality services provided for healthcare facilities, including air quality services provided for particular patients and/or patient rooms in a healthcare facility. Improved air quality can improve patient outcomes for a variety of health conditions. As such, it may be advantageous to provide improved systems and methods relating to air quality services and tracking for healthcare facilities.
This summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the devices or processes described herein will become apparent in the detailed description set forth herein, taken in conjunction with the accompanying figures, wherein like reference numerals refer to like elements.
One implementation of the present disclosure is a method of operating HVAC equipment for a healthcare facility. The method includes determining an air quality service to be provided to a space of the healthcare facility based on a patient record associated with the space, controlling the HVAC equipment to provide the air quality service, tracking an amount of the air quality service provided to the space, and updating the patient record to indicate the amount of the air quality service provided to the space.
Another implementation of the present disclosure is a method for a healthcare facility. The method includes determining an air quality service to be provided to a space of the healthcare facility based on a patient record associated with the space and tracking an amount of the air quality service provided to the space by counting a number of air changes provided for the space by the HVAC equipment, measuring a change in an air quality parameter in the space, or determining a duration of provision of the air quality service to the space. The method also includes updating the patient record to indicate the amount of the air quality service provided to the space.
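For illustration only, the following minimal Python sketch shows one way the tracking step could be realized by counting air changes and accumulating a duration of service; the names (e.g., AirQualityServiceRecord, track_air_changes) and the airflow and room-volume figures are assumptions rather than elements of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class AirQualityServiceRecord:
    """Hypothetical record of air quality service delivered to one space."""
    space_id: str
    air_changes: float = 0.0          # cumulative air changes delivered
    service_minutes: float = 0.0      # cumulative duration of service
    entries: list = field(default_factory=list)

def track_air_changes(record, supply_cfm, room_volume_ft3, minutes):
    """Accumulate delivered air changes: (airflow * time) / room volume."""
    delivered = (supply_cfm * minutes) / room_volume_ft3
    record.air_changes += delivered
    record.service_minutes += minutes
    record.entries.append({"air_changes": delivered, "minutes": minutes})
    return delivered

# Example: 1,000 CFM into a 3,200 ft^3 patient room for 30 minutes
record = AirQualityServiceRecord(space_id="room-218")
track_air_changes(record, supply_cfm=1000, room_volume_ft3=3200, minutes=30)
print(record.air_changes)   # ~9.4 air changes delivered so far
```

A similar record could alternatively be keyed off a measured change in an air quality parameter or the duration of service, per the embodiments above.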
In some embodiments, the method includes controlling the HVAC equipment to provide the air quality service based on the patient record associated with the space. In some embodiments, the method includes measuring the change in the air quality parameter in the space via a sensor coupled to a patient bed in the space. The sensor can measure one or more of carbon dioxide concentration, air pressure, humidity, temperature, or particulate concentration.
In some embodiments, updating the patient record to indicate the amount of the air quality service provided includes adding an air quality entry to a list of treatments provided to the patient. The method may also include generating a bill based on the patient record such that the bill comprises an indication of the amount of the air quality service provided.
In some embodiments, the HVAC equipment includes a patient bed comprising an air purifier and the method comprises controlling the air purifier based on the patient record. In some embodiments, determining the air quality service to be provided to the space of the healthcare facility based on the patient record associated with the space includes determining that the air quality service is approved for reimbursement based on a patient health condition indicated in the patient record.
In some embodiments, the method includes controlling an ultraviolet light source in the space and updating the patient record to indicate that the ultraviolet light source was operated for the space. Controlling the ultraviolet light source can include providing a first amount of ultraviolet light during a first time period subsequent to a treatment performed on the patient and a second amount of the ultraviolet light after the first time period, where the second amount is less than the first amount. The method can also include adjusting the air quality service based on a user input to a nursing interface.
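As a hedged sketch of the two-stage ultraviolet control described above, the function below returns a higher UV output during a first time period following a treatment and a reduced output afterward; the period length and output levels are illustrative assumptions.

```python
from datetime import datetime, timedelta

def uv_output_level(treatment_end, now,
                    first_period=timedelta(hours=2),
                    first_level=1.0, second_level=0.3):
    """Return a UV output fraction: higher immediately after a treatment,
    reduced once the first time period has elapsed (all values assumed)."""
    elapsed = now - treatment_end
    return first_level if elapsed <= first_period else second_level

treatment_end = datetime(2024, 1, 1, 9, 0)
print(uv_output_level(treatment_end, datetime(2024, 1, 1, 10, 0)))   # 1.0
print(uv_output_level(treatment_end, datetime(2024, 1, 1, 12, 30)))  # 0.3
```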
Another implementation of the present disclosure is a patient bed. The patient bed is configured to support a patient and can include at least one sensor configured to measure at least one variable relating to a condition of air at the patient bed, an air purifier, and circuitry programmed to control the air purifier in accordance with an air quality service determined based on a condition of the patient and track an amount of the air quality service provided based on data from the at least one sensor.
In some embodiments, the circuitry is further programmed to communicate with an electronic health record system and cause the electronic health record system to record the amount of the air quality service. In some embodiments, the patient bed also includes a UV light source. The circuitry can be programmed to control the UV light source based on timing of a medical intervention conducted on the patient.
In some embodiments, the at least one sensor includes a temperature sensor, a humidity sensor, a particulate sensor, and a pressure sensor. In some embodiments, the at least one sensor comprises an air composition sensor and the circuitry is programmed to classify a disease or disease state of the patient based on data from the air composition sensor.
Another implementation of the present disclosure is a system. The system includes equipment configured to affect a condition of air in a patient room and an air quality service system including one or more processors and one or more non-transitory computer-readable media storing program instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include determining an air quality service to be provided to the patient room based on an electronic medical record associated with the patient room, controlling the equipment to provide the air quality service, tracking an amount of the air quality service provided to the patient room, and causing the electronic medical record to be updated to indicate the amount of the air quality service provided to the patient room.
In some embodiments, the system includes a plurality of sensors configured to measure a plurality of variables relating to air quality in the patient room. Tracking the amount of the air quality service can include using data from the plurality of sensors. In some embodiments, the equipment comprises a patient bed comprising an air purifier.
Referring generally to the FIGURES, systems and methods are described for aggregating data from multiple subsystems and making one or more command decisions based on the aggregated data. In some embodiments, buildings, such as hospitals, include several systems and/or subsystems, such as heating, ventilation, or air conditioning (HVAC) systems, room scheduling systems, patient monitoring systems, ambulance dispatch systems, and distributed care systems.
Referring now to
The BMS that serves building 10 includes an HVAC system 100. HVAC system 100 may include a plurality of HVAC devices (e.g., heaters, chillers, air handling units, pumps, fans, thermal energy storage, etc.) configured to provide heating, cooling, ventilation, or other services for building 10. For example, HVAC system 100 is shown to include a waterside system 120 and an airside system 130. Waterside system 120 may provide a heated or chilled fluid to an air handling unit of airside system 130. Airside system 130 may use the heated or chilled fluid to heat or cool an airflow provided to building 10. In some embodiments, waterside system 120 is replaced with a central energy plant such as central plant 200, described with reference to
In some embodiments, building 10 is a single building or a campus (e.g., several buildings of a hospital campus) capable of housing some or all components of HVAC system 100. While the systems and methods described herein are primarily focused on operations within a typical building (e.g., building 10), they can easily be applied to various other enclosures or spaces (e.g., cars, airplanes, recreational vehicles, etc.).
Still referring to
AHU 106 may place the working fluid in a heat exchange relationship with an airflow passing through AHU 106 (e.g., via one or more stages of cooling coils and/or heating coils). The airflow may be, for example, outside air, return air from within building 10, or a combination of both. AHU 106 may transfer heat between the airflow and the working fluid to provide heating or cooling for the airflow. For example, AHU 106 may include one or more fans or blowers configured to pass the airflow over or through a heat exchanger containing the working fluid. The working fluid may then return to chiller 102 or boiler 104 via piping 110.
Airside system 130 may deliver the airflow supplied by AHU 106 (i.e., the supply airflow) to building 10 via air supply ducts 112 and may provide return air from building 10 to AHU 106 via air return ducts 114. In some embodiments, airside system 130 includes multiple variable air volume (VAV) units 116. For example, airside system 130 is shown to include a separate VAV unit 116 on each floor or zone of building 10. VAV units 116 may include dampers or other flow control elements that can be operated to control an amount of the supply airflow provided to individual zones of building 10. In other embodiments, airside system 130 delivers the supply airflow into one or more zones of building 10 (e.g., via air supply ducts 112) without using intermediate VAV units 116 or other flow control elements. AHU 106 may include various sensors (e.g., temperature sensors, pressure sensors, etc.) configured to measure attributes of the supply airflow. AHU 106 may receive input from sensors located within AHU 106 and/or within the building zone and may adjust the flowrate, temperature, or other attributes of the supply airflow through AHU 106 to achieve setpoint conditions for the building zone.
Referring now to
Central plant 200 is shown to include a plurality of subplants 202-212 including a heater subplant 202, a heat recovery chiller subplant 204, a chiller subplant 206, a cooling tower subplant 208, a hot thermal energy storage (TES) subplant 210, and a cold thermal energy storage (TES) subplant 212. Subplants 202-212 consume resources from utilities to serve the thermal energy loads (e.g., hot water, cold water, heating, cooling, etc.) of a building or campus. For example, heater subplant 202 may be configured to heat water in a hot water loop 214 that circulates the hot water between heater subplant 202 and building 10. Chiller subplant 206 may be configured to chill water in a cold water loop 216 that circulates the cold water between chiller subplant 206 and building 10. Heat recovery chiller subplant 204 may be configured to transfer heat from cold water loop 216 to hot water loop 214 to provide additional heating for the hot water and additional cooling for the cold water. Condenser water loop 218 may absorb heat from the cold water in chiller subplant 206 and reject the absorbed heat in cooling tower subplant 208 or transfer the absorbed heat to hot water loop 214. Hot TES subplant 210 and cold TES subplant 212 may store hot and cold thermal energy, respectively, for subsequent use.
Hot water loop 214 and cold water loop 216 may deliver the heated and/or chilled water to air handlers located on the rooftop of building 10 (e.g., AHU 106) or to individual floors or zones of building 10 (e.g., VAV units 116). The air handlers push air past heat exchangers (e.g., heating coils or cooling coils) through which the water flows to provide heating or cooling for the air. The heated or cooled air may be delivered to individual zones of building 10 to serve the thermal energy loads of building 10. The water then returns to subplants 202-212 to receive further heating or cooling.
Although subplants 202-212 are shown and described as heating and cooling water for circulation to a building, it is understood that any other type of working fluid (e.g., glycol, CO2, etc.) may be used in place of or in addition to water to serve the thermal energy loads. In other embodiments, subplants 202-212 may provide heating and/or cooling directly to the building or campus without requiring an intermediate heat transfer fluid. These and other variations to central plant 200 are within the teachings of the present invention.
Each of subplants 202-212 may include a variety of equipment configured to facilitate the functions of the subplant. For example, heater subplant 202 is shown to include a plurality of heating elements 220 (e.g., boilers, electric heaters, etc.) configured to add heat to the hot water in hot water loop 214. Heater subplant 202 is also shown to include several pumps 222 and 224 configured to circulate the hot water in hot water loop 214 and to control the flowrate of the hot water through individual heating elements 220. Chiller subplant 206 is shown to include a plurality of chillers 232 configured to remove heat from the cold water in cold water loop 216. Chiller subplant 206 is also shown to include several pumps 234 and 236 configured to circulate the cold water in cold water loop 216 and to control the flowrate of the cold water through individual chillers 232.
Heat recovery chiller subplant 204 is shown to include a plurality of heat recovery heat exchangers 226 (e.g., refrigeration circuits) configured to transfer heat from cold water loop 216 to hot water loop 214. Heat recovery chiller subplant 204 is also shown to include several pumps 228 and 230 configured to circulate the hot water and/or cold water through heat recovery heat exchangers 226 and to control the flowrate of the water through individual heat recovery heat exchangers 226. Cooling tower subplant 208 is shown to include a plurality of cooling towers 238 configured to remove heat from the condenser water in condenser water loop 218. Cooling tower subplant 208 is also shown to include several pumps 240 configured to circulate the condenser water in condenser water loop 218 and to control the flowrate of the condenser water through individual cooling towers 238.
Hot TES subplant 210 is shown to include a hot TES tank 242 configured to store the hot water for later use. Hot TES subplant 210 may also include one or more pumps or valves configured to control the flowrate of the hot water into or out of hot TES tank 242. Cold TES subplant 212 is shown to include cold TES tanks 244 configured to store the cold water for later use. Cold TES subplant 212 may also include one or more pumps or valves configured to control the flowrate of the cold water into or out of cold TES tanks 244.
In some embodiments, one or more of the pumps in central plant 200 (e.g., pumps 222, 224, 228, 230, 234, 236, and/or 240) or pipelines in central plant 200 include an isolation valve associated therewith. Isolation valves may be integrated with the pumps or positioned upstream or downstream of the pumps to control the fluid flows in central plant 200. In various embodiments, central plant 200 may include more, fewer, or different types of devices and/or subplants based on the particular configuration of central plant 200 and the types of loads served by central plant 200.
Referring now to
In
Each of dampers 316-320 can be operated by an actuator. For example, exhaust air damper 316 can be operated by actuator 324, mixing damper 318 can be operated by actuator 326, and outside air damper 320 can be operated by actuator 328. Actuators 324-328 can communicate with an AHU controller 330 via a communications link 332. Actuators 324-328 can receive control signals from AHU controller 330 and can provide feedback signals to AHU controller 330. Feedback signals can include, for example, an indication of a current actuator or damper position, an amount of torque or force exerted by the actuator, diagnostic information (e.g., results of diagnostic tests performed by actuators 324-328), status information, commissioning information, configuration settings, calibration data, and/or other types of information or data that can be collected, stored, or used by actuators 324-328. AHU controller 330 can be an economizer controller configured to use one or more control algorithms (e.g., state-based algorithms, extremum seeking control (ESC) algorithms, proportional-integral (PI) control algorithms, proportional-integral-derivative (PID) control algorithms, model predictive control (MPC) algorithms, feedback control algorithms, etc.) to control actuators 324-328.
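As one minimal example of the listed strategies, a proportional-integral (PI) loop of the following form could map a setpoint error to an actuator command; the gains, limits, and signal values are assumptions and are not taken from the disclosure.

```python
class PIController:
    """Minimal proportional-integral loop (gains are illustrative only)."""
    def __init__(self, kp=2.0, ki=0.1, out_min=0.0, out_max=100.0):
        self.kp, self.ki = kp, ki
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        output = self.kp * error + self.ki * self.integral
        # Clamp to the actuator's allowable command range (e.g., 0-100% open)
        return max(self.out_min, min(self.out_max, output))

# Example: drive a damper toward an assumed mixed-air temperature setpoint
loop = PIController()
command = loop.update(setpoint=55.0, measurement=52.0, dt=1.0)
print(f"damper command: {command:.1f}% open")
```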
Still referring to
Cooling coil 334 can receive a chilled fluid from central plant 200 (e.g., from cold water loop 216) via piping 342 and can return the chilled fluid to central plant 200 via piping 344. Valve 346 can be positioned along piping 342 or piping 344 to control a flowrate of the chilled fluid through cooling coil 334. In some embodiments, cooling coil 334 includes multiple stages of cooling coils that can be independently activated and deactivated (e.g., by AHU controller 330, by BMS controller 366, etc.) to modulate an amount of cooling applied to supply air 310.
Heating coil 336 can receive a heated fluid from central plant 200 (e.g., from hot water loop 214) via piping 348 and can return the heated fluid to central plant 200 via piping 350. Valve 352 can be positioned along piping 348 or piping 350 to control a flowrate of the heated fluid through heating coil 336. In some embodiments, heating coil 336 includes multiple stages of heating coils that can be independently activated and deactivated (e.g., by AHU controller 330, by BMS controller 366, etc.) to modulate an amount of heating applied to supply air 310.
Each of valves 346 and 352 can be controlled by an actuator. For example, valve 346 can be controlled by actuator 354 and valve 352 can be controlled by actuator 356. Actuators 354-356 can communicate with AHU controller 330 via communications links 358-360. Actuators 354-356 can receive control signals from AHU controller 330 and can provide feedback signals to controller 330. In some embodiments, AHU controller 330 receives a measurement of the supply air temperature from a temperature sensor 362 positioned in supply air duct 312 (e.g., downstream of cooling coil 334 and/or heating coil 336). AHU controller 330 can also receive a measurement of the temperature of building zone 306 from a temperature sensor 364 located in building zone 306.
In some embodiments, AHU controller 330 operates valves 346 and 352 via actuators 354-356 to modulate an amount of heating or cooling provided to supply air 310 (e.g., to achieve a setpoint temperature for supply air 310 or to maintain the temperature of supply air 310 within a setpoint temperature range). The positions of valves 346 and 352 affect the amount of heating or cooling provided to supply air 310 by cooling coil 334 or heating coil 336 and may correlate with the amount of energy consumed to achieve a desired supply air temperature. AHU controller 330 can control the temperature of supply air 310 and/or building zone 306 by activating or deactivating coils 334-336, adjusting a speed of fan 338, or a combination of both.
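A simple sketch of the heating/cooling modulation described above is shown below; it selects which coil valve to open based on the supply-air temperature error, with the deadband and proportional gain chosen arbitrarily for illustration.

```python
def sequence_coils(supply_temp, setpoint, deadband=1.0):
    """Pick which coil valve to modulate based on supply-air temperature error.
    Returns (heating_valve_pct, cooling_valve_pct); gain/deadband are assumed."""
    error = setpoint - supply_temp
    if error > deadband:          # supply air too cold -> add heat
        return min(100.0, 20.0 * (error - deadband)), 0.0
    if error < -deadband:         # supply air too warm -> add cooling
        return 0.0, min(100.0, 20.0 * (-error - deadband))
    return 0.0, 0.0               # within deadband: hold both valves closed

print(sequence_coils(supply_temp=58.0, setpoint=55.0))   # (0.0, 40.0)
```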
Still referring to
In some embodiments, AHU controller 330 receives information from BMS controller 366 (e.g., commands, setpoints, operating boundaries, etc.) and provides information to BMS controller 366 (e.g., temperature measurements, valve or actuator positions, operating statuses, diagnostics, etc.). For example, AHU controller 330 can provide BMS controller 366 with temperature measurements from temperature sensors 362 and 364, equipment on/off states, equipment operating capacities, and/or any other information that can be used by BMS controller 366 to monitor or control a variable state or condition within building zone 306.
Client device 368 can include one or more human-machine interfaces or client interfaces (e.g., graphical user interfaces, reporting interfaces, text-based computer interfaces, client-facing web services, web servers that provide pages to web clients, etc.) for controlling, viewing, or otherwise interacting with HVAC system 100, its subsystems, and/or devices. Client device 368 can be a computer workstation, a client terminal, a remote or local interface, or any other type of user interface device. Client device 368 can be a stationary terminal or a mobile device. For example, client device 368 can be a desktop computer, a computer server with a user interface, a laptop computer, a tablet, a smartphone, a PDA, or any other type of mobile or non-mobile device. Client device 368 can communicate with BMS controller 366 and/or AHU controller 330 via communications link 372.
Referring now to
Each of building subsystems 428 can include any number of devices, controllers, and connections for completing its individual functions and control activities. HVAC subsystem 440 can include many of the same components as HVAC system 100, as described with reference to
Still referring to
Interfaces 407, 409 can be or include wired or wireless communications interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals, etc.) for conducting data communications with building subsystems 428 or other external systems or devices. In various embodiments, communications via interfaces 407, 409 can be direct (e.g., local wired or wireless communications) or via a communications network 446 (e.g., a WAN, the Internet, a cellular network, etc.). For example, interfaces 407, 409 can include an Ethernet card and port for sending and receiving data via an Ethernet-based communications link or network. In another example, interfaces 407, 409 can include a Wi-Fi transceiver for communicating via a wireless communications network. In another example, one or both of interfaces 407, 409 can include cellular or mobile phone communications transceivers. In one embodiment, communications interface 407 is a power line communications interface and BMS interface 409 is an Ethernet interface. In other embodiments, both communications interface 407 and BMS interface 409 are Ethernet interfaces or are the same Ethernet interface.
Still referring to
Memory 408 (e.g., memory, memory unit, storage device, etc.) can include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage, etc.) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present application. Memory 408 can be or include volatile memory or non-volatile memory. Memory 408 can include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present application. According to an exemplary embodiment, memory 408 is communicably connected to processor 406 via processing circuit 404 and includes computer code for executing (e.g., by processing circuit 404 and/or processor 406) one or more processes described herein.
In some embodiments, BMS controller 366 is implemented within a single computer (e.g., one server, one housing, etc.). In various other embodiments BMS controller 366 can be distributed across multiple servers or computers (e.g., that can exist in distributed locations). Further, while
Still referring to
Enterprise integration layer 410 can be configured to serve clients or local applications with information and services to support a variety of enterprise-level applications. For example, enterprise control applications 426 can be configured to provide subsystem-spanning control to a graphical user interface (GUI) or to any number of enterprise-level business applications (e.g., accounting systems, user identification systems, etc.). Enterprise control applications 426 can also or alternatively be configured to provide configuration GUIs for configuring BMS controller 366. In yet other embodiments, enterprise control applications 426 can work with layers 410-420 to optimize building performance (e.g., efficiency, energy use, comfort, or safety) based on inputs received at communications interface 407 and/or BMS interface 409.
Building subsystem integration layer 420 can be configured to manage communications between BMS controller 366 and building subsystems 428. For example, building subsystem integration layer 420 can receive sensor data and input signals from building subsystems 428 and provide output data and control signals to building subsystems 428. Building subsystem integration layer 420 can also be configured to manage communications between building subsystems 428. Building subsystem integration layer 420 can translate communications (e.g., sensor data, input signals, output signals, etc.) across a plurality of multi-vendor/multi-protocol systems.
Demand response layer 414 can be configured to optimize resource usage (e.g., electricity use, natural gas use, water use, etc.) and/or the monetary cost of such resource usage while satisfying the demand of building 10. The optimization can be based on time-of-use prices, curtailment signals, energy availability, or other data received from utility providers, distributed energy generation systems 424, from energy storage 427 (e.g., hot TES 242, cold TES 244, etc.), or from other sources. Demand response layer 414 can receive inputs from other layers of BMS controller 366 (e.g., building subsystem integration layer 420, integrated control layer 418, etc.). The inputs received from other layers can include environmental or sensor inputs such as temperature, carbon dioxide levels, relative humidity levels, air quality sensor outputs, occupancy sensor outputs, room schedules, and the like. The inputs can also include inputs such as electrical use (e.g., expressed in kWh), thermal load measurements, pricing information, projected pricing, smoothed pricing, curtailment signals from utilities, and the like.
According to an exemplary embodiment, demand response layer 414 includes control logic for responding to the data and signals it receives. These responses can include communicating with the control algorithms in integrated control layer 418, changing control strategies, changing setpoints, or activating/deactivating building equipment or subsystems in a controlled manner. Demand response layer 414 can also include control logic configured to determine when to utilize stored energy. For example, demand response layer 414 can determine to begin using energy from energy storage 427 just prior to the beginning of a peak use hour.
In some embodiments, demand response layer 414 includes a control module configured to actively initiate control actions (e.g., automatically changing setpoints) which minimize energy costs based on one or more inputs representative of or based on demand (e.g., price, a curtailment signal, a demand level, etc.). In some embodiments, demand response layer 414 uses equipment models to determine an optimal set of control actions. The equipment models can include, for example, thermodynamic models describing the inputs, outputs, and/or functions performed by various sets of building equipment. Equipment models can represent collections of building equipment (e.g., subplants, chiller arrays, etc.) or individual devices (e.g., individual chillers, heaters, pumps, etc.).
Demand response layer 414 can further include or draw upon one or more demand response policy definitions (e.g., databases, XML files, etc.). The policy definitions can be edited or adjusted by a user (e.g., via a graphical user interface) so that the control actions initiated in response to demand inputs can be tailored for the user's application, desired comfort level, particular building equipment, or based on other concerns. For example, the demand response policy definitions can specify which equipment can be turned on or off in response to particular demand inputs, how long a system or piece of equipment should be turned off, what setpoints can be changed, what the allowable set point adjustment range is, how long to hold a high demand setpoint before returning to a normally scheduled setpoint, how close to approach capacity limits, which equipment modes to utilize, the energy transfer rates (e.g., the maximum rate, an alarm rate, other rate boundary information, etc.) into and out of energy storage devices (e.g., thermal storage tanks, battery banks, etc.), and when to dispatch on-site generation of energy (e.g., via fuel cells, a motor generator set, etc.).
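As an illustrative, non-limiting example, a demand response policy definition of the kind described could be represented as structured data such as the following; every field name and value here is an assumption.

```python
# Hypothetical demand response policy definition (field names/values assumed).
demand_response_policy = {
    "trigger": {"price_per_kwh_above": 0.25, "curtailment_signal": True},
    "sheddable_equipment": ["chiller_2", "ahu_rooftop_east"],
    "max_off_minutes": 60,
    "setpoint_adjustments": {
        "zone_cooling_F": {"offset": 2.0, "allowable_range": [70.0, 78.0]},
    },
    "hold_minutes_before_restore": 120,
    "storage_dispatch": {"max_discharge_kw": 500, "alarm_rate_kw": 650},
    "onsite_generation": {"dispatch": "fuel_cell", "min_runtime_minutes": 30},
}

def allowed_setpoint(policy, zone_key, current_setpoint):
    """Apply the policy offset while respecting the allowable range."""
    rule = policy["setpoint_adjustments"][zone_key]
    lo, hi = rule["allowable_range"]
    return max(lo, min(hi, current_setpoint + rule["offset"]))

print(allowed_setpoint(demand_response_policy, "zone_cooling_F", 74.0))  # 76.0
```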
Integrated control layer 418 can be configured to use the data input or output of building subsystem integration layer 420 and/or demand response layer 414 to make control decisions. Due to the subsystem integration provided by building subsystem integration layer 420, integrated control layer 418 can integrate control activities of the subsystems 428 such that the subsystems 428 behave as a single integrated supersystem. In an exemplary embodiment, integrated control layer 418 includes control logic that uses inputs and outputs from a plurality of building subsystems to provide greater comfort and energy savings relative to the comfort and energy savings that separate subsystems could provide alone. For example, integrated control layer 418 can be configured to use an input from a first subsystem to make an energy-saving control decision for a second subsystem. Results of these decisions can be communicated back to building subsystem integration layer 420.
Integrated control layer 418 is shown to be logically below demand response layer 414. Integrated control layer 418 can be configured to enhance the effectiveness of demand response layer 414 by enabling building subsystems 428 and their respective control loops to be controlled in coordination with demand response layer 414. This configuration may advantageously reduce disruptive demand response behavior relative to conventional systems. For example, integrated control layer 418 can be configured to assure that a demand response-driven upward adjustment to the setpoint for chilled water temperature (or another component that directly or indirectly affects temperature) does not result in an increase in fan energy (or other energy used to cool a space) that would result in greater total building energy use than was saved at the chiller.
Integrated control layer 418 can be configured to provide feedback to demand response layer 414 so that demand response layer 414 checks that constraints (e.g., temperature, lighting levels, etc.) are properly maintained even while demanded load shedding is in progress. The constraints can also include setpoint or sensed boundaries relating to safety, equipment operating limits and performance, comfort, fire codes, electrical codes, energy codes, and the like. Integrated control layer 418 is also logically below fault detection and diagnostics layer 416 and automated measurement and validation layer 412. Integrated control layer 418 can be configured to provide calculated inputs (e.g., aggregations) to these higher levels based on outputs from more than one building subsystem.
Automated measurement and validation (AM&V) layer 412 can be configured to verify that control strategies commanded by integrated control layer 418 or demand response layer 414 are working properly (e.g., using data aggregated by AM&V layer 412, integrated control layer 418, building subsystem integration layer 420, FDD layer 416, or otherwise). The calculations made by AM&V layer 412 can be based on building system energy models and/or equipment models for individual BMS devices or subsystems. For example, AM&V layer 412 can compare a model-predicted output with an actual output from building subsystems 428 to determine an accuracy of the model.
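One common way to quantify the model-versus-measurement comparison performed by AM&V layer 412 is a normalized error metric such as the coefficient of variation of the root-mean-square error, CV(RMSE); the sketch below is illustrative, and the example data and threshold are assumptions.

```python
from math import sqrt

def cvrmse(predicted, measured):
    """Coefficient of variation of RMSE between model output and measurements."""
    n = len(measured)
    rmse = sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)
    return rmse / (sum(measured) / n)

predicted = [410.0, 395.0, 420.0, 430.0]   # model-predicted kWh per interval
measured  = [400.0, 405.0, 415.0, 445.0]   # metered kWh per interval
score = cvrmse(predicted, measured)
print(f"CV(RMSE) = {score:.3f}")           # e.g., flag the model if it exceeds 0.15
```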
Fault detection and diagnostics (FDD) layer 416 can be configured to provide ongoing fault detection for building subsystems 428, building subsystem devices (i.e., building equipment), and control algorithms used by demand response layer 414 and integrated control layer 418. FDD layer 416 can receive data inputs from integrated control layer 418, directly from one or more building subsystems or devices, or from another data source. FDD layer 416 can automatically diagnose and respond to detected faults. The responses to detected or diagnosed faults can include providing an alert message to a user, a maintenance scheduling system, or a control algorithm configured to attempt to repair the fault or to work around the fault.
FDD layer 416 can be configured to output a specific identification of the faulty component or cause of the fault (e.g., loose damper linkage) using detailed subsystem inputs available at building subsystem integration layer 420. In other exemplary embodiments, FDD layer 416 is configured to provide “fault” events to integrated control layer 418 which executes control strategies and policies in response to the received fault events. According to an exemplary embodiment, FDD layer 416 (or a policy executed by an integrated control engine or business rules engine) can shut down systems or direct control activities around faulty devices or systems to reduce energy waste, extend equipment life, or assure proper control response.
FDD layer 416 can be configured to store or access a variety of different system data stores (or data points for live data). FDD layer 416 can use some content of the data stores to identify faults at the equipment level (e.g., specific chiller, specific AHU, specific terminal unit, etc.) and other content to identify faults at component or subsystem levels. For example, building subsystems 428 can generate temporal (i.e., time-series) data indicating the performance of BMS 400 and the various components thereof. The data generated by building subsystems 428 can include measured or calculated values that exhibit statistical characteristics and provide information about how the corresponding system or process (e.g., a temperature control process, a flow control process, etc.) is performing in terms of error from its setpoint. These processes can be examined by FDD layer 416 to expose when the system begins to degrade in performance and alert a user to repair the fault before it becomes more severe.
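For illustration, FDD layer 416 could flag a degrading control process by comparing the recent average setpoint error of a time series against a baseline; the window size, baseline, and threshold factor below are assumptions.

```python
from statistics import mean

def detect_degradation(errors, window=12, baseline=1.0, factor=2.0):
    """Flag a control process whose recent average setpoint error has grown
    well beyond an assumed baseline (window/threshold values are illustrative)."""
    if len(errors) < window:
        return False
    recent = mean(abs(e) for e in errors[-window:])
    return recent > factor * baseline

# Time series of (measurement - setpoint) for a temperature control loop
errors = [0.4, -0.6, 0.8, 0.5] * 2 + [2.4, 2.9, -3.1, 2.7] * 2
if detect_degradation(errors):
    print("FDD: temperature loop error is degrading; notify maintenance")
```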
Referring now to
The registration system 512 may be configured to facilitate registration with the building 10. In some embodiments, a user registers for their appointment at the building 10 via an application. In some embodiments, the application provides Global Positioning System (GPS) coordinates to a user prior to and/or during the commute to building 10. The application may provide directions to the user that may include color coding and/or sound indicators. For example, on the application the user may be represented as a blue dot following a green path towards the building 10. Once inside the building 10, the application may also provide directions to the particular room or other location required to complete registration. In some embodiments, the application includes GPS instructions and assistance for several aspects of a visit to the building 10, such as providing guidance to another wing of building 10, providing guidance to a bathroom, and providing guidance to a new location to continue the visit at the building 10. In some embodiments, the MAS system 532 is included in the registration system 512, and is configured to facilitate medical appointment scheduling and coordinate travel to the appointment in a timely manner. Integration of the scheduling, check in, and navigation improves the user's ability to schedule and travel to appointments and allows for a more accurate schedule for the facility.
The scheduling system 514 may be configured to facilitate improved scheduling of the patients using the collected data at the command center engine 502. In some embodiments, the scheduling system 514 receives user preferences from the registration system 512 and uses that information to selectively schedule the time/date for the appointment of the patient. For example, during registration, the user is prompted (e.g., via an application, etc.) with questions regarding their preferences, such as preferred times to schedule an appointment. If, after completing registration, the user attempts to schedule a time or date that is outside of their preferred time slots (e.g., the user prefers morning slots and the user is attempting to schedule an appointment for the afternoon, etc.), the application may respond by suggesting a preferred time slot to the user. For example, the application responds with: "Thank you for booking! We've noticed that you prefer time slots in the mornings for your hospital appointments. Here are some suggested time slots that fit within your preferences, in case you would like to switch." The application may then provide one or more time slots on one or more days that fit within the preferences of the user.
In some embodiments, the scheduling system 514 facilitates scheduling of one or more patients based on several sets of aggregated data, not just the data from the registration system 512. For example, the BMS 400 may determine that one or more chillers configured to supply chilled fluid to a subsystem that cools the air in a particular zone of the building 10 are inoperable and, as such, the temperature cannot be properly controlled in that building zone. The command center engine 502 receives this information and the scheduling information from the scheduling system 514, which indicates a preferred appointment in that building zone. The application (e.g., which may be communicably connected to the command center engine 502, or query data from the command center engine 502, etc.) may respond to the requested scheduling by notifying the user that the building cannot presently receive scheduling appointments for that location of the building. In some embodiments, the operations of the one or more systems described within the supersystem 500 utilize data sets from one or more other systems within the supersystem 500.
The EMR system 516 may be configured to update and organize electronic medical records. In some embodiments, the updating and organizing of medical records is at least in part monitored by the command center engine 502. The command center engine 502 may be configured to receive information that helps facilitate the organizing of the medical records in a more efficient manner, using aggregated data from the other systems within the supersystem 500. For example, the command center engine 502 may aggregate scheduling preferences (e.g., preferred scheduled times, etc.) received from the scheduling system 514, caregiver preferences (e.g., gender, qualifications, etc.) (e.g., from the caregiver monitoring system 526, etc.), bed management information (e.g., preferred room type, etc.), and other information.
The laboratory monitoring system 518 may be configured to monitor and record laboratory operations before, during, or after laboratory sessions (e.g., surgeries, scans, etc.). In some embodiments, control signals provided to the laboratory rooms are adjusted based on decisions made by the command center engine 502. For example, the command center engine 502 receives information regarding user preferences for temperature, pressure, and humidity (TPH) levels within a room from the PPMS 534. The command center engine 502 also receives information on how the TPH levels in a laboratory are to comply with rules and regulations. The command center engine 502 then complies with the user preferences to the extent that they are in compliance with the rules and regulations of operation for the laboratory. Preferences of other users may also be considered, such as the doctor's preferences or the nurse's preferences.
The physiological monitoring system 520 may be configured to obtain real-time and/or historical data relating to the physiological operation of one or more patients. In some embodiments, the physiological monitoring system 520 provides the physiological information to the command center engine 502, such that the command center engine 502 can make decisions based on the physiological information. For example, the command center engine 502 may receive EMRs from the EMR system 516 and the physiological data of the same patient from the physiological monitoring system 520 and determine that the physiological data is significantly abnormal compared to the EMRs of the patient. The command center engine 502 may automatically provide a notification to the care team (e.g., the doctor, the nurse, etc.) that provides a warning related to the discovered information.
The bed management system 522 may be configured to facilitate appropriate bed management for the patients. In some embodiments, this can include assigning particular beds/rooms to patients based on their preferences (e.g., as determined by the PPMS 534). For example, a patient may prefer to be assigned to a bed with certain features (e.g., inclined back rest, additional pillows, queen-sized, etc.), which can be provided via the registration system 512. The command center engine 502 may analyze the registration preferences and assign the beds to the patients to satisfy the preferences. Of course, multiple types of preferences and/or data sets can be considered for making decisions in the physiological monitoring system 520, or any of the systems in supersystem 500.
Real time location system (RTLS) 524 may be configured to provide real-time monitoring of one or more patients arriving at the building 10, the occupants currently within the building 10, or a combination thereof. The RTLS 524 can provide directions to users after they are within a certain distance of the building 10. For example, after crossing a geo-fence (e.g., 1 mile from the building 10, 5 miles from the building 10, etc.), an application hosted on a user's device and communicably connected to the command center engine 502 updates the user on the directions for parking at the building 10, and continues to provide directions all the way to the appropriate room in the building 10, including providing directions while within the building 10. In some embodiments, the RTLS 524 tracks users (e.g., patients, building occupants, etc.) via GPS, radio-frequency identification (RFID), Wi-Fi signals, Bluetooth® signals, etc. For example, each occupant of a building may carry a device capable of RFID transmissions, such as a mobile phone, a badge or lanyard, a wristband, etc., which can be tracked by RFID transceivers positioned throughout the building. In some embodiments, the RTLS 524 utilizes facial and/or voice recognition to detect and track occupants.
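A geo-fence check of the kind described could be implemented with a great-circle distance test, as in the sketch below; the coordinates and the 1-mile radius are placeholders, not values from the disclosure.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))   # Earth radius ~3,958.8 miles

def crossed_geofence(user_lat, user_lon, bldg_lat, bldg_lon, radius_miles=1.0):
    return haversine_miles(user_lat, user_lon, bldg_lat, bldg_lon) <= radius_miles

# Placeholder coordinates for a user approaching building 10
if crossed_geofence(43.045, -87.909, 43.038, -87.906):
    print("Send parking and wayfinding directions to the user's device")
```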
The caregiver monitoring system 526 may be configured to provide updates to the care team of the patient using aggregated data within command center engine 502. For example, as described in greater detail below with reference to
The transport system 528 may be configured to facilitate transport of patients from the entrance of building 10 to the required room within building 10, or between multiple rooms within building 10. For example, an ambulance may arrive with a patient and automatically provide the command center engine 502 with an indication that the patient has arrived. The command center engine 502 may then query a database (e.g., maintained by the PPMS 534, as described below) that indicates one or more preferences for the user, such as preferred temperature, pressure, and humidity levels in the room. Thus, the room may be prepared for the patient prior to the patient arriving at the room.
The EVS system 530 may be configured to facilitate the cleaning and sanitizing of rooms, hallways, tables, and other components of building 10 (e.g., a hospital). The EVS system 530 may employ GPS tracking of the custodians and maintain a virtual cleaning log to display locations that have been cleaned and have yet to be cleaned. In some embodiments, the EVS system 530 is particularly equipped to handle sanitization after a detection of a contagious disease (e.g., COVID-19). For example, once COVID-19 has been detected in the facility, contact tracing may be determined based on cameras located within the building 10. The command center engine 502 may receive the video feeds from the cameras and the location of the potentially contagious area, determine the profiles of the potential carriers of the disease, and implement contact tracing/prevention.
The PPMS 534 is configured to determine, record, store, and/or retrieve various patient preferences. Patient preferences may include, for example, preferred temperature, pressure, and humidity levels in a room (e.g., a patient room), preferred lighting temperatures and intensities, preferred meal times, favorite movies or television shows, favorite musicians and bands, a patient's sleep schedule, etc., along with any of the other patient or user preferences described herein. In some embodiments, patient preferences are entered manually, such as by a patient or facility (e.g., hospital) staff when the patient is checking in or registering. In some embodiments, the PPMS 534 is configured to automatically determine patient preferences by recording room and/or schedule parameters over time (e.g., temperature, pressure, and humidity levels in the patient's room) and/or by analyzing the patient's social media accounts, bank and/or credit card purchases, search history, etc. For example, the PPMS 534 may receive preference information for a patient from a remote and/or third-party source (e.g., Google®, Facebook®, etc.). In some embodiments, the PPMS 534 may interface with EMR system 516 to determine patient preferences and/or to store patient preferences. In some embodiments, the PPMS 534 maintains patient profiles for recording patient preferences over time. Additional features of the PPMS 534 are described in greater detail below.
The event correlation engine 536 may be configured to monitor and detect patterns in patient care. For example, the event correlation engine 536 may monitor patient temperature checks over time, to determine an interval (i.e., frequency) between temperature checks. If a period of time (e.g., a time interval) passes without a temperature check, when one was expected, the event correlation engine 536 may transmit a notification to a care provider or other user to initiate a temperature check. In some embodiments, the event correlation engine 536 can identify correlations between building events (e.g., BMS 400 events) and clinical events. For example, the event correlation engine 536 may identify a correlation between the TPH of a patient room and patient recovery times or patient comfort.
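As a minimal sketch of the interval monitoring described above, the function below infers the typical interval between recorded temperature checks and flags when the time since the last check exceeds it; the tolerance factor and timestamps are assumptions.

```python
from datetime import datetime
from statistics import median

def missed_check(check_times, now, tolerance=1.5):
    """Infer the typical interval between temperature checks and flag when the
    time since the last check exceeds that interval by an assumed tolerance."""
    intervals = [b - a for a, b in zip(check_times, check_times[1:])]
    typical = median(intervals)
    return (now - check_times[-1]) > tolerance * typical

checks = [datetime(2024, 1, 1, h) for h in (8, 12, 16, 20)]   # checks every 4 hours
print(missed_check(checks, datetime(2024, 1, 2, 3)))          # True -> notify care provider
```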
Referring now to
The memory 608 (e.g., memory, memory unit, storage device, etc.) can include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage, etc.) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present application. The memory 608 can be or include volatile memory or non-volatile memory. The memory 608 can include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present application. According to an exemplary embodiment, the memory 608 is communicably connected to the processor 606 via the processing circuit 604 and includes computer code for executing (e.g., by the processing circuit 604 and/or the processor 606) one or more processes described herein. In some embodiments, the command center engine 502 is implemented within a single computer (e.g., one server, one housing, etc.). In various other embodiments the command center engine 502 can be distributed across multiple servers or computers (e.g., that can exist in distributed locations).
The communications interface 650 can facilitate communications between the command center engine 502 and other systems within supersystem 500 (e.g., the scheduling system 514, the bed management system 522, etc.) for allowing user control, monitoring, and adjustment to the command center engine 502 and/or the one or more systems in the supersystem 500. The communications interface 650 can be or include wired or wireless communications interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals, etc.) for conducting data communications within the supersystem 500 or other external systems or devices. In various embodiments, communications via the communications interface 650 can be direct (e.g., local wired or wireless communications) or via a communications network (e.g., a WAN, the Internet, a cellular network, etc.). For example, the communications interface 650 can include an Ethernet card and port for sending and receiving data via an Ethernet-based communications link or network. In another example, the communications interface 650 can include a Wi-Fi transceiver for communicating via a wireless communications network.
Memory 608 is shown to include clinical command center 610, including enterprise capacity optimization 614, critical care outreach 616, hospital operations 618, predictive analysis 620, care communication 622, early warning surveillance 624, pandemic management 626, and virtual care 628. The memory 608 is also shown to include facility command center 612, including building automation 630, fire systems 632, nurse call system 634, telecom operators 636, asset management 638, medical gas management 640, automated guided vehicles 642, and security 644. The memory 608 is shown to further include integrated command center 646.
Referring now to
In some embodiments, the sensors 720 may obtain information related to any type of manipulated or control variable within system 700. For example, the sensors 720 obtain measurements of the temperature, pressure, humidity, light intensity, blinds position, sound level, motion, or a combination thereof, and provide the sensor data to the data collector 702 in the command center engine 502. The data collector 702 may be configured to provide the state data of the patient room to the neural network 706.
The neural network 706 may be configured to make predictions regarding certain situations that may warrant the distribution of a notification, warning, or alarm for safety reasons. Accordingly, the neural network 706 may be any suitable type of neural network, such as a perceptron, feed forward, recurrent (RNN), deep feed forward, convolutional, residual, support vector machine (SVM), etc. In some embodiments, however, the neural network 706 may be replaced with other model or algorithm-based prediction methods (e.g., clustering, forecast, time series, etc.). While not shown, the neural network 706 may be trained using historical data. For example, the neural network may use data regarding the patient room 718, data regarding the patients that have previously been in the patient room 718, and/or metadata regarding all patients and patient rooms, to generate a model of “safe” conditions. As shown, the neural network 706 may be configured to output optimal TPH settings or levels for an area (e.g., a patient room) based on patient metadata.
For example, a 20-year-old male is put in the patient room 718 for a back injury. The neural network 706 uses training data including several months of data for different patients that have been placed in the patient room 718, historical data for 20-year-old males, and data for patients with back injuries over a historical time period. In some embodiments, the neural network 706 attempts to satisfy an objective function, wherein the goal of the objective function is to provide the appropriate sensor value (e.g., temperature) based on a number of factors. In the above example, the historical data indicating the temperature preference of 20-year-old males may be weighted differently than the temperature preference of patients with back injuries, which may be weighted differently than that of patients previously located in the patient room 718. This can result in a multi-weighted objective function, which can also include one or more constraints. In some embodiments, the constraints are provided to make sure that the manipulated variable conforms to general rules and regulations. For example, if the objective function indicated that the temperature should be 100° F., a constraint may forbid a control signal attempting to reach a 100° F. setpoint in the patient room 718 because the constraint does not allow temperature setpoints above 75° F.
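For illustration, the multi-weighted objective with constraints described in this example could be approximated by a weighted blend of per-factor preferences clamped to an allowed range, as in the sketch below; the weights, preference values, and the 65° F. floor are assumptions (the 75° F. ceiling mirrors the example above).

```python
def recommend_setpoint(preferences, weights, max_setpoint=75.0, min_setpoint=65.0):
    """Blend preference estimates with per-factor weights, then clamp to the
    allowed range (all weights, preferences, and limits here are assumptions)."""
    blended = sum(weights[k] * preferences[k] for k in preferences) / sum(weights.values())
    return max(min_setpoint, min(max_setpoint, blended))

# Historical temperature preferences (deg F) attributed to each factor
preferences = {"age_group": 71.0, "condition": 73.0, "room_history": 70.0}
weights     = {"age_group": 0.3,  "condition": 0.5,  "room_history": 0.2}
print(recommend_setpoint(preferences, weights))   # 71.8 -> within the 65-75 F constraint
```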
In some embodiments, the neural network 706 is shown to receive metadata of all patients, including the data of the patient located in the patient room 718. This data could include previously preferred manipulated variables (e.g., temperature, pressure, humidity, light intensity, sound level, etc.), setpoints from the patient, medical data related to the patient, or the data for any and all patients that have logged data with the command center engine 502. For example, the neural network 706 may receive data from one or more sensors or devices monitoring the vitals and other physiological parameters of the patient. The neural network 706 may be configured to adjust TPH values in the room based on the patient's vitals (e.g., cooling the room to lower the patient's core temperature).
In some embodiments, the neural network 706 also receives patient data indicating classification or diagnostic codes for the patient. These codes can include International Statistical Classification of Diseases and Related Health Problems (ICD) codes, Diagnosis-Related Group (DRG) codes, Current Procedural Terminology (CPT) codes, and/or any other similar type of classification or diagnostic code format that may be implemented by a healthcare facility. In some such embodiments, the neural network 706 includes these types of codes (e.g., as an input) when determining TPH values or other parameters for the patient's room. For example, a patient experiencing COVID-19 symptoms can be assigned ICD code "U07.1, 2019-nCoV acute respiratory disease" and, in response, the TPH or other parameters of the patient's room may be adjusted to not only improve patient care, but to help prevent or slow the spread of COVID-19 to other patients and/or areas within the building. In this example, the patient's room may be isolated from other areas of a hospital, such as by adjusting HVAC equipment and/or by changing TPH values of the room. In another example, a sleep protocol may be initiated in response to particular classification or diagnostic codes and may cause the lights in the patient's room to dim, the blinds to close, the temperature to lower, etc.
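A hedged sketch of how classification or diagnostic codes might map to room protocols is shown below; only the code U07.1 appears in the description above, and every setpoint, the insomnia code used for the sleep-protocol example, and the mapping itself are assumptions.

```python
# Hypothetical mapping from diagnostic codes to room protocols; all values assumed.
ROOM_PROTOCOLS = {
    "U07.1": {                       # 2019-nCoV acute respiratory disease
        "pressure": "negative",      # isolate the room from adjacent areas
        "min_air_changes_per_hour": 12,
        "relative_humidity_pct": (40, 60),
    },
    "G47.00": {                      # insomnia (example) -> sleep protocol
        "lights_pct": 10,
        "blinds": "closed",
        "temperature_offset_F": -2.0,
    },
}

def protocol_for(codes):
    """Return the room adjustments implied by a patient's diagnostic codes."""
    return {code: ROOM_PROTOCOLS[code] for code in codes if code in ROOM_PROTOCOLS}

print(protocol_for(["U07.1"]))
```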
In some embodiments, any of the data discussed above may also be used to train the neural network 706. After training, the neural network 706 may then be used to satisfy an objective function. Once the neural network 706 is implemented and a predicted manipulated variable value is determined by the neural network 706, the TPH manager 708 may generate control signals via the control signal generator 710 to satisfy the predicted manipulated variable value. In some embodiments, this includes setting the manipulated variable value as a setpoint and providing control signals to the HVAC equipment 712 to satisfy the setpoint.
As mentioned above, the command center engine 502 may be configured to provide early warning detection based on received data. As such, the trained neural network 706 may be able to determine if the manipulated variable(s) are abnormal such that they indicate an unsafe environment in the patient room 718. In such embodiments, the control signal generator 710 may provide notifications to the care team hub to help the patient. While not shown, the control signal generator 710 may also provide control signals to the HVAC equipment 712 to adjust the manipulated variable to a safe level. In some embodiments, early warning detection is based in part on the number, type, and/or frequency of patient temperature change requests (e.g., from an application for adjusting parameters of the patient room).
In some embodiments, the control signal generator 710 can provide warning notifications to the notification system 714. The notification system 714 may be a system communicably connected to workstations within building 10, one or more user devices of the occupants within building 10 (e.g., or will eventually be within building 10), or any combination thereof. For example, the control signal generator 710 provides a warning indicating that the temperature within the patient room 718 is too high for the patient, who is particularly sensitive to high temperatures. A building manager receives the notification and supervises the adjustment of the temperature setpoint down to a safe level.
In some embodiments, command center engine 502 may know the room TPH values and how most people should feel within the room (e.g., based on neural network functionality described above, etc.). For example, if a patient requests a temperature change in the room, command center engine 502 may then use demographic information (e.g., height, weight, gender, medical history, etc.) to determine if the care team needs to be alerted, or if it is merely a preferential temperature change.
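As a non-limiting sketch of the distinction drawn above between a preferential request and one that warrants a care team alert, the following Python snippet compares a requested temperature against an expected comfort range. The function name, the tolerance, and the example ranges are illustrative assumptions, not values taken from any embodiment.

```python
def needs_care_team_alert(requested_temp_c: float,
                          expected_comfort_range_c: tuple[float, float],
                          tolerance_c: float = 2.0) -> bool:
    """Return True when a requested temperature falls far enough outside the
    range comparable occupants would find comfortable that the request may
    indicate a clinical concern rather than a simple preference."""
    low, high = expected_comfort_range_c
    return requested_temp_c < low - tolerance_c or requested_temp_c > high + tolerance_c

# A request for 16 C when comparable occupants prefer 20-24 C triggers an alert.
print(needs_care_team_alert(16.0, (20.0, 24.0)))   # True
print(needs_care_team_alert(22.5, (20.0, 24.0)))   # False
```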
Command center engine 502 may be configured to integrate any and all systems within supersystem 500. In some embodiments, supersystem 500 is configured to improve noise levels for patients in their rooms, improve energy efficiency within BMS 400, improve sleep quality for patients, reduce infection risk, improve care team response times (e.g., notifying the closest nurse and not necessarily the assigned nurse, etc.), and improve security response times.
The integration provided by the command center engine 502 allows the system to provide improved service and facility efficiency. In some embodiments, if a service (e.g., an MRI machine, a vaccination, etc.) is backed up or is experiencing longer than usual wait times at one location (e.g., building 10, etc.), command center engine 502 can provide a notification to the user and provide an option to receive service at an alternative location in the network, such as at a hospital at another location. The recommendation of an alternate location can include a prompt to accept or reject the recommended alternate location. In some embodiments, the command center engine 502 utilizes an algorithm based on travel times from a user's current location to the available buildings and the wait times at the available buildings and recommends the shortest total time to service. For example, a 90-minute wait may exist at the scheduled location, but a location 3 miles away may have immediate availability. The selection of an alternate location can reduce the time it takes the user to receive service and also improve the efficiency of each building. For example, a building with long wait times may experience a crowded waiting room and irritated patients.
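A minimal, non-limiting sketch of the shortest-total-time selection described above is shown below in Python. The dictionary keys and the candidate data are illustrative assumptions used only to show the arithmetic of travel time plus wait time.

```python
def recommend_location(locations: list[dict]) -> dict:
    """Pick the location minimizing travel time plus wait time (minutes).
    Each entry is assumed to look like {"name": ..., "travel_min": ..., "wait_min": ...}."""
    return min(locations, key=lambda loc: loc["travel_min"] + loc["wait_min"])

candidates = [
    {"name": "scheduled location", "travel_min": 0, "wait_min": 90},
    {"name": "clinic 3 miles away", "travel_min": 12, "wait_min": 0},
]
print(recommend_location(candidates)["name"])   # clinic 3 miles away
```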
In some embodiments, the command center engine 502 also considers a type of care required for the patient. In some such embodiments, the type of care required for the patient can be determined based on classification or diagnostic codes, such as the ICD, DRG, and/or CPT codes discussed above. For example, the command center engine 502 may identify only those alternate locations that are capable of treating a patient associated with specific classification or diagnostic codes. In some embodiments, a determination that a location is suitable for treating a patient is made based on the equipment, staffing, and/or capabilities of the location. For example, a location without an MRI machine may not be suitable to treat a patient that is classified as having a head injury. As another example, a location that does not staff pediatric physicians may be less suitable for a patient under 10 years old than a location that does staff pediatric physicians.
In some embodiments, the command center engine 502 is configured to rank and/or score facilities based on the patient's required type of care (e.g., based on ICD, DRG, and/or CPT codes). In some such embodiments, the command center engine 502 may calculate and assign a rating (e.g., on a scale from zero to five, based on a number of stars, as a percentage, etc.) for each identified alternate location, which could be presented to a patient when recommending alternate locations. For example, a patient may be presented with an interface that lists a plurality of potential alternate locations, a wait time and/or distance for each alternate location, and a star-rating for each location. Accordingly, in this example, the patient can weigh whether a longer wait time at a first, five-star rated location would be preferable over a shorter wait time at a second, four-star rated location.
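As a non-limiting illustration of the ranking described above, the following Python sketch sorts candidate locations by star rating, then wait time, then distance, before presenting them to the patient. The field names and example values are assumptions for illustration only.

```python
def rank_alternate_locations(locations: list[dict]) -> list[dict]:
    """Sort candidate facilities so the highest-rated, shortest-wait options
    appear first; the star rating is assumed to reflect suitability for the
    patient's required type of care (e.g., driven by ICD/DRG/CPT codes)."""
    return sorted(locations, key=lambda loc: (-loc["stars"], loc["wait_min"], loc["miles"]))

options = [
    {"name": "Location A", "stars": 5, "wait_min": 45, "miles": 8.0},
    {"name": "Location B", "stars": 4, "wait_min": 10, "miles": 2.5},
]
for option in rank_alternate_locations(options):
    print(f'{option["name"]}: {option["stars"]} stars / {option["wait_min"]} min / {option["miles"]} mi')
```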
The ability to recommend alternate locations has the effect of spreading the patient load between facilities and reducing the number of individuals positioned in waiting rooms. The reduction of individuals in waiting rooms provides a number of benefits including the reduced likelihood of transmission of illness. The command center engine 502 can coordinate scheduling changes and navigation changes when a user selects or accepts an alternate location. Coordination of scheduling includes reserving appointment slots, adjusting doctor or caregiver schedules, coordinating room allocation and parameter control, etc. Alternatively, the command center engine 502 can maintain the currently selected location if the user rejects the recommendation (e.g., the user may wish to go to a building with a longer wait time if the building is also convenient for other reasons such as visiting a friend or running errands nearby).
Referring now to
In some embodiments, an occupant arrives at a parking structure of the building 10. Either prior to arriving or after arriving at the parking structure, the user, using a user device 818, provides preferred parking criteria, such as proximity to the entrance of the building 10, handicapped parking, or requested valet services. The parking preference manager 802 receives this information and available parking information from parking spots database 804 and provides one or more available parking spots to parking manager 808 that satisfy the user's preferences. If no spots that satisfy the user's preferences are available, the parking preference manager 802 may provide acceptable parking spots to the parking manager based on other criteria (e.g., closest to the building 10, etc.).
The parking manager 808 may be configured to receive the user location from GPS manager 810 and the acceptable parking spots from the parking preference manager 802 and determine a selected parking spot for the user. The parking manager 808 may also be able to determine when the user is close to arriving or has arrived at the parking spot. In some embodiments, the parking manager 808 determines that the user is close to, or at, a parking structure based on GPS data from the user's device or vehicle, or based on a license plate recognition (LPR) system that detects the user's vehicle (e.g., using cameras positioned at an entrance to a parking structure).
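One possible, non-limiting way to express the preference-then-fallback spot selection described above is sketched in Python below. The spot fields, the preference format, and the fallback criterion (closest to the entrance) are assumptions for illustration.

```python
def select_parking_spot(available: list[dict], prefs: dict):
    """Return the first available spot satisfying all of the user's preferences;
    otherwise fall back to the available spot closest to the entrance."""
    def matches(spot: dict) -> bool:
        return all(spot.get(key) == value for key, value in prefs.items())

    preferred = [spot for spot in available if matches(spot)]
    if preferred:
        return preferred[0]
    return min(available, key=lambda spot: spot["distance_to_entrance_m"], default=None)

spots = [
    {"id": "2A-14", "handicapped": False, "distance_to_entrance_m": 40},
    {"id": "1C-02", "handicapped": True, "distance_to_entrance_m": 15},
]
print(select_parking_spot(spots, {"handicapped": True})["id"])   # 1C-02
```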
In the event that the user has requested valet services, the parking manager 808 may send a notification to the valet notification manager 814 when the occupant is close (e.g., 1 mile, 1000 m, 100 m, etc.) to the parking structure, indicating that the valet services should be ready to assist. In the event that the user has not requested valet services when arriving, the parking manager 808 may send a signal to parking lights manager 812 to illuminate the parking spot for the occupant. In some embodiments, the illumination is a particular color that the user is told is for them via the application 816. In some embodiments, the GPS manager 810 provides directions directly to the appropriate spot chosen by the parking manager 808 for the occupant, with the directions provided via the application 816.
In some embodiments, the parking manager 808 may be configured to send information directly to the application 816, such as the chosen parking spot, the parking spot location (e.g., 2nd floor, row A, etc.) and directions to the parking spot. In some embodiments, the occupant has indicated via the application 816 that the vehicle is a self-driving vehicle. In such embodiments, when the occupant has reached a reasonable proximity to the parking spot (e.g., entering the parking structure, getting near the building, etc.), the command center engine 502 may take partial or full control of the vehicle to guide the occupant's vehicle to the chosen parking spot. In some embodiments, the parking manager 808 interfaces with a remote and/or third-party lighting system installed in the parking structure and can utilize the lighting system to guide the user to a parking spot or to identify an available parking spot. For example, a parking structure lighting system may include one or more lights (e.g., a set of lights above each parking spot), in some cases of varying or variable colors, that can indicate whether a parking spot is available, reserved, or currently occupied. The user may receive a notification (e.g., a text message, a push notification, etc.) instructing the user to “follow the lights” through the parking structure to a reserved and/or available spot. In some embodiments, the parking manager 808 tracks the user's movement through the parking structure (e.g., via the LPR system) and activates lights along the user's path to guide the user to a parking spot. In some embodiments, the parking manager 808 is also configured to transmit a notification to the staff of a facility (e.g., hospital staff) when a user (e.g., a patient) arrives at a parking structure and/or parks their vehicle.
In some embodiments, the command center engine 502 may be aware of payments that the occupant still has to make, apart from the payment for the parking spot. For example, the occupant may need to pay for a prescription after completing an appointment. The payments manager 806 can accordingly receive information from a variety of remote and/or third-party systems, such as a pharmacy management system (not shown), and can facilitate payment to these remote and/or third-party systems at the same time as a payment for a parking spot. For example, the payments manager 806 can provide a combined transaction request to the user device 818 via the application 816 allowing the user to pay for several transactions at once. In some embodiments, the command center engine 502 can also transmit a notification to valet staff (e.g., via valet notification manager 814) instructing the valet staff to pick up a user's prescriptions while the user is checking-out from an appointment, or when the user is ready to leave (e.g., when the user requests their vehicle from the valet). In this manner, the user may save time by receiving their prescriptions from the valet at the same time that they retrieve their vehicle, rather than making a separate trip to a pharmacy.
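A minimal, non-limiting sketch of combining parking and third-party charges into one transaction request is shown below in Python. The line-item structure, the currency, and the amounts are illustrative assumptions rather than details of any embodiment.

```python
def build_combined_transaction(line_items: list[dict]) -> dict:
    """Aggregate parking and third-party charges (e.g., a pharmacy balance)
    into a single transaction request that could be pushed to the user's application."""
    total = round(sum(item["amount"] for item in line_items), 2)
    return {"line_items": line_items, "total": total, "currency": "USD"}

request = build_combined_transaction([
    {"description": "Parking, 3 hours", "amount": 9.00},
    {"description": "Prescription copay", "amount": 15.50},
])
print(request["total"])   # 24.5
```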
Referring now to
The building 10 is shown to include a command center engine 502 and a virtual command center 902. The virtual command center 902 may be configured to control at least a portion of one or more of the drone buildings. In some embodiments, the virtual command center 902 controls a single patient room (e.g., a surgical room), that can be better managed by the intelligence (e.g., processing circuitry, processing power, memory space, etc.) at the building 10 than the intelligence at the respective drone building. The building 10 is shown to receive data from the drone buildings via network 446. In some embodiments, this data includes real-time operation of the control zone, historical data of the control zone, system and subsystem layouts, or a combination thereof.
For example, the drone building A provides temperature, pressure, and humidity data to the building 10 for a surgical room at drone building A. The virtual command center 902 receives this data and, in response to at least one of a user preference, optimization, or regulatory conditions, the virtual command center 902 provides control signals to the HVAC equipment in the drone building A to satisfy the intended requests of the virtual command center 902.
In some embodiments, the virtual command center 902 of the building 10 can control rooms of the drone building A to provide functionality that is different from the functionality originally provided by the rooms of the drone building A. For example, general patient rooms of the drone building A may be controlled by the virtual command center 902 to provide TPH and other parameters that are conducive to using the general patient rooms for surgery, infectious disease control, or another specific use. The ability to remotely control rooms of the drone building A allows for an expanded flexibility and use of the drone building A while taking advantage of the more robust intelligence of building 10. In some embodiments, the virtual command center 902 can implement room uses for the drone building A that are outside the original design of the drone building A. For example, the virtual command center 902 may receive historical sensor information from the drone building A and recognize (e.g., using a machine learning or neural network system) how to control TPH and/or other parameters to provide an overall atmosphere that is appropriate for a desired procedure or use of the drone building A.
In some embodiments, virtual command center 902 of building 10 can control a basic care control room in drone building A to provide required assistance. For example, virtual command center 902 can provide a negative pressure to allow emergency care that the remote clinic was not designed for initially. Additionally, the virtual command center 902 can facilitate communication between doctors of the building 10 and the drone building A to improve the available resources and expertise available for procedures within the drone building A. For example, an emergency surgery in drone building A can include control of room parameters by the virtual command center 902 of the building 10 and a specialized doctor at building 10 can provide guidance to a doctor conducting the emergency surgery at the drone building A. The virtual command center 902 can facilitate the communication by analyzing schedules of doctors at the building 10 or another building, identifying an appropriate expert, adding the emergency surgery or other procedure to the expert's schedule, and providing a communication platform for connecting the two remotely located doctors.
Referring now to
In some embodiments, the patient room 1002 provides sensor data for one or more manipulated variables to the data collector 1016. The data collector 1016 may also receive patient preferences from the user profiles 704. The data collector 1016 may then send this information to the control signal generator 1018, so that the control signal generator 1018 can provide comfort to the patient without having the patient request the adjustments to the manipulated variable. In some embodiments, this information is combined with the processed digital video feed to determine when the patient has arrived in the patient room 1002. In some embodiments (not shown), the command center engine 502 can perform facial recognition to determine which person (i.e., the patient) has arrived in the patient room 1002, and provide comfortability adjustments for the patient. In other embodiments, the command center engine 502 can determine that a person has arrived in the patient room 1002 based on RTLS data (e.g., from an RTLS enabled badge or wristband carried by a patient, physician, etc.). Other reasons for adjusting the manipulated variables can be considered too, such as the command center engine 502 determining that the patient has re-entered the room and playing a voice message for the patient, such as “Welcome back, [Name]. We've adjusted the room to your liking. Please feel free to change any settings via the application.”
The system 1000 is structured to recognize the patient's needs without input, using the sensor arrays and the video feeds. The system 1000 is also integrated with the scheduling system so that procedures and appointments are recognized by the system 1000 and integrated into the care provided by the sentient patient room. For example, the system 1000 may utilize historical demographic information to predict a base profile policy for the patient (e.g., the average individual matching the age, gender, nationality, etc. of the patient defines a base profile of preferences), receive inputs and preferences from the patient before a stay in the sentient patient room (e.g., favorite sports team, favorite color, pictures from a past vacation, favorite authors, normal sleep temperature, favorite scents, etc.) that allow the system 1000 to update the base profile policy to provide a customized profile policy in view of patient inputs, and continue to update the customized profile policy using machine learning or artificial intelligence (e.g., neural networks, reinforcement learning, etc.) to improve a response of the sentient patient room by the system 1000 to the patient's activities and actions. For example, the system 1000 may receive feedback from the patient about actions implemented by the system 1000 (e.g., thumbs up or thumbs down in response to an implemented change).
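One possible, non-limiting way to express the feedback-driven update of a customized profile policy is sketched below in Python. The policy representation (a weight per action), the update step, and the action name are illustrative assumptions and stand in for whatever learning scheme a given embodiment might use.

```python
def update_profile_policy(policy: dict, action: str, feedback: str,
                          step: float = 0.1) -> dict:
    """Nudge the customized profile policy toward actions the patient approved
    (thumbs up) and away from actions they rejected (thumbs down). The policy
    maps an action name to a preference weight clamped to [0, 1]."""
    weight = policy.get(action, 0.5)
    delta = step if feedback == "up" else -step
    policy[action] = min(1.0, max(0.0, weight + delta))
    return policy

policy = {"dim_lights_on_return": 0.5}
update_profile_policy(policy, "dim_lights_on_return", "up")
print(policy)   # {'dim_lights_on_return': 0.6}
```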
The system 1000 receives input from the scheduling system to understand why the patient is residing in the sentient patient room (e.g., a surgery and recovery are scheduled, etc.), and automatically responds to scheduled activities and patient actions to provide care. For example, the system 1000 coordinates cleaning and sanitization activities to align with times the patient will be out of the room, adjusts temperatures before a patient arrives to the room and after a patient leaves, changes meal timing based on schedules and patient reactions (e.g., if a patient is sleeping soundly, no meals are provided to wake them), adjusts lighting in the room (e.g., provides dim light upon return from a stressful appointment), provides inspiration after a difficult appointment (e.g., quotes, movies, music, smells, sounds, etc.), presents movies or music based on activities of the patient (e.g., recognizing boredom via the patient switching rapidly between activities), welcomes the patient back to the room after an appointment (e.g., “Welcome back Joe, you did great!”), provides a reel of pictures provided by the patient or associates of the patient (e.g., family members can provide a stream of photos or videos to provide encouragement to the patient), and recognizes that additional individuals (e.g., family or friends) are in the sentient patient room and adjusts operation to better accommodate the group. The system 1000 can utilize big data (e.g., purchase history, ad info, etc.) to improve the base profile policy and more accurately predict how the patient will prefer the sentient patient room to react to various actions. Other actions and functionality of the sentient patient room can be provided by the system 1000 within the scope of this concept. An artificial intelligence engine can be used to predict and respond to patient actions and activities to improve patient care.
In some embodiments, system 1000 is configured to improve the patients' and clinicians' surroundings, including lighting conditions, air temperature, privacy, and noise. In some embodiments, this can make the difference for a successful intervention of the care team. System 1000 may be configured to decrease response delays, decrease distractions from focusing on patient care, decrease HVAC system lagging, improve readiness of access to patient information, and decrease potential negative patient outcomes.
In some embodiments, system 1000 speeds staff response (e.g., in an emergency situation when every second could influence patient survival, etc.). Care team staff may receive immediate notifications with patient status, as well as room number to help with wayfinding. Automated controls can shift room features to the optimal setting, and the healthcare team can be free to focus immediately and fully on assessing, resuscitating, and otherwise stabilizing the patient. System 1000 may provide care team notifications with room number and staff arrival status, patient event dashboard(s) including meals, medications, and/or allergies, automated HVAC zone temperature changes, automated controls for lighting, TV, shades, and other room settings, or any combination thereof.
Referring now to
The patient room 1002 is shown to include the patient 1102, a user device 1104, a room application 1106, TPH sensors 1112, lighting sensors 1116, friends/family 1118, and the command center engine 502. The friends/family 1118 located within the patient room 1002 provide updates or requests to the application 1106, along with the patient 1102. In some embodiments, there is a hierarchy of which requests can be considered for implementation.
TPH sensors 1112 may be configured to monitor environmental conditions within the patient room 1002. Lighting sensors 1114 may be configured to monitor the amount of light within patient room 1002 (e.g., lumens, etc.). Friends/family 1118 may be the friends and/or family of the patient that has entered patient room 1002. User device 1104 may be any device capable of accessing any one of the systems within supersystem 500 via the Internet, an application, or any combination thereof, such as a smartphone or tablet. Room application 1106 may be hosted on premise (e.g., within a server in building 10, etc.) or off-premise (e.g., stored on a server at a datacenter, etc.), and may be hosted as an application on user device 1104.
For example, if the patient 1102 requests a temperature decrease, while a friend of the friends and family 1118 requests a temperature increase, the room application 1106 (e.g., via a control signal manager 1108, etc.) may implement a control signal to satisfy the request of the patient 1102. In some embodiments, one or more requests to adjust manipulated variables from any of the occupants in the patient room 1002 can be implemented simultaneously. In some embodiments, friends and family 1118 are located outside of the patient room 1002, and can similarly make requests. In the above example relating to hierarchy, requests from the friends and family 1118 may be lower in the hierarchy than requests from the occupants within the patient room 1002.
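A minimal, non-limiting sketch of the request hierarchy described above is shown below in Python. The role names, priority ordering, and request structure are illustrative assumptions chosen to show how the patient's request could win over competing visitor requests.

```python
# Lower number means higher priority in the illustrative request hierarchy.
REQUEST_PRIORITY = {"patient": 0, "in_room_visitor": 1, "remote_visitor": 2}

def resolve_setpoint_request(requests: list[dict]) -> dict:
    """Given competing manipulated-variable requests, select the one from the
    highest-priority occupant (the patient wins over in-room and remote visitors)."""
    return min(requests, key=lambda req: REQUEST_PRIORITY[req["role"]])

requests = [
    {"role": "in_room_visitor", "variable": "temperature_c", "value": 24.0},
    {"role": "patient", "variable": "temperature_c", "value": 20.0},
]
print(resolve_setpoint_request(requests))   # the patient's request
```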
Control signal manager 1108 may be configured to provide control signals to HVAC equipment, provide care team updates to the care team of the patient 1102, and provide room preference changes to one or more building occupants (e.g., hospital administrator, etc.).
Referring now to
In some embodiments, system 1200 is configured to receive occupant data (e.g., facial recognition data from one or more occupants that enter building 10, etc.), process the facial recognition data, determine the detected occupant, and provide control actions based on the detected occupant. Video feed receiver 1202 may be configured to receive video feed (e.g., live video feed, etc.) and process the data such that it is readable by facial recognition manager 1204. Facial recognition manager 1204 may be configured to determine the occupant detected from the live video feed based on stored user information (e.g., user profiles, etc.) from user profiles 1206.
Facial recognition manager 1204 may be configured to provide the detected occupant to security manager 1208 and notification system 1210. In some embodiments, security manager 1208 is configured to determine whether the incoming occupant is known, is a threat, requires security attention, or a combination thereof. Notification system 1210 may be configured to determine one or more registration updates (e.g., patient profiles in the building, etc.) and provide that information to registration system 512. In some embodiments, notification system 1210 may also be configured to provide patient updates to care team hub 716, in the event that the occupant is the patient and the care team can be updated about the incoming patient. While not shown in
As shown in
In some embodiments, the emergency includes a code blue event. Code blue is the most universally recognized emergency code within a hospital setting and indicates that there is a medical emergency occurring within the hospital. Healthcare providers can choose to activate a code blue event, typically by pushing an emergency alert button or dialing a specific phone number, if they feel the life of the person they are treating is in immediate danger. Many hospitals have a code blue team who will respond to the code blue event as quickly as possible (e.g., within minutes). The code blue team may include doctors, nurses, a respiratory therapist, and a pharmacist. Some common reasons for activating a code blue event can include cardiac arrest, respiratory arrest, severe confusion, lack of alertness or consciousness, signs of stroke, and/or a sudden and severe drop in blood pressure. With any code blue medical event, patient safety rises above all else. For the critical responses these situations demand, every second counts. Precision, focus, and efficiency matter, and so does the room environment where these lifesaving measures unfold. Optimizing the patient's and clinicians' surroundings, including lighting conditions, air temperature, privacy, and noise, can tip the scales for a successful intervention. Controlling a code blue environment builds a foundation for the best possible outcomes. Current systems suffer from response delays, including finding and adjusting features in the patient's room, distractions from focusing on patient care, HVAC system lag in changing zone temperature, lack of ready access to patient information, and other room-specific issues that can lead to less desirable patient outcomes.
The intelligent code blue method 1300 speeds staff response when every second is key to patient survival. Care team staff receive immediate notifications with patient status, as well as room number to help with wayfinding. Automated controls shift room features and/or parameters to more optimized settings, keeping the healthcare team free to focus on immediately and fully assessing, resuscitating, and otherwise stabilizing the patient. Features of the intelligent code blue method 1300 include one-button code blue launch, care team notifications with room number and staff arrival status, patient event dashboard including meals, medications, and allergies, automated HVAC zone temperature change, automated controls for lighting, TV, shades, and room settings, and seamless integration with building systems and technology. The systems described above provide the infrastructure to enable the use of the intelligent code blue method 1300, and the benefits of the intelligent code blue method 1300 include greater operational efficiency, improved critical response team productivity, greater patient satisfaction with higher net promoter scores, improved HCAHPS scores, and enhanced hospital image. The critical response team can focus immediately on the patient, rather than the room environment, saving time when every second may affect patient outcomes. Advanced messaging, alarms, and notification lookups quickly notify critical response team members for participation in person or by video. Room devices can be monitored and controlled without human intervention. The intelligent code blue method 1300 automates the change to optimal room conditions, supporting the potential for positive outcomes. Fully digital, app-based controls adjust the patient room technologies automatically. Digital tools minimize physical touches by staff and patients, while increasing flexibility with options such as broadcasting a collaboration video to the room's TV. Optimizing the code blue response increases staff satisfaction and promotes more effective collaboration among in-person and video participants. As patient care improves, patient and staff satisfaction increases and the hospital's positive reputation grows.
At step 1304, the patient is admitted to a room and the details of the patient and required medical assistance are entered into the command center engine 502 so that scheduling and coordination as discussed above can be integrated. In some embodiments, the patient is checked into a smart room or a sentient patient room. In some embodiments, the patient's preferences, historical information, and/or demographic information are loaded into the command center engine 502, and more specifically into PPMS 534, as described above.
At step 1308, a clinician, other hospital worker, or any of the automated check in systems discussed herein admit the patient into an admission, discharge, and transfer (ADT) system of the hospital. In some embodiments, the ADT is built into the command center engine 502 and the patient is automatically checked in upon arrival at the facility. In some embodiments, the patient admission is initialized by the patient at step 1304 and finalized at step 1308.
At step 1312, a clinician and/or a care team is assigned to the patient. In some embodiments, the clinician is assigned immediately after the patient is admitted and the assignment is not affected by or made in response to the symptoms recognized in step 1316, as described below. In some embodiments, the clinician is assigned after step 1316 and the assignment is based at least in part on the types of symptoms observed by the system.
At step 1316, the patient begins to experience symptoms that may lead to a code blue event. In some embodiments, the symptoms can include shortness of breath, chest pain, or increased heart rate. In some embodiments, the symptoms are recognized automatically by the sentient patient room via direct monitoring (e.g., a heart rate monitor) or via intelligent observation (e.g., patient is clutching chest, wincing, etc.).
At step 1320, a code blue button is optionally activated (e.g., a friend or family member presses an RN button on the nurse call pillow speaker) and a notification is sent to the assigned clinician. In some embodiments, the code blue button is engaged automatically by the command center engine 502 or another system of the sentient patient room in response to the symptoms recognized in step 1316. In some such embodiments, an automated code blue system or method may be implemented that automatically detects and/or initiates a code blue event (e.g., based on the patient's vital signs). The notification provides the clinician with instant access to the symptoms and any other information (e.g., classification and/or diagnostic codes, as discussed above) that triggered the engagement of the code blue button. When the clinician arrives in the patient's room, he or she already has information on the code blue event and can focus on confirmation of the notification information.
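Purely as a non-limiting illustration of automatic initiation based on vital signs, the following Python sketch flags an event when monitored vitals leave an example band. The thresholds, field names, and logic are assumptions made for illustration only and do not represent clinically validated criteria or any particular embodiment.

```python
def detect_code_blue(vitals: dict) -> bool:
    """Rough illustration of automatic code blue initiation from vitals:
    flag the event when heart rate or oxygen saturation leaves an example
    critical band. Real clinical thresholds and logic would differ."""
    heart_rate = vitals.get("heart_rate_bpm")
    spo2 = vitals.get("spo2_pct")
    return (heart_rate is not None and (heart_rate < 30 or heart_rate > 180)) or \
           (spo2 is not None and spo2 < 80)

print(detect_code_blue({"heart_rate_bpm": 72, "spo2_pct": 97}))   # False
print(detect_code_blue({"heart_rate_bpm": 25, "spo2_pct": 85}))   # True
```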
At step 1324, the clinician confirms the code blue event and the intelligent code blue method 1300 engages the full response team to mitigate the problems associated with the code blue event.
At step 1328, the patient room responds to the confirmed code blue event and automatically adjusts the temperature, pressure, and humidity and/or any other room systems (e.g., blinds, air purification, etc.) to provide an optimum environment for critical care. In some embodiments, the temperature and/or humidity of the room are automatically lowered in response to a code blue event. Lowering the temperature and/or humidity of the room may not only benefit the patient, but can benefit a care team and/or the assigned clinician, who may have to rush (e.g., run) to the patient's room, causing increased body temperatures and respirations, etc. As the room is automatically adjusting conditions for response, a notification is sent via the command center engine 502 to the code blue response team.
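A minimal, non-limiting sketch of the automatic room response at step 1328 is shown below in Python. The specific setpoints, the lighting and shade actions, and the field names are illustrative assumptions; the only behavior drawn from the description above is that temperature and humidity are lowered when a code blue event is confirmed.

```python
def code_blue_room_actions(current: dict) -> dict:
    """Return example room setpoints applied automatically when a code blue
    event is confirmed: lower temperature and humidity, raise lighting, and
    open shades so the care team can focus on the patient, not the room."""
    return {
        "temperature_c": min(current.get("temperature_c", 22.0), 20.0),
        "relative_humidity_pct": min(current.get("relative_humidity_pct", 50.0), 40.0),
        "lighting_pct": 100,
        "shades": "open",
    }

print(code_blue_room_actions({"temperature_c": 24.0, "relative_humidity_pct": 55.0}))
```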
At step 1332, the assigned clinician administers immediate resuscitation or other code blue procedures and awaits the arrival of the code blue team. In some embodiments, a camera and/or display (e.g., a television) positioned in the patient's room is also activated, allowing a remote supervising clinician to support the assigned clinician in administering code blue procedures.
At step 1336, the code blue team arrives and stabilizes the patient. The command center engine 502 receives input through the code blue event and can automatically schedule a care room or facility that is appropriate for the patient's care after the code blue event. For example, if an emergency surgery is required in view of the code blue event, the command center engine 502 automatically schedules an OR and the code blue team is provided with wayfinding information for transferring the patient to the scheduled OR.
The patient or family of the patient can push a button to call a nurse (e.g., via a pillow speaker, etc.). The clinician can get notified and go to the patient's room to find the patient in distress. Subsequently, the clinician can engage an emergency alert (e.g., via a button, etc.). The patient's room is then optimized for the ideal care of the patient. For example, if the patient had adjusted the TPH levels to their preference, but the levels were not necessarily ideal for their health, the button pressed by the clinician can return the TPH levels to levels that are optimal for the patient's health, even if it compromises some of the comfort of the patient. In addition, the emergency button engaged by the clinician can send an alert to a critical care team, who can respond and assist in helping the patient.
Referring now to
As shown in
In some embodiments, the integration engine 1404 provides information to the nurse call system 1412 to coordinate activities of the clinical staff. In some embodiments, the integration engine 1404 receives information from the nurse call system 1412 in the form of patient calls (e.g., a patient or other room occupant pressing the nurse call button), nurse and other clinical staff scheduling information (e.g., who is currently staffed, on-call individuals, clinician locations within the hospital, etc.), and/or other information.
In some embodiments, the integration engine 1404 provides information to the building automation system 1412 including room identity information (e.g., a room ID code, a room number, a room location, etc.), current patient information, room conditions at the time of a response event in the form of a code blue event, and other information, as desired. In some embodiments, the integration engine 1404 receives information from the building automation system 1412 including historical building information, operational status information for building systems (e.g., air handlers, chillers, door open status, air quality, temperature, pressure, humidity, etc.), or any other information available to the building automation system 1412.
In some embodiments, the wayfinder system 1420 is provided via lighted floors, video panels arranged in the hallways of the hospital, audio speakers, overhead lighting, or other physical elements within the hospital. In some embodiments, the wayfinder system 1420 is provided via a graphical user interface (GUI) of an application that can be accessed by a care provider via a smart phone, tablet, or other device. The GUI can provide real time location information and directions through the hospital. The wayfinder system 1420 can provide assistance to medical staff responding to the code blue event and reduce the time required for travel to the room where the code blue event is happening. The wayfinder system 1420 sends location information of a user (e.g., a medical professional) to the integration engine 1404 and receives information (e.g., directions, maps, etc.) from the integration engine 1404.
The integration engine 1404 communicates with a room control system 1424 to enact actions of the intelligent code blue system architecture 1400 (e.g., actions of the method 1300, response actions to a patient fall emergency, response actions to a contagious disease emergency, etc.). In some embodiments, the room control system 1424 controls room features including an entertainment center 1428, an HVAC system 1432, a lighting system 1436, a shades system 1440, an alert signal system 1444, a video system 1448, an oxygen supplement system 1450, and/or other systems. In some embodiments, the room control system 1424 is in direct control of the room features. For example, the room control system 1424 can include or be a part of the BMS 400 discussed above and in control of HVAC systems and subsystems and other room features. In some embodiments, the room control system 1424 controls one or more room features, but not all room features, as desired. For example, the room control system 1424 can control the HVAC system 1432 and the lighting system 1436, but not the entertainment center 1428. In some embodiments, the room control system 1424 provides instructions to systems and subsystems associated with the room features to provide a coordinated response to the code blue event. For example, the room features may be controlled by any combination of local controllers and offsite controllers and the room control system 1424 provides instructions to the controllers (e.g., local, distributed, cloud based, off site, etc.) to enact the desired actions in response to the code blue event.
As shown in
In some embodiments, the nurse call system 1412 is structured to output a text string. The snap box 1452 is structured to parse the text string into data packets and provide the data packets to the integration engine 1404 in a format suitable for use by the integration engine 1404. In some embodiments, the nurse call system outputs consumable data packets directly to the integration engine 1404. For example, the nurse call system 1412 can output a room signal that directly identifies the room in which a code blue event is occurring and a code blue signal indicating that a code blue event is ongoing. The room signal and the code blue signal can be directly communicated to the integration engine 1404 to allow the intelligent code blue system architecture 1400 to implement code blue actions (e.g., cool the room, increase air flow, maximize light, etc.) and to provide communication with the medical professionals to increase response time of the code blue team to the code blue event (e.g., via communication with the companion system 1464).
In some embodiments, the nurse call system 1412 outputs a text string including a room identifier and a code blue identifier. The room identifier may be a room number, a room code (e.g., hexadecimal ID number, etc.), or another identifier that allows the integration engine 1404 or another portion of the intelligent code blue system architecture 1400 to recognize the room in which the code blue event is occurring.
In some embodiments, the snap box 1452 receives the text string from the nurse call system 1412 and extracts the room identifier and the code blue identifier. For example, the snap box may utilize filtering techniques, programmed logic, a machine learning engine, or rules-based logic to determine the room identifier and the code blue identifier. For example, in some hospitals, the nurse call system 1412 may include relatively older technology or may include a unique way of identifying rooms and code blue events (e.g., a room identifier and a code blue identifier that are unique to that one hospital). The snap box 1452 includes logic, machine learning engines, or other programming that allows for the integration of an existing, and sometimes older, nurse call system 1412 with the integration engine 1404 and allows an older nurse call system 1412 to enjoy the benefits of the intelligent code blue system architecture 1400. In some embodiments, where the snap box 1452 utilizes machine learning, a reinforcement training scheme can be used to initialize a snap box model with historical information and then to further train the snap box model using real-time nurse call information from the nurse call system 1412. In some embodiments, the snap box 1452 is a physical control module installed at an edge of the intelligent code blue system architecture 1400. For example, a snap box 1452 can be installed at each nurse station of a hospital to directly interact with the nurse call system 1412 where the medical staff interacts with the nurse call system 1412 (e.g., at a nurse station, in a section of a hospital floor, in a treatment unit, etc.). In some embodiments, the snap box 1452 may be included in the integration engine 1404 and an external physical module is not needed. In some embodiments, the snap box 1452 may be a regional or sectional physical module that is in communication with more than one nurse call system 1412 or more than one nurse call station, allowing for a single snap box 1452 to provide the room identifier and the code blue identifier from the more than one nurse stations to the integration engine 1404 and for use by the intelligent code blue system architecture 1400. In some embodiments, the snap box 1452 may communicate with the cloud 1456, the enterprise management system 1460, and/or the companion system 1464.
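A minimal, non-limiting sketch of the parsing role described above is shown below in Python. The text-string format ("ROOM=...;EVENT=...") is an illustrative assumption, since the description above notes that each hospital's nurse call system may encode this information differently.

```python
import re

def parse_nurse_call_string(text: str) -> dict:
    """Extract a room identifier and a code blue identifier from a raw nurse
    call text string, assumed here to look like 'ROOM=4127;EVENT=CODE_BLUE'."""
    room = re.search(r"ROOM=([A-Za-z0-9-]+)", text)
    event = re.search(r"EVENT=([A-Z_]+)", text)
    return {
        "room_id": room.group(1) if room else None,
        "code_blue": bool(event and event.group(1) == "CODE_BLUE"),
    }

print(parse_nurse_call_string("ROOM=4127;EVENT=CODE_BLUE"))
# {'room_id': '4127', 'code_blue': True}
```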
The EMR/ADT system 1454 communicates with the integration engine 1404 to provide patient information including allergies, medication restrictions, health history, current medical treatment plans, historical doctor information, etc. The patient information can be used by the integration engine 1404 to communicate relevant information to the code blue team via the companion system 1464. For example, the code blue team may be provided with the patient information and have access to patient allergies, or other information that may inform treatment during response to the code blue event.
The network system of
In some embodiments, the enterprise management system 1460 is the BMS 400 or the BMS controller 366, or any portion of the BMS 400 or the BMS controller 366, or any portion of enterprise management components, systems, or subsystems discussed above. The enterprise management system 1460 can control building systems to affect the environment in the room where the code blue event is occurring. For example, the enterprise management system 1460 may control air handlers, chillers, other HVAC components, louvers, lighting, shades, and any other room features that affect the environment. In some embodiments, the enterprise management system 1460 is also connected to auxiliary systems such as an entertainment system, an alert system, a video system, etc. For example, an entertainment system in the room may be automatically operated in response to a code blue event to display patient information, vital statistics, room parameters (e.g., temperature, pressure, humidity), locations of other members of the code blue team, etc. In some embodiments, the entertainment system and the video system may cooperate to provide an interactive environment that aids the code blue team. For example, a remote medical professional (e.g., an expert, a patient's general practice doctor, etc.) may be connected to the entertainment system to communicate with the code blue team to increase the likelihood of survival of the patient. The video system could allow communication from the code blue team to the remote medical professional.
In some embodiments, the companion system 1464 is coordinated with the integration engine 1404 and the enterprise management system 1460 to provide coordinated communication to the code blue team and/or other medical professionals, family and friends, etc. For example, the companion system 1464 can include an application operable on a mobile device that provides a graphical user interface (GUI) for communicating information to a user and receiving information from the user. For example, the GUI can provide an alert to the code blue team that a code blue event has been triggered. A room location may then be provided along with wayfinding information to speed the code blue team's arrival. The GUI may provide communication between the code blue team to provide additional coordination. Additionally, the code blue team can be connected with a primary care team for the patient to gain insight and information regarding the patient before arriving at the room. The GUI may also provide patient information before the code blue team arrives at the room. The companion system 1464 allows the code blue team to be better prepared when they arrive at the room and therefore be more efficient with care, increasing the likelihood of survival.
The ADS/ADX 1468 connects to devices at the edge of the intelligent code blue system architecture 1400 and manages the collection of large amounts of trend data, event messages, operator transactions, and system configuration data. The ADS/ADX 1468 provides site unification, advanced reporting, a simple and intuitive user interface, and a hierarchical network view of the intelligent code blue system architecture 1400 for all connected devices, which allows for efficient control of energy usage, quick response to critical conditions, and optimization of automation strategies. In some embodiments, the ADS/ADX 1468 provides fault detection at the edge. For example, the ADS/ADX 1468 identifies and lists building system-related faults in order of severity to help operators quickly fix issues and avoid equipment issues, energy waste, and comfort complaints. Fault detection can include fault triage that provides fault duration, occurrence information, and corrective action recommendations to improve fault prioritization and assist less experienced building operators with problem solving. In some embodiments, the ADS/ADX 1468 provides a building network tree that allows for faster delivery of user interfaces (UI) of the enterprise management system 1460 by enabling deployment prior to the spaces and equipment configuration process. In some embodiments, the ADS/ADX 1468 provides advanced search and reporting features that access the enterprise management system 1460 to find and report on operational data and make bulk commands to restore order more quickly. For example, the advanced search and reporting feature can provide users the ability to quickly search enterprise management system 1460 objects by the building network, equipment, equipment type, or space. In some embodiments, the ADS/ADX 1468 provides custom dashboards for the enterprise management system 1460 that enable designers to create dashboards that provide the most relevant and critical information for enhanced productivity and create an experience that mimics users' operational styles for ease of use. In some embodiments, the custom dashboards can be provided to users via a mobile application of the companion system 1464. In some embodiments, the ADS/ADX 1468 provides graphics custom behaviors including custom symbols for individual buildings, campus needs, local standards, etc. In some embodiments, the ADS/ADX 1468 provides trend widget updates that allow users to identify patterns, including outliers, using intuitive candlestick charts that display min, max, and average values. In some embodiments, the ADS/ADX 1468 provides a cyber health dashboard with a centralized view of potential security-related issues or system issues which are detectable by the ADS/ADX 1468, but which may not surface as part of general system alarms. In some embodiments, the ADS/ADX 1468 provides user management that facilitates the creation and management of users and their roles within the intelligent code blue system architecture 1400, including category-based permissions and privileges. In some embodiments, the ADS/ADX 1468 provides historical data management, including an Open Database Connectivity (ODBC) compliant database package for storage of trend data, event messages, operator transactions, and system configuration data. A site management portal UI of the ADS/ADX 1468 provides a flexible system to change the online configuration of the enterprise management system 1460, optimize control strategies, and perform administrative tasks.
The ADS/ADX 1468 includes an ODBC compliant database package for secure storage of historical and configuration data. The ADS/ADX 1468 supports virtual environments, including VMware® and Microsoft® Hyper-V™.
The IoT device interface 1472 provides communication between the integration engine 1404 and a variety of IoT connected devices (e.g., lighting, shades, HVAC/Temp, TV/entertainment, etc.). The IoT device interface 1472 can call a RESTful Application Programming Interface (API) of each individual IoT connected device to receive information therefrom and to provide control from the integration engine 1404 to the IoT connected device. The IoT device interface 1472 provides integration of IoT devices (e.g., third party provided devices, devices inclusive of the intelligent code blue system architecture 1400, any other IoT capable device) with the intelligent code blue system architecture 1400. The IoT device interface 1472 improves the system's ability to be integrated into existing systems and hospitals while providing the advantages and benefits of the intelligent code blue system architecture 1400.
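One possible, non-limiting sketch of calling a device API from the interface described above is shown below in Python, using only the standard library. The endpoint URL, the payload format, and the device address are hypothetical stand-ins for whatever API a given connected device actually exposes.

```python
import json
import urllib.request

def set_device_state(base_url: str, point: str, value) -> dict:
    """Send a state change to a hypothetical IoT device REST endpoint and
    return its JSON response. The URL scheme and payload format are
    illustrative assumptions, not a real device API."""
    payload = json.dumps({"point": point, "value": value}).encode("utf-8")
    request = urllib.request.Request(
        f"{base_url}/state", data=payload,
        headers={"Content-Type": "application/json"}, method="PUT")
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example usage (assuming a device is reachable at this address):
# set_device_state("http://10.0.0.21/api/lighting", "level_pct", 100)
```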
The integration engine 1404 includes programming that allows communication with the IoT device interface 1472 and integration of the IoT connected devices into the intelligent code blue system architecture 1400. In some embodiments, the integration engine 1404 includes Node-RED programming that provides a logical flow based development environment allowing for visual programming of inputs, outputs, and actions of the integration engine 1404. The integration engine 1404 integrates hardware devices, APIs and online based services as a part of the IoT. Node-RED provides a light-weight runtime built on Node.js, and an event-driven, non-blocking model. The integration engine 1404 including Node-RED is ideal to run at the edge of the intelligent code blue system architecture 1400 and can be supported on relatively low-cost local hardware, in the cloud 1456, on a distributed network, or any combination thereof.
As shown in
In some embodiments, the HVAC control can control operation of the HVAC 1432 to lower the temperature of the room (e.g., the room 1002) to a setpoint temperature. In some embodiments, the HVAC control can adjust the HVAC 1432 to a coldest setting to drop the temperature as rapidly and as far as permitted by the HVAC system constraints. In some embodiments, HVAC control defines an air flow or circulation rate (e.g., a turnover time, a turnover volume, etc.) and adjusts air handlers to achieve the air flow or circulation rate.
In some embodiments, the lighting control can control the lighting 1436 to provide a maximum illumination in the room and/or on a path to the room. For example, the hospital lighting system may be controlled to illuminate any areas through which a member of the code blue team may travel on their way to the room.
In some embodiments, the shade control operates the shades 1440 to allow a maximum of ambient light through any windows of the room. In some embodiments, the shade control can operate the shades 1440 to block visibility into the room (e.g., from an adjacent hallway, etc.).
In some embodiments, the security control can automatically unlock doors on the path between the code blue team and the room. Automatic control of safety doors may reduce the time required for the code blue team to arrive at the room and administer care.
In some embodiments, the control of communications to the code blue team can include communication using the companion system 1464 and may provide the code blue team with vital patient information, patient EMR/ADT information, communication or locations of other code blue team members, wayfinding information, etc. In some embodiments, the control of the communications to the code blue team can include paging the code blue team using personal pagers, group or floor level paging, department level paging, phone calls, or another form of communications. In some embodiments, the control of communications to the code blue team includes controlling video boards or directional lighting to aid in wayfinding and speed the arrival of the code blue team in the room.
In some embodiments, the control of communications to the primary care team can include communication using the companion system 1464 and may provide the primary care team with vital patient information, patient EMR/ADT information, communication or locations of the code blue team members, instruction for treatment until the code blue team arrives (e.g., how to conduct preliminary actions and life saving techniques), etc. In some embodiments, the control of communications to the primary care team includes controlling the entertainment system 1428 to display information to the primary care team.
In some embodiments, the control of communications to the patient's family and/or friends can include communication using the companion system 1464 and may provide information about the patient, where they can meet the primary care team and/or the code blue team after the code blue event has concluded, etc.
In some embodiments, the control of video camera systems can include the ability for live interaction of the code blue team and/or the primary care team with a remote expert or individual who would like to communicate during the code blue event. The video system 1448 can provide sounds and video feed out to other, remote systems.
In some embodiments, the control of entertainment systems can include operation of the entertainment system 1428 to display patient information and aids for treatment, to provide communication with an offsite expert or individual with information valuable to the code blue event, etc.
In some embodiments, the control of an alert system can be used to alert the code blue team or other individuals or groups who need to be aware of the code blue event. Alerts 1444 can include audible alarms, text messages, pages, alerts provided through the companion system 1464, or another type of alert, as desired.
At step 1484, the integration engine 1404 receives nurse call information from the nurse call system 1412. In some embodiments, the nurse call information includes a data packet in the form of a text string. The text string includes a code blue identifier and a room identifier. The code blue identifier indicates that a code blue event is occurring. The room identifier indicates a room location or code that allows the integration engine 1404 to determine the room where the code blue event is occurring and to provide location information of the room to the code blue team. In some embodiments, the location information can include a location of a nearest crash cart or other code blue related supplies and/or equipment. In some embodiments, the text string is parsed by the snap box 1452 in order to convert the raw information provided by the nurse call system 1412 to a data format usable by the integration engine 1404. In some embodiments, the nurse call system 1412 communicates directly with the integration engine 1404 and provides the code blue identifier and the room identifier in a format consumable by the integration engine 1404. In some embodiments, the nurse call system 1412 sends the nurse call information when a code blue button is pressed. The code blue button can be a physical button at a nurse station or on a mobile device, or may be a digital or UI button provided on a touch screen or a mobile device (e.g., via the companion system 1464).
At step 1488, a status is returned to the integration engine 1404 and/or other components of the intelligent code blue system architecture 1400. For example, the existence of an active code blue event may be provided to the enterprise management system 1460, the companion system 1464, the cloud 1456 or another network system, the ADS/ADX 1468, and/or the IoT device interface 1472. The status may be used for logging or control actions.
At step 1492, the integration engine 1404 analyzes the nurse call information received at step 1484, parses the code blue identifier, and determines if the code blue identifier indicates an active code blue event. In some embodiments, the nurse call information only includes the code blue identifier if an active code blue event is occurring. In some embodiments, the code blue identifier is a first value (e.g., true, 1, etc.) if an active code blue event is occurring, and a second value (e.g., false, 0, etc.) if there is no active code blue event.
If the code blue identifier indicates no active code blue event (e.g., no code blue identifier included in the nurse call information, 0, false, etc.) then the method 1476 ends at step 1496 and continues to wait for further nurse call information.
If the code blue identifier indicates an active code blue event is ongoing (e.g., the code blue identifier is included in the nurse call information, 1, true, etc.) then the method 1476 continues to step 1500 and the room identifier is parsed and used to determine the room location.
At step 1504, the integration engine 1404 looks up a fully qualified reference (FQR) for the identified room. The FQR is a unique, user-defined name that identifies an object in the intelligent code blue system architecture 1400. In some embodiments, the identified room includes the FQR for all related room systems (e.g., lighting, HVAC, shades, etc.). In some embodiments, FQRs of each associated system and room object are looked up at step 1504.
At step 1508, each FQR associated with the identified room is set to write (e.g., initialized so that a status or operational characteristics of the associated object can be changed). In some embodiments, step 1508 opens each FQR associated with the identified room for editing or writing by the integration engine 1404 or another component of the intelligent code blue system architecture 1400.
At step 1512, each FQR is set to indicate an active code blue event. In some embodiments, step 1512 writes the FQRs of each associated device or system to true (e.g., 1, active, etc.) for the code blue event. This indicates to the intelligent code blue system architecture 1400 and all included systems that a code blue event is occurring.
At step 1516, the integration engine 1404 initiates and controls the code blue actions. Initiation of the code blue actions results in the actuation of all desired room-associated systems to produce the desired environmental and experiential result in the room during the code blue event. For example, the temperature of the room will be lowered to account for the larger number of people who will occupy the room and to inhibit the temperature in the room from rising to an unacceptable level during the code blue event. The code blue actions provide an automated response to the initiation of the code blue event and increase the medical staff's and code blue team's likelihood of success in the code blue event.
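As a sketch of steps 1504 through 1516, the following assumes a simple FQR table, point names, and a write_point() helper; these names are hypothetical and are not the actual BMS interface used by the integration engine 1404.

```python
# Hypothetical sketch of steps 1504-1516. The FQR table, point names, and the
# write_point() helper are illustrative assumptions, not the actual BMS API.

ROOM_FQRS = {
    "3W-312": ["site:bldg1/hvac/vav-312", "site:bldg1/lighting/rm-312", "site:bldg1/shades/rm-312"],
}

def write_point(fqr: str, attribute: str, value):
    """Stand-in for a BMS write; here it just records the command."""
    print(f"write {fqr}.{attribute} = {value}")

def initiate_code_blue(room_identifier: str):
    fqrs = ROOM_FQRS.get(room_identifier, [])            # step 1504: look up FQRs for the room
    for fqr in fqrs:
        write_point(fqr, "writable", True)                # step 1508: open each FQR for writing
        write_point(fqr, "code_blue_active", True)        # step 1512: mark the active code blue event
    # step 1516: example code blue actions (e.g., lower temperature for the larger occupancy)
    write_point("site:bldg1/hvac/vav-312", "temp_setpoint_degF", 68)
    write_point("site:bldg1/lighting/rm-312", "level_pct", 100)

initiate_code_blue("3W-312")
```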
As shown in
The integration engine 1404 also communicates with a patient engagement system 1540 and an AV distribution system 1544. In some embodiments, the patient engagement system 1540 controls room speakers, television displays and/or other room displays, hallway displays, hallway speakers, and/or other features that the patient may interact with. In some embodiments, the AV distribution system 1544 coordinates and controls digital white boards, clinical computers, video control computers, video cameras, speakers, badge tap devices, and/or other devices controlled by or interacted with by the medical staff or other hospital employees. In some embodiments, a video box 1548 provides communication from the patient engagement system 1540 to the AV distribution system 1544 to allow for coordination of the medical team's requirements for care and the patient's needs. Additionally, the EMR/ADT system 1454 provides information to the patient engagement system 1540 and the integration engine 1404.
The nurse call system 1412 provides information to the integration engine 1404 as discussed above. Additionally, the nurse call system 1412 can provide information to a code blue communication system 1552 structured to communicate with the code blue team. In some embodiments, the code blue communication system 1552 includes a personal page 1556, an overhead page 1560, and a department page 1564 that are arranged to communicate with the code blue team. For example, the personal page system 1556 may provide communication directly to a code blue team member via a pager, a mobile device (e.g., a text message, a phone call, etc.), or via the companion system 1464 discussed above. The overhead page system 1560 can provide an audible page to the code blue team based on location information (e.g., the page will be audible in the area where the code blue team members are currently). The department page system 1564 can provide an audible page to an entire area of the hospital (e.g., a floor, a department, a section, an area, etc.) so that the code blue team is alerted to the code blue event. The nurse call system 1412 can also communicate with a mobile device 1568 either directly or via the companion system 1464 to provide code blue team members with information regarding the code blue event. For example, the mobile device 1568 may display a room number (e.g., based on the room identifier), provide wayfinding directions to the room, provide wayfinding directions to a crash cart or other equipment/supplies, display vital statistics of the patient, display other patient information, or provide other information to the code blue team member to aid in response to the code blue event.
In some embodiments, the intelligent code blue system architecture 1520 includes a voice control system 1572 in communication with the nurse call system 1412 and the HVAC/light control 1524 to allow for voice control of room features and nurse call features. Voice control via the voice control system 1572 can improve efficiency when determining whether a code blue event is occurring and can decrease the response time of the code blue team to an identified code blue event. The nurse call system 1412 also receives information from the code blue button 1408 as discussed above. In some embodiments, the code blue event can be triggered by the voice control system 1572 and/or the code blue button 1408.
In some embodiments, the intelligent code blue system architecture 1520 includes a real-time locating system (RTLS) 1576 in communication with the nurse call system 1412, the patient engagement system 1540, the AV distribution system 1544, and the integration engine 1404 to provide location information usable by the intelligent code blue system architecture 1520. The location information can be used for wayfinding, and/or for controlling security doors to speed access of the code blue team in transit to the room. Location information can also be used by other systems as desired to coordinate the code blue team's ability to respond quickly to the code blue event. In some embodiments, the location information can include information about a crash cart or other supplies and/or equipment related to the code blue event. For example, a code blue team member may be assigned responsibility for bringing a crash cart to the room, and the location information can provide wayfinding information to the crash cart for that team member, thereby reducing the response time of the code blue team as a whole to the code blue event.
In some embodiments, the intelligent code blue system architecture 1520 includes a computer vision system 1580 that is capable of watching patients in rooms. The computer vision system 1580 can include a machine learning engine capable of determining activities of the patient. For example, the computer vision system 1580 may monitor the patient in conjunction with a vital signs monitor and determine that a code blue event is likely to occur for the patient. For example, the computer vision system 1580 may determine that the patient has been still for longer than a predetermined time and send an alert to the nurse call system 1412 to initiate a patient check. The computer vision system 1580 improves the primary care team's ability to recognize and trigger a code blue event in a timely fashion thereby increasing the chance of survival. The intelligent code blue system architecture 1520 is capable of performing the method 1476 discussed above.
As shown in
Similar to the intelligent code blue system architecture 1400 discussed above, the system architecture 1600 communicates with building systems and sensors. The building systems and sensors can be used as inputs of the integration engine 1404 and/or the digital twin 1604 and used for determination of actions. In some embodiments, the inputs of the integration engine 1404 and/or the digital twin 1604 include a lighting system 1608, a shade system 1612, an HVAC system 1618 (e.g., the HVAC system 440), an entertainment system 1622, a camera system 1626, a microphone system 1630, a Real time location system (RTLS) 1634, a security system 1638, and/or an elevator system. In some embodiments, the inputs can include more or fewer inputs. For example, any systems or subsystems described herein (e.g., the building subsystems 428) can act as inputs or sources of information for the integration engine 1404 and digital twin 1604. Additionally, the digital twin 1604 and the integration engine 1404 can receive inputs or information from the nurse call system 1412 via the snap box 1452, the EMR/ADT system 1454, or the cloud 1456 (i.e., network system). The inputs and information sources allow the integration engine 1404 and the digital twin 1604 to learn how the building systems react to different changes in the system or environment and react to the changes to maintain or control a desirable environment. For example, a desirable environment may be an environment that reduces or minimizes the spread of viruses or bacteria, isolates a particular room or space, maintains a temperature/pressure/humidity (TPH) within compliance standards of a regulator body or compliance standard setting body such as Centers for Medicare and Medicaid Services (CMS) or auditors such as The Joint Commission, etc.
The building systems and sensors discussed above can also receive outputs or be controlled by the integration engine 1404 and/or the digital twin 1604 to enact actions and change the operational characteristics of the space. For example, the integration engine 1404 may determine actions, or may receive commands from the digital twin 1604 to institute actions, and the integration engine 1404 then controls operation of one or more building systems to control conditions of the space.
The digital twin 1604 in general is structured to replicate the physical systems associated with the building digitally or virtually. That is, the digital twin 1604 provides a virtual representation of the building and how the inputs affect operation of the building and environments within the building. The digital twin 1604 can include programmed representations of the integration engine 1404 and all other components, systems, and subsystems connected to the integration engine 1404. The digital twin 1604 allows for testing and information sampling in a digital environment without the need for physical testing and manipulation. In other words, the digital twin 1604 allows an operator to observe potential changes to a subsystem or system without physically changing the real world environment. Below, a number of use cases for the digital twin 1604 and the system architecture 1600 are described.
As shown in
With the base digital twin policy initialized, the self-supervised training method 1700 receives or recognizes an action prompt at step 1704. In some embodiments, the action prompt is a sudden change in temperature, pressure, or humidity. In some embodiments, the action prompt is a code blue event. In some embodiments, the action prompt is a change in a patient state-of-mind (SOM) or a particular action of the patient or another person in the building (e.g., changing a thermostat, pressing a nurse-call button, turning on an entertainment system, etc.). In some embodiments, the action prompt is a scheduled event (e.g., a scheduled doctor check-in, a scheduled procedure, etc.).
At step 1708, the digital twin 1604 receives the action prompt as an input to the based digital twin policy. The digital twin 1604 then processes the action prompt and outputs a digital twin result at step 1712. The digital twin result includes a command for action of one or more building systems or subsystems. For example, the digital twin result may control operation of the HVAC system 1618 to change a temperature, a pressure, a humidity, an airflow, etc. of a room or space. In some embodiments, the digital twin result includes a command to a scheduling system, the nurse call system 1412, the entertainment system 1622, the lighting system 1608, the shade system 1612, or any other system or subsystem connected to the integration engine 1404.
At step 1716, a physical action is taken in the real world via the integration engine 1404. For example, a nurse may call a primary care physician, lower a room temperature, increase a room pressure, turn on lights, open shades, or perform any other activity that alters the physical parameters of the room or space.
At step 1720, the results of the physical actions taken in step 1716 are measured by the systems, subsystems, and sensors of the system architecture 1600. The physical action in step 1716 and the result in step 1720 provide real time training information for the digital twin 1604.
At step 1724, the base digital twin policy is trained based on the digital twin result generated in step 1712, the physical action in step 1716, and the environmental result in step 1720. For example, the physical action of step 1716 may be compared to the digital twin result from step 1712 and the comparison is used in a reinforcement learning scheme. For example, if the digital twin result matches the physical action taken or is within a tolerance band of the physical action taken, then a reward is provided to the base digital twin policy. If the digital twin result is different from the physical action taken or is outside a tolerance band of the physical action taken, then a penalty is provided to the base digital twin policy. Other learning methods are considered and contemplated within the scope of the self-supervised training method 1700. In some embodiments, a human operator interacts with the digital twin 1604 via the integration engine 1404 and enters the physical actions of step 1716 during the self-supervised training method 1700. In some embodiments, the integration engine 1404 provides the physical action of step 1716 to the digital twin 1604 automatically.
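A minimal sketch of the tolerance-band reward scheme described above is shown below, assuming a scalar action (e.g., a setpoint change) and a toy policy update rule; the reward values, tolerance, and BasePolicy class are illustrative assumptions rather than the actual training implementation of the digital twin 1604.

```python
# Minimal sketch, assuming a scalar action space; the tolerance comparison, reward
# values, and update rule are illustrative, not the actual reinforcement learning
# scheme used to train the base digital twin policy.

def reward_for_step(twin_action: float, physical_action: float, tolerance: float = 0.5) -> float:
    """Reward the policy when its suggested action matches the action staff actually took."""
    if abs(twin_action - physical_action) <= tolerance:
        return 1.0          # digital twin result within the tolerance band of the physical action
    return -1.0             # penalty when the digital twin result diverges from staff behavior

class BasePolicy:
    """Toy policy that nudges its suggested setpoint change toward observed behavior."""
    def __init__(self, learning_rate: float = 0.1):
        self.suggested_change = 0.0
        self.learning_rate = learning_rate

    def update(self, physical_action: float, reward: float):
        if reward < 0:       # only adjust when penalized
            self.suggested_change += self.learning_rate * (physical_action - self.suggested_change)

policy = BasePolicy()
for observed in [-2.0, -2.5, -2.0]:                   # staff repeatedly lowered the setpoint ~2 degrees
    r = reward_for_step(policy.suggested_change, observed)
    policy.update(observed, r)
print(round(policy.suggested_change, 2))              # policy drifts toward the observed staff response
```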
The ability of the digital twin 1604 to learn the specific response tendencies of the actual people operating the building or facility allows the digital twin 1604 to accurately represent the real world responses and controls of the building and staff within the building. Once the base digital twin policy is fully customized and operating within a predetermined tolerance band or threshold accuracy, then the fully trained customized digital twin policy can be instituted on the integration engine 1404 and can be used to automatically react to situations in the room or building to control building systems and subsystems. The digital twin 1604 can continue to run and update in the background for use by the integration engine 1404 or for any other use case or desired implementation.
As shown in
At step 1808, the sleep engine of the integration engine 1404 and/or the digital twin 1604 processes the inputs and determines a sleep event percentage indicative of a likelihood that a target patient is sleeping or is falling asleep. In some embodiments, the sleep engine can use processing of video from the camera system 1626 to identify sleep patterns. For example, the sleep engine can include a convolutional neural network capable of identifying objects and movement patterns to identify common sleeping patterns (e.g., no movement for a predetermined amount of time, rhythmic breathing, etc.). In some embodiments, other video analysis tools are implemented to analyze a video stream and identify a sleep event. In some embodiments, the sleep engine uses inputs from the microphone 1630 to detect sleep noises and identify the sleep event. For example, the sleep engine can process the audio information from the microphone 1630 to determine rhythmic breathing, snoring, soft breathing, general quiet, etc. In some embodiments, the sleep engine can identify the sleep event by processing bed weight sensor information to determine if the patient has been lying still for a predetermined amount of time. In some embodiments, the sleep engine can identify the sleep event by processing heart rate monitor information. For example, if the patient's heart rate is dropping into a typical sleeping range or a specific heart rate range for that particular patient, the sleep engine can identify a sleep event. Other sensor inputs and methods can be analyzed by the sleep engine in the integration engine 1404 and/or the digital twin 1604 to identify a sleep event.
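The following is an illustrative sketch of how a sleep event percentage could blend the kinds of signals described above; the weights, thresholds, and input fields are assumptions for explanation and are not the sleep engine's actual model.

```python
# Illustrative sketch of a sleep event percentage: the weights, thresholds, and
# sensor fields are assumptions for explanation, not the sleep engine's actual model.

def sleep_event_percentage(minutes_still: float, sound_level_db: float, heart_rate_bpm: float,
                           sleeping_hr_range=(50, 65)) -> float:
    """Blend bed stillness, room quiet, and heart rate into a 0-100 sleep likelihood."""
    stillness_score = min(minutes_still / 20.0, 1.0)                 # stillness saturates around 20 minutes
    quiet_score = 1.0 if sound_level_db < 40 else 0.0                # general quiet / soft breathing
    hr_score = 1.0 if sleeping_hr_range[0] <= heart_rate_bpm <= sleeping_hr_range[1] else 0.0
    return 100.0 * (0.5 * stillness_score + 0.2 * quiet_score + 0.3 * hr_score)

print(sleep_event_percentage(minutes_still=25, sound_level_db=35, heart_rate_bpm=58))  # high likelihood
print(sleep_event_percentage(minutes_still=2, sound_level_db=55, heart_rate_bpm=82))   # low likelihood
```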
At step 1812, the sleep engine outputs a digital twin result that includes control actions for response to the sleep event identified in step 1808. The control actions can be based on a simulation run in the digital twin 1604 using all available information. For example, the digital twin 1604 can test multiple control actions and then determine the best actions for implementation by the integration engine 1404. For example, the digital twin 1604 may determine that schedules can be shifted to improve the patient's sleep experience (e.g., moving a scheduled appointment or non-critical procedure, cancelling a non-critical maintenance procedure nearby, etc.). If the digital twin 1604 determines that scheduling changes can be made without detriment to the overall care of the patient and/or the facility, then the schedules are adapted at step 1816.
In some embodiments, the digital twin result determined at step 1812 includes actuation of room features that can be implemented by the integration engine 1404. For example, the digital twin result may control actions of smart room features at step 1820. The actions can include any of the smart room features discussed herein. For example, the integration engine 1404 can command the shades 1612 to close, the lights 1608 to dim, the HVAC system 1618 to adjust the temperature to a patient-requested sleep temperature or a physician-requested level, to adjust airflow to a predetermined sleep airflow, and to adjust humidity to a desired level, and the entertainment system 1622 to turn off or to play sleep sounds or a meditation, or can apply any other environmental control identified in a user preference or identified by a caregiver.
As shown in
At step 1904, the efficient patient environment engine receives historical information relating to environmental parameters and associated state-of-mind (SOM) scores of the patient or care team members (e.g., a surgeon in an OR). For example, the historical information can include temperature, humidity, air flow, shade position, light levels, etc. The associated SOM scores can be used to identify relationships between patient or care team requests and the outcomes achieved within the environment. For example, when a user (i.e., patient, family, care team, etc.) makes a request for a change in temperature, a satisfactory feeling for the user can be achieved in multiple ways (e.g., raising and lowering temperature, raising and lowering humidity, raising and lowering airflow, etc.).
At step 1908 the efficient patient environment engine of the digital twin 1604 is trained using the historical information so that the efficient patient environment engine can identify the most energy efficient solution to achieve a user request. In some embodiments, the most energy efficient solution includes adjusting systems that are not directly related to the user request but are identified by the efficient patient environment engine as achieving a high SOM score for that particular request. For example, the efficient patient environment engine can identify action commands for the HVAC system 1618, the lighting system 1608, the shade system 1612, etc. to achieve a user request.
At step 1912, the user makes an environmental request. In some embodiments, the environmental request includes a temperature change, a humidity change, a lighting change, a shade change, etc. In some embodiments, the environmental request includes an indication that the user is hot, cold, clammy, light headed, tired, nauseous, or another SOM indicator.
At step 1916, the environmental request is inputted to the efficient patient environment engine of the digital twin 1604 for processing. The efficient patient environment engine queries a policy using the environmental request and returns a digital twin result at step 1920. The digital twin result includes command actions for systems and subsystems of the room or space to achieve the most energy efficient response while maintaining a high SOM score of the user. For example, if the environmental request includes an indication to reduce a temperature of the room, the most energy efficient response may not be adjusting the actual temperature of the room as requested by the user. Rather, the digital twin result may indicate that a more energy efficient response includes lowering humidity of the room and increasing air flow, or any combination of adjustments to lighting, shades, temperature, humidity, air flow, etc. that achieves the user's desired effect while maintaining high efficiency operation.
At step 1924, the digital twin result is provided to the integration engine 1404 and the action commands are implemented in the room or space. The digital twin 1604 is also capable of collecting feedback information from the user either by direct feedback (e.g., received via a GUI of a mobile device) or by sensed feedback (e.g., the user again made a similar environmental request, visual analysis of user behavior and/or gestures, etc.). The efficient patient environment engine can identify and provide associated actions that achieve a desired result at a lower energy cost and high efficiency. The efficient patient environment engine can drive overall system efficiency while maintaining a high SOM of the user.
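A minimal sketch of the policy query described in steps 1916 through 1920 is shown below, assuming a small table of candidate responses with estimated energy cost and predicted SOM score; the request names, options, and numbers are illustrative assumptions rather than the actual efficient patient environment engine.

```python
# A minimal sketch, assuming a small lookup policy of candidate responses with
# estimated energy cost and predicted SOM score; names and numbers are illustrative.

CANDIDATE_RESPONSES = {
    "too_warm": [
        {"actions": {"temp_setpoint_delta": -2.0}, "energy_kwh": 1.8, "predicted_som": 0.90},
        {"actions": {"humidity_delta": -10, "airflow_delta": +15}, "energy_kwh": 0.9, "predicted_som": 0.88},
        {"actions": {"shades": "close"}, "energy_kwh": 0.1, "predicted_som": 0.70},
    ],
}

def efficient_response(request: str, min_som: float = 0.85):
    """Return the lowest-energy candidate that still keeps the predicted SOM score high."""
    options = [c for c in CANDIDATE_RESPONSES.get(request, []) if c["predicted_som"] >= min_som]
    if not options:
        return None
    return min(options, key=lambda c: c["energy_kwh"])   # digital twin result: command actions

print(efficient_response("too_warm"))   # picks the humidity/airflow option over direct cooling
```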
As shown in
The digital twin 1604 includes a remote digital twin engine that defines a digital representation of the physical systems and spaces of the remote building. The remote digital twin engine can be trained over time using historical information of requests at the remote building and the building responses. For example, the remote digital twin engine records a temperature response of a room or space within the remote building following an environmental request made by a user or a control system of the remote building. The remote digital twin engine is able to accurately reproduce actions and effects of the remote building and to model potential outcomes, associations, and related attributes of spaces within the remote building. The ability to model the remote space can allow for improved health outcomes and improved SOM scores by correlating attributes (e.g., how humidity, temperature, and pressure affect a patient's SOM score) and modelling how a space will react to prompts and user inputs/requests.
Additionally, the remote digital twin engine can model compliance information that may be difficult to obtain at the remote building. For example, the remote digital twin engine may be able to recreate a compliance activity and provide detailed compliance reporting for a compliance check by The Joint Commission or another standard setting body or regulator group. Compliance standards can define operating parameters and procedures, response actions, or any other parameters, checks, or standards as determined by the standard setting body or regulator group. The remote digital twin engine can provide more advanced compliance reporting by recreating faults and actions taken by the system, and by auto-populating reports for the compliance review of the standard setting body.
At step 2004, the remote digital twin engine receives a remote request from the remote building. In some embodiments, the remote digital twin engine automatically recognizes the remote request via the cloud 1456. For example, when a procedure is scheduled (e.g., a surgery in an OR), or when an environmental request is made at a nurse station, the local controller provides information to the remote digital twin engine of the digital twin 1604 and the remote request is recognized.
At step 2008, the remote request is processed by the remote digital twin engine to determine possible outcomes of the remote request. The remote digital twin engine can identify a best action set that achieves desired outcomes. For example, desired outcomes may include highest efficiency operation, highest likelihood of successful health outcome, a successful operation of the building while maintaining compliance, etc. The remote digital twin engine can process multiple outcomes and select the best action set without physical experimentation within the remote building. This leads to a more efficient operation for achieving the desired result and provides more information to the remote building systems for operation.
At step 2012, compliance information from the remote digital twin engine regarding the action set list is checked against compliance standards for the remote building. The remote digital twin engine can auto-populate compliance reports using the information generated by the remote digital twin engine to improve transparency and information depth. The improved ability to provide a clear and accurate picture of the operation of the remote building can improve the ability of the remote building to stay within compliance and avoid costly shut downs or compliance related issues.
At step 2016, the remote digital twin engine determines the best action set and provides the included system and subsystem actions of the remote building as a digital twin result. The digital twin result can then be used by the remote building (e.g., a remote building BMS) to implement the changes and arrange the space or room within the remote building to achieve the remote request.
The remote digital twin engine allows for advanced data based control of a remote building that does not include advanced analytics, an integration engine 1404, or other components of the system architecture 1600.
As shown in
At step 2108, the digital twin prediction engine determines a system response that models the actual response of the building. The digital twin prediction engine models the system response and at step 2112, determines that an undesirable condition will exist or is likely to exist if operation of the building systems continues unchanged.
At step 2116, the digital twin prediction engine processes alternative operational states and inputs (e.g., changes to the operation of the HVAC system 1618 or another system) to determine how the undesirable condition might be avoided. After processing the available inputs and information, the digital twin prediction engine returns a pre-emptive corrective action indicative of the best case response. The pre-emptive corrective action is determined by the digital twin prediction engine to provide the best response to the predicted undesirable condition and to in some cases avoid the undesirable condition entirely.
At step 2120, the digital twin prediction engine sends the pre-emptive corrective action to the integration engine 1404 or to another system or subsystem of the system architecture 1600 so that the pre-emptive corrective action can be implemented at step 2124.
The digital twin prediction engine is structured to model a large number of potential reactions and outcomes and to determine a best response that can avoid the undesirable condition. In some embodiments, the undesirable condition may include an out of compliance event (e.g., temperature, pressure, and humidity compliance). Use of the digital twin prediction engine and the returned pre-emptive corrective actions may allow the building or space to remain in compliance through an otherwise disruptive event. For example, if power is lost to a first room or space, dampers, air handlers, or other HVAC equipment associated with a second room may be manipulatable to provide the required environmental control to the first room. The digital twin prediction engine of the digital twin 1604 can determine interrelations of system and sub-systems and thereby provide controls that may not be obvious to the user of the building control systems. The digital twin prediction engine can also document changes made and the pre-emptive corrective action for use in compliance reports describing actions taken and how compliance events are responded to or avoided.
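The following is a hypothetical sketch of steps 2108 through 2124: a simple forward model predicts room temperature drift and a corrective action is returned before a compliance bound is crossed. The model, drift rate, and limits are illustrative assumptions, not the actual digital twin prediction engine.

```python
# Hypothetical sketch of steps 2108-2124: a simple forward model predicts room
# temperature drift and a corrective action is returned before the compliance
# bound is crossed. The model and limits are assumptions, not the actual engine.

def predict_temperature(current_degF: float, drift_per_hour: float, hours: int):
    """Crude forward model of the system response if operation continues unchanged."""
    return [current_degF + drift_per_hour * h for h in range(1, hours + 1)]

def preemptive_corrective_action(current_degF: float, drift_per_hour: float,
                                 upper_limit_degF: float = 75.0, horizon_hours: int = 6):
    trajectory = predict_temperature(current_degF, drift_per_hour, horizon_hours)
    for hour, temp in enumerate(trajectory, start=1):
        if temp > upper_limit_degF:                       # undesirable (out-of-compliance) condition predicted
            return {"action": "increase_cooling", "lead_time_hours": hour,
                    "reason": f"predicted {temp:.1f} F exceeds {upper_limit_degF} F"}
    return None                                            # no corrective action needed

print(preemptive_corrective_action(current_degF=72.0, drift_per_hour=0.8))
```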
As shown in
At step 2208, the fall avoidance engine determines a fall parameter based on the information received in step 2204. In some embodiments, the fall parameter is a value indicative of the likelihood of a patient fall. For example, a larger fall parameter may indicate a larger likelihood that the patient would fall if no action is taken. In some embodiments, the fall parameter is a percentage, ratio value, a sliding scale value, etc. The fall parameter is determined based on a model trained using historical information or self-supervised learning as discussed above and can predict the likelihood of a potential fall based on patient activity and situational information (e.g., light levels in the room or space, medications administered to the patient, patient injuries or maladies, etc.).
At step 2212, the fall parameter is compared to the threshold or a tolerance band. If the fall parameter is less than or equal to the threshold or falls within the tolerance band, then the method returns to step 2204 and the fall avoidance engine continues to monitor the patient for a potential fall.
If the fall parameter is greater than the threshold or falls outside the tolerance band, then the fall avoidance engine determines that a fall is likely to occur and takes environmental actions at step 2216. In some embodiments, the environmental action includes turning on the lights using the lighting system 1608. In some embodiments, the lighting system 1608 may illuminate specific areas of a space, an entire room, a bathroom walkway, etc. In some embodiments, the environmental action can include opening shades of the shade system 1612, operating the entertainment system 1622, activating the RTLS 1634 to track the patient, etc.
At step 2220, a notification or alarm is sent to the nurse call system 1412 to alert the care team that a fall is likely. The notification is automated and may prompt a check or a care action to provide aid to the patient.
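As an illustrative sketch of steps 2208 through 2220, the following combines a few situational factors into a fall parameter and compares it to a threshold; the factors, weights, and threshold are assumptions for explanation and do not represent the trained fall avoidance model.

```python
# Illustrative only: a simple weighted fall parameter and threshold check for
# steps 2208-2220. The factors and weights are assumptions, not the trained model.

def fall_parameter(bed_exit_detected: bool, lights_on: bool, sedating_medication: bool,
                   mobility_score: float) -> float:
    """Combine situational factors into a 0-1 likelihood that a fall will occur."""
    score = 0.0
    score += 0.4 if bed_exit_detected else 0.0
    score += 0.2 if not lights_on else 0.0            # dark room increases fall risk
    score += 0.2 if sedating_medication else 0.0
    score += 0.2 * (1.0 - mobility_score)             # mobility_score: 1.0 = fully mobile
    return score

def respond_to_fall_risk(parameter: float, threshold: float = 0.5):
    if parameter <= threshold:
        return []                                      # step 2212: keep monitoring
    # step 2216/2220: environmental actions plus a nurse call notification
    return ["lighting_1608_on", "rtls_1634_track_patient", "nurse_call_1412_alert"]

p = fall_parameter(bed_exit_detected=True, lights_on=False, sedating_medication=True, mobility_score=0.3)
print(round(p, 2), respond_to_fall_risk(p))
```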
Automatic recognition of a potential fall scenario and an automated response to reduce the likelihood of a fall can increase the percentage of successful health outcomes for a healthcare facility and lead to an overall improvement in patient satisfaction.
As shown in
The reconfiguration engine of the digital twin 1604 is structured to implement a reconfiguration method 2300 that includes receiving a space reconfiguration request at step 2304. The reconfiguration request can include a current usage of the space (e.g., general population hospital space), and a requested usage of the space (e.g., a COVID or infectious disease ward, a burn unit, etc.). The reconfiguration request can include associated compliance information for the current usage and/or the requested usage. For example, the current usage (e.g., a general population hospital space) may have different compliance standards defined by a standard setting body (e.g., The Joint Commission) than the requested usage (e.g., an infectious disease ward).
At step 2308, the reconfiguration engine receives inputs from the space (e.g., current temperature, humidity, and pressure) and models the system response to the requested usage. For example, the reconfiguration engine can determine if the building systems and sub-systems are capable of operating the space according to the requested usage and meeting all compliance standards associated with the requested usage. If the reconfiguration engine determines that the building systems and sub-systems are not capable of achieving the requested usage, then a digital twin result is returned indicating that the requested change is not feasible. If the reconfiguration engine determines that the change is feasible, then the digital twin result is provided to the integration engine 1404 at step 2312 and the space is reconfigured to the requested usage at step 2316. The integration engine 1404 can use the digital twin result to operate the HVAC system 1618 and other systems and subsystems of the system architecture 1600 to operate the space according to the requested usage. The digital twin result includes operational parameters that are modelled based on the space and can provide the most efficient manner of operation to condition the space and achieve the desired requested usage while maintaining compliance of the space. The digital twin result can be used for compliance documentation of how the space is reconfigured for the requested usage. For example, what physical changes are controlled, at what times, and how long changes take to implement can be monitored and populated into a compliance report. The population of compliance reports can act as a quality control and also be used in compliance reporting for the standard setting body.
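The feasibility determination in the reconfiguration method can be sketched as a simple capability check, as shown below; the capability table and the infectious-disease requirements are illustrative assumptions rather than actual compliance standards.

```python
# A minimal feasibility check in the spirit of steps 2304-2316; the capability
# table and the infectious-disease requirements shown are illustrative assumptions.

SPACE_CAPABILITIES = {
    "wing_3_west": {"min_ach": 6, "negative_pressure": False, "hepa_filtration": True},
}

REQUESTED_USAGE_REQUIREMENTS = {
    "infectious_disease_ward": {"min_ach": 12, "negative_pressure": True, "hepa_filtration": True},
}

def reconfiguration_feasible(space: str, requested_usage: str) -> bool:
    """Compare what the space's systems can deliver against the requested usage's requirements."""
    have = SPACE_CAPABILITIES[space]
    need = REQUESTED_USAGE_REQUIREMENTS[requested_usage]
    return (have["min_ach"] >= need["min_ach"]
            and have["negative_pressure"] >= need["negative_pressure"]
            and have["hepa_filtration"] >= need["hepa_filtration"])

print(reconfiguration_feasible("wing_3_west", "infectious_disease_ward"))  # False: change not feasible
```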
As shown in
At step 2408, the treatment improvement engine monitors health outcomes and the associated environmental conditions over time and uses the associated outcomes to train and update the treatment improvement engine at step 2412. Continually updating the treatment improvement engine of the digital twin 1604 based on health outcomes, and the association of health outcomes to environmental parameters allows the digital twin 1604 to identify correlations and relationships that may not be apparent. For example, improved environmental parameters for a burn unit may be different than improved environmental parameters for a post-surgery recovery ward. The digital twin 1604 is capable of determining improved environmental parameters that may not be apparent to operators of the building systems. The treatment improvement engine of the digital twin 1604 may be a remotely located artificial intelligence or machine learning engine that is trained based on the aggregated information from a large number of healthcare facilities. The knowledge and learning of the treatment improvement engine gained from multiple sources, can then be implemented by the integration engine 1404 of the system architecture 1600 locally at step 2416. In this way, the local building can leverage the learning provided by a larger network of healthcare facilities and a larger knowledge base of health outcome based environmental parameter associations.
As shown in
At step 2508, the compliance testing engine processes the inputs and generates a model of the space and/or component (e.g., a digital representation of the air handler). The model accurately represents how the space and/or component react to situations in the real world. At step 2512, a non-invasive test is conducted on the real world space and/or component. For example, an air flow may be measured, a pressure tested, a temperature measured, a humidity measured, an air quality tested, etc. The non-invasive test does not require significant disturbance of the healthcare space allowing the healthcare space to continue normal operation during the testing.
At step 2516, the results of the testing completed in step 2512 are input into the compliance testing engine and the digital twin 1604 works to determine the parameters associated with the test result. The digital twin 1604 is structured to determine and recreate the operational parameters of the space and/or component that led to the real world test result. Then, at step 2520, the determined operational parameters that led to the real world test result are used to populate a compliance report evidencing the operation and validated testing results of the space and/or component. The digital twin 1604 can use the compliance testing engine to generate rich data of the operational parameters and to verify performance, operation, and other features of the space and/or component without requiring invasive testing or inspection of the space and/or component. The compliance testing engine allows for more complete compliance information to be generated and reported while minimizing the disturbance to healthcare facility operation.
As shown in
At step 2612, the knowledge graph engine generates a graphical user interface (GUI) based on the knowledge graph that presents the relationships identified by the knowledge graph engine and allows users of the GUI to better understand the effects that adjustments made to the system have on other parameters of their space. The knowledge graph engine can improve user behavior and improve overall compliance and efficiency of the space.
Asset tracking in a healthcare facility can refer to human assets (e.g., nurses, doctors, administrative staff, patients, friends and family, etc.) and to physical assets (e.g., a crash cart, medical equipment, etc.). Additionally, action assets can include actions such as hand washing, mask usage, door closures, PPE procedures, or other activities that can be tracked and analyzed to determine the impact of the action assets. The system architecture 1600 can implement asset tracking to improve overall operation of the building. The digital twin 1604 or the integration engine 1404 can be used to track physical assets and determine ideal positioning or routings for physical assets. For example, the system architecture 1600 can identify the nearest medical device needed for a particular procedure. The system architecture 1600 can improve efficiency of medical equipment storage to reduce setup time, transport time, etc. The system architecture 1600 can provide improved wayfinding by tracking route efficiency for medical equipment related to various procedures. Using the digital twin 1604 to model the space allows for improved efficiency in asset tracking and improved capabilities of asset tracking systems.
As shown in
At step 2712, the contamination threat engine determines a contamination threat parameter. In some embodiments, the contamination threat parameter is a percentage, a ratio value, a sliding scale value, or another parameter that indicates the likelihood of a contamination resulting from the activity being tracked. At step 2718, the contamination threat is tracked after the activity has taken place (e.g., the individual conducting the activity is tracked following the contamination threat activity). At step 2722, a health outcome of a patient is tied to the contamination threat and the determined contamination threat parameter. For example, no infection ensued, or an infection occurred.
Based on the contamination parameter and the tracked health outcome, the policy of the contamination threat engine is updated or trained at step 2726 and the method 2700 continues to track contamination threats using the updated policy. Over time, the contamination threat engine learns to associate contamination threat parameters with health outcomes and can identify contamination threat parameters that exceed a threshold or fall outside of a tolerance band and are therefore identified at step 270 as a critical threat. A critical threat indicates an increased risk of a negative health outcome. Once a critical threat is identified, a notification or warning can be sent by the system architecture at step 2734 to alert the individual that actions taken could lead to a negative health outcome and corrective action can be taken.
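One way the policy update and critical-threat check described above could look is sketched below; the update rule, starting value, learning rate, and critical threshold are hypothetical and are not the trained contamination threat policy.

```python
# Hypothetical sketch of steps 2712-2734: a running per-activity threat estimate
# is nudged by observed health outcomes and compared to a critical threshold.
# The update rule and threshold are assumptions, not the trained policy.

class ContaminationThreatPolicy:
    def __init__(self, critical_threshold: float = 0.6, learning_rate: float = 0.3):
        self.threat_by_activity = {}                # e.g., {"skipped_hand_wash": 0.45}
        self.critical_threshold = critical_threshold
        self.learning_rate = learning_rate

    def update(self, activity: str, infection_occurred: bool) -> float:
        """Move the threat parameter toward 1.0 after infections and toward 0.0 otherwise."""
        current = self.threat_by_activity.get(activity, 0.3)
        target = 1.0 if infection_occurred else 0.0
        current += self.learning_rate * (target - current)
        self.threat_by_activity[activity] = current
        return current

    def is_critical(self, activity: str) -> bool:
        return self.threat_by_activity.get(activity, 0.0) > self.critical_threshold

policy = ContaminationThreatPolicy()
for outcome in [True, True, False, True]:           # tracked outcomes tied to the same activity
    policy.update("skipped_hand_wash", outcome)
print(policy.is_critical("skipped_hand_wash"))       # True -> send the notification or warning
```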
As shown in
At step 2808, the hostile situation engine identifies an individual who is not wearing PPE (e.g., not a nurse, doctor, etc.). At step 2812, a first hostile action is recognized. In some embodiments, hostile actions include classifications that identify a threat level. For example, a raised voice may indicate a low level threat, raised arms may indicate a medium level threat, and shouting and aggressive jerking movements may indicate a high threat level. In response to the first hostile action recognized in step 2812, a first reaction is implemented at step 2816. In some embodiments, the first reaction includes sending an automated alert including relevant information and video feed to a security team via the security system 1638. In some embodiments, the first reaction includes activating lights using the lighting system 1608 to provide full illumination of an ongoing hostile activity. In some embodiments, the first reaction includes recording video and audio using the camera system 1626 and the microphone system 1630. In some embodiments, the first reaction includes activating the entertainment system 1622 to present a prerecorded message and/or video regarding hostile events, or to connect a live stream to security personnel to engage with and affect the hostile situation immediately (e.g., reducing time to security engagement versus physically walking to the location of the hostile activity). The first reaction is intended to deescalate the hostile activity.
At step 2822, a second hostile action is identified and classified similar to the first hostile action. In response to the identification of the second hostile action, a second reaction is initiated at step 2826. The second reaction is intended to isolate the hostile activity and mitigate the ability of the hostile activity to spread. For example, the second reaction can include locking of doors (e.g., patient room doors, wing access doors, etc.), actuation of doors (e.g., automatically open or close doors), or closing of gates and/or other barriers. The barriers can be utilized to automatically isolate the hostile activity and inhibit its spread to other areas of the building. For example, using RTLS information from the RTLS 1634, the hostile situation engine can determine a location of the hostile event and lock all adjacent doors, effectively locking the hostile individual in a single space or separating them from others who may be harmed. The hostile situation engine can be used to integrate building systems to mitigate hostilities and threats automatically with a fast response. The hostile situation engine leverages a variety of sensor inputs such as computer vision, audio sensing/voice recognition, and RTLS to sense danger to hospital staff, and then takes coordinated action by integrating with a variety of systems (e.g., nurse call, watches, phones, other hospital systems) to converge help as quickly as possible to resolve the dangerous situation. Further, the hostile situation engine can leverage hostility information to generate tailored insights, identify relationships of hostile events to building parameters, layouts, etc., and provide automated reporting for Joint Commission (or other regulator body) compliance with safety standards or required procedures.
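The two-stage escalation described in steps 2812 through 2826 can be sketched as follows; the threat-level classifications and reaction lists are illustrative assumptions chosen to mirror the first and second reactions above, not the actual hostile situation engine.

```python
# Illustrative two-stage escalation for steps 2812-2826; the threat levels and
# reaction lists are assumptions chosen to mirror the first/second reactions above.

THREAT_LEVELS = {"raised_voice": "low", "raised_arms": "medium", "shouting_and_jerking": "high"}

FIRST_REACTION = ["alert_security_system_1638", "full_illumination_lighting_1608",
                  "record_camera_1626_and_microphone_1630", "entertainment_1622_deescalation_message"]
SECOND_REACTION = ["lock_adjacent_doors", "close_gates_and_barriers", "isolate_location_via_rtls_1634"]

def react_to_hostile_action(action: str, prior_hostile_actions: int):
    """First recognized action triggers de-escalation; a second triggers isolation."""
    level = THREAT_LEVELS.get(action, "low")
    reactions = FIRST_REACTION if prior_hostile_actions == 0 else SECOND_REACTION
    return {"threat_level": level, "reactions": reactions}

print(react_to_hostile_action("raised_voice", prior_hostile_actions=0))
print(react_to_hostile_action("shouting_and_jerking", prior_hostile_actions=1))
```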
Referring now to
The air quality equipment 3104 is operable to provide an air quality service to the patient room 3102. The air quality equipment 3104 can include heating, ventilation, and/or cooling equipment operable to affect the temperature, humidity, pressure, airflow, and other conditions of the indoor air (e.g., carbon dioxide concentration, particulate concentration, chemical composition, etc.). For instance, the air quality equipment 3104 may include a variable air volume box that services the patient room 3102. The air quality equipment 3104 can also or alternatively include air purification devices, fans, filters, etc. in various embodiments.
The ultraviolet (UV) light system 3106 can include light sources configured to emit UV light in wavelengths suitable for disabling (destroying, killing, etc.) of viruses, bacteria, and other pathogens. The ultraviolet (UV) light system 3106 is operable to reduce infection risk in the patient room 3102 by emission of ultraviolet light. Accordingly, operation of the UV light system 3106 can be characterized as improving air quality in the patient room 3102.
The patient room 3102 is also shown as including one or more sensor(s) 3110. The one or more sensor(s) 3110 can measure variables relating to air quality of the patient room 3102, for example air temperature, air pressure, air flow, humidity, carbon dioxide concentration, particulate matter concentration, air composition (e.g., detecting the presence of certain chemicals, etc.), etc. in various embodiments. The one or more sensor(s) 3110 can also measure exposure to ultraviolet light from the UV light system 3106, in some embodiments.
The patient room 3102 is also shown as including a user interface 3108. The user interface 3108 can be a screen, console, display, personal computing device, wall unit (e.g., thermostat), control panel, etc. configured to present information to a user and to receive inputs from a user. For example, the user interface 3108 can present data based on measurements from the sensor(s) 3110. As another example, the user interface 3108 can accept a command to control the air quality equipment 3104 to provide an air quality service to the patient room 3102.
The patient room 3102, the building management system 3112, the air quality service system 3114, the electronic health record system 3116, and the one or more other healthcare system(s) 3118 interoperate to provide air quality services to a patient in the patient room and to track and otherwise manage such air quality services. The system 3000 can operate according to process 3300, which is described in detail below.
In some embodiments, air quality services to be provided at the patient room 3102 are determined by the air quality service system 3114 based on a patient record from the electronic health record system 3116. The air quality service system 3114 may receive information indicating a space in which the patient is located or is to be located (e.g., from the electronic health record system 3116 and/or the other healthcare system(s) 3118), and may receive an indication of the building equipment (e.g., air quality equipment 3104) serving that space from the building management system 3112. The air quality service system 3114 can use such information to determine the air quality service(s) available to be provided to the patient based on the particular equipment available at the space occupied or to be occupied by the patient. Additionally or alternatively, the air quality service system 3114 can recommend a particular patient room 3102 from a set of available patient rooms for a particular patient, based on the equipment available at different patient rooms, the amount of time (or other efficiency metric) associated with driving different patient rooms to an internal air condition suitable for the particular patient, or other considerations relating to provision of air quality services tuned to a particular patient, for example using one or more artificial intelligence models.
In some embodiments, the air quality service system 3114 is configured to determine an air quality service to be provided to the patient based on a health condition of the patient indicated in the electronic health record system 3116. For example, the air quality service system 3114 may determine that an enhanced air filtration service be provided to a patient with a respiratory-system-related health condition and that a standard air quality service be provided to a patient with an orthopedic injury; that at least one first building condition (e.g., a first humidity level, a first temperature, a first filtration rate, a first rate of UV irradiation, etc.) should be provided to a patient with a first medical condition as compared to at least one second, different building condition (e.g., a second humidity level, a second temperature, a second filtration rate, a second rate of UV irradiation, etc.) that should be provided to a patient with a second medical condition (e.g., based on different clinical effects, outcomes, advantages, etc. associated with different building conditions relative to different medical conditions, disease states, treatment phases, etc.); or that at least one building condition should be adjusted over time to account for progression of a patient's disease state or treatment program (e.g., extra filtration and UV irradiation immediately after a patient gets out of surgery, followed by return to normal or reduced levels as the patient recovers; changes in filtration or pressurization coordinated with treatment such as a nebulizer treatment that emits particulates for coordinated containment, filtration, or flushing). Various such examples are possible to tune the air quality services to the therapeutic advantage of different patients and/or to coordinate air quality services with the timing of therapies being provided to a patient. In some embodiments, an air quality service to be provided in a space is confirmed, selected, or otherwise commanded by a user via user interface 3108. Determining the air quality service can include determining whether additional equipment (e.g., filtration device, humidifier, ultraviolet light, aerosol disinfection system, etc.) should be added to a patient room associated with a particular patient. In some embodiments, the air quality service system 3114 uses at least one AI model (e.g., using a large language model, neural network classifier, etc.) to perform the operations of the air quality service system 3114 described herein.
The air quality equipment 3104 is operable to provide the air quality service selected by the air quality service system 3114 to the patient room 3102. Operating the air quality equipment 3104 can include adjusting settings, operating parameters, on/off decisions, etc. of the air quality equipment 3104, for example via the building management system 3112. In some embodiments, different ventilation rates, fan speeds, enhanced filtration modes, temperature, pressure, humidity, etc. are provided to provide different air quality services. The one or more sensor(s) 3110 provide data which can be used in control of the air quality equipment 3104 (e.g., feedback control) and/or for tracking an amount of the air quality service provided. The air quality equipment 3104 can include permanent equipment serving a patient room (e.g., installed equipment such as VAV boxes, air handling units, rooftop equipment, chillers, room air conditioners, heaters, etc. of a healthcare facility) and/or additional equipment (e.g., filtration systems, humidifiers, space heater, sensor array, patient bed as in
The air quality service system 3114 is configured to track an amount of the air quality service provided. Tracking the amount of the air quality service provided can include, for example, measuring a change in the air quality in the patient room 3102 (e.g., a reduction in carbon dioxide concentration, a reduction in particulate concentration, a temperature change, a pressure change, etc.); determining a duration for which the air quality service is provided (e.g., an amount of time for which enhanced filtration is provided); determining a number of air changes provided, for example by counting or estimating a number of times the air in the patient room is replaced by the air quality equipment 3104 (e.g., based on air flow measurements, fan speeds, damper position, etc. and a volume of the patient room); or otherwise quantifying the amount of air quality services provided to the patient room.
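One way of quantifying the amount of air quality service provided is estimating air changes delivered from measured supply airflow and room volume, as sketched below; the sample values, field names, and sampling interval are assumptions for illustration and do not represent the tracking logic of the air quality service system 3114.

```python
# A minimal sketch of one way to quantify the air quality service: estimating air
# changes delivered from measured supply airflow and room volume. The numbers and
# field names are assumptions, not the tracking logic of air quality service system 3114.

def air_changes_delivered(supply_cfm_samples, sample_minutes: float, room_volume_ft3: float) -> float:
    """Integrate supply airflow over time and divide by room volume to count air changes."""
    total_air_ft3 = sum(cfm * sample_minutes for cfm in supply_cfm_samples)
    return total_air_ft3 / room_volume_ft3

# Example: one hour of 5-minute airflow samples for a 2,400 cubic foot patient room
samples = [300, 320, 310, 305, 315, 300, 310, 320, 305, 300, 310, 315]   # cfm
print(round(air_changes_delivered(samples, sample_minutes=5, room_volume_ft3=2400), 1))  # ~7.7 air changes
```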
The air quality service system 3114 can then cause the electronic health record system 3116 to be updated to include an indication of the amount of air quality services provided. The indication of the amount of air quality services provided can be included as an entry in a list of other facilities provided to a patient (e.g., room charges, equipment used for the patient, etc.), therapies provided to the patient (e.g., doctor's visits, surgeries, etc.), therapeutics provided to the patient (e.g., hospital-administered drugs), etc. such that the air quality services are included in holistic data on care provided to the patient at the healthcare facility. In some embodiments, the air quality services provided can then be included directly in a bill for such care provided to the patient and/or another payor (e.g., insurance provider). In some embodiments, inclusion of air quality services in such data helps inform doctors, nurses, etc. about the patient's condition and treatment in a manner which can facilitate future interventions, discharge instructions, etc. to improve patient outcomes.
Referring now to
The patient bed 3202 is shown as including a variety of sensors configured to measure air quality parameters at the patient bed 3202. By including such sensors in the patient bed 3202, such sensors provide a more accurate representation of air quality experienced by the patient as compared to sensors which may be wall-mounted or otherwise positioned away from the patient in a patient room (e.g., positioned in ductwork, positioned at a unit of building equipment), especially given that the patient's bodily functions can directly affect various air quality parameters in different ways given different patient health conditions. As shown, the patient bed 3202 includes a UV sensor 3204 configured to measure exposure to UV light, a carbon dioxide sensor 3206 configured to measure carbon dioxide level in the air at the patient bed 3202, an air pressure sensor 3208 configured to measure air pressure at the patient bed 3202, a humidity sensor configured to measure humidity at the patient bed 3202, a particulate sensor 3212 configured to measure particulate levels at the patient bed 3202, a temperature sensor 3214 configured to measure temperature at the patient bed 3202, and an air composition sensor 3216 to detect the chemical composition of the air at the patient bed 3202 (e.g., the presence of certain chemicals in the air). Such sensors can measure the air quality at the patient bed 3202. Different sensors can be included or omitted in various embodiments.
The patient bed 3202 is also shown as including a UV light emitter 3218. The UV light emitter 3218 can operate to emit UV light adapted to disable, kill, etc. pathogens such as viruses and bacteria. The UV light emitter 3218 can be arranged on the patient bed 3202 to target surfaces of the patient bed 3202 (e.g., handles, frames, sheets, etc.) and/or other areas of a patient room while minimizing exposure of the patient to the UV light emitter 3218. The patient bed can thereby provide integrated sanitation using UV light. In some embodiments, the UV light emitter 3218 is operated (e.g., controlled by the smart diagnostic circuitry 3228, control panel 3226, or air quality service system 3114) to provide an amount of UV light (e.g., number of irradiations, time of irradiation, rate of irradiations, intensity/brightness of irradiation) as a function of a stage of treatment or disease progression of the patient. For example, the UV light emitter 3218 can be automatically controlled to provide a first amount of UV light during a first period of time following a treatment performed on the patient (e.g., a surgical operation, a chemotherapy administration, a radiation therapy appointment) and a second amount of UV light after the first time period, for example such that a higher level of UV-based sanitization can be provided during time periods when patient infection risk is higher as compared to other time periods. Such an approach can include determining that the UV-exposure-related risks of such higher levels of UV-based sanitization are outweighed by the benefits of reduced infection risk as a function of treatment times or other indicators of patient vulnerability to infection, disease progression, treatment progress, or the like.
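A step-down UV dose profile keyed to time since treatment, mirroring the first and second amounts described above, could be sketched as follows; the durations and dose values are hypothetical placeholders, not clinical or device specifications.

```python
# Hypothetical schedule only: a step-down UV dose profile keyed to time since a
# treatment, mirroring the first/second amounts described above. The durations and
# dose values are assumptions, not clinical or device specifications.

def uv_irradiation_minutes_per_day(hours_since_treatment: float,
                                   first_period_hours: float = 48.0,
                                   first_amount_min: float = 30.0,
                                   second_amount_min: float = 10.0) -> float:
    """Provide a higher UV sanitization dose during the first period after treatment."""
    if hours_since_treatment <= first_period_hours:
        return first_amount_min          # first amount during the elevated-infection-risk window
    return second_amount_min             # reduced second amount thereafter

print(uv_irradiation_minutes_per_day(12))    # 30.0 minutes per day
print(uv_irradiation_minutes_per_day(72))    # 10.0 minutes per day
```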
The patient bed 3202 is further shown as including an air purifier 3220. The air purifier 3220 is shown as including a fan 3222 and a filter 3224. The fan 3222 can operate to drive air through the filter 3224, which can be a high efficiency particulate air (HEPA) filter, electrostatic filter, etc., such that operation of the fan 3222 causes filtration of air at the patient bed 3202. In some embodiments, the air purifier 3220 interoperates with the UV light emitter 3218 such that air flow provided by the fan 3222 is exposed to UV light as part of air purification by the air purifier 3220. In some embodiments, the air purifier 3220 includes a humidifier or dehumidifier and is thereby configured to affect humidity of air proximate the patient bed 3202. The air purifier 3220 can thus be operated to provide an air quality service to the patient. By virtue of being co-located with a patient on the patient bed 3202, the air purifier 3220 is arranged to provide an air quality service directly to the patient.
As shown in
As shown in
As shown in
While
Referring now to
At step 3302, building equipment is associated with spaces of a healthcare facility. For example, a digital twin of the healthcare facility may include representation that different building equipment serves different spaces of the healthcare facility. Step 3302 can include determining what types of building equipment are available to serve different spaces, for example different patient rooms, operating rooms, waiting rooms, intensive care units, etc.
At step 3304, a patient record is associated with a space of the healthcare facility. The space may be a space occupied by the patient or planned for occupation by the patient (e.g., a patient room assigned to a patient for a hospital stay, an operating room or other treatment room assigned for treating the patient, etc.). In some embodiments, the patient record is assigned with a space representation in a digital twin of the healthcare facility in step 3304.
In step 3306, patient air quality needs are determined based on the patient record. For example, the patient record can indicate that the patient has a health condition that would benefit therapeutically from pressurization, low carbon dioxide levels, high ventilation rates, low particulate levels, higher temperatures, lower temperatures, etc., and/or for which policies, rules, or compliance requirements set requirements relating to air quality. Patient air quality needs, enhanced air quality, and the like herein refer to an aspect of the present application which observes that certain health conditions may be associated with therapeutic/clinical benefits of certain irregular or non-standard air quality services, for example according to assessments from payors (e.g., insurers, etc.). For example, tuning building conditions such as temperature, filtration rates, air change rates, pressure, humidity, etc. to conditions which deviate from general hospital setpoints, but in a manner tuned to a particular patient condition, can provide therapeutic/clinical benefits that tie a patient-specific air quality operation to the patient record, above and beyond the already high air quality in healthcare facilities. In some embodiments, step 3306 can include determining whether an air quality service is pre-approved for reimbursement by a payor for the patient's particular health condition.
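By way of a non-limiting illustration, the following sketch derives air quality needs and a pre-approval flag from the conditions listed in a patient record, as in step 3306; the condition names, setpoints, and approval statuses are hypothetical placeholders and do not represent clinical or payor guidance.

```python
# Illustrative mapping from health conditions to air quality needs.
AIR_QUALITY_NEEDS = {
    "post-surgical": {"min_air_changes_per_hour": 12, "max_pm2_5_ug_m3": 5,
                      "pressure": "positive", "payor_preapproved": True},
    "respiratory": {"min_humidity_pct": 40, "max_humidity_pct": 60,
                    "max_co2_ppm": 700, "payor_preapproved": False},
}

def determine_needs(patient_record: dict) -> list[dict]:
    """Step 3306 sketch: collect air quality needs (and pre-approval status)
    for each recognized condition in the patient record."""
    needs = []
    for condition in patient_record.get("conditions", []):
        if condition in AIR_QUALITY_NEEDS:
            needs.append({"condition": condition, **AIR_QUALITY_NEEDS[condition]})
    return needs

print(determine_needs({"conditions": ["post-surgical"]}))
```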
At step 3308, an air quality service to be provided is determined based on the patient record and the space associated with the patient. Step 3308 can include comparing the patient's air quality service needs (e.g., payor-approved air quality services based on the patient's particular health condition) to air quality services that can be provided in the healthcare facility given the building equipment available to serve the space associated with the patient and/or services that can be provided by bringing additional equipment (e.g., patient bed 3202) into the space. Step 3308 can be performed using a look-up table storing relationships between equipment and air quality service needs, for example. In other embodiments, a neural network classifier is used to select an air quality service to be provided based on inputs including the patient record and the building equipment serving the space associated with the patient (e.g., a classifier trained via supervised learning).
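By way of a non-limiting illustration, the following sketch implements the look-up-table variant of step 3308 by matching needed services against the capabilities of the equipment serving the space; the service labels and equipment capabilities are assumptions for illustration only.

```python
# Illustrative look-up table relating air quality needs to service types.
SERVICE_TABLE = {
    "min_air_changes_per_hour": "enhanced ventilation",
    "max_pm2_5_ug_m3": "HEPA filtration",
    "pressure": "room pressurization",
    "max_co2_ppm": "demand-controlled ventilation",
}

# Hypothetical capabilities of equipment serving the space.
EQUIPMENT_CAPABILITIES = {
    "ahu-7": {"enhanced ventilation", "demand-controlled ventilation"},
    "bed-purifier-3220": {"HEPA filtration"},
    "pressure-damper-9": {"room pressurization"},
}

def select_services(needs: list[dict], equipment: list[str]) -> set[str]:
    """Step 3308 sketch: intersect required service types with services the
    available equipment can provide."""
    available = set().union(*(EQUIPMENT_CAPABILITIES.get(e, set()) for e in equipment))
    required = {SERVICE_TABLE[k] for need in needs for k in need if k in SERVICE_TABLE}
    return required & available

# Returns {'enhanced ventilation', 'HEPA filtration'} (set order may vary).
print(select_services([{"min_air_changes_per_hour": 12, "max_pm2_5_ug_m3": 5}],
                      ["ahu-7", "bed-purifier-3220"]))
```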
At step 3310, the air quality service is provided to the patient. Step 3310 can be performed by controlling the equipment that serves the space associated with the patient, for example air quality equipment 3104 and/or air purifier 3220. Step 3310 can be executed using a building management system to control the equipment. In some embodiments, step 3310 includes prompting a user (e.g., nurse, doctor, patient) for approval or another command to execute the air quality service (e.g., via user interface 3108, via control panel 3226, via a nurse call system, via a room reservation interface, via a patient's chart, etc.) and operating equipment to provide the air quality service responsive to such a command. Air quality in the space associated with the patient can thereby be affected (e.g., improved, carbon dioxide levels reduced, particulate matter levels reduced, humidity driven to a setpoint, temperature driven to a setpoint, pressure brought into compliance with a goal, etc.) in a manner particularized to provide therapeutic advantages for patient care in step 3310.
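By way of a non-limiting illustration, the following sketch gates step 3310 on a user approval prompt before writing setpoints through a building management system interface; the StubBMS class, the write_setpoint call, and the setpoint names are stand-ins assumed for illustration and are not an actual BMS API.

```python
class StubBMS:
    """Stand-in for a building management system interface (assumption)."""
    def write_setpoint(self, space_id: str, point: str, value) -> None:
        print(f"{space_id}: set {point} = {value}")

def provide_service(bms, space_id: str, service: str, setpoints: dict, approve) -> bool:
    """Step 3310 sketch: prompt for approval, then command the equipment
    serving the space by writing the requested setpoints."""
    if not approve(f"Provide '{service}' in {space_id}?"):
        return False
    for point, value in setpoints.items():
        bms.write_setpoint(space_id, point, value)   # e.g., supply airflow, pressure
    return True

# Example: approval is granted (here auto-approved for illustration).
provide_service(StubBMS(), "room-402", "enhanced ventilation",
                {"supply_airflow_cfm": 450}, approve=lambda msg: True)
```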
In step 3312, the amount of the air quality service provided is tracked. In some embodiments, step 3312 includes counting a number of air changes provided in the space, i.e., a number of times air in the space is replaced with filtered or ventilated air, during provision of the air quality service, for example based on fan speeds, damper positions, flow rate measurements, volumetric flow rate estimates, room volume, etc. The total amount of air moved through or to the space (or through a filter, through a purification device, etc.) may also be used as a quantification of the amount of air quality service provided. In some embodiments, step 3312 includes determining a change in air quality measurements, for example a change from before the air quality service is provided to a time after the air quality service is provided. In such embodiments, the amount of the air quality service provided can be quantified in terms of an amount of an undesirable condition removed (e.g., an amount of carbon dioxide removed, an amount of a contaminant removed, an amount of humidity reduced, a reduction in particulate concentration, etc.) or another change in air quality. In some embodiments, step 3312 includes determining (e.g., timing) a duration for which the air quality service is provided and/or for which the air quality service maintains a measured parameter within a desired range (e.g., keeps particulate levels below a threshold, keeps pressure above a threshold). The air quality service can be provided continuously (e.g., for the entirety of occupation of a space by the patient) or intermittently (e.g., on a schedule set in step 3308, based on sensor measurements, based on a treatment schedule for medical interventions being provided to the patient, etc.), with step 3312 tracking the amount of time for which the air quality service is provided. In some embodiments, the system 3200 or system 3100 communicates with a medical device operating in the space to provide a treatment (e.g., an intravenous drip machine, a nebulizer, a supplemental oxygen device, etc.) so that air quality services can be automatically coordinated with operation of said medical device. In some embodiments, a number of times the air quality service is executed is counted in step 3312 (e.g., a number of times a ventilation system turns on, etc.). In some embodiments, step 3312 includes measuring or estimating resources (e.g., electricity, water, filtration material, disinfectant, etc.) consumed by provision of the air quality service, and such resource consumption is used as a quantification of the amount of air quality services provided to the patient. Any of such examples and/or combinations thereof can be used in step 3312 to quantify an amount of air quality service provided to a particular patient in the healthcare facility.
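By way of a non-limiting illustration, the following sketch quantifies one of the tracking options of step 3312 by estimating air changes delivered to a space from measured supply airflow and room volume; the units, sampling interval, and room volume are assumptions for illustration only.

```python
def air_changes_delivered(flow_samples_cfm: list[float],
                          sample_interval_min: float,
                          room_volume_ft3: float) -> float:
    """Step 3312 sketch: integrate supply airflow over time and divide by
    room volume.

    Each sample is a volumetric flow rate (cubic feet per minute) assumed to
    hold for sample_interval_min minutes; the result is the number of room
    volumes of air delivered, i.e., an estimate of air changes provided.
    """
    total_air_ft3 = sum(f * sample_interval_min for f in flow_samples_cfm)
    return total_air_ft3 / room_volume_ft3

# Example: 450 cfm held for 4 hours (one sample per minute) into a 3,000 ft^3 room.
samples = [450.0] * 240
print(round(air_changes_delivered(samples, 1.0, 3000.0), 1))  # 36.0 air changes
```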
At step 3314, the patient record is updated to include an entry for provision of the air quality service, for example including an indication of the amount of the air quality service provided. The patient record can be an electronic medical record, such that the treatment history of the patient reflects the air quality service(s) provided to the patient. In some embodiments, step 3314 includes providing an indication of the air quality service on a bill for the treatment of the patient, for example a bill provided to the patient and/or a payor. The indication can include a representation that the air quality service was provided with particularity based on the patient's health condition (e.g., based on step 3306) and/or that the service was pre-approved by a payor. Accordingly, process 3300 can enable provision of air quality services tied to particular, therapeutic advantages for particular patients (in addition to optionally facilitating reimbursement for provision of such air quality services). Such teachings can thereby provide a technical improvement in the field of HVAC and air quality equipment for healthcare operations, for example by tuning operations thereof to patient therapeutic needs, as well as in the healthcare field by providing patients with air quality services particularly adapted to provide therapeutic advantages for various patients.
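By way of a non-limiting illustration, the following sketch appends an air quality entry to a dict-based patient record and derives a bill line, as in step 3314; a real deployment would write to an electronic medical record and billing system rather than an in-memory dict, and the field names are assumptions for illustration.

```python
def record_air_quality_service(patient_record: dict, service: str,
                               amount: float, unit: str,
                               preapproved: bool) -> dict:
    """Step 3314 sketch: add an air quality entry to the patient's list of
    treatments and return the entry for downstream billing."""
    entry = {
        "type": "air quality service",
        "service": service,
        "amount": amount,
        "unit": unit,
        "payor_preapproved": preapproved,
    }
    patient_record.setdefault("treatments", []).append(entry)
    return entry

record = {"patient_id": "patient-0147", "treatments": []}
entry = record_air_quality_service(record, "enhanced ventilation", 36.0,
                                   "air changes", preapproved=True)

# Derive a bill line indicating the amount of service and pre-approval status.
bill_line = (f"{entry['service']}: {entry['amount']} {entry['unit']}"
             f"{' (payor pre-approved)' if entry['payor_preapproved'] else ''}")
print(bill_line)
```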
As utilized herein, the terms “approximately,” “about,” “substantially,” and similar terms are intended to have a broad meaning in harmony with the common and accepted usage by those of ordinary skill in the art to which the subject matter of this disclosure pertains. It should be understood by those of skill in the art who review this disclosure that these terms are intended to allow a description of certain features described and claimed without restricting the scope of these features to the precise numerical ranges provided. Accordingly, these terms should be interpreted as indicating that insubstantial or inconsequential modifications or alterations of the subject matter described and claimed are considered to be within the scope of the disclosure as recited in the appended claims.
It should be noted that the term “exemplary” and variations thereof, as used herein to describe various embodiments, are intended to indicate that such embodiments are possible examples, representations, or illustrations of possible embodiments (and such terms are not intended to connote that such embodiments are necessarily extraordinary or superlative examples).
The term “coupled” and variations thereof, as used herein, means the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent or fixed) or moveable (e.g., removable or releasable). Such joining may be achieved with the two members coupled directly to each other, with the two members coupled to each other using a separate intervening member and any additional intermediate members coupled with one another, or with the two members coupled to each other using an intervening member that is integrally formed as a single unitary body with one of the two members. If “coupled” or variations thereof are modified by an additional term (e.g., directly coupled), the generic definition of “coupled” provided above is modified by the plain language meaning of the additional term (e.g., “directly coupled” means the joining of two members without any separate intervening member), resulting in a narrower definition than the generic definition of “coupled” provided above. Such coupling may be mechanical, electrical, or fluidic.
The term “or,” as used herein, is used in its inclusive sense (and not in its exclusive sense) so that when used to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is understood to convey that an element may be either X, Y, Z; X and Y; X and Z; Y and Z; or X, Y, and Z (i.e., any combination of X, Y, and Z). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present, unless otherwise indicated.
References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below”) are merely used to describe the orientation of various elements in the FIGURES. It should be noted that the orientation of various elements may differ according to other exemplary embodiments, and that such variations are intended to be encompassed by the present disclosure.
The hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, particular processes and methods may be performed by circuitry that is specific to a given function. The memory (e.g., memory, memory unit, storage device) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. According to an exemplary embodiment, the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit or the processor) the one or more processes described herein.
The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
Although the figures and description may illustrate a specific order of method steps, the order of such steps may differ from what is depicted and described, unless specified differently above. Also, two or more steps may be performed concurrently or with partial concurrence, unless specified differently above. Such variation may depend, for example, on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations of the described methods could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.
It is important to note that the construction and arrangement of various systems (e.g., system 100, system 200, etc.) and methods as shown in the various exemplary embodiments is illustrative only. Additionally, any element disclosed in one embodiment may be incorporated or utilized with any other embodiment disclosed herein. Although only one example of an element from one embodiment that can be incorporated or utilized in another embodiment has been described above, it should be appreciated that other elements of the various embodiments may be incorporated or utilized with any of the other embodiments disclosed herein.
This application claims the benefit of and priority to U.S. Patent Application No. 63/470,147 filed May 31, 2023, the entire disclosure of which is incorporated by reference herein.
Number | Date | Country
--- | --- | ---
63470147 | May 2023 | US