The present disclosure relates to systems and methods utilizing video cameras for monitoring patients and their environment.
According to various aspects of the present disclosure, a patient support apparatus, a system, and/or one or more methods are provided that operate in conjunction with one or more cameras adapted to monitor the patient and/or the patient's environment. The images from the camera are used to improve the safety of the patient, to help prevent one or more adverse events from occurring (e.g. a patient fall), to prevent unauthorized usage of the patient support apparatus assigned to the patient, to detect patient conditions that warrant medical attention, to synchronize sensor readings with video captured from the cameras, to apprise remotely positioned caregivers of the patient's situation when an exit alert is detected, and to allow remotely controlled movement of the patient support apparatus to be carried out without risk of injury to the patient or other individuals and without damage to the patient support apparatus. Still other features and aspects of the present disclosure will be apparent to one of ordinary skill in the art from the following written description and accompanying drawings.
According to one aspect of the present disclosure, a patient support apparatus is provided that includes a support surface, a movable component, a powered actuator adapted to move the movable component, a control panel, a transceiver, and a controller. The support surface is adapted to support a patient thereon. The control panel includes a movement control adapted to control the powered actuator. The transceiver is adapted to receive a movement command from a remote control positioned off-board the patient support apparatus. The controller communicates with the transceiver and a camera having a field of view that includes a range of motion of the component. The controller is adapted to drive the actuator in response to receiving the movement command if the camera is simultaneously capturing a video stream that includes the component, to not drive the actuator in response to the movement command if the camera is not simultaneously capturing the video stream, and to drive the actuator in response to the activation of the movement control on the control panel regardless of whether the camera is simultaneously capturing the video stream.
According to other aspects of the present disclosure, the camera may be positioned on the patient support apparatus.
In some embodiments, the transceiver is adapted to transmit the video stream to the remote control. The transceiver may further be adapted to receive an acknowledgement from the remote control of the receipt of the video stream. In such embodiments, the controller may be further adapted to drive the actuator in response to receiving the movement command if the camera is simultaneously capturing the video stream and the controller has received the acknowledgement from the remote control, and to not drive the actuator in response to receiving the movement command if the camera is simultaneously capturing the video stream but the controller has not received the acknowledgement from the remote control.
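By way of illustration only, the gating logic of the two preceding paragraphs might be sketched as follows; the function and parameter names are hypothetical placeholders rather than the disclosed controller implementation:

```python
# Illustrative sketch (hypothetical names): remote movement commands are
# honored only while the camera is capturing the video stream AND the remote
# control has acknowledged receiving it; commands from the onboard control
# panel always drive the actuator.

def handle_movement_command(source, camera_streaming, ack_received, drive_actuator):
    if source == "local":
        drive_actuator()          # onboard control panel: no camera required
        return True
    if source == "remote" and camera_streaming and ack_received:
        drive_actuator()          # safe: the remote user can see the component
        return True
    return False                  # otherwise, ignore the movement command
```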
The movable component, in some embodiments, is a litter frame supported by a pair of lifts. The litter frame supports the support surface.
In some embodiments, the support surface includes a Fowler section adapted to pivot about a generally horizontal axis, and the movable component is the Fowler section.
The transceiver, in some embodiments, is a WiFi transceiver adapted to communicate with a wireless access point of a local area network of a healthcare facility.
The remote control, in some embodiments, is an electronic device that is adapted to communicate with the wireless access point of the local area network, and that is further adapted to send the movement command to a server on the local area network that then forwards the movement command to the patient support apparatus.
In some embodiments, the camera includes a depth sensor adapted to determine distances to objects appearing within the field of view of the camera.
In some embodiments, a second camera is positioned onboard the patient support apparatus. The controller, in such embodiments, may be adapted to generate a stitched video stream comprised of a portion of the video stream from the first camera and a portion of a second video stream from the second camera.
The patient support apparatus, in some embodiments, further comprises a second control panel that includes a second control adapted to carry out a particular function when activated. The second control is positioned on a face of the second control panel that faces away from the patient when the patient is positioned on the support surface. In such embodiments, the controller may be further adapted to analyze the video stream to determine if the patient is attempting to activate the second control and to disable the second control if the controller determines that the patient is attempting to activate the second control.
In some embodiments, the patient support apparatus further comprises an exit detection system and the second control is adapted to disarm the exit detection system when the second control is activated.
In some embodiments, the patient support apparatus further comprises an onboard monitoring system adapted to monitor a plurality of conditions on the patient support apparatus and to issue an alert if at least one of the conditions is in an undesired state, and to not issue the alert if none of the conditions are in the undesired state. In such embodiments, the second control may be adapted to disarm the onboard monitoring system when the second control is activated.
The controller, in some embodiments, is further adapted to analyze the video stream to determine a breathing rate of the patient. In such embodiments, the controller may be further adapted to perform any one or more of the following: transmit the breathing rate to a server on a local area network of a healthcare facility; transmit an alert to the server if the breathing rate exceeds an upper threshold; or transmit an alert to the server if the breathing rate decreases below a lower threshold.
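As a non-limiting sketch of the breathing-rate logic just described, the following assumes hypothetical threshold values and helper functions (`transmit_reading`, `transmit_alert`); the disclosure does not specify these values:

```python
# Hypothetical threshold values, in breaths per minute.
UPPER_THRESHOLD_BPM = 25
LOWER_THRESHOLD_BPM = 8

def report_breathing_rate(rate_bpm, transmit_reading, transmit_alert):
    transmit_reading(rate_bpm)  # e.g. forward the rate to a server on the LAN
    if rate_bpm > UPPER_THRESHOLD_BPM:
        transmit_alert(f"breathing rate high: {rate_bpm} bpm")
    elif rate_bpm < LOWER_THRESHOLD_BPM:
        transmit_alert(f"breathing rate low: {rate_bpm} bpm")
```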
In some embodiments, the controller is further adapted to analyze the video stream to determine if a ligature is present within the field of view. In such embodiments, the controller may be further adapted to transmit a message to a server on a local area network of a healthcare facility if the controller detects the presence of the ligature.
The controller, in some embodiments, is further adapted to communicate with a database containing visual characteristics of gowns assigned to patients within a healthcare facility in which the patient support apparatus is positioned, and to use the visual characteristics to identify within the video stream a gown worn by the patient.
The controller, in some embodiments, is further adapted to analyze the video stream to determine a position of the patient's body, to modify a color of the patient's body within the video stream, and to transmit the modified video stream with the modified color of the patient's body to an off-board device.
The modified color, in some embodiments, is comprised of shades of a single color, such as, but not limited to, gray.
In some embodiments, the off-board device is a server on a local area network of a healthcare facility in which the patient support apparatus is positioned.
The controller, in some embodiments, is further adapted to identify the patient support apparatus in the video stream, to modify the video stream by replacing the patient support apparatus with a computer generated rendering of the patient support apparatus, and to transmit the modified video stream to an off-board device.
The patient support apparatus, in some embodiments, further comprises an exit detection system comprising a plurality of load cells, and the controller is further adapted to generate a synchronized data file including a visual representation of readings from the plurality of load cells synchronized with movement of the patient captured in the video stream. In such embodiments, the controller may be further adapted to transmit the synchronized data file to an off-board device.
According to another aspect of the present disclosure, a system is provided that includes a patient support apparatus, a camera, and an off-board computer. The patient support apparatus includes a support surface adapted to support a patient thereon, a sensor, a transceiver, and a controller. The controller is adapted to instruct the transceiver to transmit a sequence of readings from the sensor to the off-board computer. The camera has a field of view that captures at least a portion of the patient support apparatus and the camera is adapted to generate a video. The off-board computer is adapted to receive the video from the camera and to generate a synchronized data file. The synchronized data file includes a first portion synchronized with a second portion. The first portion contains a visual representation of the sequence of readings from the sensor and the second portion contains the video.
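One way such a synchronized data file could be assembled, sketched here purely for illustration, is to pair each sensor reading with the video frame whose timestamp is nearest to it; the file layout and helper names below are assumptions, not the disclosed format:

```python
import bisect
import json

def build_synchronized_file(sensor_readings, frame_times, video_name, path):
    """sensor_readings: list of (timestamp_seconds, value) tuples.
    frame_times: sorted, non-empty list of video frame timestamps (seconds)."""
    entries = []
    for t, value in sensor_readings:
        i = bisect.bisect_left(frame_times, t)
        # Choose the frame whose timestamp is nearest to the reading's.
        if i == len(frame_times) or (
                i > 0 and t - frame_times[i - 1] <= frame_times[i] - t):
            i -= 1
        entries.append({"time_s": t, "reading": value, "frame_index": i})
    with open(path, "w") as f:
        # First portion: the sensor readings; second portion: the video.
        json.dump({"readings": entries, "video": video_name}, f)
```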
The off-board computer, in some embodiments, is a server in communication with a local area network of a healthcare facility in which the patient support apparatus is located, and the server is adapted to forward the synchronized data file to an electronic device in communication with the local area network.
The electronic device, in some embodiments, is a smart phone assigned to a caregiver.
The camera, in some embodiments, is positioned onboard the patient support apparatus.
The camera, in some embodiments, includes a depth sensor adapted to determine distances to objects appearing within the field of view of the camera.
In some embodiments, the system further includes a second camera positioned onboard the patient support apparatus, and the off-board computer is further adapted to receive a second video from the second camera and to generate a stitched video comprised of a portion of the video from the camera and a portion of the second video from the second camera. In such embodiments, the off-board computer is further adapted to integrate the stitched video into the synchronized data file.
The off-board computer, in some embodiments, is a server adapted to analyze the video to determine a breathing rate of the patient. The server may further be adapted to perform at least one of the following: if the breathing rate exceeds an upper threshold, transmit an alert to a mobile electronic device associated with a caregiver assigned to the patient; or, if the breathing rate is less than a lower threshold, transmit an alert to the mobile electronic device.
In some embodiments, the off-board computer is a server adapted to analyze the video to determine if a ligature is present within the field of view. In such embodiments, the server may further be adapted to transmit a message to a mobile electronic device associated with a caregiver assigned to the patient if the server detects the presence of the ligature.
The off-board computer, in some embodiments, is a server adapted to communicate with a database containing visual characteristics of gowns assigned to patients within a healthcare facility in which the patient support apparatus is positioned. The server, in such embodiments, is adapted to use the visual characteristics to identify within the video a gown worn by the patient.
In some embodiments, the off-board computer is a server adapted to analyze the video to determine a position of the patient's body, to modify a color of the patient's body within the second portion of the synchronized data file, and to transmit the synchronized data file with the modified color of the patient's body to a mobile electronic device associated with a caregiver assigned to the patient. The modified color may comprise shades of a single color, such as, but not limited to, gray.
In some embodiments, the off-board computer is a server adapted to identify the patient support apparatus in the video, to modify the second portion of the synchronized data file by replacing the patient support apparatus with a computer generated rendering of the patient support apparatus, and to transmit the synchronized data file with the computer generated rendering of the patient support apparatus to a mobile electronic device associated with a caregiver assigned to the patient.
The sensor, in some embodiments, includes a load cell of an exit detection system that comprises a plurality of load cells.
In some embodiments, the system further includes a remote control, such as a portable electronic device (e.g. a smart phone), and the server is adapted to receive a movement command from the remote control and to forward the movement command to the patient support apparatus. The movement command commands the controller of the patient support apparatus to move a component of the patient support apparatus.
In some embodiments, the server is further adapted to analyze the video to determine if any obstruction is present in a movement path of the component, and to forward the movement command to the patient support apparatus only if no obstruction is present in the movement path of the component.
The server, in some embodiments, is further adapted to analyze the video to determine if the patient is present on the support surface of the patient support apparatus, and to forward the movement command to the patient support apparatus only if the patient is not present on the support surface.
The server may be further configured to send a failure message to the remote control if the server does not forward the movement command to the patient support apparatus. The failure message indicates that the component has not been moved.
The server may also, or alternatively, be further configured to send a success message to the remote control if the server does forward the movement command to the patient support apparatus. The success message indicates that the component has been moved.
The server, in some embodiments, is adapted to forward the movement command to the patient support apparatus only if the server is simultaneously streaming the video to the remote control.
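Purely as an illustration of the server-side checks described in the preceding paragraphs, the forwarding decision might look like the following sketch (all helper names are hypothetical):

```python
def maybe_forward(cmd, path_clear, patient_absent, streaming_to_remote,
                  forward_to_apparatus, send_to_remote):
    # Forward only when video analysis shows a clear movement path, no
    # patient on the support surface, and the video is simultaneously
    # being streamed to the remote control.
    if path_clear and patient_absent and streaming_to_remote:
        forward_to_apparatus(cmd)
        send_to_remote("success: component moved")
        return True
    send_to_remote("failure: component not moved")
    return False
```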
The movable component, in some embodiments, is one of an adjustable height litter frame or a Fowler section that is adapted to pivot about a generally horizontal pivot axis.
Before the various embodiments disclosed herein are explained in detail, it is to be understood that the claims are not to be limited to the details of operation or to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The embodiments described herein are capable of being practiced or being carried out in alternative ways not expressly disclosed herein. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. The use of “including” and “comprising” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items and equivalents thereof. Further, enumeration may be used in the description of various embodiments. Unless otherwise expressly stated, the use of enumeration should not be construed as limiting the claims to any specific order or number of components. Nor should the use of enumeration be construed as excluding from the scope of the claims any additional steps or components that might be combined with or into the enumerated steps or components.
An illustrative patient support apparatus 20 according to a first embodiment of the present disclosure is shown in the accompanying drawings.
In general, patient support apparatus 20 includes a base 22 having a plurality of wheels 24, a pair of lifts 26 supported on the base 22, a litter frame 28 supported on the lifts 26, and a support deck 30 supported on the litter frame 28. Patient support apparatus 20 further includes a headboard 32, a footboard 34, and a plurality of siderails 36. Siderails 36 are all shown in the drawings in a raised position, but are movable to one or more lowered positions.
Lifts 26 are adapted to raise and lower litter frame 28 with respect to base 22. Lifts 26 may utilize hydraulic actuators, electric actuators, or any other suitable device for raising and lowering litter frame 28 with respect to base 22. In the illustrated embodiment, lifts 26 are operable independently so that the tilting of litter frame 28 with respect to base 22 can also be adjusted, to place the litter frame 28 in a flat or horizontal orientation, a Trendelenburg orientation, or a reverse Trendelenburg orientation. That is, litter frame 28 includes a head end 38 and a foot end 40, each of whose height can be independently adjusted by the nearest lift 26. Patient support apparatus 20 is designed so that when an occupant lies thereon, his or her head will be positioned adjacent head end 38 and his or her feet will be positioned adjacent foot end 40. The lifts 26 may be constructed and/or operated in any of the manners disclosed in commonly assigned U.S. patent publication 2017/0246065, filed on Feb. 22, 2017, entitled LIFT ASSEMBLY FOR PATIENT SUPPORT APPARATUS, the complete disclosure of which is hereby incorporated herein by reference. Other manners for constructing and/or operating lifts 26 may, of course, be used.
Litter frame 28 provides a structure for supporting support deck 30, the headboard 32, footboard 34, and siderails 36. Support deck 30 provides a support surface for a mattress 42, or other soft cushion, so that a person may lie and/or sit thereon. The top surface of the mattress 42 or other cushion forms a support surface for the occupant.
Support deck 30 is made of a plurality of sections, some of which are pivotable about generally horizontal pivot axes. In the embodiment shown in the drawings, these sections include a head section 44 (also referred to herein as a Fowler section), which is pivotable about a generally horizontal pivot axis between a generally horizontal orientation and a plurality of raised positions.
In some embodiments, patient support apparatus 20 may be modified from what is shown to include one or more components adapted to allow the user to extend the width of patient support deck 30, thereby allowing patient support apparatus 20 to accommodate patients of varying sizes. When so modified, the width of deck 30 may be adjusted sideways in any increments, for example between a first or minimum width, a second or intermediate width, and a third or expanded/maximum width.
As used herein, the term “longitudinal” refers to a direction parallel to an axis between the head end 38 and the foot end 40. The terms “transverse” or “lateral” refer to a direction perpendicular to the longitudinal direction and parallel to a surface on which the patient support apparatus 20 rests.
It will be understood by those skilled in the art that patient support apparatus 20 can be designed with other types of mechanical constructions, such as, but not limited to, that described in commonly assigned, U.S. Pat. No. 10,130,536 to Roussy et al., entitled PATIENT SUPPORT USABLE WITH BARIATRIC PATIENTS, the complete disclosure of which is incorporated herein by reference. In another embodiment, the mechanical construction of patient support apparatus 20 may be the same as, or nearly the same as, the mechanical construction of the Model 3002 S3 bed manufactured and sold by Stryker Corporation of Kalamazoo, Michigan. This mechanical construction is described in greater detail in the Stryker Maintenance Manual for the MedSurg Bed, Model 3002 S3, published in 2010 by Stryker Corporation of Kalamazoo, Michigan, the complete disclosure of which is incorporated herein by reference. It will be understood by those skilled in the art that patient support apparatus 20 can be designed with still other types of mechanical constructions, such as, but not limited to, those described in commonly assigned, U.S. Pat. No. 7,690,059 issued to Lemire et al., and entitled HOSPITAL BED; and/or commonly assigned U.S. Pat. publication No. 2007/0163045 filed by Becker et al. and entitled PATIENT HANDLING DEVICE INCLUDING LOCAL STATUS INDICATION, ONE-TOUCH FOWLER ANGLE ADJUSTMENT, AND POWER-ON ALARM CONFIGURATION, the complete disclosures of both of which are also hereby incorporated herein by reference. The mechanical construction of patient support apparatus 20 may also take on still other forms different from what is disclosed in the aforementioned references.
Patient support apparatus 20 further includes a plurality of control panels 54 that enable a user of patient support apparatus 20, such as a patient and/or an associated caregiver, to control one or more aspects of patient support apparatus 20. In the embodiment shown in the drawings, these control panels 54 include a main control panel 54a, an outer siderail control panel 54b, and a pair of inner siderail control panels 54c.
Among other functions, controls 50 of control panel 54a allow a user to control one or more of the following: change a height of support deck 30, raise or lower head section 44, activate and deactivate a brake for wheels 24, arm and disarm an exit detection system, arm and disarm an onboard monitoring system, configure patient support apparatus 20, control one or more cameras and/or camera processing functions, control an onboard scale system, and/or other functions. One or both of the inner siderail control panels 54c also include at least one control that enables a patient to call a remotely located nurse (or other caregiver). In addition to the nurse call control, one or both of the inner siderail control panels 54c may also include one or more controls for controlling one or more features of a television, room light, and/or reading light positioned within the same room as the patient support apparatus 20. With respect to the television, the features that may be controllable by one or more controls 50 on control panel 54c include, but are not limited to, the volume, the channel, the closed-captioning, and/or the power state of the television. With respect to the room and/or night lights, the features that may be controlled by one or more controls 50 on control panel 54c include the on/off state of these lights.
Control panel 54a includes a display 52 which, in the illustrated embodiment, is a touchscreen display.
Surrounding display 52 are a plurality of navigation controls 50a-f that, when activated, cause different screens to be displayed on display 52. More specifically, when a user presses navigation control 50a, control panel 54a displays an exit detection control screen on display 52 that includes one or more icons that, when touched, control an onboard exit detection system. The exit detection system is adapted to issue an alert when a patient exits from patient support apparatus 20. Such an exit detection system may include any of the same features and functions as, and/or may be constructed in any of the same manners as, the exit detection system disclosed in commonly assigned U.S. patent application Ser. No. 62/889,254 filed Aug. 20, 2019, by inventors Sujay Sukumaran et al. and entitled PERSON SUPPORT APPARATUS WITH ADJUSTABLE EXIT DETECTION ZONES, the complete disclosure of which is incorporated herein by reference. Other types of exit detection systems can also or alternatively be used.
When a user presses navigation control 50b, control panel 54a displays a monitoring control screen on display 52 that includes one or more icons that, when touched, control an onboard monitoring system adapted to monitor a plurality of conditions on patient support apparatus 20 and to issue an alert if at least one of the conditions is in an undesired state.
When a user presses navigation control 50c, control panel 54a displays a scale control screen that includes a plurality of control icons that, when touched, control the scale system of patient support apparatus 20. Such a scale system may include any of the same features and functions as, and/or may be constructed in any of the same manners as, the scale systems disclosed in commonly assigned U.S. patent application Ser. No. 62/889,254 filed Aug. 20, 2019, by inventors Sujay Sukumaran et al. and entitled PERSON SUPPORT APPARATUS WITH ADJUSTABLE EXIT DETECTION ZONES, and U.S. patent application Ser. No. 62/885,954 filed Aug. 13, 2019, by inventors Kurosh Nahavandi et al. and entitled PATIENT SUPPORT APPARATUS WITH EQUIPMENT WEIGHT LOG, the complete disclosures of both of which are incorporated herein by reference. Other types of scale systems can also or alternatively be included with patient support apparatus 20.
When a user presses navigation control 50d, control panel 54a displays a motion control screen that includes a plurality of control icons that, when touched, control the movement of various components of patient support apparatus 20, such as, but not limited to, the height of litter frame 28 and the pivoting of head section 44. In some embodiments, the motion control screen displayed on display 52 in response to pressing control 50d may be the same as, or similar to, the position control screen 216 disclosed in commonly assigned U.S. patent application Ser. No. 62/885,953 filed Aug. 13, 2019, by inventors Kurosh Nahavandi et al. and entitled PATIENT SUPPORT APPARATUS WITH TOUCHSCREEN, the complete disclosure of which is incorporated herein by reference. In some embodiments, the motion control screen takes on the form of motion control screen 62, which is shown in the drawings and described in greater detail below.
When a user presses navigation control 50e, control panel 54a displays a motion lock control screen that includes a plurality of control icons that, when touched, control one or more motion lockout functions of patient support apparatus 20. Such a motion lockout screen may include any of the same features and functions as, and/or may be constructed in any of the same manners as, the motion lockout features, functions, and constructions disclosed in commonly assigned U.S. patent application Ser. No. 16/721,133 filed Dec. 19, 2019, by inventors Kurosh Nahavandi et al. and entitled PATIENT SUPPORT APPARATUSES WITH MOTION CUSTOMIZATION, the complete disclosure of which is incorporated herein by reference. Other types of motion lockout control screens can also or alternatively be included with patient support apparatus 20.
When a user presses on navigation control 50f, control panel 54a displays a menu screen that includes a plurality of menu icons that, when touched, bring up one or more additional screens for controlling and/or viewing one or more other aspects of patient support apparatus 20. Such other aspects include, but are not limited to, diagnostic and/or service information for patient support apparatus 20, mattress control and/or status information, configuration settings, and other settings and/or information. One example of a suitable menu screen is the menu screen 100 disclosed in commonly assigned U.S. patent application Ser. No. 62/885,953 filed Aug. 13, 2019, by inventors Kurosh Nahavandi et al. and entitled PATIENT SUPPORT APPARATUS WITH TOUCHSCREEN, the complete disclosure of which is incorporated herein by reference. Other types of menu screens can also or alternatively be included with patient support apparatus 20.
For all of the navigation controls 50a-f, activation of the control causes display 52 to display the corresponding screen, from which the user may control the corresponding functions of patient support apparatus 20.
Control panel 54a, in some embodiments, also includes a dashboard 58 positioned adjacent display 52. Dashboard 58 includes a plurality of icons 60a-e, each of which has one or more lights positioned behind it and each of which corresponds to a condition of patient support apparatus 20 (e.g. the state of the brake, the armed/disarmed state of exit detection system 82, etc.).
The lights positioned behind these icons 60a-e may be controlled to be illuminated in different colors, depending upon what state the associated condition is currently in (e.g. the brake is deactivated, exit detection system 82 is disarmed, etc.) and/or one or more of them may alternatively not be illuminated at all when the associated condition is in another state. Additionally, the brightness level of the lights may be adjustable such that, regardless of color, the intensity of the light emitted may be varied by a controller onboard patient support apparatus 20.
Fewer or additional icons 60 may be included as part of dashboard 58.
Motion control screen 62 includes a plurality of motion controls 50g-p for controlling the movement of patient support apparatus 20. Specifically, it includes a chair control 50g for moving patient support apparatus 20 to a chair configuration; a flat control 50h for moving patient support apparatus 20 to a flat orientation; a set of Fowler lift and lower controls 50i and 50j; a set of gatch lift and lower controls 50k and 50l; a litter frame lift control 50m; a litter frame lower control 50n; a Trendelenburg control 50o; and a reverse Trendelenburg control 50p. In some embodiments of patient support apparatus 20, the controls of motion control screen 62 are implemented as dedicated controls that are separate from display 52.
Control panel 54a, in some embodiments, further includes one or more controls 50 for controlling one or more cameras 64 that are positioned onboard patient support apparatus 20 and/or elsewhere within the room.
Each camera 64, in some embodiments, is a camera from the RealSense™ product family D400 series marketed by Intel Corporation of Santa Clara, California. For example, in some embodiments, each camera is an Intel RealSense™ D455 Depth Camera that includes two imagers, an RGB sensor, a depth sensor, an inertial measurement unit, a camera module, and a vision processor. Further details regarding this camera are found in the June 2020 datasheet (revision 009; document number 337029-009) entitled "Intel® RealSense™ Product Family D400 Series," published by Intel Corporation of Santa Clara, California, the complete disclosure of which is incorporated herein by reference. Other types of depth cameras marketed by Intel Corporation, as well as other types of depth cameras marketed by other entities, may also, or alternatively, be used according to the teachings of the present disclosure. In some embodiments, cameras may be used that are of the same type(s) as those disclosed in commonly assigned U.S. Pat. No. 10,368,39 issued to Derenne et al. on Jul. 20, 2019, and entitled VIDEO MONITORING SYSTEM, the complete disclosure of which is incorporated herein by reference. As will be discussed in greater detail below, the images captured by camera 64 are utilized by one or more controllers onboard patient support apparatus 20 and/or one or more remote computing devices (e.g. one or more servers) to carry out one or more of the plurality of functions described herein.
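For readers unfamiliar with the D400-series cameras, the following minimal sketch shows how depth data can be read with Intel's pyrealsense2 Python bindings; the stream settings and the probed pixel are example values only, not parameters taken from the disclosure:

```python
# Minimal sketch: reading depth (and color) frames from an Intel RealSense
# D400-series camera via the pyrealsense2 bindings.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    # Distance, in meters, to whatever object appears at pixel (320, 240).
    distance_m = depth_frame.get_distance(320, 240)
    print(f"distance at image center: {distance_m:.2f} m")
finally:
    pipeline.stop()
```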
As was noted previously, in some embodiments, additional control panels may be present on the patient support apparatus 20, spaced from control panel 54a.
Control 68a is an egress control that may be pressed by a caregiver in preparation for a patient exiting from patient support apparatus 20.
Trendelenburg control 68d, when pressed, causes the motorized actuators to move litter frame 28 and deck 30 to the Trendelenburg position. Flat control 68e, when pressed, causes the motorized actuators to move litter frame 28 and deck 30 to a flat orientation. Reverse Trendelenburg control 68f, when pressed, causes the motorized actuators to move litter frame 28 and deck 30 to the reverse Trendelenburg position.
Controls 68g and 68h, when pressed, respectively raise and lower litter frame 28 with respect to base 22.
Control panel 54b also includes indicators 70a, 70b, and 70c. Indicator 70a is illuminated in a first manner (e.g. a red or amber light) when a brake onboard patient support apparatus 20 is not activated, and in another manner (e.g. green) when the brake is activated. Indicator 70b, in some embodiments, is illuminated in order to remind a caregiver to arm or disarm an exit detection system onboard patient support apparatus 20. In at least one such embodiment, indicator 70b emits white light (steady, flashing, or pulsing) when a user presses on egress control 68a while the exit detection system is armed, and emits no light at all other times except when the exit detection system is armed and detects a patient exiting from patient support apparatus 20. When such a patient exit is detected, indicator 70b may be activated to emit a red flashing light.
Indicator 70c is illuminated when a patient makes a call to a remotely positioned nurse. In some embodiments, indicator 70c is illuminated a first color when such a call is placed and illuminated a second color when no such call is placed. In other embodiments, indicator 70c is not illuminated when no call is being placed, and is illuminated when such a call is placed.
As shown in the drawings, patient support apparatus 20 is adapted to communicate with one or more electronic devices 72, such as smart phones, tablet computers, laptop computers, and/or other computing devices that are carried by and/or associated with the caregivers of the healthcare facility.
Electronic device 72 includes a controller 90, a memory 92, a network transceiver 94, a display 96, and one or more controls 98. Memory 92 includes a software application 100 that is executed by controller 90 and that carries out one or more functions described herein, such as, but not limited to, a remote control function for controlling patient support apparatus 20, an image processing/viewing function for processing and/or viewing images captured by one or more cameras 64, and/or other functions. Electronic device 72 may be a conventional smart phone, tablet computer, laptop computer, or other type of computer that is able to execute software application 100 and that includes the components shown in the drawings.
Controller 78 of patient support apparatus 20 and controller 90 of electronic device 72 may take on a variety of different forms. In the illustrated embodiment, each is implemented as a conventional microcontroller. However, either controller may alternatively be implemented using other circuitry, such as one or more microprocessors, field programmable gate arrays, systems on a chip, discrete logic circuitry, and/or other combinations of hardware, software, and/or firmware that are capable of carrying out the functions described herein.
Network transceivers 74 and 94 are, in at least some embodiments, WiFi transceivers (e.g. IEEE 802.11) that wirelessly communicate with each other via one or more conventional wireless access points 104 of the local area network 102.
Exit detection system 82 of patient support apparatus 20 is adapted to detect when a patient exits, or is about to exit, from patient support apparatus 20 and to issue an exit alert in response thereto. In at least some embodiments, exit detection system 82 includes a plurality of load cells 110 that detect the weight and/or position of the patient supported on support deck 30, and controller 78 analyzes the outputs of load cells 110 to determine whether to issue the exit alert.
Headwall transceiver 76 of patient support apparatus 20 is adapted to wirelessly communicate with a headwall unit 106 mounted within the room of the healthcare facility in which patient support apparatus 20 is positioned. As discussed in greater detail below, headwall unit 106 is coupled to a communication outlet 108 of the healthcare facility's nurse call system, thereby enabling patient support apparatus 20 to communicate with the nurse call system.
Memory 80 of patient support apparatus 20, in addition to including the data and instructions for carrying out the functions described herein, may include a synchronized data file 112. Synchronized data file 112, as will be discussed herein, may be generated by controller 78 synchronizing the outputs of one or more sensors (e.g. sensors 88 or other sensors, such as load cells 110) with a video captured by one or more of the cameras 64. In some embodiments, synchronized file 112 is generated and stored onboard patient support apparatus 20 (e.g. in memory 80). In other embodiments, file 112 may be generated by an off-board computing device (e.g. a server) and stored in another location. In still other embodiments, synchronized data file 112 may be streamed from patient support apparatus 20 (and/or a server of network 102) to one or more remote devices, such as one or more electronic devices 72.
Lift actuators 84 are adapted to drive lifts 26 in order to raise, lower, and/or tilt litter frame 28 with respect to base 22. One or more deck actuators 86 are adapted to pivot one or more of the pivotable sections of support deck 30, such as head section 44, about their associated pivot axes.
Sensor(s) 88 may comprise any of a variety of different sensors that are either positioned onboard patient support apparatus 20 and/or that are positioned elsewhere but in communication with controller 78 (e.g. via transceiver 74). In some embodiments, sensor(s) 88 comprise angle and/or position sensors that determine the angular orientation and/or position of one or more movable components of patient support apparatus 20, such as, but not limited to, litter frame 28 and/or support deck 30. In some embodiments, sensors 88 may comprise any of the sensors 92 disclosed in (including those disclosed in references incorporated therein by reference) commonly assigned U.S. patent application Ser. No. 63/077,864 filed Sep. 14, 2020, by inventors Krishna Bhimavarapu et al. and entitled PATIENT SUPPORT APPARATUS SYSTEMS WITH DYNAMICAL CONTROL ALGORITHMS, the complete disclosure of which is incorporated herein by reference.
Sensors 88 may also, or alternatively, take on a number of other forms, several examples of which are discussed below.
In some embodiments, sensors 88 include a pressure sensing mat of the types disclosed in commonly-assigned U.S. Pat. No. 8,161,826 issued to Taylor and/or of the types disclosed in commonly-assigned PCT patent application 2012/122002 filed Mar. 2, 2012 by applicant Stryker Corporation and entitled SENSING SYSTEM FOR PATIENT SUPPORTS, the complete disclosures of both of which are incorporated herein by reference.
In some embodiments, sensors 88 include one or more load cells that are built into one or more patient support apparatuses 20 and that are adapted to detect one or more vital signs of the patient. In at least one of those embodiments, patient support apparatus 20 is constructed in the manner disclosed in commonly-assigned U.S. Pat. No. 7,699,784 issued to Wan Fong et al. and entitled SYSTEM FOR DETECTING AND MONITORING VITAL SIGNS, the complete disclosure of which is hereby incorporated herein by reference.
Any of the sensors 88 discussed herein may include one or more load cells, pressure sensors such as piezoelectric and piezoresistive sensors, Hall Effect sensors, capacitive sensors, resonant sensors, thermal sensors, limit switches, gyroscopes, accelerometers, motion sensors, ultrasonic sensors, range sensors, potentiometers, magnetostrictive sensors, electrical current sensors, voltage detectors, and/or any other suitable types of sensors for carrying out their associated functions.
Display 96 and controls 98 of electronic device 72 enable a user, such as a caregiver, to interact with software application 100, including viewing the images and/or videos captured by one or more cameras 64 and, in some embodiments, remotely controlling one or more features of patient support apparatus 20, as discussed in greater detail below.
Patient support apparatus 20 is configured to communicate with one or more servers on local area network 102 of the healthcare facility. In at least some embodiments, these servers include a patient support apparatus server 114, an ADT (admission, discharge, and transfer) server 118, an EMR (electronic medical records) server 120, and a nurse call server 126, each of which is discussed in greater detail below.
Local area network 102 is also configured to allow one or more electronic devices 72 and patient support apparatuses 20 to access the local area network 102 via wireless access points 104. It will be understood that the architecture and content of local area network 102 will vary from healthcare facility to healthcare facility, and that the example described herein is merely illustrative.
The combination of patient support apparatus server 114 and patient support apparatus 20 forms a vision system 130 that, as will be discussed in greater detail below, is adapted to perform one or more functions related to the images gathered by camera(s) 64. It will be understood that vision system 130 may, in some embodiments, include additional cameras 64 that are not positioned on patient support apparatus 20, and that vision system 130 may also, or alternatively, include one or more other servers, such as a remote server 116 that is positioned remotely from the healthcare facility and that communicates with local area network 102 (e.g. via the Internet).
In some embodiments, each video camera 64 has its own processor integrated therein that is adapted to partially or wholly process the images captured by the image sensor(s) of the camera 64. For example, when using an Intel D400 series camera as camera 64, these cameras include an Intel RealSense Vision Processor D4, along with other electronic circuitry, that performs various vision processing on the signals captured by the various sensors that are part of the D400 series of cameras. For purposes of the following description, the use of the term “outputs,” “signals,” “image signals,” or the like from cameras 64 will refer to either unprocessed image data captured by the camera 64, or partially or wholly processed image data that is captured by cameras 64 and partially or wholly processed by the processor integrated into the camera 64.
Controller 78 of patient support apparatus 20 is adapted to receive the outputs of the onboard camera(s) 64 and to process those outputs itself, to forward them (via network transceiver 74) to patient support apparatus server 114 for off-board processing, or to do a combination of both.
As was noted above, the precise number and location of cameras 64 on patient support apparatus 20 (and/or elsewhere) may vary, depending upon the data that is intended to be captured by the cameras 64 in a particular embodiment of vision system 130. Each camera 64 may be either mounted in a fixed orientation, or it may be coupled to a mounting structure that allows the orientation of the camera to be automatically adjusted by controller 78 and/or server 114 such that the camera may record images of different areas of the room by adjusting its orientation. Still further, each camera 64 may include zoom features that allow controller 78 and/or server 114, or another intelligent device, to control the zooming in and zooming out of the cameras 64 such that both close-up images and wider field of view images may be recorded, as desired.
Server 114 includes, and/or communicates with, a database 124 that stores various data utilized by vision system 130. In at least some embodiments, this data includes sheet and/or gown attribute data, restraint attribute data, patient support apparatus attribute data, camera location information, patient support apparatus movement abilities, and/or association data, each of which is discussed below.
The sheet/gown attribute data refers to color, patterns, and/or other visual information regarding the sheets and/or gowns that are used in a particular healthcare setting. Generally speaking, specific healthcare facilities use gowns for patients that are of the same color and/or pattern. Alternatively, they may use several different types of gowns that each have their own color and/or patterns on them. Similarly, the sheets used on patient beds may be of the same color and/or have the same pattern, or the healthcare facility may use a set of colors and/or patterns for its patient support apparatuses. Regardless of whether or not a healthcare facility uses only a single type, or multiple types, of gowns and/or sheets, the color and/or pattern attributes of these items are stored in database 124 in at least one embodiment. As will be discussed in greater detail below, vision system 130 uses this color and/or pattern information to identify the sheets and/or gowns that are captured in the images of cameras 64. By identifying the sheets and/or gowns, vision system 130 is better able to distinguish the patient and/or the sheets in the image from other objects that are captured in the images.
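A simple way such attribute data could be exploited, sketched here under the assumption that a gown's color is stored as an HSV range (the values and record layout below are hypothetical), is a color mask of the kind OpenCV provides:

```python
# Hypothetical sketch: locate candidate gown regions in a video frame by
# masking pixels that fall inside the facility's configured HSV color range.
import cv2
import numpy as np

GOWN_HSV_LOW = np.array([100, 60, 60])    # e.g. a blue-ish gown (example values)
GOWN_HSV_HIGH = np.array([130, 255, 255])

def find_gown_regions(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GOWN_HSV_LOW, GOWN_HSV_HIGH)
    # Keep only reasonably large connected regions to reject noise.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 500]
```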
The restraint attribute data refers to color, patterns, and/or other visual information regarding any patient restraints that may be used by a particular healthcare facility for restraining a patient while positioned on patient support apparatus 20. Such restraints may be used for certain types of patients that are determined to be of potential danger to themselves and/or to others. Such restraints restrain the patient from getting out of patient support apparatus 20 and/or restrain movement of their arms, legs, neck, and/or other body parts. Vision system 130 is configured to allow authorized users to enter into database 124 attribute data defining the color, pattern, position, shape, and/or other visual characteristics of the restraints that they use within their particular healthcare facility. Vision system 130 then uses this attribute data to recognize the restraints in the images it captures. In at least one embodiment, vision system 130 is configured to issue an alert to one or more caregivers if one or more cameras 64 detect that a restraint is not applied to a patient. That is, vision system 130 uses the restraint attribute data to determine whether one or more restraints have been applied to the patient and, if not, it may be configured to issue an alert to caregivers alerting them of the fact that one or more restraints have not been applied.
The patient support apparatus attribute data stored in database 124 refers to visual characteristics of the patient support apparatuses 20 used within the healthcare facility, such as their color, shape, and/or other visual features that enable vision system 130 to identify a patient support apparatus 20 within the images captured by cameras 64.
The camera location information stored in database 124 identifies where each camera 64 is positioned, such as the location(s) of the camera(s) 64 onboard a particular patient support apparatus 20 and/or the locations of any cameras 64 positioned elsewhere within the rooms of the healthcare facility.
The patient support apparatus movement abilities refer to the components of patient support apparatus 20 that are physically movable, as well as where those components are located on the patient support apparatus 20 and their range of motion. As will be discussed in greater detail below, server 114 and/or controller 78 may be adapted to allow a person to remotely control movement of one or more components of patient support apparatus 20 if an analysis of the concurrent images captured by camera(s) 64 indicate that there are no obstacles in the movement path of that component. In order to determine if any such obstacles are present or not, server 114 and/or controller 78 utilize this data so that they are able to identify the movement path of the component in the captured images and to analyze the captured images to determine if an obstacle exists in the movement path.
The association data that may be stored in database 124 includes data associating a particular patient support apparatus 20 with a particular room (and/or bay within a room) within the healthcare facility, data associating particular rooms and/or bays with particular caregivers, and/or data associating particular rooms and/or bays with particular electronic devices 72 that are to receive, and/or that are to send, data regarding particular patient support apparatuses 20.
In some embodiments, database 124 includes a table of data that server 114 consults to determine the corresponding data it is to use for a particular patient support apparatus 20 based on an ID, or other indicator, that it receives from the patient support apparatus 20. For example, in some embodiments, patient support apparatus 20 sends an ID to server 114 via transceiver 74 that indicates the type of patient support apparatus that it is. From this ID, server 114 may consult a table of different types of patient support apparatuses 20 that contains data for each type. The data may indicate any of the previously discussed data, such as the movement capabilities of the patient support apparatus, the shape and/or color of the patient support apparatus, the location of the camera(s) onboard the patient support apparatus 20, and/or other information. Thus, for example, if patient support apparatus 20 sends an ID to server 114 that identifies patient support apparatus 20 as a type A patient support apparatus 20, server 114 may be configured to consult a table that indicates that type A patient support apparatuses 20 have three cameras located at specific locations, are able to have their Fowler sections 44 pivoted upwardly 80 degrees, can have their litter frames raised/lowered fifteen inches, etc.
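Such a lookup might be as simple as the following sketch, which reuses the example "type A" values mentioned above; the table schema itself is a hypothetical illustration, not the disclosed data structure:

```python
# Hypothetical type table keyed by the apparatus type ID reported to server 114.
APPARATUS_TYPES = {
    "A": {
        "num_cameras": 3,
        "fowler_max_pivot_deg": 80,   # Fowler section pivots up to 80 degrees
        "litter_travel_inches": 15,   # litter frame raises/lowers 15 inches
    },
}

def capabilities_for(type_id):
    # Returns None when the reported type is not in the table.
    return APPARATUS_TYPES.get(type_id)
```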
In some embodiments, either or both of controller 78 and server 114 include commercially available software that is adapted to carry out the image analysis discussed herein. For example, in some embodiments, controller 78 and/or server 114 include the commercially available software suite referred to as OpenCV (Open Source Computer Vision Library), an open source computer vision library supported by Willow Garage of Menlo Park, California. The OpenCV library has been released under the Berkeley Software Distribution (BSD) open source license. The OpenCV library has more than 2500 computer vision algorithms and is available for use with various commercially available operating systems, including Microsoft Windows, Linux, macOS, and iOS. The OpenCV algorithms include a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms are designed to be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce high resolution images of entire scenes, find similar images from an image database, follow eye movements, recognize scenery and establish markers to overlay scenery with augmented reality, and perform other tasks.
The OpenCV library has to date included multiple major releases (version 4.5.2 was released in 2021), and any one of these major versions (as well as any of the multiple intermediate versions), is suitable for carrying out the features and functions described in more detail herein. In at least one embodiment of patient support apparatus 20 and/or server 114, customized software is added to interact with and utilize various of the software algorithms of the OpenCV library in order to carry out the features described herein. Other commercially available software may also be used, either in addition to or in lieu of the OpenCV library.
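As one concrete example of the kind of OpenCV functionality mentioned above, the library's built-in HOG pedestrian detector can locate people in a frame; this sketch is illustrative only, and a production vision system would likely use more robust, purpose-trained models:

```python
import cv2

# OpenCV's bundled HOG descriptor with its default pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr):
    # Returns bounding boxes (x, y, w, h) for people detected in the frame.
    boxes, _weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    return list(boxes)
```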
ADT server 118 stores patient information, including the identity of patients and the corresponding rooms and/or bays within rooms to which the patients are assigned. That is, ADT server 118 includes a patient-room assignment table 132, or a functional equivalent to such a table. The patient-room assignment table 132 correlates rooms, as well as bays within multi-patient rooms, to the names of individual patients within the healthcare facility. Patients' names are entered into ADT server 118 by one or more healthcare facility staff whenever a patient checks into the healthcare facility and the patient is assigned to a particular room within the healthcare facility. If and/or when a patient is transferred to a different room and/or discharged from the healthcare facility, the staff of the healthcare facility update ADT server 118. ADT server 118 therefore maintains an up-to-date table 132 that correlates patient names with their assigned rooms.
EMR server 120 stores electronic medical records for the patients of the healthcare facility. In some embodiments, vision system 130 and/or patient support apparatus 20 is adapted to forward information derived from cameras 64 and/or sensors 88, such as the patient's breathing rate, to EMR server 120 for storage in the corresponding patient's electronic medical record.
Nurse call server 126 is shown in the drawings coupled to local area network 102. Nurse call server 126 is part of a conventional nurse call system of the healthcare facility that enables patients to communicate with remotely positioned caregivers, as discussed below.
Nurse call system server 126 is configured to communicate with caregivers and patients. That is, whenever a patient on a patient support apparatus 20 presses, or otherwise activates, a nurse call control, the nurse call signal is transmitted wirelessly from headwall transceiver 76 to headwall unit 106, which in turn forwards the signal to communication outlet 108 via a nurse call cable 138. The communication outlet 108 forwards the signal to nurse call server 126 via one or more conductors 140 (and/or through other means). The nurse is thereby able to communicate with the patient from a remote location. In some embodiments, patient support apparatus 20 is not adapted to wirelessly communicate with outlet 108, but instead communicates with communication outlet 108 via a direct coupling of nurse call cable 138 between patient support apparatus 20 and outlet 108. In those embodiments of patient support apparatus 20 that are adapted to wirelessly communicate with outlet 108, headwall unit 106 may take on any of the forms and/or functionality of any of the headwall units disclosed in commonly assigned U.S. patent application Ser. No. 63/193,778 filed May 27, 2021, by inventors Krishna Bhimavarapu et al. and entitled PATIENT SUPPORT APPARATUS AND HEADWALL UNIT SYNCING, and/or any of the headwall units that are disclosed in any of the patent references incorporated therein by reference. The complete disclosure of the aforementioned 63/193,778 patent application, as well as all of the references incorporated therein by reference, is hereby incorporated herein by reference in its entirety.
Power to the patient support apparatus 20 is provided by an external power source and/or an onboard battery. As shown in the drawings, patient support apparatus 20 includes a power cable that is adapted to plug into an electrical outlet 144 mounted within the healthcare facility (e.g. on a wall of the patient's room).
Local area network 102 may include additional structures not shown in the drawings.
Patient support apparatus server 114 includes a table 150 that identifies the current location of each patient support apparatus 20 within the healthcare facility. In order to populate table 150, patient support apparatuses 20 are adapted to send location messages 152 to server 114 (e.g. messages containing the ID of a nearby headwall unit 106, as discussed below) from which server 114 determines their current locations.
In addition to sending messages 152, patient support apparatuses 20 are further adapted to send data messages 154 to network 102 via network transceiver 74. The data messages 154 contain data about the status of patient support apparatus 20 and/or visual image data from one or more cameras 64 positioned onboard the patient support apparatus 20. The visual image data may include live (or delayed) streaming video images, non-streamed videos, portions of videos, and/or any other data related to the images captured by cameras 64.
The data about the status of patient support apparatus 20 contained within messages 154 may also include any other information that is generated by patient support apparatus 20, such as, but not limited to, the status of any of its siderails 36, its brake, the height of litter frame 28, the state of its exit detection system 82, and/or any other data. Although a single patient support apparatus 20 is described herein, it will be understood that server 114 may receive messages 152 and 154 from many patient support apparatuses 20 positioned throughout the healthcare facility.
In some embodiments, server 114 is configured to share the patient support apparatus data (including visual data) that it receives (via messages 154) with only the caregivers who are responsible for the patient associated with the particular patient support apparatus 20 that the message 154 originated from. In other words, in some embodiments, server 114 is configured to forward data to only a subset of the electronic devices 72, and that subset is chosen based on the caregivers who are responsible for a particular patient. In this manner, for example, a caregiver who is assigned to patients A-G will not receive data on his or her associated electronic device 72 (e.g. smart phone) from patient support apparatuses that are assigned to patients H-Z.
Server 114 may be configured to determine which electronic devices 72 to transmit patient support apparatus data to based on information contained within table 150, which may be generated by server 114 in response to communication with other servers. Specifically, once server 114 knows the room (and/or bay) that the status data pertains to, it can correlate this room with a particular patient by consulting ADT server 118, and it can correlate this room with specific caregivers by consulting nurse call server 126 (or another server on network 102). Once the specific caregiver is identified, server 114 is further configured to maintain, or have access to, a list that identifies which electronic devices 72 are associated with which caregivers. Messages can then be sent by server 114 to only the appropriate caregivers' electronic devices 72.
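The room-to-patient-to-caregiver resolution just described might, in a much-simplified sketch with entirely hypothetical tables and identifiers, look like this:

```python
# Illustrative routing sketch: resolve the originating bed's room, then the
# caregivers assigned to that room, then each caregiver's electronic device.
ROOM_BY_BED = {"bed-17": "room-304"}            # e.g. derived from table 150
CAREGIVERS_BY_ROOM = {"room-304": ["nurse-5"]}  # e.g. from the nurse call server
DEVICE_BY_CAREGIVER = {"nurse-5": "phone-12"}

def devices_for_bed(bed_id):
    room = ROOM_BY_BED.get(bed_id)
    caregivers = CAREGIVERS_BY_ROOM.get(room, [])
    return [DEVICE_BY_CAREGIVER[c] for c in caregivers if c in DEVICE_BY_CAREGIVER]
```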
As shown in the drawings, each headwall unit 106 is mounted at a fixed and known location within the healthcare facility, and each headwall unit 106 communicates a unique identifier (ID) to the nearby patient support apparatus 20 (via headwall transceiver 76). Patient support apparatus 20, in turn, forwards this headwall unit ID to server 114 in one or more of its messages 152.
Server 114 includes a table (not shown), or has access to a table, that contains the surveying data performed when headwall units 106 were installed within the healthcare facility, and which correlates the specific headwall unit IDs with specific locations within the healthcare facility. Server 114 may use this data to determine which room and/or bay a particular patient support apparatus 20 is currently located in after it receives a message 152 from that particular patient support apparatus 20.
In any of the embodiments disclosed herein, server 114 may be configured to additionally execute a caregiver assistance software application of the type described in the following commonly assigned patent applications: U.S. patent application Ser. No. 62/826,097, filed Mar. 29, 2019 by inventors Thomas Durlach et al. and entitled PATIENT CARE SYSTEM; U.S. patent application Ser. No. 16/832,760 filed Mar. 27, 2020, by inventors Thomas Durlach et al. and entitled PATIENT CARE SYSTEM; and/or PCT patent application serial number PCT/US2020/039587 filed Jun. 25, 2020, by inventors Thomas Durlach et al. and entitled CAREGIVER ASSISTANCE SYSTEM, the complete disclosures of which are all incorporated herein by reference.
Processed image 160 depicts a patient rendering 162 and a patient support apparatus rendering 164. The patient rendering 162 is generated by controller 78 and/or server 114 by analyzing the video images from one or more cameras 64 to identify the position of the patient's body within those video images. Once the patient's body is identified, controller 78 and/or server 114 modify the images of the patient's body within the video images in one or more manners. Such modifications include modifications to the color of the patient's body and/or, in some embodiments, modifications to the face and/or other identifying characteristics of the patient. For example, in the illustrated example, the patient's body is rendered in shades of a single color, such as gray, thereby helping to anonymize the patient's identity while still conveying the patient's position and movement.
It will also be understood that, in some embodiments, the patient's body is modified in one or more other manners, such as the size and/or shape of the patient's body. For example, in some embodiments, controller 78 and/or server 114 is configured to replace one or more of the patient's body parts within the captured images with generic renderings of those same body parts in order to better conceal the patient's identity, such as the patient's head, arms, legs, torso, feet, fingers, etc. Thus, as one example, the image of the patient's body captured by camera(s) 64 may include all of the image details captured by the camera(s) 64 with the exception of the patient's head, which may be replaced with a generic rendering 162 of a human head, thereby anonymizing the patient shown in processed image 160. As another example, the image of the patient's torso that is captured by camera(s) 64 may be replaced by a generic rendering 162 of a human torso, thereby providing another layer of anonymization to the patient's identity. Still other partial or whole renderings 162 of the patient's body may be performed.
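One straightforward recoloring step, sketched here with OpenCV and assuming a binary body mask has already been produced by the body-detection stage (the mask source and function name are hypothetical), is:

```python
# Hypothetical sketch: replace the patient's body pixels with shades of gray
# while leaving the rest of the frame untouched.
import cv2

def anonymize_patient(frame_bgr, body_mask):
    """body_mask: uint8 array, nonzero where the patient's body was detected."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray_bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    out = frame_bgr.copy()
    out[body_mask > 0] = gray_bgr[body_mask > 0]  # recolor only the body region
    return out
```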
It will be understood that the renderings 162 of the patient's body may take on a variety of different forms, and that different renderings may provide greater or lesser degrees of anonymization of the patient's identity.
In some embodiments, cameras 64 are adapted to automatically identify a three-dimensional estimate of the patient's body from an analysis of the images captured thereby (including the outputs of the depth sensors). In such embodiments, the generic rendering of the patient's body may be performed by adding a generic overlay on top of the detected patient skeleton. In some embodiments, this addition of a generic overlay onto the skeleton may be carried out in one or more conventional manners, such as by using the OpenPTrack software developed by the University of California, Los Angeles (UCLA) and its Center for Research in Engineering, Media, and Performance (REMAP). The OpenPTrack software creates a scalable, multi-camera solution for group person tracking, and version 2 (V2, Gnocchi) includes object tracking and pose recognition functionality. Various libraries may be utilized in the performance of one or more of these functions, such as the OpenPose library developed at Carnegie Mellon University. Other software may also, or additionally, be utilized for detecting the position of the patient's body and generating an anonymized rendering of the patient's body.
Vision system 130 is configured to display a sequence of (i.e. a video of) the processed images 160 on the displays 96 of one or more electronic devices 72. In some embodiments, controller 78 is configured to transmit video from one or more cameras 64 to server 114 only at selected times, such as in response to a request received from an electronic device 72 and/or in response to the occurrence of a predefined event.
Alternatively, controller 78 may be configured to transmit video from one or more cameras at all times, or substantially all times, to server 114. In such embodiments, server 114 may be configured to automatically forward all, or segments, of the processed images 160 to one or more electronic devices 72 at specific times (e.g. in response to a request and/or the occurrence of a predefined event).
In at least one embodiment, controller 78 and server 114 are configured to deliver to an electronic device 72 a processed video (e.g. comprised of processed images 160) of the patient automatically in response to exit detection system 82 issuing an exit alert (i.e. the patient exited or is in the process of exiting). Still further, in such embodiments, server 114 is configured to forward the exit alert (which is forwarded by controller 78 to server 114 in one or more data messages 154) to the same electronic device 72, which, in at least some embodiments, is programmed to make an audible sound, vibrate, and/or illuminate one or more lights in response to the receipt of the exit alert. In this manner, the caregiver associated with electronic device 72 will not only be alerted to the bed exit alert, but he or she will be able to view the patient's movement in substantially real time on display 96. The caregiver is therefore not only presented with notification of the exit alert, but also a visual depiction of the patient's movement. This can help the caregiver assess the urgency of his or her response to the exit alert. For example, if the exit alert has been accidentally triggered, or the patient has decided to return to patient support apparatus 20 after initially attempting to exit, the patient's movement will be displayed on display 96 and the caregiver should be able to see if the exit alert was accidentally triggered and/or if the patient has returned to patient support apparatus 20.
In addition to, or in lieu of, rendering all or a portion of the patient's body, vision system 130 may be configured to render all or a portion of patient support apparatus 20. That is, vision system 130 may include patient support apparatus rendering 164 within processed images 160 (
It will, of course, be understood that any or all of the image modification that is reflected in processed images 160 and discussed herein as being carried out by controller 78 and/or server 114 could alternatively be carried out, either partially or wholly, by the one or more processors that are integrated into one or more of the cameras 64.
Upper portion 172 displays various data regarding patient support apparatus 20 that is forwarded from patient support apparatus 20 in one or more data messages 154 to server 114, and server 114 then forwards that data to electronic device 72. This data includes an exit detection system indicator 176 that indicates what sensitivity, or zone, exit detection system 82 is currently armed with; a monitoring system indicator 178 that indicates whether an onboard monitoring system of patient support apparatus 20 is armed or disarmed; a plurality of siderail indicators 180 that indicate the up/down status of siderails 36; a brake status indicator 182; a low height (of litter frame 28) indicator 184; a nurse call system indicator 186 (that indicates whether patient support apparatus 20 is communicatively coupled to outlet 108 or not); a power source indicator 188 that indicates whether patient support apparatus 20 is currently receiving electrical power from electrical outlet 144 or not; and a weight indicator 190 that indicates whether a patient is currently present on patient support apparatus 20 or not (as determined by the weight detected by load cells 110).
Lower portion 174 (
Menu 194 may be provided on screen 170 in those embodiments of software application 100 that are adapted to perform additional functions beyond the display of data associated with camera(s) 64. In such embodiments, the user is free to access the other functions of software application 100 by selecting one of menu icons 196a, b, c or d. In some embodiments, software application 100 is configured to include any or all of the same functionality as the caregiver assistance software application 124 disclosed in commonly assigned PCT patent application WO 2020/264140 filed Jun. 25, 2020 by Stryker Corporation and entitled CAREGIVER ASSISTANCE SYSTEM, the complete disclosure of which is incorporated herein by reference. In other embodiments, software application 100 may be configured to perform still other functions in addition to the image displaying functions and remote control functions described herein.
Server 114, electronic device 72, and/or patient support apparatus 20 are configured, in at least one embodiment, to drive one or more of the actuators 84, 86 of patient support apparatus 20 in response to the activation of one or more controls 50i′-50p′ when one or more additional conditions are satisfied, and to not drive the actuator(s) 84, 86 when those one or more additional conditions are not satisfied. In one embodiment, software application 100 is configured to disable motion controls 50i′-50p′ whenever it is not also simultaneously displaying images of patient support apparatus 20 in display area 192. This disabling of controls 50i′-50p′ is implemented in order to prevent a remotely positioned person from remotely moving one or more components of patient support apparatus 20 without that person also being able to simultaneously see the current position of those one or more components, as well as the surrounding environment (e.g. the range of motion of the component(s)). This helps ensure that any remote movement of patient support apparatus 20 is carried out safely, without damaging patient support apparatus 20 or any objects positioned in the range of motion of patient support apparatus 20, and without injuring the patient or any other individuals that may be present on or near patient support apparatus 20.
In some embodiments, software application 100 is configured to simply not transmit any movement commands to server 114 when concurrent visual images of patient support apparatus 20 (from one or more cameras 64) are not simultaneously being displayed on display 96. In other words, controller 90 may be configured to disable controls 50i′-50p′ whenever device 72 is not simultaneously displaying concurrent images of patient support apparatus 20. In another embodiment, instead of software application 100 disabling controls 50i′-50p′, controller 90 may be configured to send movement commands to server 114 whenever a person presses on one or more of controls 50i′-50p′, and server 114 may be configured to not forward those movement commands to patient support apparatus 20 if server 114 is not also simultaneously transmitting a video (whether modified or not) from camera 64 of patient support apparatus 20 to electronic device 72. In other words, in some embodiments, server 114 disables the functionality of remote controls 50i′-50p′ by not forwarding corresponding movement commands to patient support apparatus 20. In still other embodiments, patient support apparatus 20 may be configured to disable remote controls 50i′-50p′ by ignoring any movement commands it receives from server 114 unless it is simultaneously transmitting video from camera(s) 64 to server 114 that captures the range of motion of the movable components. In some such embodiments, controller 78 may be configured to require an acknowledgement from server 114 and/or electronic device 72 that it is receiving, and/or displaying, the video from camera 64 at the same time the movement commands triggered by controls 50i′-50p′ are being transmitted to patient support apparatus 20 by server 114.
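As a minimal sketch of the server-side variant described above (in which server 114 declines to forward movement commands unless it is simultaneously streaming video to the requesting device), the following Python fragment illustrates the gating logic. The class name, staleness window, and callback are illustrative assumptions, not an actual implementation of server 114.

```python
import time

class VisionGatedRelay:
    """Sketch of the server-side variant: movement commands from remote
    controls 50i'-50p' are forwarded only while the server is also streaming
    live video of the patient support apparatus to the requesting device."""

    VIDEO_TIMEOUT_S = 2.0  # assumed staleness window for "simultaneous" video

    def __init__(self):
        self._last_video_sent = {}  # device_id -> timestamp of last frame sent

    def note_video_frame_sent(self, device_id):
        self._last_video_sent[device_id] = time.monotonic()

    def handle_movement_command(self, device_id, command, forward_to_apparatus):
        last = self._last_video_sent.get(device_id)
        if last is None or time.monotonic() - last > self.VIDEO_TIMEOUT_S:
            return False  # video not concurrently streaming: drop the command
        forward_to_apparatus(command)
        return True
```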
In the aforementioned embodiments, the remote controls 50i′-50p′ are disabled by one or more of electronic device 72, server 114, and/or patient support apparatus 20 when video is not being concurrently displayed at the remote control device (i.e. electronic device 72). Further, the remote controls 50i′-50p′ are enabled when video is being concurrently displayed at the remote control device. In these embodiments, it is up to the viewer of the remotely displayed video to analyze the video to ensure that the movement commands from controls 50i′-50p′ are sent to patient support apparatus 20 only when the corresponding movement is safe to carry out without risking injury to the patient or other people within the room, and without risking damage to patient support apparatus 20 or other objects within the room.
In at least one alternative embodiment, vision system 130 is configured such that the video from camera(s) 64 is automatically processed to determine if an obstacle is present in the movement path of the component that is to be moved by one of controls 50i′-50p′, and to automatically disable or enable controls 50i′-50p′ based on this automatic analysis. In such embodiments, the determination of whether it is safe to move a component of patient support apparatus 20 is carried out automatically by vision system 130. In such embodiments, it is not necessary to transmit video to electronic device 72 in order to enable one or more of controls 50i′-50p′. Instead, it is merely necessary for server 114 and/or controller 78 to determine that no obstacle is present within the movement path of the component that is being controlled remotely by one of controls 50i′-50p′, as well as, in some cases, that one or more additional criteria are met for safely moving the desired components. Such additional criteria may include the absence of a patient on patient support apparatus 20 or still other criteria.
In those embodiments where controller 78 and/or server 114 are configured to automatically analyze the video from one or more of cameras 64 to determine if any obstacles are present in the movement path of a movable component of patient support apparatus 20, controller 78 and/or server 114 utilize the attribute data stored in database 124 that defines the movement capabilities of patient support apparatus 20. This attribute data, as discussed previously, indicates which components are movable, the extent of their movement, the location of those components, and the locations of their movement paths.
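The following sketch illustrates, under stated assumptions, how such attribute data might be applied: the movement path of a component is approximated as an axis-aligned box taken from database 124, and the depth camera's point cloud is tested for intersection with that box. The box approximation and the function names are hypothetical simplifications, not the disclosed algorithm.

```python
import numpy as np

def path_is_clear(points_xyz, sweep_min, sweep_max, tolerance_m=0.02):
    """Return True if no sensed 3-D point lies inside the component's swept
    movement volume. `points_xyz` is an (N, 3) array from the depth camera,
    already expressed in the apparatus coordinate frame; `sweep_min` and
    `sweep_max` are the corners of an axis-aligned box derived from the
    attribute data describing the component's range of motion."""
    inside = np.all((points_xyz >= np.asarray(sweep_min) - tolerance_m) &
                    (points_xyz <= np.asarray(sweep_max) + tolerance_m), axis=1)
    return not bool(inside.any())

# Example: check the space swept by a lowering litter frame.
cloud = np.random.rand(1000, 3) * [2.0, 1.0, 1.2]  # stand-in point cloud (meters)
clear = path_is_clear(cloud, sweep_min=(0.2, 0.1, 0.3), sweep_max=(1.8, 0.9, 0.6))
```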
It will be understood that whenever any of controls 50i′-50p′ are disabled, whether by electronic device 72, server 114, or patient support apparatus 20, such disablement does not apply to the local controls 50i-50p that are part of, or displayed on, patient support apparatus 20 itself. In other words, it is only the remote controls 50i′-50p′ that are disabled, not the local controls. This allows a user positioned on or adjacent to patient support apparatus 20 to move one or more components of patient support apparatus 20 regardless of the image data being captured by camera(s) 64.
It will also be understood that in any of the embodiments discussed herein where software application 100 includes a remote control function (and therefore displays a screen like remote control screen 200 of
In some embodiments, vision system 130 is configured to lock out one or more controls 50 on patient support apparatus 20 such that the patient is not able to utilize those local controls. This local locking out of one or more controls 50 on patient support apparatus 20 is separate from and independent of the disabling of the remote controls 50i′-50p′ discussed above. In such embodiments, vision system 130 analyzes the video from camera(s) 64 to identify the position of the patient, and locks out the desired controls whenever the patient is identified as trying to activate those controls. Thus, for example, if one of control panels 54a and/or 54b includes a control for disarming exit detection system 82 and/or the onboard monitoring system, vision system 130 automatically disables those disarming controls whenever it detects that the patient is reaching for these controls. In some embodiments, vision system 130 may be configured to automatically disable an entire control panel (e.g. control panel 54b) from operating whenever the patient reaches to activate a control 50 positioned thereon. In either of these embodiments, vision system 130 does not disable any of the controls 50 on patient support apparatus 20 when a caregiver attempts to utilize them. Instead, one or more of the controls 50 are only disabled from patient use (and/or non-caregiver use).
In carrying out the disablement function of one or more local controls 50 for patient usage, controller 78 and/or server 114 utilizes attribute data from database 124 to analyze the images captured by camera 64. This attribute data includes data identifying the location and functionality of the control panels 54 on the patient support apparatus. It may also include attribute data regarding the caregivers that allows vision system 130 to identify caregivers from the images captured by camera 64.
In some embodiments, patient support apparatus 20 includes a settings screen that is displayable on display 52 and that allows an authorized user to select which local controls are to be disabled for usage by the patient, and which local controls are to be enabled for usage by the patient. In such embodiments, the selection of the enabled and disabled patient controls is utilized by vision system 130 to determine what controls are to be disabled when it detects that a patient is reaching toward one of those controls. Alternatively, or additionally, server 114 may be configured to display a screen that includes the same settings screen for allowing an authorized user to select which controls are to be disabled and enabled for patient usage. In either case, patient support apparatus 20 and server 114 communicate with each other to ensure that, when controller 78 and/or server 114 detects that the patient is reaching for a particular control—by analyzing the images from camera(s) 64—controller 78 and/or server 114 know whether or not to disable one or more controls adjacent to the patient's reaching hand.
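A minimal sketch of the resulting lockout decision is shown below. The control identifiers and the person-classification label are illustrative assumptions, and the reach detection itself is assumed to be supplied by the image analysis described above.

```python
def should_lock_out(control_id, reaching_person, patient_disabled_controls):
    """Sketch of the lockout decision: a local control 50 is disabled only
    when (a) an authorized user has flagged it as patient-disabled on the
    settings screen and (b) the vision system classifies the reaching
    person as the patient rather than a caregiver."""
    return (control_id in patient_disabled_controls
            and reaching_person == "patient")

disabled = {"exit_disarm", "monitoring_disarm"}  # chosen on settings screen
assert should_lock_out("exit_disarm", "patient", disabled) is True
assert should_lock_out("exit_disarm", "caregiver", disabled) is False
```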
In some embodiments, either or both of controllers 78 and server 114 of vision system 130 are adapted to stitch together the images from two or more of the cameras 64a-i to form a combined image. In some embodiments, the combined image is generated in a manner that presents the viewer with a different field of view than any of the individual cameras 64, even when those fields of view are added together. For example, in the embodiment shown in
It will be understood that substantial modifications can be made to the number of cameras 64, the location of the cameras 64, and/or the orientation of the cameras shown in
In any of the embodiments discussed herein, the stitching of multiple images together from different cameras 64 (whether positioned at a corner of patient support apparatus 20, or elsewhere) may utilize one or more conventional image merging and/or melding techniques such that inconsistent colors, textures, and/or other properties of the disparate images are gradually blended along the border between the two images. In other words, the image melding techniques may be used to synthesize a transition region between the two images that melds together the images in a gradual manner having fewer visual artifacts. As another alternative, multiple images may be “gelled” together in a manner wherein the two images are not merged together along a pair of straight edges of each image, but instead are merged together with edges that fade into each other. In other words, instead of a straight dividing line between the two merged images, a faded or amorphous division is created between the two images in the combined image. Other types of merging and/or stitching techniques may also and/or alternatively be used.
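By way of illustration, the following sketch shows one conventional way to produce the gradual, faded transition described above: a linear alpha ramp across the overlapping columns of two equally sized images. This is only one of many possible melding techniques and is not asserted to be the specific technique used by vision system 130.

```python
import numpy as np

def feather_blend(left_img, right_img, overlap_px):
    """Blend two same-sized images side by side with a linear alpha ramp
    across `overlap_px` columns, so the seam fades gradually rather than
    forming a straight dividing line."""
    h, w = left_img.shape[:2]
    out_w = 2 * w - overlap_px
    out = np.zeros((h, out_w, 3), dtype=np.float32)
    out[:, :w] = left_img
    alpha = np.linspace(0.0, 1.0, overlap_px)[None, :, None]  # 0 -> 1 ramp
    seam_l = w - overlap_px
    out[:, seam_l:w] = (1 - alpha) * left_img[:, seam_l:] + \
                       alpha * right_img[:, :overlap_px]
    out[:, w:] = right_img[:, overlap_px:]
    return out.astype(np.uint8)
```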
Additional processing that may be performed by vision system 130, in at least some embodiments, includes analyzing the images from one or more cameras 64 to determine if the patient's breathing rate is above a defined threshold or below a defined threshold, analyzing the images from one or more cameras 64 to detect the presence of a ligature that presents a choking hazard for the patient, and/or analyzing the images from one or more of the cameras to identify within the images the patient's gown and/or the sheets on patient support apparatus 20.
Turning to the function of monitoring the patient's breathing, controller 78 and/or server 114 are configured in some embodiments to identify one or more boundaries of the patient's torso from the images captured by one or more of cameras 64, and to monitor the expansion and contraction of those one or more boundaries as the patient breathes (e.g. the rising and falling of the patient's chest). Alternatively, or additionally, the images from one or more cameras 64 may be analyzed by vision system 130 to monitor the expansion and contraction of the patient's nostrils. Still other techniques may be used to analyze the images to determine the patient's breathing rate.
After determining the patient's breathing rate from the images from camera(s) 64, controller 78 and/or server 114 are configured, in at least one embodiment, to compare this breathing rate to an upper limit and a lower limit. If the breathing rate exceeds the upper limit or is less than the lower limit, server 114 is configured to send an alert to the electronic device 72 associated with the caregiver assigned to care for the particular patient in the patient support apparatus 20. The upper and lower limits, in at least some embodiments, are configurable by one or more administrators 168 (
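A minimal sketch of one such approach is given below: the mean depth of the chest region is sampled once per frame, inhalation peaks are counted to estimate a rate, and the rate is compared against configurable limits. The limit values are examples only, and the peak-counting approach is an assumption rather than the system's actual algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def breathing_rate_bpm(chest_depth_mm, fps):
    """Estimate breaths per minute from the mean depth of the chest region
    sampled once per frame: each inhalation peak in the de-meaned signal
    is counted as one breath."""
    signal = np.asarray(chest_depth_mm, dtype=float)
    signal = signal - signal.mean()
    # Require peaks at least 1 s apart (breathing rarely exceeds 60 bpm).
    peaks, _ = find_peaks(signal, distance=int(fps * 1.0))
    duration_min = len(signal) / fps / 60.0
    return len(peaks) / duration_min if duration_min > 0 else 0.0

def breathing_alert(rate_bpm, lower=8.0, upper=25.0):
    """Compare the estimated rate to configurable lower/upper limits
    (example values only) and report whether an alert should be sent."""
    return rate_bpm < lower or rate_bpm > upper

t = np.arange(0, 60, 1 / 15)                     # 60 s of 15 fps samples
chest = 5.0 * np.sin(2 * np.pi * (16 / 60) * t)  # synthetic 16 bpm chest motion
assert not breathing_alert(breathing_rate_bpm(chest, fps=15))
```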
In some embodiments, vision system 130 may also or alternatively be configured to measure the amount of contraction/expansion of the patient's chest while they are breathing. Thus, in addition to the rate of breathing, vision system 130 may also determine a numeric indicator of the shallowness or depth of the patient's breath. This information may be utilized, in some embodiments, to determine if the patient is experiencing an asthma attack or not. Still further, in some embodiments, the upper and lower limits mentioned above for issuing a breathing rate alert may be based on the patient's initial baseline breathing rate. That is, instead of having a fixed upper limit and/or a fixed lower limit, the upper and lower limits may be percentages, or absolute values, above and/or below the patient's initial baseline breathing rate. The baseline breathing rate is determined when the patient initially enters patient support apparatus 20, and/or at other times.
In some embodiments, vision system 130 not only monitors the patient's breathing rate with respect to an upper and/or lower limit, but also monitors the rate of change of the patient's breathing rate. In such embodiments, an alert may be issued if the patient's breathing rate abruptly changes at a rate higher than a predetermined rate. Alternatively, or additionally, the rate of change of the patient's breathing rate may be monitored in combination with the patient's absolute breathing rate, and the breathing rate and rate of change of the breathing rate may be used individually or in combination to determine whether to issue an alert.
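A short sketch of the rate-of-change check might look as follows; the per-minute threshold is an illustrative value only.

```python
def rate_of_change_alert(rates_bpm, window_s, max_delta_bpm_per_min=10.0):
    """Flag an abrupt change: compare the first and last breathing-rate
    estimates in the window and normalize to a per-minute change."""
    delta = abs(rates_bpm[-1] - rates_bpm[0])
    return (delta / (window_s / 60.0)) > max_delta_bpm_per_min
```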
In those embodiments of vision system 130 that are adapted to monitor the patient's breathing rate, vision system 130 may be configured to examine visual images from camera(s) 64 to look for movement in the chest area and/or the belly area of the patient. In such embodiments, vision system 130 may also monitor changes in the patient from chest breathing to belly breathing, or vice versa. In some instances, the switching from one form of breathing (chest or belly) to another, coupled with changes in the breathing rate (e.g. above/below a limit and/or above a rate of change) causes vision system 130 to send an alert to one or more electronic devices 72 indicating that the patient may be experiencing a change that warrants the attention of a caregiver.
Additionally, or alternatively, in those embodiments of vision system 130 that are configured to monitor the patient's breathing, controller 78 and/or server 114 of vision system 130 may be configured to monitor the patient's breathing by detecting one or more edges of the patient's chest and/or belly. Movement of those edges while the other edges of the patient's body (e.g. the patient's legs, neck, hips, arms, etc.) remain stationary is generally interpreted by controller 78 and/or server 114 as breathing movement, while movement of the edges of the patient's chest and/or belly that occurs simultaneously with movement of other portions of the patient's body is generally interpreted as a gross movement of the patient that is separate from their breathing movement.
When vision system 130 is configured to analyze the images from camera(s) 64 to search for the presence of a ligature within those images, controller 78 and/or server 114 may be configured to search for objects within those images that have the shape of a ligature. This includes analyzing the shape of the sheets on the patient support apparatus and detecting when one or more of the sheets are rolled up into a ropey condition that could be looped around the patient's neck. When vision system 130 detects the presence of a ligature, it issues a warning to one or more caregivers by sending an alert from server 114 to one or more of the electronic devices 72.
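One conventional heuristic for this kind of shape analysis, offered here only as an illustrative sketch, is to segment candidate objects and flag contours whose minimum-area bounding rectangles are long and thin; the elongation and length thresholds below are assumptions, not disclosed values.

```python
import cv2

def find_ropelike_contours(mask, min_elongation=6.0, min_length_px=150):
    """Flag contours in a binary (uint8) segmentation mask whose minimum-area
    bounding rectangle is long and thin, a rough proxy for a sheet rolled
    into a rope-like (potential ligature) shape."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hits = []
    for c in contours:
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        long_side, short_side = max(w, h), max(min(w, h), 1e-3)
        if long_side >= min_length_px and long_side / short_side >= min_elongation:
            hits.append(c)  # candidate ligature-shaped object
    return hits
```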
In those embodiments of vision system 130 that are configured to detect a gown worn by the patient and/or to detect the sheets on patient support apparatus 20 (such as for ligature detection), vision system 130 utilizes one or more of the attributes of the gowns and/or sheets of that particular healthcare facility that are stored in database 124. That is, controller 78 and/or server 114 utilize the attributes of the gowns and/or sheets stored in database 124 when analyzing the images captured by camera(s) 64 in order to identify the gown and/or sheets that appear in the images. Identification of the gown is used by vision system 130 in some embodiments to identify the boundaries of the patient's body. Similarly, identification of the sheets is used by vision system 130 to distinguish between the patient's gown and the sheets, thereby further facilitating the identification of the patient's body. Identification of the patient's gown can be used in those embodiments of vision system 130 that monitor the patient's breathing to facilitate the identification of the patient's torso and/or chest.
Vision system 130, in some embodiments, is configured to also monitor locations around the perimeter of patient support apparatus 20 and/or underneath patient support apparatus 20 in order to automatically detect if a patient may be attempting to engage in acts of self-harm. Such acts of self-harm may include, in addition to using a ligature to hang oneself, attempts by the patient to crush one or more of their body parts by lowering components of patient support apparatus 20 (e.g. litter frame 28, siderails 36, deck sections 44, 46, 48, etc.) onto portions of their body. In order to detect these and other acts of self-harm, patient support apparatus 20 may include one or more cameras 64 positioned on a top side of base frame 22 that face upward toward siderails 36 and the underside of litter frame 28, and/or one or more downward facing cameras 64 positioned on the underside of litter frame 28, the underside of siderails 36, the underside of headboard 32 and/or the underside of footboard 34. Such cameras are positioned and oriented so that any body parts, or other objects, that are positioned in the movement range of litter frame 28 and/or siderails 36 (particularly the downward movement range of these components) will be within the field of view of one or more cameras 64. In this manner, the cameras 64 will be able to capture images of these body parts and/or objects so that vision system 130 can identify these body parts and/or objects and cause controller 78 to stop or prevent downward movement of these components when a body part or other object is present in the downward motion path of these components.
In some embodiments, controller 78 and/or server 114 are configured to prevent downward movement of components of patient support apparatus 20 when any object—whether a patient body part or an inanimate object—is detected within the movement pathway of a component of patient support apparatus 20. If vision system 130 determines the object is not a human body part, it may simply disable downward movement of the corresponding component(s) of patient support apparatus 20 and take no further action. However, if vision system 130 determines that the object is a human body part, controller 78 and/or server 114 may be further configured to take one or more additional actions, such as automatically sending a message to an appropriate caregiver's electronic device 72 informing the caregiver of the detected body part in the movement path. This alerts the caregiver to take appropriate steps to respond to the situation. In some embodiments, controller 78 may also be configured to send a signal to the nurse call system server 126 when a patient's body part is detected in a movement path (and/or when a ligature is detected).
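The graded response described above can be summarized in a short sketch; the classification label and callback names are hypothetical.

```python
def respond_to_path_object(obj_class, disable_motion, notify_caregiver,
                           signal_nurse_call=None):
    """Sketch of the graded response: any object in the movement path blocks
    downward motion; a human body part additionally notifies the caregiver's
    device 72 (and optionally the nurse call system)."""
    disable_motion()
    if obj_class == "body_part":
        notify_caregiver("Body part detected in movement path")
        if signal_nurse_call is not None:
            signal_nurse_call()
```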
In some embodiments, vision system 130 is adapted to automatically capture, and/or automatically mark, clips of videos that are relevant to certain activities performed using patient support apparatus 20, or performed on the patient, and/or that are performed within the vicinity of patient support apparatus 20. For example, in some embodiments, vision system 130 may automatically capture a video clip of patient support apparatus 20 encountering an obstruction during the movement of any of its components, a video clip of a patient attempting to use patient support apparatus 20 in a manner that causes self-harm, and/or a video clip of any events around the perimeter of, and/or within the main body of, patient support apparatus 20 that are of interest. In such embodiments, vision system 130 may automatically forward the video clip to one or more electronic devices 72 so that remote caregivers can see the video on display 96. In some embodiments, as soon as an event of interest is detected by vision system 130, it may automatically begin streaming live video from one or more cameras 64 that are capturing the event to one or more electronic devices 72.
In some embodiments, vision system 130 is used to confirm and/or supplement sensors of other systems onboard patient support apparatus 20 and/or within the vicinity of patient support apparatus 20. For example, in some embodiments, vision system 130 is configured to confirm when any components of patient support apparatus 20 are moved in a manner that impacts an obstacle. That is, patient support apparatus 20 may include one or more force sensors that are positioned such that they detect forces resulting from a collision with an object when one or more components of the patient support apparatus 20 are moving. In such instances, controller 78 is configured to not issue an obstruction alert unless vision system 130 visually confirms that one or more components of patient support apparatus 20 actually ran into an obstruction. In other words, in some embodiments, vision system 130 is adapted to help avoid false obstruction detection alerts that might otherwise be issued by controller 78 if it relied solely on its force sensors for detecting a collision with an obstruction. In such embodiments, controller 78 only issues an alarm if vision system 130 visually recognizes contact with an obstruction at the same time that one or more force sensors onboard patient support apparatus 20 detect contact with an obstruction. False obstruction alarms can therefore be reduced.
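A minimal sketch of this confirmation logic follows; the coincidence window is an assumed value, since the disclosure does not specify how closely in time the force and vision detections must align.

```python
class ObstructionAlarm:
    """Sketch of the confirmation logic: an obstruction alarm is raised only
    when a force-sensor event and a visual collision detection occur within
    a short window of one another."""

    WINDOW_S = 0.5  # assumed coincidence window (seconds)

    def __init__(self):
        self.force_t = None
        self.vision_t = None

    def on_force_event(self, t):
        self.force_t = t
        return self._confirmed()

    def on_vision_event(self, t):
        self.vision_t = t
        return self._confirmed()

    def _confirmed(self):
        if self.force_t is None or self.vision_t is None:
            return False
        return abs(self.force_t - self.vision_t) <= self.WINDOW_S
```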
In some embodiments, vision system 130 is adapted to work with, and confirm the outputs of, the perimeter load detection system described in commonly assigned U.S. patent application Ser. No. 63/335,863 filed on Apr. 28, 2022, by Lavanya Vytla et al. and entitled PATIENT SUPPORT APPARATUS FOR TREATING PATIENTS PRESENTING BEHAVIORAL HEALTH INDICIA, the complete disclosure of which is incorporated herein by reference. When combined with a system, such as that disclosed in the aforementioned '863 application, vision system 130 may automatically confirm whether a behavioral health event and/or a load applied to a perimeter of the patient support apparatus 20, is an actual event (as opposed to a false alarm) that warrants sending an alert to one or more caregivers, such as via one or more electronic devices 72. Vision system 130 may also, or alternatively, capture and/or mark clips of videos that encompass moments before, during, and after a perimeter load detection system of patient support apparatus 20 detects a load applied anywhere on the perimeters of patient support apparatus 20.
In some embodiments, vision system 130 is adapted to automatically analyze the images captured by cameras 64 to detect when a caregiver is positioned next to patient support apparatus 20 and/or within the same room as patient support apparatus 20. In some such embodiments, the caregivers may wear an ultra-wideband (UWB) badge that is adapted to communicate with a plurality of ultra-wideband transceivers that are positioned onboard patient support apparatus 20. The ultra-wideband transceivers onboard the patient support apparatus 20 are adapted to automatically determine the location of the caregiver's badge, read an ID from the badge, and use the ID to confirm that the badge is one that belongs to a caregiver. In some embodiments, vision system 130 is adapted to work in conjunction with such a UWB system to confirm that the ultra-wideband badges detected by the UWB transceivers onboard patient support apparatus 20 are indeed worn by a caregiver. Alternatively, or additionally, the UWB badges and transceivers may be used by vision system 130 to confirm whether facial recognition, and/or other caregiver detection techniques, have accurately determined that a caregiver is positioned next to a patient support apparatus 20. That is, if visual processing of the images from cameras 64 leads to vision system 130 concluding that a caregiver is positioned adjacent patient support apparatus 20, vision system 130 may receive, request, or otherwise use data from the UWB system to confirm the presence of a caregiver adjacent to the patient support apparatus 20. In some embodiments, the UWB system used to detect caregiver-worn badges may include any of the structures, functions, and/or features of the UWB badge system disclosed in commonly assigned U.S. patent application Ser. No. 63/356,061 filed Jun. 28, 2022, by inventors Krishna Bhimavarapu et al. and entitled BADGE AND PATIENT SUPPORT APPARATUS COMMUNICATION SYSTEM, the complete disclosure of which is incorporated herein by reference.
In some embodiments, vision system 130 is configured to visually monitor the position of one or more tagged items and to issue an alert if those tagged items are moved to another location. That is, in some embodiments, a healthcare facility may apply a visual tag to any item that it does not want removed from a particular area of the healthcare facility. The visual tags have visual attributes (e.g. size, shape, color, etc.) that are entered into database 124 and used by vision system 130 to visually recognize these tags when they are positioned within the field of view of any of the cameras 64 of vision system 130. When vision system 130 recognizes one of these visual tags within the images captured by one or more cameras 64, it determines the location of the tag and monitors that location to see if it changes. If it changes by more than a threshold, such as being moved out of the room in which it is currently being used and/or out of the field of view of one or more cameras 64, vision system 130 is configured to issue an alert to one or more caregivers indicating that a tagged object has been moved.
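As an illustrative sketch only, the tag could be located by color segmentation using the visual attributes stored in database 124, with an alert raised when the tag's centroid leaves view or drifts beyond a threshold; the HSV ranges and pixel threshold are assumptions.

```python
import cv2
import numpy as np

def locate_tag(frame_bgr, hsv_lo, hsv_hi):
    """Locate a color-coded visual tag (attributes drawn from database 124)
    and return its pixel centroid, or None if the tag is not in view."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    m = cv2.moments(mask)
    if m["m00"] < 1.0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def tag_moved(home_xy, current_xy, threshold_px=50.0):
    """Alert when the tag has left view or drifted beyond a threshold."""
    if current_xy is None:
        return True
    return float(np.hypot(current_xy[0] - home_xy[0],
                          current_xy[1] - home_xy[1])) > threshold_px
```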
It will be understood that, although the foregoing discussion about detecting body parts of the patient has referenced the “patient's” body parts, vision system 130 does not need to be configured to recognize or distinguish the patient from other individuals. Instead, vision system 130 is configured to prevent downward movement of components of patient support apparatus 20 (and/or send messages to caregivers) when any human body parts are detected in the movement path of components, or when any ligatures are detected that might be used by any human. In other words, vision system 130 need not be configured to visually distinguish the patient assigned to patient support apparatus 20 from any other humans, but instead is configured to help prevent any individual from using patient support apparatus 20 to administer self-harm.
Vision system 130 may also be configured to automatically recognize mattress straps or brackets that are used on patient support apparatus 20 to secure mattress 42 thereto. Such straps or brackets may be a source of a ligature, and vision system 130 includes visual attributes of these straps and/or brackets so that it can more easily recognize them in the captured images, and process those images to see if the straps and/or brackets are being used to form a ligature or other tool of self-harm.
In some embodiments, vision system 130 uses one or more image subtraction techniques to determine the position and outline of the patient's body. That is, in some embodiments, the depth sensors that are included within camera(s) 64 are used to take one or more baseline snapshots of the patient support apparatus 20 when the patient is not present on the patient support apparatus 20. After the patient is present, depth sensor snapshots are taken, and the difference between the depth sensor snapshots taken when the patient is present and when the patient is not present is used to identify the patient's body within the images (including the distance between the camera(s) 64 and each of the portions of the patient's body).
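A minimal sketch of this depth-based image subtraction, assuming depth frames expressed in millimeters and an assumed delta threshold, follows.

```python
import numpy as np

def segment_patient(depth_now_mm, depth_baseline_mm, min_delta_mm=30):
    """Image-subtraction sketch: pixels whose depth decreased by more than
    `min_delta_mm` relative to the empty-bed baseline are attributed to the
    patient's body (something is now closer to the camera there)."""
    delta = depth_baseline_mm.astype(np.int32) - depth_now_mm.astype(np.int32)
    return delta > min_delta_mm  # boolean patient mask
```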
In some embodiments, the baseline image of the patient is automatically captured by one or more camera(s) 64 when the patient support apparatus 20 has its onboard scale system zeroed. This scale zeroing process is performed when the patient support apparatus 20 is empty of the patient, and therefore provides an opportunity for vision system 130 to capture baseline images of patient support apparatus 20 with no patient present. In such embodiments, controller 78 may be configured to automatically save a snapshot (or multiple snapshots) captured from one or more camera(s) 64 in response to the user activating the scale zeroing control (not shown) that is present on one or more of the control panels 54.
In some embodiments, controller 78 is configured to capture a baseline image of the patient on the patient support apparatus 20 when he or she is initially positioned thereon. This baseline image is then used by system 130 as a reference for determining subsequent patient movement. That is, subsequent images are taken periodically of the patient, and those subsequent images are compared to the baseline image of the patient when he or she was initially positioned on the patient support apparatus. The difference between the subsequent image and the baseline image provides an indication of how far the patient has moved. In some embodiments, this amount of movement is measured by vision system 130 and an alert, such as, but not limited to, an exit alert, is issued when the patient's movement exceeds a threshold amount (with respect to the baseline image).
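One simple way to quantify such movement against the baseline, offered as a sketch under the assumption that the patient has already been segmented into a binary mask, is to measure the symmetric difference between the current and baseline masks; the threshold is illustrative.

```python
import numpy as np

def movement_fraction(mask_now, mask_baseline):
    """Fraction of baseline body pixels that have changed: the symmetric
    difference of the two masks normalized by the baseline area."""
    changed = np.logical_xor(mask_now, mask_baseline).sum()
    area = max(int(mask_baseline.sum()), 1)
    return changed / area

def exit_alert_needed(mask_now, mask_baseline, threshold=0.4):
    """Example threshold only; the actual value would track the armed
    sensitivity of exit detection system 82."""
    return movement_fraction(mask_now, mask_baseline) > threshold
```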
In some embodiments, exit detection system 82 of patient support apparatus 20 is adapted to allow a user to enter a fall risk score (via one or more of control panels 54), wherein the fall risk score corresponds to an assessment of the patient's potential for falling. The fall risk score may be derived from a conventional fall risk analysis (e.g. a Morse fall risk score), or it may be derived from some other analysis. Once entered, controller 78 and/or exit detection system 82 may be configured to translate the fall risk score into a pre-selected sensitivity level for exit detection system 82 such that, when the caregiver arms exit detection system 82, it is automatically armed with a sensitivity level that has been selected by controller 78 and/or exit detection system 82 based on the patient's fall risk score.
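An illustrative mapping from a fall risk score to a sensitivity level might look like the following; the cut-offs loosely follow the conventional Morse scale bands but are examples only, not the actual translation used by controller 78.

```python
def sensitivity_from_fall_risk(morse_score):
    """Illustrative mapping (thresholds are examples) from a Morse fall
    risk score to an exit-detection sensitivity level."""
    if morse_score >= 45:
        return "high"    # earliest possible warning of an exit attempt
    if morse_score >= 25:
        return "medium"
    return "low"
```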
In some embodiments, exit detection system 82 may be configured to work in conjunction with vision system 130 and/or vision system 130 may be utilized to detect patient exits either in conjunction with, or separately from, exit detection system 82. In some embodiments, vision system 130 may be adapted to monitor the movement of one or more parts of the patient's body and issue an alert when those monitored parts move (or move beyond a threshold). In such embodiments, vision system 130 may include a selection screen allowing the caregiver to select which parts of the patient's body are to be monitored for movement (e.g. left arm, right arm, left leg, right leg, head, etc.) and, after the caregiver selects the body parts to be monitored, vision system 130 thereafter analyzes the position of the selected body parts in the images gathered from camera(s) 64 and issues an alert if one or more of the selected body parts moves past a threshold. The alert, as with all alerts from vision system 130, may be forwarded to one or more electronic devices 72, and/or it may be issued locally on patient support apparatus 20, and/or it may be forwarded to one or more other servers or other devices in communication with network 102.
Controller 78 and/or server 114 are configured to use the time-stamped sensor readings and camera images to generate one or more synchronized data files 112, and to then make those data files available for viewing on any of the displays that are part of, or in communication with, vision system 130. As was mentioned, the data file 112 shows the readings from one or more sensors over a period of time, along with the images captured from the camera(s) 64 over that same time period. That is, the sensor readings and images are displayed in a synchronized fashion so that, at any given moment, the image shown in right portion 214 corresponds to an image that was taken at the same time that the sensor readings shown in left portion 212 were taken. Because the data file 112 is a video file, it will be understood that the example screen 210 shown in
In some embodiments, controller 78 and/or server 114 are configured to stream the synchronized data file 112 to one or more electronic devices 72 in real time, or near real time (within one or several seconds) so that remotely positioned personnel can view the sensor readings and video images in real time, or nearly real time. The synchronized data file may also be stored in memory 80, a memory of server 114, and/or memory 92 of one or more electronic devices 72 for viewing at other times.
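A minimal sketch of the synchronization step, assuming time-stamped sensor samples and frame timestamps in seconds, pairs each sensor reading with the nearest captured frame:

```python
import bisect

def nearest_frame(frame_times, t):
    """Index of the captured frame whose timestamp is closest to t."""
    i = bisect.bisect_left(frame_times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_times)]
    return min(candidates, key=lambda j: abs(frame_times[j] - t))

def build_synchronized_file(sensor_samples, frame_times):
    """Pair each time-stamped sensor reading with the frame captured at
    (nearly) the same moment, as in synchronized data file 112."""
    return [(t, value, nearest_frame(frame_times, t))
            for (t, value) in sensor_samples]

frames = [0.0, 0.033, 0.066, 0.100]
samples = [(0.01, 71.2), (0.07, 71.4)]           # (timestamp, load-cell kg)
sync = build_synchronized_file(samples, frames)  # -> frame indices 0 and 2
```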
It will be understood that, although
It will also be understood that the video 216 that is incorporated into the synchronized data file 112 may be a video that is unedited or it may be a modified video. When video 216 is a modified video, it may be modified in any of the manners discussed herein (e.g. it may be comprised of multiple videos stitched together, it may include one or more computer renderings, and/or it may be modified in still other manners).
In some embodiments, vision system 130 is configured to generate a synchronized data file 112 that also identifies the patient. The patient's identity, in some of these embodiments, may be displayed at any suitable location on synchronization screen 210. In some such embodiments, the patient's first or last initials may be utilized in lieu of the patient's full name, thereby preserving some anonymity of the patient. The patient's name may be determined via server 114 communicating with one or more of EMR server 120, ADT server 118, and/or another server on network 102.
In addition to, or in lieu of, identifying the patient's name, vision system 130 may generate synchronized data file 112 in a manner that identifies the device from which the sensor readings were taken and/or the sensors themselves. This identity may be displayed on screen 210 adjacent to the sensor readings from that particular device. In some embodiments, the identity may comprise a serial number, a model number, a device type, and/or other identifying information. Additionally, or alternatively, the device identification may include characteristics of the device, such as its room location. Thus, as an example, vision system 130 may specify the model of patient support apparatus 20, its location, and/or other identifying information next to the load cell readings shown in left portion 212 of synchronization screen 210 (
Vision system 130 (e.g. controller 78 and/or server 114) is configured to monitor changes in the shape of the edges 232 while the patient is positioned on patient support apparatus 20. The shape changes are monitored for the frequency at which the changes occur (which is indicative of the frequency of the patient's eye movement), the amount of change in the shape (e.g. how many millimeters, or fractions thereof, the edges 232 move), the direction in which the shape changes (up/down, left/right, diagonal, etc.), and/or other characteristics.
Vision system 130 may also be configured to monitor changes in the depth within an interior region 234 of the eye images. Such depth changes are detected by the one or more depth sensors that are incorporated into camera(s) 64, and such changes are also indicative of the patient's eye movement. This is because the front of the patient's eyeball is not perfectly spherical, and as a result, the distance (i.e. depth) between the depth sensor and different points within region 234 will change as the patient moves his or her eyes. Vision system 130 looks for these changes in depth to detect eye movement, in at least some embodiments.
In those embodiments of vision system 130 that monitor the patient's eye movement, controller 78 and/or server 114 may also monitor the colors within the images captured by camera(s) 64 to detect the patient's eye movement. That is, when the patient's eyes are open, vision system 130 may be configured to identify the patient's iris and/or pupil within region 234 by their color differences from the generally white areas of the patient's eyes. After identifying the iris and/or pupil within region 234, vision system 130 is configured to track the movement of one or both of these.
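A sketch of this color/intensity based tracking, assuming a cropped grayscale eye image and an illustrative darkness threshold, follows; a production system would use a more robust detector.

```python
import cv2
import numpy as np

def pupil_center(eye_gray):
    """Locate the pupil in a cropped grayscale eye image as the centroid of
    the darkest region: the pupil/iris contrast sharply with the sclera."""
    _, dark = cv2.threshold(eye_gray, 50, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(dark)
    if m["m00"] < 1.0:
        return None  # eye likely closed or pupil not visible
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def eye_movement_px(prev_center, cur_center):
    """Displacement between successive pupil fixes; accumulated over time
    this yields the frequency and amplitude statistics described above."""
    if prev_center is None or cur_center is None:
        return 0.0
    return float(np.hypot(cur_center[0] - prev_center[0],
                          cur_center[1] - prev_center[1]))
```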
It will be understood that the monitoring of the patient's eye movement by tracking the movement of the patient's pupils, irises, and/or other features of the patient's eye is an activity that requires at least one of the patient's eyes to be open. However, the monitoring of the edges 232 for changes in shape and/or size, as well as the monitoring of depth changes within region 234, can both be carried out when the patient's eyes are open or closed. Thus, in at least some embodiments, vision system 130 is configured to monitor the patient's eye movements both when the patient's eyes are open and when they are closed.
In some embodiments of vision system 130, the particular aspects of the patient's eyes that are monitored, as well as the particular eye events that lead to one or more notifications to a caregiver's electronic device 72, are configurable by a user of system 130. That is, in some embodiments, vision system 130 is configured to display on one or more of its associated displays (e.g. display 52, display 96, and/or a display of computer 166) a menu in which a user is able to select what eye conditions are to be monitored and/or what eye conditions warrant notification to one or more electronic devices 72. For example, in some embodiments, such as when a patient is in a coma, coming out of anesthesia, and/or in other situations, a caregiver is able to configure vision system 130 so that it notifies one or more devices 72 when the patient's eyes change from a state of generally little movement (e.g. a sleep state) to a more active state (e.g. an awake state). System 130 may also be configurable to provide notifications to electronic devices 72 when major changes are detected in the patient's eye movements. In general, vision system 130 may be configurable to provide notifications whenever any one or more of the following conditions is detected: REM sleep, patient agitation, slow and/or infrequent eye movement, changes in overall eye movement patterns, changes in frequency of eye movement, changes between sleep and awake states, whenever the patient's eyes open or close, etc.
In some embodiments of vision system 130 that are adapted to monitor the patient's eyes, one or more cameras 64 may be mounted high on a wall or on the ceiling of the room in which patient support apparatus 20 is positioned. Alternatively, or additionally, one or more cameras 64 may be mounted on one or more booms and/or arms that attach to patient support apparatus 20 and that position the camera(s) 64 at a location with an unobstructed view of the patient's eyes, and where the camera is closer to the patient's eyes than what might be possible for any of the cameras 64 that may be mounted directly to footboard 34 and/or siderails 36. The boom and/or arm may be movable so that it can be moved out of the way of the patient when he/she enters/exits patient support apparatus 20, as well as out of the way of a caregiver while that caregiver interacts with the patient.
Although cameras 64 are primarily described herein as being adapted to capture visible light images, it is to be understood that, in at least some embodiments of system 130, one or more of cameras 64 may be modified to include infrared image sensing devices, either in lieu of, or in addition to, their visible light image sensors. When equipped with one or more of such infrared image sensing devices, system 130 is able to capture images of the patient and/or patient support apparatus 20 even when the room is dark. The capturing of such infrared images utilizes existing ambient infrared light within the room, in some embodiments, and in other embodiments, utilizes one or more sources of infrared light that are provided as part of system 130. In addition to capturing images in dark or low-light conditions, utilizing one or more infrared cameras 64 also allows system 130 to detect thermal images. Server 114, controller 78, and/or electronic devices 72 may include software that is adapted to utilize such thermal images for carrying out any one or more of the functions described herein.
In some embodiments, vision system 130 is configured to retain the videos (whether processed or unprocessed) generated by camera(s) 64 and store them in memory for future access. In such embodiments, vision system 130 may be configured to allow different levels of access to these videos depending upon the user. For example, in some embodiments, certain viewers are only able to see the processed videos that have the generic renderings of all or a portion of the patient, thereby preserving the patient's anonymity. Certain other viewers, however, will be granted greater access and be able to see the images and/or videos that do not have the patient's identity obfuscated (i.e. anonymized). In some of these embodiments, the particular videos that are available for displaying to a user will be dependent upon the event(s) captured by the video. For example, in some embodiments, video of the patient in which only the patient's face (or none of the patient) is obfuscated is made available to all authorized caregivers whenever the patient exits patient support apparatus 20. That is, all caregivers are able to see video of the patient's actual body when he/she exits from patient support apparatus 20. However, during non-exit time periods, those caregivers are only able to see video of the patient that has been processed to obfuscate the patient's face and/or body (e.g. video that includes generic renderings of the patient's head and/or other body parts). Other events besides bed exit, in some embodiments, may cause vision system 130 to display to authorized caregivers video that does not obfuscate the patient's identity, or that obfuscates the patient's identity to a lesser extent than what vision system 130 does when those other events are not transpiring. In any of the embodiments disclosed herein, the access of particular caregivers to particular types of videos captured by cameras 64 (e.g. those with different levels of obfuscation of the patient's identity) may be customized by authorized personnel of the healthcare facility utilizing patient support apparatus server 114.
It will be understood that vision system 130 may include any of the components, functions, software modules, and/or other features of the monitoring system disclosed in commonly assigned U.S. Pat. No. 10,121,070 issued Nov. 6, 2018, to Richard Derenne et al. and entitled VIDEO MONITORING SYSTEM, the complete disclosure of which is incorporated herein by reference. Further, vision system 130 may use any of the techniques, databases, tools, and/or other structures disclosed in the aforementioned U.S. Pat. No. 10,121,070 patent to carry out any one or more of the functions described herein.
Various additional alterations and changes beyond those already mentioned herein can be made to the above-described embodiments. This disclosure is presented for illustrative purposes and should not be interpreted as an exhaustive description of all embodiments or to limit the scope of the claims to the specific elements illustrated or described in connection with these embodiments. For example, and without limitation, any individual element(s) of the described embodiments may be replaced by alternative elements that provide substantially similar functionality or otherwise provide adequate operation. This includes, for example, presently known alternative elements, such as those that might be currently known to one skilled in the art, and alternative elements that may be developed in the future, such as those that one skilled in the art might, upon development, recognize as an alternative. Any reference to claim elements in the singular, for example, using the articles “a,” “an,” “the” or “said,” is not to be construed as limiting the element to the singular.
This application claims priority to U.S. provisional patent application Ser. No. 63/216,298 filed Jun. 29, 2021, by inventors Krishna Bhimavarapu et al. and entitled PATIENT VIDEO MONITORING SYSTEM, and to U.S. provisional patent application Ser. No. 63/218,053 filed Jul. 2, 2021, by inventors Krishna Bhimavarapu et al. and entitled PATIENT VIDEO MONITORING SYSTEM, the complete disclosures of both of which are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/035359 | 6/28/2022 | WO |

Number | Date | Country
---|---|---
63/216,298 | Jun 2021 | US
63/218,053 | Jul 2021 | US