Machine learning based monitoring system

Information

  • Patent Grant
  • Patent Number
    12,236,767
  • Date Filed
    Wednesday, January 11, 2023
  • Date Issued
    Tuesday, February 25, 2025
Abstract
Systems and methods are provided for machine learning based monitoring. First image data is received from a camera. A person detection model is invoked on a hardware accelerator based on the first image data. The person detection model outputs a first classification result. A person is detected based on the first classification result. Second image data is received from the camera. In response to detecting the person, a fall detection model is invoked on the hardware accelerator based on the second image data. The fall detection model outputs a second classification result. A potential fall is detected based on the second classification result. An alert is provided in response to detecting the potential fall.
Description
BACKGROUND

A smart camera system can be a machine vision system which, in addition to image capture capabilities, is capable of extracting information from captured images. Some smart camera systems are capable of generating event descriptions and/or making decisions that are used in an automated system. A camera system can be a self-contained, standalone vision system with a built-in image sensor. The vision system and the image sensor can be integrated into a single hardware device. Some camera systems can include communication interfaces, such as, but not limited to, Ethernet and/or wireless interfaces.


Safety can be important in clinical, hospice, assisted living, and/or home settings. Potentially dangerous events can happen in these environments. Automation can also be beneficial in these environments.


SUMMARY

The systems, methods, and devices described herein each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure, several non-limiting features will now be discussed briefly.


According to an aspect, a system is disclosed comprising: a storage device configured to store first instructions and second instructions; a camera; a hardware accelerator configured to execute the first instructions; and a hardware processor configured to execute the second instructions to: receive, from the camera, first image data; invoke, on the hardware accelerator, a person detection model based on the first image data, wherein the person detection model outputs a first classification result; detect a person based on the first classification result; receive, from the camera, second image data; and in response to detecting the person, invoke, on the hardware accelerator, a fall detection model based on the second image data, wherein the fall detection model outputs a second classification result, detect a potential fall based on the second classification result, and in response to detecting the potential fall, provide an alert.


According to an aspect, the system may further comprise a microphone, wherein the hardware processor may be configured to execute further instructions to: receive, from the microphone, audio data; and in response to detecting the person, invoke, on the hardware accelerator, a loud noise detection model based on the audio data, wherein the loud noise detection model outputs a third classification result, and detect a potential scream based on the third classification result.


According to an aspect, the hardware processor may be configured to execute additional instructions to: in response to detecting the potential scream, provide a second alert.


According to an aspect, the hardware processor may be configured to execute additional instructions to: in response to detecting the potential fall and the potential scream, provide an escalated alert.


According to an aspect, invoking the loud noise detection model based on the audio data may further comprise: generating spectrogram data from the audio data; and providing the spectrogram data as input to the loud noise detection model.
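

By way of a non-limiting, hypothetical illustration of this aspect, the following Python sketch generates spectrogram data from an audio buffer and provides it as input to a stand-in loud noise detection model; the library choice, sampling rate, window size, and score threshold are assumptions rather than part of any aspect.

# Hypothetical sketch: audio data -> spectrogram data -> loud noise detection model.
# Parameter choices (sample rate, window size, threshold) are illustrative assumptions.
import numpy as np
from scipy import signal

def audio_to_spectrogram(audio, sample_rate=16000):
    # Compute a magnitude spectrogram and log-scale it so loud events stand out.
    freqs, times, sxx = signal.spectrogram(audio, fs=sample_rate, nperseg=512)
    return np.log1p(sxx)

def detect_potential_scream(audio, loud_noise_model, threshold=0.5):
    # loud_noise_model is a stand-in for the trained classifier described above;
    # it is assumed to return a probability-like score for the loud noise class.
    spectrogram = audio_to_spectrogram(audio)
    score = loud_noise_model(spectrogram[np.newaxis, :, :])
    return score > threshold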


According to an aspect, the second image data may comprise a plurality of images.


According to an aspect, a method is disclosed comprising: receiving, from a camera, first image data; invoking, on a hardware accelerator, a person detection model based on the first image data, wherein the person detection model outputs a first classification result; detecting a person based on the first classification result; receiving, from the camera, second image data; and in response to detecting the person, invoking, on the hardware accelerator, a plurality of person safety models based on the second image data, for each person safety model from the plurality of person safety models, receiving, from the hardware accelerator, a second classification result, detecting a potential safety issue based on a particular second classification result, and in response to detecting the potential safety issue, providing an alert.


According to an aspect, the method may further comprise: in response to detecting the person, invoking, on the hardware accelerator, a facial feature extraction model based on the second image data, wherein the facial feature extraction model outputs a facial feature vector, executing a query of a facial features database based on the facial feature vector, wherein executing the query indicates that the facial feature vector is not present in the facial features database, and in response to determining that the facial feature vector is not present in the facial features database, providing an unrecognized person alert.


According to an aspect, the plurality of person safety models may comprise a fall detection model, the method may further comprise: collecting a first set of videos of person falls; collecting a second set of videos of persons without falls; creating a training data set comprising the first set of videos and the second set of videos; and training the fall detection model using the training data set.
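

By way of a non-limiting, hypothetical illustration of this training aspect, the following sketch labels clips from the two sets of videos, combines them into a training data set, and trains a binary fall detection classifier; it assumes the videos have already been decoded into fixed-size clip arrays and that a Keras-style model object is supplied.

# Hypothetical sketch of assembling a training data set from fall / no-fall videos
# and fitting a binary fall detection classifier. The clip arrays and the supplied
# model object are assumptions, not the claimed implementation.
import numpy as np

def build_training_set(fall_clips, no_fall_clips):
    # Label clips containing falls as 1 and clips without falls as 0.
    x = np.concatenate([fall_clips, no_fall_clips], axis=0)
    y = np.concatenate([np.ones(len(fall_clips)), np.zeros(len(no_fall_clips))])
    return x, y

def train_fall_detection_model(model, fall_clips, no_fall_clips, epochs=10):
    # Train any Keras-style binary classifier on the combined training data set.
    x, y = build_training_set(fall_clips, no_fall_clips)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=epochs, validation_split=0.2, shuffle=True)
    return model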


According to an aspect, the plurality of person safety models may comprise a handwashing detection model, the method may further comprise: collecting a first set of videos with handwashing; collecting a second set of videos without handwashing; creating a training data set comprising the first set of videos and the second set of videos; and training the handwashing detection model using the training data set.


According to an aspect, the method may further comprise: receiving, from a microphone, audio data; and in response to detecting the person, invoking, on the hardware accelerator, a loud noise detection model based on the audio data, wherein the loud noise detection model outputs a third classification result, and detecting a potential scream based on the third classification result.


According to an aspect, the method may further comprise: in response to detecting the potential safety issue and the potential scream, providing an escalated alert.


According to an aspect, the method may further comprise: collecting a first set of videos with screaming; collecting a second set of videos without screaming; creating a training data set comprising the first set of videos and the second set of videos; and training the loud noise detection model using the training data set.


According to an aspect, a system is disclosed comprising: a storage device configured to store first instructions and second instructions; a camera; a hardware accelerator configured to execute the first instructions; and a hardware processor configured to execute the second instructions to: receive, from the camera, first image data; invoke, on the hardware accelerator, a person detection model based on the first image data, wherein the person detection model outputs a first classification result; detect a person based on the first classification result; receive, from the camera, second image data; and in response to detecting the person, invoke, on the hardware accelerator, a plurality of person safety models based on the second image data, for each person safety model from the plurality of person safety models, receive, from the hardware accelerator, a model result, detect a potential safety issue based on a particular model result, and in response to detecting the potential safety issue, provide an alert.


According to an aspect, the plurality of person safety models may comprise a fall detection model, and wherein invoking the plurality of person safety models may comprise: invoking, on the hardware accelerator, the fall detection model based on the second image data, wherein the fall detection model outputs the particular model result.


According to an aspect, the plurality of person safety models may comprise a handwashing detection model, and wherein invoking the plurality of person safety models may comprise: invoking, on the hardware accelerator, the handwashing detection model based on the second image data, wherein the handwashing detection model outputs the particular model result.


According to an aspect, the system may further comprise a microphone, wherein the hardware processor may be configured to execute further instructions to: receive, from the microphone, audio data; and in response to detecting the person, invoke, on the hardware accelerator, a loud noise detection model based on the audio data, wherein the loud noise detection model outputs a third classification result, detect a potential loud noise based on the third classification result, and in response to detecting the potential loud noise, provide a second alert.


According to an aspect, the system may further comprise a display, wherein the hardware processor may be configured to execute further instructions to: cause presentation, on the display, of a prompt to cause a person to perform an activity; receive, from the camera, third image data of a recording of the activity; invoke, on the hardware accelerator, a screening machine learning model based on the third image data, wherein the screening machine learning model outputs a third classification result, detect a potential screening issue based on the third classification result, and in response to detecting the potential screening issue, provide a second alert.


According to an aspect, the screening machine learning model may be a pupillometry screening model, and wherein the potential screening issue indicates potential dilated pupils.


According to an aspect, the screening machine learning model may be a facial paralysis screening model, and wherein the potential screening issue indicates potential facial paralysis.


According to an aspect, a system is disclosed comprising: a storage device configured to store first instructions and second instructions; a wearable device configured to process sensor signals to determine a first physiological value for a person; a microphone; a camera; a hardware accelerator configured to execute the first instructions; and a hardware processor configured to execute the second instructions to: receive, from the wearable device, the first physiological value; determine to begin a monitoring process based on the first physiological value; and in response to determining to begin the monitoring process, receive, from the camera, image data; receive, from the microphone, audio data; invoke, on the hardware accelerator, a first unconscious detection model based on the image data, wherein the first unconscious detection model outputs a first classification result, invoke, on the hardware accelerator, a second unconscious detection model based on the audio data, wherein the second unconscious detection model outputs a second classification result, detect a potential state of unconsciousness based on the first classification result and the second classification result, and in response to detecting the potential state of unconsciousness, provide an alert.


According to an aspect, the wearable device may comprise a pulse oximetry sensor and the first physiological value is for blood oxygen saturation, and wherein determining to begin the monitoring process based on the first physiological value further comprises: determining that the first physiological value is below a threshold level.


According to an aspect, the wearable device may comprise a respiration rate sensor and the first physiological value is for respiration rate, and wherein determining to begin the monitoring process based on the first physiological value further comprises: determining that the first physiological value satisfies a threshold alarm level.


According to an aspect, the wearable device may comprise a heart rate sensor and the first physiological value is for heart rate, and wherein determining to begin the monitoring process based on the first physiological value further comprises: receiving, from the wearable device, a plurality of physiological values measuring heart rate over time; and determining that the plurality of physiological values and the first physiological value satisfy a threshold alarm level.
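

By way of a non-limiting, hypothetical illustration of the threshold checks in the preceding aspects, the following sketch decides whether to begin the monitoring process from wearable readings; the numeric limits are illustrative placeholders, not clinically validated alarm levels.

# Hypothetical sketch of deciding whether to begin the monitoring process from
# wearable readings. The numeric limits below are illustrative placeholders.
def should_begin_monitoring(spo2=None, heart_rates=None,
                            spo2_floor=90.0, hr_low=40.0, hr_high=130.0):
    # Return True if any received physiological value crosses an alarm level.
    if spo2 is not None and spo2 < spo2_floor:
        return True  # blood oxygen saturation below a threshold level
    if heart_rates:
        latest = heart_rates[-1]
        # Consider the heart rate values over time together with the latest value.
        if latest < hr_low or latest > hr_high:
            return True
        if max(heart_rates) - min(heart_rates) > 50.0:
            return True  # abrupt swing in heart rate over time
    return False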


According to an aspect, a system is disclosed comprising: a storage device configured to store instructions; a display; a camera; and a hardware processor configured to execute the instructions to: receive a current time; determine to begin a check-up process from the current time; and in response to determining to begin the check-up process, cause presentation, on the display, of a prompt to cause a person to perform a check-up activity, receive, from the camera, image data of a recording of the check-up activity, invoke a screening machine learning model based on the image data, wherein the screening machine learning model outputs a classification result, detect a potential screening issue based on the classification result, and in response to detecting the potential screening issue, provide an alert.


According to an aspect, the screening machine learning model may be a pupillometry screening model, and wherein the potential screening issue indicates potential dilated pupils.


According to an aspect, the screening machine learning model may be a facial paralysis screening model, and wherein the potential screening issue indicates potential facial paralysis.


According to an aspect, the system may further comprise a wearable device configured to process sensor signals to determine a physiological value for the person, wherein the hardware processor may be configured to execute further instructions to: receive, from the wearable device, the physiological value; and generate the alert comprising the physiological value.


According to an aspect, the wearable device may comprise a pulse oximetry sensor and the physiological value is for blood oxygen saturation.


According to an aspect, the wearable device may be further configured to process the sensor signals to measure at least one of blood oxygen saturation, pulse rate, perfusion index, respiration rate, heart rate, or pleth variability index.


According to an aspect, the hardware processor may be configured to execute further instructions to: receive, from a second computing device, first video data; cause presentation, on the display, of the first video data; receive, from the camera, second video data; and transmit, to the second computing device, the second video data.


According to an aspect, a method is disclosed comprising: receiving a current time; determining to begin a check-up process from the current time; and in response to determining to begin the check-up process, causing presentation, on a display, of a prompt to cause a person to perform a check-up activity, receiving, from a camera, image data of a recording of the check-up activity, invoking a screening machine learning model based on the image data, wherein the screening machine learning model outputs a model result, detecting a potential screening issue based on the model result, and in response to detecting the potential screening issue, providing an alert.


According to an aspect, the screening machine learning model may be a pupillometry screening model, and wherein the potential screening issue indicates potential dilated pupils, the method may further comprise: collecting a first set of images of dilated pupils; collecting a second set of images without dilated pupils; creating a training data set comprising the first set of images and the second set of images; and training the pupillometry screening model using the training data set.


According to an aspect, the screening machine learning model may be a facial paralysis screening model, and wherein the potential screening issue indicates potential facial paralysis, the method may further comprise: collecting a first set of images of facial paralysis; collecting a second set of images without facial paralysis; creating a training data set comprising the first set of images and the second set of images; and training the facial paralysis screening model using the training data set.


According to an aspect, the check-up activity may comprise a dementia test, and wherein the screening machine learning model may comprise a gesture detection model.


According to an aspect, the gesture detection model may be configured to detect a gesture directed towards a portion of the display.


According to an aspect, the method may further comprise: receiving, from the camera, second image data; invoking a person detection model based on the second image data, wherein the person detection model outputs a first classification result; detecting a person based on the first classification result; receiving, from the camera, third image data; and in response to detecting the person, invoking a handwashing detection model based on the third image data, wherein the handwashing detection model outputs a second classification result, detecting a potential lack of handwashing based on the second classification result, and in response to detecting the potential lack of handwashing, providing a second alert.


According to an aspect, a system is disclosed comprising: a storage device configured to store instructions; a camera; and a hardware processor configured to execute the instructions to: receive, from the camera, first image data; invoke an infant detection model based on the first image data, wherein the infant detection model outputs a classification result; detect an infant based on the classification result; receive captured data; and in response to detecting the infant, invoke an infant safety model based on the captured data, wherein the infant safety model outputs a model result, detect a potential safety issue based on the model result, and in response to detecting the potential safety issue, provide an alert.


According to an aspect, the infant safety model may be an infant position model, and wherein the potential safety issue indicates the infant potentially lying on their stomach.


According to an aspect, the hardware processor may be configured to execute further instructions to: receive, from the camera, second image data; and in response to detecting the infant, invoke a facial feature extraction model based on the second image data, wherein the facial feature extraction model outputs a facial feature vector, execute a query of a facial features database based on the facial feature vector, wherein executing the query indicates that the facial feature vector is not present in the facial features database, and in response to determining that the facial feature vector is not present in the facial features database, provide an unrecognized person alert.


According to an aspect, the infant safety model may be an infant color detection model, and wherein the potential safety issue indicates potential asphyxiation.


According to an aspect, the model result may comprise coordinates of a boundary region identifying an infant object in the captured data, and wherein detecting the potential safety issue may comprise: determining that the coordinates of the boundary region exceed a threshold distance from an infant zone.


According to an aspect, the system may further comprise a wearable device configured to process sensor signals to determine a physiological value for the infant, wherein the hardware processor may be configured to execute further instructions to: receive, from the wearable device, the physiological value; and generate the alert comprising the physiological value.


According to an aspect, the system may further comprise a microphone, wherein the captured data is received from the microphone, wherein the infant safety model is a loud noise detection model, and wherein the potential safety issue indicates a potential scream.


In various aspects, systems and/or computer systems are disclosed that comprise a computer readable storage medium having program instructions embodied therewith, and one or more processors configured to execute the program instructions to cause the one or more processors to perform operations comprising one or more of the above- and/or below-described aspects (including one or more aspects of the appended claims).


In various aspects, computer-implemented methods are disclosed in which, by one or more processors executing program instructions, one or more of the above- and/or below-described aspects (including one or more aspects of the appended claims) are implemented and/or performed.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other features, aspects, and advantages are described below with reference to the drawings, which are intended for illustrative purposes and should in no way be interpreted as limiting. Furthermore, the various features described herein can be combined to form new combinations, which are part of this disclosure. In the drawings, like reference characters can denote corresponding features. The following is a brief description of each of the drawings.



FIG. 1A is a drawing of a camera system in a clinical setting.



FIG. 1B is a schematic diagram illustrating a monitoring system.



FIG. 2 is a schematic drawing of a monitoring system in a clinical setting.



FIG. 3 is another schematic drawing of a monitoring system in a clinical setting.



FIG. 4 is a drawing of patient sensor devices that can be used in a monitoring system.



FIG. 5 illustrates a camera image with object tracking.



FIG. 6 is a drawing of a monitoring system in a home setting.



FIG. 7 is a drawing of a monitoring system configured for baby monitoring.



FIG. 8 is a flowchart of a method for efficiently applying machine learning models.



FIG. 9 is a flowchart of another method for efficiently applying machine learning models.



FIG. 10 is a flowchart of a method for efficiently applying machine learning models for infant care.



FIG. 11 illustrates a block diagram of a computing device that may implement one or more aspects of the present disclosure.





DETAILED DESCRIPTION

As described above, some camera systems are capable of extracting information from captured images. However, extracting information from images and/or monitoring by existing camera systems can be limited. Technical improvements in monitoring people and/or objects, and in automating actions based on that monitoring, can advantageously improve safety and possibly save lives.


Generally described, aspects of the present disclosure are directed to improved monitoring systems. In some aspects, a camera system can include a camera and a hardware accelerator. The camera system can include multiple machine learning models. Each model of the machine learning models can be configured to detect an object and/or an activity. The hardware accelerator can be special hardware that is configured to accelerate machine learning applications. The camera system can be configured to execute the machine learning models on the hardware accelerator. The camera system can advantageously be configured to execute conditional logic to determine which machine learning models should be applied and when. For example, until a person is detected in an area, the camera system may not apply any machine learning models related to persons, such as, but not limited to, fall detection, person identification, stroke detection, medication tracking, activity tracking, etc.
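

By way of a non-limiting, hypothetical illustration of this conditional logic, the following Python sketch invokes the person detection model on every frame and only invokes the person-related models (such as fall detection) after a person has been detected; the model objects, their output attributes, and the alert callback are assumptions rather than a specific implementation.

# Hypothetical sketch of the conditional logic described above: heavier
# person-related models are only invoked after the person detection model
# reports a person. The model objects and output attributes are assumptions.
def monitor_frame(frame, person_detector, person_models, alert_fn):
    # Run person detection first; invoke person safety models only if needed.
    result = person_detector(frame)          # runs on the hardware accelerator
    if not result.person_present:
        return                               # skip fall detection, identification, etc.
    for name, model in person_models.items():
        classification = model(frame)        # e.g. fall, handwashing, stroke models
        if classification.issue_detected:
            alert_fn(name, classification)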


Some existing monitoring systems can have limited artificial intelligence capabilities. For example, some existing monitoring systems may only have basic person, object, or vehicle detection. Moreover, some existing monitoring systems may require a network connection from local cameras to backend servers that perform the artificial intelligence processing. Some existing cameras may have limited or no artificial intelligence capabilities. Performing artificial intelligence processing locally on cameras can be technically challenging. For example, the hardware processors and/or memory devices in existing cameras may be so limited that they are unable to execute machine learning models locally. Moreover, existing cameras may lack software capable of executing machine learning models locally in an efficient manner. The systems and methods described herein may efficiently process camera data either locally and/or in a distributed manner with machine learning models. Accordingly, the systems and methods described herein may improve over existing artificial intelligence monitoring technology.


As used herein, “camera” and “camera system” can be used interchangeably. Moreover, as used herein, “camera” and “camera system” can be used interchangeably with “monitoring system” since a camera system can encompass a monitoring system in some aspects.



FIG. 1A depicts a camera system 114 in a clinical setting 101. The clinical setting 101 can be, but is not limited to, a hospital, nursing home, or hospice. The clinical setting 101 can include the camera system 114, a display 104, and a user computing device 108. In some aspects, the camera system 114 can be housed in a soundbar enclosure or a tabletop speaker enclosure (not illustrated). The camera system 114 can include multiple cameras (such as a 1080p or 4K camera and/or an infrared image camera), an output speaker, an input microphone (such as a microphone array), an infrared blaster, and/or multiple hardware processors (including one or more hardware accelerators). In some aspects, the camera system 114 can have optical zoom. In some aspects, the camera system 114 can include a privacy switch that allows the cameras of the monitoring system 100A, 100B to be closed. The camera system 114 may receive voice commands. The camera system 114 can include one or more hardware components for Bluetooth®, Bluetooth Low Energy (BLE), Ethernet, Wi-Fi, cellular (such as 4G/5G/LTE), near-field communication (NFC), radio-frequency identification (RFID), High-Definition Multimedia Interface (HDMI), and/or HDMI Consumer Electronics Control (CEC). The camera system 114 can be connected to the display 104 (such as a television) and the camera system 114 can control the display 104. In some aspects, the camera system 114 can be wirelessly connected to the user computing device 108 (such as a tablet). In particular, the camera system 114 can be wirelessly connected to a hub device and the hub device can be wirelessly connected to the user computing device 108.


The camera system 114 may include machine learning capabilities. The camera system 114 can include machine learning models. The machine learning models can include, but are not limited to, convolutional neural network (CNN) models and other models. A CNN model can be trained to extract features from images for object identification (such as person identification). In some aspects, a CNN can feed the extracted features to a recurrent neural network (RNN) for further processing. The camera system 114 may track movements of individuals inside the room without using any facial recognition or identification tag tracking. Identification tags can include, but are not limited to, badges and/or RFID tags. This feature allows the camera system 114 to track an individual's movements even when the identification of the individual is unknown. A person in the room may not be identifiable for various reasons. For example, the person may be wearing a mask so that facial recognition modules may not be able to extract any features. As another example, the person may be a visitor who is not issued an identification tag, unlike the clinicians, who typically wear identification tags. Alternatively, when the person is not wearing a mask and/or is wearing an identification tag, the camera system 114 may combine the motion tracking with the identification of the individual to further improve accuracy in tracking the activity of the individual in the room. Having the identity of at least one person in the room may also improve accuracy in tracking the activity of other individuals in the room whose identity is unknown by reducing the number of anonymous individuals in the room. Additional details regarding machine learning capabilities and models that the camera system 114 can use are provided herein.


The camera system 114 can be included in a monitoring system, as described herein. The monitoring system can include remote interaction capabilities. A patient in the clinical setting 101 can be in isolation due to an illness, such as COVID-19. The patient can ask for assistance via a button (such as by selecting an element in the graphical user interface on the user computing device 108) and/or by issuing a voice command. In some aspects, the camera system 114 can be configured to respond to voice commands, such as, but not limited to, activating or deactivating cameras or other functions. In response to the request, a remote clinician 106 can interact with the patient via the display 104 and the camera system 114, which can include an input microphone and an output speaker. The monitoring system can also allow the patient to remotely maintain contact with friends and family via the display 104 and camera system 114. In some aspects, the camera system 114 can be connected to internet of things (IOT) devices. In some aspects, closing of the privacy switch can cause the camera system 114 and/or a monitoring system to disable monitoring. In other aspects, the monitoring system can still issue alerts if the privacy switch has been closed. In some aspects, the camera system 114 can record activity via cameras based on a trigger, such as, but not limited to, detection of motion via a motion sensor.



FIG. 1B is a diagram depicting a monitoring system 100A, 100B. In some aspects, there can be a home/assisted living side to the monitoring system 100A and a clinical side to the monitoring system 100B. As described herein, the clinical side monitoring system 100B can track and monitor a patient via a first camera system 114 in a clinical setting. As described herein, the patient can be monitored via wearable sensor devices. A clinician 110 can interact with the patient via the first display 104 and the first camera system 114. Friends and family can also use a user computing device 102 to interact with the patient via the first display 104 and the first camera system 114.


The home/assisted living side monitoring system 100A can track and monitor a person (which can be an infant) via a second camera system 134 in a home/assisted living setting. For example, a person can be recovering at home or live in an assisted living home. As described herein, the person can be monitored via wearable sensor devices. A clinician 110 can interact with the person via the second display 124 and the second camera system 134. As shown, the clinical side to the monitoring system 100B can securely communicate with the home/assisted living side to the monitoring system 100A, which can allow communications between the clinician 110 and persons in the home or assisted living home. Friends and family can use the user computing device 102 to interact with the patient via the second display 124 and the second camera system 134.


In some aspects, the monitoring system 100A, 100B can include server(s) 130A, 130B. The server(s) 130A, 130B can facilitate communication between the clinician 110 and a person via the second display 124 and the second camera system 134. The server(s) 130A, 130B can facilitate communication between the user computing device 102 and the patient via the first display 104 and the first camera system 114. As described herein, the server(s) 130A, 130B can communicate with the camera system(s) 114, 134. In some aspects, the server(s) 130A, 130B can transmit machine learning model(s) to the camera system(s) 114, 134. In some aspects, the server(s) 130A, 130B can train machine learning models based on training data sets.


In some aspects, the monitoring system 100A, 100B can present modified images (which can be in a video format) to clinician(s) or other monitoring users. For example, instead of showing actual persons, the monitoring system 100A, 100B can present images where a person has been replaced with a virtual representation (such as a stick figure) and/or a redacted area such as a rectangle.



FIG. 2 is a diagram depicting a monitoring system 200 in another clinical setting with an accompanying legend. The monitoring system 200 can include, but is not limited to, cameras 272A, 272B, 280A, 280B, 286, 290, 294, displays 292A, 292B, 292C, and a server 276. Some of the cameras 272A, 272B, 280A, 280B, 286, 290, 294 can be the same as or similar to the camera system 114 of FIG. 1A. The cameras 272A, 272B, 280A, 280B, 286, 290, 294 can send data and/or images to the server 276. The server 276 can be located in the hospital room, or elsewhere in the hospital, or at a remote location outside the hospital (not illustrated). As shown, in a clinical setting, such as a hospital, hospitalized patients can be lying on hospital beds, such as the hospital bed 274. The bed cameras 272A, 272B can be near a head side of the bed 274 facing toward a foot side of the bed 274. The clinical setting may have a handwashing area 278. The handwashing cameras 280A, 280B can face the handwashing area 278. The handwashing cameras 280A, 280B can have a combined field of view 282C so as to maximize the ability to detect a person's face and/or identification tag when the person is standing next to the handwashing area 278 facing the sink. Via the bed camera(s) 272A, 272B, the monitoring system 200 can detect whether the clinician (or a visitor) is within a patient zone 275, which can be located within a field of view 282A, 282B of the bed camera(s) 272A, 272B. Patient zones can be customized. For example, the patient zone 275 can be defined as a proximity threshold around the hospital bed 274 and/or a patient. In some aspects, the clinician 281 is within the patient zone 275 if the clinician is at least partially within a proximity threshold distance to the hospital bed and/or the patient.
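

By way of a non-limiting, hypothetical illustration of the patient zone check described above, the following sketch models the patient zone 275 as the bed's boundary region expanded by a proximity threshold and treats a person as within the zone if the person's boundary region at least partially overlaps it; the coordinate convention and threshold value are assumptions.

# Hypothetical sketch of the patient zone check: the zone is modeled as the
# bed's bounding box expanded by a proximity threshold, and a person counts as
# "within the zone" if their bounding box overlaps it at least partially.
def expand_box(box, margin):
    x1, y1, x2, y2 = box
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

def boxes_overlap(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def is_in_patient_zone(person_box, bed_box, proximity_threshold=50):
    patient_zone = expand_box(bed_box, proximity_threshold)
    return boxes_overlap(person_box, patient_zone)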


The bed cameras 272A, 272B can be located above a head side of the bed 274, where the patient's head would be when the patient lies on the bed 274. The bed cameras 272A, 272B can be separated by a distance, which can be wider than a width of the bed 274, and can both be pointing toward the bed 274. The fields of view 282A, 282B of the bed cameras 272A, 272B can overlap at least partially over the bed 274. The combined field of view 282A, 282B can cover an area surrounding the bed 274 so that a person standing by any of the four sides of the bed 274 can be in the combined field of view 282A, 282B. The bed cameras 272A, 272B can each be installed at a predetermined height and pointing downward at a predetermined angle. The bed cameras 272A, 272B can be configured so as to maximize the ability to detect the face of a person standing next to or near the bed 274, independent of the orientation of the person's face, and/or the ability to detect an identification tag that is worn on the person's body, for example, hanging by the neck, the belt, etc. Optionally, the bed cameras 272A, 272B need not be able to identify the patient lying on the bed 274, as the identity of the patient is typically known in clinical and other settings.


In some aspects, the cameras 272A, 272B, 280A, 280B, 286, 290, 294 can be configured, including but not limited to being installed at a height and/or angle, to allow the monitoring system 200 to detect a person's face and/or identification tag, if any. For example, at least some of the cameras 272A, 272B, 280A, 280B, 286, 290, 294 can be installed at a ceiling of the room or at a predetermined height above the floor of the room. The cameras 272A, 272B, 280A, 280B, 286, 290, 294 can be configured to detect an identification tag. Additionally or alternatively, the cameras 272A, 272B, 280A, 280B, 286, 290, 294 can detect faces, which can include extracting facial recognition features of the detected face, and/or to detect a face and the identification tag substantially simultaneously.


In some aspects, the monitoring system 200 can monitor one or more aspects about the patient, the clinician 281, and/or zones. The monitoring system 200 can determine whether the patient is in the bed 274. The monitoring system 200 can detect whether the patient is within a bed zone, which can be within the patient zone 275. The monitoring system 200 can determine an angle of the patient in the bed 274. In some aspects, the monitoring system 200 can include a wearable, wireless sensor device (not illustrated) that can track a patient's posture, orientation, and activity. In some aspects, a wearable, wireless sensor device can include, but is not limited to, a Centroid® device by Masimo Corporation, Irvine, CA. The monitoring system 200 can determine how often the patient has turned in the bed 274 and/or gotten up from the bed 274. The monitoring system 200 can detect turning and/or getting up based on the bed zone and/or facial recognition of the patient. The monitoring system 200 can detect whether the clinician 281 is within the patient zone 275 or another zone. As described herein, the monitoring system 200 can detect whether the clinician 281 is present or not present via one or more methods, such as, but not limited to, facial recognition, identification via an image of an identification tag, and/or RFID based tracking. Similarly, the monitoring system 200 can detect intruders that are unauthorized in one or more zones via one or more methods, such as, but not limited to, facial recognition, identification via an image of an identification tag, and/or RFID based tracking. In some aspects, the monitoring system 200 can issue an alert based on one or more of the following factors: facial detection of an unrecognized face; no positive visual identification of authorized persons via identification tags; and/or no positive identification of authorized persons via RFID tags. In some aspects, the monitoring system 200 can detect falls via one or more methods, such as, but not limited to, machine-vision based fall detection and/or fall detection via wearable device, such as using accelerometer data. Any of the alerts described herein can be presented on the displays 292A, 292B, 292C.


In some aspects, if the monitoring system 200 detects that the clinician 281 is within the patient zone 275 and/or has touched the patient, then the system 200 can assign a “contaminated” status to the clinician 281. The monitoring system 200 can detect a touch action by detecting the actual act of touching by the clinician 281 and/or by detecting the clinician 281 being in close proximity, for example, within less than 1 foot, 6 inches, or otherwise, of the patient. If the clinician 281 moves outside the patient zone 275, then the monitoring system 200 can assign a “contaminated-prime” status to the clinician 281. If the clinician 281 with the “contaminated-prime” status re-enters the same patient zone 275 or enters a new patient zone, the monitoring system 200 can output an alarm or warning. If the monitoring system 200 detects a handwashing activity by the clinician 281 with a “contaminated-prime” status, then the monitoring system 200 can assign a “not contaminated” status to the clinician 281.
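

By way of a non-limiting, hypothetical illustration, the contamination status transitions described above can be expressed as a small state machine; the event names below are assumptions used only for the sketch.

# Hypothetical sketch of the contamination status transitions described above.
# Status names mirror the description; the event strings are assumptions.
def next_status(status, event):
    # Advance a clinician's contamination status for a single observed event.
    if event in ("entered_patient_zone", "touched_patient"):
        if status == "contaminated-prime":
            return "contaminated", "alarm"   # re-entry while contaminated-prime
        return "contaminated", None
    if event == "left_patient_zone" and status == "contaminated":
        return "contaminated-prime", None
    if event == "handwashing_detected" and status == "contaminated-prime":
        return "not contaminated", None
    return status, None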


A person may also be contaminated by entering contaminated areas other than a patient zone. For example, as shown in FIG. 2, the contaminated areas can include a patient consultation area 284. The patient consultation area 284 can be considered a contaminated area with or without the presence of a patient. The monitoring system 200 can include a consultation area camera 286, which has a field of view 282D that overlaps with and covers the patient consultation area 284. The contaminated areas can further include a check-in area 288 that is next to a door of the hospital room. Alternatively and/or additionally, the check-in area 288 can extend to include the door. The check-in area 288 can be considered a contaminated area with or without the presence of a patient. The monitoring system 200 can include an entrance camera 290, which has a field of view 282E that overlaps with and covers the check-in area 288.


As shown in FIG. 2, the monitoring system 200 can include an additional camera 294. Additional cameras may not be directed to any specific contaminated and/or handwashing areas. For example, the additional camera 294 can have a field of view 282F that substantially covers an area that a person likely has to pass through when moving from one area to another area of the hospital room, such as from the patient zone 275 to the consultation area 284. The additional camera 294 can provide data to the server 276 to facilitate tracking of movements of the people in the room.



FIG. 3 depicts a monitoring system 300 in another clinical setting. The monitoring system 300 may monitor the activities of anyone present in the room such as medical personnel, visitors, patients, custodians, etc. As described herein, the monitoring system 300 may be located in a clinical setting such as a hospital room. The hospital room may include one or more patient beds 308. The hospital room may include an entrance/exit 329 to the room. The entrance/exit 329 may be the only entrance/exit to the room.


The monitoring system 300 can include a server 322, a display 316, one or more camera systems 314, 318, 320, and an additional device 310. The camera systems 314, 318, 320 may be connected to the server 322. The server 322 may be a remote server. The one or more camera systems may include a first camera system 318, a second camera system 320, and/or additional camera systems 314. The camera systems 314, 318, 320 may include one or more processors, which can include one or more hardware accelerators. The processors can be enclosed in an enclosure 313, 324, 326 of the camera systems 314, 318, 320. In some aspects, the processors can include, but are not limited to, an embedded processing unit, such as an Nvidia® Jetson Xavier™ NX/AGX, that is embedded in an enclosure of the camera systems 314, 318, 320. The one or more processors may be physically located outside of the room. The processors may include microcontrollers such as, but not limited to, ASICs, FPGAs, etc. The camera systems 314, 318, 320 may each include a camera. The camera(s) may be in communication with the one or more processors and may transmit image data to the processor(s). In some aspects, the camera systems 314, 318, 320 can exchange data and state information with other camera systems.


The monitoring system 300 may include a database. The database can include information relating to the location of items in the room such as camera systems, patient beds, handwashing stations, and/or entrance/exits. The database can include locations of the camera systems 314, 318, 320 and the items in the field of view of each camera system 314, 318, 320. The database can further include settings for each of the camera systems. Each camera system 314, 318, 320 can be associated with an identifier, which can be stored in the database. The server 322 may use the identifiers to configure each of the camera systems 314, 318, 320.


As shown in FIG. 3, the first camera system 318 can include a first enclosure 324 and a first camera 302. The first enclosure 324 can enclose one or more hardware processors. The first camera 302 may be a camera capable of sensing depth and color, such as, but not limited to, an RGB-D stereo depth camera. The first camera 302 may be positioned in a location of the room to monitor the entire room or substantially all of the room. The first camera 302 may be mounted at a higher location in the room and tilted downward. The first camera 302 may be set up to minimize blind spots in the field of view of the first camera 302. For example, the first camera 302 may be located in a corner of the room. The first camera 302 may be facing the entrance/exit 329 and may have a view of the entrance/exit 329 of the room.


As shown in FIG. 3, the second camera system 320 can include a second enclosure 326 (which can include one or more processors) and a second camera 304. The second camera 304 may be an RGB color camera. Alternatively, the second camera 304 may be an RGB-D stereo depth camera. The second camera 304 may be installed over a hand hygiene compliance area 306. The hand hygiene compliance area 306 may include a sink and/or a hand sanitizer dispenser. The second camera 304 may be located above the hand hygiene compliance area 306 and may point downwards toward the hand hygiene compliance area 306. For example, the second camera 304 may be located on or close to the ceiling and may have a view of the hand hygiene compliance area 306 from above.


In a room of a relatively small size, the first and second camera systems 318, 320 may be sufficient for monitoring the room. Optionally, for example, if the room is of a relatively larger size, the system 300 may include any number of additional camera systems, such as a third camera system 314. The third camera system 314 may include a third enclosure 313 (which can include one or more processors) and a third camera 312. The third camera 312 of the third camera system 314 may be located near the patient's bed 308 or in a corner of the room, for example, a corner of the room that is different than (for example, opposite or diagonal to) the corner of the room where the first camera 302 of the first camera system 318 is located. The third camera 312 may be located at any other suitable location of the room to aid in reducing blind spots in the combined fields of view of the first camera 302 and the second camera 304. The third camera 312 of the third camera system 314 may have a field of view covering the entire room. The third camera system 314 may operate similarly to the first camera system 318, as described herein.


The monitoring system 300 may include one or more additional devices 310. The additional device 310 can be, but is not limited to, a patient monitoring and connectivity hub, bedside monitor, or other patient monitoring device. For example, the additional device 310 can be a Root® monitor by Masimo Corporation, Irvine, CA. Additionally or alternatively, the additional device 310 can be, but is not limited to, a display device of a data aggregation and/or alarm visualization platform. For example, the additional device 310 can be a display device (not illustrated) for the Uniview® platform by Masimo Corporation, Irvine, CA. The additional device(s) 310 can include smartphones or tablets (not illustrated). The additional device(s) may be in communication with the server 322 and/or the camera systems 318, 320, 314.


The monitoring system 300 can output alerts on the additional device(s) 310 and/or the display 316. The outputted alert may be any auditory and/or visual signal. Outputted alerts can include, but are not limited to, a fall alert, an unauthorized person alert, an alert that a patient should be turned, or an alert that a person has not complied with the hand hygiene protocol. For example, someone outside of the room can be notified on an additional device 310 and/or the display 316 that an emergency has occurred in the room. In some aspects, the monitoring system 300 can provide a graphical user interface, which can be presented on the display 316. A configuration user can configure the monitoring system 300 via the graphical user interface presented on the display 316.



FIG. 4 depicts patient sensor devices 404, 406, 408 (such as a wearable device) and a user computing device 402 (which may not be drawn to scale) that can be used in a monitoring system. In some aspects, one or more of the patient sensor devices 404, 406, 408 can be optionally used in a monitoring system. Additionally or alternatively, patient sensor devices can be used with the monitoring system that are different than the devices 404, 406, 408 depicted in FIG. 4. A patient sensor device can non-invasively measure physiological parameters from a fingertip, wrist, chest, forehead, or other portion of the body. The first, second, and third patient sensor devices 404, 406, 408 can be wirelessly connected to the user computing device 402 and/or a server in the monitoring system. The first patient sensor device 404 can include a display and a touchpad and/or touchscreen. The first patient sensor device 404 can be a pulse oximeter that is designed to non-invasively monitor patient physiological parameters from a fingertip. The first patient sensor device 404 can measure physiological parameters such as, but not limited to, blood oxygen saturation, pulse rate, perfusion index, respiration rate, heart rate, and/or pleth variability index. The first patient sensor device 404 can be a MightySat® fingertip pulse oximeter by Masimo Corporation, Irvine, CA. The second patient sensor device 406 can be configured to be worn on a patient's wrist to non-invasively monitor patient physiological parameters from a wrist. The second patient sensor device 406 can be a smartwatch. The second patient sensor device 406 can include a display and/or touchscreen. The second patient sensor device 406 can measure physiological parameters including, but not limited to, blood oxygen saturation, pulse rate, perfusion index, respiration rate, heart rate, and/or pleth variability index. The third patient sensor device 408 can be a temperature sensor that is designed to non-invasively monitor physiological parameters of a patient. In particular, the third patient sensor device 408 can measure a temperature of the patient. The third patient sensor device 408 can be a Radius T°™ sensor by Masimo Corporation, Irvine, CA. A patient, clinician, or other authorized user can use the user computing device 402 to view physiological information and other information from the monitoring system.


As shown, a graphical user interface can be presented on the user computing device 402. The graphical user interface can present physiological parameters that have been measured by the patient sensor devices 404, 406, 408. As described herein, the graphical user interface can also present alerts and information from the monitoring system. The graphical user interface can present alerts such as, but not limited to, a fall alert, an unauthorized person alert, an alert that a patient should be turned, or an alert that a person has not complied with the hand hygiene protocol.



FIG. 5 illustrates a camera image 500 with object tracking. The monitoring system can track the persons 502A, 502B, 502C in the camera image 500 with the boundary regions 504, 506, 508. In some aspects, each camera system in a monitoring system can be configured to perform object detection. As described herein, some monitoring systems can have a single camera system while other monitoring systems can have multiple camera systems. Each camera system can be configured with multiple machine learning models for object detection. A camera system can receive image data from a camera. The camera can capture a sequence of images (which can be referred to as frames). The camera system can process the frame with a YOLO (You Only Look Once) deep learning network, which can be trained to detect objects (such as persons 502A, 502B, 502C) and return coordinates of the boundary regions 504, 506, 508. In some aspects, the camera system can process the frame with an inception CNN, which can be trained to detect activities, such as hand sanitizing or hand washing (not illustrated). The machine learning models, such as the inception CNN, can be trained using a dataset of a particular activity type, such as handwashing or hand sanitizing demonstration videos, for example.
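

By way of a non-limiting, hypothetical illustration of the person detection step, the following sketch uses the open-source Ultralytics YOLO package (an assumption; the description does not require any particular implementation) to detect persons in a frame and return the coordinates of their boundary regions.

# Hypothetical sketch using the open-source Ultralytics YOLO package (an
# assumption, not a requirement of the description) to detect persons in a
# frame and return boundary-region coordinates.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # pretrained detector; class 0 is "person" in COCO

def detect_person_boxes(frame):
    # Return [x1, y1, x2, y2] boundary regions for each detected person.
    result = model(frame, verbose=False)[0]
    boxes = []
    for box, cls in zip(result.boxes.xyxy.tolist(), result.boxes.cls.tolist()):
        if int(cls) == 0:    # keep only person detections
            boxes.append([round(v, 1) for v in box])
    return boxes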


The camera system can determine processed data that consists of the boundary regions 504, 506, 508 surrounding the detected persons 502A, 502B, 502C in the room, such as coordinates of the boundary regions. The camera system can provide the boundary regions to a server in the monitoring system. In some aspects, processed data may not include the images captured by the camera. Advantageously, the images from the camera can be processed locally at the camera system and may not be transmitted outside of the camera system. In some aspects, the monitoring system can ensure anonymity and protect privacy of imaged persons by not transmitting the images outside of each camera system.


The camera system can track objects using the boundary regions. The camera system can compare the intersection of boundary regions in consecutive frames. A sequence of boundary regions associated with an object through consecutive frames can be referred to as a “track.” The camera system may associate boundary regions if the boundary regions of consecutive frames overlap by at least a threshold amount or are within a threshold distance of one another. The camera system may determine that boundary regions from consecutive frames that are adjacent (or closest to each other) are associated with the same object. Thus, whenever object detection occurs in the field of view of one camera, that object may be associated with the nearest track.
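

By way of a non-limiting, hypothetical illustration of associating boundary regions across consecutive frames, the following sketch computes intersection-over-union (IoU) between boxes and appends each detection to the best-overlapping track, starting a new track when no track overlaps sufficiently; the IoU threshold is an assumption.

# Hypothetical sketch of associating boundary regions across consecutive
# frames by intersection-over-union (IoU); the threshold is illustrative.
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, min_iou=0.3):
    # Assign each new detection to the best-overlapping track, or start a new track.
    for det in detections:
        best = max(tracks, key=lambda t: iou(t[-1], det), default=None)
        if best is not None and iou(best[-1], det) >= min_iou:
            best.append(det)         # same object continues its track
        else:
            tracks.append([det])     # unmatched detection starts a new track
    return tracks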


As described herein, the camera system can use one or more computer vision algorithms. For example, a computer vision algorithm can identify a boundary region around a person's face or around a person's body. In some aspects, the camera system can detect faces using a machine learning model, such as, but not limited to, Google's FaceNet. The machine learning model can receive an image of the person's face as input and output a vector of numbers, which can represent features of a face. In some aspects, the camera system can send the extracted facial features to the server. The monitoring system can map the extracted facial features to a person. The numbers in the vector can represent facial features corresponding to points on a person's face. Facial features of known people (such as clinicians or staff) can be stored in a facial features database, which can be part of the database described herein. To identify an unknown individual, such as a new patient or a visitor, the monitoring system can initially mark the person as unknown and subsequently identify the same person in multiple camera images. The monitoring system can populate a database with the facial features of the new person.
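

By way of a non-limiting, hypothetical illustration of querying the facial features database, the following sketch compares an extracted facial feature vector (for example, a FaceNet-style embedding) against stored feature vectors of known people and returns no match when the nearest stored vector is farther than a distance threshold; the threshold value is an assumption.

# Hypothetical sketch of querying a facial features database with an extracted
# feature vector; the distance threshold is an illustrative assumption.
import numpy as np

def query_facial_features(embedding, known_embeddings, known_names, max_distance=0.9):
    # Return the matching person's name, or None if the face is not in the database.
    if len(known_embeddings) == 0:
        return None
    distances = np.linalg.norm(np.asarray(known_embeddings) - embedding, axis=1)
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= max_distance else None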



FIG. 6 depicts a monitoring system 600 in a home setting. The monitoring system 600 can include, but is not limited to, one or more cameras 602, 604, 606. Some of the cameras, such as a first camera 602 of the monitoring system 600, can be the same as or similar to the camera system 114 of FIG. 1A. In some aspects, the cameras 602, 604, 606 can send data and/or images to a server (not illustrated). The monitoring system 600 can be configured to detect a pet 610 using the object identification techniques described herein. The monitoring system 600 can be further configured to determine if a pet 610 was fed or if the pet 610 is chewing or otherwise damaging the furniture 612. In some aspects, the monitoring system 600 can be configured to communicate with a home automation system. For example, if the monitoring system 600 detects that the pet 610 is near a door, the monitoring system 600 can instruct the home automation system to open the door. In some aspects, the monitoring system 600 can provide alerts and/or commands in the home setting to deter a pet from some activity (such as biting a couch, for example).



FIG. 7 depicts a monitoring system 700 in an infant care setting. The monitoring system 700 can include one or more cameras 702. In some aspects, a camera in the monitoring system 700 can send data and/or images to a server (not illustrated). The monitoring system 700 can be configured to detect an infant 704 using the object identification techniques described herein. Via a camera, the monitoring system 700 can detect whether a person is within an infant zone, which can be located within a field of view of the camera 702. Infant zones can be similar to patient zones, as described herein. For example, an infant zone can be defined as a proximity threshold around a crib 706 and/or the infant 704. In some aspects, a person is within the infant zone if the person is at least partially within a proximity threshold distance to the crib 706 and/or the infant 704. The monitoring system 700 can use object tracking, as described herein, to determine if the infant 704 is moved. For example, the monitoring system 700 can issue an alert if the infant 704 leaves the crib 706. The monitoring system 700 can include one or more machine learning models.


The monitoring system 700 can detect whether an unauthorized person is within the infant zone. The monitoring system 700 can detect whether an unauthorized person is present using one or more methods, such as, but not limited to, facial recognition, identification via an image of an identification tag, and/or RFID based tracking. Identification tag tracking (whether an identification badge, RFID tracking, or some other tracking) can be applicable to hospital-infant settings. In some aspects, the monitoring system 700 can issue an alert based on one or more of the following factors: facial detection of an unrecognized face; no positive visual identification of authorized persons via identification tags; and/or no positive identification of authorized persons via RFID tags.


As described herein, a machine learning model of the monitoring system 700 can receive an image of a person's face as input and output a vector of numbers, which can represent features of a face. The monitoring system 700 can map the extracted facial features to a known person. For example, a database of the monitoring system 700 can store a mapping from facial features (but not actual pictures of faces) to person profiles. If the monitoring system 700 cannot match the features to features from a known person, the monitoring system 700 can mark the person as unknown and issue an alert. Moreover, the monitoring system 700 can issue another alert if the unknown person moves the infant 704 outside of a zone.


In some aspects, the monitoring system 700 can monitor movements of the infant 704. The monitoring system 700 can monitor the infant's skin color for physiological concerns. For example, the monitoring system can detect a change in skin color (such as a bluish color) since that might indicate potential asphyxiation. The monitoring system 700 can use trained machine learning models to identify skin color changes. The monitoring system 700 can detect a position of the infant 704. For example, if the infant 704 rolls onto their stomach, the monitoring system 700 can issue a warning since it may be safer for the infant 704 to lie on their back. The monitoring system 700 can use trained machine learning models to identify potentially dangerous positions. In some aspects, a non-invasive sensor device (not illustrated) can be attached to the infant 704 (such as a wristband or a band that wraps around the infant's foot) to monitor physiological parameters of the infant. The monitoring system 700 can receive the physiological parameters, such as, but not limited to, blood oxygen saturation, pulse rate, perfusion index, respiration rate, heart rate, and/or pleth variability index. In some aspects, the monitoring system 700 can include a microphone that can capture audio data. The monitoring system 700 can detect sounds from the infant 704, such as crying. The monitoring system 700 can issue an alert if the detected sounds are above a threshold decibel level. Additionally or alternatively, the monitoring system 700 can process the sounds with a machine learning model. For example, the monitoring system 700 can convert sound data into spectrograms, input them into a CNN and a linear classifier model, and output a prediction of whether the sounds (such as excessive crying) should cause a warning to be issued. In some aspects, the monitoring system 700 can include a thermal camera. The monitoring system 700 can use trained machine learning models to identify a potentially wet diaper from an input thermal image.
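
As one simplified illustration of flagging a bluish shift in skin color (a stand-in for the trained machine learning models described above, not the disclosed approach), the skin region's blue-to-red intensity ratio could be tracked over time and compared against a recent baseline; the window size and ratio threshold below are assumptions.

    import numpy as np

    def blueness_score(skin_region_rgb):
        """Mean blue-to-red intensity ratio over an RGB crop of detected skin (H x W x 3 array)."""
        pixels = skin_region_rgb.reshape(-1, 3).astype(float) + 1e-6
        return float(np.mean(pixels[:, 2] / pixels[:, 0]))

    def color_alert(history, new_score, window=30, rise_threshold=1.25):
        """Flag a sustained bluish shift relative to the recent baseline of scores."""
        history.append(new_score)
        if len(history) < window:
            return False
        baseline = np.mean(history[-window:-window // 2])
        recent = np.mean(history[-window // 2:])
        return recent > rise_threshold * baseline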


Efficient Machine Learning Model Application



FIG. 8 is a flowchart of a method 800 for efficiently applying machine learning models, according to some aspects of the present disclosure. As described herein, a monitoring system, which can include a camera system, may implement aspects of the method 800 as described herein. The method 800 may include fewer or additional blocks and/or the blocks may be performed in an order different than is illustrated.


Beginning at block 802, image data can be received. A camera system (such as the camera systems 114, 318 of FIGS. 1A, 3 described herein) can receive image data from a camera. Depending on the type of camera and configuration of the camera, the camera system can receive different types of images, such as 4K, 1080p, or 8 MP images. Image data can also include, but is not limited to, a sequence of images. A camera in a camera system can continuously capture images. Therefore, the camera in a camera system can capture images of objects (such as a patient, a clinician, an intruder, the elderly, an infant, a youth, or a pet) in a room at a clinical facility, a home, or an assisted living home.


At block 806, a person detection model can be applied. The camera system can apply the person detection model based on the image data. In some aspects, the camera system can invoke the person detection model on a hardware accelerator. The hardware accelerator can be configured to accelerate the application of machine learning models, including a person detection model. The person detection model can be configured to receive image data as input. The person detection model can be configured to output a classification result. In some aspects, the classification result can indicate a likelihood (such as a percentage chance) that the image data includes a person. In other aspects, the classification result can be a binary result: either the object is predicted as present in the image or not. The person detection model can be, but is not limited to, a CNN. The person detection model can be trained to detect persons. For example, the person detection model can be trained with a training data set with labeled examples indicating whether the input data includes a person or not.
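
One way such a model could be invoked on a hardware accelerator is through an inference runtime that delegates the computation, as in the TensorFlow Lite sketch below. The model file name, the Edge-TPU-style delegate library, the single-score output layout, and the 0.6 threshold are all assumptions for illustration, not details of the disclosed system.

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Hypothetical model file and delegate library; both are assumptions for illustration.
    interpreter = tflite.Interpreter(
        model_path="person_detection.tflite",
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    def detect_person(image, score_threshold=0.6):
        """Run the person detection model on one image and apply a confidence threshold."""
        frame = np.expand_dims(image.astype(np.uint8), axis=0)  # add batch dimension
        interpreter.set_tensor(input_details[0]["index"], frame)
        interpreter.invoke()
        score = float(interpreter.get_tensor(output_details[0]["index"]).squeeze())
        return score >= score_threshold  # True -> a person object is predicted present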


At block 808, it can be determined whether a person is present. The camera system can determine whether a person is present. The camera system can determine whether a person object is located in the image data. The camera system can receive, from the person detection model (which can execute on the hardware accelerator), a classification result as output. In some aspects, the output can be a binary result, such as, “yes” there is a person object present or “no” there is not a person object present. In other aspects, the output can be a percentage result and the camera system can determine the presence of a person if the percentage result is above a threshold. If a person is detected, the method 800 proceeds to the block 810 to receive second image data. If a person is not detected, the method 800 proceeds to repeat the previous blocks 802, 806, 808 to continue checking for persons.


At block 810, second image data can be received. The block 810 for receiving the second image data can be similar to the previous block for receiving image data. Moreover, the camera in the camera system can continuously capture images, which can lead to the second image data. As described herein, the image data can include multiple images, such as a sequence of images.


At block 812, one or more person safety models can be applied. In response to detecting a person, the camera system can apply one or more person safety models. The camera system can invoke (which can be invoked on a hardware accelerator) a fall detection model based on the second image data. The fall detection model can output a classification result. In some aspects, the fall detection model can be or include a CNN. The camera system can pre-process the image data. In some aspects, the camera system can convert an image into an RGB image, which can be an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel in the image. In some aspects, the camera system can compute an optical flow from the image data (such as the RGB images), which can be a two-dimensional vector field between two images. The two-dimensional vector field can show how the pixels of an object in the first image move to form the same object in the second image. The fall detection model can be pre-trained to perform feature extraction and classification of the image data (which can be pre-processed image data) to output a classification result. In some aspects, the fall detection model can be made of various layers, such as, but not limited to, a convolution layer, a max pooling layer, and a regularization layer, as well as a classifier, such as, but not limited to, a softmax classifier.
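
A minimal sketch of this pre-processing, assuming OpenCV and BGR input frames, is shown below; it converts two consecutive frames to RGB arrays and computes a dense optical-flow field between them. The Farneback parameters are illustrative defaults rather than values from this disclosure.

    import cv2

    def preprocess_pair(image_a, image_b):
        """Convert two consecutive frames to RGB arrays and compute a dense optical-flow field."""
        rgb_a = cv2.cvtColor(image_a, cv2.COLOR_BGR2RGB)  # m-by-n-by-3 array of color components
        rgb_b = cv2.cvtColor(image_b, cv2.COLOR_BGR2RGB)
        gray_a = cv2.cvtColor(image_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(image_b, cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow: a 2-D vector (dx, dy) per pixel describing apparent motion.
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return rgb_a, rgb_b, flow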


As described herein, in some aspects, an advantage of performing the previous blocks 802, 806, 808 for checking whether a person is present is that more computationally expensive operations, such as applying one or more person safety models, can be delayed until a person is detected. The camera system can invoke (which can be invoked on a hardware accelerator) multiple person safety models based on the second image data. For each person safety model that is invoked, the camera system can receive a model result, such as but not limited to, a classification result. As described herein, the person safety models can include a fall detection model, a handwashing detection model, and/or an intruder detection model.
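
The gating pattern described here, where expensive person safety models run only after the cheaper person check succeeds, can be sketched as follows; the model names and calling convention are assumptions for illustration.

    def run_safety_models(second_image_data, safety_models):
        """Invoke each person safety model and collect its classification result.

        `safety_models` is a mapping of model name -> callable; in a real system each
        callable would dispatch to the hardware accelerator.
        """
        return {name: model(second_image_data) for name, model in safety_models.items()}

    def monitor_frame(image_data, detect_person, safety_models):
        """Only pay for the safety models when the cheap person check succeeds."""
        if not detect_person(image_data):
            return None  # keep looping on person detection
        return run_safety_models(image_data, safety_models)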


At block 814, it can be determined whether there is a person safety issue. The camera system can determine whether there is a person safety issue. As described above, for each person safety model that is invoked, the camera system can receive a model result as output. For some models, the output can be a binary result, such as, “yes” a fall has been detected or “no” a fall has not been detected. For other models, the output can be a percentage result and the camera system can determine a person safety issue exists if the percentage result is above a threshold. In some aspects, evaluation of the one or more person safety models can result in an issue detection if at least one model returns a result that indicates issue detection. If a person safety issue is detected, the method 800 proceeds to block 816 to provide an alert and/or take an action. If a person safety issue is not detected, the method 800 proceeds to repeat the previous blocks 802, 806, 808 to continue checking for persons.


At block 816, an alert can be provided and/or an action can be taken. In some aspects, the camera system can initiate an alert. The camera system can notify a monitoring system to provide an alert. In some aspects, a user computing device 102 can receive an alert about a safety issue. In some aspects, a clinician 110 can receive an alert about the safety issue. In some aspects, the camera system can initiate an action. The camera system can cause the monitoring system to take an action. For example, the monitoring system can automatically notify emergency services (such as an emergency hotline and/or an ambulance service) to send someone to help.



FIG. 9 is a flowchart of another method 900 for efficiently applying machine learning models, according to some aspects of the present disclosure. As described herein, a monitoring system, which can include a camera system, may implement aspects of the method 900 as described herein. The method 900 may include fewer or additional blocks and/or the blocks may be performed in an order different than is illustrated. The block(s) of the method 900 of FIG. 9 can be similar to the block(s) of the method 800 of FIG. 8. In some aspects, the block(s) of the method 900 of FIG. 9 can be used in conjunction with the block(s) of the method 800 of FIG. 8.


Beginning at block 902, a training data set can be received. The monitoring system can receive a training data set. In some aspects, a first set of videos of person falls can be collected and a second set of videos of persons without falling can be collected. A training data set can be created with the first set of videos and the second set of videos. Other training data sets can be created. For example, for machine learning of handwashing, a first set of videos with handwashing and a second set of videos without handwashing can be collected; and a training data set can be created from the first set of videos and the second set of videos. For machine learning detection of dilated pupils, a first set of images with dilated pupils and a second set of images without dilated pupils can be collected; and a training data set can be created from the first set of images and the second set of images. For machine learning detection of facial paralysis, a first set of images with facial paralysis and a second set of images without facial paralysis can be collected; and a training data set can be created from the first set of images and the second set of images. For machine learning detection of an infant, a first set of images with an infant and a second set of images without an infant can be collected; and a training data set can be created from the first set of images and the second set of images. For machine learning detection of an infant's position, a first set of images of an infant on their back and a second set of images of an infant on their stomach or their side can be collected; and a training data set can be created from the first set of images and the second set of images. For machine learning detection of an unconscious state, a first set of videos of persons in an unconscious state and a second set of videos of persons in a state of consciousness can be collected; and a training data set can be created from the first set of videos and the second set of videos. For other machine learning detection of an unconscious state, a first set of audio recordings of persons in an unconscious state and a second set of audio recordings of persons in a state of consciousness can be collected; and a training data set can be created from the first set of audio recordings and the second set of audio recordings. The monitoring system can receive training data sets for any of the machine learning models described herein that can be trained with supervised machine learning.
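
One illustrative way to assemble such a labeled training data set from two collections of recordings is sketched below; the directory layout, file extension, and label strings are assumptions, not details from this disclosure.

    from pathlib import Path

    def build_training_set(positive_dir, negative_dir, positive_label, negative_label,
                           extension="*.mp4"):
        """Pair each recording with its class label (e.g., 'fall' vs. 'no fall')."""
        examples = []
        for path in Path(positive_dir).glob(extension):
            examples.append((str(path), positive_label))
        for path in Path(negative_dir).glob(extension):
            examples.append((str(path), negative_label))
        return examples

    # Example: a fall-detection training set from two hypothetical folders.
    fall_training_set = build_training_set("videos/falls", "videos/no_falls", "fall", "no fall")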


At block 904, a machine learning model can be trained. The monitoring system can train one or more machine learning models. The monitoring system can train a fall detection model using the training data set from the previous block 902. The monitoring system can train a handwashing detection model using the training data set from the previous block 902. The monitoring system can train any of the machine learning models described herein that use supervised machine learning.


In some aspects, the monitoring system can train a neural network, such as, but not limited to, a CNN. The monitoring system can initialize the neural network with random weights. During the training of the neural network, the monitoring system feeds labelled data from the training data set to the neural network. Class labels can include, but are not limited to, fall, no fall, hand washing, no hand washing, loud noise, no loud noise, normal pupils, dilated pupils, no facial paralysis, facial paralysis, infant, no infant, supine position, prone position, side position, unconscious, conscious, etc. The neural network can process each input vector using its randomly assigned weights and then compare the output with the class label of the input vector. If the output prediction does not match the class label, an adjustment to the weights of the neural network neurons is made so that the output more closely matches the class label. The corrections to the values of the weights can be made through a technique, such as, but not limited to, backpropagation. Each run of training of the neural network can be called an “epoch.” The neural network can go through several series of epochs during the process of training, which results in further adjusting of the neural network weights. After each epoch step, the neural network can become more accurate at classifying and correctly predicting the class of the training data. After training the neural network, the monitoring system can use a test dataset to verify the neural network's accuracy. The test dataset can be a set of labelled test data that were not included in the training process. Each test vector can be fed to the neural network, and the monitoring system can compare the output to the actual class label of the test input vector.
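
A minimal supervised training loop matching this description might look like the Keras sketch below. The architecture, input size, epoch count, and the randomly generated stand-in data are assumptions for illustration; the disclosed models are not limited to this structure.

    import numpy as np
    import tensorflow as tf

    # Randomly generated stand-ins for the labeled training and test data sets (assumption).
    train_images = np.random.rand(32, 96, 96, 3).astype("float32")
    train_labels = np.random.randint(0, 2, size=32)   # e.g., 0 = "no fall", 1 = "fall"
    test_images = np.random.rand(8, 96, 96, 3).astype("float32")
    test_labels = np.random.randint(0, 2, size=8)

    # A small CNN with convolution, max pooling, regularization, and a softmax classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(96, 96, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    # Weights start random and are adjusted via backpropagation; each pass over the data is an epoch.
    model.fit(train_images, train_labels, epochs=5)

    # Verify accuracy on labelled test data that was not included in the training process.
    loss, accuracy = model.evaluate(test_images, test_labels)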


At block 906, input data can be received. The camera system can receive input data. In some aspects, the block 906 for receiving input data can be similar to the block 802 of FIG. 8 for receiving image data. The camera system can receive image data from a camera. In some aspects, other input data can be received. For example, the camera system can receive a current time. The camera system can receive an RFID signal (which can be used for identification purposes, as described herein). The camera system can receive physiological values (such as blood oxygen saturation, pulse rate, perfusion index, respiration rate, heart rate, and/or pleth variability index) from a patient sensor device, such as a wearable device.


At block 908, it can be determined whether a trigger has been satisfied. The camera system can determine whether a trigger has been satisfied to apply one or more machine learning models. In some aspects, the camera system can determine whether a trigger has been satisfied by checking whether a person has been detected. In some aspects, the camera system can determine whether a trigger has been satisfied by checking whether the current time satisfies a trigger time window, such as, but not limited to, a daily check-up time window. If a trigger is satisfied, the method 900 proceeds to the block 910 to receive captured data. If a trigger is not satisfied, the method 900 proceeds to repeat the previous blocks 906, 908 to continue checking for triggers.


In some aspects, a trigger can be determined based on a received physiological value. The camera system can determine to begin a monitoring process based on a physiological value. In some aspects, the wearable device can include a pulse oximetry sensor and the physiological value is for blood oxygen saturation. The camera system can determine that the physiological value is below a threshold level (such as blood oxygen below 88%, 80%, or 70%, etc.). In some aspects, the wearable device can include a respiration rate sensor and the physiological value is for respiration rate. The camera system can determine that the physiological value satisfies a threshold alarm level (such as respiration rate under 12 or over 25 breaths per minute). In some aspects, the wearable device can include a heart rate sensor, the physiological value is for heart rate, and multiple physiological values measuring heart rate over time can be received from the wearable device. The camera system can determine that the physiological values satisfy a threshold alarm level, such as, but not limited to, heart rate being above 100 beats per minute for a threshold period of time or under a threshold level for a threshold period of time.
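
These threshold-based triggers can be expressed as simple predicate checks, as in the sketch below; the limit values mirror the example numbers in this paragraph, and the function and class names are illustrative assumptions.

    from collections import deque

    def spo2_trigger(spo2_percent, threshold=88):
        """Trigger when blood oxygen saturation drops below the threshold level."""
        return spo2_percent < threshold

    def respiration_trigger(breaths_per_minute, low=12, high=25):
        """Trigger when respiration rate leaves the expected range."""
        return breaths_per_minute < low or breaths_per_minute > high

    class HeartRateTrigger:
        """Trigger when heart rate stays above a limit for a threshold number of samples."""
        def __init__(self, limit_bpm=100, window=60):
            self.readings = deque(maxlen=window)
            self.limit_bpm = limit_bpm

        def update(self, bpm):
            self.readings.append(bpm)
            return (len(self.readings) == self.readings.maxlen
                    and all(value > self.limit_bpm for value in self.readings))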


At block 910, captured data can be received. The block 910 for receiving captured data can be similar to the previous block 906 for receiving input data. Moreover, the camera in the camera system can continuously capture images, which can lead to the captured data. In some aspects, the camera system can receive audio data from a microphone. In some aspects, the camera system can be configured to cause presentation, on a display, of a prompt to cause a person to perform an activity. The camera system can receive, from a camera, image data of a recording of the activity.


At block 912, one or more machine learning models can be applied. In response to determining that a trigger has been satisfied, the camera system can apply one or more machine learning models based on the captured data. The camera system can invoke (which can be invoked on a hardware accelerator) one or more machine learning models, which can output a model result. The camera system can invoke a fall detection model based on image data where the fall detection model can output a classification result. The camera system can invoke a loud noise detection model based on the audio data where the loud noise detection model can output a classification result. In some aspects, the camera system can generate spectrogram data from the audio data and provide the spectrogram data as input to the loud noise detection model. The camera system can invoke a facial feature extraction model based on the image data where the facial feature extraction model can output a facial feature vector. The camera system can invoke a handwashing detection model based on the image data where the handwashing detection model can output a classification result. The camera system can invoke a screening machine learning model based on image data where the screening machine learning model can output a model result. The screening machine learning model can include, but is not limited to, a pupillometry screening model or a facial paralysis screening model.
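
The spectrogram step mentioned for the loud noise detection model can be performed with a standard short-time Fourier transform, as in the SciPy sketch below; the sample rate, window sizes, and the placeholder model call are assumptions for illustration.

    import numpy as np
    from scipy import signal

    def audio_to_spectrogram(audio_samples, sample_rate=16000):
        """Convert raw audio samples into a log-scaled spectrogram for the classifier."""
        frequencies, times, sxx = signal.spectrogram(audio_samples, fs=sample_rate,
                                                     nperseg=256, noverlap=128)
        return np.log(sxx + 1e-10)  # log scale keeps quiet and loud components comparable

    # spectrogram = audio_to_spectrogram(microphone_buffer)
    # result = loud_noise_model(spectrogram)   # hypothetical model invocation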


In some aspects, in response to determining to begin the monitoring process, the camera system can invoke one or more machine learning models. The camera system can invoke (which can be on a hardware accelerator) a first unconscious detection model based on the image data where the first unconscious detection model outputs a first classification result. The camera system can invoke (which can be on the hardware accelerator) a second unconscious detection model based on the audio data where the second unconscious detection model outputs a second classification result.


At block 914, it can be determined whether there is a safety issue. The camera system can determine whether there is a safety issue. For each machine learning model that is invoked, the camera system can receive a classification result as output. For some models, the output can be a binary result, such as, “yes” a fall has been detected or “no” a fall has not been detected. For other models, the output can be a percentage result and the camera system can determine a safety issue exists if the percentage result is above a threshold. In some aspects, evaluation of the one or more machine learning models can result in an issue detection if at least one model returns a result that indicates issue detection. The camera system can detect a potential fall based on the classification result. The camera system can detect a potential scream or loud noise based on the classification result from a loud noise detection model. The camera system can execute a query of a facial features database based on the facial feature vector where executing the query can indicate that the facial feature vector is not present in a facial features database, which can indicate a safety issue. The camera system can detect a potential screening issue based on the classification result. The potential screening issue can indicate, but is not limited to, potential dilated pupils or potential facial paralysis. In some aspects, based on the output from one or more machine learning models, the camera system can detect a potential state of unconsciousness. If a safety issue is detected, the method 900 proceeds to block 916 to provide an alert and/or take an action. If a safety issue is not detected, the method 900 proceeds to repeat the previous blocks 906, 908 to continue checking for triggers.


At block 916, an alert can be provided and/or an action can be taken. In some aspects, the camera system can initiate an alert. The camera system can notify a monitoring system to provide an alert. In some aspects, the camera system can initiate an action. In some aspects, the block 916 for providing an alert and/or taking an action can be similar to the block 816 of FIG. 8 for providing an alert and/or taking an action. In response to detecting an issue, such as, but not limited to, detecting a potential fall, loud noise, scream, lack of handwashing, dilated pupils, facial paralysis, intruder, state of unconsciousness, etc., the monitoring system can provide an alert. The monitoring system can escalate alerts. For example, in response to detecting a potential fall and a potential scream or loud noise, the monitoring system can provide an escalated alert. The camera system can cause the monitoring system to take an action. For example, the monitoring system can automatically notify emergency services (such as an emergency hotline and/or an ambulance service) to send someone to help.


In some aspects, the monitoring system can allow privacy options. For example, some user profiles can specify that the user computing devices associated with those profiles should not receive alerts (which can be specified for a period of time). However, the monitoring system can include an alert escalation policy such that alerts can be presented via user computing devices based on one or more escalation conditions. For example, if an alert is not responded to for a period of time, the monitoring system can escalate the alert. As another example, if a quantity of alerts exceeds a threshold, then the monitoring system can present an alert via user computing devices despite user preferences otherwise.
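
An escalation policy of this kind, which respects a profile's muting preference but overrides it when an alert goes unanswered too long or too many alerts accumulate, can be sketched as follows; the timeout, the pending-alert limit, and the method names are assumptions.

    import time

    class AlertEscalationPolicy:
        """Respect per-profile muting, but escalate unanswered or accumulating alerts."""
        def __init__(self, response_timeout_s=300, max_pending=3):
            self.response_timeout_s = response_timeout_s
            self.max_pending = max_pending
            self.pending = []  # timestamps of alerts not yet acknowledged

        def should_present(self, profile_muted, now=None):
            now = time.time() if now is None else now
            self.pending.append(now)
            unanswered_too_long = any(now - t > self.response_timeout_s for t in self.pending)
            too_many = len(self.pending) > self.max_pending
            # Escalation conditions override the profile's "do not alert" preference.
            return (not profile_muted) or unanswered_too_long or too_many

        def acknowledge(self):
            self.pending.clear()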


At block 918, a communications system can be provided. The monitoring system can provide a communications system. The camera system can receive, from a computing device, first video data (such as, but not limited to, video data of a clinician, friends, or family of a patient). The camera system can cause presentation, on the display, of the first video data. The camera system can receive, from the camera, second video data and transmit, to the computing device, the second video data.


Elderly Care Features


Some of the aspects described herein can be directed towards elderly care features. The monitoring systems described herein can be applied to assisted living and/or home settings for the elderly. The monitoring systems described herein, which can include camera systems, can generally monitor activities of the elderly. The monitoring systems described herein can initiate check-up processes, including, but not limited to, dementia checks. In some aspects, a check-up process can detect a color of skin to detect possible physiological changes. The monitoring system can perform stroke detection by determining changes in facial movements and/or speech patterns. The monitoring system can track medication administration and provide reminders if medication is not taken. For example, the monitoring system can monitor a cupboard or medicine drawer and determine whether medication is taken based on activity in those areas. In some aspects, some of the camera systems can be outdoor camera systems. The monitoring system can track when a person goes for a walk, log when the person leaves and returns, and potentially issue an alert if a walk exceeds a threshold period of time. In some aspects, the monitoring system can track usage of good hygiene practices, such as but not limited to, handwashing, brushing teeth, or showering (e.g., tracking that a person enters a bathroom at a showering time). The monitoring system can keep track of whether a person misses a check-up. In some aspects, a camera system can include a thermal camera, which can be used to identify a potentially wet adult diaper from an input thermal image.


With respect to FIG. 9, the method 900 for efficiently applying machine learning models can be applied to elderly care settings. At block 902, a training data set can be received. The monitoring system can receive a training data set, which can be used to train machine learning models to be used in check-up processes for the elderly, such as checking for dilated pupils or facial paralysis. For machine learning of dilated pupils, a first set of images with dilated pupils and a second set of images without dilated pupils can be collected; and a training data set can be created from the first set of images and the second set of images. For machine learning of facial paralysis, a first set of images with facial paralysis and a second set of images without facial paralysis can be collected; and a training data set can be created from the first set of images and the second set of images.


At block 904, a machine learning model can be trained. A server in the monitoring system can train a pupillometry screening model using the training data set. The server in the monitoring system can train a facial paralysis screening model using the training data set.


At block 906, input data can be received. The camera system can receive input data, which can be used to determine if a trigger has been satisfied for application of one or more machine learning models. The camera system can receive image data from a camera. The camera system can receive a current time. The camera system can receive an RFID signal, which can be used for person identification and/or detection.


In some aspects, the monitoring system can include patient sensor devices, such as, but not limited to, wearable devices. The wearable device can be configured to process sensor signals to determine a physiological value for the person. The monitoring system can receive a physiological value from the wearable device. In some aspects, the wearable device can include a pulse oximetry sensor and the physiological value can be for blood oxygen saturation. In some aspects, the wearable device can be configured to process the sensor signals to measure at least one of blood oxygen saturation, pulse rate, perfusion index, respiration rate, heart rate, or pleth variability index. Some of the wearable devices can be used for an infant.


At block 908, it can be determined whether a trigger has been satisfied. The camera system can determine whether a trigger has been satisfied to apply one or more machine learning models. The camera system can determine whether a check-up process should begin from a current time. For example, the monitoring system can conduct check-up processes at regular intervals, such as once or twice a day, which can be at particular times, such as a morning check-up time or an afternoon check-up time. As described herein, another trigger type can be detection of a person. The camera system can invoke a person detection model based on image data where the person detection model outputs a classification result; and detect a person based on the classification result. If a trigger is satisfied, the method 900 proceeds to the block 910 to receive captured data. If a trigger is not satisfied, the method 900 proceeds to repeat the previous blocks 906, 908 to continue checking for triggers.
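
As one illustration, the time-based trigger could be a simple check that the current time falls inside a configured morning or afternoon check-up window; the specific window times below are assumptions.

    from datetime import datetime, time as dtime

    def in_checkup_window(now=None, windows=((dtime(9, 0), dtime(9, 30)),
                                             (dtime(15, 0), dtime(15, 30)))):
        """True when the current time falls inside a morning or afternoon check-up window."""
        current = (now or datetime.now()).time()
        return any(start <= current <= end for start, end in windows)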


At block 910, captured data can be received. In response to determining to begin the check-up process, the monitoring system can cause presentation, on a display, of a prompt to cause a person to perform a check-up activity. In some aspects, the check-up activity can check for signs of dementia. A check-up activity can include having a person stand a particular distance from the camera system. A check-up activity can include simple exercises. The prompts can cause a user to say something or perform tasks. The person can be prompted to perform math tasks or pattern recognition, solve puzzles, and/or identify photos of family members. For example, the person can be prompted to point to sections of the display, which can correspond to answers to check-up tests. The check-up tests can check for loss of motor skills. In some aspects, the check-up activity can include a virtual physical or a virtual appointment conducted by a clinician. The camera system can receive, from the camera, image data of a recording of the check-up activity. In some aspects, the camera system can receive other input, such as, but not limited to, audio data from a microphone.


At block 912, one or more machine learning models can be applied. In response to determining that a trigger has been satisfied, the camera system can apply one or more machine learning models based on the captured data. In some aspects, in response to determining to begin the check-up process, the camera system can invoke a screening machine learning model based on image data where the screening machine learning model can output a model result (such as a classification result). The screening machine learning model can include, but is not limited to, a pupillometry screening model, a facial paralysis screening model, or a gesture detection model. The gesture detection model can be configured to detect a gesture directed towards a portion of the display. For example, during a dementia test, the person can be prompted to point to a portion of the display and the gesture detection model can identify a point gesture, such as but not limited to, pointing to a quadrant on the display. In some aspects, in response to detecting a person, the camera system can invoke a handwashing detection model based on image data wherein the handwashing detection model outputs a classification result.


At block 914, it can be determined whether there is a safety issue. The camera system can determine whether there is a safety issue, such as a potential screening issue. The camera system can detect a potential screening issue based on the model result. The potential screening issue can indicate, but is not limited to, potential dilated pupils or potential facial paralysis. The monitoring system can determine whether there is a potential screening issue based on output from a gesture detection model. For example, the monitoring system can use a detected gesture to determine an answer, and an incorrect answer can indicate a potential screening issue. If a safety issue is detected, the method 900 proceeds to block 916 to provide an alert and/or take an action. If a safety issue is not detected, the method 900 proceeds to repeat the previous blocks 906, 908 to continue checking for triggers.


At block 916, an alert can be provided. In some aspects, the camera system can initiate an alert. The camera system can notify a monitoring system to provide one or more alerts. In response to detecting an issue in an elderly care setting, such as, but not limited to, detecting a potential fall, loud noise, scream, lack of handwashing, dilated pupils, facial paralysis, intruder, etc., the monitoring system can provide an alert. The monitoring system can escalate alerts. For example, in response to detecting a potential fall and a potential scream or loud noise, the monitoring system can provide an escalated alert. In some aspects, the monitoring system can provide alerts via different networks (such as Wi-Fi or cellular) and/or technologies (such as Bluetooth).


At block 918, a communications system can be provided. The monitoring system can provide a communications system in an elderly care setting. The camera system can receive, from a computing device, first video data (such as, but not limited to, video data of a clinician, friends, or family of a patient). The camera system can cause presentation, on the display, of the first video data. The camera system can receive, from the camera, second video data and transmit, to the computing device, the second video data.


Infant Care Features


Some of the aspects described herein can be directed towards infant care features. The monitoring systems described herein can be applied to monitoring an infant. FIG. 10 is a flowchart of a method 1000 for efficiently applying machine learning models for infant care, according to some aspects of the present disclosure. As described herein, a monitoring system, which can include a camera system, may implement aspects of the method 1000 as described herein. The block(s) of the method 1000 of FIG. 10 can be similar to the block(s) of the methods 800, 900 of FIGS. 8 and/or 9. The method 1000 may include fewer or additional blocks and/or the blocks may be performed in an order different than is illustrated.


Beginning at block 1002, image data can be received. A camera system can receive image data from a camera, which can be positioned in an infant area, such as a nursery. Image data can also include, but is not limited to, a sequence of images. A camera in a camera system can continuously capture images of the infant area. Therefore, the camera in a camera system can capture images of objects, such as an infant, in a room either at a home or a clinical facility.


At block 1006, an infant detection model can be applied. The camera system can apply the infant detection model based on the image data. In some aspects, the camera system can invoke the infant detection model on a hardware accelerator. The infant detection model can be configured to receive image data as input. The infant detection model can be configured to output a classification result. In some aspects, the classification result can indicate a likelihood (such as a percentage chance) that the image data includes an infant. In other aspects, the classification result can be a binary result: either the infant object is predicted as present in the image or not. The infant detection model can be, but is not limited to, a CNN. The infant detection model can be trained to detect infants. For example, the infant detection model can be trained with a training data set with labeled examples indicating whether the input data includes an infant or not.


At block 1008, it can be determined whether an infant is present. The camera system can determine whether an infant is present. The camera system can determine whether an infant object is located in the image data. The camera system can receive from the infant detection model the output of a classification result. In some aspects, the output can be a binary result, such as, “yes” there is an infant object present or “no” there is not an infant object present. In other aspects, the output can be a percentage result and the camera system can determine the presence of an infant if the percentage result is above a threshold. If an infant is detected, the method 1000 proceeds to the block 1010 to receive captured data. If an infant is not detected, the method 1000 proceeds to repeat the previous blocks 1002, 1006, 1008 to continue checking for infants.


At block 1010, captured data can be received. The camera in the camera system can continuously capture images, which can lead to the captured data. In some aspects, the camera system can receive audio data from a microphone.


At block 1012, one or more infant safety models can be applied. In response to detecting an infant, the camera system can apply one or more infant safety models that each output a model result. The camera system can invoke (which can be invoked on a hardware accelerator) an infant position model based on the captured data. The infant position model can output a classification result. In some aspects, the infant position model can be or include a CNN. In response to detecting an infant, the camera system can invoke a facial feature extraction model based on second image data where the facial feature extraction model outputs a facial feature vector. The camera system can execute a query of a facial features database based on the facial feature vector where executing the query indicates that the facial feature vector is not present in the facial features database. An infant safety model can be an infant color detection model. In some aspects, the model result can include coordinates of a boundary region identifying an infant object in the image data. As described herein, the camera system can invoke a loud noise detection model based on the audio data where the loud noise detection model can output a classification result.


At block 1014, it can be determined whether there is an infant safety issue. The camera system can determine whether there is an infant safety issue. As described above, for each infant safety model that is invoked, the camera system can receive a model result as output. For some models, the output can be a binary result, such as, “yes” an infant is in a supine position or “no” a supine position has not been detected (such as the infant potentially lying on their stomach). For other models, the output can be a percentage result and the camera system can determine an infant safety issue exists if the percentage result is above a threshold. The camera system can determine that an unrecognized person has been detected. In some aspects, the camera system can determine that the coordinates of the boundary region exceed a threshold distance from an infant zone (which can indicate that an infant is being removed from the infant zone). The camera system can determine a potential scream from the model result. In some aspects, evaluation of the one or more infant safety models can result in an issue detection if at least one model returns a result that indicates issue detection. If an infant safety issue is detected, the method 1000 proceeds to block 1016 to provide an alert and/or take an action. If an infant safety issue is not detected, the method 1000 proceeds to repeat the previous blocks 1002, 1006, 1008 to continue checking for infants.
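
The geometric test suggested here, namely checking whether the infant's boundary region has moved beyond a threshold distance from the infant zone, could look like the sketch below; the coordinate convention, rectangular zone representation, and pixel threshold are assumptions for illustration.

    def center(box):
        """Center point of an axis-aligned boundary region (x1, y1, x2, y2)."""
        return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

    def outside_infant_zone(infant_box, zone_box, threshold_px=50):
        """True when the infant's boundary region center is farther than the threshold
        from the infant zone (e.g., the region drawn around the crib)."""
        cx, cy = center(infant_box)
        # Distance from the center point to the zone rectangle (0 when the center is inside it).
        dx = max(zone_box[0] - cx, 0, cx - zone_box[2])
        dy = max(zone_box[1] - cy, 0, cy - zone_box[3])
        return (dx * dx + dy * dy) ** 0.5 > threshold_px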


At block 1016, an alert can be provided and/or an action can be taken. In some aspects, the camera system can initiate an alert associated with the infant. The camera system can notify a monitoring system to provide an alert. In some aspects, a user computing device 102 can receive an alert about an infant safety issue. In some aspects, a clinician 110 can receive an alert about the infant safety issue. In some aspects, the camera system can initiate an action. The camera system can cause the monitoring system to take an action. For example, the monitoring system can automatically notify emergency services (such as an emergency hotline and/or an ambulance service) to send someone to help.


At Home Features


Some of the aspects described herein can be directed towards at-home monitoring features. The monitoring systems described herein can be applied to monitoring in a home. The monitoring system can accomplish one or more of the following features using the machine learning techniques described herein. The monitoring system can monitor the time spent on various tasks by members of a household (such as youth at home), such as time spent watching television or time spent studying. The monitoring system can be configured to confirm that certain tasks (such as chores) are completed. In some aspects, the monitoring system can allow parents to monitor an amount of time spent using electronics. In some aspects, the camera system can be configured to detect night terrors and the amount and types of sleep. As described herein, in some aspects, the monitoring system can track usage of good hygiene practices at home, such as but not limited to, handwashing, brushing teeth, or showering (e.g., tracking that a person enters a bathroom at a showering time). As described herein, zones can be used to provide alerts, such as monitoring a pool zone or other spaces where youth should not be allowed, such as, but not limited to, certain rooms at certain times and/or unaccompanied by an adult. For example, the camera system can monitor a gun storage location to alert adults to unauthorized access of weapons.


General Features


Some of the aspects described herein can include any of the following features, which can be applied in different settings. In some aspects, a camera system can have local storage for an image and/or video feed. In some aspects, remote access of the local storage may be restricted and/or limited. In some aspects, the camera system can use a calibration factor which can be useful for correcting color drift in the image data from a camera. In some aspects, the camera system can add or remove filters on the camera to provide certain effects. The camera system may include infrared filters. In some aspects, the monitoring system can monitor food intake of a subject and/or estimate calories. In some aspects, the monitoring system can detect mask wearing (such as wearing or not wearing an oxygen mask).


The monitoring system can perform one or more check-up tests. The monitoring system, using a machine learning model, can detect slurred speech, drunkenness, drug use, and/or adverse behavior. Based on other check-up tests, the monitoring system can detect shaking, microtremors, or tremors, which can indicate a potential disease state such as Parkinson's. The monitoring system can track exercise movements to determine a potential physiological condition. A check-up test can be used by the monitoring system for a cognitive assessment, such as detecting vocabulary decline. In some aspects, the monitoring system can check a user's smile, where the monitoring system prompts the user to stand a specified distance away from the camera system. A check-up test can request a subject to do one or more exercises, read something out loud (to test the muscles of the face), or reach for an object. In some aspects, the camera system can perform an automated physical, perform a hearing test, and/or perform an eye test. In some aspects, a check-up test can be for Alzheimer's detection. The monitoring system can provide memory exercises, monitor for good/bad days, and/or monitor basic behavior to prevent injury. In some aspects, the camera system can monitor skin color changes to detect skin damage and/or sunburn. The camera system can track a trend of skin color, advise or remind a user to take corrective action, and/or detect a tan line. The monitoring system can monitor sleep cycles and/or heart rate variability. In some aspects, the monitoring system can monitor snoring, rapid eye movement (REM), and/or sleep quality, which can be indicative of sleep apnea or another disease. As described herein, the camera system can be trained to detect sleep walking. The camera system can be configured to detect coughing or sneezing to determine potential allergies or illness. The camera system can also provide an alert if possible hyperventilation is detected. Any of the monitoring features described herein can be implemented with the machine learning techniques described herein.


Additional Implementation Details



FIG. 11 is a block diagram that illustrates example components of a computing device 1100, which can be a camera system. The computing device 1100 can implement aspects of the present disclosure, and, in particular, aspects of the monitoring system 100A, 100B, such as the camera system 114. The computing device 1100 can communicate with other computing devices.


The computing device 1100 can include a hardware processor 1102, a hardware accelerator 1116, a data storage device 1104, a memory device 1106, a bus 1108, a display 1112, one or more input/output devices 1114, and a camera 1118. A processor 1102 can also be implemented as a combination of computing devices, e.g., a combination of a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor, or any other such configuration. The processor 1102 can be configured, among other things, to process data and execute instructions to perform one or more functions, such as applying one or more machine learning models, as described herein. The hardware accelerator 1116 can be special hardware that is configured to accelerate machine learning applications. The data storage device 1104 can include a magnetic disk, optical disk, or flash drive, etc., and is provided and coupled to the bus 1108 for storing information and instructions. The memory 1106 can include one or more memory devices that store data, including without limitation, random access memory (RAM) and read-only memory (ROM). The computing device 1100 may be coupled via the bus 1108 to a display 1112, such as an LCD display or touch screen, for displaying information to a user, such as a patient. The computing device 1100 may be coupled via the bus 1108 to one or more input/output devices 1114. The input/output devices 1114 can include, but are not limited to, a keyboard, mouse, digital pen, microphone, touch screen, gesture recognition system, voice recognition system, imaging device (which may capture eye, hand, head, or body tracking data and/or placement), gamepad, accelerometer, or gyroscope. The camera 1118 can include, but is not limited to, a 1080p or 4K camera and/or an infrared image camera.


Additional Aspects and Terminology


As used herein, the term “patient” can refer to any person that is monitored using the systems, methods, devices, and/or techniques described herein. As used herein, a “patient” is not required to be admitted to a hospital; rather, the term “patient” can refer to a person that is being monitored. As used herein, in some cases the terms “patient” and “user” can be used interchangeably.


While some features described herein may be discussed in a specific context, such as adult, youth, infant, elderly, or pet care, those features can be applied to other contexts, such as, but not limited to, a different one of adult, youth, infant, elderly, or pet care contexts.


The apparatuses and methods described herein may be implemented by one or more computer programs executed by one or more processors. The computer programs include processor-executable instructions that are stored on a non-transitory tangible computer readable medium. The computer programs may also include stored data. Non-limiting examples of the non-transitory tangible computer readable medium are nonvolatile memory, magnetic storage, and optical storage.


The term “substantially” when used in conjunction with the term “real-time” forms a phrase that will be readily understood by a person of ordinary skill in the art. For example, it is readily understood that such language will include speeds in which no or little delay or waiting is discernible, or where such delay is sufficiently short so as not to be disruptive, irritating, or otherwise vexing to a user.


Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” “for example,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain aspects described herein include, while other aspects described herein do not include, certain features, elements, or states. Thus, such conditional language is not generally intended to imply that features, elements, or states are in any way required for one or more aspects described herein.


Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Such disjunctive language is not generally intended to, and should not, imply that certain aspects require at least one of X, at least one of Y, or at least one of Z to each be present. Thus, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.


The term “a” as used herein should be given an inclusive rather than exclusive interpretation. For example, unless specifically noted, the term “a” should not be understood to mean “exactly one” or “one and only one”; instead, the term “a” means “one or more” or “at least one,” whether used in the claims or elsewhere in the specification and regardless of uses of quantifiers such as “at least one,” “one or more,” or “a plurality” elsewhere in the claims or specification.


The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth.


While the above detailed description has shown, described, and pointed out novel features as applied to various aspects described herein, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain aspects described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.

Claims
  • 1. A system comprising: a storage device configured to store first instructions and second instructions; a camera; a microphone; a hardware accelerator configured to execute the first instructions; and a hardware processor configured to execute the second instructions to: receive, from the camera, first image data; invoke, on the hardware accelerator, a person detection model based on the first image data, wherein the person detection model outputs first classification result; detect a person based on the first classification result; receive, from the camera, second image data; and in response to detecting the person, invoke, on the hardware accelerator, a fall detection model based on the second image data, wherein the fall detection model outputs a second classification result, detect a potential fall based on the second classification result, and in response to detecting the potential fall, provide an alert; receive, from the microphone, audio data; and in response to detecting the person, invoke, on the hardware accelerator, a loud noise detection model based on the audio data, wherein the loud noise detection model outputs a third classification result, detect a potential scream based on the third classification result, and in response to detecting the potential scream, provide a second alert.
  • 2. The system of claim 1, wherein the second alert is an escalated alert and is in response to detecting both the potential fall and the potential scream.
  • 3. The system of claim 1, wherein invoking the loud noise detection model based on the audio data further comprises: generating spectrogram data from the audio data; and providing the spectrogram data as input to the loud noise detection model.
  • 4. The system of claim 1, wherein the second image data comprises a plurality of images.
  • 5. A method comprising: receiving, from a camera, first image data; invoking, on a hardware accelerator, a person detection model based on the first image data, wherein the person detection model outputs first classification result; detecting a person based on the first classification result; receiving, from the camera, second image data; and in response to detecting the person, invoking, on the hardware accelerator, a plurality of person safety models based on the second image data, for each person safety model from the plurality of person safety models, receiving, from the hardware accelerator, a second classification result, detecting a potential safety issue based on a particular second classification result, and in response to detecting the potential safety issue, providing an alert; receiving, from a microphone, audio data; and in response to detecting the person, invoking, on the hardware accelerator, a loud noise detection model based on the audio data, wherein the loud noise detection model outputs a third classification result, detecting a potential scream based on the third classification result, and in response to detecting the potential safety issue and the potential scream, providing an escalated alert.
  • 6. The method of claim 5, further comprising: in response to detecting the person, invoking, on the hardware accelerator, a facial feature extraction model based on the second image data, wherein the facial feature extraction model outputs a facial feature vector, executing a query of a facial features database based on the facial feature vector, wherein executing the query indicates that the facial feature vector is not present in the facial features database, and in response to determining that the facial feature vector is not present in the facial features database, providing an unrecognized person alert.
  • 7. The method of claim 5, wherein the plurality of person safety models comprises a fall detection model, further comprising: collecting a first set of videos of person falls; collecting a second set of videos of persons without falling; creating a training data set comprising the first set of videos and the second set of videos; and training the fall detection model using the training data set.
  • 8. The method of claim 5, wherein the plurality of person safety models comprises a handwashing detection model, further comprising: collecting a first set of videos with handwashing; collecting a second set of videos without handwashing; creating a training data set comprising the first set of videos and the second set of videos; and training the handwashing detection model using the training data set.
  • 9. The method of claim 5, further comprising: collecting a first set of videos with screaming;collecting a second set of videos without screaming;creating a training data set comprising the first set of videos and the second set of videos; andtraining the loud noise detection model using the training data set.
  • 10. A system comprising: a storage device configured to store first instructions and second instructions;a camera;a microphone;a hardware accelerator configured to execute the first instructions; anda hardware processor configured to execute the second instructions to: receive, from the camera, first image data;invoke, on the hardware accelerator, a person detection model based on the first image data, wherein the person detection model outputs first classification result;detect a person based on the first classification result;receive, from the camera, second image data; andin response to detecting the person, invoke, on the hardware accelerator, a fall detection model based on the second image data, wherein the fall detection model outputs a second classification result,detect a potential fall based on the second classification result, andin response to detecting the potential fall, provide an alert;receive, from the microphone, audio data; andin response to detecting the person, invoke, on the hardware accelerator, a loud noise detection model based on the audio data, wherein the loud noise detection model outputs a third classification result,detect a potential scream based on the third classification result,wherein invoking the loud noise detection model based on the audio data further comprises: generating spectrogram data from the audio data; andproviding the spectrogram data as input to the loud noise detection model.
  • 11. The system of claim 10, wherein the hardware processor is configured to execute additional instructions to: in response to detecting the potential scream, provide a second alert.
  • 12. The system of claim 10, wherein the hardware processor is configured to execute additional instructions to: in response to detecting the potential fall and the potential scream, provide an escalated alert.
  • 13. The system of claim 10, wherein the second image data comprises a plurality of images.
  • 14. A method comprising: receiving, from a camera, first image data; invoking, on a hardware accelerator, a person detection model based on the first image data, wherein the person detection model outputs first classification result; detecting a person based on the first classification result; receiving, from the camera, second image data; and in response to detecting the person, invoking, on the hardware accelerator, a plurality of person safety models based on the second image data, for each person safety model from the plurality of person safety models, receiving, from the hardware accelerator, a second classification result, detecting a potential safety issue based on a particular second classification result, and in response to detecting the potential safety issue, providing an alert; receiving, from a microphone, audio data; and in response to detecting the person, invoking, on the hardware accelerator, a loud noise detection model based on the audio data, wherein the loud noise detection model outputs a third classification result, and detecting a potential scream based on the third classification result, collecting a first set of videos with screaming; collecting a second set of videos without screaming; creating a training data set comprising the first set of videos and the second set of videos; and training the loud noise detection model using the training data set.
  • 15. The method of claim 14, further comprising: in response to detecting the person, invoking, on the hardware accelerator, a facial feature extraction model based on the second image data, wherein the facial feature extraction model outputs a facial feature vector, executing a query of a facial features database based on the facial feature vector, wherein executing the query indicates that the facial feature vector is not present in the facial features database, and in response to determining that the facial feature vector is not present in the facial features database, providing an unrecognized person alert.
  • 16. The method of claim 14, wherein the plurality of person safety models comprises a fall detection model, further comprising: collecting a first set of videos of person falls; collecting a second set of videos of persons without falling; creating a training data set comprising the first set of videos and the second set of videos; and training the fall detection model using the training data set.
  • 17. The method of claim 14, wherein the plurality of person safety models comprises a handwashing detection model, further comprising: collecting a first set of videos with handwashing; collecting a second set of videos without handwashing; creating a training data set comprising the first set of videos and the second set of videos; and training the handwashing detection model using the training data set.
  • 18. The method of claim 14, further comprising: in response to detecting the potential safety issue and the potential scream, providing an escalated alert.
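
The independent claims above (claims 1 and 10, and method claims 5 and 14) describe a cascaded flow: a person detection model gates the downstream safety models, and an escalated alert fires when both a potential fall and a potential scream are detected (claims 2, 12, and 18). The following is a minimal Python sketch of that control flow only; the names `invoke_on_accelerator`, `ClassificationResult`, and `monitor_step` are hypothetical, and the patent does not specify any particular accelerator runtime, model format, or decision threshold.

```python
# Hypothetical sketch of the cascaded monitoring step in claims 1, 2, 5, and 10.
# Model invocation is stubbed out; a real system would dispatch these calls to
# the hardware accelerator that executes the first instructions.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ClassificationResult:
    label: str
    score: float


def invoke_on_accelerator(model: Callable, data) -> ClassificationResult:
    """Placeholder for handing an inference request to the accelerator."""
    return model(data)


def monitor_step(first_image, second_image, audio,
                 person_model, fall_model, noise_model,
                 alert, threshold: float = 0.5) -> None:
    # First classification result: is a person present?
    first = invoke_on_accelerator(person_model, first_image)
    if first.label != "person" or first.score < threshold:
        return  # no person detected; skip the downstream safety models

    # Second classification result: did the person potentially fall?
    second = invoke_on_accelerator(fall_model, second_image)
    fall = second.label == "fall" and second.score >= threshold
    if fall:
        alert("potential fall detected")

    # Third classification result: was there a potential scream?
    third = invoke_on_accelerator(noise_model, audio)
    scream = third.label == "scream" and third.score >= threshold
    if scream:
        alert("potential scream detected")

    # Claims 2, 12, and 18: escalate when both signals fire.
    if fall and scream:
        alert("escalated: potential fall and potential scream")
```

The gating structure is the point of the sketch: the fall and loud noise models are only invoked on frames and audio captured after a person has been detected, which keeps accelerator load down when the scene is empty.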
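Claims 5 and 14 generalize the fall detection step to a plurality of person safety models, each returning its own second classification result. A sketch of that loop follows, again with illustrative names (`run_safety_models` and the `invoke` and `alert` callbacks) that do not come from the patent.

```python
# Hypothetical loop over the "plurality of person safety models" in claims 5 and 14.
# Each model is assumed to return a (label, score) pair when invoked on the
# accelerator; the model registry and alert callback are illustrative.

def run_safety_models(second_image, safety_models: dict, invoke, alert,
                      threshold: float = 0.5) -> list:
    """Invoke every safety model and alert on each detected potential safety issue."""
    issues = []
    for name, model in safety_models.items():
        label, score = invoke(model, second_image)  # a second classification result
        if label == name and score >= threshold:    # this particular result indicates the issue
            issues.append(name)
            alert(f"potential safety issue: {name}")
    return issues
```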
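Claims 3 and 10 recite generating spectrogram data from the audio data and providing it as input to the loud noise detection model. A small sketch of that preprocessing step using `scipy.signal.spectrogram` follows; the sample rate, window length, and log scaling are illustrative choices, not parameters taken from the claims.

```python
import numpy as np
from scipy.signal import spectrogram


def audio_to_spectrogram(audio: np.ndarray, sample_rate: int = 16_000) -> np.ndarray:
    """Convert a mono audio buffer into log-scaled spectrogram data.

    The window length and overlap below are illustrative defaults,
    not values specified by the claims.
    """
    _, _, sxx = spectrogram(audio, fs=sample_rate, nperseg=512, noverlap=256)
    return np.log1p(sxx)  # compress dynamic range before classification


if __name__ == "__main__":
    # One second of synthetic audio standing in for microphone data.
    fake_audio = np.random.randn(16_000).astype(np.float32)
    spec = audio_to_spectrogram(fake_audio)
    print(spec.shape)  # (frequency bins, time frames), ready for the model
```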
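Claims 6 and 15 add a facial recognition branch: a facial feature extraction model outputs a facial feature vector, a query of a facial features database is executed, and an unrecognized person alert is provided on a miss. The sketch below assumes the database is a simple in-memory collection of vectors compared by cosine similarity; the `FacialFeaturesDatabase` class, the threshold, and the similarity metric are all assumptions, since the claims do not name an embedding model or storage backend.

```python
import numpy as np


class FacialFeaturesDatabase:
    """Toy in-memory stand-in for the facial features database of claims 6 and 15."""

    def __init__(self, known_vectors, match_threshold: float = 0.8):
        self._known = list(known_vectors)
        self._threshold = match_threshold

    def contains(self, vector: np.ndarray) -> bool:
        """Return True if any enrolled vector is close enough to `vector`."""
        for known in self._known:
            cos = float(np.dot(vector, known) /
                        (np.linalg.norm(vector) * np.linalg.norm(known) + 1e-9))
            if cos >= self._threshold:
                return True
        return False


def check_person(feature_vector: np.ndarray,
                 db: FacialFeaturesDatabase,
                 alert) -> None:
    # Executing the query; a miss means the detected person is not enrolled.
    if not db.contains(feature_vector):
        alert("unrecognized person")
```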
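Claims 7 through 9 (and 16 and 17) share one training pattern: collect a first set of videos in which the event occurs, a second set in which it does not, combine them into a training data set, and train the corresponding model. The sketch below shows only that labeling step; the directory layout, the `.mp4` extension, and the `fit` callback are hypothetical, and the choice of training framework is left open by the claims.

```python
from pathlib import Path


def build_training_set(positive_dir: str, negative_dir: str) -> list:
    """Pair each video path with a binary label: 1 = event present, 0 = absent.

    For the fall detection model the positives are videos of person falls;
    for the handwashing detection model they are videos with handwashing;
    for the loud noise detection model they are videos with screaming.
    """
    positives = [(p, 1) for p in sorted(Path(positive_dir).glob("*.mp4"))]
    negatives = [(p, 0) for p in sorted(Path(negative_dir).glob("*.mp4"))]
    return positives + negatives


def train_model(training_set: list, fit) -> None:
    """`fit` is a stand-in for the chosen framework's training routine."""
    paths, labels = zip(*training_set)
    fit(list(paths), list(labels))
```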
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS

The present application claims benefit of U.S. Provisional Application No. 63/298,569 entitled “Intelligent Camera System” filed Jan. 11, 2022 and U.S. Provisional Application No. 63/299,168 entitled “Intelligent Camera System” filed Jan. 13, 2022, the entirety of each of which is hereby incorporated by reference. Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.

Related Publications (1)
Number Date Country
20230222887 A1 Jul 2023 US
Provisional Applications (2)
Number Date Country
63299168 Jan 2022 US
63298569 Jan 2022 US