This application is based upon and claims priority under 35 U.S.C. 119 from Taiwan Patent Application No. 109115409, filed on May 8, 2020, which is hereby incorporated herein by reference.
The present invention relates to a sleep analysis system, and more particularly to an image sleep analysis method and a system thereof.
Taiwan patent No. 1420424 proposes a baby sleep monitoring system and method for monitoring a baby's sleep status. The monitoring system has a cellphone with a camera and an accelerometer mounted on the baby's bed to detect whether the bed is shaking. If the bed is shaking, the cellphone takes a picture of the baby on the bed with its camera. The cellphone then determines whether the baby's eyes are open through image analysis. When the baby's eyes are open, the cellphone transmits an awake notification.
Taiwan patent No. 1642409 proposes a sleep analysis method. A body to be analyzed has to wear an electronic device having a three-axis accelerometer, such as a smartwatch. The three-axis accelerometer detects the body's motions during sleep, and the sleep analysis method determines the body's sleep quality by analyzing those motions.
Based on the foregoing description, existing sleep analysis devices require a detector, such as an accelerometer or a three-axis accelerometer, to monitor the sleep status of a baby or an adult, and they merely transmit a notification or an analysis result.
To overcome the shortcomings, the present invention provides an image sleep analysis method and system thereof to mitigate or to obviate the aforementioned problems.
An objective of the present invention is to provide an image sleep analysis method and a system thereof. To achieve the objective as mentioned above, the image sleep analysis method has the following steps:
(a) obtaining a plurality of visible-light images of a body to be monitored;
(b) comparing the visible-light images to determine a plurality of image differences among the visible-light images and determining a first position of each of the image differences;
(c) identifying a plurality of features of each of the visible-light images and determining a second position of each of the features;
(d) determining a motion intensity of each of the features according to the first positions of the image differences and the second positions of the features; and
(e) recording and analyzing a variation of the motion intensities.
With the foregoing description, in the image sleep analysis method of the present invention, real-time visible-light images of the body are obtained during the sleep duration. To increase the accuracy of determining the motion intensity of the body, the image differences are determined by comparing the continuously obtained visible-light images, the features of the visible-light images, such as the head, hands, or feet, are identified, and the variation of the motion intensities of the body is determined according to the motion intensity of each feature. Therefore, the present invention does not require the monitored body to wear any motion sensor. In addition, the present invention provides more detailed sleep information.
To achieve the objective as mentioned above, the image sleep analysis system has:
a visible-light sensor outputting a plurality of visible-light images of a body;
a processing unit electrically connected to the visible-light sensor to obtain the visible-light images during a sleep duration, identify a plurality of features, monitor a variation of motion intensities of the features, and analyze the variation of the motion intensities of the features to generate a sleep quality report;
a first communication module electrically connected to the processing unit; and
a display device linked to the first communication module to obtain and display the sleep quality report.
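The components listed above can be sketched as a toy software pipeline. This is a minimal illustration, assuming Python stand-ins for the hardware: the class names and the pre-recorded frames replacing a live sensor are illustrative assumptions, not the patent's implementation.

```python
class VisibleLightSensor:
    """Stand-in for the visible-light sensor; yields pre-recorded frames."""
    def __init__(self, frames):
        self.frames = frames

    def capture(self):
        yield from self.frames


class ProcessingUnit:
    """Stand-in for the processing unit: compares consecutive frames."""
    def __init__(self, sensor):
        self.sensor = sensor

    def analyze(self):
        frames = list(self.sensor.capture())
        # Toy motion metric: fraction of pixels that changed per frame pair.
        intensities = [
            sum(a != b for a, b in zip(prev, curr)) / len(curr)
            for prev, curr in zip(frames, frames[1:])
        ]
        return {"motion_intensities": intensities}


class DisplayDevice:
    """Stand-in for the display device linked over the communication module."""
    def show(self, report):
        return f"sleep report: {report}"


# Three 4-pixel "frames"; one pixel changes between the first two frames.
sensor = VisibleLightSensor(frames=[[0, 0, 0, 0], [0, 255, 0, 0], [0, 255, 0, 0]])
report = ProcessingUnit(sensor).analyze()
print(DisplayDevice().show(report))
```

The sensor-to-processing-unit-to-display flow mirrors the electrical connections listed above; the cloud server and communication modules are omitted for brevity.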
With the foregoing description, the image sleep analysis system is mounted on a bedside and the visible-light sensor is aimed at the bed to sense the visible-light images of the body on the bed. The processing unit obtains the visible-light images from the visible-light sensor and identifies the features, such as the head, hands, or feet, in the visible-light images to monitor and analyze the variation of the motion intensities of the body. Therefore, the present invention does not require the body to wear any motion sensor, and provides more detailed sleep information through accurate real-time monitoring of the body's motion. In addition, the sleep information is displayed directly on the display device.
Other objectives, advantages and novel features of the invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
With multiple embodiments and drawings thereof, the features of the present invention are described in detail as follows.
With reference to the accompanying drawings, the image sleep analysis system and its operation are described as follows.
At the same time, the processing unit 22 transmits the received visible-light images to the cloud server 10 through the first communication module 23. Since the camera 20 has logged into the cloud server 10 with its registered account, the visible-light images are stored in a cloud storage space corresponding to the account of the camera 20. Since the deep-learning module 11 has learned and stored a sleep posture learning model, the deep-learning module 11 of the cloud server 10 identifies the different features 50a, 50b, 50c and 50d of the visible-light images 40 and the positions thereof according to the sleep posture learning model, as shown in the drawings.
When the cloud server 10 completes the identification of the different features 50a, 50b, 50c and 50d and the positions thereof, the identified information is transmitted back to the processing unit 22 through the first communication module 23. The processing unit 22 therefore stores the marked sub-images and the positions thereof, as well as the features 50a, 50b, 50c and 50d and the positions thereof, as shown in the drawings.
To accurately determine the motion intensity of the body, after the processing unit 22 continuously transmits the motion intensity values of the features to the cloud server 10, the sleep analyzing module 12 of the cloud server 10 periodically reads the motion intensity values of the features and calculates standard deviations to obtain a single motion intensity value of each feature for each period. For example, suppose the motion intensity values of the head and the trunk are a2, the motion intensity value of the left hand is a1, and the motion intensity value of the right hand is 0. Different weights are given to the different features, for example, a weight of 60% for the head, 30% for the trunk and 10% for each hand. A movement of the body in one minute is then calculated by summing the products of the motion intensity values and the corresponding weights (a2*60% + a2*30% + a1*10% + 0*10%).
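The period calculation described above can be sketched as follows. The weights (head 60%, trunk 30%, each hand 10%) come from the example in the text; the sample values a2 = 2 and a1 = 1 and the use of the population standard deviation are illustrative assumptions.

```python
import statistics

def period_value(samples):
    """Collapse one period's motion intensity samples into a single value.
    The text mentions a standard deviation, so pstdev is used here."""
    return statistics.pstdev(samples)

def body_movement(intensities, weights):
    """Movement of the body: weighted sum of per-feature intensity values."""
    return sum(intensities[f] * weights[f] for f in weights)

# a2 = 2 for the head and trunk, a1 = 1 for the left hand, 0 for the right.
intensities = {"head": 2, "trunk": 2, "left_hand": 1, "right_hand": 0}
weights = {"head": 0.60, "trunk": 0.30, "left_hand": 0.10, "right_hand": 0.10}

movement = body_movement(intensities, weights)  # a2*60% + a2*30% + a1*10% + 0*10%
print(round(movement, 2))  # 1.9
```

A period with constant samples yields a standard deviation of zero, i.e. no motion variation within that period.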
In addition, the deep-learning module 11 of the cloud server 10 may previously learn and store a caregiver posture learning model to identify a feature of the caregiver's body from the visible-light images and to record the identifying time. If the camera 20 is used in a baby sleep analysis application, the deep-learning module 11 transmits the feature of the caregiver's body and the identifying time back to the processing unit 22. The processing unit 22 may further have an audio identifying module 222. The audio identifying module 222 has a crying learning model to identify a crying feature from the audio signal and record an identifying time.
Therefore, using the camera 20 and the cloud server 10 to monitor the sleeping body in real time during the sleep duration obtains a variation of the movements of the body (motion intensity variation), as shown in the drawings.
When the present invention is used in the baby application, the parents obtain the baby's real sleep information and better understand the baby's sleeping habits, so they can avoid attending to the baby during the sleeping duration and instead attend to the baby during the waking duration. Therefore, the parents can strengthen the baby's long-term sleep training and get more time to rest themselves.
Based on the foregoing image sleep analysis system, the image sleep analysis method of the present invention has the following steps (a) to (e), as shown in the drawings.
In the step (a), a plurality of visible-light images is obtained during a sleep duration S11.
In the step (b), the visible-light images are compared to identify image differences and determine positions of the image differences S12. In the present embodiment, the pixels of the previous visible-light image are compared with the pixels of the present visible-light image to determine whether each pixel has changed.
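The pixel comparison in step (b) can be sketched as follows, assuming the frames arrive as grayscale NumPy arrays; the 10-level change threshold is an illustrative choice, not a value from the text.

```python
import numpy as np

def changed_pixels(prev, curr, threshold=10):
    """Mask of pixels whose intensity changed by more than `threshold`
    between the previous and the present visible-light image."""
    return np.abs(curr.astype(int) - prev.astype(int)) > threshold

# Two 4x4 toy frames: exactly one pixel changes between them.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 200

mask = changed_pixels(prev, curr)
print(int(mask.sum()))             # 1 changed pixel
print(np.argwhere(mask).tolist())  # [[1, 1]] -- its position
```

The mask gives both the image differences and their positions, which later steps relate to the identified features.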
In the step (c), a plurality of features is identified from the visible-light images and positions of the features are determined S13. In the present embodiment, the features may include body parts such as the head, hands and trunk. An AI deep-learning technology with a sleep posture learning model is used to identify the features in each visible-light image and determine their positions.
In the step (d), a motion intensity of each feature is determined according to the positions of the image differences and the positions of the features S14. In the present embodiment, the motion intensity of each feature is determined by calculating overlapped areas between the image differences and the corresponding features: the larger the overlapped area between an image difference and the corresponding feature, the higher the motion intensity of the feature, and the smaller the overlapped area, the lower the motion intensity. In particular, each visible-light image is divided into a plurality of sub-images. Comparing the same sub-images of the previous visible-light image and the present visible-light image determines whether the amount of changed pixels in a sub-image exceeds a difference threshold; if so, the sub-image is marked. The overlapped area between each marked sub-image and the corresponding feature is then calculated. One or more area thresholds may be preset for comparison with each of the overlapped areas, and a motion intensity value is given to the corresponding feature according to the comparison result. To accurately determine the motion intensity of the body, the visible-light images and the motion intensity values of the features are obtained periodically. After a highest value, an average, or a standard deviation is calculated from the motion intensity values of the same feature in one period, a single motion intensity value of each feature for that period is obtained. A movement of the body in one period is then calculated by summing the products of the motion intensity values and the corresponding weights of the features.
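The sub-image marking and overlap grading in step (d) can be sketched as follows. The 2x2 sub-image size, the difference threshold of one pixel, the two area thresholds, and the "left hand" bounding box are all illustrative assumptions.

```python
import numpy as np

def mark_subimages(diff_mask, sub=2, diff_threshold=1):
    """Grid of booleans: True where a sub-image contains at least
    `diff_threshold` changed pixels, i.e. the sub-image is marked."""
    h, w = diff_mask.shape
    marks = np.zeros((h // sub, w // sub), dtype=bool)
    for i in range(h // sub):
        for j in range(w // sub):
            block = diff_mask[i * sub:(i + 1) * sub, j * sub:(j + 1) * sub]
            marks[i, j] = block.sum() >= diff_threshold
    return marks

def intensity_value(overlap, area_thresholds=(0.25, 0.75)):
    """Map an overlap ratio to a discrete motion intensity value
    by counting how many preset area thresholds it exceeds."""
    return sum(overlap >= t for t in area_thresholds)  # 0 (low) .. 2 (high)

# 4x4 change mask with image differences in the upper-left corner.
diff_mask = np.zeros((4, 4), dtype=bool)
diff_mask[0:2, 0:2] = True

marks = mark_subimages(diff_mask)
pixel_marks = marks.repeat(2, axis=0).repeat(2, axis=1)  # back to pixel grid

top, left, bottom, right = (0, 0, 4, 2)  # hypothetical "left hand" feature box
overlap = pixel_marks[top:bottom, left:right].mean()
print(float(overlap))                 # 0.5 of the feature overlaps marked area
print(int(intensity_value(overlap)))  # 1
```

Half of the feature box overlaps a marked sub-image, which clears the lower area threshold but not the upper one, so the feature receives the middle intensity value.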
In the step (e), a variation of the motion intensities is recorded and analyzed S15. In the present embodiment, the movements and the moving times thereof recorded during each day's sleep duration are obtained. If any one of the movements exceeds a light-sleep threshold, that movement is marked and the time of the marked movement is defined as a time stamp. According to the time stamps, a plurality of sleep zones of the sleep duration is defined and a time length of each sleep zone is calculated. According to the time lengths, a sleep quality of the body for one day is calculated. The sleep quality at least includes a total sleep time, an average sleep time, a longest sleep time, and waking times and the frequency thereof; for a baby sleep analysis application, related sleep information such as a crying time and a caregiving time may also be included.
Based on the foregoing description, in the image sleep analysis method of the present invention, real-time visible-light images of the body are obtained during the sleep duration. To increase the accuracy of determining the motion intensity of the body, the image differences are determined by comparing the continuously obtained visible-light images, the features of the visible-light images, such as the head, hands, or feet, are identified, and the variation of the motion intensities of the body is determined according to the motion intensity of each feature. Therefore, the present invention does not require the monitored body to wear any motion sensor. In addition, the present invention provides more detailed sleep information. When the present invention is used in the baby sleep analysis application, more details of the baby's sleep are obtained, so the parents can strengthen the baby's long-term sleep training and get more time to rest themselves.
Even though numerous characteristics and advantages of the present invention have been set forth in the foregoing description, together with the details of the structure and features of the invention, the disclosure is illustrative only. Changes may be made in the details, especially in matters of shape, size, and arrangement of parts within the principles of the invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
Number | Date | Country | Kind |
---|---|---|---|
109115409 | May 2020 | TW | national |