This application claims priority to Taiwan Application Serial Number 109125868, filed Jul. 30, 2020, which is herein incorporated by reference.
The present disclosure relates to a breathing detection method and a system thereof. More particularly, the present disclosure relates to a contactless breathing detection method and a system thereof.
Breathing detection is a very important part of many clinical examinations. Deviations in the breathing rate or the depth of breathing can serve as important indicators for judging whether the human body is healthy. Conventional breathing detection mainly relies on the following three methods: measuring the airflow of the nose and mouth, the variation of chest impedance, and the up-and-down movement of the chest cavity. However, each of the aforementioned three methods requires a contact sensing device connected to a host device by wires. Such a contact sensing device is not easy to use or wear and makes subjects feel uncomfortable, so that the breathing rate is seldom measured or paid attention to.
In view of this, how to develop a contactless breathing detection system that solves the problems of the above-mentioned breathing detection devices has become a goal of the public and the relevant industries.
According to an embodiment of a methodical aspect of the present disclosure, a contactless breathing detection method is for detecting a breathing rate of a subject and includes a photographing step, a capturing step, a calculating step and a converting step. The photographing step is performed to provide a camera to photograph the subject to generate a facial image. The capturing step is performed to provide a processor module to capture the facial image to generate a plurality of feature points. The calculating step is performed to drive the processor module to calculate the feature points according to an optical flow algorithm to generate a plurality of breathing signals. The converting step is performed to drive the processor module to convert the breathing signals to generate a plurality of power spectrums, respectively. The processor module generates an index value by calculating the power spectrums, and the breathing rate is extrapolated from the index value.
According to an embodiment of a structural aspect of the present disclosure, a contactless breathing detection system is for detecting a breathing rate of a subject and includes a camera and a processor module. The camera photographs the subject to generate a facial image. The processor module is electrically connected to the camera and receives the facial image. The processor module includes a capturing sub-module, a calculating sub-module and a converting sub-module. The capturing sub-module captures the facial image to generate a plurality of feature points. The calculating sub-module is connected to the capturing sub-module and receives the feature points. The calculating sub-module calculates the feature points according to an optical flow algorithm to generate a plurality of breathing signals. The converting sub-module is connected to the calculating sub-module and receives the breathing signals. The converting sub-module converts the breathing signals to generate a plurality of power spectrums, respectively. The converting sub-module generates an index value by calculating the power spectrums, and the breathing rate is extrapolated from the index value.
The present disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings as follows:
The embodiments will be described with reference to the drawings. For clarity, some practical details will be described below. However, it should be noted that the present disclosure should not be limited by the practical details, that is, in some embodiments, the practical details are unnecessary. In addition, for simplifying the drawings, some conventional structures and elements will be simply illustrated, and repeated elements may be represented by the same labels.
It will be understood that when an element (or device) is referred to as being “connected to” another element, it can be directly connected to the other element, or it can be indirectly connected to the other element, that is, intervening elements may be present. In contrast, when an element is referred to as being “directly connected to” another element, there are no intervening elements present. In addition, although the terms first, second, third, etc. are used herein to describe various elements or components, these elements or components should not be limited by these terms. Consequently, a first element or component discussed below could be termed a second element or component.
The camera 110 is for photographing the face of the subject in front view to generate a facial image 111. The processor module 120 is electrically connected to the camera 110 and receives the facial image 111. The processor module 120 includes a capturing sub-module 121, a calculating sub-module 122 and a converting sub-module 123. The capturing sub-module 121 captures the facial image 111 to generate a plurality of feature points 112. The calculating sub-module 122 is connected to the capturing sub-module 121 and receives the feature points 112. The calculating sub-module 122 calculates the feature points 112 according to an optical flow algorithm (e.g., the Lucas-Kanade method) to generate a plurality of breathing signals 113. The converting sub-module 123 is connected to the calculating sub-module 122 and receives the breathing signals 113. The converting sub-module 123 converts the breathing signals 113 to generate a plurality of power spectrums, respectively. The converting sub-module 123 generates an index value by calculating the power spectrums, and the breathing rate BR is extrapolated from the index value.
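What follows is a minimal sketch of this capture-and-tracking path, assuming OpenCV provides both the face detection and the pyramidal Lucas-Kanade optical flow. The way the seven feature points are chosen (corner detection inside a Haar-cascade face box), the camera index, and all variable names are illustrative assumptions rather than the disclosed implementation.

```python
# Hedged sketch of the capturing sub-module 121 and calculating sub-module 122:
# detect a face, pick seven trackable points inside it, and track them frame to
# frame with pyramidal Lucas-Kanade optical flow. Not the disclosed implementation.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # camera 110 (device index is an assumption)
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Capturing sub-module 121: seven feature points 112 inside the detected face
# (assumes at least one face is found in the first frame).
x, y, w, h = face_detector.detectMultiScale(prev_gray, 1.3, 5)[0]
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=7, qualityLevel=0.01,
                                 minDistance=10, mask=mask)

tracks = [points.reshape(-1, 2)]  # per-frame positions of the feature points
for _ in range(900):              # roughly 30 s at an assumed 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Calculating sub-module 122: Lucas-Kanade tracking between consecutive frames.
    points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    tracks.append(points.reshape(-1, 2))
    prev_gray = gray
cap.release()
```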
Therefore, the present disclosure tracks the feature points 112 of the facial image 111 by the optical flow algorithm and converts them into the power spectrums, and then the breathing rate BR is extrapolated from the index value associated with the maximum peak of the power spectrums, so that the contactless breathing detection system 100 can measure the breathing rate BR of the subject in a contactless way.
Therefore, the present disclosure captures the feature points 112 of the facial image 111 with the processor module 120 and tracks the variation of the feature points 112 with the optical flow algorithm. Finally, the present disclosure finds the index value 113d so as to estimate the breathing rate BR of the subject.
In detail, the contactless breathing detection method S100 can be divided into two stages: the first stage includes the image capture of the face and the capture of the feature points 112 (that is, the photographing step S110 and the capturing step S120); the second stage includes the calculation and the conversion of the breathing rate BR (that is, the calculating step S130 and the converting step S140).
In detail, in the tracking step S131, a total of seven facial feature points 112 are extracted, so that n=7 in formula (1). Using the tracking characteristics of the optical flow algorithm, the optical flow unit 1221 finds the variation (i.e., the displacement Di) of the seven feature points 112 between the previous frame and the next frame in the time sequence to obtain the mixed signal 112a. The mixed signal 112a is a mixture of different signals, including signals of body motion, the heart rate, and the breathing rate BR.
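Formula (1) is not reproduced above; a common choice, assumed in the sketch below, is to take the frame-to-frame displacement magnitude of each tracked point as its component of the mixed signal 112a. The sketch continues from the `tracks` list built in the previous sketch.

```python
# Hedged sketch of the tracking step S131: turn the tracked point positions into
# one mixed signal 112a per feature point (n = 7). Using the Euclidean
# frame-to-frame displacement as D_i is an assumption, since formula (1) is not
# reproduced here.
import numpy as np

positions = np.stack(tracks)                 # shape: (frames, 7, 2)
displacement = np.linalg.norm(np.diff(positions, axis=0), axis=2)
mixed_signals = displacement.T               # shape: (7, frames - 1), one row per point
```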
Subsequently, in the analyzing step S132, the analysis unit 1222 processes the mixed signal 112a to generate the breathing signals 113. Particularly, in order to further find the frequency band matching the breathing rate BR, the analysis unit 1222 separates the mixed signal 112a through independent component analysis (ICA) to obtain the seven separated breathing signals 113. In detail, because the human head (or face) contains many subtle movements, it is necessary to calculate the displacement Di to obtain the mixed signal 112a. Then, according to the principle of blind signal separation, the ICA is used for preliminary separation to decompose the independent signal sources hidden in the mixed signal 112a, so that the signals matching the breathing rate BR can be selected.
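A minimal sketch of this separation, assuming scikit-learn's FastICA as the blind-source-separation routine; the disclosure only names ICA, so the particular library and parameters are illustrative.

```python
# Hedged sketch of the analyzing step S132: separate the seven mixed signals 112a
# into seven candidate breathing signals 113 by independent component analysis.
from sklearn.decomposition import FastICA

ica = FastICA(n_components=7, random_state=0)
# FastICA expects samples as rows and channels as columns, hence the transposes.
breathing_signals = ica.fit_transform(mixed_signals.T).T   # shape: (7, frames - 1)
```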
Furthermore, each of the frequency domain signals 113a has a corresponding frequency. The filtering step S142 includes providing the filtering unit 1232 to retain, for each of the frequency domain signals 113a, the components having frequencies between 0.15 Hz and 0.35 Hz, thereby obtaining the frequency domain signals 113b. The filtering unit 1232 can be a Butterworth filter which extracts the section of interest from the frequency domain signals 113a. Since the frequency of breathing lies between 0.15 Hz and 0.35 Hz, the filtering unit 1232 filters out frequencies outside the range between 0.15 Hz and 0.35 Hz, and the remaining frequency domain signals 113b constitute the section of interest.
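The sketch below shows one way to realize the FFT of step S141 together with the 0.15 Hz to 0.35 Hz band selection of step S142, assuming a camera frame rate `fs` and a simple bin mask in the frequency domain; the disclosure names a Butterworth filter, for which `scipy.signal.butter` applied to the time-domain breathing signals would be the usual alternative.

```python
# Hedged sketch of steps S141 and S142: FFT each breathing signal 113 to obtain
# the frequency domain signals 113a, then keep only the 0.15-0.35 Hz breathing
# band to obtain the frequency domain signals 113b. The bin mask stands in for
# the Butterworth filter named in the text.
import numpy as np

fs = 30.0                                            # assumed camera frame rate in Hz
freqs = np.fft.rfftfreq(breathing_signals.shape[1], d=1.0 / fs)
spectra = np.fft.rfft(breathing_signals, axis=1)     # frequency domain signals 113a

band = (freqs >= 0.15) & (freqs <= 0.35)
filtered_spectra = np.where(band, spectra, 0.0)      # frequency domain signals 113b
```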
Moreover, the power converting step S143 includes processing the frequency domain signals 113b through the power converting unit 1233 to generate the power spectrums 113c, respectively. In detail, according to Fourier analysis, any physical signal can be decomposed into a discrete or continuous spectrum, and the total energy of a signal over a limited period of time is finite, so that the power spectrums 113c can be calculated based on this property. To calculate the power spectrums 113c, after the signal is subjected to the FFT, the square of the real part and the square of the imaginary part of each frequency domain signal 113b are added together to obtain the corresponding power spectrum 113c.
In more detail, the converting sub-module 123 can include the power converting unit 1233, an index calculating unit 1234 and a breathing rate calculating unit 1235. The power converting unit 1233 involves a power, a real part, a variable, and an imaginary part; the power is expressed as P_i, the real part is expressed as R_i, the variable is expressed as u, and the imaginary part is expressed as I_i, conforming to the following formula (2):

P_i(u) = R_i^2(u) + I_i^2(u), i = 1, 2, . . . , n   (2).
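Formula (2) maps directly onto the FFT output from the previous sketch; the line below is an assumed continuation of that sketch rather than the disclosed code.

```python
# Hedged sketch of the power converting step S143 / formula (2): the power
# spectrum 113c of each channel is the squared real part plus the squared
# imaginary part of its band-limited frequency domain signal 113b.
power_spectrums = filtered_spectra.real ** 2 + filtered_spectra.imag ** 2   # P_i(u)
```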
Subsequently, in the converting step S140, a maximum power and an average power are extrapolated from the power spectrums 113c by the index calculating unit 1234. The average power is subtracted from the maximum power, and the channel with the largest difference is selected as the index value 113d for calculating the breathing rate BR. The index value 113d is then imported into the breathing rate calculating unit 1235, and the breathing rate BR of the subject is obtained by using formula (4), conforming to the following formulas (3) and (4):
The index value 113d is expressed as I, the breathing rate BR is expressed as Breathing Rate, the maximum power is expressed as P_i^max, the average power is expressed as P_i^avg, argmax is a function that finds the value of its argument at which the expression reaches the maximum, the expression being formed from the aforementioned maximum power P_i^max and average power P_i^avg, and u is expressed as a variable.
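Formulas (3) and (4) themselves do not survive in the text above; the following is a hedged reconstruction consistent with the variable definitions just given, in which formula (3) selects the channel whose maximum-minus-average power is largest and formula (4) reads the breathing rate off the peak of that channel's power spectrum (the factor of 60 converting Hz to breaths per minute is an assumption).

```latex
% Hedged reconstruction of formulas (3) and (4), not the verbatim disclosure.
\begin{align}
I &= \operatorname*{arg\,max}_{i}\left(P_i^{\max} - P_i^{\mathrm{avg}}\right),
  \quad i = 1, 2, \ldots, n, \tag{3} \\
\text{Breathing Rate} &= 60 \cdot \operatorname*{arg\,max}_{u}\, P_I(u),
  \quad u \text{ in Hz}. \tag{4}
\end{align}
```

Under the same assumptions, the index calculating unit 1234 and the breathing rate calculating unit 1235 could be sketched as follows, continuing the earlier Python sketches.

```python
# Hedged sketch of the index calculating unit 1234 and breathing rate calculating
# unit 1235, under the reconstructed formulas (3) and (4) above.
import numpy as np

diff = power_spectrums.max(axis=1) - power_spectrums.mean(axis=1)   # P_i^max - P_i^avg
index = int(np.argmax(diff))                                        # index value 113d (I)
peak_freq = freqs[np.argmax(power_spectrums[index])]                # spectral peak, in Hz
breathing_rate = 60.0 * peak_freq                                   # breaths per minute (assumed conversion)
```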
In summary, the present disclosure has the following advantages. First, the breathing rate of the subject can be measured in a contactless way. Second, there is no need to use a contact-type wearable device, which reduces the cost of the detection device.
Although the present disclosure has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims.