This application also claims priority to Taiwan Patent Application No. 101146641 filed in the Taiwan Patent Office on Dec. 11, 2012, the entire content of which is incorporated herein by reference.
The present disclosure relates to measurement of physiological information, and more particularly, to a contact-free system and method for measuring physiological information such as heart rate and respiratory rate.
Heart rate, an index of cardiovascular disease, and respiratory rate, an index of sleep apnea, are important physiological information for the human body. Medical personnel often determine the physiological condition of patients according to the heart rate and the respiratory rate.
Conventional heart rate measuring equipment includes the pulse oximeter, the sphygmomanometer and the electrocardiograph. Conventional respiratory rate measuring equipment includes the spirometer, impedance pneumography and respiratory inductive plethysmography.
Measurement with such equipment is mostly contact-based, which often causes patients discomfort. Besides, the equipment is expensive and seldom used by ordinary people.
To prevent the discomfort caused by contact-based equipment, contact-free measuring equipment has therefore been developed.
A conventional contact-free measuring equipment utilizes a single camera and a single video region as the signal source, and operates correctly only under stable light sources and with motionless objects (patients).
Even when the patients are still, slight movement, changes in facial expression or an improper camera shooting direction may influence the measurement and reduce its correctness.
In an embodiment, the present disclosure provides a physiological information measurement system, including: at least one video capture unit, a calculating unit electrically coupled to the video capture unit, and a display unit electrically coupled to the calculating unit. The video capture unit captures at least one video data, which is provided to the calculating unit to obtain physiological information that is displayed on the display unit.
In another exemplary embodiment, the present disclosure provides a physiological information measurement method, including the steps of: providing a plurality of video data, wherein each video data contains sequential image data; extracting and synchronizing the video data to obtain synchronous features; transforming the features to independent components; detecting peak values of the independent components; selecting a representative component from the independent components to generate physiological information; and displaying the physiological information.
Further scope of applicability of the present application will become more apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating exemplary embodiments of the disclosure, are given by way of illustration only, since various changes and modifications within the spirit and scope of the disclosure will become apparent to those skilled in the art from this detailed description.
The present disclosure will become more fully understood from the detailed description given herein below and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present disclosure and wherein:
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
Please refer to
Please refer to
Please refer to
The calculating unit 11 is electrically coupled to the video capture unit 10. The calculating unit 11 includes a feature extraction module 110, a data synchronization module 111, an independent component analysis module 112, a peak detection module 113, a physiological information statistic module 114 and an information carrier module 115.
The feature extraction module 110 is electrically coupled to the video capture unit 10. The feature extraction module 110 receives video data from the video capture units 10 and generates a plurality of features.
Please refer to
The feature extraction module 110 utilizes a temporal differencing method to obtain motion pixels 40, 41 and 42 in the regions 324, 325 and 326 of
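The temporal differencing method mentioned above can be sketched as follows; this is a minimal illustration assuming grayscale frames, and the intensity threshold of 10 is a hypothetical choice rather than a value taken from the disclosure:

```python
import numpy as np

def motion_pixels(prev_frame, curr_frame, threshold=10):
    """Count pixels whose grayscale intensity changed by more than
    `threshold` between two consecutive frames (temporal differencing)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold          # True where motion is detected
    return int(mask.sum()), mask

# Example: a static background with a small moving patch
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:50, 60:70] = 200             # a 10x10 patch of changed pixels
count, mask = motion_pixels(prev, curr)
```

The returned count of changed pixels is what later serves as a per-frame feature for respiration measurement.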
The data synchronization module 111 receives the features from the feature extraction module 110 and synchronizes the features.
The independent component analysis module 112 receives the synchronous features and generates a plurality of independent components.
The peak detection module 113 receives the independent components and generates peak information and several serial peak signals.
The physiological information statistic module 114 receives and analyzes the serial peak signals to select one of the independent components. The physiological information statistic module 114 generates a physiological signal based on the selected independent component.
The information carrier module 115 is informatively connected to the feature extraction module 110, the data synchronization module 111, the independent component analysis module 112, the peak detection module 113 and the physiological information statistic module 114. The information carrier module 115 can be an inner or outer database, or a fixed or mobile memory.
Please refer to
Step 1 (S1), providing K groups of video data, wherein each group of video data includes sequential image data of physical physiological information regions. For example, the physical physiological information region can be a face region, a neck region, an arm region, a shoulder region, a chest-abdominal region, a left chest region or a right chest region.
The physiological information regions can be obtained by a face detecting process, a skin color detecting process or a manually figuring process. For example, the face detecting process can refer to M.-Z. Poh, D. J. McDuff, and R. W. Picard, “Advancements in noncontact, multiparameter physiological measurements using a webcam,” IEEE Trans. Biomedical Engineering, vol. 58, pp. 7-11, January 2011. The skin color detecting process can refer to K.-Z. Lee, P.-C. Hung, and L.-W. Tsai, “Contact-free heart rate measurement using a camera,” in Proc. Ninth Conference on Computer and Robot Vision, 2012, pp. 147-152. The manually figuring process can refer to K. S. Tan, R. Saatchi, H. Elphick, and D. Burke, “Real-time vision based respiration monitoring system,” in Proc. International Symposium on Communication Systems Networks and Digital Signal Processing, 2010, pp. 770-774.
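As a minimal sketch of a skin color detecting process, a classic rule-based per-pixel RGB test can be applied; the specific thresholds below are a well-known heuristic used only for illustration and are not taken from the cited works, which use more elaborate detectors:

```python
import numpy as np

def skin_mask(rgb):
    """Rule-based skin-color test on an HxWx3 uint8 RGB image.
    Returns a boolean mask that is True for skin-like pixels."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (np.abs(r - g) > 15))

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)   # skin-like pixel
img[0, 1] = (90, 200, 90)    # green pixel, not skin-like
mask = skin_mask(img)
```

The resulting mask delimits the region from which the per-frame features are computed.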
Referring to
For example, the K groups of video data are obtained by shooting a person with the video capture units 10. The K groups of video data are provided to the calculating unit 11.
The K groups of video data can also be obtained by shooting a person with the video capture units 10 built in a mobile device such as a mobile phone.
As described above, Ifk is the image data, where k=1, 2, 3, . . . , K. Ifk is the fth frame in the kth video. T(Ifk) is the time for capturing image Ifk. The unit of the time can be ms, μs, s, minutes or hours.
S2, the feature extraction module 110 extracts a feature containing physiological information from each image Ifk for subsequent analysis.
For example, if the physiological information is a heart rate, then the heart rate is obtained from the average color of the skin region combined with a weighted statistical method. The weighted statistical method can refer to K.-Z. Lee, P.-C. Hung, and L.-W. Tsai, “Contact-free heart rate measurement using a camera,” in Proc. Ninth Conference on Computer and Robot Vision, 2012, pp. 147-152. Therefore, when the heart rate is measured, the feature ufk of the fth frame in the kth video can be a weighting value for the color average.
If the physiological information is a respiratory rate, then the respiratory rate is obtained by measuring the movement of the chest. The movement is obtained by a temporal differencing method. The temporal differencing method can refer to K. S. Tan, R. Saatchi, H. Elphick, and D. Burke, “Real-time vision based respiration monitoring system,” in Proc. International Symposium on Communication Systems Networks and Digital Signal Processing, 2010, pp. 770-774. Therefore, when the respiratory rate is measured, the feature ufk of the fth frame in the kth video can be the amount of motion pixels.
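The heart rate feature per frame can be sketched as follows; a plain mean of the green channel over the skin region stands in here for the weighted statistical method of the cited work, so this is a simplification rather than the disclosed computation:

```python
import numpy as np

def color_feature(frame, mask):
    """Simplified per-frame heart-rate feature: the mean green-channel
    intensity over the skin region given by the boolean `mask`.
    (The cited work applies a weighted statistical method instead.)"""
    green = frame[..., 1].astype(np.float64)
    return float(green[mask].mean())

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 1] = 100                      # uniform green channel
mask = np.ones((4, 4), dtype=bool)       # whole frame treated as skin
u = color_feature(frame, mask)
```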
S3, since the frame rate of each video data is not static, the frame rate is defined as the number of frames captured in a specific period. For example, the video capture units 10 have a frame rate of N frames/sec, where N is a constant such as 10, 20, 30, 60, 120, 150, 180 or 300.
As described above, the time points of the video data are not synchronous due to the unstable frame rate of each video data. A common frequency H fps is provided for each video data to obtain a synchronous feature νtk at time t by an interpolation method, where T(νtk)=1000×t/H is the time index of the synchronous feature νtk, t=1, 2, 3, . . .
After synchronization, the synchronous feature νtk of each video data has the same time index T(νtk) at time t.
If the feature ufk of a known image Ifk has a time index T(Ifk), the synchronous feature νtk at time t can be obtained by an interpolation method. The interpolation method can be a linear interpolation method, a bilinear interpolation method or a bicubic interpolation method. These interpolation methods refer to J. G. Proakis and D. K. Manolakis, Digital Signal Processing (4th Edition): Prentice Hall, 2006.
For example, the synchronous features are obtained by a linear interpolation method, which is computed by the following equation:
νtk=ufk+(uf+1k−ufk)×(T(νtk)−T(Ifk))/(T(If+1k)−T(Ifk))
where T(Ifk)≦T(νtk)≦T(If+1k).
The synchronous features are obtained by the data synchronization module 111.
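The synchronization step can be sketched with NumPy's linear interpolation; the grid T(νtk)=1000×t/H in milliseconds follows the definition above, while the helper name and the example frame timings are illustrative:

```python
import numpy as np

def synchronize(times_ms, features, h):
    """Resample an unevenly-timed feature series onto the common grid
    T(v_t) = 1000*t/H milliseconds, t = 1, 2, ..., by linear interpolation."""
    t_end = int(times_ms[-1] * h // 1000)            # last full grid point
    grid = 1000.0 * np.arange(1, t_end + 1) / h      # common time indices
    return grid, np.interp(grid, times_ms, features)

# Two frames captured 100 ms apart, resampled to H = 20 fps
# (one synchronous feature every 50 ms)
times = np.array([0.0, 100.0])
feats = np.array([0.0, 10.0])
grid, sync = synchronize(times, feats, h=20)
```

Each video's feature series is resampled onto the same grid, so features from different cameras share identical time indices afterwards.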
Please refer to
Suppose that the three video data have unstable frame rates, so that only 129 frames, 150 frames and 140 frames are captured. In addition, since each video capture unit has different characteristics, the captured features are different. The three average values of the feature series are 138.43, 64.38 and 90.42, respectively.
A common frequency H fps is therefore defined and provided to each video data to obtain the synchronous feature νtk at time t by the interpolation method.
S4, in addition to the physiological information, the video data also implicitly include periodical variation of the environment light (e.g., a blinking lamp), periodical regulation of the camera (e.g., automatic light compensation) and other variations caused by movement or facial expression change. If multiple groups of video data are measured simultaneously, since each video data includes the same physiological information, an independent component analysis method is utilized to extract stable signals from the video data. The independent component analysis method utilizes a linear transformation process to transform the signals into a combination of non-Gaussian distributed signals which are statistically independent. The independent component analysis refers to A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. New York: John Wiley & Sons, 2001.
N is the number of features intended to be analyzed. The value of N depends on the common frequency H fps and a reasonable value of the measured physiological information. For example, if N for the heart rate is defined as 5H and N for the respiratory rate is defined as 30H, the heart rate and the respiratory rate use 5 seconds and 30 seconds of input features, respectively.
zt is a matrix of all features at time t:
zt=[νt1 νt2 . . . νtK]T
zt is transformed to a matrix of statistically non-Gaussian independent components. Assume zt=Axt, where A is a mixing matrix and xt is the matrix of source signals. Since A and xt are unknown, the independent components are estimated as
yt=Wzt
where W is a demixing matrix corresponding to matrix A. If a demixing matrix W satisfies W≈A−1, the independent component matrix yt≈xt, and ytk is the value of the kth independent component at time t.
The independent components are obtained by the independent component analysis module 112.
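The independent component analysis step can be sketched as follows; scikit-learn's FastICA is used here as one available ICA implementation (an assumption, since the disclosure cites only the textbook), with two synthetic mixed signals standing in for the synchronized video features:

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumed available

# Two synthetic "videos" observe mixtures of a pulse-like sine and a
# sawtooth disturbance; ICA recovers the statistically independent sources.
t = np.linspace(0, 10, 2000)
s1 = np.sin(2 * np.pi * 1.2 * t)             # pulse-like source
s2 = 2 * (t % 1.0) - 1.0                     # sawtooth disturbance
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.4, 1.0]])       # unknown mixing matrix
Z = S @ A.T                                   # observed feature matrix z_t

ica = FastICA(n_components=2, random_state=0)
Y = ica.fit_transform(Z)                      # independent components y_t
```

The recovered components match the original sources up to sign and scale, which is sufficient here because only the peak periodicity of a component is used afterwards.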
Referring to
In step S5, the peak of the independent component yt is detected to obtain the signal period.
In the peak detection step, noise in the signals is filtered out by a low-pass filter or a median filter. Afterwards, local extreme values are searched to determine the peak locations. The signals here are the described independent components. The peak detection method can refer to J. G. Proakis and D. K. Manolakis, Digital Signal Processing (4th Edition): Prentice Hall, 2006.
Referring to
In step S8, low frequency signals of each independent component are filtered out by a filter to obtain a denoised signal matrix ot, where otk is the value of the kth group of denoised signal at time t.
In step S9, each denoised signal otk is given a corresponding signal direction Dtk which can be up, down or none.
Dtk is given an initial value which is none, i.e. Dtk=NONE.
When otk−ot−1k>0, the signal direction is up, and when otk−ot−1k<0, the signal direction is down. The signal direction Dtk is thereby determined.
In step S10, determine whether the signal direction changes from up to down at the current time t. If the kth group of the denoised signals has a down direction at time t and an up direction at time t−1, i.e. Dtk=DOWN and Dt−1k=UP, then a time point pik is obtained (S11), where pik is the time point of the ith peak of the kth group of the denoised signals otk, i=1, 2, 3, . . . , nk, and nk is the peak number of the kth group of the denoised signals.
In step S12, if the time t is not the point where the signal direction changes from UP to DOWN or a new peak is obtained, then determine whether the signal is the last one of the signal series. If the signal is the last one of the signal series, the peak detection ends (S13); if the signal is not the last one, then return to step S9 to determine the signal direction of next time point.
The peak detection is performed by the peak detection module 113.
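Steps S8 to S13 can be sketched as follows; the moving-average filter and the kernel size are illustrative choices for the denoising filter, and the test signal is synthetic:

```python
import numpy as np

def detect_peaks(signal, kernel=5):
    """Direction-change peak detection (steps S8-S13): denoise the signal
    with a moving-average filter, then report every index t where the
    direction flips from UP (o_t - o_{t-1} > 0) to DOWN (o_{t+1} - o_t < 0)."""
    pad = kernel // 2
    k = np.ones(kernel) / kernel
    # Edge-padded moving average keeps the output the same length
    o = np.convolve(np.pad(signal, pad, mode="edge"), k, mode="valid")
    d = np.sign(np.diff(o))                 # +1 = UP, -1 = DOWN, 0 = NONE
    return [t for t in range(1, len(d)) if d[t] < 0 and d[t - 1] > 0]

t = np.arange(200)
peaks = detect_peaks(np.sin(2 * np.pi * t / 47.0))   # period of 47 samples
```

The list of peak time points p_i is exactly the input the PPI analysis of step S6 operates on.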
In step S6, the peak-to-peak interval (PPI) between two adjacent peaks is calculated and analyzed to select a stable independent component as the representative component.
The qjk represents the jth PPI of the kth group of the independent components, where j=1, 2, 3, . . . , nk−1. The value of qjk is obtained by the following equation:
qjk=pj+1k−pjk
The Sk represents the variance of the PPI of the kth group of the independent components. The variance of the PPI is calculated by the following equation:
Sk=(1/(nk−1))Σj=1nk−1(qjk−q̄k)2
The average q̄k of the PPI of the kth group of the independent components is obtained by the following equation:
q̄k=(1/(nk−1))Σj=1nk−1qjk
The independent component with the minimal variance (the most stable one) is selected as the representative component. The average PPI q̄k of the representative component is then converted to the physiological information, for example a heart rate or respiratory rate.
The physiological information is obtained by the physiological information statistic module 114.
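Step S6 can be sketched as follows; the conversion of the average PPI to a per-minute rate assumes peak time points measured in samples at H fps, which is an interpretation for illustration rather than a formula stated in the disclosure:

```python
import numpy as np

def representative_rate(peak_lists, h):
    """Compute the peak-to-peak intervals (PPI) of each component, select
    the component whose PPI variance is smallest (the most stable one),
    and convert its average PPI (in samples at H fps) to a per-minute rate."""
    best_var, best_mean = None, None
    for peaks in peak_lists:
        ppi = np.diff(peaks)                 # q_j = p_{j+1} - p_j
        var, mean = ppi.var(ddof=1), ppi.mean()
        if best_var is None or var < best_var:
            best_var, best_mean = var, mean
    return 60.0 * h / best_mean              # e.g. beats per minute

# Component 1 is stable (every PPI is 20 samples); component 2 is jittery.
stable = [0, 20, 40, 60, 80]
jitter = [0, 14, 40, 55, 80]
rate = representative_rate([stable, jitter], h=20)
```

With H=20 fps and a constant PPI of 20 samples, the stable component is selected and the computed rate is 60 per minute.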
In step S7, the physiological information obtained in step S6 is displayed on the display unit 12.
Information or data obtained by the feature extraction module 110, the data synchronization module 111, the independent component analysis module 112, the peak detection module 113 and the physiological information statistic module 114 can be saved in the information carrier module 115 or loaded from the information carrier module 115.
In the present disclosure, several video capture units are utilized to capture several video data. The video capture units can be various kinds of cameras, or the video data can be received from the Internet.
In the present disclosure, measurement of the physiological information is automatic and contact-free, which reduces patient discomfort. Besides, the influence caused by unstable signals is also reduced.
With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the disclosure, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
101146641 | Dec 2012 | TW | national |