The disclosure relates to the technical field of electrophysiological signal processing, in particular, to a blood pressure prediction method and device using multiple data sources.
Photoplethysmography (PPG) signals are signals that record changes in light intensity, obtained by measuring the intensity of light from a specific light source with a light sensor. When the heart beats, the blood flow per unit area in the blood vessels changes periodically, the blood volume changes correspondingly, and the PPG signal, which reflects the light absorption capacity of the blood, also changes periodically. One cardiac cycle comprises two time periods: a systole period and a diastole period. In the systole period, the heart acts on blood throughout the body so that the pressure and blood flow volume in the blood vessels change continuously and periodically, and at this moment the blood in the blood vessels absorbs the most light. In the diastole period, the pressure applied to the blood vessels is relatively low; the blood pushed to the whole body in the preceding systole period cyclically impacts the heart valves, reflecting and refracting light to some extent, so less light energy is absorbed by the blood in the blood vessels during the diastole period. Thus, blood pressure can be predicted by analyzing the PPG signal waveform, which reflects the light energy absorbed by blood in the blood vessels.
In actual application, it is found that the PPG signal used for blood pressure prediction may be acquired in different ways. Specifically, the PPG signal may be acquired directly through a PPG signal acquisition device, or acquired indirectly by recording a video of the skin surface of a test subject. If the PPG signal is acquired directly, it may be distorted under the influence of factors such as the sensitivity of the sensor, the physiological status of the test subject, and signal interference in the environment. If the PPG signal is extracted from a video by normalized transformation of the red and green light channel data, it may also be distorted by factors such as the light intensity of the photographing environment. A blood pressure prediction result obtained from a distorted PPG signal will deviate drastically from the actual blood pressure and may even be invalid.
The objective of the disclosure is to overcome the defects of the prior art by providing a blood pressure prediction method and device using multiple data sources. According to the blood pressure prediction method and device, two signal filtering and shaping methods are provided for directly acquired PPG signals, a video quality detection and normalized signal conversion method is provided for indirectly generated PPG signals, and a uniform standard PPG data sequence is finally generated for blood pressure prediction; and the embodiments of the disclosure provide two optional convolutional neural network (CNN) models for blood pressure prediction. By adoption of the method and device provided by the embodiments of the disclosure, the capacity of an application to process various PPG signal data sources and to manage various blood pressure prediction models is improved, and the compatibility of the application with various data sources for blood pressure prediction is improved.
To fulfill the above objective, in a first aspect, the embodiments of the disclosure provide a blood pressure prediction method using multiple data sources, comprising:
Acquiring a data source identifier and original data from an upper computer, wherein the data source identifier is one of a first-class PPG original signal identifier, a second-class PPG original signal identifier and a third-class PPG video identifier; and the original data is one of a first-class PPG original signal, a second-class PPG original signal and third-class PPG video data, and corresponds to the data source identifier;
Preprocessing the original data according to the data source identifier; when the data source identifier is the first-class PPG original signal identifier, performing normalized filtering on the first-class PPG original signal to generate a standard PPG data sequence; when the data source identifier is the second-class PPG original signal identifier, performing baseline drift removal and normalized filtering on the second-class PPG original signal to generate the standard PPG data sequence; and when the data source identifier is the third-class PPG video identifier, performing video quality detection and normalized signal conversion on the third-class PPG video data to generate the standard PPG data sequence;
Acquiring a CNN model identifier, wherein the CNN model identifier is a first-class CNN identifier or a second-class CNN identifier; and
Selecting a corresponding CNN model to perform blood pressure prediction on the standard PPG data sequence according to the CNN model identifier; when the CNN model identifier is the first-class CNN identifier, selecting a first-class CNN model to perform blood pressure prediction on the standard PPG data sequence; or when the CNN model identifier is the second-class CNN identifier, selecting a second-class CNN model to perform wavelet transform-based blood pressure prediction on the standard PPG data sequence.
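The dispatch on the data source identifier described above can be sketched in code. The identifier constants and the three placeholder preprocessors below are hypothetical names introduced only for illustration; the disclosure does not fix concrete values or function names:

```python
# Hypothetical identifier constants; the disclosure does not fix their values.
FIRST_CLASS_PPG, SECOND_CLASS_PPG, THIRD_CLASS_VIDEO = 1, 2, 3

# Placeholder preprocessors standing in for Steps 21-23 (sketches only).
def normalized_filter(data): return ("standard-ppg", data)
def remove_drift_and_normalize(data): return ("standard-ppg", data)
def video_to_ppg(data): return ("standard-ppg", data)

def preprocess(source_id, original_data):
    """Route the original data to the preprocessor named by its identifier."""
    if source_id == FIRST_CLASS_PPG:
        return normalized_filter(original_data)           # normalized filtering
    if source_id == SECOND_CLASS_PPG:
        return remove_drift_and_normalize(original_data)  # drift removal + filtering
    if source_id == THIRD_CLASS_VIDEO:
        return video_to_ppg(original_data)                # quality check + conversion
    raise ValueError("unknown data source identifier")
```

Whichever branch runs, the output is the same uniform standard PPG data sequence, which is what lets a single CNN stage serve all three data sources.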
Preferably, when the data source identifier is the first-class PPG original signal identifier, performing normalized filtering on the first-class PPG original signal to generate a standard PPG data sequence, specifically comprises:
When the data source identifier is the first-class PPG original signal identifier, performing data sampling on the first-class PPG original signal according to a preset first-class signal sampling threshold to generate a first-class PPG sampling data sequence (X1, X2 . . . Xi . . . XM), wherein the first-class PPG sampling data sequence (X1, X2 . . . Xi . . . XM) comprises M first-class PPG sampling data Xi, M is an integer, and i ranges from 1 to M;
Performing normalized filtering on the first-class PPG sampling data sequence (X1, X2 . . . Xi . . . XM) to generate a first process sequence (Y1, Y2 . . . Yi . . . YM); when i is 1, setting Yi=Xi; or, when i is greater than 1, setting Yi according to a formula
wherein the first process sequence (Y1, Y2 . . . Yi . . . YM) comprises M first process data Yi, a and b are preset first-class filtering constants, and c is a gain coefficient of the first-class PPG original signal; and
Setting the standard PPG data sequence as the first process sequence (Y1, Y2 . . . Yi . . . YM).
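The recursive formula for Yi is not reproduced in the text above. Purely as an illustration of the structure described (Y1 = X1, then each Yi computed from the current sample and the previous output using constants a, b and gain c), a first-order recursive filter might be sketched as follows; the recursion shown is an assumption, not the disclosure's formula:

```python
import numpy as np

def first_class_filter(x, a=0.5, b=0.5, c=1.0):
    """Illustrative first-order recursive filter. The disclosure names
    filtering constants a, b and a gain coefficient c, but the exact
    recursion is not given; this blend of the current sample with the
    previous output is only a plausible stand-in."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]                        # when i is 1, Yi = Xi
    for i in range(1, len(y)):
        # Assumed recursion: weighted current sample plus weighted
        # previous output, scaled by the signal gain c.
        y[i] = (a * x[i] + b * y[i - 1]) / c
    return y
```

With a + b = c the filter passes a constant signal unchanged, which is a common sanity check for this kind of smoothing recursion.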
Preferably, when the data source identifier is the second-class PPG original signal identifier, performing baseline drift removal and normalized filtering on the second-class PPG original signal to generate the standard PPG data sequence, comprises:
When the data source identifier is the second-class PPG original signal identifier, performing data sampling on the second-class PPG original signal according to a preset second-class signal sampling threshold to generate a second-class PPG sampling data sequence (S1, S2 . . . Sj . . . SN), wherein the second-class PPG sampling data sequence (S1, S2 . . . Sj . . . SN) comprises N second-class PPG sampling data Sj, N is an integer, and j ranges from 1 to N;
Performing baseline drift removal and filtering on the second-class PPG sampling data sequence (S1, S2 . . . Sj . . . SN) to generate a second process sequence (T1, T2 . . . Tj . . . TN); when j is 1, setting Tj=Sj; or, when j is greater than 1, setting Tj according to a formula Tj=e1×Sj+e2×Sj-1−e3×Tj-1, wherein the second process sequence (T1, T2 . . . Tj . . . TN) comprises N second process data Tj, and e1, e2, and e3 are all preset high-pass filtering coefficients;
Extracting a maximum value from the second process sequence (T1, T2 . . . Tj . . . TN) to generate a maximum reference value max, and extracting a minimum value from the second process sequence (T1, T2 . . . Tj . . . TN) to generate a minimum reference value min;
Performing normalized filtering on the second process sequence (T1, T2 . . . Tj . . . TN) to generate a third process sequence (P1, P2 . . . Pj . . . PN) specifically by setting Pj according to a formula
wherein the third process sequence (P1, P2 . . . Pj . . . PN) comprises N third process data Pj; and
Setting the standard PPG data sequence as the third process sequence (P1, P2 . . . Pj . . . PN).
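The second-class pipeline above can be sketched end to end. The drift-removal recursion Tj = e1×Sj + e2×Sj-1 − e3×Tj-1 is taken directly from the text; the coefficient values are illustrative, and the normalization formula (not reproduced above) is assumed to be the usual min-max form, consistent with the max/min reference values extracted in the preceding step:

```python
import numpy as np

def second_class_preprocess(s, e1=1.0, e2=-1.0, e3=-0.95):
    """Baseline drift removal followed by normalized filtering.
    e1, e2, e3 are preset high-pass coefficients (values here are
    illustrative); the final min-max normalization is an assumption."""
    s = np.asarray(s, dtype=float)
    t = np.empty_like(s)
    t[0] = s[0]                                    # when j is 1, Tj = Sj
    for j in range(1, len(s)):
        t[j] = e1 * s[j] + e2 * s[j - 1] - e3 * t[j - 1]
    mx, mn = t.max(), t.min()                      # max/min reference values
    return (t - mn) / (mx - mn)                    # assumed normalization for Pj
```

After this step the sequence lies in [0, 1] regardless of the original signal's baseline drift or amplitude, matching the role of the standard PPG data sequence.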
Preferably, when the data source identifier is the third-class PPG video identifier, performing video quality detection and normalized signal conversion on the third-class PPG video data to generate the standard PPG data sequence, specifically comprises:
When the data source identifier is the third-class PPG video identifier, performing video data frame image extraction on the third-class PPG video data to generate a third-class PPG video frame image sequence, wherein the third-class PPG video frame image sequence comprises multiple third-class PPG video frame images;
Performing one-dimensional red light signal extraction on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset red light pixel threshold range to generate a first red light digital signal, and performing one-dimensional green light signal extraction on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset green light pixel threshold range to generate a first green light digital signal;
According to a preset band-pass filtering frequency threshold range, performing band-pass filtering preprocessing on the first red light digital signal to generate a second red light digital signal, and performing band-pass filtering preprocessing on the first green light digital signal to generate a second green light digital signal;
Performing maximum frequency difference determination on the second red light digital signal and the second green light digital signal to generate a first determination result;
When the first determination result is an up-to-standard signal identifier, performing signal-to-noise ratio determination on the second red light digital signal and the second green light digital signal to generate a second determination result; and
When the second determination result is the up-to-standard signal identifier, performing normalized PPG signal data sequence generation on the second red light digital signal and the second green light digital signal to generate the standard PPG data sequence.
Further, performing one-dimensional red light signal extraction on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset red light pixel threshold range to generate a first red light digital signal, and performing one-dimensional green light signal extraction on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset green light pixel threshold range to generate a first green light digital signal, specifically comprise:
Step 51, initializing the first red light digital signal to be null, initializing the first green light digital signal to be null, initializing a first index to 1, and initializing a first total number to a total number of the third-class PPG video frame images in the third-class PPG video frame image sequence;
Step 52, setting a first-index frame image as the third-class PPG video frame image, corresponding to the first index, in the third-class PPG video frame image sequence;
Step 53, collecting all pixels, meeting the red light pixel threshold range, in the first-index frame image to generate a red pixel set, calculating a total number of the pixels in the red pixel set to generate a red pixel total number, calculating the sum of pixel values of all the pixels in the red pixel set to generate a red pixel value sum, and generating first-index frame red light channel data according to a quotient obtained by dividing the red pixel value sum by the red pixel total number; and adding signal points into the first red light digital signal using the first-index frame red light channel data as signal point data;
Step 54, collecting all pixels, meeting the green light pixel threshold range, in the first-index frame image to generate a green pixel set, calculating a total number of the pixels in the green pixel set to generate a green pixel total number, calculating the sum of pixel values of all the pixels in the green pixel set to generate a green pixel value sum, and generating first-index frame green light channel data according to a quotient obtained by dividing the green pixel value sum by the green pixel total number; and adding signal points into the first green light digital signal using the first-index frame green light channel data as signal point data;
Step 55, increasing the first index by 1;
Step 56, determining whether the first index is greater than the first total number; if the first index is less than or equal to the first total number, performing Step 52; or, if the first index is greater than the first total number, performing Step 57; and
Step 57, transferring the first red light digital signal to an upper processing process as a one-dimensional red light signal extraction result, and transferring the first green light digital signal to an upper processing process as a one-dimensional green light signal extraction result.
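Steps 51 to 57 can be sketched as a single loop over the frame sequence. The channel layout (channel 0 = red, channel 1 = green) is an assumption about the frame arrays, not something the disclosure specifies:

```python
import numpy as np

def extract_channel_signal(frames, lo, hi, channel):
    """Per-frame channel extraction (Steps 51-57 for one colour):
    for each frame, average the channel values of the pixels whose
    value falls inside the threshold range [lo, hi], and append that
    mean to the digital signal as one signal point.
    `frames` is a sequence of HxWx3 arrays; the channel index layout
    is assumed."""
    signal = []                                   # Step 51: start empty
    for frame in frames:                          # Steps 52/55/56: iterate frames
        ch = frame[:, :, channel].astype(float)
        mask = (ch >= lo) & (ch <= hi)            # pixel threshold range
        if mask.any():
            # channel data = pixel value sum / pixel total number
            signal.append(ch[mask].sum() / mask.sum())
    return np.asarray(signal)                     # Step 57: extraction result
```

Calling it once with the red threshold range and once with the green range yields the first red and first green light digital signals, one signal point per frame.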
Further, performing maximum frequency difference determination on the second red light digital signal and the second green light digital signal to generate a first determination result, specifically comprises:
Performing digital signal time domain-frequency domain conversion on the second red light digital signal through discrete Fourier transform to generate a red light frequency domain signal, and performing digital signal time domain-frequency domain conversion on the second green light digital signal through discrete Fourier transform to generate a green light frequency domain signal;
Extracting a maximum-energy frequency from the red light frequency domain signal to generate a maximum red light frequency, and extracting a maximum-energy frequency from the green light frequency domain signal to generate a maximum green light frequency;
Calculating a frequency difference between the maximum red light frequency and the maximum green light frequency to generate a maximum red-green frequency difference; and
When the maximum red-green frequency difference does not exceed a preset maximum frequency difference threshold range, setting the first determination result as the up-to-standard signal identifier.
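The maximum frequency difference determination can be sketched with a discrete Fourier transform per channel. The 0.5 Hz threshold below is an illustrative value, not one fixed by the disclosure:

```python
import numpy as np

def max_frequency_difference(red, green, fs, max_diff=0.5):
    """Transform both signals to the frequency domain, take the
    maximum-energy frequency of each, and compare the difference
    against a threshold. Returns True for an up-to-standard signal."""
    def peak_freq(x):
        x = np.asarray(x, dtype=float) - np.mean(x)     # drop the DC bin
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]
    diff = abs(peak_freq(red) - peak_freq(green))       # red-green difference
    return diff <= max_diff
```

Since both channels observe the same pulse, their dominant frequencies should agree; a large disagreement indicates a corrupted recording and fails the quality check.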
Further, when the first determination result is an up-to-standard signal identifier, performing signal-to-noise ratio determination on the second red light digital signal and the second green light digital signal to generate a second determination result, specifically comprises:
When the first determination result is the up-to-standard signal identifier, according to a preset band-stop filtering frequency threshold range, removing valid signal points, meeting the band-stop filtering frequency threshold range, from the second red light digital signal through multi-order Butterworth band-stop filtering to generate a red light noise signal, and removing valid signal points, meeting the band-stop filtering frequency threshold range, from the second green light digital signal through multi-order Butterworth band-stop filtering to generate a green light noise signal;
Calculating signal energy of the second red light digital signal to generate red light signal energy, calculating signal energy of the red light noise signal to generate red light noise energy, generating valid red light signal energy according to a difference between the red light signal energy and the red light noise energy, and generating a red light signal-to-noise ratio according to a ratio of the valid red light signal energy to the red light noise energy;
Calculating signal energy of the second green light digital signal to generate green light signal energy, calculating signal energy of the green light noise signal to generate green light noise energy, generating valid green light signal energy according to a difference between the green light signal energy and the green light noise energy, and generating a green light signal-to-noise ratio according to a ratio of the valid green light signal energy to the green light noise energy; and
When any one of the red light signal-to-noise ratio and the green light signal-to-noise ratio is greater than or equal to a preset signal-to-noise ratio threshold, setting the second determination result as the up-to-standard signal identifier.
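The signal-to-noise determination for one channel can be sketched as follows. The disclosure removes the valid band with multi-order Butterworth band-stop filtering; this sketch substitutes a simple FFT mask for that filter, and the band edges and SNR threshold are illustrative values:

```python
import numpy as np

def snr_ok(signal, fs, stop_lo=0.7, stop_hi=3.0, snr_threshold=2.0):
    """Zero out the valid (heart-rate) band to obtain the noise signal,
    then compare valid energy / noise energy against a threshold.
    An FFT mask stands in for the Butterworth band-stop filter of the
    disclosure; band edges and threshold are illustrative."""
    x = np.asarray(signal, dtype=float)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    stop = (freqs >= stop_lo) & (freqs <= stop_hi)      # band-stop range
    noise = np.fft.irfft(np.where(stop, 0.0, spectrum), n=len(x))
    total_energy = np.sum(x ** 2)
    noise_energy = np.sum(noise ** 2)
    valid_energy = total_energy - noise_energy          # energy in the stop band
    return valid_energy / noise_energy >= snr_threshold
```

Per the determination rule above, the second determination passes when either the red or the green channel satisfies this check.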
Further, when the second determination result is the up-to-standard signal identifier, performing normalized PPG signal data sequence generation on the second red light digital signal and the second green light digital signal to generate the standard PPG data sequence, specifically comprises:
When the second determination result is the up-to-standard signal identifier, performing signal data normalization processing on the second red light digital signal and the second green light digital signal, respectively, to generate a normalized red light signal and a normalized green light signal; setting a red light data sequence of the standard PPG data sequence as the normalized red light signal, and setting a green light data sequence of the standard PPG data sequence as the normalized green light signal, wherein the standard PPG data sequence comprises the red light data sequence and the green light data sequence.
Preferably, the first-class CNN model comprises multiple CNN network layers and a fully connected layer, and each CNN network layer comprises a convolutional layer and a pooling layer;
The second-class CNN model comprises a two-dimensional convolutional layer, a maximum pooling layer, a batch normalization layer, an activation layer, an add layer, a global average pooling layer, a dropout layer and a fully connected layer.
Preferably, when the CNN model identifier is the first-class CNN identifier, selecting a first-class CNN model to perform blood pressure prediction on the standard PPG data sequence, specifically comprises:
When the CNN model identifier is the first-class CNN identifier, performing first-class CNN model input data conversion on the standard PPG data sequence according to a preset first-class CNN input width threshold to generate an input data four-dimensional tensor;
According to a preset convolutional layer number threshold, performing multilayer convolution and pooling calculation on the input data four-dimensional tensor by way of the CNN network layers of the first-class CNN model to generate a feature data four-dimensional tensor;
Performing two-dimensional matrix construction of fully connected layer input data according to the feature data four-dimensional tensor to generate an input data two-dimensional matrix, and performing feature data regression calculation on the input data two-dimensional matrix by way of the fully connected layer of the first-class CNN model to generate a blood pressure regression data two-dimensional matrix;
Acquiring a preset prediction mode identifier, wherein the prediction mode identifier is a mean prediction identifier or a dynamic prediction identifier;
When the prediction mode identifier is the mean prediction identifier, performing mean blood pressure calculation on the blood pressure regression data two-dimensional matrix to generate a mean blood pressure prediction data pair, wherein the mean blood pressure prediction data pair comprises mean systolic pressure prediction data and mean diastolic pressure prediction data; or
When the prediction mode identifier is the dynamic prediction identifier, performing dynamic blood pressure data extraction on the blood pressure regression data two-dimensional matrix to generate a dynamic blood pressure prediction one-dimensional data sequence.
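The two prediction modes can be sketched on the regression output. The layout assumed here, an R x 2 matrix holding one (systolic, diastolic) pair per input window, and the mode identifier values are illustrative assumptions, not fixed by the disclosure:

```python
import numpy as np

MEAN_PREDICTION, DYNAMIC_PREDICTION = "mean", "dynamic"   # hypothetical ids

def postprocess(regression_matrix, mode):
    """Mean mode collapses the R x 2 regression matrix (assumed layout:
    one systolic/diastolic pair per row) into a single prediction pair;
    dynamic mode returns the per-window series for ambulatory use."""
    m = np.asarray(regression_matrix, dtype=float)
    if mode == MEAN_PREDICTION:
        sys_mean, dia_mean = m.mean(axis=0)   # average each column
        return sys_mean, dia_mean             # mean blood pressure data pair
    if mode == DYNAMIC_PREDICTION:
        return m                              # dynamic prediction sequence
    raise ValueError("unknown prediction mode identifier")
```

The mean mode suits a single clinic-style reading, while the dynamic mode preserves beat-to-beat variation over the recording.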
Preferably, when the CNN model identifier is the second-class CNN identifier, selecting a second-class CNN model to perform wavelet transform-based blood pressure prediction on the standard PPG data sequence, specifically comprises:
When the data source identifier is the first-class PPG original signal identifier or the second-class PPG original signal identifier and the CNN model identifier is the second-class CNN identifier, performing data fragment division on the standard PPG data sequence to generate standard PPG data fragments;
Acquiring a preset wavelet basis type, a scalability factor array and a mobile factor array, wherein the scalability factor array comprises H scalability factors, the mobile factor array comprises L mobile factors, and H and L are both integers;
Performing signal decomposition on the standard PPG data fragments through continuous wavelet transform according to the scalability factors in the scalability factor array, the mobile factors in the mobile factor array and the wavelet basis type to generate a PPG wavelet coefficient matrix [H, L];
Transforming the PPG wavelet coefficient matrix into a real matrix through a modulo operation on matrix elements, and performing normalization processing on values of matrix elements in the real matrix to generate a PPG normalized matrix [H, L];
Acquiring an RGB color palette matrix, and performing PPG time-frequency tensor conversion on the PPG normalized matrix [H, L] according to the RGB color palette matrix to generate a PPG time-frequency three-dimensional tensor [H, L, 3];
According to a preset second-class CNN input width threshold, performing tensor shape reconstruction on the PPG time-frequency three-dimensional tensor [H, L, 3] through a bicubic interpolation algorithm to generate a PPG convolutional three-dimensional tensor [K, K, 3], wherein K is the second-class CNN input width threshold; and
Performing blood pressure prediction on the PPG convolutional three-dimensional tensor [K, K, 3] using the second-class CNN model to generate a PPG blood pressure prediction data pair, wherein the PPG blood pressure prediction data pair comprises PPG systolic pressure prediction data and PPG diastolic pressure prediction data.
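The wavelet-based conversion from a standard PPG fragment to the CNN input tensor can be sketched as below. A numpy Morlet wavelet stands in for the preset wavelet basis, the grey-to-RGB palette is illustrative (the disclosure uses an RGB color palette matrix it does not reproduce), and simple index resampling stands in for the bicubic interpolation:

```python
import numpy as np

def ppg_to_time_frequency_tensor(fragment, scales, K=32):
    """CWT of one standard PPG fragment -> modulus -> normalization ->
    palette mapping -> K x K x 3 tensor. Wavelet basis, palette and
    resampling are illustrative stand-ins for the disclosure's choices."""
    x = np.asarray(fragment, dtype=float)
    n = len(x)
    coeffs = np.empty((len(scales), n), dtype=complex)    # [H, L] matrix
    for h, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1) / s
        # Morlet-style wavelet, scaled by the scalability factor s.
        wavelet = np.exp(1j * 5.0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(s)
        coeffs[h] = np.convolve(x, wavelet, mode="same")
    mag = np.abs(coeffs)                                  # modulo operation
    norm = (mag - mag.min()) / (mag.max() - mag.min())    # PPG normalized matrix
    # Illustrative palette: map each normalized value to an RGB triple.
    rgb = np.stack([norm, 1.0 - norm, 0.5 * np.ones_like(norm)], axis=-1)
    # Resample both axes to K points (nearest-index; bicubic in the text).
    hi = np.linspace(0, rgb.shape[0] - 1, K).round().astype(int)
    li = np.linspace(0, rgb.shape[1] - 1, K).round().astype(int)
    return rgb[np.ix_(hi, li)]                            # [K, K, 3] tensor
```

The resulting tensor is what the second-class CNN consumes, so the CNN input width threshold K must match the network's expected input shape.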
According to the blood pressure prediction method using multiple data sources provided by the embodiments of the disclosure in the first aspect, two signal filtering and shaping methods are provided for directly acquired PPG signals, a video quality detection and normalized signal conversion method is provided for indirectly generated PPG signals, and a uniform standard PPG data sequence is generated for blood pressure prediction; and during blood pressure prediction, different blood pressure prediction modes are provided according to a CNN model identifier. By adoption of the method and device provided by the embodiments of the disclosure, the PPG signal preprocessing capacity and compatibility of an application are improved, and different blood pressure prediction methods are provided.
In a second aspect, the embodiments of the disclosure provide a device, comprising a memory and a processor, wherein the memory is configured to store a program, and the processor is configured to implement the method in the first aspect and in all implementations of the first aspect.
In a third aspect, the embodiments of the disclosure provide a computer program product comprising instructions, wherein the computer program product enables a computer to implement the method in the first aspect and in all implementations of the first aspect when running on the computer.
In a fourth aspect, the embodiments of the disclosure provide a computer-readable storage medium having a computer program stored therein, wherein when the computer program is executed by a processor, the method in the first aspect and in all implementations of the first aspect is implemented.
To gain a better understanding of the purposes, technical solutions and advantages of the invention, embodiments of the disclosure will be described in further detail below in conjunction with the accompanying drawings. Clearly, the embodiments in the following description are merely illustrative ones, and are not all possible ones of the invention. All other embodiments obtained by those ordinarily skilled in the art according to the following ones without creative labor should also fall within the protection scope of the invention.
Step 1, a data source identifier and original data are acquired from an upper computer.
Wherein, the data source identifier is one of a first-class PPG original signal identifier, a second-class PPG original signal identifier and a third-class PPG video identifier; and the original data is one of a first-class PPG original signal, a second-class PPG original signal and third-class PPG video data, and corresponds to the data source identifier;
Here, to guarantee the compatibility with various PPG data source acquisition approaches, the data source identifier is set to distinguish the type of acquired original data:
As shown in
Step 2, the original data is preprocessed according to the data source identifier;
Step 2 comprises: Step 21, when the data source identifier is the first-class PPG original signal identifier, normalized filtering is performed on the first-class PPG original signal to generate a standard PPG data sequence;
Step 21 comprises: Step 211, when the data source identifier is the first-class PPG original signal identifier, data sampling is performed on the first-class PPG original signal according to a preset first-class signal sampling threshold to generate a first-class PPG sampling data sequence (X1, X2 . . . Xi . . . XM);
Wherein, the first-class PPG sampling data sequence (X1, X2 . . . Xi . . . XM) comprises M first-class PPG sampling data Xi, M is an integer, and i ranges from 1 to M;
Here, considering that the first-class PPG original signal may not be subjected to digital conversion, sampling is performed before filtering to realize standardized processing of the signal;
Step 212, normalized filtering is performed on the first-class PPG sampling data sequence (X1, X2 . . . Xi . . . XM) to generate a first process sequence (Y1, Y2 . . . Yi . . . YM);
When i is 1, Yi=Xi is set;
When i is greater than 1, Yi is set according to a formula
Wherein, the first process sequence (Y1, Y2 . . . Yi . . . YM) comprises M first process data Yi, a and b are preset first-class filtering constants, and c is a gain coefficient of the first-class PPG original signal;
Here, normalized filtering is performed on the first-class PPG original signal by adjusting the relative amplitude of each data point of the first-class PPG original signal, and the signal shape and amplitude after filtering are shown in
Step 213, the standard PPG data sequence is set as the first process sequence (Y1, Y2 . . . Yi . . . YM), and Step 3 is performed;
Step 22, when the data source identifier is the second-class PPG original signal identifier, baseline drift removal and normalized filtering are performed on the second-class PPG original signal to generate a standard PPG data sequence;
Step 22 comprises: Step 221, when the data source identifier is the second-class PPG original signal identifier, data sampling is performed on the second-class PPG original signal according to a preset second-class signal sampling threshold to generate a second-class PPG sampling data sequence (S1, S2 . . . Sj . . . SN);
Wherein, the second-class PPG sampling data sequence (S1, S2 . . . Sj . . . SN) comprises N second-class PPG sampling data Sj, N is an integer, and j ranges from 1 to N;
Here, considering that the second-class PPG original signal may not be subjected to digital conversion, sampling is performed before filtering to realize standardized processing of the signal;
Step 222, baseline drift removal and filtering are performed on the second-class PPG sampling data sequence (S1, S2 . . . Sj . . . SN) to generate a second process sequence (T1, T2 . . . Tj . . . TN);
When j is 1, Tj=Sj is set;
When j is greater than 1, Tj is set according to a formula Tj=e1×Sj+e2×Sj-1−e3×Tj-1;
Wherein, the second process sequence (T1, T2 . . . Tj . . . TN) comprises N second process data Tj, and e1, e2, and e3 are all preset high-pass filtering coefficients;
Here, the baseline of the entire signal is pulled to the same horizontal line to the maximum extent by adjusting the positions of the relative baselines of every two adjacent data points of the second-class PPG original signal;
Step 223, a maximum value is extracted from the second process sequence (T1, T2 . . . Tj . . . TN) to generate a maximum reference value max, and a minimum value is extracted from the second process sequence (T1, T2 . . . Tj . . . TN) to generate a minimum reference value min;
Step 224, normalized filtering is performed on the second process sequence (T1, T2 . . . Tj . . . TN) to generate a third process sequence (P1, P2 . . . Pj . . . PN) specifically by setting Pj according to a formula
Wherein, the third process sequence (P1, P2 . . . Pj . . . PN) comprises N third process data Pj;
Here, after baseline drift removal is performed on the second-class PPG original signal, normalized filtering is performed on the second process sequence (T1, T2 . . . Tj . . . TN) according to a ratio of the amplitude of each signal point to a full signal maximum amplitude, and the signal shape and amplitude after filtering are shown in
Step 225, the standard PPG data sequence is set as the third process sequence (P1, P2 . . . Pj . . . PN), and Step 3 is performed;
Step 23, when the data source identifier is the third-class PPG video identifier, video quality detection and normalized signal conversion are performed on the third-class PPG video data to generate the standard PPG data sequence;
Step 23 comprises: Step 231, when the data source identifier is the third-class PPG video identifier, video data frame image extraction is performed on the third-class PPG video data to generate a third-class PPG video frame image sequence;
Wherein, the third-class PPG video frame image sequence comprises multiple third-class PPG video frame images;
Here, the third-class PPG video data is a file in a common standard video format, and image frames can be extracted from the video file through standard video processing software or a standard video processing method. For example, if the length of a video is 5 s and each second of the video includes 24 frames of images, the extracted third-class PPG video frame image sequence comprises 5 × 24 = 120 third-class PPG video frame images;
Step 232, one-dimensional red light signal extraction is performed on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset red light pixel threshold range to generate a first red light digital signal, and one-dimensional green light signal extraction is performed on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset green light pixel threshold range to generate a first green light digital signal;
Step 232 comprises: Step 2321, the first red light digital signal is initialized to be null, the first green light digital signal is initialized to be null, a first index is initialized to 1, and a first total number is initialized to a total number of the third-class PPG video frame images in the third-class PPG video frame image sequence;
Step 2322, a first-index frame image is set as the third-class PPG video frame image, corresponding to the first index, in the third-class PPG video frame image sequence;
Step 2323, all pixels, meeting the red light pixel threshold range, in the first-index frame image are collected to generate a red pixel set, a total number of the pixels in the red pixel set is calculated to generate a red pixel total number, the sum of pixel values of all the pixels in the red pixel set is calculated to generate a red pixel value sum, and first-index frame red light channel data is generated according to a quotient obtained by dividing the red pixel value sum by the red pixel total number; and signal points are added into the first red light digital signal using the first-index frame red light channel data as signal point data;
For example, the total number of all pixels, meeting the red light pixel threshold range, in a first frame of image is counted to generate the red pixel total number (a pixel threshold range is adopted because the internal structures or blood vessels at different positions reflect and transmit light to different degrees, so the light transmittances differ and the shades of red pixels in the recorded video also differ), the sum of pixel values of all the pixels, meeting the red light pixel threshold range, in the first frame of image is calculated to generate the red pixel value sum, and the first-index frame red light channel data=the red pixel value sum/the red pixel total number;
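As an illustration of Step 2323, the per-frame channel calculation can be sketched as follows (a minimal numpy sketch; the helper name `channel_mean`, the toy frame and the threshold range [100, 255] are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def channel_mean(channel, lo, hi):
    """Average value of all pixels in one color channel that fall inside
    the [lo, hi] threshold range; returns None when no pixel qualifies
    (hypothetical helper for illustration only)."""
    mask = (channel >= lo) & (channel <= hi)
    count = int(mask.sum())                # red pixel total number
    if count == 0:
        return None
    total = float(channel[mask].sum())     # red pixel value sum
    return total / count                   # channel data = sum / count

# Toy 2x2 red-channel frame; only 120 and 200 meet the assumed range.
red_channel = np.array([[120.0, 40.0], [200.0, 90.0]])
point = channel_mean(red_channel, 100, 255)
```

A signal point like `point` is appended to the first red light digital signal for every frame.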
Step 2324, all pixels, meeting the green light pixel threshold range, in the first-index frame image are collected to generate a green pixel set, a total number of the pixels in the green pixel set is calculated to generate a total number of green pixels, the sum of pixel values of all the pixels in the green pixel set is calculated to generate a green pixel value sum, and first-index frame green light channel data is generated according to a quotient obtained by dividing the sum of green pixel values by the total number of green pixels; and signal points are added into the first green light digital signal using the first-index frame green light channel data as signal point data;
For example, the total number of all pixels, meeting the green light pixel threshold range, in a first frame of image is counted to generate the green pixel total number (a pixel threshold range is adopted because the internal structures or blood vessels at different positions reflect and transmit light to different degrees, so the light transmittances differ and the shades of green pixels in the recorded video also differ), the sum of pixel values of all the pixels, meeting the green light pixel threshold range, in the first frame of image is calculated to generate the green pixel value sum, and the first-index frame green light channel data=the green pixel value sum/the green pixel total number;
Step 2325, the first index is increased by 1;
Step 2326, whether the first index is greater than the first total number is determined; if the first index is less than or equal to the first total number, Step 2322 is performed; or, if the first index is greater than the first total number, Step 233 is performed;
Here, in Step 232, two types of light channel data, red light channel data and green light channel data, are extracted from all the third-class PPG video frame images in the third-class PPG video frame image sequence in the following way: an average is calculated over the pixels, meeting the corresponding pixel threshold range, in each frame image to obtain a pixel average that represents the color channel data of the corresponding light in the frame image; and all frame images in the video are processed in the same way in chronological order to obtain two segments of one-dimensional digital signals: the first red light digital signal and the first green light digital signal.
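The frame-by-frame loop of Steps 2321-2326 can be sketched as follows (a minimal numpy sketch; the function name, the synthetic constant-valued frames and the threshold ranges are illustrative assumptions):

```python
import numpy as np

def extract_signals(frames, red_range, green_range):
    """Walk the frame sequence in chronological order and emit two
    one-dimensional digital signals.  Each frame is assumed to be an
    (H, W, 3) RGB array; ranges are (lo, hi) tuples."""
    red_sig, green_sig = [], []            # both initialized to null
    for frame in frames:                   # first index 1 .. first total number
        for chan, (lo, hi), out in ((frame[..., 0], red_range, red_sig),
                                    (frame[..., 1], green_range, green_sig)):
            mask = (chan >= lo) & (chan <= hi)
            # pixel value sum / pixel total number (guard against empty mask)
            out.append(float(chan[mask].sum()) / max(int(mask.sum()), 1))
    return np.array(red_sig), np.array(green_sig)

# Three synthetic 4x4 frames whose pixel values rise over time.
frames = [np.full((4, 4, 3), v, dtype=float) for v in (150, 160, 170)]
red, green = extract_signals(frames, (100, 255), (100, 255))
```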
Step 233, according to a preset band-pass filtering frequency threshold range, band-pass filtering preprocessing is performed on the first red light digital signal to generate a second red light digital signal, and band-pass filtering preprocessing is performed on the first green light digital signal to generate a second green light digital signal;
Here, signal filtering preprocessing, namely denoising, is performed on the two types of light channel data. In Embodiment 1, band-pass filtering is used for denoising, that is, a band-pass filtering frequency threshold range is preset, and signal components, interference and noise lower or higher than the band-pass filtering frequency threshold range are suppressed based on the band-pass filtering principle. Generally, the band-pass filtering frequency threshold range is 0.5-10 Hz. When band-pass filtering is performed on some mobile terminals, a finite impulse response (FIR) filtering module is used;
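One way to approximate this preprocessing is a spectral mask, a simple stand-in for the FIR band-pass module mentioned above (a sketch assuming a 100 Hz sampling rate and the 0.5-10 Hz range; a production FIR filter would be designed differently):

```python
import numpy as np

def bandpass(signal, fs, f_lo=0.5, f_hi=10.0):
    """Zero out spectral components outside [f_lo, f_hi] Hz via an FFT
    mask -- an illustrative stand-in for band-pass filtering."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 100.0                                  # assumed sampling rate
t = np.arange(0, 4, 1 / fs)
# Synthetic first red light digital signal: 1.25 Hz pulse component,
# 30 Hz interference, and a constant (DC) offset.
raw = np.sin(2 * np.pi * 1.25 * t) + 0.5 * np.sin(2 * np.pi * 30 * t) + 2.0
clean = bandpass(raw, fs)                   # second red light digital signal
```

Only the 1.25 Hz component, which lies inside the threshold range, survives the filtering.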
Step 234, maximum frequency difference determination is performed on the second red light digital signal and the second green light digital signal to generate a first determination result;
Step 234 comprises: Step 2341, digital signal time domain-frequency domain conversion is performed on the second red light digital signal through discrete Fourier transform to generate a red light frequency domain signal, and digital signal time domain-frequency domain conversion is performed on the second green light digital signal through discrete Fourier transform to generate a green light frequency domain signal;
Step 2342, a maximum-energy frequency is extracted from the red light frequency domain signal to generate a maximum red light frequency, and a maximum-energy frequency is extracted from the green light frequency domain signal to generate a maximum green light frequency;
Step 2343, a frequency difference between the maximum red light frequency and the maximum green light frequency is calculated to generate a maximum red-green frequency difference;
Step 2344, when the maximum red-green frequency difference does not exceed a preset maximum frequency difference threshold range, the first determination result is set as the up-to-standard signal identifier;
Here, in Step 234, frequency domain signals of the second red light digital signal and the second green light digital signal are obtained through discrete Fourier transform; maximum-energy frequencies are obtained from the frequency domain signals (generally, this frequency corresponds to the heart rate); whether the maximum-energy frequencies of the two digital signals are consistent is then checked; if the difference is within an allowable error range, the first determination result is set as the up-to-standard signal identifier; or, if the difference is large, the first determination result is set as a not-up-to-standard signal identifier;
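The maximum frequency difference determination of Step 234 can be sketched as follows (the 0.25 Hz threshold and the synthetic signals are illustrative assumptions):

```python
import numpy as np

def dominant_freq(signal, fs):
    """Frequency of the maximum-energy bin of the DFT (Step 2342),
    ignoring the DC component."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[0] = 0.0                        # drop DC so it cannot win
    return float(freqs[np.argmax(spec)])

fs = 50.0
t = np.arange(0, 8, 1 / fs)
red = np.sin(2 * np.pi * 1.25 * t)           # ~75 bpm heart-rate component
green = np.sin(2 * np.pi * 1.25 * t + 0.4)   # same rate, phase-shifted

diff = abs(dominant_freq(red, fs) - dominant_freq(green, fs))
up_to_standard = diff <= 0.25   # assumed maximum frequency difference threshold
```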
Step 235, when the first determination result is the up-to-standard signal identifier, signal-to-noise ratio determination is performed on the second red light digital signal and the second green light digital signal to generate a second determination result;
Step 235 comprises: Step 2351, when the first determination result is the up-to-standard signal identifier, according to a preset band-stop filtering frequency threshold range, valid signal points, meeting the band-stop filtering frequency threshold range, are removed from the second red light digital signal through multi-order Butterworth band-stop filtering to generate a red light noise signal, and valid signal points, meeting the band-stop filtering frequency threshold range, are removed from the second green light digital signal through multi-order Butterworth band-stop filtering to generate a green light noise signal;
Step 2352, signal energy of the second red light digital signal is calculated to generate red light signal energy, signal energy of the red light noise signal is calculated to generate red light noise energy, valid red light signal energy is generated according to a difference between the red light signal energy and the red light noise energy, and a red light signal-to-noise ratio is generated according to a ratio of the valid red light signal energy to the red light noise energy;
Step 2353, signal energy of the second green light digital signal is calculated to generate green light signal energy, signal energy of the green light noise signal is calculated to generate green light noise energy, valid green light signal energy is generated according to a difference between the green light signal energy and the green light noise energy, and a green light signal-to-noise ratio is generated according to a ratio of the valid green light signal energy to the green light noise energy;
Step 2354, when any one of the red light signal-to-noise ratio and the green light signal-to-noise ratio is greater than or equal to a signal-to-noise threshold, the second determination result is set as the up-to-standard signal identifier;
Here, in Step 235, secondary filtering is performed on the red and green light signals: the secondary filtering is band-stop filtering, that is, signals within the band-stop filtering frequency threshold range are suppressed, specifically through multi-order Butterworth band-stop filtering (such as fourth-order Butterworth band-stop filtering or first-order Butterworth band-stop filtering); through band-stop filtering, noise and interference signals are retained to generate the noise signals, and then the valid signals and the noise signals are compared to generate the signal-to-noise ratios; and finally, whether the red and green light digital signals are up to standard is determined according to the signal-to-noise ratios;
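The signal-to-noise ratio computation of Steps 2351-2353 can be sketched as follows, with an FFT band-stop standing in for the multi-order Butterworth filter (the sampling rate, the test signal and the dB formulation are illustrative assumptions):

```python
import numpy as np

def snr_db(signal, fs, stop_lo=0.5, stop_hi=10.0):
    """Suppress the valid band to obtain the noise signal, then compare
    energies: SNR = 10*log10(valid energy / noise energy)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs >= stop_lo) & (freqs <= stop_hi)] = 0.0  # remove valid band
    noise = np.fft.irfft(spec, n=len(signal))            # noise signal
    e_total = float(np.sum(signal ** 2))                 # signal energy
    e_noise = float(np.sum(noise ** 2))                  # noise energy
    e_valid = e_total - e_noise                          # valid signal energy
    return 10.0 * np.log10(e_valid / e_noise)

fs = 100.0
t = np.arange(0, 4, 1 / fs)
# Valid 1.25 Hz pulse plus weak 30 Hz interference outside the valid band.
sig = np.sin(2 * np.pi * 1.25 * t) + 0.1 * np.sin(2 * np.pi * 30 * t)
ratio = snr_db(sig, fs)   # compared against the signal-to-noise threshold
```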
Step 236, when the second determination result is the up-to-standard signal identifier, normalized PPG signal data sequence generation is performed on the second red light digital signal and the second green light digital signal to generate the standard PPG data sequence;
Here, because a standard PPG data sequence is of a regressive data structure while the values of the color channel data in the second red light digital signal and the second green light digital signal are greater than 1, normalization processing should be performed on the second red light digital signal and the second green light digital signal. Many normalization methods can be used for normalization processing, and they will not be further expounded in this embodiment;
Step 236 comprises: when the second determination result is the up-to-standard signal identifier, signal data normalization processing is performed on the second red light digital signal and the second green light digital signal, respectively, to generate a normalized red light signal and a normalized green light signal; a red light data sequence of the standard PPG data sequence is set as the normalized red light signal, and a green data sequence of the standard PPG data sequence is set as the normalized green light signal, wherein the standard PPG data sequence comprises the red light data sequence and the green light data sequence.
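As one of the many usable normalization methods noted above, min-max normalization can be sketched as follows (the channel values are illustrative):

```python
import numpy as np

def minmax_normalize(signal):
    """Map channel data into [0, 1] by min-max normalization --
    one possible choice among the methods the text leaves open."""
    lo, hi = float(signal.min()), float(signal.max())
    return (signal - lo) / (hi - lo)

# Toy second red light digital signal with values greater than 1.
red = np.array([120.0, 160.0, 200.0])
norm_red = minmax_normalize(red)   # red light data sequence of the standard PPG data sequence
```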
Here, depending on the number of light channels in a video, the standard PPG data sequence generated according to the third-class PPG video data differs from the standard PPG data sequences generated according to the first-class PPG original signal and the second-class PPG original signal in the following aspect: the standard PPG data sequences generated according to the first-class PPG original signal and the second-class PPG original signal are always single-channel data; if a single light exists in the video, the standard PPG data sequence extracted from the third-class PPG video data is single-channel data; and if both red and green lights exist in the video, the standard PPG data sequence extracted from the third-class PPG video data is double-channel data.
Step 3, a CNN model identifier is acquired;
Wherein, the CNN model identifier is a first-class CNN identifier or a second-class CNN identifier;
Here, CNN models will be introduced briefly. The CNN has always been one of the key algorithms in the field of feature recognition. When applied to image recognition, the CNN is used, during fine classification and recognition, to extract discriminant features of images, which are then learned by other classifiers. When applied to the field of blood pressure feature recognition, the CNN is used to perform PPG signal feature extraction and calculation on an input one-dimensional standard PPG data sequence: after convolution and pooling are performed on the input standard PPG data sequence, feature data in conformity to PPG signal features are reserved for a fully connected layer to perform regression calculation. This embodiment of the disclosure provides two types of CNN models to perform blood pressure prediction on the standard PPG data sequence: first-class CNN model and second-class CNN model. The CNN model identifier is used to distinguish and recognize these two CNN models, thus being the first-class CNN identifier or the second-class CNN identifier;
First, these two CNN models are different in feature extraction object: the first-class CNN model performs feature extraction directly on the standard PPG data sequence according to the time-domain amplitude of signals, and the second-class CNN model converts the standard PPG data sequence into a time-frequency graph data sequence and then performs feature extraction on the time-frequency graph data sequence;
Second, these two CNN models are different in internal network structure:
(1) The first-class CNN model comprises multiple CNN network layers and a fully connected layer, and each CNN network layer comprises a convolutional layer and a pooling layer; the second-class CNN model comprises a two-dimensional convolutional layer, a maximum pooling layer, a batch normalization layer, an activation layer, an add layer, a global average pooling layer, a dropout layer and a fully connected layer. Specifically, the first-class CNN model is a CNN model that has been trained for blood pressure feature extraction; each of its CNN network layers comprises a convolutional layer used to perform blood pressure feature extraction and calculation on the input data of the layer and a pooling layer used to perform down-sampling on the extraction result of the convolutional layer; a preset convolutional layer number threshold indicates the specific number of CNN network layers of the model, and the output result of each CNN network layer is used as the input of the next CNN network layer; and finally, the result obtained after the preset convolutional layer number threshold times of calculation by the CNN network layers is input to the fully connected layer of the model for regression calculation;
(2) Different from the first-class CNN model, the second-class CNN model adopts a customized convolutional network structure, and comprises a two-dimensional convolutional layer, a maximum pooling layer, a batch normalization layer, an activation layer, an add layer, a global average pooling layer, a dropout layer and a fully connected layer, wherein the two-dimensional convolutional layer may comprise multiple sub-convolutional layers and is used to perform multiple rounds of convolution calculation on the input data, and the convolution result (a four-dimensional tensor) output by the two-dimensional convolutional layer comprises multiple one-dimensional tensors; the maximum pooling layer is used to down-sample the convolution result by taking the maximum value of each one-dimensional vector to reduce the data size; the batch normalization layer is used to perform data normalization on the output result of the maximum pooling layer; the activation layer performs neural network connection on the output result of the batch normalization layer by way of a nonlinear activation function; the add layer is used to perform weighted sum calculation on the output result of the activation layer; the global average pooling layer is used to perform weighted average calculation on the output result of the add layer; the dropout layer is used to clip the output result of the global average pooling layer; and finally, the fully connected layer performs two-output regression calculation on the clipped output result of the dropout layer to output the regression calculation results of the diastolic pressure and the systolic pressure.
Step 4, a corresponding CNN model is selected to perform blood pressure prediction on the standard PPG data sequence according to the CNN model identifier;
Step 4 comprises: Step 41, when the CNN model identifier is the first-class CNN identifier, the first-class CNN model is selected to perform blood pressure prediction on the standard PPG data sequence;
Step 41 comprises: Step 411, first-class CNN model input data conversion is performed on the standard PPG data sequence according to a preset first-class CNN input width threshold to generate an input data four-dimensional tensor;
Here, the first-class CNN input width threshold is the maximum value of an initial input data length of the first-class CNN model; and in this embodiment of the disclosure, input data of the first-class CNN model is of a four-dimensional tensor format;
Step 412, according to a preset convolutional layer number threshold, multilayer convolution and pooling calculation is performed on the input data four-dimensional tensor by way of the CNN network layers of the first-class CNN model to generate a feature data four-dimensional tensor;
Here, the preprocessed input data four-dimensional tensor is input to the CNN network layers of the trained first-class CNN model for feature extraction to generate the feature data four-dimensional tensor, of which the data format is also the four-dimensional tensor format; as described above, the CNN network layers comprise multiple convolutional layers and multiple pooling layers; generally, one convolutional layer is matched with one pooling layer and is then connected to the next convolutional layer, and the final layer number depends on the convolutional layer number threshold, for example, a network comprising four convolutional layers and four pooling layers is called a four-layer convolution network; the convolutional layers perform convolution calculation to convert an input into outputs of different dimensions, and these outputs may be regarded as another representation of the input; and the pooling layers down-sample the outputs to simplify the operation and help the network extract more valid information;
Step 413, two-dimensional matrix construction of fully connected layer input data is performed according to the feature data four-dimensional tensor to generate an input data two-dimensional matrix, and feature data regression calculation is performed on the input data two-dimensional matrix by way of the fully connected layer of the first-class CNN model to generate a blood pressure regression data two-dimensional matrix;
Here, the input and output data of the fully connected layer is of a two-dimensional matrix format, so before regression calculation is performed by the fully connected layer, dimension reduction needs to be performed on the four-dimensional tensor output by the CNN network layers to convert the four-dimensional tensor into a two-dimensional matrix; the fully connected layer of the first-class CNN model comprises multiple sub-fully connected layers, each node of each sub-fully connected layer is connected to all nodes of the prior sub-fully connected layer to integrate all the features extracted previously, and the number of nodes and the activation function (generally the ReLU function, or other functions) of each sub-fully connected layer may be set; and the number of nodes of the last sub-fully connected layer is set to 2, so that two regression calculation values, respectively representing the systolic pressure and the diastolic pressure, are obtained after several layers of fully connected calculation;
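The dimension reduction before the fully connected layer can be sketched as follows (the tensor shape (2, 3, 4, 5) is an illustrative assumption):

```python
import numpy as np

# Feature data four-dimensional tensor: (batch, height, width, channels).
features = np.arange(2 * 3 * 4 * 5, dtype=float).reshape(2, 3, 4, 5)

# Keep the batch axis and flatten the remaining three axes into one
# feature vector per sample -- the two-dimensional matrix the fully
# connected layer expects.
matrix = features.reshape(features.shape[0], -1)
```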
Step 414, a preset prediction mode identifier is acquired;
Wherein, the prediction mode identifier is a mean prediction identifier or a dynamic prediction identifier;
Here, the prediction mode identifier is a system variable, and output contents may be further predicted by way of the variable according to blood pressure prediction values obtained after regression calculation of the fully connected layer: when the prediction mode identifier is a mean prediction identifier, it indicates that mean blood pressure data in the original signal needs to be output; or, when the prediction mode identifier is a dynamic prediction identifier, it indicates that a blood pressure change data sequence within the time period of the original signal needs to be output;
Step 415, when the prediction mode identifier is the mean prediction identifier, mean blood pressure calculation is performed on the two-dimensional matrix of blood pressure regression data to generate a mean blood pressure prediction data pair;
Wherein, the mean blood pressure prediction data pair comprises mean systolic pressure prediction data and mean diastolic pressure prediction data;
Here, the blood pressure regression data two-dimensional matrix may be construed as a vector sequence comprising multiple one-dimensional vectors [2]; the mean of the smaller values of the one-dimensional vectors [2] is calculated to obtain the mean diastolic pressure prediction data (the smaller value is taken because the systolic pressure is greater than the diastolic pressure, so the smaller one of the two regression calculation values is the predicted value of the diastolic pressure), and the mean of the larger values of the one-dimensional vectors [2] is calculated to obtain the mean systolic pressure prediction data (the larger value is taken because the systolic pressure is greater than the diastolic pressure, so the larger one of the two regression calculation values is the predicted value of the systolic pressure);
Step 416, when the prediction mode identifier is the dynamic prediction identifier, dynamic blood pressure data extraction is performed on the blood pressure regression data two-dimensional matrix to generate a dynamic blood pressure prediction one-dimensional data sequence;
Here, the systolic pressure and diastolic pressure in all the one-dimensional vectors [2] in the blood pressure regression data two-dimensional matrix are extracted to form a data sequence, and the dynamic change of the blood pressure within a period of time is reflected by the data sequence;
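The mean prediction of Step 415 and the dynamic prediction of Step 416 can both be sketched from the same regression matrix (the regression values are illustrative assumptions):

```python
import numpy as np

# Each row is one one-dimensional vector [2] output by the fully
# connected layer for one time window.
regression = np.array([[118.0, 78.0],
                       [122.0, 80.0],
                       [120.0, 82.0]])

# Mean prediction: the larger value of each vector is the systolic
# pressure, the smaller the diastolic pressure.
systolic = regression.max(axis=1)
diastolic = regression.min(axis=1)
mean_pair = (float(systolic.mean()), float(diastolic.mean()))

# Dynamic prediction: chain the per-vector values into one sequence
# reflecting blood pressure change over the time period.
dynamic = np.stack([systolic, diastolic], axis=1).ravel()
```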
Step 42, when the CNN model identifier is the second-class CNN identifier, the second-class CNN model is selected to perform wavelet transform-based blood pressure prediction on the standard PPG data sequence.
Here, as described above, the second-class CNN model is used to perform feature extraction on a time-frequency graph data sequence, so before the second-class CNN model is used, it is necessary to convert the standard PPG data sequence, which is a time-domain data sequence, into a time-frequency data sequence and then convert the time-frequency data sequence into a time-frequency graph data sequence. Conventionally, time-frequency conversion of signals is realized through Fourier transform; however, because the size of the time-frequency analysis window for Fourier transform is fixed, feature data may be lost when Fourier transform is used to process non-stationary PPG signals. In this embodiment of the disclosure, wavelet transform, which is a time-frequency analysis method developed from Fourier transform and able in principle to highlight local features of signals, is used to realize the time domain-frequency domain conversion; specifically, continuous wavelet transform (one method of wavelet transform) is used for conversion of the PPG signal. A common method for converting a time-frequency data sequence into a time-frequency graph data sequence is to use the red-green-blue (RGB) mode.
Step 42 comprises: Step 421, data fragment division is performed on the standard PPG data sequence to generate standard PPG data fragments;
Step 422, a preset wavelet basis type, a scalability factor array and a mobile factor array are acquired;
Wherein, the scalability factor array comprises H scalability factors, the mobile factor array comprises L mobile factors, and H and L are both integers;
Compared with short-time Fourier transform, continuous wavelet transform, as an important means for local analysis of signals, has an adjustable window, thus having a high capacity to analyze non-stationary signals; the signals can be refined on multiple scales through the scaling and translational operations of wavelets, so that high-frequency components of the signals have a high time resolution and low-frequency components of the signals have a high frequency resolution; continuous wavelet transform has three key parameters: the wavelet basis, the scalability factor and the mobile factor, wherein the wavelet basis type is the wavelet function specifically used for wavelet transform, the scalability factor is a scale parameter that changes over the course of the wavelet transform, and the mobile factor is a time-shift parameter that changes over the course of the wavelet transform;
Step 423, signal decomposition is performed on the standard PPG data fragments through continuous wavelet transform according to the scalability factors in the scalability factor array, the mobile factors in the mobile factor array and the wavelet basis type to generate a PPG wavelet coefficient matrix [H, L];
Here, the PPG wavelet coefficient matrix [H, L] is formed by H*L wavelet coefficients, and each wavelet coefficient is a complex number that reflects the scalability factor and the mobile factor;
Step 424, the PPG wavelet coefficient matrix is transformed into a real matrix through a modulo operation on matrix elements, and normalization processing is performed on values of matrix elements in the real matrix to generate a PPG normalized matrix [H, L];
Here, a complex matrix is transformed into a real matrix through a modulo operation, and values of all matrix elements in the real matrix are normalized to obtain a PPG normalized matrix; if the PPG normalized matrix [H, L] is construed as a data sequence, this data sequence is a time-frequency data sequence obtained after time domain-frequency domain conversion is performed on the standard PPG data sequence;
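The modulo operation and normalization of Step 424 can be sketched as follows (the toy coefficient matrix is an illustrative assumption):

```python
import numpy as np

# Toy PPG wavelet coefficient matrix [H, L] of complex coefficients.
coeffs = np.array([[1 + 1j, 0 + 2j],
                   [3 + 4j, 0 + 0j]])

real = np.abs(coeffs)                  # modulo operation -> real matrix
lo, hi = real.min(), real.max()
normalized = (real - lo) / (hi - lo)   # PPG normalized matrix [H, L], values in 0-1
```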
Step 425, an RGB color palette matrix is acquired, and PPG time-frequency tensor conversion is performed on the PPG normalized matrix [H, L] according to the RGB color palette matrix to generate a PPG time-frequency three-dimensional tensor [H, L, 3];
The RGB mode, as an industry color standard, obtains various colors by changing and superposing the three color channels, red (R), green (G) and blue (B); this standard includes almost all colors that can be perceived by human eyes and is one of the most widely used color systems. Assume the RGB color palette matrix comprises 256 color vectors, where each color vector has a length of 3 and comprises the values of the three primary colors;
The values of all the matrix elements in the PPG normalized matrix [H, L] are within 0-1. When PPG time-frequency tensor conversion is performed on the PPG normalized matrix [H, L], the range 0-1 is divided into 256 segments; then, all the elements in the PPG normalized matrix [H, L] are polled to turn the original value of each element into the index of the segment to which the value belongs (for example, if the first segment is 0-1/256 and the value of one element is 1/257, the value 1/257 of this element will be turned into 1; if the 256th segment is 255/256-1 and the value of one element is 511/512, the value 511/512 of this element will be turned into 256); and finally, the values of the elements in the PPG normalized matrix [H, L] are turned from the range 0-1 into the range 1-256;
Assume the RGB color palette matrix comprises 256 colors; each element in the PPG normalized matrix [H, L] then corresponds to one RGB color vector in the RGB color palette matrix (each color vector is a one-dimensional vector [3] comprising the pixel values of the red color, the green color and the blue color), and the corresponding RGB color vector [3] is extracted from the RGB color palette matrix and added to the corresponding position of the PPG normalized matrix [H, L] to generate a time-frequency graph sequence, namely the PPG time-frequency three-dimensional tensor [H, L, 3];
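The segment indexing and palette lookup described in the previous two paragraphs can be sketched as follows (the grayscale palette is an illustrative assumption; any 256-entry palette works the same way):

```python
import numpy as np

# Assumed 256-entry RGB color palette matrix: each row is one color
# vector [3].  A gray ramp is used purely for illustration.
palette = np.stack([np.arange(256)] * 3, axis=1).astype(np.uint8)

# Toy PPG normalized matrix [H, L], including the two example values
# from the text (1/257 -> segment 1, 511/512 -> segment 256).
normalized = np.array([[0.0, 1.0 / 257.0],
                       [255.0 / 256.0, 511.0 / 512.0]])

# Turn each 0-1 value into a 1-256 segment index; the min() clamps the
# edge case of a value exactly equal to 1.0.
index = np.minimum(np.floor(normalized * 256).astype(int) + 1, 256)

# Look up the palette to build the PPG time-frequency tensor [H, L, 3].
tensor = palette[index - 1]
```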
Step 426, according to a preset second-class CNN input width threshold, tensor shape reconstruction is performed on the PPG time-frequency three-dimensional tensor [H, L, 3] through a bicubic interpolation algorithm to generate a PPG convolutional three-dimensional tensor [K, K, 3];
Wherein, K is the second-class CNN input width threshold;
Here, the size of the PPG time-frequency three-dimensional tensor [H, L, 3] may not meet the requirement for the input size of the second-class CNN model; when the size of the PPG time-frequency three-dimensional tensor [H, L, 3] is smaller than the input size of the second-class CNN model, a bicubic interpolation algorithm (a method for increasing the number of points in matrix data by interpolation calculation; generally, the interpolation technique is used to enlarge graphic data and the graphic size) is used to insert interpolated values to change the shape of the three-dimensional tensor and increase the time-frequency graphic size, and finally, the PPG convolutional three-dimensional tensor [K, K, 3] meeting the requirement of the second-class CNN model is generated;
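The tensor shape reconstruction can be sketched as follows; for brevity this sketch uses separable linear interpolation as a stand-in for the bicubic algorithm named in Step 426 (one channel of the [H, L, 3] tensor is resized, and the shapes are illustrative):

```python
import numpy as np

def resize_channel(img, k):
    """Resize one [H, L] channel to [k, k] with separable linear
    interpolation -- a simple stand-in for bicubic interpolation,
    which would use cubic weights instead."""
    h, l = img.shape
    # Interpolate along the rows, then along the columns.
    rows = np.array([np.interp(np.linspace(0, l - 1, k), np.arange(l), r)
                     for r in img])
    cols = np.array([np.interp(np.linspace(0, h - 1, k), np.arange(h), c)
                     for c in rows.T]).T
    return cols

tf = np.arange(12, dtype=float).reshape(3, 4)  # one channel of [H, L, 3]
out = resize_channel(tf, 6)                    # [K, K] channel with K = 6
```

Applying the same resize to each of the three color channels yields the PPG convolutional three-dimensional tensor [K, K, 3].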
Step 427, blood pressure prediction is performed on the PPG convolutional three-dimensional tensor [K, K, 3] using the second-class CNN model to generate a PPG blood pressure prediction data pair;
Wherein, the PPG blood pressure prediction data pair comprises PPG systolic pressure prediction data and PPG diastolic pressure prediction data.
Here, the second-class CNN model in this embodiment comprises: a two-dimensional convolutional layer, a maximum pooling layer, a batch normalization layer, an activation layer, an add layer, a global average pooling layer, a dropout layer and a fully connected layer, wherein the two-dimensional convolutional layer may comprise multiple sub-convolutional layers and is used to perform multiple rounds of convolution calculation on the input data, and the convolution result (a four-dimensional tensor) output by the two-dimensional convolutional layer comprises multiple one-dimensional tensors; the maximum pooling layer is used to down-sample the convolution result by taking the maximum value of each one-dimensional vector to reduce the data size; the batch normalization layer is used to perform data normalization on the output result of the maximum pooling layer; the activation layer performs neural network connection on the output result of the batch normalization layer by way of a nonlinear activation function; the add layer is used to perform weighted sum calculation on the output result of the activation layer; the global average pooling layer is used to perform weighted average calculation on the output result of the add layer; the dropout layer is used to clip the output result of the global average pooling layer; and finally, the fully connected layer performs two-output regression calculation on the clipped output result of the dropout layer to output the PPG systolic pressure prediction data and the PPG diastolic pressure prediction data.
As shown in
Step 101, a data source identifier and original data are acquired from an upper computer;
Wherein, the data source identifier is one of a first-class PPG original signal identifier, a second-class PPG original signal identifier and a third-class PPG video identifier; and, corresponding to the data source identifier, the original data is one of a first-class PPG original signal, a second-class PPG original signal and third-class PPG video data.
Here, to guarantee compatibility with various PPG data source acquisition approaches, the data source identifier is set to distinguish the type of the acquired original data: a first-class PPG original signal, a second-class PPG original signal or third-class PPG video data; as shown in
Step 102, when the data source identifier is the third-class PPG video identifier, video data frame image extraction is performed on the third-class PPG video data to generate a third-class PPG video frame image sequence;
Wherein, the third-class PPG video frame image sequence comprises multiple third-class PPG video frame images;
Here, the third-class PPG video data is a file of a common standard video format, and image frames can be extracted from the video file through standard video processing software or a standard video processing method. For example, if the length of a video is 5 s and each second of the video includes 24 frames of images, the extracted third-class PPG video frame image sequence comprises 5*24=120 third-class PPG video frame images;
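Frame extraction follows the read-until-exhausted loop common to standard video libraries (e.g. OpenCV's `cv2.VideoCapture`, whose `read()` returns a success flag and a frame). The sketch below substitutes a stand-in reader for the real capture object so the loop itself is runnable:

```python
class FakeCapture:
    """Stand-in for a video reader such as cv2.VideoCapture, for illustration."""
    def __init__(self, seconds, fps):
        self.remaining = seconds * fps
    def read(self):
        if self.remaining == 0:
            return False, None       # end of video
        self.remaining -= 1
        return True, [[0]]           # dummy frame image

def extract_frames(cap):
    """Collect every frame image of the video, in chronological order."""
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    return frames

# a 5 s clip at 24 frames per second yields 5 * 24 = 120 frame images
frames = extract_frames(FakeCapture(seconds=5, fps=24))
```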
Step 103, one-dimensional red light signal extraction is performed on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset red light pixel threshold range to generate a first red light digital signal, and one-dimensional green light signal extraction is performed on all the third-class PPG video frame images in the third-class PPG video frame image sequence according to a preset green light pixel threshold range to generate a first green light digital signal;
Here, information of two lights, red light and green light, is extracted from all the third-class PPG video frame images in the third-class PPG video frame image sequence in the following way: weighted average calculation is performed on specific pixels in each frame image to obtain a pixel average that represents the color channel data of the corresponding light in that frame image; and all frame images in the video are processed in the same way in chronological order to obtain two segments of one-dimensional digital signals: the first red light digital signal and the first green light digital signal.
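Under the assumption of RGB frame arrays and an equal-weight average over the pixels inside the preset threshold range, the per-frame channel averaging can be sketched as follows (function and parameter names are illustrative):

```python
import numpy as np

def channel_signal(frames, channel, lo, hi):
    """One sample per frame: the average of the pixels of one color channel
    whose values fall inside the preset threshold range [lo, hi]."""
    samples = []
    for frame in frames:                      # frame: H x W x 3 RGB array
        c = frame[:, :, channel].astype(float)
        mask = (c >= lo) & (c <= hi)          # keep only in-range pixels
        samples.append(c[mask].mean() if mask.any() else 0.0)
    return np.asarray(samples)                # one-dimensional digital signal

# three uniform dummy frames with pixel values 100, 120, 140
frames = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (100, 120, 140)]
red = channel_signal(frames, channel=0, lo=0, hi=255)
# red -> array([100., 120., 140.]): one red-channel sample per frame
```

The same call with `channel=1` and the green threshold range yields the first green light digital signal.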
Step 104, according to a preset band-pass filtering frequency threshold range, band-pass filtering preprocessing is performed on the first red light digital signal to generate a second red light digital signal, and band-pass filtering preprocessing is performed on the first green light digital signal to generate a second green light digital signal;
Here, signal filtering preprocessing, namely denoising, is performed on the digital signals of the two lights extracted from the video data. In Embodiment 1, band-pass filtering is used for denoising, that is, a band-pass filtering frequency threshold range is preset, and signals, interference and noise lower or higher than the band-pass filtering frequency threshold range are suppressed based on the band-pass filtering principle. Generally, the band-pass filtering frequency threshold range is 0.5-10 Hz. On some mobile terminals, a finite impulse response (FIR) filtering module is used for the band-pass filtering;
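A band-pass filter over the 0.5-10 Hz range can be sketched with SciPy; a Butterworth IIR design is used here purely for illustration (the embodiment may equally use an FIR module), and the sampling rate matches the 24 fps video example:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, lo=0.5, hi=10.0, order=4):
    """Suppress components outside the [lo, hi] Hz pass band."""
    nyq = fs / 2.0                                   # Nyquist frequency
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, signal)                    # zero-phase filtering

fs = 24.0                                            # 24 frames per second
t = np.arange(0, 5, 1 / fs)                          # 5 s of samples
raw = np.sin(2 * np.pi * 1.2 * t) + 0.5              # 1.2 Hz pulse + DC drift
clean = bandpass(raw, fs)                            # DC drift is removed
```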
Step 105, maximum frequency difference determination is performed on the second red light digital signal and the second green light digital signal to generate a first determination result, the first determination result being an up-to-standard signal identifier or a not-up-to-standard signal identifier;
Here, frequency domain signals of the second red light digital signal and the second green light digital signal are obtained through discrete Fourier transform; maximum-energy frequencies are obtained from the frequency domain signals (generally, this frequency corresponds to the heart rate); whether the maximum-energy frequencies of the two digital signals are consistent is then checked; if the difference is within an allowable error range, the first determination result is set as the up-to-standard signal identifier; otherwise, the first determination result is set as the not-up-to-standard signal identifier.
Step 106, if the first determination result is the not-up-to-standard signal identifier, the PPG signal processing process is stopped, and warning information indicating that the quality of the PPG original signal is not up to standard is returned to the upper computer.
Here, this error may be caused by many reasons. For example, due to a large distance between the skin surface of a test subject and a photographing device during the video recording process, light leakage occurs, so there may be a large deviation in the red channel data or the green channel data extracted from the video frame images, which makes the frequency difference therebetween exceed the preset range. Once the video quality is not up to standard, blood pressure data deduced from the video data will not be accurate and may even be incorrect, so the analysis of the video data should be stopped; the upper application will mark the video data as unqualified when receiving the warning information indicating that the quality of the PPG original signal is not up to standard, and may further initiate a re-photographing operation.
It should be noted that the embodiments of the disclosure further provide a computer-readable storage medium having a computer program stored therein, and when the computer program is executed by a processor, the method provided by the embodiments of the disclosure is implemented.
The embodiments of the disclosure further provide a computer program product comprising instructions. When the computer program product runs on a computer, a processor implements the method mentioned above.
According to the blood pressure prediction method and device using multiple data sources provided by the embodiments of the disclosure, two signal filtering and shaping methods are provided for directly acquired PPG signals, a video quality detection and normalized signal conversion method is provided for indirectly generated PPG signals, and a uniform standard PPG data sequence is finally generated for blood pressure prediction; the embodiments of the disclosure further provide two optional CNN models for blood pressure prediction. By adoption of the method and device provided by the embodiments of the disclosure, the capacity of an application to process various PPG signal data sources and to manage various blood pressure prediction models is improved, and the compatibility of the application with various data sources for blood pressure prediction is improved.
Those skilled in the art should further appreciate that the units and algorithm steps described in conjunction with the embodiments in this specification may be implemented by electronic hardware, computer software, or a combination of the two. To clearly explain the interchangeability of hardware and software, the components and steps of illustrative embodiments have been generally described according to their functions. Whether these functions are implemented by hardware or software depends on the specific application and the design constraints of the technical solution. For each specific application, those skilled in the art may implement these functions in different ways, which should not be construed as exceeding the scope of the disclosure.
The steps of the method or algorithm described in the embodiments in this specification may be implemented by hardware, software modules executed by a processor, or a combination of these two. The software modules may be configured in a random access memory (RAM), a memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable and programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or a storage medium in any other forms in the art.
The purposes, technical solutions and beneficial effects of the invention are described in further detail with reference to the above specific implementations. It should be understood that the above implementations are merely specific implementations of the disclosure and are not intended to limit the protection scope of the invention. Any amendments, equivalent substitutions, and improvements made based on the spirit and principle of the disclosure should also fall within the protection scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
202010189221.3 | Mar 2020 | CN | national |
This application is a national phase entry under 35 U.S.C. § 371 of International Patent Application PCT/CN2020/129646, filed Nov. 18, 2020, designating the United States of America and published as International Patent Publication WO 2021/184805 A1 on Sep. 23, 2021, which claims the benefit under Article 8 of the Patent Cooperation Treaty to Chinese Patent Application Serial No. 202010189221.3, filed Mar. 17, 2020.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2020/129646 | 11/18/2020 | WO | |