The present disclosure relates generally to remotely monitoring vital signs of a person and more particularly to an imaging photoplethysmography (iPPG) system and a method for remote measurements of vital signs.
Vital signs of a person, for example heart rate (HR), heart rate variability (HRV), respiration rate (RR), or blood oxygen saturation, serve as indicators of a person's current state and as a potential predictor of serious medical events. For this reason, vital signs are extensively monitored in inpatient and outpatient care settings, at home, and in other health, leisure, and fitness settings. One way of measuring the vital signs is plethysmography. Plethysmography corresponds to measurement of volume changes of an organ or a body part of a person. There are various implementations of plethysmography, such as photoplethysmography (PPG).
PPG is an optical measurement technique that evaluates a time-variant change of light reflectance or transmission of an area or volume of interest, which can be used to detect blood volume changes in microvascular bed of tissue. PPG is based on a principle that blood absorbs and reflects light differently than surrounding tissue, so variations in the blood volume with every heartbeat affect light transmission or reflectance correspondingly. PPG is often used non-invasively to make measurements at the skin surface. The PPG waveform includes a pulsatile physiological waveform attributed to cardiac-synchronous changes in the blood volume with each heartbeat and is superimposed on a slowly varying baseline with various lower frequency components attributed to other factors such as respiration, sympathetic nervous system activity, and thermoregulation.
Conventional pulse oximeters, for measuring the heart rate and the (arterial) blood oxygen saturation of a person, are attached to the skin of the person, for instance to a fingertip, earlobe, or forehead. Therefore, they are referred to as ‘contact’ PPG devices. A typical pulse oximeter can include a combination of a green LED, a blue LED, a red LED, and an infrared LED as light sources and one photodiode for detecting light that has been transmitted through patient tissue. Conventionally available pulse oximeters quickly switch between measurements at different wavelengths and thereby measure transmissivity of the same area or volume of tissue at different wavelengths. This is referred to as time-division multiplexing. The transmissivity over time at each wavelength yields the PPG signals for different wavelengths. Although contact PPG is regarded as a basically non-invasive technique, contact PPG measurement is often experienced as unpleasant, since the pulse oximeter is directly attached to the person and any cables limit the freedom to move.
Recently, non-contact, remote PPG (RPPG) for unobtrusive measurements has been introduced. RPPG utilizes light sources or, in general, radiation sources disposed remotely from the person of interest. Similarly, a detector, e.g., a camera or a photo detector, can be disposed remotely from the person of interest. RPPG is also often referred to as imaging PPG (iPPG), due to its use of imaging sensors such as cameras. (Hereinafter, the terms remote PPG (RPPG) and imaging PPG (iPPG) are used interchangeably.) Because they do not require direct contact with a person, remote photoplethysmography systems and devices are considered unobtrusive and are in that sense well suited for medical as well as non-medical everyday applications.
One advantage of camera-based vital signs monitoring versus on-body sensors is ease of use. There is no need to attach a sensor to the person, as aiming the camera at the person is sufficient. Another advantage of camera-based vital signs monitoring over on-body sensors is that cameras have greater spatial resolution than contact sensors, which mostly include a single-element detector.
One of the challenges for RPPG technology is to provide accurate measurements in a volatile environment where unique sources of noise exist. For example, in a volatile environment such as an in-vehicle environment, illumination on a driver varies drastically and suddenly during driving (e.g., while driving through shadows of buildings, trees, etc.), making it difficult to distinguish iPPG signals from other variations. Also, there is significant motion of the driver's head and face due to a number of factors, such as motion of the vehicle, the driver looking around both within and outside the car (for oncoming traffic, or into rear-view and side-view mirrors), and the like.
Several methods have been developed to enable robust camera-based vital signs measurement. One of these methods uses a narrow-band active near-infrared (NIR) illumination, where the NIR illumination greatly reduces the adverse effects of lighting variation. During driving, for example, this method can reduce adverse effects of lighting variation such as sudden variation between sunlight and shadow, or passing through streetlights and other cars' headlights, without impacting the driver's ability to see at night. However, NIR frequencies introduce new challenges for iPPG, including low signal-to-noise ratio (SNR). Reasons for this include that in the NIR portion of the spectrum, camera sensors have reduced sensitivity, and blood-flow related intensity changes have smaller magnitude. Accordingly, there is a need for a RPPG system which can accurately estimate PPG signals from the NIR frequencies.
Accordingly, it is an object of some embodiments to estimate vital signs of a person with high accuracy. To that end, some embodiments utilize imaging photoplethysmography (iPPG). It is also an objective of some embodiments to use a narrow-band near-infrared (NIR) system and determine a wavelength range that reduces illumination variations. Additionally or alternatively, some embodiments aim to use NIR monochromatic videos (or a sequence of images) to obtain multidimensional time-series data associated with different regions of a skin of the person and accurately estimate the vital signs of the person by processing the multidimensional time-series data using a deep neural network (DNN).
Some embodiments are based on the realization that the vital signs of the person can be estimated from NIR monochromatic video or a sequence of NIR images. To that end, the iPPG system obtains a sequence of NIR images of a face of a person of interest (also referred to as “person”) and partitions each image into a plurality of spatial regions. Each spatial region comprises a small portion of the face of the person. The iPPG system analyzes variation in skin color or intensity in each region of the plurality of spatial regions to estimate the vital signs of the person.
To that end, the iPPG system generates a multidimensional time-series signal, wherein the dimensions of the multidimensional signal at each time instant correspond to the number of spatial regions, and each time point corresponds to one image in the sequence of images. The multidimensional time-series signal is then provided to a deep neural network (DNN)-based module to estimate the vital signs of the person. The DNN-based module applies a time-series U-Net architecture to the multidimensional time-series data, wherein the pass-through connections of the U-Net architecture are modified to incorporate temporal recurrence for NIR imaging PPG.
Some embodiments are based on the realization that the usage of a recurrent neural network (RNN) in pass-through layers of the U-Net neural network to sequentially process the multidimensional time-series signal can enable more accurate estimation of the vital signs of the person.
Some embodiments are based on recognition that sensitivity of PPG signals to noise in measurements of intensities (e.g., pixel intensities in NIR images) of a skin of a person is caused at least in part by independent estimation of photoplethysmographic (PPG) signals from the intensities of a skin of a person measured at different spatial positions (or spatial regions). Some embodiments are based on recognition that at different locations, e.g., at different regions of the skin of the person, the measurement intensities can be subjected to different measurement noise. When the iPPG signals are independently estimated from intensities at each location (e.g., the PPG signal estimated from intensities at one skin region is estimated independently of the intensities or estimated signals from other skin regions), the independence of the different estimates may cause an estimator to fail to identify such noise.
Some embodiments are based on recognition that measured intensities at different spatial regions of the skin of the person can be subjected to different and sometimes even unrelated noise. The noise includes one or more of illumination variations, motion of the person, and the like. In contrast, heartbeat is a common source of intensity variations present in the different regions of the skin. Thus, the effect of the noise on the quality of the vital signs' estimation can be reduced when the independent estimation is replaced by a joint estimation of PPG signals measured from the intensities at different regions of the skin of the person. In this way, some embodiments can extract the PPG signal that is common to many skin regions (including regions that may also contain considerable noise), while ignoring noise signals that are not shared across many skin regions.
Some embodiments are based on recognition that it can be beneficial to estimate the PPG signals of the different skin regions collectively, because by estimating the PPG signal of the different skin regions collectively, noise affecting the estimation of the vital signs is reduced. Some embodiments are based on recognition that two types of noise are acting on the intensities of the skin, i.e., external noise and internal noise. The external noise affects the intensity of the skin due to external factors such as lighting variations, motion of the person, and resolution of the sensor measuring the intensities. The internal noise affects the intensity of the skin due to internal factors such as different effects of cardiovascular blood flow on appearance of different regions of the skin of the person. For example, the heartbeat can affect the intensity of the forehead and cheeks of the person more than it affects the intensity of the nose.
Some embodiments are based on realization that both types of noise can be addressed in the frequency domain of the intensity measurements. Specifically, the external noise is often non-periodic or has a periodic frequency different than that of a signal of interest (e.g., pulsatile signal), and thus can be detected in the frequency domain. On the other hand, the internal noise, while resulting in intensity variations or time-shifts of the intensity variations in different regions of the skin, preserves the periodicity of the common source of the intensity variations in the frequency domain.
Some embodiments aim to provide accurate estimation of the vital signs even in volatile environments where there is dramatic illumination variation. For example, in a volatile environment such as an in-vehicle environment, some embodiments provide an RPPG system suitable for estimating vital signs of a driver or passenger of a vehicle. However, during driving, illumination on a person's face can change dramatically. To address these challenges, additionally or alternatively one embodiment uses active in-car illumination, in a narrow spectral band in which the sunlight, streetlamp, and headlight and taillight spectral energy are all minimal. For example, due to the water in the atmosphere, the sunlight that reaches the earth's surface has much less energy around the NIR wavelength of 940 nm than it does at other wavelengths. The light output by streetlamps and vehicle lights is typically in the visible spectrum, with very little power at infrared frequencies. To that end, one embodiment uses an active narrow-band illumination source at or near 940 nm and a camera filter at the same frequency, which ensures that the illumination changes due to environmental ambient illumination are filtered away. Further, since this narrow frequency band is beyond the visible range, humans do not perceive this light source and thus are not distracted by its presence. Moreover, the narrower the bandwidth of the light source used in the active illumination, the narrower the bandpass filter on the camera can be, which further rejects intensity changes due to ambient illumination.
Accordingly, one embodiment uses a narrow-bandwidth (narrow-band) near-infrared (NIR) light source to illuminate the skin of the person at a narrow frequency band including a near-infrared wavelength of 940 nm and an NIR camera with a narrow-band filter overlapping the wavelengths of the narrow-band light source to measure the intensities of different regions of the skin in the narrow frequency band.
One embodiment discloses an imaging photoplethysmography (iPPG) system for estimating a vital sign of a person from images of a skin of the person, comprising: at least one processor; and memory having instructions stored thereon that, when executed by the at least one processor, cause the iPPG system to: receive a sequence of images of different regions of the skin of the person, each region including pixels of different intensities indicative of variation of coloration of the skin; transform the sequence of images into a multidimensional time-series signal, each dimension corresponding to a different region from the different regions of the skin; process the multidimensional time-series signal with a time-series U-Net neural network to generate a PPG waveform, wherein a U-shape of the time-series U-Net neural network includes a contracting path formed by a sequence of contractive layers followed by an expansive path formed by a sequence of expansive layers, wherein at least some of the contractive layers downsample their input and at least some of the expansive layers upsample their input, forming pairs of contractive and expansive layers of corresponding resolutions wherein at least some of the corresponding contractive layers and expansive layers are connected through pass-through layers. Further, at least one of the pass-through layers includes a recurrent neural network that processes its input sequentially. The at least one processor is further configured to estimate the vital sign of the person based on the PPG waveform and render the estimated vital sign of the person.
Another embodiment discloses a method for estimating a vital sign of a person, the method comprising: receiving a sequence of images of different regions of the skin of the person, each region including pixels of different intensities indicative of variation of coloration of the skin; transforming the sequence of images into a multidimensional time-series signal, each dimension corresponding to a different region from the different regions of the skin; processing the multidimensional time-series signal with a time-series U-Net neural network to generate a PPG waveform, wherein a U-shape of the time-series U-Net neural network includes a contracting path formed by a sequence of contractive layers followed by an expansive path formed by a sequence of expansive layers, wherein at least some of the contractive layers downsample their input and at least some of the expansive layers upsample their input, forming pairs of contractive and expansive layers of corresponding resolutions, wherein at least some of the corresponding contractive layers and expansive layers are connected through pass-through layers, and wherein each of the pass-through layers includes a recurrent neural network that processes its input sequentially. The method further comprises estimating the vital sign of the person based on the PPG waveform and rendering the estimated vital sign of the person.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these specific details. In other instances, apparatuses and methods are shown in block diagram form only in order to avoid obscuring the present disclosure.
As used in this specification and claims, the terms “for example,” “for instance,” and “such as,” and the verbs “comprising,” “having,” “including,” and their other verb forms, when used in conjunction with a listing of one or more components or other items, are each to be construed as open ended, meaning that the listing is not to be considered as excluding other, additional components or items. The term “based on” means at least partially based on. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
In some embodiments, the iPPG system 100 may include a near-infrared (NIR) light source configured to illuminate the skin of the person, and a camera configured to capture a monochromatic video 105 (also referred to as the NIR video 105). The NIR video 105 captures at least one body part of one or more persons (such as a face of a person). For ease of explanation, assume that the NIR video 105 captures the face of the person. The NIR video 105 includes a plurality of frames. Therefore, each frame in the NIR video 105 comprises an image 107 of the face of the person. In operation, the iPPG system 100 obtains input(s) such as the NIR video 105. In some embodiments, the image 107 in each frame of the NIR video 105 is partitioned into a plurality of spatial regions 103, where the plurality of spatial regions 103 is analyzed jointly to accurately determine the PPG waveform.
For ease of explanation, assume that the RGB video 106 captures the face of the person. The RGB video 106 includes a plurality of frames. Therefore, each frame in the RGB video 106 comprises an image 107 of the face of the person. In this embodiment (unlike the embodiment pictured in
The partitioning (segmentation) of each image 107 is based on the realization that specific areas of the body part under consideration contain the strongest PPG signal. For example, specific areas of a face (also referred to as “regions of interest (ROIs)” or simply “regions”) containing the strongest PPG signals include areas located around the forehead, cheeks, and chin (as shown in
The partitioning of each image 107 results in a sequence of images comprising different spatial regions of the plurality of spatial regions 103, where each spatial region includes a different part of the skin of the person. For example, in the NIR video 105 and the RGB video 106 of the face of the person, the image 107 in each frame of the video corresponds to the face of the person, and the plurality of spatial regions 103, formed by partitioning the image 107, may correspond to areas of the skin of the person. Further, each spatial region of the plurality of spatial regions 103 is used to determine a PPG signal. Due to occlusions of parts of the face, which may be due to one or more occluders such as hair (such as bangs over the forehead), facial hair, an object (such as sunglasses), another body part (such as a hand), and head pose or camera pose causing part of the face to not be visible in the image, some regions may not contain skin or may only partially contain skin, which may disrupt or weaken the quality of the signal from those regions.
Some embodiments are based on recognition that sensitivity of PPG signals to noise in measurements of intensities (e.g., pixel intensities in images) of a skin of a person is caused at least in part by independent estimation of PPG signals from the intensities of a skin of a person measured at different spatial positions (or spatial regions). Some embodiments are further based on recognition that at different locations, e.g., at different regions of the skin of the person, the measurement intensities can be subjected to different measurement noise. When the PPG signals are independently estimated from intensities at each spatial region (e.g., the PPG signal estimated from intensities at one skin region is estimated independently of the intensities or estimated signals from other skin regions), the independence of the different estimates may cause an estimator to fail to identify such noise affecting accuracy in determining the PPG signal.
The noise may be due to one or more of illumination variations, motion of the person, and the like. Some embodiments are based on further realization that heartbeat is a common source of the intensity variations present in the different regions of the skin. Thus, the effect of the noise on the quality of vital signs' estimation can be reduced when the independent estimation is replaced by a joint estimation of PPG signals measured from the intensities at different regions of the skin of the person.
Therefore, the iPPG system 100 jointly analyzes the plurality of spatial regions 103 in order to estimate the vital sign while reducing the effect of noise, where the vital sign is one or a combination of a pulse rate of the person and a heart rate variability (also referred to as a “heartbeat signal”) of the person. In some embodiments, the vital sign of the person is a one-dimensional signal at each time instant in a time series.
Some embodiments are based on the realization that the vital sign may be estimated accurately by adopting temporal analysis. Therefore, the iPPG system 100 is configured to extract at least one multidimensional time-series signal from the sequence of images corresponding to different regions of the skin of the person, where the time-series signal is used to determine the PPG signal to accurately estimate the vital sign.
To that end, the iPPG system 100 uses the time-series extraction module 101.
Time-Series Extraction Module:
In some embodiments, the time-series extraction module 101 is configured to receive a sequence of images from a plurality of frames of the NIR video 105 and to extract the multidimensional time-series signal from the sequence of images. In some embodiments, the time-series extraction module 101 is further configured to partition the image 107 from a frame of the NIR monochromatic video 105 into the plurality of spatial regions 103 and generate a multidimensional time series corresponding to the plurality of spatial regions 103.
In other embodiments, the time-series extraction module 101 is configured to receive a sequence of images from a plurality of frames of the RGB video 106 and to extract the multidimensional time-series signal from the sequence of images. In some embodiments, the time-series extraction module 101 is further configured to partition the image 107 from a frame of the RGB video 106 into red (R), green (G), and blue (B) channels. In some embodiments, the time-series extraction module 101 is further configured to partition each of the R, G, and B channels of the image into a plurality of spatial regions 103 and generate a multidimensional time series corresponding to the plurality of spatial regions 103.
The images 107 in the sequence of images may contain different regions of a skin of the person, where each region includes pixels of different intensities indicative of variation of coloration of the skin.
In some embodiments, each dimension of the multidimensional time-series signal obtained from the NIR monochromatic video 105 corresponds to a different spatial region from the plurality of spatial regions of skin of the person in the image 107.
In some embodiments, each dimension of the multidimensional time-series signal obtained from the RGB video 106 corresponds to a different color channel and a different spatial region from the plurality of spatial regions of skin of the person in the image 107.
Further, in some embodiments, each dimension is a signal from an explicitly tracked (alternatively, explicitly detected in each frame) region of interest (ROI) of the plurality of spatial regions of the skin of the person. The tracking (alternatively, the detection) reduces an amount of motion-related noise. However, the multidimensional time-series still contains significant noise due to factors such as landmark localization errors, lighting variations, 3D head rotations, and deformations such as facial expressions.
To recover a signal of interest (PPG signal) from the noisy multidimensional time-series signal, the multidimensional time-series signal is given to the PPG estimator module 109.
PPG Estimator Module:
The PPG estimator module 109 is configured to recover and output 111 the PPG signal from the noisy multidimensional time-series signal. Further, based on the PPG signal, the vital signs of the person are determined.
Given the semi-periodic nature of the time-series signal received by the PPG estimator module 109, the architecture of the PPG estimator module 109 is designed to extract temporal features at different time resolutions. To that end, the PPG estimator module 109 is implemented using a neural network such as a recurrent neural network (RNN), a deep neural network (DNN), and the like.
In some embodiments, the present disclosure proposes a Time-series U-net with RecurreNce for Imaging PPG (TURNIP) architecture for the PPG estimator module 109.
Some embodiments are based on the realization that the U-net is a convolutional network architecture, which has been used in image processing applications such as image segmentation. The U-net architecture is a “U”-shaped architecture that includes a contracting path on the left side and an expansive path on the right side. The U-Net architecture can be broadly categorized into an encoder network that corresponds to the contracting path, and a decoder network that corresponds to the expansive path, where the encoder network is followed by the decoder network.
The encoder network forms a first half of the U-net architecture. In the image processing applications in which the U-net architecture is typically used, the encoder is comprised of a series of spatial convolutional layers and may have max-pooling downsampling layers to encode the input image into feature representations at multiple different levels.
The decoder network forms a second half of the U-net architecture and comprises a series of convolutional layers as well as upsampling layers. The goal of the decoder network is to semantically project the (lower resolution) features learned by the encoder network back into the original (higher resolution) space. In the image processing applications in which the U-net architecture is typically used, the convolutional layers use spatial convolutions, and the input and output space are image pixel spaces.
Some embodiments are based on the realization that the input of the PPG estimator module 109 (also referred to as the “PPG estimator network”) is a multidimensional time series, and the desired output is a one-dimensional time series of the vital sign. Accordingly, in some preferred embodiments, the convolutional layers of the encoder and decoder subnetworks of the time-series U-net 109a use temporal convolutions.
Some embodiments are based on the further realization that the recurrent neural network (RNN) is a class of artificial neural networks (ANNs) where connections between nodes form a directed graph along a temporal sequence. The directed graph allows the RNN to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process variable-length sequences of inputs. Accordingly, RNNs are capable of remembering important features of past inputs, which allows the RNN to more accurately determine temporal patterns. Therefore, the RNN can form a much deeper understanding of a sequence and its context. Hence, the RNN can be used for sequential data such as time series.
In some embodiments of the proposed TURNIP architecture of the iPPG system 100, a U-Net architecture is applied to the time series data. In some embodiments, the pass-through connections incorporate 1×1 convolutions. Unlike in previous U-Nets, in TURNIP the pass-through connections are modified to incorporate temporal recurrence by using an RNN. Thus, the PPG estimator module 109 comprises a time-series U-Net neural network (also referred to as “U-net”) 109a coupled with a recurrent neural network (RNN) 109b. The U-net 109a and the RNN 109b are coupled to process the multidimensional time-series data to accurately determine the PPG waveform, where the PPG waveform is used to estimate the vital sign of the person. The workings of the proposed iPPG system 100 using the TURNIP architecture are described below in more detail with reference to
To that end, the iPPG system 100, for each NIR video 105 of the one or more videos, obtains an image (for example, image 107) from each of a sequence of image frames of the NIR video 105. Each image is partitioned or segmented into a plurality of spatial regions (for example, the spatial regions 103), resulting in a sequence of images whose spatial regions correspond to different areas of the body part. The partitioning of the image 107 is performed such that each spatial region comprises a specific area of the body part that may be strongly indicative of the PPG signal. Thus, each spatial region of the plurality of spatial regions 103 is a region of interest (ROI) for determining the PPG signal. Further, for each spatial region, a time-series signal is derived using the time-series extraction module 101.
In an example embodiment, for each NIR video 105, the time-series extraction module 101 extracts a 48-dimensional time series corresponding to pixel intensities over time of 48 facial regions (ROIs), where the facial regions correspond to the plurality of spatial regions 103. In some embodiments, the multidimensional time-series signal may have more or fewer than 48 dimensions corresponding to more or fewer than 48 facial regions.
In some embodiments, to extract the ROIs associated with a specific body part of the person in the image, a plurality of landmark locations corresponding to the specific body part of the person is localized in each image frame 107 of the video. Therefore, the plurality of landmark locations may vary depending on the body part used for PPG signal determination. In an example embodiment, when the face of the person is used for determining the PPG signal, 68 landmark locations corresponding to the face of the person (i.e., 68 facial landmarks) are localized in each image frame 107 of the video.
Some embodiments are based on the realization that due to imperfect or inconsistent landmark localization, motion jitter of estimated landmark locations in subsequent frames causes the boundaries of regions to jitter from one frame to the next, which adds noise to the extracted time series. To lessen the degree of this noise, the plurality of landmark locations are temporally smoothed prior to extracting the ROIs (e.g., the 48 facial regions).
Therefore, in some embodiments, before extracting the ROI from the plurality of landmark locations, the plurality of landmark locations are smoothed across time using a smoothing technique such as a moving average technique. In particular, a temporal kernel of a predetermined length is applied to the plurality of landmark locations over time to determine each landmark's location in each video frame image 107 as a weighted average of the estimated locations of the landmark in the preceding frames and subsequent frames within a time window corresponding to the length of the kernel.
For instance, in one embodiment, 68 landmark locations are smoothed using the moving average with a kernel of length 11 frames. The smoothed landmark locations in each frame of the NIR video 105 (that is, in each image 107) are then used to extract the 48 ROIs located around the forehead, cheeks, and chin in the frame. Then, the average intensity of the pixels in each spatial region of the 48 spatial regions is computed for the frame. In this way, an intensity value for each region in the plurality of spatial regions 103 (or ROIs) is extracted from each image, where the intensity values from the plurality of spatial regions 103 over a sequence of frames 107 (e.g., a sequence of 314 frames) form a multidimensional time series.
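The smoothing step can be sketched as follows. This is a minimal illustration, assuming a uniform (unweighted) moving-average kernel and a (frames, landmarks, coordinates) array layout; the disclosure only requires that each landmark's location be a weighted average over a temporal window, with the kernel length of 11 frames taken from the example above.

```python
import numpy as np

def smooth_landmarks(landmarks: np.ndarray, kernel_len: int = 11) -> np.ndarray:
    """Temporally smooth landmark trajectories with a moving-average kernel.

    landmarks: array of shape (T, L, 2) holding the (x, y) locations of L
    landmarks in each of T frames (shapes are illustrative assumptions).
    """
    # Uniform kernel; a weighted (e.g., triangular or Gaussian) kernel could be
    # substituted without changing the structure of the computation.
    kernel = np.ones(kernel_len) / kernel_len
    smoothed = np.empty_like(landmarks, dtype=float)
    for l in range(landmarks.shape[1]):
        for c in range(landmarks.shape[2]):
            # 'same' mode averages each location over preceding and subsequent
            # frames within the kernel length (edges are zero-padded, a minor
            # boundary approximation in this sketch).
            smoothed[:, l, c] = np.convolve(landmarks[:, l, c], kernel, mode="same")
    return smoothed
```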
The time-series extraction module 101 is configured to transform the sequence of images 107 corresponding to the plurality of spatial regions 103 into the multidimensional time-series signal. Some embodiments are based on a realization that spatial averaging reduces the impact of sources of noise, such as quantization noise of the camera that captured the video (e.g., the NIR video 105 or the RGB video 106) and minor deformations due to head and face motion of the person. To that end, pixel intensities of pixels from each spatial region of the plurality of spatial regions (also referred to as “different spatial regions”) 103 at an instant of time are averaged to produce a value for each dimension of the multidimensional time-series signal at the instant of time.
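A minimal sketch of this per-region spatial averaging is given below, under assumed array shapes; the per-frame region masks (e.g., derived from the smoothed landmarks) are taken as given here.

```python
import numpy as np

def extract_region_time_series(frames: np.ndarray, region_masks: np.ndarray) -> np.ndarray:
    """Average pixel intensities inside each skin region of every frame.

    frames:       (T, H, W) monochromatic NIR images (illustrative shapes).
    region_masks: (T, R, H, W) boolean masks, one per region and frame.
    Returns a (T, R) multidimensional time series: one value per region
    (dimension) per frame (time step).
    """
    T, R = frames.shape[0], region_masks.shape[1]
    series = np.zeros((T, R), dtype=float)
    for t in range(T):
        for r in range(R):
            mask = region_masks[t, r]
            if mask.any():
                # Spatial averaging suppresses per-pixel noise such as camera
                # quantization noise and small motion-induced deformations.
                series[t, r] = frames[t][mask].mean()
    return series
```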
In some embodiments, the time-series extraction module 101 is further configured to temporally window (or segment) the multidimensional time-series signal. Accordingly, there may be a plurality of segments of the multidimensional time-series signal, where at least some part of each segment of the plurality of segments overlaps with a subsequent segment of the plurality of segments, forming a sequence of overlapping segments. Further, the multidimensional time series corresponding to each of the segments is normalized before submitting the multidimensional time-series signal to the PPG estimator module 109, where the PPG estimator module 109 may process, using the time-series U-Net 109a, each segment from the sequence of overlapping segments of the multidimensional time-series signal.
The windowed sequences have a specific duration and a specific frame stride during inference (e.g., a 10-second duration (300 frames at 30 fps) with a 10-frame stride during inference), where the stride indicates the temporal shift, in number of frames (e.g., 10 frames), between subsequent windowed sequences (e.g., the 10-second windowed sequences).
In an example case where the vital sign to be estimated for the person is a heartbeat signal, the heartbeat signal is locally periodic, where the period of the heartbeat signal changes over time. In such a case, some embodiments are based on the realization that a 10-second window is a good compromise duration for extracting a current heart rate.
Some embodiments are based on the realization that longer strides are more efficient for training using a larger dataset. Therefore, the stride (in frames) used for windowing during training may be longer (e.g., 60 frames) than the stride used for windowing during inference (e.g., 10 frames). The length of the stride in frames may also be varied depending on the vital sign of the person to be estimated.
In some embodiments, a preamble of a specific time duration (e.g., 0.5 seconds) is added to each window. For instance, a number of additional frames (e.g., 14) are added immediately preceding a start of the window, resulting in a longer duration (e.g., 314 frames) multidimensional time series.
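The windowing described above can be sketched as follows. The default values mirror the examples (300-frame windows, 10-frame stride for inference, 14-frame preamble, so each segment spans 314 frames); the per-region zero-mean, unit-variance normalization is only one plausible choice, since the text states that each segment is normalized without fixing the method.

```python
import numpy as np

def window_time_series(series: np.ndarray, win_len: int = 300, stride: int = 10,
                       preamble: int = 14) -> list[np.ndarray]:
    """Split a (T, R) region time series into overlapping, normalized segments.

    A stride of 60 frames might instead be used during training, as noted above.
    """
    segments = []
    for start in range(preamble, series.shape[0] - win_len + 1, stride):
        # Include the preamble frames immediately preceding the window start.
        seg = series[start - preamble:start + win_len].astype(float)
        # Per-region normalization of the segment (one reasonable choice).
        seg = (seg - seg.mean(axis=0)) / (seg.std(axis=0) + 1e-8)
        segments.append(seg)  # shape (preamble + win_len, R), e.g., (314, R)
    return segments
```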
In some embodiments, where the input is an NIR video 105, the multidimensional time-series signal (e.g., the 48 dimensions of the time sequence) is fed into the PPG estimator module 109 as channels. The PPG estimator module 109 comprises a sequence of layers associated with the time-series U-net 109a and the RNN 109b forming the TURNIP architecture. The channels corresponding to the multidimensional time-series signal are combined during a forward pass through the sequence of layers. In the PPG estimator module 109, the time-series U-Net 109a with the RNN 109b maps the multidimensional time-series signal to the desired PPG signal. For each windowed sequence of the multidimensional time-series signal (e.g., the 10-second window), the TURNIP architecture extracts convolutional features at multiple temporal resolutions (e.g., three temporal resolutions). The temporal resolutions may be predefined.
Further, in some embodiments the TURNIP architecture downsamples the inputted time series by a first factor and later by an additional second factor. The first factor and the second factor for downsampling the input time series may be predefined (e.g., the first factor may be 3 and the second factor may be 2). The PPG estimator module 109 then estimates the desired PPG signal in a deterministic way.
TURNIP Architecture:
The TURNIP architecture is a neural network (for example, a DNN) based architecture, which is trained on at least one data set to accurately determine PPG signal(s) based on the multidimensional time-series data. The time-series U-Net 109a comprises the contractive path formed by a sequence of contractive layers followed by the expansive path formed by a sequence of expansive layers. The sequence of contractive layers is a combination of convolutional layers, max pooling layers, and dropout layers. Similarly, the sequence of expansive layers is a combination of convolutional layers, upsampling layers, and dropout layers. At least some of the contractive layers downsample their input multidimensional time-series signal, and at least some of the expansive layers upsample their input, forming pairs of contractive and expansive layers of corresponding resolutions. Further, at least some of the contractive layers and expansive layers are connected through pass-through layers. The plurality of contractive layers forms an encoding sub-network that can be thought of as encoding its input data into a sequence with lower temporal resolution. On the other hand, the plurality of expansive layers forms a decoding sub-network that can be thought of as decoding the input data encoded by the encoding network. Further, at least at some resolutions, the encoding sub-network and the decoding sub-network are connected by a pass-through connection. In parallel with the 1×1 convolutional pass-through connections, a specific recurrent pass-through connection is included. The specific recurrent pass-through connection is implemented using the RNN 109b. The RNN 109b processes its input sequentially, and the RNN 109b is included in each of the pass-through layers.
In a preferred embodiment, the RNN 109b is implemented using a gated recurrent unit (GRU) 113 architecture to provide temporally recurrent features. In other embodiments, the RNN 109b may be implemented using a different RNN architecture, such as a long short-term memory (LSTM) architecture. Some embodiments are based on the realization that the GRU is an advancement over the standard RNN. A GRU uses gates to control the flow of information and, unlike an LSTM, does not have a separate cell state (Ct); it only has a hidden state (Ht). At each timestamp t, the GRU takes an input Xt and the hidden state Ht-1 from the previous timestamp t−1, and it outputs a new hidden state Ht, which is then passed to the GRU at the next timestamp. There are primarily two gates in a GRU: a reset gate and an update gate. Some embodiments are based on the further realization that a GRU is faster to train than other types of RNNs, such as LSTM networks, due to its simpler architecture.
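For reference, a single GRU update can be sketched as below. This minimal NumPy version uses one common gate convention; the weight names, the convention itself, and the example sizes are assumptions, and in practice a library implementation such as torch.nn.GRU would normally be used.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU time step: input x_t and previous hidden state h_prev in, new
    hidden state h_t out (there is no separate cell state).

    W, U, b are dicts of weights for the reset ('r'), update ('z'), and
    candidate ('h') transforms; this gate convention is one common choice.
    """
    r_t = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + b["r"])              # reset gate
    z_t = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + b["z"])              # update gate
    h_cand = np.tanh(W["h"] @ x_t + U["h"] @ (r_t * h_prev) + b["h"])   # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_cand                          # new hidden state h_t

# Illustrative usage with random weights (assumed sizes).
rng = np.random.default_rng(0)
dim_x, dim_h = 4, 8
W = {g: rng.standard_normal((dim_h, dim_x)) for g in "rzh"}
U = {g: rng.standard_normal((dim_h, dim_h)) for g in "rzh"}
b = {g: np.zeros(dim_h) for g in "rzh"}
h = gru_step(rng.standard_normal(dim_x), np.zeros(dim_h), W, U, b)
```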
Contractive Path:
In the time series U-net 109a, the contractive path is formed by the sequence of contractive layers, where each contractive layer comprises a combination of one or more of a convolutional layer, a single downsampling convolutional layer, and a dropout layer. A dropout layer is a regularization layer used to reduce overfitting of a layer (for example, a convolutional layer) that it is used with and improve generalization of the corresponding layer. A dropout layer drops outputs of the layer it is used with (for example, the convolutional layer) with a specific probability p, which is also referred to as a dropout rate. The dropout rate may be predefined or calculated in real time based on a training dataset used for training the TURNIP architecture. In an example embodiment, the dropout rate (or p) of every dropout layer is equal to 0.3.
Alternatively, in some other embodiments, the contractive path of the time-series U-net 109a may not include dropout layers. In such embodiments, the contractive path is formed by the sequence of contractive layers, where each contractive layer comprises a combination of one or more of a convolutional layer and a single downsampling convolutional layer, without a dropout layer.
Further, in some embodiments of the TURNIP architecture, the sequence of contractive layers is formed by 5 contractive layers. In other embodiments, there may be more than 5 contractive layers, and in still other embodiments, there may be fewer than 5 contractive layers. In the 5 contractive layers, a first contractive layer 116a comprises two convolutional layers. The first contractive layer 116a processes its input, where the input is a multidimensional time series signal provided as multiple channels, and a multi-channel output generated by the first contractive layer 116a is submitted to one of the layers (e.g., the fourth expansive layer 118d) in the expansive path. Note that although we refer to all of the layers in the contractive path as “contractive layers” and all of the layers in the expansive path as “expansive layers,” in some embodiments not every contractive layer actually contracts the length of its input sequence. For example, in one embodiment illustrated in
Further, each of a second contractive layer 116b, a third contractive layer 116c, and a fourth contractive layer 116d comprises a convolutional layer (sometimes referred to as a “single downsampling layer,” although note as above that not every downsampling layer actually downsamples the length of its input) followed by a dropout layer with a specific dropout rate (e.g., p=0.3). In one embodiment, illustrated in
The fifth and the last contractive layer in the sequence of five contractive layers comprises two convolutional layers followed by a dropout layer with a specific dropout rate. The fifth contractive layer receives input from the fourth contractive layer and submits its output to one of the expansive layers (e.g., the first expansive layer 118a) in the expansive path.
Expansive Path:
In some embodiments, the expansive path comprises a sequence of 5 expansive layers. In one such embodiment, illustrated in
Still referring to
Similarly, for the second contractive layer 116b, third contractive layer 116c, fourth contractive layer 116d, and fifth contractive layer 116e, input channels, output channels, a kernel, and stride are specified.
In one embodiment illustrated in
Each pass-through layer, such as the first pass through layer 113a, consists of a layer of 1×1 convolutions 117 and an RNN such as a GRU 113, whose respective outputs are concatenated 115 and then passed to a corresponding layer of the expansive path.
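As one concrete illustration of this pass-through structure, the hedged PyTorch sketch below runs a 1×1 temporal convolution and a GRU in parallel over the same feature map and concatenates their outputs; the channel and hidden-state sizes in the usage line are assumptions, not values specified by the disclosure.

```python
import torch
import torch.nn as nn

class PassThrough(nn.Module):
    """Pass-through layer: a 1x1 temporal convolution in parallel with a GRU,
    with their outputs concatenated along the channel dimension."""
    def __init__(self, channels: int, gru_hidden: int):
        super().__init__()
        self.conv1x1 = nn.Conv1d(channels, channels, kernel_size=1)
        self.gru = nn.GRU(input_size=channels, hidden_size=gru_hidden, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) feature map from a contractive layer.
        conv_out = self.conv1x1(x)                 # (batch, channels, time)
        rnn_in = x.transpose(1, 2)                 # (batch, time, channels)
        rnn_out, _ = self.gru(rnn_in)              # processed sequentially in time
        rnn_out = rnn_out.transpose(1, 2)          # (batch, gru_hidden, time)
        # Concatenate the parallel (convolutional) and sequential (recurrent)
        # features; the result is passed to the corresponding expansive layer.
        return torch.cat([conv_out, rnn_out], dim=1)

# Illustrative usage: 64-channel features over 100 time steps, 32 hidden units.
out = PassThrough(64, 32)(torch.randn(1, 64, 100))  # -> (1, 96, 100)
```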
The third contractive layer 116c has 64 input channels and 128 output channels, and a convolutional kernel of size k=7 with stride s=1. An output of the third contractive layer 116c is provided to the fourth contractive layer 116d of the contractive path and to a second pass-through layer 113b, whose output is passed to the corresponding layer 118b of the expansive path. The fourth contractive layer 116d has 128 input channels and 256 output channels and a convolution using kernel size 7 and stride 1; an output of the fourth contractive layer 116d is provided to the fifth contractive layer 116e of the contractive path and to a third pass-through layer 113c, which passes its output to the corresponding expansive layer 118a. At the final stage of the contractive path, the fifth contractive layer 116e has 256 input channels and 512 output channels, a convolutional kernel size of 7, and a stride of 1. Further, the output of the fifth contractive layer 116e is provided to the first expansive layer 118a of the expansive path.
The first expansive layer 118a obtains two inputs, where a first input is obtained from the fifth contractive layer 116e, and a second input is obtained from an output of the third pass-through layer 113c. The first expansive layer 118a processes its inputs and passes on its output to the second expansive layer 118b. The second expansive layer 118b also obtains two inputs, where a first input corresponds to the output of the first expansive layer 118a, and a second input corresponds to the output of the second pass-through layer 113b.
Similarly, a first input of the third expansive layer 118c corresponds to the output of the second expansive layer 118b, and a second input of the third expansive layer 118c corresponds to the output of the first pass-through layer 113a. Further, the output of the third expansive layer 118c is provided to the fourth expansive layer 118d.
The fourth expansive layer 118d obtains a first input from the third expansive layer 118c and a second input from the first contractive layer 116a. The output of the fourth expansive layer 118d is provided to the fifth expansive layer 118e, which performs channel reduction (e.g., from 64 channels to 1 channel), followed by a dropout layer.
In some embodiments, the output of the fifth expansive layer 118e is the final output of the PPG estimator module 109. This output (e.g., a one-dimensional time series that estimates a PPG waveform) is used to obtain the output 111 of the iPPG system 100.
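The layer-by-layer description above can be summarized in a compact, hedged sketch. The channel progression (48, 64, 128, 256, 512), the kernel size of 7, the dropout rate of 0.3, the final 64-to-1 channel reduction, and the connection pattern follow the example embodiment; the placement of the factor-3 and factor-2 temporal downsampling, the GRU hidden size, the expansive-layer widths, the activation, the padding, and the interpolation-based upsampling are assumptions made for illustration only (a sketch, not the exact network of the disclosure).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Temporal convolution + ReLU + dropout (ReLU and 'same' padding assumed)."""
    def __init__(self, c_in, c_out, p=0.3):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=7, padding=3)
        self.drop = nn.Dropout(p)

    def forward(self, x):
        return self.drop(F.relu(self.conv(x)))

class PassThrough(nn.Module):
    """1x1 convolution in parallel with a GRU; outputs concatenated (compact
    form of the pass-through sketch shown earlier)."""
    def __init__(self, channels, hidden):
        super().__init__()
        self.conv1x1 = nn.Conv1d(channels, channels, kernel_size=1)
        self.gru = nn.GRU(channels, hidden, batch_first=True)

    def forward(self, x):
        r, _ = self.gru(x.transpose(1, 2))  # sequential in time
        return torch.cat([self.conv1x1(x), r.transpose(1, 2)], dim=1)

class TurnipSketch(nn.Module):
    """Hedged sketch of a TURNIP-style time-series U-Net."""
    def __init__(self, in_dims=48, gru_hidden=32):
        super().__init__()
        self.c1 = nn.Sequential(ConvBlock(in_dims, 64, p=0.0), ConvBlock(64, 64, p=0.0))
        self.c2 = nn.Sequential(ConvBlock(64, 64), nn.MaxPool1d(3))   # assumed factor-3 downsampling
        self.c3 = nn.Sequential(ConvBlock(64, 128), nn.MaxPool1d(2))  # assumed factor-2 downsampling
        self.c4 = ConvBlock(128, 256)
        self.c5 = nn.Sequential(ConvBlock(256, 512), ConvBlock(512, 512))
        self.p1 = PassThrough(64, gru_hidden)    # fed by the second contractive layer
        self.p2 = PassThrough(128, gru_hidden)   # fed by the third contractive layer
        self.p3 = PassThrough(256, gru_hidden)   # fed by the fourth contractive layer
        self.e1 = ConvBlock(512 + 256 + gru_hidden, 256)
        self.e2 = ConvBlock(256 + 128 + gru_hidden, 128)
        self.e3 = ConvBlock(128 + 64 + gru_hidden, 64)
        self.e4 = ConvBlock(64 + 64, 64)         # second input: first contractive layer's output
        self.e5 = nn.Conv1d(64, 1, kernel_size=1)  # 64 -> 1 channel reduction (dropout omitted here)

    @staticmethod
    def _up(x, skip):
        # Upsample along time so the decoder feature matches its skip connection.
        return F.interpolate(x, size=skip.shape[-1], mode="linear", align_corners=False)

    def forward(self, x):                        # x: (batch, 48, time) region time series
        f1 = self.c1(x)                          # full temporal resolution
        f2 = self.c2(f1)                         # ~1/3 resolution
        f3 = self.c3(f2)                         # ~1/6 resolution
        f4 = self.c4(f3)
        f5 = self.c5(f4)
        d = self.e1(torch.cat([f5, self.p3(f4)], dim=1))
        d = self.e2(torch.cat([self._up(d, f3), self.p2(f3)], dim=1))
        d = self.e3(torch.cat([self._up(d, f2), self.p1(f2)], dim=1))
        d = self.e4(torch.cat([self._up(d, f1), f1], dim=1))
        return self.e5(d).squeeze(1)             # (batch, time) estimated PPG waveform

ppg = TurnipSketch()(torch.randn(2, 48, 314))    # -> (2, 314)
```

Interpolating each decoder feature to the temporal length of its skip connection keeps the concatenations well defined even for window lengths (such as 314 frames) that are not exact multiples of the downsampling factors.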
At each time scale, the convolutional layers of the time series U-net 109a process all samples from the time series window (e.g., the 10-second window) in parallel. (The computation that obtains each output time step of each convolution may be performed in parallel with the corresponding computations of the other output time steps of the convolution.) In contrast, the proposed RNN layers (e.g., the GRU layers 113) process the temporal samples sequentially. This temporal recurrence has the effect of extending the temporal receptive field at each layer of the expansive path of the time series U-net 109a.
For instance, in an embodiment illustrated in
More details regarding steps executed by the iPPG system 100 to determine the PPG signal are described below with reference to
To that end, an image corresponding to each frame of the inputted NIR video is segmented into different regions, where the different regions correspond to different parts of the skin of the person in the image. The different regions of the skin of the person may be identified using landmark detection. For instance, if the body part of the person is the person's face, then the different regions of the face may be obtained using facial landmark detection.
At step 119b, the sequence of images that include different regions of the skin of the person is received by the time-series extraction module 101 of the iPPG system 100.
At step 119c, the sequence of images is transformed into a multidimensional time-series signal by the time-series extraction module 101. To that end, pixel intensities of the pixels from each spatial region of the plurality of spatial regions 103 (also referred to as “different spatial regions”) at an instant of time (e.g., in one video frame image 107) are averaged to produce a value for each dimension of the multidimensional time-series signal for the instant of time.
At step 119d, the multidimensional time-series signal is processed by the time-series U-net 109a coupled with the recurrent neural network 109b in the pass-through layers that form the TURNIP architecture. The multidimensional time-series signal is processed by the different layers of the TURNIP architecture to generate a PPG waveform, which in some embodiments is represented as a one-dimensional (1D) time series.
At step 119e, the vital signs, such as heartbeat or pulse rate of the person, are estimated based on the PPG waveform. In some embodiments, the output 111 of the iPPG system 100 comprises the vital signs.
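As a hedged example of step 119e, a pulse rate can be read off the estimated PPG waveform as its dominant frequency within a plausible heart-rate band; the FFT-peak method, the 30 fps frame rate, and the 0.7-4 Hz search band used below are illustrative assumptions rather than a method mandated by the disclosure.

```python
import numpy as np

def estimate_pulse_rate(ppg: np.ndarray, fps: float = 30.0,
                        band: tuple[float, float] = (0.7, 4.0)) -> float:
    """Estimate the pulse rate (beats per minute) as the dominant frequency of
    the PPG waveform within a plausible heart-rate band."""
    ppg = ppg - ppg.mean()                            # remove the DC component
    spectrum = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(ppg.size, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_freq                           # beats per minute
```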
In this way, the PPG estimator module 109 estimates the PPG signal from the multidimensional time-series signal extracted from the NIR video 105. To that end, the multidimensional time-series signal is temporally convolved at each layer of the TURNIP architecture. More details regarding temporal convolution are provided below with respect to
Time Series Extraction from Multi-Channel Video:
In some embodiments, such as those illustrated in
In other embodiments, however, the iPPG system or method starts with multi-channel video. The discussion of multi-channel video in this document primarily uses RGB video (i.e., video with red, green, and blue color channels) as an example of multi-channel video. However, it is to be understood that the same ideas can be similarly applied to other multi-channel video inputs, such as multi-channel NIR video, RGB-NIR four-channel video, multi-spectral video, and color video that is stored using a different color-space representation than RGB, such as YUV video, or a different permutation of the RGB color channels such as BGR.
With multi-channel video, such as RGB video, there are multiple methods for the time series extraction module to extract a time series from the multi-channel video, and different embodiments use different methods for time series extraction from multi-channel video.
To that end, an image corresponding to each frame of the inputted multi-channel video (e.g., the RGB video 106) is segmented into different regions, where the different regions correspond to different parts of the skin of the person in the image. The different regions of the skin of the person may be identified using landmark detection. For instance, if the body part of the person is the person's face, then the different regions of the face may be obtained using facial landmark detection.
At step 120b, the sequence of images that include different regions of the skin of the person is received by the time-series extraction module 101 of the iPPG system 100.
At step 120c, the sequence of images is transformed into a multidimensional time-series signal by the time-series extraction module 101. To that end, pixel intensities in each color channel of the pixels from each spatial region of the plurality of spatial regions 103 (also referred to as “different spatial regions”) at an instant of time (e.g., in one video frame image 107) are averaged to produce a value for each dimension of a per-channel multidimensional time-series signal for that color channel at the instant of time. From the color-channel multidimensional time series, a single multidimensional time series is extracted (a sketch of one such combination appears after step 120e below), e.g., using one of the methods described in
At step 120d, the multidimensional time-series signal is processed by the time-series U-net 109a coupled with the recurrent neural network 109b in the pass-through layers that form the TURNIP architecture. The multidimensional time-series signal is processed by the different layers of the TURNIP architecture to generate a PPG waveform, which in some embodiments is represented as a one-dimensional (1D) time series.
At step 120e, the vital signs, such as heartbeat or pulse rate of the person, are estimated based on the PPG waveform. In some embodiments, the output 111 of the iPPG system 100 comprises the vital signs.
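As a hedged illustration of the channel handling in step 120c, the sketch below forms a single multidimensional time series from the per-channel region series by flattening the (region, color channel) pairs into separate dimensions, consistent with the embodiment described earlier in which each dimension corresponds to a different color channel and spatial region; selecting a single channel or combining channels differently would be equally possible, and this particular choice is only illustrative.

```python
import numpy as np

def merge_channel_region_series(channel_series: np.ndarray) -> np.ndarray:
    """Form a single multidimensional time series from per-channel region series.

    channel_series: (T, R, C) array of spatially averaged intensities for R
    regions and C color channels (e.g., C = 3 for RGB), as produced in step 120c.
    Returns a (T, R * C) time series with one dimension per (region, channel) pair.
    """
    T, R, C = channel_series.shape
    return channel_series.reshape(T, R * C)
```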
In this way, the PPG estimator module 109 estimates the PPG signal from the multidimensional time-series signal extracted from the RGB video 106. To that end, the multidimensional time-series signal is temporally convolved at each layer of the TURNIP architecture. More details regarding temporal convolution are provided below with respect to
In
Let each block drawn in the figure for the input channel x(t) 201 represent the value of the channel at one time step. Further, let each coefficient of the kernel be denoted by k(τ). Assume that the size of the kernel used for convolution with the input channel 201 by the convolutional layer is 3. Since the kernel size is 3, the kernel comprises 3 coefficients, corresponding to τ=−1, 0, and 1. Further, assume that the kernel is traversed (or shifted) over the input channel 201 with a stride value of s=1 (the stride value can also be referred to as the “stride length”). Further, the output of the convolution is obtained in output channel y(t) 203. Accordingly, the temporal convolution is calculated as:
y(t) = Σ_τ x(t+τ) k(τ),   (1)
where τ=−1, 0, and 1. Thus, the kernel coefficients (also referred to as the “learnable filter”) are k(−1), k(0), and k(1).
Similarly, in
In
Thus, the three input channels are a channel 1 of an input feature map (also referred to as “a first channel”) 301, a channel 2 of an input feature map (also referred to as “a second channel”) 303, and a channel 3 of an input feature map (also referred to as “a third channel”) 305. Let the first channel 301 be denoted as x(t), the second channel 303 be denoted as y(t), and the third channel 305 be denoted as z(t), and let an output channel 307 generated after the temporal convolution of the multiple channels (301-305) be denoted as o(t). Further, let the kernel size be 3; the kernel is shifted along each of the three input channels (301-305) with a stride value of 4 frames. The temporal convolution for the multiple input channels (301-305) is calculated based on equation (1) for each input channel, and the per-channel results are summed to form the output channel. The temporal convolution is performed with as many filters as there are channels of the output feature map. In some embodiments, a learnable bias is also added to the output of each filter. In some embodiments, at least one of the temporal convolutions is followed by a non-linear activation function, such as a rectified linear unit (RELU) or sigmoidal activation function.
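A small NumPy sketch of this multi-channel temporal convolution (three input channels, a size-3 kernel per channel as in equation (1), a stride of 4 frames, a learnable bias, and a ReLU activation) is given below; evaluating the output only where the kernels fit is an illustrative boundary-handling choice.

```python
import numpy as np

def temporal_conv_multichannel(channels: np.ndarray, kernels: np.ndarray,
                               bias: float = 0.0, stride: int = 4) -> np.ndarray:
    """Temporal convolution of multiple input channels into one output channel.

    channels: (3, T) array holding x(t), y(t), z(t).
    kernels:  (3, 3) array with one learnable size-3 filter per input channel,
              ordered as [k(-1), k(0), k(1)].
    """
    n_ch, T = channels.shape
    outputs = []
    for t in range(1, T - 1, stride):                  # shift the kernels by the stride
        # Equation (1) per channel, summed over channels, plus the bias.
        o_t = bias + sum(channels[c, t - 1:t + 2] @ kernels[c] for c in range(n_ch))
        outputs.append(max(o_t, 0.0))                  # ReLU activation
    return np.array(outputs)
```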
Further, the outputs of temporal convolutions are passed to the RNN 109b via the pass-through layers (
After the RNN has sequentially processed all of the shorter time windows 405 of the input time series 401, the sequential outputs 407 of the RNN 109b are restacked into a longer time window to form the output time series 403 of the RNN, whose dimensions (time × channels) respectively represent the number of time steps in the output time series (which in some embodiments is the same as the number of time steps in the input time series) and the number of channels in the output time series. In some embodiments, the restacking of the outputs 407 into the output time series may be in the reverse order to the stacking illustrated in
Once the entire input time series 401 has been passed sequentially through the RNN and restacked into the output time series 403, it is ready to be concatenated (e.g., concatenation 115 in
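One way to realize this stacking and restacking is sketched below; flattening each short window into a single RNN input step, and the particular window and hidden sizes in the example, are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

def gru_over_stacked_windows(features: torch.Tensor, gru: nn.GRU, win: int) -> torch.Tensor:
    """Process a (batch, channels, time) feature map with a GRU by stacking it
    into shorter time windows and restacking the sequential outputs."""
    b, c, t = features.shape
    assert t % win == 0, "this sketch assumes the window length divides the sequence"
    # (batch, channels, num_windows, win) -> (batch, num_windows, channels * win)
    stacked = features.reshape(b, c, t // win, win).permute(0, 2, 1, 3).reshape(b, t // win, c * win)
    out, _ = gru(stacked)                      # GRU consumes the windows sequentially
    h = out.shape[-1]
    assert h % win == 0, "choose hidden_size as a multiple of win so outputs restack evenly"
    # Restack the per-window outputs into a full-length output time series:
    # (batch, num_windows, h) -> (batch, h // win, time)
    return out.reshape(b, t // win, h // win, win).permute(0, 2, 1, 3).reshape(b, h // win, t)

# Example with assumed sizes: 256-channel features over 300 time steps, 10-step windows.
gru = nn.GRU(input_size=256 * 10, hidden_size=64 * 10, batch_first=True)
y = gru_over_stacked_windows(torch.randn(1, 256, 300), gru, win=10)  # -> (1, 64, 300)
```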
In this way, the sequential temporal processing of the RNN 109b is coupled with the temporally parallel processing of a time-series U-Net 109a, enabling the PPG estimator module 109 to more accurately estimate the PPG signal from the multidimensional time-series signals.
Some embodiments are based on recognition that in a narrow frequency band including a near-infrared wavelength of 940 nm, the signal observed by the NIR camera is significantly weaker than a signal observed by a color intensity camera, such as an RGB camera. However, the iPPG system 100 is configured to handle such weak intensity signals by using a bandpass filter. The bandpass filter is configured to denoise measurements of pixel intensities of each spatial region of the different spatial regions. More details regarding processing of the NIR signal to estimate the iPPG signal are described below with reference to
In some embodiments, the first frequency band and the second frequency band include a near-infrared wavelength of 940 nm. The iPPG system 100 may include a filter to denoise the measurements of the intensities of each of the different regions. To that end, techniques such as robust principal components analysis (RPCA) may be used. In an embodiment, the second frequency band has a passband of width less than 20 nm, e.g., the bandpass filter has a narrow passband whose full width at half maximum (FWHM) is less than 20 nm. In other words, the overlap between the first frequency band and the second frequency band is less than 20 nm wide.
Some embodiments are based on the realization that optical filters such as bandpass filters and long-pass filters (i.e., filters that block transmission of light having a wavelength less than a first cutoff wavelength but allow transmission of light having a wavelength greater than a second cutoff wavelength) may be highly sensitive to the angle of incidence of the light passing through the filter. For example, an optical filter may be designed to transmit and block specified frequency ranges when the light enters the optical filter parallel to the axis of symmetry of the optical filter (roughly perpendicular to the optical filter's surface), i.e., at an angle of incidence of 0°. When the angle of incidence deviates from 0°, many optical filters exhibit a “blue shift,” in which the passband and/or cutoff wavelengths of the filter effectively shift to shorter wavelengths. To account for the blue shift phenomenon, some embodiments select the center of the overlap between the first and second frequency bands to have a wavelength greater than 940 nm (e.g., the center wavelength of a bandpass optical filter or the cutoff wavelengths of a long-pass optical filter are shifted to be longer than 940 nm).
Because light from different parts of the skin may be incident upon the optical filter at different angles of incidence, the optical filter transmits the light from different parts of the skin with different efficiency. In response, some embodiments use a bandpass filter with a wider passband (e.g., a bandpass optical filter whose passband is wider than 20 nm), so that the overlap between the first and second frequency bands is greater than 20 nm wide.
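As an illustration of the blue shift discussed above, a commonly used first-order approximation models the center (or cutoff) wavelength of an interference filter as a function of the angle of incidence; the effective refractive index n_eff and the wavelength values below are illustrative assumptions, not parameters of any particular filter in the disclosed system.

```python
# Sketch: approximate blue shift of an interference filter with angle of incidence,
# lambda(theta) = lambda_0 * sqrt(1 - (sin(theta) / n_eff)**2).
import numpy as np

def shifted_wavelength(lambda_0_nm, angle_deg, n_eff=2.0):
    """Effective center/cutoff wavelength at a given angle of incidence."""
    theta = np.radians(angle_deg)
    return lambda_0_nm * np.sqrt(1.0 - (np.sin(theta) / n_eff) ** 2)

# Choosing a design wavelength above 940 nm keeps the shifted passband over the
# 940 nm NIR illumination even at oblique incidence.
for lambda_0 in (940.0, 950.0):
    for angle in (0.0, 10.0, 20.0, 30.0):
        print(f"{lambda_0:.0f} nm at {angle:4.1f} deg -> "
              f"{shifted_wavelength(lambda_0, angle):.1f} nm")
```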
In some embodiments, the iPPG system 100 uses the narrow frequency band including the near-infrared frequency of 940 nm to reduce the noise due to illumination variations. As a result, the iPPG system 100 provides accurate estimation of the vital signs of the person.
Some embodiments are based on the realization that illumination intensity across a body part (e.g., the face of the person) can be non-uniform due to factors such as variation in the 3D directions of the normals across the face surface, shadows cast on the face, and different parts of the face being at different distances from the NIR light source. To make the illumination more uniform across the face, some embodiments use a plurality of NIR light sources (e.g., two NIR light sources, one placed on each side of the face at approximately equal distances from the head). In addition, horizontal and vertical diffusers are placed on the NIR light sources to widen the light beams reaching the face, thereby minimizing the illumination intensity difference between the center of the face and the periphery of the face.
Some embodiments aim to capture well-exposed images of the skin regions in order to measure strong iPPG signals. However, the intensity of the illumination is inversely proportional to the square of the distance from the light source to the face. If the person is too close to the light source, the images become saturated and may not contain the iPPG signals. If the person is too far from the light source, the images may become dim and have weaker iPPG signals. Some embodiments may select the most favorable position of the light sources and their brightness setting to avoid capturing saturated images, while recording well-exposed images at a range of possible distances between the skin regions of the person and the camera.
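For illustration, the inverse-square relationship above can be used to check whether a chosen brightness setting stays well exposed over the range of expected distances; all numeric values below (reference intensity, distances, saturation and exposure thresholds) are illustrative assumptions.

```python
# Sketch: inverse-square scaling of illumination intensity with distance, used to
# flag saturation at the nearest expected distance and underexposure at the farthest.
def relative_intensity(reference_intensity, reference_distance_m, distance_m):
    return reference_intensity * (reference_distance_m / distance_m) ** 2

reference_intensity = 180.0   # mean pixel value measured at 0.6 m (assumed)
for d in (0.4, 0.6, 0.8, 1.0, 1.2):
    value = relative_intensity(reference_intensity, 0.6, d)
    status = ("saturated" if value >= 255.0
              else "ok" if value >= 40.0
              else "underexposed")
    print(f"{d:.1f} m: {value:6.1f} ({status})")
```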
The type of U-net architecture used in the time-series U-Net 109a in some embodiments, such as the embodiment illustrated in
Further, to enable the PPG estimator module 109 to accurately estimate the PPG signal, the PPG estimator module 109 is trained. Details regarding the training of the PPG estimator module 109 are described below.
For training TURNIP, one or more training loss functions may be used. The one or more training loss functions are used to determine optimal values of the network weights such that the similarity between the ground truth and the estimated values is maximized. For instance, let y denote the ground truth PPG signal and ŷ denote the PPG signal estimated by the network, and let the Pearson correlation between two signals x and z over an N-sample window be written as

r(x, z) = Σt (x(t) − μx)(z(t) − μz) / √( Σt (x(t) − μx)² · Σt (z(t) − μz)² ),

where μx and μz are the sample means of x and z, respectively. The one or more loss functions may include one or both of temporal loss (TL) and spectral loss (SL).
To minimize TL, network (i.e., TURNIP) parameters θ are found that maximize the Pearson correlation between the estimated and ground truth PPG waveforms, e.g.,

θ* = arg min_θ [1 − r(y, ŷ)].
To minimize SL, in some embodiments the inputs to the loss function are first transformed to a frequency domain, e.g., using a fast Fourier transform (FFT), and any frequency components lying outside of the desired range of frequencies are suppressed. For example, for heart rates, the frequency components lying outside the [0.6, 2.5] Hz band are suppressed because they are outside the typical range of human heart rates. In this case, the network parameters are computed to solve:
θ* = arg min_θ [1 − r(|Y|, |Ŷ|)], where Y = FFT(y) and Ŷ = FFT(ŷ) are the Fourier transforms of the ground truth and estimated PPG signals, respectively, with frequency components outside the heart-rate band suppressed.
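For illustration only, one possible implementation of these two losses consistent with the description above is sketched below in PyTorch; the exact normalization and weighting used for training TURNIP are assumptions here.

```python
# Sketch (PyTorch): temporal loss (TL) as one minus the Pearson correlation of the
# estimated and ground-truth waveforms, and spectral loss (SL) computed on FFT
# magnitudes restricted to the 0.6-2.5 Hz heart-rate band.
import torch

def pearson(x, z, eps=1e-8):
    x = x - x.mean()
    z = z - z.mean()
    return (x * z).sum() / (x.norm() * z.norm() + eps)

def temporal_loss(y_hat, y):
    # Minimizing 1 - r drives the estimate toward the ground-truth waveform.
    return 1.0 - pearson(y_hat, y)

def spectral_loss(y_hat, y, fps=30.0, band=(0.6, 2.5)):
    n = y.numel()
    freqs = torch.fft.rfftfreq(n, d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])   # suppress out-of-band bins
    Y_hat = torch.fft.rfft(y_hat).abs() * mask
    Y = torch.fft.rfft(y).abs() * mask
    return 1.0 - pearson(Y_hat, Y)

# Example on a 10-second window sampled at 30 fps (300 samples).
y = torch.randn(300)                          # ground-truth PPG window
y_hat = torch.randn(300, requires_grad=True)  # network estimate
loss = temporal_loss(y_hat, y) + spectral_loss(y_hat, y)
loss.backward()
```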
In an example embodiment, TURNIP is trained on the MERL-Rice Near-Infrared Pulse (MR-NIRP) Car dataset. The dataset contains face videos recorded with an NIR camera fitted with a 940±5 nm bandpass filter. Frames were recorded at 30 frames per second (fps), with 640×640 resolution and fixed exposure. The ground truth PPG waveform is obtained using a finger pulse oximeter (for example, a CMS 50D+) recording at 60 fps, which is then downsampled to 30 fps and synchronized with the video recording. The dataset features 18 subjects and is divided into two main scenarios, labeled Driving (city driving) and Garage (parked with the engine running). Further, only the “minimal head motion” condition is evaluated for each scenario. The dataset includes female and male subjects, with and without facial hair. Videos are recorded both at night and during the day in different weather conditions. All recordings for the garage setting are 2 minutes long (3,600 frames), and the driving recordings range from 2 to 5 minutes (3,600-9,000 frames).
Further, the training dataset consists of subjects with heart rates ranging from 40 to 110 beats per minute (bpm). However, the heart rates of the test subjects are not uniformly distributed: for most subjects, the heart rate ranges roughly from 50 to 70 bpm, and the dataset contains only a small number of outliers. Therefore, a data augmentation technique is used to address both (i) the relatively small number of subjects and (ii) gaps in the distribution of subject heart rates. At training time, for each 10-second window, in addition to using the 48-dimensional PPG signal output by the time-series extraction module 101, the signal is also linearly resampled at rates 1+r and 1−r, where a value of r∈[0.2, 0.6] is randomly chosen for each 10-second window.
Therefore, the data augmentation is useful for those subjects with out-of-distribution heart rates. Accordingly, it is desirable to train TURNIP with as many examples as possible for a given frequency range.
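For illustration, the resampling-based augmentation described above could be implemented as follows; the use of linear interpolation via np.interp and the clamping of the window tail are implementation assumptions, and the corresponding ground-truth PPG window would be resampled identically.

```python
# Sketch: augmenting each 10-second window by linear resampling at rates 1+r and
# 1-r, which shifts the apparent heart rate up or down and fills gaps in the
# training distribution of heart rates.
import numpy as np

def resample_window(window, rate):
    """Linearly resample a (time x channels) window at the given rate, keeping
    the original number of time steps (tail samples are clamped)."""
    n = window.shape[0]
    t = np.arange(n)
    t_resampled = np.clip(t * rate, 0, n - 1)   # new[i] ~ old[i * rate]
    return np.stack([np.interp(t_resampled, t, window[:, c])
                     for c in range(window.shape[1])], axis=1)

window = np.random.randn(300, 48)    # 10 s at 30 fps, 48-dimensional signal
r = np.random.uniform(0.2, 0.6)      # randomly chosen per window
augmented = [window,
             resample_window(window, 1.0 + r),   # higher apparent heart rate
             resample_window(window, 1.0 - r)]   # lower apparent heart rate
```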
In an example embodiment, TURNIP is trained for 10 epochs, and the trained model is used for testing (also called “inference”). In another embodiment, TURNIP may be trained for fewer than 10 epochs. In an example embodiment, the Adam optimizer is selected, with a batch size of 96 and a learning rate of 1.5×10⁻⁴. The learning rate is reduced at each epoch by a factor of 0.05. Further, a train-test protocol of leave-one-subject-out cross-validation is used. At test time (i.e., inference time), the test subject's time series is windowed using the time-series extraction module 101, and the heart rate is estimated sequentially with a stride of 10 samples between the windows. In an example embodiment, one heart rate estimate is output for every 10 frames.
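For illustration, the optimizer and schedule above may be set up as sketched below; interpreting “reduced at each epoch by a factor of 0.05” as a 5% per-epoch decay (gamma=0.95) is an assumption, and the model and training loop are placeholders.

```python
# Sketch (PyTorch): Adam optimizer with a per-epoch exponential learning-rate decay.
import torch

model = torch.nn.GRU(48, 48)   # placeholder for the TURNIP network
optimizer = torch.optim.Adam(model.parameters(), lr=1.5e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

num_epochs, batch_size = 10, 96
for epoch in range(num_epochs):
    # for batch in train_loader:          # batches of 96 ten-second windows
    #     optimizer.zero_grad()
    #     loss = temporal_loss(...) + spectral_loss(...)
    #     loss.backward()
    #     optimizer.step()
    scheduler.step()                      # decay the learning rate once per epoch
```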
Further, the performance of the system is evaluated using two metrics. The first metric, the percentage of time the error is less than 6 bpm (PTE6), indicates the percentage of heart rate (HR) estimates that deviate in absolute value by less than 6 bpm from the ground truth. The error threshold is set to 6 bpm because that is the expected frequency resolution of a 10-second window. The second metric is the root-mean-squared error (RMSE) between the ground truth and estimated HR, measured in bpm for each 10-second window and averaged over the test sequence.
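The two metrics can be computed directly from the per-window heart-rate estimates, for example as follows; the array contents are illustrative.

```python
# Sketch: PTE6 and RMSE computed from per-window heart-rate estimates (in bpm).
import numpy as np

def pte6(hr_estimated, hr_ground_truth, threshold_bpm=6.0):
    """Percentage of windows whose absolute HR error is below the threshold."""
    errors = np.abs(np.asarray(hr_estimated) - np.asarray(hr_ground_truth))
    return 100.0 * np.mean(errors < threshold_bpm)

def rmse(hr_estimated, hr_ground_truth):
    """Root-mean-squared HR error in bpm over the test sequence."""
    diff = np.asarray(hr_estimated) - np.asarray(hr_ground_truth)
    return float(np.sqrt(np.mean(diff ** 2)))

hr_est = [62.0, 65.5, 71.0, 58.0]
hr_gt = [60.0, 64.0, 80.0, 59.0]
print(pte6(hr_est, hr_gt), rmse(hr_est, hr_gt))   # 75.0  ~4.7 bpm
```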
The standard deviation of the iPPG system 100 for PTE6 is considerably higher without data augmentation, indicating a high variability across subjects. Further, the impact of data augmentation on the tested subjects is analyzed.
Further, the impact of the GRU cell in the pass-through connection is analyzed. The GRUs process the feature maps sequentially at multiple time resolutions; thus, they extract features beyond the local receptive field of the convolutional kernels used in the convolutional layers of TURNIP. The addition of the GRU improves the performance of the iPPG system 100. Further, the two training loss functions, TL and SL, are compared.
The instructions stored in the memory 803 correspond to an iPPG method for estimating the vital signs of a person based on a set of iPPG waveforms measured from different regions of the person's skin. The iPPG system 800 may also include a storage device 807 configured to store various modules such as the time-series extraction module 101 and the PPG estimator module 109, where the PPG estimator module 109 comprises the time-series U-net 109a and the RNN 109b. The aforesaid modules stored in the storage device 807 are executed by the processor 801 to perform the vital sign estimation. The vital signs correspond to, for example, the pulse rate or the heart rate variability of the person. The storage device 807 can be implemented using a hard drive, an optical drive, a thumb drive, an array of drives, or any combination thereof.
The time-series extraction module 101 obtains an image from each frame of a video from the one or more videos 809 that are fed to the iPPG system 800, where the one or more videos 809 comprise a video of a body part of a person whose vital signs are to be estimated. The one or more videos may be recorded by one or more cameras. The time-series extraction module 101 may partition the image from each frame into a plurality of spatial regions corresponding to regions of interest (ROIs) of the body part that are strong indicators of the PPG signal, where the partitioning of the images into the plurality of spatial regions forms a sequence of images of the body part, each image comprising a different region of the skin of the body part. The sequence of images may be transformed into a multidimensional time-series signal, which is provided to the PPG estimator module 109. The PPG estimator module 109 uses the time-series U-net 109a and the RNN 109b to process the multidimensional time-series signal: the signal is temporally convolved, and the convolved data is further processed sequentially by the RNN 109b to estimate the PPG waveform, which is used to estimate the vital signs of the person.
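For illustration, the extraction of the multidimensional time-series signal from the video frames could proceed as in the following sketch; the fixed bounding boxes are illustrative assumptions, whereas in practice the spatial regions would be defined relative to detected skin regions and may track the face across frames.

```python
# Sketch: per-region mean pixel intensities over time, forming a
# (num_frames x num_regions) multidimensional time-series signal.
import numpy as np

def extract_time_series(frames, regions):
    """frames: (num_frames, height, width) NIR video.
    regions: list of (top, bottom, left, right) bounding boxes."""
    series = np.empty((frames.shape[0], len(regions)), dtype=np.float64)
    for r, (top, bottom, left, right) in enumerate(regions):
        series[:, r] = frames[:, top:bottom, left:right].mean(axis=(1, 2))
    return series

frames = np.random.rand(300, 640, 640)                  # 10 s of 640x640 NIR video
regions = [(100, 160, 200, 260), (100, 160, 380, 440),  # e.g., forehead patches
           (300, 360, 290, 350)]                        # e.g., a cheek patch
time_series = extract_time_series(frames, regions)      # shape: (300, 3)
```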
The iPPG system 800 includes an input interface 811 to receive the one or more videos 809. For example, the input interface 811 may be a network interface controller adapted to connect the iPPG system 800 through the bus 805 to a network 813.
Additionally or alternatively, in some implementations, the iPPG system 800 is connected to a remote sensor 815, such as a camera, to collect the one or more videos 809. In some implementations, a human machine interface (HMI) 817 within the iPPG system 800 connects the iPPG system 800 to input devices 819, such as a keyboard, a mouse, a trackball, a touchpad, a joystick, a pointing stick, a stylus, or a touchscreen, among others.
The iPPG system 800 may be linked through the bus 805 to an output interface to render the PPG waveform. For example, the iPPG system 800 may include a display interface 821 adapted to connect the iPPG system 800 to a display device 823, wherein the display device 823 may include, but is not limited to, a computer monitor, a projector, or a mobile device.
The iPPG system 800 may also include and/or be connected to an imaging interface 825 adapted to connect the iPPG system 800 to an imaging device 827.
In some embodiments, the iPPG system 800 may be connected, through the bus 805, to an application interface 829 adapted to connect the iPPG system 800 to an application system 831 that can be operated based on the estimated vital signs. In an exemplary scenario, the application system 831 is a patient monitoring system, which uses the vital signs of a patient. In another exemplary scenario, the application system 831 is a driver monitoring system, which uses the vital signs of a driver to determine whether the driver can drive safely, e.g., whether the driver is drowsy.
The camera 903 may include a CCD or CMOS sensor for converting incident light and the intensity variations thereof into an electrical signal. The camera 903 non-invasively captures light reflected from a skin portion of the patient 901. A skin portion may thereby particularly refer to the forehead, neck, wrist, part of the arm, or some other portion of the patient's skin. A light source, e.g., a near-infrared light source, may be used to illuminate the patient or a region of interest including a skin portion of the patient.
Based on the captured images, the iPPG system 800 determines the vital signs of the patient 901, such as the heart rate, the breathing rate, or the blood oxygenation of the patient 901. The determined vital signs are typically displayed on an operator interface 905. Such an operator interface 905 may be a patient bedside monitor or a remote monitoring station in a dedicated room in a hospital, in a group care facility such as a nursing home, or even in a remote location in telemedicine applications.
Further, the processor of the iPPG system 800 may produce one or more control action commands based on the estimated vital signs of the driver 1005 of the vehicle 1003. The one or more control action commands include vehicle braking, steering control, generation of an alert notification, initiation of an emergency service request, or switching of a driving mode. The one or more control action commands are transmitted to a controller 1005 of the vehicle 1003, and the controller 1005 may control the vehicle 1003 according to the one or more control action commands. For example, if the determined pulse rate of the driver is very low, the driver 1005 may be experiencing a heart attack. Consequently, the iPPG system 800 may produce control commands for reducing the speed of the vehicle and/or steering the vehicle (e.g., steering it to the shoulder of a highway and bringing it to a halt) and/or initiating an emergency service request.
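For illustration only, the mapping from an estimated pulse rate to control action commands might look like the following sketch; the thresholds and command names are hypothetical and are not values specified by the disclosed system.

```python
# Sketch: hypothetical mapping from estimated pulse rate (bpm) to control actions
# forwarded to the vehicle controller.
from typing import List

def control_actions(pulse_rate_bpm: float) -> List[str]:
    if pulse_rate_bpm < 40.0:    # abnormally low pulse rate (assumed threshold)
        return ["reduce_speed", "steer_to_shoulder", "request_emergency_service"]
    if pulse_rate_bpm > 150.0:   # abnormally high pulse rate (assumed threshold)
        return ["generate_alert", "reduce_speed"]
    return []                    # no intervention needed

print(control_actions(35.0))
```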
The above description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the above description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.
Specific details are given in the above description to provide a thorough understanding of the embodiments. However, it is understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.
Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.
Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine readable medium. A processor(s) may perform the necessary tasks.
Various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
Embodiments of the present disclosure may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts concurrently, even though shown as sequential acts in illustrative embodiments. Although the present disclosure has been described with reference to certain preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the present disclosure. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the present disclosure.
Related application data: U.S. provisional application No. 63237347, filed in August 2021.