The device and method disclosed in this document relate to detecting abnormal heart sounds and, more particularly, to automated detection of abnormal heart sounds using neural networks.
Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to the prior art by inclusion in this section.
Cardiovascular diseases (CVDs) are responsible for about 17.7 million deaths every year, representing 31% of global mortality. Cardiac auscultation is the most popular non-invasive and cost-effective procedure for the early diagnosis of various heart diseases. However, effective cardiac auscultation requires trained physicians, a resource that is limited, especially in low-income countries. Automated classification of the phonocardiogram (PCG), i.e., the heart sound, has been extensively studied and researched in the past few decades. Analysis of the PCG can be broadly divided into two principal areas: (i) segmentation of the PCG signal, i.e., detection of the first and second heart sounds (S1 and S2), and (ii) classification of recordings as pathologic or physiologic.
A method of detecting abnormal heart sounds in a phonocardiogram of a person is disclosed. The method comprises: receiving, with a processor, a first segment of the phonocardiogram, the first segment comprising a time series of acoustic values from the phonocardiogram; decomposing, with the processor, the first segment into a plurality of frequency sub-band segments using a first convolutional neural network having a plurality of kernel weights that were learned in a training process of the first convolutional neural network, each frequency sub-band segment comprising a time series of acoustic values corresponding to a respective frequency sub-band of the first segment; determining, with the processor, a probability that the first segment contains an abnormal heart sound based on the plurality of frequency sub-band segments using at least one neural network; and generating, with an output device, a perceptible output depending on the probability that the first segment contains the abnormal heart sound.
A system for detecting abnormal heart sounds in a phonocardiogram of a person is also disclosed. The system includes: a stethoscope having at least one acoustic sensor configured to record the phonocardiogram of the person and a transceiver configured to transmit the phonocardiogram; and a portable electronic device having a processor, an output device, and a transceiver. The processor is configured to: operate the transceiver to receive the phonocardiogram from the stethoscope; segment the phonocardiogram into a plurality of segments, each segment comprising a time series of acoustic values corresponding to only one cardiac cycle from the phonocardiogram; for each segment in the plurality of segments: decompose the respective segment into a respective plurality of frequency sub-band segments using a first convolutional neural network, each frequency sub-band segment comprising a time series of acoustic values corresponding to a respective frequency sub-band of the respective segment; and determine a probability that the respective segment contains an abnormal heart sound based on the respective plurality of frequency sub-band segments using at least one neural network; and operate the output device to generate a perceptible output depending on the probabilities that each segment in the plurality of segments contains the abnormal heart sound.
The foregoing aspects and other features of the method and system for detecting abnormal heart sounds are explained in the following description, taken in connection with the accompanying drawings.
For the purposes of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiments illustrated in the drawings and described in the following written specification. It is understood that no limitation to the scope of the disclosure is thereby intended. It is further understood that the present disclosure includes any alterations and modifications to the illustrated embodiments and includes further applications of the principles of the disclosure as would normally occur to one skilled in the art to which this disclosure pertains.
The transceiver 26 comprises at least a transceiver, such as a Bluetooth® transceiver, configured to communicate with the portable electronic device 30, but may also include any of various other devices configured for communication with other electronic devices, including the ability to send and receive communication signals. It will be appreciated that, in alternative embodiments, the stethoscope 20 communicates with the portable electronic device 30 via a wired interface.
The microphone 28 comprises any type of acoustic sensor configured to record a phonocardiogram of the heart of the person 12 when placed near or against the chest of the person 12. Particularly, the microphone 28 is configured to convert sound waves and/or pressure changes corresponding to heart sounds into an electrical signal. In at least one embodiment, the stethoscope 20 and/or the microphone 28 includes an analog to digital converter (not shown) configured to convert an electrical signal corresponding to heart sounds into a digital phonocardiogram. In some embodiments, the microphone 28 may comprise a microelectromechanical system (MEMS) acoustic and/or pressure sensor. However, in some embodiments, the microphone 28 may comprise a traditional electret, coil, or condenser type microphone.
The transceivers 36 include at least a transceiver, such as a Bluetooth® transceiver, configured to communicate with the stethoscope 20, but may also include any of various other devices configured for communication with other electronic devices, including the ability to send and receive communication signals. In one embodiment, the transceivers 36 further include additional transceivers which are common to smart phones and/or smart watches, such as Wi-Fi transceivers and transceivers configured to communicate via wireless telephony networks.
The I/O interface 38 includes software and hardware configured to facilitate communications with the one or more interfaces of the portable electronic device 30 including the display screen 39, as well as other interfaces such as tactile buttons, switches, and/or toggles, microphones, speakers, and connection ports. The display screen 39 may be an LED screen or any of various other screens appropriate for a portable electronic device. The I/O interface 38 is in communication with the display screen 39 and is configured to visually display graphics, text, and other data to the user via the display screen 39.
The memory 34 may be of any type of device capable of storing information accessible by the processor 32, such as a memory card, ROM, RAM, hard drives, discs, flash memory, or any other computer-readable medium. The memory 34 is configured to store program instructions that, when executed by the processor 32, enable the portable electronic device 30 to perform various operations described elsewhere herein, including communicating with the stethoscope 20 to receive a phonocardiogram of the heart of the person 12, processing the phonocardiogram to identify abnormal heart sounds, and informing the user in the event that abnormal heart sounds are detected. In at least one embodiment, the memory 34 is configured to store user data 40 that may include a user profile having demographic information such as name, age, gender, height, weight, and/or other information for the person 12. The user data 40 may further include medical history information, such as previously recorded phonocardiograms and information regarding any previously detected heart abnormalities.
The memory 34 is also configured to store program instructions corresponding to at least one machine learning model, in particular to a phonocardiogram classification model 42 and classification parameters 44 thereof. The processor 32 is configured to utilize the phonocardiogram classification model 42 to extract features from the phonocardiogram of the heart of the person 12 and to classify the phonocardiogram as being normal or abnormal. As used herein, the term "machine learning model" refers to a system or set of program instructions and/or data configured to implement an algorithm or mathematical model that predicts and provides a desired output based on a given input. It will be appreciated that parameters of a machine learning model are not explicitly programmed, and the machine learning model is not necessarily designed to follow particular rules in order to provide the desired output for a given input. Instead, the machine learning model is provided with a corpus of training data from which it identifies or "learns" patterns and statistical relationships or structures in the data, which are generalized to make predictions with respect to new data inputs. The classification parameters 44 include a plurality of values for parameters of the phonocardiogram classification model 42 which were learned during a training process.
The abnormal heart sound detecting system 10 with the phonocardiogram classification model 42 improves upon traditional cardiac auscultation methods. Particularly, the abnormal heart sound detecting system 10 is non-invasive, cost-effective, and easy to use. This ease of use is beneficial not only for use by physicians for in-office diagnoses, but also for use by non-expert individuals at home and for telemedicine applications. Thus, the system 10 can have a significant impact on early diagnosis of cardiac diseases, particularly for regions of the world that suffer from a shortage and geographic maldistribution of skilled physicians. Additionally, the system 10 helps physicians make more confident decisions on heart abnormalities, which can avoid ordering unnecessary tests and lead to significant cost savings.
The classification parameters 44 of the phonocardiogram classification model 42 comprise a plurality of kernel weights and/or filter values which are learned in a training process and used by the convolutional neural network(s) to extract features from the segmented cardiac cycle and to classify the segmented cardiac cycle 100 as being normal or abnormal. The phonocardiogram classification model 42 is trained using a dataset comprising a large number of phonocardiograms, each having a large number of cardiac cycles recorded therein. In at least one embodiment, each phonocardiogram of the dataset is labeled with a corresponding class label: normal or abnormal. One example of such a dataset is the 2016 PhysioNet/CinC Challenge dataset. In one embodiment, during training, Adam is used for stochastic optimization and binary cross-entropy is chosen as the loss function to be minimized. In at least one embodiment, the training process is performed on an external device, such as a server (not shown), and the resulting classification parameters 44 are provided to the portable electronic device 30 for storage in the memory 34 and usage thereat.
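By way of illustration, a minimal Keras training sketch consistent with this embodiment is shown below; the model object, the data arrays, and the epoch and batch settings are hypothetical placeholders rather than values specified by this disclosure.

```python
import tensorflow as tf

def train(model, train_segments, train_labels):
    """Hypothetical training routine: `model` stands in for the
    phonocardiogram classification model 42, and the data arrays for
    labeled cardiac-cycle segments (e.g., from the 2016 PhysioNet/CinC
    Challenge dataset)."""
    # Adam for stochastic optimization, binary cross-entropy as the loss
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_segments, train_labels,
              epochs=50, batch_size=64, validation_split=0.1)
```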
After the classification parameters 44 are learned in the training process, the phonocardiogram classification model 42 can be used at the portable electronic device 30 to extract features from a segmented cardiac cycle 100 and classify the segmented cardiac cycle 100 as being normal or abnormal. Particularly, the portable electronic device 30 receives a digital audio waveform (hereinafter the "phonocardiogram") from the stethoscope 20 corresponding to heart sounds of the person 12. In one embodiment, in a pre-processing step, the processor 32 is configured to resample the phonocardiogram to a predetermined sample rate (e.g., 1000 Hz). In one embodiment, in a pre-processing step, the processor 32 is configured to apply a bandpass filter to eliminate extraneous frequencies that are unrelated to heart sounds (e.g., a band pass filter between 25 Hz and 500 Hz). The processor 32 of the portable electronic device 30 is configured to segment the phonocardiogram into one or more segmented cardiac cycles 100. In one embodiment, the processor 32 is configured to zero-pad the segmented cardiac cycles 100 to a predetermined length (e.g., 2.5 seconds or 2500×1).
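A minimal sketch of this pre-processing, assuming scipy is available and that cardiac-cycle boundary indices are supplied by a separate segmentation step (the segmentation algorithm itself is not shown):

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

TARGET_FS = 1000    # predetermined sample rate (Hz)
SEGMENT_LEN = 2500  # predetermined segment length (2.5 s at 1000 Hz)

def preprocess(pcg, fs):
    """Resample to 1000 Hz and band-pass filter to roughly 25-500 Hz."""
    pcg = resample_poly(pcg, TARGET_FS, fs)
    # Upper edge kept just below the 500 Hz Nyquist limit at 1000 Hz
    b, a = butter(4, [25, 499], btype="bandpass", fs=TARGET_FS)
    return filtfilt(b, a, pcg)

def to_segments(pcg, cycle_bounds):
    """Cut the PCG at cardiac-cycle boundaries and zero-pad to 2500x1."""
    segments = []
    for start, end in cycle_bounds:
        cycle = pcg[start:end][:SEGMENT_LEN]
        padded = np.zeros(SEGMENT_LEN, dtype=np.float32)
        padded[:len(cycle)] = cycle
        segments.append(padded[:, None])  # shape (2500, 1)
    return np.stack(segments)
```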
The phonocardiogram classification model 42 includes time-convolution (tConv) layers 110 configured to receive the segmented cardiac cycle 100 and decompose it into a plurality of different time series corresponding to different frequency sub-bands. In the illustrated embodiment, the phonocardiogram classification model 42 includes four time-convolution layers 110, each configured to decompose the segmented cardiac cycle 100 into a time series corresponding to a respective frequency sub-band. Each decomposed time series has dimensions equal to those of the segmented cardiac cycle 100 (e.g., 2.5 seconds or 2500×1). Each time-convolution layer 110 is implemented as a one-dimensional convolutional neural network (1D-CNN) having a kernel which is learned during the training process.
Through a local connectivity pattern of neurons between adjacent layers, the 1D-CNN of the time-convolution layer 110 is configured to perform cross-correlation between its input x[n] (i.e., the segmented cardiac cycle 100) and its kernel. The output of the convolutional layer, with a kernel of odd length N+1, can be expressed as:

y[n] = b0x[n−N/2] + b1x[n−N/2+1] + . . . + bNx[n+N/2]
where b0, b1, . . . , bN are the kernel weights, x[n] is the input signal, y[n] are the output samples, and N is the order of the filter.
Considering a causal system, the output of the convolutional layer becomes:

y[n] = σ(b0x[n−N] + b1x[n−N+1] + . . . + bNx[n] + β)
where σ(.) is the activation function and β is the bias term. Therefore, a 1D convolutional layer with linear activation (i.e., σ(x)=x) and zero bias (i.e., β=0) acts as an FIR filter with an added delay of N/2.
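For illustration only, this FIR behavior can be reproduced with numpy; the sketch below computes the output of a linear-activation, zero-bias convolutional layer as a centered cross-correlation:

```python
import numpy as np

def tconv(x, b):
    """Output of a 1D convolutional layer with linear activation and zero
    bias: the centered cross-correlation of input x[n] with kernel b."""
    return np.correlate(x, b, mode="same")

# For a symmetric kernel b, this matches scipy.signal.lfilter(b, 1.0, x)
# up to the N/2 sample delay discussed above.
```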
This FIR intuition of the time-convolution layers 110 discussed above provides new insights into the frequency and phase response of the kernel. In particular, large kernels can introduce significant phase distortion into their activations. The phase response of a filter indicates the phase shift, in radians, that each input component sinusoid will undergo. A convolutional kernel with a non-linear phase response would introduce a temporal shift between the high frequency (e.g., murmurs) and low frequency (e.g., systole and diastole) patterns in the phonocardiogram signal.
To mitigate this effect, in at least one embodiment, one or more of the time-convolution layers 110 has kernel weights that are symmetric around the kernel center, such that the time-convolution layer 110 has a linear phase response. Particularly, in at least one embodiment, one or all of the time-convolution layers 110 are trained with the constraint that their respective kernel weights are symmetric around their centers (i.e., b0=bN, b1=bN-1, b2=bN-2, etc.). This embodiment is referred to herein as a linear phase time-convolution layer (LP-tConv). We note that linear phase is the condition in which the phase response of a filter is a linear function of frequency (excluding phase wraps at +/−π radians). A time-convolution layer 110 having a kernel with weights that are symmetric around its center has a linear phase response because it introduces an equal delay for all of the passing frequencies/patterns, ensuring no phase distortion.
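One plausible way to realize this constraint in Keras is a weight constraint that re-symmetrizes the kernel after each gradient update; the sketch below assumes that mechanism (the disclosure does not prescribe one), and the kernel length of 61 is hypothetical.

```python
import tensorflow as tf

class SymmetricKernel(tf.keras.constraints.Constraint):
    """Constrains a Conv1D kernel to be symmetric in time (b_k == b_{N-k}),
    which yields a linear phase response."""
    def __call__(self, w):
        # w has shape (kernel_size, in_channels, filters); axis 0 is time
        return 0.5 * (w + tf.reverse(w, axis=[0]))

# A hypothetical LP-tConv layer with a 61-tap learnable kernel
lp_tconv = tf.keras.layers.Conv1D(
    filters=1, kernel_size=61, padding="same", use_bias=False,
    kernel_constraint=SymmetricKernel())
```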
In at least one further embodiment, one or more of the time-convolution layers 110 are configured to incorporate a forward-reverse convolution such that they have a zero phase response. This embodiment is referred to herein as a zero phase time-convolution layer (ZP-tConv). We note that a zero phase filter is a special case of a linear phase FIR filter in which the phase response is nullified.
Y(e^jω) = X(e^jω)·H(e^jω)·H*(e^jω) = X(e^jω)·|H(e^jω)|²

where X(e^jω) is the Fourier transform of the input signal x[n], Y(e^jω) is the Fourier transform of the final output y[n], H(e^jω) is the Fourier transform of the impulse response of the kernel h[n], and H*(e^jω) is its complex conjugate. We note that the flip operation in the time domain is equivalent to taking the complex conjugate in the frequency domain. Therefore, the effect of the ZP-tConv is a multiplication by the squared magnitude response in the frequency domain.
The parameters of the time-convolution layers 110, in particular the kernel weights b0, b1, . . . , bN are learned during the training process. In at least one embodiment, the kernel weights b0, b1, . . . , bN are learned and/or updated with Stochastic Gradient Descent (SGD). In at least one embodiment, the kernel weights b0, b1, . . . , bN are initialized based on equivalent FIR filter coefficients corresponding to band pass filters for a predetermined set of frequency sub-bands (e.g., 25-45, 45-80, 80-200, 200-500 Hz). In further embodiments, the kernel weights b0, b1, . . . , bN are initialized randomly or with zero values.
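A sketch of such a band-pass initialization using scipy's FIR design routine; the 61-tap length is a hypothetical choice, and the reshape matches the (kernel_size, in_channels, filters) layout of a Keras Conv1D kernel:

```python
import numpy as np
from scipy.signal import firwin

SUBBANDS = [(25, 45), (45, 80), (80, 200), (200, 500)]  # Hz
FS = 1000      # sample rate after resampling
NUM_TAPS = 61  # hypothetical odd kernel length (N + 1)

def initial_kernels():
    """FIR band-pass coefficients used to initialize the tConv kernels."""
    kernels = []
    for lo, hi in SUBBANDS:
        hi = min(hi, FS / 2 - 1)  # keep the upper edge below Nyquist
        taps = firwin(NUM_TAPS, [lo, hi], pass_zero=False, fs=FS)
        kernels.append(taps.astype(np.float32).reshape(NUM_TAPS, 1, 1))
    return kernels  # one (61, 1, 1) array per time-convolution layer
```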
It will be appreciated that the time-convolution layers 110 offer an improvement over a traditionally implemented FIR filter-bank front-end because, rather than arbitrarily selecting cutoff frequencies for each frequency sub-band (e.g., 25-45, 45-80, 80-200, 200-500 Hz), the filter characteristics of each time-convolution layer 110 are learned based on training data. In this way, the time-convolution layers 110 decompose the segmented cardiac cycle 100 into more pathologically significant frequency sub-bands, thereby making the phonocardiogram classification model 42 more effective at distinguishing pathologic heart sounds.
Returning to the structure of the phonocardiogram classification model 42, each frequency sub-band segment output by the time-convolution layers 110 is provided to a respective branch of the phonocardiogram classification model 42. Each branch includes a first convolutional layer 120, a first maxpooling layer 130, a second convolutional layer 140, and a second maxpooling layer 150. The outputs of the branches are provided to a flattening layer 160, which is followed by a multilayer perceptron network comprising a hidden fully connected layer 170 and an output layer 180.
The first convolutional layer 120 is implemented as a convolutional neural network having a predetermined number of filters with a predetermined length and/or kernel size. The first convolutional layer 120 is configured to extract features of the respective frequency sub-band segment. In the illustrated embodiment, the first convolutional layer 120 of each branch has 8 filters of length and/or kernel size 5. The first convolutional layer 120 is followed by a Rectified Linear Unit (ReLU) activation of the output. In at least one embodiment, the first convolutional layer 120 is also followed by batch normalization and/or L2 regularization. After activation, the first maxpooling layer 130 pools and/or reduces the dimensionality of the output with a predetermined pool size (e.g., 2). In at least one embodiment, after the maxpooling, a dropout layer is applied to drop out a random set (e.g., 50%) of activations.
The second convolutional layer 140 is similarly implemented as a convolutional neural network having a predetermined number of filters with a predetermined length and/or kernel size. The second convolutional layer 140 is configured to extract further features of the respective frequency sub-band segment. In at least one embodiment, the second convolutional layer 140 has fewer filters than the first convolutional layer 120. In the illustrated embodiment, the second convolutional layer 140 of each branch has 4 filters of length and/or kernel size 5. The second convolutional layer 140 is followed by a Rectified Linear Unit (ReLU) activation of the output. In at least one embodiment, the second convolutional layer 140 is also followed by batch normalization and/or L2 regularization. After activation, the second maxpooling layer 150 pools and/or reduces the dimensionality of the output with a predetermined pool size (e.g., 2). In at least one embodiment, after the maxpooling, a dropout layer is applied to drop out a random set (e.g., 50%) of activations.
The flattening layer 160 flattens and concatenates the outputs of each branch of the phonocardiogram classification model 42. After flattening and concatenation, the output is fed to the hidden fully connected layer 170 of the multilayer perceptron network. The hidden fully connected layer 170 is followed by a Rectified Linear Unit (ReLU) activation. In one embodiment, a dropout layer is applied to dropout a random set (e.g., 50%) of activations at the hidden fully connected layer 170. In one embodiment, L2 regularization is applied at the hidden fully connected layer 170. Finally, the output layer 180 of the multilayer perceptron network comprises a single neuron as output with sigmoid activation.
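Assembling the layers described above, a sketch of the overall network in Keras follows; the tConv kernel length (61) and the hidden layer width (20) are assumptions, as the disclosure does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def branch(x):
    """One per-band branch: Conv1D(8,5) then Conv1D(4,5), each followed by
    ReLU, batch normalization, max-pooling, and dropout."""
    reg = regularizers.l2(0.0486)
    for filters in (8, 4):
        x = layers.Conv1D(filters, 5, activation="relu",
                          kernel_regularizer=reg)(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
        x = layers.Dropout(0.5)(x)
    return layers.Flatten()(x)

def build_model(kernel_size=61, hidden_units=20):
    inp = layers.Input(shape=(2500, 1))  # one zero-padded cardiac cycle
    # Four learnable tConv front-end layers, one per frequency sub-band
    subbands = [layers.Conv1D(1, kernel_size, padding="same",
                              use_bias=False)(inp) for _ in range(4)]
    merged = layers.Concatenate()([branch(s) for s in subbands])
    h = layers.Dense(hidden_units, activation="relu")(merged)
    h = layers.Dropout(0.5)(h)
    out = layers.Dense(1, activation="sigmoid")(h)  # P(abnormal)
    return tf.keras.Model(inp, out)
```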
The phonocardiogram classification model 42 is configured to provide a prediction with respect to the segmented cardiac cycle 100 at the output layer 180. In the illustrated embodiment, the output layer 180 includes only a single neuron and thus provides a single output value. In particular, after sigmoid activation, the output layer provides a probability (e.g., a value between 0 and 1) that the segmented cardiac cycle 100 is abnormal and/or a probability that the segmented cardiac cycle 100 is normal. However, it will be appreciated that, in some embodiments, the output layer 180 may be configured to provide more than one output. For example, the output layer 180 may be configured to provide probabilities of various specific heart sound abnormalities, if the training data was classified and labeled with specific types of heart sound abnormalities.
As discussed above, a phonocardiogram generally comprises several cardiac cycles. Accordingly, the phonocardiogram classification model 42 may provide a predicted probability that the segmented cardiac cycle 100 is abnormal/normal for each segmented cardiac cycle 100. In one embodiment, the predicted probabilities of all the segmented cardiac cycles 100 are averaged to determine a final prediction with respect to the phonocardiogram.
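As a sketch, the per-cycle predictions can be pooled as follows, where model.predict stands in for a forward pass through the phonocardiogram classification model 42:

```python
import numpy as np

def recording_probability(model, segments):
    """Average the per-cycle abnormality probabilities into a single
    recording-level prediction."""
    per_cycle = model.predict(segments).ravel()  # one probability per cycle
    return float(np.mean(per_cycle))
```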
In one embodiment, hyper-parameters of the phonocardiogram classification model 42 are tuned for optimal performance using a Tree of Parzen Estimators. Such hyper-parameters may include learning rate (e.g., 0.0012843784), learning rate decay (e.g., 0.00011132885), dropout after convolution layers (e.g., 50%), L2 regularization in convolution layers (e.g., 0.0486), and pool size (e.g., 2).
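The Tree of Parzen Estimators algorithm is implemented in, for example, the hyperopt library. The sketch below shows a hypothetical search space; the bounds and the stubbed validation_loss objective (which in practice would train the model with the sampled settings and return a validation loss) are placeholders, not values from this disclosure.

```python
import numpy as np
from hyperopt import fmin, hp, tpe

def validation_loss(params):
    """Hypothetical objective: train with `params`, return validation loss
    (stubbed here for illustration)."""
    return 0.0  # placeholder

# Hypothetical search space mirroring the tuned hyper-parameters above
space = {
    "learning_rate": hp.loguniform("learning_rate",
                                   np.log(1e-4), np.log(1e-2)),
    "lr_decay": hp.loguniform("lr_decay", np.log(1e-5), np.log(1e-3)),
    "dropout": hp.uniform("dropout", 0.25, 0.75),
    "l2": hp.loguniform("l2", np.log(1e-3), np.log(1e-1)),
    "pool_size": hp.choice("pool_size", [2, 3]),
}

best = fmin(fn=validation_loss, space=space, algo=tpe.suggest, max_evals=100)
```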
Methods for operating the abnormal heart sound detecting system 10 are described below. In particular, methods of operating the stethoscope 20 and/or the portable electronic device 30 to detect abnormal heart sounds of a heart of a person 12 are described. In the description of the methods, statements that a method is performing some task or function refer to a controller or general purpose processor executing programmed instructions stored in non-transitory computer readable storage media operatively connected to the controller or processor to manipulate data or to operate one or more components in the abnormal heart sound detecting system 10 to perform the task or function. Particularly, the processor 22 of the stethoscope 20 and/or the processor 32 of the portable electronic device 30 above may be such a controller or processor. Alternatively, the controller or processor may be implemented with more than one processor and associated circuitry and components, each of which is configured to perform one or more of the tasks or functions described herein. It will be appreciated that some or all of the operations of the method can also be performed by a remote server or cloud processing infrastructure. Additionally, the steps of the methods may be performed in any feasible chronological order, regardless of the order shown in the figures or the order in which the steps are described.
The method 200 begins with a step of receiving a phonocardiogram from a stethoscope having at least one acoustic sensor configured to record heart sounds of a person (block 210). Particularly, with respect to the embodiments described in detail herein, the processor 22 of the stethoscope 20 is configured to operate the microphone 28 to record a phonocardiogram of the heart of the person 12, while the stethoscope 20, or at least the microphone 28 thereof, is placed near or against the chest of the person 12. The phonocardiogram is digitized and comprises a time series of acoustic values corresponding to heart sounds of the person 12. The processor 22 is configured to operate the transceiver 26 to transmit the recorded phonocardiogram to the portable electronic device 30. The processor 32 of the portable electronic device 30 is configured to operate the transceiver 36 to receive the phonocardiogram.
The method 200 continues with a step of segmenting the phonocardiogram into a plurality of segments, each segment comprising a time series of acoustic values corresponding to only one cardiac cycle from the phonocardiogram (block 220). Particularly, with respect to the embodiments described in detail herein, the processor 32 of the portable electronic device 30 is configured to segment the phonocardiogram into a plurality of segmented cardiac cycles 100. In at least one embodiment, each segmented cardiac cycle 100 comprises a time series of acoustic values corresponding to only one cardiac cycle from the phonocardiogram. In one embodiment, the processor 32 is configured to zero-pad the segmented cardiac cycles 100 to a predetermined length (e.g., 2.5 seconds). In one embodiment, in a pre-processing step, the processor 32 is configured to resample the phonocardiogram to a predetermined sample rate (e.g., 1000 Hz). In one embodiment, in a pre-processing step, the processor 32 is configured to apply a bandpass filter to eliminate extraneous frequencies that are unrelated to heart sounds (e.g., a band pass filter between 25 Hz and 500 Hz).
The method 200 continues with a step of, for each segment in the plurality of segments, decomposing the respective segment into a respective plurality of frequency sub-band segments using a first convolutional neural network, each frequency sub-band segment comprising a time series of acoustic values corresponding to a respective frequency sub-band of the respective segment (block 230). Particularly, with respect to the embodiments described in detail herein, the processor 32 of the portable electronic device 30 is configured to decompose the segmented cardiac cycle(s) 100 into a plurality of frequency sub-band segments (e.g., four different frequency sub-band segments) using the time-convolution layers 110. Each frequency sub-band segment comprises a time series of acoustic values corresponding to a respective frequency sub-band of the segmented cardiac cycle(s) 100. Each frequency sub-band segment has dimensions equal to those of the segmented cardiac cycle 100 (e.g., 2500×1).
As discussed above, each time-convolution layer 110 has a unique set of kernel weights b0j, b1j, b2j . . . , bNj, where j corresponds to the respective time-convolution layer 110. In at least one embodiment, the processor 32 is configured to determine each respective frequency sub-band segment based on a respective segmented cardiac cycle 100 using a different respective time-convolution layer 110. More particularly, in one embodiment, the processor 32 is configured to determine each respective frequency sub-band segment by calculating a cross-correlation between the respective segmented cardiac cycle 100 and the unique set of kernel weights b0j, b1j, b2j . . . , bNj of the different respective time-convolution layer 110. As a result, each time-convolution layer 110 generates a different pathologically significant frequency sub-band segment, due to the unique filtering characteristics of each time-convolution layer 110. As discussed above, exemplary frequency sub-band segments are shown in columns (B) and (C) of the drawings.
In at least one embodiment, the time-convolution layers 110 are configured as linear phase time-convolution layers (LP-tConv). Particularly, to achieve a linear phase response, the ordered sequence of kernel weights b0j, b1j, b2j . . . , bNj of each time-convolution layer 110 has values that are symmetric about the center of the ordered sequence. In other words, the time-convolution layers 110 are trained with the constraint that b0=bN, b1=bN-1, b2=bN-2, etc. By providing symmetric kernel weights, the time-convolution layer 110 provides a linear phase response.
In at least one embodiment, the time-convolution layers 110 are configured as zero phase time-convolution layers (ZP-tConv), which incorporate forward-reverse convolution. Particularly, in one embodiment, the processor 32 is configured to determine each respective frequency sub-band segment by performing forward and reverse passes through the respective time-convolution layer 110. More particularly, the processor 32 is configured to determine a first intermediate output by calculating a cross-correlation between the respective segmented cardiac cycle 100 and the kernel weights b0j, b1j, b2j . . . , bNj of the respective time-convolution layer 110. Next, the processor 32 is configured to determine a second intermediate output by flipping the first intermediate output. Next, the processor 32 is configured to determine a third intermediate output by calculating a cross-correlation between the second intermediate output and the kernel weights b0j, b1j, b2j . . . , bNj of the respective time-convolution layer 110. Finally, the processor 32 is configured to determine the respective frequency sub-band segment by flipping the third intermediate output. By performing forward and reverse passes through the respective time-convolution layer 110, the time-convolution layer 110 provides a zero phase response.
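These four steps map directly onto a few lines of numpy; the following sketch (similar in spirit to scipy.signal.filtfilt) is an illustration, not the only possible implementation:

```python
import numpy as np

def zp_tconv(segment, kernel):
    """Zero-phase forward-reverse pass through a time-convolution kernel."""
    first = np.correlate(segment, kernel, mode="same")   # forward pass
    flipped = first[::-1]                                # flip in time
    second = np.correlate(flipped, kernel, mode="same")  # reverse pass
    return second[::-1]                                  # flip back
```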
The method 200 continues with a step of, for each segment in the plurality of segments, determining a probability that the respective segment contains an abnormal heart sound based on the respective plurality of frequency sub-band segments using at least one neural network (block 240). Particularly, with respect to the embodiments described in detail herein, the processor 32 of the portable electronic device 30 is configured to determine the probability that the segmented cardiac cycle(s) 100 contain an abnormal heart sound using the remaining layers of the phonocardiogram classification model 42.
In at least one embodiment, the processor 32 is configured to provide each frequency sub-band segment to a respective branch of the phonocardiogram classification model 42. As discussed above, each branch includes the first and second convolutional layer 120 and 140, which are configured to extract features of the respective frequency sub-band segment. After activation and pooling, the processor 32 is configured to determine an intermediate output by flattening and concatenating the outputs of the branches using the flattening layer 160. Finally, at the output layer 180, the processor 32 is configured to determine the probability that the segmented cardiac cycle(s) 100 contains an abnormal heart sound based on the intermediate output using the multilayer perceptron network of the phonocardiogram classification model 42.
As discussed above, the phonocardiogram classification model 42 is configured to provide a prediction with respect to the segmented cardiac cycle 100 at the output layer 180, and a phonocardiogram generally comprises several cardiac cycles. In at least one embodiment, the processor 32 is configured to determine respective probabilities for each of a plurality of segmented cardiac cycles 100. In one embodiment, the processor 32 is configured to determine an average of the determined probabilities.
Finally, the method 200 continues with a step of generating a perceptible output depending on the probabilities that each segment in the plurality of segments contains the abnormal heart sound (block 250). Particularly, with respect to the embodiments described in detail herein, the processor 32 of the portable electronic device 30 is configured to operate an output device to generate a perceptible output depending on the determined probability or probabilities that the segmented cardiac cycle(s) 100 contains an abnormal heart sound. In one embodiment, the processor 32 is configured to operate the output device to generate a perceptible output in response to the probability and/or the average of the probabilities exceeding a predetermined threshold. In at least one embodiment, the output device is the display screen 39 of the portable electronic device 30 and the perceptible output is a notification displayed on the display screen 39 which indicates that the phonocardiogram likely includes an abnormal heart sound. However, in other embodiments, the output device may be a speaker or light that is operated to generate a perceptible output indicating that the phonocardiogram likely includes an abnormal heart sound.
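A sketch of this decision logic with a hypothetical threshold of 0.5:

```python
import numpy as np

THRESHOLD = 0.5  # hypothetical predetermined threshold

def should_alert(per_cycle_probabilities):
    """True when the average abnormality probability across the segmented
    cardiac cycles exceeds the predetermined threshold."""
    return float(np.mean(per_cycle_probabilities)) > THRESHOLD
```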
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, the same should be considered as illustrative and not restrictive in character. It is understood that only the preferred embodiments have been presented and that all changes, modifications and further applications that come within the spirit of the disclosure are desired to be protected.
This application claims the benefit of priority of U.S. provisional application Ser. No. 62/680,404, filed on Jun. 4, 2018, the disclosure of which is herein incorporated by reference in its entirety.