In the accompanying drawings:
Referring to
As used herein, the term “auscultatory sound” is intended to mean a sound originating from inside a human or animal organism as a result of the biological functioning thereof, for example, as might be generated by action of the heart, lungs, other organs, or the associated vascular system; and is not intended to be limited to a particular range of frequencies—for example, not limited to a range of frequencies or sound intensities that would be audible to a human ear—but could include frequencies above, below, and in the audible range, and sound intensities that are too faint to be audible to a human ear. Furthermore, the term “auscultatory-sound sensor” is intended to mean a sound sensor that provides for transducing auscultatory sounds into a corresponding electrical or optical signal that can be subsequently processed.
The auscultatory sound sensors 12, 121′, 122′, 123′, 121″, 122″, 123″ provide for transducing the associated sounds received thereby into corresponding auscultatory sound signals 16 that are preprocessed and recorded by an associated hardware-based signal conditioning/preprocessing and recording subsystem 25, then communicated to the first wireless transceiver 18, and then wirelessly transmitted thereby to an associated second wireless transceiver 26 of an associated wireless interface 26′ of an associated docking system 27, possibly running a second portion 14.2 of the Data Recording Application (DRA) 14, 14.2 on a corresponding second specialized computer or electronic system comprising an associated second computer processor or FPGA (Field Programmable Gate Array) 28 and second memory 30, both of which are powered by an associated second power supply 32, which together provide for recording and preprocessing the associated auscultatory sound signals 16 from the auscultatory sound sensors 12, 121′, 122′, 123′, 121″, 122″, 123″.
For example, in accordance with one set of embodiments, the hardware-based signal conditioning/preprocessing and recording subsystem 25 includes an amplifier—either of fixed or programmable gain—a filter and an analog-to-digital converter (ADC). For example, in one set of embodiments, the analog-to-digital converter (ADC) is a 16-bit analog-to-digital converter (ADC) that converts a −2.25 to +2.25 volt input to a corresponding digital value of −32,768 to +32,767. Furthermore, in accordance with one set of embodiments of the amplifier, the amplifier gain is programmable to one of sixteen different levels respectively identified as levels 0 to 15, with corresponding respective gain values of 88, 249, 411, 571, 733, 894, 1055, 1216, 1382, 1543, 1705, 1865, 2027, 2188, 2350 and 2510. In accordance with another set of embodiments of the amplifier, the amplifier gain is fixed at the lowest of the above values, i.e., for this example, 88, so as to provide for avoiding the relative degradation of the associated signal-to-noise ratio (SNR) that naturally occurs at the relatively high gain levels of the programmable-gain set of embodiments.
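By way of non-limiting illustration, the following Python sketch shows how a sensor voltage might be amplified and quantized consistent with the above-described ±2.25 volt, 16-bit conversion; the function and constant names are illustrative, and the symmetric rounding is an assumption rather than a description of the actual converter.

    # Programmable gain levels 0 to 15, per the set of embodiments above
    GAINS = [88, 249, 411, 571, 733, 894, 1055, 1216,
             1382, 1543, 1705, 1865, 2027, 2188, 2350, 2510]

    V_FULL_SCALE = 2.25   # volts, zero-to-peak ADC input range
    ADC_MAX = 32767       # 16-bit signed full scale

    def adc_counts(v_in, gain_level=0):
        # Amplify the sensor voltage by the selected programmable-gain level,
        # clip to the converter input range, and quantize to a signed code.
        v = v_in * GAINS[gain_level]
        v = max(-V_FULL_SCALE, min(V_FULL_SCALE, v))
        return round(v / V_FULL_SCALE * ADC_MAX)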
It should be understood that the associated processes of the Data Recording Application (DRA) 14 could be implemented either in software-controlled hardware, hardware, or a combination of the two.
For example, in one set of embodiments, either or both the recording module 13 or docking system 27 may be constructed and operated in accordance with the disclosure of U.S. Provisional Application No. 62/575,364 filed on 20 Oct. 2017, entitled CORONARY ARTERY DISEASE DETECTION SYSTEM, or International Application No. PCT/US2018/056832 filed on 22 Oct. 2018, entitled CORONARY ARTERY DISEASE DETECTION SIGNAL PROCESSING SYSTEM AND METHOD, each of which is incorporated by reference in its entirety. Furthermore, in accordance with one set of embodiments, the auscultatory coronary-artery-disease detection system 10 may further incorporate an ECG sensor 34, for example, in one set of embodiments, an ECG sensor 34′ comprising a pair of electrodes incorporated in a corresponding pair of auscultatory sound sensors 12, wherein the signal from the ECG sensor 34′ is also preprocessed and recorded by a corresponding different signal channel of the same hardware-based signal conditioning/preprocessing and recording subsystem 25 of the recording module 13 that is used to preprocess the signals from the one or more auscultatory sound sensors 12. Alternatively, the ECG sensor 34 may comprise a separate set of a pair or plurality of electrodes that are coupled to the skin of the test subject, for example, in one set of embodiments, a pair of signal electrodes 35, 35+/− in cooperation with a ground electrode 350, wherein, referring to
The functionality of the Data Recording Application (DRA) 14 is distributed across the recording module 13 and the docking system 27. For example, referring to
The auscultatory sound sensor 12 provides for sensing sound signals that emanate from within the thorax 20 of the test-subject 22 responsive to the operation of the test-subject's heart, and the resulting flow of blood through the arteries and veins, wherein an associated build-up of deposits therewithin can cause turbulence in the associated blood flow that can generate associated cardiovascular-condition-specific sounds, the latter of which can be detected by a sufficiently-sensitive auscultatory sound sensor 12 that is acoustically coupled to the skin 38 of the thorax 20 of the test-subject 22. For some cardiovascular conditions associated with, or predictive of, a cardiovascular disease, the sound level of these cardiovascular-condition-specific sounds can be below a level that would be detectable by a human using a conventional stethoscope. However, these sound levels are susceptible to detection by a sufficiently-sensitive auscultatory sound sensor 12 that is sufficiently acoustically coupled to the skin 38 of the thorax 20 of the test-subject 22. For example, in one set of embodiments, the auscultatory sound sensor 12 may be constructed in accordance with the teachings of U.S. Provisional Application No. 62/568,155 filed on 4 Oct. 2017, entitled AUSCULTATORY SOUND SENSOR, or International Application No. PCT/US2018/054471 filed on 4 Oct. 2018, entitled AUSCULTATORY SOUND-OR-VIBRATION SENSOR, each of which is incorporated by reference in its entirety. Furthermore, in another set of embodiments, the auscultatory sound sensor 12 may be constructed in accordance with the teachings of U.S. Pat. Nos. 6,050,950, 6,053,872 or 6,179,783, which are incorporated herein by reference.
Referring also to
Generally, the adhesive interface 42 could comprise either a hydrogel layer 40′, for example, P-DERM® Hydrogel; a silicone material, for example, a P-DERM® Silicone Gel Adhesive; an acrylic material, for example, a P-DERM® Acrylic Adhesive; a rubber material; a synthetic rubber material; a hydrocolloid material; or a double-sided tape, for example, with either rubber, acrylic or silicone adhesives.
Referring to
Referring to
More particularly, referring to
More particularly, referring to
Referring again to
SmMAX=maxk(|Sm(k)|)
Then, in step (1104), the median of these maximum values is determined, as given by
SMED=median(SmMAX), 1≤m≤N
Finally, in step (1106), the scale factor SF is determined, as given by:
SF=DR0-P/SMED,
wherein DR0-P is the nominal zero-to-peak dynamic range of the auscultatory-sound-sensor time-series data S after scaling, i.e. after multiplying the acquired values by the scale factor SF. For example, in one set of embodiments, the nominal zero-to-peak dynamic range is set to be about 80 percent—more broadly, but not limiting, 80 percent plus or minus 15 percent—of the zero-to-peak range of the associated analog-to-digital converter—for example, in one set of embodiments, a 16-bit signed analog-to-digital converter—used to digitize the auscultatory-sound-sensor time-series data S in step (806). In one set of embodiments, the scale factor SF is integer-valued and, for an attached and bonded auscultatory sound sensor 12, 121′, 122′, 123′, 121″, 122″, 123″, ranges in value between 1 and 28.
If one or more of the associated auscultatory sound sensors 12, 121′, 122′, 123′, 121″, 122″, 123″ is detached from the skin 38 of the thorax 20 of the test-subject 22, then the associated level of the auscultatory sound signals 16 will be low—for example, at a noise level—resulting in a relatively large associated scale factor SF from step (1106). Accordingly, if, in step (1108), the scale factor SF is in excess of an associated threshold SFMAX, then the Data Recording Application (DRA) 14 is aborted in step (1110), and the operator 48 is alerted that the one or more auscultatory sound sensors 12, 121′, 122′, 123′, 121″, 122″, 123″ is/are detached, so that this can be remedied. For example, in one set of embodiments, the value of the threshold SFMAX is 28 for the above-described fixed-gain embodiment, i.e. for which the associated amplifier has a fixed gain of 88, feeding a 16-bit analog-to-digital converter (ADC) that provides for converting a +/−5 volt input signal to +/−32,767. Otherwise, from step (1108), if the value of the scale factor SF does not exceed the associated threshold SFMAX, in step (1112), the scale factor SF is returned to step (706) for use in scaling subsequently-recorded breath-held auscultatory sound signals 16.1.
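By way of non-limiting illustration, the following Python sketch summarizes the scale-factor logic of steps (1102) through (1112), assuming blocks[m] holds a block of time-series data S for the mth auscultatory sound sensor; the nominal dynamic range DR_0P and the integer rounding are illustrative assumptions.

    import numpy as np

    DR_0P = 0.8 * 32767   # nominal zero-to-peak dynamic range after scaling
    SF_MAX = 28           # detachment threshold for the fixed-gain embodiment

    def scale_factor(blocks):
        s_max = [np.max(np.abs(s)) for s in blocks]  # step (1102): per-sensor maxima
        s_med = np.median(s_max)                     # step (1104): median of maxima
        sf = int(round(DR_0P / s_med))               # step (1106): scale factor SF
        if sf > SF_MAX:                              # step (1108): detachment test
            raise RuntimeError("sensor detached: scale factor exceeds SF_MAX")
        return sf                                    # step (1112): SF returned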
Referring again to
Accordingly, in step (710), a next block of NS contiguous samples of auscultatory-sound-sensor time-series data S is acquired over an acquisition period δi in accordance with a first aspect 800 of a data acquisition process 800, during which time the test-subject 22 is instructed to hold their breath. For example, in one set of embodiments, the nominal acquisition period δi is 10 seconds—but at least 5 seconds—which, at a sampling frequency Fs of 24 kHz, results in NS=δi·Fs=240,000 samples.
More particularly, referring again to
Referring also to
More particularly, referring also to
Otherwise, from step (1406), if the current sample of thorax acceleration A is not the first sample, then, in step (1412), the current value of thorax velocity V is calculated by integrating the previous A0 and current A measurements of thorax acceleration, for example, using a trapezoidal rule, as follows:
V=V0+(A0+A)·dt/2,
wherein dt is the time period between samples, i.e. dt=1/Fs. Then, in step (1414), the current value of thorax displacement Y is calculated by integrating the above-calculated previous V0 and current V values of thorax velocity, for example, again using a trapezoidal rule, as follows:
Y=Y0+(V0+V)·dt/2,
Then, in step (1416), the previous values of thorax acceleration A0, thorax velocity V0 and thorax displacement Y0 are respectively set equal to the corresponding current values of thorax acceleration A, thorax velocity V and thorax displacement Y, each of which will be used in subsequent iterations of steps (1412) and (1414).
Then, in step (1418), if the current value of thorax displacement Y is greater than the current maximum value of thorax displacement YMAX—for example, as would result during a phase of chest expansion by the test-subject 22—then, in step (1420), the current maximum value of thorax displacement YMAX is set equal to the current value of thorax displacement Y and the corresponding value of the sample counter iMAX associated therewith is set to the current value of the sample counter i. Otherwise, from step (1418)—for example, as would result from other than a phase of chest expansion by the test-subject 22—if, in step (1422), the amount by which the current value of the sample counter i exceeds the value of the sample counter iMAX associated with the maximum value of thorax displacement YMAX is not equal to a threshold value Δ (the relevance of which is described more fully hereinbelow), then, in step (1424), if the current value of thorax displacement Y is less than the current minimum value of thorax displacement YMIN—for example, as would result during a phase of chest contraction by the test-subject 22—then, in step (1426), the current minimum value of thorax displacement YMIN is set equal to the current value of thorax displacement Y and the corresponding value of the sample counter iMIN associated therewith is set to the current value of the sample counter i. From either step (1420) or (1426), in step (1428), if the amount by which the current maximum value of thorax displacement YMAX exceeds the current minimum value of thorax displacement YMIN meets or exceeds a displacement threshold ΔYMAX, then, in step (1430), the BreathingFlag is set to indicate that the test-subject 22 is breathing, after which, in step (1410), the sample counter i is incremented, after which the breath-hold detection process 1400 repeats with step (1404). Similarly, from step (1428), if the displacement threshold ΔYMAX is not met, then, in step (1410), the sample counter i is incremented, after which the breath-hold detection process 1400 repeats with step (1404). Further similarly, from step (1424)—for example, as would result from other than a phase of chest contraction by the test-subject 22—if, in step (1432), the amount by which the current value of the sample counter i exceeds the value of the sample counter iMIN associated with the minimum value of thorax displacement YMIN is not equal to the threshold value Δ, then, in step (1410), the sample counter i is incremented, after which the breath-hold detection process 1400 repeats with step (1404).
If, from step (1432), the amount by which the current value of the sample counter i exceeds the value of the sample counter iMIN associated with the minimum value of thorax displacement YMIN is equal to the threshold value Δ—following a minimum chest contraction of the test-subject 22, in anticipation of subsequent chest expansion, wherein the threshold value Δ is greater than or equal to one—then, in step (1434), the peak-to-peak thorax displacement ΔY is calculated as the difference between the current maximum YMAX and minimum YMIN values of thorax displacement, and, in step (1436), the maximum value of thorax displacement YMAX is set equal to the current value of thorax displacement Y, and the value of the sample counter iMAX at which the corresponding maximum value of thorax displacement YMAX occurred is set equal to the current value of the sample counter i, in anticipation of subsequently increasing magnitudes of the current value of thorax displacement Y to be tracked in steps (1418) and (1420).
Similarly, if, from step (1422), the amount by which the current value of the sample counter i exceeds the value of the sample counter iMAX associated with the maximum value of thorax displacement YMAX is equal to the threshold value Δ—following a maximum chest expansion of the test-subject 22, in anticipation of subsequent chest contraction, wherein the threshold value Δ is greater than or equal to one—then, in step (1438), the peak-to-peak thorax displacement ΔY is calculated as the difference between the current maximum YMAX and minimum YMIN values of thorax displacement, and, in step (1440), the minimum value of thorax displacement YMIN is set equal to the current value of thorax displacement Y, and the value of the sample counter iMIN at which the corresponding minimum value of thorax displacement YMIN occurred is set equal to the current value of the sample counter i, in anticipation of subsequently decreasing magnitudes of the current value of thorax displacement Y to be tracked in steps (1424) and (1426).
Accordingly, the threshold value Δ provides for a delay to assure that a most-recent extremum of displacement has been reached, either the current maximum YMAX or minimum YMIN value of thorax displacement, before calculating the associated peak-to-peak thorax displacement ΔY.
From either step (1436) or (1440), in step (1442), if the amount of the peak-to-peak thorax displacement ΔY calculated in step (1434) or (1438), respectively, meets or exceeds the displacement threshold ΔYMAX, then, in step (1444), the BreathingFlag is set to indicate that the test-subject 22 is breathing. Otherwise, from step (1442), if the amount of the peak-to-peak thorax displacement ΔY calculated in step (1434) or (1438), respectively, does not meet or exceed the displacement threshold ΔYMAX, then, in step (1446), the BreathingFlag is reset to indicate that the test-subject 22 is not breathing. Following either step (1444) or (1446), in step (1410), the sample counter i is incremented, after which the breath-hold detection process 1400 repeats with step (1404).
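By way of non-limiting illustration, the following simplified Python sketch follows the integration and extremum-tracking logic of steps (1404) through (1446), assuming a stream of thorax-acceleration samples; the values of DELTA and DY_MAX are illustrative assumptions, and the per-step flag bookkeeping is collapsed relative to the flow described above.

    def detect_breathing(accel, fs, DELTA=3, DY_MAX=0.001):
        dt = 1.0 / fs
        breathing = False
        A0 = V0 = Y0 = V = Y = 0.0
        y_max = y_min = 0.0
        i_max = i_min = 0
        for i, A in enumerate(accel):
            if i > 0:
                V = V0 + 0.5 * (A0 + A) * dt   # step (1412): trapezoidal velocity
                Y = Y0 + 0.5 * (V0 + V) * dt   # step (1414): trapezoidal displacement
            A0, V0, Y0 = A, V, Y               # step (1416): save current values
            if Y > y_max:                      # steps (1418)/(1420): track maximum
                y_max, i_max = Y, i
                if y_max - y_min >= DY_MAX:    # steps (1428)/(1430)
                    breathing = True
            elif i - i_max == DELTA:           # step (1422): maximum confirmed
                breathing = (y_max - y_min) >= DY_MAX  # steps (1438)/(1442)-(1446)
                y_min, i_min = Y, i            # step (1440): restart minimum tracking
            elif Y < y_min:                    # steps (1424)/(1426): track minimum
                y_min, i_min = Y, i
                if y_max - y_min >= DY_MAX:    # steps (1428)/(1430)
                    breathing = True
            elif i - i_min == DELTA:           # step (1432): minimum confirmed
                breathing = (y_max - y_min) >= DY_MAX  # steps (1434)/(1442)-(1446)
                y_max, i_max = Y, i            # step (1436): restart maximum tracking
        return breathing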
Referring again to
More particularly, referring to
Returning to
In step (724), an associated noise detection (i.e. noise-screening) process—operating on either the block of scaled auscultatory-sound-sensor time-series data Ŝ, or the block of auscultatory-sensor time-series data S, in parallel with the debond detection process 1500—provides for detecting if the block of auscultatory-sound-sensor time-series data S has been corrupted by excessive noise, and if so, from step (726), that block of auscultatory-sound-sensor time-series data S is ignored, and the auscultatory-sound-sensing process 700 continues by repeating step (710) to acquire a new block of auscultatory-sound-sensor time-series data S. Otherwise, from step (726), if the block of auscultatory-sound-sensor time-series data S has not been corrupted by excessive noise, the process continues with the above-described step (720).
From step (720), if sufficient noise-screened data has been gathered for which the associated one or more auscultatory sound sensors 12, 121′, 122′, 123′, 121″, 122″, 123″ were not debonded from the skin 38 of the thorax 20 of the test-subject 22—for example, in one set of embodiments, a total duration of at least 65 seconds of recorded data—then, in step (722), at least the composite set of blocks of breath-held auscultatory-sound-sensor time-series data S acquired in step (710) are subsequently analyzed by an associated Data Analysis Application (DAA) 54 operative on the docking system 27—as illustrated in
FIGS. 10a-10f illustrate a simulation of six blocks of breath-held auscultatory-sound-sensor time-series data S recorded in accordance with the first aspect 700 of the auscultatory-sound-sensing process 700, with respective durations of δ1, δ2, δ3, δ4, δ5, and δ6, during which time periods the test-subject 22 was holding their breath, separated by periods Δ1, Δ2, Δ3, Δ4, and Δ5 of normal breathing, wherein
Alternatively, one or more of the auscultatory-sound-sensing process 700, the data acquisition process 800, the scale-factor-determination process 1300, or the de-bond detection process 1500 could be implemented with corresponding alternative processes disclosed in U.S. application Ser. No. 16/136,015 filed on 19 Sep. 2018—with particular reference to
Referring to
Referring to
More particularly, referring to
Then, in step (1714), the frequency-domain noise filter FH[ ] generated by a matched-noise-filter generation process 1800 that—referring also to
Referring to
Referring to
More particularly, referring again to
Referring again to
Referring to
Then, from step (2104), referring to
The breath-held auscultatory sound signal 16.1 in some cases can include very low frequency—for example, around 10 Hz—vibrations that are believed to be associated with movements of the entire auscultatory sound sensor 12 stimulated by chest motion. Considering the sensor housing as an inertial mass under a tangential component of gravitational force attached to an elastic surface, it is possible to initiate sensor vibration by small surface displacements. Such vibrations can be amplified by the resonance characteristics of the tissue-sensor interface. Depending on the Q-factor of the tissue-sensor system, vibrations may decay very slowly, extending well into the diastolic interval of the heart beat, contaminating the signal of interest with relatively large amplitude unwanted interference. The net effect of such interference is an unstable signal baseline and distortion of the actual underlying heart sounds. Potential sources of noise relevant to digitized acquisition of acoustic signals include: electric circuit thermal noise, quantization noise from the A/D converter, electro-magnetic interference, power line 60 Hz interference, and acoustic noises relevant to human physiology and the environment where the recording is done (ambient noise). Generally, thermal noise power and A/D converter quantization noise are very low for the bandwidth of interest and may be significant only for signals with amplitudes in the microvolt region. Furthermore, recording artifacts may significantly reduce the visibility of the signal of interest since these artifacts may have relatively high amplitude and may overlap in spectral content. These artifacts are due to uncontrolled patient movements, signals related to respiration, and sensor vibrations produced by the associated oscillating mass of the auscultatory sound sensor 12 coupled to the elastic skin tissue surface (cardio-seismographic waves). The latter type of artifact may be caused by the inertial mass of the sensor housing and can be relatively high in amplitude due to resonance properties of the sensor-tissue interface. Although the frequency of such vibrations is relatively low (around 10 Hz), the associated relatively high amplitude thereof can result in an unstable signal baseline, which complicates the detection of target signals.
However, cardiac activity may also produce low frequency signals—for example, as a result of contraction of the heart muscle, or as a result of valvular sounds—that may have valuable diagnostic information. In some situations, the spectrum of the artifacts may overlap with the acoustic spectrum of cardiac signals such as myocardium vibrations. Therefore, it can be beneficial to reduce baseline instability so as to provide for recording primarily acoustic signals originating from the cardiac cycle.
In accordance with one set of embodiments, these very low frequency artifacts may be rejected by using additional signal filtering to suppress the associated characteristic vibration frequencies, for example, using a software-implemented high-pass filter 66 having a 3 dB cut-off frequency above 10 Hz, for example, as provided for by a Savitzky-Golay-based high-pass filter 66′, wherein, in step (2210), the filtered-decimated breath-held sampled auscultatory sound signal 64 is smoothed by a Savitzky-Golay (SG) smoothing filter 68 to generate a smoothed breath-held sampled auscultatory sound signal 70, the latter of which, in step (2212), is subtracted from the filtered-decimated breath-held sampled auscultatory sound signal 64 to then generate the corresponding resulting high-pass-filtered breath-held sampled auscultatory sound signal 72 having a relatively-higher signal-to-noise ratio (SNR), but without significant distortion of the original filtered-decimated breath-held sampled auscultatory sound signal 64.
The digital Savitzky-Golay smoothing filter 68 is useful for stabilizing baseline wandering (for example, as may be exhibited in ECG signals) and provides for removing the low-frequency signal components without causing significant ringing artifacts. The Savitzky-Golay smoothing filter 68 employs a least squares approximation of a windowed signal using a polynomial function. The associated parameters of the Savitzky-Golay smoothing filter 68 include the window size M in samples, which defines the associated cut-off frequency, and the polynomial degree N used for the approximation. The associated roll-off range is fairly wide and the cut-off frequency is somewhat arbitrary. For example, for the Savitzky-Golay smoothing filter 68 used in step (2210), the associated window size M—expressed in terms of window time duration tw—is in the range of 8 milliseconds to 100 milliseconds, with N=3. For example, a window time duration tw=8 milliseconds provides a cut-off frequency of approximately 100 Hz, and a window time duration tw=25 milliseconds provides for passing signal frequencies above 40 Hz through the Savitzky-Golay-based high-pass filter 66′.
The Savitzky-Golay smoothing filter 68 is defined by a least-squares fit of windowed original samples x[t], with an Nth degree polynomial,
p(t)=a0+a1·t+a2·t^2+ . . . +aN·t^N,
so as to minimize the associated error function, E:
E=Σt=−M..M (p(t)−x[t])²,
wherein the total window width is 2M+1 samples. The associated short-time window sliding through the entire time series fits the data with a smooth curve. The frequency response of the Savitzky-Golay smoothing filter 68 depends strongly on the window size M and the polynomial order N. The normalized effective cut-off frequency of the Savitzky-Golay smoothing filter 68 is empirically given as follows, wherein fc=ωs/π, for which ωs is the radian sampling frequency:
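By way of non-limiting illustration, the following Python sketch implements a Savitzky-Golay-based high-pass filter of the kind described above, assuming the scipy.signal.savgol_filter routine; the default window duration is an illustrative choice within the 8 to 100 millisecond range given above.

    from scipy.signal import savgol_filter

    def sg_highpass(x, fs, tw=0.025, polyorder=3):
        # Smooth the signal with a Savitzky-Golay filter (step (2210)) and
        # subtract the smoothed baseline from the original (step (2212)),
        # leaving the relatively-higher-frequency content.
        window = int(tw * fs) | 1   # odd window length, in samples
        baseline = savgol_filter(x, window_length=window, polyorder=polyorder)
        return x - baseline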
Following the high-pass filter 66, 66′ of steps (2210) and (2212), from step (2214), the auscultatory sound signal acquisition and filtering process 2200 is repeated beginning with step (2202) for each of the auscultatory sound sensors 12, after which, from step (2216), the resulting blocks of high-pass-filtered breath-held sampled auscultatory sound signals 72—for each of the auscultatory sound sensors 12—are returned to step (2104).
Then, also from step (2104), referring to
Referring again to
In accordance with a first aspect, this may be accomplished using the breath-held auscultatory sound signals 16.1—or the corresponding associated high-pass-filtered breath-held sampled auscultatory sound signals 72—alone, without relying upon the associated electrographic signal 37 from the ECG sensor 34, 34′, or upon the corresponding associated filtered-decimated electrographic signal 76, for example, in accordance with the following portions of the disclosure and drawings of U.S. Pat. No. 9,364,184: Abstract,
However, in accordance with a second aspect, the electrographic signal 37 from the ECG sensor 34, 34′, and particularly, the corresponding associated filtered-decimated electrographic signal 76 responsive thereto, provides an effective basis for segmenting the breath-held auscultatory sound signals 16.1, 72 by heart cycle, after which the high-pass-filtered breath-held sampled auscultatory sound signal 72 may then be used to locate the associated dominant S1 and S2 heart sounds that provide for locating the associated heart phases of systole and diastole, data from the latter of which provides for detecting coronary artery disease responsive to information in the breath-held auscultatory sound signals 16.1, 72.
Referring also to
The normal human cardiac cycle consists of four major intervals associated with different phases of heart dynamics that generate associated audible sounds: 1) the first sound (S1) is produced by the closing of the mitral and tricuspid valves at the beginning of heart contraction, 2) during the following systolic interval, the heart contracts and pushes blood from the ventricles to the rest of the body, 3) the second sound (S2) is produced by the closing of the aortic and pulmonary valves, and 4) during the following diastolic interval, the heart is relaxed and the ventricles are filled with oxygenated blood.
Referring to
Although the QRS complex 78 is the most prominent feature, the electrographic signal 37, 76 may be distorted by low frequency baseline wandering, motion artifacts and power line interference. To stabilize the baseline, a Savitzky-Golay-based high-pass filter 66′—similar to that used in steps (2210/2212) when filtering the breath-held auscultatory sound signal 16.1—may be used to cancel low-frequency drift, for example, prior to the subsequent low-pass filtering, in step (2206′), by a fourth-order Type II Chebyshev low-pass filter, for example, having a 40 Hz cut-off frequency, and/or prior to decimation of the sampling by a factor of 10 in step (2208′), which together provide for both reducing high frequency noise and emphasizing the QRS complexes 78.
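By way of non-limiting illustration, the following Python sketch chains the conditioning steps described above, assuming the scipy.signal routines; the sampling rate, the Savitzky-Golay window duration, and the 40 dB stop-band attenuation of the Chebyshev filter are illustrative assumptions.

    from scipy.signal import savgol_filter, cheby2, filtfilt, decimate

    def preprocess_ecg(x, fs=4000):
        # Cancel low-frequency baseline drift with a Savitzky-Golay-based
        # high-pass, per the filter 66' described above.
        baseline = savgol_filter(x, window_length=int(0.5 * fs) | 1,
                                 polyorder=3)
        x = x - baseline
        # Fourth-order Type II Chebyshev low-pass at 40 Hz, step (2206').
        b, a = cheby2(4, 40, 40.0 / (fs / 2.0))
        x = filtfilt(b, a, x)
        # Decimate the sampling by a factor of 10, step (2208').
        return decimate(x, 10)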
More particularly, referring again to
Equation (9) is similar to a Shannon energy function, but with the associated discrete signal values raised to the fourth power—rather than a second power—to emphasize the difference between R-peaks 80′ and baseline noise. The value of the electrographic envelope waveform 80, Fs[ ] is calculated for each of the NPTS values of index k until, in step (2706), all points have been calculated, after which, in step (2708), the electrographic envelope waveform 80, Fs[ ] is returned to step (2606) of the electrographic segmentation process 2600.
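By way of non-limiting illustration, the following Python sketch computes a Shannon-energy-like envelope with the normalized signal raised to the fourth power, as described above; since equation (9) itself is not reproduced here, the moving-average smoothing window and the normalization are assumptions.

    import numpy as np

    def ecg_envelope(x, win=31):
        # Normalize the signal, form a fourth-power Shannon-energy-like
        # measure, and smooth it with a short moving-average window.
        xn = x / (np.max(np.abs(x)) + 1e-12)
        e = -(xn ** 4) * np.log(xn ** 4 + 1e-12)
        return np.convolve(e, np.ones(win) / win, mode="same")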
Returning to
The electrographic segmentation process 2600 for finding R-peaks is quite simple and stable when the signal-to-noise ratio (SNR) of the recording is sufficiently high. Otherwise, when the electrographic data 74 is excessively noisy, additional signal processing such as discrete wavelet decomposition may be used to further enhance R-peaks and facilitate QRS detection. Occasional relatively high amplitude transients due to patient movements or ambient interference may produce false peaks unrelated to the QRS complex, and therefore complicate segmentation.
Referring to
|tPEAK(i+1)−tPEAK(i)|≥TMIN (10)
and
|P(tPEAK(i))−median(P)|≤VMAX (11)
wherein tPEAK(i) is the time of the R-peak 80′ and P(tPEAK(i)) is the corresponding magnitude of the R-peak 80′.
Referring again to
Then, from step (5410) following R-peak detection within the last sample window 81, in step (5416), a minimum peak-threshold PMINLIM is set equal to about 60 percent of the median amplitude of the R-peaks 80′ within the window-peak array PW[ ], and, in step (5418), a maximum peak-threshold PMAXLIM is set equal to twice the median amplitude of the R-peaks 80′ within the window-peak array PW[ ]. Then, in steps (5420) through (5430), R-peaks 80′ that are outside those limits are ignored, so as to provide for ignoring noise or other spurious signal components. More particularly, beginning in step (5420), a second window counter iW—that provides for pointing to the above-detected unique R-peaks 80′—is initialized to a value of 1, and a peak counter NPEAK—that provides for counting the number of R-peaks 80′ within the above-determined amplitude thresholds—is initialized to a value of 0. Then, in step (5422), if the value of the second window counter iW is less than the number of windows NW determined above in steps (5402) through (5414), then, in step (5424), if the magnitude of the currently-pointed-to R-peak 80′, i.e. PW[iW], is greater than the minimum peak-threshold PMINLIM and less than the maximum peak-threshold PMAXLIM—indicating a potentially valid R-peak 80′—then, in step (5426), the peak counter NPEAK is incremented, and the corresponding magnitude and time of the associated R-peak 80′, i.e. PW[iW] and tPEAK_W[iW], are stored as respective values in a corresponding peak array P[NPEAK] and peak-time array tPEAK[NPEAK] (alternatively, the window-peak array PW[ ] and the window-peak-time array tPEAK_W[ ] could be reused in place for storing these values). For example, referring again to
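By way of non-limiting illustration, the following Python sketch applies the amplitude validation of steps (5416) through (5430), assuming pw and t_peak_w hold the window-peak and window-peak-time arrays PW[ ] and tPEAK_W[ ] described above.

    import numpy as np

    def validate_peaks(pw, t_peak_w):
        p_min_lim = 0.6 * np.median(pw)  # step (5416): minimum peak-threshold
        p_max_lim = 2.0 * np.median(pw)  # step (5418): maximum peak-threshold
        # Steps (5420)-(5430): keep only R-peaks within the thresholds.
        keep = [(p, t) for p, t in zip(pw, t_peak_w)
                if p_min_lim < p < p_max_lim]
        P = [p for p, _ in keep]         # peak array P[]
        t_peak = [t for _, t in keep]    # peak-time array tPEAK[]
        return P, t_peak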
Referring again to
Referring again to
The exact location of the first S1 and second S2 heart sounds produced by the respective closing of atrioventricular and semilunar valves is somewhat ambiguous, and can be particularly difficult to locate in situations when the associated breath-held auscultatory sound signals 16.1 are recorded in an environment with relatively high level ambient noise, or if there are associated heart murmurs resulting from turbulent blood flow. However, even under relatively-poor recording conditions, the first S1 and second S2 heart sounds of the cardiac cycle remain the most prominent acoustic features.
Referring again to
The value of the acoustic envelope waveform 84, Es[k] is calculated for each of the NSAMPLES values of index k until, in step (3206), all points have been calculated, after which, in step (3208), the acoustic envelope waveform 84, Es[k] is returned to step (2310) of the heart-cycle segmentation and heart-phase identification process 2300.
For envelope generation by each of the first 2700 and second 3200 envelope generation processes illustrated in
Referring again to
Referring again to
Then, in step (2316), the locations of the envelope peaks 88S1, 88S2 associated with the corresponding S1 and S2 heart sounds are validated using a normalized acoustic envelope waveform 84, Es[ ], i.e. normalized to a range of 0 to 1, and the associated local quadratic models 90.1, 90.2 thereof, in accordance with a minimum-time-spacing criterion used to remove or ignore spurious transient peaks unrelated to heart sounds, similar to equation 10 above that is associated with step (2904) of the above-described peak validation process 2900 used to validate the electrographic envelope waveform 80, Fs[ ].
Then, in step (2318), the acoustic envelope waveform 84, Es[ ] is searched relative to the associated indices kS1_PEAK, kS2_PEAK—respectively associated with the corresponding respective envelope peaks 88S1, 88S2—to find adjacent data points therein—i.e. having associated indices kS1−, kS1+, kS2−, kS2+—for which the corresponding values of the acoustic envelope waveform 84, Es(kS1−), Es(kS1+), Es(kS2−), Es(kS2+) are each about 5 percent down from, i.e. 95 percent of, the corresponding values Es(kS1_PEAK), Es(kS2_PEAK) of the associated envelope peaks 88S1, 88S2.
Then, in step (2320), respective local quadratic models ES1(k), 90.1′ and ES2(k), 90.2′ are fitted—for example, by least-squares approximation—to the three points associated with each of the corresponding respective envelope peaks 88S1, 88S2 as follows:
ES1(k)=Quadratic Fit({kS1−,Es(kS1−)},{kS1_PEAK,Es(kS1_PEAK)},{kS1+,Es(kS1+)}) (13a)
ES2(k)=Quadratic Fit({kS2−,Es(kS2−)},{kS2_PEAK,Es(kS2_PEAK)},{kS2+,Es(kS2+)}) (13b)
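By way of non-limiting illustration, the following Python sketch fits the local quadratic models of equations (13a) and (13b) through the peak sample and its two adjacent approximately-95-percent points, assuming the numpy polynomial routines; the roots of the fitted parabola then bracket the associated heart sound, consistent with the root locations used in step (2328) below.

    import numpy as np

    def quadratic_peak_model(k_minus, k_peak, k_plus, Es):
        # Least-squares parabola through the three envelope points
        # {k-, Es(k-)}, {k_PEAK, Es(k_PEAK)}, {k+, Es(k+)}.
        k = np.array([k_minus, k_peak, k_plus], dtype=float)
        e = np.array([Es[k_minus], Es[k_peak], Es[k_plus]], dtype=float)
        coeffs = np.polyfit(k, e, 2)      # [a, b, c] of a*k**2 + b*k + c
        return coeffs, np.roots(coeffs)   # roots bracket the heart sound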
Then, referring again to
Referring to
The S3, S4, and S5 sounds do not appear for every patient, but each may indicate one or more cardiac conditions, such as CAD, Aortic Insufficiency, Aortic Stenosis, Luminal Irregularities, and Mitral Regurgitation. The regions within diastole during which the S3 and S5 sounds may occur are located relative to the diastasis region of diastole, which is a relative rest period of the heart during mid-to-late diastole, during which period the heart motion is minimal. The region during which the S4 sound may occur is located relative to the R-peak 80′ at the end of the heart cycle 82 and the beginning of the next heart cycle 82.
The starting point of the diastasis region is determined using what is referred to as the Weissler or Stuber formula for the period of delay DT—in milliseconds—from an R-peak 80′ to the starting point of the diastasis region, given by the following:
DTms=(ΔTR-R_ms−350)·0.3+350
wherein ΔTR-R is the time interval in milliseconds between R-peaks 80′. In one set of embodiments, this is approximately the ending point for the region most likely to include S3, i.e. the S3 region. Accordingly, the starting point for the S3 region is determined by advancing relative to the starting point of diastasis—or, equivalently, the end point of the S3 region—by a time interval ΔTS3. For example, in this set of embodiments, the time interval commences at about 100 milliseconds prior to the starting point of diastasis and ends at the starting point of diastasis. In another set of embodiments, the time interval of the S3 region is taken to extend from about 60 milliseconds prior, to 60 milliseconds after, the starting point of diastasis. The S3 swing is calculated by subdividing the S3 region of the associated breath-held auscultatory sound signal 16.1, e.g. the high-pass-filtered breath-held sampled auscultatory sound data 72, s[ ], into a series of—i.e. one or more—time intervals, and calculating or determining one or more of the difference between the maximum and minimum amplitude values—i.e. the maximum amplitude minus the minimum amplitude—the minimum amplitude, and the maximum amplitude, for each interval in the S3 region.
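By way of non-limiting illustration, the following Python sketch locates the S3 region from the Weissler/Stuber delay and computes the per-interval swing, assuming s is one beat of the high-pass-filtered data sampled at fs and t_rr_ms is the R-R interval in milliseconds; the 100 millisecond region width and the five sub-intervals are illustrative assumptions.

    import numpy as np

    def s3_swing(s, fs, t_rr_ms, n_intervals=5):
        dt_ms = (t_rr_ms - 350.0) * 0.3 + 350.0   # Weissler/Stuber delay DT
        end = int(dt_ms / 1000.0 * fs)            # start of diastasis
        start = max(0, end - int(0.100 * fs))     # 100 ms before diastasis
        intervals = np.array_split(s[start:end], n_intervals)
        # swing = maximum amplitude minus minimum amplitude per interval
        return [float(seg.max() - seg.min()) for seg in intervals]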
In addition to the above time-domain analysis, the associated breath-held auscultatory sound signal 16.1 is also analyzed in the frequency domain using a Short-Time Fourier Transform (STFT), for example, in one set of embodiments, having a 1 Hz frequency resolution and a 0.0025 second time resolution—but generally, using frequency and time resolutions that may be empirically adjusted to improve detection and discrimination—in cooperation with an associated windowing method, for example, using Chebyshev windowing on a sliding window that is moved along the S3 region of the breath-held auscultatory sound signal 16.1. The frequency-domain features for each heart beat are generated by calculating the mean and median of each of the windows of the STFT.
The S3 swing values and frequency-domain features are saved for further calculations and/or for use as an input to one or more of the below-described classification processes. In view of there being relatively few patients that clearly exhibit an S3 sound, an unsupervised clustering method is applied to all the generated features to classify heart beats into two clusters that respectively include “with S3” and “without S3” heart beats. S3 is analyzed on a beat-by-beat basis. Given that the S3 is not an intermittent signal, one strategy is to analyze all heart beats from a given patient, and if S3 appears in more than one third of all the heart beats, that patient would be identified as having S3. There are some patients who exhibit a low ejection-fraction ratio (e.g. an ejection fraction (E.F.) lower than 35%) that are highly likely to have S3. However, some of these patients have CAD, and some have other types of cardiac arrhythmia. Once the unsupervised clustering is applied, followed by voting among all beats belonging to one patient, if those patients are found in the cluster “with S3”, this would provide for validating that the clustering matches reality.
The S4 region is located in a final portion of diastole, for example, a time interval ΔTS4 of from 10 to 25 percent of the period of the heart cycle 82, for example, about 100 to 200 milliseconds in a 1 second duration heart cycle 82, in advance of the R-peak 80′ at the end of the heart cycle 82, or equivalently, in advance of the beginning of the S1 sound of the next heart cycle 82. The S4 region of the associated breath-held auscultatory sound signal 16.1, e.g. the high-pass-filtered breath-held sampled auscultatory sound data 72, s[ ], is subdivided into a series of intervals, and the associated S4 swing within each interval is calculated as the absolute value of the difference between the maximum and minimum amplitude magnitudes of the raw or high-pass-filtered data within that interval in the S4 region. The S4 swing is calculated separately for the audible (above 20 Hz) and inaudible (below 20 Hz) frequency ranges, for which the configurations of the associated signal filters are specific to the particular frequency range. Generally, the minimum and maximum values of the signal will depend upon the associated frequency range of the associated signal, and the type of filter used to generate the filtered signal.
The S2 swing is also similarly calculated over the associated S2 region, in addition to calculating the S4 swing as described hereinabove. The ratio of the S4 swing to S2 swing provides a measure of the likelihood that a patient exhibits an S4 sound, with that likelihood increasing with increasing value of the S4-to-S2 swing ratio. For example,
The S2 and S4 swing values, and/or the S4-to-S2 swing ratio, are saved for further calculations and/or for use as an input to one or more of the below-described classification processes. In accordance with one approach, the mean value of the S4-to-S2 swing ratio is calculated for an entire population of patients, beat by beat, and then the median of the S4-to-S2 swing ratio is taken across all beats of each patient. Those patients for which the S4-to-S2 swing ratio is greater than the associated mean value of the S4-to-S2 swing ratio are identified as having an S4 sound for purposes of training the associated below-described classifier, from which an associated threshold value is determined that—in combination with other factors—provides for discriminating patients with CAD from patients without CAD, after which the threshold value of the S4-to-S2 swing ratio is then applied to the associated test set.
The S5 region extends from the end of the S3 region to the start of the S4 region. Accordingly, its starting point is determined using the above-described Weissler or Stuber formula. As mentioned, this is approximately the ending point for the S3 region. The ending point of the S5 region is located at the beginning of a time interval ΔTS4, for example, about 100 milliseconds, in advance of the R-peak 80′ at the end of the heart cycle 82, or equivalently, in advance of the beginning of the S1 sound of the next heart cycle 82. The S5 swing is calculated by subdividing the S5 region of the associated breath-held auscultatory sound signal 16.1, e.g. the high-pass-filtered breath-held sampled auscultatory sound data 72, s[ ], into a series of intervals and calculating the absolute value of the maximum amplitude minus the minimum amplitude for each interval in the S5 region. The S5 swing values may be saved for further calculations and/or for use as an input to one or more of the below-described classification processes.
If, in step (2324), all heart cycles 82 in the high-pass-filtered breath-held sampled auscultatory sound signal 72 have not been processed, then the heart-cycle segmentation and heart-phase identification process 2300 is repeated beginning with step (2308) for the next heart cycle 82. Otherwise, from step (2326), if all auscultatory sound sensors 12 have not been processed, then the heart-cycle segmentation and heart-phase identification process 2300 is repeated beginning with step (2304) for the next auscultatory sound sensor 12. Otherwise, from step (2326), in step (2328), the heart-cycle segmentation and heart-phase identification process 2300 returns either the mean values kS1, kS2 of the corresponding root locations {kS1_START, kS1_END}, {kS2_START, kS2_END} associated with the corresponding S1 and S2 heart sounds, or the corresponding mean values tS1, tS2 of the associated times, i.e. {t(kS1_START), t(kS1_END)}, {t(kS2_START), t(kS2_END)}, for each of the heart cycles 82 in each of the high-pass-filtered breath-held sampled auscultatory sound signals 72 from each of the associated auscultatory sound sensors 12.
Referring to
Referring again to
Then, in step (3622), if the standard deviation compactness metric STDDEVCM exceeds a threshold, for example, in one set of embodiments, equal to 6, but generally between 1 and 10, the particular region of diastole for the particular breath-held segment from the particular auscultatory sound sensor 12, m, is flagged as an outlier in step (3624). Then, or otherwise from step (3622), in step (3626), the process returns to step (2110) of the auscultatory sound signal preprocessing and screening process 2100.
Referring again to
Otherwise, from step (2112), referring again to
Then, returning to step (2116) of the auscultatory sound signal screening process 2100, if a noise threshold was exceeded in step (1930) of the noise-content-evaluation process 1900, then, in step (2120), if the end of the breath-held segment has not been reached, i.e. if kBEAT<NBEATS(kSEG), then the process repeats beginning with step (2108) for the next heart cycle. Otherwise, from step (2116), the good beat counter GB is incremented in step (2118) before continuing with step (2120) and proceeding therefrom as described hereinabove. Otherwise, from step (2120), if the end of the breath-held segment has been reached, then, in step (2122), if a threshold number of valid heart cycles has not been recorded, the process repeats with step (2104) after incrementing the segment counter kSEG in step (2123). Otherwise, the recording process ends with step (2124).
Accordingly, each breath-holding interval B, kSEG of either the breath-held auscultatory sound signal 16.1, or the corresponding high-pass-filtered breath-held sampled auscultatory sound signal 72, is segmented into individual heart beats 82 (i.e. heart cycles 82, wherein reference to heart beats 82 is also intended as a short-hand reference to the associated breath-held sampled auscultatory sound data S[ ]), and the diastolic interval D is analyzed to determine the associated noise level, so as to provide for quality control of the associated breath-held sampled auscultatory sound data S[ ], and so as to provide for signal components thereof associated with coronary artery disease (CAD) to be detectable therefrom. Quality control of the recorded signals provides for detecting weak signals that may indicate health problems but that can otherwise be blocked by strong noise or unwanted interference. The present method is developed for quantitative control of signal quality, and can be deployed in the recording module 13 for real-time quality monitoring, or can be used at the post-recording stage to extract only low-noise heart beats 82 that satisfy a specific quality condition.
Referring to
Referring to
The variance within each time-window TW—used to detect any outliers therein—is computed as:
σi²=(1/K)·Σk=1..K (xik−μi)²,
wherein xik and μi are the kth sample and the mean value, respectively, of the ith time-window TW, and K is the number of samples in each time-window TW. The local signal power of the ith time-window TW is given by:
Pi=σi² (16)
An outlier power threshold PLIM is determined by adding 6 dB to the median value of Pi for all time-windows TW, and in step (3714), if the value of Pi exceeds PLIM for any time-window TW, then, in step (3720), the current heart cycle 82 is ignored.
If none of the time-windows TW include an outlier, then the mean power of diastole is given by:
Pm=(1/NW)·Σi=1..NW Pi,
wherein NW is the number of time-windows TW.
The associated noise power threshold PTh is defined with respect to the 2-byte A/D converter range, so that:
PTh=PADC·10^(Th/10),
wherein Th is a predetermined threshold, for example, −50 dB, and PADC is the power associated with the maximum signed range of a 2-byte A/D converter, i.e. (32767)². Accordingly, if, in step (3714), the mean diastole power Pm exceeds the noise power threshold PTh, then, in step (3720), the current heart cycle 82 is ignored.
If, from step (3714), the diastolic signal power exceeds the mean noise power level threshold P0, then, in step (3720), the associated heart beat 82 is labeled as a noisy beat and is not counted in the overall tally of heart beats 82. Otherwise, from step (3714), the good beat counter GB is incremented in step (3716), and if, in step (3718), the number of good heart beats 82, i.e. the value of the good beat counter GB, is less than the required number NGMIN of high quality heart beats 82, then the second aspect auscultatory sound signal preprocessing and screening process 3700 repeats with step (3708) for the next heart cycle 82. In one set of embodiments, if the required number of high quality heart beats 82 is not reached within a reasonable period of time, then the user is informed that the recording is excessively noisy so that additional actions can be performed to improve signal quality.
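By way of non-limiting illustration, the following Python sketch screens a diastolic interval d for noise in the manner described above, assuming 50-percent-overlapping time-windows; the 256-sample window length is an illustrative assumption.

    import numpy as np

    def diastole_is_quiet(d, win=256, th_db=-50.0):
        step = win // 2
        windows = [d[i:i + win] for i in range(0, len(d) - win + 1, step)]
        p = np.array([np.var(w) for w in windows])  # local power Pi
        p_lim = np.median(p) * 10 ** (6.0 / 10.0)   # median plus 6 dB
        if np.any(p > p_lim):                       # outlier transient found
            return False
        p_th = (32767.0 ** 2) * 10 ** (th_db / 10.0)  # threshold vs ADC power
        return p.mean() <= p_th                     # mean diastole power test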
Following the acquisition of a sufficient number of sufficiently-low-noise heart beats 82, following step (3718), the sampled auscultatory sound data Sm[ ] is further preprocessed to emphasize acoustic signals in diastole D and to extract signal-specific features that can be used for disease classification. For example, in step (3722), the mean position of the peak of the S2 heart sound—for example, the above-described mean value tS2—is determined for each heart beat 82, after which the heart beats 82 are synchronized with respect thereto, for example, as illustrated in
Then, in step (3724), the heart beats 82 are normalized with respect to time so as to compensate for a variation in heart-beat rate amongst the heart beats 82, and to compensate for a resulting associated variation in the temporal length of the associated diastolic intervals. Generally, the heart-beat rate is always changing and typically never remains the same over a recording period of several minutes, which, if not compensated, can interfere with the identification of specific signal features in diastole D, for example, when using the below-described cross-correlation method. Although heart-beat segmentation alone provides for aligning heart-beat starting points, variations in the heart-beat rate can cause remaining features of the heart cycle 82 to become shifted and out of sync—i.e. offset—with respect to each other. However, such offsets can be removed if the associated heart beats 82 are first transformed to a common normalized time scale t/T*, wherein T* is a fixed time interval, for example, the duration of the slowest heart beat 82, followed by beat resampling and interpolation so as to provide for normalizing the original signal at a new sampling rate, for example, as illustrated in
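By way of non-limiting illustration, the following Python sketch normalizes a set of segmented beats to a common time scale by resampling, assuming the scipy.signal.resample routine; choosing the slowest (longest) beat as T* follows the text, while the Fourier-based interpolation is an implementation assumption.

    from scipy.signal import resample

    def normalize_beats(beats):
        # T* corresponds to the longest (slowest) beat; every beat is
        # resampled to that common number of samples.
        n_target = max(len(b) for b in beats)
        return [resample(b, n_target) for b in beats]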
Referring again to
The cross-correlation assigned to each beat is given by an average of the cross-correlations thereof with the remaining Nb−1 heart beats 82:
wherein xi and xj are the diastolic high-pass-filtered breath-held sampled auscultatory sound data 72, s[ ] of two distinct heart beats 82, and Nb is the total number of heart beats 82 in the 2-D stack. Following computation of all possible pairs of heart beats 82, an Nb×Nt cross-correlation matrix is obtained and displayed as a 2D image, wherein Nt is the number of time samples in each heart beat 82. A similar signal pattern during diastole that is present in the majority of heart beats 82 will produce a localized cross-correlation peak in diastole. Accordingly, cross-correlation peaks associated with a micro-bruit signal occurring at approximately the same temporal location within the diastole interval from one heart beat 82 to another will produce distinct bands across the image within the same temporal region of each heart beat 82, for example, as illustrated in
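By way of non-limiting illustration, the following Python sketch forms the Nb×Nt cross-correlation image described above, assuming equal-length, time-normalized diastolic segments; the use of normalized, "same"-mode correlation is an assumption about the associated short-time implementation.

    import numpy as np

    def beat_xcorr_image(beats):
        nb = len(beats)
        norm = [b / (np.linalg.norm(b) + 1e-12) for b in beats]
        rows = []
        for i in range(nb):
            acc = np.zeros(len(norm[i]))
            for j in range(nb):
                if j != i:   # average over the remaining Nb-1 beats
                    acc += np.correlate(norm[i], norm[j], mode="same")
            rows.append(acc / (nb - 1))
        return np.vstack(rows)   # Nb x Nt cross-correlation matrix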
Alternatively, or additionally, acoustic features in the diastolic interval of the heart beat 82 can be visualized using a continuous wavelet transform (CWT). The wavelet transform processing is similar to the short-time cross-correlation, but instead of cross-correlating signals from different heart beats 82, the signal of interest is correlated using a wavelet function with limited support to facilitate temporal selectivity:
X(a,b)=(1/√a)·∫x(t)·w*((t−b)/a)·dt,
wherein the associated mother wavelet w is a continuous function in the time and frequency domains. The wavelet family is typically chosen empirically using specifics of the signal under consideration. The family of Morlet wavelets appears to be an appropriate choice for the analysis of heart sounds. The output of the wavelet transform is a two-dimensional time-frequency representation of signal power |X(a,b)|² defined in terms of the scaling and shift parameters a and b. An example of a wavelet transform is illustrated in
Generally, step (3730) of the second aspect auscultatory sound signal preprocessing and screening process 3700—which may also be used in cooperation with the above-described first aspect auscultatory sound signal preprocessing and screening process 2100—incorporates one or more feature extraction algorithms which identify significant signal parameters that can be linked to coronary artery disease (CAD) and which can be used for training a machine learning algorithm for automatic CAD detection. Furthermore, if auscultatory sound signals 16 are recorded at a relatively high sampling rate (for example, at a 4 kHz sampling rate), each heart beat 82 might contain over 4000 samples for each of six channels. Such a large number of highly correlated variables makes the use of the raw waveform for classification very difficult without additional signal processing to reduce the dimensionality of the problem. Such dimensionality reduction can be achieved by use of an appropriate feature extraction algorithm that identifies a reduced set of parameters that are related to CAD. Mathematically, the feature extraction procedure provides a mapping of the high-dimensional raw data into a low-dimensional feature space with adequate inter-class separation. For example, standard dimensionality reduction routines such as singular value decomposition (SVD) or principal component analysis (PCA) may be used to decompose the raw data onto an orthonormal basis and to provide for selecting relevant features with minimal loss of information. The time domain signal itself can be transformed prior to feature extraction to emphasize unique features thereof. For example, a frequency domain representation by Fourier transform can be advantageous for feature extraction if the signal contains a discrete set of characteristic frequencies. The performance of a signal classifier can be dramatically improved by excluding a large number of irrelevant features from analysis. In accordance with one aspect, the signal classification problem begins with a mapping from the original high-dimensional space (size N) to a feature space (size p<<N), followed by a mapping of the feature space to an m-dimensional space, wherein the dimension m is equal to the number of classes. For example, for a binary classification problem—e.g. CAD or no CAD—m=2.
In accordance with one set of embodiments, step (3730) of the second aspect auscultatory sound signal preprocessing and screening process 3700 employs a wavelet packet transformation (WPT) for sparse representation of heart sounds in the time-frequency domain, followed by a custom-designed binary classifier. Several standard classifier algorithms can be trained using the reduced feature set to provide for binary classification of the associated heart sounds—useable either individually or in combination—including, but not limited to, a support vector machine (SVM), a fully-connected artificial neural network (ANN), or a convolution neural network (CNN) applied to two-dimensional time-frequency images.
Referring to
Referring to
Referring to
The filter functions are designed to provide for energy conservation and lossless reconstruction of the original signal from the set of transformed time series wj,k[l] from a particular decomposition level j. These properties, along with smoothness requirements, define the family of scaling and wavelet functions used for decomposition. The resulting set of K=2^j distinct frequency bands at each decomposition level j, together with the corresponding associated transformed time series wj,k[l], can be used for analysis and feature extraction, instead of relying upon the corresponding original raw signal x[n].
The wavelet packet transformation (WPT) is a generalization of the standard multi-level DWT decomposition, wherein both approximation and detail coefficients are decomposed using quadrature mirror filters, for example, as described in M. Wickerhauser, “Lectures on Wavelet Packet Algorithms”, http://citeseerx.ist.psu.edu, which is incorporated herein by reference:
wj+1,2k[l]=Σm gm·wj,k[2l−m], and
wj+1,2k+1[l]=Σm hm·wj,k[2l−m],
where gm is the coefficient of the scaling function and hm is the coefficient of the wavelet function, k is the index of the associated frequency bin at decomposition level j, and l is the index of the associated time series array associated with the particular frequency bin k.
The wavelet packet transformation (WPT) provides a benefit of sparse signal representation similar to the discrete wavelet transformation (DWT), but also provides better resolution of frequency components by decomposing the detail part of discrete wavelet transformation (DWT), which results in the sub-band structure illustrated in
For each decomposition level j, the total energy from all frequency bins is the same, i.e.
Σk=0..2^j−1 Σl (wj,k[l])²=Σn (x[n])².
The wavelet packet transformation (WPT) energy map of
The wavelet packet transformation (WPT) energy map and the associated best basis selection can be used to reduce the dimensionality of the heart beat classification problem by analysing the signal represented by the transformed time series wj,k[l] and rejecting information irrelevant for the classification task. The wavelet packet transformation (WPT) is one of a variety of signal processing techniques that can be used for extraction of important parameters or features suitable for prediction of CAD. A very basic set of features may include typical metrics of the raw signals (amplitude, timing, spectral power and others) that can be derived from segmented heart beats. However, such a hand-crafted feature set may not be optimal for the current problem of CAD classification. Regardless of which method is used for feature extraction, the output of this data processing stage is a vector of p elements with p<<N, where N is the size of the raw signals. The feature vector can be represented either as a 1-D array of p elements or as a 2-D matrix for a classification algorithm operating on image information. There are several powerful classification algorithms that can be trained for disease detection using recorded heart sounds and the extracted feature vector. These algorithms include the support vector machine (SVM), the feed-forward artificial neural network (ANN) and the convolutional neural network (CNN), the latter of which is particularly suitable for 2-D image classification.
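By way of non-limiting illustration, the following Python sketch computes a wavelet packet transformation (WPT) energy map of the kind described above, assuming the PyWavelets (pywt) package; the 'db4' mother wavelet and decomposition level 5 are illustrative choices.

    import numpy as np
    import pywt

    def wpt_energy_map(x, wavelet="db4", level=5):
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="freq")   # K = 2^level bands
        energy = np.array([np.sum(np.asarray(n.data) ** 2) for n in nodes])
        return energy / energy.sum()                # energy fraction per band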
Referring to
The SVM algorithm can be used as a CAD classifier with data recorded by the Data Recording Application (DRA) 14, 14.1. Prior to sending data to the SVM algorithm, the recordings are processed by the beat segmentation and feature extraction stages to produce a feature vector for each test-subject 22, by preprocessing the n available recordings and extracting p features, with the data of each channel then transformed into an n × p feature matrix. The feature vector extracted from the segmented beats can be either a set of custom-selected metrics (amplitude, timing of a specific segment, energy, statistical parameters, sample entropy and others) or a subset of the wavelet packet transformation (WPT) coefficients associated with the signal region of interest. If the resulting number of features p is still relatively high, a principal component analysis (PCA) procedure can be applied to identify the subset of features with the highest variance and eliminate correlated features. After all preprocessing steps and data normalization, the feature matrix is split into testing and training sets with a ratio of 1 to 4, respectively. The training set is used to train the SVM and optimize the classifier hyper-parameters, while the testing set is used to evaluate classifier performance with unseen data. Computer code that provides for implementing an SVM classifier is available in several open-source packages for the Python and R programming languages, for example, the sklearn machine learning package in Python.
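For example, the following is a minimal scikit-learn sketch of the pipeline described above (normalization, optional PCA, a 1-to-4 testing/training split, and an SVM); the random stand-in data, the RBF kernel, and the particular hyper-parameter values are illustrative assumptions only:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: n x p feature matrix (one row per recording); y: CAD labels (0/1).
    # Random stand-ins for the extracted feature matrix and truth labels.
    X, y = np.random.randn(100, 40), np.random.randint(0, 2, 100)

    # Testing and training sets with a ratio of 1 to 4, as described above.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    # Normalization, PCA dimensionality reduction, and the SVM classifier.
    clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                        SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    print("accuracy on unseen test data:", clf.score(X_test, y_test))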
Referring to
wherein $w_{ij}$ is the weight matrix that defines the connection strength of the ith neuron to the jth input, $b_i$ is the neuron bias, and $g(z = w^T x + b)$ is the associated nonlinear activation function. The specific form of the activation function g is chosen at the design stage, wherein commonly used functions include the sigmoid, the hyperbolic tangent, and the rectified linear unit (ReLU). The network of interconnected neurons constitutes the artificial neural network (ANN), which can be capable of modeling relatively complicated relationships between the input vector and the target class variables by adjusting the weights and biases during the training stage.
Properties of a specific artificial neural network (ANN) implementation are defined at the design stage and include: 1) the number of hidden layers, 2) the number of neurons per hidden layer, 3) the type of activation function, 4) the learning rate, and 5) the regularization method used to prevent overfitting. For example, in one set of embodiments, the artificial neural network (ANN) is implemented using the open-source TensorFlow deep learning framework, which provides for setting each of these parameters. The neuron connection strength is defined by the weight matrix $w_{ij}$ for each layer, which is adjusted during network training, with a cost function evaluated at each training epoch using the available truth labels. The artificial neural network (ANN) training is accomplished by a standard back-propagation algorithm using a cross-entropy cost function. The network design specifics are determined by the available data, since a network with multiple hidden layers (a deep ANN) can be very powerful but is also prone to overfitting when a small data set is used. Therefore, the specific artificial neural network (ANN) architecture is developed on a trial-and-error basis, dependent upon the available data and the size of the feature vector. In one set of embodiments, the output layer of the artificial neural network (ANN) for a binary classifier is implemented as the softmax function with a two-element vector, [1, 0] for the CAD-positive case and [0, 1] for the CAD-negative case.
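As an illustration only, the following is a minimal sketch of such a binary classifier using the Keras API of the open-source TensorFlow framework; the number of hidden layers and neurons, the dropout-based regularization, and the learning rate are hypothetical design choices of the kind enumerated above, not values prescribed by the method:

    import tensorflow as tf

    p = 40  # size of the feature vector (assumed for this example)

    # Feed-forward ANN: two hidden layers with ReLU activations and a
    # two-element softmax output layer ([1, 0] = CAD positive,
    # [0, 1] = CAD negative), trained with a cross-entropy cost function.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(p,)),
        tf.keras.layers.Dropout(0.5),  # regularization to limit overfitting
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(X_train, y_train_onehot, epochs=100, validation_split=0.2)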
Referring to
Convolutional neural networks (CNN) have proved to be very efficient for prediction and classification problems, especially large-scale problems involving images. The typical size of the input image data can be quite large, which makes application of standard feed-forward networks either impractical or even impossible due to the huge number of parameters to be trained. Convolutional neural networks (CNN) accommodate the size of the problem by weight sharing within a small number of neurons comprising a receptive field that is scanned over the 2-D input. One benefit of using a convolutional neural network (CNN) for machine learning problems is the ability thereof to learn important features directly from the data, so as to provide for bypassing the feature extraction stage that is used by the support vector machine (SVM) and feed-forward artificial neural network (ANN) classifiers. A typical convolutional neural network (CNN) structure consists of one or more convolution and pooling layers that build a hierarchical structure of features with increasing complexity. Following convolution and max pooling, the extracted features are fed to a fully connected network at the final stage of the convolutional neural network (CNN) classifier. For example,
The receptive field is a relatively small 2-D array of neurons (for example, 5×5) that is scanned across the input image while performing an associated cross-correlation operation. A relatively small number of connected neurons provides for a relatively small number of corresponding weights to be adjusted. The max pooling operation provides for reducing the size of the input to the associated fully-connected neural network by selecting pixels with maximum intensity from the associated convolution layer. Similar convolution and max pooling operations can be performed multiple times to extract the most significant features before submission to an associated fully-connected neural network for classification. Although a convolutional neural network (CNN) can be trained to recognize complicated patterns in 2-D data sets, this typically requires a large amount of data for efficient generalization and to avoid overfitting. The convolutional neural network (CNN) classifier can be applied either directly to the auscultatory sound signals 16 (or filtered versions thereof), or to corresponding 2-D images generated therefrom, for example, using either a continuous wavelet transform or an associated decomposition thereof by wavelet packet transformation (WPT). For example, in cooperation with the extraction of features using the wavelet packet transformation (WPT), the coefficients of the Jth level of decomposition can be transformed into a matrix with dimensions $(N/2^J) \times 2^J$, where N is the size of the time-domain signal. Such 2-D data can be used to train the convolutional neural network (CNN) classifier in order to find any patterns associated with CAD. This type of network is more complicated than a standard feed-forward artificial neural network (ANN), and utilizes more hyperparameters that must be tuned to achieve optimal performance. In addition to the parameters applicable to an artificial neural network (ANN), the convolutional neural network (CNN) design includes specification of the number of convolution layers, the size of the receptive fields (kernel size), the number of channels processed simultaneously, the filter properties, and the regularization. After finalization of its design, the convolutional neural network (CNN) can be trained using a training data set and then evaluated using an unseen test data set. For example, for one set of embodiments, the open-source TensorFlow flexible deep learning toolkit and API, which provide for building high-performance neural networks for a variety of applications, have been used to design and train the convolutional neural network (CNN) for detecting CAD.
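For example, the following is a minimal TensorFlow/Keras sketch of such a convolutional neural network (CNN) classifier operating on a WPT coefficient image; the assumed signal size N = 4096 and decomposition level J = 6 give a 64 × 64 single-channel input, and the layer counts and channel numbers are illustrative assumptions:

    import tensorflow as tf

    # WPT coefficients of the Jth decomposition level arranged as a 2-D
    # image with dimensions (N / 2**J) x 2**J; N = 4096 and J = 6 are
    # assumed here, giving a 64 x 64 single-channel input image.
    model = tf.keras.Sequential([
        # 5 x 5 receptive field scanned across the input, as described above.
        tf.keras.layers.Conv2D(8, kernel_size=5, activation="relu",
                               input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(16, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        # The extracted features are fed to a fully-connected network for
        # the final two-class (CAD positive / CAD negative) decision.
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])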
The auscultatory coronary-artery-disease detection system 10 can present various views of the acoustic data (both unprocessed and processed) that was captured during the test for review by a clinician. By reviewing different visualizations of the test results, the clinician may strengthen his case for a particular diagnosis, either in agreement with or disagreement with the result produced by the system. Additionally, particular visualizations may help reassure the patient that the diagnosis is the correct one.
Referring to
In accordance with the Textual View, the system presents, in a textual manner, the system's interpretation of the acoustic data, including: whether or not the patient has a clinical level of CAD; the count of obstructions detected; and, for each obstruction: the location (zone) of the obstruction, the percentage occlusion of the obstruction, or the type of blockage (soft plaque, hard plaque). For each of the items in the view listed above, the system provides for presenting an associated confidence level, which indicates how confident the system is of each specific element of information presented, or, based on the patient's demographic data (age, sex, BMI, medical and family history), the percentage of other patients who have a similar count, severity, and position of obstructions. From this view, the clinician may switch to any other view listed in this document for more information.
Referring to
On this graph, the system will plot the ROC curve and calculate the Area Under the Curve (AUC) based on the patient's demographics, and the system's clinical trial results. The coordinate that corresponds to the current positivity criterion will be highlighted. The clinician will be able to display a modified graph if he commands the system to exclude specific parts of the patient's demographic data. The graph may or may not contain gridlines and/or data points defining the ROC curve, and it may or may not fill in the AUC.
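As one illustration only, the following Python sketch (using the open-source scikit-learn and matplotlib packages, with synthetic stand-in data rather than actual clinical-trial results) shows how the ROC curve, its AUC, and the coordinate corresponding to a given positivity criterion might be computed and plotted:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.metrics import auc, roc_curve

    # Synthetic stand-ins for the truth labels and classifier scores.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 200)
    y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, 200), 0, 1)

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    roc_auc = auc(fpr, tpr)
    ix = np.argmin(np.abs(thresholds - 0.5))  # positivity criterion = 0.5

    plt.plot(fpr, tpr, label=f"ROC curve (AUC = {roc_auc:.2f})")
    plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
    plt.scatter(fpr[ix], tpr[ix], zorder=3,
                label="current positivity criterion")
    plt.xlabel("False positive rate (1 - specificity)")
    plt.ylabel("True positive rate (sensitivity)")
    plt.legend()
    plt.show()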
Alternatively, referring to
Referring to
Referring to
Generally, the vertical axis may comprise acoustic data captured from each of the system's sensors, both acoustic and ECG. A correlation procedure is performed to ensure that the data captured from each of the system's sensors is aligned to one another. Because the initial display of this view may contain many dozens of heartbeats, the clinician has the option to highlight a subset of the data on the horizontal axis, and command the system to zoom into the selected section. In this way, the clinician can perform a deeper analysis on one or more sections of the acoustic capture data as he so chooses, and explore any discrepancies between the data captured by the ECG and acoustic sensors for any particular heartbeat. For example,
Referring to
Referring to
For each heartbeat diastole displayed, the system highlights any unexpected acoustic signals captured, as such signals may be an indication of an obstruction or other cardiac condition. Such signals are highlighted by the height of the line graph, which represents intensity. These highlighted areas on the graph allow a clinician to distinguish low-energy noise (which may or may not be a sign of non-obstructive CAD) from high-energy noise (which is a likely indicator of obstructive CAD). In this view, it is also easy for a clinician to understand the correlation (across heartbeats) of noise, as well as the number and timing of correlated instances of noise (which may indicate the number and location of blockages, respectively). For example, the data illustrated in
The clinician has the option to highlight a subset of the data on the horizontal axis, and command the system to zoom into the selected section. In this way, the clinician can perform a deeper analysis on one or more sections of the acoustic capture data as he so chooses, especially so that he may explore more deeply any unexpected acoustic signals.
Referring to
For each diastole displayed, the system highlights any unexpected acoustic signals captured, as such signals may be an indication of an obstruction or other cardiac condition. Such highlighted areas indicate the intensity of the unexpected signal. Highlights are in the form of color, which indicates the varying intensity of the signal. This view could alternatively be represented in monochrome as a contour plot. For example, the data illustrated in
These highlighted areas on the graph allow a clinician to distinguish low-energy noise (which may or may not be a sign of non-obstructive coronary artery disease (CAD)) from high-energy noise (which is a likely indicator of obstructive CAD). In this view, it is also easy for a clinician to understand the correlation (across heartbeats) of noise, as well as the number and timing of correlated instances of noise (which may indicate the number and location of blockages, respectively).
The clinician has the option to highlight a subset of the data on the horizontal axis, and command the system to zoom into the selected section. In this way, the clinician can perform a deeper analysis on one or more sections of the acoustic capture data as he so chooses, especially so that he may explore more deeply any unexpected acoustic signals.
Referring to
These highlighted areas on the graph allow a clinician to distinguish low-energy noise (which may or may not be a sign of non-obstructive CAD) from high-energy noise (which is a likely indicator of obstructive CAD). It is also easy for a clinician to understand the frequency of high-energy noise, as well as the timing of this noise—this is important as different cardiac conditions may be defined by specific frequencies and timings of noise. For example, the data illustrated in
The system provides a User Interface, with associated navigation, that is designed for use on tablets and smartphones, and thus uses common touch-screen user interface paradigms. For example, two fingers moving apart from one another can be used to zoom in to any area on any graph, and two fingers moving together can be used to zoom out. A single finger can be used to highlight any area on the horizontal axis, and the graph can be zoomed in to that highlighted area by touching a button.
Touching any area of the graph provides information to the user (either in a pop-up window or beside/below the graph) on the values of the horizontal and vertical axes at that point, as well as the “height” information of that point if available (e.g. in the Matrix View). For example, in Stacked Heartbeat View, touching on an individual heartbeat would cause the system to provide the heartbeat number, the maximum intensity of that heartbeat (and the time on the horizontal axis at which the maximum intensity occurs), and the time on the horizontal axis corresponding to the touch. In the case of the Matrix View, touching on any area of the graph would cause the system to provide the frequency, time, and intensity corresponding to the coordinate on the graph that was touched.
For use with a mouse, the user interface paradigms are similar, except for zoom. In this case, zoom can be accomplished through a common desktop/laptop user interface paradigm, such as dedicated zoom in/out buttons in the UI, or mouse wheel scrolling.
For some of these graphs, it is possible to restrict the display of data to some subset of the acoustic sensors (for example, as illustrated in
The following features of sound propagation within the body provide for the localization of acoustic sources based upon the relative strengths of the associated acoustic signals from different auscultatory sound sensors 12. First, any sound feature that originates from the heart acts as a single source for all auscultatory sound sensors 12. For example, the characteristic lub-dub sound that the heart makes acts like two sounds coming from two separate locations. Second, for practical purposes, sound travels fast enough in the body that all auscultatory sound sensors 12 effectively receive each sound at substantially the same time. Third, the farther a sound's origin is from an auscultatory sound sensor 12, the weaker that sound will be when received by that auscultatory sound sensor 12, because the sound energy is dissipated as it travels through the body. In view of these features, using the relative signal strengths from different auscultatory sound sensors 12, the location of the sound source can be triangulated from the locations of the auscultatory sound sensors 12 with the three largest signal strengths, weighted by the relative strengths of those signals, with the resulting calculated location of the sound source being relatively closer to auscultatory sound sensors 12 with relatively stronger signals than to auscultatory sound sensors 12 with relatively weaker signals.
More particularly, referring to
$(X - X_1)^2 + (Y - Y_1)^2 + Z^2 = A_1^2$ (29)
$(X - X_2)^2 + (Y - Y_2)^2 + Z^2 = A_2^2$ (30)
$(X - X_3)^2 + (Y - Y_3)^2 + Z^2 = A_3^2$ (31)
Following the solution of equations 29-31, the resulting lateral (X, Y) location of the sound source 94 may then be displayed, for example, as a location on a silhouette of a torso, or transformed to a corresponding location on the image of the heart illustrated in
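By way of illustration, the following NumPy sketch solves equations (29)-(31) for the source location; it assumes the distances A1, A2 and A3 have already been inferred from the relative signal strengths of the three strongest sensors, and the helper name and example coordinates are hypothetical:

    import numpy as np

    def locate_source(sensors, distances):
        """Solve equations (29)-(31) for the acoustic source location.
        sensors:   three sensor coordinates (Xi, Yi) on the chest surface.
        distances: the three inferred source-to-sensor distances (A1, A2, A3).
        """
        (x1, y1), (x2, y2), (x3, y3) = sensors
        a1, a2, a3 = distances
        # Subtracting eq. (30) and eq. (31) from eq. (29) eliminates the
        # unknown depth Z and the quadratic terms, leaving two linear
        # equations in the lateral coordinates X and Y.
        A = 2.0 * np.array([[x1 - x2, y1 - y2],
                            [x1 - x3, y1 - y3]])
        b = np.array([(a2**2 - a1**2) + (x1**2 - x2**2) + (y1**2 - y2**2),
                      (a3**2 - a1**2) + (x1**2 - x3**2) + (y1**2 - y3**2)])
        x, y = np.linalg.solve(A, b)
        # Depth from eq. (29); the max() guards against slightly
        # inconsistent distance estimates.
        z = np.sqrt(max(a1**2 - (x - x1)**2 - (y - y1)**2, 0.0))
        return x, y, z

    # Hypothetical example: three sensor locations and distances, in cm.
    print(locate_source([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                        (6.0, 8.0, 9.0)))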
For some of these graphs, it is possible for the user to directly compare the results of the current test with one or more previous tests. This can be done by simply placing the graphs side-by-side (for example, as illustrated in
The ability to compare the current test with a previous test is critical to the clinician's understanding of the progression of a particular cardiovascular condition (either worsening, or improving, as in the case of a PCI procedure). This capability is relevant to the Stacked Heartbeat View, the Bruit Identification View (both modes), and the Bruit Analysis View.
The graphs may be rendered locally or remotely (server-based) or both, depending on the capabilities desired by the clinician and the organization to which he belongs. In most use cases (tablet, phone, desktop, or laptop), the graph rendering will be done locally, either through a web-browser (full or embedded) on the client, or through a graphics library optimized for each specific supported client platform.
In other cases, the rendering may be done on the server side—graphs may be generated and exported to JPEG (or other similar) format so that they can be emailed or sent via instant message to interested parties.
While specific embodiments have been described in detail in the foregoing detailed description and illustrated in the accompanying drawings, those with ordinary skill in the art will appreciate that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. It should be understood that any reference herein to the term “or” is intended to mean an “inclusive or” or what is also known as a “logical OR”, wherein when used as a logic statement, the expression “A or B” is true if either A or B is true, or if both A and B are true, and when used as a list of elements, the expression “A, B or C” is intended to include all combinations of the elements recited in the expression, for example, any of the elements selected from the group consisting of A, B, C, (A, B), (A, C), (B, C), and (A, B, C); and so on if additional elements are listed. Furthermore, it should also be understood that the indefinite articles “a” or “an”, and the corresponding associated definite articles “the” or “said”, are each intended to mean one or more unless otherwise stated, implied, or physically impossible. Yet further, it should be understood that the expressions “at least one of A and B, etc.”, “at least one of A or B, etc.”, “selected from A and B, etc.” and “selected from A or B, etc.” are each intended to mean either any recited element individually or any combination of two or more elements, for example, any of the elements from the group consisting of “A”, “B”, and “A AND B together”, etc. Yet further, it should be understood that the expressions “one of A and B, etc.” and “one of A or B, etc.” are each intended to mean any of the recited elements individually alone, for example, either A alone or B alone, etc., but not A AND B together. Furthermore, it should also be understood that, unless indicated otherwise or unless physically impossible, the above-described embodiments and aspects can be used in combination with one another and are not mutually exclusive. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of the invention, which is to be given the full breadth of the appended claims, and any and all equivalents thereof.
The instant application is a continuation-in-part of International Application No. PCT/US2018/056956 filed on 22 Oct. 2018, which claims benefit of the following: U.S. Provisional Application Ser. No. 62/575,390 filed on 21 Oct. 2017, U.S. Provisional Application Ser. No. 62/575,397 filed on 21 Oct. 2017, and U.S. Provisional Application Ser. No. 62/575,399 filed on 21 Oct. 2017. The instant application also claims benefit of the following: U.S. Provisional Application Ser. No. 62/838,270 filed on 24 Apr. 2019, and U.S. Provisional Application Ser. No. 62/838,296 filed on 24 Apr. 2019, each of which is incorporated herein by reference in its entirety.
Other Publications
MathWorks, “Envelope Detection”, MathWorks Help Center, MATLAB & Simulink, Internet document, 6 pp., downloaded on Apr. 15, 2019, https://www.mathworks.com/help/dsp/ug/envelope-detection.html.
Miller et al., “Spectral Analysis of Arterial Bruits (Phonoangiography): Experimental Validation,” 61 Circulation (3) pp. 515-520 (Mar. 1980).
Mohamed et al., “An Approach for ECG Feature Extraction using Daubechies 4 (DB4) Wavelet”, International Journal of Computer Applications (0975-8887), vol. 96, No. 12, Jun. 2014.
Park, Jeong-Seon et al., “R Peak Detection Method Using Wavelet Transform and Modified Shannon Energy Envelope”, Journal of Healthcare Engineering, vol. 2017, Jul. 5, 2017, pp. 1-14, XP055546974, Brentwood, ISSN: 2040-2295, DOI: 10.1155/2017/4901017.
Perini et al., “Body Position Affects the Power Spectrum of Heart Rate Variability During Dynamic Exercise,” 66 Eur. J. Appl. Physiol., pp. 207-213 (1993).
Rangayyan et al., “Phonocardiogram Signal Analysis: A Review,” 15 CRC Critical Reviews in Biomedical Engineering (3) pp. 211-237 (1988).
Richman et al., “Physiological Time-Series Analysis Using Approximate Entropy and Sample Entropy,” Am J Physiol Heart Circ Physiol., 278: H2039-H2049, 2000.
Saito et al., “Local discriminant bases and their applications,” J. Math. Imaging and Vision, 5, 337-358 (1995).
Saito et al., “On Local Feature Extraction for Signal Classification,” ICIAM, 1995, https://www.math.ucdavis.edu/˜saito/publications/saito_iciam95.pdf.
Santos et al., “Detection of First and Second Cardiac Sounds Based on Time Frequency Analysis,” 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Oct. 25-28, 2001, Istanbul, Turkey.
Schmidt, Samuel, Detection of Coronary Artery Disease with an Electric Stethoscope, PhD Thesis, 2011, 49 pp.
Tan, Andrew, “Principal Components of Electrocardiograms”, Internet Document, Medium, Nov. 29, 2016, 8 pp., https://medium.com/@andrewtan_36013/principal-components-of-electrocardiograms-14874b3a96b1.
Sharma et al., “Study and Design of a Shannon-Energy-Envelope based Phonocardiogram Peak Spacing Analysis for Estimating Arrhythmic Heart-Beat”, International Journal of Scientific and Research Publications, vol. 4, Issue 9, Sep. 2014, ISSN 2250-3153, 5 pp., www.ijsrp.org/research-paper-0914/ijsrp-p3325.pdf.
Thakor et al., “Applications of Adaptive Filtering to ECG Analysis: Noise Cancellation and Arrhythmia Detection,” 38 IEEE Transactions on Biomedical Engineering (8) pp. 785-794 (Aug. 1991).
Wang, Xinpei et al., “Detection of the First and Second Heart Sound Using Heart Sound Energy”, 2009 2nd International Conference on Biomedical Engineering and Informatics (BMEI 2009), Tianjin, China, Oct. 17-19, 2009, pp. 1-4, XP055546725, Piscataway, NJ, USA, DOI: 10.1109/BMEI.2009.5305640, ISBN: 978-1-4244-4132-7.
Wasilewski, Filip, “Wavelet Daubechies 4 (db4) Properties,” PyWavelets, Internet Document: http://wavelets.pybytes.com/wavelet/db4/, 2008-2019.
Weissler et al., “Systolic Time Intervals in Heart Failure in Man,” Circulation, Vol. 37, No. 2, Feb. 1968, pp. 149-159.
Weston, Jason, “Support Vector Machine (and Statistical Learning Theory) Tutorial,” NEC Labs America, Internet Document: http://www.cs.columbia.edu/˜kathy/cs4701/documents/jason_svm_tutorial.pdf, downloaded on Apr. 17, 2019.
Wickerhauser, Mladen Victor, Lectures on Wavelet Packet Algorithms, http://citeseerx.ist.psu.edu, Apr. 1992.
Wikipedia, “Softmax Function”, Internet Document, 7 pp., https://en.wikipedia.org/wiki/Softmax_function, downloaded on Apr. 17, 2019.
Wood et al., “Time-Frequency Transforms: A New Approach to First Heart Sound Frequency Dynamics,” IEEE Transactions on Biomedical Engineering, vol. 39, No. 7, pp. 730-740 (Jul. 1992).
Zhou et al., “A Novel Technique for Muscle Onset Detection Using Surface EMG Signals without Removal of ECG Artifacts,” US National Library of Medicine, National Institutes of Health, HHS Public Access, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4035355/, 1995, 12 pp.
Zhu et al., “An R-peak detection method based on peaks of Shannon energy envelope: Abstract”, ScienceDirect, Biomedical Signal Processing and Control, vol. 8, Issue 5, Sep. 2013, pp. 466-474, https://doi.org/10.1016/j.bspc.2013.01.001.
Zuber et al., “Acoustic cardiography to improve detection of coronary artery disease with stress testing,” World Journal of Cardiology, vol. 2, Issue 5, May 26, 2010, pp. 118-124.
International Search Report and Written Opinion of the International Searching Authority in International Application No. PCT/US2018/056956, European Patent Office, dated Feb. 1, 2019, 9 pp.
International Preliminary Report on Patentability in International Application No. PCT/US2018/056956, USPTO, dated Nov. 26, 2019, 86 pp.
Examiner's search strategy and results, Written Opinion of the ISA, International Search Report, and Transmittal in International Application No. PCT/US2020/029979, USPTO, dated Oct. 1, 2020, 23 pp.
Giordano, N. et al., “A Novel Method for Measuring the Timing of Heart Sound Components through Digital Phonocardiography”, Sensors 2019, 19, 1868; Publication [online], Apr. 19, 2019 [retrieved Jul. 6, 2020]. Retrieved from the Internet: <URL: https://www.mdpi.com/1424-8220/19/8/1868>.
Abe et al., “Measurement of Left Atrial Systolic Time Intervals in Hypertensive Patients Using Doppler Echocardiography: Relation to Fourth Heart Sound and Left Ventricular Wall Thickness,” 11 JACC (4) pp. 800-805 (Apr. 1988).
Akay et al., “Acoustical Detection of Coronary Occlusions Using Neural Networks,” 15 J. Biomed. Eng. pp. 469-473 (1993).
Akay et al., “Noninvasive Acoustical Detection of Coronary Artery Disease: A Comparative Study of Signal Processing Methods,” 40 IEEE Transactions on Biomedical Engineering (6) pp. 571-578 (Jun. 1993).
Akay et al., “Application of Adaptive FTF/FAEST Zero Tracking Filters to Noninvasive Characterization of the Sound Pattern Caused by Coronary Artery Stenosis Before and After Angioplasty,” 21 Annals of Biomedical Engineering, pp. 9-17 (1993).
Akay et al., “Noninvasive Characterization of the Sound Pattern Caused by Coronary Artery Stenosis Using FTF/FAEST Zero Tracking Filters: Normal/Abnormal Study,” 21 Annals of Biomedical Engineering, pp. 175-182 (1993).
Bogaert et al., Clinical Cardiac MRI, Springer Science & Business Media, 2005, pp. 10-11.
Bogaert et al., Clinical Cardiac MRI, Springer Science & Business Media, 2012, p. 517.
Bogaert et al., Clinical Cardiac MRI, Springer Science & Business Media, 2012, p. 17.
Daubechies, Ingrid, “Orthonormal Bases of Compactly Supported Wavelets,” Communications on Pure and Applied Mathematics, vol. XLI, 909-996 (1988).
Donnerstein, Richard L., “Continuous Spectral Analysis of Heart Murmurs for Evaluating Stenotic Cardiac Lesions,” 64 American J. Cardiology pp. 625-630 (Sep. 1989).
Erne, Paul, “Beyond auscultation—acoustic cardiography in the diagnosis and assessment of cardiac disease,” Swiss Med. Wkly 2008, 128 (31-32), pp. 439-452.
Glower et al., “Mechanical Correlates of the Third Heart Sound,” 19 JACC (2) pp. 450-457 (Feb. 1992).
Graps, Amara, “An Introduction to Wavelets,” IEEE Computational Science and Engineering, Summer 1995, vol. 2, No. 2, 1995, pp. 1-18.
Hamilton et al., “Compression of the Ambulatory ECG by Average Beat Subtraction and Residual Differencing,” IEEE Transactions on Biomedical Engineering, vol. 38, No. 3, pp. 253-259 (Mar. 1991).
Healio-Learn the Heart, “Heart Sounds Topic Review,” Cardiology Review, Topic Reviews, downloaded on Oct. 18, 2018, 14 pp., https://www.healio.com/cardiology/learn-the-heart/cardiology-review/topic-reviews/heart-sounds.
Kessler et al., “Wavelet Notes”, University of Iowa, https://arxiv.org/pdf/nucl-th/0305025.pdf, Feb. 5, 2008.
Khadra et al., “The Wavelet Transform and its Applications to Phonocardiogram Signal Analysis,” 16 Med. Inform. (3), pp. 271-277 (1991).
Mallat, Stephane G., “A Theory for Multiresolution Signal Decomposition: The Wavelet Representation,” IEEE Trans. Pattern Anal. Mach. Intell., 11(7), 674-693 (1989).
MathWorks, “Code for shannon energy envelope?”, MATLAB Answers, MATLAB Central, Internet document, 3 pp., Jan. 17, 2019, https://www.mathworks.com/matlabcentral/answers/440144-code-for-shannon-energy-envelope/.