This invention relates in general to the field of dual-axis swallowing accelerometry signal analysis and more specifically to a method for denoising such signals.
Swallowing accelerometry is a potentially informative adjunct to bedside screening for dysphagia. These measurements are minimally invasive, requiring only the superficial attachment of a sensor anterior to the thyroid notch. Even though single-axis accelerometers were traditionally used for swallowing accelerometry, recent studies have shown that dual-axis accelerometers can capture more of the clinically relevant information. Nevertheless, such measurements are inherently very noisy due to various physiological and motion artifacts. Denoising of dual-axis swallowing accelerometry signals is therefore essential for the development of a robust medical device based on these signals.
Estimation of unknown signals in white Gaussian noise has been dealt with by others. Wavelet denoising has previously been proposed as a valuable option. Wavelet denoising removes the additive white Gaussian noise from a signal by zeroing the wavelet coefficients with small absolute value. The suggested optimal threshold is equal to σε√(2 log N), where σε² is the variance of the additive noise and N is the length of the signal. This approach requires knowledge of the noise variance, which can be estimated from the wavelet coefficients at the finest scale. However, wavelet denoising with the suggested optimal threshold does not necessarily produce the optimal results for signals that are not smooth, i.e., signals whose noiseless coefficients are of very small amplitude for a large number of basis functions. Recent attempts to overcome this shortcoming have yielded methods that can suffer from high computational complexity for very long signals, and do not necessarily reach the optimal results.
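By way of illustration only, the following is a minimal sketch of wavelet denoising with the threshold σε√(2 log N), assuming the PyWavelets package; the wavelet choice and the median-based estimate of the noise standard deviation from the finest-scale coefficients are assumptions made for the example, not requirements of the method described herein.

    # Illustrative sketch (not the claimed method): universal-threshold wavelet
    # denoising using the PyWavelets package.
    import numpy as np
    import pywt

    def universal_threshold_denoise(x, wavelet="db4"):
        coeffs = pywt.wavedec(x, wavelet)
        # Estimate the noise standard deviation from the finest-scale detail
        # coefficients via the median absolute deviation (an assumed estimator).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        tau = sigma * np.sqrt(2.0 * np.log(len(x)))
        # Soft-threshold the detail subbands; keep the approximation coefficients.
        coeffs = [coeffs[0]] + [pywt.threshold(c, tau, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(x)]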
It is an object of this invention to: (1) reduce the high computational complexity; and (2) reduce the reconstruction error associated with denoising swallowing accelerometry signals.
This invention teaches a method for denoising of long duration dual-axis swallowing accelerometry signals using a computationally efficient algorithm. The algorithm achieves low computational complexity by performing a search for the optimal threshold in a reduced wavelet subspace. To find this reduced subspace, the proposed scheme uses the minimum value of the estimated reconstruction error. By finding this value, the proposed approach also achieves a smaller reconstruction error than previous approaches such as the MNDL-based, SURE-based and Donoho's approaches. This finding has been confirmed for both synthetic test signals and dual-axis swallowing accelerometry signals.
In the drawings, embodiments of the invention are illustrated by way of example. It is to be expressly understood that the description and drawings are only for the purpose of illustration and as an aid to understanding, and are not intended as a definition of the limits of the invention.
Table 1 is a table of SNRs (dB) between the Donoho approach and the method of the present invention.
Consider N noisy discrete-time observations:
x(n)=f(n)+ε(n) (1)
where n = 0, . . . , N−1, f(n) is a sampled version of a noiseless continuous signal, and ε(n) is the additive white Gaussian noise drawn from N(0, σε²).
Assume that f(n) can be expanded using basis functions, bk(n), on the observation space, BN:
f(n)=Σk=1N ck bk(n) (2)
where
ck=⟨bk(n), f(n)⟩ (3)
and ⟨p, q⟩ denotes the inner product of vectors p and q. However, given the noisy observations, the coefficients, ck, can only be approximated as follows:
ĉk=⟨bk(n), x(n)⟩=ck+⟨bk(n), ε(n)⟩ (4)
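As a purely numerical illustration of eqns. (2)-(4), the following sketch verifies that the coefficients of a noisy observation in an orthonormal basis equal the noiseless coefficients plus the projections of the noise; the randomly generated orthonormal basis and the chosen dimensions are assumptions made only to keep the example self-contained.

    # Numerical illustration of eqns. (2)-(4) with an assumed orthonormal basis.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 8
    B = np.linalg.qr(rng.standard_normal((N, N)))[0]  # columns b_k form an orthonormal basis of B_N
    f = B[:, :2] @ np.array([3.0, -2.0])              # noiseless signal with M = 2 nonzero coefficients
    eps = rng.normal(0.0, 0.5, N)                     # additive white Gaussian noise
    x = f + eps                                       # eqn (1)

    c = B.T @ f                                       # eqn (3): c_k = <b_k(n), f(n)>
    c_hat = B.T @ x                                   # eqn (4): c_hat_k = <b_k(n), x(n)>
    print(np.allclose(c_hat, c + B.T @ eps))          # the perturbation is <b_k(n), eps(n)>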
If f(n) can be described with M nonzero coefficients, where M<<N, then many estimated coefficients, ĉk, represent samples of a zero-mean Gaussian random variable with variance σε². A classical approach known as wavelet denoising diminishes the effects of noise by first expanding the noisy signal in terms of orthonormal bases of compactly supported wavelets. The estimated coefficients below some threshold, τ, are disregarded either by hard or soft thresholding. The value of τ is always chosen based on an attempt to minimize the so-called reconstruction error, re:
re=(1/N)∥f(n)−f̂(n)∥² (5)
where ∥·∥ denotes the Euclidean norm and f̂(n) represents the estimated noiseless signal. re is a sample of a random variable Re that has the following expected value:
where m represents the number of coefficients describing f(n) in some subspace of BN and Δm is a vector of length N−m, representing the coefficients of bases that are not selected to describe the unknown signal. In reality, re is not available and only the number of coefficients not disregarded by the thresholding operation, m̂, is known. In a recent contribution, probabilistic upper and lower bounds for re were derived based on the available data error:
Therefore, it has been shown that the upper bound for re is equal to
where α and β represent the parameters for validation probability (Pv=Q(α)) and confidence probability (Pc=Q(β)), with Q(·) for an argument λ being defined as Q(λ)=(1/√(2π))∫λ∞ exp(−u²/2) du.
In addition, m̂(τ) denotes the number of bases whose expansion coefficients are greater than τ in some subspace of BN.
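The relationship Pv=Q(α) and Pc=Q(β) can be evaluated numerically as sketched below; the probability values used are placeholders, since the specification does not fix them, and the implementation of Q(·) via the complementary error function is a standard identity rather than language from the specification.

    # Sketch relating validation/confidence probabilities to alpha and beta
    # through the Gaussian Q-function (placeholder probability values).
    import numpy as np
    from scipy.special import erfc, erfcinv

    def Q(lam):
        # Q(lambda) = (1 / sqrt(2*pi)) * integral from lambda to inf of exp(-u**2 / 2) du
        return 0.5 * erfc(lam / np.sqrt(2.0))

    def Q_inv(p):
        return np.sqrt(2.0) * erfcinv(2.0 * p)

    Pv, Pc = 0.05, 0.05                # placeholder probabilities
    alpha, beta = Q_inv(Pv), Q_inv(Pc)
    print(alpha, Q(alpha))             # Q(alpha) recovers Pv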
It should be noted that for some values of m̂ the reconstruction error given by eqn. (5) and its upper bound given by eqn. (8) achieve a minimum due to the bias-to-variance trade-off. The principle of MDL has been borrowed from coding theory to find such a minimum value. Also, it has been demonstrated that smaller reconstruction errors can be achieved with MDL-derived thresholds.
The MNDL-based approach can be computationally expensive for very long data sets since the bases are incrementally added to the subspace describing the unknown signal. Considering the length of acquired dual-axis accelerometry signals (>>10³ points), an attempt should be made to minimize the search space, while choosing a threshold that minimizes the reconstruction error. In some cases the MNDL-based approach can yield higher reconstruction errors than Donoho's approach.
In light of the computational and reconstruction limitations of the MNDL-based approach, a new denoising strategy is proposed here. The goal of this new approach is twofold. First, it should be computationally efficient. Second, it should attain a minimum reconstruction error. Minimization of the search space can be achieved by exploiting the fact that the optimal threshold is usually larger than the actual threshold which minimizes the reconstruction error. The algorithm for determining the optimal threshold is defined through the following steps:
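The enumerated steps themselves are not reproduced above. Purely as a structural sketch, and without restating those steps, a reduced-subspace threshold search of the kind described might be organized as follows; the error estimator is a placeholder standing in for the reconstruction-error bound and is not the formula of this specification.

    # Structural sketch only: sweep a reduced set of candidate thresholds and
    # keep the one minimizing an estimated reconstruction error. The callable
    # `estimated_error` is a placeholder, not the bound defined in this document.
    import numpy as np

    def search_threshold(detail_coeffs, estimated_error, max_candidates=64):
        mags = np.sort(np.abs(detail_coeffs))[::-1]
        # Reduced search space: only the largest coefficient magnitudes are
        # considered as candidate thresholds instead of all N of them.
        candidates = mags[:max_candidates]
        errors = [estimated_error(t) for t in candidates]
        return candidates[int(np.argmin(errors))]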
The results of a two-step numerical analysis are presented in this section. First, the performance of the proposed algorithm is examined using two test signals. The goal of this analysis is to compare the performance of the proposed scheme against that of other well-established techniques under well-controlled conditions. In the second step, the proposed denoising algorithm is applied to the acquired dual-axis swallowing accelerometry signals. The goal is to understand the benefits of the proposed approach in the context of a real biomedical application.
Referring to
The next task is to examine the reconstruction error under various SNR values with the Haar wavelet. One thousand realizations are used for each SNR value yielding the results depicted in
To more closely mimic a real swallowing scenario, the test signal shown in
where w(n) is a Gaussian window with standard deviation σg=1.9 and
fo(n)=0.1 sin(8πnT)+0.2 sin(2πnT)+0.15 sin(20πnT)+0.15 sin(6πnT)+0.12 sin(14πnT)+0.1 sin(4πnT) (12)
with 0≤n≤N−1, N=35000 and T=10⁻⁴ seconds. The duration of the signal is based on previously reported swallow durations. It should be mentioned that this signal only mimics a realistic signal, and does not represent a model of a swallow. The same group of wavelets as in the Blocks signal analysis are used to examine the reconstruction error. It is assumed again that the signal is contaminated with additive zero-mean Gaussian noise and SNR=10 dB. For this particular signal, the Meyer wavelet (indexed by number 7 in
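For reference, the sinusoidal component of eqn. (12) and the stated 10 dB noise condition can be reproduced with the short sketch below; the Gaussian window of eqn. (11) and its combination with fo(n) are not reproduced here because that equation is not shown above.

    # Sketch reproducing f_o(n) from eqn (12) and adding zero-mean Gaussian
    # noise at SNR = 10 dB (the eqn (11) windowing step is omitted).
    import numpy as np

    N, T = 35000, 1e-4
    n = np.arange(N)
    fo = (0.10 * np.sin(8 * np.pi * n * T) + 0.20 * np.sin(2 * np.pi * n * T)
          + 0.15 * np.sin(20 * np.pi * n * T) + 0.15 * np.sin(6 * np.pi * n * T)
          + 0.12 * np.sin(14 * np.pi * n * T) + 0.10 * np.sin(4 * np.pi * n * T))

    snr_db = 10.0
    noise_var = np.mean(fo ** 2) / (10.0 ** (snr_db / 10.0))
    x = fo + np.random.default_rng(0).normal(0.0, np.sqrt(noise_var), N)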
During a three-month period, four hundred and eight participants (aged 18-65) were recruited at a public science centre. All participants provided written consent. The study protocol was approved by the research ethics boards of the Toronto Rehabilitation Institute and Bloorview Kids Rehab, both located in Toronto, Ontario, Canada. A dual-axis accelerometer (ADXL322, Analog Devices) was attached to the participant's neck (anterior to the cricoid cartilage) using double-sided tape. The axes of acceleration were aligned to the anterior-posterior (A-P) and superior-inferior (S-I) directions. Data were band-pass filtered in hardware with a pass band of 0.1-3000 Hz and sampled at 10 kHz using a custom LabVIEW program running on a laptop computer.
With the accelerometer attached, each participant was cued to perform 5 saliva swallows (denoted as dry in Table 1). After each swallow, there was a brief rest to allow for saliva production. Subsequently, the participant completed 5 water swallows (denoted as wet in Table 1) by cup with their chin in the natural position (i.e. perpendicular to the floor) and water swallows in the chin-tucked position (denoted as WTC in Table 1). The entire data collection session lasted 15 minutes per participant.
The acquired dual-axis swallowing accelerometry signals were denoised using Donoho's approach, the MNDL-based approach, the SURE-based approach and the proposed approach. In particular, a 10-level discrete wavelet transform using the Meyer wavelet with soft thresholding was implemented. Before denoising, the signals were pre-processed using inverse filters to annul effects of the data collection system on the acquired data. In order to compare the performance of the aforementioned denoising schemes, SNR values were evaluated before and after denoising using the following formula:
where Ef represents the approximate energy of the noise-free signal, and Eε̂ represents an approximate variance of the white Gaussian noise. The approximate energy is calculated as Ef=σ̂x²−σ̂ε̂², where σ̂x² is the variance of the observed signal, and σ̂ε̂² represents the variance of the noise calculated by (9). Similarly, Eε̂=σ̂x² for the noisy signals, and for the denoised signals Eε̂=reub(m̂(τ), σ̂ε̂², α, β) for the threshold estimated by (10).
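A sketch of this SNR estimate is given below; the dB form 10·log10(Ef/Eε̂) is an assumption, since eqn. (13) itself is not reproduced above, and the noise-variance estimate σ̂ε̂² is taken as an externally supplied value because eqn. (9) is likewise not shown here.

    # Sketch of the SNR estimate: Ef is the observed variance minus the
    # estimated noise variance, and the 10*log10 dB form is an assumption
    # (eqn. (13) is not reproduced here).
    import numpy as np

    def estimate_snr_db(sigma_x2, sigma_eps2, E_eps):
        # Ef: approximate energy of the noise-free signal.
        Ef = sigma_x2 - sigma_eps2
        return 10.0 * np.log10(Ef / E_eps)

    # Example call: x is an observed signal and sigma_eps2 an externally
    # supplied noise-variance estimate; E_eps is chosen per the text above.
    # snr_before = estimate_snr_db(np.var(x), sigma_eps2, E_eps)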
Using the SNR metric given by (13), the results of the analysis are summarized in Table 1. Donoho's approach provides the least amount of improvement in SNR as expected, followed by the MNDL-based approach. The SURE-based approach achieves greater improvement in the SNR values in comparison to the other two aforementioned approaches. Nevertheless, as demonstrated by the results in Table 1, the SURE-based approach exhibits strong variations in performance. The proposed approach provides the greatest improvement in SNR values. On average, the greatest gain in SNR is over Donoho's approach (3.8 dB and 4.0 dB in the A-P and S-I directions, respectively), while smaller improvements were obtained over the SURE-based approach (2.0 dB and 1.3 dB in the A-P and S-I directions, respectively). Nevertheless, the proposed approach still provides a statistically significant improvement over the SURE-based approach in denoising the dual-axis swallowing accelerometry signals (Wilcoxon rank-sum test, p << 10⁻¹⁰ for both directions). This improvement was achieved regardless of whether the different swallowing types were considered individually or as a group. As a last remark, it should be noted that these SNR values were estimated using eqn. (13), which, from our experience with swallowing signals, provides a conservative approximation. In reality, we expect the gains in SNR to be even greater.
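The statistical comparison mentioned above can be carried out as sketched below, assuming SciPy's rank-sum test; the SNR arrays shown are placeholders and not data from the study.

    # Sketch of a Wilcoxon rank-sum comparison between per-recording SNR values
    # from two denoising schemes (placeholder values, not study data).
    import numpy as np
    from scipy.stats import ranksums

    snr_proposed = np.array([12.1, 11.4, 13.0, 12.6])
    snr_sure = np.array([10.3, 10.9, 11.2, 10.8])
    statistic, p_value = ranksums(snr_proposed, snr_sure)
    print(p_value)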
A denoising algorithm is proposed for dual-axis swallowing accelerometry signals, which have potential utility in the non-invasive diagnosis of swallowing difficulties. This algorithm searches for the optimal threshold value in order to achieve the minimum reconstruction error for a signal. To avoid the high computational complexity associated with competing algorithms, the proposed scheme conducts the threshold search in a reduced wavelet subspace. Numerical analysis showed that the algorithm achieves a smaller reconstruction error than the Donoho, MNDL-based and SURE-based approaches. Furthermore, the computational complexity of the proposed algorithm increases logarithmically with signal length. The application of the proposed algorithm to dual-axis swallowing accelerometry signals demonstrated statistically significant improvements in SNR over the other three considered methods.
This application claims the benefit of U.S. Provisional Patent Application No. 61/218,976, filed on Jun. 21, 2009.