The present invention relates to signal processing, and more particularly to echo cancellation devices and methods.
Hands-free telephones (e.g., speakerphones) provide conveniences such as conversations while driving an automobile and teleconferencing with multiple speakers at a single phone. However, acoustic reflections of the loudspeaker output of a hands-free phone into its microphone input mimic another participant and thus appear as an echo to the original remote speaker. Acoustic echo cancellation and echo suppression attempt to minimize these effects.
Acoustic echo cancellation (AEC) methods approximate the properties of the loudspeaker-to-microphone acoustic channel and thereby can generate an approximation of the microphone pickup of sounds emitted by the loudspeaker. Then this approximation can be cancelled from the actual microphone pickup. Acoustic echo cancellation typically uses adaptive filtering to track the varying acoustic channel; see U.S. Pat. No. 5,633,936.
Various methods for filter definition and fast convergence have been proposed, including normalized least mean squares with input decorrelation or affine projection. See, for example, Doherty et al., A Robust Echo Canceler for Acoustic Environments, 44 IEEE Trans. Circuits Systems 389 (1997) and Duttweiler, Proportionate Normalized Least-Mean-Squares Adaptation in Echo Cancellers, 8 IEEE Trans. Speech Audio Proc. 508 (2000).
However, these approaches still face an unsatisfactory trade-off between fast convergence to the echo channel and filter divergence in the presence of near-end speech and noise.
The present invention provides echo cancellation with dual estimation filters having fast and slow adaptations plus hysteresis switching between filters.
This has advantages including rapid tracking of echo-path changes together with robustness against filter divergence due to near-end speech and noise.
FIGS. 1a-1c illustrate an implementation for a preferred embodiment method.
FIGS. 2a-2b show echo cancellation features.
1. Overview
FIG. 1b illustrates functional blocks of a preferred embodiment system for echo cancellation as could be used in a hands-free phone.
The preferred embodiment methods can be performed with digital signal processors (DSPs) or general purpose programmable processors or application specific circuitry, and these can be combined into systems on a chip such as both a DSP and RISC processor on the same chip with the RISC processor controlling operations. A stored program in an onboard ROM or external flash EEPROM for a DSP or programmable processor could perform the signal processing. Analog-to-digital converters and digital-to-analog converters provide coupling to the real world, and modulators and demodulators (plus antennas for air interfaces) provide coupling for transmission waveforms. The speech can be encoded, packetized, and transmitted over networks such as the Internet.
2. Acoustic Echo Cancellation with Adaptive Channel Estimation
Preferred embodiment echo cancellation methods use a variant of the normalized LMS (least mean squares) method for adaptation of an acoustic channel estimation filter. Thus first consider the LMS method and the following signal definitions, where u(n) is the near-end speech, y(n) the echo, and n0(n) the near-end noise picked up by the microphone:
v(n)=u(n)+y(n)+n0(n)
e(n)=v(n)−ŷ(n)
And when there is no further signal processing in the downlink or uplink (compare FIGS. 2a-2b):
x(n)=r(n)
s(n)=e(n)
Let {hk(n): k=0, 1, . . . , N−1} denote the coefficients of the length-N impulse response of the acoustic channel (from the loudspeaker input to the microphone output) at time n. Typically, filters of length N=100-200 would be used in small echo environments, such as a car interior, and longer filters in larger echo environments. Further, N=256 would be a convenient size when various computations (convolutions and correlations) are performed in the transform domain. The digital data may be 64-bit floating point or 16-bit fixed-point or any other convenient size.
It is convenient to express the acoustic channel impulse response as a length-N (column) vector:
h(n)=[h0(n), h1(n), . . . , hN−1(n)]T
Similarly, let ĥ(n)={ĥk(n)} denote the acoustic channel estimation filter impulse response; ideally, ĥ(n) closely approximates h(n). And as an N-vector the echo estimation filter is:
ĥ(n)=[ĥ0(n), ĥ1(n), . . . , ĥN−1(n)]T
FIGS. 2a-2b (and FIG. 3) indicate the filter by Ĥ(z), its z-transform.
Now let x(n) denote the far-end observation vector; that is, at time n the last N far-end samples:
x(n)=[x(n), x(n−1), . . . , x(n−N+1)]T
Without echo suppression the far-end observation vector is the same as the downlink observation vector, r(n).
Linearity of the acoustic channel implies:
y(n)=h(n)|x(n)
where | denotes the inner (scalar) product of two N-vectors. Similarly, define the echo approximation:
ŷ(n)=ĥ(n−1)|x(n)
where ĥ(n−1) is used for the echo estimate because the current acoustic channel estimate is not available until the echo estimate is computed. AEC attempts to remove the echo signal, y(n), from the near-end signal, v(n), by subtraction of the echo estimate, ŷ(n), from v(n) to yield e(n). Then the AEC updates the acoustic channel estimate filter from ĥ(n−1) to ĥ(n) using e(n).
The LMS method updates the acoustic channel estimation filter ĥ(n) by minimizing (with respect to filter coefficients) the expected error for random inputs:
ĥ(n)=arg minĥ{E[|v(n)−ĥ|x(n)|2]}
where E denotes the expectation. This yields a steepest-descent type of update:
ĥ(n)=ĥ(n−1)+μ(n)e(n)x(n)
where μ(n) is a positive “step size” parameter to scale the gradient. The step size determines convergence rate and filter stability; μ(n) could be a constant roughly equal to 0.1. Variants of the LMS method allow μ(n) to depend upon parameters such as the estimated noise power and ∥x(n)∥2 (where ∥x(n)∥2 equals x(n)|x(n)). In particular, the normalized LMS method may have:
ĥ(n)=ĥ(n−1)+μe(n)x(n)/∥x(n)∥2
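This normalized-LMS recursion can be sketched in a few lines of illustrative pure Python; the toy 8-tap channel, filter length, and step size below are assumed values for the sketch, not the preferred embodiment (which adds whitening, step-size limiting, and dual filters in the following sections):

```python
# Toy normalized-LMS echo canceller (illustrative sketch only; the
# channel h_true, filter length N, and mu are assumed values).
import random

def nlms_aec(far_end, near_end, N=8, mu=0.5, eps=1e-8):
    """Return echo-cancelled outputs e(n) and the final channel estimate."""
    h_hat = [0.0] * N
    x_vec = [0.0] * N              # x(n): last N far-end samples, newest first
    out = []
    for x, v in zip(far_end, near_end):
        x_vec = [x] + x_vec[:-1]
        y_hat = sum(hk * xk for hk, xk in zip(h_hat, x_vec))   # h_hat | x(n)
        e = v - y_hat                                          # cancelled output
        norm = sum(xk * xk for xk in x_vec) + eps              # ||x(n)||^2
        h_hat = [hk + mu * e * xk / norm for hk, xk in zip(h_hat, x_vec)]
        out.append(e)
    return out, h_hat

random.seed(0)
h_true = [0.5, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0]   # toy acoustic channel
far = [random.uniform(-1.0, 1.0) for _ in range(4000)]
buf = [0.0] * 8
near = []
for x in far:
    buf = [x] + buf[:-1]
    near.append(sum(h * b for h, b in zip(h_true, buf)))  # pure echo, no noise
e, h_hat = nlms_aec(far, near)
```

With a noise-free echo and white far-end input, the estimate converges to h_true; the later sections modify exactly this recursion.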
First-order decorrelation methods improve filter convergence by preprocessing the input through a decorrelation of x(n) with respect to x(n−1). In particular, define xdc(n)=x(n)−c(n)x(n−1) where c(n) is the decorrelation coefficient c(n)=x(n)|x(n−1)/∥x(n−1)∥2.
Of course, c(n) x(n−1) is the projection of x(n) onto the subspace spanned by x(n−1), so the decorrelation replaces x(n) by its projection xdc(n) onto the orthogonal complement of the span of x(n−1).
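The projection property can be checked numerically; in this sketch (toy vectors, with c computed as the least-squares coefficient x(n)|x(n−1)/∥x(n−1)∥2 implied by the projection interpretation) the decorrelated vector is orthogonal to x(n−1):

```python
# Numerical check of the projection property of first-order decorrelation:
# x_dc = x - c * x_prev lies in the orthogonal complement of span{x_prev}.
# Toy vectors; c is the least-squares projection coefficient.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x      = [1.0, 2.0, -1.0, 0.5]   # x(n)
x_prev = [0.5, 1.0, 1.0, -1.0]   # x(n-1)

c = dot(x, x_prev) / dot(x_prev, x_prev)
x_dc = [xi - c * pi for xi, pi in zip(x, x_prev)]

residual = dot(x_dc, x_prev)     # should vanish: x_dc orthogonal to x(n-1)
```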
The normalized decorrelating LMS filter update is then:
ĥ(n)=ĥ(n−1)+μe(n)xdc(n)/∥xdc(n)∥2
where ∥x(n)∥2=∥x(n−1)∥2 was assumed for simplicity.
And thus the optimal update is μe(n)xdc(n)/(xdc(n)|x(n)).
Affine projection methods generalize this decorrelation approach by use of more prior input samples together with a conjugate gradient. Indeed, for the simplest second-order affine projection method the optimal filter update is:
ĥ(n)=ĥ(n−1)+μX(n)[X(n)HX(n)]−1e(n)
where X(n) is the N×2 matrix with columns x(n) and x(n−1) and e(n) is the 2×1 vector of components e0(n) and e1(n) with
e0(n)=v(n)−ĥ(n−1)|x(n)
e1(n)=v(n−1)−ĥ(n−1)|x(n−1)
The 2×2 matrix X(n)HX(n) has diagonal elements ∥x(n)∥2 and ∥x(n−1)∥2 and off-diagonal elements equal to the correlation x(n)|x(n−1) between x(n) and x(n−1).
The inverse is simply the 2×2 matrix with the diagonal elements interchanged and the off-diagonal elements negated, all divided by det, the determinant of the 2×2 matrix. Hence, the update becomes:
ĥ(n)=ĥ(n−1)+(μ/det)[e0(n)(∥x(n−1)∥2x(n)−x(n)|x(n−1)x(n−1))+e1(n)(∥x(n)∥2x(n−1)−x(n)|x(n−1)x(n))]
Note that the first update term uses the forward decorrelation of x(n) with respect to x(n−1) and the second term uses the backward decorrelation of x(n−1) with respect to x(n).
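A single second-order affine projection update can be sketched as follows (toy values; with μ=1 and an exact 2×2 inverse, the update drives both a-posteriori errors to zero):

```python
# One second-order affine projection update (toy values). With mu = 1 and
# an exact 2x2 inverse, both a-posteriori errors are driven to zero.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

N = 4
h_hat = [0.0] * N
x_n   = [1.0, 0.5, -0.5, 2.0]     # x(n)
x_nm1 = [0.5, -1.0, 1.0, 0.5]     # x(n-1)
v_n, v_nm1 = 1.2, -0.7            # near-end samples v(n), v(n-1)

# error vector e(n) = (e0, e1)
e0 = v_n - dot(h_hat, x_n)
e1 = v_nm1 - dot(h_hat, x_nm1)

# X(n)^H X(n): diagonal = energies, off-diagonal = correlation
a, b, d = dot(x_n, x_n), dot(x_n, x_nm1), dot(x_nm1, x_nm1)
det = a * d - b * b               # must be nonzero for the inverse
g0 = ( d * e0 - b * e1) / det     # first component of [X^H X]^{-1} e
g1 = (-b * e0 + a * e1) / det     # second component

mu = 1.0                          # full step
h_hat = [hk + mu * (g0 * xa + g1 * xb)
         for hk, xa, xb in zip(h_hat, x_n, x_nm1)]
```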
Of course, the optimal updating may be undesirable under certain conditions, such as for the acoustic channel of a hands-free phone at low signal-to-noise ratio (SNR) levels. And consequently, the preferred embodiment methods modify the normalized LMS filter adaptation to (i) spectrally flatten x(n) based on first-order linear predictive whitening, which is analogous to decorrelation, (ii) limit step size to control adaptation and prevent filter divergence due to near-end signals (doubletalk or acoustic noise), and (iii) select between dual filters to have both rapid filter convergence and protection against filter divergence.
The step size limitation controls the maximum amount of filter change per adaptation update, so that divergence due to bad input signals will be very slow. The dual-filter aspect improves robustness to adaptation divergence by using an older copy of the filter coefficients for filtering and by resetting the fast-adapting filter. Using step size control along with dual filters allows AEC to have moderate step size and provide good divergence control while providing good tracking capability for echo channel change. The following sections detail these modifications.
3. Spectral Flattening
Preferred embodiment AEC filter update methods first apply predictive spectral flattening to the loudspeaker input, x(n), and then use this modified input in an LMS-type update. Initially, define a normalized correlation coefficient, λ(n), as:
λ(n)=x(n)|x(n−1)/∥x(n)∥2
and use λ(n) to predictively whiten x(n) by subtracting the normalized correlation to define xwh(n)=x(n)−λ(n)x(n−1).
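A small numerical sketch of the whitening step (toy vectors chosen with equal norms, so the energy identity used in the derivation that follows holds exactly):

```python
# Predictive whitening x_wh(n) = x(n) - lambda(n) x(n-1), with a check of
# ||x_wh||^2 = ||x||^2 (1 - lambda^2), which holds under the equal-norm
# assumption ||x(n)|| = ||x(n-1)||; the toy vectors satisfy it by design.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x      = [1.0, 2.0, -1.0, 0.5]        # x(n)
x_prev = [0.5, -1.0, 2.0, 1.0]        # x(n-1): same norm as x

lam = dot(x, x_prev) / dot(x, x)      # lambda(n) = x(n)|x(n-1) / ||x(n)||^2
x_wh = [xi - lam * pi for xi, pi in zip(x, x_prev)]

lhs = dot(x_wh, x_wh)                 # ||x_wh(n)||^2
rhs = dot(x, x) * (1.0 - lam * lam)   # ||x(n)||^2 (1 - lambda(n)^2)
```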
Next, define the AEC filter adaptation update in terms of the predictively-whitened input as:
ĥ(n)=ĥ(n−1)+Step(n)xwh(n)
Then as with the normalized LMS, find the optimal Step(n) factor by minimizing the AEC output error:
|v(n)−(ĥ(n−1)+Step(n)xwh(n))|x(n)|2
This yields (again presuming that ∥x(n)∥=∥x(n−1)∥, which implies ∥xwh(n)∥2=(1−λ(n)2)∥x(n)∥2) the optimal AEC filter adaptation update, including a step size parameter μ, as:
Step(n)=μe(n)/∥xwh(n)∥2
So the optimal update, including parameter μ, is
ĥ(n)=ĥ(n−1)+Δĥ(n)
where Δĥ(n)=Step(n)xwh(n).
4. Step Size Control
Convergence of the adaptive AEC filter is based on the assumption that the only near-end input signal is the echo of the loudspeaker output propagating through the acoustic channel; if there is acoustic noise or the near-end speaker is talking, then the echo cancellation filter can quickly diverge. In a traditional double-talk detector, the energies of the near-end and far-end signals are compared, and if the near-end energy is too high, then adaptation of the filter is stopped and the filter coefficients are frozen. However, in difficult acoustic echo situations the echo can be so loud as to stop the adaptation, paralyzing the system. In addition, convergence enhancements such as the spectral whitening of the foregoing section can magnify near-end noise in quiet frequency bands, distorting the estimation process even when the echo appears to be the dominant signal.
To prevent divergence in the presence of near-end signals, preferred embodiment methods monitor the amount of filter adaptation per input sample and limit the amount of filter change defined by the energy in the filter update normalized by the energy in the current filter. That is, consider the relative change ∥Δĥsm(n)∥2/∥ĥ(n−1)∥2 where Δĥsm(n) is a smoothed version of Δĥ(n) and the update is ĥ(n)=ĥ(n−1)+Δĥ(n). Thus divergence due to bad input signals can be made very slow. Indeed, during periods of strong near-end energy (local speech plus noise), the filter estimate can diverge quickly, which is reflected in large values of ∥Δĥsm(n)∥2/∥ĥ(n−1)∥2.
Preferred embodiment step size limit methods limit the relative change to a maximum value of Δmax by scaling down Step(n) for samples where this limit would be exceeded. This limit ensures that any divergence of the filter will be very slow. In particular, the preferred embodiment AEC filter adaptation update vector relative energy is limited as:
that is, Δĥ(n) is applied unchanged if ∥Δĥsm(n)∥2≦Δmax∥ĥ(n−1)∥2, and otherwise Step(n) is scaled down so that this bound holds.
Because computing the filter energy ∥ĥ(n−1)∥2 (and thus the maximum relative filter update vector energy Δmax∥ĥ(n−1)∥2) and the optimal filter update vector energy ∥Δĥ(n)∥2 for each sample is computationally expensive, preferred embodiments compute the filter energy only once per 20 ms frame (160 samples) and only estimate the optimal filter update vector energy for each sample. In particular, for a frame with samples n=n0, n0+1, n0+2, . . . , n0+159, compute the filter energy for the first sample: ∥ĥ(n0−1)∥2=Σ0≦k<Nĥk(n0−1)2, and then use Δmax∥ĥ(n0−1)∥2 as the maximum filter update vector energy for each sample in the frame. Also, at sample n, estimate the optimal filter update vector energy ∥Δĥ(n)∥2 simply by noting that:
∥Δĥ(n)∥2=Step(n)2∥xwh(n)∥2
where Step(n) was part of the Δĥ(n) computation.
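The point of this estimate is that the update-vector energy follows from scalars already at hand, avoiding an N-length sum over the filter; a minimal sketch (toy values are assumptions):

```python
# The optimal update is Delta_h(n) = Step(n) * x_wh(n), so its energy is
# Step(n)^2 * ||x_wh(n)||^2 -- no N-length sum over the update is needed.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x_wh = [0.3, -1.1, 0.7, 0.2]           # whitened far-end vector (toy)
step = 0.05                            # Step(n) = mu * e(n) / ||x_wh(n)||^2
delta_h = [step * xi for xi in x_wh]   # optimal update vector

direct   = dot(delta_h, delta_h)           # O(N) direct energy
estimate = step * step * dot(x_wh, x_wh)   # cheap estimate reusing ||x_wh||^2
```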
Indeed, preferred embodiment computations for each input far-end sample x(n) and near-end sample v(n) use incremental sums within a frame (or other filter energy update interval). The computations could be as follows:
(1) After filter updating from the preceding sample n−1 inputs, the following are in memory (along with the step size parameter μ): the estimation filter ĥ(n−1); the far-end vector x(n−1); the energy ∥x(n−1)∥2; the scalar product x(n−1)|x(n−2); the smoothed update energy ∥Δĥsm(n−1)∥2; and the once-per-frame filter energy ∥ĥ(n0−1)∥2.
(2) Receive nth sample inputs: far-end x(n) and near-end v(n).
(3) Form N-vector x(n) of N most recent far-end inputs from x(n−1) by taking the first component as x(n) and the remaining N−1 components from x(n−1) and disregarding the last component x(n−N).
(4) Compute echo estimation ŷ(n) by applying current echo estimation filter ĥ(n−1) to x(n); that is, ŷ(n)=Σ0≦k<N ĥk(n−1)x(n−k)=ĥ(n−1)|x(n).
(5) Compute the echo-cancelled output as e(n)=v(n)−ŷ(n).
(6) Update the estimation filter from ĥ(n−1) to ĥ(n) by following steps (7)-(15).
(7) Compute the scalar product of x(n) and x(n−1) as an update of the scalar product of x(n−1) and x(n−2):
x(n)|x(n−1)=x(n)x(n−1)+x(n−1)|x(n−2)−x(n−N)x(n−N−1).
(8) Compute the energy of x(n) as an update of the energy of x(n−1):
∥x(n)∥2=x(n)2+∥x(n−1)∥2−x(n−N)2.
(9) Compute the normalized correlation from the foregoing (7)-(8):
λ(n)=x(n)|x(n−1)/∥x(n)∥2
(10) Compute the predictively whitened xwh(n) from the foregoing steps (1), (3), and (9):
xwh(n)=x(n)−λ(n)x(n−1).
(11) Compute the energy of xwh(n) from (1) and (8)-(9):
∥xwh(n)∥2=∥x(n)∥2(1−λ(n)2).
(12) Compute the optimal filter update vector from (5) and (10)-(11): Δĥ(n)=Step(n)xwh(n), where Step(n)=μe(n)/∥xwh(n)∥2.
(13) Compute the energy of the optimal update vector from (5) and (12): ∥Δĥ(n)∥2=Step(n)2∥xwh(n)∥2.
(14) Compute ∥Δĥsm(n)∥2, the smoothed energy of Δĥ(n).
(15) Compare the smoothed energy ∥Δĥsm(n)∥2 from (14) to the estimation filter energy ∥ĥ(n0−1)∥2 from (1): if ∥Δĥsm(n)∥2>Δmax∥ĥ(n0−1)∥2, then scale down Step(n), and thereby Δĥ(n), so that the limit is met.
(16) Update the echo channel estimation filter from (1) and (15):
ĥ(n)=ĥ(n−1)+Δĥ(n).
(17) Repeat (1)-(16) for next input samples x(n+1) and v(n+1); additionally, if the input samples are at the start of a frame, then compute the energy of the current echo channel estimation filter.
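Steps (1)-(17) can be sketched per sample in illustrative pure Python; the constants (μ, Δmax, the smoothing factor, the start-up guard) are assumed values for the sketch, not the patent's fixed choices:

```python
# Illustrative per-sample AEC loop: incremental scalar products (7)-(8),
# predictive whitening (9)-(11), step-size limiting (12)-(15), and a
# once-per-frame filter-energy refresh (17). MU, DELTA_MAX, BETA, and the
# start-up guard are assumptions for the sketch.
import random

N, FRAME = 16, 160
MU, DELTA_MAX, BETA = 0.3, 0.01, 0.9
EPS = 1e-10

def run_aec(far, near):
    h = [0.0] * N                 # echo channel estimate h_hat
    xbuf = [0.0] * (N + 2)        # newest-first far-end history
    xx = xx1 = 0.0                # ||x||^2 and x(n)|x(n-1), kept incrementally
    dh_sm = 0.0                   # smoothed update-vector energy
    h_energy = EPS
    out = []
    for n, (x, v) in enumerate(zip(far, near)):
        if n % FRAME == 0:        # (17) refresh filter energy once per frame
            h_energy = sum(hk * hk for hk in h) + EPS
        xbuf = [x] + xbuf[:-1]
        # (4)-(5) echo estimate and echo-cancelled output
        e = v - sum(h[k] * xbuf[k] for k in range(N))
        out.append(e)
        # (7)-(8) incremental scalar product and energy updates
        xx1 += xbuf[0] * xbuf[1] - xbuf[N] * xbuf[N + 1]
        xx  += xbuf[0] ** 2 - xbuf[N] ** 2
        # (9)-(11) whitening coefficient and whitened-input energy
        lam = xx1 / (xx + EPS)
        xwh_energy = (xx + EPS) * max(1.0 - lam * lam, EPS)
        # (12)-(13) optimal step and cheaply estimated update energy
        step = MU * e / xwh_energy
        dh_energy = step * step * xwh_energy
        # (14)-(15) smooth, then limit relative change (guard for start-up)
        dh_sm = BETA * dh_sm + (1.0 - BETA) * dh_energy
        if h_energy > 1e-6 and dh_sm > DELTA_MAX * h_energy:
            step *= (DELTA_MAX * h_energy / dh_sm) ** 0.5
        # (16) update with whitened input x_wh(n) = x(n) - lam * x(n-1)
        h = [h[k] + step * (xbuf[k] - lam * xbuf[k + 1]) for k in range(N)]
    return out, h

random.seed(1)
h_true = [0.6, -0.4, 0.2, 0.1, -0.05] + [0.0] * 11   # toy echo channel
far = [random.uniform(-1.0, 1.0) for _ in range(8000)]
buf = [0.0] * N
near = []
for x in far:
    buf = [x] + buf[:-1]
    near.append(sum(hk * bk for hk, bk in zip(h_true, buf)))  # echo only
e, h_hat = run_aec(far, near)
```

In this clean echo-only scenario the limit of (15) rarely engages; its purpose is to slow divergence when near-end speech or noise corrupts e(n).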
5. Dual Estimation Filters for Acoustic Echo Channel
The dual-path model for the acoustic echo channel estimation uses two AEC filters: a fast-adapting filter based on the foregoing adaptation and step size control and a slow tracking filter based on previous fast-adapting filter coefficients. The fast-adapting filter provides rapid adaptation to any change in the echo path, while the slow filter provides protection against divergence of the adaptation due to the near-end speech or noise. The update of the slow filter, as well as the selection of which filter to use for the current frame output, is based on long-term measurement of the relative echo cancellation performance of both filters.
In particular, a preferred embodiment method, with steps illustrated in FIGS. 1a-1c, proceeds frame-by-frame as follows:
(1) At the start of the frame at sample n0 the memories contain the N-coefficient fast-adapting filter updated from the immediately prior sample, ĥfast(n0−1), the current N-coefficient slow-tracking filter updated at the end of the prior frame, ĥslow(n0−1), the state counter value in the range −2 to +5, the filter flag value (fast or slow), plus the sample vectors, energies, scalar products, and fast filter energy as described in the foregoing section 4.
(2) Sequentially, for each of the 160 pairs of far-end plus near-end samples, x(n) and v(n), of the frame, apply the fast-adapting filter to the corresponding vector x(n) and generate an AEC output efast(n) plus a fast filter update as in section 4. This yields a frame of AEC outputs, efast(n0), efast(n0+1), efast(n0+2), . . . , efast(n0+159), plus a final fast-adapting filter ĥfast(n0+159), together with updated memories.
(3) Apply the current slow filter, ĥslow(n0−1), to re-filter the samples in the frame to yield AEC outputs eslow(n0), eslow(n0+1), eslow(n0+2), . . . , eslow(n0+159). Note that the slow filter is constant throughout the frame, so there is no updating within the frame.
(4) Compute the energies of both the fast filter and slow filter AEC outputs for the frame: Efast=Σ0≦k≦159 efast(n0+k)2 and Eslow=Σ0≦k≦159 eslow(n0+k)2.
(5) Adjust the state counter value as follows:
(a) If log Efast+3 dB<log Eslow, then increment the state counter by +1. In this case the fast-adapting filter has good performance for this frame as compared to the slow-tracking filter. Large positive values of the state counter reflect long-term better performance by the fast-adapting filter as compared to the slow-tracking filter. The state counter saturates in the upward direction at +5.
(b) If log Efast−1 dB>log Eslow, then decrement the state counter by 1 and clip the state counter to non-positive values; that is, state counter→min{0, state counter}. In this case the fast-adapting filter performance is much worse than the slow-tracking filter performance. The state counter saturates downwards at −2.
(c) If neither (a) nor (b) applies and the state counter is positive, then decrement the state counter by 1. This prevents the AEC from updating the slow-tracking filter too frequently.
(d) If neither (a) nor (b) applies and the state counter is non-positive, then make no change.
(6) Update the filters and filter flag as follows:
(a) when the state counter is at +5, the fast-adaptation has been performing well over the recent long-term, so both (i) set the filter flag to fast and (ii) update the slow filter coefficients using the current fast-adapting filter coefficients.
In particular, take ĥslow(n0+159)=(1−α)ĥslow(n0−1)+αĥfast(n0+159), where α is a step size, typically about 0.125, to prevent rapid change in the slow filter coefficients. Then the state counter is reset to +4 to keep the counter near the upper saturation but also to let it decrement when the fast and slow filters have comparable performance in the next frame, as in foregoing (5)(c).
(b) when the state counter value is at −2, the fast filter is diverging, and so both (i) set the filter flag to slow and (ii) reset the fast filter coefficients to equal the slow filter coefficients. That is, take ĥfast(n0+159)=ĥslow(n0+159). Then the state counter is reset to −1 to keep the counter near the bottom.
(c) when the state counter value is between −2 and +5, leave both (i) the filter flag and (ii) the filters unchanged; this provides hysteresis.
Note that seven successive frames (a total of 140 ms for 20 ms frames) with the fast filter outperforming the slow filter by 3 dB will ensure the filter flag is set to fast and the slow filter (slowly) updated towards the fast filter; whereas, three successive frames with the slow filter outperforming the fast filter by 1 dB will ensure the filter flag is set to slow and the fast filter set equal to the slow filter.
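The counter/flag logic of (5)-(6) can be sketched as a small state machine (coefficient updates elided; the initial flag value and the toy frame energies are assumptions), reproducing the seven-frame and three-frame switching behavior noted above:

```python
# State-machine sketch of the dual-filter hysteresis switching of (5)-(6).
# Only the counter/flag logic is modeled; the filter-coefficient updates
# of (6)(a)-(b) are noted in comments. Initial flag "slow" is an assumption.
import math

def db(x):
    return 10.0 * math.log10(x)

class DualFilterSelector:
    def __init__(self):
        self.counter = 0
        self.flag = "slow"

    def frame(self, e_fast, e_slow):
        # (5) adjust the state counter from the frame output energies
        if db(e_fast) + 3.0 < db(e_slow):        # (a) fast clearly better
            self.counter = min(5, self.counter + 1)
        elif db(e_fast) - 1.0 > db(e_slow):      # (b) fast clearly worse
            self.counter = max(-2, min(0, self.counter - 1))
        elif self.counter > 0:                   # (c) comparable, positive
            self.counter -= 1                    # ((d): non-positive, no change)
        # (6) switch only at the saturation points (hysteresis)
        if self.counter == 5:
            self.flag = "fast"   # and nudge slow filter toward fast filter
            self.counter = 4
        elif self.counter == -2:
            self.flag = "slow"   # and reset fast filter to the slow copy
            self.counter = -1
        return self.flag

sel = DualFilterSelector()
for _ in range(7):               # fast better by 10 dB for 7 frames
    sel.frame(e_fast=1.0, e_slow=10.0)
flag_after_fast = sel.flag
for _ in range(3):               # then slow better by 10 dB for 3 frames
    sel.frame(e_fast=10.0, e_slow=1.0)
flag_after_slow = sel.flag
```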
The use of step size control with dual filters allows preferred embodiment AEC to have good tracking capability while providing good divergence control. Without dual filters, there is a trade-off between divergence control (need small Δmax) and tracking capability (need large Δmax). With dual filters, the AEC can use a relatively large Δmax for better tracking capability because the fast filter divergence can be suppressed by the slow-adapting filter. Also, the asymmetry of the counter increments/decrements and the filter updates helps the combination of good tracking with divergence protection.
6. Modifications
The preferred embodiments may be modified while retaining one or more of the features of step size control for both fast and slow filter adaptations with switching between filters according to performance.
For example, the spectral whitening could be omitted and a differing fast-adapting filter updating method used; the various values such as counter increment/decrement size, counter saturation limits, filter flag switch points, adaptation factors, filter change limit, frame size, and so forth could each be varied; the counter scale could be translated (e.g., no negative values) and/or inverted; various computations such as the slow filtering could be performed in a frequency domain by use of a transform such as the FFT; the filters could be partitioned into subfilters for low latency computations; a measure differing from energy could be used to compare the performance of the fast and slow filters, such as sum of absolute values of the outputs; the relative filter change limit Δmax could be made adaptive; and so forth. The positive counter value decrementing and/or the clipping to non-positive counter values of a decremented counter value could be omitted. The counter resets after saturation could be omitted.
This application claims priority from provisional patent application No. 60/640,690, filed Dec. 30, 2004. The following co-assigned copending patent application discloses related subject matter: application Ser. No. 11/165,903, filed Jun. 24, 2005.
References Cited (U.S. Patent Documents):
Number | Name | Date | Kind
---|---|---|---
5345119 | Khoury | Sep 1994 | A
5737409 | Inoue | Apr 1998 | A
6947549 | Yiu et al. | Sep 2005 | B2
7453921 | Gossett | Nov 2008 | B1

Prior Publication Data:
Number | Date | Country
---|---|---
20060147032 A1 | Jul 2006 | US

Related Provisional Application:
Number | Date | Country
---|---|---
60640690 | Dec 2004 | US