SYSTEMS AND METHODS OF ECHO & NOISE CANCELLATION IN VOICE COMMUNICATION

Information

  • Patent Application
  • Publication Number
    20140064476
  • Date Filed
    September 04, 2013
  • Date Published
    March 06, 2014
Abstract
In an example, time and frequency domain speech enhancement is implemented on a platform having a programmable device, such as a PC or a smartphone running an OS. Echo cancellation is done first in the time domain to cancel a dominant portion of the echo. Residual echo is cancelled jointly with noise reduction during a subsequent frequency domain stage. The time domain block uses a dual band, shorter length Adaptive Filter for faster convergence. Non-linear residual echo is cancelled based on an echo estimate and an error signal from the adaptive filters. A controller locates regions that had residual echo suppressed and which do not have speech, and injects comfort noise. The controller can be full-duplex and operate non-linearly. An AGC selectively amplifies the frequency bins based on the gain function used by the residual echo and noise canceller.
Description
BACKGROUND

The present invention generally relates to improving quality of voice communication and more particularly to echo and noise cancellation in packet-based voice communication systems.


Most VoIP vendors have a goal of providing a generic VoIP solution for heterogeneous platforms, including PCs and mobile platforms. However, variation in platform requirements and characteristics makes high-performance, platform-generic speech enhancement a difficult problem. For example, variation in echo path pure delay, hardware non-linearity, and negative ERL due to situations such as bad acoustic coupling, clock drift, and so on pose difficulties. Full duplex voice communication presents difficulties as well. Still other considerations are computation and power efficiency, and maintaining stable performance and quality in a multitasking environment, in which there may be variable computation resource availability.


SUMMARY

The following discloses methods and systems of echo cancellation that may find application across a wide variety of platforms. In one aspect, the proposed echo cancellation system uses a dual band, shorter length time domain Adaptive Filter (ADF) followed by a frequency domain speech enhancement system. The ADF works on two bands with appropriate de-correlation filters to speed up the convergence rate. The frequency domain speech enhancement system includes a Residual Echo and Noise Cancellation System (RENC), a Non-linear Processor (NLP) controller, and a Frequency domain Automatic Gain Controller (FAGC).


In an aspect, the residual echo from longer reverberation and non-linearity is suppressed further, jointly with noise cancellation. It has been found that a large part of the residual echo is correlated with the acoustic echo estimate from the ADF. Canceling the residual echo as part of noise cancellation has been found to produce better results than using a spectral subtraction method with platform-specific tunable gain parameters for individual frequency bins.


In one example implementation, a modified Wiener Filter is used to cancel both residual echo and noise jointly. In another example, a modified Minimum Mean-Square Error Log Spectral Amplitude (MMSE-LSA) cancels residual echo and noise together. In these examples, since residual echo is canceled simultaneously with noise, additional complexity specifically for the residual echo cancellation is reduced.


In some examples, the FAGC uses the frequency domain gain function obtained from the residual echo canceller to produce a Voice Activity Decision (VAD). The FAGC amplifies only speech frequency bins, so that the FAGC does not boost a noise signal embedded with the speech and provides better voice quality.


The NLP Controller locates sample regions that have only residual echo (and not speech). These regions are processed by an Acoustic Echo Suppressor (AES), which replaces the signal in these regions with comfort noise. In an example, to identify the residual-echo-only regions, the NLP controller uses correlation between inputs including the error and microphone signals, the error energy, the microphone signal energy, and the long term average of the reference signal amplitude, as described below. In the example, the NLP controller activates non-linear processing based on a plurality of decision parameters, and further based on a set of pre-defined validation conditions.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts a system context in which methods and systems according to the disclosure can be practiced;



FIG. 2 depicts an example architecture of an echo cancellation system according to the disclosure;



FIG. 3 depicts an example architecture of a Residual Echo and Noise Canceller (RENC) according to the disclosure;



FIG. 4 depicts an example architecture of a gain estimation block;



FIG. 5 depicts an example architecture of a Frequency domain Automatic Gain Controller (FAGC) according to the disclosure;



FIG. 6 depicts an example flow for a controller of a Non-Linear Processor (NLP) used in the example echo cancellation architecture;



FIG. 7 depicts an example of NLP decision logic for a Non-Linear Processor (NLP) used in the example echo cancellation architecture;



FIG. 8 depicts an ensemble average of ERLE without nearend party;



FIG. 9 depicts ensemble average of ERLE with nearend party active;



FIG. 10 depicts FAGC input and output signals and global gain for a tone signal;



FIG. 11 depicts FAGC input and output signal power level for a tone signal;



FIG. 12 depicts FAGC input and output signals and global gain for a speech signal;



FIG. 13 depicts FAGC input and output signal power level for a speech signal;



FIG. 14 depicts NLP decisions on an Echo Suppressor (ES) input signal; and



FIG. 15 depicts ES output (AES input) and AES output signals.





DETAILED DESCRIPTION

This disclosure includes sections relating to an example high level architecture of a speech enhancement system, details of an example Residual Echo and Noise Cancellation (RENC) system, details of an example Frequency domain Automatic Gain Controller (FAGC), details of a proposed NLP controller, and performance examples of the proposed speech enhancement system for real-time captured test signals.



FIG. 1 depicts a system context in which methods and systems according to the disclosure can be practiced. FIG. 1 depicts a situation in which devices (device 20 and device 45) support voice communication over packet networks (e.g., network 40), and in a particular example, where devices support Voice over Internet Protocol (VoIP). User satisfaction with voice communication is degraded greatly by echo. To provide context, echo can be viewed as a situation in which a far-end signal (13) from a far-end device 45 is played from a speaker at a near end (12) device 20 (this signal can include voice from a person at device 45, noise 16, and echo derived from near-end 12 speech played out at a speaker of device 45). A microphone 23 at near end device 20 samples the available audio energy, picks up the far-end signal, encodes some part of it, and returns it to the far-end device 45 (such as in voice packets 34 and 35), which produces audio through a speaker, including noise and the echoed far-end signal picked up at near end 12. Note that near-end and far-end here are simply conventions that would change based on perspective; in a full-duplex conversation, they are interchangeable.


By further explanation, device 20 (and device 45) may include a display 22, a speaker 24, a non-volatile storage 25, a volatile memory hierarchy 26, and one or more processors 27. These components can execute an echo cancellation system according to these disclosures.


Overview of Echo Cancellation System

A high level architecture of an example echo and noise cancellation system is shown in FIG. 2. The input signals to the Acoustic Echo Canceller (AEC) 102 are the microphone signal d(n) and the farend signal x(n) being played out through the speaker (signals having n as an argument are digital versions of time-domain signals). The system contains a Band Pass Filter (BPF) 107, Band Splitters 113, De-Correlation Filters (DCFs) 129, 131, Adaptive Filters (ADFs) 123, 125, Band Mixers 115, 117, a Residual Echo & Noise Canceller (RENC) 119, an NLP controller 109, and an Acoustic Echo Suppressor (AES) 111. Aspects presented herein include example designs for a high performance simultaneous noise and residual echo cancellation unit, an example design of a full-duplex NLP controller, and an example design of an efficient frequency domain gain control unit.


The example system contains two delay compensation units: pure delay compensation, and delay compensation with respect to the microphone signal, in order to synchronize the microphone signal with the RENC output signal. The pure delay can be estimated using an ADF running in the decimated domain; the estimation of pure delay is configurable. In an example, the algorithmic delay of the Residual Echo and Noise Cancellation (RENC) unit is 6 ms, so a compensation delay of about that amount is introduced to the microphone signal to align it with the RENC output signal.


Band Pass Filter (BPF)

The BPF removes DC and unwanted high frequency components from the inputs. The cut-off frequencies of this filter are 0.0125 and 0.96. A 6th order IIR filter is used because of its simplicity and low processing requirement.


Band Splitter

The band splitter splits the signal into two channels using a Quadrature Mirror Filter (QMF) bank. For two-band AEC processing, the input signal is split into 2 channels with a cut-off frequency of π/2. The sampling rate of each channel is reduced to half of the original sampling rate using a decimation factor of 2. This sample rate reduction provides efficient AEC processing.


De-Correlation Filter (DCF)

To avoid degradation of the performance of NLMS algorithms due to the strong correlation of speech signals, the farend signal is pre-whitened by applying a de-correlation filter before it is given to the adaptive filter. The de-correlation filter is a first order prediction-error HPF, with its coefficient matched to the correlation properties of the speech signal. This filtering increases the rate of convergence of the adaptive filter for speech signals. The typical value of the filter coefficient is 0.875.
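For illustration only, a minimal floating-point C sketch of such a first-order prediction-error filter follows; the patent's implementation is fixed-point, and the function and parameter names are assumptions, not from the source.

#include <stddef.h>

/* First-order prediction-error HPF: y(n) = x(n) - a*x(n-1), a = 0.875. */
void dcf_prewhiten(const float *x, float *y, size_t n, float *prev_x)
{
    const float a = 0.875f;          /* typical coefficient from the text */
    float prev = *prev_x;
    for (size_t i = 0; i < n; i++) {
        y[i] = x[i] - a * prev;      /* prediction error = whitened sample */
        prev = x[i];
    }
    *prev_x = prev;                  /* carry state across blocks */
}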


Adaptive Filter (ADF)

The adaptive filter (ADF) uses a delayed-error NLMS algorithm. Since the filter runs in the decimated and de-correlated domain with a shorter filter length, convergence is very fast. The maximum number of taps used per filter is 256. Each ADF has its own built-in near-end speech detector that activates/de-activates the weight adaptation.
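As a rough illustration of the kind of update such a filter performs, a textbook NLMS tap update in floating-point C is sketched below. The delayed-error handling, dual-band structure, fixed-point arithmetic, and built-in near-end speech detector of the actual design are omitted, and the step size mu and regularizer eps are assumed values.

#include <stddef.h>

void nlms_update(float *w, const float *x, float err, size_t taps)
{
    const float mu = 0.5f, eps = 1e-6f;   /* assumed tuning values */
    float energy = eps;
    for (size_t i = 0; i < taps; i++)
        energy += x[i] * x[i];            /* ||x||^2 normalization */
    float g = mu * err / energy;
    for (size_t i = 0; i < taps; i++)
        w[i] += g * x[i];                 /* w <- w + mu*e*x / ||x||^2 */
}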


Band Mixer

The band mixer combines the echo estimates and error signals from the two bands after AEC processing into respective single full-band signals. Echo estimates and error signals are up-sampled and combined by the synthesis filter bank into signals at the original sampling rate. The combined structure for splitting the channels and combining them again is called a Quadrature-Mirror Filter (QMF) bank.


Band Mixer 115, 117 outputs e(n) and y(n) are passed to RENC 119, which, as described below, further suppresses echo and background noise. RENC 119 also has an AGC 121. The RENC 119 outputs signals including s(n) through AGC 120 (see FIG. 2) to AES 111, and s′(n) to NLP controller 109. s(n) is the enhanced nearend signal after canceling residual echo and background noise. The signal s′(n) is an output of the FAGC.


Since NLP controller 109 uses the correlation between the error and microphone signals, the output signal obtained before the FAGC's action is given to it. The FAGC output is given to the AES unit for further processing to eliminate unwanted very low level residual echo. The AES is controlled based on Non-linear Processor (NLP) decisions.


NLP controller 109 enables or disables Non-Linear Processing (NLP), with AES being part of the NLP. NLP can completely remove the residual echo during single talk. The NLP decision also can ensure no signal clipping when passing from single talk to double-talk. Because the NLP controller 109 responds quickly, without hangover, at the start of near end signal presence in the microphone signal, this unit also can be called a Sensitive Double-Talk Detector (SNS DTD).


Acoustic Echo Suppressor (AES) 111 is a switched attenuator. AES comprises a noise parameter extractor and a Comfort Noise Injection (CNI) unit. During single talk, AES replaces residual echo with comfort noise generated by the CNI unit. AES provides a smooth transition between the original signal and the comfort noise generated by the CNI module at the beginning of single talk, as well as ensuring a smooth transition when moving from single talk to nearend speech or nearend background noise. For this seamless transition, AES performs Overlap and Add (OLA) using a triangular window on the CNI generated noise and the enhanced nearend signal s(n) from FAGC at the start of single talk and also at the end of single talk. During the start of single talk, CNI generated noise is multiplied by a rising ramp and added to s(n) multiplied by a falling ramp. Similarly, during the end of single talk, CNI generated noise is multiplied by a falling ramp and added to s(n) multiplied by a rising ramp. In an example, the attenuation or rising factor of the ramp is 0.3662 over a 10 ms period.
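A minimal C sketch of the start-of-single-talk cross-fade follows; a linear ramp over one frame is used here for brevity, and the function name and frame-length parameter are assumptions.

/* OLA cross-fade: comfort noise rises while the enhanced signal falls. */
void aes_crossfade_in(const float *s, const float *cni, float *out, int n)
{
    for (int i = 0; i < n; i++) {
        float up = (float)(i + 1) / (float)n;   /* rising ramp */
        out[i] = cni[i] * up + s[i] * (1.0f - up);
    }
}
/* At the end of single talk the two ramps are swapped. */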


The AGC output, s(n), is classified into speech and noise frames. In an example, each frame is 10 ms in length. The classification uses an energy-based VAD algorithm. The average ambient noise level and Linear Predictive Coefficients (LPC) are extracted for each silence/noise frame.


The CNI unit uses 10th order LPC parameters and a Gaussian random number generator to generate comfort noise, which is used to match the spectrum of the nearend ambient noise. This simulated comfort noise replaces the residual echo without a user-noticeable transition when NLP is activated.
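A simplified C sketch of such a comfort-noise generator follows; the uniform rand()-based excitation stands in for the Gaussian generator named in the text, and the interface and coefficient sign convention are assumptions.

#include <stdlib.h>

/* Excite a 10th-order LPC synthesis (all-pole) filter with scaled noise. */
void cni_generate(const float lpc[10], float gain, float hist[10],
                  float *out, int n)
{
    for (int i = 0; i < n; i++) {
        float e = gain * (2.0f * (float)rand() / (float)RAND_MAX - 1.0f);
        float y = e;
        for (int j = 0; j < 10; j++)
            y -= lpc[j] * hist[j];        /* y(n) = e(n) - sum a_j*y(n-j) */
        for (int j = 9; j > 0; j--)       /* shift output history */
            hist[j] = hist[j - 1];
        hist[0] = y;
        out[i] = y;
    }
}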


Residual Echo and Noise Canceller

A block diagram of an example RENC 119 is shown in FIG. 3. The example RENC 119 uses modified frequency domain Wiener filtering or an MMSE-LSA estimator. In brief, the sum of an estimate of the short-term spectral magnitude of ambient background noise and an estimate of the short-term spectral magnitude of echo is used to estimate a spectral gain to be applied to the error signal, which includes residual echo, noise, and potentially near end speech.


Assuming that the noise, v(n), is additive to near-end speech signal s(n) at respective discrete time indexes, denoted by the variable n, the noisy near-end speech signal d(n) is represented in equation (1).






d(n)=s(n)+v(n)  (1)


The error signal e(n) from the Band Mixer contains the noisy near-end speech d(n) and residual echo ry(n), as denoted in equation (2).















e(n) = d(n) + ry(n)
     = s(n) + v(n) + ry(n)  (2)








Windowing

An asymmetric trapezoidal window is represented in equation (3), where D is the overlap length, L is the input frame length, and M is the window length. Incoming samples are stored in a buffer of length L samples; the last D samples from the previous frame are appended to this buffer, and the remaining samples are taken as zeros to fill a buffer of length equal to the window length M. In one example, M is 176 samples, L is 80 samples, and D is 48 samples. Buffered samples are windowed using the trapezoidal window and then transformed into the frequency domain for processing; the short frame length helps reduce jitter in packet transmission in a packet-based communication system such as VoIP.










w(n) = sin²(π(n+0.5)/(2D)),        for 0 ≤ n < D
     = 1,                          for D ≤ n < L
     = sin²(π(n−L+D+0.5)/(2D)),    for L ≤ n < D+L
     = 0,                          for D+L ≤ n < M  (3)
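For illustration, a small C sketch of constructing this window with the stated sizes (M=176, L=80, D=48) follows; the function name is an assumption.

#include <math.h>

void make_trapezoidal_window(float *w, int D, int L, int M)
{
    const float PI = 3.14159265358979f;
    for (int n = 0; n < M; n++) {
        float s;
        if (n < D) {                            /* sin^2 rise over D */
            s = sinf(PI * (n + 0.5f) / (2.0f * D));
            w[n] = s * s;
        } else if (n < L) {
            w[n] = 1.0f;                        /* flat middle */
        } else if (n < L + D) {                 /* sin^2 fall over D */
            s = sinf(PI * (n - L + D + 0.5f) / (2.0f * D));
            w[n] = s * s;
        } else {
            w[n] = 0.0f;                        /* zero padding to M */
        }
    }
}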







Frequency Domain Conversion: The error signal e(n) and the scaled echo estimate r′y(n) are divided into overlapping frames by application of the trapezoidal window function, where r′ is a fixed correlation factor. The respective windowed signals are converted to the frequency domain using Fourier Transforms 160, 161 (e.g., a Short-Time Fourier Transform (STFT)).


Let Ek(l) and r′Yk(l) represent the STFTs of the error signal e(n) and the scaled echo estimate r′y(n), respectively, for frame index l and frequency bin index k. The error signal is then given as






Ek(l)=Sk(l)+Vk(l)+Yk(l)  (4)


Where Sk(l), Vk(l) and Yk(l) represent the STFTs of the nearend signal s(n), the background noise v(n), and the residual echo, respectively.


Reverberation Tracker

Since the AEC tail length used is short, the AEC may not cancel echoes completely when the actual reverberation is longer than the tail length of the echo cancellation filters. To cancel them, a moving average filter with a low attack rate and a fast release rate is applied to the actual echo estimate obtained from the echo cancellation filter. The estimation from the moving average filter is controlled using appropriate logic when the actual reverberation is within the tail length of the echo cancellation filter. Equation (5) represents the lengthened echo estimate Rk(l).











Rk(l) = α1·Rk(l−1) + (1−α1)·r′Yk(l),  if (r′Yk(l) > Rk(l−1))
      = α2·Rk(l−1) + (1−α2)·r′Yk(l),  otherwise  (5)









where α2 < α1 < 1.
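A per-bin C sketch of this tracker follows, using the Table 1 values α1=0.61 and α2=0.21; the control logic for the case where the actual reverberation is within the filter tail length is omitted.

/* Equation (5): slow-attack / fast-release envelope of r'Y_k(l). */
void reverb_track(float *R, const float *rY, int nbins)
{
    const float a1 = 0.61f, a2 = 0.21f;   /* Table 1 */
    for (int k = 0; k < nbins; k++) {
        float a = (rY[k] > R[k]) ? a1 : a2;
        R[k] = a * R[k] + (1.0f - a) * rY[k];
    }
}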





Noise Estimation

Noise estimation uses an external VAD. The VAD identifies the presence of voice activity in the input error signal coming from the ADF. When the VAD decision indicates a noise frame (i.e., VAD=0), the noise estimate Vk(l) is updated as per equation (6).











Vk(l) = α3·Vk(l−1) + (1−α3)·Vk(l),  if VAD = 0
      = Vk(l−1),                    otherwise  (6)







Cancellation Part

The total signal to be suppressed from the error signal in the frequency domain, over all frequency bins for a given frame l, is given as






NRk(l)=Vk(l)+Rk(l)  (7)


Estimation Controller

Even though equation (7) represents the unwanted components that are to be subtracted from the error signal, there is a chance of over-estimation on different platforms. This over-estimation can occur when the r′ value is greater than the ratio between the actual residual echo and the echo estimate. To control the over-estimation, a moving average of the error signal can be estimated using low-pass filtering with dual α coefficients, as in equation (8).











Wk(l) = α4·Wk(l−1) + (1−α4)·|Ek(l)|,  if (Wk(l−1) > |Ek(l)|)
      = α5·Wk(l−1) + (1−α5)·|Ek(l)|,  if (Wk(l−1) ≤ |Ek(l)|)  (8)







To control over-estimation of the cancellation part NRk(l), a ceiling operation is performed and a modified cancellation part is estimated as given in equation (9).






Pk(l)=min(NRk(l), Wk(l))  (9)
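A combined C sketch of equations (6) through (9) follows. Treating the error magnitude as the instantaneous noise observation during noise frames, and the mapping of α4/α5 to the attack and release branches, are assumed readings of the text.

/* V: noise estimate, W: dual-alpha error envelope, P: cancellation part.
 * Emag is |Ek(l)|, R is the lengthened echo estimate Rk(l). */
void cancellation_part(float *V, float *W, float *P, const float *Emag,
                       const float *R, int vad, int nbins)
{
    const float a3 = 0.13f, a4 = 0.61f, a5 = 0.21f;   /* Table 1 */
    for (int k = 0; k < nbins; k++) {
        if (vad == 0)                                 /* eq (6) */
            V[k] = a3 * V[k] + (1.0f - a3) * Emag[k];
        float a = (W[k] > Emag[k]) ? a4 : a5;         /* eq (8) */
        W[k] = a * W[k] + (1.0f - a) * Emag[k];
        float NR = V[k] + R[k];                       /* eq (7) */
        P[k] = (NR < W[k]) ? NR : W[k];               /* eq (9) */
    }
}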


The example RENC 119 filters out the cancellation part by modifying the spectral amplitude |Ek(l)| of each frequency bin in equation (4), applying the gain estimates Gk(l) as below






Sk(l)=Gk(l)·Ek(l), for 0 ≤ Gk(l) ≤ 1  (10)


The gain estimate Gk(l) is formed as a function of the a posteriori SNR γk(l) and the a priori SNR ξk(l), which are estimated as below using the statistical variances of the error signal, the expected clean near-end speech, and the cancellation part signal.











γk(l) = |Ek(l)|² / E(|Pk(l)|²)  (11)

ξk(l) = E(|Sk(l)|²) / E(|Pk(l)|²)  (12)







The statistical variance of the clean near-end speech, E(|Sk(l)|²), for the estimation of ξk(l) is estimated using the Decision-Directed (DD) method [1] proposed by Ephraim and Malah, with 0<α<1, as follows.











ξk(l) = α·|Sk(l−1)|²/E(|Pk(l)|²) + (1−α)·MAX(γk(l)−1, 0)  (13)








FIG. 4 shows a block diagram of a gain estimator 175. The Gk(l) function is formed using either: (1) frequency domain Wiener filtering or (2) an MMSE-LSA estimator.


Frequency Domain Wiener Filtering: The Wiener filter is a popular adaptive technique that has been used in many enhancement methods. The approach is based on optimal filtering: the aim is to find the optimal filter that minimizes the mean square error between the desired (clean) signal and the estimated output. The Wiener filter gain Gk(l) is estimated by solving an equation in which the derivative of the mean square error with respect to the filter coefficients is set to zero:











GkW(l) = ξk(l) / (ξk(l) + 1)  (14)







The Wiener filter emphasizes portions of the spectrum where the SNR is high, and attenuates portions of the spectrum where the SNR is low. Iterative Wiener filtering constructs an optimal linear filter using estimates of both the underlying speech and underlying noise spectra.
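For illustration, a per-bin C sketch of the chain of equations (11) through (14) follows, with α=0.98 from Table 1; Sprev holds |Sk(l−1)|² carried between frames, and the epsilon guarding the divisions is an assumption.

void wiener_gain(float *G, float *Sprev, const float *E2,
                 const float *P2, int nbins)
{
    const float alpha = 0.98f, eps = 1e-12f;
    for (int k = 0; k < nbins; k++) {
        float gamma = E2[k] / (P2[k] + eps);               /* eq (11) */
        float inst  = (gamma > 1.0f) ? (gamma - 1.0f) : 0.0f;
        float xi    = alpha * Sprev[k] / (P2[k] + eps)
                    + (1.0f - alpha) * inst;               /* eq (13) */
        G[k] = xi / (xi + 1.0f);                           /* eq (14) */
        Sprev[k] = G[k] * G[k] * E2[k];                    /* |Sk(l)|^2 */
    }
}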


Minimum Mean-Square Error Log Spectral Amplitude (MMSE-LSA):


This technique assumes that the Fourier expansion coefficients of the noise components (Vk(l) and RYk(l)) and of the near-end speech are statistically independent and follow a Gaussian distribution. Log-spectra are used in distortion measures, which motivates examining an amplitude estimator constrained to minimize the mean-squared error of the log-spectra. Let Ak be the actual amplitude of the near-end speech signal and Āk be the estimated amplitude of the near-end speech signal. The cost function used to estimate the gain is given by






E{(log Ak − log Āk)²}  (15)


The gain function is given by the equation (16),











GkLSA(l) = (ξk(l)/(1+ξk(l))) · exp{ (1/2) ∫ from vk(l) to ∞ of (e^−t/t) dt }  (16)







Since direct evaluation of the exponential integral in equation (16) is complex, it can be evaluated using the functional approximation shown in equation (17).











GkLSA(l) = (ξk(l)/(1+ξk(l))) · exp(evk(l)/2)  (17)







Where vk(l) and evk(l) are defined in equations (18) and (19), respectively.











vk(l) = (ξk(l)/(1+ξk(l)))·γk(l)  (18)










evk(l) = −2.31·log10(vk(l)) − 0.6,       if vk(l) < 0.1
       = 10^(−(0.52·vk(l)+0.26)),        if vk(l) > 1
       = −1.54·log10(vk(l)) + 0.166,     otherwise  (19)
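A single-bin C sketch of equations (17) through (19) follows for illustration; the function name and floating-point interface are assumptions.

#include <math.h>

float mmse_lsa_gain(float xi, float gamma)
{
    float v = (xi / (1.0f + xi)) * gamma;            /* eq (18) */
    float ev;
    if (v < 0.1f)
        ev = -2.31f * log10f(v) - 0.6f;              /* eq (19) */
    else if (v > 1.0f)
        ev = powf(10.0f, -(0.52f * v + 0.26f));
    else
        ev = -1.54f * log10f(v) + 0.166f;
    return (xi / (1.0f + xi)) * expf(ev / 2.0f);     /* eq (17) */
}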







Gain Smoothing

To avoid abrupt changes in the gain between frames, gain smoothing is done as below.











Gk(l) = α5·GkW(l−1) + (1−α5)·GkW(l),  if (GkW(l−1) > GkW(l))
      = α6·GkW(l−1) + (1−α6)·GkW(l),  if (GkW(l−1) ≤ GkW(l))
      = GkW(l),                        if (l < T)  (20)







2D Filtering: To smooth abrupt changes in the gain estimate across neighboring frequency bins, filtering is done as below.






GkF(l) = (α7·Gk−1(l) + α8·Gk(l))·(1/(α7+α8)),  for (k > 1)  (21)
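A C sketch of the time smoothing of equation (20) followed by the across-bin filtering of equation (21) is given below; Table 1 lists α5 twice, so the value mapping used here is an assumption, and Gw_prev stores the previous frame's instantaneous gain.

void smooth_gain(float *G, float *Gw_prev, const float *Gw,
                 int nbins, int frame, int T)
{
    const float a5 = 0.98f, a6 = 0.28f, a7 = 7.0f, a8 = 1.0f;
    for (int k = 0; k < nbins; k++) {            /* eq (20) */
        if (frame < T)
            G[k] = Gw[k];
        else {
            float a = (Gw_prev[k] > Gw[k]) ? a5 : a6;
            G[k] = a * Gw_prev[k] + (1.0f - a) * Gw[k];
        }
        Gw_prev[k] = Gw[k];
    }
    float prev = G[0];                           /* eq (21), k > 1 */
    for (int k = 1; k < nbins; k++) {
        float cur = G[k];
        G[k] = (a7 * prev + a8 * cur) / (a7 + a8);
        prev = cur;
    }
}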









TABLE 1
Constants used by RENC 119

Constant  Value   Remarks
α1        0.61    Reverberation tracker smoothing factor
α2        0.21    Reverberation tracker smoothing factor
α3        0.13    Noise estimation smoothing factor
α4        0.61    Estimation controller smoothing factor
α5        0.21    Estimation controller smoothing factor
α         0.98    Decision directed smoothing factor
α5        0.98    Gain estimation smoothing factor
α6        0.28    Gain estimation smoothing factor
α7        7       2D filtering smoothing factor
α8        1       2D filtering smoothing factor
r′        2.8     Expected ratio between residual echo and echo estimate
T         40      Initial 40 frames
L         80      Frame size of 10 msec
D         48      Overlap size of 6 msec
M         176     Window length










Overlap and Add (OLA)

The estimated gain is applied to the error signal as per equation (10) to obtain the enhanced STSA Sk(l). The enhanced near-end speech s(n) is then reconstructed by applying the inverse FFT to the enhanced STSA |Sk(l)|, with the noisy phase of Ek(l), followed by an appropriate overlap-and-add (OLA) procedure to compensate for the window effect and to alleviate abrupt signal changes between consecutive frames.
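A minimal C sketch of the OLA step with the stated sizes (L=80 new samples per frame, D=48 overlap) follows; the exact buffering convention is an assumption.

/* Sum the first D samples of the current inverse-transformed frame with
 * the tail saved from the previous frame, then save the new tail. */
void overlap_add(const float *frame, float *tail, float *out, int L, int D)
{
    for (int i = 0; i < L; i++)
        out[i] = frame[i] + ((i < D) ? tail[i] : 0.0f);
    for (int i = 0; i < D; i++)
        tail[i] = frame[L + i];
}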


Frequency Domain Automatic Gain Controller (FAGC)

The smoothed gain GkF(l) and the enhanced speech frequency bins Sk(l) are used for estimating a gain for each frequency bin to achieve a target power level in the output. The high level architecture of the proposed FAGC is shown in FIG. 5. The VAD block estimates the presence of voice activity for each frequency bin. If voice activity is detected on at least one frequency bin, a new gain is estimated by the computation module and then applied to the enhanced speech Sk(l).


Voice Activity Detection (VAD)

Since calculating AGC gain for silence frames is not needed, classification of a frame as speech/silence is required for the gain calculations. Since the AGC is supposed to apply gain only to the nearend signal, it should not amplify echo or noise regions; the suppressor gain GkF(l) is expected to be lower than unity for echo and noise regions. The suppressor gain can therefore be used for deciding the presence of nearend speech activity, as below.






bvadk(l) = 1, if (GkF(l) > λ1)
vad(l) = 1, if (bvadk(l) == 1) for any k  (22)


Where bvadk(l) represents the VAD decision for the kth frequency bin in the lth frame, and vad(l) represents the global VAD decision for the lth frame.


The VAD-activity decisions for individual bins in a given frame are considered; if more than one bin is classified as a speech bin, the frame is classified as a speech frame, otherwise as a silence frame.


Gain Computation Unit

The Gain Computation Unit estimates a global frame gain from the RMS power level of the nearend speech. The gain for each frequency bin is estimated using the global frame gain GM(l) and low pass filtering. The total speech power level is given by






Psp(l)=Σ(Sk²(l)·bvadk(l))  (23)


Similarly, noise power is estimated as






Pn(l)=Σ(Sk²(l))−Psp(l)  (24)


Global frame gain is estimated as given below,











GrM(l) = (1/√(msqr(l)))·TL  (25)







Where TL is the calibrated target power level, accounting for the frame size and spectral leakage during windowing for the given actual target level in dB. The initial mean square value msqr(0) is given by equation (26).





msqr(0)=(TL*TL)  (26)


Mean square values msqr(l) are estimated using an LPF as given below





msqr(l)=msqr(l−1)+P′m(l)  (27)


Where P′m(l) is given by equation (28), and Pm(l) is given by equation (29).










tmp = Pm(l) − msqr(l−1)
P′m(l) = tmp·λ2,  if (tmp > 0)
       = tmp·λ3,  otherwise  (28)

Pm(l) = Psp(l) + Pn(l)·λ4  (29)







The calculated gain is limited to a range of allowable maximum and minimum values before applying it to the frames. Where a low-amplitude to high-amplitude transition is encountered in the input, the computed gain may exceed the limit and cause a momentary transition spike. This phenomenon can be minimized by checking for gain blow-up and limiting the gain to a maximum value GMAX, avoiding spikes and ensuring a smooth transition.











GrM(l) = GMAX,  if (GrM(l) > GMAX)
       = GMIN,  if (GrM(l) < GMIN)  (30)







To avoid large fluctuations between frames, which would result in signal distortion, the gain is smoothed over time as given below.










tmp = GrM(l) − GM(l−1)
GM(l) = GM(l−1) + tmp·λ5,  if (tmp > 0)
      = GM(l−1) + tmp·λ6,  otherwise  (31)







Different smoothing factors are applied for transitions from noise to speech and from speech to noise, respectively. These values are chosen such that the attack time is faster than the release time: the attack time should be fast to prevent harsh distortion when the amplitude rapidly increases, while the decay time should be relatively longer to avoid a chopper effect and assure low distortion.
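Pulling the global-gain path together, the following C sketch implements equations (25) and (27) through (31); the λ attack/release mapping follows one reading of the flattened Table 2 and is an assumption, as is the interpretation of P′m(l) as the scaled difference tmp·λ.

#include <math.h>

typedef struct { float msqr, GM; } fagc_state;  /* init msqr = TL*TL, eq (26) */

float fagc_global_gain(fagc_state *st, float Psp, float Pn, float TL)
{
    const float l2 = 0.793f, l3 = 0.183f, l4 = 0.5f;   /* Table 2 */
    const float l5 = 0.457f, l6 = 0.793f;
    const float GMAX = 8.0f, GMIN = 0.00015f;

    float Pm  = Psp + Pn * l4;                     /* eq (29) */
    float tmp = Pm - st->msqr;
    st->msqr += tmp * ((tmp > 0) ? l2 : l3);       /* eqs (27)-(28) */

    float Gr = TL / sqrtf(st->msqr + 1e-12f);      /* eq (25) */
    if (Gr > GMAX) Gr = GMAX;                      /* eq (30) */
    if (Gr < GMIN) Gr = GMIN;

    tmp = Gr - st->GM;                             /* eq (31) */
    st->GM += tmp * ((tmp > 0) ? l5 : l6);
    return st->GM;
}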


The computed gain is applied to speech and noise bins separately based on the VAD activity decision for each bin. To avoid distortion across frequency bins due to high gain differences across neighboring frequency bins, 2-D filtering on individual VAD decisions of each frequency bin is applied.











bvadk2d(l) = 1, if (bvadi(l) == 1 for i = k−1, k, or k+1)  (32)







With the knowledge of voice activity for each frame, individual frames are treated separately for the gain calculation. The gain for unvoiced portions that contain only background noise is set to unity. The AGC gain calculated for a given frame is given below for speech frequency bins (bvadk2d(l) indicating speech).










tmp = GM(l) − GkAGC(l−1)
GkAGC(l) = GkAGC(l−1) + tmp·λ7,  if (tmp > 0)
         = GkAGC(l−1) + tmp·λ8,  otherwise  (33)







If bvadk2d(l) indicates noise, the AGC gain GkAGC(l) is estimated per the below equation.











GkAGC(l) = GkAGC(l−1)·λ9,  if ((GkAGC(l−1) > 1) || (GkAGC(l−1) > GM(l)))
         = GkAGC(l−1),     otherwise  (34)







Finally, the computed gain is applied to the respective frequency bins of the enhanced speech coming out of the residual echo suppressor.






S′k(l)=GkAGC(l)·Sk(l)  (35)


After gain multiplication in the frequency domain, the frame is inverse transformed and the segments are reassembled by the overlap-and-add (OLA) method discussed in earlier sections.









TABLE 2
Constants used by FAGC

Constant  Value    Remarks
λ1        0.732    VAD decision factor for each bin
λ2        0.793    Multiplication factor
λ3        0.183    Multiplication factor
λ4        0.5      Multiplication factor to noise power
GMAX      8        Gain limitation
GMIN      0.00015  Gain limitation
λ3        32       Global gain
λ4        0.6      Smoothing factor
λ5        0.457    AGC gain smoothing factor
λ6        0.793    AGC gain smoothing factor
λ7        0.996    AGC gain limiter










Non-Linear Processor (NLP) Controller


FIGS. 6 and 7 depict example aspects of NLP control and the NLP decision logic used within it, which are performed in NLP controller 109. NLP controller 109 enables or disables NLP to completely remove the residual echo during single talk. It is also a goal to ensure that no signal clipping occurs when passing from single talk to double-talk and vice versa. The NLP decisions are made from a combination of the cross correlation between the modified microphone signal and the enhanced error signal normalized by the microphone signal power, and the same cross correlation normalized by the error signal power.


NLP controller 109 outputs NLP decisions for discrete time intervals, nlp(n). NLP controller 109 uses several inputs in producing NLP decisions; the production of these inputs is collectively referred to as decision parameters estimation 305. These inputs include the correlation between the error signal and the microphone signal, edenr(n). This correlation also can be used for echo detection, such that edenr(n) also can be used as an indication of echo. Other inputs include normalization parameters, such as the error energy eenr(n) and the microphone signal energy denr(n), the noise energy venr(n), a convergence indicator conv(n), the long term average of reference signal amplitude ly(n), the absolute value of the error signal eabs(n), and the absolute value of the modified microphone signal. NLP also uses counters for stability checking, including counts for hangover. Before starting NLP decision making, hangover counts and NLP decision parameters are set as given below.






nlp(n) = 0
distortion(n) = 0
st_hngovr(n) = st_hngovr(n−1)
dt_hngovr(n) = dt_hngovr(n−1)
nlpenr(n) = nlpenr(n−1)  (36)


The input signals to the NLP controller 109 (the microphone signal and the error signal) are scaled to avoid saturation in computations using 16-bit registers. The scaling factor can be experimentally determined. The scaled down signals are called the modified microphone signal d′(n) and the enhanced error signal en(n), and are estimated by equation (37) below.






d′(n) = d(n−D1)/16
en(n) = s′(n)/16  (37)


The cross correlation edenr(n) between the modified microphone signal d′(n) and the enhanced error signal en(n) is called the echo indicator parameter, and is a major parameter in deciding NLP activation/de-activation. This parameter is estimated as below











edenr(n) = edenr(n−1) − (d′(n−K)·en(n−K)) + (d′(n)·en(n))  (38)







Other important parameters are the normalization factors, the microphone energy denr(n) and the enhanced error energy eenr(n), which can be estimated as in equation (39)












denr(n) = denr(n−1) − (d′(n−K)·d′(n−K)) + (d′(n)·d′(n))

eenr(n) = eenr(n−1) − (en(n−K)·en(n−K)) + (en(n)·en(n))  (39)
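These are running sums over a sliding window of K=300 samples (Table 3). A C sketch follows; the circular history buffers are an implementation detail not fixed by the text.

#define K_WIN 300   /* Table 3: K = 300 */

typedef struct {
    float ed, de, ee;              /* cross-correlation and energies */
    float dhist[K_WIN], ehist[K_WIN];
    int pos;                       /* circular index over the window */
} nlp_stats;

/* Per-sample update of equations (38)-(39): remove the sample leaving
 * the K-sample window, add the new one. */
void nlp_update(nlp_stats *s, float d, float e)
{
    float dold = s->dhist[s->pos], eold = s->ehist[s->pos];
    s->ed += d * e - dold * eold;  /* eq (38): ed_enr */
    s->de += d * d - dold * dold;  /* eq (39): d_enr  */
    s->ee += e * e - eold * eold;  /* eq (39): e_enr  */
    s->dhist[s->pos] = d;
    s->ehist[s->pos] = e;
    s->pos = (s->pos + 1) % K_WIN;
}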







Noise energy is another decision parameter that is used mainly for breaking hangover. Noise energy is estimated using a moving average filter as per (40).






venr(n) = venr(n−1) + β1·(eenr(n) − venr(n−1)),  if (eenr(n) > venr(n−1))
venr(n) = venr(n−1) + β2·(eenr(n) − venr(n−1)),  otherwise  (40)


There are five counters used for stability and other purposes. Startup indicator counter m_cnt(n) is used to indicate initial session timing. This counter also indicates a number of samples processed by the proposed system before ADF convergence is achieved. This counter's maximum value is limited by the register length being used to avoid overflow.






m_cnt(n) = m_cnt(n−1) + 1,  if (m_cnt(n−1) < β3)  (41)


Another counter counts recent noise frames. This counter uses VAD decisions (VAD(l)) from RENC 119.










v_cnt(l) = 0,              if (VAD(l) == 1)
         = v_cnt(l−1) + 1, else  (42)







Another counter is an adaptation counter adp_cnt(n), used to indicate the number of samples during which the ADFs have maintained convergence. The adaptation counter allows taking hard NLP decisions during the start of convergence. After ADF convergence, the adaptation counter does not factor into the NLP decision logic.










adp_cnt(n) = adp_cnt(n−1) + 1,  if (ADAP(n) == 1)
           = adp_cnt(n−1),      else  (43)







Another counter is the suppressor activated counter, sup_cnt(n), which is similar to the startup indicator counter m_cnt(n). The suppressor activated counter indicates the number of samples during which the NLP is activated before convergence of the ADF; it is incremented by one for every NLP ON decision before convergence is achieved for a speech frame. The suppressor activated counter also does not factor into the NLP decision logic after ADF convergence. The balance convergence counter, con_cnt(n), indicates the number of samples for which the ADFs have converged within the expected convergence.


The last counter used is the hist counter, his_cnt(n), which checks the stability of the convergence. Other decision parameters, the absolute short term average error signal eabs(n), the absolute short term average microphone signal dabs(n), and the long term average of reference signal amplitude ly(n), are estimated as per the below equations.










tmp = |s′(n)| − eabs(n−1)
eabs(n) = eabs(n−1) + tmp·β4,  if ((|d(n−D1)| < β5) && (dabs(n) < |d(n−D1)|))
        = eabs(n−1) + tmp·β6,  otherwise  (44)

tmp = |d(n−D1)| − dabs(n−1)
dabs(n) = dabs(n−1) + tmp·β4,  if ((|d(n−D1)| < β5) && (dabs(n) < |d(n−D1)|))
        = dabs(n−1) + tmp·β6,  otherwise  (45)

ly(n) = (ly(n−1)·(1−β7)) + (|x(n)|·β7)  (46)







D1 is a delay compensation factor for synchronizing the microphone signal d(n) with the error signal s′(n) received from the residual echo remover.


Another decision parameter is a convergence indicator, which can be estimated (detection 307) as per pseudocode (47). When the ADF reaches convergence during single talk, the correlation between the enhanced error signal and the modified microphone signal decreases; decreased correlation thus can be used as a detector for ADF convergence. For the detection of convergence, the cross correlation edenr(n) is normalized by the microphone energy denr(n) and compared with a predefined threshold. Since RENC 119 cancels background noise as well, this normalized cross correlation check may pass during regions with no speech. So, convergence validation is checked during the presence of speech activity using v_cnt(l).

















if ((conv(n−1) == 0) && (v_cnt(l) == 0))                    (47)
{
    if (denr(n) * β9 > edenr(n))
    {
        if ((his_cnt(n−1) > β10) && (adp_cnt(n) > β37))
        {
            conv(n) = 1
            sup_cnt(n) = β11
            m_cnt(n) = β3
        }
        else
        {
            his_cnt(n) = his_cnt(n−1) + 1
        }
    }
    else
    {
        if (his_cnt(n−1) > β38)
        {
            con_cnt(n) = con_cnt(n−1) + his_cnt(n−1)
            if ((con_cnt(n) > β10) && (adp_cnt(n) > β37))
            {
                conv(n) = 1
                sup_cnt(n) = β11
                m_cnt(n) = β3
            }
        }
        his_cnt(n) = 0
    }
}










Decision Logic—309 & 311


FIG. 7 depicts an example of NLP decision logic performed to update NLP decisions, in elements 309/311 of FIG. 6. The example of FIG. 7 is exemplary and not limiting; a person of ordinary skill can adapt these disclosures to other implementations. The decision logic has two main stages: (1) decision before convergence and (2) decision after convergence. A Startup Decision Maker 354 is the NLP decision maker before expected convergence is achieved. There are five sub-stages in the decision making after expected convergence is achieved; they are detailed in the subsequent subsections.


Startup Decision Maker 354

Startup Decision Maker 354 uses a relaxed threshold, so there is a possibility that NLP might sometimes be activated during double talk. The startup decision maker is active only for a short time during startup, and thus does not have a major effect on a conversation. Also, occurrence of double talk during the start of a call is uncommon.

















if ((m_cnt(n) < β3) && (sup_cnt(n) < β11) && (denr(n) * β12 > edenr(n)))   (48)
{
    nlp(n) = 1
    if (v_cnt(l) == 0)
    {
        sup_cnt(n) = sup_cnt(n) + 1
    }
}










Coarse Decision Maker 356

A Coarse Decision Maker 356 uses normalized cross correlation edenr(n)/denr(n) for decision making. If the validation check is passed, the DT hangover is broken and ST hangover is set to β14.

















if (denr(n) * β13 > edenr(n))                               (49)
{
    nlp(n) = 1
    st_hngovr(n) = β14
    dt_hngovr(n) = −1
    distortion(n) = 1
}










Distorted Error Masker

A Distorted Error Masker 358 is an energy comparator for low-level signals. When the error signal is at a low level and is also much lower than the microphone signal level, this decision directs NLP activation. Activating the NLP under such conditions reduces situations where distorted low-level noise can be heard by the user.

















if (((denr(n) > eenr(n) * β15) && (eenr(n) < β16))
 || ((denr(n) > eenr(n) * β17) && (eenr(n) < β18)))         (50)
{
    nlp(n) = 1
    dt_hngovr(n) = −1
}










Coarse Decision Maker 360

A Coarse Decision Maker 360 uses a normalized cross correlation edenr(n)/eenr(n) as a basis for outputting decisions for NLP activation. If the validation check is passed, the DT hangover is broken and ST hangover is set to β20 if it is lower than that.

















if (eenr(n) > (edenr(n) * β19))                             (51)
{
    nlp(n) = 1
    if (st_hngovr(n) < β20)
        st_hngovr(n) = β20
    dt_hngovr(n) = −1
    distortion(n) = 1
}










Double Talk Hangover Check

If the NLP decision is OFF after the above validations, a DT Hangover Check 362 is performed. The DT hangover allows the nearend signal to keep passing out of the AES up to the current point. The hangover counter is decremented by one for every sample processed.

















if (dt_hngovr(n) > 0)                                       (52)
{
    dt_hngovr(n) = dt_hngovr(n) − 1
}










Coarse Decision Maker 365

If all preceding decision making logic failed, then the coarse decision maker 365 becomes active (this example shows a serial flow, where any positive decision causes an NLP=1 decision and the remainder of the flow need not be performed). Coarse decision maker 365 applies a different threshold to the normalized cross correlation edenr(n)/denr(n) based on the convergence state of the adaptive filter, as given below.

















if ((denr(n) * β21 > edenr(n))
 || ((denr(n) * β22 > edenr(n)) && (conv(n) == 0)))         (53)
{
    nlp(n) = 1
    dt_hngovr(n) = 0
    if (denr(n) * β23 > edenr(n))
        st_hngovr(n) = β24
}










The flow of FIG. 7 completes by returning a decision of nlp(n)=0 or nlp(n)=1 to the flow of FIG. 6.


NLP Energy Threshold Updating 315

If the NLP Decision Logic enables NLP, then the NLP energy threshold is updated (315) as given below. This threshold is later used for breaking the ST hangover.

















tmp = eenr(n) − nlpenr(n)                                   (54)
if (tmp > 0)
    nlpenr(n) = nlpenr(n) + tmp * β25
else
    nlpenr(n) = nlpenr(n) + tmp * β26










Double Talk Hangover Breaker 317

Sometimes residual echo can be passed to the user due to hangover. So, there should be a decision or other mechanism to break the DT hangover based on a sudden fall in nearend energy or a sudden rise in echo energy. The DT hangover is broken in this scenario based on the below condition:

















if ((eenr(n) * β27 > denr(n)) || (denr(n) > eenr(n) * β28)) (55)
{
    dt_hngovr(n) = −1
    nlp(n) = 1
}










Double Talk Hangover Setting 322

If the DT hangover breaking conditions fail and the energy of the error signal is more than a predefined threshold, the ST hangover is broken and the DT hangover is set to another pre-defined value, as in the example below.

















if (eenr(n) > β29)                                          (56)
{
    dt_hngovr(n) = β20
    st_hngovr(n) = −1
}










Single Talk Hangover Breaker 320

The estimated NLP energy threshold is used for breaking the ST hangover. The ST hangover breaking validation condition is given below.

















if (((eenr(n) > nlpenr(n) * β30) || (eenr(n) > (nlpenr(n) + β31)))
 && (eenr(n) > β32) && (distortion(n) == 0))                (57)
{
    st_hngovr(n) = −1
}










If the hangover breaking validation fails and the ST hangover count is greater than 0 (325), NLP is activated and the ST hangover count is decremented by 1 (329).


Refine NLP Decision and ST Hangover 331

Refinement of the NLP decision and the ST hangover is done based on the long term average amplitude of the reference signal ly(n) and the absolute averages of the error and modified microphone output signals, as given below.

















if (ly(n) < β33)                                            (58)
{
    nlp(n) = 0
    st_hngovr(n) = −1
}

if (((eabs(n) > dabs(n) + β34) || (eabs(n) > dabs(n) * β35))
 && (dabs(n) > 0))                                          (59)
{
    nlp(n) = 1
    st_hngovr(n) = β36
}

















TABLE 3
Constants used by NLP controller 109

Constant  Value    Remarks
β1        0.0189   Noise energy smoothing factor
β2        0.1831   Smoothing factor
β3        64000    Max. value of startup indication counter
β4        0.5      Smoothing factor
β5        50       Constant
β6        0.03     Smoothing factor
β7        128      Constant
β8        480      Max. value of adap_cnt
β9        0.0061   Multiplication factor
β10       1400     his_cnt limit
β11       32000    Constant
β12       0.4577   Multiplication factor
β13       0.0061   Multiplication factor
β14       4000     st_hngovr limit
β15       3        Constant
β16       7500     eenr limit
β17       2        Constant
β18       2500     eenr limit
β19       2        Constant
β20       540      st_hngovr limit
β21       0.061    Multiplication factor
β22       0.3662   Multiplication factor
β23       0.4577   Multiplication factor
β24       240      st_hngovr limit
β25       0.097    NLP energy smoothing factor
β26       0.0061   Smoothing factor
β27       21845    dt_hngovr limit
β28       0.0313   Multiplication factor
β29       35000    eenr limit
β30       4        Constant
β31       0.2136   Multiplication factor
β32       12000    eenr limit
β33       6400     ly limit
β34       900      Constant
β35       8        Constant
β36       1200     st_hngovr limit
β37       300      Constant
β38       20       st_hngovr constant
K         300      Index







Embodiments can be implemented in fixed-point C on a RISC application processor, such as an Advanced RISC Machines (ARM) processor (e.g., an ARM9E). In some implementations, other applications can execute on the same application processor, and processes can have preemptive scheduling provided by an OS kernel for time-critical tasks. Good performance is shown on real platforms that have general purpose application processors, including Microsoft Windows desktops, laptops, and mobile devices, as well as Android-based handsets. To demonstrate the proposed system's performance, ensemble average results are provided in this section.


Real-time captured farend and microphone output signals on different platforms are fed to the AEC module, and the output signals of the respective blocks are captured and analyzed. FIG. 8 depicts the ensemble average of ERLE for a single talk test case, in which the microphone output signal has echo and background noise only.


In FIG. 8, it can be seen that the ADFs (402) were able to provide ERLE of only 8 dB. With the Residual Echo and Noise Canceller (RENC) 119, ERLE can be increased up to 60 dB using modified Wiener gain estimation (404) and 40 dB using modified MMSE-LSA gain estimation (406). The proposed method based on MMSE-LSA produces much less residual noise than the Wiener method, while there is no perceptible difference in the enhanced speech quality between the two. Further, the residual noise sounds more uniform (more white), which is subjectively preferable.



FIG. 9 depicts the ensemble average of ERLE for a Double Talk (DT) test case; two DT regions are present. In all test cases, there is no clipping of nearend speech, and complete cancellation of background noise is observed.



FIGS. 10-13 depict aspects of the performance of an implementation of the proposed FAGC. From FIG. 10, it can be noted that the target level tracking of the proposed FAGC is fast and accurate.


NLP controller 109 performance for a real-time captured signal is depicted in FIG. 14. The captured signal contains a combination of single talk, double talk, and nearend signal. NLP is active during single talk and in echo-alone regions during double talk, and is deactivated in all nearend regions. FIG. 15 depicts the AES output for the NLP decisions; the AES output does not contain any residual echo.


Generally, any of the functions, methods, techniques or components described above can be implemented in modules using software, firmware, hardware (e.g., fixed logic circuitry), or any combination of these implementations. The terms “module,” “functionality,” “component”, “block” and “logic” are used herein to generally represent software, firmware, hardware, or any combination thereof.


In the case of a software implementation, the module, functionality, component or logic represents program code that performs specified tasks when executed on a processor (e.g. one or more CPUs). In one example, the methods described may be performed by a computer configured with software of a computer program product in machine readable form stored on a computer-readable medium. One such configuration of a computer-readable medium is a signal bearing medium and thus is configured to transmit the instructions (e.g. as a carrier wave) to the computing device, such as via a network. The computer-readable medium may also be configured as a non-transitory computer-readable storage medium, which is not a propagating signal bearing medium (e.g., an EM signal propagating in free space or over a wire). Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine.


The software may be in the form of a computer program comprising computer program code for configuring a computer to perform the constituent portions of described methods or in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. The program code can be stored in one or more computer readable media. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.


Those skilled in the art will also realize that all, or a portion of the functionality, techniques or methods may be carried out by a dedicated circuit, an application-specific integrated circuit, a programmable logic array, a field-programmable gate array, or the like. For example, the module, functionality, component or logic may comprise hardware in the form of circuitry. Such circuitry may include transistors and/or other hardware elements available in a manufacturing process. Such transistors and/or other elements may be used to form circuitry or structures that implement and/or contain memory, such as registers, flip flops, or latches, logical operators, such as Boolean operations, mathematical operators, such as adders, multipliers, or shifters, and interconnects, by way of example. Such elements may be provided as custom circuits or standard cell libraries, macros, or at other levels of abstraction. Such elements may be interconnected in a specific arrangement. The module, functionality, component or logic may include circuitry that is fixed function and circuitry that can be programmed to perform a function or functions; such programming may be provided from a firmware or software update or control mechanism. In an example, hardware logic has circuitry that implements a fixed function operation, state machine or process.


Aspects of the present disclosure encompass software (as represented by data recorded on a non-transitory medium) which “describes” or defines the configuration of hardware that implements a module, functionality, component or logic described above, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code for generating a processing block configured to perform any of the methods described herein, or for generating a processing block comprising any apparatus described herein.


The term ‘processor’ and ‘computer’ are used herein to refer to any device, or portion thereof, with processing capability such that it can execute instructions, or a dedicated circuit capable of carrying out all or a portion of the functionality or methods, or any combination thereof.


Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. It will be understood that the benefits and advantages described above may relate to one example or may relate to several examples.


The actions of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate and unless indicated otherwise by context or explicitly. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.

Claims
  • 1. A machine-implemented method of echo cancellation in a full duplex voice communication system, comprising: performing acoustic echo cancellation on a near-end signal using a short time domain adaptive filter, to produce a filtered near-end signal that may have non-linear residual echo and noise;tracking, in the frequency domain, the non-linear residual echo in the filtered near-end signal to output an error signal;producing an estimate of a portion of the filtered near-end signal to be removed, as a combination of the non-linear residual echo, and noise;imposing a limitation on the estimate based on a moving average of the error signal;controlling gains associated with a plurality of frequency bins in a Frequency domain Automatic Gain Controller (FAGC), based on the limited estimate, thereby suppressing the estimated portion of the filtered near-end signal to be removed; andrefining respective gains associated with the plurality of frequency bins of the FAGC.
  • 2. The method according to claim 1, wherein the tracking comprises defining frequency bins calculated from the time domain signal by scaling an echo estimate produced by an acoustic echo canceller performing the acoustic echo cancellation.
  • 3. The method according to claim 2, wherein the time domain adaptive filter comprises a two-band, 32 millisecond tail length echo cancellation filtering unit, and the scaling comprises producing a time domain echo estimate and multiplying the time domain echo estimate by a fixed correlation factor (ŕ′), that is pre-estimated for use with the echo cancellation filtering unit.
  • 4. The method according to claim 3, further comprising estimating an effect of the echo on a play-out signal using a moving average filter with low attack rate (α1) and fast release rate (α2).
  • 5. The method according to claim 4, wherein the estimating of the effect of the reverberation comprises estimating echo for a plurality of frequency bins, and the echo estimate Rk(l) for a kth frequency bin at a frame index (l) is calculated using the relation Rk(l)=α1Rk(l−1)+(1−α1)ŕYk(l) when (ŕYk(l)>Rk(l−1)) and Rk(l)=α2Rk(l−1)+(1−α2)ŕYk(l) when (r′Yk(l)≦Rk(l−1)) wherein Yk(l) is a Short Time Fourier Transform (STFT) of echo estimate y(n) and ŕ is the correlation factor.
  • 6. The method according to claim 1, wherein the producing of the estimate of a portion of the filtered near-end signal to be removed comprises: estimating a level of the noise using an external Voice Activity Detector (VAD) across the plurality of frequency bins, wherein the noise estimation for kth frequency bin at frame index l is calculated using the relation Vk(l)=α3Vk(l−1)+(1−α3)Vk(l) when (VAD=0) and Vk(l)=Vk(l−1) when (VAD=1), where Vk(l) is noise estimation for kth frequency bin and α3 is a smoothing factor, andcalculating the estimate of the portion to be removed for kth frequency bin at frame index l using the relation NRk(l)=Vk(l)+Rk(l), where Rk(l) is lengthened echo estimate.
  • 7. The method according to claim 1, wherein the imposing of the limitation on the estimate comprises calculating an estimation threshold from a frequency domain error signal (Ek(l)) using dual alpha low pass Finite Impulse Response (FIR) filtering, wherein the estimation threshold comprises a respective threshold for each of the plurality of frequency bins, and the threshold for the kth frequency bin at frame index l is calculated using the relation Wk(l)=α4Wk(l−1)+(1−α4)*Ek(l) when (Wk(l−1)>Ek(l)) and Wk(l)=α5Wk(l−1)+(1−α5)*Ek(l) when (Wk(l−1)≦Ek(l)), wherein α5 is an attack coefficient and α4 is a release coefficient; and limiting the maximum value of the estimated echo and noise for each of the plurality of frequency bins, wherein for the kth frequency bin at frame index l, the maximum value is calculated using the relation Pk(l)=minimum of {NRk(l), Wk(l)}, wherein NRk(l) is the combined estimated echo and noise and Wk(l) is the estimation threshold.
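By way of illustration only, and not as part of the claims: a Python sketch of the dual alpha threshold and limiting step of claim 7. The coefficient values are placeholders, not values from the specification.

    import numpy as np

    def limit_removal(W_prev, E_mag, NR, a_attack=0.7, a_release=0.95):
        """Compute W_k(l) with dual alpha filtering, then P_k(l)=min(NR, W)."""
        release = W_prev > E_mag                 # error falling: release slowly
        W = np.where(release,
                     a_release * W_prev + (1 - a_release) * E_mag,
                     a_attack * W_prev + (1 - a_attack) * E_mag)
        P = np.minimum(NR, W)                    # never remove more than W allows
        return W, P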
  • 8. The method according to claim 1, wherein the refining of the gains comprises smoothing gain for each of the plurality of frequency bins, wherein the gain for the kth frequency bin at frame index l is calculated using the relation Gk(l)=α3*Gkw(l−1)+(1−α3)*Gkw(l) when (Gkw(l−1)>Gkw(l)), Gk(l)=α4*Gkw(l−1)+(1−α4)*Gkw(l) when (Gkw(l−1)≦Gkw(l)) and Gk(l)=α4Gkw(l) when (l<T), wherein Gkw(l) is the instantaneous gain, α3 is an attack coefficient, α4 is a release coefficient and T is a gain smoothing threshold; and filtering the smoothed gain across the plurality of frequency bins, wherein for the kth frequency bin at frame index l, the filtered smoothed gain is calculated using the relation GkF(l)=(α5Gk−1(l)+α6Gk(l))*(1/(α5+α6)) when (k>1), wherein Gk(l) is the smoothed gain for the kth frequency bin at frame index l and (α5,α6) are filtering coefficients.
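By way of illustration only, and not as part of the claims: a Python sketch of the gain refinement of claim 8. The cross-bin form of the final filter is an interpretation of the published text, which is garbled at that point, and all coefficient values are placeholders.

    import numpy as np

    def refine_gains(Gw_prev, Gw, l, a3=0.8, a4=0.5, a5=1.0, a6=1.0, T=10):
        """Temporal smoothing of instantaneous gains, then cross-bin filtering.

        Gw_prev, Gw: instantaneous gains G_k^w at frames l-1 and l.
        """
        if l < T:                                 # startup: scaled instantaneous gain
            G = a4 * Gw
        else:
            falling = Gw_prev > Gw
            G = np.where(falling,
                         a3 * Gw_prev + (1 - a3) * Gw,
                         a4 * Gw_prev + (1 - a4) * Gw)
        GF = G.copy()
        GF[1:] = (a5 * G[:-1] + a6 * G[1:]) / (a5 + a6)  # filter for k > 1
        return GF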
  • 9. A system for controlling gain of a near-end signal in a voice enhancement system, the near-end signal being a speech signal of a user at a near end, the system comprising: a module for identifying a plurality of frequency bins which contain speech energy; a local gain estimator operating to estimate local gain for each of the plurality of frequency bins; a speech identifier operable to identify bins of the plurality of frequency bins that contain speech; a local gain smoothing module operable to smooth gain in the plurality of frequency bins; and a global gain regulator operable to estimate and regulate a global gain over all of the plurality of frequency bins.
  • 10. The system according to claim 9, wherein the module for identifying frequency bins containing speech comprises a processor configured to compute the relation bvadk(l)=1 when (Gk(l)>λ1) and vad(l)=1 when (bvadk(l)=1, for any k), wherein Gk(l) is the smoothed gain, λ1 is a speech decision threshold, bvadk(l) is a Voice Activity Detector (VAD) decision for the kth frequency bin at frame index l, and vad(l) is the VAD decision at frame index l.
  • 11. The system according to claim 9, wherein the estimation of the regulated global gain at frame index l is computed using the relation Gr(l)=(1/sqrt(msqr(l)))*TL, wherein TL is the target level and msqr(l) is the estimated mean square level of the signal. The local gain for frame index l is computed using the relation GM(l)=(1−λ5)*GM(l−1)+λ5*GrM(l) when (GM(l−1)>GrM(l)) and GM(l)=(1−λ6)*GM(l−1)+λ6*GrM(l) when (GM(l−1)≦GrM(l)), wherein GrM(l) denotes the regulated global gain and (λ5,λ6) are filter coefficients.
  • 12. The system according to claim 9, wherein the smoothed local gain for speech bins is computed for the kth frequency bin at frame index l using the relation GkAGC(l)=(1−λ7)*GkAGC(l−1)+λ7*GM(l) when (GkAGC(l−1)>GM(l)) and GkAGC(l)=(1−λ8)GkAGC(l−1)+λ8GM(l) when (GkAGC(l−1)≦GM(l)), and the smoothed local gain for noise bins is computed using the relation GkAGC(l)=λ9*GkAGC(l−1) when {(GkAGC(l−1)>1) or (GkAGC(l−1)>GM(l))}, otherwise GkAGC(l)=GkAGC(l−1), wherein GkAGC(l) is the smoothed local gain for the kth frequency bin at frame index l, GM(l) is the local gain at frame index l and (λ7,λ8,λ9) are filter coefficients.
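By way of illustration only, and not as part of the claims: a Python sketch combining the per-bin speech decision of claim 10 with the gain regulation of claims 11 and 12. All lambda values and the target level TL are placeholders, not values from the specification.

    import numpy as np

    def fagc_step(G, Gagc_prev, GM_prev, frame, TL=0.25,
                  l1=0.5, l5=0.1, l6=0.3, l7=0.2, l8=0.4, l9=0.98):
        """One FAGC update.

        G: smoothed per-bin gains G_k(l); frame: time domain frame samples;
        Gagc_prev: previous per-bin AGC gains; GM_prev: previous local gain.
        """
        bvad = G > l1                         # claim 10: bins carrying speech
        msqr = np.mean(frame ** 2) + 1e-12    # estimated mean square level
        Gr = TL / np.sqrt(msqr)               # claim 11: regulated global gain
        lam = l5 if GM_prev > Gr else l6
        GM = (1 - lam) * GM_prev + lam * Gr   # smoothed local gain
        # Claim 12: speech bins track GM; noise bins decay toward silence.
        rate = np.where(Gagc_prev > GM, l7, l8)
        Gagc = np.where(bvad,
                        (1 - rate) * Gagc_prev + rate * GM,
                        np.where((Gagc_prev > 1) | (Gagc_prev > GM),
                                 l9 * Gagc_prev, Gagc_prev))
        return Gagc, GM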
  • 13. A system for controlling a Non-Linear Processor (NLP) to activate and deactivate the NLP for complete removal of residual echo in an echo alone region of a microphone output signal without chopping of a near-end speech signal, the system comprising: an estimator configured to produce respective estimates for a plurality of decision parameters; a detector for detecting convergence of an adaptive echo cancellation filter; a controller to output a NLP decision for a frame of speech, wherein the NLP decision indicates whether the NLP is to be active or inactive; a module for updating an NLP energy threshold parameter; a Single Talk (ST) hangover breaker; a Double Talk (DT) hangover breaker; and a module for revising the NLP decision.
  • 14. The system according to claim 13, wherein the decision parameters consist of a first set of parameters and a second set of parameters, each selected from the group consisting of: enhanced error signal (en(n)), modified microphone signal (d′(n)), echo indicator parameter (edenr(n)), enhanced error signal energy (eenr(n)), modified microphone signal energy (denr(n)), noise signal energy (venr(n)), long term average of reference signal amplitude (ly(n)), absolute error signal (eabs(n)), absolute microphone signal (dabs(n)), NLP energy threshold (nlpenr(n)), startup indicator counter (m_cnt(n)), recent noise frame counter (v_cnt(l)), adaptation counter (adp_cnt(n)), suppressor activated counter (sup_cnt(n)), history counter (his_cnt(n)), single talk hangover counter (st_hngovr(n)), double talk hangover counter (dt_hngovr(n)), convergence indicator (conv(n)) and a distortion indicator (distortion(n)).
  • 15. The system according to claim 13, wherein the estimator is operable to initialize the decision parameters both during startup of the voice enhancement system, in which all parameters are set to zero, and during decision making for every near-end signal sample, in which the estimator sets the NLP decision at time instant n (nlp(n)) to zero, sets distortion(n) to zero, sets st_hngovr(n) at time instant n to st_hngovr(n−1) and sets nlpenr(n) to nlpenr(n−1).
  • 16. The system according to claim 13, wherein the estimator is operable to estimate an echo indicator parameter (edenr(n)) from a cross correlation between a modified microphone signal (d′(n)) and an enhanced error signal (en(n)), wherein the estimator is operable to produce d′(n) by scaling a microphone signal and to produce en(n) by scaling an error signal received from a residual echo remover, and edenr(n) is computed as edenr(n)=edenr(n−1)−(d′(n−K)*en(n−K))+(d′(n)*en(n)), wherein K is a window factor, and the estimator is operable to estimate energy of the modified microphone signal (denr(n)) as denr(n)=denr(n−1)−[d′(n−K)*d′(n−K)]+[d′(n)*d′(n)] and to estimate energy of the enhanced error signal (eenr(n)) as eenr(n)=eenr(n−1)−[en(n−K)*en(n−K)]+[en(n)*en(n)].
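By way of illustration only, and not as part of the claims: a Python sketch of the sliding K-sample statistics recited in claim 16; the new sample enters the window as the oldest sample leaves it. K is a placeholder window length.

    import numpy as np

    def sliding_stats(d_mod, e_enh, K=256):
        """Running cross correlation and energies over a K-sample window.

        d_mod, e_enh: scaled microphone and enhanced error signals (1-D arrays).
        """
        n = len(d_mod)
        edenr = np.zeros(n)
        denr = np.zeros(n)
        eenr = np.zeros(n)
        for i in range(n):
            d_old = d_mod[i - K] if i >= K else 0.0   # sample leaving window
            e_old = e_enh[i - K] if i >= K else 0.0
            edenr[i] = edenr[i - 1] - d_old * e_old + d_mod[i] * e_enh[i]
            denr[i] = denr[i - 1] - d_old * d_old + d_mod[i] * d_mod[i]
            eenr[i] = eenr[i - 1] - e_old * e_old + e_enh[i] * e_enh[i]
        return edenr, denr, eenr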
  • 17. The system according to claim 16, wherein the ST hangover breaker is operable to calculate a noise energy using a moving average filter and the relations venr(n)=venr(n−1)+β1[eenr(n)−venr(n−1)] when (eenr(n)>venr(n−1)) and venr(n)=venr(n−1)+β2[eenr(n)−venr(n−1)] when (eenr(n)≦venr(n−1)).
  • 18. The system according to claim 13, wherein the estimator is operable to compute an absolute error signal eabs(n) as eabs(n)=eabs(n−1)+(|ś(n)|−eabs(n−1))*β4 when (|d(n−D1)|<β5 & dabs(n)<|d(n−D1)|), otherwise eabs(n)=eabs(n−1)+(|ś(n)|−eabs(n−1))*β6, wherein dabs(n) is the absolute microphone signal, ś(n) is the error signal received from a residual echo remover, d(n) is a microphone signal, D1 is a delay compensation factor between the microphone signal d(n) and ś(n), β5 is a predefined threshold and (β4,β6) are smoothing coefficients.
  • 19. The system according to claim 18, wherein calculation of dabs(n) is as dabs(n)=dabs(n−1)+(|d(n−D1)|−dabs(n−1))*β4 when (|d(n−D1)|<β5 & dabs(n)<|d(n−D1)|), otherwise dabs(n)=dabs(n−1)+(|d(n−D1)|−dabs(n−1))*β6.
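By way of illustration only, and not as part of the claims: a Python sketch of the first-order trackers shared by claims 18 and 19; the same form serves eabs(n) and dabs(n). The coefficient values are placeholders.

    def track_abs(prev, x_abs, fast_cond, b4=0.05, b6=0.01):
        """Track a short-term absolute average.

        prev: previous average (eabs(n-1) or dabs(n-1)); x_abs: |current sample|;
        fast_cond: result of the claimed comparison selecting the faster rate.
        """
        beta = b4 if fast_cond else b6
        return prev + (x_abs - prev) * beta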
  • 20. The system according to claim 16, wherein the estimator is operable to calculate a distortion indicator (distortion(n)) by calculating a ratio between enhanced error signal energy eenr(n) and echo indicator parameter edenr(n) for time instant n and comparing the ratio to a predefined threshold β19; calculating a ratio between modified microphone signal energy denr(n) and echo indicator parameter edenr(n) for the time instant n and comparing the ratio to a predefined threshold β13; and, if either comparison is successful, setting distortion(n) to indicate distortion at time index n.
  • 21. The system according to claim 14, wherein the second set of parameters comprises: m_cnt(n), calculated as m_cnt(n)=m_cnt(n−1)+1 when (m_cnt(n−1)<β3), for every processed sample from the microphone; v_cnt(l), to indicate a number of recent noise frames observed, wherein v_cnt(l)=0 when (VAD(l)=Voice) and v_cnt(l)=v_cnt(l−1)+1 when (VAD(l)=Noise), wherein VAD(l) is either a Voice or a Noise decision from a Voice Activity Detector, and v_cnt(l) and v_cnt(l−1) are the recent noise frame counter at frame indexes l and l−1 respectively; adp_cnt(n), to indicate a number of samples for which echo cancellation filters have been adapted, wherein adp_cnt(n)=adp_cnt(n−1)+1 when (ADAP(n)=1) and adp_cnt(n)=adp_cnt(n−1) when (ADAP(n)=0), wherein ADAP(n) is an adaptation indication flag estimated from the double talk detector at time instant n; sup_cnt(n), to indicate a number of samples for which the NLP is activated before convergence of the echo cancellation filters, calculated by incrementing for every NLP ON decision before convergence is achieved for a speech frame; and his_cnt(n), which tracks stability of convergence.
  • 22. The system according to claim 21, wherein the detector is operable to update a convergence indicator conv(n) by calculating a ratio between modified microphone signal energy denr(n) and echo indicator parameter edenr(n) for the time instant n; comparing the ratio to a predefined threshold β9 when adp_cnt(n)=0 and v_cnt(l)=0, wherein adp_cnt(n) and v_cnt(l) are the adaptation counter at time instant n and the recent noise frame counter at frame index l respectively; tracking the number of continuous successful comparisons using his_cnt(n); setting conv(n) at time index n to 1 if the number of continuous successful comparisons is more than a predefined threshold β10; and resetting his_cnt(n) to 0 upon the first failure of the comparison if the number of continuous successful comparisons has not reached the predefined threshold β10.
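By way of illustration only, and not as part of the claims: a Python sketch of the convergence detector of claims 21 and 22. The direction of the ratio test and the threshold values b9 and b10 are assumptions; the claim states the comparison but not its sense.

    def update_convergence(denr, edenr, his_cnt, conv, b9=2.0, b10=100):
        """Ratio test on denr/edenr with a stability counter his_cnt.

        Returns updated (his_cnt, conv).
        """
        ok = edenr > 0 and (denr / edenr) < b9   # assumed direction of the test
        if ok:
            his_cnt += 1
            if his_cnt > b10:
                conv = 1                          # declare stable convergence
        else:
            if his_cnt <= b10:
                his_cnt = 0                       # restart stability tracking
        return his_cnt, conv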
  • 23. The system according to claim 16, wherein the NLP controller is operable to make NLP decisions both before convergence of the echo cancellation filters and after convergence of the echo cancellation filters, wherein the NLP controller is operable to make decisions after convergence of the echo cancellation filters by: a coarse decision based on the ratio between echo indicator parameter edenr(n) and modified microphone signal energy denr(n); a coarse decision, based on distortion(n), using a ratio between edenr(n) and eenr(n); a second level coarse decision based on the ratio between edenr(n) and denr(n); and breaking a double talk hangover based on amplification of error or attenuation of error more than an expected threshold, as determined by the ratios.
  • 24. The system according to claim 23, wherein the NLP energy threshold is calculated using the relation nlpenr(n)=nlpenr(n−1)+(eenr(n)−nlpenr(n−1))*β25 when the enhanced error signal energy eenr(n) is greater than the NLP energy threshold nlpenr(n−1), and otherwise nlpenr(n)=nlpenr(n−1)+(eenr(n)−nlpenr(n−1))*β26, wherein eenr(n) is the enhanced error signal energy and (β25,β26) are predefined coefficients.
  • 25. The system according to claim 14, wherein the ST hangover breaker is operable to break an ST hangover, and reset st_hngovr(n), based on one or more of the following conditions being found to exist: a ratio between enhanced error signal energy eenr(n) and NLP energy threshold nlpenr(n) is greater than a predefined threshold (β30) when the enhanced error energy is greater than β32 and distortion(n) indicates no distortion; enhanced error signal energy eenr(n) is greater than NLP energy threshold nlpenr(n) by a predefined threshold β31 when the error energy is greater than β32 and distortion(n) indicates no distortion; and a long term average of the reference signal ly(n) is greater than a predefined threshold β33.
  • 26. The system according to claim 14, wherein the DT hangover breaker is operable to break a DT hangover, and reset dt_hngovr(n), based on one or more of the following conditions being found to exist: a ratio between echo indicator edenr(n) and modified microphone signal energy denr(n) is less than a predefined threshold β13; a ratio between denr(n) and enhanced error signal energy eenr(n) is greater than a predefined threshold β15 when the error signal is below the predefined threshold β16; a ratio between denr(n) and eenr(n) is greater than a predefined threshold β17 when the error signal is below the predefined threshold β18; a ratio between edenr(n) and eenr(n) is less than a predefined threshold β19; a ratio between edenr(n) and denr(n) is less than a predefined threshold β21 when conv(n) is zero; a ratio between edenr(n) and denr(n) is greater than a predefined threshold β22 when conv(n) is zero; a ratio between denr(n) and eenr(n) is less than a predefined threshold β27; and a ratio between denr(n) and eenr(n) is less than a predefined threshold β28.
  • 27. The system according to claim 26, wherein if any condition is found to exist, then the remaining conditions are not checked, and the double talk hangover is broken.
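By way of illustration only, and not as part of the claims: a Python sketch of the DT hangover break of claims 26 and 27. The conditions are evaluated lazily so the first condition found to exist breaks the hangover and no further conditions are checked; all beta values are placeholders, not values from the specification.

    def break_dt_hangover(edenr, denr, eenr, conv, err_level,
                          b13=0.1, b15=4.0, b16=0.01, b17=8.0, b18=0.005,
                          b19=0.2, b21=0.05, b22=10.0, b27=0.5, b28=0.25):
        """Return True if any claimed condition holds; any() short-circuits."""
        eps = 1e-12
        conditions = (
            lambda: edenr / (denr + eps) < b13,
            lambda: denr / (eenr + eps) > b15 and err_level < b16,
            lambda: denr / (eenr + eps) > b17 and err_level < b18,
            lambda: edenr / (eenr + eps) < b19,
            lambda: conv == 0 and edenr / (denr + eps) < b21,
            lambda: conv == 0 and edenr / (denr + eps) > b22,
            lambda: denr / (eenr + eps) < b27,
            lambda: denr / (eenr + eps) < b28,
        )
        return any(cond() for cond in conditions)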
  • 28. The system according to claim 14, wherein the DT hangover breaker is operable to break a DT hangover, and reset dt_hngovr(n), based on detecting that the enhanced error signal energy eenr(n) is greater than a predefined threshold β29.
  • 29. The system according to claim 14, wherein the ST hangover breaker is operable to break an ST hangover, and reset st_hngovr(n), based on one or more of the following conditions being found to exist, and wherein if one condition is found to exist, the remaining conditions are not checked: a ratio between echo indicator edenr(n) and modified microphone signal energy denr(n) is greater than a predefined threshold β13; a ratio between enhanced error signal energy eenr(n) and edenr(n) is greater than a predefined threshold β19 and the single talk hangover counter is less than β20; a ratio between edenr(n) and denr(n) is greater than a predefined threshold β13; and a ratio between denr(n) and edenr(n) is greater than a predefined threshold β19.
  • 30. The system according to claim 14, wherein the ST hangover breaker setting is based on a plurality of parameters and a predefined group of validation conditions, and wherein presence of at least one validation condition is sufficient to set the single talk hangover and the system avoids checking the remaining validation conditions.
  • 31. The system according to claim 14, wherein the ST hangover validation conditions comprise: checking whether a ratio between modified microphone signal energy denr(n) and echo indicator edenr(n) is greater than a predefined threshold β21 and the ratio between echo indicator edenr(n) and modified microphone signal energy denr(n) is greater than a predefined threshold β23; checking whether the absolute short term average error signal eabs(n) is greater than the absolute short term average microphone signal dabs(n) by a predefined threshold β34 when dabs(n) is greater than zero; and checking whether a ratio between the absolute short term average error signal eabs(n) and the absolute short term average microphone signal dabs(n) is greater than a predefined threshold β35 when dabs(n) is greater than zero.
  • 32. The system according to claim 14, wherein the ST hangover validation conditions comprise: checking whether a ratio between modified microphone signal energy denr(n) and echo indicator edenr(n) is greater than a predefined threshold and the ratio between echo indicator edenr(n) and modified microphone signal energy denr(n) is greater than a predefined threshold; checking whether the absolute short term average error signal eabs(n) is greater than the absolute short term average microphone signal dabs(n) by a predefined threshold β34 when dabs(n) is greater than zero; and checking whether a ratio between the absolute short term average error signal eabs(n) and the absolute short term average microphone signal dabs(n) is greater than a predefined threshold β35 when dabs(n) is greater than zero.
  • 33. The system according to claim 13, wherein the module for revising the NLP decision is operable to: set the NLP decision to zero when the long term average of reference signal amplitude ly(n) is less than a predefined threshold β33; set the NLP decision to one when the absolute short term average error signal eabs(n) is greater than the absolute short term average microphone signal dabs(n) by a predefined threshold β34 when dabs(n) is greater than zero; and set the NLP decision to one when the ratio between the absolute short term average error signal eabs(n) and the absolute short term average microphone signal dabs(n) is greater than a predefined threshold β35 when dabs(n) is greater than zero.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority from U.S. Provisional Application No. 61/697,682, entitled “SYSTEMS AND METHODS OF ECHO & NOISE CANCELLATION IN VOICE COMMUNICATION”, which was filed on Sep. 6, 2012, and is hereby incorporated by reference herein in its entirety.

Provisional Applications (1)
Number        Date           Country
61/697,682    Sep. 6, 2012   US