Acoustic Echo Cancellation (AEC) is a digital signal processing technique used to remove the acoustic echo from a speakerphone in two-way (full-duplex) or multi-way communication systems, such as traditional telephones or modern internet audio conversation applications.
1. Overview of AEC Processing
In the render stream path, the system receives audio samples from the other end and places them into a render buffer 140 in periodic frame increments (labeled “spk[n]” in the figure). The digital-to-analog (D/A) converter 150 then reads audio samples from the render buffer sample by sample and converts them continuously to an analog signal at a sampling rate f_spk. Finally, the analog signal is played by speaker 160.
In systems such as that depicted by
Practically, the echo echo(t) can be represented as the speaker signal spk(t) convolved with a linear response g(t) (assuming the room can be approximately modeled as a finite-duration linear plant), per the following equation:
echo(t)=spk(t)*g(t)=∫0^Te spk(t−τ)g(τ)dτ
where * denotes convolution and Te is the echo length, i.e., the filter length of the room response.
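The convolution above can be illustrated with a short numerical sketch. The decaying room response, sampling rate, and white-noise far-end signal below are toy assumptions for illustration only, not values from the text:

```python
import numpy as np

# Toy parameters, assumed for illustration only.
fs = 16000                                    # sampling rate in Hz
Te = 0.05                                     # echo length in seconds
g = np.exp(-np.arange(int(Te * fs)) / 200.0)  # decaying room response g(t)
g /= np.sum(np.abs(g))                        # normalize the response gain

rng = np.random.default_rng(0)
spk = rng.standard_normal(fs)                 # one second of far-end signal

# Discrete counterpart of echo(t) = spk(t) * g(t): finite linear convolution
# over the echo length Te.
echo = np.convolve(spk, g)[:len(spk)]
```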
In order to remove the echo for the remote user, AEC 210 is added in the system as shown in
The actual room response (represented as g(t) in the above convolution equation) usually varies with time, for example due to changes in the position of the microphone 110 or speaker 160, body movement of the near-end user, and even room temperature. The room response therefore cannot be pre-determined, and must be estimated adaptively at run time. The AEC 210 is commonly based on adaptive filters such as Least Mean Square (LMS) adaptive filters 310, which can adaptively model the varying room response.
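As a concrete illustration of how such an adaptive filter can track the room response, here is a minimal normalized-LMS (NLMS) sketch; the tap count, step size, and regularization constant are illustrative assumptions, not values from the text:

```python
import numpy as np

def nlms_echo_canceller(spk, mic, taps=128, mu=0.5, eps=1e-8):
    """Sketch of a normalized LMS echo canceller.

    spk: far-end (speaker) samples; mic: microphone samples containing the
    echo (spk convolved with the room response) plus any near-end signal.
    Returns the AEC residual (microphone minus predicted echo).
    """
    w = np.zeros(taps)                     # adaptive estimate of room response g
    residual = np.zeros(len(mic))
    for n in range(taps - 1, len(mic)):
        x = spk[n - taps + 1:n + 1][::-1]  # most recent far-end samples
        y_hat = w @ x                      # predicted echo at the microphone
        e = mic[n] - y_hat                 # error: microphone minus prediction
        residual[n] = e
        w += mu * e * x / (x @ x + eps)    # normalized LMS weight update
    return residual
```

With a stationary room response and no near-end speech, the residual power decays toward zero as the weights converge to the true response.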
In addition to AEC, these voice communications systems (e.g., alternative system 400 shown in
The full-duplex communication experience can be further improved by two additional processes for processing the near-end speech signal: a residual echo suppression (RES) process that further suppresses the acoustic echo from the speakers, and a microphone array process that improves the signal-to-noise ratio of the speech captured from multiple microphones. One subcomponent of the microphone array process is a sound source localization (SSL) process used to estimate the direction of arrival (DOA) of the near-end speech signal.
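The SSL algorithm itself is not detailed in this excerpt; a common building block for DOA estimation is measuring the time difference of arrival between a microphone pair from the peak of their cross-correlation. The following sketch is an illustrative assumption, not the specific SSL process of the system:

```python
import numpy as np

def tdoa_samples(mic_a, mic_b, max_lag=32):
    """Estimate the delay (in samples) of mic_b relative to mic_a by
    locating the peak of their cross-correlation over +/- max_lag.
    The sample delay maps to a DOA given the microphone spacing."""
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.dot(mic_a[max_lag:-max_lag],
                    np.roll(mic_b, -l)[max_lag:-max_lag]) for l in lags]
    return lags[int(np.argmax(xcorr))]
```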
2. Overview of RES Processing
Upon starting, the RES process is initialized (510) to the following initial state of the weight (wc), the complex frequency-domain AEC residual (Xc(f,t)), and the far-end signal power (Pc(f,t)):
wc(0)=0
Xc(f,t)=0 for t ≤ 0
Pc(f,0)=∥Xc(f,1)∥²
where f is the individual frequency band and t is the frame index. Here, the weight is a factor applied in the RES process to predict the residual signal magnitude. The complex AEC residual is the output produced by the previous AEC process in the microphone channel. The far-end signal power is the power of the far-end signal calculated in the RES process.
As indicated at 520, 590 in
At action 540, the RES process 500 computes the error signal as a function of the magnitude of the AEC residual Mc, the residual signal magnitude estimate, and the noise floor NFc(f,t), via the equation:
Ec(f,t)=max(|Mc(f,t)|−R̂c(f,t), NFc(f,t))  (1)
At action 550, the RES process 500 computes the smoothed far-end signal power using the calculation,
Pc(f,t)=αPc(f,t−1)+(1−α)∥Xc(f,t)∥²
At action 560, the RES process 500 computes the normalized gradient
At action 570, the RES process 500 updates the weight with the following equation,
At action 580, the RES process 500 applies the gain to the AEC Residual phase to produce the RES process output (Bc(f,t)) using the following calculation, where φc(f,t) is the phase of the complex AEC residual, Xc(f,t),
Bc(f,t)=Ec(f,t)e^jφc(f,t)
The RES process 500 then continues to repeat this processing loop (actions 530-580) for a subsequent frame as indicated at 590. With this processing, the RES process is intended to predict and remove residual echo remaining from the preceding acoustic echo cancellation applied on the microphone channel. However, the RES process 500 includes a non-linear operation, i.e., the “max” operator in equation (1). The presence of this non-linear operation in the RES process 500 may introduce non-linear phase effects in the microphone channel that can adversely affect the performance of subsequent processes that depend upon phase and delay of the microphone channel, including the SSL/MA process.
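The per-frame loop can be sketched as follows for a single channel. The patent's gradient and normalized weight-update equations are not reproduced in this excerpt, so the `grad` and `w` update lines below are placeholder assumptions; equation (1), the power smoothing, and the phase re-application follow the text above:

```python
import numpy as np

def res_frame(M, X, NF, w, P, alpha=0.9, mu=0.1):
    """One pass of the per-frame RES loop (actions 540-580) for one channel.

    M: complex AEC residual for this frame (one value per frequency band)
    X: complex far-end signal for this frame
    NF: estimated noise floor per band
    w, P: weight and smoothed far-end power carried between frames
    alpha, mu: assumed smoothing and step-size constants.
    """
    R_hat = w * np.abs(X)                         # predicted residual magnitude
    E = np.maximum(np.abs(M) - R_hat, NF)         # error signal, equation (1)
    P = alpha * P + (1 - alpha) * np.abs(X) ** 2  # smoothed far-end power
    grad = (np.abs(M) - R_hat) * np.abs(X) / (P + 1e-12)  # assumed gradient form
    w = w + mu * grad                             # assumed weight update
    B = E * np.exp(1j * np.angle(M))              # re-apply AEC-residual phase
    return B, w, P
```

The max operator guarantees the output magnitude never drops below the noise floor, which is exactly the non-linearity the text identifies as problematic for downstream phase-sensitive processing.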
3. Overview of Center Clipping
The CC process 600 operates as follows. A multiplier block 610 multiplies an estimate of the peak speaker signal (“SpkPeak”) by an estimate of the speaker to microphone gain (“SpkToMicGain”), producing a leak through estimate (“leak through”) of the peak speaker echo in the microphone signal. Next, the leak through estimate is filtered across the neighboring frequency bands in block 620 to produce a filtered leak through estimate. Reverberance block 630 scales the filtered, leak through estimate by another parameter to account for the amount of reverberance in the particular band, yielding the value labeled as A.
In parallel, filter blocks 640, 650 separately filter the instantaneous microphone power and residual power across the neighboring frequency bands. Block 660 selects the minimum of the two filtered power results to produce the value labeled as B. As indicated at block 670, if A>B, then a flag is set to 1. Otherwise, the flag is set to 0. If the flag is set to 0, then the AEC residual for band f is not changed, and block 680 outputs the AEC residual as the CC process output. However, if the flag is set to 1, then block 680 instead sets the AEC residual in band f to a complex value with the magnitude equal to the background noise floor, and the phase is set to a random value between 0 and 360 degrees produced at block 690.
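The per-band decision of blocks 660-690 can be sketched as follows (the variable names are illustrative):

```python
import numpy as np

def center_clip_band(A, mic_pow, res_pow, residual, noise_floor, rng):
    """Center-clipping decision for one frequency band.

    A: reverberance-scaled leak through estimate (output of block 630)
    mic_pow, res_pow: filtered microphone and residual powers (blocks 640, 650)
    If the leak through estimate exceeds their minimum, the band is judged
    to be mostly echo and is replaced by noise-floor-level random-phase noise.
    """
    B = min(mic_pow, res_pow)                    # block 660
    if A > B:                                    # block 670 sets flag = 1
        phase = rng.uniform(0.0, 2.0 * np.pi)    # block 690 random phase
        return noise_floor * np.exp(1j * phase)  # block 680, flag == 1
    return residual                              # block 680, flag == 0
```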
This conventional center clipping process is designed to operate on a single microphone channel. This conventional process does not allow for the multiple microphone channels in a microphone array, due to the non-linear process in block 680. In addition, a microphone array has separate speaker to microphone gain for each microphone channel, and separate instantaneous microphone power.
The following Detailed Description presents various ways to integrate residual echo suppression and center clipping with sound source localization, and microphone array processing in two-way communication systems. In particular, various configurations or architectures for combining residual echo suppression into a system with audio echo cancellation and sound source localization with microphone arrays in ways that address the non-linear processing issue are presented. Additionally, modifications or extensions of the center clipping processing to permit integration with microphone array processing are presented.
This Summary is provided to introduce a selection of concepts in a simplified form that is further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Additional features and advantages of the invention will be made apparent from the following detailed description of embodiments that proceeds with reference to the accompanying drawings.
The following description relates to implementations of residual echo suppression and center clipping that permit integration with sound source localization/microphone array processing, and their application in two-way audio/voice communication systems (e.g., traditional or internet-based telephony, voice chat, and other two-way audio/voice communications). Although the following description illustrates the inventive integration of residual echo suppression and center clipping with sound source localization/microphone array processing in the context of internet-based voice telephony, it should be understood that this approach also can be applied to other two-way audio communication systems and like applications.
1. RES and SSL/MA Integration
As shown in
Although possible to arrange the AEC, RES, SSL/MA and CC processes in the processing order shown in
A second problem arises due to the integration of the center clipping (CC) process with the SSL/MA process. A typical CC process compares an estimate of the peak echo signal received at the microphone to the processed AEC residual. However, as shown in
This section presents three other alternative architectures of multiple microphone channel communications systems that more successfully integrate the RES and SSL/MA processes.
A first alternative architecture for a multiple microphone channel voice communications system 900 with integrated RES and SSL/MA processes is shown in
A voice communications system 1000 based on a second alternative integrated RES and SSL/MA architecture is shown in
The RES_Pred process 1010 is similar to the standard RES process 500 shown in
Bc(f,t)=Ec(f,t)e^jφc(f,t)
Simplifying leads to
Bc(f,t)=Mc(f,t)−R̂c(f,t)e^jφc(f,t)
where Mc(f,t) is the AEC residual for channel c. As shown in
The outputs of the RES_Pred processes 1010 for the microphone channels are combined using a parallel implementation of the microphone array process 1020, and the final result is subtracted (at block 1030) from the result of the combined sound source localization/microphone array process 720 operating on the AEC residuals. The subtraction block 1030 in
When the difference in equation (1) is less than the noise floor for channel c, then the RES_Pred output R̂c(f,t)e^jφc(f,t) is instead set to
R̂c(f,t)e^jφc(f,t)=(|Mc(f,t)|−NFc(f,t))e^jφc(f,t)
Neglecting the microphone array gain for the channel, the resulting output for channel c after the subtraction in block 1030 is
Bc(f,t)=Mc(f,t)−(|Mc(f,t)|−NFc(f,t))e^jφc(f,t)
which has a magnitude equal to the second condition of the maximum in equation (1).
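The equivalence described above can be checked numerically for a single channel, neglecting microphone array gains as the text does: clamping the *prediction* before subtraction reproduces the max-based RES output. The scalar values below are arbitrary illustrations:

```python
import numpy as np

def res_via_max(M, R_hat, NF):
    """Standard RES output: max() applied to the magnitude, phase of M kept."""
    E = max(abs(M) - R_hat, NF)
    return E * np.exp(1j * np.angle(M))

def res_via_subtraction(M, R_hat, NF):
    """RES_Pred architecture: clamp the prediction, then subtract it."""
    R_eff = min(R_hat, abs(M) - NF)          # limit prediction so |output| >= NF
    pred = R_eff * np.exp(1j * np.angle(M))  # predicted residual echo
    return M - pred                          # subtraction block 1030

# Case where the difference in equation (1) falls below the noise floor:
M, R_hat, NF = 0.8 * np.exp(1j * 0.3), 1.0, 0.05
assert np.isclose(res_via_max(M, R_hat, NF), res_via_subtraction(M, R_hat, NF))
```

This linear-until-clamped structure is what lets the subtraction be deferred until after the SSL/MA combination, keeping the microphone channels linear where phase matters.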
The voice communications system 1000 can also include a maximum operator applied at the output of the subtraction block 1030 to ensure the magnitude of the difference of the combined AEC residuals by the common SSL/MA process 720 and the combined predicted residual echoes by block 1020 is greater than or equal to the magnitude of the combined noise floor produced by the common SSL/MA process 720. This maximum operator compares the magnitude of the difference of the combined AEC residuals and the combined predicted residual echoes to the magnitude of the combined noise floor. If the magnitude of the output of the difference block is greater than or equal to the magnitude of the combined noise floor, the maximum operator passes the difference signal unchanged. Otherwise, the maximum operator sets the magnitude of the difference signal to the magnitude of the combined noise floor.
2. Center Clipping and SSL/MA Integration
In this section, we present an extension of a center clipping process that permits integration with the SSL/MA process.
As discussed in the Background section, the conventional center clipping process was designed to operate on a single channel. However, there are several problems related to running the conventional center clipping process after a microphone array process. For a microphone array, there is a separate speaker to microphone gain for each microphone channel. Likewise, there is a separate instantaneous microphone power for each microphone channel. As a result, there are separate leak through and instantaneous microphone power estimates for each channel.
Therefore, what is needed is a method to compute a single leak through estimate for all channels. As illustrated in
For a beam forming type microphone array process, the multi-channel leak through selector 1210 computes the dot product of the separate filtered, leak through estimates for each channel with the corresponding microphone array coefficients. This resulting dot product represents the filtered, leak through estimate in the direction pointed to by the array.
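This per-band dot product can be sketched as follows (the array shapes are illustrative assumptions):

```python
import numpy as np

def combine_leak_through(leak_estimates, ma_coeffs):
    """Combine per-channel filtered leak through estimates into one value
    per band by projecting them onto the microphone array coefficients,
    i.e., the leak through seen in the direction the beamformer points.

    leak_estimates: (channels, bands) array of filtered leak through estimates
    ma_coeffs:      (channels, bands) array of beamformer weights
    """
    return np.sum(leak_estimates * ma_coeffs, axis=0)  # per-band dot product
```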
Alternative methods that the multi-channel leak through selector 1210 may implement for computing a single filtered, leak through estimate are to use the maximum of the individual channel filtered, leak through estimates; the minimum filtered, leak through estimate; or some other weighting of the filtered, leak through estimates, such as equal weighting or weighting varying according to a set of weighting coefficients.
In the overall leak through process as shown in
A separate method to compute a single instantaneous microphone power estimate for all channels is also needed for microphone array integration.
Again, a preferred selector method for the beam-forming type microphone array process is to compute the dot product of the separate filtered, instantaneous microphone power estimates for each channel with the corresponding microphone array coefficients. This resulting dot product represents the filtered, instantaneous microphone power estimate in the direction pointed to by the array (i.e., the DOA estimate of the SSL process).
Alternative methods that the multi-channel selector 1310 may implement for computing a single filtered, instantaneous microphone power estimate include using the maximum, filtered instantaneous microphone power estimate; the minimum, filtered instantaneous microphone power estimate; or some other weighting of the filtered, instantaneous microphone power estimate such as equal weighting or weighting varying according to a set of weighting coefficients.
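The selector alternatives for the instantaneous microphone power can be sketched together; the method names and array shapes are illustrative assumptions:

```python
import numpy as np

def select_mic_power(powers, method="dot", ma_coeffs=None, weights=None):
    """Collapse per-channel filtered instantaneous microphone power
    estimates (shape: channels x bands) into a single per-band estimate.

    method: "dot" (project onto array coefficients, the beam-forming case),
    "max", "min", or "weighted" (equal weighting if no weights are given).
    """
    if method == "dot":
        return np.sum(powers * ma_coeffs, axis=0)
    if method == "max":
        return powers.max(axis=0)
    if method == "min":
        return powers.min(axis=0)
    if method == "weighted":
        w = (np.full(powers.shape[0], 1.0 / powers.shape[0])
             if weights is None else weights)
        return w @ powers
    raise ValueError(f"unknown method: {method}")
```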
As shown in
The CC process 1400 with microphone array integration can be incorporated into the above described integrated RES and SSL/MA architectures (
3. Computing Environment
The above-described processing techniques for RES and CC with microphone array integration can be realized on any of a variety of two-way communication systems, including, among other examples, computers, speakerphones, two-way radios, game consoles, and conferencing equipment. The AEC digital signal processing techniques can be implemented in hardware circuitry, in firmware controlling audio digital signal processing hardware, or in communication software executing within a computer or other computing environment, such as shown in
With reference to
A computing environment may have additional features. For example, the computing environment (1700) includes storage (1740), one or more input devices (1750), one or more output devices (1760), and one or more communication connections (1770). An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment (1700). Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment (1700), and coordinates activities of the components of the computing environment (1700).
The storage (1740) may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (1700). The storage (1740) stores instructions for the software (1780) implementing the described audio digital signal processing for RES and CC with microphone array integration.
The input device(s) (1750) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment (1700). For audio, the input device(s) (1750) may be a sound card or similar device that accepts audio input in analog or digital form from a microphone or microphone array, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) (1760) may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment (1700).
The communication connection(s) (1770) enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The audio digital signal processing techniques described herein for RES and CC with microphone array integration can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment (1700), computer-readable media include memory (1720), storage (1740), communication media, and combinations of any of the above.
These techniques can also be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “determine,” “generate,” “adjust,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.