Post-mixing acoustic echo cancellation systems and methods

Information

  • Patent Grant
  • Patent Number
    10,367,948
  • Date Filed
    Friday, January 13, 2017
  • Date Issued
    Tuesday, July 30, 2019
Abstract
Acoustic echo cancellation systems and methods are provided that can cancel and suppress acoustic echo from the output of a mixer that has mixed audio signals from a plurality of acoustic sources, such as microphones. The microphones may have captured speech and sound from a remote location or far end, such as in a conferencing environment. The acoustic echo cancellation may generate an echo-cancelled mixed audio signal based on a mixed audio signal from a mixer, information gathered from the audio signal from each of the plurality of acoustic sources, and a remote audio signal. The systems and methods may be computationally efficient and resource-friendly.
Description
TECHNICAL FIELD

This application generally relates to acoustic echo cancellation performed after the mixing of audio signals from a plurality of acoustic sources, such as microphones used in a conferencing system. In particular, this application relates to systems and methods for cancelling and suppressing acoustic echo from the output of a mixer while efficiently utilizing computation resources.


BACKGROUND

Conferencing environments, such as boardrooms, conferencing settings, and the like, can involve the use of microphones for capturing sound from audio sources and loudspeakers for presenting audio from a remote location (also known as a far end). For example, persons in a conference room may be conducting a conference call with persons at a remote location. Typically, speech and sound from the conference room may be captured by microphones and transmitted to the remote location, while speech and sound from the remote location may be received and played on loudspeakers in the conference room. Multiple microphones may be used in order to optimally capture the speech and sound in the conference room.


However, the microphones may pick up the speech and sound from the remote location that is played on the loudspeakers. In this situation, the audio transmitted to the remote location may include an echo, i.e., the speech and sound from the conference room as well as the speech and sound from the remote location. If there is no correction, the audio transmitted to the remote location may be of low quality or unacceptable because of this echo. In particular, it would not be desirable for persons at the remote location to hear their own speech and sound.


Existing echo cancellation systems may utilize an acoustic echo canceller for each of the multiple microphones, and a mixer can subsequently mix and process each echo-cancelled microphone signal. However, these types of systems may be computationally intensive and complex. For example, separate and dedicated processing may be needed to perform acoustic echo cancellation on each microphone signal. Furthermore, a typical acoustic echo canceller placed after a mixer would work poorly because it would need to constantly readapt to the mixed signal whenever the mixer is dynamic, i.e., whenever the gains on one or more of the mixer channels change over time.


Accordingly, there is an opportunity for acoustic echo cancellation systems and methods that address these concerns. More particularly, there is an opportunity for acoustic echo cancellation systems and methods that cancel and suppress acoustic echo and work with a mixer that has mixed the audio of multiple acoustic sources, while being computationally efficient and resource-friendly.


SUMMARY

The invention is intended to solve the above-noted problems by providing acoustic echo cancellation systems and methods that are designed to, among other things: (1) generate an echo-cancelled mixed audio signal based on a mixed audio signal from a mixer, information gathered from the audio signal from each of the plurality of acoustic sources, and a remote audio signal; (2) generate the echo-cancelled mixed audio signal by selecting various tap coefficients of a background filter performing a normalized least-mean squares algorithm, a hidden filter, and a mix filter, based on comparing a background error power and a hidden error power; and (3) use a non-linear processor to generate an echo-suppressed mixed audio signal from the echo-cancelled mixed audio signal when the background filter and hidden filter have not yet converged.


In an embodiment, a system includes a memory, a plurality of acoustic sources, a mixer in communication with the plurality of acoustic sources and the memory, and an acoustic echo canceller in communication with the mixer, the memory, and a remote audio signal. The plurality of acoustic sources may each be configured to generate an audio signal. The mixer may be configured to mix the audio signal from each of the plurality of acoustic sources to produce a mixed audio signal. The acoustic echo canceller may be configured to generate an echo-cancelled mixed audio signal based on the mixed audio signal, information gathered from each of the plurality of acoustic sources, and the remote audio signal.


In another embodiment, a method includes receiving an audio signal from each of a plurality of acoustic sources; receiving a remote audio signal; mixing the audio signal from each of the plurality of acoustic sources using a mixer to produce a mixed audio signal; and generating an echo-cancelled mixed audio signal based on the mixed audio signal, information gathered from the audio signal from each of the plurality of acoustic sources, and the remote audio signal, using an acoustic echo canceller.


These and other embodiments, and various permutations and aspects, will become apparent and be more fully understood from the following detailed description and accompanying drawings, which set forth illustrative embodiments that are indicative of the various ways in which the principles of the invention may be employed.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a communication system including an acoustic echo canceller, in accordance with some embodiments.



FIG. 2 is a schematic diagram of an acoustic echo canceller for use in the communication system of FIG. 1, in accordance with some embodiments.



FIG. 3 is a flowchart illustrating operations for performing acoustic echo cancellation using the communication system of FIG. 1, in accordance with some embodiments.



FIG. 4 is a flowchart illustrating operations for running a background filter and a hidden filter while performing acoustic echo cancellation using the communication system of FIG. 1, in accordance with some embodiments.



FIG. 5 is a flowchart illustrating operations for running a non-linear processor to generate an echo-suppressed mixed audio signal using the communication system of FIG. 1, in accordance with some embodiments.





DETAILED DESCRIPTION

The description that follows describes, illustrates and exemplifies one or more particular embodiments of the invention in accordance with its principles. This description is not provided to limit the invention to the embodiments described herein, but rather to explain and teach the principles of the invention in such a way to enable one of ordinary skill in the art to understand these principles and, with that understanding, be able to apply them to practice not only the embodiments described herein, but also other embodiments that may come to mind in accordance with these principles. The scope of the invention is intended to cover all such embodiments that may fall within the scope of the appended claims, either literally or under the doctrine of equivalents.


It should be noted that in the description and drawings, like or substantially similar elements may be labeled with the same reference numerals. However, sometimes these elements may be labeled with differing numbers, such as, for example, in cases where such labeling facilitates a more clear description. Additionally, the drawings set forth herein are not necessarily drawn to scale, and in some instances proportions may have been exaggerated to more clearly depict certain features. Such labeling and drawing practices do not necessarily implicate an underlying substantive purpose. As stated above, the specification is intended to be taken as a whole and interpreted in accordance with the principles of the invention as taught herein and understood to one of ordinary skill in the art.


The acoustic echo cancellation systems and methods described herein can generate an echo-cancelled mixed audio signal based on a mixed audio signal from a mixer, information gathered from the audio signal from each of the plurality of acoustic sources, and a remote audio signal, while being computationally efficient and resource-friendly. The systems and methods may eliminate the need for separate acoustic echo cancellers for each acoustic source, e.g., microphone, while maintaining the cancellation benefits of separate acoustic echo cancellers. Moreover, the decreased computational load may allow the use of less expensive hardware (e.g., processor and/or DSP), and/or enable other features to be included in the communication system 100. User satisfaction may be increased through use of the communication system 100 and acoustic echo canceller 112.



FIG. 1 is a schematic diagram of a communication system 100 for capturing sound from audio sources in an environment using microphones 102 and presenting audio from a remote location using a loudspeaker 104. FIG. 2 is a schematic diagram of the acoustic echo canceller 112 included in the communication system 100. The communication system 100 may generate an echo-cancelled mixed audio signal using the acoustic echo canceller 112 that processes a mixed audio signal from a mixer 106. The echo-cancelled mixed audio signal may mitigate the sound received from the remote location that is played on the loudspeaker 104. In this way, the echo-cancelled mixed audio signal may be transmitted to the remote location without the undesirable echo of persons at the remote location hearing their own speech and sound.


Environments such as conference rooms may utilize the communication system 100 to facilitate communication with persons at the remote location, for example. The types of microphones 102 and their placement in a particular environment may depend on the locations of audio sources, physical space requirements, aesthetics, room layout, and/or other considerations. For example, in some environments, the microphones may be placed on a table or lectern near the audio sources. In other environments, the microphones may be mounted overhead to capture the sound from the entire room, for example. The communication system 100 may work in conjunction with any type and any number of microphones 102. Various components included in the communication system 100 may be implemented using software executable by one or more servers or computers, such as a computing device with a processor and memory, and/or by hardware (e.g., discrete logic circuits, application specific integrated circuits (ASIC), programmable gate arrays (PGA), field programmable gate arrays (FPGA), etc.).



FIGS. 3-5 illustrate embodiments of methods for utilizing the communication system 100 and the acoustic echo canceller 112. In particular, FIG. 3 illustrates a process 300 for performing acoustic echo cancellation using the communication system 100, FIG. 4 illustrates a method 324 for running a background filter 202 and a hidden filter 204 in the acoustic echo canceller 112, and FIG. 5 illustrates a method 312 for conditionally running a non-linear processor 212 in the acoustic echo canceller 112. In general, a computer program product in accordance with the embodiments includes a computer usable storage medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code is adapted to be executed by a processor (e.g., working in connection with an operating system) to implement the methods described below. In this regard, the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via C, C++, Java, Actionscript, Objective-C, Javascript, CSS, XML, and/or others).


Referring to FIG. 1, the communication system 100 may include the microphones 102, the loudspeaker 104, a mixer 106, a switch 108, a memory 110, the acoustic echo canceller 112, fast Fourier transform (FFT) modules 114, 116, 118, and an inverse fast Fourier transform module 120. Each of the microphones 102 may detect sound in the environment and convert the sound to an audio signal. In embodiments, some or all of the audio signals from the microphones 102 may be processed by a beamformer (not shown) to generate one or more beamformed audio signals, as is known in the art. Accordingly, while the systems and methods are described herein as using audio signals from microphones 102, it is contemplated that the systems and methods may also utilize any type of acoustic source, such as beamformed audio signals generated by a beamformer.


The audio signals from each of the microphones 102 may be received by the mixer 106, such as at step 318 of the process 300 shown in FIG. 3, to generate a mixed audio signal, such as at step 326. The mixed audio signal generated by the mixer 106 may conform to a desired audio mix such that the audio signals from certain microphones are emphasized and the audio signals from other microphones are deemphasized or suppressed. Exemplary embodiments of audio mixers are disclosed in commonly-assigned patents, U.S. Pat. Nos. 4,658,425 and 5,297,210, each of which is incorporated by reference in its entirety. The mixed audio signal generated at step 326 may be converted into the frequency domain using a fast Fourier transform module 116, such as at step 328.
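
As a rough illustration of this mixing stage only (the referenced Julstrom patents describe the actual automixing behavior, and all gain values here are hypothetical), a mixed frame can be formed as a gain-weighted sum of per-microphone frames. A minimal Python/NumPy sketch:

import numpy as np

def mix_frames(mic_frames, gains):
    """Gain-weighted sum of per-microphone frames.

    mic_frames : array of shape (M, N), one frame per microphone
    gains      : length-M sequence of channel gains (illustrative values only)
    """
    gains = np.asarray(gains, dtype=float).reshape(-1, 1)
    return np.sum(gains * mic_frames, axis=0)

# Example with made-up numbers: 3 microphones, 256-sample frames
mixed = mix_frames(np.random.randn(3, 256), gains=[1.0, 0.5, 0.0])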


In parallel, the audio signals from each of the microphones 102 may be converted to the frequency domain by fast Fourier transform modules 114, such as at step 320. One of these converted audio signals may be selected and conveyed at step 322 by a signal selection mechanism, such as a switch 108, for example. The signal selection mechanism may gather information about each acoustic source (or subset of acoustic sources), e.g., audio signals from the microphones 102 or beamformed audio signals, in order to optimize the adaptation for a mix of all of the acoustic sources. While a switch 108 is illustrated in FIG. 1, other signal selection mechanisms are contemplated, such as a second mixer that could select the audio signal from a particular microphone 102 by attenuating some or all of the audio signals from the other microphones 102.


Each of the audio signals from the microphones 102 can be selected by the switch 108 and processed in turn, such that a background filter 202 and a hidden filter 204 (in the acoustic echo canceller 112) work on one of the audio signals at a time. The switch 108 may enable adaptation on each of the audio signals from the microphones 102 within a particular duration so that the communication system 100 may properly perform echo cancellation regardless of the type of mixer 106, the current state of the mixer 106, or if the mixer 106 is undergoing a change in state. At step 324, the background filter 202 and the hidden filter 204 in the acoustic echo canceller 112 may run on the selected audio signal. Step 324 is described below in more detail with respect to FIG. 4.
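
A minimal sketch of one possible signal selection policy, assuming the switch 108 simply cycles through the acoustic sources so that the background filter 202 and hidden filter 204 adapt on one channel at a time (the selection order and timing are assumptions, not specified here):

class RoundRobinSelector:
    """Cycles through acoustic sources, handing one frame at a time to the
    background/hidden filters (assumed round-robin policy)."""

    def __init__(self, num_sources):
        self.num_sources = num_sources
        self.current = 0

    def select(self, source_frames):
        """Return (index, frame) of the source currently under adaptation."""
        index = self.current
        frame = source_frames[index]
        self.current = (self.current + 1) % self.num_sources
        return index, frame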



FIG. 4 describes further details of an embodiment of step 324 for running a background filter 202 and a hidden filter 204 in the acoustic echo canceller 112. The background filter 202 may be a finite impulse response filter that runs a normalized least-mean squares algorithm on the selected audio signal, such as at step 402, and may generate an estimate ĥm[n] of the impulse response at sample n for a microphone m in the environment. The background filter 202 may also measure a background error power of the selected audio signal, such as at step 404. The background filter 202 may have tap coefficients h that are used to scale a finite series of delay taps. A background error e[n] of the selected audio signal may be measured by the background filter 202 according to the equation:

e[n]=d[n]−ĥ†[n]x[n]

where d[n] is the audio signal, x[n] is a vector of samples from a remote audio signal, and † denotes a conjugate transpose operation. The background error power may be measured based on the background error e[n], such as by using a time average of the magnitude of the squared background error.


The hidden filter 204 may be a finite impulse response filter that is run, such as at step 406, on a remote audio signal and on a previous unweighted estimate of the echo-path impulse response made by the background filter 202. This previous unweighted estimate corresponds to an unweighted portion of the selected audio signal within a mix filter 208 (described below). The hidden filter 204 may measure a hidden error of the selected audio signal, such as at step 408, by subtracting the remote audio signal from the selected audio signal. A hidden error power may be measured based on the hidden error, such as by using a time average of the magnitude of the squared hidden error. The hidden filter 204 may have tap coefficients h that are used to scale a finite series of delay taps.


The background error power measured at step 404 and the hidden error power measured at step 408 may be compared at step 410 by an error comparison module 206. The error comparison module 206 may determine at step 410 whether the background error power is greater than the hidden error power. If it is determined that the background error power is greater than the hidden error power at step 410, then the process 324 may continue to step 412. At step 412, the tap coefficients of the background filter 202 may be selected and stored in a memory 110. At step 414, the stored tap coefficients from step 412 may be copied from the memory 110 and used to replace the tap coefficients of the hidden filter 204. The stored tap coefficients from step 412 may also be copied at step 414 from the memory 110 and used to update the tap coefficients of the mix filter 208, as described in more detail below.
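
A minimal sketch of the comparison-and-copy logic of steps 410-414, assuming the filters are represented by NumPy tap vectors, the per-channel echo-path estimates used by the mix filter are kept in a list, and the memory is a plain dictionary (all names are illustrative):

import numpy as np

def compare_and_copy(bg_err_power, hidden_err_power, bg_taps, hidden_taps,
                     channel_estimates, gains, m_sel, memory):
    """Steps 410-414: if the background error power exceeds the hidden error
    power, store the background taps (step 412), then copy them into the hidden
    filter and into the selected channel's mix-filter contribution (step 414)."""
    if bg_err_power > hidden_err_power:
        memory["stored_taps"] = bg_taps.copy()           # step 412
        hidden_taps = memory["stored_taps"].copy()       # step 414: replace hidden taps
        channel_estimates[m_sel] = memory["stored_taps"].copy()
    # Mix filter taps as the gain-weighted sum over all channel estimates
    mix_taps = np.sum(np.asarray(gains)[:, None] * np.stack(channel_estimates), axis=0)
    return hidden_taps, mix_taps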


Following step 414, the process 324 may continue to step 416. In addition, if it is determined at step 410 that the background error power is not greater than the hidden error power, then the process 324 may continue to step 416. At step 416, it may be determined whether a channel scaling factor am of the mixer 106 has changed. The channel scaling factor of the mixer 106 may change automatically or manually (e.g., by a user adjustment). If the channel scaling factor of the mixer 106 has changed at step 416, then the process 324 may continue to step 418. At step 418, the tap weights of the mix filter 208 may be updated corresponding to the changed channel scaling factor, such as by adding a difference in weight multiplied by a channel impulse response estimate, as described in more detail below.


Following step 418, the process 324 may continue to step 420. In addition, if it is determined that the channel scaling of the mixer 106 has not changed at step 416, then the process 324 may continue to step 420. At step 420, the tap coefficients of the background filter 202 may be updated, according to the equation:

ĥ[n+1]=ĥ[n]+(α/∥x[n]∥²)e*[n]x[n]

where α is a step-size parameter, * denotes a complex conjugation operation, and ∥⋅∥ denotes an ℓ2 norm. The process 324 may then return to the process 300 and in particular, to step 308, as described below.
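
The background error and the NLMS tap update above can be restated compactly in Python/NumPy, together with the time-averaged error power used for the comparison at step 410; the step size, smoothing constant, and regularization term eps are illustrative choices rather than values specified here:

import numpy as np

def nlms_step(h_hat, x, d, alpha=0.5, eps=1e-8):
    """One background-filter update: e[n] = d[n] - h†[n] x[n], then
    h[n+1] = h[n] + (alpha / ||x[n]||^2) e*[n] x[n]."""
    e = d - np.vdot(h_hat, x)                 # np.vdot conjugates its first argument
    h_next = h_hat + (alpha / (np.vdot(x, x).real + eps)) * np.conj(e) * x
    return h_next, e

def smoothed_error_power(prev_power, e, beta=0.9):
    """Running time average of |e[n]|^2 (background or hidden error power)."""
    return beta * prev_power + (1.0 - beta) * np.abs(e) ** 2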


Returning to the process 300 of FIG. 3, while the audio signals are received from the microphones 102 and processed in steps 318-328 of the process 300 and steps 402-420 of the process 324, a remote audio signal may be received from a remote location, i.e., a far end, such as at step 302. The remote audio signal may be output on the loudspeaker 104 in the environment, such as at step 304. At step 306, the remote audio signal may also be converted into the frequency domain using a fast Fourier transform module 118. At this point, it can be seen that the acoustic echo canceller 112 may receive the mixed audio signal from the mixer 106, the selected audio signal from the switch 108, and the remote audio signal from the remote location (far end). Each of the mixed audio signal from the mixer 106, the selected audio signal from the switch 108, and the remote audio signal may have been converted into the frequency domain, as previously described, by the respective FFT modules 114, 116, 118. Accordingly, the acoustic echo canceller 112 may operate in the frequency domain so that the acoustic echo cancellation is performed faster and with high quality.
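
A minimal sketch of the frame-wise conversion performed by the FFT modules 114, 116, 118 and the inverse FFT module 120, assuming windowed real-valued frames (frame length, window, and overlap are not specified and are assumptions here):

import numpy as np

def fft_module(frame):
    """Convert a real time-domain frame to the frequency domain."""
    return np.fft.rfft(np.hanning(len(frame)) * frame)

def ifft_module(spectrum, frame_len):
    """Convert a frequency-domain frame back to the time domain."""
    return np.fft.irfft(spectrum, n=frame_len)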


The acoustic echo canceller 112 may run a mix filter 208 at step 308. The mix filter 208 may be a weighted sum ĥmix[n] of the finite impulse responses of all the audio signals of the microphones 102, such that:

ĥmix[n]=Σ_{m=0}^{M−1} am ĥm[n]

where am is the channel scaling (weight or gain) of a particular microphone 102. The mix filter 208 processes the remote audio signal received from the far end and generates a filtered remote audio signal that is an estimate of the echo signal generated at the output of the mixer. In particular, the mix filter models the coupling between the echo paths detected by the microphones 102 and the mixer 106.
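
Because the signals are in the frequency domain, this filtering reduces to a per-bin multiplication. The sketch below forms the gain-weighted sum of the per-microphone echo-path estimates, filters the far-end signal, and (anticipating step 310) subtracts the result from the mixer output at the summing point 214; variable names are illustrative:

import numpy as np

def filtered_remote_signal(channel_estimates, gains, remote_spectrum):
    """h_mix = sum over m of a_m * h_m, applied per frequency bin to the far-end signal."""
    h_mix = np.sum(np.asarray(gains)[:, None] * np.stack(channel_estimates), axis=0)
    return h_mix * remote_spectrum

def echo_cancel(mixed_spectrum, channel_estimates, gains, remote_spectrum):
    """Summing point 214: subtract the echo estimate from the mixed audio signal."""
    return mixed_spectrum - filtered_remote_signal(channel_estimates, gains, remote_spectrum)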


As described previously, the tap coefficients of the mix filter 208 may be updated by the tap coefficients of the background filter at step 414 of the process 324, if the background error power is greater than the hidden error power at step 410. When this occurs, the weighted sum ĥmix[n+1] for the next sample n+1 may be given by:

ĥmix[n+1]=Σ_{m≠m′}^{M−1} am ĥm[n] + am′ ĥm′[n+1]

where m′ is the selected audio signal of a particular microphone 102.
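
This coefficient-copy update, and the channel-scaling update described in the next paragraph, can be applied incrementally to the current mix-filter taps. The sketch below follows the prose description of each case (replacing the selected channel's contribution at step 414, and adding the difference in weight multiplied by the channel impulse response estimate at step 418); the function and argument names are illustrative:

def update_mix_after_copy(h_mix, gain_sel, h_sel_old, h_sel_new):
    """Step 414 case: the selected channel's estimate changes from h_sel_old to h_sel_new."""
    return h_mix + gain_sel * (h_sel_new - h_sel_old)

def update_mix_after_gain_change(h_mix, h_sel, gain_old, gain_new):
    """Step 418 case: add the difference in weight times the channel impulse response estimate."""
    return h_mix + (gain_new - gain_old) * h_sel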


As also described previously, the tap weights of the mix filter 208 may be updated at step 418 of the process 324, if the channel scaling factor of the mixer 106 has changed at step 416. When this occurs, the update may be performed by adding the difference in weight multiplied by the channel impulse response estimate ĥm′. In particular, the weighted sum ĥmix[n+1] for the next sample n+1 may be given by:

ĥmix[n+1]=Σ_{m≠m′}^{M−1} am ĥm[n] + (am′[n+1]−am′[n]) ĥm′[n]
After the mix filter 208 generates the filtered remote audio signal at step 308, the process 300 may continue to step 310. At step 310, the echo-cancelled mixed audio signal may be generated by the acoustic echo canceller 112. In particular, the filtered remote audio signal generated by the mix filter 208 may be subtracted from the mixed audio signal from the mixer 106, as denoted by the summing point 214 shown in FIG. 2. The echo-cancelled mixed audio signal may be processed by a non-linear processor at step 312, depending on the coherence of the filtered remote audio signal from the mix filter 208 and the estimated residual echo power of the echo-cancelled mixed audio signal output from the summing point 214. Details of step 312 are described below with respect to FIG. 5.



FIG. 5 describes further details of an embodiment of step 312 for running a non-linear processor 212 in the acoustic echo canceller 112 to generate an echo-suppressed mixed audio signal. In particular, after the echo-cancelled mixed audio signal is generated at step 310, it can be determined whether to run the non-linear processor 212 to further suppress any echo and generate comfort noise (e.g., synthetic background noise), as necessary. The non-linear processor 212 may run, for example, in situations when there is only speech and sound from the remote location (far end) and when the background filter 202 and the hidden filter 204 have not yet converged.


At step 502, the output coherence of the filtered remote audio signal from the mix filter 208 may be measured by mix estimators 210. The output coherence is a measure of the relationship between the frequency content of the filtered remote audio signal and the audio signals from the microphones 102. The mix estimators 210 may measure the coherence from the output of the mixer 106 prior to echo cancellation at the summing point 214 and after echo cancellation at the summing point 214. If the coherence is high, then the signals may be deemed to be related in the frequency domain. The residual echo power of the echo-cancelled mixed audio signal output from the summing point 214 may be estimated at step 504 by the mix estimators 210. The non-linear processor 212 may process the echo-cancelled mixed audio signal at step 508 to generate an echo-suppressed mixed audio signal if (1) the output coherence is greater than a predetermined threshold (e.g., signifying that there is only an echo signal present in the microphones 102); or (2) the residual echo power is greater than half of the power of the mixed audio signal from the mixer 106. Following step 508, the process 312 may continue to step 314 of the process 300. However, if neither of these conditions is satisfied, then the process 312 may continue from step 506 to step 314 of the process 300.
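
A minimal sketch of this gating logic, with an illustrative coherence threshold and a crude attenuate-and-add-comfort-noise stand-in for the non-linear processor 212 (the actual suppression rule and comfort-noise generator are not specified here, so these are assumptions):

import numpy as np

def maybe_run_nlp(echo_cancelled, mixed, coherence, residual_echo_power,
                  coherence_threshold=0.9, attenuation=0.05, noise_level=1e-4):
    """Run the non-linear processor only if the output coherence exceeds the
    threshold or the residual echo power exceeds half the mixed-signal power."""
    mixed_power = np.mean(np.abs(mixed) ** 2)
    if coherence > coherence_threshold or residual_echo_power > 0.5 * mixed_power:
        comfort_noise = noise_level * np.random.randn(*np.shape(echo_cancelled))
        return attenuation * echo_cancelled + comfort_noise   # echo-suppressed mixed signal
    return echo_cancelled                                      # no suppression needed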


Returning to FIG. 3, at step 314, the (1) echo-cancelled mixed audio signal generated at step 310 (if the non-linear processor 212 was not executed at step 312) or (2) the echo-suppressed mixed audio signal generated at step 508 (if the non-linear processor 212 was executed at step 312) may be converted to the time domain. The resulting echo-cancelled or echo-suppressed audio signal may be transmitted to the remote location (far end) at step 316. The process 300 may return to step 322 to select and convey another of the audio signals from the microphones 102 for processing at steps 324 and 308-316, as described previously. In this way, information from the audio signal from each of the plurality of microphones 102 may be utilized when generating the echo-cancelled or echo-suppressed audio signal.


Any process descriptions or blocks in figures should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the embodiments of the invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those having ordinary skill in the art.


This disclosure is intended to explain how to fashion and use various embodiments in accordance with the technology rather than to limit the true, intended, and fair scope and spirit thereof. The foregoing description is not intended to be exhaustive or to be limited to the precise forms disclosed. Modifications or variations are possible in light of the above teachings. The embodiment(s) were chosen and described to provide the best illustration of the principle of the described technology and its practical application, and to enable one of ordinary skill in the art to utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the embodiments as determined by the appended claims, as may be amended during the pendency of this application for patent, and all equivalents thereof, when interpreted in accordance with the breadth to which they are fairly, legally and equitably entitled.

Claims
  • 1. A system, comprising: (A) a memory; (B) a plurality of acoustic sources each configured to generate an audio signal; (C) a mixer in communication with the plurality of acoustic sources and the memory, the mixer configured to mix the audio signal from each of the plurality of acoustic sources to produce a mixed audio signal; and (D) an acoustic echo canceller in communication with the mixer, the memory, and a remote audio signal, the acoustic echo canceller configured to generate an echo-cancelled mixed audio signal based on the mixed audio signal, information gathered from the audio signal from each of the plurality of acoustic sources, and the remote audio signal, wherein the acoustic echo canceller comprises: a background filter having background filter tap coefficients and configured to measure a background error power of the audio signal from each of the plurality of acoustic sources using a normalized least-mean squares algorithm; a hidden filter having hidden filter tap coefficients and configured to measure a hidden error power of the audio signal from each of the plurality of acoustic sources, based on the audio signal from each of the plurality of acoustic sources and the remote audio signal; and an error comparison module in communication with the background filter and the hidden filter, the error comparison module configured to: compare the background error power and the hidden error power; and select and store the background filter tap coefficients in the memory, if the background error power is greater than the hidden error power.
  • 2. The system of claim 1: further comprising a signal selection mechanism in communication with the plurality of acoustic sources and the acoustic echo canceller, the signal selection mechanism configured to select at least one audio signal from at least one of the plurality of acoustic sources and convey the at least one selected audio signal to the acoustic echo canceller; wherein the acoustic echo canceller is further configured to generate the echo-cancelled mixed audio signal based on the mixed audio signal, information gathered from the at least one selected audio signal, and the remote audio signal.
  • 3. The system of claim 1, wherein the error comparison module is further configured to copy the stored background filter tap coefficients from the memory to replace the hidden filter tap coefficients, if the background error power is greater than the hidden error power.
  • 4. The system of claim 1, wherein the background filter is configured to measure a background error e[n] according to the equation: e[n]=d[n]−ĥ†[n]x[n], where d[n] is one of the audio signals, x[n] is a vector of samples from the remote audio signal, and † denotes a conjugate transpose operation; wherein the background error power is estimated based on the background error.
  • 5. The system of claim 1, wherein the error comparison module is further configured to update the background filter tap coefficients according to the equation:
  • 6. The system of claim 1, wherein the acoustic echo canceller further comprises: a mix filter having mix filter tap coefficients and tap weights, and configured to filter the remote audio signal to generate a filtered remote audio signal.
  • 7. The system of claim 1, wherein: the acoustic echo canceller further comprises a mix filter having mix filter tap coefficients and tap weights, and configured to filter the remote audio signal to generate a filtered remote audio signal; and the error comparison module is further configured to copy the stored background filter tap coefficients from the memory to update the mix filter tap coefficients by combining the hidden filter tap coefficients of each of the plurality of acoustic sources not currently under adaptation and the most recently updated background filter tap coefficients corresponding to the acoustic source currently under adaptation, if the background error power is greater than the hidden error power.
  • 8. The system of claim 1, wherein: the acoustic echo canceller further comprises a mix filter having mix filter tap coefficients and tap weights, and configured to filter the remote audio signal to generate a filtered remote audio signal; and the mix filter is further configured to be updated if a channel scaling factor of the mixer has changed by updating the tap weights corresponding to the changed channel scaling factor by adding the difference in weight multiplied by a channel impulse response estimate.
  • 9. The system of claim 6, wherein the acoustic echo canceller is configured to generate the echo-cancelled mixed audio signal by subtracting the filtered remote audio signal from the mixed audio signal.
  • 10. The system of claim 9, wherein the acoustic echo canceller further comprises: a mix estimator in communication with the mixer, the mix filter, and the echo-cancelled mixed audio signal, the mix estimator configured to: measure an output coherence of the filtered remote audio signal; and estimate a residual echo power of the echo-cancelled mixed audio signal; and a non-linear processor configured to process the echo-cancelled mixed audio signal to generate an echo-suppressed mixed audio signal, if the output coherence exceeds a predetermined threshold or if the residual echo power exceeds half of a power of the mixed audio signal.
  • 11. A method, comprising: receiving an audio signal from each of a plurality of acoustic sources; receiving a remote audio signal; mixing the audio signal from each of the plurality of acoustic sources using a mixer to produce a mixed audio signal; and generating an echo-cancelled mixed audio signal based on the mixed audio signal, information gathered from the audio signal from each of the plurality of acoustic sources, and the remote audio signal, using an acoustic echo canceller, wherein generating the echo-cancelled mixed audio signal comprises: measuring a background error power of the audio signal from each of the plurality of acoustic sources using a normalized least-mean squares algorithm in a background filter having background filter tap coefficients; measuring a hidden error power of the audio signal from each of the plurality of acoustic sources, based on the audio signal from each of the plurality of acoustic sources and the remote audio signal, using a hidden filter having hidden filter tap coefficients; comparing the background error power and the hidden error power; and selecting and storing the background filter tap coefficients in a memory, if the background error power is greater than the hidden error power.
  • 12. The method of claim 11: further comprising selecting and conveying at least one selected audio signal from at least one of the plurality of acoustic sources using a signal selection mechanism to the acoustic echo canceller; wherein generating the echo-cancelled mixed audio signal comprises generating the echo-cancelled mixed audio signal based on the mixed audio signal, information gathered from the at least one selected audio signal, and the remote audio signal.
  • 13. The method of claim 11, further comprising copying the stored background filter tap coefficients from the memory to replace the hidden filter tap coefficients, if the background error power is greater than the hidden error power.
  • 14. The method of claim 11, wherein: measuring the background error power comprises measuring a background error e[n] according to the equation: e[n]=d[n]−ĥ†[n]x[n], where d[n] is one of the audio signals, x[n] is a vector of samples from the remote audio signal, and † denotes a conjugate transpose operation; and estimating the background error power based on the background error.
  • 15. The method of claim 11, further comprising updating the background filter tap coefficients according to the equation:
  • 16. The method of claim 11, further comprising filtering the remote audio signal to generate a filtered remote audio signal using a mix filter having mix filter tap coefficients and tap weights.
  • 17. The method of claim 11, further comprising: filtering the remote audio signal to generate a filtered remote audio signal using a mix filter having mix filter tap coefficients and tap weights; and copying the stored background filter tap coefficients from the memory to update the mix filter tap coefficients by combining the hidden filter tap coefficients of each of the plurality of acoustic sources not currently under adaptation and the most recently updated background filter tap coefficients corresponding to the acoustic source currently under adaptation, if the background error power is greater than the hidden error power.
  • 18. The method of claim 11, further comprising: filtering the remote audio signal to generate a filtered remote audio signal using a mix filter having mix filter tap coefficients and tap weights; and if a channel scaling factor of the mixer has changed, updating the mix filter by updating the tap weights corresponding to the changed channel scaling factor by adding the difference in weight multiplied by a channel impulse response estimate.
  • 19. The method of claim 16, wherein generating the echo-cancelled mixed audio signal comprises subtracting the filtered remote audio signal from the mixed audio signal.
  • 20. The method of claim 19, further comprising: measuring an output coherence of the filtered remote audio signal using a mix estimator; estimating a residual echo power of the echo-cancelled mixed audio signal, using the mix estimator; and processing the echo-cancelled mixed audio signal to generate an echo-suppressed mixed audio signal using a non-linear processor, if the output coherence exceeds a predetermined threshold or if the residual echo power exceeds half of a power of the mixed audio signal.
US Referenced Citations (377)
Number Name Date Kind
3755625 Maston Aug 1973 A
3906431 Clearwaters Sep 1975 A
4070547 Dellar Jan 1978 A
4072821 Bauer Feb 1978 A
4096353 Bauer Jun 1978 A
4131760 Christensen Dec 1978 A
4184048 Alcaide Jan 1980 A
4198705 Massa Apr 1980 A
4237339 Bunting Dec 1980 A
4254417 Speiser Mar 1981 A
4305141 Massa Dec 1981 A
4308425 Momose Dec 1981 A
4311874 Wallace, Jr. Jan 1982 A
4330691 Gordon May 1982 A
4334740 Wray Jun 1982 A
4365449 Liautaud Dec 1982 A
4414433 Horie Nov 1983 A
4436966 Botros Mar 1984 A
4449238 Lee May 1984 A
4466117 Goerike Aug 1984 A
4485484 Flanagan Nov 1984 A
4489442 Anderson Dec 1984 A
4521908 Miyaji Jun 1985 A
4593404 Bolin Jun 1986 A
4653102 Hansen Mar 1987 A
4658425 Julstrom Apr 1987 A
4669108 Deinzer May 1987 A
4696043 Iwahara Sep 1987 A
4712231 Julstrom Dec 1987 A
4741038 Elko Apr 1988 A
4752961 Kahn Jun 1988 A
4815132 Minami Mar 1989 A
4860366 Fukushi Aug 1989 A
4881135 Heilweil Nov 1989 A
4903247 Van Gerwen Feb 1990 A
4923032 Nuernberger May 1990 A
4928312 Hill May 1990 A
5121426 Baumhauer Jun 1992 A
5214709 Ribic May 1993 A
5297210 Julstrom Mar 1994 A
5323459 Hirano Jun 1994 A
5335011 Addeo Aug 1994 A
5371789 Hirano Dec 1994 A
5384843 Masuda Jan 1995 A
5396554 Hirano et al. Mar 1995 A
5473701 Cezanne Dec 1995 A
5513265 Hirano Apr 1996 A
5525765 Freiheit Jun 1996 A
5550924 Helf Aug 1996 A
5574793 Hirschhorn Nov 1996 A
5602962 Kellermann Feb 1997 A
5633936 Oh May 1997 A
5661813 Shimauchi et al. Aug 1997 A
5673327 Julstrom Sep 1997 A
5687229 Sih Nov 1997 A
5706344 Finn Jan 1998 A
5761318 Shimauchi et al. Jun 1998 A
5787183 Chu Jul 1998 A
5796819 Romesburg Aug 1998 A
5848146 Slattery Dec 1998 A
5870482 Loeppert Feb 1999 A
5888412 Sooriakumar Mar 1999 A
6041127 Elko Mar 2000 A
6049607 Marash Apr 2000 A
6069961 Nakazawa May 2000 A
6125179 Wu Sep 2000 A
6137887 Anderson Oct 2000 A
6205224 Underbrink Mar 2001 B1
6215881 Azima Apr 2001 B1
6329908 Frecska Dec 2001 B1
6332029 Azima Dec 2001 B1
6442272 Osovets Aug 2002 B1
6449593 Valve Sep 2002 B1
6488367 Debesis Dec 2002 B1
6505057 Finn Jan 2003 B1
6556682 Gilloire et al. Apr 2003 B1
6704422 Jensen Mar 2004 B1
6731334 Maeng May 2004 B1
6741720 Myatt May 2004 B1
6768795 Feltstroem Jul 2004 B2
6885750 Egelmeers Apr 2005 B2
6895093 Ali May 2005 B1
6931123 Hughes Aug 2005 B1
6944312 Mason Sep 2005 B2
6968064 Ning Nov 2005 B1
6990193 Beaucoup et al. Jan 2006 B2
6993126 Kyrylenko Jan 2006 B1
7003099 Zhang et al. Feb 2006 B1
7031269 Lee Apr 2006 B2
7035415 Belt Apr 2006 B2
7054451 Janse May 2006 B2
7092516 Furuta Aug 2006 B2
7092882 Arrowood Aug 2006 B2
7098865 Christensen Aug 2006 B2
7120269 Lowell Oct 2006 B2
7269263 Dedieu Sep 2007 B2
7359504 Reuss Apr 2008 B1
7503616 Linhard Mar 2009 B2
7536769 Pedersen May 2009 B2
7660428 Rodman Feb 2010 B2
7701110 Fukuda Apr 2010 B2
7724891 Beaucoup May 2010 B2
7747001 Kellermann Jun 2010 B2
7756278 Moorer Jul 2010 B2
7831035 Stokes Nov 2010 B2
7831036 Beaucoup Nov 2010 B2
7925006 Hirai et al. Apr 2011 B2
7925007 Stokes, III et al. Apr 2011 B2
7970123 Beaucoup Jun 2011 B2
7970151 Oxford Jun 2011 B2
7991167 Oxford Aug 2011 B2
7995768 Miki Aug 2011 B2
8005238 Tashev Aug 2011 B2
8019091 Burnett Sep 2011 B2
8085947 Haulick et al. Dec 2011 B2
8098842 Florencio Jan 2012 B2
8098844 Elko Jan 2012 B2
8103030 Barthel Jan 2012 B2
8130969 Buck et al. Mar 2012 B2
8130977 Chu Mar 2012 B2
8135143 Ishibashi Mar 2012 B2
8175291 Chan May 2012 B2
8184801 Hamalainen May 2012 B1
8189765 Nishikawa et al. May 2012 B2
8189810 Wolff May 2012 B2
8199927 Raftery Jun 2012 B1
8204198 Adeney Jun 2012 B2
8213596 Beaucoup Jul 2012 B2
8213634 Daniel Jul 2012 B1
8219387 Cutler Jul 2012 B2
8229134 Duraiswami Jul 2012 B2
8233352 Beaucoup Jul 2012 B2
8249273 Inoda Aug 2012 B2
8275120 Stokes, III Sep 2012 B2
8284949 Farhang et al. Oct 2012 B2
8286749 Stewart Oct 2012 B2
8290142 Lambert Oct 2012 B1
8297402 Stewart Oct 2012 B2
8331582 Steele Dec 2012 B2
8385557 Tashev et al. Feb 2013 B2
8395653 Feng Mar 2013 B2
8403107 Stewart Mar 2013 B2
8433061 Cutler Apr 2013 B2
8437490 Marton May 2013 B2
8443930 Stewart, Jr. May 2013 B2
8447590 Ishibashi May 2013 B2
8479871 Stewart Jul 2013 B2
8483398 Fozunbal et al. Jul 2013 B2
8498423 Thaden Jul 2013 B2
8503653 Ahuja Aug 2013 B2
8515089 Nicholson Aug 2013 B2
8553904 Said Oct 2013 B2
8583481 Viveiros Nov 2013 B2
8600443 Kawaguchi Dec 2013 B2
8605890 Zhang et al. Dec 2013 B2
8631897 Stewart Jan 2014 B2
8638951 Zurek Jan 2014 B2
D699712 Bourne Feb 2014 S
8644477 Gilbert Feb 2014 B2
8654990 Faller Feb 2014 B2
8660274 Wolff Feb 2014 B2
8660275 Buck Feb 2014 B2
8672087 Stewart Mar 2014 B2
8676728 Velusamy Mar 2014 B1
8744069 Cutler Jun 2014 B2
8811601 Mohammad Aug 2014 B2
8818002 Tashev Aug 2014 B2
8842851 Beaucoup Sep 2014 B2
8855326 Derkx Oct 2014 B2
8855327 Tanaka Oct 2014 B2
8873789 Bigeh Oct 2014 B2
8886343 Ishibashi Nov 2014 B2
8893849 Hudson Nov 2014 B2
8903106 Meyer Dec 2014 B2
8942382 Elko Jan 2015 B2
9002028 Haulick Apr 2015 B2
9094496 Teutsch Jul 2015 B2
9113247 Chatlani Aug 2015 B2
9126827 Hsieh Sep 2015 B2
9129223 Velusamy Sep 2015 B1
9172345 Kok Oct 2015 B2
9215327 Bathurst et al. Dec 2015 B2
9215543 Sun et al. Dec 2015 B2
9226088 Pandey Dec 2015 B2
9237391 Benesty Jan 2016 B2
9247367 Nobile Jan 2016 B2
9253567 Morcelli Feb 2016 B2
9264553 Pandey Feb 2016 B2
9294839 Lambert Mar 2016 B2
9301049 Elko Mar 2016 B2
9319532 Bao et al. Apr 2016 B2
9319799 Salmon Apr 2016 B2
9326060 Nicholson Apr 2016 B2
9338549 Haulick May 2016 B2
9357080 Beaucoup et al. May 2016 B2
9403670 Schelling Aug 2016 B2
9462378 Kuech Oct 2016 B2
9479627 Rung Oct 2016 B1
9479885 Ivanov Oct 2016 B1
9489948 Chu Nov 2016 B1
9510090 Lissek Nov 2016 B2
9516412 Shigenaga Dec 2016 B2
9560451 Eichfeld Jan 2017 B2
9565493 Abraham Feb 2017 B2
9578413 Sawa Feb 2017 B2
9591404 Chhetri Mar 2017 B1
D784299 Cho Apr 2017 S
9615173 Sako Apr 2017 B2
9635186 Pandey Apr 2017 B2
D787481 Tysso May 2017 S
9641688 Pandey May 2017 B2
9641929 Li May 2017 B2
9641935 Ivanov May 2017 B1
9761243 Taenzer Sep 2017 B2
9813806 Graham Nov 2017 B2
9826211 Sawa Nov 2017 B2
9854101 Pandey Dec 2017 B2
9866952 Pandey Jan 2018 B2
9894434 Rollow, IV Feb 2018 B2
20020015500 Belt Feb 2002 A1
20020041679 Beaucoup Apr 2002 A1
20020131580 Smith Sep 2002 A1
20020149070 Sheplak Oct 2002 A1
20030053639 Beaucoup Mar 2003 A1
20030059061 Tsuji Mar 2003 A1
20030063762 Tajima Apr 2003 A1
20030107478 Hendricks Jun 2003 A1
20030118200 Beaucoup Jun 2003 A1
20030138119 Pocino Jul 2003 A1
20030161485 Smith Aug 2003 A1
20030185404 Milsap Oct 2003 A1
20040013038 Kajala Jan 2004 A1
20040013252 Craner Jan 2004 A1
20040105557 Matsuo Jun 2004 A1
20040125942 Beaucoup Jul 2004 A1
20040240664 Freed Dec 2004 A1
20050094580 Kumar May 2005 A1
20050094795 Rambo May 2005 A1
20050149320 Kajala Jul 2005 A1
20050175189 Lee Aug 2005 A1
20050213747 Popovich Sep 2005 A1
20050271221 Cerwin Dec 2005 A1
20050286698 Bathurst Dec 2005 A1
20060088173 Rodman Apr 2006 A1
20060104458 Kenoyer May 2006 A1
20060151256 Lee Jul 2006 A1
20060165242 Miki Jul 2006 A1
20060192976 Hall Aug 2006 A1
20060233353 Beaucoup Oct 2006 A1
20060239471 Mao Oct 2006 A1
20060262942 Oxford Nov 2006 A1
20060269080 Oxford Nov 2006 A1
20070053524 Haulick Mar 2007 A1
20070093714 Beaucoup Apr 2007 A1
20070116255 Derkx May 2007 A1
20070120029 Keung May 2007 A1
20070165871 Roovers Jul 2007 A1
20070230712 Belt Oct 2007 A1
20080056517 Algazi Mar 2008 A1
20080101622 Sugiyama May 2008 A1
20080130907 Sudo Jun 2008 A1
20080144848 Buck Jun 2008 A1
20080232607 Tashev Sep 2008 A1
20080247567 Kjolerbakken Oct 2008 A1
20080253553 Li Oct 2008 A1
20080259731 Happonen Oct 2008 A1
20080260175 Elko Oct 2008 A1
20080285772 Haulick Nov 2008 A1
20090003586 Lai Jan 2009 A1
20090030536 Gur Jan 2009 A1
20090052684 Ishibashi Feb 2009 A1
20090087000 Ko Apr 2009 A1
20090129609 Oh May 2009 A1
20090147967 Ishibashi Jun 2009 A1
20090150149 Cutter et al. Jun 2009 A1
20090169027 Ura et al. Jul 2009 A1
20090274318 Ishibashi Nov 2009 A1
20090310794 Ishibashi Dec 2009 A1
20100074433 Zhang Mar 2010 A1
20100111324 Yeldener May 2010 A1
20100119097 Ohtsuka May 2010 A1
20100128892 Chen May 2010 A1
20100131749 Kim May 2010 A1
20100150364 Buck Jun 2010 A1
20100189275 Christoph Jul 2010 A1
20100202628 Meyer Aug 2010 A1
20100215184 Buck Aug 2010 A1
20100217590 Nemer Aug 2010 A1
20100314513 Evans Dec 2010 A1
20110007921 Stewart Jan 2011 A1
20110038229 Beaucoup Feb 2011 A1
20110096915 Nemer Apr 2011 A1
20110164761 McCowan Jul 2011 A1
20110194719 Frater Aug 2011 A1
20110211706 Tanaka Sep 2011 A1
20110311064 Teutsch Dec 2011 A1
20110311085 Stewart Dec 2011 A1
20110317862 Hosoe Dec 2011 A1
20120002835 Stewart Jan 2012 A1
20120027227 Kok Feb 2012 A1
20120076316 Zhu Mar 2012 A1
20120080260 Stewart Apr 2012 A1
20120093344 Sun Apr 2012 A1
20120128160 Kim May 2012 A1
20120128175 Visser May 2012 A1
20120155688 Wilson Jun 2012 A1
20120169826 Jeong Jul 2012 A1
20120177219 Mullen Jul 2012 A1
20120182429 Forutanpour Jul 2012 A1
20120224709 Keddem Sep 2012 A1
20120243698 Elko Sep 2012 A1
20120262536 Chen Oct 2012 A1
20120288079 Burnett Nov 2012 A1
20120294472 Hudson Nov 2012 A1
20120327115 Chhetri Dec 2012 A1
20130004013 Stewart Jan 2013 A1
20130015014 Stewart Jan 2013 A1
20130016847 Steiner Jan 2013 A1
20130029684 Kawaguchi Jan 2013 A1
20130034241 Pandey Feb 2013 A1
20130039504 Pandey Feb 2013 A1
20130083911 Bathurst Apr 2013 A1
20130094689 Tanaka Apr 2013 A1
20130101141 McElveen Apr 2013 A1
20130136274 Aehgren May 2013 A1
20130206501 Yu Aug 2013 A1
20130251181 Stewart Sep 2013 A1
20130264144 Hudson Oct 2013 A1
20130271559 Feng Oct 2013 A1
20130297302 Pan Nov 2013 A1
20130336516 Stewart Dec 2013 A1
20130343549 Vemireddy Dec 2013 A1
20140016794 Lu et al. Jan 2014 A1
20140072151 Ochs Mar 2014 A1
20140098964 Rosca Apr 2014 A1
20140264654 Salmon Sep 2014 A1
20140265774 Stewart Sep 2014 A1
20140270271 Dehe Sep 2014 A1
20140286518 Stewart Sep 2014 A1
20140301586 Stewart Oct 2014 A1
20140307882 LeBlanc Oct 2014 A1
20140341392 Lambert Nov 2014 A1
20140357177 Stewart Dec 2014 A1
20150030172 Gaensler et al. Jan 2015 A1
20150055796 Nugent Feb 2015 A1
20150055797 Nguyen Feb 2015 A1
20150070188 Aramburu Mar 2015 A1
20150078581 Etter Mar 2015 A1
20150078582 Graham Mar 2015 A1
20150117672 Christoph Apr 2015 A1
20150118960 Petit Apr 2015 A1
20150126255 Yang et al. May 2015 A1
20150281832 Kishimoto Oct 2015 A1
20150350621 Sawa Dec 2015 A1
20160029120 Nesta et al. Jan 2016 A1
20160031700 Sparks Feb 2016 A1
20160080867 Nugent Mar 2016 A1
20160111109 Tsujikawa Apr 2016 A1
20160142548 Pandey May 2016 A1
20160142815 Norris May 2016 A1
20160148057 Oh May 2016 A1
20160150316 Kubota May 2016 A1
20160295279 Srinivasan Oct 2016 A1
20160300584 Pandey Oct 2016 A1
20160302002 Lambert Oct 2016 A1
20160302006 Pandey Oct 2016 A1
20160323668 Abraham Nov 2016 A1
20160330545 McElveen Nov 2016 A1
20160337523 Pandey Nov 2016 A1
20160353200 Bigeh Dec 2016 A1
20170105066 McLaughlin Apr 2017 A1
20170134849 Pandey May 2017 A1
20170134850 Graham May 2017 A1
20170164101 Rollow, IV Jun 2017 A1
20170264999 Fukuda Sep 2017 A1
20170374454 Bernardini Dec 2017 A1
20180160224 Graham Jun 2018 A1
Foreign Referenced Citations (67)
Number Date Country
2505496 Oct 2006 CA
2838856 Dec 2012 CA
2846323 Sep 2014 CA
102646418 Aug 2012 CN
102821336 Dec 2012 CN
102833664 Dec 2012 CN
102860039 Jan 2013 CN
104080289 Oct 2014 CN
104581463 Apr 2015 CN
2941485 Apr 1981 DE
0594098 Apr 1994 EP
0869697 Oct 1998 EP
1184676 Mar 2002 EP
0944228 Jun 2003 EP
1439526 Jul 2004 EP
1651001 Apr 2006 EP
1727344 Nov 2006 EP
1906707 Apr 2008 EP
1962547 Aug 2008 EP
2197219 Jun 2010 EP
2360940 Aug 2011 EP
2721837 Apr 2014 EP
2778310 Sep 2014 EP
3131311 Feb 2017 EP
H01260967 Oct 1989 JP
H07336790 Dec 1995 JP
3175622 Jun 2001 JP
2003087890 Mar 2003 JP
2004349806 Dec 2004 JP
2004537232 Dec 2004 JP
2005323084 Nov 2005 JP
2006094389 Apr 2006 JP
2006101499 Apr 2006 JP
4120646 Aug 2006 JP
4258472 Aug 2006 JP
4196956 Sep 2006 JP
2006340151 Dec 2006 JP
4760160 Jan 2007 JP
4752403 Mar 2007 JP
4867579 Jun 2007 JP
2007208503 Aug 2007 JP
2007228069 Sep 2007 JP
2007228070 Sep 2007 JP
2007274131 Oct 2007 JP
2007274463 Oct 2007 JP
2008005347 Jan 2008 JP
2008042754 Feb 2008 JP
5028944 May 2008 JP
2008154056 Jul 2008 JP
2008259022 Oct 2008 JP
2008312002 Dec 2008 JP
2009206671 Sep 2009 JP
2010028653 Feb 2010 JP
2010114554 May 2010 JP
2010268129 Nov 2010 JP
2011015018 Jan 2011 JP
100960781 Jan 2004 KR
2006049260 May 2006 WO
WO2006071119 Jul 2006 WO
2006121896 Nov 2006 WO
2010001508 Jan 2010 WO
2010144148 Dec 2010 WO
WO2010140084 Dec 2010 WO
2011104501 Sep 2011 WO
2012160459 Nov 2012 WO
2012174159 Dec 2012 WO
2016176429 Nov 2016 WO
Non-Patent Literature Citations (166)
Entry
Benesty, et al., “Adaptive Algorithms for Mimo Acoustic Echo Cancellation,” AI2 Allen Institute for Artificial Intelligence, 2003.
CTG Audio, Expand Your IP Teleconferencing to Full Room Audio, Obtained from website http://www.ctgaudio.com/expand-your-ip-teleconferencing-to-full-room-audio-while-conquering-echo-cancellation-issues.html, 2014.
Desiraju, et al., “Efficient Multi-Channel Acoustic Echo Cancellation Using Constrained Sparse Filter Updates in the Subband Domain,” Acoustic Speech Enhancement Research, Sep. 2014.
Gil-Cacho, et al., “Multi-Microphone Acoustic Echo Cancellation Using Multi-Channel Warped Linear Prediction of Common Acoustical Poles,” 18th European Signal Processing Conference, Aug. 23-27, 2010.
LecNet2 Sound System Design Guide, Lectrosonics, Jun. 2, 2006.
Multichannel Acoustic Echo Cancellation, Obtained from website http://www.buchner-net.com/mcaec.html, Jun. 2011.
Nguyen-Ky, et al., “An Improved Error Estimation Algorithm for Stereophonic Acoustic Echo Cancellation Systems,” 1st International Conference on Signal Processing and Communication Systems, Dec. 17-19, 2007.
Rane Acoustic Echo Cancellation Guide, AEC Guide Version 2, Nov. 2013.
Rao, et al., “Fast LMS/Newton Algorithms for Stereophonic Acoustic Echo Cancelation,” IEEE Transactions on Signal Processing, vol. 57, No. 8, Aug. 2009.
Reuven, et al., “Multichannel Acoustic Echo Cancellation and Noise Reduction in Reverberant Environments Using the Transfer-Function GSC,” IEEE 1-4244-0728, 2007.
Signal Processor MRX7-D Product Specifications, Yamaha Corporation, 2016.
Soundweb London Application Guides, BSS Audio, 2010.
SymNet Network Audio Solutions Brochure, Symetrix, Inc., 2008.
Tandon, et al., “An Efficient, Low-Complexity, Normalized LMS Algorithm for Echo Cancellation,” IEEE 0-7803-8322, Feb. 2004.
Wung, “A System Approach to Multi-Channel Acoustic Echo Cancellation and Residual Echo Suppression for Robust Hands-Free Teleconferencing,” Georgia Institute of Technology, May 2015.
XAP Audio Conferencing Brochure, ClearOne Communications, Inc., 2002.
Yamaha Conference Echo Canceller PJP-EC200 Brochure, Yamaha Corporation, Oct. 2009.
Zhang, et al., “Multichannel Acoustic Echo Cancelation in Multiparty Spatial Audio Conferencing with Constrained Kalman Filtering,” 11th International Workshop on Acoustic Echo and Noise Control, Sep. 14, 2008.
Affes et al., A Signal Subspace Tracking Algorithm for Microphone Array Processing of Speech, IEEE Trans. on Speech and Audio Processing, vol. 5, No. 5, Sep. 1997, pp. 425-437.
Affes et al., A Source Subspace Tracking Array of Microphones for Double Talk Situations, 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, May 1996, pp. 909-912.
Affes et al., An Algorithm for Multisource Beamforming and Multitarget Tracking, IEEE Trans. on Signal Processing, vol. 44, No. 6, Jun. 1996, pp. 1512-1522.
Affes et al., Robust Adaptive Beamforming via LMS-Like Target Tracking, Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 1994, pp. IV-269-IV-272.
Benesty et al., Frequency-Domain Adaptive Filtering Revisited, Generalization to the Multi-Channel Case, and Application to Acoustic Echo Cancellation, 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings, Jun. 2000, pp. 789-792.
Bruel & Kjaer, by J.J. Christensen and J. Hald, Technical Review: Beamforming, No. 1, 2004, 54 pgs.
Buchner et al., Generalized Multichannel Frequency-Domain Adaptive Filtering: Efficient Realization and Application to Hands-Free Speech Communication, Signal Processing 85, 2005, pp. 549-570.
Buchner et al., Multichannel Frequency-Domain Adaptive Filtering with Application to Multichannel Acoustic Echo Cancellation, Adaptive Signal Processing, 2003, pp. 95-128.
Chan et al., Uniform Concentric Circular Arrays with Frequency-Invariant Characteristics—Theory, Design, Adaptive Beamforming and DOA Estimation, IEEE Transactions on Signal Processing, vol. 55, No. 1, Jan. 2007, pp. 165-177.
Chu, Desktop Mic Array for Teleconferencing, 1995 International Conference on Acoustics, Speech, and Signal Processing, May 1995, pp. 2999-3002.
Dahl et al., Acoustic Echo Cancelling with Microphone Arrays, Research Report 3/95, Univ. of Karlskrona/Ronneby, Apr. 1995, 64 pgs.
Fan et al., Localization Estimation of Sound Source by Microphones Array, Procedia Engineering 7, 2010, pp. 312-317.
Flanagan et al., Autodirective Microphone Systems, Acustica, vol. 73, 1991, pp. 58-71.
Flanagan et al., Computer-Steered Microphone Arrays for Sound Transduction in Large Rooms, J. Acoust. Soc. Am. 78 (5), Nov. 1985, pp. 1508-1518.
Gazor et al., Robust Adaptive Beamforming via Target Tracking, IEEE Transactions on Signal Processing, vol. 44, No. 5, Jun. 1996, pp. 1589-1593.
Gazor et al., Wideband Multi-Source Beamforming with Adaptive Array Location Calibration and Direction Finding, 1995 International Conference on Acoustics, Speech, and Signal Processing, May 1995, pp. 1904-1907.
Gentner Communications Corp., AP400 Audio Perfect 400 Audioconferencing System Installation & Operation Manual, Nov. 1998, 80 pgs.
Herbordt, Combination of Robust Adaptive Beamforming with Acoustic Echo Cancellation for Acoustic Human/Machine Interfaces, Friedrich-Alexander University, 2003, 293 pgs.
Julstrom et al., Direction-Sensitive Gating: A New Approach to Automatic Mixing, J. Audio Eng. Soc., vol. 32, No. 7/8, Jul./Aug. 1984, pp. 490-506.
Kahrs, Ed., The Past, Present, and Future of Audio Signal Processing, IEEE Signal Processing Magazine, Sep. 1997, pp. 30-57.
Kallinger et al., Multi-Microphone Residual Echo Estimation, 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 2003, 4 pgs.
Kobayashi et al., A Microphone Array System with Echo Canceller, Electronics and Communications in Japan, Part 3, vol. 89, No. 10, Feb. 2, 2006, pp. 23-32.
Luo et al., Wideband Beamforming with Broad Nulls of Nested Array, Third Int'l Conf. on Info. Science and Tech., Mar. 23-25, 2013, pp. 1645-1648.
McGowan, Microphone Arrays: A Tutorial, Apr. 2001, 36 pgs.
Mohammed, A New Robust Adaptive Beamformer for Enhancing Speech Corrupted with Colored Noise, AICCSA, Apr. 2008, pp. 508-515.
Mohammed, Real-time Implementation of an efficient RLS Algorithm based on IIR Filter for Acoustic Echo Cancellation, AICCSA, Apr. 2008, pp. 489-494.
Oh et al., Hands-Free Voice Communication in an Automobile With a Microphone Array, 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, Mar. 1992, pp. I-281-I-284.
Pettersen, Broadcast Applications for Voice-Activated Microphones, db, Jul./Aug. 1985, 6 pgs.
Plascore, PCGA-XR1 3003 Aluminum Honeycomb Data Sheet, 2008, 2 pgs.
Polycom Inc., Vortex EF2211/EF2210 Reference Manual, 2003, 66 pgs.
Polycom, Inc., Polycom SoundStructure C16, C12, C8, and SR12 Design Guide, Nov. 2013, 743 pgs.
Ristimaki, Distributed Microphone Array System for Two-Way Audio Communication, Helsinki Univ. of Technology, Master's Thesis, Jun. 15, 2009, 73 pgs.
Rombouts et al., An Integrated Approach to Acoustic Noise and Echo Cancellation, Signal Processing 85, 2005, pp. 849-871.
Shure AMS Update, vol. 1, No. 1, 1983, 2 pgs.
Shure AMS Update, vol. 1, No. 2, 1983, 2 pgs.
Shure AMS Update, vol. 4, No. 4, 1997, 8 pgs.
Tetelbaum et al., Design and Implementation of a Conference Phone Based on Microphone Array Technology, Proc. Global Signal Processing Conference and Expo (GSPx), Sep. 2004, 6 pgs.
Tiete et al., SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization, Sensors, Jan. 23, 2014, pp. 1918-1949.
Van Trees, Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory, 2002, 54 pgs., pp. i-xxv, 90-95, 201-230.
Weinstein et al., LOUD: A 1020-Node Microphone Array and Acoustic Beamformer, 14th International Congress on Sound & Vibration, Jul. 2007, 8 pgs.
Yamaha Corp., PJP-100H IP Audio Conference System Owner's Manual, Sep. 2006, 59 pgs.
Zhang et al., Selective Frequency Invariant Uniform Circular Broadband Beamformer, EURASIP Journal on Advances in Signal Processing, vol. 2010, pp. 1-11.
Tandon et al., An Efficient, Low-Complexity, Normalized LMS Algorithm for Echo Cancellation, 2nd Annual IEEE Northeast Workshop on Circuits and Systems, Jun. 2004, pp. 161-164.
Van Compernolle, Switching Adaptive Filters for Enhancing Noisy and Reverberant Speech from Microphone Array Recordings, Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Apr. 1990, pp. 833-836.
Van Veen et al., Beamforming: a Versatile Approach to Spatial Filtering, IEEE ASSP Magazine, vol. 5, issue 2, Apr. 1988, pp. 4-24.
Wang et al., Combining Superdirective Beamforming and Frequency-Domain Blind Source Separation for Highly Reverberant Signals, EURASIP Journal on Audio, Speech, and Music Processing, vol. 2010, pp. 1-13.
Wung, A System Approach to Multi-Channel Acoustic Echo Cancellation and Residual Echo Suppression for Robust Hands-Free Teleconferencing, Georgia Institute of Technology, May 2015, 167 pgs.
Yamaha Corp., MRX7-D Signal Processor Product Specifications, 2016, 12 pgs.
Yamaha Corp., PJP-EC200 Conference Echo Canceller, Oct. 2009, 2 pgs.
Yan et al., Convex Optimization Based Time-Domain Broadband Beamforming with Sidelobe Control, Journal of the Acoustical Society of America, vol. 121, No. 1, Jan. 2007, pp. 46-49.
Yensen et al., Synthetic Stereo Acoustic Echo Cancellation Structure with Microphone Array Beamforming for VOIP Conferences, 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, Jun. 2000, pp. 817-820.
Zhang et al., Multichannel Acoustic Echo Cancellation in Multiparty Spatial Audio Conferencing with Constrained Kalman Filtering, 11th International Workshop on Acoustic Echo and Noise Control, Sep. 2008, 4 pgs.
Zheng et al., Experimental Evaluation of a Nested Microphone Array with Adaptive Noise Cancellers, IEEE Transactions on Instrumentation and Measurement, vol. 53, No. 3, Jun. 2004, pp. 777-786.
International Search Report and Written Opinion for PCT/US2018/013155 dated Jun. 8, 2018.
Herbordt et al., GSAEC—Acoustic Echo Cancellation embedded into the Generalized Sidelobe Canceller, 10th European Signal Processing Conference, Sep. 2000, 5 pgs.
Herbordt et al., Multichannel Bin-Wise Robust Frequency-Domain Adaptive Filtering and Its Application to Adaptive Beamforming, IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, No. 4, May 2007, pp. 1340-1351.
Herbordt, et al., Joint Optimization of LCMV Beamforming and Acoustic Echo Cancellation for Automatic Speech Recognition, IEEE International Conference on Acoustics, Speech, and Signal Processing, Mar. 2005, pp. III-77-III-80.
Huang et al., Immersive Audio Schemes: The Evolution of Multiparty Teleconferencing, IEEE Signal Processing Magazine, Jan. 2011, pp. 20-32.
International Search Report and Written Opinion for PCT/US2016/029751 dated Nov. 28, 2016, 21 pgs.
InvenSense Inc., Microphone Array Beamforming, Dec. 31, 2013, 12 pgs.
Ishii et al., Investigation on Sound Localization using Multiple Microphone Arrays, Reflection and Spatial Information, Japanese Society for Artificial Intelligence, JSAI Technical Report, SIG-Challenge-B202-11, 2012, pp. 64-69.
Ito et al., Aerodynamic/Aeroacoustic Testing in Anechoic Closed Test Sections of Low-speed Wind Tunnels, 16th AIAA/CEAS Aeroacoustics Conference, 2010, 11 pgs.
Johansson et al., Robust Acoustic Direction of Arrival Estimation using Root-SRP-PHAT, a Realtime Implementation, IEEE International Conference on Acoustics, Speech, and Signal Processing, Mar. 2005, 4 pgs.
Johansson, et al., Speaker Localisation using the Far-Field SRP-PHAT in Conference Telephony, 2002 International Symposium on Intelligent Signal Processing and Communication Systems, 5 pgs.
Kammeyer, et al., New Aspects of Combining Echo Cancellers with Beamformers, IEEE International Conference on Acoustics, Speech, and Signal Processing, Mar. 2005, pp. III-137-III-140.
Kellermann, A Self-Steering Digital Microphone Array, 1991 International Conference on Acoustics, Speech, and Signal Processing, Apr. 1991, pp. 3581-3584.
Kellermann, Acoustic Echo Cancellation for Beamforming Microphone Arrays, in Brandstein, ed., Microphone Arrays: Techniques and Applications, 2001, Springer-Verlag Berlin Heidelberg, pp. 281-306.
Kellermann, Integrating Acoustic Echo Cancellation with Adaptive Beamforming Microphone Arrays, Forum Acusticum, Berlin, Mar. 1999, pp. 1-4.
Kellermann, Strategies for Combining Acoustic Echo Cancellation and Adaptive Beamforming Microphone Arrays, 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 1997, 4 pgs.
Knapp, et al., The Generalized Correlation Method for Estimation of Time Delay, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-24, No. 4, Aug. 1976, pp. 320-327.
Kobayashi et al., A Hands-Free Unit with Noise Reduction by Using Adaptive Beamformer, IEEE Transactions on Consumer Electronics, vol. 54, No. 1, Feb. 2008, pp. 116-122.
Lebret et al., Antenna Array Pattern Synthesis via Convex Optimization, IEEE Trans. on Signal Processing, vol. 45, No. 3, Mar. 1997, pp. 526-532.
Lectrosonics, LecNet2 Sound System Design Guide, Jun. 2006, 28 pgs.
Lee et al., Multichannel Teleconferencing System with Multispatial Region Acoustic Echo Cancellation, International Workshop on Acoustic Echo and Noise Control (IWAENC2003), Sep. 2003, pp. 51-54.
Lindstrom et al., An Improvement of the Two-Path Algorithm Transfer Logic for Acoustic Echo Cancellation, IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, No. 4, May 2007, pp. 1320-1326.
Liu et al., Adaptive Beamforming with Sidelobe Control: A Second-Order Cone Programming Approach, IEEE Signal Proc. Letters, vol. 10, No. 11, Nov. 2003, pp. 331-334.
Lobo et al., Applications of Second-Order Cone Programming, Linear Algebra and its Applications 284, 1998, pp. 193-228.
Marquardt et al., A Natural Acoustic Front-End for Interactive TV in the EU-Project DICIT, IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Aug. 2009, pp. 894-899.
Martin, Small Microphone Arrays with Postfilters for Noise and Acoustic Echo Reduction, in Brandstein, ed., Microphone Arrays: Techniques and Applications, 2001, Springer-Verlag Berlin Heidelberg, pp. 255-279.
Maruo et al., On the Optimal Solutions of Beamformer Assisted Acoustic Echo Cancellers, IEEE Statistical Signal Processing Workshop, 2011, pp. 641-644.
Mohammed, A New Adaptive Beamformer for Optimal Acoustic Echo and Noise Cancellation with Less Computational Load, Canadian Conference on Electrical and Computer Engineering, May 2008, pp. 000123-000128.
Myllyla et al., Adaptive Beamforming Methods for Dynamically Steered Microphone Array Systems, 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Mar.-Apr. 2008, pp. 305-308.
Nguyen-Ky et al., An Improved Error Estimation Algorithm for Stereophonic Acoustic Echo Cancellation Systems, 1st International Conference on Signal Processing and Communication Systems, Dec. 2007, 5 pgs.
Omologo, Multi-Microphone Signal Processing for Distant-Speech Interaction, Human Activity and Vision Summer School (HAVSS), INRIA Sophia Antipolis, Oct. 3, 2012, 79 pgs.
Pados et al., An Iterative Algorithm for the Computation of the MVDR Filter, IEEE Trans. on Signal Processing, vol. 49, No. 2, Feb. 2001, pp. 290-300.
Polycom, Inc., Setting Up the Polycom HDX Ceiling Microphone Array Series, https://support.polycom.com/content/dam/polycom-support/products/Telepresence-and-Video/HDX%20Series/setup-maintenance/en/ndx_ceiling_microphone_array_setting_up.pdf, 2010, 16 pgs.
Polycom, Inc., Vortex EF2241 Reference Manual, 2002, 68 pgs.
Powers, Proving Adaptive Directional Technology Works: A Review of Studies, The Hearing Review, http://www.hearingreview.com/2004/04/proving-adaptive-directional-technology-works-a-review-of-studies/, Apr. 2004, 8 pgs.
Rabinkin et al., Estimation of Wavefront Arrival Delay Using the Cross-Power Spectrum Phase Technique, 132nd Meeting of the Acoustical Society of America, Dec. 1996, pp. 1-10.
Rane Corp., Halogen Acoustic Echo Cancellation Guide, AEC Guide Version 2, Nov. 2013, 16 pgs.
Sao et al., Fast LMS/Newton Algorithms for Stereophonic Acoustic Echo Cancellation, IEEE Transactions on Signal Processing, vol. 57, No. 8, Aug. 2009, pp. 2919-2930.
Reuven et al., Joint Acoustic Echo Cancellation and Transfer Function GSC in the Frequency Domain, 23rd IEEE Convention of Electrical and Electronics Engineers in Israel, Sep. 2004, pp. 412-415.
Reuven et al., Joint Noise Reduction and Acoustic Echo Cancellation Using the Transfer-Function Generalized Sidelobe Canceller, Speech Communication, vol. 49, 2007, pp. 623-635.
Reuven et al., Multichannel Acoustic Echo Cancellation and Noise Reduction in Reverberant Environments Using the Transfer-Function GSC, 2007 IEEE International Conference on Acoustics, Speech and Signal Processing—ICASSP 07, Apr. 2007, pp. I-81-I-84.
Sasaki et al., A Predefined Command Recognition System Using a Ceiling Microphone Array in Noisy Housing Environments, 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2008, pp. 2178-2184.
Sennheiser, New microphone solutions for ceiling and desk installation, https://en-us.sennheiser.com/news-new-microphone-solutions-for-ceiling-and-desk-installation, Feb. 2011, 2 pgs.
Shure Inc., MX395 Low Profile Boundary Microphones, 2007, 2 pgs.
Silverman et al., Performance of Real-Time Source-Location Estimators for a Large-Aperture Microphone Array, IEEE Transactions on Speech and Audio Processing, vol. 13, No. 4, Jul. 2005, pp. 593-606.
Sinha, Ch. 9: Noise and Echo Cancellation, in Speech Processing in Embedded Systems, Springer, 2010, pp. 127-142.
Soda et al., Introducing Multiple Microphone Arrays for Enhancing Smart Home Voice Control, The Institute of Electronics, Information and Communication Engineers, Technical Report of IEICE, Jan. 2013, 6 pgs.
Symetrix, Inc., SymNet Network Audio Solutions Brochure, 2008, 32 pgs.
Advanced Network Devices, IPSCM Ceiling Tile IP Speaker, Feb. 2011, 2 pgs.
Armstrong World Industries, Inc., I-Ceilings Sound Systems Speaker Panels, 2002, 4 pgs.
Arnold et al., A Directional Acoustic Array Using Silicon Micromachined Piezoresistive Microphones, Journal of the Acoustical Society of America, vol. 113, No. 1, Jan. 2003, pp. 289-298.
Atlas Sound, I128SYSM IP Compliant Loudspeaker System with Microphone Data Sheet, 2009, 2 pgs.
Atlas Sound, 1' x 2' IP Speaker with Microphone for Suspended Ceiling Systems, https://www.atlasied.com/i128sysm, retrieved Oct. 25, 2017, 5 pgs.
Audio Technica, ES945 Omnidirectional Condenser Boundary Microphones, https://eu.audio-technica.com/resources/ES945%20Specifications.pdf, 2007, 1 pg.
Audix Microphones, Audix Introduces Innovative Ceiling Mics, http://audixusa.com/docs_12/latest_news/EFplFkAAkIOtSdolke.shtml, Jun. 2011, 6 pgs.
Audix Microphones, M70 Flush Mount Ceiling Mic, May 2016, 2 pgs.
Beh et al., Combining Acoustic Echo Cancellation and Adaptive Beamforming for Achieving Robust Speech Interface in Mobile Robot, 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2008, pp. 1693-1698.
Benesty et al., A New Class of Doubletalk Detectors Based on Cross-Correlation, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 2, Mar. 2000, pp. 168-172.
Benesty et al., Adaptive Algorithms for MIMO Acoustic Echo Cancellation, https://publik.tuwien.ac.at/files/pub-et_9085.pdf, 2003, pp. 1-30.
Beyer Dynamic, Classis BM 32-33-34 DE-EN-FR 2016, 1 pg.
Boyd, et al., Convex Optimization, Mar. 15, 1999, 216 pgs.
Brandstein et al., Eds., Microphone Arrays: Signal Processing Techniques and Applications, Digital Signal Processing, Springer-Verlag Berlin Heidelberg, 2001, 401 pgs.
BSS Audio, Soundweb London Application Guides, 2010, 120 pgs.
Buchner et al., An Acoustic Human-Machine Interface with Multi-Channel Sound Reproduction, IEEE Fourth Workshop on Multimedia Signal Processing, Oct. 2001, pp. 359-364.
Buchner et al., Full-Duplex Communication Systems Using Loudspeaker Arrays and Microphone Arrays, IEEE International Conference on Multimedia and Expo, Aug. 2002, pp. 509-512.
Buchner et al., An Efficient Combination of Multi-Channel Acoustic Echo Cancellation with a Beamforming Microphone Array, International Workshop on Hands-Free Speech Communication (HSC2001), Apr. 2001, pp. 55-58.
Buchner, Multichannel Acoustic Echo Cancellation, http://www.buchner-net.com/mcaec.html, Jun. 2011.
Buck, Aspects of First-Order Differential Microphone Arrays in the Presence of Sensor Imperfections, Transactions on Emerging Telecommunications Technologies, vol. 13, No. 2, Mar.-Apr. 2002, pp. 115-122.
Buck, et al., Self-Calibrating Microphone Arrays for Speech Signal Acquisition: A Systematic Approach, Signal Processing, vol. 86, 2006, pp. 1230-1238.
Burton et al., A New Structure for Combining Echo Cancellation and Beamforming in Changing Acoustical Environments, IEEE International Conference on Acoustics, Speech and Signal Processing, 2007, pp. I-77-I-80.
Campbell, Adaptive Beamforming Using a Microphone Array for Hands-Free Telephony, Virginia Polytechnic Institute and State University, Feb. 1999, 154 pgs.
Chen et al., Design of Robust Broadband Beamformers with Passband Shaping Characteristics using Tikhonov Regularization, IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, No. 4, May 2009, pp. 665-681.
Chen, et al., A General Approach to the Design and Implementation of Linear Differential Microphone Arrays, Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2013, 7 pgs.
Chou, “Frequency-Independent Beamformer with Low Response Error,” 1995 International Conference on Acoustics, Speech, and Signal Processing, pp. 2995-2998, May 9, 1995, 4 pp.
ClearOne Communications, XAP Audio Conferencing White Paper, Aug. 2002, 78 pgs.
ClearOne, Beamforming Microphone Array, Mar. 2012, 6 pgs.
ClearOne, Ceiling Microphone Array Installation Manual, Jan. 9, 2012, 20 pgs.
Cook, et al., An Alternative Approach to Interpolated Array Processing for Uniform Circular Arrays, Asia-Pacific Conference on Circuits and Systems, 2002, pp. 411-414.
Cox et al., Robust Adaptive Beamforming, IEEE Trans. Acoust., Speech, and Signal Processing, vol. ASSP-35, No. 10, Oct. 1987, pp. 1365-1376.
CTG Audio, Ceiling Microphone CTG CM-01, Jun. 5, 2008, 2 pgs.
CTG Audio, CM-01 & CM-02 Ceiling Microphones, 2017, 4 pgs.
CTG Audio, Expand Your IP Teleconferencing to Full Room Audio, http://www.ctgaudio.com/expand-your-ip-teleconferencing-to-full-room-audio-while-conquering-echo-cancellation-issues.html, Jul. 29, 2014, 3 pgs.
CTG Audio, Installation Manual, Nov. 21, 2008, 25 pgs.
CTG Audio, White on White—Introducing the CM-02 Ceiling Microphone, https://ctgaudio.com/white-on-white-introducing-the-cm-02-ceiling-microphone/, Feb. 20, 2014, 3 pgs.
Desiraju et al., Efficient Multi-Channel Acoustic Echo Cancellation Using Constrained Sparse Filter Updates in the Subband Domain, ITG-Fachbericht 252: Speech Communication, Sep. 2014, 4 pgs.
DiBiase et al., Robust Localization in Reverberant Rooms, in Brandstein, ed., Microphone Arrays: Techniques and Applications, 2001, Springer-Verlag Berlin Heidelberg, pp. 157-180.
Do et al., A Real-Time SRP-PHAT Source Location Implementation using Stochastic Region Contraction (SRC) on a Large-Aperture Microphone Array, 2007 IEEE International Conference on Acoustics, Speech and Signal Processing—ICASSP '07, Apr. 2007, pp. I-121-I-124.
Frost, III, An Algorithm for Linearly Constrained Adaptive Array Processing, Proc. IEEE, vol. 60, No. 8, Aug. 1972, pp. 926-935.
Gannot et al., Signal Enhancement using Beamforming and Nonstationarity with Applications to Speech, IEEE Trans. on Signal Processing, vol. 49, No. 8, Aug. 2001, pp. 1614-1626.
Gansler et al., A Double-Talk Detector Based on Coherence, IEEE Transactions on Communications, vol. 44, No. 11, Nov. 1996, pp. 1421-1427.
Gentner Communications Corp., XAP 800 Audio Conferencing System Installation & Operation Manual, Oct. 2001, 152 pgs.
Gil-Cacho et al., Multi-Microphone Acoustic Echo Cancellation Using Multi-Channel Warped Linear Prediction of Common Acoustical Poles, 18th European Signal Processing Conference, Aug. 2010, pp. 2121-2125.
Gritton et al., Echo Cancellation Algorithms, IEEE ASSP Magazine, vol. 1, issue 2, Apr. 1984, pp. 30-38.
Hamalainen et al., Acoustic Echo Cancellation for Dynamically Steered Microphone Array Systems, 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 2007, pp. 58-61.
Herbordt et al., A Real-time Acoustic Human-Machine Front-End for Multimedia Applications Integrating Robust Adaptive Beamforming and Stereophonic Acoustic Echo Cancellation, 7th International Conference on Spoken Language Processing, Sep. 2002, 4 pgs.
Related Publications (1)
Number: 20180205830 A1
Date: Jul. 2018
Country: US