Microphone arrays are increasingly recognized as an effective tool for combating noise, interference, and reverberation in speech acquisition in adverse acoustic environments. Applications include robust speech recognition, hands-free voice communication and teleconferencing, and hearing aids, to name just a few. Beamforming is a traditional microphone array processing technique that provides a form of spatial filtering: receiving signals coming from specific directions while attenuating signals from other directions. While spatial filtering is useful, beamforming alone is not optimal in the minimum mean square error (MMSE) sense from a signal reconstruction perspective.
One conventional method for post-filtering is the multichannel Wiener filter (MCWF), which can be decomposed into a minimum variance distortionless response (MVDR) beamformer and a single-channel post-filter. Currently known post-filtering methods are capable of improving speech quality after beamforming; however, such existing methods have two common limitations. First, these methods assume that the relevant noise is only white (incoherent) noise or diffuse noise, and thus do not address point interferers. A point interferer is a localized unwanted source; for example, in an environment with multiple persons speaking, where one person is the desired audio source, the speech from the other talkers constitutes point interference. Second, these existing approaches apply a heuristic technique in which post-filter coefficients are estimated using two microphones at a time and then averaged over all microphone pairs, which leads to sub-optimal results.
This Summary introduces a selection of concepts in a simplified form in order to provide a basic understanding of some aspects of the present disclosure. This Summary is not an extensive overview of the disclosure, and is not intended to identify key or critical elements of the disclosure or to delineate the scope of the disclosure. This Summary merely presents some of the concepts of the disclosure as a prelude to the Detailed Description provided below.
In general, one aspect of the subject matter described in this specification can be embodied in methods, apparatuses, and computer-readable media. An example apparatus includes one or more processing devices and one or more storage devices storing instructions that, when executed by the one or more processing devices, cause the one or more processing devices to implement an example method. An example computer-readable medium includes sets of instructions to implement an example method. One embodiment of the present disclosure relates to a method for estimating coefficient values to reduce noise for a post-filter, the method comprising: receiving audio signals via a microphone array from sound sources in an environment; hypothesizing a sound field scenario based on the received audio signals; calculating fixed beamformer coefficients based on the received audio signals; determining covariance matrix models based on the hypothesized sound field scenario; calculating a covariance matrix based on the received audio signals; estimating the power of the sound sources to find a solution that minimizes the difference between the determined covariance matrix models and the calculated covariance matrix; calculating and applying post-filter coefficients based on the estimated power; and generating an output audio signal based on the received audio signals and the post-filter coefficients.
In one or more embodiments, the methods described herein may optionally include one or more of the following additional features: hypothesizing multiple sound field scenarios to generate multiple output signals, wherein the multiple generated output signals are compared and the output signal with the highest signal-to-noise ratio among them is selected; estimating the power based on the Frobenius norm, wherein the Frobenius norm is computed using the Hermitian symmetry of the covariance matrices; determining the location of at least one of the sound sources using sound-source localization methods to hypothesize the sound field scenario, determine the covariance matrix models, and calculate the covariance matrix; and generating the covariance matrix models based on a plurality of hypothesized sound field scenarios, wherein a covariance matrix model is selected to maximize an objective function that reduces noise, and wherein one example objective function is the sample variance of the final output audio signal.
Further scope of applicability of the present disclosure will become apparent from the Detailed Description given below. However, it should be understood that the Detailed Description, while describing preferred embodiments, is given by way of illustration only, since various changes and modifications within the spirit and scope of the disclosure will become apparent to those skilled in the art from this Detailed Description.
These and other objects, features and characteristics of the present disclosure will become more apparent to those skilled in the art from a study of the following Detailed Description in conjunction with the appended claims and drawings, all of which form a part of this specification.
The headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claims.
The present disclosure generally relates to systems and methods for audio signal processing. More specifically, aspects of the present disclosure relate to post-filtering techniques for microphone array speech enhancement.
The following description provides specific details for a thorough understanding and enabling description of the disclosure. One skilled in the relevant art will understand, however, that the embodiments described herein may be practiced without many of these details. Likewise, one skilled in the relevant art will also understand that the example embodiments described herein can include many other obvious features not described in detail herein. Additionally, some well-known structures or functions may not be shown or described in detail below, so as to avoid unnecessarily obscuring the relevant description.
Certain embodiments and features of the present disclosure relate to methods and systems for post-filtering audio signals that utilize a signal model accounting not only for diffuse and white noise, but also for point interfering sources. As will be described in greater detail below, the methods and systems are designed to achieve a globally optimized least-squares (LS) solution across all microphones in a microphone array. In certain implementations, the performance of the disclosed method is evaluated using real recorded impulse responses for the desired and interfering sources, together with synthesized diffuse and white noise. An impulse response is the output or reaction of a dynamic system to a brief input signal called an impulse.
In this example embodiment, a hypothesized sound field scenario includes one interfering source. In other example embodiments, hypothesized sound field scenarios may be more complex, including numerous interfering scenarios.
Also, in other example embodiments, multiple hypothesized sound field scenarios may be determined to generate multiple output signals. One skilled in the relevant art will understand that multiple sound field scenarios may be hypothesized based on various factors, such as information that may be known or determined about the environment. One skilled in the art will also understand that the quality of the output signals may be determined using various factors, such as measuring the signal-to-noise ratio (as measured, for example, in the experiments discussed below). In other example embodiments, a person skilled in the art may apply other methods to hypothesize sound field scenarios and determine the quality of the output signals.
The microphone array (130) includes a plurality of individual omnidirectional microphones (115, 120, 125). This embodiment assumes omnidirectional microphones; other example embodiments may implement other types of microphones, which may alter the covariance matrix models. The audio signals (109) received by each of the microphones M1 to Mn (where "n" is an arbitrary integer) (115, 120, 125) may be converted to the frequency domain via a transformation method such as, for example, the Discrete-Time Fourier Transform (DTFT) (116, 121, 126). Other example transformation methods may include, but are not limited to, the FFT (Fast Fourier Transform) or the STFT (Short-Time Fourier Transform). For simplicity, the output signals generated via each of the DTFTs (116, 121, 126) corresponding to one frequency are represented by a single arrow. For example, the DTFT audio signal at the first frequency bin, F1 (165a), generated from audio received by microphone M1 (115) is represented as a single arrow 117a.
A more detailed description of capturing the environment noise (109) and generating the beamformed single-channel output signal (137a) and the beamforming filters (136a) follows. Suppose a microphone array (130) of M elements (115, 120, 125), where M, an arbitrary integer value, is the number of microphones in the array (130), is used to capture the signal s(t) from a desired point sound source (106) in a noisy acoustic environment (105). The output of the mth microphone in the time domain is written as
x_m(t) = g_{s,m} * s(t) + ψ_m(t),  m = 1, 2, …, M,  (1)
where gs,m denotes the impulse response from the desired component (106) to the mth microphone (e.g. 125), * denotes linear convolution, and ψm(t) is the unwanted additive noise (i.e., sound generated by noise components 107 and 108).
The disclosed method is capable of dealing with multiple point interfering sources; however, for clarity, one point interferer is described in the examples provided herein. The additive noise commonly consists of three different types of sound components: 1) coherent noise from a point interfering source, v(t), 2) diffuse noise, um(t), and 3) white noise, wm(t). Also,
ψ_m(t) ≜ g_{v,m} * v(t) + u_m(t) + w_m(t),  (2)
where g_{v,m} is the impulse response from the point noise source to the mth microphone. In this example embodiment, the desired signal and these noise components (106-108) are presumed short-time stationary and mutually uncorrelated. In other example embodiments, the noise components may be composed differently; for example, an environment may contain multiple desired sound sources moving around, with the target desired sound source alternating over a time period (in other words, a crowded room where two people carry on a conversation while walking).
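For illustration only, the following Python sketch (using NumPy; the function name and argument layout are choices of this example, not part of the disclosed method) synthesizes microphone signals according to the time-domain model of Equations (1) and (2), assuming the impulse responses and noise components are already available as arrays:

```python
import numpy as np

def simulate_mic_signals(s, v, g_s, g_v, diffuse, white):
    """Illustrative synthesis of Eq. (1)-(2): x_m = g_{s,m}*s + g_{v,m}*v + u_m + w_m.

    s, v           : 1-D arrays, desired and interfering source signals
    g_s, g_v       : lists of 1-D impulse responses, one per microphone
    diffuse, white : (num_mics x num_samples) arrays of pre-generated noise
    """
    num_mics, n = len(g_s), len(s)
    x = np.zeros((num_mics, n))
    for m in range(num_mics):
        x[m] = (np.convolve(s, g_s[m])[:n]    # desired source through its room response
                + np.convolve(v, g_v[m])[:n]  # point interferer through its room response
                + diffuse[m, :n]              # diffuse noise component u_m(t)
                + white[m, :n])               # spatially white sensor noise w_m(t)
    return x
```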
In the frequency domain, this generalized microphone array signal model in Equation (1) is transformed into

X_m(jω) = G_{s,m}(jω)S(jω) + G_{v,m}(jω)V(jω) + U_m(jω) + W_m(jω),  m = 1, 2, …, M,  (3)

where j ≜ √(−1), ω is the angular frequency, and X_m(jω), G_{s,m}(jω), S(jω), G_{v,m}(jω), V(jω), U_m(jω), W_m(jω) are the discrete-time Fourier transforms (DTFTs) of x_m(t), g_{s,m}, s(t), g_{v,m}, v(t), u_m(t), and w_m(t), respectively. In the example embodiments, the DTFT is implemented; however, it should not be construed to limit the scope of the invention. Other example embodiments may implement other methods such as the STFT (Short-Time Fourier Transform) or FFT (Fast Fourier Transform). Equation (3) in vector/matrix form is as follows
x(jω) = S(jω)g_s(jω) + V(jω)g_v(jω) + u(jω) + w(jω),  (4)
where

z(jω) ≜ [Z_1(jω) Z_2(jω) ⋯ Z_M(jω)]^T,  z ∈ {x, u, w},

g_z(jω) ≜ [G_{z,1}(jω) G_{z,2}(jω) ⋯ G_{z,M}(jω)]^T,  z ∈ {s, v},

and (·)^T denotes the transpose of a vector or a matrix. The microphone array spatial covariance matrix is then determined as
R_xx(jω) = σ_s²(ω)P_{g_s}(jω) + σ_v²(ω)P_{g_v}(jω) + R_uu(jω) + R_ww(jω),  (5)
where mutually uncorrelated signals are assumed,
R_zz(jω) ≜ E{z(jω)z^H(jω)},  z ∈ {x, ψ, u, w},

P_{g_z}(jω) ≜ g_z(jω)g_z^H(jω),  z ∈ {s, v},

σ_z²(ω) ≜ E{Z(jω)Z*(jω)},  z ∈ {s, v},
and E{·}, (·)^H, and (·)* denote the mathematical expectation, the Hermitian transpose of a vector or matrix, and the conjugate of a complex variable, respectively.
A beamformer (135a) filters each microphone signal by a finite impulse response (FIR) filter H_m(jω) (m = 1, 2, …, M) and sums the results to produce a single-channel output (137a)

Y(jω) = Σ_{m=1}^{M} H_m*(jω)X_m(jω) = h^H(jω)x(jω),  (6)

using the beamforming filters (136a) h(jω), where

h(jω) ≜ [H_1(jω) H_2(jω) ⋯ H_M(jω)]^T.

In Equation (5), the covariance matrix of the desired sound source is also modeled. Its model is similar to that of the interfering source, since both the desired and the interfering sources are point sources; they differ only in their directions with respect to the microphone array.
In an actual environment, the makeup of the noise components, i.e., the number and location of point interfering sources and the presence of white or diffuse noise sources, may not be known. Thus, a sound field scenario is hypothesized. Equation (2) above represents a scenario with one point interfering source, diffuse noise, and white noise, resulting in four unknowns. If the scenario hypothesizes or assumes no point interfering source, but only white and diffuse noise, then Equation (5) above can be simplified, resulting in only three unknowns.
In Equation (5), three interference/noise-related components (106-108) are modeled as follows:
(1) Point Interferer:
The covariance matrix P_{g_v}(jω) of the point interferer is determined by g_v(jω). In this example embodiment, reverberation is ignored (only the direct propagation path is considered), so that

g_v(jω) = [e^{−jωτ_{v,1}} e^{−jωτ_{v,2}} ⋯ e^{−jωτ_{v,M}}]^T,  (7)
which incorporates only the interferer's time differences of arrival at the multiple microphones, τ_{v,m} (m = 1, 2, …, M), with respect to a common reference point.
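As a minimal sketch of this free-field (direct-path-only) model, the following Python functions build a steering vector of the form of Equation (7) from time differences of arrival; the far-field TDOA helper and its sign convention are assumptions of this example rather than part of the disclosure:

```python
import numpy as np

def steering_vector(omega, tdoas):
    """g(jw) = [exp(-j*w*tau_1), ..., exp(-j*w*tau_M)]^T at one angular frequency."""
    return np.exp(-1j * omega * np.asarray(tdoas, dtype=float))

def far_field_tdoas(mic_positions, direction, c=343.0):
    """TDOAs of a far-field plane wave arriving from the unit vector 'direction',
    taken relative to the coordinate origin used as the common reference point."""
    pos = np.asarray(mic_positions, dtype=float)          # shape (M, 3)
    return -pos @ np.asarray(direction, dtype=float) / c  # seconds
```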
(2) Diffuse Noise:
A diffuse noise field is considered spherically or cylindrically isotropic, in that it is characterized by uncorrelated noise signals of equal power propagating in multiple directions simultaneously. Its covariance matrix is given by
R_uu(jω) = σ_u²(ω)Γ_uu(ω),  (8)
where the (p, q)th element of Γ_uu(ω) is

[Γ_uu(ω)]_{pq} = sinc(ωd_pq/c) ≜ sin(ωd_pq/c)/(ωd_pq/c)  (9)

for a spherically isotropic field, or [Γ_uu(ω)]_{pq} = J_0(ωd_pq/c) for a cylindrically isotropic field, where d_pq is the distance between the pth and qth microphones, c is the speed of sound, and J_0(·) is the zero-order Bessel function of the first kind.
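The coherence model of Equations (8)-(9) can be sketched as follows in Python; the function name and the choice of exposing both the spherically and cylindrically isotropic cases through a flag are this example's own:

```python
import numpy as np
from scipy.special import j0  # zero-order Bessel function of the first kind

def diffuse_coherence(omega, mic_positions, field="spherical", c=343.0):
    """Gamma_uu(omega): entry (p, q) is sinc(w*d_pq/c) for a spherically isotropic
    field, or J0(w*d_pq/c) for a cylindrically isotropic field."""
    pos = np.asarray(mic_positions, dtype=float)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # pairwise d_pq
    arg = omega * d / c
    if field == "spherical":
        return np.sinc(arg / np.pi)  # np.sinc(x) = sin(pi*x)/(pi*x)
    return j0(arg)
```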
(3) White Noise:
The covariance matrix of additive white noise is simply a weighted identity matrix:
R_ww(jω) = σ_w²(ω)·I_{M×M}.  (10)
When a microphone array is used to capture a desired wideband sound signal (e.g., speech and/or music), the intention is to minimize the distance between Y(jω) in Equation (6) and S(jω) for all ω. The MCWF that is optimal in the MMSE sense can be decomposed into an MVDR beamformer followed by a single-channel Wiener filter (SCWF):

h_MCWF(jω) = h_MVDR(jω) · σ_{s,o}²(ω)/[σ_{s,o}²(ω) + σ_{ψ,o}²(ω)],  (11)

where σ_{s,o}²(ω) and σ_{ψ,o}²(ω) are the power of the desired signal and noise at the output of the MVDR beamformer, respectively. This decomposition leads to the following structure for microphone array speech acquisition: the SCWF is regarded as a post-filter after the MVDR beamformer.
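A minimal sketch of this decomposition is given below; the covariance matrix passed to the MVDR computation (noise-only versus full input covariance) and the small diagonal loading are assumptions of this example, not requirements of the disclosure:

```python
import numpy as np

def mvdr_weights(R, g_s, diagonal_loading=1e-6):
    """h_MVDR = R^{-1} g_s / (g_s^H R^{-1} g_s), with a small diagonal load for stability."""
    M = R.shape[0]
    R_loaded = R + diagonal_loading * (np.trace(R).real / M) * np.eye(M)
    Rinv_g = np.linalg.solve(R_loaded, g_s)
    return Rinv_g / (g_s.conj() @ Rinv_g)

def wiener_postfilter_gain(sig_power_out, noise_power_out):
    """Single-channel Wiener gain of Eq. (11), applied to the beamformer output."""
    return sig_power_out / (sig_power_out + noise_power_out)
```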
In practice, the microphone array covariance matrix is estimated recursively from the received signals. At the kth time frame,

R̂_xx(jω, k) = λR̂_xx(jω, k−1) + (1−λ)x(jω, k)x^H(jω, k),  (12)

where 0 < λ < 1 is a forgetting factor.
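The recursive estimate of Equation (12) amounts to one rank-one update per frame and frequency bin; a sketch (with illustrative names) is:

```python
import numpy as np

def update_covariance(R_prev, x_frame, forgetting=0.95):
    """Eq. (12): R(k) = lambda * R(k-1) + (1 - lambda) * x(k) x(k)^H.

    x_frame is the length-M vector of STFT/DTFT coefficients of the current frame
    at one frequency bin; forgetting is the factor 0 < lambda < 1.
    """
    x = np.asarray(x_frame).reshape(-1, 1)
    return forgetting * R_prev + (1.0 - forgetting) * (x @ x.conj().T)
```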
Again, similar to Equation (7), reverberation may be ignored, resulting in
g_s(jω) = [e^{−jωτ_{s,1}} e^{−jωτ_{s,2}} ⋯ e^{−jωτ_{s,M}}]^T,  (13)

where τ_{s,m} is the desired signal's time difference of arrival at the mth microphone with respect to the common reference point.
In another example, suppose that both τ_{s,m} and τ_{v,m} are known and do not change over time. Then, according to Equation (5), using Equations (8) and (10), at the kth time frame the covariance matrix models (140a) may be determined as follows:
R_xx(jω, k) = σ_s²(ω, k)P_{g_s}(jω) + σ_v²(ω, k)P_{g_v}(jω) + σ_u²(ω, k)Γ_uu(ω) + σ_w²(ω, k)I_{M×M}.  (14)
This equality allows defining a criterion based on the Frobenius norm of the difference between the left- and right-hand sides of Equation (14). By minimizing such a criterion, an LS estimator for {σ_s²(ω, k), σ_v²(ω, k), σ_u²(ω, k), σ_w²(ω, k)} may be deduced. Note that the matrices in Equation (14) are Hermitian, so the redundant (conjugate-symmetric) information can be omitted in the formulation that follows.
For an M×M Hermitian matrix A = [a_pq], two vectors may be defined: one collects the diagonal elements, and the other is the off-diagonal half vectorization (odhv) of the lower triangular part:
diag{A} ≜ [a_11 a_22 ⋯ a_MM]^T,  (15)

odhv{A} ≜ [a_21 ⋯ a_M1 a_32 ⋯ a_M2 ⋯ a_M(M−1)]^T.  (16)
For a set of N Hermitian matrices of the same size, the corresponding stacked quantities may be defined as
diag{A_1, …, A_N} ≜ [diag{A_1} ⋯ diag{A_N}],  (17)

odhv{A_1, …, A_N} ≜ [odhv{A_1} ⋯ odhv{A_N}].  (18)
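The notations of Equations (15)-(18) translate directly into small helper routines; the following Python versions (names chosen for this example) mirror the stacking used in the least-squares formulation of Equation (19) below:

```python
import numpy as np

def diag_vec(A):
    """Eq. (15): vector of the diagonal elements of a Hermitian matrix."""
    return np.diagonal(A).copy()

def odhv(A):
    """Eq. (16): off-diagonal half-vectorization of the lower triangular part,
    stacked column by column (a21..aM1, a32..aM2, ..., aM(M-1))."""
    M = A.shape[0]
    return np.concatenate([A[q + 1:, q] for q in range(M - 1)])

def stack_diag(*mats):
    """Eq. (17): diag-vectors of several same-size matrices placed side by side."""
    return np.column_stack([diag_vec(A) for A in mats])

def stack_odhv(*mats):
    """Eq. (18): odhv-vectors of several same-size matrices placed side by side."""
    return np.column_stack([odhv(A) for A in mats])
```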
By using these notations, Equation (14) is reorganized to get
φ̂_xx(k) = Θ·χ(k),  (19)

where the parameter jω is omitted for clarity, and

φ̂_xx(k) ≜ [diag{R̂_xx(k)}^T odhv{R̂_xx(k)}^T]^T,

Θ ≜ [diag{P_{g_s}, P_{g_v}, Γ_uu, I_{M×M}}^T odhv{P_{g_s}, P_{g_v}, Γ_uu, I_{M×M}}^T]^T,

χ(k) ≜ [σ_s²(k) σ_v²(k) σ_u²(k) σ_w²(k)]^T.
Here, the result is M (M+1)/2 equations and 4 unknowns. If M≧3, this is an overdetermined problem. That is, there are more equations than unknowns.
The aforementioned error criterion is written as
J ≜ ‖φ̂_xx(k) − Θ·χ(k)‖².  (20)
Minimizing this criterion, implemented as estimating the power of sound sources (150a), leads to
χ̂_LS(k) = ℜ{(Θ^HΘ)^{−1}Θ^Hφ̂_xx(k)},  (21)
where ℜ{·} denotes the real part of a complex number/vector. Assuming the estimation errors in φ̂_xx(k) are IID (independent and identically distributed) random variables, the LS (least-squares) solution given in Equation (21), as implemented in calculating the post-filter coefficients (155a), is optimal in the MMSE sense. Substituting this estimate into Equation (11) leads to what is referred to in this disclosure as the LS post-filter (LSPF) (160a).
In the above described example embodiment, the deduced LS solution assumes that M≧3. This is due to the use of a more generalized acoustic-field model that consists of four types of sound signals. In other example embodiments, where additional information regarding the acoustic field is available, such that some types of interfering signals can be ignored (e.g., no point interferer and/or merely white noise), then those columns in Equation (19) that correspond to these ignorable sound sources can be removed and an LSPF as described in the present disclosure may still be developed even with M=2.
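Under the reconstruction of Equations (19)-(21) given above, the per-bin LS power estimation can be sketched as follows; the helper layout and the use of a generic least-squares solver are choices of this example. The estimated powers would then feed the single-channel post-filter of Equation (11):

```python
import numpy as np

def ls_power_estimate(R_xx_hat, P_gs, P_gv, Gamma_uu):
    """LS estimate of [sigma_s^2, sigma_v^2, sigma_u^2, sigma_w^2], Eq. (19)-(21).

    All inputs are M x M matrices at one frequency bin: R_xx_hat is the recursive
    covariance estimate of Eq. (12), the others are the model matrices of Eq. (14).
    """
    M = R_xx_hat.shape[0]
    models = (P_gs, P_gv, Gamma_uu, np.eye(M))

    def odhv(A):
        return np.concatenate([A[q + 1:, q] for q in range(M - 1)])

    # Theta stacks the diag- and odhv-parts of the four model matrices (4 columns).
    theta = np.vstack([np.column_stack([np.diagonal(A) for A in models]),
                       np.column_stack([odhv(A) for A in models])])
    phi = np.concatenate([np.diagonal(R_xx_hat), odhv(R_xx_hat)])
    chi, *_ = np.linalg.lstsq(theta, phi, rcond=None)  # (Theta^H Theta)^-1 Theta^H phi
    return np.real(chi)                                # Eq. (21): keep the real part
```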
As mentioned above, conventional post-filtering methods are not optimal and have deficiencies when compared to methods and systems described herein. The limitations and deficiencies of existing approaches, with respect to the present disclosure, are further described below.
(a) Zelinski's Post-Filter (ZPF) assumes: 1) no point interferer, i.e., σ_v²(ω) = 0, 2) no diffuse noise, i.e., σ_u²(ω) = 0, and 3) only additive incoherent white noise. Thus, Equation (19) is simplified as follows

φ̂_xx(k) = [diag{P_{g_s}, I_{M×M}}^T odhv{P_{g_s}, I_{M×M}}^T]^T [σ_s²(k) σ_w²(k)]^T.  (22)
Instead of calculating the optimal LS solution for σ_s²(k) using Equation (21), the ZPF uses only the bottom odhv-part of Equation (22) (note that odhv{I_{M×M}} = 0, so the white-noise term does not appear there) to get

σ̂²_{s,ZPF}(k) = [2/(M(M−1))] Σ_{p>q} ℜ{[R̂_xx(k)]_{pq}/[P_{g_s}]_{pq}}.  (23)

Note from Equation (13) that the elements of odhv{P_{g_s}} all have unit modulus, so the division in Equation (23) amounts to a pure phase compensation (time alignment) of the microphone pair cross-spectra.
If the same acoustic model for the LSPF is used for ZPF (e.g., only white noise), it can be shown that the ZPF and the LSPF are equivalent when M=2. However, they are fundamentally different when M≧3.
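A compact sketch of a Zelinski-style estimate consistent with the reconstructed Equation (23) follows; the unit-modulus steering vector g_s and the pairwise averaging are the only ingredients, and the function name is illustrative:

```python
import numpy as np

def zpf_power_estimate(R_xx_hat, g_s):
    """Pair-averaged, phase-compensated estimate of sigma_s^2 (cf. Eq. (23)).

    g_s is the unit-modulus steering vector of the desired source, so
    P_gs[p, q] = g_s[p] * conj(g_s[q]) and dividing by it aligns the phases.
    """
    M = R_xx_hat.shape[0]
    P_gs = np.outer(g_s, g_s.conj())
    vals = [np.real(R_xx_hat[p, q] / P_gs[p, q])
            for p in range(1, M) for q in range(p)]
    return float(np.mean(vals))
```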
(b) McCowan's Post-Filter (MPF) assumes: 1) no point interferer, i.e., σ_v²(ω) = 0, 2) no additive white noise, i.e., σ_w²(ω) = 0, and 3) only diffuse noise. Under these assumptions, Equation (19) becomes

φ̂_xx(k) = [diag{P_{g_s}, Γ_uu}^T odhv{P_{g_s}, Γ_uu}^T]^T [σ_s²(k) σ_u²(k)]^T.  (25)

Note from Equation (9) that diag{Γ_uu} = 1_{M×1}.
Equation (25) is an overdetermined system. Again, instead of finding a global LS solution by following Equation (21), the MPF applies the three equations from Equation (25) that correspond to the pair of the pth and qth microphones to form a subsystem like the following

[R̂_xx(k)]_{pp} = σ_s²(k) + σ_u²(k),

[R̂_xx(k)]_{qq} = σ_s²(k) + σ_u²(k),

ℜ{[R̂_xx(k)]_{pq}/[P_{g_s}]_{pq}} = σ_s²(k) + Γ̃_{pq}σ_u²(k),  (26)

where Γ̃_{pq} ≜ ℜ{[Γ_uu]_{pq}/[P_{g_s}]_{pq}} is the phase-compensated diffuse-noise coherence between the pth and qth microphones. The MPF method solves Equation (26) for σ_s² as

σ̂²_{s,pq}(k) = ( ℜ{[R̂_xx(k)]_{pq}/[P_{g_s}]_{pq}} − (Γ̃_{pq}/2)([R̂_xx(k)]_{pp} + [R̂_xx(k)]_{qq}) ) / (1 − Γ̃_{pq}).  (27)

Since there are M(M−1)/2 different microphone pairs, the final MPF estimate is simply the average of the subsystems' results, as follows:

σ̂²_{s,MPF}(k) = [2/(M(M−1))] Σ_{p>q} σ̂²_{s,pq}(k).  (28)
The diffuse noise model is more common in practice than the white noise model. The latter can be regarded as a special case of the former when Γ_uu = I_{M×M}. But the MPF's approach to solving Equation (25) is heuristic and is also not optimal. Again, if the LSPF uses a diffuse-noise-only model, it is equivalent to the MPF when M = 2, but they are fundamentally different when M ≧ 3.
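For comparison, a McCowan-style pairwise estimate matching the reconstructed Equations (26)-(28) can be sketched as below; the guard against a compensated coherence close to one is an assumption of this example:

```python
import numpy as np

def mpf_power_estimate(R_xx_hat, g_s, Gamma_uu, eps=1e-6):
    """Pairwise McCowan-style estimate of sigma_s^2, averaged over all pairs (Eq. (28))."""
    M = R_xx_hat.shape[0]
    P_gs = np.outer(g_s, g_s.conj())
    estimates = []
    for p in range(1, M):
        for q in range(p):
            cross = np.real(R_xx_hat[p, q] / P_gs[p, q])           # compensated cross term
            auto = 0.5 * np.real(R_xx_hat[p, p] + R_xx_hat[q, q])  # average auto power
            gamma = np.real(Gamma_uu[p, q] / P_gs[p, q])           # compensated noise coherence
            estimates.append((cross - gamma * auto) / max(1.0 - gamma, eps))
    return float(np.mean(estimates))
```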
(c) Leukimmiatis's Post-Filter follows the algorithm proposed in the MPF to estimate σ_s²(k). Leukimmiatis et al. simply fix a flaw in Zelinski's and McCowan's post-filters: the denominator of the post-filter in Equation (11) should be σ_{s,o}²(ω) + σ_{ψ,o}²(ω), i.e., the signal-plus-noise power at the output of the MVDR beamformer, rather than the corresponding powers averaged over the individual microphone inputs.
The following provides results of example speech enhancement experiments performed to validate the LSPF method and systems of the present disclosure.
In the above experiments, three full-band measures are defined to characterize a sound field (subscript SF): namely, the signal-to-interference ratio (SIR), signal-to-noise ratio (SNR), and diffuse-to-white-noise ratio (DWR), as follows
SIR_SF ≜ 10·log10{σ_s²/σ_v²},  (29)

SNR_SF ≜ 10·log10{σ_s²/(σ_u² + σ_w²)},  (30)

DWR_SF ≜ 10·log10{σ_u²/σ_w²},  (31)

where σ_z² ≜ E{z²(t)} and z ∈ {s, v, u, w}.
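Given separate recordings of the four components, these full-band measures can be computed directly; the following sketch assumes time-domain component signals of equal length at a single reference microphone:

```python
import numpy as np

def sound_field_measures(s, v, u, w):
    """Full-band SIR/SNR/DWR of Eq. (29)-(31) from the separate component signals."""
    power = {name: float(np.mean(np.square(sig)))
             for name, sig in {"s": s, "v": v, "u": u, "w": w}.items()}
    sir = 10.0 * np.log10(power["s"] / power["v"])
    snr = 10.0 * np.log10(power["s"] / (power["u"] + power["w"]))
    dwr = 10.0 * np.log10(power["u"] / power["w"])
    return sir, snr, dwr
```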
For performance evaluation, two objective metrics are analyzed: the signal-to-interference-and-noise ratio (SINR) and the perceptual evaluation of speech quality (PESQ). The SINR and PESQ values are computed at each microphone and averaged to obtain the input SINR and PESQ, respectively. The output SINR and PESQ (denoted by SINRo and PESQo, respectively) are similarly estimated. The differences between the input and output measures (i.e., the delta values) are analyzed. To better assess the amount of noise reduction and speech distortion at the output, the interference and noise reduction (INR) and the desired-speech-only PESQ (dPESQ) are also calculated. For the dPESQ, the processed desired speech and the clean speech are passed to the PESQ estimator. The output PESQ indicates the quality of the enhanced signal, while the dPESQ value quantifies the amount of speech distortion introduced. Hu and Loizou's MATLAB code for PESQ is used in this study.
To avoid the well-known signal cancellation problem of the MVDR (minimum variance distortionless response) beamformer in room reverberation, the delay-and-sum (D&S) beamformer is implemented for front-end processing and combined with the following four post-filtering options: none, ZPF, MPF, and LSPF. The D&S-only implementation is used as a benchmark. For the ZPF and MPF, Leukimmiatis's correction has been employed. Tests were conducted under the following three setups: 1) White Noise ONLY: SIR_SF = 30 dB, SNR_SF = 5 dB, DWR_SF = −30 dB; 2) Diffuse Noise ONLY: SIR_SF = 30 dB, SNR_SF = 10 dB, DWR_SF = 30 dB; 3) Mixed Noise/Interferer: SIR_SF = 0 dB, SNR_SF = 10 dB, DWR_SF = 0 dB.
In these tests, the square-root Hamming window and 512-point FFT are used for the STFT analysis. Two neighboring windows have 50% overlapped samples. The weighted overlap-add method is used to reconstruct the processed signal.
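An analysis/synthesis setup along these lines can be sketched with SciPy as shown below; using scipy.signal.stft/istft with a square-root Hamming window and 50% overlap is this example's stand-in for the weighted overlap-add processing described above:

```python
import numpy as np
from scipy.signal import get_window, stft, istft

def stft_analysis(x, fs, nfft=512):
    """STFT with a square-root Hamming window, 512-point FFT, 50% overlap.
    Returns (frequencies, frame_times, complex spectrogram)."""
    win = np.sqrt(get_window("hamming", nfft, fftbins=True))
    return stft(x, fs=fs, window=win, nperseg=nfft, noverlap=nfft // 2)

def wola_synthesis(X, fs, nfft=512):
    """Weighted overlap-add reconstruction with the same square-root window."""
    win = np.sqrt(get_window("hamming", nfft, fftbins=True))
    _, y = istft(X, fs=fs, window=win, nperseg=nfft, noverlap=nfft // 2)
    return y
```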
The experimental results are summarized in Table 1. First, the results for the white-noise-only sound field are analyzed. Since this is the type of sound field addressed by the ZPF method, the ZPF does a reasonably good job of suppressing noise and enhancing speech quality. However, the proposed LSPF achieves more noise reduction and offers a higher output PESQ, although it introduces more speech distortion, with a slightly lower dPESQ. The MPF produces a deceptively high INR: its SINR gain is lower than that of the ZPF and LSPF, which means that the MPF significantly suppresses not only noise but also the desired speech. Its PESQ and dPESQ are lower than those of the LSPF.
In the second sound field, as expected, the D&S beamformer is less effective against diffuse noise, and the ZPF's performance degrades as well. In this case the MPF performs reasonably well, but the LSPF still yields clearly the best results.
The third sound field is the most challenging case to tackle due to the presence of a time-varying interfering speech source. Nevertheless, the LSPF outperforms the other conventional methods in all metrics.
Finally, it is noteworthy that these purely objective performance evaluation results are consistent with subjective perception of the four techniques in informal listening tests carried out with a small number of our colleagues.
The present disclosure describes systems and methods for LS post-filtering in microphone array applications. Unlike conventional post-filtering techniques, the described method considers not only diffuse and white noise but also point interferers. Moreover, it is a globally optimal solution that exploits the information collected by a microphone array more efficiently than conventional methods. Furthermore, the advantages of the disclosed technique over existing methods have been validated and quantified by simulations in various acoustic scenarios.
Depending on different configurations, the processor (710) can be a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor (710) can include one or more levels of caching, such as a L1 cache (711) and a L2 cache (712), a processor core (713), and registers (714). The processor core (713) can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. A memory controller (715) can either be an independent part or an internal part of the processor (710).
Depending on the desired configuration, the system memory (720) can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory (720) typically includes an operating system (721), one or more applications (722), and program data (724). The application (722) may include a post-filtering component (726) or a system and method to apply globally optimized least-squares post-filtering (723) for speech enhancement. Program data (724) includes instructions that, when executed by the one or more processing devices, implement the described systems and methods (723); alternatively, the instructions may be executed via the post-filtering component (726). In some embodiments, the application (722) can be arranged to operate with program data (724) on an operating system (721).
The computing device (700) can have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration (701) and any required devices and interfaces.
System memory (720) is an example of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 700. Any such computer storage media can be part of the device (700).
The computing device (700) can be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a smart phone, a personal data assistant (PDA), a personal media player device, a tablet computer (tablet), a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions. The computing device (700) can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers, as one or more programs running on one or more processors, as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of non-transitory signal bearing medium used to actually carry out the distribution. Examples of a non-transitory signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
With respect to the use of any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
Other Publications:

I. A. McCowan and H. Bourlard, "Microphone Array Post-Filter Based on Noise Field Coherence," IEEE Trans. Speech Audio Processing, vol. 11, pp. 709-716, Nov. 2003.

Rainer Zelinski, "A Microphone Array with Adaptive Post-Filtering for Noise Reduction in Reverberant Rooms," in Proc. IEEE ICASSP, Apr. 1988, vol. 5, pp. 2578-2581.

S. Leukimmiatis and P. Maragos, "Optimum Post-Filter Estimation for Noise Reduction in Multichannel Speech Processing," in Proc. EUSIPCO, Sep. 2006, pp. 1-5.

Pan et al., "On the Noise Reduction Performance of the MVDR Beamformer in Noisy and Reverberant Environments," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 4, 2014, pp. 815-819.

Peled et al., "Linearly Constrained Minimum Variance Method for Spherical Microphone Arrays in a Coherent Environment," IEEE 2011 Joint Workshop on Hands-Free Speech Communication and Microphone Arrays (HSCMA), May 30, 2011, pp. 86-91.

International Search Report in corresponding PCT/US2017/016187, dated Apr. 26, 2017, 5 pp.