Various embodiments of the present application relate to a method and a processing system that enhance one or more microphone signals by estimating and reducing reverberation. The present application relates to any electronic device, such as hearing aid devices, ear phones, mobile phones, active ear protection systems, public address systems, teleconference systems, hands-free devices, automatic speech recognition systems, multimedia software and systems, systems for professional audio, DECT phones, desktop or laptop computers, tablets, etc.
When a sound is emitted in a closed space, it is usually distorted by reverberation. This degradation is detrimental to sound quality and to speech intelligibility, and it significantly degrades the performance of Automatic Speech Recognition (ASR) systems. Reverberation is also harmful to most speech-related applications, such as automatic speaker recognition, automatic emotion recognition, speech detection, speech separation, pitch tracking, speech segregation, etc. In addition, reverberation degrades the quality of music signals and decreases the performance of music-related tasks such as music signal classification, automatic music transcription, analysis and melody detection, source separation, etc. Therefore, there is a great need for dereverberation methods and systems.
In room acoustics, room reverberation can be considered as the combination of early reverberation (alternatively called early reflections) and late reverberation. The early reflections arrive right after the direct sound and mainly result in a spectral degradation which is perceived as coloration. The early reflections are not considered harmful for speech intelligibility, ASR or any other signal-processing task; however, they can typically alter the signal's timbre. Late reverberation arrives after the early reverberation and produces a noise-like effect, generated by the signal's reverberant tails. Late reverberation is detrimental to the signal's quality and the intelligibility of speech, and it severely degrades the performance of signal processing algorithms. In addition, late reverberation is also responsible for a severe degradation of speech intelligibility in hearing impaired listeners, even when they use hearing assistive devices such as hearing aids or cochlear implants.
In signal processing, when assuming a Linear and Time Invariant system, deconvolution can typically be applied in order to suppress a convolutive distortion. Since reverberation is a convolutive distortion, deconvolution is the ideal way of confronting the reverberation problem. The reverberant signal y(n) is the convolution of the anechoic signal x(n) with the Room Impulse Response h(n):
y(n)=x(n)*h(n) (1)
where * denotes time-domain convolution. In theory, the RIR h(n) can be blindly estimated from the reverberant signal or acoustically measured via an appropriate technique 106. This estimation or measurement of the RIR can be used to deconvolve the reverberant signal from the RIR D(y(n)) 108 and to obtain an estimation of the clean signal x̂(n) 110. When the RIR is exactly known, the estimation x̂(n) is equal to the anechoic signal x(n). So, in theory, an ideal inversion (deconvolution) of the Room Impulse Response (RIR) will completely remove the effect of both early reflections and late reverberation. However, there are several problems with this ideal approach. First of all, typical RIRs have thousands of coefficients and an exact blind estimation is practically impossible. Moreover, the RIR is known to have non-minimum phase characteristics, the inverse filters are to a large extent non-causal, and exact measurements of the RIR must be available for the specific source/receiver room positions. When the sound source is moving, the RIR constantly changes and accurate measurements are impossible. Hence, for real-life applications RIR measurements are not available, and other blind dereverberation options that do not try to accurately estimate the RIR or use any prior information of the acoustic channel are needed.
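As an illustration of this ideal case, frequency-domain deconvolution with a short, exactly known impulse response can be sketched as follows (a toy sketch, not part of the disclosed method: the function name and the regularization constant eps are illustrative):

```python
import numpy as np

def deconvolve(y, h, eps=1e-8):
    # Recover x from y(n) = x(n) * h(n) (equation 1) by frequency-domain
    # division, assuming the RIR h is exactly known. The small eps
    # regularizes bins where H is close to zero.
    N = len(y)
    H = np.fft.rfft(h, n=N)
    Y = np.fft.rfft(y, n=N)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n=N)
```

With a real measured RIR (thousands of coefficients, non-minimum phase, position-dependent) this inversion fails for exactly the reasons listed above; the sketch only illustrates why exact knowledge of h(n) would make dereverberation trivial.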
Blind dereverberation (i.e. dereverberation without any other prior knowledge other than the reverberant signal) is a difficult task and it produces signal processing artifacts. Hence, the produced output signal is often of insufficient quality. Despite engineering efforts, the dereverberated signals often fail to improve signal quality and speech intelligibility. In many cases, blind dereverberation methods produce artifacts that are more harmful than the original reverberation distortion. Accordingly, a need exists to overcome the above mentioned drawbacks and to provide a method and a system for significant dereverberation of digital signals without producing processing artifacts.
Typical dereverberation methods confront either the early or the late reverberation problem. In order to tackle reverberation as a whole, early and late reverberation suppression methods have been used sequentially. An early reverberation suppression method is typically used as a first step to reduce the early reflections. Usually, in a second step a late reverberation suppression approach suppresses the signal's reverberant tail. However, early and late reverberation suppression methods have not been used in parallel. The goal of processing early and late reverberation in parallel, or combining multiple late/early reverberation estimation methods is to provide new artifact-free clean signal estimations.
In addition, the required amount of dereverberation strongly depends on the room acoustic characteristics and the source-receiver position or positions. Dereverberation algorithms should inherently include an estimation of relevant room acoustic characteristics and also estimate the correct suppression rate (e.g the amount and steepness of dereverberation), given that for a moving source or receiver the acoustic environment constantly changes. When the reverberation suppression rate is incorrect, it causes processing artifacts. Therefore, taking into consideration the acoustic environment (e.g. room characteristics such as dimensions and materials, acoustic interferences, source location, receiver location, etc.) there is a need for a method of controlling the reverberation suppression rate, either by a user or automatically.
Aspects of the invention relate to processing early and late reverberation in parallel and/or combining multiple late/early reverberation estimation methods.
Aspects of the invention also relate to the estimation of relevant room acoustic characteristics and of the correct suppression rate (e.g. the amount and steepness of dereverberation).
Aspects of the invention also relate to taking into consideration the acoustic environment (e.g. room characteristics such as dimensions and materials, acoustic interferences, source location, receiver location, etc.).
Aspects of the invention also relate to controlling the reverberation suppression rate, either by a user or automatically.
Additional exemplary, non-limiting aspects of the invention include:
For a more complete understanding of the invention, reference is made to the following description and accompanying drawings, in which:
Hereinafter, embodiments of the present invention will be described in detail in accordance with the references to the accompanying drawings. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present application.
The exemplary systems and methods of this invention will also be described in relation to reducing reverberation in audio systems. However, to avoid unnecessarily obscuring the present invention, the following description omits well-known structures and devices that may be shown in block diagram form or otherwise summarized.
For purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. It should be appreciated however that the present invention may be practiced in a variety of ways beyond the specific details set forth herein. The terms determine, calculate and compute, and variations thereof, as used herein are used interchangeably and include any type of methodology, process, mathematical operation or technique.
Following one exemplary model, the RIR captures the acoustic characteristics of a closed space. An exemplary RIR is shown as an example in
h(n)=hdir(n)+hear(n)+hlat(n) (2)
where n is the discrete time index. The direct part of the RIR hdir(n) can be modeled as a Kronecker delta function, shifted by ns samples and attenuated by a factor κ
hdir(n)=κδ(n−ns) (3)
where κ and ns mainly depend on the source-receiver distance and the physical characteristics of the propagation medium.
For illustrative reasons, one exemplary model for reverberation is described below. According to
where xj(n) represents the jth discrete-time anechoic source signal, hij(n) is the impulse response that models the acoustic path between the jth source and the ith receiver, and the * sign denotes time-domain convolution. According to equations 2, 3 and 4, a captured reverberant signal comprises three components: (i) an anechoic part, (ii) an early reverberation part and (iii) a late reverberation part
Considering now a direct part consisting of the anechoic part and the early reflections part x̂i(n) and a late reverberation part r̂i(n), equation 5 becomes
Although the effect of reverberation can be observed in the time domain signal, the effect of the acoustic environment, and in particular of the room dimensions and materials, is best observed in the frequency domain. Dereverberation can theoretically be achieved either in the time or in the frequency domain. It is therefore beneficial to utilize reverberation estimation and reduction techniques in the time-frequency domain, using a relevant transform. The time-domain reverberant signal of equation 5 can be transformed to the time-frequency domain using any relevant technique. For example, this can be done via a short-time Fourier transform (STFT), a wavelet transform, a polyphase filterbank, a multi-rate filterbank, a quadrature mirror filterbank, a warped filterbank, an auditory-inspired filterbank, etc. Each one of the above transforms will result in a specific time-frequency resolution that will change the processing accordingly. All embodiments of the present application can use any available time-frequency transform.
The reverberant signal yi(n) can be transformed to Yi(ω, μ), where ω is a frequency index and μ is a time index. In exemplary embodiments, ω denotes the index of the frequency bin or the sub-band and μ denotes the index of a time frame or a time sample. In some embodiments, the Short Time Fourier Transform technique can be used, together with an appropriate overlap analysis-synthesis technique such as overlap-add or overlap-save. Analysis windows can be set, for example, at 32, 64, 128, 256, 512, 1024, 2048, 4096 or 8192 samples for sampling frequencies of 4000, 8000, 12000, 16000, 44100, 48000, 96000 or 192000 Hz. According to equation 4, the captured reverberant signal in the time-frequency domain can be represented as
where Xj(ω, μ) and Hij(ω, μ) are the time-frequency representations of xj(n) and hij(n) respectively.
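A minimal STFT analysis-synthesis chain of the kind described above can be sketched as follows (a sketch under illustrative assumptions: a Hann window, 50% overlap, and overlap-add resynthesis; window length and hop are example values, not prescribed by the text):

```python
import numpy as np

def stft(x, win_len=512, hop=256):
    # Analysis: split x into overlapping Hann-windowed frames and take the
    # real FFT of each; rows index time frames (mu), columns frequency bins (omega).
    w = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[m * hop : m * hop + win_len] * w
                       for m in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

def istft(Y, win_len=512, hop=256):
    # Synthesis: inverse-FFT every frame and overlap-add; with a Hann window
    # and 50% overlap the shifted windows sum to approximately one, so the
    # middle of the signal is reconstructed almost exactly.
    frames = np.fft.irfft(Y, n=win_len, axis=1)
    x = np.zeros((Y.shape[0] - 1) * hop + win_len)
    for m, frame in enumerate(frames):
        x[m * hop : m * hop + win_len] += frame
    return x
```

Any of the other transforms named above (wavelets, polyphase or warped filterbanks, etc.) could replace this pair; only the time-frequency resolution of the subsequent processing changes.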
Generally speaking, reverberation is a convolutive distortion; however, since late reverberation arrives in the diffuse field, it is not highly correlated with the original sound source. Given the foregoing, it can sometimes be considered as an additive degradation with noise-like characteristics. Considering late reverberation as an additive distortion and transforming equation 6 to the time-frequency domain, the reverberant signals can be modeled as
Yi(ω,μ)=X̂i(ω,μ)+R̂i(ω,μ) (10)
where X̂i(ω, μ) represents the direct sound received at the ith microphone (containing the anechoic signal and the early reverberation) and R̂i(ω, μ) is the late reverberation received at the ith microphone. Following this model, we can estimate the direct part of the sound signals. Many techniques can be used for this, such as spectral subtraction, Wiener filtering, Kalman filtering, Minimum Mean Square Error (MMSE) estimation, Least Mean Squares (LMS) filtering, etc. All relevant techniques are in the scope of the present application. As an example application and without departing from the scope of the present invention, spectral subtraction (i.e. a subtraction in the time-frequency domain) will mostly be used hereafter:
X̂i(ω,μ)=Yi(ω,μ)−R̂i(ω,μ) (11)
The estimations of the clean signals can be derived by applying appropriate gains Gi(ω, μ) to the reverberant signals, i.e.:
and in an exemplary embodiment where spectral subtraction is used
The term gain in such techniques does not denote just a typical amplification gain (although the signal may be amplified in some cases). The dereverberation gain functions mentioned in embodiments of the present invention can be viewed as scale factors that modify the signal in the time-frequency domain. Given that X̂i(ω, μ) and R̂i(ω, μ) can be assumed uncorrelated (due to the nature of late reverberation), equation 10 can be written as
|Yi(ω,μ)|^ℓ=|X̂i(ω,μ)|^ℓ+|R̂i(ω,μ)|^ℓ (15)
For certain embodiments ℓ=1, 2 and the described model is implemented in the magnitude or power spectrum domain respectively. All embodiments of the present invention are relevant for any value of ℓ. In order to keep the notation simple, the magnitude spectrum is discussed in detail, but any value of ℓ can be used.
Equation 12 presents an example for producing a signal where late reverberation has been removed. The gain function G is calculated based on the received (reverberant) signal and knowledge of the nature of late reverberation in the acoustic environment. G can be measured or known a priori, or stored from previous measurements. G is a function of frequency (ω) and time (μ) but can also be a scalar or a function of just ω or μ.
The gain functions Gi(ω, μ) of equations 12, 13, 14 can be bounded in the closed interval [0, 1]. When Gi(ω, μ)=0, we consider that the signal component consists entirely of late reverberation and we totally suppress the original signal. When Gi(ω, μ)=1, we consider that the reverberant signal does not contain any late reverberation and the reverberant signal remains intact. Spectral subtraction is not the only way to derive the gain functions Gi(ω, μ). As mentioned before, in other exemplary embodiments the gain functions Gi(ω, μ) can be extracted according to equation 13 by any technique that provides a first estimation of a clean signal X̂i(ω, μ), such as Wiener filtering, subspace methods, statistically-based techniques, perceptually-motivated techniques, etc.
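The spectral-subtraction gain described above, with its bounding to [0, 1], can be sketched as follows (a minimal sketch; the eps guard against division by zero is an implementation detail, not part of the text):

```python
import numpy as np

def spectral_subtraction_gain(Y_mag, R_mag, eps=1e-12):
    # G = (|Y| - |R_hat|) / |Y|, clipped to the closed interval [0, 1]:
    # G = 0 suppresses a bin considered pure late reverberation,
    # G = 1 leaves a bin considered free of late reverberation intact.
    G = (Y_mag - R_mag) / (Y_mag + eps)
    return np.clip(G, 0.0, 1.0)
```

Multiplying the reverberant spectrum by this gain, as in equation 12, yields the estimate of the direct sound.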
Ideally, both early and late reverberation should be suppressed from the reverberant signal. However, it is known that: (i) late reverberation is considered more harmful than the early reflections, (ii) blind dereverberation methods, where no knowledge other than the reverberant signal is used, usually result in severe processing artifacts and (iii) the aforementioned processing artifacts are more likely to appear when we are trying to completely remove all signal distortions rather than just reducing the more harmful ones. Hence, in exemplary embodiments we might be interested in removing only late reverberation.
A metric for measuring the reverberation degradation is the Signal to Reverberation Ratio (SRR), which is the equivalent of the Signal to Noise Ratio (SNR) when reverberation is considered a form of additive noise. High-SRR regions are not severely contaminated by reverberation and are usually located in signal components where the energy of the anechoic signal is high. In such signal parts the anechoic sound source is dominant, and they are mainly contaminated by early reverberation, typically in the form of spectral coloration. On the other hand, low-SRR signal parts are significantly distorted by reverberation. Such signal components are likely to be found where the anechoic signal was quiet (i.e. low-energy anechoic signal components). These regions are usually located at the signal's reverberant tails.
In an exemplary embodiment, the energy of the reverberant signal's magnitude spectrum can be calculated in each frame as
where Ω is the number of frequency bins. Since this energy was found to be directly related to the amount of reverberation degradation, it can be used in exemplary embodiments in order to provide a dereverberation gain used to remove reverberation, as explained for example in equation 12. In order to bound the Ei(μ) values within [0,1], the energy values are normalized using an appropriate normalization factor fΩ. Hence, the direct sound can be estimated as
where Ei(μ)/fΩ represents the gain G as a function of time at the ith receiver. The factor fΩ is typically related to the size of the reverberant frame. In one example, the factor fΩ can be computed as the energy of a white noise frame of length Ω and of the maximum possible amplitude allowed by the reproduction system. In another example, fΩ can be obtained as the maximum spectral frame energy selected from a large number of speech samples, reproduced at the maximum amplitude allowed by the system. In other exemplary embodiments, instead of calculating the mean energy over each frame, the mean energy over specific sub-bands can be calculated. In examples, these sub-bands can be defined from the mel scale or the bark scale, they can rely on properties of the auditory system or they can be signal-dependent.
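The frame-energy gain described above can be sketched as follows (a sketch assuming a magnitude spectrogram laid out as frames × bins; the normalization factor fΩ is supplied by the caller, e.g. measured in one of the ways named in the text):

```python
import numpy as np

def energy_gain(Y_mag, f_omega):
    # Per-frame gain E_i(mu) / f_Omega, where E_i(mu) is the energy of the
    # frame's magnitude spectrum, bounded to [0, 1] as described in the text.
    E = np.sum(Y_mag ** 2, axis=1)
    return np.clip(E / f_omega, 0.0, 1.0)

def apply_gain(Y, G):
    # Estimate the direct sound by scaling every frame of Y by its gain.
    return Y * G[:, None]
```

Low-energy frames (the reverberant tails) receive a small gain and are attenuated, while high-energy frames pass through largely unchanged.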
In another embodiment of the present application, we can assume that low-energy frequency bins are more likely to contain significant amounts of reverberation and high-energy frequency bins are more likely to contain direct signal components. This can also be verified from
where λ>1 is a factor controlling the suppression rate and f is a normalization factor. This approach disproportionately increases the energy of high-energy frequency bins when compared to the energy of low-energy frequency bins. The normalization factor f is directly linked to the maximum amplitude that the system can reproduce without distortion. The factor f can be measured or known, and it may also change with time.
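One plausible per-bin reading of this idea can be sketched as follows (hypothetical: the exact equation is not reproduced in the text above, so the exponent λ−1 and the clipping are illustrative assumptions, not the disclosed formula):

```python
import numpy as np

def per_bin_gain(Y_mag, lam=2.0, f=1.0):
    # Hypothetical per-bin gain: each bin's gain grows with its normalized
    # magnitude raised to lam - 1 (lam > 1), so high-energy bins are boosted
    # relative to low-energy ones. f normalizes by the maximum amplitude the
    # system can reproduce without distortion.
    return np.clip((Y_mag / f) ** (lam - 1.0), 0.0, 1.0)
```

With λ=2 the gain is simply the normalized magnitude itself, so a bin at half the maximum level is attenuated by 6 dB while a full-scale bin passes intact.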
Blind methods for the suppression of late reverberation typically produce processing artifacts, mainly due to late reverberation estimation errors. Embodiments of the present invention minimize or totally avoid such detrimental processing artifacts. In exemplary embodiments this is achieved by combining different reverberation estimation methods in order to improve the quality of the dereverberated signal. An output signal resulting from a dereverberation method that compensates for early reverberation ideally contains: (i) an anechoic signal and (ii) late reverberation. An output signal resulting from a dereverberation method that compensates for late reverberation ideally contains: (i) an anechoic signal and (ii) early reverberation.
Given the foregoing,
In another embodiment, two or more late reverberation estimation methods can be combined to provide a new method for late reverberation suppression, with minimal or no processing artifacts. All embodiments of the present application relating to methods of dereverberation can be either single-channel, binaural or multi-channel.
An exemplary case of the general reverberation concept (previously illustrated in
In binaural setups such as the one described in
For illustrative reasons, one binaural model for reverberation will be described. Assume a speaker and a listener having one receiver in the left ear and one receiver in the right ear. According to equation 10, the time-frequency domain discrete-time signal YL(ω, μ) received in the listener's left ear is described as
YL(ω,μ)=XL(ω,μ)+RL(ω,μ) (19)
and the signal YR(ω, μ) captured by the right ear receiver can be expressed in the time-frequency domain as
YR(ω,μ)=XR(ω,μ)+RR(ω,μ) (20)
where XL(ω, μ) and XR(ω, μ) are the direct signals (including the anechoic and the early reverberation parts) for the left and right channels respectively, and RL(ω, μ) and RR(ω, μ) are the late reverberation components for the left and right channels respectively. Since we want to apply identical processing to both channels, we can derive a hybrid signal containing information from both the left and right ear channels. Therefore, we derive a new signal Ỹ(ω, μ) representing the sum of the left and right captured signals
Ỹ(ω,μ)=YR(ω,μ)+YL(ω,μ) (21)
Now, using Ỹ(ω, μ), we can broadly estimate the late reverberation of both channels, R̃(ω, μ). In other embodiments, any combination of the left and right channels can be used in order to derive Ỹ(ω, μ). Alternatively, the new signal Ỹ(ω, μ) can be derived in the time domain and then transformed to the time-frequency domain. Any known method for estimating the late reverberation R̃(ω, μ) can be used; some examples are presented in the embodiments described below.
In one embodiment, the late reverberation R̃(ω, μ) of both channels can be estimated from the spectral energy of each frame of Ỹ(ω, μ), as described in equations 16 and 17
In an exemplary embodiment, the late reverberation R̃(ω, μ) of both channels can be estimated from the spectral energy of each frame of Ỹ(ω, μ), as described in equation 18
In an exemplary embodiment, late reverberation is considered as a statistical quantity that does not dramatically change across different positions in the same room. Then h(n) is modeled as a discrete non-stationary stochastic process:
where b(n) is a zero-mean stationary Gaussian noise. The short time spectral magnitude of the reverberation is estimated as:
where SNRpri(ω, μ) is the a priori Signal to Noise Ratio, which can be approximated by a moving average of the a posteriori Signal to Noise Ratio SNRpost(ω, μ) in each frame:
SNRpri(ω,μ)=β·SNRpri(ω,μ−1)+(1−β)·max(0, SNRpost(ω,μ)−1) (26)
where β is a constant taking values close to 1.
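The recursive estimate of equation 26 can be sketched as follows (a sketch; the frame-major array layout and the zero initial state are assumptions):

```python
import numpy as np

def a_priori_snr(snr_post, beta=0.98):
    # Equation 26: the a priori SNR in each frame is a moving average of the
    # floored a posteriori SNR, with beta close to 1 controlling the memory.
    # snr_post: array of shape (n_frames,) or (n_frames, n_bins).
    snr_pri = np.zeros_like(snr_post)
    prev = np.zeros(snr_post.shape[1:])
    for mu in range(len(snr_post)):
        prev = beta * prev + (1 - beta) * np.maximum(0.0, snr_post[mu] - 1.0)
        snr_pri[mu] = prev
    return snr_pri
```

The max(0, ·−1) floor removes the contribution of bins where the observed power does not exceed the estimated reverberation power.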
In an exemplary embodiment, the late reverberation estimation is motivated by the observation that the smearing effect of late reflections produces a smoothing of the signal spectrum in the time domain. Hence, the late reverberation power spectrum is considered a smoothed and shifted version of the power spectrum of the reverberant speech:
|R̃(ω,μ)|^2=γ·w(μ−ρ)*|Ỹ(ω,μ)|^2 (27)
where ρ is a frame delay and γ a scaling factor. The term w(μ) represents an asymmetrical smoothing function given by the Rayleigh distribution:
where α represents a constant number of frames.
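A sketch of equation 27 under these definitions follows (the parameter values, the causal truncation of the Rayleigh window and its normalization are illustrative choices):

```python
import numpy as np

def late_reverb_power(Y_pow, rho=7, alpha=5, gamma=0.3, length=30):
    # Equation 27 sketch: the late-reverberation power spectrum as a smoothed,
    # shifted version of the reverberant power spectrum. Y_pow has shape
    # (n_frames, n_bins); rho delays the Rayleigh-shaped kernel by rho frames.
    mu = np.arange(length, dtype=float)
    w = (mu / alpha ** 2) * np.exp(-mu ** 2 / (2 * alpha ** 2))  # Rayleigh shape
    w /= w.sum()
    n_frames = Y_pow.shape[0]
    R_pow = np.zeros_like(Y_pow)
    for m in range(n_frames):
        for l in range(length):       # convolve along time, delayed by rho
            src = m - rho - l
            if src >= 0:
                R_pow[m] += gamma * w[l] * Y_pow[src]
    return R_pow
```

Because the kernel starts at zero and peaks after α frames, the estimate reproduces the gradual build-up and decay of the reverberant tail rather than an instantaneous copy of the signal power.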
In an exemplary embodiment, the short time power spectrum of late reverberation in each frame can be estimated as the sum of filtered versions of the previous frames of the reverberant signal's short time power spectrum:
where K is the number of frames that corresponds to an estimation of the RT60 and al(ω, μ) are the coefficients of late reverberation. The coefficients of late reverberation can be derived from
After having estimated the late reverberation R̃(ω, μ) from Ỹ(ω, μ), this estimate is used in a dereverberation process. This can be done with many techniques, including spectral subtraction, Wiener filtering, etc. For example, following the spectral subtraction approach, the binaural dereverberation gain G̃(ω, μ) will be
Since we want to preserve the binaural localization cues, this gain is then applied separately on both the left and right channels (according, for example, to equation 12), in order to obtain the estimation of the dereverberated signals for the left and right ear channels respectively. In equation 15 it is shown that, for specific embodiments of the present application, any exponent ℓ of the frequency transformation of the reverberant signal can be used. Hence, the binaural gain can be derived from and applied to |YL(ω, μ)|^ℓ and |YR(ω, μ)|^ℓ for any ℓ, but it can also be applied directly to the complex spectrum of the left and right channels.
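The whole binaural chain of equations 19-21 followed by spectral subtraction can be sketched as follows (a sketch: the late-reverberation estimator is passed in as a callable so that any of the methods above can be plugged in, and the eps guard is an implementation detail):

```python
import numpy as np

def binaural_dereverb(YL, YR, estimate_late_reverb, eps=1e-12):
    # Derive the hybrid signal Y~ = YL + YR (equation 21), estimate late
    # reverberation from it, build one spectral-subtraction gain, and apply
    # that SAME gain to both channels so binaural localization cues survive.
    # estimate_late_reverb: callable mapping |Y~| -> |R~|.
    Y_sum = YL + YR
    R_mag = estimate_late_reverb(np.abs(Y_sum))
    G = np.clip((np.abs(Y_sum) - R_mag) / (np.abs(Y_sum) + eps), 0.0, 1.0)
    return G * YL, G * YR
```

Applying a single common gain, rather than per-channel gains, is what keeps the interaural level and phase relations intact.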
An example method of the present invention provides dereverberation for binaural or 2-channel systems. Spectral processing tends to produce estimation artifacts. Looking at these artifacts with respect to the dereverberation gain (see equations 12, 13, 14), there are mainly two types of resulting errors:
In a first step of an exemplary embodiment, the coherence Φ(ω, μ) between the left YL(ω, μ) and the right YR(ω, μ) reverberant channels is derived. The coherence can provide an estimation of the distortion produced by early and late reverberation. There are many ways to calculate the coherence and they can all be used in different embodiments of the present application. As an example, the coherence can be calculated as
The coherence is (or can be) bounded in the closed interval [0,1]. Reverberation has an impact on the derived coherence values: Φ(ω, μ) values are smaller when reverberation is dominant and there is evidence that coherence can be seen as a measure of subjective diffuseness. Given the foregoing, we can assume that
In exemplary embodiments of the present application, the above findings are used to correct the reverberation estimation errors and produce dereverberated signals without artifacts. One way to do this is by manipulating the derived dereverberation gain and extracting a new room-adaptive gain. This room-adaptive gain modification can be performed using any relevant technique, such as a function, a method, a lookup table, an equation, a routine, a system, a set of rules, etc. In exemplary embodiments, four gain modification schemes can be assumed:
In an example application, we can use the coherence estimation in order to correct the estimation errors of any dereverberation algorithm. A new room-adaptive gain is obtained through the following function:
Gcoh(ω,μ)=(G̃(ω,μ))^(γ(1−Φ(ω,μ))) (36)
where γ is a tuning parameter. This gain can be used to obtain the dereverberated left and right signals as
XL(ω,μ)=Gcoh(ω,μ)YL(ω,μ) (37)
and
XR(ω,μ)=Gcoh(ω,μ)YR(ω,μ) (38)
Again, the gain can be derived from and applied to |YL(ω, μ)|^ℓ and |YR(ω, μ)|^ℓ for any ℓ, but it can also be derived from and applied directly to the complex spectrum of the left and right channels. Then the dereverberated time-domain signals for the left channel xL(n) and the right channel xR(n) can be obtained through an inverse transformation from the frequency to the time domain.
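The coherence-weighted scheme of equations 36-38 can be sketched as follows (a sketch: recursive smoothing of the auto- and cross-spectra is one common coherence estimator among the many the text allows, and placing γ inside the exponent is one plausible reading of equation 36):

```python
import numpy as np

def coherence(YL, YR, beta=0.9, eps=1e-12):
    # Magnitude-squared coherence per time-frequency bin, with recursive
    # smoothing of the auto-/cross-spectra over frames; bounded in [0, 1].
    n_frames, n_bins = YL.shape
    Pll = np.zeros(n_bins)
    Prr = np.zeros(n_bins)
    Plr = np.zeros(n_bins, dtype=complex)
    Phi = np.zeros((n_frames, n_bins))
    for m in range(n_frames):
        Pll = beta * Pll + (1 - beta) * np.abs(YL[m]) ** 2
        Prr = beta * Prr + (1 - beta) * np.abs(YR[m]) ** 2
        Plr = beta * Plr + (1 - beta) * YL[m] * np.conj(YR[m])
        Phi[m] = np.abs(Plr) ** 2 / (Pll * Prr + eps)
    return Phi

def coherent_gain(G, Phi, gamma=1.0):
    # Room-adaptive gain: where coherence is high (direct sound dominant) the
    # exponent approaches 0 and G_coh approaches 1, leaving the signal intact;
    # where coherence is low (diffuse reverberation) suppression deepens.
    return G ** (gamma * (1.0 - Phi))
```

The same G_coh is then applied to both channels, as in equations 37 and 38, so the binaural cues are preserved.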
The effect of coherence in the gain function of equation 36 is explained in the example illustrated in
The first gain estimation of equation 39 1002 is shown as an example in
In
In
In other exemplary embodiments of the present application, the aforementioned process may be applied in any multichannel dereverberation scenario. This can be done by any appropriate technique. For example, the coherence can be calculated between consecutive pairs of input channels, or between groups of channels, etc.
In an exemplary embodiment, the amount of dereverberation is controlled, in relation to a modification of a dereverberation gain G(ω, μ). If a linear control is applied, all gain values will be equally treated:
Gnew(ω,μ)=ζ·G(ω,μ) (40)
where ζ is the operator that changes the suppression rate. This linear operation is not necessarily a good choice for dereverberation. Reverberation is a convolutive degradation, it is highly correlated with the input signal, and a simple linear control of the dereverberation gain might not be sufficient. In this exemplary embodiment, dereverberation is therefore controlled in accordance with the original gain values:
In an example, the gain function of a dereverberation filter G(ω, μ) is controlled through a parameter v, in order to extract a new filter Gnew(ω, μ) as
Gnew(ω,μ)=(G(ω,μ))ν (41)
where ν>0. In
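The nonlinear control of equation 41 can be sketched as follows (a minimal sketch; the assumption that G lies in [0, 1] follows the bounding described earlier):

```python
import numpy as np

def control_suppression(G, nu):
    # Equation 41: raise the dereverberation gain to the power nu (> 0).
    # For G in [0, 1], nu > 1 deepens suppression of already-low gains while
    # leaving values near 1 almost untouched; nu < 1 relaxes the suppression.
    assert nu > 0
    return np.power(G, nu)
```

Unlike the linear scaling of equation 40, this exponentiation treats near-unity gains (little estimated reverberation) and small gains (strong estimated reverberation) differently, which matches the signal-dependent nature of the distortion.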
Even though embodiments of the present invention are related to the suppression of late reverberation, the methods presented in this application are also appropriate for the suppression of ambient noise. All assumptions made for late reverberation in the diffuse field (e.g. stationarity, stochastic characteristics, noise-like behavior) broadly hold for ambient noise. Hence, the embodiments presented in this application inherently suppress both ambient noise and late reverberation and are valid for ambient noise reduction as well.
While the above-described flowcharts have been discussed in relation to a particular sequence of events, it should be appreciated that changes to this sequence can occur without materially affecting the operation of the invention. Additionally, the exemplary techniques illustrated herein are not limited to the specifically illustrated embodiments but can also be utilized with and combined with the other exemplary embodiments, and each described feature is individually and separately claimable.
Additionally, the systems, methods and protocols of this invention can be implemented on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device such as a PLD, PLA, FPGA or PAL, a modem, a transmitter/receiver, any comparable means, or the like. In general, any device capable of implementing a state machine that is in turn capable of implementing the methodology illustrated herein can be used to implement the various communication methods, protocols and techniques according to this invention.
Furthermore, the disclosed methods may be readily implemented in software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively the disclosed methods may be readily implemented in software on an embedded processor, a micro-processor or a digital signal processor. The implementation may utilize either fixed-point or floating point operations or both. In the case of fixed point operations, approximations may be used for certain mathematical operations such as logarithms, exponentials, etc. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this invention is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized. The systems and methods illustrated herein can be readily implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the functional description provided herein and with a general basic knowledge of the audio processing arts.
Moreover, the disclosed methods may be readily implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this invention can be implemented as a program embedded on a personal computer such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated system or system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system, such as the hardware and software systems of an electronic device.
It is therefore apparent that there has been provided, in accordance with the present invention, systems and methods for reducing reverberation in electronic devices. While this invention has been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications and variations would be or are apparent to those of ordinary skill in the applicable arts. Accordingly, it is intended to embrace all such alternatives, modifications, equivalents and variations that are within the spirit and scope of this invention.
This application is a Continuation of U.S. patent application Ser. No. 14/739,225, filed Jun. 15, 2015, now U.S. Pat. No. 9,414,158, which is a Continuation of U.S. patent application Ser. No. 13/798,799, filed Mar. 13, 2013, now U.S. Pat. No. 9,060,052, entitled "Single-Channel, Binaural and Multi-Channel Dereverberation," the entirety of which is incorporated herein by reference.
U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
6711536 | Rees | Mar 2004 | B2
8903722 | Jeub et al. | Dec 2014 | B2
9060052 | Tsilfidis et al. | Jun 2015 | B2
9414158 | Tsilfidis et al. | Aug 2016 | B2
20120328112 | Jeub et al. | Dec 2012 | A1

Other Publications

Furuya, Ken'ichi et al., "Robust Speech Dereverberation Using Multichannel Blind Deconvolution with Spectral Subtraction," IEEE Trans. Audio, Speech and Lang. Process., vol. 15, no. 5, Jul. 2007, pp. 1579-1591.
Lebart, K. et al., "A New Method Based on Spectral Subtraction for Speech Dereverberation," S. Hirzel Verlag, EAA, Acta Acust. Acustica, vol. 87, 2001, pp. 359-366.
Wu, Mingyang et al., "A Two-Stage Algorithm for One-Microphone Reverberant Speech Enhancement," IEEE Trans. Audio, Speech and Lang. Process., vol. 14, no. 3, May 2006, pp. 774-784.
Jeub, Marco et al., "Model-Based Dereverberation Preserving Binaural Cues," IEEE Trans. Audio, Speech and Lang. Process., vol. 18, no. 7, Sep. 2010.
Office Action for U.S. Appl. No. 13/798,799, dated Feb. 18, 2015.
Notice of Allowance for U.S. Appl. No. 13/798,799, dated Apr. 30, 2015.
Office Action for U.S. Appl. No. 14/739,225, dated Feb. 11, 2016.
Notice of Allowance for U.S. Appl. No. 14/739,225, dated May 27, 2016.

Publication Data

Number | Date | Country
---|---|---
20160351179 A1 | Dec 2016 | US

Related U.S. Application Data

 | Number | Date | Country
---|---|---|---
Parent | 14739225 | Jun 2015 | US
Child | 15231451 | | US
Parent | 13798799 | Mar 2013 | US
Child | 14739225 | | US