System and method for noisy automatic speech recognition employing joint compensation of additive and convolutive distortions

Information

  • Patent Application
  • Publication Number
    20070033034
  • Date Filed
    August 03, 2005
  • Date Published
    February 08, 2007
Abstract
A system for, and method of, noisy automatic speech recognition employing joint compensation of additive and convolutive distortions and a digital signal processor incorporating the system or the method. In one embodiment, the system includes: (1) an additive distortion factor estimator configured to estimate an additive distortion factor, (2) an acoustic model compensator coupled to the additive distortion factor estimator and configured to use estimates of a convolutive distortion factor and the additive distortion factor to compensate acoustic models and recognize a current utterance, (3) an utterance aligner coupled to the acoustic model compensator and configured to align the current utterance using recognition output and (4) a convolutive distortion factor estimator coupled to the utterance aligner and configured to estimate an updated convolutive distortion factor based on the current utterance using first-order differential terms but disregarding log-spectral domain variance terms.
Description
TECHNICAL FIELD OF THE INVENTION

The present invention is directed, in general, to speech recognition and, more specifically, to a system and method for noisy automatic speech recognition (ASR) employing joint compensation of additive and convolutive distortions.


BACKGROUND OF THE INVENTION

Over the last few decades, the focus in ASR has gradually shifted from laboratory experiments performed on carefully enunciated speech received by high-fidelity equipment in quiet environments to real applications having to cope with normal speech received by low-cost equipment in noisy environments.


In the latter case, an ASR system has to be robust to at least two sources of distortion. One is additive in nature—background noise, such as a computer fan, a car engine or road noise. The other is convolutive in nature—changes in microphone type (e.g., a hand-held microphone or a hands-free microphone) or position relative to the speaker's mouth. In mobile applications of speech recognition, both background noise and microphone type and relative position are subject to change. Therefore, it is critical that ASR systems be able to compensate for the two distortions jointly.


Various approaches have been taken to address this problem. A first approach involves pursuing features that are inherently robust to distortions. Techniques using this approach include relative spectral technique-perceptual linear prediction, or RASTA-PLP, analysis (see, e.g., Hermansky, et al., “Rasta-PLP Speech Analysis Technique,” in ICASSP, 1992, pp. 121-124), cepstral normalization such as cepstrum mean normalization, or CMN, analysis (see, e.g., Rahim, et al., “Signal Bias Removal by Maximum Likelihood Estimation for Robust Telephone Speech Recognition,” IEEE Trans. on Speech and Audio Processing, vol. 4, no. 1, pp. 19-30, January 1996) and histogram normalization (see, e.g., Hilger, et al., “Quantile Based Histogram Equalization for Noise Robust Speech Recognition,” in EUROSPEECH, 2001, pp. 1135-1138). A second approach is called “feature compensation”; it works to reduce distortions of features caused by environmental interference.


Spectral subtraction (see, e.g., Boll, “Suppression of Acoustic Noise in Speech Using Spectral Subtraction,” IEEE Trans. on ASSP, vol. 27, pp. 113-120, 1979) is widely used to mitigate additive noise. More recently, the European Telecommunications Standards Institute (ETSI) proposed an advanced front-end (see, e.g., D. Macho, et al., “Evaluation of a Noise-Robust DSR Front-End on Aurora Databases” in ICSLP, 2002, pp. 17-20) that combines Wiener filtering with CMN.


Using stereo data for training and testing, compensation vectors may be estimated via code-dependent cepstral normalization, or CDCN, analysis (see, e.g., Acero, et al., “Environment Robustness in Automatic Speech Recognition” in ICASSP 1990, pp. 849-852) and SPLICE (see, e.g., Deng, et al., “High-Performance Robust Speech Recognition Using Stereo Training Data,” in ICASSP, 2001, pp. 301-304). Unfortunately, stereo data is rarely available in mobile applications.


Another approach involves vector Taylor series, or VTS, analysis (see, e.g., Moreno, et al., “A Vector Taylor Series Approach for Environment-Independent Speech Recognition,” in ICASSP, 1996, vol. 2, pp. 733-736), which uses a model of environmental effects to recover unobserved clean speech features.


A third approach is called “model compensation.” Probably the best-known model compensation techniques are multi-condition training and single-pass retraining. Unfortunately, these techniques require a large database to cover a variety of environments, which renders them unsuitable for mobile or other applications where computing resources are limited.


Other model compensation techniques make use of maximum likelihood linear regression (MLLR) (see, e.g., Woodland, et al., “Improving Environmental Robustness in Large Vocabulary Speech Recognition,” in ICASSP, 1996, pp. 65-68 and Sankar, et al., “A Maximum-Likelihood Approach to Stochastic Matching for Robust Speech Recognition,” IEEE Trans. on Speech and Audio Processing, vol. 4, no. 3, pp. 190-201, 1996) or maximum a posteriori probability estimation (see, e.g., Chou, et al. “Maximum A Posterior Linear Regression based Variance Adaptation on Continuous Density HMMs” technical report ALR-2002-045, Avaya Labs Research, 2002) to estimate transformation matrices from a smaller set of adaptation data. However, such estimation still requires a relatively large amount of adaptation data, which may not be available in mobile applications.


Using an explicit model of environment effects, the method of parallel model combination, or PMC (see, e.g., Gales, et al., “Robust Continuous Speech Recognition using Parallel Model Combination,” IEEE Trans. on Speech and Audio Processing, vol. 4, no. 5, 1996, pp. 352-359), and its extensions, such as sequential compensation (see, e.g., Yao, et al., “Noise Adaptive Speech Recognition Based on Sequential Noise Parameter Estimation,” Speech Communication, vol. 42, no. 1, pp. 5-23, 2004), may adapt model parameters with fewer frames of noisy speech. However, for mobile applications with limited computing resources, direct use of model compensation methods such as those of Gales, et al., and Yao, et al., both supra, almost always proves impractical.


What is needed in the art is a superior system and method for model compensation that functions well in a variety of background noise and microphone environments, particularly noisy environments, and is suitable for applications where computing resources are limited, e.g., digital signal processors (DSPs), especially those in mobile applications.


SUMMARY OF THE INVENTION

To address the above-discussed deficiencies of the prior art, one aspect of the present invention provides a system for noisy automatic speech recognition employing joint compensation of additive and convolutive distortions. In one embodiment, the system includes: (1) an additive distortion factor estimator configured to estimate an additive distortion factor, (2) an acoustic model compensator coupled to the additive distortion factor estimator and configured to use estimates of a convolutive distortion factor and the additive distortion factor to compensate acoustic models and recognize a current utterance, (3) an utterance aligner coupled to the acoustic model compensator and configured to align the current utterance using recognition output and (4) a convolutive distortion factor estimator coupled to the utterance aligner and configured to estimate an updated convolutive distortion factor based on the current utterance using first-order differential terms but disregarding log-spectral domain variance terms.


In another aspect, the present invention provides a method of noisy automatic speech recognition employing joint compensation of additive and convolutive distortions. In one embodiment, the method includes: (1) estimating an additive distortion factor, (2) using estimates of a convolutive distortion factor and the additive distortion factor to compensate acoustic models and recognize a current utterance, (3) aligning the current utterance using recognition output and (4) estimating an updated convolutive distortion factor based on the current utterance using first-order differential terms but disregarding log-spectral domain variance terms.


In yet another aspect, the present invention provides a DSP. In one embodiment, the DSP includes data processing and storage circuitry controlled by a sequence of executable instructions configured to: (1) estimate an additive distortion factor, (2) use estimates of a convolutive distortion factor and the additive distortion factor to compensate acoustic models and recognize a current utterance, (3) align the current utterance using recognition output and (4) estimate an updated convolutive distortion factor based on the current utterance using first-order differential terms but disregarding log-spectral domain variance terms.


The foregoing has outlined preferred and alternative features of the present invention so that those skilled in the art may better understand the detailed description of the invention that follows. Additional features of the invention will be described hereinafter that form the subject of the claims of the invention. Those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiment as a basis for designing or modifying other structures for carrying out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the invention.




BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:



FIG. 1 illustrates a high level schematic diagram of a wireless telecommunication infrastructure containing a plurality of mobile telecommunication devices within which the system and method of the present invention can operate;



FIG. 2 illustrates a high-level block diagram of a DSP located within at least one of the mobile telecommunication devices of FIG. 1 and containing one embodiment of a system for noisy ASR employing joint compensation of additive and convolutive distortions constructed according to the principles of the present invention;



FIG. 3 illustrates a flow diagram of one embodiment of a method of noisy ASR employing joint compensation of additive and convolutive distortions carried out according to the principles of the present invention;



FIG. 4 illustrates a plot of convolutive distortion estimates by an illustrated embodiment of the present invention and a prior art joint additive/convolutive compensation technique, averaged over all testing utterances for three exemplary driving conditions: parked, city-driving and highway;



FIG. 5 illustrates a plot of the standard deviation of channel estimates by an illustrated embodiment of the present invention and a prior art joint additive/convolutive compensation technique, averaged over all testing utterances for the three exemplary driving conditions of FIG. 4;



FIG. 6 illustrates a plot of word error rate by an illustrated embodiment of the present invention as a function of a forgetting factor;



FIG. 7 illustrates a plot of word error rate by an illustrated embodiment of the present invention as a function of a discounting factor;



FIG. 8 illustrates a plot of performance by an illustrated embodiment of the present invention as a function of discounting factor and forgetting factor in a parked condition;



FIG. 9 illustrates a plot of performance by an illustrated embodiment of the present invention as a function of discounting factor and forgetting factor in a city-driving condition; and



FIG. 10 illustrates a plot of performance by an illustrated embodiment of the present invention as a function of discounting factor and forgetting factor in a highway condition.




DETAILED DESCRIPTION

The present invention introduces a novel system and method for model compensation that functions well in a variety of background noise and microphone environments, particularly noisy environments, and is suitable for applications where computing resources are limited, e.g., mobile applications.


Using a model of environmental effects on clean speech features, an embodiment of the present invention to be illustrated and described updates estimates of distortion by a segmental E-M type algorithm, given a clean speech model and noisy observation. Estimated distortion factors are related inherently to clean speech model parameters, which results in overall better performance than PMC-like techniques, in which distortion factors are instead estimated directly from noisy speech without using a clean speech model.


Alternative embodiments employ simplification techniques in consideration of the limited computing resources found in mobile applications, such as wireless telecommunications devices. To accommodate possible modeling error brought about by use of simplification techniques, a discounting factor is introduced into the estimation process of distortion factors.


First, the theoretical underpinnings of an exemplary technique falling within the scope of the present invention will be set forth. Then, an exemplary system and method for noisy ASR employing joint compensation of additive and convolutive distortions will be described. Finally, results from experimental trials of one embodiment of the technique will be set forth to demonstrate its potential efficacy. The results will show that the new technique attains robust performance in a variety of conditions, achieving significant improvement over both a baseline technique having no noise compensation and a conventional compensation technique.


Accordingly, the discussion of the theoretical underpinnings of the exemplary technique will begin by first establishing the relationship between distorted speech and the additive and convolutive distortion factors.


A speech signal x(t) may be observed in a noisy environment that contains background noise n(t) and a distortion channel h(t). In typical mobile applications, n(t) arises from sources such as office noise or vehicle engine and road noise, while h(t) arises from the make and model of the mobile telecommunication device and the position of the speaker relative to its microphone. These environmental effects are assumed to cause linear distortions of the clean signal x(t).


If y(t) denotes the observed noisy speech signal, the following Equation (1) results:

$$y(t) = x(t) \ast h(t) + n(t) \qquad (1)$$


After transforming to a linear frequency domain, the power spectrum of y(t) can be written as:

$$Y^{\mathrm{lin}}(k) = X^{\mathrm{lin}}(k)\, H^{\mathrm{lin}}(k) + N^{\mathrm{lin}}(k) \qquad (2)$$


The cepstral feature is derived from a conventional discrete cosine transform (DCT) of the log-compressed linear spectral feature. In the log-spectral domain, due to the non-linear log-compression, the above linear function becomes non-linear:

$$Y^l(k) = g\big(X^l(k), H^l(k), N^l(k)\big) \qquad (3)$$

where:

$$g\big(X^l(k), H^l(k), N^l(k)\big) = \log\Big(\exp\big(X^l(k) + H^l(k)\big) + \exp\big(N^l(k)\big)\Big) \qquad (4)$$


Assuming log-normal distributions and ignoring the variances of the above terms, the following Equation (5) results:

$$E\{Y^l(k)\} = \hat{\mu}^l = g(\mu^l, H^l, N^l) \qquad (5)$$


where $\mu^l$ is the clean speech mean vector in the log-spectral domain and $\hat{\mu}^l$ is the compensated mean vector.
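For illustration only, the compensation of Equations (4) and (5) amounts to a numerically stable log-add. The following minimal numpy sketch (the function name is an illustrative assumption, not the patent's) computes a compensated log-spectral mean vector:

```python
import numpy as np

def compensate_mean_log(mu_l, H_l, N_l):
    """Equations (4)-(5): g(mu^l, H^l, N^l) = log(exp(mu^l + H^l) + exp(N^l)).

    mu_l: clean log-spectral mean vector; H_l, N_l: convolutive and
    additive distortion factors, all arrays of the same shape.
    """
    # np.logaddexp evaluates log(exp(a) + exp(b)) without overflow
    return np.logaddexp(mu_l + H_l, N_l)
```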


The overall objective is to derive a segmental technique for estimating distortion factors. It is assumed that continuous-density hidden Markov models (CD-HMMs) $\Lambda_X$ for $X^l(k)$ are trained on clean Mel-frequency cepstral coefficient (MFCC) feature vectors and represented as $\Lambda_X = \{\pi_q, a_{qq'}, c_{qp}, \mu^c_{qp}, \Sigma^c_{qp} : q, q' = 1 \ldots S,\ p = 1 \ldots M\}$, where $\mu^c_{qp} = \{\mu^c_{qpd} : d = 1 \ldots D\}$ and $\Sigma^c_{qp} = \{\sigma^{2,c}_{qpd} : d = 1 \ldots D\}$. (Ordinarily, the superscript $c$ would denote the cepstral domain; for simplicity of expression, feature vectors are hereafter assumed to be in the cepstral domain and the superscript is omitted.)
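As a rough guide to the notation, the clean-speech model parameters above might be held in a structure such as the following sketch (field names and layout are illustrative assumptions, not part of the patent):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CleanHMM:
    """Clean-speech CD-HMM Lambda_X with S states, M mixtures, D dimensions."""
    pi: np.ndarray      # (S,)       initial state probabilities pi_q
    a: np.ndarray       # (S, S)     state transition probabilities a_{qq'}
    c: np.ndarray       # (S, M)     mixture weights c_{qp}
    mu: np.ndarray      # (S, M, D)  cepstral mean vectors mu_{qp}
    sigma2: np.ndarray  # (S, M, D)  diagonal variances sigma^2_{qp}
```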


Distortion factors are estimated via the conventional maximum-likelihood principle. A conventional E-M algorithm (see, e.g., Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,” Proceedings of the IEEE, vol. 77, no. 2, 1989, pp. 257-286) is applied for the maximum-likelihood estimation, because $\Lambda_X$ contains an unseen state sequence.


R is defined to be the number of utterances available for estimating distortion factors, and $K_r$ is defined to be the number of frames in an utterance r. m denotes a mixture component in a state s. Using the E-M algorithm, an auxiliary function is constructed as follows:
$$Q^{(R)}(\lambda \mid \bar{\lambda}) = \sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{s_k} \sum_{m_k} p\big(s_k = q, m_k = p \mid Y_r(1{:}K_r), \bar{\lambda}\big)\, \log p\big(Y_r(k) \mid s_k = q, m_k = p, \lambda\big) \qquad (6)$$

where $\lambda = (H^l, N^l)$ and $\bar{\lambda} = (\bar{H}^l, \bar{N}^l)$ respectively denote the to-be-estimated distortion factors and the previously estimated distortion factors.


It will be assumed that environmental effects do not distort the variance of a Gaussian density. Thus the form for $p(Y_r(k) \mid s_k = q, m_k = p, \lambda)$ is:

$$p\big(Y_r(k) \mid s_k = q, m_k = p, \lambda\big) = b_{qp}(Y_r(k)) \sim \mathcal{N}\big(Y_r(k); \hat{\mu}_{qp}, \sigma^2_{qp}\big) \qquad (7)$$


The posterior probability $p(s_k = q, m_k = p \mid Y_r(1{:}K_r), \bar{\lambda})$ is usually denoted as $\gamma^r_{qp}(k)$ and is also called the “sufficient statistic” of the E-M algorithm.


In the illustrated embodiment, the sufficient statistics are obtained through the well-known forward-backward algorithm (see, e.g., Rabiner, supra). In the forward step, the forward variable $\alpha_q(k)$ is defined as $p(Y_r(1{:}k), s_k = q \mid \bar{\lambda})$ and is obtained inductively as follows:
$$\alpha_q(k+1) = \Big[\sum_i \alpha_i(k)\, a_{iq}\Big]\, b_q\big(Y_r(k+1) \mid \bar{\lambda}\big) \qquad (8)$$

where $a_{iq}$ is the state transition probability from state i to state q and:
$$b_q\big(Y_r(k+1) \mid \bar{\lambda}\big) = \sum_m c_{qm}\, \mathcal{N}\big(Y_r(k+1) \mid \bar{\mu}_{qm}, \sigma^2_{qm}\big) \qquad (9)$$

where $c_{qm}$ is the mixture weight of Gaussian component m at state q. Note that $\bar{\mu}_{qm}$ is obtained via Equation (5) by substituting $H^l$ and $N^l$ with the corresponding parameters in $\bar{\lambda}$. The backward step of the forward-backward algorithm can also be found in Rabiner, supra.
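A minimal sketch of the forward step of Equations (8) and (9), assuming compensated means mu_bar obtained via Equation (5) and the array layout sketched above (scaling of alpha against underflow, needed for long utterances, is omitted for brevity):

```python
import numpy as np

def forward_pass(Y, pi, a, c, mu_bar, sigma2):
    """Equations (8)-(9): forward variables alpha_q(k) for one utterance.

    Y: (K, D) observed feature frames; pi: (S,); a: (S, S);
    c: (S, M); mu_bar, sigma2: (S, M, D). Returns alpha of shape (K, S).
    """
    K = Y.shape[0]
    S = a.shape[0]

    def b(q, y):
        # Eq. (9): diagonal-Gaussian mixture likelihood at state q
        log_lik = -0.5 * np.sum((y - mu_bar[q]) ** 2 / sigma2[q]
                                + np.log(2.0 * np.pi * sigma2[q]), axis=1)
        return float(np.dot(c[q], np.exp(log_lik)))

    alpha = np.zeros((K, S))
    for q in range(S):
        alpha[0, q] = pi[q] * b(q, Y[0])
    for k in range(K - 1):
        for q in range(S):
            # Eq. (8): alpha_q(k+1) = [sum_i alpha_i(k) a_iq] b_q(Y(k+1))
            alpha[k + 1, q] = (alpha[k] @ a[:, q]) * b(q, Y[k + 1])
    return alpha
```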


Sufficient statistics are vital to the performance of E-M-type algorithms. State sequence segmentation will be assumed to be available, allowing what is usually called “supervised estimation.” In practical applications, however, recognition results can provide the segmentation, which is usually called “unsupervised estimation.”


Maximizing Equation (6) with respect to the convolutive distortion factor involves iterative estimation. The well-known Newton-Raphson method may be used to update the convolutive distortion estimate due to its rapid convergence rate. The new estimate of the convolutive distortion factor is given as:
$$H^l = \bar{H}^l - \left.\frac{\Delta_{H^l} Q(\lambda \mid \bar{\lambda})}{\Delta^2_{H^l} Q(\lambda \mid \bar{\lambda})}\right|_{H^l = \bar{H}^l} \qquad (10)$$


Using the chain rule of differentiation, $\Delta_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda})$, the first-order differential of the auxiliary function (6) with respect to $H^l$, is given as:
$$\Delta_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \frac{1}{\sigma^{2,l}_{qp}} \big[g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big]\, \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l) \qquad (11)$$

where $C^{-1}$ denotes the inverse discrete cosine transformation and $\sigma^{2,l}_{qp}$ is the variance vector in the log-spectral domain. Equation (13) below gives the first-order differential term $\Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)$.


The second-order differential of Equation (6) with respect to the convolutive distortion factor $H^l$ is given as:
$$\Delta^2_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \frac{1}{\sigma^{2,l}_{qp}} \Big[\big(\Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)\big)^2 + \big(g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big)\, \Delta^2_{H^l} g(\mu^l_{qp}, H^l, N^l)\Big] \qquad (12)$$

where the second-order term $\Delta^2_{H^l} g(\mu^l_{qp}, H^l, N^l)$ is given in Equation (14).


Straightforward algebraic manipulation of Equation (5) results in the first- and second-order differentials of $g(\mu^l_{qp}, H^l, N^l)$:
$$\Delta_{H^l} g(\mu^l_{qp}, H^l, N^l) = \frac{\exp(H^l + \mu^l_{qp})}{\exp(H^l + \mu^l_{qp}) + \exp(N^l)} \qquad (13)$$

$$\Delta^2_{H^l} g(\mu^l_{qp}, H^l, N^l) = \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)\, \big(1 - \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)\big) \qquad (14)$$
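Equation (13) is a logistic sigmoid of $(H^l + \mu^l_{qp} - N^l)$, so Equations (13) and (14) reduce to a few lines; a sketch:

```python
import numpy as np

def dg_dH(mu_l, H_l, N_l):
    """Equation (13): exp(H+mu) / (exp(H+mu) + exp(N)), rewritten as a
    logistic sigmoid of (H^l + mu^l - N^l) for numerical stability."""
    return 1.0 / (1.0 + np.exp(N_l - (H_l + mu_l)))

def d2g_dH2(mu_l, H_l, N_l):
    """Equation (14): the derivative of a sigmoid, g' (1 - g')."""
    d = dg_dH(mu_l, H_l, N_l)
    return d * (1.0 - d)
```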


With the same approach described above, the updating formula for the additive distortion factor may be obtained as:
$$N^l = \bar{N}^l - \left.\frac{\Delta_{N^l} Q^{(R)}(\lambda \mid \bar{\lambda})}{\Delta^2_{N^l} Q^{(R)}(\lambda \mid \bar{\lambda})}\right|_{N^l = \bar{N}^l} \qquad (15)$$

where the first- and second-order differentials in the equation are given in Equations (24) and (25), respectively.


Although $H^l$ and $N^l$ can both be estimated in the manner described above, their uses are entirely different. The convolutive distortion varies slowly, so its estimate may be used for the following utterance. In contrast, the additive distortion has been found to be highly variable in mobile environments; unless second-pass estimation is allowed, an estimate by Equation (15) may not help performance.


Since the present invention may find advantageous use in applications having limited computing resources, the updating formulae in Equations (11) and (12) may be further simplified. Those skilled in the pertinent art will observe that the variance term in the log-spectral domain is costly to obtain, owing to the heavy transformations required between the cepstral and log-spectral domains. A simplified solution is therefore in order.


Ignoring the variance term results in the following Equations (16) and (17):
$$\Delta_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \big[g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big]\, \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l) \qquad (16)$$

$$\Delta^2_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \Big[\big(\Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)\big)^2 + \big(g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big)\, \Delta^2_{H^l} g(\mu^l_{qp}, H^l, N^l)\Big] \qquad (17)$$


A further simplification arrives at the technique presented in Gong, “Model-Space Compensation of Microphone and Noise for Speaker-Independent Speech Recognition,” in ICASSP, 2003, pp. 660-663, which sets forth the following Equations (18) and (19):
$$\Delta_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \big[g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big] \qquad (18)$$

$$\Delta^2_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l) \qquad (19)$$


Equations (18) and (19) result from Equations (16) and (17) when the factor $\Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)$ is removed and the following assumption is made:

$$1 - \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l) \ll \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l) \qquad (20)$$


By Equation (13), Equation (20) is equivalent to $\exp(N^l) \ll \exp(H^l + \mu^l_{qp})$. Equations (18) and (19) are therefore based on the assumption that the additive noise power is much smaller than the convolved speech power. As a result, Equations (18) and (19) may not perform as well as Equations (16) and (17) when the noise level is close in magnitude to the convolved speech power. Experiments set forth below verify this statement.


The present invention introduces an optional forgetting factor ρ, lying in the range of zero to one, to force parameter updating to place more emphasis on recent utterances. With ρ, Equations (16) and (17) can be updated in an utterance-by-utterance way, i.e.:
$$\Delta_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \rho^{R-r} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \big[g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big]\, \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)$$
$$= \rho\, \Delta_{H^l} Q^{(R-1)}(\lambda \mid \bar{\lambda}) - \sum_{k=1}^{K_R} \sum_{q} \sum_{p} \gamma^R_{qp}(k)\, \big[g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_R(k)\big]\, \Delta_{H^l} g(\mu^l_{qp}, H^l, N^l) \qquad (21)$$

$$\Delta^2_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \rho^{R-r} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \Big[\big(\Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)\big)^2 + \big(g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big)\, \Delta^2_{H^l} g(\mu^l_{qp}, H^l, N^l)\Big]$$
$$= \rho\, \Delta^2_{H^l} Q^{(R-1)}(\lambda \mid \bar{\lambda}) - \sum_{k=1}^{K_R} \sum_{q} \sum_{p} \gamma^R_{qp}(k)\, \Big[\big(\Delta_{H^l} g(\mu^l_{qp}, H^l, N^l)\big)^2 + \big(g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_R(k)\big)\, \Delta^2_{H^l} g(\mu^l_{qp}, H^l, N^l)\Big] \qquad (22)$$
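A sketch of the recursive forms of Equations (21) and (22), folding one new utterance into running first- and second-order statistics; the variance terms are disregarded, per Equations (16) and (17), and the array shapes are illustrative assumptions:

```python
import numpy as np

def accumulate_stats(dQ_prev, d2Q_prev, gamma, g_val, dg, d2g, y_log, rho=0.6):
    """Equations (21)-(22): decay the previous sums by rho, then add the
    current utterance's contribution.

    gamma: (K, S, M) posteriors gamma^R_qp(k); g_val, dg, d2g: (S, M, D)
    from Equations (5), (13) and (14); y_log: (K, D) frames mapped to the
    log-spectral domain (C^{-1} Y). Returns updated (dQ, d2Q), each (D,).
    """
    resid = g_val[None] - y_log[:, None, None]   # (K, S, M, D) residuals
    w = gamma[..., None]                         # broadcast posteriors over D
    dQ = rho * dQ_prev - np.sum(w * resid * dg[None], axis=(0, 1, 2))
    d2Q = rho * d2Q_prev - np.sum(
        w * (dg[None] ** 2 + resid * d2g[None]), axis=(0, 1, 2))
    return dQ, d2Q
```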


The simplifications described above may introduce some modeling error under some conditions. As a result, the update of Equation (10) may yield a biased convolutive distortion factor estimate. To counteract this, the present invention introduces an optional discounting factor ξ, also lying in the range of zero to one, which is multiplied by the previous estimate. The new updating equation is given as:
$$H^l = \xi \bar{H}^l - \left.\frac{\Delta_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda})}{\Delta^2_{H^l} Q^{(R)}(\lambda \mid \bar{\lambda})}\right|_{H^l = \xi \bar{H}^l} \qquad (23)$$


Importantly, calculation of the sufficient statistics does not apply the discounting factor. Introduction of the discounting factor ξ therefore causes a mismatch between the $H^l$ used for the sufficient statistics and the $H^l$ used for calculating the derivatives of $g(\mu^l_{qp}, H^l, N^l)$. Fortunately, modeling error may be alleviated by adjusting ξ. The effects of ξ on recognition performance are described below.
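The discounted update of Equation (23) is then a one-line Newton-Raphson step; a sketch:

```python
def update_H(H_bar, dQ, d2Q, xi=0.3):
    """Equation (23): discount the previous estimate by xi, then take a
    Newton-Raphson step using the accumulated statistics."""
    return xi * H_bar - dQ / d2Q
```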


The additive distortion factor $N^l$ may be updated via Equation (15). Using the chain rule of differentiation, $\Delta_{N^l} Q^{(R)}(\lambda \mid \bar{\lambda})$, the first-order differential of Equation (6) with respect to $N^l$, is obtained as:
$$\Delta_{N^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \big[g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big]\, \Delta_{N^l} g(\mu^l_{qp}, H^l, N^l) \qquad (24)$$

where the first-order differential term $\Delta_{N^l} g(\mu^l_{qp}, H^l, N^l)$ is given in Equation (26).


The second-order differential of Equation (6) with respect to $N^l$ is given as:
$$\Delta^2_{N^l} Q^{(R)}(\lambda \mid \bar{\lambda}) = -\sum_{r=1}^{R} \sum_{k=1}^{K_r} \sum_{q} \sum_{p} \gamma^r_{qp}(k)\, \Big[\big(\Delta_{N^l} g(\mu^l_{qp}, H^l, N^l)\big)^2 + \big(g(\mu^l_{qp}, H^l, N^l) - C^{-1} Y_r(k)\big)\, \Delta^2_{N^l} g(\mu^l_{qp}, H^l, N^l)\Big] \qquad (25)$$

where the second-order term $\Delta^2_{N^l} g(\mu^l_{qp}, H^l, N^l)$ is given in Equation (27).


A straightforward algebraic manipulation of Equation (5) yields the first- and second-order differentials of $g(\mu^l_{qp}, H^l, N^l)$, shown below as:
$$\Delta_{N^l} g(\mu^l_{qp}, H^l, N^l) = \frac{\exp(N^l)}{\exp(H^l + \mu^l_{qp}) + \exp(N^l)} \qquad (26)$$

$$\Delta^2_{N^l} g(\mu^l_{qp}, H^l, N^l) = \Delta_{N^l} g(\mu^l_{qp}, H^l, N^l)\, \big(1 - \Delta_{N^l} g(\mu^l_{qp}, H^l, N^l)\big) \qquad (27)$$
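The additive-factor differentials mirror Equations (13) and (14); note that the first differentials of Equations (13) and (26) sum to one. A sketch:

```python
import numpy as np

def dg_dN(mu_l, H_l, N_l):
    """Equation (26): exp(N) / (exp(H+mu) + exp(N)), i.e. 1 - dg_dH."""
    return 1.0 / (1.0 + np.exp((H_l + mu_l) - N_l))

def d2g_dN2(mu_l, H_l, N_l):
    """Equation (27): g'_N (1 - g'_N)."""
    d = dg_dN(mu_l, H_l, N_l)
    return d * (1.0 - d)
```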


Having set forth the theoretical underpinnings of an exemplary technique falling within the scope of the present invention, an exemplary system and method for noisy ASR employing joint compensation of additive and convolutive distortions can now be described.


Accordingly, referring to FIG. 1, illustrated is a high level schematic diagram of a wireless telecommunication infrastructure, represented by a cellular tower 120, containing a plurality of mobile telecommunication devices 110a, 110b within which the system and method of the present invention can operate.


One advantageous application for the system or method of the present invention is in conjunction with the mobile telecommunication devices 110a, 110b. Although not shown in FIG. 1, today's mobile telecommunication devices 110a, 110b contain limited computing resources, typically a DSP, some volatile and nonvolatile memory, a display for displaying data and a keypad for entering data.


Certain embodiments of the present invention described herein are particularly suitable for operation in the DSP. The DSP may be a commercially available DSP from Texas Instruments of Dallas, Tex. An embodiment of the system in such a context will now be described.


Turning now to FIG. 2, illustrated is a high-level block diagram of a DSP located within at least one of the mobile telecommunication devices of FIG. 1 and containing one embodiment of a system for noisy ASR employing joint compensation of additive and convolutive distortions constructed according to the principles of the present invention. Those skilled in the pertinent art will understand that a conventional DSP contains data processing and storage circuitry that is controlled by a sequence of executable software or firmware instructions. Most current DSPs are not as computationally powerful as microprocessors. Thus, the computational efficiency of techniques required to be carried out in DSPs in real-time is a substantial issue.


The system contains an additive distortion factor estimator 210. The additive distortion factor estimator 210 is configured to estimate an additive distortion factor, preferably from non-speech segments of a current utterance. The initial ten frames of input features may advantageously be averaged, and the average used as the additive distortion factor estimate $N^l$.
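A sketch of this estimator, assuming the features arrive as a (frames x dims) array already in the relevant domain:

```python
import numpy as np

def estimate_N(frames, n_init=10):
    """Average the initial n_init (assumed non-speech) frames to obtain
    the additive distortion factor estimate N^l."""
    return np.asarray(frames)[:n_init].mean(axis=0)
```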


Coupled to the additive distortion factor estimator 210 is an acoustic model compensator 220. The acoustic model compensator 220 is configured to use the estimates of the distortion factors $H^l$ and $N^l$ to compensate the acoustic models $\Lambda_X$ and recognize the current utterance R. (The convolutive distortion factor $H^l$ is initially set to zero and thereafter carried forward from the previous utterance.)


Coupled to the acoustic model compensator 220 is an utterance aligner 230. The utterance aligner 230 is configured to align the current utterance R using recognition output. Sufficient statistics $\gamma^R_{qp}(k)$ are preferably obtained for each state q, mixture component p and frame k.


Coupled to the utterance aligner 230 is a convolutive distortion factor estimator 240. The convolutive distortion factor estimator 240 is configured to estimate the convolutive distortion factor $H^l$ based on the current utterance using first-order differential terms but disregarding log-spectral domain variance terms. In doing so, the illustrated embodiment of the convolutive distortion factor estimator 240 accumulates sufficient statistics via Equations (21) and (22) and updates the convolutive distortion estimate for the next utterance by Equation (23).


Analysis of the next utterance then begins, invoking the additive distortion factor estimator 210 to start the process anew.


Turning now to FIG. 3, illustrated is a flow diagram of one embodiment of a method of noisy ASR employing joint compensation of additive and convolutive distortions carried out according to the principles of the present invention. Since convolutive distortion can be considered as slowly varying, in contrast to additive distortion, the method treats the two separately.


The method begins in a start step 310 wherein it is desired to recognize potentially noisy speech. In a step 320, the estimate of the convolutive distortion factor $H^l$ is initialized, e.g., to zero. In a step 330, an estimate of the additive distortion factor $N^l$ is obtained from non-speech segments of the current utterance. As stated above, the initial (e.g., ten) frames of input features may be averaged, and the mean used as the additive distortion factor estimate. In a step 340, the estimates of the distortion factors $H^l$ and $N^l$ are used to compensate the acoustic models $\Lambda_X$ and recognize the current utterance R.


In a step 350, the current utterance R is aligned using recognition output. In a step 360, sufficient statistics $\gamma^R_{qp}(k)$ are obtained for each state q, mixture component p and frame k. In a step 370, the sufficient statistics are accumulated via Equations (21) and (22), and the convolutive distortion factor estimate is updated for the next utterance by Equation (23).


In a decisional step 380, it is determined whether the current utterance is the last utterance. If not, R←R+1, and the method repeats beginning at the step 330. If so, the method ends in an end step 390.
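Pulling the steps of FIG. 3 together, a skeleton of the per-utterance loop might look as follows; `recognize_and_align` is a hypothetical callable standing in for steps 340 through 360 (model compensation, recognition and alignment), and `estimate_N`, `accumulate_stats` and `update_H` are the sketches given earlier:

```python
import numpy as np

def ijac_loop(utterances, recognize_and_align, D, xi=0.3, rho=0.6):
    """Skeleton of FIG. 3. Each utterance is a (K, D) feature array.

    recognize_and_align(Y, H, N) is assumed to compensate the models
    per Equation (5), decode, align, and return (gamma, g_val, dg, d2g,
    y_log) in the shapes used by accumulate_stats.
    """
    H = np.zeros(D)                              # step 320: H^l init to zero
    dQ, d2Q = np.zeros(D), np.zeros(D)
    for Y in utterances:
        N = estimate_N(Y)                        # step 330
        gamma, g_val, dg, d2g, y_log = recognize_and_align(Y, H, N)  # 340-360
        dQ, d2Q = accumulate_stats(dQ, d2Q, gamma, g_val, dg, d2g,
                                   y_log, rho)   # step 370, Eqs (21)-(22)
        H = update_H(H, dQ, d2Q, xi)             # Eq (23): carried forward
    return H
```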


One embodiment of the novel technique of the present invention will hereinafter be called “IJAC.” To assess the performance of the new technique, it will now be compared to a prior art joint additive/convolutive compensation technique introduced in Gong, supra, which will hereinafter be called “JAC.”


IJAC and JAC will be evaluated with respect to exemplary “hands-free” databases of spoken digits and names. The digit database was recorded in a car, using an AKG M2 hands-free distant-talking microphone, in three recording sessions: parked (engine off), stop-n-go (car driven on a stop-and-go basis to simulate city driving) and highway (at highway speeds). In each session, 20 speakers (10 male, 10 female) read 40 sentences each, resulting in 800 utterances. Each sentence is a 10-, 7- or 4-digit sequence, with equal probabilities. The digit database is sampled at 8 kHz with a frame rate of 20 ms, and 10-dimensional MFCC features were derived from the speech.


The CD-HMMs are trained on clean speech data recorded in a laboratory. The HMMs contain 1957 mean vectors and 270 diagonal variances. Evaluated on a test set, the recognizer gives a 0.36% word error rate.


Given the above HMM models, the hands-free database presents a severe mismatch. First, the microphone is distant-talking and band-limited, as compared to the high-quality microphone used to collect the clean speech data. Second, a substantial amount of background noise is present due to the car environment, with the signal-to-noise ratio (SNR) decreasing to 0 dB in the highway condition.


The variances of the CD-HMMs are adapted by MAP estimation with some slightly noisy data from the parked condition. Such adaptation does not affect recognition of clean speech, but reduces the variance mismatch between the HMMs and the noisy speech.


Ideally, the convolutive distortion corresponding to the microphone should be independent of the testing utterance. However, due to varying noise distortion and utterance length, the estimated convolutive distortion may vary from utterance to utterance. Moreover, since IJAC and JAC employ different updating mechanisms, different estimates may result.


Turning now to FIGS. 4 and 5, illustrated are, in FIG. 4, a plot of convolutive distortion estimates by IJAC and JAC, averaged over all testing utterances for three exemplary driving conditions: parked, city-driving and highway and, in FIG. 5, a plot of the standard deviation of channel estimates by IJAC and JAC, averaged over all testing utterances for the three exemplary driving conditions of FIG. 4.


The following should be apparent. First, for each technique, the estimates in the different driving conditions generally agree, showing that the estimation techniques do not depend strongly on the noise level. Second, FIG. 4 shows a bias between the estimates by IJAC and JAC; JAC appears to under-estimate the convolutive distortion. Third, FIG. 5 clearly shows that, in the lower-frequency bands, IJAC has a smaller estimation variance than JAC. Note that, in those frequency bands, JAC's estimation variance at higher noise levels exceeds its variance in the parked condition, whereas IJAC does not suffer higher estimation variance at higher noise levels.


According to the above observations and analysis, IJAC produces a smaller estimation error than JAC. Speech recognition experiments will now be set forth that verify the superiority of IJAC.


IJAC is again compared with JAC. Speech enhancement by spectral subtraction (SS) (see, e.g., Boll, supra) may be combined with these two techniques. Recognition results are summarized in Table 1, below. In Table 1, IJAC is configured with ξ=0.3 and ρ=0.6.

TABLE 1
Word error rate of digit recognition

WER (%)      Parked   City Driving   Highway
Baseline     1.38     30.3           73.2
JAC          0.32     0.61           2.48
JAC + SS     0.34     0.56           1.99
IJAC         0.32     0.52           2.43
IJAC + SS    0.31     0.54           1.83


Table 1 reveals several things. First, performance of the baseline (without noise robustness techniques) degrades severely. Second, JAC substantially reduces the word error rate (WER) under all driving conditions. Third, SS benefits both JAC and IJAC in the highway condition. Fourth, IJAC performs consistently better than JAC.

TABLE 2
Relative word error rate reduction (ERR) of digit recognition

ERR (%)                   Parked   City Driving   Highway
IJAC vs. baseline         76.8     98.3           96.7
IJAC + SS vs. baseline    77.5     98.2           97.5
IJAC vs. JAC              0.0      14.8           2.0
IJAC + SS vs. JAC + SS    8.8      3.5            8.0


Table 2 further elaborates on the comparison by showing the relative word error rate reduction (ERR) of IJAC as compared to the baseline and to JAC. It should be observed that IJAC significantly reduces the word error rate relative to the baseline and also performs consistently better than JAC.


The results reported above were obtained with IJAC implemented in floating point. Parameters such as ξ and ρ may need careful adjustment when IJAC is implemented in fixed-point C; for example, IJAC's best fixed-point performance may be realized with ξ=0.3 and ρ=0.6. Whereas JAC attains 0.27%, 0.59% and 2.28% WER in the parked, city-driving and highway conditions, respectively, IJAC attains 0.23%, 0.52% and 2.23% WER in the three driving conditions, a 9% relative WER reduction.


The name database was collected using the same procedure as the digit database. It contains 1325 English name utterances collected in cars; the utterances are therefore noisy. A further difficulty arises from the multiple pronunciations of names. It is therefore interesting to examine the performance of the different compensation techniques on this database.


The baseline acoustic model was a generative tied-mixture HMM (GTM-HMM) (see Yao, supra, incorporated herein by reference), trained in two stages. The first stage trained the acoustic model on the Wall Street Journal (WSJ) corpus with a manual dictionary. Decision-tree-based state tying was applied to train a gender-dependent acoustic model; the resulting model had one mixture per state and 9573 mean vectors. In the second stage, a mixture-tying mechanism was applied to tie mixture components from a pool of Gaussian densities. After the mixture tying, the acoustic model was retrained on the WSJ database.


The recognition results are summarized in Table 3, in which IJAC is again compared with JAC. Features were 10-dimensional MFCC and its delta coefficients.

TABLE 3
Word error rate of name recognition

WER (%)    Parked   City Driving   Highway
Baseline   2.2      50.2           82.9
JAC        0.28     1.04           4.99
IJAC       0.24     0.96           3.52


In Table 3, IJAC is configured with ξ=0.7 and ρ=0.6. Table 3 shows several things. First, performance of the baseline (without noise robustness techniques) degrades severely as noise increases. Second, JAC substantially reduces the WER for all driving conditions. Third, IJAC's performance is significantly better than JAC's under all driving conditions.


Table 4 shows relative word error rate reduction of IJAC as compared to baseline and JAC.

TABLE 4
Relative word error rate reduction (ERR) of name recognition achieved by IJAC as compared to the baseline and JAC

ERR (%)             Parked   City Driving   Highway
IJAC vs. baseline   89.1     98.1           95.8
IJAC vs. JAC        14.3     7.7            29.5


It is observed that IJAC performs consistently better than JAC under all driving conditions. More importantly, in the highway condition, IJAC achieved a relative WER reduction of 29.5% as compared to JAC. Together with the digit experiments set forth above, these results confirm the analysis surrounding Equation (20): because IJAC does not rely on that assumption, it in principle performs better than JAC at high noise levels.


Notice that the segmental updating technique of Equations (21) and (22) may be used to implement IJAC. It is thus interesting to study the effects of the forgetting factor ρ on system performance.


Accordingly, turning now to FIG. 6, illustrated is a plot of word error rate by IJAC as a function of a forgetting factor. FIG. 6 plots the word error rate achieved by IJAC (ξ=0.8) as a function of the forgetting factor ρ, together with that of JAC, in the different driving conditions.


Several things are evident. First, performance by IJAC in the highway condition is significantly better than JAC's; the WER reduction at ρ=0.4 reaches 25.3%, and the highest WER reduction, 38.5%, is achieved at ρ=1.0. Second, IJAC's performance does not differ much with varying forgetting factor ρ in any of the three driving conditions. Third, because the convolutive distortion varies slowly, the forgetting factor for segmental updating does not have much effect on performance.


Distortion factors are updated by Equation (23), which uses a discounting factor ξ to modify the previous estimate. As suggested above, this allows IJAC to accommodate modeling error.


Accordingly, turning now to FIG. 7, illustrated is a plot of word error rate by IJAC as a function of a discounting factor. FIG. 7 plots the word error rate of IJAC (ρ=0.6) as a function of ξ, together with the performance of JAC. The right-most points show the performance of updating the convolutive distortion without a discounting factor, corresponding to ξ=1.0.


The following observations may be made. First, performance in the parked condition is similar to that achieved by JAC and does not vary much with changes in ξ. Second, significant performance differences arise between IJAC and JAC in the highway condition; the highest WER reduction, 30.6%, is achieved at ξ=0.8. Furthermore, because the highway condition has a particularly low SNR, IJAC achieves better performance than JAC over the wide range 0.2≦ξ≦0.9. Third, a certain range of ξ, here 0.3≦ξ≦0.8, makes IJAC perform better than JAC under all driving conditions.


The first and second observations suggest that IJAC is indeed able to perform better than JAC owing to its stricter formulae, Equations (16) and (17), for accumulating sufficient statistics. The above results also confirm the effectiveness of a discounting factor in dealing with possible modeling error.


Now, the performance of IJAC as a function of the discounting factor ξ and the forgetting factor ρ will be described. Accordingly, turning now to FIGS. 8, 9 and 10, illustrated are plots of performance by IJAC as a function of discounting factor and forgetting factor in the parked condition (FIG. 8), the city-driving condition (FIG. 9) and the highway condition (FIG. 10). Each figure plots WER (%) as a function of ξ and ρ. The WER is plotted on a log10 scale to reveal detailed performance differences due to different ξ and ρ. The following observations result.


First, the worst performance in all three conditions occurs at ξ=1.0, ρ=1.0, which corresponds to the following assumptions: (1) the distortions are stationary (ρ=1.0) and (2) no modeling error results from the simplifications (ξ=1.0). Those skilled in the pertinent art should understand that these two assumptions are rarely correct.


Second, ranges of ξ and ρ exist in which IJAC achieves the lowest WER; however, the best ranges depend on the driving condition. For example, the best range may be 0.4≦ξ≦0.8 and 0.4≦ρ≦1.0 for the highway condition, whereas it may be ξ≦0.6 and ρ≦0.8 for the city-driving condition. Performance in the parked condition appears to be independent of ξ and ρ, except at the extreme ξ=1.0, ρ=1.0 mentioned above. Nevertheless, IJAC is able to achieve low WER within a wide range of ξ and ρ.


Although the present invention has been described in detail, those skilled in the art should understand that they can make various changes, substitutions and alterations herein without departing from the spirit and scope of the invention in its broadest form.

Claims
  • 1. A system for noisy automatic speech recognition employing joint compensation of additive and convolutive distortions, comprising: an additive distortion factor estimator configured to estimate an additive distortion factor; an acoustic model compensator coupled to said additive distortion factor estimator and configured to use estimates of a convolutive distortion factor and said additive distortion factor to compensate acoustic models and recognize a current utterance; an utterance aligner coupled to said acoustic model compensator and configured to align said current utterance using recognition output; and a convolutive distortion factor estimator coupled to said utterance aligner and configured to estimate an updated convolutive distortion factor based on said current utterance using first-order differential terms but disregarding log-spectral domain variance terms.
  • 2. The system as recited in claim 1 wherein said convolutive distortion factor estimator is further configured to estimate said updated convolutive distortion factor based on a discounting factor.
  • 3. The system as recited in claim 1 wherein said convolutive distortion factor estimator is further configured to estimate said updated convolutive distortion factor based on a forgetting factor.
  • 4. The system as recited in claim 1 wherein said convolutive distortion factor estimator is further configured to obtain sufficient statistics for each state, mixture component and frame of said current utterance.
  • 5. The system as recited in claim 1 wherein said additive distortion factor estimator is configured to estimate said additive distortion factor from non-speech segments of said current utterance.
  • 6. The system as recited in claim 1 wherein said additive distortion factor estimator is configured to estimate said additive distortion factor by averaging initial frames of input features.
  • 7. The system as recited in claim 1 wherein said system is embodied in a digital signal processor of a mobile telecommunication device.
  • 8. A method of noisy automatic speech recognition employing joint compensation of additive and convolutive distortions, comprising: estimating an additive distortion factor; using estimates of a convolutive distortion factor and said additive distortion factor to compensate acoustic models and recognize a current utterance; aligning said current utterance using recognition output; and estimating an updated convolutive distortion factor based on said current utterance using first-order differential terms but disregarding log-spectral domain variance terms.
  • 9. The method as recited in claim 8 wherein said estimating said updated convolutive distortion factor comprises estimating said updated convolutive distortion factor based on a discounting factor.
  • 10. The method as recited in claim 8 wherein said estimating said updated convolutive distortion factor comprises estimating said updated convolutive distortion factor based on a forgetting factor.
  • 11. The method as recited in claim 8 wherein said estimating said updated convolutive distortion factor comprises obtaining sufficient statistics for each state, mixture component and frame of said current utterance.
  • 12. The method as recited in claim 8 wherein said estimating said additive distortion factor comprises estimating said additive distortion factor from non-speech segments of said current utterance.
  • 13. The method as recited in claim 8 wherein said estimating said additive distortion factor comprises estimating said additive distortion factor by averaging initial frames of input features.
  • 14. The method as recited in claim 8 wherein said method is carried out in a digital signal processor of a mobile telecommunication device.
  • 15. A digital signal processor (DSP), comprising: data processing and storage circuitry controlled by a sequence of executable instructions configured to: estimate an additive distortion factor; use estimates of a convolutive distortion factor and said additive distortion factor to compensate acoustic models and recognize a current utterance; align said current utterance using recognition output; and estimate an updated convolutive distortion factor based on said current utterance using first-order differential terms but disregarding log-spectral domain variance terms.
  • 16. The DSP as recited in claim 15 wherein said instructions estimate said updated convolutive distortion factor based on a discounting factor.
  • 17. The DSP as recited in claim 15 wherein said instructions estimate said updated convolutive distortion factor based on a forgetting factor.
  • 18. The DSP as recited in claim 15 wherein said instructions obtain sufficient statistics for each state, mixture component and frame of said current utterance.
  • 19. The DSP as recited in claim 15 wherein said instructions estimate said additive distortion factor from non-speech segments of said current utterance.
  • 20. The DSP as recited in claim 15 wherein said instructions estimate said additive distortion factor by averaging initial frames of input features.
CROSS-REFERENCE TO RELATED APPLICATION

The present invention is related to U.S. patent application No. [Attorney Docket No. TI-39685] by Yao, entitled “System and Method for Creating Generalized Tied-Mixture Hidden Markov Models for Automatic Speech Recognition,” filed concurrently herewith, commonly assigned with the present invention and incorporated herein by reference.