This invention relates to speech recognition and more particularly to compensation for both background noise and channel distortion.
A speech recognizer trained with speech data collected in a relatively quiet office environment and then operated in a mobile environment may fail due to at least two distortion sources: background noise and microphone changes. The background noise may, for example, come from a computer fan, a car engine, and/or road noise. The microphone changes may be due to the quality of the microphone, whether the microphone is hand-held or hands-free, and the position of the microphone relative to the mouth. In mobile applications of speech recognition, both the microphone conditions and the background noise are subject to change.
Cepstral Mean Normalization (CMN) removes the utterance mean and is a simple and effective way of dealing with convolutive distortion such as telephone channel distortion. See “Effectiveness of Linear Prediction Characteristics of the Speech Wave for Automatic Speaker Identification and Verification” by B. Atal in Journal of the Acoustical Society of America, Vol. 55: 1304–1312, 1974. Spectral Subtraction (SS) reduces background noise in the feature space. See “Suppression of Acoustic Noise in Speech Using Spectral Subtraction” by S. F. Boll in IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-27(2): 113–129, April 1979. Parallel Model Combination (PMC) gives an approximation of speech models in noisy conditions from noise-free speech models and noise estimates. See “An Improved Approach to the Hidden Markov Model Decomposition of Speech and Noise” by M. J. F. Gales and S. Young in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Volume 1, pages 233–236, U.S.A., April 1992. These techniques do not require any training data.
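The CMN idea above can be sketched in a few lines; the function name and data below are illustrative, not from the cited work. A stationary channel adds an approximately constant bias to every cepstral frame, so subtracting the per-utterance mean cancels it:

```python
import numpy as np

def cepstral_mean_normalization(cepstra):
    """Subtract the utterance mean from each cepstral frame.

    cepstra: array of shape (num_frames, num_coeffs).
    """
    return cepstra - cepstra.mean(axis=0)

# A constant channel bias disappears after CMN:
rng = np.random.default_rng(0)
frames = rng.standard_normal((50, 13))            # clean cepstral frames
biased = frames + 0.5                             # simulated constant channel bias
clean_cmn = cepstral_mean_normalization(frames)
biased_cmn = cepstral_mean_normalization(biased)  # identical to clean_cmn
```

Because the bias is removed only per utterance, CMN handles convolutive (channel) distortion but not additive noise, which motivates joint compensation of the two.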
Joint compensation of additive noise and convolutive noise can be achieved by introducing a channel model and a noise model. A spectral bias for additive noise and a cepstral bias for convolutive noise are introduced in an article by M. Afify, Y. Gong, and J. P. Haton entitled “A General Joint Additive and Convolutive Bias Compensation Approach Applied to Noisy Lombard Speech Recognition” in IEEE Trans. on Speech and Audio Processing, 6(6): 524–538, November 1998. The two biases can be calculated by applying Expectation Maximization (EM) in both the spectral and the convolutive domains. A procedure by J. L. Gauvain et al. is presented to calculate the convolutive component, which requires rescanning the training data. See J. L. Gauvain, L. Lamel, M. Adda-Decker, and D. Matrouf, “Developments in Continuous Speech Dictation using the ARPA NAB News Task,” in Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 73–76, Detroit, 1995. Solution of the convolutive component by a steepest-descent method has also been reported. See Y. Minami and S. Furui, “A Maximum Likelihood Procedure for a Universal Adaptation Method Based on HMM Composition,” in Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, pages 129–132, Detroit, 1995. A method by Y. Minami and S. Furui needs additional universal speech models, and re-estimation of the channel distortion with the universal models when the channel changes. See Y. Minami and S. Furui, “Adaptation Method Based on HMM Composition and EM Algorithm,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, pages 327–330, Atlanta, 1996.
The technique presented by M. F. J. Gales in “PMC for Speech Recognition in Additive and Convolutional Noise,” Technical Report TR-154, CUED/F-INFENG, December 1993, needs two passes over the test utterance (parameter estimation followed by recognition), several transformations between the cepstral and spectral domains, and a Gaussian mixture model for clean speech.
Alternatively, the nonlinear changes of both types of distortion can be approximated by linear equations, assuming that the changes are small. A Jacobian approach, which models speech model parameter changes as the product of a Jacobian matrix and the difference in noisy conditions, and statistical linear approximation follow this direction. See S. Sagayama, Y. Yamaguchi, and S. Takahashi, “Jacobian Adaptation of Noisy Speech Models,” in Proceedings of IEEE Automatic Speech Recognition Workshop, pages 396–403, Santa Barbara, Calif., USA, December 1997, IEEE Signal Processing Society. Also see “Statistical Linear Approximation for Environment Compensation” by N. S. Kim, IEEE Signal Processing Letters, 5(1): 8–10, January 1998.
Maximum Likelihood Linear Regression (MLLR) transforms HMM parameters to match the distortion factors. See “Maximum Likelihood Linear Regression for Speaker Adaptation of Continuous Density HMMs” by C. J. Leggetter and P. C. Woodland in Computer, Speech and Language, 9(2): 171–185, 1995. This method is effective for both distortion sources but requires training data and introduces speaker dependence.
In accordance with one embodiment of the present invention, a new method is disclosed that simultaneously handles noise and channel distortion to make a speaker-independent system robust to a wide variety of noises and channel distortions.
Referring to the figures, the method proceeds in a sequence of steps. The first Step 1 is to form a pool of mean vectors $m_{p,j,k}$ from the trained HMM models, where $p$ indexes the HMM, $j$ the state, and $k$ the mixing component.
The second Step 2 is to calculate the mean mel-scaled cepstrum coefficient (MFCC) vector over the training database: scan all the data and calculate the mean to get b.
The third Step 3 is to add the mean b to each vector of the mean vector pool $m_{p,j,k}$, per equation (1), to get:
$$\bar{m}_{p,j,k} = m_{p,j,k} + b. \qquad (1)$$
For example, there could be 100 HMMs, 3 states per HMM and 2 vectors per state, or a total of 600 vectors.
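Steps 1 through 3 can be sketched as follows, a minimal illustration using the sizes quoted above (100 HMMs, 3 states per HMM, 2 vectors per state) and random stand-in data; none of the names come from the original implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: pool of HMM mean vectors m_{p,j,k}
# (100 HMMs x 3 states x 2 mixing components x 13 MFCC coefficients).
pool = rng.standard_normal((100, 3, 2, 13))

# Step 2: mean MFCC vector b over the training database
# (random frames stand in for the real training data).
training_frames = rng.standard_normal((10000, 13))
b = training_frames.mean(axis=0)

# Step 3, equation 1: add b back to every vector of the pool,
# undoing the mean normalization applied during training.
pool_bar = pool + b
```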
In the fourth Step 4, for a given input test utterance, an estimate $\tilde{X}$ of the background noise vector is calculated.
Let $u_l \triangleq [u_l^1, \ldots, u_l^D]$ and $v_l \triangleq [v_l^1, \ldots, v_l^D]$ be two $D$-dimensional log-spectral vectors. We introduce the combination operator $\oplus$ such that:

$$w_l \triangleq u_l \oplus v_l = [w_l^1, \ldots, w_l^D] \qquad (2)$$

with

$$w_l^i = \log\!\left(\exp(u_l^i) + \exp(v_l^i)\right), \quad i = 1, \ldots, D. \qquad (3)$$
In Step 5, we calculate the mean vectors adapted to the noise $\tilde{X}$ using equation 4:

$$\hat{m}_{p,j,k} = \mathrm{IDFT}\!\left(\mathrm{DFT}(\bar{m}_{p,j,k}) \oplus \mathrm{DFT}(\tilde{X})\right) \qquad (4)$$

where DFT and IDFT are the DFT and inverse DFT operations, respectively, and $\hat{m}_{p,j,k}$ is the noise-compensated mean vector.
Equation 4 involves several operators. DFT is the Discrete Fourier Transform and IDFT is the Inverse Discrete Fourier Transform, which are used to convert from the cepstrum domain to the log-spectrum domain and vice versa. The operator $\oplus$ is applied to two log-spectral vectors to produce a log-spectral vector representing the linear sum of the underlying spectra, and is defined by equations 2 and 3: equation 2 states that $\oplus$ operates on two $D$-dimensional vectors $u_l$ and $v_l$ and yields the $D$-dimensional vector $[w_l^1, \ldots, w_l^D]$, whose components are given by equation 3.
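A sketch of the operator $\oplus$ and the compensation of equation 4 follows. Two hedges apply: the log-sum is computed in a numerically stable form, and an orthonormal DCT-II matrix stands in for the DFT/IDFT pair as the invertible cepstrum/log-spectrum transform so that all arithmetic stays real-valued:

```python
import numpy as np

def log_add(u, v):
    """Operator ⊕ of equations 2-3: componentwise log(exp(u) + exp(v)),
    i.e. the log spectrum of the sum of the two underlying spectra."""
    m = np.maximum(u, v)
    return m + np.log1p(np.exp(-np.abs(u - v)))  # stable log-sum-exp

D = 8
k, n = np.meshgrid(np.arange(D), np.arange(D), indexing="ij")
C = np.sqrt(2.0 / D) * np.cos(np.pi * (n + 0.5) * k / D)
C[0, :] /= np.sqrt(2.0)      # orthonormal DCT-II matrix: C @ x transforms
                             # log spectrum -> cepstrum, C.T @ x inverts it

def compensate(m_bar, noise_cep):
    """Equation 4: adapt a cepstral mean vector to a cepstral noise estimate."""
    return C @ log_add(C.T @ m_bar, C.T @ noise_cep)

# Sanity check: a vanishingly quiet noise estimate leaves the model unchanged.
rng = np.random.default_rng(0)
m_bar = rng.standard_normal(D)
quiet = C @ (C.T @ m_bar - 30.0)   # noise 30 nats below the speech spectrum
m_hat = compensate(m_bar, quiet)   # close to m_bar
```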
In the following steps, we need to remove the mean vector $\hat{b}$ of the noisy data y over the noisy speech space N (from the resultant model). One could synthesize enough noisy data from the compensated models, but this requires a great deal of calculation. In accordance with the present invention, the vector is instead calculated using the statistics of the noisy models. The whole recognizer operates in CMN (cepstral mean normalization) mode, but the models of equation 4 are no longer mean normalized: equation 4 deals only with the additive noise. The second half of the processing is therefore to remove the cepstral mean of the models defined in equation 4, which is straightforward because those models are available in closed form. In Step 6, we integrate over all the samples generated by equation 4 to get the mean $\hat{b}$. Equation 5 expresses this integration:

$$\hat{b} = E[y] = \int_N y \, p(y) \, dy. \qquad (5)$$
Let H be the variable denoting HMM index, J be the variable for state index, and K be the variable for mixing component index.
Since

$$p(y \mid p, j, k) = N\!\left(y;\ \mathrm{IDFT}\!\left(\mathrm{DFT}(\bar{m}_{p,j,k}) \oplus \mathrm{DFT}(\tilde{X})\right),\ \Sigma_{p,j,k}\right) \qquad (6)$$

we have

$$\hat{b} = \sum_p P_H(p) \sum_j P_{J|H}(j \mid p) \sum_k P_{K|H,J}(k \mid p, j)\, \hat{m}_{p,j,k}. \qquad (7)$$
Equation 7 shows that $\hat{b}$ can be worked out analytically; it is not necessary to perform a physical generation and integration, since the integration reduces to sums over the HMMs, their states, and their mixing components. Finally, the estimated noise-compensated channel bias $\hat{b}$ is removed from the compensated model means to get the target model means. This is Step 7. The target model is:
$$\dot{m}_{p,j,k} = \hat{m}_{p,j,k} - \hat{b}. \qquad (8)$$
These resulting target model means are the desired modified parameters of the HMM models used in the recognizer. This operation is performed for each utterance.
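Steps 6 and 7 can be sketched with random stand-in means. For illustration, every HMM, state, and mixing component is taken as equally likely, so the probability-weighted sums of equation 7 collapse to a plain average (the general case would weight each term by its probability):

```python
import numpy as np

rng = np.random.default_rng(1)

# Noise-compensated means \hat m_{p,j,k} from equation 4:
# 100 HMMs x 3 states x 2 mixing components x 13 coefficients (stand-in data).
m_hat = rng.standard_normal((100, 3, 2, 13))

# Equation 7: \hat b as the probability-weighted sum of the compensated means;
# with equal probabilities this is a plain average over p, j and k.
b_hat = m_hat.mean(axis=(0, 1, 2))

# Equation 8, Step 7: subtract the bias to get the target model means.
m_dot = m_hat - b_hat
```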
Calculation of $\hat{b}$ requires knowledge of the probabilities of each PDF. There are two issues with these probabilities:
They need additional storage space.
They are dependent on the recognition task, e.g., the vocabulary and grammar.
Although it is possible to obtain the probabilities, we can also consider the following simplified cases.
The operations to calculate $\hat{b}$ can be simplified by assuming

$$P_H(p) = C, \quad P_{J|H}(j \mid p) = D, \quad P_{K|H,J}(k \mid p, j) = E. \qquad (10)$$
C, D and E are selected such that they represent equal probabilities. Therefore we have the following: C is chosen such that each HMM is equally likely, so C = 1/(number of HMM models); D is chosen such that each state of a given HMM, indexed by p, is equally likely, so D = 1/(number of states in HMM(p)); and E is chosen such that each mixing component of a state of an HMM is equally likely, where the state is indexed by j, so E = 1/(number of mixing components in HMM(p) state(j)).
In fact, the case described in Eq-10 amounts to averaging the compensated mean vectors $\hat{m}_{p,j,k}$. Referring to Eq-4 and Eq-1, it can be expected that this averaging removes the speech part $m_{p,j,k}$ just as CMN does. Therefore, Eq-7 can be further simplified into:
$$\hat{b} = \mathrm{IDFT}\!\left(\mathrm{DFT}(b) \oplus \mathrm{DFT}(\tilde{X})\right). \qquad (11)$$
The model $\dot{m}_{p,j,k}$ of Eq-8 is then used with CMN on the noisy speech.
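Equation 11 composes the training-data cepstral mean b directly with the noise estimate, avoiding any sum over models. A sketch, with a stable log-add and an orthonormal DCT-II matrix standing in for the DFT/IDFT pair (both illustrative choices, not mandated by the text):

```python
import numpy as np

def log_add(u, v):
    # equations 2-3: componentwise log(exp(u) + exp(v)), computed stably
    m = np.maximum(u, v)
    return m + np.log1p(np.exp(-np.abs(u - v)))

D = 8
k, n = np.meshgrid(np.arange(D), np.arange(D), indexing="ij")
C = np.sqrt(2.0 / D) * np.cos(np.pi * (n + 0.5) * k / D)
C[0, :] /= np.sqrt(2.0)   # orthonormal DCT-II, stand-in for DFT/IDFT

rng = np.random.default_rng(2)
b = rng.standard_normal(D)        # training-data cepstral mean (Step 2)
noise = rng.standard_normal(D)    # cepstral noise estimate \tilde X (Step 4)

# Equation 11: \hat b = IDFT(DFT(b) ⊕ DFT(\tilde X))
b_hat = C @ log_add(C.T @ b, C.T @ noise)
```

In the log-spectral domain the combined bias dominates both inputs, since the log of a sum of positive spectra is at least the log of either one.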
A database containing recordings in a car was used.
HMMs used in all experiments were trained using clean speech data. Utterance-based cepstral mean normalization was used.
This application claims priority under 35 USC § 119(e)(1) of provisional application No. 60/275,487, filed Mar. 14, 2001.
Number | Name | Date | Kind |
---|---|---|---|
5537647 | Hermansky et al. | Jul 1996 | A |
5924065 | Eberman et al. | Jul 1999 | A |
6691091 | Cerisara et al. | Feb 2004 | B1 |
6912497 | Gong | Jun 2005 | B1 |
Number | Date | Country | |
---|---|---|---|
20020173959 A1 | Nov 2002 | US |
Number | Date | Country | |
---|---|---|---|
60275487 | Mar 2001 | US |