Discriminative training of HMM models using maximum margin estimation for speech recognition

Information

  • Patent Application
  • Publication Number
    20070083373
  • Date Filed
    October 11, 2005
  • Date Published
    April 12, 2007
Abstract
An improved discriminative training method is provided for hidden Markov models. The method includes: defining a measure of separation margin for the data; identifying a subset of training utterances having utterances misrecognized by the models; defining a training criterion for the models based on maximizing the separation margin; formulating the training criterion as a constrained minimax optimization problem; and solving the constrained minimax optimization problem over the subset of training utterances, thereby discriminatively training the models.
Description
FIELD OF THE INVENTION

The present invention relates generally to discriminative model training and, more particularly, to an improved method for discriminative training of hidden Markov models (HMMs) based on maximum margin estimation.


BACKGROUND OF THE INVENTION

Discriminative training has been extensively studied over the past decade and has proved to be quite effective for improving automatic speech recognition performance. Minimum classification error (MCE) and maximum mutual information (MMI) are two of the more popular discriminative training methods. Despite significant progress in this area, many issues related to discriminative training remain unsolved. One issue reported by many researchers is that discriminative training methods for speech recognition suffer from poor generalization capability. In other words, discriminative training can dramatically reduce the error rate on the training data, but such significant performance gains cannot be maintained on unseen test data.


Therefore, it is desirable to provide a discriminative training method for hidden Markov models which improves the generalization capability of the models.


SUMMARY OF THE INVENTION

An improved discriminative training method is provided for hidden Markov models. The method includes: defining a measure of separation margin for the data; identifying a subset of training utterances having utterances misrecognized by the models; defining a training criterion for the models based on the principle of maximizing the separation margin; formulating the training criterion as a constrained minimax optimization problem; and solving the constrained minimax optimization problem over the subset of training utterances, thereby discriminatively training the models.


Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.







DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

In automatic speech recognition, given any speech utterance X, a speech recognizer will choose the word Ŵ as output based on the MAP decision rule as follows:
\hat{W} = \arg\max_{W} P(W|X) = \arg\max_{W} P(W) \cdot P(X|W) = \arg\max_{W} P(W) \cdot P(X|\lambda_W) = \arg\max_{W} F(X|\lambda_W)   (1)

where λW denotes the HMM representing the word W and F(X|λW)=P(W)·P(X|λW) is called the discriminant function. Depending on the problem of interest, a word W is used herein to mean any linguistic unit, such as a phoneme, a syllable, a word, a phrase or a sentence. For discussion purposes, this work focuses on the hidden Markov models λW and assumes P(W) is fixed. While the following description is provided with reference to hidden Markov models, it is readily understood that the broader aspects of the present invention are also applicable to other types of acoustic models.
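As a concrete illustration, the decision rule in equation (1) simply picks the word with the largest discriminant score. The following minimal Python sketch assumes the scores have already been produced by some HMM scorer; the word labels and score values are hypothetical.

```python
def map_decode(scores):
    """Pick the word W with the largest discriminant score F(X | lambda_W).

    `scores` maps each candidate word W to F(X | lambda_W) in the log domain,
    mirroring the MAP rule of equation (1). How the scores are produced by the
    HMMs is outside the scope of this sketch.
    """
    return max(scores, key=scores.get)

# Hypothetical log-domain scores for three candidate words.
scores = {"yes": -42.7, "no": -45.1, "maybe": -44.0}
print(map_decode(scores))  # -> yes
```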


For a speech utterance Xi, assuming its true word identity is WiT, the multi-class separation margin for Xi is defined as:
d(X_i) = F(X_i|\lambda_{W_i^T}) - \max_{W_j \in \Omega,\ W_j \neq W_i^T} F(X_i|\lambda_{W_j})   (2)
       = \min_{W_j \in \Omega,\ W_j \neq W_i^T} \left[ F(X_i|\lambda_{W_i^T}) - F(X_i|\lambda_{W_j}) \right]   (3)

where Ω denotes the set of all possible words.


Obviously, if d(Xi)<0, Xi will be incorrectly recognized by the current HMM set, denoted as Λ; if d(Xi)>0, Xi will be correctly recognized by the models Λ.
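The margin in equations (2)-(3) can be computed directly from the per-word discriminant scores. The sketch below reuses the hypothetical score dictionary from the previous example; it illustrates the definition only, not any particular scorer.

```python
def separation_margin(scores, true_word):
    """Multi-class separation margin d(X_i) from equations (2)-(3).

    `scores[w]` is the discriminant F(X_i | lambda_w) and `true_word` is W_i^T.
    A positive value means X_i is correctly recognized by the current models;
    a negative value means it is misrecognized.
    """
    best_competitor = max(score for w, score in scores.items() if w != true_word)
    return scores[true_word] - best_competitor

# Hypothetical case: the true word "no" is beaten by "yes", so the margin is negative.
print(separation_margin({"yes": -42.7, "no": -45.1, "maybe": -44.0}, "no"))  # approx. -2.4
```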


Given a set of training data D={X1, X2, . . . , XN}, we usually know the true word identities for all utterances in D, denoted as L={W1T, W2T, . . . , WNT}. Thus, we can calculate the separation margin (also referred to hereafter as the margin) for every utterance in D based on the definition in equation (2) or (3). If we want to estimate the HMM parameters Λ, one desirable estimation criterion is to minimize the total number of utterances in the whole training set which have negative margins, as in standard MCE estimation. Furthermore, motivated by the large margin principle in machine learning, even for those utterances which have positive margins, we may still want to maximize the minimum margin among them, moving toward an HMM-based large margin classifier. According to machine learning theory, a large margin classifier usually leads to a much lower generalization error rate on a new test set and shows a more robust and better generalization capability. Herein, we show how to estimate HMMs for speech recognition based on the above-mentioned principle of maximizing the minimum multi-class separation margin.


First of all, from all utterances in D, we need to identify a subset of utterances,

S = {Xi | Xi ∈ D and 0 ≤ d(Xi) ≤ γ}  (4)

where γ>0 is a pre-set positive number. By analogy, we call S the support vector set, and each utterance in S is called a support token; a support token has a relatively small positive margin among all utterances in the training set D. In other words, all utterances in S are relatively close to the classification boundary even though all of them lie in the correct decision regions. To achieve better generalization power, it is desirable to adjust the decision boundaries, which are implicitly determined by all of the models, by optimizing the HMM parameters Λ so that all support tokens are as far from the decision boundaries as possible; this results in a robust classifier with better generalization capability. This idea leads to estimating the HMM models Λ based on the criterion of maximizing the minimum margin of all support tokens, which is called large margin estimation (LME) or maximum margin estimation (MME) of HMMs:
\tilde{\Lambda} = \arg\max_{\Lambda} \min_{X_i \in S} d(X_i)   (5)

where the above maximization and minimization are performed subject to the constraint that d(Xi)>0 for all Xi∈S. The HMM models Λ̃ estimated in this way are called large margin or maximum margin HMMs. For simplicity of explanation, we will only use the term large margin estimation hereafter.
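Selecting the support set of equation (4) from precomputed margins is a simple filtering step. The minimal Python sketch below uses hypothetical utterance indices and margin values purely for illustration.

```python
def support_set_lme(margins, gamma):
    """Support set S of equation (4): correctly recognized tokens whose margin
    is small, i.e. 0 <= d(X_i) <= gamma."""
    return [i for i, d in margins.items() if 0 <= d <= gamma]

# Hypothetical margins keyed by utterance index.
margins = {0: 0.3, 1: 5.2, 2: -1.1, 3: 0.9}
print(support_set_lme(margins, gamma=1.0))  # -> [0, 3]
```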


Considering equation (3), large margin HMMs can be equivalently estimated as follows:
\tilde{\Lambda} = \arg\max_{\Lambda} \min_{X_i \in S,\ W_j \in \Omega,\ W_j \neq W_i^T} \left[ F(X_i|\lambda_{W_i^T}) - F(X_i|\lambda_{W_j}) \right]   (6)

subject to

F(Xi|λWiT)−F(Xi|λwj)>0  (7)

for all Xi∈S and Wj∈Ω, Wj≠WiT.


Finally, the above optimization can be converted into a standard minimax optimization problem as:
\tilde{\Lambda} = \arg\min_{\Lambda} \max_{X_i \in S,\ W_j \in \Omega,\ W_j \neq W_i^T} \left[ F(X_i|\lambda_{W_j}) - F(X_i|\lambda_{W_i^T}) \right]   (8)

where the minimax optimization is subject to the following constraint:

F(Xi|λwj)−F(Xi|λWiT)<0  (9)

for all Xi∈S and Wj∈Ω, Wj≠WiT.


Since large margin estimation is derived from support vector machines in machine learning, the definition of the training set is analogous to that of the support vector set for support vector machines as seen in equation (4) above. In other words, the support vector set only consists of positive tokens (i.e., training data correctly recognized by the baseline model). Negative or misrecognized tokens are discarded in the large margin estimation approach. As a result, large margin estimation typically uses minimum classification error training to bootstrap the training (i.e., uses the MCE model as a seed model to start the training).


The present invention proposes to further include the negative tokens in the support vector set. The new support vector set is defined as follows:

S = {Xi | Xi ∈ D and d(Xi) ≤ γ}  (10)

where γ is a positive constant. In other words, a subset of training data is identified which includes data misrecognized by the models. However, the subset of training data may also include data correctly recognized by the models. Accordingly, the minimax optimization problem may be solved using this new support vector set. It is readily understood that different optimization approaches for solving this problem are within the scope of the present invention.
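Relative to equation (4), the only change is dropping the lower bound, so misrecognized tokens stay in the set. A minimal sketch, reusing the hypothetical margins from the earlier example:

```python
def support_set_with_negatives(margins, gamma):
    """Support set S of equation (10): every token with d(X_i) <= gamma is kept,
    so misrecognized tokens (d(X_i) < 0) now take part in the training."""
    return [i for i, d in margins.items() if d <= gamma]

print(support_set_with_negatives({0: 0.3, 1: 5.2, 2: -1.1, 3: 0.9}, gamma=1.0))  # -> [0, 2, 3]
```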


Assuming there are misrecognized tokens, the minimization in the criterion of equation (5) will choose the most negative token, which is farthest from the decision boundary and lies in the wrong decision region. This is very different from the original large margin estimation training, where the minimization will always choose the token that is nearest to the decision boundary but lies in the correct decision region. According to the criterion, the maximization will push the negative tokens across the decision boundaries so that they will have positive margins. This is similar to minimum classification error training, but in a more direct and effective fashion. In this way, large margin estimation no longer needs to use MCE to bootstrap, thereby completely removing any need for MCE in the training process.


The present invention directly applies large margin estimation (LME) to both misrecognized data and correctly recognized data, as opposed to the previous method, in which only correctly recognized training data can be used in the training. It takes full advantage of LME because more training data participate in the training, and it can therefore achieve higher accuracy than the existing LME method. Furthermore, in large vocabulary continuous speech recognition (LVCSR) tasks, only a very small percentage of training data will be correctly recognized by the baseline models. In the previous LME method, the benefit of large margin estimation is greatly limited by the lack of applicable training data, or the method may not be applicable at all when none of the training data is correctly recognized, which is common for LVCSR tasks. This invention has no such problem and can be directly applied to LVCSR tasks. Another advantage of this invention is that, unlike the existing LME method, it does not need MCE to bootstrap the training, so the overall training time is shorter.


The constraints for large margin estimation do not guarantee the existence of a minimax point. As an illustration, assume a simple case with only two classes, m1 and m2, and a support token X close to the decision boundary. If we pull both m1 and m2 toward X at the same time, we can keep the boundary unchanged but increase the margin defined in equation (3) as much as we want. As the models move toward X, the absolute values of both F(X|m1) and F(X|m2) increase, and so does the margin, even though the relative position of X with respect to the boundary does not change at all.


More constraints must be introduced into the minimax optimization procedure to ensure that an optimal point exists. In one exemplary approach, a localized optimization strategy is adopted. Rather than optimizing the parameters of all models at the same time, only one selected model is adjusted in each step, and the process then iterates to update another model until the minimum margin is maximized.


The iterative localized optimization may be summarized as follows (an illustrative sketch in code is given after the list):

    • Repeat
      • 1. Identify the support set S based on the current model set Λ(n).
      • 2. Choose the support token, say Xk, from S that currently gives the minimum margin; choose the true model of Xk, say λk(n), for optimization in this iteration.
      • 3. Update ONLY the model λk so as to maximize the minimum margin (i.e., solve the minimax problem over the terms involving λk):
        λk(n) → λk(n+1).
      • 4. n = n+1.
    • until some convergence conditions are met.
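As referenced above, the following Python sketch shows the shape of this loop. The helpers margin_fn (returning an utterance's margin and true word under the current models) and update_model (standing in for the single-model minimax step of equation (11)) are illustrative assumptions, not part of the method's specification.

```python
def iterative_localized_lme(models, data, gamma, margin_fn, update_model,
                            max_iters=50, tol=1e-3):
    """Schematic version of the iterative localized optimization loop.

    `models` maps each word to its HMM parameters; `margin_fn(models, x)` returns
    (d(x), true_word); `update_model` is a placeholder for the minimax step of
    equation (11). None of these helpers is specified by the text itself.
    """
    prev_min_margin = None
    for _ in range(max_iters):
        # Step 1: identify the support set S under the current model set.
        margins = {i: margin_fn(models, x) for i, x in enumerate(data)}
        support = {i: md for i, md in margins.items() if md[0] <= gamma}
        if not support:
            break
        # Step 2: pick the support token with the minimum margin and its true model.
        k = min(support, key=lambda i: support[i][0])
        min_margin, true_word = support[k]
        # Step 3: update ONLY that model so the minimum margin grows.
        models[true_word] = update_model(models[true_word], data[k], models)
        # Step 4: iterate until the minimum margin stops improving.
        if prev_min_margin is not None and abs(min_margin - prev_min_margin) < tol:
            break
        prev_min_margin = min_margin
    return models
```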


In the above iterative localized optimization method, only one model, say λk, is updated in each iteration based on the minimax optimization given in equation (8), so we only need to consider those terms which are relevant to the currently selected model. The minimax optimization can be re-formulated as:
\lambda_k^{(n+1)} = \arg\min_{\lambda_k} \max_{X_i \in S,\ j \neq i,\ (j = k\ \text{or}\ i = k)} \left[ F(X_i|\lambda_{W_j}) - F(X_i|\lambda_{W_i^T}) \right]   (11)

subject to the constraints in equation (10). This localized minimax optimization can be solved numerically using optimization software tools. However, given the large number of parameters in HMMs, it is usually too slow to use a general-purpose minimax tool to solve this optimization problem.


One alternative is to use a GPD-based algorithm to solve the minimax problem in equation (11) in an approximate way. First of all, based on equation (11), we construct a differentiable objective function as follows:
Q(\lambda_k) = \frac{1}{\eta} \log \left\{ \sum_{X_i \in S,\ j \neq i,\ (i = k\ \text{or}\ j = k)} \exp\!\left[ \eta F(X_i|\lambda_{W_j}) - \eta F(X_i|\lambda_{W_i}) \right] \right\}   (12)

where η>1 is a constant. As η→∞, Q(λk) will approach the maximization in equation (11). Then, the GPD algorithm can be used to update the model parameters, λk, in order to minimize the above approximate objective function, Q(λk).
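Numerically, the surrogate in equation (12) is a scaled log-sum-exp over the relevant score differences. The sketch below evaluates it from precomputed score pairs; how those scores are obtained is outside its scope.

```python
import numpy as np

def smoothed_minimax_objective(score_pairs, eta=10.0):
    """Differentiable surrogate Q(lambda_k) of equation (12).

    `score_pairs` is a list of (F(X_i | lambda_Wj), F(X_i | lambda_Wi)) tuples for
    the terms involving the currently selected model; eta > 1 controls how closely
    the log-sum-exp tracks the true maximum.
    """
    diffs = np.array([f_j - f_i for f_j, f_i in score_pairs])
    z = eta * diffs
    m = z.max()
    # log-sum-exp with the maximum factored out for numerical stability
    return (m + np.log(np.exp(z - m).sum())) / eta

# As eta grows, the surrogate approaches the plain maximum of the differences.
pairs = [(-44.0, -42.7), (-45.1, -42.7)]
print(smoothed_minimax_objective(pairs, eta=50.0))  # close to max diff = -1.3
```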


Assume each speech unit, e.g., a word W, is modeled by an N-state CDHMM with parameter vector λ=(π, A, θ), where π is the initial state distribution, A={aij | 1≤i, j≤N} is the transition matrix, and θ is the parameter vector composed of mixture parameters θi={wik, mik, rik}, k=1, 2, . . . , K, for each state i, where K denotes the number of Gaussian mixture components in each state. The state observation p.d.f. is assumed to be a mixture of multivariate Gaussian distributions. In many cases, we prefer to use multivariate Gaussian distributions with diagonal precision matrices. Given any speech utterance Xi={xi1, xi2, . . . , xiT}, F(Xi|λwj) can be calculated as:
F(X_i|\lambda_{W_j}) = \log\!\left( P(X_i|\lambda_{W_j})\, P(W_j) \right) \approx \log P(W_j) + \log \pi_{s_1} + \sum_{t=2}^{T} \log a_{s_{t-1} s_t} + \frac{1}{2} \sum_{t=1}^{T} \sum_{d=1}^{D} \left[ \log r_{s_t l_t d} - r_{s_t l_t d} \left( x_{itd} - m_{s_t l_t d} \right)^2 \right]   (13)
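To make equation (13) concrete, the sketch below evaluates the score along a given state/mixture alignment with diagonal precisions. The container hmm (with fields log_pi, log_A, means, precisions) and the precomputed alignment are illustrative assumptions; obtaining the alignment (e.g., by Viterbi decoding) is not shown.

```python
import numpy as np

def discriminant_score(x, state_path, mix_path, hmm, log_prior):
    """Evaluate F(X_i | lambda_W) along a fixed alignment, following equation (13).

    `x` is a (T, D) matrix of feature vectors; `state_path[t]` and `mix_path[t]`
    give the aligned state and mixture component for frame t; `hmm` is a
    hypothetical container with log_pi (initial log-probabilities), log_A (log
    transition matrix), means[s][k] and precisions[s][k] (diagonal precisions r).
    """
    T, _ = x.shape
    score = log_prior + hmm.log_pi[state_path[0]]
    for t in range(1, T):
        score += hmm.log_A[state_path[t - 1], state_path[t]]
    for t in range(T):
        s, k = state_path[t], mix_path[t]
        r = np.asarray(hmm.precisions[s][k])   # r_{s_t l_t d}
        m = np.asarray(hmm.means[s][k])        # m_{s_t l_t d}
        score += 0.5 * np.sum(np.log(r) - r * (x[t] - m) ** 2)
    return score
```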


Here we consider only a simple case in which we re-estimate only the mean vectors of the CDHMMs based on the large margin principle, while keeping all other CDHMM parameters constant during the large margin estimation. For any utterance Xi in the support token set S, we can re-write F(Xi|λi) and F(Xi|λj) according to equation (13) as follows:
F(X_i|\lambda_i) \cong C' + \frac{1}{2} \sum_{t=1}^{T} \sum_{d=1}^{D} \left[ \log r^{i}_{s_t l_t d} - r^{i}_{s_t l_t d} \left( x_{itd} - m^{i}_{s_t l_t d} \right)^2 \right]   (14)
F(X_i|\lambda_j) \cong C'' + \frac{1}{2} \sum_{t=1}^{T} \sum_{d=1}^{D} \left[ \log r^{j}_{s_t l_t d} - r^{j}_{s_t l_t d} \left( x_{itd} - m^{j}_{s_t l_t d} \right)^2 \right]   (15)

where C′ and C″ are two constants independent of the mean vectors, and the superscripts i and j distinguish the parameters of models λi and λj under their respective optimal alignments. In this case, the discriminant functions F(Xi|λi) and F(Xi|λj) can be represented as summations of quadratic functions of the CDHMM mean values. Then we can represent the decision margin F(Xi|λi)−F(Xi|λj) as:

F(X_i|\lambda_i) - F(X_i|\lambda_j) \cong C - \frac{1}{2} \sum_{t=1}^{T} \sum_{d=1}^{D} \left[ r^{i}_{s_t l_t d} \left( x_{itd} - m^{i}_{s_t l_t d} \right)^2 - r^{j}_{s_t l_t d} \left( x_{itd} - m^{j}_{s_t l_t d} \right)^2 \right]   (16)

where C is a constant independent of the mean vectors (it absorbs C′, C″ and the fixed log-precision terms).

From eqs. (12) and (16), it is straightforward to calculate the gradient of the objective function, Q(λk), with respect to each mean vector in the model λk.


Finally, we can use the GPD algorithm to adjust λk to minimize the objective function as follows:
\mu_{sql}^{(n+1)} = \mu_{sql}^{(n)} - \left. \frac{\partial Q(\lambda_k)}{\partial \mu_{sql}} \right|_{\lambda_k = \lambda_k^{(n)}}   (17)

where μsql(n+1) denotes the l-th dimension of the Gaussian mean vector for the q-th mixture component of state s of HMM model λk at the (n+1)-th iteration.
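A sketch of the update in equation (17) follows. The gradient callback and the step size are illustrative assumptions (the text does not spell out a step size, though GPD implementations typically use one); each key identifies a (state, mixture) pair within the selected model λk.

```python
def gpd_mean_step(means, grad_fn, step_size=0.01):
    """One GPD-style update of the Gaussian means of the selected model (eq. (17)).

    `means[(s, q)]` is the mean vector for state s and mixture q; `grad_fn((s, q))`
    returns the gradient of Q(lambda_k) with respect to that mean, evaluated at the
    current parameters. Both are placeholders for quantities built from
    equations (12) and (16).
    """
    return {key: mu - step_size * grad_fn(key) for key, mu in means.items()}
```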


In an alternative approach, the definition of margin may be changed to a relative separation margin as defined below:
\tilde{d}(X_i) = \min_{W_j \in \Omega,\ W_j \neq W_i^T} \left[ \frac{F(X_i|\lambda_{W_i^T}) - F(X_i|\lambda_{W_j})}{F(X_i|\lambda_{W_i^T})} \right]   (18)


If the discriminant functions F(·) are defined as in equation (1), then for all support tokens in the set S defined in equation (10), the relative margin d̃(Xi) will be less than 1. Since the relative margin has an upper bound by definition, the maximum value of the relative margin always exists. However, in many cases F(Xi|λW) is defined as the log-likelihood of Xi given the model λW, so F(Xi|λWiT)<0. To make the relative margin meaningful (i.e., positive values for correctly recognized data and negative values for misrecognized data), we slightly modify its definition as:
\tilde{d}(X_i) = \min_{W_j \in \Omega,\ W_j \neq W_i^T} \left[ \frac{F(X_i|\lambda_{W_j}) - F(X_i|\lambda_{W_i^T})}{F(X_i|\lambda_{W_j})} \right]   (19)

Thus, for correctly recognized data, F(Xi|λwj)<F(Xi|λWiT) and hence d̃(Xi)>0. Similarly, we define the support vector set S as in equation (10). Therefore, our new training criterion is defined as
\tilde{\Lambda} = \arg\min_{\Lambda} \max_{X_i \in S,\ W_j \in \Omega,\ W_j \neq W_i^T} \left[ \frac{F(X_i|\lambda_{W_i^T})}{F(X_i|\lambda_{W_j})} - 1 \right]   (20)

where Ω denotes the set of all possible words. This technique is referred to as large relative margin estimation (LRME) or maximum relative margin estimation (MRME) of HMMs. In this case, different optimization approaches can be used for updating all model parameters at the same time.
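The relative margin of equation (19) can be computed from the same per-word scores used earlier, provided the scores are log-likelihood based (and therefore negative). A minimal sketch with hypothetical values:

```python
def relative_margin(scores, true_word):
    """Relative separation margin of equation (19).

    Assumes every score F(X_i | lambda_w) is negative (log-likelihood based), so the
    result is positive for correctly recognized data and negative otherwise.
    """
    return min(
        (score - scores[true_word]) / score
        for w, score in scores.items() if w != true_word
    )

# The true word "yes" has the highest (least negative) score, so the margin is positive.
print(relative_margin({"yes": -42.7, "no": -45.1, "maybe": -44.0}, "yes"))  # approx. 0.03
```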


For example, an iterative approach is proposed based on the generalized probabilistic descent (GPD) algorithm. First, a differentiable objective function is constructed. To do so, a summation of exponential functions is used to approximate the maximization in equation (20) as follows:
\max_{X_i \in S,\ W_j \in \Omega,\ W_j \neq W_i^T} \left[ \frac{F(X_i|\lambda_{W_i^T})}{F(X_i|\lambda_{W_j})} - 1 \right] \approx \log \left\{ \sum_{X_i \in S,\ W_j \in \Omega,\ W_j \neq W_i^T} \exp\!\left[ \eta\, d(X_i, \lambda_{W_j}, \lambda_{W_i^T}) \right] \right\}^{1/\eta}, \quad \text{where}\ d(X_i, \lambda_{W_j}, \lambda_{W_i^T}) = d_{ij} = \frac{F(X_i|\lambda_{W_i^T})}{F(X_i|\lambda_{W_j})} - 1   (21)

where η>1. As η→∞, the continuous function on the right-hand side of equation (21) will approach the maximization on the left-hand side.


Therefore, we define the objective function as:
Q(\Lambda) = \frac{1}{\eta} \log \left\{ \sum_{X_i \in S,\ W_j \in \Omega,\ W_j \neq W_i^T} \exp(\eta\, d_{ij}) \right\}   (22)-(23)
           = \frac{1}{\eta} \log Q_1   (24)
where Q_1 denotes the summation inside the logarithm.


Now, we can use the GPD algorithm to adjust Λ to minimize the objective function Q(Λ). To maintain the HMM model constraints during the optimization process, we need to define the same transformations of the model parameters as are used in minimum classification error training methods. For Gaussian means, the transformation is
\tilde{\mu}_{sklm} = \frac{\mu_{sklm}}{\sigma_{sklm}}

where μ̃sklm is the transformed Gaussian mean, and μsklm and σsklm are the original Gaussian mean and standard deviation, respectively. It can then be shown that the iterative adjustment of the Gaussian means follows
\tilde{\mu}_{sklm}^{(n+1)} = \tilde{\mu}_{sklm}^{(n)} - \left. \frac{\partial Q(\Lambda)}{\partial \tilde{\mu}_{sklm}} \right|_{\Lambda = \Lambda^{(n)}}   (25)
\mu_{sklm}^{(n+1)} = \sigma_{sklm}\, \tilde{\mu}_{sklm}^{(n+1)}   (26)

where μsklm(n+1) is the l-th dimension of the Gaussian mean vector for the k-th mixture component of state s of HMM model m at the (n+1)-th iteration. The gradient required in equation (25) follows from the chain rule:
\frac{\partial Q(\Lambda)}{\partial Q_1} = \frac{1}{\eta} \cdot \frac{1}{Q_1}   (27)
\frac{\partial Q_1}{\partial \tilde{\mu}_{sklm}} = \sum_{X_i \in S} \left\{ \sum_{W_j \in \Omega,\ W_j \neq W_i^T} \eta \exp(\eta d_{ij}) \frac{\partial d_{ij}}{\partial \tilde{\mu}_{sklm}} \right\} = \sum_{X_i \in S} \left\{ \delta(W_i^T - m)\, \eta\, \frac{\partial F(X_i|\lambda_m)}{\partial \tilde{\mu}_{sklm}} \sum_{W_j \in \Omega,\ W_j \neq m} \frac{\exp(\eta d_{ij})}{F(X_i|\lambda_{W_j})} - \left( 1 - \delta(W_i^T - m) \right) \frac{F(X_i|\lambda_{W_i^T})}{F^{2}(X_i|\lambda_m)}\, \eta \exp(\eta d_{ij})\, \frac{\partial F(X_i|\lambda_m)}{\partial \tilde{\mu}_{sklm}} \right\}   (28)

where δ(WiT−m)=1 when WiT=m, that is, the true model for utterance Xi is the m-th model in the model set Λ. δ(WiT−m)=0 when WiT≠m. As
F(X_i|\lambda_m) = \log L(X_i|\lambda_m) \approx \log L(X_i, q; \lambda_m) = \sum_{t=1}^{T} \left[ \log a^{m}_{q_{t-1} q_t} + \log b^{m}_{q_t}(x_t) \right] + \log \pi^{m}_{q_0}   (29)
b^{m}_{j}(x_t) = \sum_{k=1}^{K} c^{m}_{jk}\, N\!\left[ x_t;\ \mu^{m}_{jk},\ R^{m}_{jk} \right]   (30)
so
\frac{\partial F(X_i|\lambda_m)}{\partial \tilde{\mu}_{sklm}} = \sum_{t=1}^{T} \delta(q_t - s)\, \frac{\partial \log b^{m}_{s}(x_t)}{\partial \tilde{\mu}_{sklm}}   (31)

where
\frac{\partial \log b^{m}_{s}(x_t)}{\partial \tilde{\mu}_{sklm}} = c^{m}_{sk}\, (2\pi)^{-D/2}\, \left| R^{m}_{sk} \right|^{-1/2} \left( b^{m}_{s}(x_t) \right)^{-1} \left( \frac{x_{tl} - \mu_{sklm}}{\sigma_{sklm}} \right) \exp\!\left\{ -\frac{1}{2} \sum_{d=1}^{D} \left( \frac{x_{td} - \mu_{skdm}}{\sigma_{skdm}} \right)^2 \right\}   (32)

D is the dimension of feature vectors. Rskm is the covariance matrix for state s and Gaussian mixture component k for HMM model m. Here we assume it is diagonal. q is the best state sequence obtained by aligning Xi using HMM model λm.
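For a single state of one model, the derivative in equation (32) can be evaluated directly for diagonal covariances. The sketch below uses illustrative container names (weights, means, sigmas) for the mixture weights c, mean vectors μ and standard deviations σ of that state.

```python
import numpy as np

def dlogb_dmu_tilde(x, weights, means, sigmas, k, l):
    """Derivative of log b_s(x) with respect to the transformed mean
    mu_tilde_{skl} of mixture component k, dimension l (equation (32)).

    `weights[k]`, `means[k]`, `sigmas[k]` are the weight, mean vector and diagonal
    standard deviations of component k for the state in question.
    """
    D = x.shape[0]
    # Component Gaussian densities N[x; mu_k, diag(sigma_k^2)].
    densities = np.array([
        (2.0 * np.pi) ** (-D / 2.0) / np.prod(sig)
        * np.exp(-0.5 * np.sum(((x - mu) / sig) ** 2))
        for mu, sig in zip(means, sigmas)
    ])
    b = float(np.dot(weights, densities))
    return weights[k] * densities[k] * (x[l] - means[k][l]) / sigmas[k][l] / b
```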


Combining equations (27) to (32), we can easily obtain ∂Q(Λ)/∂μ̃sklm for equation (25). Similar derivations for the variances, mixture weights and transition probabilities can be accomplished in the same way.
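Putting equations (25) and (26) together for a single mean amounts to updating in the transformed space and mapping back. In the sketch below, the gradient value and the step size are inputs; the step size is an assumption, since the text does not fix one.

```python
import numpy as np

def lrme_mean_step(mu, sigma, grad_mu_tilde, step_size=0.01):
    """One update of a Gaussian mean via its transformed version (eqs. (25)-(26)).

    `mu` and `sigma` are the current mean vector and (diagonal) standard deviations,
    and `grad_mu_tilde` is dQ(Lambda)/d mu_tilde evaluated at the current models,
    e.g. assembled from equations (27)-(32).
    """
    mu_tilde = mu / sigma                                  # transformation mu_tilde = mu / sigma
    mu_tilde_new = mu_tilde - step_size * grad_mu_tilde    # equation (25)
    return sigma * mu_tilde_new                            # map back, equation (26)

# Hypothetical one-dimensional example.
print(lrme_mean_step(np.array([1.2]), np.array([0.5]), np.array([0.8])))  # [1.196]
```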


Note that there may be alternative definitions to the one given in equation (19). One such alternative is
\tilde{d}(X_i) = \min_{W_j \in \Omega,\ W_j \neq W_i^T} \left[ \frac{\exp\!\left( F(X_i|\lambda_{W_i^T}) \right) - \exp\!\left( F(X_i|\lambda_{W_j}) \right)}{\exp\!\left( F(X_i|\lambda_{W_i^T}) \right)} \right]   (33)

Based on this alternative definition, it is readily understood that the corresponding estimation formulas for the HMM model parameters can be derived.


The description of the invention is merely exemplary in nature and, thus, variations that do not depart from the gist of the invention are intended to be within the scope of the invention. Such variations are not to be regarded as a departure from the spirit and scope of the invention.

Claims
  • 1. A discriminative training method for hidden Markov models, comprising: defining a measure of separation margin for the data; identifying, based on the definition of the separation margin, a subset of training data having data misrecognized by the models; defining a training criterion for the models based on maximum margin estimation; formulating the training criterion as a minimax optimization problem; and solving the constrained minimax optimization problem over the subset of training data, thereby discriminatively training the models.
  • 2. The discriminative training method of claim 1 wherein each datum of the subset of training data has a separation margin from classification boundaries of the models which is equal to or less than a threshold value.
  • 3. The discriminative training method of claim 1 wherein the subset of training data, S, is
  • 4. The discriminative training method of claim 1 wherein the training criterion is further defined as
  • 5. The discriminative training method of claim 1 wherein a maximum margin estimation is further defined as a large margin estimation or a large relative margin estimation.
  • 6. The discriminative training method of claim 4 wherein defining the separation margin is as follows
  • 7. The discriminative training method of claim 6 wherein solving the constrained minimax optimization problem uses an iterative localized optimization algorithm.
  • 8. The discriminative training method of claim 4 wherein defining the separation margin is as follows
  • 9. The discriminative training method of claim 4 wherein defining the separation margin is as follows
  • 10. The discriminative training method of claim 8 wherein solving the constrained minimax optimization problem uses a generalized probabilistic descent algorithm.
  • 11. The discriminative training method of claim 9 wherein solving the constrained minimax optimization problem uses a generalized probabilistic descent algorithm.
  • 12. A discriminative training method for hidden Markov models, comprising: defining a measure of separation margin for the data; defining a training criterion for the models based on maximum margin estimation; formulating the training criterion as a constrained minimax optimization problem; and solving the constrained minimax optimization problem over a subset of training utterances, where the subset of training utterances, S, is S = {Xi | Xi ∈ D and d(Xi) ≤ γ}, where Xi is a speech utterance in a set of training data D, d(Xi) is a separation margin for the speech utterance and γ is a predefined positive number.
  • 13. The discriminative training method of claim 12 wherein the training criterion is further defined as
  • 14. The discriminative training method of claim 12 wherein a maximum margin estimation is further defined as a large margin estimation or a large relative margin estimation.
  • 15. The discriminative training method of claim 13 further comprises defining the separation margin as follows
  • 16. The discriminative training method of claim 15 wherein solving the constrained minimax optimization problem uses an iterative localized optimization algorithm.
  • 17. The discriminative training method of claim 13 further comprises defining the separation margin as follows
  • 18. The discriminative training method of claim 13 further comprises defining the separation margin as follows
  • 19. The discriminative training method of claim 17 wherein solving the constrained minimax optimization problem uses a generalized probabilistic descent algorithm.
  • 20. The discriminative training method of claim 18 wherein solving the constrained minimax optimization problem uses a generalized probabilistic descent algorithm.
  • 21. A discriminative training method for acoustic models, comprising: defining a measure of separation margin for the data; identifying a subset of training utterances having utterances recognized by the acoustic models and utterances misrecognized by the acoustic models; defining a training criterion for the acoustic models based on maximum margin estimation; formulating the training criterion as a minimax optimization problem; and solving the constrained minimax optimization problem over the subset of training utterances, thereby discriminatively training the acoustic models.