The present invention relates generally to discriminative model training and, more particularly, to an improved method for discriminatively training acoustic models used in verification systems.
Speaker verification (SV) is the process of verifying whether an unknown speaker is the person he/she claims to be. Speech verification or utterance verification (UV) is the process of verifying the claimed content of a spoken utterance (for example, verifying the hypotheses output by an automatic speech recognition system). Both SV and UV technologies have many applications. For example, SV systems can be used in places (e.g., security gates) where access is allowed only to certain registered people. UV systems can be used to enhance speech recognition systems by rejecting unreliable hypotheses, thereby improving the user interface. Sometimes UV is included as a component of a speech recognition system to verify the hypotheses produced by the speech recognition process.
Acoustic model training is a critical step in building any speaker verification system or speech (utterance) verification system. It has been studied extensively over the past two decades, and various training methods have been proposed.
Maximum likelihood (ML) estimation is the most widely used parametric method for training acoustic models, largely because of its efficiency. ML assumes that the model parameters are fixed but unknown and seeks the set of parameters that maximizes the likelihood of generating the observed data. In other words, the ML training criterion attempts to match each model to its corresponding training data so as to maximize the likelihood.
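For illustration only (a minimal sketch, not part of the original disclosure): for a single Gaussian density, the ML criterion has a closed-form solution, namely the sample mean and sample variance of the training frames.

```python
import numpy as np

def ml_gaussian_estimate(frames: np.ndarray):
    """ML estimate of a diagonal Gaussian from acoustic feature frames.

    frames: array of shape (num_frames, feature_dim).
    Maximizing the likelihood of the observed data yields the
    sample statistics in closed form.
    """
    mean = frames.mean(axis=0)  # ML estimate of the mean
    var = frames.var(axis=0)    # ML estimate of the (diagonal) variance
    return mean, var
```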
Although ML is efficient, discriminative training methods have been shown to produce better models. One example of a discriminative training method is Minimum Verification Error (MVE) training. The MVE training criterion adjusts the model parameters so as to minimize the approximate verification error on the training data. While the above-mentioned methods have both proven effective, discriminative training methods that yield more accurate and more robust models are still being pursued.
An improved method is provided for discriminatively training acoustic models for automated speaker verification (SV) systems and speech (or utterance) verification (UV) systems. The method includes: defining a likelihood ratio for a given speech segment, whose speaker identity (in the case of speaker verification) or linguistic identity (in the case of utterance verification) is known, using a corresponding acoustic model (referred to as the true model) and an alternative acoustic model that represents all other speakers (in SV) or all other speech identities (in UV); determining an average likelihood ratio score over a set of training utterances whose speaker identities (in SV) or linguistic identities (in UV) are the same (referred to as the true data set); determining an average likelihood ratio score over a competing set of training utterances that excludes the speech data in the true data set (referred to as the competing data set); and optimizing the difference between the average likelihood ratio score over the true data set and the average likelihood ratio score over the competing data set, thereby improving the acoustic models.
Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
An improved method is provided for discriminatively training acoustic models used in speech (or utterance) verification systems (UV). For ease of discussion, the proposed method is described in the context of speech verification. However, it is readily understood that the described techniques are also applicable to acoustic models used in speaker verification (SV) applications. Thus, the following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses.
For a given speech segment X, assume that a speech recognizer recognizes it as word W. Speech (utterance) verification is a post-processing step that examines the reliability of the hypothesized recognition result. Under the framework of statistical hypothesis testing, two complementary hypotheses are proposed, namely the null hypothesis H0 and the alternative hypothesis H1 as follows:
H0: X is a spoken word of W
H1: X is NOT a spoken word of W
While the description is provided with reference to a word speech segment, it is readily understood that models intended to verify other types of speech segments, such as phoneme, syllable, phrase, and sentence, are also within the scope of the present invention. Likewise, speaker identification models are also within the scope of the present invention.
The verification process tests the null hypothesis H0 against the alternative hypothesis H1 to determine whether to accept or reject the recognition result. Under some conditions, the optimal method for performing the test is a likelihood ratio test (LRT) as determined by the Neyman-Pearson lemma, shown mathematically as:

$$\mathrm{LRT}(X) = \frac{P(X \mid H_0)}{P(X \mid H_1)}$$
If the likelihood ratio of X is greater than τ (i.e., LRT(X) > τ), then H0 is accepted; whereas, if the likelihood ratio of X is less than τ (i.e., LRT(X) < τ), then H1 is accepted, where τ is the decision threshold.
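For illustration only (a minimal sketch, not from the original disclosure), the decision rule can be expressed as follows; the two likelihoods and the threshold tau are assumed to come from trained models and a separate tuning step:

```python
def likelihood_ratio_test(p_x_given_h0: float, p_x_given_h1: float,
                          tau: float) -> bool:
    """Return True to accept H0 (the hypothesized word), False to reject it.

    p_x_given_h0, p_x_given_h1: model likelihoods P(X|H0) and P(X|H1).
    tau: decision threshold, typically tuned on held-out data.
    """
    lrt = p_x_given_h0 / p_x_given_h1
    return lrt > tau
```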
Two key issues affect the performance of such testing. The first is how to determine and collect the appropriate data for H0 and H1. The second is how to accurately calculate P(X|H0) and P(X|H1) given the collected training data. This disclosure focuses on the second issue.
For exemplary purposes, statistical models such as Hidden Markov Models (HMMs) will be used to represent the data for H0 and H1. Thus, the model for H0 is represented as $\lambda_W$ (also referred to as the true model). The corresponding training data set for the true model is named the true data (TD) set $S_T$, where the true data set contains speech samples (e.g., spoken instances) of the word W. The model for H1 is represented as $\bar{\lambda}_W$ (also referred to as the anti-model); its corresponding training data set is the competing data (CD) set $S_C$, which contains speech samples of words other than W.
In practice, a log-likelihood ratio test can be used in place of the likelihood ratio test to prevent numerical underflow in machine computation. The log-likelihood ratio test for the true model $\lambda_W$ and the anti-model $\bar{\lambda}_W$ is defined as:

$$\mathrm{LLR}(X) = \log P(X \mid \lambda_W) - \log P(X \mid \bar{\lambda}_W)$$
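A corresponding sketch in the log domain (illustrative only; the inputs log_p_true and log_p_anti are assumed to be HMM log-likelihoods of X under the true model and the anti-model, e.g., from a Viterbi or forward pass):

```python
def log_likelihood_ratio(log_p_true: float, log_p_anti: float) -> float:
    """LLR(X) = log P(X|true model) - log P(X|anti-model).

    Staying in the log domain avoids the underflow caused by
    multiplying many per-frame probabilities close to zero.
    """
    return log_p_true - log_p_anti
```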
In order to train acoustic models for speaker and/or speech verification more effectively, the margin between the log-likelihood ratio test scores on the true data set and those on the competing data set should be optimized.
In an exemplary embodiment, parameters of the acoustic models are iteratively adjusted in order to maximize the difference between the average of the LRT scores over the true data set $S_T$ and the average of the LRT scores over the competing data set $S_C$, as shown below:

$$\max_{\lambda_W, \bar{\lambda}_W} \left\{ \frac{1}{|S_T|} \sum_{X \in S_T} \mathrm{LLR}(X) - \frac{1}{|S_C|} \sum_{X \in S_C} \mathrm{LLR}(X) \right\}$$
where $|S_T|$ is the size of (i.e., the number of utterances in) $S_T$ and $|S_C|$ is the size of $S_C$. An objective function $Q(\lambda_W, \bar{\lambda}_W)$ to be maximized can accordingly be defined as:

$$Q(\lambda_W, \bar{\lambda}_W) = \frac{1}{|S_T|} \sum_{X \in S_T} \mathrm{LLR}(X) - \frac{1}{|S_C|} \sum_{X \in S_C} \mathrm{LLR}(X) - \theta$$
where θ is a constant. Substituting the log-likelihood ratio test into this equation provides:

$$Q(\lambda_W, \bar{\lambda}_W) = \frac{1}{|S_T|} \sum_{X \in S_T} \left[ \log P(X \mid \lambda_W) - \log P(X \mid \bar{\lambda}_W) \right] - \frac{1}{|S_C|} \sum_{X \in S_C} \left[ \log P(X \mid \lambda_W) - \log P(X \mid \bar{\lambda}_W) \right] - \theta$$
Using $\Lambda$ to denote the model pair $(\lambda_W, \bar{\lambda}_W)$, this can be written compactly as:

$$Q(\Lambda) = \frac{1}{|S_T|} \sum_{X \in S_T} \mathrm{LLR}(X; \Lambda) - \frac{1}{|S_C|} \sum_{X \in S_C} \mathrm{LLR}(X; \Lambda) - \theta$$

and the model parameters are iteratively adjusted to maximize $Q(\Lambda)$.
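The criterion can be sketched directly from the equation above (illustrative code only; llr is a hypothetical callable returning LLR(X; Λ) for an utterance X under the current model pair, and theta is the margin constant):

```python
def objective_q(true_set, competing_set, llr, theta=0.0):
    """Q(Lambda) = mean LLR over the true set
                   - mean LLR over the competing set - theta.

    Training iteratively adjusts the model parameters to maximize this value.
    """
    avg_true = sum(llr(x) for x in true_set) / len(true_set)
    avg_competing = sum(llr(x) for x in competing_set) / len(competing_set)
    return avg_true - avg_competing - theta
```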
Alternatively, parameters of the acoustic models can be iteratively adjusted in order to minimize the difference between the average of the LRT scores over the competing data set $S_C$ and the average of the LRT scores over the true data set $S_T$, as shown below:

$$\min_{\lambda_W, \bar{\lambda}_W} \left\{ \frac{1}{|S_C|} \sum_{X \in S_C} \mathrm{LLR}(X) - \frac{1}{|S_T|} \sum_{X \in S_T} \mathrm{LLR}(X) \right\}$$
An objective function $Q_1(\lambda_W, \bar{\lambda}_W)$ to be minimized can likewise be defined as:

$$Q_1(\lambda_W, \bar{\lambda}_W) = \frac{1}{|S_C|} \sum_{X \in S_C} \mathrm{LLR}(X) - \frac{1}{|S_T|} \sum_{X \in S_T} \mathrm{LLR}(X) + \theta$$
Substituting the log-likelihood ratio test into this equation provides:

$$Q_1(\lambda_W, \bar{\lambda}_W) = \frac{1}{|S_C|} \sum_{X \in S_C} \left[ \log P(X \mid \lambda_W) - \log P(X \mid \bar{\lambda}_W) \right] - \frac{1}{|S_T|} \sum_{X \in S_T} \left[ \log P(X \mid \lambda_W) - \log P(X \mid \bar{\lambda}_W) \right] + \theta$$
Simplified as:

$$Q_1(\Lambda) = \frac{1}{|S_C|} \sum_{X \in S_C} \mathrm{LLR}(X; \Lambda) - \frac{1}{|S_T|} \sum_{X \in S_T} \mathrm{LLR}(X; \Lambda) + \theta$$
In this alternative, parameters of the acoustic models are iteratively adjusted to minimize $Q_1(\lambda_W, \bar{\lambda}_W)$. Because $Q_1(\Lambda) = -Q(\Lambda)$, minimizing $Q_1$ is mathematically equivalent to maximizing $Q$.
As can be appreciated, the invention contemplates other mathematically equivalent variations of the above-stated equations involving the average of the LRT scores over the true data set and the competing data set, such as scaling the difference by a positive constant or omitting the constant θ.
To perform model estimation according to these new training criteria, optimization methods such as the Generalized Probabilistic Descent (GPD) algorithm or Quickprop can be used to iteratively adjust the model parameters to solve the above minimization/maximization problem. It is envisioned that other optimization methods may also be used to solve this problem.
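A schematic of such an iterative update loop (a sketch only, assuming a hypothetical grad_q1 function that returns the partial derivatives of Q1 with respect to each model parameter; GPD-style training typically uses a step size that decays across iterations):

```python
def gpd_minimize(params, grad_q1, num_iters=100, eps0=0.1):
    """Iteratively adjust model parameters to minimize Q1 via descent updates.

    params: dict mapping parameter names to current values (e.g., Gaussian means).
    grad_q1: function returning a dict of partial derivatives of Q1
             with respect to each parameter, accumulated over the training sets.
    """
    for n in range(num_iters):
        eps_n = eps0 / (1.0 + n)  # decaying step size, as is typical in GPD
        grads = grad_q1(params)
        params = {name: value - eps_n * grads[name]
                  for name, value in params.items()}
    return params
```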
For example, the formulas for estimating the Gaussian means of the true model and the anti-model (for HMM-based acoustic models) under the minimization criterion for $Q_1$ given above, using the Generalized Probabilistic Descent algorithm, are:

$$\mu_k(n+1) = \mu_k(n) - \varepsilon_n \left. \frac{\partial Q_1}{\partial \mu_k} \right|_{\mu_k = \mu_k(n)}$$

$$\bar{\mu}_k(n+1) = \bar{\mu}_k(n) - \varepsilon_n \left. \frac{\partial Q_1}{\partial \bar{\mu}_k} \right|_{\bar{\mu}_k = \bar{\mu}_k(n)}$$
where $\mu_k(n+1)$ is the k-th Gaussian mean in the true model $\lambda_W$ at the (n+1)-th iteration, $\bar{\mu}_k(n+1)$ is the k-th Gaussian mean in the anti-model $\bar{\lambda}_W$ at the (n+1)-th iteration, and $\varepsilon_n$ is the step size (learning rate) at the n-th iteration.
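A one-step sketch of this mean update (hypothetical inputs: mu_n is the current mean vector and d_q1_d_mu the partial derivative of Q1 with respect to that mean, both assumed computed elsewhere; the identical rule applies to the anti-model means):

```python
import numpy as np

def update_gaussian_mean(mu_n: np.ndarray, d_q1_d_mu: np.ndarray,
                         eps_n: float) -> np.ndarray:
    """One GPD step: mu(n+1) = mu(n) - eps_n * dQ1/dmu, evaluated at mu(n)."""
    return mu_n - eps_n * d_q1_d_mu
```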
As compared to ML training, the proposed discriminative training method has been shown to achieve better models. As compared to minimum verification error (MVE) training, the proposed method directly maximizes the difference, or margin, between the LRT scores for the true data set and those for the competing data set, whereas MVE embeds that margin in a sigmoid function approximating the total verification error count and then minimizes that function. The verification system or classifier built by the proposed training method can therefore be regarded as a type of large-margin classifier. According to machine learning theory, a large-margin classifier generally has better robustness, so the proposed training method may achieve more robust models than MVE training does.
Those skilled in the art can now appreciate from the foregoing description that the broad teachings of the present invention can be implemented in a variety of forms. Therefore, while this invention has been described in connection with particular examples thereof, the true scope of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and the following claims.