Method and system for training of a classifier

Information

  • Patent Grant
  • Patent Number: 6,728,674
  • Date Filed: Monday, July 31, 2000
  • Date Issued: Tuesday, April 27, 2004
Abstract
A method and a system for corrective training of speech models includes changing a weight of a data sample whenever the data sample is incorrectly associated with a classifier, and retraining each classifier with the weights.
Description




BACKGROUND OF THE INVENTION




Speech recognition is a classification task. In maximum likelihood classifiers, each classifier is trained by examples that belong to its class. For example, the classifier which recognizes the digit “1” is trained by multiple pronunciations of the digit “1”.




A commonly used classifier is a Hidden Markov Model (HMM). Each word is modeled by a different HMM which serves as an abstract “picture” of this word, with all its possible variations. The HMM consists of a sequence of “states”, each of which is responsible for the description of a different part of the word. The use of HMMs in speech recognition consists of two phases: the training phase and the recognition phase. In the training phase, repetitions of each word from the training data are used to construct the corresponding HMM. In the recognition phase, the word models may be used to identify unknown speech by checking the unknown speech against the existing models.




Some words sound similar to each other and can therefore be incorrectly recognized. Using digits as examples, “go” (5) and “roku” (6) in Japanese and “seven” and “eleven” in English sound sufficiently similar to cause an incorrect recognition.











BRIEF DESCRIPTION OF THE DRAWINGS




The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the appended drawings in which:





FIG. 1 is a block diagram illustration of a corrective training system, which may be constructed and operative in accordance with an embodiment of the present invention;

FIG. 2A is a schematic illustration of a speech signal for the phrase: noise “seven” noise;

FIG. 2B is a schematic illustration of a Hidden Markov Model (HMM) which may match the signal of FIG. 2A;

FIG. 3 is a schematic illustration of the Viterbi algorithm;

FIG. 4 is a schematic illustration of a pair of gaussian functions;

FIG. 5 is a flow chart illustration of the corrective training that may be used in accordance with the present invention; and

FIG. 6 is an exemplary state sequence for incorrectly segmented connected words “3,4” wherein the correctly and incorrectly associated frames are noted.











DETAILED DESCRIPTION OF THE PRESENT INVENTION




In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.




Some portions of the detailed description which follow are presented in terms of algorithms and symbolic representations of operations on data bits or binary digital signals within a computer memory. These algorithmic descriptions and representations may be the techniques used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art.




An algorithm is here, and generally, considered to be a self-consistent sequence of acts or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.




The present invention will be described herein for full word models; however, it is fully applicable to models of at least a portion of a word, such as monophones or triphones, and these are included in the scope of the present invention.




In general, the present invention describes a method of corrective training for classifiers that are trained initially by a maximum likelihood-based training system. The general procedure may be as follows:




The first step may be initialization of classes. For each class, train a model based only on same-class data.




The second step may be classification, which classifies the data of all of the classes using all of the classifiers. Each data sample is associated with one classifier.




The third step may be emphasis of errors. Whenever a data sample is incorrectly associated with a classifier, a weight associated with that data sample is increased.




The fourth step may be training with new weights. Each classifier is re-trained, but with the relative weights that were set in the previous step. Finally, the four steps are repeated as necessary, e.g., until a termination criterion is met.
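
As an illustration only, the loop below sketches these four steps in Python; the helper functions train_classifier (maximum-likelihood training with per-sample weights) and classify are assumptions standing in for whatever classifier is used, not part of the patent.

# Sketch of the corrective training loop; train_classifier and classify are
# hypothetical helpers supplied by the underlying classification system.
def corrective_training(samples, labels, classes, max_iterations=10):
    weights = [1.0] * len(samples)

    def retrain():
        # Train each class only on its own data, weighted by the current weights.
        models = {}
        for c in classes:
            idx = [i for i, lab in enumerate(labels) if lab == c]
            models[c] = train_classifier([samples[i] for i in idx],
                                         [weights[i] for i in idx])
        return models

    models = retrain()                                       # step 1: initialization
    for _ in range(max_iterations):
        predicted = [classify(models, s) for s in samples]   # step 2: classification
        errors = [i for i, p in enumerate(predicted) if p != labels[i]]
        for i in errors:                                     # step 3: emphasis of errors
            weights[i] += 1.0
        models = retrain()                                   # step 4: weighted retraining
        if not errors:                                       # repeat until no errors remain
            break
    return models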




Reference is now made to FIG. 1, which illustrates a system of corrective training of speech models. An embodiment of the present invention may be a Hidden Markov Model (HMM) classifier. Reference is further made to FIGS. 2A and 2B, which show an exemplary speech signal and its model and are useful in understanding the embodiment of FIG. 1.




The system may comprise a feature extractor 100 (FIG. 1), a speech database 102 of speech signals, a corrective trainer 104 and a model database 106 of speech models.




Speech database 102 may store multiple versions of speech phrases to be trained and a notation of the word content of the phrase. FIG. 2A shows an exemplary speech signal 110, as stored in database 102, where the phrase “seven” is spoken, surrounded by noise before and after. FIG. 2B shows the resultant word models as stored in model database 106. Noise may be represented as a separate word and hence, has its own word model.




Speech signal 110 may be sampled at 8000 Hz and then may be divided into a multiplicity of frames 112, of 30 milliseconds each. Thus, each frame includes 240 samples. Feature extractor 100 processes speech signal 110 and a smaller number of features may be obtained, e.g. around twenty. In some embodiments of the invention, cepstral features and their time derivatives are used. These features are the basis for speech recognition and, in some embodiments of the present invention, are provided to corrective trainer 104.
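
For illustration of the framing arithmetic only (8000 samples/s × 0.030 s = 240 samples per frame), a small sketch is given below; the feature extraction itself (cepstra and their derivatives) is left as a placeholder and the names used are assumptions, not the patent's.

import numpy as np

SAMPLE_RATE = 8000                 # Hz, as in the example above
FRAME_SECONDS = 0.030              # 30 millisecond frames
FRAME_SAMPLES = int(SAMPLE_RATE * FRAME_SECONDS)   # 240 samples per frame

def split_into_frames(signal):
    # Divide a sampled speech signal into non-overlapping 30 ms frames.
    n_frames = len(signal) // FRAME_SAMPLES
    return signal[:n_frames * FRAME_SAMPLES].reshape(n_frames, FRAME_SAMPLES)

# One second of (random) signal yields 33 full frames of 240 samples each;
# each frame would then be reduced to roughly twenty features.
frames = split_into_frames(np.random.randn(SAMPLE_RATE))
print(frames.shape)                # (33, 240)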




In the example of FIG. 2A, there are 11 frames. Frames 1, 2, 9, 10 and 11 are frames of noise, frame 3 includes the sound “se”, frames 4 and 5 include the sound “eh” (after an s), frame 6 includes the sound “ve”, frame 7 includes the sound “eh” (after a v), and frame 8 includes the sound “n”.




Model database 106 stores HMM models for each of the possible words, phrases or sounds that the system can recognize. In each HMM model, each of the sounds may be considered a different state. The phrase of FIG. 2A is modeled as an ordered sequence of states. Thus, the states progress from a noise state 114, to a “se” state 116, to an “eh” state 117, to a “ve” state 118, to an “eh” state 119, to an “n” state 120, and finally to a second noise state 114.




States may be spread over several frames. Thus, frames 1 and 2 may correspond to noise state 114, frames 4 and 5 may correspond to “eh” state 117, and frames 9, 10 and 11 may correspond to noise state 114.




From each state, two types of motion may be possible: to remain at the state, as indicated by loops 124, or to transition from one state to the next, as indicated by arcs 126. When a left-to-right HMM remains in the same state, as indicated by loops 124, then the state may comprise more than one frame. When the left-to-right HMM transitions to the next state, as indicated by arcs 126, then the next frame may correspond to a different state.
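
For illustration, such a left-to-right topology can be captured in a transition matrix whose only non-zero entries are the self-loops and the arcs to the next state; the sketch below fills it using the rule of Equation 10 (given hereinbelow), with state durations chosen arbitrarily.

import numpy as np

def left_to_right_transitions(average_durations):
    # Build a left-to-right transition matrix: loops (stay) and arcs (advance).
    n = len(average_durations)
    A = np.zeros((n, n))
    for q, duration in enumerate(average_durations):
        advance = 1.0 / duration          # Equation 10: a_{q,q+1} = 1 / average duration
        if q < n - 1:
            A[q, q + 1] = advance         # arc 126: transition to the next state
            A[q, q] = 1.0 - advance       # loop 124: remain in the same state
        else:
            A[q, q] = 1.0                 # the last state can only loop
    return A

# Example: noise, "se", "eh", "ve", "eh", "n", noise with illustrative durations.
print(left_to_right_transitions([2.0, 1.0, 2.0, 1.0, 1.0, 1.0, 3.0]))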




Corrective trainer 104 may receive the features of a stored speech signal from feature extractor 100 and the associated content of the speech signal from speech database 102. Corrective trainer 104 then may perform a modified version of the “segmental k-means algorithm” (known in the art) to train the models. Corrective trainer 104 may compare the speech signal to the word models stored in model database 106 and may find the best match. As described in more detail hereinbelow, corrective trainer 104 may note errors in the matching and may use this information to correct the associated speech model.




In an embodiment of the present invention, the training data may be segmented twice, once in a supervised fashion and once in an unsupervised manner. These two segmentations should have the same results. If not, then the supervised segmentation may be regarded as the correct one. Any incorrectly recognized segment (frame) of speech may be emphasized and may receive extra weight. During a re-estimation procedure, each frame associated with the word being re-estimated may be weighted differently, depending on whether it has been correctly recognized or not.




In a segmental k-means algorithm training session, each word has its own model. The session starts with an initial HMM model, which may be crafted “manually”, and with several repetitions of the word being trained. The training process consists of two parts: supervised segmentation (or “forced alignment”) and re-estimation of model parameters. In supervised segmentation, the content of the data is known, e.g. the word “seven” is known, but which frames belong to which states is not known. Once frames have been associated with states, the state parameters are re-estimated based on the frames that were associated with them.




Reference is now made to FIG. 3, which generally illustrates the process of supervised segmentation. FIG. 3 shows a grid of known states vs. frames and two paths 130 and 132 through the grid. Path 130 is the path discussed hereinabove (frames 1 and 2 are noise, etc.). Path 132 is an alternative path that also matches the data. Supervised segmentation produces a plurality of paths through the grid and selects the one path that provides the best match to the data. This process is also known as the Viterbi algorithm.




At each step, a path variable ψ_{t,j} and a score variable δ_{t,j} are temporarily stored. Score variable δ_{t,j} stores the score of the best path (e.g. the path with the highest score) that terminates at gridpoint (t,j), where t is the frame index and j is the state index. Score variable δ_{t,j} may be determined as follows:

\delta_{t,j} = \max_i \left[ \delta_{t-1,i} + \log a_{i,j} \right] + \log b_j(\bar{x}_t),    (Equation 1)

where i may have the value of either j or j−1, a_{i,j} is the transition probability from state i to state j given in Equation 10, and b_j(\bar{x}_t) is the measure of the match of frame t with state j and is defined hereinbelow in Equation 4.




Path variable ψ_{t,j} points to the previous step in the path; for example, path 130 includes the point (3, se). Path variable ψ_{t,j} at that point stores the value (2, noise), the previous point on path 130. Path variable ψ_{t,j} is determined by:

\psi_{t,j} = \arg\max_i \left[ \delta_{t-1,i} + \log a_{i,j} \right],    (Equation 2)

where i can be either j or j−1.




When all paths are complete, the path with the highest score is deemed the best path, e.g. the optimal segmentation.




After selecting the best path, the path is “backtracked” in order to associate a frame to a state. At each gridpoint, the value stored in path variable ψ_{t,j} is unpacked in order to determine the previous point of the path. Furthermore, the value stored in path variable ψ_{t,j} determines the state q_t to which frame t belongs. Mathematically, this is written:

q_t = \psi_{t+1,\, q_{t+1}}    (Equation 3)






Once the path has been backtracked, the frames can be associated with their corresponding states.
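
A compact sketch of this supervised segmentation appears below, assuming the scores are already in the log domain: log_b[t, j] stands for log b_j(x̄_t) of Equation 1 and log_a[i, j] for log a_{i,j}; these array names are assumptions made for the illustration, not the patent's implementation.

import numpy as np

def forced_alignment(log_b, log_a):
    # log_b[t, j]: log match score of frame t with state j (see Equation 4 below)
    # log_a[i, j]: log transition probability from state i to state j
    T, N = log_b.shape
    delta = np.full((T, N), -np.inf)      # Equation 1: best score ending at (t, j)
    psi = np.zeros((T, N), dtype=int)     # Equation 2: previous state of that path

    delta[0, 0] = log_b[0, 0]             # the path starts in the first state
    for t in range(1, T):
        for j in range(N):
            # In a left-to-right model, i is either j (loop) or j - 1 (arc).
            previous = [j] if j == 0 else [j - 1, j]
            scores = [delta[t - 1, i] + log_a[i, j] for i in previous]
            best = int(np.argmax(scores))
            delta[t, j] = scores[best] + log_b[t, j]
            psi[t, j] = previous[best]

    # Backtracking, Equation 3: q_t = psi_{t+1, q_{t+1}}.
    states = np.zeros(T, dtype=int)
    states[-1] = N - 1                    # a complete path ends in the last state
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states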




The model of a state is a mixture of gaussian probability functions which give the probability that a feature or vector of features x̄ of a frame of the speech data exists in a state.












b_{iq}(\bar{x}) = \sum_g W_{iqg}\, P_{iqg}(\bar{x}),    (Equation 4)
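
As a sketch of Equation 4 only, the state score is a weighted sum of gaussian densities; the diagonal-covariance form used below is an assumption for illustration, not a statement of the patent's exact model.

import numpy as np

def state_likelihood(x, means, variances, mixture_weights):
    # b_iq(x): weighted sum of gaussian densities over the mixture (Equation 4).
    total = 0.0
    for mu, var, w in zip(means, variances, mixture_weights):
        norm = 1.0 / np.sqrt(np.prod(2.0 * np.pi * var))
        exponent = -0.5 * np.sum((x - mu) ** 2 / var)
        total += w * norm * np.exp(exponent)
    return total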













For example, FIG. 4, to which reference is now made, is a representation of a mixture of two gaussian functions P_1 and P_2 of state q and word i. Each gaussian distribution function P_{iqg}(x̄) has an average μ and a variance σ².




The weights W_{iqg} for each gaussian function g determine the contribution of the gaussian to the overall distribution. For instance, in FIG. 4, gaussian P_2 has a lower height and smaller weight W_2 than gaussian P_1.




The averages μ are estimated by:

\mu_{iqgd} = \frac{\sum_f \rho_{iqg}(\bar{x}_f)\, x_{fd}}{\sum_f \rho_{iqg}(\bar{x}_f)},    (Equation 5)

where the summation Σ is over all the frames f that belong to state q of word i, and d is one of the elements of vector x.




The variances may be estimated by

\sigma^2_{iqgd} = \overline{x^2_{iqgd}} - \mu^2_{iqgd},    (Equation 6)

where

\overline{x^2_{iqgd}} = \frac{\sum_f \rho_{iqg}(\bar{x}_f)\, x^2_{fd}}{\sum_f \rho_{iqg}(\bar{x}_f)},    (Equation 7)

and

\rho_{iqg}(\bar{x}) = \frac{W_{iqg}\, P_{iqg}(\bar{x})}{\sum_h W_{iqh}\, P_{iqh}(\bar{x})}.    (Equation 8)













The weights of the gaussians may be given by

W_{iqg} = \frac{\sum_f \rho_{iqg}(\bar{x}_f)}{\sum_{g,f} \rho_{iqg}(\bar{x}_f)},    (Equation 9)













and the transition probabilities may be given by

a_{q,q+1} = \frac{1}{\text{average duration in state } q}.    (Equation 10)













Reference is now made to FIG. 5, which illustrates one embodiment of a method performed by corrective trainer 104.




In the method of FIG. 5, the incorrectly recognized states may be emphasized during re-estimation. For each phrase recorded in the database, two segmentations may be performed. In step 140, supervised segmentation, described hereinabove, provides the correct association (q,i,f) of state q of word i with frame f. An unsupervised segmentation (step 142) produces a second state association (q,i,f)_M.




Unsupervised segmentation is an operation similar to the segmentation of recognition in which the speech signal may be compared to all of the models in model database 106 (FIG. 1). This operation is known in the art and, therefore, will not be further described herein.




An embodiment of the present invention compares (step 144) the state associations from the unsupervised and supervised segmentations. For a frame whose association in the unsupervised segmentation is incorrect, a weight m_f is increased (step 146) by some amount Δ, where Δ may be 1. It is noted that a weight m_f may be initialized to 1 at the beginning of the process (step 143).
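
A sketch of steps 143 through 146 is given below; it assumes that each segmentation is represented simply as a per-frame list of (word, state) associations, which is an illustrative choice rather than the patent's data structure.

def emphasize_errors(supervised, unsupervised, weights=None, delta=1.0):
    # supervised[f] and unsupervised[f] hold the (word, state) association of
    # frame f from steps 140 and 142 respectively.
    if weights is None:
        weights = [1.0] * len(supervised)       # step 143: initialize m_f to 1
    for f, (correct, found) in enumerate(zip(supervised, unsupervised)):
        if found != correct:                    # step 144: compare the associations
            weights[f] += delta                 # step 146: emphasize the error
    return weights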




In accordance with an embodiment of the present invention, re-estimation (step 148) is now performed to recalculate the model of each word. However, in the re-estimation of an embodiment of the present invention, each frame f is weighted by its associated weight m_f.




Specifically, the re-estimation equations are:

\mu_{iqgd} = \frac{\sum_f \rho_{iqg}(\bar{x}_f)\, m_f\, x_{fd}}{\sum_f \rho_{iqg}(\bar{x}_f)\, m_f},    (Equation 11)

where the summation Σ is over all the frames f that belong to state q of word i, and d is one of the elements of vector x.




The variances may be estimated by

\sigma^2_{iqgd} = \overline{x^2_{iqgd}} - \mu^2_{iqgd},    (Equation 12)

where

\overline{x^2_{iqgd}} = \frac{\sum_f \rho_{iqg}(\bar{x}_f)\, m_f\, x^2_{fd}}{\sum_f \rho_{iqg}(\bar{x}_f)\, m_f},    (Equation 13)

\overline{x^2} is the average of x squared, and

\rho_{iqg}(\bar{x}) = \frac{W_{iqg}\, P_{iqg}(\bar{x})}{\sum_h W_{iqh}\, P_{iqh}(\bar{x})}.    (Equation 14)













The new weights of the gaussians are given by:

W_{iqg} = \frac{\sum_f \rho_{iqg}(\bar{x}_f)\, m_f}{\sum_{g,f} \rho_{iqg}(\bar{x}_f)\, m_f},    (Equation 15)













and the new transition probabilities are given by:

a_{q,q+1} = \frac{1}{\text{average duration in state } q}.    (Equation 16)
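
The weighted updates of Equations 11 through 15 can be written compactly; in the sketch below, rho[f, g] plays the role of ρ_iqg(x̄_f) for the frames assigned to one state q of word i, x holds their feature vectors and m their weights m_f. The array layout is an assumption made for the illustration.

import numpy as np

def weighted_reestimation(x, rho, m):
    # x:   (F, D) feature vectors of the frames assigned to this state
    # rho: (F, G) gaussian responsibilities rho_iqg(x_f), Equation 14
    # m:   (F,)   frame weights m_f
    wr = rho * m[:, None]                      # rho * m_f appears in every summation
    denom = wr.sum(axis=0)[:, None]            # sum over frames, per gaussian
    mu = (wr.T @ x) / denom                    # Equation 11: weighted means
    x2 = (wr.T @ x ** 2) / denom               # Equation 13: weighted mean of x squared
    var = x2 - mu ** 2                         # Equation 12: variances
    W = wr.sum(axis=0) / wr.sum()              # Equation 15: new mixture weights
    return mu, var, W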













An embodiment of the present invention may comprise a termination criterion, calculated in step 150. An example of the criterion may be to calculate the sum of the weights m_f for all frames and normalize by the number of frames, or:

\text{criterion} = \frac{\sum_f m_f}{\sum_f 1}.    (Equation 17)






The value of the criterion is 1 before corrective training (since no frames have yet been emphasized). If the termination criterion exceeds a predetermined threshold (such as 2) or if there are no more classification errors, the process terminates (step 152).
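
For illustration, the criterion of Equation 17 is simply the mean of the frame weights; a minimal sketch of the check, using the threshold of 2 suggested above:

def should_terminate(weights, remaining_errors, threshold=2.0):
    # Equation 17: sum of the weights m_f normalized by the number of frames.
    criterion = sum(weights) / len(weights)     # equals 1 before corrective training
    return criterion > threshold or remaining_errors == 0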




Reference is now made to FIG. 6, which presents the output of the supervised and unsupervised segmentation operations for a speech signal in which the combination “3,4” was said. In the unsupervised segmentation, the digit “3” was mis-recognized as “5”. The word noise is represented by the letter “n”.




The first row indicates the frame number, the second and third rows indicate the word and state association produced by the supervised segmentation (step 140), and the fourth and fifth rows indicate the word and state association produced by the unsupervised segmentation (step 142). As can be seen, the results match for frames 1 and 6-12. Thus, these frames may be indicated as being correct and their weights m_f will remain as they were previously.




The remaining frames are incorrectly matched. Frames 2, 3 and 4 should be associated with digit 3 and are incorrectly matched to digit 5. These frames, which are part of the re-estimation of digit 3, may have their weights m_f increased. Frame 5 is incorrectly matched to digit 5 when it should be matched to digit 4. This frame, which is part of the re-estimation of digit 4, also has its weight m_f increased.




The methods and apparatus disclosed herein have been described without reference to specific hardware or software. Rather, the methods and apparatus have been described in a manner sufficient to enable persons of ordinary skill in the art to readily adapt commercially available hardware and software as may be needed to reduce any of the embodiments of the present invention to practice without undue experimentation and using conventional techniques.




It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the invention is defined by the claims that follow:



Claims
  • 1. A method comprising: comparing between a first classification of a plurality of data samples based on unsupervised segmentation and a second classification of said plurality of data samples based on supervised segmentation using a plurality of classification parameters; adjusting weights of said data samples by emphasizing the weight of one or more data samples which are incorrectly classified by said first classification; and re-estimating said classification parameters using said adjusted weights.
  • 2. A method according to claim 1 further comprising re-classifying said data samples using said re-estimated classification parameters.
  • 3. A method according to claim 2 further comprising repeating said comparing, said adjusting, said re-estimating and said re-classifying until a termination criterion is met.
  • 4. A method according to claim 3 wherein said termination criterion is met when said second classification is substantially unchanged by said re-classifying.
  • 5. A method according to claim 3 wherein said termination criterion is met when the sum of all of said weights normalized by the number of said data samples exceeds a predetermined value.
  • 6. A method according to claim 1 wherein said data samples are classified in a plurality of classes, each class corresponding to at least a portion of a word.
  • 7. A method according to claim 6 wherein said at least a portion of a word comprises at least one individual word.
  • 8. A method according to claim 6 wherein said at least a portion of a word comprises at least two connected words.
  • 9. A method according to claim 6 wherein said at least a portion of a word comprises a monophone.
  • 10. A method according to claim 6 wherein said at least a portion of a word comprises a triphone.
  • 11. A method for corrective training of Hidden Markov Models (HMMs) of at least a portion of a word, the method comprising: providing initial weights to frames of said at least a portion of a word; comparing between a first classification of said frames based on unsupervised segmentation and a second classification of said frames based on supervised segmentation using a plurality of classification parameters; adjusting said weights by increasing the weight of one or more frames which are incorrectly classified by said first classification; re-estimating said classification parameters using said adjusted weights; and re-classifying said frames using said re-estimated classification parameters.
  • 12. A method according to claim 11 further comprising repeating said comparing, said adjusting, said re-estimating, and said re-classifying until a termination criterion is met.
  • 13. A method according to claim 12 wherein said termination criterion is met when said second classification is substantially unchanged by said re-classifying.
  • 14. A method according to claim 12 wherein said termination criterion is met when the sum of all of said weights normalized by the number of said frames exceeds a predetermined value.
  • 15. A method according to claim 11 wherein said at least a portion of a word comprises at least one individual word.
  • 16. A method according to claim 11 wherein said at least a portion of a word comprises at least two connected words.
  • 17. A method according to claim 11 wherein said at least a portion of a word comprises a monophone.
  • 18. A method according to claim 11 wherein said at least a portion of a word comprises a triphone.
  • 19. A system comprising: a trainer to train a plurality of classifiers with a plurality of data samples based on supervised segmentation; an estimator to classify said data samples based on unsupervised segmentation; and a reviewer to determine if a data sample is incorrectly classified based on said unsupervised segmentation, and, if so, to emphasize a weight of said data sample.
  • 20. A system according to claim 19 further comprising a terminator to terminate the operation of said trainer, estimator and reviewer when a termination criterion is met.
  • 21. A system according to claim 20 wherein said trainer is able to retrain said classifiers using adjusted weights including at least one emphasized weight provided by said reviewer.
  • 22. A system according to claim 21 wherein said termination criterion is met when said classifiers are substantially unchanged by said retraining.
  • 23. A system according to claim 22 wherein said termination criterion is met when the sum of all of said weights normalized by the number of said data samples exceeds a predetermined value.
  • 24. A system according to claim 20 wherein said data samples are classified in a plurality of classes, each class corresponding to at least a portion of a word.
  • 25. A system according to claim 24 wherein said at least a portion of a word comprises at least one individual word.
  • 26. A system according to claim 24 wherein said at least a portion of a word comprises at least two connected words.
  • 27. A system according to claim 24 wherein said at least a portion of a word comprises a monophone.
  • 28. A system according to claim 24 wherein said at least a portion of a word comprises a triphone.
  • 29. A method for corrective training of Hidden Markov Models (HMMs) of at least one portion of a word, the method comprising: providing an initial weight to frames of each said at least one portion of a word; training each of said at least one portion of a word; increasing the weight of any of said frames which is misclassified during said training; re-estimating said states using said weights; and repeating said providing, training, increasing and re-estimating until the sum of all of said weights normalized by the number of said frames exceeds a predetermined value.
US Referenced Citations (10)
Number Name Date Kind
4827521 Bahl et al. May 1989 A
5638486 Wang et al. Jun 1997 A
5638487 Chigler Jun 1997 A
5805731 Yaeger et al. Sep 1998 A
5806029 Buhrke et al. Sep 1998 A
5839103 Mammone et al. Nov 1998 A
5937384 Huang et al. Aug 1999 A
6044344 Kanevsky Mar 2000 A
6131089 Campbell et al. Oct 2000 A
6539352 Sharma et al. Mar 2003 B1
Non-Patent Literature Citations (3)
Entry
B.H. Juang, et al., “Discriminative Learning for Minimum Error Classification”, IEEE Transactions on Signal Processing, Dec. 1992, vol. 40, No. 12, pp. 3043-3054.
W. Reichl et al., “Discriminative Training for Continuous Speech Recognition”, EUROSPEECH '95, 4th European Conference on Speech Communication and Technology, Madrid, Sep. 1995, pp. 537-540, ISSN 1018-4074.
L. Rabiner et al., “Fundamentals of Speech Recognition”, pp. 376-377.