This invention relates to comparison of acoustic events, and in particular to comparison of linguistic events in a word spotting system.
Determining the similarity of acoustic events, which may be instances of utterances of a same word or phrase, is useful in various aspects of speech recognition. One such aspect is for rescoring, or secondary scoring, of putative instances of words or phrases located using a word spotting system. In such a system, a variety of approaches have been suggested for comparing the acoustic information from a putative instance of the word or phrase with a reference instance of that word or phrase.
In one aspect, in general, the invention features a method and associated software and computer system that makes use of models for a set of subword units. For each of two acoustic events and for each of a series of times in each of the events, a probability associated with each of the models of the set of subword units is computed. Then, a quantity characterizing a comparison of the two acoustic events, one occurring in each of two acoustic signals, is computed using the computed probabilities associated with each of the models.
Aspects of the invention can include one or more of the following features:
The two acoustic events include a reference instance of a query and a putative instance of a query identified using a word spotting approach. For example, the reference instance of the query may have been spoken by a user wishing to locate instances of the query. The two acoustic events can alternatively include two putative instances of a query identified using a word spotting approach.
Each of the models of the set of subword units can characterize a time variation of instances of the unit, for example, using a Hidden Markov Model (HMM) with one or more states for each unit.
Computing the probability associated with each of the models can include computing a probability distribution over the set of subword units, and can include computing a probability distribution over the individual states of the subword units.
Computing the probability distribution over the set of subword units can include computing a probability associated with each of the set of subword units based on a time evolution of the acoustic signal over an interval including multiple subword units. For example, the Forward and/or Backward HMM algorithms, or the Viterbi algorithm, can be used to compute the state probabilities.
Computing the quantity characterizing the comparison of the two acoustic events can include comparing pairs of the probability distributions, one member of each pair associated with each of the two acoustic signals. One way of comparing the pairs of probability distributions includes computing a Kullback-Leibler distance between the distributions.
Computing the quantity characterizing the comparison of the two acoustic events can include time aligning the two acoustic events, for example, by optimizing a time alignment based on an optimization of the quantity characterizing the comparison. One way to implement the optimization is to apply a dynamic programming, or dynamic time warping (DTW), approach.
Aspects of the invention can include one or more of the following advantages:
Using the probabilities associated with the subword units can improve the accuracy of a comparison of the acoustic events. Increased accuracy can be used to reduce the number of false alarms or to increase the detection rate of a word spotter.
Other features and advantages of the invention are apparent from the following description, and from the claims.
Referring to
The word spotting system 100 makes use of a rescoring phase 103, which is used to calculate scores for the putative events. These scores are separate from the optional scores produced by the second phase 102 of the word spotting system. The scores calculated in the rescoring phase provide a way of selecting or sorting the putative events so that events that are more likely to be hits have better (in this version of the system, smaller) scores.
In the first stage 101, a statistical preprocessor 125 takes as input each of the query speech 140 and the unknown speech 120 and produces processed speech 145 and processed speech 135, respectively. The statistical preprocessor 125 makes use of models 130, which include statistically-derived parameters for a set of subword models. In this system, the subword models include Hidden Markov Models (HMMs) for a set of English-language phonemes. Each phoneme is represented by a network of model states that are interconnected by possible transitions. One example of such a network is a three-state “left-to-right” model in which each state is associated with a probability distribution for features (e.g., spectral band energies, cepstra, etc.) determined from signal processing the query speech and the unknown speech.
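The “left-to-right” topology described above can be sketched as a transition matrix in which each state either repeats or advances to the next state. This is a minimal illustration, not the system's actual parameterization; the transition probabilities shown are hypothetical placeholders.

```python
def left_to_right(n_states=3, self_loop=0.6):
    # Build an n_states x n_states transition matrix for a single
    # phoneme's "left-to-right" HMM: each state may repeat (self-loop)
    # or advance to the next state. The self_loop value is an
    # illustrative assumption, not a trained parameter.
    A = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        if i + 1 < n_states:
            A[i][i] = self_loop
            A[i][i + 1] = 1.0 - self_loop
        else:
            A[i][i] = 1.0  # last state repeats; exit handled elsewhere
    return A
```

In a full system, exits from the last state would connect to the entry states of other phoneme models, forming the interconnected network of subword units.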
The output of the first phase, processed speech 145 and 135, includes for each of a series of times in the query and unknown speech (e.g., once every 15 millisecond “frame”) a probability distribution over the states of all the subword models of models 130. That is, if there are 40 subword models with 3 states each, then at each time there are 120 probabilities that add up to one. There are a number of different ways of computing these probabilities. One way is to assume that the subword models can follow one another in any order, and to compute the probability of occupying each state conditioned on the entire input (both before and after the time in question). Such a computation can be carried out efficiently using the Forward-Backward algorithm. Alternatively, the probabilities of occupying each of the states can be conditioned on less than the full input, for instance, conditioned on inputs up to the time in question, on inputs after the time in question, or on inputs at or in the vicinity of the time in question; other equivalent quantities, such as the probabilities of the inputs conditioned on being at a particular state at a particular time, can also be used. In alternative versions of the system, other algorithms, such as the Viterbi algorithm, can be used to compute the probability distributions. Also, the assumptions regarding the statistics of the sequences of subword models can be chosen to be more complex, for example, forming a Markov chain.
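The per-frame state posteriors described above can be sketched with a standard Forward-Backward computation. This is a minimal illustration under the stated assumption that all states are interconnected (units may follow one another in any order); the variable names and inputs (`emit`, `trans`, `init`) are hypothetical, and a real implementation would work in the log domain to avoid underflow.

```python
def state_posteriors(emit, trans, init):
    # emit[t][k]: likelihood P(O_t | state k) from the acoustic features
    # trans[j][k]: transition probability from state j to state k
    # init[k]: initial occupancy probability of state k
    T, S = len(emit), len(trans)

    # Forward pass: alpha[t][k] ∝ P(O_1..O_t, state k at time t)
    alpha = [[0.0] * S for _ in range(T)]
    for k in range(S):
        alpha[0][k] = init[k] * emit[0][k]
    for t in range(1, T):
        for k in range(S):
            alpha[t][k] = emit[t][k] * sum(
                alpha[t - 1][j] * trans[j][k] for j in range(S))

    # Backward pass: beta[t][k] ∝ P(O_{t+1}..O_T | state k at time t)
    beta = [[1.0] * S for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for k in range(S):
            beta[t][k] = sum(
                trans[k][j] * emit[t + 1][j] * beta[t + 1][j]
                for j in range(S))

    # Posterior over all states at each frame, conditioned on the
    # entire input; each frame's probabilities sum to one.
    posteriors = []
    for t in range(T):
        row = [alpha[t][k] * beta[t][k] for k in range(S)]
        z = sum(row)
        posteriors.append([r / z for r in row])
    return posteriors
```

With 40 three-state subword models, each row of the returned list would hold the 120 state probabilities for one frame.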
In the second phase 102, a query processor 150 forms a processed query 160 from the processed speech 145, which was computed from the query speech 140. The processed query 160 includes data that characterizes possible sequences of subword models, optionally characterizing relative probabilities of the possible sequences, that represent the query events. In a simple version of the system, a processed query 160 includes data that identifies a single sequence of subword units, which in the case of phonetic subword units corresponds to a phonetic “spelling” of the word or phrase of the query.
A word spotting engine 180 scans through the processed speech 135 and detects possible occurrences of the query based on the processed query 160, as well as on the models 130. That is, processed query 160 together with models 130 characterize the query in terms of the features modeled in the subword models of models 130. The putative events 190 include, for each possible occurrence of a query event, an estimate of the starting and the ending time for the event. These starting and ending times correspond to particular times, and their associated probability distributions, in processed speech 135. The word spotting engine 180 computes a probability that a query starts at each time conditioned on all the computed features of the unknown speech 120, using a variant of the Forward-Backward algorithm, and declares the presence of the putative events at times that have relatively high (e.g., above a threshold) values of the computed probability.
The rescoring phase 103 makes use of the processed speech 145 and 135 for the query speech 140 and unknown speech 120, respectively, to compute a new score for each putative event. For example, suppose the query speech 140 includes a single instance of a word or phrase that is to be located in the unknown speech, and that instance of the query is represented by a series of M distributions (i.e., M frames) in processed speech 145. Suppose also that a putative event corresponding to the desired word or phrase is represented by N distributions (i.e., N frames) in processed speech 135, where the length N is determined from the estimated starting time and ending time as computed by the word spotting engine 180. A rescorer 110 takes as input the series of M distributions associated with the query speech 140 and the series of N distributions associated with the event in the unknown speech 120 and produces as output a scalar score that compares the two series of distributions.
The rescorer 110 accounts for the possible different lengths (e.g., M and N) of the two inputs, as well as possible time variations within each of the inputs even if the lengths are the same, using a dynamic programming, also known as dynamic time warping (DTW), procedure. This DTW procedure makes use of a procedure for comparing pairs of probability distributions, one from a time in the query speech and one from a time in the unknown speech, that produces a scalar “distance” that characterizes the dissimilarity of the distributions. The more dissimilar the distributions, the larger the computed distance.
One possible distance measure that can be used is based on the Kullback-Leibler distance between distributions. In particular, we use the notation Pr(sk|Ot(i)) for the probability of occupying state sk given the observation Ot(i) at time t of signal i. A first option for the distance measure takes the form d(t1,t2)=Σk Pr(sk|Ot1(1)) log[Pr(sk|Ot1(1))/Pr(sk|Ot2(2))].
A second option for the distance measure is symmetric in the sense that the roles of the query speech and the unknown speech can be reversed without affecting the distance, for example, d(t1,t2)=−log Σk Pr(sk|Ot1(1))Pr(sk|Ot2(2))/Pr(sk), where Pr(sk) is the prior probability of state sk.
For a third option for the distance measure, we use the notation P(Ot(i)|sk) for the probability of the observation at time t of signal i conditioned on occupying state sk. The third option for the distance measure takes the form d(t1,t2)=−log Σk (P(Ot2(2)|sk))γ Pr(sk|Ot1(1)).
The exponent γ can be chosen to be unity, or can be chosen to be a quantity less than one to reduce the effect of the range of the observation probabilities, or a quantity greater than one to accentuate that effect. This third option is related to the second option by assuming that the prior state probabilities Pr(sk) are all equal, and ignoring unconditional observation probabilities of the form P(Ot2(2)).
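The distance options above can be sketched as follows. Note that the exact forms are not fully recoverable from this text, so the second and third functions are plausible reconstructions rather than the system's definitive formulas; the probability floor is an added safeguard against taking the logarithm of zero.

```python
import math

FLOOR = 1e-12  # floor on probabilities to keep logarithms finite (an assumption)

def kl_distance(p, q):
    # First option: Kullback-Leibler distance between the state
    # posterior distribution p at a query frame and q at an unknown
    # speech frame. Larger values indicate more dissimilar frames.
    return sum(pk * math.log(max(pk, FLOOR) / max(qk, FLOOR))
               for pk, qk in zip(p, q))

def symmetric_distance(p, q, prior):
    # Second option (reconstruction): dividing the pairwise products
    # by the state priors gives an expression unchanged when p and q
    # swap roles, i.e., when query and unknown speech are reversed.
    return -math.log(max(FLOOR, sum(pk * qk / rk
                                    for pk, qk, rk in zip(p, q, prior))))

def likelihood_distance(obs_lik, p, gamma=1.0):
    # Third option (reconstruction): obs_lik[k] plays the role of the
    # observation likelihood P(O|sk) at the unknown-speech frame, and
    # gamma tempers (gamma < 1) or accentuates (gamma > 1) the range
    # of the observation probabilities.
    return -math.log(max(FLOOR, sum((ok ** gamma) * pk
                                    for ok, pk in zip(obs_lik, p))))
```

With equal priors and γ=1, the third function reduces to the second up to the ignored unconditional observation probability, matching the relationship described above.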
Referring to
The value D(M,N) then minimizes the sum of the individual distance terms over different time alignments. Referring to
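The minimization of D(M,N) over time alignments can be sketched with a standard DTW recursion over a matrix of frame-to-frame distances. The three-way local step pattern below is a common choice; the exact alignment constraints used by the system are given in a figure not reproduced here, so this should be read as an illustration rather than the system's definitive recursion.

```python
def dtw_score(dist):
    # dist[i][j]: frame distance (e.g., Kullback-Leibler based) between
    # query frame i and unknown-speech frame j. D[i][j] accumulates the
    # minimum total distance over alignments ending at (i, j).
    M, N = len(dist), len(dist[0])
    INF = float("inf")
    D = [[INF] * N for _ in range(M)]
    D[0][0] = dist[0][0]
    for i in range(M):
        for j in range(N):
            if i == 0 and j == 0:
                continue
            best = min(
                D[i - 1][j] if i > 0 else INF,                 # advance query only
                D[i][j - 1] if j > 0 else INF,                 # advance unknown only
                D[i - 1][j - 1] if i > 0 and j > 0 else INF)   # advance both
            D[i][j] = dist[i][j] + best
    return D  # D[M-1][N-1] is the rescorer's alignment score D(M,N)
```

This accommodates the different lengths M and N as well as timing variations within each input, since the optimal path may dwell on or skip past frames of either signal.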
In alternative versions of the system, different distance measures and/or constraints on allowed time alignments can be used. For example, the observations Ot(i) can correspond to a series of observations up to, or beginning at, time t rather than a single observation occurring at time t.
In an alternative version of the approach, the word spotting engine 180 estimates only one of the starting and the ending times, but not the other, for each putative event. That is, the length of the query, M, is known, but the length of the putative event in the unknown speech, N, is not. If, for example, the starting time is known, the time alignment procedure is carried out as before, except that the alignment is optimized over the unknown length by minimizing D(M,n) over a range of allowable lengths n. In this version of the system, the alignment is optimized over possible lengths N≦2M.
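The minimization over unknown lengths can be sketched as a search over the last row of the cumulative DTW matrix, where entry n−1 holds the best alignment score D(M,n) using the first n unknown-speech frames after the estimated starting time. This is an illustrative helper, with the N≦2M cap taken from the text.

```python
def best_ending_length(D, query_len):
    # D: cumulative DTW score matrix for a query of query_len frames
    # against unknown speech beginning at the estimated start time.
    # Returns the length n minimizing D(M, n), with candidates capped
    # at twice the query length (N ≦ 2M), and its score.
    last_row = D[-1]
    limit = min(len(last_row), 2 * query_len)
    n_best = min(range(limit), key=lambda n: last_row[n]) + 1
    return n_best, last_row[n_best - 1]
```

A symmetric variant for a known ending time would instead scan candidate starting times backward from the estimated end.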
In alternative versions of the system, the approach described above for computing a dissimilarity score between query speech and unknown speech can be applied to other comparisons of speech events. For example, different putative instances of a query can be compared using such an approach.
Also, in the description above, only a single instance of a query is compared with the unknown speech. In practice, multiple instances of the query speech can be used, each compared to a putative event. The multiple rescorer scores are then combined to yield a single overall score, for example, by averaging or taking the minimum of the individual scores.
Alternative systems that implement the techniques described above can be implemented in software, in firmware, in digital electronic circuitry, or in computer hardware, or in combinations of them. The system can include a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor, and method steps can be performed by a programmable processor executing a program of instructions to perform functions by operating on input data and generating output. The system can be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
It is to be understood that the foregoing description is intended to illustrate and not to limit the scope of the invention, which is defined by the scope of the appended claims. Other embodiments are within the scope of the following claims.
This application claims the benefit of U.S. Provisional Application No. 60/489,391 filed Jul. 23, 2003, which is incorporated herein by reference.