1. Technical Field
The present invention relates to speech recognition and particularly to a method of word transcription.
2. Description of Related Art
A conventional speech recognition engine, typically incorporated into a digital signal processor (DSP), inputs a digitized speech signal and processes the speech signal. The input speech signal is sampled, digitized and cut into frames of equal time windows or time duration, e.g. a 25 millisecond window with a 10 millisecond shift between successive frames. The frames of the digital speech signal are typically filtered, e.g. with a Hamming window, and then input into a circuit including a processor which performs a transform, for instance a fast Fourier transform (FFT), using one of the known FFT algorithms.
Mel-frequency cepstral coefficients (MFCCs) are commonly derived by taking the Fourier transform of a windowed excerpt of a signal to produce a spectrum. The powers of the spectrum are then mapped onto the mel scale using overlapping windows, typically triangular; windows of differing shape or spacing may be used. The logs of the powers at each of the mel frequencies are taken, followed by the discrete cosine transform of the mel log powers. The MFCCs are the amplitudes of the resulting spectrum.
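By way of illustration only, the MFCC derivation described above may be sketched as follows. The frame length, frame shift, FFT size, number of mel filters and number of coefficients are illustrative assumptions, not values mandated by the description:

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale conversion
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sample_rate=16000, frame_len=0.025, frame_shift=0.010,
         n_filters=26, n_ceps=13):
    """Minimal MFCC sketch: frame, window, FFT, mel filterbank, log, DCT."""
    flen = int(frame_len * sample_rate)
    fshift = int(frame_shift * sample_rate)
    n_fft = 512
    # Cut the signal into overlapping frames of equal duration
    n_frames = 1 + max(0, (len(signal) - flen) // fshift)
    frames = np.stack([signal[i * fshift: i * fshift + flen]
                       for i in range(n_frames)])
    # Hamming window, then power spectrum via FFT
    frames = frames * np.hamming(flen)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular filters with centers equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log of the mel-mapped powers, then DCT-II gives the cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return log_mel @ dct.T
```

For a one-second signal at 16 kHz the sketch yields one row of 13 coefficients per 10 millisecond frame shift.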
Conventional speech recognition systems employ probabilistic models known as hidden Markov models (HMMs). A hidden Markov model includes multiple states. A transition probability is defined for each transition from each state to every other state, including transitions to the same state. An observation is probabilistically associated with each unique state. The transition probabilities between states (the probabilities of transitioning from one state to the next) are not all the same. Therefore, a search technique, such as a Viterbi algorithm, is employed in order to determine a most likely state sequence for which the overall probability is maximum, given the transition probabilities between states and the observation probabilities.
In conventional speech recognition systems, speech has been viewed as being generated by a hidden Markov process. Consequently, HMMs have been employed to model observed sequences of speech spectra, where specific spectra are probabilistically associated with a state in an HMM. In other words, for a given observed sequence of speech spectra, there is a most likely sequence of states in a corresponding HMM.
This corresponding HMM is thus associated with the observed sequence. This technique can be extended, such that if each distinct sequence of states in the HMM is associated with a sub-word unit, such as a phoneme, then a most likely sequence of sub-word units can be found. Moreover, using models of how sub-word units are combined to form words, then using language models of how words are combined to form sentences, complete speech recognition can be achieved.
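The Viterbi search mentioned above may be sketched as follows; the dynamic-programming recursion below is the standard textbook algorithm, not a form specific to the present invention, and the toy model in the usage note is purely illustrative:

```python
import numpy as np

def viterbi(obs, log_init, log_trans, log_emit):
    """Most likely state sequence for an observation sequence under an HMM.

    log_init:  (S,)   log initial state probabilities
    log_trans: (S, S) log transition probabilities
    log_emit:  (S, V) log observation probabilities per state
    obs:       sequence of observation indices
    """
    S = len(log_init)
    T = len(obs)
    delta = np.full((T, S), -np.inf)   # best log-probability ending in state s at time t
    psi = np.zeros((T, S), dtype=int)  # back-pointers
    delta[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + log_trans[:, s]
            psi[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[psi[t, s]] + log_emit[s, obs[t]]
    # Trace back the maximum-probability path
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1], float(np.max(delta[-1]))
```

If each state of such an HMM corresponds to a phoneme (or sub-phoneme), the returned path is the most likely phoneme sequence for the observed spectra.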
Conventional speech recognition systems can typically be classified into two types: continuous speech recognition (CSR) systems, which are capable of recognizing fluent speech, and isolated speech recognition (ISR) systems, which are typically employed to recognize only isolated speech (or discrete speech). The conventional CSR system is trained (i.e., develops acoustic models) based on continuous speech data in which one or more readers read training data into the system in a continuous or fluent fashion. The acoustic models developed during training are used to recognize speech.
The conventional ISR system is typically trained (i.e., develops acoustic models) based on discrete or isolated speech data in which one or more readers are asked to read training data into the system in a discrete or isolated fashion with pauses between each word. An ISR system is also typically more accurate and efficient than a continuous speech recognition system because word boundaries are more definite and the search space is consequently more tightly constrained. Isolated speech recognition has also been thought of as a special case of continuous speech recognition, because continuous speech recognition systems generally can accept isolated speech as well.
Conventional speech recognition systems of either type may index the input speech signal. During indexing, speech is processed and stored in a structure relatively easy to search known as an index. The input speech signal is tagged using a sequence of recognized words or phonemes. In the search stage, the index is searched and the exact location (time) of the target word is determined. An example of such an index is sometimes known as a phoneme lattice.
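By way of illustration only, such an index may be sketched as a time-ordered list of recognized phoneme tags that can be searched for the exact location (time) of a target; the data layout and the linear-scan search below are illustrative assumptions, not the phoneme lattice of any particular system:

```python
def build_index(recognized):
    """Build a simple searchable index from a list of
    (start_time, phoneme) pairs tagged onto the speech signal."""
    return sorted(recognized)

def find(index, target):
    """Return the start times at which the phoneme sequence
    'target' occurs in the index."""
    hits = []
    for i in range(len(index) - len(target) + 1):
        if [p for _, p in index[i:i + len(target)]] == list(target):
            hits.append(index[i][0])
    return hits
```

In the search stage, a query such as the phoneme sequence of a target word returns the times at which that sequence was recognized.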
TIMIT is a corpus of phonemically and lexically transcribed speech of American English speakers of different sexes and dialects. Each transcribed element has been delineated in time. TIMIT was designed to further acoustic-phonetic knowledge and automatic speech recognition systems. It was commissioned by DARPA and worked on by many sites, including Texas Instruments (TI) and Massachusetts Institute of Technology (MIT), hence the corpus' name. The 61 phoneme classes presented in TIMIT can be further collapsed or folded into 39 classes using a standard folding technique known to one skilled in the art.
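By way of illustration only, a portion of the standard folding may be sketched as a lookup table; the excerpt below lists only some entries of the commonly used 61-to-39 folding (after Lee and Hon), and the full mapping covers all 61 TIMIT labels:

```python
# Illustrative excerpt of the standard 61-to-39 TIMIT folding;
# labels omitted here map to themselves or to other folded classes.
FOLD = {
    "ux": "uw",    # fronted /uw/ folds into /uw/
    "ax": "ah",    # schwa folds into /ah/
    "ax-h": "ah",
    "axr": "er",
    "em": "m",     # syllabic nasals fold into plain nasals
    "en": "n",
    "eng": "ng",
    "el": "l",     # syllabic /l/ folds into /l/
    "nx": "n",
    "zh": "sh",
    "hv": "hh",
    # stop closures, pauses and utterance boundaries fold into silence
    "pcl": "sil", "tcl": "sil", "kcl": "sil",
    "bcl": "sil", "dcl": "sil", "gcl": "sil",
    "pau": "sil", "epi": "sil", "h#": "sil",
}

def fold(phoneme):
    """Map a TIMIT label to its folded class; labels not in the
    excerpt above are returned unchanged."""
    return FOLD.get(phoneme, phoneme)
```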
The term “phoneme” as used herein refers to the smallest unit of speech, or a basic unit of sound in a given language, that distinguishes one word from another. An example of a phoneme would be the ‘t’ found in words like “tip”, “stand”, “writer”, and “cat”. The term “sub-phoneme” as used herein is a portion of a phoneme found by dividing the phoneme into two or three parts.
The term “term” as used herein refers to a sequence of words, as in for example the “term” “The Duke of Westminster”, where “Duke” is an example of a “word”.
The term “speech” as used herein refers to unconstrained audio speech. The term “unconstrained” refers to random speech as opposed to prompted responses.
A “phonemic transcription” of a word is a representation of the word comprising a series of phonemes. For example, the initial sound in “cat” and “kick” may be represented by the phonemic symbol ‘k’ while the one in “circus” may be represented by the symbol ‘s’. Further, ‘ ’ is used herein to distinguish a symbol as a phonemic symbol, unless otherwise indicated. In contrast to a phonemic transcription of a word, the term “orthographic transcription” of the word refers to the typical spelling of the word.
The terms “word transcription” or “transcription” as used herein refer to the sequence of phonemic transcriptions of a word or a term, including spaces between words.
The terms “frame” and “phoneme frame” are used herein interchangeably and refer to portions of a speech signal of substantially equal durations or time windows.
The terms “model” and “phoneme model” are used herein interchangeably to refer to a mathematical representation of the essential aspects of acoustic data of a set of phonemes.
The term “length” as used herein refers to a time duration, typically of a “phoneme” or “sub-phoneme”, a “word” or a “term”.
According to embodiments of the present invention there is provided a computerized method for continuous speech recognition using a speech recognition engine and a phoneme model. The computerized method inputs a speech signal into the speech recognition engine. Based on the phoneme model, the speech signal is indexed by scoring for the phonemes of the phoneme model, and a time-ordered list of phoneme candidates and respective scores resulting from the scoring is produced. The phoneme candidates are input with the scores from the time-ordered list. Word transcription candidates are typically input from a dictionary and words are built by selecting from the word transcription candidates based on the scores. A stream of transcriptions corresponding to the input speech signal is output. The stream of transcriptions is re-scored by searching for and detecting anomalous word transcriptions in the stream of transcriptions to produce second scores.
The second scores are output and word building is performed again based on the second scores. A second stream of transcriptions is output based upon the second word building. Statistical information on the scores may be received from a database of word transcriptions. The re-scoring is performed based on the statistical information. The statistical information may include a mean and a standard deviation of the scores, and the searching for and detecting of anomalous word transcriptions is performed based on the mean and standard deviation. Alternatively, statistical information is calculated directly from the scores of the word transcriptions. The scoring may be frame by frame over a time period for the phonemes of the phoneme model and/or the scoring is for multiple phonemes of the phoneme model over respective lengths or time periods for the phonemes. The indexing and/or selection of the word transcriptions may be based on phoneme duration statistics. The phoneme model may explicitly include, as a parameter, the length of the phonemes.
According to embodiments of the present invention there is provided a computer readable medium encoded with processing instructions for causing a processor to execute the methods disclosed herein.
The invention is herein described, by way of example only, with reference to the accompanying drawings, wherein:
a illustrates conceptually an example of a phoneme lattice, constructed according to the method of
b illustrates conceptually another embodiment of a phoneme lattice constructed according to the method of
a and 5b illustrate schematically word scoring/building after anomaly detection for the two words “Washington” and “of”, respectively.
The foregoing and/or other aspects will become apparent from the following detailed description when considered in conjunction with the accompanying drawing figures.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below to explain the present invention by referring to the figures.
The embodiments of the present invention may comprise a general-purpose or special-purpose computer system including various computer hardware components, which are discussed in greater detail below. Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions, computer-readable instructions, or data structures stored thereon. Such computer-readable media may be any available media, which is accessible by a general-purpose or special-purpose computer system. By way of example, and not limitation, such computer-readable media can comprise physical storage media such as RAM, ROM, EPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other media which can be used to carry or store desired program code means in the form of computer-executable instructions, computer-readable instructions, or data structures and which may be accessed by a general-purpose or special-purpose computer system.
In this description and in the following claims, a “computer system” is defined as one or more software modules, one or more hardware modules, or combinations thereof, which work together to perform operations on electronic data. For example, the definition of computer system includes the hardware components of a personal computer, as well as software modules, such as the operating system of the personal computer. The physical layout of the modules is not important. A computer system may include one or more computers coupled via a computer network. Likewise, a computer system may include a single physical device (such as a mobile phone or Personal Digital Assistant “PDA”) where internal modules (such as a memory and processor) work together to perform operations on electronic data.
Reference is now made to
In this description and in the following claims, a “network” is defined as any architecture where two or more computer systems may exchange data. Exchanged data may be in the form of electrical signals that are meaningful to the two or more computer systems. When data is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer system or computer device, the connection is properly viewed as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a general-purpose computer system or special-purpose computer system to perform a certain function or group of functions.
Before explaining embodiments of the invention in detail, it is to be understood that the invention is not limited in its application to the details of design and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
By way of introduction, embodiments of the present invention are directed to a method for performing continuous speech recognition.
A phoneme model is constructed using a speech recognition engine trained on the known phoneme classes of a speech database. A speech database with known phonemes such as TIMIT for the 61 phoneme classes or the folded database of 39 phoneme classes is provided. The phoneme classes are often modeled as state probability density functions for each phoneme. Well known phoneme models include hidden Markov models, Gaussian mixture models and hybrid combinations thereof. After training on the database, the model parameters of the probability distribution functions are determined for each of the phoneme classes.
Construction of a phoneme model appropriate for embodiments of the present invention is disclosed in co-pending U.S. patent application Ser. No. 12/475,879 filed 1 Jun. 2009 by the present inventors. The acoustic data of the phoneme is divided into either two or three sub-phonemes. A parametrized model of the sub-phonemes is built, in which the model includes multiple Gaussian parameters based on Gaussian mixtures and a length dependency, such as according to a Poisson distribution. A probability score is calculated while adjusting the length dependency of the Poisson distribution. The probability score is a likelihood that the parametrized model represents the phoneme.
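By way of illustration only, a score combining a Gaussian-mixture acoustic likelihood with a Poisson length dependency may be sketched as follows; the one-dimensional observations and all parameter values are illustrative assumptions, not the model of the co-pending application:

```python
import math

def log_poisson(k, lam):
    # log P(length = k) under a Poisson distribution with mean lam
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def log_gmm(x, weights, means, variances):
    # log-likelihood of a 1-D observation under a Gaussian mixture
    total = 0.0
    for w, m, v in zip(weights, means, variances):
        total += w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
    return math.log(total + 1e-300)

def phoneme_score(frames, weights, means, variances, mean_length):
    """Acoustic log-likelihood of the frames plus a Poisson term
    penalizing lengths far from the phoneme's mean length."""
    acoustic = sum(log_gmm(x, weights, means, variances) for x in frames)
    return acoustic + log_poisson(len(frames), mean_length)
```

A candidate whose duration matches the modeled mean length thus scores higher than an otherwise identical candidate of implausible length.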
It should be noted that although the methods as disclosed herein include recognition of words, the teachings herein are similarly applicable to the recognition of “terms” by allowing for the recognition of spaces between the words of the term according to methods known in the prior art. Hereinafter, the terms “term” and “word” are used interchangeably and the term “word” should be understood as including the meaning of “term”.
Referring now to the drawings,
Reference is now also made to
Reference is now also made to
Reference is now also made to
Alternatively or in addition, word scoring may be performed using duration statistics 120 previously stored in or with phoneme lattice 115 during phoneme recognition 111 (
According to embodiments of the present invention, transcription stream 34 produced in word building (step 39) is re-scored (step 33) by detecting anomalies in transcription stream 34. Anomaly detection (step 33) may be performed by inputting statistical data, e.g. mean and standard deviation, using an external database 314, e.g. TIMIT, of word transcriptions. Alternatively or in addition, statistical data may be calculated from transcription stream 34. In either case, based on the statistical data, word building (step 39) may be performed again based on anomaly detection (step 33) and a re-scored transcription stream 38 is output. Re-scored transcription stream 38 based on anomaly detection typically reflects more accurately the actual words spoken in input speech signal 101 than transcription stream 34.
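One way to realize the anomaly detection described above may be sketched as follows; the mean and standard deviation are taken either from an external database of word transcription scores or from the stream itself, and the two-standard-deviation threshold is an illustrative assumption:

```python
import statistics

def rescore(transcriptions, ref_scores=None, threshold=2.0):
    """Flag word transcriptions whose score deviates anomalously
    from the mean.

    transcriptions: list of (word, score) pairs from a first
    word-building pass.
    ref_scores: optional scores from an external database of word
    transcriptions; if None, statistics are computed from the
    stream itself.
    Returns (word, score, is_anomalous) triples; anomalous words
    would be re-built in a second word-building pass.
    """
    scores = ref_scores if ref_scores is not None else [s for _, s in transcriptions]
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)
    out = []
    for word, score in transcriptions:
        anomalous = std > 0 and abs(score - mean) > threshold * std
        out.append((word, score, anomalous))
    return out
```

A word whose score lies far below the statistics of comparable transcriptions is flagged and its word building repeated, yielding the second, re-scored stream.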
Reference is now made to
Reference is now made to
The indefinite articles “a” and “an” as used herein, such as in “a phoneme model” or “a speech recognition engine”, have the meaning of “one or more”, that is “one or more phoneme models” or “one or more speech recognition engines”.
Although selected embodiments of the present invention have been shown and described, it is to be understood the present invention is not limited to the described embodiments. Instead, it is to be appreciated that changes may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and the equivalents thereof.