The present invention relates generally to speech recognition, and, more particularly, to automatic identification of sentence boundaries.
Automatic Speech Recognition (ASR) has proven useful for a number of applications. Typically, the output of an ASR system is a stream of words, particularly when outputting text corresponding to audio files. Generally, automatic transcriptions, and sometimes manual transcriptions, of conversations do not contain any punctuation indicating sentence boundaries. Moreover, punctuation in manual transcriptions is not inserted in a consistent manner. However, many applications, such as information retrieval and natural language processing, benefit from (or even require) a sentence structure. State-of-the-art Natural Language Processing (NLP) tools, such as Parts of Speech (POS) taggers and syntactic parsers, require input to be a single sentence. Applying such NLP tools to textual data based on ASR output therefore results in significant errors.
The inherent word recognition error of an ASR system and the presence of noise, such as repetitions, false starts, and filler words in conversational speech, make identifying structural information a more challenging task as compared to well-written text.
According to an exemplary embodiment, a method for automatically identifying sentence boundaries in noisy conversational transcribed data is provided. Noise and transcription symbols are removed from the transcribed data. An initial set of boundaries is marked in the automatic speech recognition output based on silence durations, long silences being assumed to mark sentence boundaries. This marked output forms the training set. Alternatively, the training set can comprise manual transcriptions with sentence boundaries marked. Frequencies of head and tail n-grams that occur at the beginning and ending of sentences are determined from the training set, and n-grams that occur a significant number of times in the middle of sentences, relative to the frequencies at which they occur at the beginning or ending of sentences, are filtered out. A boundary is marked in the conversational data before every head n-gram and after every tail n-gram that occurs in the conversational data and remains in the training set after filtering. Turns, indicating a speaker change, are identified in the conversational data. A boundary is marked after each turn in the conversational data, unless the turn ends with an impermissible tail word or includes a word indicating an incomplete turn. The marked boundaries in the conversational data identify sentence boundaries.
ASR output does not contain punctuation. However, some of the sentence boundaries in the textual data of ASR output may be identified by assuming that a sufficiently long silence indicates a sentence boundary. It may be further assumed that a sufficiently long silence, e.g., >0.7 seconds, follows an end of a sentence and is followed by a head of a sentence.
In ASR transcribed data, the presence of a pause or silence in conversation may be an indication of a sentence boundary. However, due to the presence of spontaneity, hesitation, repetition, interruption in conversation and other factors, boundaries marked using only silence information are not entirely accurate. Using silence as an indication of sentence boundaries may result in the marking of false boundaries and may result in missed boundaries, because people may not pause appropriately between sentences. However, it is possible to build a language model from a test set/training set with boundaries noisily marked using silence information and then use the language model to remove false boundaries and add the boundaries missed by the silence model.
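The silence-based annotation described above can be sketched as follows. This is an illustrative sketch, not the disclosed implementation: the 0.7-second threshold echoes the example value given above, while the `(word, following-silence)` input format and the `<s>` boundary marker are assumptions made for the example.

```python
# Mark candidate sentence boundaries from silence durations.
# Assumption: input is a list of (word, silence_after_in_seconds) pairs.

SILENCE_THRESHOLD = 0.7  # seconds; a longer silence is assumed to end a sentence

def mark_silence_boundaries(words_with_silence, threshold=SILENCE_THRESHOLD):
    """Insert an '<s>' boundary token after any word followed by a long silence."""
    tokens = []
    for word, silence in words_with_silence:
        tokens.append(word)
        if silence > threshold:
            tokens.append("<s>")
    return tokens
```

As the passage notes, boundaries produced this way are noisy: a speaker who pauses mid-sentence yields a false boundary, and one who runs sentences together yields a missed boundary; the language model built later corrects both.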
According to an exemplary embodiment, if data has already been manually transcribed, punctuations, such as periods, already present may be used to generate n-gram models of head and tail words.
While grammar-based language models are easy to understand, they are not generally useful for large-vocabulary applications, simply because it is so difficult to write a grammar-based language model with sufficient coverage of the language. The most common kind of language model in use today is based on estimates of word string probabilities from large collections of text or transcribed speech. In order to make these estimates tractable, the probability of a word given the preceding sequence is approximated by the probability given the preceding one (bigram) or two (trigram) words (in general, these are called n-gram models).
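The bigram approximation just described can be sketched as a maximum-likelihood estimate over counts. This is a generic textbook estimator, not the specific model of the disclosure; smoothing for unseen word pairs is omitted for brevity.

```python
from collections import Counter

def bigram_probability(corpus_tokens):
    """Return a function estimating P(w2 | w1) by maximum likelihood.

    P(w2 | w1) is approximated as count(w1, w2) / count(w1),
    i.e., how often w1 is followed by w2 in the corpus.
    """
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))

    def prob(w1, w2):
        if unigrams[w1] == 0:
            return 0.0
        return bigrams[(w1, w2)] / unigrams[w1]

    return prob
```

For example, in the token sequence "a b a b a c", the word "a" is followed by "b" twice out of three occurrences, so P(b | a) is estimated as 2/3.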
According to an exemplary embodiment, boundary detection uses head phrases and tail phrases. Silence information is used to identify sentence boundaries to form test data/training data as described above, and then probable head and tail phrases are learned and used to mark sentence boundaries in the test/training data. Phrases that occur at the beginning or end of a sentence are referred to as head or tail phrases, respectively. Some phrases are more likely to be a head or tail phrase than others. For example, phrases such as “hello, this is”, “Would you like”, etc. may be assumed to occur at the beginning of a sentence, whereas phrases such as “help you today”, “click on OK”, etc. may commonly be found at the end of a sentence.
Based on these assumptions, n-gram models of expressions (and their frequencies) may be generated for heads of sentences and ends of sentences. This model may then be applied to test data/training data to identify sentence boundaries.
As those skilled in the art will appreciate, an n-gram is a sub-sequence of n items from a given sequence. The items in question can be letters, words, or base pairs according to the application. An n-gram model models sequences, notably natural languages, using the statistical properties of n-grams. When used for language modeling, independence assumptions are made so that each word depends only on the preceding n−1 words. This assumption is important because it massively simplifies the problem of learning the language model from data. In addition, because of the open nature of language, it is common to group words unknown to the language model together.
Next, at step 120, training/test data is prepared by marking sentence boundaries based on long silences. A long silence is assumed to indicate a sentence boundary. Alternatively, if manual transcriptions are available with sentence boundary information that has been marked by the transcriber, the manually marked transcription can be used as the training data.
Next, head and tail phrases are identified, along with the frequencies of their occurrence, at step 130. According to an exemplary embodiment, a pass is made over the test/training data annotated with boundaries based on silence information as explained above. Head and tail phrases or “n-grams” are identified, along with their frequencies. For example, any phrase of length k that appears at the beginning or end of a sentence may be selected as the head phrase or the tail phrase, respectively. The value of k may be set depending on the data set.
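The extraction pass of step 130 can be sketched as follows; the function name, data shapes, and the sample value of k are illustrative assumptions, since, as stated above, k is set depending on the data set.

```python
from collections import Counter

def head_tail_ngrams(sentences, k):
    """Count the first and last k words of each sentence.

    sentences: list of token lists, one per sentence of the
    silence-annotated training data.
    Returns (head_counts, tail_counts) mapping each k-word phrase
    to its frequency of occurrence.
    """
    heads, tails = Counter(), Counter()
    for sent in sentences:
        if len(sent) >= k:
            heads[tuple(sent[:k])] += 1
            tails[tuple(sent[-k:])] += 1
    return heads, tails
```

The frequencies collected here feed directly into the pruning of step 140.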
At step 140, this list of phrases is “pruned” or filtered using thresholds. For example, a frequency threshold may be set such that the frequency of occurrence of each phrase must be greater than the threshold to consider it a head phrase or a tail phrase. Also, phrases that often appear at the beginning or end of a sentence and also in the middle of a sentence are removed based on the ratio of occurrences of the phrase at the beginning or end of a sentence to the total number of occurrences. If the ratio is close to one, the phrase is considered a head or tail phrase or “n-gram”. As people often do not finish their sentences, the list of tail phrases may be pruned more significantly. Finally, to combat the noise introduced by ASR, the list of head and tail phrases may be pruned based on impermissible head words and tail words, e.g., conjunctions, prepositions, etc.
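The pruning of step 140 combines the three filters just described: a frequency threshold, a boundary-to-total occurrence ratio, and a stop list of impermissible words. A minimal sketch, with illustrative threshold values and function names (the disclosure does not fix specific numbers):

```python
def prune_phrases(boundary_counts, total_counts, min_freq, min_ratio, impermissible):
    """Filter candidate head/tail phrases.

    boundary_counts: phrase -> occurrences at a sentence boundary.
    total_counts:    phrase -> occurrences anywhere in the data.
    A phrase survives if it is frequent enough, occurs mostly at
    boundaries (ratio close to one), and contains no impermissible
    word such as a conjunction or preposition.
    """
    kept = {}
    for phrase, count in boundary_counts.items():
        total = total_counts.get(phrase, count)
        ratio = count / total
        if count > min_freq and ratio >= min_ratio and not (set(phrase) & impermissible):
            kept[phrase] = count
    return kept
```

As noted above, the tail list may be pruned with a stricter ratio than the head list, since speakers often leave sentences unfinished.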
After the head and tail phrases are identified and “filtered” as described above, sentence boundaries are inserted into the conversational data based on the remaining n-grams in the test set at step 150. The head and tail phrases are used for annotating the transcript, i.e., the conversational data. A boundary is marked before every head phrase and after every tail phrase. Next, “turns” indicating speaker changes in the transcript are identified at step 160. Each “turn” may be treated as an end of a sentence unless the turn ends with an impermissible tail word. Interruption and continuation across turns may be handled separately. Thus, words indicating incomplete turns, e.g., “get”, “and”, etc., may not be treated as ending sentences. Boundaries are inserted after turns in the conversational data at step 170.
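The phrase-based annotation of step 150 can be sketched as a single pass over the transcript. This is a simplified illustration: the `<s>` marker and function name are assumptions, and the turn handling of steps 160–170 (impermissible tail words, incomplete turns) is omitted for brevity.

```python
def insert_boundaries(tokens, heads, tails, k):
    """Mark '<s>' before every head phrase and after every tail phrase.

    tokens: the conversational transcript as a flat token list.
    heads/tails: sets of k-word phrases that survived pruning.
    """
    out = []
    for i, token in enumerate(tokens):
        # Boundary before a head phrase (avoid duplicate markers).
        if tuple(tokens[i:i + k]) in heads and out and out[-1] != "<s>":
            out.append("<s>")
        out.append(token)
        # Boundary after a tail phrase ending at this token.
        if i - k + 1 >= 0 and tuple(tokens[i - k + 1:i + 1]) in tails:
            out.append("<s>")
    return out
```

In a second pass, markers that violate the ratio or stop-list criteria can be removed again, as described below.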
If any false boundaries have been inserted, i.e., boundaries that should not have been inserted based on the ratio of occurrences of head/tail phrases to the total number of occurrences and the impermissible head/tail words, they may be removed during a second pass through the transcript.
According to exemplary embodiments, a technique for identifying boundaries in noisy speech is provided without requiring supervision, so there is no need to create heuristic rules. The technique is data-driven, i.e., the most suitable rules for each data set are automatically generated. Identifying sentence boundaries in transcriptions of noisy conversational data may be done automatically. Also, no manually labeled training data is required. Training data may be automatically annotated (marked with boundaries) based on silence/pause information.
Exemplary embodiments described above may be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. Embodiments of the invention may also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
5806021 | Chen et al. | Sep 1998 | A |
5918240 | Kupiec et al. | Jun 1999 | A |
6473754 | Matsubayashi et al. | Oct 2002 | B1 |
6549614 | Zebryk et al. | Apr 2003 | B1 |
6879951 | Kuo | Apr 2005 | B1 |
6928407 | Ponceleon et al. | Aug 2005 | B2 |
20010009009 | Iizuka | Jul 2001 | A1 |
20020032564 | Ehsani et al. | Mar 2002 | A1 |
20020178002 | Boguraev et al. | Nov 2002 | A1 |
20030212541 | Kinder | Nov 2003 | A1 |
20050086048 | Nakayama | Apr 2005 | A1 |
20070071206 | Gainsboro et al. | Mar 2007 | A1 |
Entry |
---|
The Second Workshop on Building Educational Applications Using NLP, Proceedings of the Workshop, Jun. 29, 2005, copyright 2005 The Association for Computational Linguistics, pp. 1-93. |
Language Identification and Language Specific Letter-to-Sound Rules, Lewis et al. |
Comparing and Combining Generative and Posterior Probability Models: Some Advances in Sentence Boundary Detection in Speech, Liu et al. |
Using Maximum Entropy for Text Classification, Nigam et al. |
Adding Sentence Boundaries to Conversational Speech Transcriptions using Noisily Labelled Examples, Nasukawa et al., AND 2007, pp. 71-78. |
Number | Date | Country | |
---|---|---|---|
20090063150 A1 | Mar 2009 | US |