1. Field of the Invention
The present invention relates generally to speech recognition methods and systems. More specifically, the present invention relates to a method, device, and computer program product for multi-lingual speech recognition.
2. Description of the Related Art
This section is intended to provide a background or context. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the claims in this application and is not admitted to be prior art by inclusion in this section.
Automatic Speech Recognition (ASR) technologies have been adopted in mobile phones and other hand-held communication devices. A speaker-trained name dialer is probably one of the most widely distributed ASR applications. In a speaker-trained name dialer, the user must train the models used for recognition; such an application is known as speaker-dependent name dialing (SDND). Applications that rely on more advanced technology do not require the user to train any models for recognition. Instead, the recognition models are generated automatically from the orthography of the multi-lingual words. Pronunciation modeling based on the orthography of the multi-lingual words, also called text-to-phoneme mapping (TTP), is used, for example, in the Multilingual Speaker-Independent Name Dialing (ML-SIND) system, as disclosed in Viikki et al. ("Speaker- and Language-Independent Speech Recognition in Mobile Communication Systems", in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, Utah, USA, 2001). Due to globalization and the international nature of the markets and of future applications in mobile phones, the demand for multilingual speech recognition systems is growing rapidly.
Automatic language identification (LID) is an integral part of multi-lingual systems that use dynamic vocabularies. The LID module detects the language of each vocabulary item. Once the language has been determined, the language-dependent pronunciation model is applied to obtain the phoneme sequence associated with the written form of the vocabulary item. Finally, the recognition model for each vocabulary item is constructed by concatenating the multilingual acoustic models. Using these basic modules, the recognizer can, in principle, cope with multilingual vocabulary items automatically, without any assistance from the user. Automatic speech recognition and text-to-speech systems can usually find the pronunciation of a given text in one particular language. However, conventional systems are generally unable to find pronunciations of the text in the other languages supported by the system; such other languages may be considered mismatched languages. Mismatched languages commonly arise for several reasons, e.g., LID errors, non-native vocabulary items, and N-best or multiple-pronunciation schemes. Finding the pronunciations of a given text in mismatched languages is not trivial because different languages have different alphabet sets and different pronunciation rules. For example, the English pronunciation of a Russian word written in Cyrillic cannot be found directly because English and Russian use different alphabet sets.
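As a minimal sketch of this pipeline, assuming hypothetical module interfaces (the names lid, ttp_models, and acoustic_models are illustrative, not part of any actual system):

# Hypothetical construction of the recognition model for one
# vocabulary item: LID, then language-dependent TTP, then
# concatenation of the multilingual acoustic models.
def build_recognition_model(word, lid, ttp_models, acoustic_models):
    language = lid.identify(word)                      # automatic LID
    phonemes = ttp_models[language].to_phonemes(word)  # TTP mapping
    return [acoustic_models[p] for p in phonemes]      # concatenated models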
There is a need to handle multi-lingual textual input in multi-lingual automatic speech recognition systems. Further, there is a need to process multi-lingual automatic speech recognition such that TTP can be applied to find the pronunciation for textual input in any supported language.
In general, exemplary embodiments include a symbol conversion table that maps the alphabet set of any language to the alphabet set of any other language. As such, a text-to-phoneme module of a multi-lingual speech recognition system can provide pronunciations for the words in the vocabulary in any supported language. Techniques to carry out the symbol mapping can include language-dependent and language-independent parts.
One exemplary embodiment relates to a method of multi-lingual speech recognition. This method can include determining whether characters in a word are in a source list of a language-specific alphabet mapping table for a language, converting each character not in the source list according to a general alphabet mapping table and, where such conversion is performed, converting each converted character according to the language-specific alphabet mapping table for the language. The method further includes verifying that each character in the word is in a character set of the language, removing characters not in the character set of the language, and identifying a pronunciation of the word.
Another exemplary embodiment relates to a device for multi-lingual speech recognition. The device can include a LID module that assigns language identifiers to words, a TTP module that applies language-specific TTP models to generate a multi-lingual phoneme sequence associated with words identified by the LID module, a multi-lingual acoustic modeling module that constructs a recognition model for each vocabulary entry according to phonetic transcription, and a processor. The processor executes programmed instructions to determine whether characters in a word are in a source list of a language-specific alphabet mapping table for a language, convert each character not in the source list according to a general alphabet mapping table, convert each converted character according to the language-specific alphabet mapping table, and remove characters not in the character set of the language.
Another exemplary embodiment relates to a computer program product including computer code that determines whether characters in a word are in a source list of a language-specific alphabet mapping table for a language, computer code that searches a general alphabet mapping table for each character in the word that is not in the source list, computer code that converts each character not in the source list according to the general alphabet mapping table, computer code that converts each converted character according to the language-specific alphabet mapping table, and computer code that removes characters not in a character set of the language.
Another exemplary embodiment relates to a device for multi-lingual speech recognition. The device can include means for assigning language identifiers to words, means for applying language-specific text-to-phoneme (TTP) models to generate a multi-lingual phoneme sequence associated with words identified by the assigning means, means for constructing a recognition model for each vocabulary entry according to phonetic transcription, and means for executing programmed instructions to determine whether characters in a word are in a source list of a language-specific alphabet mapping table for a language, convert each character not in the source list according to a general alphabet mapping table, convert each converted character according to the language-specific alphabet mapping table, and remove characters not in the character set of the language.
The speaker-independent multi-lingual speech recognition (ML-ASR) system 10 operates on a vocabulary of words given in textual form. The words in the vocabulary may originate from multiple languages, and the language identifiers are not given by the user. Instead, the LID module 12 of the ML-ASR system 10 provides the language IDs for the words. The LID module 12 can make errors, so the language IDs are not always correct. By way of example, the LID module 12 was trained on four languages: English, French, Portuguese, and Spanish.
Table 1 below presents language identification rates for these languages.
As seen from Table 1, LID rates can be low. Due to such low LID rates, an N-best set of LID languages is utilized. If the N-best list is of suitable size, the correct language is likely to be among the language IDs in the list, and the correct pronunciation can then be found for the word. In addition, some texts, for example many names, may belong to several languages, so multiple languages are used. Many loanwords also require proper handling of mismatched languages. Moreover, it is common for non-native speakers to pronounce multi-lingual texts according to their mother tongue, which may be other than the matched language.
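For illustration only, the N-best output of the LID module for a single vocabulary word might be represented as a ranked list of language IDs and scores; the languages and scores in this Python sketch are hypothetical:

# Hypothetical N-best LID output for one vocabulary word.
n_best_languages = [
    ("english", 0.46),
    ("french", 0.31),
    ("spanish", 0.14),
    ("portuguese", 0.09),
]

# With N = 2, pronunciations are generated for both English and
# French, so a French name misidentified as English in the first
# position can still receive its correct pronunciation.
n = 2
candidate_languages = [lang for lang, _ in n_best_languages[:n]]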
Different languages have different alphabet sets. The multi-lingual speech recognition system 10 includes a mapping from the character set of any language to the character set of any other language. With this kind of mapping, the TTP module 14 can provide the pronunciations for the N-best list of languages of each word in the vocabulary.
In an operation 22, an N-best list of languages is provided for each word in a vocabulary. In an operation 24, an alphabet mapping table is provided for each language supported by the ML-ASR system. In addition to the language-specific alphabet mapping tables, there is a general alphabet mapping table composed of the characters in all the language-specific alphabet sets. In the creation of the mapping tables, a standard alphabet set is defined. By way of example, the plain Latin characters [a-z] are used. The i-th language-specific alphabet set and the standard alphabet set are denoted LS_i and SS, respectively. As such,
LS_i = {c_{i,1}, c_{i,2}, . . . , c_{i,n_i}};  i = 1, 2, . . . , N
SS = {s_1, s_2, . . . , s_M};
where c_{i,k} and s_k are the k-th characters in the i-th language-specific and the standard alphabet sets, and n_i and M are the sizes of the i-th language-specific and the standard alphabet sets, respectively.
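As a concrete sketch, the standard alphabet set and two language-specific sets might be held as follows; the character contents are illustrative, not the complete sets a real system would use:

# Standard alphabet set SS: the plain Latin characters [a-z].
standard_set = set("abcdefghijklmnopqrstuvwxyz")

# Illustrative (incomplete) language-specific alphabet sets LS_i.
language_sets = {
    "french": standard_set | set("àâçéèêëîïôùûü"),
    "ukrainian": set("абвгдежзиклмнопрстуфхцчшщьюя"),
}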
Two kinds of alphabet mapping tables are created. One is the general alphabet mapping table, defined for all the characters in all the language-specific alphabet sets. The other is the language-specific alphabet mapping table provided for each language supported by the ML-ASR system. In the mapping tables, the character mapping pairs are defined from the source character (L_source) to the target character (L_target) as follows:
In the general mapping table, a source character belongs to the union of all the characters in all the language-specific alphabet sets LS_i, and a target character belongs to the standard alphabet set SS. The corresponding mapping is denoted General(.).
In the i-th language-specific mapping table, a source character belongs to the union of the language-specific alphabet set LS_i and the standard alphabet set SS, and a target character belongs to the language-specific alphabet set LS_i only. The corresponding mapping is denoted Language_i(.).
Language_i.L_source ∈ LS_i ∪ SS
Language_i.L_target ∈ LS_i
All the language-specific characters of language i are mapped to themselves, i.e., if L_source ∈ LS_i, then
Language_i.L_target = Language_i(L_source) = Language_i.L_source
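A minimal sketch of the two kinds of tables, assuming Python dictionaries keyed by the source character (the individual mapping pairs are illustrative):

# General mapping table: any language-specific character is mapped
# to a character of the standard set SS. Entries are illustrative.
general_map = {
    "é": "e", "ü": "u",   # accented Latin characters -> SS
    "б": "b", "д": "d",   # Cyrillic characters -> SS
}

# Language-specific table for language i: maps LS_i and SS into
# LS_i; every character of LS_i is mapped to itself.
def build_language_map(language_set, ss_to_lsi):
    lang_map = dict(ss_to_lsi)                      # SS -> LS_i pairs
    lang_map.update({c: c for c in language_set})   # identity on LS_i
    return lang_map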
Consider any letter of a given word. For the matching language k, every letter belongs to LS_k and is therefore mapped to itself, so the word remains unchanged.
For a mismatched language i, if letter ∈ LS_i ∪ SS, then the language-dependent mapping is applied:
Language_i.L_target = Language_i(letter)
For a mismatched language j, if letter ∉ LS_j ∪ SS but the letter belongs to the union of all the language-specific alphabet sets (i.e., it appears in the source list of the general mapping table), then the general mapping is applied:
General.L_target = General(letter)
After this mapping, if General(letter) ∈ LS_j ∪ SS, then a second, language-dependent mapping is carried out to map the character back to the alphabet set LS_j of the j-th language:
Language_j.L_target = Language_j(General.L_target) = Language_j[General(letter)]
The mapping back to the alphabet set is achieved by the proper definition of the mapping Language_j(.). Accordingly, by applying the mappings above, the characters of the word after the mapping belong to the language-specific alphabet whether the language matches the word or not.
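Continuing the sketch above, a hypothetical two-step conversion of a single character for a mismatched Latin-alphabet language j might proceed as follows:

# Language-specific table for a Latin-alphabet language j whose
# alphabet equals the standard set, so SS characters map to themselves.
latin = set("abcdefghijklmnopqrstuvwxyz")
lang_j_map = build_language_map(latin, {s: s for s in latin})

letter = "д"                    # in neither LS_j nor SS
step1 = general_map[letter]     # general mapping:  'д' -> 'd' (in SS)
step2 = lang_j_map[step1]       # language mapping: 'd' -> 'd' (in LS_j)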
In an operation 26, the characters of a word are processed one character at a time. For each character, the language-specific alphabet mapping table is checked. A decision is made in an operation 28 as to whether the character is in the L_source list of the language-specific alphabet mapping table. If it is in the L_source list, it is converted according to the language-specific alphabet mapping table; a character already in the alphabet set of the language maps to itself and thus remains unchanged. If the character is not in the L_source list, the general alphabet mapping table is checked in an operation 30, and the character is converted according to the general alphabet mapping table in an operation 32; the converted character is then further converted according to the language-specific alphabet mapping table, as described above.
After the application of the language-specific and the general alphabet mapping tables, it is verified in an operation 34 whether the character is included in the character set of the language. If the character is in the alphabet, it is kept in an operation 36. Otherwise, it is removed in an operation 38. A determination is made in an operation 40 as to whether the current character is the last character of the word. If it is not, the next character is examined. If it is the last character, a pronunciation for the word is found using the TTP module.
In addition to languages with the Latin character set, languages with the Cyrillic character set need to be supported as well. Therefore, the Cyrillic characters are included in the language-dependent character tables. In addition, due to the N-best approach, it is possible that a Latin name is identified as a Cyrillic word. Therefore, the standard set of characters needs to be mapped to the alphabet of the Cyrillic language. For these reasons, the language-dependent character set of Ukrainian, for example, is illustrated in the accompanying table.
Given these mapping tables, the pronunciation of a given text can be found in any of the languages supported by the ML-ASR system. Assume that a given text needs to be converted into the language with ID i. Then, all the letters of the text are mapped to the alphabet set of that language according to the following exemplary logic.
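A minimal sketch of this per-letter logic, assuming the mapping tables are held as Python dictionaries as in the sketches above (the function and variable names are illustrative):

# Map every letter of 'word' into the alphabet set of the target
# language, following operations 26-40 described above.
def convert_word(word, lang_map, general_map, lang_charset):
    result = []
    for letter in word:
        if letter in lang_map:            # in the L_source list (operation 28)
            letter = lang_map[letter]     # language-specific mapping
        elif letter in general_map:       # general table checked (operation 30)
            letter = general_map[letter]  # converted toward SS (operation 32)
            # second, language-dependent mapping into LS_i
            letter = lang_map.get(letter, letter)
        if letter in lang_charset:        # membership verified (operation 34)
            result.append(letter)         # kept (operation 36)
        # otherwise the character is removed (operation 38)
    return "".join(result)                # TTP then finds the pronunciation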
The logic is applied to all the letters in the text, and the result is the text converted to the alphabet set of language i.
While several embodiments of the invention have been described, it is to be understood that modifications and changes will occur to those skilled in the art to which the invention pertains. Accordingly, the claims appended hereto are intended to define the scope of the invention.