The invention relates to the area of voice interfaces, and specifically to speech synthesis of a word in a given language. Voice interfaces are used e.g. in communication devices, and in particular in mobile communication devices and personal digital assistants (PDAs).
A current trend in Automated Speech Recognition (ASR) is toward speaker-independent systems capable of handling several different languages. This typically requires extensive research work for each supported language. At the same time, it is often desirable to also include a speech synthesis, or Text-To-Speech (TTS), system, e.g. for generating voice dialing feedback to the user without requiring any user training. A TTS system comprises a TTS engine, developed for a specific language and adapted to generate audio output based on a given list of pronunciation phonemes belonging to this language.
Language support of a TTS system (i.e. a new TTS engine) is more difficult to develop than language support for speech recognition, as more phonetics knowledge and speech resources are required. Furthermore, evaluation of a TTS engine is more demanding and more subjective in its nature. Consequently, prior art systems typically support more languages for speech recognition than for TTS.
An object of the present invention is to reduce the above mentioned problem, and to provide a cost efficient way to increase the number of languages supported by a TTS system.
Generally, this and other objects are achieved by a method for speech synthesis, a computer program product for performing the method, a speech synthesizer, and a communication device including such a speech synthesizer according to that which is disclosed below.
A first aspect of the invention relates to a method for speech synthesis of a word in a first language, comprising dividing the word into a first sequence of pronunciation phonemes in the first language, mapping the first phoneme sequence to a second sequence of pronunciation phonemes in at least one second language, and generating an audio output of the phonemes in the second phoneme sequence using prosody or intonation models for the at least one second language.
According to this method, an audio output of a word in a first language can be generated by a speech synthesizing engine not having actual support for this language. Instead, the pronunciation phonemes of the word are mapped onto phonemes of at least one second language, for which the speech synthesizing engine does have support.
That a speech synthesizing engine “has support” for a specific language means that it contains digital models for intonation (pitch, gain and duration) of a given phoneme occurring in said language. These models are here referred to as “prosody models”.
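By way of illustration only, the prosody models referred to above may be thought of as per-phoneme records of pitch, gain and duration. The following Python sketch is invented for this description; the field names and numeric values are hypothetical and do not appear in the text.

```python
from dataclasses import dataclass

# Hypothetical sketch of a prosody model entry. The field names
# (pitch_hz, gain_db, duration_ms) are illustrative only.
@dataclass
class ProsodyModel:
    phoneme: str        # SAMPA symbol
    pitch_hz: float     # pitch target for the phoneme
    gain_db: float      # relative loudness
    duration_ms: float  # nominal segment duration

# A TTS engine "has support" for a language when such entries exist
# for every phoneme occurring in that language.
us_english_prosody = {
    "E": ProsodyModel("E", 180.0, 0.0, 90.0),
    "@": ProsodyModel("@", 160.0, -3.0, 60.0),
}
```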
Conventional speech synthesizer systems thus support only those languages for which a dedicated speech synthesizing engine has been developed. According to the invention, this limitation is overcome, and the number of supported languages will be greater than the number of existing speech synthesizing engines. Typically, a speech synthesizing system according to the invention will support all languages that are supported by the speech recognition system in the same device.
The process of mapping the phonemes of one language to the phonemes of at least one second language is referred to as language morphing.
The at least one second language is advantageously selected based on the first language. In other words, the phonemes of the first language (source language) may be more suitable for mapping onto the phonemes of one particular language (target language) than another. If so, this fact should be used to select the most suitable target language for which a speech synthesizing engine exists.
The second set of phonemes may belong to a plurality of different languages, if this can improve the language morphing. It is possible that one language successfully maps a subset of the phonemes of the first language, while a different language successfully maps a different subset of the phonemes. In such a case, the speech synthesizing engines of both languages may be used to provide the best result.
The mapping is preferably performed so as to optimize the sound correspondence between the first and second set of phonemes. This will ensure that the audio output is satisfactory. In practice, the mapping may be performed by using a look-up table, based on information about such sound correspondence.
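The look-up table approach mentioned above may be sketched as follows. This is a minimal Python illustration, not part of the invention as claimed; the German-to-US-English entries are examples taken from the worked example later in this description, not a complete mapping.

```python
# Illustrative look-up table: for each source phoneme, the target phoneme
# with the closest sound correspondence. Incomplete by design.
GERMAN_TO_US = {
    "R": "r",   # German R mapped to US English r
    "a": "A",
    "9": "@",   # mapped to schwa
    "6": "@",   # mapped to schwa
}

def map_phonemes(sequence, table):
    """Map each source phoneme via the table; phonemes that are
    identical in both languages pass through unchanged."""
    return [table.get(p, p) for p in sequence]
```

For instance, `map_phonemes(["b", "E", "R", "n"], GERMAN_TO_US)` yields `["b", "E", "r", "n"]`.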
The method can also comprise processing the audio output in order to smooth transitions between different phonemes. Such smoothing may be advantageous e.g. when the mapping has resulted in a sequence of phonemes not normally occurring in the second language, or when phonemes from different languages have been combined. The smoothing process will then improve the final result.
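One conceivable form of such audio-level smoothing is a cross-fade between adjacent waveform segments. The sketch below is an invented illustration of that idea in Python, operating on plain sample lists; the text does not prescribe any particular smoothing technique.

```python
def crossfade(a, b, overlap):
    """Sketch of one possible smoothing: linearly cross-fade the last
    `overlap` samples of segment a into the first `overlap` samples of
    segment b, so the joined audio has no abrupt transition."""
    mixed = [
        a[len(a) - overlap + i] * (1 - (i + 1) / (overlap + 1))
        + b[i] * ((i + 1) / (overlap + 1))
        for i in range(overlap)
    ]
    return a[:-overlap] + mixed + b[overlap:]
```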
A second aspect of the invention relates to a speech synthesizer, comprising a text-to-phoneme module for dividing said word into a first sequence of pronunciation phonemes in said first language, processing means for mapping said first phoneme sequence to a second sequence of pronunciation phonemes in at least one second language, and a text-to-speech engine for generating an audio output of the phonemes in the second phoneme sequence using prosody models for the at least one second language. Such a speech synthesizer can be implemented in a communication device such as a mobile phone or a PDA.
These and other aspects of the present invention will now be described in more detail, with reference to the appended drawings showing a currently preferred embodiment of the invention.
The speech synthesizer 6 in
The TTP module 11, the mapping module 13 and the TTS engine 15 can be embodied as computer software code portions stored in the memory 3, adapted to be loaded into and executed by the processor 2, while the databases 12, 14 and 16 can be embodied as memory areas in the memory 3, accessible from the processor 2.
The TTP module 11 can be a conventional TTP module as used in a speech recognition system. In fact, this module 11 and its database 12 can be shared by the speech recognition system 2 in the communication device 1. The TTP module 11 is capable of dividing a word in a given language into phonemes, which then can be compared to different parts of a word pronounced by the user. This is required for all languages that are to be supported by the recognition system 2, and the database 12 thus includes pronunciation models for all such languages.
The TTS engine 15 is also known per se, and is capable of generating an audio output (typically a WAV-file), based on a sequence of phonemes in a given language and prosody models (pitch, gain and duration) of these phonemes. The database 16 includes prosody models for all phonemes of the languages supported by the TTS engine 15.
It should be noted that presently the number of languages supported by conventional TTS engines is considerably smaller than the number of languages supported by conventional TTP modules. Developing a prosody model involves a significant amount of work, and research in this area is therefore slow.
The mapping module 13 is arranged to map a set of phonemes in one language to a set of phonemes in at least one different language. The database 14 can for this purpose comprise a look-up table 17, indicating which phoneme in one language most closely corresponds in pronunciation to a given phoneme in a different language.
In the following, and with reference to
First, in step S1, the TTP module 11 is provided with a word 20 to be pronounced and its language A. Typically, this word is the response of the voice recognition system to a spoken input from the user.
Then, in step S2, the TTP module 11 divides the word 20 into a sequence 21 of phonemes, by applying a pronunciation model corresponding to the language of the word 20.
Next, in step S3, the mapping module 13 selects a target language B, which is supported by the TTS engine 15. Preferably, each language supported by the TTP module is simply associated with a suitable language that is supported by the TTS engine 15, and this information can be stored in a look-up table in the database 14. It is possible that some languages are associated with a plurality of target languages, if this is considered to improve performance.
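The target-language association of step S3 can itself be a simple look-up, as the following Python sketch illustrates. The language codes and pairings are invented for illustration and are not part of the description.

```python
# Hypothetical association of each TTP-supported source language with one
# or more TTS-supported target languages, as could be held in database 14.
TARGET_LANGUAGES = {
    "de": ["en-US"],        # German morphed to US English (example pairing)
    "sv": ["en-US", "fi"],  # a source language may use several targets
}

def select_targets(source_language):
    """Return the target language(s) for a source language; a language
    the TTS engine supports directly maps to itself."""
    return TARGET_LANGUAGES.get(source_language, [source_language])
```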
In step S4, the mapping module 13 maps the phoneme sequence 21 onto a second sequence 22 of phonemes in language B. In the case of several target languages, the phoneme sequence 22 can contain phonemes from different languages. The mapping is performed so that the best sound correspondence between the source language and target language can be maintained.
In the case of identical phonemes in the source and target languages, the conversion is trivial. Other phonemes with clear similarities can simply be mapped according to a predefined look-up table 17 in the database 14. Some situations, for example when a combination of phonemes in the source language A is best represented by two or more phonemes in the target language B, are more difficult to represent in a look-up table. In such cases, or if preferred for other reasons, other methods such as neural networks, decision trees or more complex rules can be used. For some diphthong sounds in the source/target language, rules covering several phonemes can be applied (not necessary in the present example).
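A rule-based variant capable of the many-to-one and one-to-many replacements just mentioned can be sketched as follows. This Python illustration uses invented rules and a simple longest-match scan; it is one possible realization among the alternatives (neural networks, decision trees) named above.

```python
# Invented rules for illustration: each rule maps a tuple of source
# phonemes to a tuple of target phonemes, so one rule may merge several
# source phonemes into one, or split one into several.
RULES = [
    (("a", "I"), ("aI",)),  # two source phonemes merged into one diphthong
    (("9",), ("@",)),
    (("R",), ("r",)),
]

def map_with_rules(sequence):
    """Scan the sequence, applying the first matching rule at each
    position; unmatched phonemes are kept unchanged."""
    out, i = [], 0
    while i < len(sequence):
        for src, dst in RULES:
            if tuple(sequence[i:i + len(src)]) == src:
                out.extend(dst)
                i += len(src)
                break
        else:  # no rule matched: pass the phoneme through
            out.append(sequence[i])
            i += 1
    return out
```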
The prosody models used can be slightly adapted versions of the prosody models used in conventional speech engines, in order to improve the result of the language morphing.
It should be noted that if the TTS engine 15 supports the language A, steps S3 and S4 need not be performed, and sequence 22 will be identical to sequence 21.
Some combinations of phonemes resulting from the mapping step S4 do not normally occur in the language B, and may require special processing in order to improve transitions between consecutive phonemes. Any such post processing of the phoneme sequence 22 is performed in step S5.
In step S6, finally, an audio output 23 is generated by TTS engine 15 based on the (post processed) phoneme sequence 22. The audio output is in a form suitable for driving the speaker 4, e.g. in WAV format.
An example of speech synthesizing according to the above embodiment of the invention will now be described.
The word 20 received by the TTP module 11 in step S1 is here “Bernhard Völger”, and language A is German. The sequence 21 of phonemes forming the German pronunciation of the word 20 is in step S2 found to be “b-E-R-n-h-a-R-t-v-9-l-g-6”, here shown in SAMPA (Speech Assessment Methods Phonetic Alphabet) notation, incorporated herewith as an appendix.
In step S3, the target language is selected as US English. (Note that this is only an example. In reality, a TTS engine exists that supports German, and it is doubtful if German and US English would be a suitable pair of source and target languages.)
The mapping in step S4 is performed next. The phoneme sequence 22 corresponding to a pronunciation of the word 20 Bernhard Völger in US English phoneme notation is in step S4 found to be “b-E-r-n-h-A-r-t-v-@-l-g-@”, again in SAMPA notation. The following table describes the phoneme conversion for the example word, phoneme-by-phoneme, where changed phonemes are shown in bold font.
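The phoneme-by-phoneme conversion of this example can be reproduced with a table of only the phonemes that change (R, a, 9, 6); all other phonemes map to themselves. The following Python sketch is an illustration of the example, not the claimed mechanism.

```python
# Only the phonemes that change between the German and US English
# pronunciations of the example word are listed; all others pass through.
CHANGED = {"R": "r", "a": "A", "9": "@", "6": "@"}

german = "b-E-R-n-h-a-R-t-v-9-l-g-6".split("-")
us_english = [CHANGED.get(p, p) for p in german]
print("-".join(us_english))  # b-E-r-n-h-A-r-t-v-@-l-g-@
```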
This phoneme sequence is given to the TTS engine 15, provided with a US English prosody model, as if it were a native pronunciation. Hence, the TTS engine in step S6 uses its US English prosody model to produce the waveform output for the utterance.
Further examples of phoneme conversion for other German words are presented in the following tables, where again changed phonemes are shown in bold font.
In the above examples, the mapping is quite simple. For some languages, the mappings can be more complex, leading to phoneme clustering (one phoneme replaced with several) or phoneme deletion (several phonemes replaced with one), depending on the situation. As mentioned, some combinations of phonemes may also require post processing before the phoneme sequence 22 is supplied to the TTS engine 15. In any case, the mapping should be designed so that the audio output achieved with a TTS engine for the target language corresponds as closely as possible to the audio output that would have resulted had a TTS engine existed for the first language.
SAMPA (Speech Assessment Methods Phonetic Alphabet) is a machine-readable phonetic alphabet. It was originally developed under the ESPRIT project 1541, SAM (Speech Assessment Methods) in 1987-89 by an international group of phoneticians, and was applied in the first instance to the European Communities languages Danish, Dutch, English, French, German, and Italian (by 1989); later to Norwegian and Swedish (by 1992); and subsequently to Greek, Portuguese, and Spanish (1993). Under the BABEL project, it has now been extended to Bulgarian, Estonian, Hungarian, Polish, and Romanian (1996). Under the aegis of COCOSDA it is hoped to extend it to cover many other languages (and in principle all languages). On the initiative of the OrienTel project, Arabic, Hebrew, and Turkish have been added. Other recent additions: Cantonese, Croatian, Czech, Russian, Slovenian, Thai. Coming shortly: Japanese, Korean.
Unless and until ISO 10646/Unicode is implemented internationally, SAMPA and the proposed X-SAMPA (Extended SAMPA) constitute the best international collaborative basis for a standard machine-readable encoding of phonetic notation.
SAMPA basically consists of a mapping of symbols of the International Phonetic Alphabet onto ASCII codes in the range 33–127, the 7-bit printable ASCII characters. Associated with the coding (mapping) are guidelines for the transcription of the languages to which SAMPA has been applied. Unlike other proposals for mapping the IPA onto ASCII, SAMPA is not one single author's scheme, but represents the outcome of collaboration and consultation among speech researchers in many different countries. The SAMPA transcription symbols have been developed by or in consultation with native speakers of every language to which they have been applied, but are standardized internationally.
A SAMPA transcription is designed to be uniquely parsable. As with the ordinary IPA, a string of SAMPA symbols does not require spaces between successive symbols.
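The unique-parsability property can be illustrated with a small longest-match tokenizer. The symbol inventory below is a tiny invented subset of SAMPA, and greedy longest-match is one simple parsing strategy assumed here for illustration, not a normative part of SAMPA.

```python
# Small invented subset of SAMPA symbols, including two-character
# symbols such as the affricates tS, dZ and the diphthong aI.
SYMBOLS = {"tS", "dZ", "aI", "aU", "t", "d", "S", "Z", "a", "I", "U", "s"}

def tokenize(s):
    """Split an unspaced SAMPA string into symbols, trying the
    longest candidate symbol first at each position."""
    out, i = [], 0
    while i < len(s):
        for length in (2, 1):
            if s[i:i + length] in SYMBOLS:
                out.append(s[i:i + length])
                i += length
                break
        else:
            raise ValueError(f"unknown symbol at position {i}: {s[i:]!r}")
    return out
```

For example, `tokenize("tSaIs")` returns `["tS", "aI", "s"]` even though the input contains no separating spaces.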
SAMPA has been applied not only by the SAM partners collaborating on EUROM 1, but also in other speech research projects (e.g. BABEL, Onomastica, OrienTel) and by Oxford University Press. It is included among the resources listed by the Linguistic Data Consortium.
In its basic form SAMPA was seen as catering essentially for segmental transcription, particularly of a traditional phonemic or near-phonemic kind. Prosodic notation was not adequately developed. This shortcoming has now been remedied by a proposed parallel system of prosodic notation, SAMPROSA. It is important that prosodic and segmental transcriptions be kept distinct from one another, on separate representational tiers (because certain symbols have different meanings in SAMPROSA from their meaning in SAMPA: e.g. H denotes a labial-palatal semivowel in SAMPA, but High tone in SAMPROSA).
A proposal for an extended version of the segmental alphabet, X-SAMPA, extends the basic agreed conventions so as to make provision for every symbol on the Chart of the International Phonetic Association, including all diacritics. In principle this makes it possible to produce a machine-readable phonetic transcription for every known human language.
The present SAMPA recommendations (as devised for the basic six languages) are set out in the following table. All IPA symbols that coincide with lower-case letters of the Latin alphabet remain the same; all other symbols are recoded within the ASCII range 37–126. In this current WWW document the IPA symbols cannot be shown, but the columns indicate respectively a SAMPA symbol, its ASCII/ANSI number, the shape of the corresponding IPA symbol, the Unicode number (hex, decimal) for the IPA symbol, and the symbol's meaning or use.
The Phonemic Notation of Individual Languages
These pages provide a brief outline of the phonemic distinctions in various languages: Arabic, Bulgarian, Cantonese, Czech, Croatian, Danish, Dutch, English, Estonian, French, German, Greek, Hebrew, Hungarian, Italian, Norwegian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Thai, Turkish.
Extensions
These pages provide extensions of the basic segmental SAMPA: SAMPROSA (prosodic), X-SAMPA (other symbols, mainly segmental).