The present invention relates to letter-to-sound conversion systems. In particular, the present invention relates to generating graphonemes used in letter-to-sound conversion.
In letter-to-sound conversion, a sequence of letters is converted into a sequence of phones that represent the pronunciation of the sequence of letters.
In recent years, an n-gram based system has been used for letter-to-speech conversion. The n-gram system utilizes “graphonemes” which are joint units representing both letters and the phonetic pronunciation of those letters. In each graphoneme, there can be zero or more letters in the letter part of the graphoneme and zero or more phones in the phoneme part of the graphoneme. In general, the graphoneme is denoted as l*:p*, where l* means zero or more letters and p* means zero or more phones. For example, “tion:sh&ax&n” represents a graphoneme unit with four letters (tion) and three phones (sh, ax, n). The delimiter “&” is added between phones because phone names can be longer than one character.
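For illustration only, a graphoneme unit in this notation can be represented and parsed as in the following Python sketch; the class and function names are hypothetical, and "#" is taken (as in the examples below) to denote an empty phone part:

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Graphoneme:
    letters: str                 # zero or more letters, e.g. "tion"
    phones: Tuple[str, ...]      # zero or more phones, e.g. ("sh", "ax", "n")

def parse_graphoneme(text: str) -> Graphoneme:
    """Parse the l*:p* notation, e.g. "tion:sh&ax&n" or "e:#"."""
    letters, _, phone_part = text.partition(":")
    phones = () if phone_part == "#" else tuple(phone_part.split("&"))
    return Graphoneme(letters, phones)

print(parse_graphoneme("tion:sh&ax&n"))
# Graphoneme(letters='tion', phones=('sh', 'ax', 'n'))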
The graphoneme n-gram model is trained on a dictionary that contains the spelling of each word together with its phonetic pronunciation. This dictionary is called the training dictionary. If the letter-to-phone mapping in the training dictionary is given, the training dictionary can be converted into a dictionary of graphoneme pronunciations. For example, assume that the segmentation of the word "phone" into the graphonemes ph:f o:ow n:n e:# is given somehow, where # denotes an empty phone part. The graphoneme segmentations of the words are then used to estimate the likelihood of sequences of "n" graphonemes. For example, in a graphoneme trigram model, the probability of a graphoneme given the two preceding graphonemes, Pr(g3|g1,g2), is estimated from the training dictionary with graphoneme pronunciations.
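As a sketch of how such estimates can be obtained, the following Python function computes unsmoothed trigram probabilities from a dictionary whose words have already been segmented into graphonemes; the function name, the double "<s>" padding convention, and the absence of smoothing are illustrative simplifications:

from collections import defaultdict

def train_graphoneme_trigram(segmented_dictionary):
    """segmented_dictionary holds one graphoneme sequence per word,
    e.g. [["ph:f", "o:ow", "n:n", "e:#"], ...]; returns Pr(g3|g1,g2)."""
    bigram_counts = defaultdict(int)
    trigram_counts = defaultdict(int)
    for seq in segmented_dictionary:
        padded = ["<s>", "<s>"] + seq + ["</s>"]
        for g1, g2, g3 in zip(padded, padded[1:], padded[2:]):
            bigram_counts[(g1, g2)] += 1
            trigram_counts[(g1, g2, g3)] += 1
    # Pr(g3|g1,g2) = count(g1 g2 g3) / count(g1 g2); real systems smooth this.
    return {(g1, g2, g3): c / bigram_counts[(g1, g2)]
            for (g1, g2, g3), c in trigram_counts.items()}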
Under many prior art systems that use graphonemes, when a new word is provided to the letter-to-sound conversion system, a best-first search algorithm is used to find the best or n-best pronunciations based on the n-gram scores. To perform this search, one begins with a root node that contains the beginning symbol of the graphoneme n-gram model, typically denoted by <s>. <s> indicates the beginning of a sequence of graphonemes. The score (log probability) associated with the root node is log(Pr(<s>))=log(1)=0. In addition, each node in the search tree keeps track of the letter location in the input word, referred to as the "input position". The input position of <s> is 0 since no letter of the input word has been consumed yet. To sum up, a node in the search tree contains the following information for the best-first search: a graphoneme identity, a cumulative score, an input position, and a pointer to its parent node.
Meanwhile, a heap structure is maintained in which the highest-scoring search node is found at the top of the heap. Initially there is only one element in the heap, which points to the root node of the search tree. At each iteration of the search, the top element of the heap is removed, giving the best node so far in the search tree. Child nodes are then extended from this best node by looking up, in the graphoneme inventory, those graphonemes whose letter parts are a prefix of the left-over letters of the input word starting from the input position of the best node. Each such graphoneme generates a child node of the current best node. The score of a child node is the score of the parent node (i.e., the current best node) plus the n-gram score of the graphoneme associated with the child node. The input position of the child node is the input position of the parent node advanced by the length of the letter part of the graphoneme associated with the child node. Finally, the child node is inserted into the heap.
Special attention has to be paid when all the input letters are consumed. If the input position of the current best node has reached the end of the input word, a transition to the end symbol of the n-gram model, </s>, is added to the search tree and the heap.
If the best node removed from the heap contains </s> as its graphoneme id, a phonetic pronunciation corresponding to the complete spelling of the input word has been obtained. To identify the pronunciation, the path from the last best node </s> all the way back to the root node <s> is traced and the phoneme parts of the graphoneme units along that path are output.
The first best node with </s> gives the best pronunciation according to the graphoneme n-gram model, since the remaining search nodes already have worse scores and any future path to </s> from one of them can only make its score worse (because log(probability)≤0). If elements continue to be removed from the heap, the 2nd best, 3rd best, etc. pronunciations can be identified until either there are no more elements in the heap or the n-th best pronunciation is worse than the best pronunciation by more than a threshold. The n-best search then stops.
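The best-first search described above can be sketched in Python as follows. The heap holds (negated score, tie-breaker, node) entries so that the highest-scoring node is always on top; inventory is assumed to map a letter string to the graphoneme units that spell it, log_prob(prev, unit) is assumed to return an n-gram (here bigram, for simplicity) log probability, and the threshold-based stopping rule is omitted for brevity:

import heapq
import itertools

class Node:
    __slots__ = ("unit", "score", "pos", "parent")
    def __init__(self, unit, score, pos, parent):
        self.unit = unit      # graphoneme id, e.g. "ph:f" (or "<s>"/"</s>")
        self.score = score    # cumulative log probability
        self.pos = pos        # input position: number of letters consumed
        self.parent = parent  # back-pointer used to trace the pronunciation

def n_best(word, inventory, log_prob, n=5, max_letters=4):
    counter = itertools.count()               # tie-breaker for equal scores
    root = Node("<s>", 0.0, 0, None)
    heap = [(0.0, next(counter), root)]
    pronunciations = []
    while heap and len(pronunciations) < n:
        _, _, best = heapq.heappop(heap)
        if best.unit == "</s>":               # a complete pronunciation
            pronunciations.append(trace(best))
            continue
        if best.pos == len(word):             # all letters consumed
            child = Node("</s>", best.score + log_prob(best.unit, "</s>"),
                         best.pos, best)
            heapq.heappush(heap, (-child.score, next(counter), child))
            continue
        # Extend with every graphoneme whose letter part is a prefix of
        # the left-over letters starting at this node's input position.
        for k in range(1, min(max_letters, len(word) - best.pos) + 1):
            for unit in inventory.get(word[best.pos:best.pos + k], ()):
                child = Node(unit, best.score + log_prob(best.unit, unit),
                             best.pos + k, best)
                heapq.heappush(heap, (-child.score, next(counter), child))
    return pronunciations

def trace(node):
    """Walk from </s> back to <s>, collecting phone parts along the path."""
    phones = []
    while node is not None:
        if node.unit not in ("<s>", "</s>"):
            phone_part = node.unit.split(":", 1)[1]
            if phone_part != "#":
                phones[:0] = phone_part.split("&")
        node = node.parent
    return phones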
There are several ways to train the n-gram graphoneme model, such as maximum likelihood and maximum entropy training. The graphonemes themselves can also be generated in different ways. For example, some prior art uses hidden Markov models to generate initial alignments between the letters and phonemes of the training dictionary, followed by merging of frequent pairs of these l:p graphonemes into larger graphoneme units. Alternatively, a graphoneme inventory can be generated by a linguist who associates certain letter sequences with particular phone sequences. This takes a considerable amount of time, and it is error-prone and somewhat arbitrary because the linguist does not use a rigorous technique when grouping letters and phones into graphonemes.
A method and apparatus are provided for segmenting words and their phonetic pronunciations into sequences of graphonemes. Under the invention, the mutual information of pairs of smaller graphoneme units is determined. Each graphoneme unit includes at least one letter. At each iteration, the pair with the maximum mutual information is combined to form a new, longer graphoneme unit. When the merge algorithm stops, a dictionary of words is obtained in which each word is segmented into a sequence of graphonemes drawn from the final set of graphoneme units.
With the same mutual-information based greedy algorithm but without the letters being considered, phonetic pronunciations can be segmented into syllable pronunciations. Similarly, words can also be broken into morphemes by assigning the “pronunciation” of a word to be the spelling and again ignoring the letter part of a graphoneme unit.
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention is designed to be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules are located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 1, an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110.
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, such data and program modules include an operating system, application programs, other program modules, and program data.
The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, such media include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD-ROM or other optical media.
The drives and their associated computer storage media discussed above and illustrated in FIG. 1 provide storage of computer readable instructions, data structures, program modules and other data for the computer 110.
A user may enter commands and information into the computer 110 through input devices such as a keyboard 162, a microphone 163, and a pointing device 161, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. In addition to the monitor, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
The computer 110 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110. The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, remote application programs may reside on a memory device of the remote computer 180.
Under one embodiment of the present invention, graphonemes that can be used in letter-to-sound conversion are formed using a mutual information criterion.
In step 200 of FIG. 2, an initial alignment is formed between the letters of each word in the training dictionary and the phones of that word's pronunciation, with each letter being assigned zero or more phones.
Thus, each initial graphoneme unit has exactly one letter and zero or more phones. These initial units can be denoted generically as l:p*.
After the initial alignment, the method of FIG. 2 continues at step 202, where the probability of each phone sequence being aligned with each letter is determined as:

p(p*|l) = c(p*|l) / Σs* c(s*|l)    Equation 1
where p(p*|l) is the probability of phone sequence p* being aligned with letter l, c(p*|l) is the count of the number of times that the phone sequence p* is aligned with the letter l in the dictionary, and c(s*|l) is the count of the number of times the phone sequence s* is aligned with the letter l, where the summation in the denominator is taken over all phone sequences s* that are aligned with letter l in the dictionary.
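A minimal sketch of this estimation step, assuming each aligned word is stored as a list of (letter, phone-tuple) pairs:

from collections import defaultdict

def alignment_probabilities(aligned_words):
    counts = defaultdict(lambda: defaultdict(int))   # counts[l][p*] = c(p*|l)
    for word in aligned_words:
        for letter, phones in word:                  # phones: tuple, may be ()
            counts[letter][phones] += 1
    probs = {}
    for letter, by_phones in counts.items():
        total = sum(by_phones.values())              # denominator of Equation 1
        probs[letter] = {seg: c / total for seg, c in by_phones.items()}
    return probs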
After the alignment probabilities have been determined, new alignments are formed at step 204, again assigning one letter per graphoneme with zero or more phones associated with each graphoneme. This new alignment is based on the alignment probabilities determined in step 202. In one particular embodiment, a Viterbi decoding system is used in which a path through a Viterbi trellis, such as the example trellis of FIG. 3, is identified for each word in the dictionary.
The trellis of FIG. 3 is shown for the word "phone". Each state index corresponds to a position in the letter sequence of the word, each state corresponds to a number of phones of the pronunciation consumed so far, and each transition between states corresponds to a possible alignment of a letter with zero or more phones, so that every complete path through the trellis represents one possible segmentation of the word into single-letter graphonemes.
For each state at each state index, a single path into the state is selected by determining the probability of each complete path leading to the state. For example, for state 308, Viterbi decoding selects either path 310 or path 312. The score for path 310 includes the probability of the alignment p:# of path 314 and the probability of the alignment h:f of path 310. Similarly, the score for path 312 includes the probability of the alignment p:f of path 316 and the probability of the alignment h:# of path 312. The path into each state with the highest probability is selected and the other path is pruned from further consideration. Through this decoding process, each word in the dictionary is segmented into a sequence of graphonemes. For example, in FIG. 3 the word "phone" may be segmented into the graphoneme sequence p:f h:# o:ow n:n e:#.
At step 206, the method of the present invention determines if more alignment iterations should be performed. If more alignment iterations are to be performed, the process returns to step 202 to determine the alignment probabilities based on the new alignments formed at step 204. Steps 202, 204 and 206 are repeated until the desired number of iterations has been performed.
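A sketch of one such re-alignment, assuming the probability table returned by the estimation sketch above, an illustrative cap on the number of phones per letter, and no smoothing of unseen letter-to-phone pairs:

import math

def viterbi_align(letters, phones, probs, max_phones=3):
    """Assign zero or more phones to each letter so that the product of
    the p(p*|l) values of Equation 1 is maximized."""
    n, m = len(letters), len(phones)
    NEG = float("-inf")
    best = [[NEG] * (m + 1) for _ in range(n + 1)]   # best[i][j]: log score with
    back = [[None] * (m + 1) for _ in range(n + 1)]  # i letters, j phones used
    best[0][0] = 0.0
    for i in range(n):
        for j in range(m + 1):
            if best[i][j] == NEG:
                continue
            for k in range(0, min(max_phones, m - j) + 1):
                seg = tuple(phones[j:j + k])         # k == 0 gives an l:# unit
                p = probs.get(letters[i], {}).get(seg, 0.0)
                if p <= 0.0:
                    continue
                score = best[i][j] + math.log(p)
                if score > best[i + 1][j + k]:
                    best[i + 1][j + k] = score
                    back[i + 1][j + k] = j
    if best[n][m] == NEG:
        return None                                  # no surviving alignment
    alignment, j = [], m
    for i in range(n, 0, -1):                        # trace the best path back
        prev_j = back[i][j]
        alignment.append((letters[i - 1], tuple(phones[prev_j:j])))
        j = prev_j
    return list(reversed(alignment))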
The iterations of steps 202, 204 and 206 result in a segmentation of each word in the dictionary into a sequence of graphoneme units. Each graphoneme unit contains exactly one letter in the spelling part and zero or more phones in the phone part.
At step 210, a mutual information score is determined for each consecutive pair of graphoneme units found in the dictionary after alignment step 204. Under one embodiment, the mutual information of two consecutive graphoneme units is computed as:

MI(u1,u2) = Pr(u1,u2) log [Pr(u1,u2) / (Pr(u1)Pr(u2))]    Equation 2
where MI(u1,u2) is the mutual information for the pair of graphoneme units u1 and u2, Pr(u1,u2) is the joint probability of graphoneme unit u2 appearing immediately after graphoneme unit u1, Pr(u1) is the unigram probability of graphoneme unit u1, and Pr(u2) is the unigram probability of graphoneme unit u2. The probabilities of Equation 2 are calculated as:

Pr(u1) = count(u1) / count(*)    Equation 3

Pr(u2) = count(u2) / count(*)    Equation 4

Pr(u1,u2) = count(u1u2) / count(*)    Equation 5
where count(u1) is the number of times graphoneme unit u1 appears in the dictionary, count(u2) is the number of times graphoneme unit u2 appears in the dictionary, count(u1u2) is the number of times graphoneme unit u2 follows immediately after graphoneme unit u1 in the dictionary and count(*) is the number of instances of all graphoneme units in the dictionary.
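These counts translate directly into code; the following sketch computes the Equation 2 score for every neighboring pair of units in the segmented dictionary:

import math
from collections import defaultdict

def mutual_information(segmented_dictionary):
    unigram, bigram, total = defaultdict(int), defaultdict(int), 0
    for seq in segmented_dictionary:          # seq: list of graphoneme units
        for u in seq:
            unigram[u] += 1                   # count(u)
            total += 1                        # count(*)
        for u1, u2 in zip(seq, seq[1:]):
            bigram[(u1, u2)] += 1             # count(u1 u2)
    mi = {}
    for (u1, u2), c in bigram.items():
        p12 = c / total                                    # Equation 5
        p1, p2 = unigram[u1] / total, unigram[u2] / total  # Equations 3 and 4
        mi[(u1, u2)] = p12 * math.log(p12 / (p1 * p2))     # Equation 2
    return mi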
Strictly speaking, Equation 2 is not the mutual information between two distributions and therefore is not guaranteed to be non-negative. However, its formula resembles the mutual information formula and thus has been mistakenly named mutual information in the literature. Therefore, within the context of this application, we will continue to call the computation of Equation 2 a mutual information computation.
After the mutual information has been computed for each pair of neighboring graphoneme units in the dictionary at step 210, the strength of each possible new graphoneme unit u3 is determined at step 212. A possible new graphoneme unit results from the merging of two existing smaller graphoneme units. However, two different pairs of graphoneme units can result in the same new graphoneme unit. For example, graphoneme pair (p:f, h:#) and graphoneme pair (p:#, h:f) both form the same larger graphoneme unit (ph:f) when they are merged. Therefore, the strength of a possible new graphoneme unit u3 is defined as the sum of the mutual information of all the different pairs of graphoneme units whose merger results in the same new unit u3:

strength(u3) = Σu1u2=u3 MI(u1,u2)    Equation 6
where strength(u3) is the strength of the possible new unit u3, and u1u2=u3 means that merging u1 and u2 results in u3. The summation of Equation 6 is therefore taken over all pairs of units u1 and u2 that create u3.
At step 214, the new unit with the largest strength is created. The dictionary entries that include the constituent pairs that form the selected new unit are then updated by replacing each such pair of smaller units with the newly formed unit.
At step 218, the method determines whether more, larger graphoneme units should be created. If so, the process returns to step 210 and recalculates the mutual information for pairs of graphoneme units. Note that some old units may no longer be needed in the dictionary (i.e., count(u1)=0) after the previous merge. Steps 210, 212, 214, 216, and 218 are repeated until a large enough set of graphoneme units has been constructed. At that point, the dictionary is segmented into graphoneme pronunciations.
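The merge loop of steps 210 through 218 can then be sketched as follows, reusing mutual_information() from the sketch above; the merge() helper, the left-to-right substitution order, and the target inventory size used as a stopping rule are all illustrative choices:

from collections import defaultdict

def grow_graphoneme_inventory(segmented_dictionary, target_units):
    def merge(u1, u2):
        # Concatenate letter parts and phone parts; "#" marks an empty phone part.
        l1, p1 = u1.split(":", 1)
        l2, p2 = u2.split(":", 1)
        phones = "&".join(p for p in (p1, p2) if p != "#") or "#"
        return l1 + l2 + ":" + phones

    while True:
        mi = mutual_information(segmented_dictionary)   # step 210
        strength = defaultdict(float)
        for (u1, u2), score in mi.items():              # step 212 (Equation 6)
            strength[merge(u1, u2)] += score
        if not strength:
            break                                       # nothing left to merge
        u3 = max(strength, key=strength.get)            # step 214
        for seq in segmented_dictionary:                # update the entries
            i = 0
            while i < len(seq) - 1:
                if merge(seq[i], seq[i + 1]) == u3:
                    seq[i:i + 2] = [u3]
                else:
                    i += 1
        inventory = {u for seq in segmented_dictionary for u in seq}
        if len(inventory) >= target_units:              # step 218
            break
    return {u for seq in segmented_dictionary for u in seq}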
The segmented dictionary is then used to train a graphoneme n-gram at step 222. Methods for constructing an n-gram can include maximum entropy based training as well as maximum likelihood based training, among others. Those skilled in the art of building n-grams understand that any suitable method of building an n-gram language model can be used with the present invention.
By using mutual information to construct the larger graphoneme units, the present invention provides an automatic technique for generating large graphoneme units for the spelling system of any language, and requires no manual work from a linguist to identify the graphoneme units.
Once the graphoneme n-gram model has been produced at step 222 of FIG. 2, it can be used to generate pronunciations for new words, for example using the n-best search described above.
With the same algorithm, one can also segment phonetic pronunciations into syllabic pronunciations by generating a syllable inventory, training a syllable n-gram and then performing a forced alignment on the pronunciation of the word.
In step 400 of FIG. 4, a mutual information score is determined for each pair of consecutive phone units found in the pronunciations of the training dictionary, with the letter side of each unit ignored. At step 402, the pair of units with the highest mutual information is selected and merged into a new, longer unit. At step 404, the dictionary entries that contain the selected pair are updated by replacing the pair with the new unit.
At step 406, the method determines if there are more iterations to perform. If there are more iterations, the process returns to step 400 and a mutual information score is generated for each phone pair in the dictionary. Steps 400, 402, 404 and 406 are repeated until a suitable set of syllable units has been formed.
At step 408, the dictionary, which has now been divided into syllable units, is used to generate a syllable n-gram. The syllable n-gram model provides the probability of sequences of syllables as found in the dictionary. At step 410, the syllable n-gram is used to identify the syllables of a new word given the pronunciation of the new word. In particular, a forced alignment is used wherein the phones of the pronunciation are grouped into the most likely sequence of syllable units based on the syllable n-gram. The result of step 410 is a grouping of the phones of the word into syllable units.
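A sketch of such a forced alignment under an assumed syllable bigram, where syllables maps a tuple of phones to a syllable unit and log_prob(prev, syl) is assumed to return the bigram log probability:

def force_align(phones, syllables, log_prob, max_len=6):
    """Group the phone list into the most likely sequence of syllable units."""
    m = len(phones)
    # states[j]: last syllable -> (score, (previous position, previous syllable))
    states = [dict() for _ in range(m + 1)]
    states[0]["<s>"] = (0.0, None)
    for j in range(1, m + 1):
        for k in range(1, min(max_len, j) + 1):
            syl = syllables.get(tuple(phones[j - k:j]))
            if syl is None:
                continue
            for prev, (score, _) in states[j - k].items():
                s = score + log_prob(prev, syl)
                if syl not in states[j] or s > states[j][syl][0]:
                    states[j][syl] = (s, (j - k, prev))
    if not states[m]:
        return None                       # pronunciation cannot be covered
    last = max(states[m],
               key=lambda syl: states[m][syl][0] + log_prob(syl, "</s>"))
    path, j, syl = [], m, last
    while syl != "<s>":                   # trace the winning path backwards
        path.append(syl)
        j, syl = states[j][syl][1]
    return list(reversed(path))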
This same algorithm may be used to break words into morphemes. Instead of the phones of a word, the individual letters of the word are used as the word's "pronunciation". To use the greedy algorithm described above directly, the individual letters take the place of the phones in the graphonemes and the letter side of each graphoneme is ignored. Thus, at step 400, the mutual information for pairs of letters in the training dictionary is determined, and the pair with the highest mutual information is selected at step 402. A new morpheme unit is then formed for this pair. At step 404, the dictionary entries are updated with the new morpheme unit. When a suitable number of morpheme units has been created, the morpheme units found in the dictionary are used to train an n-gram morpheme model that can later be used to identify the morphemes of a word from the word's spelling with the forced alignment algorithm described above. Using this technique, a word such as "transition" may be divided into the morpheme units "tran si tion".
Although the present invention has been described with reference to particular embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.