Machine translation (MT) concerns the automatic translation of natural language sentences from a first language (e.g., French) into another language (e.g., English). Systems that perform MT are said to "decode" the source language into the target language.
A statistical MT system that translates French sentences into English has three components: a language model (LM) that assigns a probability P(e) to any English string; a translation model (TM) that assigns a probability P(f|e) to any pair of English and French strings; and a decoder. The decoder may take a previously unseen sentence f and try to find the e that maximizes P(e|f), or equivalently maximizes P(e)·P(f|e).
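For illustration only, a minimal sketch of this decision rule, assuming hypothetical `language_model` and `translation_model` scoring functions that return log-probabilities (the function names and the enumeration over candidates are assumptions of this sketch, not part of the disclosure):

```python
import math

def decode(f, candidate_translations, language_model, translation_model):
    """Pick the English string e that maximizes P(e) * P(f | e).

    Scores are combined in log space for numerical stability:
    log P(e) + log P(f | e)."""
    best_e, best_score = None, -math.inf
    for e in candidate_translations:
        score = language_model(e) + translation_model(f, e)
        if score > best_score:
            best_e, best_score = e, score
    return best_e
```

A real decoder searches an exponentially large space of strings and alignments with dynamic programming or greedy/stack-based search rather than enumerating candidates; the loop above only illustrates the objective being maximized.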
A statistical translation memory (TMEM) may be generated by training a translation model with a naturally generated TMEM. A number of tuples may be extracted from each translation pair in the TMEM. The tuples may include a phrase in a source language and a corresponding phrase in a target language. The tuples may also include probability information, generated by the translation model, relating to the phrases.
A number of phrases in the target language may be paired with the same phrase in the source language. The target language phrase having the highest probability of correctness may be selected as a translation equivalent. Alternatively, the target language phrase occurring most frequently in the extracted phrases may be selected as a translation equivalent.
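A minimal sketch of these two selection rules; representing each candidate as an (english_phrase, probability) pair is an assumption of this sketch, not the disclosure's data layout:

```python
from collections import Counter

def frequency_equivalent(candidates):
    """Select the target phrase occurring most often among the
    (target_phrase, probability) pairs extracted for one source phrase."""
    return Counter(e for e, _ in candidates).most_common(1)[0][0]

def probability_equivalent(candidates):
    """Select the target phrase attached to the extraction with the
    highest translation-model probability."""
    return max(candidates, key=lambda c: c[1])[0]
```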
A pre-compiled TMEM 110 including naturally-generated translation pairs may be used as the basis of the statistical TMEM 105. For example, for a French/English MT, a TMEM such as the Hansard Corpus, or a portion thereof, may be used. The Hansard Corpus includes parallel texts in English and Canadian French, drawn from official records of the proceedings of the Canadian Parliament. The Hansard Corpus is presented as a sequence of sentences in a version produced by IBM Corporation. The IBM collection contains nearly 2.87 million parallel sentence pairs.
Sentence pairs from the corpus may be used to train a statistical MT module 115. The statistical MT module 115 may implement a translation model 120, such as the IBM translation model 4, described in U.S. Pat. No. 5,477,451. The IBM translation model 4 revolves around the notion of a word alignment over a pair of sentences, such as that shown in the accompanying figure.
The word alignment in the figure is shorthand for a hypothetical stochastic process by which an English string is converted into a French string. Several sets of decisions are made. First, every English word is assigned a fertility: the number of French words it will generate. A word with fertility zero is deleted from the string, a word with fertility two is duplicated, and so on; spurious French words may also be generated by an invisible English NULL word. Next, every word (including NULL) is replaced, word-for-word, by French words. Finally, the French words are permuted into their target positions. In permuting, the model distinguishes between French words that are heads (the leftmost French word generated by a particular English word), non-heads (non-leftmost words, which only English words with fertility greater than one can produce), and NULL-generated words.
The head of one English word is assigned a French string position based on the position assigned to the previous English word. If an English word ei−1 translates into something at French position j, then the French head word of ei is stochastically placed in French position k with distortion probability d1(k−j | class(ei−1), class(fk)), where "class" refers to automatically determined word classes for French and English vocabulary items. This relative offset k−j encourages adjacent English words to translate into adjacent French words. If ei−1 is infertile, then j is taken from ei−2, etc. If ei−1 is very fertile, then j is the average of the positions of its French translations.
If the head of English word ei is placed in French position j, then its first non-head is placed in French position k (>j) according to another table d>1(k−j | class(fk)). The next non-head is placed at position q with probability d>1(q−k | class(fq)), and so forth.
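A sketch of how these two placement probabilities might be looked up in code; the dictionary key layouts (offset plus word classes) and the argument names are assumptions made for illustration:

```python
import math

def head_placement_logprob(d1, k, j, class_e_prev, class_f):
    """log d1(k - j | class(e_{i-1}), class(f_k)): the head word of e_i
    lands at French position k, given that the previous fertile English
    word translated to position j.
    d1 maps (offset, english_class, french_class) -> probability."""
    return math.log(d1[(k - j, class_e_prev, class_f)])

def nonhead_placement_logprob(d_gt1, q, k, class_f):
    """log d>1(q - k | class(f_q)): the next non-head word is placed at
    position q, relative to the previously placed word at position k.
    d_gt1 maps (offset, french_class) -> probability."""
    return math.log(d_gt1[(q - k, class_f)])
```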
After heads and non-heads are placed, NULL-generated words are permuted into the remaining vacant slots randomly. If there are φ0 NULL-generated words, then any placement scheme is chosen with probability 1/φ0!.
These stochastic decisions, starting with e, result in different choices of f and an alignment of f with e. The string e is mapped onto a particular <a,f> pair with probability:

$$
P(a, f \mid e) \;=\; \prod_{i=1}^{l} n(\phi_i \mid e_i)
\;\times\; \prod_{i=1}^{l} \prod_{k=1}^{\phi_i} t(\tau_{ik} \mid e_i)
\;\times\; \prod_{i=1,\,\phi_i>0}^{l} d_1\big(\pi_{i1} - c_{\rho_i} \,\big|\, \mathrm{class}(e_{\rho_i}),\, \mathrm{class}(\tau_{i1})\big)
\;\times\; \prod_{i=1}^{l} \prod_{k=2}^{\phi_i} d_{>1}\big(\pi_{ik} - \pi_{i(k-1)} \,\big|\, \mathrm{class}(\tau_{ik})\big)
\;\times\; \binom{m - \phi_0}{\phi_0}\, p_0^{\,m - 2\phi_0}\, p_1^{\,\phi_0}
\;\times\; \prod_{k=1}^{\phi_0} t(\tau_{0k} \mid \mathrm{NULL})
$$

where the factors separated by "×" symbols denote fertility, translation, head permutation, non-head permutation, null-fertility, and null-translation probabilities, respectively. The symbols in this formula are: l (the length of e), m (the length of f), ei (the ith English word in e), e0 (the NULL word), φi (the fertility of ei), φ0 (the fertility of the NULL word), τik (the kth French word produced by ei in a), πik (the position of τik in f), ρi (the position of the first fertile word to the left of ei in a), and cρi (the ceiling of the average of all πρik for ρi, or 0 if ρi is undefined).
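A sketch of how this product might be evaluated given the component tables; the dictionary layouts, the 1-based position convention, the `None` sentinel used when ρi is undefined, and all argument names are assumptions of this sketch rather than structures from the disclosure:

```python
import math
from math import comb

def model4_log_prob(e, f, tau, pi, n, t, d1, d_gt1, p0, p1, e_class, f_class):
    """Evaluate log P(a, f | e) as the product of the six factor families.

    e      : English words, e[0] is the NULL word, e[1..l] real words
    f      : French words, f[1..m] (f[0] unused)
    tau    : tau[i][k], the k-th French word produced by e[i]
    pi     : pi[i][k], the position of tau[i][k] in f (1-based)
    n, t   : fertility table n[(phi, e_word)], translation table t[(f_word, e_word)]
    d1     : head distortion, d1[(offset, e_class, f_class)]
    d_gt1  : non-head distortion, d_gt1[(offset, f_class)]
    """
    l, m = len(e) - 1, len(f) - 1
    phi = [len(tau[i]) for i in range(l + 1)]  # fertilities; phi[0] is NULL's
    log_p = 0.0

    # Factor 1 (fertility) and factor 2 (translation).
    for i in range(1, l + 1):
        log_p += math.log(n[(phi[i], e[i])])
        for word in tau[i]:
            log_p += math.log(t[(word, e[i])])

    # Factor 3 (head permutation) and factor 4 (non-head permutation).
    rho = None  # position of the first fertile English word to the left
    for i in range(1, l + 1):
        if phi[i] == 0:
            continue
        if rho is None:
            c, cls_e = 0, None  # rho_i undefined: c_rho is 0 by convention
        else:
            c = math.ceil(sum(pi[rho]) / phi[rho])  # ceiling of avg position
            cls_e = e_class[e[rho]]
        log_p += math.log(d1[(pi[i][0] - c, cls_e, f_class[tau[i][0]])])
        for k in range(1, phi[i]):
            log_p += math.log(d_gt1[(pi[i][k] - pi[i][k - 1],
                                     f_class[tau[i][k]])])
        rho = i

    # Factor 5 (null fertility) and factor 6 (null translation).
    phi0 = phi[0]
    log_p += math.log(comb(m - phi0, phi0))
    log_p += (m - 2 * phi0) * math.log(p0) + phi0 * math.log(p1)
    for word in tau[0]:
        log_p += math.log(t[(word, e[0])])  # e[0] is the NULL word
    return log_p
```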
In view of the foregoing, given a new sentence f, an optimal decoder will search for an e that maximizes P(e|f), or equivalently P(e)·P(f|e). Here, P(f|e) is the sum of P(a,f|e) over all possible alignments a. Because this sum involves significant computation, it is typically avoided by instead searching for an <e,a> pair that maximizes P(e,a|f), or equivalently P(e)·P(a,f|e). It is assumed that the language model P(e) is a smoothed n-gram model of English.
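For concreteness, one possible smoothed n-gram scorer; linear interpolation is just one common smoothing choice, and the table layouts and weights here are assumptions of this sketch (the disclosure does not specify the smoothing method):

```python
import math

def trigram_logprob(sentence, unigram, bigram, trigram, weights, vocab_size):
    """log P(e) under a trigram model smoothed by linearly interpolating
    trigram, bigram, and unigram relative frequencies."""
    l1, l2, l3 = weights  # unigram, bigram, trigram weights; sum to 1
    words = ["<s>", "<s>"] + sentence + ["</s>"]
    log_p = 0.0
    for i in range(2, len(words)):
        u, v, w = words[i - 2], words[i - 1], words[i]
        p = (l3 * trigram.get((u, v, w), 0.0)
             + l2 * bigram.get((v, w), 0.0)
             + l1 * unigram.get(w, 1.0 / vocab_size))  # uniform floor
        log_p += math.log(p)
    return log_p
```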
When a different translation model is used, the TMEM may contain, in addition to the contiguous French/English phrases, additional information specific to the translation model that is employed.
The tuples may be selected based on certain criteria. The tuples may be limited to "contiguous" alignments, i.e., alignments in which the words in the English phrase generated only words in the French phrase and each word in the French phrase was generated either by the NULL word or by a word from the English phrase. The tuples may be limited to those in which the English and French phrases contained at least two words. The tuples may be limited to those that occur most often in the data. Based on these conditions, six tuples may be extracted from the two sentences in the figure.
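The contiguity criterion can be illustrated with a short sketch; representing the alignment as a set of (e_index, f_index) links, with index 0 for the NULL word, is an assumption made for illustration:

```python
def is_contiguous(e_span, f_span, links):
    """Check the 'contiguous' criterion for a candidate phrase pair.

    e_span, f_span : (start, end) word-index ranges, inclusive
    links          : set of (e_index, f_index) pairs; e_index 0 is NULL
    """
    e_lo, e_hi = e_span
    f_lo, f_hi = f_span
    for ei, fj in links:
        # English words in the span must generate only inside the French span.
        if e_lo <= ei <= e_hi and not (f_lo <= fj <= f_hi):
            return False
        # French words in the span must come from NULL or the English span.
        if f_lo <= fj <= f_hi and ei != 0 and not (e_lo <= ei <= e_hi):
            return False
    return True
```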
Extracting all tuples of the form <e; f; a> from the training corpus may produce French phrases that are paired with multiple English translations. To reduce ambiguity, one possible English translation equivalent may be chosen for each French phrase (block 415). Different methods for choosing a translation equivalent may be used to construct different probabilistic TMEMs (block 420). A Frequency-based Translation Memory (FTMEM) may be created by associating with each French phrase the English equivalent that occurred most often in the collection of phrases that were extracted. A Probability-based Translation Memory (PTMEM) may be created by associating with each French phrase the English equivalent that corresponded to the alignment of highest probability.
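Building on the selection rules sketched earlier, the two memories might be constructed as below; the (f_phrase, e_phrase, alignment_prob) tuple layout is an assumption of this sketch:

```python
from collections import Counter, defaultdict

def build_tmems(extracted):
    """Group extracted tuples by French phrase and build both memories.

    extracted : iterable of (f_phrase, e_phrase, alignment_prob) tuples
    Returns (ftmem, ptmem) dicts mapping each French phrase to one
    English equivalent."""
    by_french = defaultdict(list)
    for f_phrase, e_phrase, prob in extracted:
        by_french[f_phrase].append((e_phrase, prob))

    ftmem, ptmem = {}, {}
    for f_phrase, candidates in by_french.items():
        # FTMEM: the most frequent English equivalent.
        ftmem[f_phrase] = Counter(e for e, _ in candidates).most_common(1)[0][0]
        # PTMEM: the equivalent from the highest-probability alignment.
        ptmem[f_phrase] = max(candidates, key=lambda c: c[1])[0]
    return ftmem, ptmem
```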
The exemplary statistical TMEMs explicitly encode not only the mutual translation pairs but also their corresponding word-level alignments, which may be derived according to a certain translation model, e.g., IBM translation model 4. The mutual translations may range from two words to complete sentences. In an exemplary statistical TMEM generation process, an FTMEM and a PTMEM were generated from a training corpus of 500,000 sentence pairs of the Hansard Corpus. Both methods yielded translation memories that contained around 11.8 million word-aligned translation pairs. For efficiency, only the fraction of each TMEM containing phrases at most 10 words long was used. This yielded a working FTMEM of 4.1 million and a working PTMEM of 5.7 million phrase translation pairs aligned at the word level using the IBM statistical model 4.
To evaluate the quality of both TMEMs, two hundred phrase pairs were randomly extracted from each TMEM. These pairs were judged by a bilingual speaker as perfect, almost perfect, or incorrect translations. A pair was considered a perfect translation if the judge could imagine contexts in which the aligned phrases could be mutual translations of each other. A pair was considered an almost perfect translation if the aligned phrases were mutual translations of each other and one phrase contained one single word with no equivalent in the other language. For example, the translation pair "final, le secrétaire de" and "final act, the secretary of" was labeled almost perfect because the English word "act" has no French equivalent. A pair was considered an incorrect translation if the judge could not imagine any contexts in which the aligned phrases could be mutual translations of each other.
The results of the evaluation are shown in the accompanying figure.
A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. For example, blocks in the flowcharts may be skipped or performed out of order and still produce desirable results. Accordingly, other embodiments are within the scope of the following claims.
Exemplary embodiments of the present invention include an article comprising a machine-readable medium including instructions operative to cause a machine to perform various functions. For example, the machine may train a translation model with a plurality of translation pairs, each translation pair including a text segment in a source language and a corresponding text segment in a target language. Additionally, the machine may generate a plurality of tuples from each of the plurality of translation pairs, each tuple comprising a phrase in the source language, a phrase in the target language, and probability information relating to the phrases. The machine may further store the tuples in a statistical translation memory, wherein the statistical translation memory may be included in the article or other apparatus.
This application claims the benefit of, and incorporates herein, U.S. Provisional Patent Application No. 60/291,852, filed May 17, 2001.
The research and development described in this application were supported by DARPA-ITO under grant number N66001-00-1-9814. The U.S. Government may have certain rights in the claimed inventions.
References Cited (U.S. Patent Documents):

Number | Name | Date | Kind
---|---|---|---
5477451 | Brown et al. | Dec 1995 | A
5724593 | Hargrave et al. | Mar 1998 | A
6131082 | Hargrave et al. | Oct 2000 | A
6236958 | Lange et al. | May 2001 | B1
6278969 | King et al. | Aug 2001 | B1
6304841 | Berger et al. | Oct 2001 | B1
6393388 | Franz et al. | May 2002 | B1
6393389 | Chanod et al. | May 2002 | B1
6415250 | van den Akker | Jul 2002 | B1
6473729 | Gastaldo et al. | Oct 2002 | B1
7107204 | Liu et al. | Sep 2006 | B1
Prior Publication Data:

Number | Date | Country
---|---|---
20030009322 A1 | Jan 2003 | US
Related U.S. Application Data:

Number | Date | Country
---|---|---
60291852 | May 2001 | US