Machine translation is a process by which a textual input in a first language (a source language) is automatically translated into a textual output in a second language (a target language). Some machine translation systems attempt to translate a textual input word for word, by translating individual words in the source text into individual words in the target language. However, this approach tends to produce translations that are not fluent.
Therefore, some current systems translate based on phrases. Machine translation systems that translate sequences of words in the source text, as a whole, into sequences of words in the target language, as a whole, are referred to as phrase-based translation systems.
During training, these systems receive a word-aligned bilingual corpus, in which words in a source training text are aligned with corresponding words in a target training text. Based on the word-aligned bilingual corpus, phrase pairs are extracted that are likely translations of one another. By way of example, using English as the source language and French as the target language, a phrase-based translation system finds a sequence of words in English for which a sequence of words in French is a translation.
Phrase translation tables are important to these types of phrase-based statistical machine translation systems. The phrase translation tables provide pairs of phrases that are used to construct a large set of potential translations for each input sentence, along with feature values associated with each phrase pair. The feature values are used to select a best translation from a given set of potential translations.
For purposes of the present discussion, a “phrase” can be a single word or any contiguous sequence of words. It need not correspond to a complete linguistic constituent.
There are a variety of ways of building phrase translation tables. One current system for building phrase translation tables selects, from a word alignment provided for a parallel bilingual training corpus, all pairs of phrases (up to a given length) that meet two criteria. A selected phrase pair must contain at least one pair of words linked by the word alignment and must not contain any words that have word-alignment links to words outside the phrase pair.
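By way of illustration only, the two criteria can be expressed as a short procedure. The following sketch is not taken from any particular system described here; the function and variable names are ours, and it assumes the word alignment for a sentence pair is given as a set of (source index, target index) links.

from itertools import product

def extract_phrase_pairs(src_tokens, tgt_tokens, alignment, max_len=7):
    """Collect phrase pairs (up to max_len words on each side) that
    (1) contain at least one alignment link and
    (2) contain no word linked to a word outside the pair."""
    pairs = []
    n, m = len(src_tokens), len(tgt_tokens)
    for i1, j1 in product(range(n), range(m)):
        for i2 in range(i1, min(n, i1 + max_len)):
            for j2 in range(j1, min(m, j1 + max_len)):
                linked, consistent = False, True
                for (i, j) in alignment:
                    inside_src = i1 <= i <= i2
                    inside_tgt = j1 <= j <= j2
                    if inside_src and inside_tgt:
                        linked = True
                    elif inside_src != inside_tgt:
                        consistent = False
                        break
                if linked and consistent:
                    pairs.append((tuple(src_tokens[i1:i2 + 1]),
                                  tuple(tgt_tokens[j1:j2 + 1])))
    return pairs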
If the word alignment of the training corpus includes many unaligned words, there is considerable uncertainty as to where the word sequences constituting phrase pairs begin and end. Therefore, this type of procedure typically generates many phrase pairs that result in translation candidates that are not even remotely reasonable.
The discussion above is merely provided for general background information and is not intended to be used as an aid in determining the scope of the claimed subject matter.
In a machine translation system, possible phrase pairs are extracted from a word-aligned training corpus. Feature values associated with the phrase pairs are calculated and parameters of a translation model for use in a decoder are trained. The translation model is then used to re-extract a subset of phrase pairs from the original set of extracted phrase pairs. The feature values associated with the subset of phrase pairs are recalculated, and the translation model parameters are re-trained based on the newly extracted subset of phrase pairs and the feature values associated with those phrase pairs.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the background.
System 100 trains a translation model for use in decoder 110 such that it translates input sentences by selecting an output that maximizes the score of a weighted linear model, such as that set out below:
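The weighted linear model itself is not reproduced here; a standard form, reconstructed from the definitions in the following paragraph, is:

t̂ = argmax(t,a) Σi λi fi(s,t,a)   Eq. 1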
where s is the input (source) sentence, t is the output (target) sentence, and a is a phrasal alignment that specifies how t is constructed from s. A weight parameter λi is associated with each feature fi, and the weight parameters are tuned to maximize the quality of the translation hypotheses selected by the decoding procedure set out in Eq. 1.
Once a word-aligned, bilingual corpus is generated, initial phrase pair extraction component 104 extracts an initial set of phrase pairs from the word-aligned, bilingual corpus for inclusion in the phrase translation table. Extracting the initial phrase pairs is indicated by block 124.
Table 1 shows an example of a fuller list of initial phrase pairs 105 consistent with the word alignment of sentence pair 204.
In any case, for each extracted phrase pair (s,t) (where s is the source portion of the phrase pair and t is the target portion of the phrase pair), feature value computation component 106 calculates values of features associated with the phrase pair. Calculation of the feature values is indicated by block 126.
The particular features for which values are calculated can be any of a wide variety of different features. Those discussed herein are for exemplary purposes only, and are not intended to limit the invention.
In any case, one translation feature is referred to as the phrase translation probability. It sums the logarithms of estimated conditional probabilities p(s|t) of each source language phrase s given the corresponding target language phrase t. An analogous feature sums the logarithms of estimated conditional probabilities p(t|s). In one embodiment, estimating the probabilities p(s|t) is performed in terms of relative frequencies as follows:
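The estimation equation is not reproduced here; a relative-frequency form reconstructed from the definitions below is:

p(s|t) = count(s,t) / Σs′ count(s′,t)   Eq. 2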
where count(s,t) is the number of times the phrase pair with source language phrase s and target language phrase t was selected from any aligned sentence pair for inclusion in the phrase translation table; and
Σs′ count(s′,t) is the number of times phrase pairs with any source language phrase and the same target language phrase t were selected from any aligned sentence pair.
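For illustration only (the names below are ours), the relative-frequency estimate of Eq. 2 can be computed directly from the multiset of extracted phrase pairs:

import math
from collections import Counter

def phrase_translation_log_probs(extracted_pairs):
    """Estimate log p(s|t) by relative frequency, as in Eq. 2.

    extracted_pairs is a list of (s, t) entries, one per time a phrase pair
    was selected from an aligned sentence pair for the phrase table."""
    pair_counts = Counter(extracted_pairs)
    target_counts = Counter(t for _, t in extracted_pairs)
    return {(s, t): math.log(count / target_counts[t])
            for (s, t), count in pair_counts.items()}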
Another feature is referred to as a lexical score feature and provides a simple form of smoothing by weighting a phrase pair based on how likely individual words within the phrases are to be translations of each other. According to one embodiment, this is calculated as follows:
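The lexical score equation is not reproduced here. One common formulation consistent with the variable definitions below is the following, offered only as an assumed form and not necessarily the exact equation used:

l(s,t) = Σi=1…n log[(1/m) Σj=1…m p(si|tj)]   Eq. 3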
where n is the number of words in s, m is the number of words in t, and the p(si|tj) are estimated word translation probabilities.
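A sketch of the lexical score under the assumed formulation above is shown below; it is illustrative only, word_probs stands in for a table of estimated word translation probabilities p(si|tj), and the small floor value is only there to avoid taking the logarithm of zero.

import math

def lexical_score(s_words, t_words, word_probs, floor=1e-9):
    """Lexical smoothing score under the assumed formulation of Eq. 3:
    average each source word's translation probabilities over the target
    words, then sum the logarithms of those averages."""
    m = len(t_words)
    total = 0.0
    for s_word in s_words:
        avg = sum(word_probs.get((s_word, t_word), 0.0)
                  for t_word in t_words) / m
        total += math.log(max(avg, floor))  # floor avoids log(0)
    return total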
Decoder 110, in performing statistical machine translation, produces a translation by dividing the source sentence into a sequence of phrases, choosing a target language phrase as a translation for each source language phrase, and ordering the chosen target language phrases to build the final translated sentence. Each potential translation is scored according to a weighted linear model, such as that set out in Eq. 1 above. In one embodiment, the decoder uses the three features discussed above, along with four additional features.
Those four additional features can include a target language model which is the logarithm of the probability of the full target language sentence, p(t), estimated using a tri-gram language model. A second feature is a distortion penalty that discourages reordering of the words. The penalty is illustratively proportional to the total number of words between the source language phrases corresponding to adjacent target language phrases. Another feature is a target sentence word count which is simply the total number of words in the full sentence translation. A final feature is the phrase pair count which is the number of phrase pairs that were used to build the full sentence translation.
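For illustration, three of these sentence-level features can be computed from the phrase segmentation alone; the language model feature is omitted because it requires an actual tri-gram model. The sketch below is ours, and the particular distortion formula is an assumption consistent with the description above, not necessarily the exact penalty used.

def sentence_level_features(phrase_segments):
    """Compute three sentence-level features from a phrase segmentation.

    phrase_segments is a list of (src_start, src_end, tgt_words) tuples, one
    per phrase pair used, listed in target-language order; src_end is
    exclusive, and tgt_words is the list of target words produced."""
    word_count = sum(len(tgt_words) for _, _, tgt_words in phrase_segments)
    phrase_pair_count = len(phrase_segments)
    distortion = 0
    for prev, curr in zip(phrase_segments, phrase_segments[1:]):
        # Source words skipped (or jumped back over) between the source
        # phrases corresponding to adjacent target phrases.
        distortion += abs(curr[0] - prev[1])
    return word_count, phrase_pair_count, distortion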
Parameter training component 108 accesses training data in translation model training corpus 109 and estimates the parameters λi (indicated by 115).
After the initial phrase translation table 107 is generated and the translation model for use in decoder 110 is initially trained, phrase pair re-extraction component 112 determines whether the phrase translation table 107 contains the final set of extracted phrase pairs, or whether it only contains the initial set of extracted phrase pairs. This is indicated by block 134. If the table already contains the final set of phrase pairs, processing of the phrase translation table is complete.
However, if only the initial set of phrase pairs has been extracted into the phrase translation table 107, then component 112 re-extracts phrase pairs, selecting a subset of the initial set of phrase pairs in the phrase translation table 107. This is indicated by block 136.
It will be noted that it is important to select high quality phrase pairs for the phrase translation table 107. Since phrase translation probabilities are estimated based on counting phrase pairs extracted from the word alignments, the quality of the estimates depends on the quality of the extracted pairs. If bad phrase pairs are included in the phrase translation table 107, not only do they provide more possible ways of producing bad translations, but they also add noise to the translation probability estimates for the phrases they contain, through their use in the denominator of the estimation formula set out in Eq. 2 above.
Therefore, in extracting the subset of phrase pairs, phrase pair re-extraction component 112 attempts to extract the subset of phrase pairs (indicated by block 113) that best supports high quality translations. To do so, component 112 scores each of the initially extracted phrase pairs.
Scoring the phrase pairs is performed using a metric that desirably yields high scores for phrase pairs that lead to high quality translations and low scores for those that decrease translation quality. One such metric is provided by the overall translation model in decoder 110. The scoring metric, q(s,t), is therefore computed by first extracting a full phrase translation table, then training a full translation model (for decoder 110) as discussed above.
More specifically, in one embodiment, the scoring metric is computed as follows:
q(s,t)=φ(s,t)·λ Eq. 4
where φ(s,t) is a length three vector that contains the feature values stored with the pair (s,t) in the initial phrase translation table 107. In other words, the logarithms of the conditional translation probabilities p(s|t) and p(t|s) and the lexical score l(s,t) are the three feature values in the vector. Also, λ is a vector of the three weight parameters that were learned for these features in the full translation model used by decoder 110. They are combined in Eq. 4 by the vector dot product operation, which sums the product of the value and the weight for each of the features.
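In code, this scoring step is simply a three-term dot product. The sketch below is illustrative only and the names are ours; weights is assumed to hold the three learned λ values in the same order as the feature values.

def phrase_pair_score(log_p_s_given_t, log_p_t_given_s, lexical, weights):
    """q(s,t): dot product of the three stored feature values with the three
    weights learned for those features in the full translation model."""
    phi = (log_p_s_given_t, log_p_t_given_s, lexical)
    return sum(value * weight for value, weight in zip(phi, weights))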
The remaining features discussed above, which are used in initially training the translation model for decoder 110, are not used in one illustrative embodiment, because they are either constant or depend only on the target language sentence, which is fixed during phrase extraction. In other words, in the present embodiment, the subpart of the full translation model for decoder 110 that is used to score phrase pairs during re-extraction is the part that actually depends on phrase pair identity, so the score reflects how strongly the full model would prefer the given phrase pair.
Once the initially extracted phrase pairs 105 are scored by the portion of the full translation model for decoder 110 that utilizes these features, a subset of the original phrase pairs is then selected based upon the scores calculated. This is indicated by block 304.
There are a variety of different ways to select the subset of phrase pairs based on their scores, as indicated by block 304.
One such mechanism is referred to as global competitive linking. Assuming that all of the phrase pairs for the given sentence pair are sorted by score, re-extraction component 112 selects the best scoring phrase pair. This is indicated by block 350.
Re-extraction component 112 then removes both the source and target language phrases in the selected phrase pair from further consideration. This is indicated by block 354. Component 112 repeats this process, selecting the best scoring remaining phrase pair and removing its source and target language phrases from further consideration, until no phrase pairs remain for the sentence pair.
By way of example, consider the phrase pairs in Table 1 above and assume that these phrase pairs have already been sorted by score q(s,t). The global competitive linking mechanism then works through that sorted list as just described, keeping the highest scoring remaining pair at each step and discarding any remaining pairs that share its source or target language phrase.
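A compact sketch of this global competitive linking step for a single sentence pair is shown below. It is illustrative only, the names are ours, and it assumes that removing a phrase from consideration means that no later pair containing that exact source or target phrase may be selected.

def global_competitive_linking(scored_pairs):
    """Greedy selection of phrase pairs for a single sentence pair.

    scored_pairs is a list of ((s, t), score) entries. The best scoring
    remaining pair is kept, and its source and target phrases are then
    removed from further consideration."""
    selected = []
    used_src, used_tgt = set(), set()
    for (s, t), _score in sorted(scored_pairs, key=lambda e: e[1], reverse=True):
        if s in used_src or t in used_tgt:
            continue
        selected.append((s, t))
        used_src.add(s)
        used_tgt.add(t)
    return selected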
Another mechanism by which re-extraction component 112 can select a subset of the initial phrase pairs based on their scores (as indicated by block 304) is referred to as local competitive linking.
It will be assumed that a sentence pair has been selected and all of the initial phrase pairs 105 identified for that sentence pair have been scored and ordered based on that score, as discussed above. Re-extraction component 112 first selects a source language phrase from the sorted phrase pairs. This is indicated by block 450.
Component 112 then marks the highest scoring phrase pair occurring in the sentence pair for the selected source language phrase. This is indicated by block 452. Component 112 repeats this process for each distinct source language phrase in the set of initial phrase pairs 105 occurring in the sentence pair. This is indicated by block 454.
Component 112 then selects a target language phrase from the ordered set of phrase pairs. This is indicated by block 456. Component 112 then marks the highest scoring phrase pair occurring in the sentence pair for the selected target language phrase. This is indicated by block 458. Component 112 repeats this process, selecting a target language phrase and marking the highest scoring phrase pair occurring in the sentence pair for the selected target language phrase, for all distinct target language phrases in the initial set of phrase pairs 105 occurring in the sentence pair. This is indicated by block 460.
Once the phrase pairs are marked in this way, component 112 selects all of the marked phrase pairs for inclusion in the phrase translation table. These marked phrase pairs, taken from all sentence pairs, then form the subset of phrase pairs 113 that ultimately end up in the phrase translation table. This is indicated by block 462.
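The local competitive linking selection for a single sentence pair can be sketched as follows. Again, this is illustrative only, with names of our choosing, and phrase pairs are assumed to be given together with their scores q(s,t).

def local_competitive_linking(scored_pairs):
    """For a single sentence pair, mark the highest scoring pair containing
    each distinct source phrase and each distinct target phrase, then keep
    every marked pair."""
    best_for_src, best_for_tgt = {}, {}
    for (s, t), score in scored_pairs:
        if s not in best_for_src or score > best_for_src[s][1]:
            best_for_src[s] = ((s, t), score)
        if t not in best_for_tgt or score > best_for_tgt[t][1]:
            best_for_tgt[t] = ((s, t), score)
    marked = {pair for pair, _ in best_for_src.values()}
    marked |= {pair for pair, _ in best_for_tgt.values()}
    return marked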
It can be seen that the local competitive linking mechanism is less restrictive than the global competitive linking mechanism described above, since a phrase pair is retained whenever it is the highest scoring pair for either its source language phrase or its target language phrase.
For example, again consider the phrase pairs in Table 1 above, and assume that they are sorted by their scores. The local competitive linking mechanism marks, for each distinct source language phrase and each distinct target language phrase occurring in those pairs, the highest scoring phrase pair containing that phrase, and retains all of the marked pairs.
It can thus be seen that both the global and local competitive linking mechanisms prune the initial phrase translation table, and both have been observed to significantly reduce its size. For instance, in one embodiment, global competitive linking reduced the size of the phrase translation table to approximately one-third of its initial size, while the local competitive linking mechanism reduced the size of the phrase translation table by approximately 45 percent. While global competitive linking reduced the size of the phrase translation table the most, it resulted in a slight loss of translation quality (as reflected by the BLEU score). Local competitive linking, on the other hand, not only reduced the size of the phrase translation table significantly, but also resulted in an increase in translation quality, as reflected by the BLEU score.
Embodiments are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with various embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
Embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Some embodiments are designed to be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules are located in both local and remote computer storage media including memory storage devices.
An exemplary system for implementing some embodiments includes a general purpose computing device in the form of a computer 510. Components of computer 510 may include, but are not limited to, a processing unit 520, a system memory 530, and a system bus 521 that couples various system components, including the system memory, to the processing unit 520.
Computer 510 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 510 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 510. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 530 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 531 and random access memory (RAM) 532. A basic input/output system 533 (BIOS), containing the basic routines that help to transfer information between elements within computer 510, such as during start-up, is typically stored in ROM 531. RAM 532 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 520.
The computer 510 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
The drives and their associated computer storage media discussed above provide storage of computer readable instructions, data structures, program modules and other data for the computer 510.
A user may enter commands and information into the computer 510 through input devices such as a keyboard 562, a microphone 563, and a pointing device 561, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 520 through a user input interface 560 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 591 or other type of display device is also connected to the system bus 521 via an interface, such as a video interface 590. In addition to the monitor, computers may also include other peripheral output devices such as speakers 597 and printer 596, which may be connected through an output peripheral interface 595.
The computer 510 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 580. The remote computer 580 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 510. The logical connections depicted include a local area network (LAN) 571 and a wide area network (WAN) 573, but may also include other networks.
When used in a LAN networking environment, the computer 510 is connected to the LAN 571 through a network interface or adapter 570. When used in a WAN networking environment, the computer 510 typically includes a modem 572 or other means for establishing communications over the WAN 573, such as the Internet. The modem 572, which may be internal or external, may be connected to the system bus 521 via the user input interface 560, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 510, or portions thereof, may be stored in the remote memory storage device.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.