Text-to-text applications, such as machine translation systems, often operate based on training data. A machine translation system may automatically learn from translated documents. The quality of the translation depends on the amount and quality of the training data, and on the precision of the training process. The training process involves a tradeoff between quality and processing speed.
Machine translation systems learn from language pairs, and thereafter may translate documents from one language in the pair to the other language in the pair. Translation quality may be greatly improved by providing field specific translation information, that is, translation information that is specific to the field of the information that is being translated.
The present application teaches a system which allows a generic training to be done by the translation developer. The translation developer can use a slow but accurate training technique for that generic training. Training for a text-to-text application may thus be done in two parts: a generic training done first, with emphasis on accuracy, followed by a specific training done with emphasis on speed. The data from the specific training is used to “adapt” the data created by the generic training.
The system may be provided to the customer with the generic training already completed. The field-specific training can be done by the customer, as part of a customization process for the translation. The field-specific training uses a technique that is faster but less accurate than the generic training.
Techniques are disclosed for the faster training, and for merging the two training results without completely re-training.
The general structure and techniques, and more specific embodiments which can be used to effect different ways of carrying out the more general goals are described herein.
Under the current system, a generic training is first carried out, and customers may also provide their specific training material. The software build is then customized to the training information. This may customize the software to work better than the generic translation software. However, it was noticed that customers often have data that they do not want to disclose. The data may be proprietary data, or may be classified data. For example, a military customer may have classified data which is available in both languages and could be used as very powerful specific training information. However, security restrictions may not allow that data to leave the customer's control.
According to an embodiment, a generic translation system is provided, which allows system adaptation on the customer's machine which is already running the pre-trained generic translation software. Alternately, the adaptation can be done at the same location as the generic training, to create an adapted parameter set that can be used for a text-to-text operation.
An embodiment is shown in the figure.
The data information is also supplemented with user training data, shown as 125. The user training data is optimized for use in the specific translation field, and hence is domain specific. The fast training module 130 processes this data. The fast training module 130 may use a different training technique than the in-house training system, one which is optimized for speed rather than accuracy. The user training data may include fewer words than the generic training data, typically between ½ million and 5 million words. The user training system may train 2-10 times faster than the in-house training system 115.
The user training creates the user domain specific parameter base 135.
The parameters are merged using a merge module 140, which creates a merged parameter base 145 that includes information from both the generic training data 110 and the user training data 125. The merged parameter base is then used by the text-to-text application 100, which may be a general purpose computer or processor that is programmed for a text-to-text application, such as translation. A foreign language sentence 150 is input into the translator 100 and converted to an English-language translation 155, or vice versa. The translation is thus based on both parameter bases 120 and 135.
The system used herein is called an adapter, and relies on two different kinds of adaptation: the so-called off-line adaptation at 115, which uses generic training data, and the online adaptation at 130, which adapts the generic data to the specific environment. The online adaptation uses a fast and lightweight word alignment model, similar to the models that are used in real-time speech recognition systems. The online adaptation is then merged with the generic system parameters using a set of mathematical formulas that allow for an effective parameter merge. The formulas calculate approximated parameter values as if all of the training, with both databases, had been carried out using the off-line adaptation scheme. This avoids completely retraining the parameters from the generic data, and may speed up the adaptation process.
A previous solution to the issue of a general corpus and a domain corpus combined the two corpora and completely retrained the system. This was typically done “offline”, since it was a relatively time-consuming process. Moreover, it required disclosure of the user's data, which, as described above, cannot always be done. The table merge described herein makes the system better able to operate in the off-line/online model.
The merged parameters 145 are used for the translator 100.
The Parameter Merge Module 140 combines parameters from the generic model 115 and the domain model 130. Each model is trained separately. The generic parameters are trained only from the generic data. This training is done by the developer, and may take more than two weeks, since the training data is large (over 100 million words). The user domain parameters are trained only from user data, which is typically small (less than 1 million words). Because the two models are trained separately, the adaptation process is quite fast, since only the user parameters need to be trained.
If more computational resources can be spent on the adaptation, then one way of processing is to combine the generic data and the user data, and to run the full training from scratch. It may also be important to duplicate the user data multiple times so that the user data has a significant effect on the model. Since the data sizes are quite different (typically, the generic data is about 100 million words, and the user data is less than 1 million words), such user data duplication may be important. This method (combining the generic and the user data, and training an adaptor from scratch) is called offline adaptation.
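For illustration only, a minimal Python sketch of this duplication idea follows; the toy corpora and the simple word-count "model" are hypothetical stand-ins for the real training data and training procedure, and are used only to show how the duplication factor K enters a model retrained from scratch.

from collections import Counter

def offline_adapt_counts(generic_tokens, user_tokens, K):
    # Offline adaptation: concatenate the generic data with K copies of the
    # user data and retrain (here: simply recount) from scratch.
    return Counter(list(generic_tokens) + K * list(user_tokens))

# Hypothetical toy corpora; in practice the generic corpus is ~100 million
# words and the user corpus is under 1 million words.
generic = "the court issued the ruling".split()
user = "the patrol secured the checkpoint".split()
print(offline_adapt_counts(generic, user, K=5)["checkpoint"])  # 5, not 1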
The Basic Merge Formula
The following formulas are used for online adaptation.
For a conditional model P(e|f), two models Pg(e|f) and Pd(e|f) are merged as follows:
P(e|f)=λfK·Pg(e|f)+(1−λfK)·Pd(e|f) (1)
λfK=Cg(f)/(Cg(f)+K·Cd(f)) (2)
The formula combines the generic model Pg(e|f) and the domain model Pd(e|f) by a linear mixture weight λfK. However, a different mixture weight is assigned for each model conditioning variable f. The weight is defined by the count of f in the general corpus, Cg(f), and in the domain corpus, Cd(f). The domain weight factor K is a global constant for all f.
Intuitively, the mixture weight is adjusted according to the word frequencies. If a word f is observed in the generic data but only infrequently in the domain data, the probability P(e|f) should stay the same as, or similar to, Pg(e|f). On the other hand, if a word f is observed very frequently in the domain data, then P(e|f) should be close to Pd(e|f).
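As a sketch of how formulas (1) and (2) behave, the short Python fragment below merges a single probability entry; the function name and the numeric counts are illustrative only, not values from any real model.

def merged_prob(p_g, p_d, c_g_f, c_d_f, K):
    # Merge a generic and a domain probability P(e|f) using the
    # count-based mixture weight of formulas (1) and (2).
    lam = c_g_f / (c_g_f + K * c_d_f)        # formula (2)
    return lam * p_g + (1.0 - lam) * p_d     # formula (1)

# f is frequent in the generic data but rare in the domain data:
# the merged probability stays close to Pg(e|f).
print(merged_prob(p_g=0.30, p_d=0.90, c_g_f=5000, c_d_f=2, K=10))   # ~0.30
# f is frequent in the domain data: the merged probability approaches Pd(e|f).
print(merged_prob(p_g=0.30, p_d=0.90, c_g_f=50, c_d_f=400, K=10))   # ~0.89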
Derivation
This section shows that the formula actually emulates the domain duplication effect. A bigram language model is used as an example here. Notice that the adapter formula is applicable to any conditional model, such as a Model-1 translation model, an n-gram language model, or any other conditional model.
Consider, for example, two bigram language models, Pg(w2|w1) and Pd(w2|w1), that are separately trained from general data and from domain data:
Pg(w2|w1)=Cg(w1w2)/Cg(w1)
Pd(w2|w1)=Cd(w1w2)/Cd(w1)
where Cg(w) and Cd(w) are the number of occurrences of w in the general data and in the domain data, respectively.
When the domain data is duplicated K times and concatenated with the general data, and a single language model is trained with this data, such a language model Pg+Kd(w2|w1) is:
Pg+Kd(w2|w1)=(Cg(w1w2)+K·Cd(w1w2))/(Cg(w1)+K·Cd(w1))
By introducing duplicated term weights λw1K, defined as
λw1K=Cg(w1)/(Cg(w1)+K·Cd(w1))
1−λw1K=K·Cd(w1)/(Cg(w1)+K·Cd(w1))
the domain duplicated language model becomes
Pg+Kd(w2|w1)=λw1K·Pg(w2|w1)+(1−λw1K)·Pd(w2|w1)
The last line shows that combining the two separately trained language models Pg(w2|w1) and Pd(w2|w1) with the duplicated term weights λw1K yields the same model that would be obtained by retraining on the general data concatenated with K copies of the domain data; this is exactly the form of the merge formula (1).
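The equivalence can be checked numerically. The following Python sketch trains toy bigram models on invented general and domain data, retrains on the general data concatenated with K copies of the domain data, and confirms that the retrained probability matches the value given by formulas (1) and (2); the corpora exist solely for this check.

from collections import Counter

def bigram_model(sentences):
    # Maximum-likelihood bigram model P(w2|w1); counts stay within sentences.
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = s.split()
        uni.update(toks[:-1])
        bi.update(zip(toks[:-1], toks[1:]))
    prob = {pair: c / uni[pair[0]] for pair, c in bi.items()}
    return prob, uni

general = ["a b a c a b"]          # hypothetical general data
domain = ["a c a c"]               # hypothetical domain data
K = 3

pg, cg = bigram_model(general)
pd, cd = bigram_model(domain)
p_dup, _ = bigram_model(general + K * domain)            # retrained on duplicated data

w1, w2 = "a", "c"
lam = cg[w1] / (cg[w1] + K * cd[w1])                      # formula (2)
merged = lam * pg[(w1, w2)] + (1 - lam) * pd[(w1, w2)]    # formula (1)
print(merged, p_dup[(w1, w2)])                            # both print 0.777...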
Formula Application
The model for this translation system may be formed of several different types of parameters. The adaptation formula above is modified differently according to the different types of parameters.
Word Probability Parameters
The formula above is directly applicable to parameters that are in the form of probabilities conditioned on a single word.
As seen in formula (1), a merged entry P(e|f) is independent of the domain data if its probability Pd(e|f) is zero, that is
P(e|f)=λfK·Pg(e|f) if Pd(e|f)=0 (10)
This happens when the words e and f do not co-occur in any sentence in the domain corpus.
Therefore, an efficient implementation is the following:
P(e|f)=Pov(e|f) if Cd(e,f)>0 (11)
P(e|f)=λfK·Pg(e|f) otherwise, where
Pov(e|f)=λfK·Pg(e|f)+(1−λfK)·Pd(e|f) (12)
and Cd(e,f) is the number of sentences (in the domain corpus) in which both e and f appear. The size of the override table Pov(e|f) is identical to the size of the word probability table from the domain corpus.
The override table is generated by the formula (12) at the end of the adaptation process. In the adapted version of the decoder, the override table is used in conjunction with the original generic word probability table Pg(e|f) as specified in the formula (11).
The following is a sample implementation of formula (11) and (12):
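The original listing is not reproduced here. A minimal Python sketch consistent with the description below might look as follows, with the override table represented as a dictionary keyed by (e, f) and with gen_cnt and dom_cnt standing in for the GenCnt and ComCnt tables (all names are illustrative, not the actual implementation):

def build_override_wp_table(p_g, p_d, gen_cnt, dom_cnt, K):
    # Precompute the override entries of formula (12) for every (e, f)
    # pair that occurs in the domain word probability table.
    override = {}
    for (e, f), pd_ef in p_d.items():
        w = gen_cnt.get(f, 0) / (gen_cnt.get(f, 0) + K * dom_cnt[f])      # formula (2)
        override[(e, f)] = w * p_g.get((e, f), 0.0) + (1.0 - w) * pd_ef   # formula (12)
    return override

def merged_wp(e, f, p_g, override, gen_cnt, dom_cnt, K):
    # Decode-time lookup of P(e|f) as specified by formula (11).
    if (e, f) in override:              # e and f co-occur in the domain corpus
        return override[(e, f)]
    denom = gen_cnt.get(f, 0) + K * dom_cnt.get(f, 0)
    if denom == 0:
        return 0.0                      # f unseen in both corpora
    return (gen_cnt.get(f, 0) / denom) * p_g.get((e, f), 0.0)   # formula (10)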
The OverrideWPTable(e,f) may be a fixed-sized array. Each element stores the precomputed value of formula (12). The size of the OverrideWPTable(e,f) is identical to the size of the word probability table from the domain corpus. The weight term W=GenCnt(f)/(GenCnt(f)+K*ComCnt(f)) implements the formula (2).
Phrase Probability Parameters
A simple application is possible for parameters that are in the form of probabilities conditioned on a sequence of words, or a phrase:
P(eee|fff)=λfffK·Pg(eee|fff)+(1−λfffK)·Pd(eee|fff)
where
λfffK=Cg(fff)/(Cg(fff)+K·Cd(fff))
and Cg(fff) and Cd(fff) are the number of occurrences of the phrase fff in the general corpus and the domain corpus, respectively. These phrase counts are provided from the generic data, in addition to the generic model parameters. If these phrase counts cannot be provided, they can be further approximated by assuming that the counts are uniform, u, over all possible phrases.
This turns out to be a linear model mixture in which the weight is determined by the domain corpus duplication factor K.
A similar approach to that used for the word probability parameters can be implemented as:
P(eee|fff)=Pov(eee|fff) if Cd(eee,fff)>0 (17)
P(eee|fff)=λfffK·Pg(eee|fff) otherwise, where
Pov(eee|fff)=λfffK·Pg(eee|fff)+(1−λfffK)·Pd(eee|fff) (18)
The following is an exemplary implementation of formulas (17) and (18):
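Again, the original listing is not reproduced; a minimal Python sketch of the phrase-level override computation, analogous to the word-level sketch above and using illustrative names, might be:

def build_override_ph_table(p_g, p_d, gen_phrase_cnt, dom_phrase_cnt, K, u=1):
    # Precompute override entries of formula (18) for every phrase pair in the
    # domain phrase table. If generic phrase counts are not available, a
    # uniform count u is assumed for every phrase (see above).
    override = {}
    for (e_phrase, f_phrase), pd_val in p_d.items():
        c_g = gen_phrase_cnt.get(f_phrase, u) if gen_phrase_cnt else u
        lam = c_g / (c_g + K * dom_phrase_cnt[f_phrase])
        override[(e_phrase, f_phrase)] = (
            lam * p_g.get((e_phrase, f_phrase), 0.0) + (1.0 - lam) * pd_val
        )
    return override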
The OverridePHTable is identical to the one from the domain corpus, except that each entry is replaced with the override value.
Count-Based Parameters
Some model parameters are not in the form of probabilities, but rather are in the form of simple counts. For this type of parameter, the basic formula (1) cannot be applied. Instead, the following formula is applied to have the same effect.
C(e,f)=Cg(e,f)+K·Cd(e,f) (19)
An efficient implementation is the following:
C(e,f)=Cov(e,f) if Cd(e,f)>0 (20)
C(e,f)=Cg(e,f) otherwise, where
Cov(e,f)=Cg(e,f)+K·Cd(e,f) (21)
The override table Cov(e,f) is generated by the formula (21) at the end of the adaptation process. In the adapted version of the decoder, the override lexicon table is used in conjunction with the original generic table Cg(e,f) as specified in formula (20).
The following is an exemplary implementation of formula (20) and (21):
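A minimal Python sketch of formulas (20) and (21), with the count tables represented as dictionaries keyed by (e, f) (names are illustrative), might be:

def build_override_c_table(c_g, c_d, K):
    # Precompute the override counts of formula (21) for every (e, f)
    # pair seen in the domain corpus.
    return {pair: c_g.get(pair, 0) + K * cnt for pair, cnt in c_d.items()}

def merged_count(pair, c_g, override):
    # Look up the merged count as specified by formula (20): use the override
    # entry when it exists, otherwise fall back to the generic count.
    return override.get(pair, c_g.get(pair, 0))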
The OverrideCTable(e,f) is a fixed-sized array. Each element stores the precomputed value of formula (21). The size of the OverrideCTable(e,f) may be identical to the size of the parameter table from the domain corpus.
Fast Train Module
The Fast Train Module uses a simpler version of the in-house Training System. It may use simple models such as bag-of-words translation models, and/or Hidden Markov models.
Simple Fast Training
The simplest form of the Fast Train Module is just to use such simple models without any additional data from the generic parameters. If the user data is sufficiently large, this scheme works well in practice.
Adaptive Fast Training
To improve the adaptation performance, especially for small user data, the generic parameters are additionally used when training on the user data. In general, the training method builds a word-based conditional probability table based on the following formula:
Pn+1(e|f)=cn(e,f)/Σe cn(e,f)
Where cn(e,f) is the alignment count from the parameters at the n-th iteration.
Seed Adaptation
In non-adapter systems, the initial parameter P0(e|f) is usually a uniform parameter. In the adapter environment, the generic parameter Pg(e|f) can be used instead.
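To make the preceding two sections concrete, the following Python sketch trains a bag-of-words (Model-1 style) table P(e|f) by the iterative count-and-normalize procedure described above, and accepts an optional generic table Pg(e|f) as the seed parameters. The function name, its interface, and the omission of a NULL source word are simplifications for illustration, not the actual Fast Train Module.

from collections import defaultdict

def fast_train(bitext, iterations=5, seed=None):
    # bitext: list of (source_words, target_words) sentence pairs.
    # seed: optional generic table Pg(e|f) used as the initial parameters
    # (seed adaptation); otherwise a uniform initialization is used.
    targets = {e for _, es in bitext for e in es}
    uniform = 1.0 / len(targets)
    t = defaultdict(lambda: uniform)
    if seed:
        t.update(seed)
    for _ in range(iterations):
        count = defaultdict(float)          # expected alignment counts cn(e, f)
        total = defaultdict(float)
        for fs, es in bitext:
            for e in es:
                norm = sum(t[(e, f)] for f in fs)
                for f in fs:
                    count[(e, f)] += t[(e, f)] / norm
                    total[f] += t[(e, f)] / norm
        # normalize the alignment counts to obtain the next P(e|f)
        t = defaultdict(lambda: uniform,
                        {(e, f): c / total[f] for (e, f), c in count.items()})
    return dict(t)

Seeding with the generic table gives the iterations a better starting point when the user data is small, which is the purpose of the seed adaptation described above.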
Use Alignment Counts from General Corpus
If the generic parameters provide the final alignment counts cgN(e,f) from the general corpus, these counts can be used in each iteration of the user data training.
Use Generic Model for Each Iteration
Instead of using cgN(e,f), the generic model table Pg(e|f) can be used in each iteration.
In this case, a constant weight λ can be used.
Term Weighted Adaptation
The adaptation formula (1) can alternatively be used as:
Use Prior Count Instead of Alignment Count
The alignment count cgN(e,f) from the general corpus can be approximated using the term frequency count Cg(f):
cgN(e,f)≈Cg(f)·Pg(e|f) (26)
The operations described herein can be carried out on any computer, e.g., an Intel processor, running Windows and/or Java. Programming may be done in any suitable programming language, including C and Java.
In one embodiment, a method may comprise first carrying out a first training using at least one corpus of language information, using a first training operation to obtain a first parameter set; second carrying out a second training using a domain specific corpus, using a second training operation which operates faster than said first training operation, and which is less accurate than said first training operation, to obtain a second parameter set; and using said second parameter set to adapt said first parameter set to carry out a text-to-text operation. In another embodiment, said first carrying out may be carried out at a first location, and said second carrying out may be carried out at a second location, different than said first location.
Although only a few embodiments have been disclosed in detail above, other embodiments are possible and are intended to be encompassed within this specification. The specification describes specific examples to accomplish a more general goal that may be accomplished in other ways. This disclosure is intended to be exemplary, and the claims are intended to cover any modification or alternative which might be predictable to a person having ordinary skill in the art. For example, other merging and training techniques may be used.
Also, only those claims which use the words “means for” are intended to be interpreted under 35 USC 112, sixth paragraph. Moreover, no limitations from the specification are intended to be read into any claims, unless those limitations are expressly included in the claims.