The present invention relates to a word alignment apparatus for analyzing the correspondence between words (word alignment) in a bilingual sentence pair, and to a word alignment score computing apparatus used therefor.
Word alignment is an essential task indispensable for Statistical Machine Translation (SMT).
Referring to
In SMT, such a word alignment plays a very important role. SMT prepares a bilingual corpus including a large number of bilingual pairs such as described above, and word alignment is done for each bilingual pair. Based on the word alignment, a translation model is built through a statistical process. This process is referred to as translation model training. In short, the translation model represents, in the form of probability, which word of one language would be translated to which word of the other language. In SMT, when a sentence of a source language is given, a number of candidate sentences of the translation target language (target language) are prepared; the probability of the source language sentence being generated from each target language candidate is computed, and the target language sentence that has the highest probability is estimated to be the translation of the source language sentence. The translation model mentioned above is used in this process.
Clearly, a translation model with higher precision is necessary to improve SMT performance. For this purpose, it is necessary to improve the word alignment precision of the bilingual corpus used in translation model training. Therefore, in order to improve SMT performance, it is desirable to improve the performance of a word alignment apparatus that performs word alignment of bilingual pairs.
Prevalently used word alignment methods include the IBM model (see Non-Patent Literature 1 below) and the HMM model (see Non-Patent Literature 2). These models assume that word alignment is generated in accordance with a certain probability distribution, and estimate (learn) the probability distribution from actually observed word alignments (a generative model). Given a source language sentence f1J=f1, . . . , fJ and a target language sentence e1I=e1, . . . , eI, the sentence f1J of the source language is generated from the sentence e1I of the target language via the word alignment a1J, and the probability of this generation is computed in accordance with Equation (1) below. In Equation (1), each aj is a hidden variable indicating that the source language word fj is aligned to the target language word ea_j. In the following text, an underscore “_” indicates that a subscript itself carries a further subscript, and the braces “{ }” following the underscore indicate the range of that inner subscript. Specifically, the notation “ea_{j}” indicates that the subscript accompanying “e” is “aj” in normal expression; the notation “ea_{j}−1” indicates that the subscript to “e” is aj−1 (aj minus 1); and the notation “ea_{j−1}” indicates that the subscript to “e” is aj−1 (a indexed by j−1).
In Equation (2), pa is the alignment probability, and pt is the lexical translation probability.
For a bilingual sentence pair (f1J, e1I), these models specify a best alignment ^a (the symbol “^” is originally to be written immediately above the immediately following character) satisfying Equation (3) below, using, for example, forward-backward algorithm. The best alignment is referred to as Viterbi alignment.
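The images of Equations (1) to (3) are not reproduced in this text. As an orientation only, the following is a hedged reconstruction, consistent with the standard HMM alignment model of Non-Patent Literature 2 and with the description above; the exact forms in the original may differ. Equation (1) expresses the probability of generating f1J from e1I via a1J, Equation (2) factorizes it into the alignment probability pa and the lexical translation probability pt, and Equation (3) defines the Viterbi alignment.

```latex
% Hedged reconstruction (not the original images) of Equations (2) and (3)
p(f_1^J, a_1^J \mid e_1^I) = \prod_{j=1}^{J} p_a(a_j \mid a_{j-1}, I)\; p_t(f_j \mid e_{a_j}) \qquad (2)
\hat{a} = \operatorname*{arg\,max}_{a_1^J} \; p(f_1^J, a_1^J \mid e_1^I) \qquad (3)
```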
Non-Patent Literature 3 proposes an alignment method in which the Context-Dependent Deep Neural Network for HMM, one type of Feed-Forward Neural Network (FFNN), is applied to the HMM model of Non-Patent Literature 2, so that an alignment score corresponding to the alignment probability and a lexical score corresponding to the lexical translation probability are computed using an FFNN. Specifically, a score sNN (a1J|f1J, e1I) of alignment a1J for a bilingual sentence pair (f1J, e1I) is represented by Equation (4) below.
In the method of Non-Patent Literature 3, normalization over all words is computationally prohibitive and, therefore, a score is used in place of a probability. Here, ta and tt correspond to pa and pt of Equation (2), respectively. sNN represents the score of alignment a1J, and c(w) represents the context of word w. As in the HMM model, Viterbi alignment is determined by the forward-backward algorithm in this model.
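The image of Equation (4) is likewise not reproduced. A plausible form, consistent with the description of ta and tt above and with the alignment score ta(aj−aj−1|c(ea_{j}−1)) given later, is the product of the two FFNN scores over all source positions; the exact notation in the original may differ.

```latex
% Hedged reconstruction of Equation (4): product of alignment and lexical scores
s_{NN}(a_1^J \mid f_1^J, e_1^I)
  = \prod_{j=1}^{J} t_a\!\left(a_j - a_{j-1} \mid c(e_{a_{j-1}})\right)\,
                    t_t\!\left(f_j \mid e_{a_j}, c(f_j), c(e_{a_j})\right) \qquad (4)
```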
The weight matrix L is an embedding matrix that holds the word embedding of each word. A word embedding is a low-dimensional real-valued vector representing syntactic and semantic properties of a word. When the set of source language words is denoted by Vf, the set of target language words by Ve and the length of a word embedding by M, the weight matrix L is an M×(|Vf|+|Ve|) matrix. It is noted, however, that <unk> representing an unknown word and <null> representing a null word are added to Vf and Ve, respectively.
The lexical translation model receives as inputs the source language word fj and the target language word ea_{j} that are the objects of computation, as well as their contextual words. The contextual words are the words within a window of a predetermined size. Here, a window width of 3 is assumed, as shown in
Specific computations in hidden layer 72 and output layer 74 are as follows.
z1=ƒ(H×z0+BH), (5)
tt=O×z1+BO (6)
where H, BH, O and BO are |z1|×|z0|, |z1|×1, 1×|z1| and 1×1 matrices, respectively. Further, f(x) is a non-linear activation function; here htanh(x) (hard hyperbolic tangent) is used, which is represented as htanh(x)=−1 for x<−1, htanh(x)=x for −1≤x≤1, and htanh(x)=1 for x>1.
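The following is a minimal sketch, not the implementation of Non-Patent Literature 3, of the computation in Equations (5) and (6). The layer sizes, the way the window embeddings are concatenated into z0, and the random initialization are assumptions based on the surrounding description.

```python
# Minimal sketch of the FFNN lexical translation score of Equations (5)-(6).
# Sizes and the construction of z0 are assumptions, not the patent's implementation.
import numpy as np

M = 30               # word embedding length (value taken from the experiments below)
WINDOW = 3           # window width assumed above
Z0 = 2 * WINDOW * M  # z0: concatenated source-side and target-side window embeddings
Z1 = 100             # number of hidden units

rng = np.random.default_rng(0)
H = rng.uniform(-0.1, 0.1, (Z1, Z0))    # |z1| x |z0|
B_H = rng.uniform(-0.1, 0.1, (Z1, 1))   # |z1| x 1
O = rng.uniform(-0.1, 0.1, (1, Z1))     # 1 x |z1|
B_O = rng.uniform(-0.1, 0.1, (1, 1))    # 1 x 1

def htanh(x):
    """Hard hyperbolic tangent: clips its input to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)

def lexical_translation_score(z0):
    """z0: (Z0, 1) column vector of concatenated word embeddings."""
    z1 = htanh(H @ z0 + B_H)   # Equation (5)
    tt = O @ z1 + B_O          # Equation (6)
    return float(tt)

# Example call with a random input vector standing in for real embeddings.
print(lexical_translation_score(rng.standard_normal((Z0, 1))))
```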
The alignment model for computing alignment score ta(aj−aj−1|c(ea_{j}−1)) can be formed in the same way.
In the training of each model, the weight matrices of the layers are trained using Stochastic Gradient Descent (SGD) so that the ranking loss represented by Equation (7) below is minimized. The gradients of the weights are computed by backpropagation.
where θ denotes the parameters to be optimized (weights of weight matrices), T is training data, sθ denotes the score of a1J computed by the model under parameters θ (see Equation (4)), a+ is the correct alignment, and a− is the incorrect alignment that has the highest score by the model under parameters θ.
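The image of Equation (7) is not reproduced. Given the variables just defined, a plausible reconstruction of the ranking loss is the margin loss below; the exact form in the original may differ.

```latex
% Hedged reconstruction of the ranking loss of Equation (7)
loss(\theta) = \sum_{(f, e, a^{+}) \in T}
  \max\bigl(0,\; 1 - s_{\theta}(a^{+} \mid f, e) + s_{\theta}(a^{-} \mid f, e)\bigr) \qquad (7)
```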
Both in Equations (2) and (4), the alignment aj of each word depends only on the immediately preceding alignment aj−1. As shown in
It is not clear, however, whether the immediately preceding alignment alone is a sufficient clue for alignment. In order to improve the precision of word alignment, it may be necessary to explore other approaches and to identify, if one exists, a method attaining higher precision.
Therefore, an object of the present invention is to provide a word alignment apparatus enabling word alignment with precision higher than the conventional methods, a word alignment score computing apparatus for this purpose, and a computer program therefor.
According to a first aspect, the present invention provides a word alignment score computing apparatus that computes a score of word alignment of bilingual sentence pair of first and second languages. The apparatus includes: selecting means receiving the bilingual sentence pair and a word alignment for the bilingual sentence pair, for successively selecting words of a sentence in the first language of the bilingual sentence pair in a prescribed order; and score computing means for computing, for every word of the sentence in the first language of the bilingual sentence pair, a score representing a probability that the word selected by the selecting means and a word in the second language aligned with the word by the word alignment form a correct word pair, and based on this score, for computing a score of the word alignment. In computing a score of a certain word pair, the score computing means computes the score of the certain word pair based on all alignments of words selected by the selecting means preceding that word in the first language which forms the certain word pair.
Preferably, the selecting means includes means for successively selecting words of the sentence in the first language starting from the beginning of the sentence in the first language.
More preferably, the score computing means includes: first computing means for computing a score representing a probability that a word pair consisting of the word selected by the selecting means and a word in a sentence in the second language of the bilingual sentence pair aligned with the word by the word alignment is a correct word pair; and second computing means for computing, based on scores of all words of the sentence in the first language of the bilingual sentence pair computed by the first computing means, the score of the word alignment.
More preferably, the second computing means includes means for computing the score of the word alignment by multiplying the scores of all words of the sentence in the first language of the bilingual sentence pair computed by the first computing means.
The score computing means may include: a recurrent neural network having a first input receiving a word selected by the selecting means and a second input receiving a word in the second language aligned with the word by the word alignment; and input control means for applying the word selected by the selecting means and the word aligned with the word by the word alignment to the first and second inputs, respectively. The recurrent neural network includes: an input layer having the first and second inputs and computing and outputting word embedding vectors from words respectively applied to the first and second inputs; a hidden layer receiving outputs of the input layer, and generating, by a predetermined non-linear operation, a vector representing a relation between two outputs from the input layer; and an output layer computing and outputting the score based on the output of the hidden layer. The output of the hidden layer is applied as an input to the hidden layer when a next word pair is given to the word alignment score computing apparatus.
According to a second aspect, the present invention provides a word alignment apparatus that estimates word alignment of bilingual sentence pair of first and second languages. The apparatus includes: any of the above-described word alignment score computing apparatuses; word alignment candidate generating means for generating a plurality of word alignment candidates for the bilingual sentence pair; computing means for computing, for each of the plurality of word alignment candidates generated by the word alignment candidate generating means, a word alignment score for the bilingual sentence pair using the word alignment score computing apparatus; and word alignment determining means for determining and outputting as the word alignment for the bilingual sentence pair, that one of the word alignment candidates computed by the computing means for the plurality of word alignment candidates which corresponds to the highest score.
According to a third aspect, the present invention provides a computer program causing, when executed by a computer, the computer to function as each means of any of the above-described apparatuses.
In the following description and in the drawings, the same components are denoted by the same reference characters. Therefore, detailed descriptions thereof will not be repeated.
[Basic Concepts]
In the present embodiment, when the best alignment ^a is to be found, the alignment of each word is determined based on all the alignment relations from the beginning of the sentence up to the immediately preceding one. By way of example, the score of an alignment sequence a1J=a1, . . . , aJ is computed in accordance with Equation (8) as a score depending on all preceding alignment relations. The score may be a probability.
For this purpose, the present embodiment adopts an RNN (Recurrent Neural Network) based alignment model. This model computes the score sNN of alignment a1J of Equation (8) by an RNN. According to Equation (8), the estimation of the j-th alignment aj depends on all the preceding alignments a1j-1. Note that in this example, a score rather than a probability is used, as in the conventional FFNN-based model.
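The image of Equation (8) is not reproduced. Based on the later description of steps 210 to 222, in which the per-word scores tRNN are multiplied together, a plausible form is:

```latex
% Hedged reconstruction of Equation (8): each factor conditions on the whole alignment history
s_{NN}(a_1^J \mid f_1^J, e_1^I) = \prod_{j=1}^{J} t_{RNN}\!\left(a_j \mid a_1^{j-1}, f_j, e_{a_j}\right) \qquad (8)
```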
Referring to
tRNN(aj|a1j-1, ƒj, ea_{j})
Upon receiving the vector yj output from hidden layer 112 and in response to vector yj, output layer 114 computes and outputs score 102 (tRNN) of alignment aj in accordance with Equation (10). Note that while the conventional FFNN-based model (
The computations in the hidden layer 112 and output layer 114 of this model are as follows.
yj=ƒ(Hd×xj+Rd×yj−1+BdH) (9)
tRNN=O×yj+BO (10)
where Hd, Rd, BdH, O and BO are |yj|×|xj|, |yj|×|yj−1|, |yj|×1, 1×|yj| and 1×1 matrices, respectively. Note that |yj|=|yj−1| here. f(x) is a non-linear activation function, which is htanh(x) in this embodiment.
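Below is a minimal sketch of one step of Equations (9) and (10), under assumed sizes; xj is taken to be the concatenation of the embeddings of fj and ea_{j}, which is an assumption based on the description above, not the patent's implementation.

```python
# Minimal sketch of one recurrent step, Equations (9)-(10).
# The hidden state y_j carries the history of all preceding alignments.
import numpy as np

M, Y = 30, 100        # embedding length and hidden units (values from the experiments below)
X = 2 * M             # x_j: concatenated embeddings of f_j and e_{a_j} (assumed)

rng = np.random.default_rng(0)
H_d = rng.uniform(-0.1, 0.1, (Y, X))     # |yj| x |xj|
R_d = rng.uniform(-0.1, 0.1, (Y, Y))     # |yj| x |yj-1|
B_dH = rng.uniform(-0.1, 0.1, (Y, 1))    # |yj| x 1
O = rng.uniform(-0.1, 0.1, (1, Y))       # 1 x |yj|
B_O = rng.uniform(-0.1, 0.1, (1, 1))     # 1 x 1

def htanh(x):
    return np.clip(x, -1.0, 1.0)

def rnn_step(x_j, y_prev=None):
    """Returns the new hidden state y_j and the score t_RNN for this word pair."""
    if y_prev is None:                             # first word of the sentence
        y_prev = np.zeros((Y, 1))
    y_j = htanh(H_d @ x_j + R_d @ y_prev + B_dH)   # Equation (9)
    t_rnn = float(O @ y_j + B_O)                   # Equation (10)
    return y_j, t_rnn
```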
In the present embodiment, Viterbi alignment is determined by a forward-backward algorithm. Strictly speaking, however, the forward-backward algorithm based on dynamic programming cannot be used, since the alignment history encoded in yj is long. Therefore, Viterbi alignment is approximated here by a heuristic beam search: in the forward computation, only the states within a beam width designated beforehand are maintained and the others are discarded. A sketch of this procedure is given below.
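The following is a heuristic sketch of the beam search just described, not the patent's implementation. The helper `step_score` is hypothetical; it is assumed to wrap one recurrent step (such as `rnn_step` above) for source position j aligned to target position a_j, returning the new hidden state and the per-word score.

```python
# Heuristic beam-search approximation of Viterbi alignment (a sketch).
# Only the `beam_width` best partial alignments are kept at each source position.
def viterbi_beam(J, I, step_score, beam_width=8):
    """J: source length, I: target length, step_score(j, a_j, y_prev) -> (y_j, t)."""
    beams = [([], 1.0, None)]            # (partial alignment, score so far, hidden state)
    for j in range(J):
        candidates = []
        for alignment, score, y_prev in beams:
            for a_j in range(I):         # every candidate target position
                y_j, t = step_score(j, a_j, y_prev)
                candidates.append((alignment + [a_j], score * t, y_j))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]  # prune to the beam width
    best_alignment, best_score, _ = max(beams, key=lambda b: b[1])
    return best_alignment, best_score
```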
As described above, the RNN-based model has a hidden layer with recurrent connections. Because of the recurrent connections, the history of previous alignments can be compactly encoded by hidden layer 112 and propagated. Therefore, by computing a score in accordance with the settings of hidden layer 112, a score can be found taking advantage of the entire previous alignment relations.
<Training>
During training, the weight matrices of the respective layers are optimized using mini-batch SGD with a batch size D. This method converges faster and is more stable than plain SGD (D=1). Gradients are computed by the Back Propagation Through Time (BPTT) algorithm. In BPTT, the network is unfolded in the time direction (j), and the gradients are then computed at each time step. In addition, an l2 regularization term is added to the objective function to prevent the model from overfitting.
The RNN-based model can be trained by the supervised approach as in the FFNN-based model. The training here proceeds based on the ranking loss defined by Equation (7). Other than this method of training, constraints incorporating bi-directional agreement of alignments or unsupervised training may be used to further improve performance. The supervised training requires supervisory data (ideal alignments). To overcome this drawback, the present embodiment adopts unsupervised training using NCE (Noise-Contrastive Estimation), which learns from unlabeled training data.
<Unsupervised Learning>
Dyer et al. propose an alignment model using unsupervised learning based on Contrastive Estimation (CE) (Chris Dyer, Jonathan Clark, Alon Lavie, and Noah A. Smith. 2011. Unsupervised Word Alignment with Arbitrary Features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 409-419). CE learns a model that discriminates observed data from its neighborhood data, regarding the neighborhood data as pseudo negative samples relative to the observed data. Dyer et al. regard all possible word alignments of the bilingual sentences given as training data T as observed data, and regard the full translation search space Ω as its neighborhood data.
The present embodiment introduces this idea into a ranking loss with margin.
where Φ is the set of all possible alignments given (f, e), EΦ[sθ] is the expected value of the score sθ over Φ, e+ denotes a target language sentence in the training data, and e− denotes a pseudo target language sentence. The first expected-value term is for the observed data and the second is for the neighborhood data.
The computation over the full search space Ω is prohibitively expensive. To reduce the computation, NCE is adopted here. NCE uses sentences randomly sampled from the full search space Ω as e−. Further, the expected values are computed by a beam search with a beam width W, truncating alignments with low scores, to further reduce the computational burden. Equation (11) above is then converted to a form suitable for on-line processing.
where e+ is a target language sentence forming a pair with f+ in the training data, that is, (f+, e+)∈T, e− is a randomly sampled pseudo target language sentence whose length is |e+|, and N denotes the number of pseudo target language sentences generated for each f+. GEN is a subset of the set Φ of all possible alignments, generated by the beam search.
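The image of Equation (12) is not reproduced. One plausible on-line form, consistent with the variables just defined (the exact expression in the original may differ), is:

```latex
% Hedged reconstruction of the NCE objective of Equation (12)
loss(\theta) = \sum_{(f^{+}, e^{+}) \in T}
  \max\Bigl(0,\; 1 - E_{GEN(f^{+}, e^{+})}[s_{\theta}]
       + \tfrac{1}{N}\sum_{n=1}^{N} E_{GEN(f^{+}, e_{n}^{-})}[s_{\theta}]\Bigr) \qquad (12)
```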
In a simple method of generating e−, e− is generated by sampling a target word at random from the set Ve of target language words |e+| times and lining up the sampled words. To produce negative samples that are more effective for model learning, the present method instead samples each word from the set of words (translation candidate words) whose probability under IBM model 1 incorporating an l0 prior exceeds a threshold C, among the target language words that co-occur with the source language words fi∈f+ in the bilingual sentences of the training data. IBM model 1 incorporating an l0 prior is convenient for reducing the number of translation candidates because it generates sparser alignments than the standard IBM model 1.
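A short sketch of this sampling procedure is given below. The candidate table `ibm1_candidates` (a mapping from a source word to target words and their probabilities) and the threshold C stand in for the IBM model 1 (l0 prior) translation candidates described above and are assumptions of this sketch.

```python
# Sketch of pseudo-negative generation: sample each word of e- from the
# translation candidates (probability above C under IBM model 1 with an l0 prior)
# of the source words in f+. The candidate table is an assumed input.
import random

def generate_pseudo_target(f_plus, e_plus_len, ibm1_candidates, C=0.001):
    candidates = set()
    for f_word in f_plus:
        for e_word, prob in ibm1_candidates.get(f_word, {}).items():
            if prob > C:                      # keep only likely translation candidates
                candidates.add(e_word)
    candidates = sorted(candidates)
    # line up |e+| sampled candidate words to form one pseudo target sentence e-
    return [random.choice(candidates) for _ in range(e_plus_len)]
```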
<Agreement Constraints>
Both the FFNN-based and RNN-based models are based on the HMM alignment model and are therefore asymmetric. Specifically, they can represent one-to-many alignment relations only when viewed from the target language side. Asymmetric models such as these are usually trained in the two alignment directions. It has been found, however, that alignment performance improves when such directional models are trained to agree with each other. Such a constraint is referred to as an agreement constraint. In the following, a method of training that introduces the agreement constraint into the above-described model is described.
Specifically, the agreement constraint enforces agreement between the word embeddings of the two directions. The present embodiment trains the two directional models based on the following objective functions, which incorporate penalty terms expressing the difference between the word embeddings.
where θFE (θEF) denotes the weights of the layers in the source-to-target (target-to-source) alignment model, θL denotes the weights of the input layer (Lookup layer), that is, the word embeddings, and α is a parameter controlling the strength of the agreement constraint. “∥θ∥” denotes the norm; in the experiments described below, the 2-norm was used. Equations (13) and (14) can be applied to both the supervised and unsupervised approaches: Equation (7) or (12) may be substituted for loss(θ) in Equations (13) and (14).
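The images of Equations (13) and (14) are not reproduced. A plausible reconstruction, consistent with the description of the penalty terms above (the exact expressions may differ), is:

```latex
% Hedged reconstruction of the agreement-constrained objectives (13)-(14)
loss_{FE}(\theta_{FE}) = loss(\theta_{FE}) + \alpha \,\bigl\| \theta_{L}^{FE} - \theta_{L}^{EF} \bigr\| \qquad (13)
loss_{EF}(\theta_{EF}) = loss(\theta_{EF}) + \alpha \,\bigl\| \theta_{L}^{EF} - \theta_{L}^{FE} \bigr\| \qquad (14)
```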
Referring to
First, the program reads a configuration file in which the parameters are written beforehand, and sets the batch size D, N, C, W and α (step 150). Thereafter, inputs of θ1FE, θ1EF and a constant MaxIter indicating the maximum number of iterations are received, and the training data T and IBM1 are read (step 152). Here, IBM1 is a list of translation candidate words found by IBM model 1 incorporating an l0 prior for each source and target language word. The program further includes a step 154, following the steps above, of repeating the following process for every t satisfying 1≤t≤MaxIter, and a step 156 of outputting the values θEFMaxIter+1 and θFEMaxIter+1 obtained at the completion of step 154 and ending the process.
The process iterated for every t at step 154 includes: a step 170 of sampling D bilingual sentence pairs (f+, e+)D from the training data T; a step 172 of generating N pseudo negative samples for each f+, based on the translation candidates (IBM1) of each word in f+ found by IBM model 1 incorporating an l0 prior; and a step 174 of generating N pseudo negative samples for each e+, based on the translation candidates of each word in e+ through similar processing. Further, at steps 176 and 178, the weights of the respective layers of the neural network are updated in accordance with the objective functions described above. θtEF and θtFE are updated simultaneously at each iteration, and when θtFE and θtEF are updated, the values of θt-1EF and θt-1FE are used so that the word embeddings agree (match) with each other. A sketch of this loop is given below.
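The following sketch summarizes steps 150 to 178 at a high level. The helpers passed as arguments (`sample_pairs`, `sample_negatives`, `update_direction`) are hypothetical placeholders for the mini-batch sampling, the pseudo-negative generation sketched above, and the per-direction SGD update of Equation (13) or (14); they are not part of the patent text.

```python
# High-level sketch of the two-directional training loop (steps 150-178).
# All helpers are hypothetical placeholders supplied by the caller.
def train_two_directions(T, ibm1, theta_fe, theta_ef,
                         sample_pairs, sample_negatives, update_direction,
                         D=100, N=50, C=0.001, W=100, alpha=0.1, max_iter=50):
    for t in range(1, max_iter + 1):
        batch = sample_pairs(T, D)                                  # step 170
        neg_e = sample_negatives(batch, ibm1, N, C, side="target")  # step 172
        neg_f = sample_negatives(batch, ibm1, N, C, side="source")  # step 174
        # steps 176 and 178: update each direction using the other direction's
        # previous word embeddings so that the embeddings are made to agree
        new_fe = update_direction(theta_fe, theta_ef, batch, neg_e, W, alpha)
        new_ef = update_direction(theta_ef, theta_fe, batch, neg_f, W, alpha)
        theta_fe, theta_ef = new_fe, new_ef
    return theta_fe, theta_ef
```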
Step 194 includes: a step 200 of computing and storing a score sk, which will be described later, for all possible alignments (a1J,k) (k=1˜K) for the bilingual sentence pair (f1J, e1I) under processing; and a step 202 of selecting, as an alignment for the bilingual sentence pair under processing, the alignment having the maximum score sk stored at step 200 among all the alignments (a1J,k) (k=1˜K), labeling the bilingual sentence pair and updating the training data.
Step 200 includes: a step 210 of initializing the score sk to 1; a step 212 of computing the final score sk by selecting the source language word fj while successively changing the variable j from j=1 to J and, for each j, updating the score sk for the word fj and the word ea_{j} aligned to fj by alignment aj, taking advantage of the entire sequence of previous alignment results (a1j-1); and a step 214 of storing the score sk computed at step 212 as the score for the k-th alignment.
Step 212 includes: a step 220 of computing the score tRNN(aj|a1j-1, fj, ea_{j}); and a step 222 of updating the score sk by multiplying it by the score tRNN. A compact sketch of steps 210 to 222 is given below.
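The following sketch corresponds to steps 210 to 222 for one candidate alignment; `embed` and `rnn_step` are hypothetical helpers (`rnn_step` as sketched earlier) and are not part of the patent text.

```python
# Sketch of steps 210-222: the score s_k of one candidate alignment a_1..a_J is
# the product of per-word scores t_RNN, each computed while the hidden state
# carries the entire preceding alignment history a_1..a_{j-1}.
def alignment_score(f_words, e_words, alignment, embed, rnn_step):
    s_k = 1.0                                     # step 210: initialize s_k to 1
    y_prev = None
    for j, a_j in enumerate(alignment):           # j = 1..J in the text
        x_j = embed(f_words[j], e_words[a_j])     # embeddings of f_j and e_{a_j}
        y_prev, t_rnn = rnn_step(x_j, y_prev)     # step 220: t_RNN(a_j | a_1^{j-1}, f_j, e_{a_j})
        s_k *= t_rnn                              # step 222: multiply into s_k
    return s_k                                    # step 214: score of the k-th alignment
```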
The first embodiment is directed to an RNN-based model imposing two-directional agreement constraints. The present invention, however, is not limited to such a model having agreement constraints. A one-directional RNN model without agreement constraints may be used.
At steps 176 and 178 of
[Experiments]
We conducted experiments to evaluate the performance of the word alignment methods described in the embodiments above. The experiments covered Japanese-English word alignment on the BTEC corpus (Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. Toward a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversations in the Real World. In Proceedings of the 3rd International Conference on Language Resources and Evaluation, pages 147-152.) and French-English word alignment on the Hansards data set used in the 2003 NAACL shared task (Rada Mihalcea and Ted Pedersen. 2003. An Evaluation Exercise for Word Alignment. In Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pages 1-10). In addition, we evaluated translation performance on the Chinese-to-English translation task on the FBIS corpus, the IWSLT 2007 Japanese-to-English translation task, and the NTCIR Japanese-to-English patent translation task.
<Comparison>
We compared the RNN-based alignment models in accordance with the embodiments above with two baseline models. The first is the IBM model 4, and the second is an FFNN-based model having one hidden layer. The IBM model 4 was trained by the model sequence (15H53545: five iterations of the IBM model 1, followed by five iterations of the HMM model, and so on), which is the default setting of GIZA++ (IBM4), presented by Och and Ney (Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29:19-51.). For the FFNN-based model, the word embedding length M was set to 30, the number of units of the hidden layer |z1| to 100, and the window width to 5. Following the teachings of Yang et al. (Nan Yang, Shujie Liu, Mu Li, Ming Zhou, and Nenghai Yu. 2013. Word Alignment Modeling with Context Dependent Deep Neural Network. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 166-175), the FFNN-based model was trained by the supervised approach described above.
For the RNN-based models, the word embedding length M was set to 30, and the number of units of the hidden layer |yj| to 100. In the evaluation experiments, four RNN-based models, that is, RNNs, RNNs+c, RNNu and RNNu+c, were evaluated. Here, “s/u” denotes a supervised/unsupervised model, and “+c” indicates that the agreement constraint is imposed.
In training the models other than IBM4, the weights of each layer were first initialized. Specifically, for the weights L of the input layer (Lookup layer), word embeddings for both the source and target languages were preliminarily trained from the respective sides of the training data, and L was set to the obtained word embeddings. This is to avoid falling into local minima. The other weights were set to random values in the closed interval [−0.1, 0.1]. For the word embedding training, the RNNLM toolkit (http://www.fit.vutbr.cz/˜imikolov/) based on Mikolov et al. (Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. 2010. Recurrent Neural Network based Language Model. In Proceedings of 11th Annual Conference of the International Speech Communication Association, pages 1045-1048) was used with default settings. It is noted, however, that all words that occurred less than five times were collapsed into the special token <unk>. Next, each weight was optimized using mini-batch SGD. The batch size D was set to 100, the learning rate to 0.01, and the l2 regularization parameter to 0.1. The training was stopped after 50 epochs. The other parameters were as follows: W, N and C in unsupervised training were set to 100, 50 and 0.001, respectively, and the parameter α indicating the strength of the agreement constraint was set to 0.1.
In the translation tasks, we used the phrase-based SMT of Koehn et al. (Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics on Interactive Poster and Demonstration Sessions, pages 177-180). All the Japanese and Chinese sentences were segmented by ChaSen and the Stanford segmenter, respectively. In training, long sentences of over 40 words were filtered out. Using the SRILM toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing, a 5-gram language model was trained on the English side of each training data set for IWSLT and NTCIR, and a 5-gram language model was trained on the Xinhua portion of the English Gigaword Corpus for FBIS. The SMT weight parameters were tuned by MERT (Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167), using the development data.
<Word Alignment Results>
In
In
In BTEC, RNNu and RNNu+c significantly outperformed RNNs(I) and RNNs+c (I), respectively, while these performances were comparable in Hansards. This indicates that the unsupervised training of the embodiments above is effective when the precision of training data is low, for example, when the results of automatic alignment of training data were used as supervisory data.
<Machine Translation Results>
In NTCIR and FBIS, each alignment model was trained from randomly sampled 100K data, and a translation model was then trained from all the training data word-aligned using that alignment model. In addition, an SMT system based on the IBM4 model trained using all the training data (IBM4all) was evaluated. The significance test on translation performance was performed by the bootstrap method (Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395) at a 5% significance level. In
From the results of
<Training Data Size>
According to
As described above, by using an RNN-based model in accordance with the embodiments above, not only the result of the immediately preceding word alignment but also a long history of past word alignments can be used for word alignment. As a result, as the experiments show, word alignment using this model outperforms the conventional models that depend only on the immediately preceding word alignment. Further, this model can be trained in an unsupervised manner, with the resulting model having high performance. In addition, performance comparable to or higher than that of the conventional models can be achieved even when the training data available for training is small.
[Computer Implementation]
The word alignment model training apparatus and the word alignment apparatus can substantially be implemented by computer hardware and computer software cooperating with the computer hardware.
<Software Configuration>
The software configuration is as shown in
<Hardware Configuration>
Referring to
Referring to
A computer program causing computer system 330 to function as the word alignment model training apparatus and the word alignment apparatus in accordance with the above-described embodiments is stored in advance in a removable memory 364. After the removable memory 364 is attached to memory port 352 and a rewriting program of ROM 358 is activated, the program is transferred to and stored in ROM 358 or HDD 354. Alternatively, the program may be transferred to RAM 360 from another device on a network by communication through network I/F 344 and then written to ROM 358 or HDD 354. At the time of execution, the program is read from ROM 358 or HDD 354 and loaded to RAM 360 and executed by CPU 356.
The program stored in ROM 358 or HDD 354 includes a sequence of instructions including a plurality of instructions causing computer 340 to function as various functional units of the word alignment model training apparatus and the word alignment apparatus in accordance with the above-described embodiments. Some of the basic functions necessary to realize this operation may be dynamically provided at the time of execution by the operating system (OS) running on computer 340, by a third party program, or by various programming toolkits or a program library installed in computer 340. Therefore, the program may not necessarily include all of the functions necessary to realize the word alignment model training apparatus and the word alignment apparatus in accordance with the above-described embodiments. The program has only to include instructions to realize the functions of the above-described system by dynamically calling appropriate functions or appropriate program tools in a program toolkit from storage devices in computer 340 in a manner controlled to attain the desired results. Naturally, the program alone may provide all the necessary functions.
The operation of computer system 330 executing a computer program is well known and, therefore, description thereof will not be given here.
In the embodiments above, words are successively selected from the beginning of an English sentence, and the score of each alignment is computed accordingly. The present invention, however, is not limited to such embodiments. The order of selecting words may be arbitrary, and any order may be used provided that all words can be selected in a prescribed order. It is noted, however, that aligning the words of one language with the other language one by one, starting from the beginning of a sentence of the one language as in the embodiments above, is the simplest.
Further, in the embodiments above, a specific function is used in each layer of the recurrent neural network. The present invention, however, is not limited to such embodiments. By way of example, any function that can express a non-linear relation between two words may be used in the hidden layer. The same applies to the input layer and the output layer. In addition, though the output from the neural network is a score in the embodiments above, it may be the probability of two words being in a correct corresponding relation, as described above. A probability can be regarded as a type of score.
In the embodiments above, training of RNN-based neural network and word alignment are executed by the same computer. The present invention, however, is naturally not limited to such embodiments. By copying neural network parameters obtained by training to another computer and setting up the RNN-based neural network, word alignment can be executed by any computer.
The embodiments as have been described here are mere examples and should not be interpreted as restrictive. The scope of the present invention is determined by each of the claims with appropriate consideration of the written description of the embodiments and embraces modifications within the meaning of, and equivalent to, the languages in the claims.
The present invention can be used for specifying corresponding words in two sentences, for example, two sentences of different languages. Typically, the present invention can be used for forming training data for a translation model for statistical machine translation, a translation verifying apparatus for translators and translation proofreaders, an apparatus for comparing two documents, and so on.
Foreign Application Priority Data: 2014-045012 (JP, national), Mar. 2014.
Filing Document: PCT/JP2015/053825 (WO), filed Feb. 12, 2015.
Publishing Document: WO 2015/133238 A (WO), published Sep. 11, 2015.
Other Publications:
International Search Report for corresponding International Application No. PCT/JP2015/053825, dated Mar. 10, 2015.
Nan Yang et al., "Word Alignment Modeling with Context Dependent Deep Neural Network", Aug. 2013, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pp. 166-175.
Taisuke Sumii et al., "An Attempt to Apply a Recurrent Network to Machine Translation", 2007 Nendo Annual Conference of JSAI (Dai 21 Kai) Ronbunshu [CD-ROM], The Japanese Society for Artificial Intelligence, Jun. 18, 2007, ISSN 1347-9881, pp. 1-4.
Peter F. Brown et al., "The Mathematics of Statistical Machine Translation: Parameter Estimation", 1993, Computational Linguistics, 19(2): pp. 263-311.
Stephan Vogel et al., "HMM-based Word Alignment in Statistical Translation", 1996, Proceedings of the 16th International Conference on Computational Linguistics, pp. 836-841.
Chris Dyer et al., "Unsupervised Word Alignment with Arbitrary Features", 2011, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, pp. 409-419.
Chooi-Ling Goh et al., "Constraining a Generative Word Alignment Model with Discriminative Output", 2010, IEICE Transactions, 93-D(7): pp. 1976-1983.
Tomas Mikolov et al., "Recurrent Neural Network based Language Model", 2010, Proceedings of 11th Annual Conference of the International Speech Communication Association, pp. 1045-1048.
Franz Josef Och, "Minimum Error Rate Training in Statistical Machine Translation", 2003, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 160-167.
Publication: US 20170068665 A1 (US), Mar. 2017.