Syntax-based statistical translation model

Information

  • Patent Grant
  • 8214196
  • Patent Number
    8,214,196
  • Date Filed
    Wednesday, July 3, 2002
  • Date Issued
    Tuesday, July 3, 2012
Abstract
A statistical translation model (TM) may receive a parse tree in a source language as an input and separately output a string in a target language. The TM may perform channel operations on the parse tree using model parameters stored in probability tables. The channel operations may include reordering child nodes, inserting extra words at each node (e.g., NULL words), translating leaf words, and reading off leaf words to generate the string in the target language. The TM may assign a translation probability to the string in the target language.
Description
BACKGROUND

Machine translation (MT) concerns the automatic translation of natural language sentences from a first language (e.g., French) into another language (e.g., English). Systems that perform MT techniques are said to “decode” the source language into the target language.


One type of MT decoder is the statistical MT decoder. A statistical MT decoder that translates French sentences into English may include a language model (LM) that assigns a probability P(e) to any English string, a translation model (TM) that assigns a probability P(f|e) to any pair of English and French strings, and a decoder. The decoder may take a previously unseen sentence f and try to find the e that maximizes P(e|f), or equivalently maximizes P(e)·P(f|e).
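
A minimal sketch of this decision rule follows; lm_prob, tm_prob, and candidates are hypothetical stand-ins for P(e), P(f|e), and the candidate set searched by the decoder.

def decode(f, candidates, lm_prob, tm_prob):
    """Return the candidate e maximizing P(e) * P(f|e), which is
    equivalent to maximizing P(e|f)."""
    return max(candidates, key=lambda e: lm_prob(e) * tm_prob(f, e))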


A TM may not model structural or syntactic aspects of a language. Such a TM may perform adequately for a structurally similar language pair (e.g., English and French), but may not adequately model a language pair with very different word order conventions (e.g., English and Japanese).


SUMMARY

A statistical translation model (TM) may receive a parse tree in a source language as an input and separately output a string in a target language. The TM may perform channel operations on the parse tree using model parameters stored in probability tables. The channel operations may include reordering child nodes, inserting extra words at each node (e.g., NULL words), translating leaf words, and reading off leaf words to generate the string in the target language. The TM may assign a translation probability to the string in the target language.


The reordering operation may be based on a probability corresponding to a sequence of the child node labels. The insertion operation may determine which extra word to insert and an insert position relative to the node.


The TM may be trained using an Expectation Maximization (EM) algorithm.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a statistical translation model (TM) system.



FIG. 2 illustrates channel operations performed on an input parse tree.



FIG. 3 is a flowchart describing channel operations performed by the TM.



FIG. 4 illustrates tables including model parameters used in the channel operations.



FIG. 5 is a graph structure for training the translation model using an Expectation Maximization (EM) algorithm.





DETAILED DESCRIPTION


FIG. 1 illustrates a statistical translation model (TM) system 100. The system may be based on a noisy channel model. A syntactic parser 110 may generate a parse tree from an input sentence. A parse tree 200, such as that shown in FIG. 2, includes a number of nodes, including parent nodes and child nodes. A child node may be a parent to another node (e.g., the VB2 node to the TO and NN nodes). The parent and child nodes have labels corresponding to a part-of-speech (POS) tag for the word or phrase corresponding to the node (e.g., verb (VB), personal pronoun (PRP), noun (NN), etc.). Leafs 215 of the tree include words in the input string.


The channel 105 accepts the parse tree as an input and performs operations on each node of the parse tree. As shown in FIG. 2, the operations may include reordering child nodes, inserting extra words at each node, translating leaf words, and reading off leafs to generate a string in the target language (e.g., Japanese).
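
For concreteness, the code sketches below assume a minimal tree representation along the following lines. The class and field names are illustrative rather than part of the patent, and the leaf words w1 through w5 are hypothetical placeholders; the child-label sequences mirror those discussed for FIG. 2 (PRP-VB1-VB2, VB-TO, TO-NN).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A parse-tree node: a POS-style label, child nodes, and (for leaves)
    the source-language word."""
    label: str                        # e.g. "VB", "PRP", "TO", "NN"
    children: List["Node"] = field(default_factory=list)
    word: Optional[str] = None        # set only on leaf nodes

    def is_leaf(self) -> bool:
        return not self.children

tree = Node("VB", [
    Node("PRP", word="w1"),
    Node("VB1", word="w2"),
    Node("VB2", [
        Node("VB", word="w3"),
        Node("TO", [Node("TO", word="w4"), Node("NN", word="w5")]),
    ]),
])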


The reorder operation may model translation between languages with different word orders, such as SVO (Subject-Verb-Object) languages (e.g., English or Chinese) and SOV (Subject-Object-Verb) languages (e.g., Japanese or Turkish). The word-insertion operation may capture linguistic differences in specifying syntactic cases. For example, English and French use structural position to specify case, while Japanese and Korean use case-marker particles.



FIG. 3 is a flowchart describing a channel operation 300 according to an embodiment. A string in a source language (e.g., English) may be input to the syntactic parser 110 (block 305). The syntactic parser 110 parses the input string into a parse tree 200 (block 310).


Child nodes on each internal node are stochastically reordered (block 315). A node with N children has N! possible reorderings. The probability of taking a specific reordering may be given by an r-table 405, as shown in FIG. 4. The reordering may be influenced only by the sequence of child node labels. In FIG. 2, the top VB node 205 has a child sequence PRP-VB1-VB2. The probability of reordering it into PRP-VB2-VB1 is 0.723 (from the second row in the r-table 405). The sequence VB-TO may be reordered into TO-VB, and TO-NN into NN-TO. Therefore, the probability of the second tree 220 is 0.723·0.749·0.893=0.484.
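
A sketch of the reorder lookup, under the assumption that the r-table is a nested dictionary keyed by the original and reordered child-label sequences; the layout is illustrative rather than the format of FIG. 4, and the probabilities are the ones quoted in this example.

# Illustrative r-table fragment: original sequence -> reordered sequence -> probability.
r_table = {
    ("PRP", "VB1", "VB2"): {("PRP", "VB2", "VB1"): 0.723},
    ("VB", "TO"):          {("TO", "VB"): 0.749},
    ("TO", "NN"):          {("NN", "TO"): 0.893},
}

def reorder_prob(original, reordered):
    """Probability of reordering one child-label sequence into another."""
    return r_table.get(tuple(original), {}).get(tuple(reordered), 0.0)

p_second_tree = (reorder_prob(("PRP", "VB1", "VB2"), ("PRP", "VB2", "VB1"))
                 * reorder_prob(("VB", "TO"), ("TO", "VB"))
                 * reorder_prob(("TO", "NN"), ("NN", "TO")))
# 0.723 * 0.749 * 0.893 ~= 0.484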


Next, an extra word may be stochastically inserted at each node (block 320). The word may be inserted either to the left of the node, to the right of the node, or nowhere. The word may be a NULL word 230. In a TM, a NULL word may be an invisible word in the input sentence that generates output words distributed into random positions. NULL words may be function words, such as “ha” and “no” in Japanese. The position may be decided on the basis of the nodes of the input parse tree.


The insertion probability may be determined by an n-table. The n-table may be split into two tables: a table 410 for insert positions and a table 415 for words to be inserted. The node's label and its parent's label may be used to index the table for insert positions. For example, the PRP node has parent VB 205, thus (parent=VB, node=PRP) is the conditioning index. Using this label pair captures, for example, the regularity of inserting case-marker particles. In an embodiment, no conditioning variable is used when deciding which word to insert. That is, a function word like “ha” is just as likely to be inserted in one place as any other.


In FIG. 2, four NULL words 230 (“ha”, “no”, “ga”, and “desu”) were inserted to create a third tree 225. The top VB node, two TO nodes, and the NN node inserted nothing. Therefore, the probability of obtaining the third tree given the second tree is (0.652·0.219)·(0.252·0.094)·(0.252·0.062)·(0.252·0.0007)·0.735·0.709·0.900·0.800=3.498e−9. A translate operation may be applied to each leaf (block 325). In an embodiment, this operation is dependent only on the word itself and no context is consulted. A t-table 420 may specify the probability for all cases. For the translations shown in the fourth tree 235 of FIG. 2, the probability of the translate operation is 0.952·0.900·0.038·0.333·1.000=0.0108.
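
A sketch of the corresponding insert and translate lookups, assuming the split n-table and the t-table are plain dictionaries. The particular entries and numbers below are placeholders rather than trained parameters, and the assignment of the quoted word probabilities to specific words is illustrative.

# Position half of the n-table, keyed by (parent label, node label).
n_position = {("VB", "PRP"): {"none": 0.30, "left": 0.05, "right": 0.65}}
# Word half of the n-table, unconditioned.
n_word = {"ha": 0.219, "no": 0.094, "ga": 0.062, "desu": 0.0007}
# t-table, keyed by source word; None stands for a NULL translation.
t_table = {"w2": {"x2": 0.95, None: 0.01}}   # hypothetical entry

def insert_prob(parent_label, node_label, position, word=None):
    """Probability of one insert decision: the position (none/left/right),
    times, for left/right, the unconditioned choice of the inserted word."""
    p_pos = n_position.get((parent_label, node_label), {}).get(position, 0.0)
    if position == "none":
        return p_pos
    return p_pos * n_word.get(word, 0.0)

def translate_prob(source_word, target_word):
    """Probability of translating one leaf word (target_word may be None)."""
    return t_table.get(source_word, {}).get(target_word, 0.0)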


The total probability of the reorder, insert, and translate operations may then be calculated. In this example, the probability is 0.484·3.498e−9·0.0108=1.828e−11. Note that there are many other combinations of such operations that yield the same Japanese sentence. Therefore, the probability of the Japanese sentence given the English parse tree is the sum of all these probabilities.
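
Expressed as arithmetic, using the per-operation factors computed above:

p_reorder = 0.484
p_insert = 3.498e-9
p_translate = 0.0108
p_one_derivation = p_reorder * p_insert * p_translate   # ~1.828e-11
# P(Japanese sentence | English parse tree) is the sum of such products over
# every combination of channel operations that yields the same sentence.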


The channel operations may be modeled mathematically. Assume that an English parse tree is transformed into a French sentence f. Let the English parse tree ε consist of nodes ε1, ε2, . . . , εn, and let the output French sentence consist of French words f1, f2, . . . , fm. Three random variables, N, R, and T, are channel operations applied to each node. Insertion N is an operation that inserts a French word just before or after the node. The insertion can be none, left, or right. Insertion N may also decide what French word to insert. Reorder R is an operation that changes the order of the children of the node. If a node has three children, there are 3!=6 ways to reorder them. This operation may apply only to non-terminal nodes in the tree. Translation T is an operation that translates a terminal English leaf word into a French word. This operation applies only to terminal nodes. Note that an English word can be translated into a French NULL word.


The notation θ=<ν,ρ,τ> stands for a set of values of <N, R, T>. θi=<νi, ρi, τi> is the set of values of the random variables associated with εi, and θ=θ1, θ2, . . . , θn is the set of all random variables associated with a parse tree ε=ε1, ε2, . . . , εn.


The probability of getting a French sentence f given an English parse tree ε is







P(f|ε) = Σθ:Str(θ(ε))=f P(θ|ε)







where Str(θ(ε)) is the sequence of leaf words of a tree transformed by θ from ε.


The probability of having a particular set of values of random variables in a parse tree is

P(θ|ε)=P12, . . . , θn12, . . . , εn)










P


(

θ
|
ɛ

)


=

P
(


θ
1

,

θ
2

,





,


θ
n

|

ɛ
1


,

ɛ
2

,









ɛ
n


)









=




i
=
1

n







P
(



θ
i

|

θ
1


,

θ
2

,





,

θ

i
-
1


,

ɛ
1

,

ɛ
2

,





,

ɛ
n


)









Assuming a transform operation is independent from other transform operations, and the random variables of each node are determined only by the node itself, then

P(θ|ε)=P12, . . . , θn12, . . . , εn)










P


(

θ
|
ɛ

)


=

P
(


θ
1

,

θ
2

,





,


θ
n

|

ɛ
1


,

ɛ
2

,





,


ɛ
n

)









=




i
=
1

n







P
(


θ
i



ɛ
i


)









The random variables θi=<νi, ρi, τi> are assumed to be independent of each other. It is also assumed that they are dependent on particular features of the node εi. Then,

P(θi|εi) = P(νi, ρi, τi|εi)
= P(νi|εi)P(ρi|εi)P(τi|εi)
= P(νi|N(εi))P(ρi|R(εi))P(τi|T(εi))
= n(νi|N(εi))r(ρi|R(εi))t(τi|T(εi))

where N(ε), R(ε), and T(ε) extract the features of the node that are relevant to N, R, and T, respectively. For example, the parent node label and the node label were used for N, and the syntactic category sequence of children was used for R. The last line in the above formula introduces a change in notation, meaning that those probabilities are the model parameters n(ν|N), r(ρ|R), and t(τ|T), where N, R, and T are the possible values of N(ε), R(ε), and T(ε), respectively.
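
Read as code, the last line is a product of three table lookups per node. A minimal sketch, assuming the tables are flat dictionaries keyed by (value, feature) pairs (a different layout from the table fragments sketched earlier); for a terminal node the reorder factor, and for a non-terminal node the translate factor, can be represented by an entry equal to 1.

def node_prob(nu, rho, tau, N_feat, R_feat, T_feat, n_table, r_table, t_table):
    """P(theta_i | eps_i) = n(nu|N(eps_i)) * r(rho|R(eps_i)) * t(tau|T(eps_i))."""
    return (n_table.get((nu, N_feat), 0.0)
            * r_table.get((rho, R_feat), 0.0)
            * t_table.get((tau, T_feat), 0.0))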


In summary, the probability of getting a French sentence f given an English parse tree ε is










P(f|ε) = Σθ:Str(θ(ε))=f P(θ|ε)
= Σθ:Str(θ(ε))=f Πi=1..n n(νi|N(εi)) r(ρi|R(εi)) t(τi|T(εi))













where ε=ε1, ε2, . . . , εn and θ=θ1, θ2, . . . , θn=<ν1, ρ1, τ1>,<ν2, ρ2, τ2>, . . . , <νn, ρn, τn>.


The model parameters n(ν|N), r(ρ|R), and t(τ|T), that is, the probabilities P(ν|N), P(ρ|R), and P(τ|T), decide the behavior of the translation model. These probabilities may be estimated from a training corpus.


An Expectation Maximization (EM) algorithm may be used to estimate the model parameters (see, e.g., A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm). The algorithm iteratively updates the model parameters to maximize the likelihood of the training corpus. First, the model parameters are initialized. A uniform distribution may be used, or a distribution may be taken from other models. For each iteration, the number of events is counted and weighted by the probabilities of the events. The probabilities of events are calculated from the current model parameters. The model parameters are re-estimated based on the counts, and used for the next iteration. In this case, an event is a pair of a value of a random variable (such as ν, ρ, or τ) and a feature value (such as N, R, or T). A separate counter is used for each event. Therefore, the same number of counters c(ν,N), c(ρ,R), and c(τ,T), as the number of entries in the probability tables, n(ν|N), r(ρ|R), and t(τ|T), are needed.


An exemplary training procedure is the following:


1. Initialize all probability tables: n(ν|N), r(ρ|R), and t(τ|T).


2. Reset all counters: c(ν,N), c(ρ,R), and c(τ,T).


3. For each pair <ε,f> in the training corpus,

    • For all θ, such that f=Str(θ(ε)),
      • Let cnt=P(θ|ε)/Σθ:Str(θ(ε))=f P(θ|ε)
      • For i=1 . . . n,

        c(νi,N(εi))+=cnt
        c(ρi,R(εi))+=cnt
        c(τi,T(εi))+=cnt


4. For each <ν,N>, <ρ,R>, and <τ,T>,

n(ν|N)=c(ν,N)/Σν c(ν,N)
r(ρ|R)=c(ρ,R)/Σρ c(ρ,R)
t(τ|T)=c(τ,T)/Στ c(τ,T)


5. Repeat steps 2-4 for several iterations.
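
A minimal sketch of one iteration of this procedure, assuming trees small enough that every derivation θ can be enumerated explicitly. Here enumerate_derivations is a hypothetical helper yielding, for each θ with f=Str(θ(ε)), the per-node values and features, and the three tables are flat dictionaries keyed by (value, feature) pairs. The graph-based computation described below avoids this explicit enumeration.

from collections import defaultdict

def em_iteration(corpus, n_table, r_table, t_table, enumerate_derivations):
    """One EM iteration over `corpus`, a list of (parse_tree, target_string) pairs.
    enumerate_derivations(eps, f) yields, for each derivation theta with
    Str(theta(eps)) = f, a list of per-node tuples (nu, N, rho, R, tau, T)."""
    c_n, c_r, c_t = defaultdict(float), defaultdict(float), defaultdict(float)

    for eps, f in corpus:
        derivations = list(enumerate_derivations(eps, f))
        # P(theta | eps): product of the per-node model parameters.
        probs = []
        for nodes in derivations:
            p = 1.0
            for nu, Nf, rho, Rf, tau, Tf in nodes:
                p *= (n_table.get((nu, Nf), 0.0)
                      * r_table.get((rho, Rf), 0.0)
                      * t_table.get((tau, Tf), 0.0))
            probs.append(p)
        total = sum(probs)
        if total == 0.0:
            continue
        # Step 3: each event is weighted by P(theta|eps) / sum over all theta.
        for nodes, p in zip(derivations, probs):
            cnt = p / total
            if cnt == 0.0:
                continue
            for nu, Nf, rho, Rf, tau, Tf in nodes:
                c_n[(nu, Nf)] += cnt
                c_r[(rho, Rf)] += cnt
                c_t[(tau, Tf)] += cnt

    # Step 4: re-estimate each table by normalizing counts within each feature.
    def normalize(counts):
        totals = defaultdict(float)
        for (value, feature), c in counts.items():
            totals[feature] += c
        return {(value, feature): c / totals[feature]
                for (value, feature), c in counts.items() if totals[feature] > 0.0}

    return normalize(c_n), normalize(c_r), normalize(c_t)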


An EM algorithm for the translation model may implement a graph structure 500 of a pair <ε,f>, as shown in FIG. 5. A graph node is either a major-node 505 or a subnode. A major-node shows a pairing of a subtree of ε and a substring of f. A subnode shows a selection of a value (ν,ρ,τ) for the subtree-substring pair.


Let fkl=fk . . . fk+(l−1) be a substring of f from the word fk with length l. A subtree εi is the subtree of ε below the node εi. Assume that the subtree ε1 is ε.


A major-node ν(εi,fkl) is a pair of a subtree εi and a substring fkl. The root of the graph is ν(ε1,f1L), where L is the length of f. Each major-node connects to several ν-subnodes 510 v(ν;εi,fkl), showing which value of ν is selected. The arc between ν(εi,fkl) and v(ν;εi,fkl) has weight P(ν|εi).


A ν-subnode v(ν;εi,fkl) connects to a final-node with weight P(τ|εi) if εi is a terminal node in ε. If εi is a non-terminal node, the ν-subnode connects to several ρ-subnodes v(ρ;ν;εi,fkl) 515, showing a selection of a value ρ. The weight of the arc is P(ρ|εi).


A ρ-subnode 515 is then connected to π-subnodes v(π;ρ;ν;εi,fkl) 520. The partition variable, π, shows a particular way of partitioning fkl.


A π-subnode v(π;ρ;ν;εi,fkl) is then connected to major-nodes which correspond to the children of εi and the substrings of fkl decided by <ν,ρ,π>. A major-node can be connected from different π-subnodes. The arc weights between π-subnodes and major-nodes are always 1.0.


This graph structure makes it easy to obtain P(θ|ε) for a particular θ and Σθ:Str(θ(ε))=f P(θ|ε). A trace starting from the graph root, selecting one of the arcs from major-nodes, ν-subnodes, and ρ-subnodes, and all the arcs from the π-subnodes, corresponds to a particular θ, and the product of the weights on the trace corresponds to P(θ|ε). Note that a trace forms a tree, making branches at the π-subnodes.


We define an alpha probability and a beta probability for each major-node, in analogy with the measures used in the inside-outside algorithm for probabilistic context free grammars. The alpha probability (outside probability) is a path probability from the graph root to the node and the side branches of the node. The beta probability (inside probability) is a path probability below the node.


The alpha probability for the graph root, α(ε1,f1L), is 1.0. For other major-nodes,

α(ν) = Σs∈Parentπ(ν) α(ParentM(s))·{P(ν|ε)P(ρ|ε)·Πν′∈Childπ(s),ν′≠ν β(ν′)}


where Parentπ(ν) is the set of π-subnodes which are immediate parents of ν, Childπ(s) is the set of major-nodes which are children of the π-subnode s, and ParentM(s) is the parent major-node of the π-subnode s (skipping the ρ-subnodes and ν-subnodes). P(ν|ε) and P(ρ|ε) are the arc weights from ParentM(s) to s.


The beta probability is defined as

β(ν) = β(εi,fkl)
= P(τ|εi)  if εi is a terminal
= Σν P(ν|εi) Σρ P(ρ|εi) Σπ Πj β(εj,fk′l′)  if εi is a non-terminal


where εj is a child of εi and fk′l′ is a proper partition of fkl.
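
A sketch of the inside (beta) computation over subtree/substring pairs, reusing the Node class and the r-table and t-table conventions sketched earlier. For brevity it ignores the insertion operation (as if the "none" choice always had probability 1), so it is a simplification of the recursion above rather than the patent's full graph-based computation.

from itertools import combinations_with_replacement, permutations

def make_beta(f, r_table, t_table):
    """Return beta(node, k, l): the inside probability that the subtree rooted
    at `node` yields the target substring f[k:k+l] (insertions ignored)."""
    memo = {}

    def beta(node, k, l):
        key = (id(node), k, l)
        if key in memo:
            return memo[key]
        if node.is_leaf():
            # A leaf translates into exactly one target word, or into NULL (l == 0).
            if l == 1:
                p = t_table.get(node.word, {}).get(f[k], 0.0)
            elif l == 0:
                p = t_table.get(node.word, {}).get(None, 0.0)
            else:
                p = 0.0
        else:
            labels = tuple(c.label for c in node.children)
            p = 0.0
            # Sum over reorderings rho of the children ...
            for order in permutations(range(len(node.children))):
                reordered = tuple(labels[i] for i in order)
                p_rho = r_table.get(labels, {}).get(reordered, 0.0)
                if p_rho == 0.0:
                    continue
                # ... and over partitions pi of f[k:k+l] among the reordered children.
                for cuts in combinations_with_replacement(range(l + 1), len(order) - 1):
                    bounds = (0,) + cuts + (l,)
                    term = p_rho
                    for child_idx, a, b in zip(order, bounds, bounds[1:]):
                        term *= beta(node.children[child_idx], k + a, b - a)
                        if term == 0.0:
                            break
                    p += term
        memo[key] = p
        return p

    return beta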


The counts c(ν, N), c(ρ, R), and c(τ, T) for each pair <ε, f> are,










c(ν, N) = Σk,l Σεi:N(εi)=N {α(εi, fkl) P(ν|εi) Σρ P(ρ|εi) Σπ Πj β(εj, fk′l′)} / β(ε1, f1L)

c(ρ, R) = Σk,l Σεi:R(εi)=R {α(εi, fkl) P(ρ|εi) Σν P(ν|εi) Σπ Πj β(εj, fk′l′)} / β(ε1, f1L)

c(τ, T) = Σk,l Σεi:T(εi)=T α(εi, fkl) P(τ|εi) / β(ε1, f1L)












From these definitions,

Σθ:Str(θ(ε))=fP(Θ|ε)=β(ε1,f1L).


The counts c(ν, N), c(ρ, R), and c(τ, T) for each pair <ε, f> are also shown in FIG. 5. Those formulae replace step 3 in the training procedure described above for each training pair, and these counts are used in step 4.


The graph structure is generated by expanding the root node ν(ε1,f1L). The beta probability for each node is first calculated bottom-up, then the alpha probability for each node is calculated top-down. Once the alpha and beta probabilities for each node are obtained, the counts are calculated as above and used for updating the parameters.


The complexity of this training algorithm is O(n³|ν||ρ||π|). The cube comes from the number of parse tree nodes (n) and the number of possible French substrings (n²).


In an experiment, 2121 translation sentence pairs were extracted from a Japanese-English dictionary. These sentences were mostly short ones. The average sentence length was 6.9 for English and 9.7 for Japanese. However, many rare words were used, which made the task difficult. The vocabulary size was 3463 tokens for English, and 3983 tokens for Japanese, with 2029 tokens for English and 2507 tokens for Japanese occurring only once in the corpus.


A POS tagger (described in E. Brill, Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Linguistics, 21(4), 1995) and a parser (described in M. Collins. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, 1999.) were used to obtain parse trees for the English side of the corpus. The output of the parser was modified in the following way. First, to reduce the number of parameters in the model, each node was re-labeled with the POS of the node's head word, and some POS labels were collapsed. For example, labels for different verb endings (such as VBD for -ed and VBG for -ing) were changed to the same label VB. There were then 30 different node labels, and 474 unique child label sequences.
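
A sketch of the relabeling step, assuming a hypothetical head_pos(node) helper that returns the POS tag of a node's head word; the collapse map shows only the verb-ending merges named above, with the remaining merges omitted.

COLLAPSE = {"VBD": "VB", "VBG": "VB"}   # e.g. -ed and -ing verb labels -> VB

def relabel(node, head_pos):
    """Re-label every node with the (possibly collapsed) POS of its head word."""
    pos = head_pos(node)                 # head_pos is a hypothetical helper
    node.label = COLLAPSE.get(pos, pos)
    for child in node.children:
        relabel(child, head_pos)
    return node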


Second, a subtree was flattened if the node's head word was the same as the parent's head word. For example, (NN1 (VB NN2)) was flattened to (NN1 VB NN2) if the VB was a head word for both NN1 and NN2. This flattening was motivated by various word orders in different languages. An English SVO structure is translated into SOV in Japanese, or into VSO in Arabic. These differences are easily modeled by the flattened subtree (NN1 VB NN2), rather than (NN1 (VB NN2)).
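
A sketch of this flattening step, assuming a hypothetical head_word(node) helper that returns a node's head word.

def flatten(node, head_word):
    """Splice a child subtree into its parent when both share the same head
    word, so that, e.g., (NN1 (VB NN2)) becomes (NN1 VB NN2)."""
    new_children = []
    for child in node.children:
        flatten(child, head_word)
        if child.children and head_word(child) == head_word(node):
            new_children.extend(child.children)   # lift the child's children up
        else:
            new_children.append(child)
    node.children = new_children
    return node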


The training procedure resulted in tables of estimated model parameters. FIG. 4 shows part of those parameters obtained by the training above.


A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the following claims.

Claims
  • 1. A method for translating natural languages using a statistical translation system, the method comprising: parsing a first string in a first language into a parse tree using a statistical parser included in the statistical machine translation system, the parse tree including a plurality of nodes, one or more of said nodes including one or more leafs, each leaf including a first word in the first language, the nodes including child nodes having labels; determining a plurality of possible reorderings of one or more of said child nodes including one or more of the leafs using the statistical translation system, the reordering performed in response to a probability corresponding to a sequence of the child node labels; determining a probability between 0.0000% and 100.0000%, non-inclusive, of the possible reorderings by the statistical translation system; determining a plurality of possible insertions of one or more words at one or more of said nodes using the statistical translation system; determining a probability between 0.0000% and 100.0000%, non-inclusive, of the possible insertions of one or more words at one or more of said nodes by the statistical translation system; translating the first word at each leaf into a second word corresponding to a possible translation in a second language using the statistical translation system; and determining a total probability between 0.0000% and 100.0000%, non-inclusive, based on the reordering, the inserting, and the translating by the statistical translation system.
  • 2. The method of claim 1, wherein said translating comprises translating the first word at each leaf into a second word in the second language in response to a probability of a correct translation.
  • 3. The method of claim 1, wherein said inserting comprises inserting one of a plurality of NULL words at one or more of said nodes.
  • 4. The method of claim 1, wherein said inserting comprises inserting the word at an insert position relative to a node.
  • 5. The method of claim 4, wherein the insert position comprises one of a left position, a right position, and no position.
  • 6. The method of claim 1, wherein the parse tree includes one or more parent nodes and a plurality of child nodes associated with the one or more parent nodes, the parent nodes having parent node labels and the child nodes having child node labels.
  • 7. The method of claim 6, wherein said inserting comprises inserting one of the plurality of words at a child node in response to the child node label and the parent node label.
  • 8. The method of claim 1, further comprising: generating a second string including the second word at each leaf.
  • 9. The method of claim 8, further comprising: assigning a translation probability to the second string.
  • 10. An apparatus for translating natural languages, the apparatus comprising: a reordering module operative to determine a plurality of possible reorderings of nodes in a parse tree, said parse tree including the plurality of possible nodes, one or more of said nodes including a leaf having a first word in a first language, the parse tree including a plurality of parent nodes having labels, each parent node including one or more child nodes having a label, the reordering module including a reorder table having a reordering probability associated with reordering a first child node sequence into a second child node sequence; an insertion module operative to determine a plurality of possible insertions of an additional word at one or more of said nodes and to determine a probability between 0.0000% and 100.0000%, non-inclusive, of the possible insertions of the additional word; a translation module operative to translate the first word at each leaf into a second word corresponding to a possible translation in a second language; a probability module to determine a plurality of possible reorderings of a probability between 0.0000% and 100.0000%, non-inclusive, of said plurality of possible reorderings of one or more of said nodes and to determine a total probability between 0.0000% and 100.0000%, non-inclusive, based on the reorder, the insertion, and the translation; and a statistical translation system operative to execute the reordering module, the insertion module, the translation module, and the probability module to effectuate functionalities attributed respectively thereto.
  • 11. The apparatus of claim 10, wherein the translation module is further operative to generate an output string including the second word at each leaf.
  • 12. The apparatus of claim 11, wherein the translation module is operative to assign a translation probability to the output string.
  • 13. The apparatus of claim 11, further comprising a training module operative to receive a plurality of translation sentence pairs and train the apparatus using said translation pairs and an Expectation Maximization (EM) algorithm.
  • 14. The apparatus of claim 10, wherein the insertion module includes an insertion table including the probability associated with inserting the additional word in a position relative to one of the child nodes.
  • 15. The apparatus of claim 14, wherein the insertion probability is associated with a label pair including the label of said one or more child node and the label of the parent node associated with said child node.
  • 16. The apparatus of claim 10, wherein the insertion module includes an insertion table including an insertion probability associated with inserting one of a plurality of additional words.
  • 17. The apparatus of claim 16, wherein the additional word comprises a NULL word.
  • 18. An article comprising a non-transitory machine readable medium including machine-executable instructions, the instructions operative to cause a machine to: parse a first string in a first language into a parse tree, the parse tree including a plurality of nodes, one or more of said nodes including one or more leafs, each leaf including a first word in the first language, the nodes including child nodes having labels; determine a plurality of possible reorderings of one or more of said child nodes including one or more of the leafs, the reordering performed in response to a probability corresponding to a sequence of the child node labels; determine a probability between 0.0000% and 100.0000%, non-inclusive, of the plurality of possible reorderings; determine a plurality of possible insertions of one or more words at one or more of said nodes; determine a probability between 0.0000% and 100.0000%, non-inclusive, of the plurality of possible insertions at one or more of said nodes; translate the first word at each leaf into a second word corresponding to a possible translation in a second language; and determine a total probability between 0.0000% and 100.0000%, non-inclusive, based on the reordering, the inserting, and the translating, wherein the nodes include child nodes having labels, and wherein said reordering comprises reordering one or more of said child nodes in response to a probability corresponding to a sequence of the child node labels.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application Ser. No. 60/302,915, filed on Jul. 3, 2001.

ORIGIN OF INVENTION

The research and development described in this application were supported by DARPA-ITO under grant number N66001-00-1-8914. The U.S. Government may have certain rights in the claimed inventions.

Related Publications (1)
Number Date Country
20030023423 A1 Jan 2003 US
Provisional Applications (1)
Number Date Country
60302915 Jul 2001 US