System and method for capitalizing machine translated text

Information

  • Patent Grant
  • Patent Number
    8,886,518
  • Date Filed
    Monday, August 7, 2006
  • Date Issued
    Tuesday, November 11, 2014
Abstract
A system and method for capitalizing translated text is provided. A capitalized source text is automatically translated to a target text. The target text is capitalized according to information in the capitalized source text.
Description
BACKGROUND

1. Field of the Art


The present invention relates generally to machine translation, and more particularly to capitalizing machine translated text.


2. Description of Related Art


Capitalization is the process of recovering case information for texts in lowercase. Generally, capitalization improves the legibility of texts but does not affect word choice or order. In natural language processing, a good capitalization model has been shown to be useful for named entity recognition, automatic content extraction, speech recognition, modern word processors, and automatic translation systems (sometimes referred to as machine translation or MT systems). Capitalization of output from an automatic translation system improves the comprehension of the automatically translated text in a target language.


Capitalization of automatically translated text may be characterized as a sequence labeling process. An input to such a labeling process is a lowercase sentence. An output is a capitalization tag sequence. Unfortunately, associating capitalization tags with lowercase words can result in capitalization ambiguities (i.e., each lowercase word can have more than one tag). For example, the lowercase word "mt" might surface as "mt," "Mt," or "MT" depending on context.


One solution to resolving capitalization ambiguities in automatically translated text is a 1-gram tagger model, where the case of a word is estimated from a target language corpus with case information. Other solutions treat capitalization as a lexical ambiguity resolution problem. Still other solutions include applying a maximum entropy Markov model (MEMM) and/or combining features of words, cases, and context (i.e., tag transitions) of the target language.


These solutions are monolingual because they are estimated only from the target (monolingual) text. Unfortunately, such monolingual solutions may not always perform well on badly translated text and/or on source text whose capitalization reflects special usage.


SUMMARY

The present invention provides a method for capitalizing translated text. An exemplary method according to one embodiment includes automatically translating a capitalized source text to a target text, and capitalizing the target text according to the capitalized source text.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an exemplary architecture for an exemplary machine translation system;



FIG. 2 illustrates a block diagram showing exemplary modules comprising the automatic translation server;



FIG. 3 illustrates a block diagram showing exemplary components associated with the capitalizer module;



FIG. 4 illustrates a schematic diagram showing an exemplary alignment of two sentences;



FIG. 5 illustrates a block diagram showing an exemplary capitalization feature component; and



FIG. 6 is a flow diagram illustrating a process for capitalizing translated text.





DETAILED DESCRIPTION

Various embodiments include translating a capitalized source sentence in a source language to a lowercase sentence in a target language using an automatic translation system, and capitalizing the lowercase sentence according to information in the capitalized source sentence. The automatic translation system may generate a set of possible capitalized sentences for the lowercase sentence. In some embodiments, the automatic translation system parses the capitalized input sentence and the capitalized target sentence into phrases and aligns the phrases to provide a phrase alignment. The automatic translation system may use the capitalization information from the capitalized input sentence in the source language, along with monolingual capitalization models based on the target lowercase sentence and, optionally, the phrase alignment, to find a best capitalized sentence. The best capitalized sentence may be determined from a combination of a set of model features that include information from the input sentence in the source language.


Turning now to FIG. 1, an exemplary architecture for an automatic machine translation system 100 is illustrated. The machine translation system 100 comprises an automatic translation server 106, a network 104, and a device 102. The device 102 may comprise any type of user or other device, such as a laptop or desktop computer, a personal digital assistant (PDA), a cellular telephone, and so forth.


According to an exemplary embodiment, the device 102 is configured to communicate a capitalized input sentence, designated by F in FIG. 1, in a source language over the network 104 to the automatic translation server 106. The network 104 may comprise any type of network, such as a wide area network, a local area network, a peer to peer network, and so forth. According to an alternative embodiment, the device 102 communicates directly with the automatic translation server 106, rather than via the network 104.


The automatic translation server 106 is configured to receive the capitalized input sentence (F), translate the capitalized input sentence (F) from the source language to a target language, and return a best capitalized sentence, designated as E* in FIG. 1, in the target language over the network 104 to the device 102.


A sentence may comprise a string of characters representing units of speech (e.g., words, phrases, letters, symbols, punctuation, and the like) in a natural language. The sentence may be determined, for example, by rules of grammar. In various embodiments, the sentence comprises a character string of arbitrary length selected for processing by an automatic translation system.



FIG. 2 is a block diagram illustrating exemplary modules comprising the automatic translation server 106 of FIG. 1. The exemplary automatic translation server 106 comprises a case remover module 202, an automatic translator module 204, and a capitalizer module 206.


The case remover module 202 receives the capitalized input sentence (F) and generates a lowercase sentence, designated by ƒ in FIG. 2, in the source language (i.e., the lowercase source sentence). According to an exemplary embodiment, the case remover module 202 generates the lowercase source sentence by replacing all uppercase characters in the capitalized input sentence (F) with a corresponding lowercase character according to a lookup table.
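
The character-level mapping performed by the case remover can be sketched in a few lines. Below is a minimal Python illustration assuming a simple A-Z lookup table as described; the function name and table are illustrative, not part of the patent.

```python
def remove_case(capitalized_input: str) -> str:
    """Sketch of the case remover module: replace uppercase characters
    with their lowercase counterparts via an explicit lookup table
    (ASCII A-Z only here; str.lower() would generalize to other scripts)."""
    lookup = {chr(c): chr(c + 32) for c in range(ord("A"), ord("Z") + 1)}
    return "".join(lookup.get(ch, ch) for ch in capitalized_input)

print(remove_case("Cliquez OK"))  # -> "cliquez ok"
```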


The automatic translator module 204 is configured to receive the lowercase source sentence (e.g., ƒ) and translate the lowercase source sentence from the source language to a lowercase sentence, designated by “e” in FIG. 2, in the target language (i.e., the lowercase target sentence). The translation is performed by a phrase based statistical automatic translation system, according to an exemplary embodiment. However, according to alternative embodiments, automatic translation may comprise a dictionary based automatic translation system, a syntax based automatic translation system, or any combination of automatic translation systems.


The capitalizer module 206 is configured to receive the capitalized input sentence F in the source language, along with the lowercase target sentence (e.g., e) from the automatic translator module 204, and the lowercase source sentence (e.g., ƒ). The capitalizer module 206 determines the best capitalized sentence (e.g., E*) based on capitalization information in the capitalized input sentence (e.g., F). Any type of process for determining the best capitalized sentence may be employed. For example, conditional random fields, discussed in more detail in association with FIG. 4, may be utilized to determine the best capitalized sentence. The capitalizer module 206 also utilizes information in the lowercase target sentence.


Although various modules are shown in association with the automatic translation server 106, fewer or more modules may comprise the automatic translation server 106 and still fall within the scope of various embodiments.



FIG. 3 is a block diagram illustrating exemplary components associated with the capitalizer module 206. The capitalizer module 206 is configured to generate a set of capitalized target sentences, designated by “E” in FIG. 3 (“the capitalized target sentence”), and align the capitalized target sentence(s) (e.g., E) with the capitalized input sentence (e.g., F). The capitalizer module 206 may determine one or more probabilities for each of the capitalized target sentences (e.g., E) according to information including, but not necessarily limited to, capitalization information from the capitalized input sentence (e.g., F). The capitalizer module 206 may combine the probabilities to select the best capitalized sentence (e.g., E*) according to the combined probabilities. The capitalizer module 206 may include components, such as a capitalized sentence generator 302, an aligner 304, a capitalization feature component 306, and a probability combiner 308. Although the capitalizer module 206 is described in FIG. 3 as including various components, fewer or more components may comprise the capitalizer module 206 and still fall within the scope of various embodiments.


The capitalized sentence generator 302 receives the lowercase target sentence (e.g., e), for example, from the automatic translator module 204. The capitalized sentence generator 302 is configured to generate one or more capitalization configurations that may be consistent with the lowercase target sentence, comprising a set of possible capitalized target sentences (e.g., E).


According to an exemplary embodiment, the one or more capitalization configurations may be generated according to a function, such as the function GEN(e). The capitalized target sentence (e.g., E) may be a capitalization configuration selected from the one or more capitalization configurations returned by the function (e.g., GEN(e)).


For example, the function GEN may generate a set of capitalization candidate words from a lowercase word, such as a lowercase word designated by w. For example, where the lowercase word w=“mt,” then GEN(mt)={mt, mT, Mt, MT}.


Heuristics may be used to reduce the range of capitalization candidate words generated by the function (e.g., GEN), according to exemplary embodiments. One example set of heuristics follows:


The returned set of GEN on a lowercase word w comprises the union of:


(i) {w, AU(w), IU(w)}


(ii) {v | v is seen in training data and AL(v) = w}


(iii) {F̃m,k | AL(F̃m,k) = AL(w)}


Heuristic (iii) may provide candidates, in addition to heuristics (i) and (ii), for the lowercase word (e.g., w) when it is translated from an unusual input word, such as F̃m,k in a phrase F̃m of the capitalized input sentence (e.g., F) that is aligned to the phrase containing the lowercase word (e.g., w). For example, heuristic (iii) may be used to create capitalization candidates for the translation of URLs, file names, and file paths. The function, such as GEN, may be applied to each of the lowercase words (e.g., w) in the lowercase target sentence (e.g., e) to generate the set of all possible capitalized target sentences (e.g., E), as sketched below.
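
As a concrete illustration, the heuristic-restricted GEN can be sketched as follows. AU, IU, and AL are the all-uppercase, initial-uppercase, and all-lowercase operators used in the heuristics; `seen_in_training` and `aligned_source_words` are hypothetical stand-ins for the training vocabulary and the words of the aligned source phrase.

```python
def AU(w): return w.upper()               # all characters uppercase
def IU(w): return w[:1].upper() + w[1:]   # initial character uppercase
def AL(w): return w.lower()               # all characters lowercase

def gen_word(w, seen_in_training=frozenset(), aligned_source_words=()):
    """Candidate surface forms for lowercase word w under heuristics (i)-(iii)."""
    candidates = {w, AU(w), IU(w)}                                     # (i)
    candidates |= {v for v in seen_in_training if AL(v) == w}          # (ii)
    candidates |= {s for s in aligned_source_words if AL(s) == AL(w)}  # (iii)
    return candidates

# Candidates for "mt" when "mT" appears in the training data:
print(gen_word("mt", seen_in_training={"mT"}))
# -> {'mt', 'MT', 'Mt', 'mT'} (set order may vary)
```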


The aligner 304 receives the capitalized input sentence and a capitalized target sentence(s) (e.g., E). Optionally, the aligner 304 may receive the lowercase target sentence (e.g., e). The sentences may comprise one or more phrases, as discussed herein. For example, an English sentence, "The red door is closed." may be parsed into the phrases "The red door" and "is closed." An equivalent sentence in German, "Die rote Tür ist zu", may be parsed into the phrases "Die rote Tür" and "ist zu." The phrase "The red door" may be aligned with the phrase "Die rote Tür" and the phrase "is closed" may be aligned with the phrase "ist zu."


The aligner 304 may be configured to associate phrases from the capitalized input sentence with phrases from a capitalized target sentence(s) (e.g., E) generated by the function, such as the function GEN, and output an alignment, designated as "A" in FIG. 3. For example, the aligner 304 may associate the phrase "The red door" with the phrase "Die rote Tür." Any method of obtaining phrase boundaries and the alignment (e.g., A), such as with a statistical phrase-based automatic translation system, may be employed.
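
A phrase alignment of this kind can be represented very simply, for example as index pairs over the two phrase sequences. The sketch below uses the English-German example above; the data structure is illustrative only.

```python
# Phrase sequences for the example sentences above.
source_phrases = ["Die rote Tür", "ist zu"]
target_phrases = ["The red door", "is closed"]

# Alignment A as (source phrase index j, target phrase index k) pairs.
alignment = [(0, 0), (1, 1)]

for j, k in alignment:
    print(f"{source_phrases[j]!r} <-> {target_phrases[k]!r}")
```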


Optionally, the capitalized input sentence is aligned with the lowercase target sentence. The capitalized target sentence(s) (e.g., E) may preserve the alignment between the capitalized input sentence (e.g., F) and the lowercase target sentence (e.g., e). The phrase alignment (e.g., A) may be used by the capitalization feature component 306 to determine probabilities of one or more feature functions that a capitalized target sentence(s) (e.g., E) is the best capitalized sentence (e.g., E*).


In various embodiments, a probability that a capitalized target sentence is the best capitalized sentence (e.g., E*) may be determined according to a conditional random field probabilistic model. According to an exemplary embodiment, the probability of the capitalized target sentence E, given the capitalized input sentence and the alignment may be represented as a probability function, such as p(E|F,A). The probability function (e.g., p(E|F,A)) may be determined from information including information from the capitalized input sentence and the alignment between the capitalized input sentence and the capitalized target sentence(s) (e.g., E) comprising one of the one or more capitalization configurations of the lowercase target sentence (e.g., e).


For example, the best capitalized sentence (e.g., E*) may be found by generating all of the possible capitalization configurations from the lowercase target sentence and determining the capitalized target sentence with the highest probability p(E|F,A). The best capitalized sentence (e.g., E*) may be generated utilizing the relation:

$$E^* = \underset{E \in \mathrm{GEN}(e)}{\arg\max} \; p(E \mid F, A) \qquad (1)$$

However, any method for generating the best capitalized sentence may be utilized according to some embodiments.


The capitalization feature component 306 is configured to calculate probabilities, as discussed herein. The capitalization feature component 306 can calculate probabilities for one or more feature functions, such as ƒi(E,F,A), for a capitalized target sentence according to the capitalized input sentence and the alignment. According to an exemplary embodiment, “i” represents the ith feature function. The capitalization feature component 306 can output the one or more feature functions, such as ƒi(E,F,A), for the capitalized target sentence(s) (e.g., E).


The probability combiner 308 can then combine the one or more feature functions, such as ƒi(E,F,A), and calculate the best capitalized sentence (e.g., E*). In various embodiments, the probability combiner 308 combines the probabilities for the one or more feature functions (e.g., ƒi(E,F,A)) into a weighted sum. For example, the probability combiner 308 may calculate the weighted sum according to the relation:











$$p_{\bar{\lambda}}(E \mid F, A) = \frac{1}{Z(F, A, \bar{\lambda})} \exp\left(\sum_{i=1}^{I} \lambda_i f_i(E, F, A)\right) \qquad (2)$$

where:










$$Z(F, A, \bar{\lambda}) = \sum_{E \in \mathrm{GEN}(e)} \exp\left(\sum_{i=1}^{I} \lambda_i f_i(E, F, A)\right) \qquad (3)$$








and λ̄ = (λ1, . . . , λI) is a feature weight vector. The capitalizer module 206 can utilize the relation to look for the best capitalized sentence (e.g., E*), according to an exemplary embodiment. For example, the best capitalized sentence (e.g., E*) may satisfy the relation:










$$E^* = \underset{E \in \mathrm{GEN}(e, F)}{\arg\max} \; \sum_{i=1}^{I} \lambda_i f_i(E, F, A) \qquad (4)$$








For each capitalized target sentence (e.g., E), the one or more feature functions (e.g., ƒi(E,F,A)) may be weighted by a specific weight, such as λi, where i = 1, . . . , I, for the various feature functions and associated weights, respectively.


The probability combiner 308 receives one or more values returned by the one or more feature functions and applies the respective weight (e.g., λi). Capitalization information in the capitalized input sentence (e.g., F) may be associated with the respective weight (e.g., λi). The probability combiner 308 can sum the weighted feature functions, such as λiƒi(E,F,A), for all feature functions (e.g., i, where i = 1 . . . I) to determine the probability for the capitalized target sentence(s). According to an exemplary embodiment, the probability combiner 308 can select the capitalized target sentence (e.g., E) with the best probability as the best capitalized sentence (e.g., E*).
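
Because the normalizer Z(F, A, λ̄) in equation (2) does not depend on E, maximizing p(E|F,A) is equivalent to maximizing the weighted feature sum of equation (4). A minimal sketch of the combiner's decision rule, with hypothetical feature functions and weights passed in as parallel sequences:

```python
from typing import Callable, Sequence

def best_capitalized_sentence(
    candidates: Sequence[Sequence[str]],   # GEN(e): candidate sentences E
    F: Sequence[str],                      # capitalized input sentence
    A: object,                             # phrase alignment
    features: Sequence[Callable],          # feature functions f_i(E, F, A)
    weights: Sequence[float],              # feature weights lambda_i
):
    """Return argmax_E sum_i lambda_i * f_i(E, F, A), as in equation (4)."""
    def score(E):
        return sum(lam * f(E, F, A) for lam, f in zip(weights, features))
    return max(candidates, key=score)
```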



FIG. 4 is a schematic diagram illustrating an exemplary alignment of two sentences, such as by using the aligner 304. The aligner 304 may be configured to operate on the capitalized input sentence and the capitalized target sentence(s), such as the capitalized target sentence E generated by the function GEN(e), to determine the alignment (e.g., A). One or more phrase boundaries may be denoted by the square brackets 410.


A vertex 412 corresponds to a word in the capitalized input sentence or the capitalized target sentence E (e.g., "Cliquez" in F, and "OK" in E). A line 414 may connect a word in the capitalized input sentence and a word in the capitalized target sentence (e.g., "Cliquez"—"Click") and correspond to a word alignment. According to an exemplary embodiment, an edge 416 between two words in the capitalized target sentence(s) (e.g., E) represents the dependency between the two words captured by monolingual n-gram language models. For example, if a source phrase (designated "F̃j" in FIG. 4) is the jth phrase of the capitalized input sentence (e.g., F) and the target phrase (designated "Ẽk" in FIG. 4) is the kth phrase of the capitalized target sentence (e.g., E), they may align to each other.


The phrase alignment does not require a word alignment; a word in the target phrase (e.g., Ẽk) may be aligned to any word in the source phrase (e.g., F̃j). A probabilistic model defined on a diagram, such as the diagram of FIG. 4, may be referred to as a conditional random field (CRF). A capitalization model using the CRF may be represented by the relations given in equations (2), (3), and (4), discussed herein, where each feature function (e.g., ƒi(E,F,A)) may be defined on a target word (e.g., Ek) of a target phrase (e.g., Ẽk) of the capitalized target sentence(s) (e.g., E), relative to the words of the source phrase (e.g., F̃j), where the target phrase (e.g., Ẽk) and the source phrase (e.g., F̃j) are aligned.



FIG. 5 is a block diagram illustrating an exemplary capitalization feature component 306 of FIG. 3. The capitalization feature component 306 comprises a capitalized translation model feature 502, a capitalization tag model feature 504, an uppercase translation model feature 508, a monolingual language model feature 506, an initial position model feature 510, and a punctuation model feature 512. Although FIG. 5 describes various model features comprising the capitalization feature component 306, fewer or more model features may comprise the capitalization feature component 306 and still fall within the scope of various embodiments.


The capitalized translation model feature 502 includes a feature function, such as ƒcap.t1(Ek,F,A). According to the feature function, the larger the probability that a target word (e.g., Ek) is translated from a source word (e.g., Fj), the larger the probability that the translated word preserves the case of the source word. Referring to the example of FIG. 4, the phrase "Click OK" is part of the target phrase (e.g., Ẽk) in the capitalized target sentence(s) (e.g., E). As illustrated in FIG. 4, the phrase "Cliquez OK" is the source phrase (e.g., F̃j) in the capitalized input sentence (e.g., F), and the source phrase (e.g., F̃j) is aligned to the target phrase (e.g., Ẽk). The capitalized translation model feature 502 computes, for example, a word probability (e.g., p(Ek|F̃m,n)) of "Click." The word probability may be computed by an equation, such as log(p(Click|Cliquez) + p(Click|OK)), for instance. "Click" may be assumed to be aligned to any word in the source phrase (e.g., F̃j). The larger the probability that "Click" is translated from a word in the source phrase, i.e., "Cliquez," the more likely it is that "Click" preserves the case of "Cliquez" in the target phrase (e.g., Ẽk).


According to an exemplary embodiment, for the translated word (Ek) and an aligned phrase pair, such as Ẽl and F̃m, where Ek ∈ Ẽl, the capitalized translation model feature 502 of the translated word (Ek) comprises the feature function represented by the relation:











$$f_{\mathrm{cap.t1}}(E_k, F, A) = \log \sum_{n=1}^{|\tilde{F}_m|} p(E_k \mid \tilde{F}_{m,n}) \qquad (5)$$








where the probability (e.g., p(Ek|F̃m,n)) may be determined according to a capitalized translation table. The capitalized translation table, such as for the probability p(Ek|F̃m,n), may be smoothed according to well-known techniques to avoid negative infinite values for the feature function (e.g., ƒcap.t1(Ek,F,A)).


In some embodiments, the capitalized translation table, such as for the probability p(Ek|F̃m,n), may be estimated from a word-aligned bilingual corpus. The capitalized translation model feature 502 may output the feature function (e.g., ƒcap.t1(Ek,F,A)) to the probability combiner 308. The probability combiner 308 may apply a feature weight, such as λcap.t1, to the feature function (e.g., ƒcap.t1(Ek,F,A)) and accumulate a weighted feature function (e.g., λcap.t1ƒcap.t1(Ek,F,A)), according to equation (4).
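
A sketch of equation (5): the feature is the log of the summed case-sensitive translation probabilities of the target word against every word of the aligned source phrase. `t_table` is a hypothetical capitalized translation table; the floor value plays the role of smoothing, keeping the logarithm finite.

```python
import math

def f_cap_t1(E_k, source_phrase, t_table, floor=1e-9):
    """log sum_n p(E_k | F~_{m,n}) over the words of the aligned source phrase."""
    total = sum(t_table.get((E_k, F_mn), 0.0) for F_mn in source_phrase)
    return math.log(max(total, floor))

# Toy probabilities for the "Click" / "Cliquez OK" example of FIG. 4:
t_table = {("Click", "Cliquez"): 0.8, ("Click", "OK"): 0.01}
print(f_cap_t1("Click", ["Cliquez", "OK"], t_table))  # log(0.81)
```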


The capitalization tag model feature 504 may be used to associate tags with words to indicate capitalization. Examples of capitalization tags include: initial capital (IU), all characters uppercase (AU), all characters lowercase (AL), mixed case (MX), and all characters having no case (AN). For example, lowercase words in a lowercase sentence e, "click ok to save your changes to /home/doc", may be associated with tags to give the tagged output "click IU ok AU to AL save AL your AL changes AL to AL /home/doc MX . AN". A corresponding capitalized target sentence E may read "Click OK to save your changes to /home/DOC."
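
The five tags can be assigned mechanically from a word's surface form. A minimal sketch of such a classifier, reading the tag definitions above literally:

```python
def cap_tag(word: str) -> str:
    """Classify a surface form into one of the five capitalization tags."""
    letters = [ch for ch in word if ch.isalpha()]
    if not letters:
        return "AN"   # no cased characters at all (e.g., "." or "42")
    if all(ch.islower() for ch in letters):
        return "AL"   # all characters lowercase
    if all(ch.isupper() for ch in letters):
        return "AU"   # all characters uppercase
    if letters[0].isupper() and all(ch.islower() for ch in letters[1:]):
        return "IU"   # initial capital only
    return "MX"       # mixed case

for w in ["Click", "OK", "save", "/home/DOC", "."]:
    print(w, cap_tag(w))   # IU, AU, AL, MX, AN
```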


In some embodiments, the capitalization tag model feature 504 includes a tag feature function, such as ƒcap.tag.t1(E,F,A). The capitalization tag feature function (e.g., ƒcap.tag.t1(E,F,A)) may be based on the capitalization tags for the aligned phrases in the capitalized input sentence and the capitalized target sentence. Referring to the example of FIG. 4, a probability of the word "Click" aligning to the phrase "Cliquez OK" may be represented as log(p(IU|IU)p(click|cliquez) + p(IU|AU)p(click|ok)). The probability may be computed in terms of a tag translation probability, such as p(IU|AU), and a lowercase word translation probability, such as p(click|ok), for example.


The lowercase word translation probability may be used to determine how much of the tag translation probability will contribute to the calculation of the best capitalized sentence (e.g., E*). A smaller value of the word translation probability (e.g., p(click|ok)) typically results in a smaller chance that the surface form of “click” preserves the case information from that of “ok” in the input capitalized sentence (e.g., F). This feature may be represented by the equation:











$$f_{\mathrm{cap.tag.t1}}(E_k, F, A) = \log \sum_{n=1}^{|\tilde{f}_m|} p(e_k \mid \tilde{f}_{m,n}) \times p(\tau(E_k) \mid \tau(\tilde{F}_{m,n})) \qquad (6)$$








where p(ek|f̃m,n) may be determined according to a translation table (t-table) over lowercase word pairs, such as a t-table in a statistical automatic translation system. The term p(τ(Ek)|τ(F̃m,n)) may be determined according to the probability of a target capitalization tag given a source capitalization tag, and may be estimated from a word-aligned bilingual corpus, according to an exemplary embodiment.


The capitalization tag model feature 504 provides additional probability information to the probability combiner 308, for example, when a capitalized word pair is unseen. For example, word pairs that have not previously been observed co-occurring in a sentence pair (i.e., unseen word pairs) may comprise words without a one-to-one translation equivalent. In some embodiments, the terms p(ek|f̃m,n) and/or p(τ(Ek)|τ(F̃m,n)) may be smoothed to handle unseen words or unseen word pairs. The capitalization tag model feature 504 outputs the tag feature function (e.g., ƒcap.tag.t1(E,F,A)). According to an exemplary embodiment, the probability combiner 308 may apply a feature weight (e.g., λcap.tag.t1) to the feature function (e.g., ƒcap.tag.t1(Ek,F,A)) and accumulate a tag weighted feature function (e.g., λcap.tag.t1ƒcap.tag.t1(Ek,F,A)), such as by utilizing equation (4). However, any process for obtaining a weighted probability for capitalization may be employed.
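
A sketch of equation (6), combining the lowercase t-table with the tag translation table. `lc_t_table` and `tag_table` are hypothetical smoothed probability tables, and `cap_tag` is the classifier sketched earlier.

```python
import math

def f_cap_tag_t1(E_k, source_phrase, lc_t_table, tag_table, floor=1e-9):
    """log sum_n p(e_k | f~_{m,n}) * p(tag(E_k) | tag(F~_{m,n}))."""
    e_k = E_k.lower()
    total = sum(
        lc_t_table.get((e_k, F_mn.lower()), 0.0)
        * tag_table.get((cap_tag(E_k), cap_tag(F_mn)), 0.0)
        for F_mn in source_phrase
    )
    return math.log(max(total, floor))
```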


The monolingual language model feature 506 comprises a monolingual feature function, such as ƒLM(Ei,F,A). The monolingual language model feature 506 may ignore information available from the capitalized input sentence F and the alignment A. The monolingual language model feature 506 may compute a probability, such as p(Ei|Ei−1, . . . , Ei−n+1), of an occurrence of the translated word (Ei) according to the logarithm of the probability of an n-gram ending at the translated word (Ei). The monolingual feature function ƒLM(Ei,F,A) may be represented, according to an exemplary embodiment, as:

$$f_{\mathrm{LM}}(E_i, F, A) = \log p(E_i \mid E_{i-1}, \ldots, E_{i-n+1}) \qquad (7)$$

The probability (e.g., p(Ei|Ei−1, . . . , Ei−n+1)) may be appropriately smoothed such that p(Ei|Ei−1, . . . , Ei−n+1) never returns zero.


The monolingual language model feature 506 outputs the monolingual feature function (e.g., ƒLM(Ei,F,A)) to the probability combiner 308, for example. The probability combiner 308 can apply the feature weight, such as λLM, to the monolingual feature function and accumulate a weighted feature function, such as λLMƒLM(Ei,F,A), according to equation (4), for example.
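
A sketch of the language model feature of equation (7) for an n-gram model. `ngram_logprob` stands in for a smoothed log-probability lookup backed by a trained target-language model; it is a hypothetical interface, not a specific library call.

```python
from typing import Callable, Sequence, Tuple

def f_LM(E: Sequence[str], i: int,
         ngram_logprob: Callable[[str, Tuple[str, ...]], float],
         n: int = 3) -> float:
    """log p(E_i | E_{i-1}, ..., E_{i-n+1}): the n-gram ending at word i."""
    history = tuple(E[max(0, i - n + 1):i])
    return ngram_logprob(E[i], history)
```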


The uppercase translation model feature 508 comprises an uppercase feature function, such as ƒUC(Ek,F,A). The uppercase translation model feature 508 is configured to receive the capitalized input sentence (e.g., F), the capitalized target sentence(s) (e.g., E), and the alignment (e.g., A), and output the uppercase feature function (e.g., ƒUC(Ek,F,A)). The translated word (e.g., Ek) may be in all uppercase if the words in a corresponding phrase, such as the source phrase (e.g., F̃j) discussed in FIG. 4, of the capitalized input sentence are in uppercase.


The uppercase translation model feature 508 may be captured by the capitalization tag model feature 504, for example, where the probability of a tag, such as an AU tag, in the capitalized input sentence is preserved in the target capitalized sentence(s). However, in some embodiments, the uppercase translation model feature 508 further enhances the probability of the target capitalized sentence.


The uppercase translation model feature 508 increases the probability, for example, to translate “ABC XYZ” in the capitalized input sentence (e.g., F) into “UUU VVV” in the best capitalized sentence (e.g., E*), even if all words are unseen. The uppercase translation model feature 508 outputs the uppercase feature function (e.g., ƒUC(Ek,F,A)). The probability combiner 308 may apply a feature weight, such as λUC, to the uppercase feature function and accumulate the weighted feature function λUCƒUC(Ek,F,A), for example, according to equation (4).


The initial position model feature 510 comprises an initial position feature function, such as ƒIP(Ek,F,A). The initial position model feature 510 is configured to receive the capitalized target sentence(s) (e.g., E) and output the feature function (e.g., ƒIP(Ek,F,A)), ignoring information available from the capitalized input sentence (e.g., F) and the alignment (e.g., A). The translated word (e.g., Ek) in the capitalized target sentence(s) may be initially capitalized if it is the first word that contains letters in the capitalized target sentence. For example, for a sentence "• Please click the button" that starts with a bullet, the initial position feature value of the word "please" is 1 because the bullet ("•") does not contain a letter. The probability combiner 308 may apply a feature weight, such as λIP, to the feature function and accumulate a weighted feature function, such as λIPƒIP(Ek,F,A), utilizing, for example, equation (4).
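
A sketch of the initial position feature: it fires (value 1) for the word at position k only when that word is the first in the sentence containing a letter, so a leading bullet or number is skipped. Names are illustrative.

```python
def f_IP(E, k):
    """1.0 if E[k] is the first word of E containing a letter, else 0.0."""
    for idx, word in enumerate(E):
        if any(ch.isalpha() for ch in word):
            return 1.0 if idx == k else 0.0
    return 0.0

print(f_IP(["•", "please", "click", "the", "button"], 1))  # 1.0
```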


The punctuation model feature 512 includes a punctuation feature function, such as ƒP(Ek,F,A). The punctuation model feature 512 is configured to receive the capitalized target sentence and output the feature function (e.g., ƒP(Ek,F,A)), ignoring information available from the capitalized input sentence and the alignment. The translated word (e.g., Ek) may be initially capitalized if the translated word follows a punctuation mark, for example. For non-sentence-ending punctuation marks, such as a comma, a colon, and the like, a negative feature weight, such as λPw, may be applied to the translated word. The probability combiner 308 may apply a feature weight, such as λP, to the punctuation feature function and accumulate a weighted feature function, such as λPƒP(Ek,F,A), according to equation (4), for example.
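
A sketch of the punctuation feature: it fires when the word at position k immediately follows a punctuation mark; the combiner can then weight it positively after sentence-ending marks and negatively after marks such as commas and colons, as described above. The mark sets are illustrative.

```python
SENTENCE_ENDING = {".", "!", "?"}
NON_SENTENCE_ENDING = {",", ":", ";"}

def f_P(E, k):
    """1.0 if the previous token ends in a punctuation mark, else 0.0."""
    if k == 0:
        return 0.0
    prev = E[k - 1]
    marks = SENTENCE_ENDING | NON_SENTENCE_ENDING
    return 1.0 if prev and prev[-1] in marks else 0.0
```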


Although various feature functions have been described in FIG. 5, fewer or more feature functions may be provided for generating a best capitalized sentence (e.g., E*) and still fall within the scope of various embodiments. Further, the equations described herein are exemplary, and other equations utilized for generating the best capitalized sentence may vary or differ from the exemplary equations set forth herein and still fall within the scope of various embodiments.



FIG. 6 is a flow diagram illustrating a process 600 for capitalizing translated text, such as by using a bilingual capitalization model. At step 602, a capitalized source text from a source language is translated to a target text in a target language.


As discussed herein, the source text may be translated in units of capitalized sentences in the source language, i.e., capitalized input sentences (F). The target text output of the translation may include a lowercase sentence e. For clarity, the process is described as operating on text strings in units of "sentences." However, text strings of any arbitrary length or unit may be utilized in the process of capitalizing the translated text.


Automatic translation of the capitalized input sentence (F) may be performed by any of a number of techniques, such as statistical automatic translation, statistical phrase-based automatic translation, 1-gram automatic translation, n-gram automatic translation, syntax-based automatic translation, and so forth.


According to an exemplary embodiment, the automatic translation of the capitalized input sentence (e.g., F) to the lowercase target sentence (e.g., e) may be performed in the automatic translation server 106, by using the case remover module 202 to generate a lowercase source sentence (e.g., ƒ) and by using the automatic translator module 204 to translate the lowercase source sentence to the lowercase target sentence.


At step 604, the target text is capitalized according to information in the capitalized source text. For example, the target text may be capitalized by the capitalizer module 206 of the automatic translation server 106 as described elsewhere herein. According to an exemplary embodiment, the capitalizer module 206 may receive the capitalized input sentence, the lowercase target sentence, and the lowercase source sentence. The capitalizer module 206 may generate one or more capitalization configurations and select the best capitalization configuration (e.g., the best capitalized sentence (E*)) according to information in the capitalized input sentence (e.g., F).


Various exemplary embodiments are specifically illustrated and/or described herein. However, it will be appreciated that modifications and variations are covered by the above teachings and are within the scope of the appended claims without departing from the spirit and intended scope thereof. For example, additional model features for computing weighted feature functions (e.g., λiƒi(Ei,F,A)) according to information about text and/or alignment may be applied, or a model feature may be configured to apply negatively weighted probabilities to capitalized nouns when translating from a source language where all nouns are capitalized (e.g., German) to a target language where only proper nouns are capitalized (e.g., English). As another example, an embodiment of the capitalization model may include a syntax-based MT system, rather than a phrase-based statistical MT system. The syntax-based MT may include a description of the translational correspondence within a translation rule, or a synchronous production, rather than a translational phrase pair. Training data may be derivation forests, instead of a phrase-aligned bilingual corpus.


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. For example, any of the elements associated with the automatic translation server 106 may employ any of the desired functionality set forth hereinabove. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments.

Claims
  • 1. A method for capitalizing translated text comprising: executing a translator module stored on a device to automatically translate a capitalized source text to a target text, wherein prior to translation the capitalized source text is converted to lower case and then translated; and capitalizing the target text according to capitalization information in the capitalized source text and the target text, wherein the step of capitalizing the target text according to capitalization information in the capitalized source text includes: generating one or more capitalization configurations for the target text; computing a configuration probability for each of the one or more capitalization configurations, the configuration probability computed from capitalization information in the capitalized source text and at least one capitalization model feature function based on an alignment between the capitalized source text and the target text or the capitalized source text and the capitalization configuration; and selecting the best capitalization configuration based on the highest configuration probability.
  • 2. The method of claim 1 further comprising capitalizing the target text according to the translated target text.
  • 3. The method of claim 1 further comprising capitalizing the target text using conditional random fields.
  • 4. The method of claim 1 wherein the step of capitalizing the target text further comprises: assigning the computed configuration probability to each respective one or more capitalization configurations, wherein the configuration probability is computed for the one or more capitalization configurations.
  • 5. The method of claim 4 wherein the at least one capitalization model feature function includes a capitalized translation model feature function.
  • 6. The method of claim 4 wherein the at least one capitalization model feature function includes a capitalization tag model feature function.
  • 7. The method of claim 4 wherein the at least one capitalization model feature function includes an uppercase model feature function.
  • 8. The method of claim 4 wherein the at least one capitalization model feature function includes a monolingual language model feature function.
  • 9. The method of claim 4 wherein the at least one capitalization model feature function includes an initial position model feature function.
  • 10. The method of claim 4 wherein the at least one capitalization model feature function includes a punctuation model feature function.
  • 11. The method of claim 4, further including: selecting a source phrase from the capitalized source text; selecting a target phrase from the target text; determining an alignment between the source phrase and the target phrase; computing a word probability from capitalization information in the source phrase, the alignment, and the at least one capitalization model feature function for the one or more capitalization configurations; and applying the word probability to the computing of the configuration probability.
  • 12. The method of claim 11 wherein the at least one capitalization model feature function includes a capitalized translation model feature function.
  • 13. The method of claim 11 wherein the at least one capitalization model feature function includes a capitalization tag model feature function.
  • 14. The method of claim 11 wherein the at least one capitalization model feature function includes an uppercase model feature function.
  • 15. A translation system comprising: a device; an automatic translator module executable and stored on the device and configured to automatically convert a capitalized source text to lower case text and translate the lower case text to a target text; and a capitalization module configured to recover a capitalized text from the target text according to capitalization information in the capitalized source text and the target text, and capitalize the target text, the capitalization of the target text including: generating a plurality of capitalization configurations for the target text; for each capitalization configuration, computing a feature probability for each of a plurality of capitalization model feature functions; associating a feature weight with each capitalization model feature function; applying the associated feature weight to the respective computed feature probability for each of the plurality of capitalization model feature functions; for each capitalization configuration, calculating a capitalization configuration probability based on a weighted sum of the computed feature probabilities and applied feature weights, and based on an alignment between the capitalized source text and the target text or the capitalized source text and the capitalization configuration; and selecting the best capitalization configuration from the plurality of capitalization configurations based on the highest calculated capitalization configuration probability.
  • 16. The translation system of claim 15, wherein the capitalization module is further configured to recover the capitalized text from the source text.
  • 17. A translation system comprising: a device; an automatic translator module executable and stored on the device and configured to automatically convert a capitalized source text to lower case text and translate the lower case text to a target text; an aligner configured to determine an alignment between one or more phrases in the capitalized source text and one or more respective phrases in the target text of a capitalization configuration; and a capitalization module configured to recover a capitalized text from the target text according to capitalization information in the capitalized source text and the target text and the alignment determined by the aligner, and to capitalize the target text, the capitalization of the target text including: generating a plurality of capitalization configurations for the target text; for each capitalization configuration, computing a feature probability for each of a plurality of capitalization model feature functions; associating a feature weight with each capitalization model feature function; applying the associated feature weight to the respective computed feature probability for each of the plurality of capitalization model feature functions; for each capitalization configuration, calculating a capitalization configuration probability based on a weighted sum of the computed feature probabilities and applied feature weights, and based on the alignment between the one or more phrases in the capitalized source text and the one or more phrases in the target text or between the capitalized source text and the capitalization configuration; assigning the calculated capitalization configuration probability to each respective capitalization configuration; and selecting the best capitalization configuration from the plurality of capitalization configurations based on the highest calculated capitalization configuration probability.
  • 18. The translation system of claim 17 wherein the capitalization module further includes a capitalized translation model feature function.
  • 19. The translation system of claim 17 wherein the capitalization module further includes a capitalization tag model feature function.
  • 20. The translation system of claim 17 wherein the capitalization module further includes an uppercase model feature function.
  • 21. The translation system of claim 17 wherein the capitalization module further includes a monolingual language model feature function.
  • 22. The translation system of claim 17 wherein the capitalization module further includes an initial position model feature function.
  • 23. The translation system of claim 17 wherein the capitalization module further includes a punctuation model feature function.
  • 24. A computer program embodied on a non-transitory computer readable medium having instructions for capitalizing translated text, comprising: executing a translator module stored on a device to automatically translate a capitalized source text to a target text, the translation of the capitalized source text including converting source text to lower case and translating the lower case source text; and capitalizing the target text according to capitalization information in the capitalized source text, the step of capitalizing the target text according to the capitalized source text including: generating one or more capitalization configurations for the target text; computing a configuration probability for each of the one or more capitalization configurations, the configuration probability computed from capitalization information in the capitalized source text and at least one capitalization model feature function based on an alignment between the capitalized source text and the target text or the capitalized source text and the capitalization configuration; and selecting the best capitalization configuration based on the highest computed configuration probability.
US Referenced Citations (361)
Number Name Date Kind
4502128 Okajima et al. Feb 1985 A
4599691 Sakaki et al. Jul 1986 A
4615002 Innes Sep 1986 A
4661924 Okamoto et al. Apr 1987 A
4787038 Doi et al. Nov 1988 A
4791587 Doi Dec 1988 A
4800522 Miyao et al. Jan 1989 A
4814987 Miyao et al. Mar 1989 A
4942526 Okajima et al. Jul 1990 A
4980829 Okajima et al. Dec 1990 A
5020112 Chou May 1991 A
5088038 Tanaka et al. Feb 1992 A
5091876 Kumano et al. Feb 1992 A
5146405 Church Sep 1992 A
5167504 Mann Dec 1992 A
5181163 Nakajima et al. Jan 1993 A
5212730 Wheatley et al. May 1993 A
5218537 Hemphill et al. Jun 1993 A
5220503 Suzuki et al. Jun 1993 A
5267156 Nomiyama Nov 1993 A
5268839 Kaji Dec 1993 A
5295068 Nishino et al. Mar 1994 A
5302132 Corder Apr 1994 A
5311429 Tominaga May 1994 A
5387104 Corder Feb 1995 A
5408410 Kaji Apr 1995 A
5432948 Davis et al. Jul 1995 A
5442546 Kaji et al. Aug 1995 A
5477450 Takeda et al. Dec 1995 A
5477451 Brown et al. Dec 1995 A
5495413 Kutsumi et al. Feb 1996 A
5497319 Chong et al. Mar 1996 A
5510981 Berger et al. Apr 1996 A
5528491 Kuno et al. Jun 1996 A
5535120 Chong et al. Jul 1996 A
5541836 Church et al. Jul 1996 A
5541837 Fushimoto Jul 1996 A
5548508 Nagami Aug 1996 A
5644774 Fukumochi et al. Jul 1997 A
5675815 Yamauchi et al. Oct 1997 A
5687383 Nakayama et al. Nov 1997 A
5696980 Brew Dec 1997 A
5724593 Hargrave, III et al. Mar 1998 A
5752052 Richardson et al. May 1998 A
5754972 Baker et al. May 1998 A
5761631 Nasukawa Jun 1998 A
5761689 Rayson et al. Jun 1998 A
5768603 Brown et al. Jun 1998 A
5779486 Ho et al. Jul 1998 A
5781884 Pereira et al. Jul 1998 A
5794178 Caid et al. Aug 1998 A
5805832 Brown et al. Sep 1998 A
5806032 Sproat Sep 1998 A
5819265 Ravin et al. Oct 1998 A
5826219 Kutsumi Oct 1998 A
5826220 Takeda et al. Oct 1998 A
5845143 Yamauchi et al. Dec 1998 A
5848385 Poznanski et al. Dec 1998 A
5848386 Motoyama Dec 1998 A
5855015 Shoham Dec 1998 A
5864788 Kutsumi Jan 1999 A
5867811 O'Donoghue Feb 1999 A
5870706 Alshawi Feb 1999 A
5893134 O'Donoghue et al. Apr 1999 A
5903858 Saraki May 1999 A
5907821 Kaji et al. May 1999 A
5909681 Passera et al. Jun 1999 A
5930746 Ting Jul 1999 A
5966685 Flanagan et al. Oct 1999 A
5966686 Heidorn et al. Oct 1999 A
5983169 Kozma Nov 1999 A
5987402 Murata et al. Nov 1999 A
5987404 Della Pietra et al. Nov 1999 A
5991710 Papineni et al. Nov 1999 A
5995922 Penteroudakis et al. Nov 1999 A
6018617 Sweitzer et al. Jan 2000 A
6031984 Walser Feb 2000 A
6032111 Mohri Feb 2000 A
6047252 Kumano et al. Apr 2000 A
6064819 Franssen et al. May 2000 A
6064951 Park et al. May 2000 A
6073143 Nishikawa et al. Jun 2000 A
6077085 Parry et al. Jun 2000 A
6092034 McCarley et al. Jul 2000 A
6119077 Shinozaki Sep 2000 A
6131082 Hargrave, III et al. Oct 2000 A
6161082 Goldberg et al. Dec 2000 A
6182014 Kenyon et al. Jan 2001 B1
6182027 Nasukawa et al. Jan 2001 B1
6205456 Nakao Mar 2001 B1
6206700 Brown et al. Mar 2001 B1
6223150 Duan et al. Apr 2001 B1
6233544 Alshawi May 2001 B1
6233545 Datig May 2001 B1
6233546 Datig May 2001 B1
6236958 Lange et al. May 2001 B1
6269351 Black Jul 2001 B1
6275789 Moser et al. Aug 2001 B1
6278967 Akers et al. Aug 2001 B1
6278969 King et al. Aug 2001 B1
6285978 Bernth et al. Sep 2001 B1
6289302 Kuo Sep 2001 B1
6304841 Berger et al. Oct 2001 B1
6311152 Bai et al. Oct 2001 B1
6317708 Witbrock et al. Nov 2001 B1
6327568 Joost Dec 2001 B1
6330529 Ito Dec 2001 B1
6330530 Horiguchi et al. Dec 2001 B1
6356864 Foltz et al. Mar 2002 B1
6360196 Poznanski et al. Mar 2002 B1
6389387 Poznanski et al. May 2002 B1
6393388 Franz et al. May 2002 B1
6393389 Chanod et al. May 2002 B1
6415250 van den Akker Jul 2002 B1
6460015 Hetherington et al. Oct 2002 B1
6470306 Pringle et al. Oct 2002 B1
6473729 Gastaldo et al. Oct 2002 B1
6473896 Hicken et al. Oct 2002 B1
6480698 Ho et al. Nov 2002 B2
6490549 Ulicny et al. Dec 2002 B1
6498921 Ho et al. Dec 2002 B1
6502064 Miyahira et al. Dec 2002 B1
6529865 Duan et al. Mar 2003 B1
6535842 Roche et al. Mar 2003 B1
6587844 Mohri Jul 2003 B1
6609087 Miller et al. Aug 2003 B1
6647364 Yumura et al. Nov 2003 B1
6691279 Yoden et al. Feb 2004 B2
6745161 Arnold et al. Jun 2004 B1
6745176 Probert, Jr. et al. Jun 2004 B2
6757646 Marchisio Jun 2004 B2
6778949 Duan et al. Aug 2004 B2
6782356 Lopke Aug 2004 B1
6810374 Kang Oct 2004 B2
6848080 Lee et al. Jan 2005 B1
6857022 Scanlan Feb 2005 B1
6885985 Hull Apr 2005 B2
6901361 Portilla May 2005 B1
6904402 Wang et al. Jun 2005 B1
6952665 Shimomura et al. Oct 2005 B1
6983239 Epstein Jan 2006 B1
6996518 Jones et al. Feb 2006 B2
6996520 Levin Feb 2006 B2
6999925 Fischer et al. Feb 2006 B2
7013262 Tokuda et al. Mar 2006 B2
7016827 Ramaswamy et al. Mar 2006 B1
7016977 Dunsmoir et al. Mar 2006 B1
7024351 Wang Apr 2006 B2
7031911 Zhou et al. Apr 2006 B2
7050964 Menzes et al. May 2006 B2
7085708 Manson Aug 2006 B2
7089493 Hatori et al. Aug 2006 B2
7103531 Moore Sep 2006 B2
7107204 Liu et al. Sep 2006 B1
7107215 Ghali Sep 2006 B2
7113903 Riccardi et al. Sep 2006 B1
7143036 Weise Nov 2006 B2
7146358 Gravano et al. Dec 2006 B1
7149688 Schalkwyk Dec 2006 B2
7171348 Scanlan Jan 2007 B2
7174289 Sukehiro Feb 2007 B2
7177792 Knight et al. Feb 2007 B2
7191115 Moore Mar 2007 B2
7194403 Okura et al. Mar 2007 B2
7197451 Carter et al. Mar 2007 B1
7206736 Moore Apr 2007 B2
7209875 Quirk et al. Apr 2007 B2
7219051 Moore May 2007 B2
7239998 Xun Jul 2007 B2
7249012 Moore Jul 2007 B2
7249013 Al-Onaizan et al. Jul 2007 B2
7283950 Pournasseh et al. Oct 2007 B2
7295962 Marcu Nov 2007 B2
7302392 Thenthiruperai et al. Nov 2007 B1
7319949 Pinkham Jan 2008 B2
7340388 Soricut et al. Mar 2008 B2
7346487 Li Mar 2008 B2
7346493 Ringger et al. Mar 2008 B2
7349839 Moore Mar 2008 B2
7349845 Coffman et al. Mar 2008 B2
7356457 Pinkham et al. Apr 2008 B2
7369998 Sarich et al. May 2008 B2
7373291 Garst May 2008 B2
7383542 Richardson et al. Jun 2008 B2
7389222 Langmead et al. Jun 2008 B1
7389234 Schmid et al. Jun 2008 B2
7403890 Roushar Jul 2008 B2
7409332 Moore Aug 2008 B2
7409333 Wilkinson et al. Aug 2008 B2
7447623 Appleby Nov 2008 B2
7454326 Marcu et al. Nov 2008 B2
7496497 Liu Feb 2009 B2
7533013 Marcu May 2009 B2
7536295 Cancedda et al. May 2009 B2
7546235 Brockett et al. Jun 2009 B2
7552053 Gao et al. Jun 2009 B2
7565281 Appleby Jul 2009 B2
7574347 Wang Aug 2009 B2
7580830 Al-Onaizan et al. Aug 2009 B2
7584092 Brockett et al. Sep 2009 B2
7587307 Cancedda et al. Sep 2009 B2
7620538 Marcu et al. Nov 2009 B2
7620632 Andrews Nov 2009 B2
7624005 Koehn et al. Nov 2009 B2
7624020 Yamada et al. Nov 2009 B2
7627479 Travieso et al. Dec 2009 B2
7680646 Lux-Pogodalla et al. Mar 2010 B2
7689405 Marcu Mar 2010 B2
7698124 Menezes et al. Apr 2010 B2
7698125 Graehl et al. Apr 2010 B2
7707025 Whitelock Apr 2010 B2
7711545 Koehn May 2010 B2
7716037 Precoda et al. May 2010 B2
7801720 Satake et al. Sep 2010 B2
7813918 Muslea et al. Oct 2010 B2
7822596 Elgazzar et al. Oct 2010 B2
7925494 Cheng et al. Apr 2011 B2
7957953 Moore Jun 2011 B2
7974833 Soricut et al. Jul 2011 B2
8060360 He Nov 2011 B2
8145472 Shore et al. Mar 2012 B2
8214196 Yamada et al. Jul 2012 B2
8244519 Bicici et al. Aug 2012 B2
8265923 Chatterjee et al. Sep 2012 B2
8615389 Marcu Dec 2013 B1
20010009009 Iizuka Jul 2001 A1
20010029455 Chin et al. Oct 2001 A1
20020002451 Sukehiro Jan 2002 A1
20020013693 Fuji Jan 2002 A1
20020040292 Marcu Apr 2002 A1
20020046018 Marcu et al. Apr 2002 A1
20020046262 Heilig et al. Apr 2002 A1
20020059566 Delcambre et al. May 2002 A1
20020078091 Vu et al. Jun 2002 A1
20020087313 Lee et al. Jul 2002 A1
20020099744 Coden et al. Jul 2002 A1
20020111788 Kimpara Aug 2002 A1
20020111789 Hull Aug 2002 A1
20020111967 Nagase Aug 2002 A1
20020143537 Ozawa et al. Oct 2002 A1
20020152063 Tokieda et al. Oct 2002 A1
20020169592 Aityan Nov 2002 A1
20020188438 Knight et al. Dec 2002 A1
20020188439 Marcu Dec 2002 A1
20020198699 Greene et al. Dec 2002 A1
20020198701 Moore Dec 2002 A1
20020198713 Franz et al. Dec 2002 A1
20030009322 Marcu Jan 2003 A1
20030023423 Yamada et al. Jan 2003 A1
20030144832 Harris Jul 2003 A1
20030154071 Shreve Aug 2003 A1
20030158723 Masuichi et al. Aug 2003 A1
20030176995 Sukehiro Sep 2003 A1
20030182102 Corston-Oliver et al. Sep 2003 A1
20030191626 Al-Onaizan et al. Oct 2003 A1
20030204400 Marcu et al. Oct 2003 A1
20030216905 Chelba et al. Nov 2003 A1
20030217052 Rubenczyk et al. Nov 2003 A1
20030233222 Soricut et al. Dec 2003 A1
20040006560 Chan et al. Jan 2004 A1
20040015342 Garst Jan 2004 A1
20040024581 Koehn et al. Feb 2004 A1
20040030551 Marcu et al. Feb 2004 A1
20040035055 Zhu et al. Feb 2004 A1
20040044530 Moore Mar 2004 A1
20040059708 Dean et al. Mar 2004 A1
20040068411 Scanlan Apr 2004 A1
20040098247 Moore May 2004 A1
20040102956 Levin May 2004 A1
20040102957 Levin May 2004 A1
20040111253 Luo et al. Jun 2004 A1
20040115597 Butt Jun 2004 A1
20040122656 Abir Jun 2004 A1
20040167768 Travieso et al. Aug 2004 A1
20040167784 Travieso et al. Aug 2004 A1
20040193401 Ringger et al. Sep 2004 A1
20040230418 Kitamura Nov 2004 A1
20040237044 Travieso et al. Nov 2004 A1
20040260532 Richardson et al. Dec 2004 A1
20050021322 Richardson et al. Jan 2005 A1
20050021517 Marchisio Jan 2005 A1
20050026131 Elzinga et al. Feb 2005 A1
20050033565 Koehn Feb 2005 A1
20050038643 Koehn Feb 2005 A1
20050055199 Ryzchachkin et al. Mar 2005 A1
20050055217 Sumita et al. Mar 2005 A1
20050060160 Roh et al. Mar 2005 A1
20050075858 Pournasseh et al. Apr 2005 A1
20050086226 Krachman Apr 2005 A1
20050102130 Quirk et al. May 2005 A1
20050125218 Rajput et al. Jun 2005 A1
20050149315 Flanagan et al. Jul 2005 A1
20050171757 Appleby Aug 2005 A1
20050204002 Friend Sep 2005 A1
20050228640 Aue et al. Oct 2005 A1
20050228642 Mau et al. Oct 2005 A1
20050228643 Munteanu et al. Oct 2005 A1
20050234701 Graehl et al. Oct 2005 A1
20050267738 Wilkinson et al. Dec 2005 A1
20060004563 Campbell et al. Jan 2006 A1
20060015320 Och Jan 2006 A1
20060015323 Udupa et al. Jan 2006 A1
20060018541 Chelba et al. Jan 2006 A1
20060020448 Chelba et al. Jan 2006 A1
20060041428 Fritsch et al. Feb 2006 A1
20060095248 Menezes et al. May 2006 A1
20060111891 Menezes et al. May 2006 A1
20060111892 Menezes et al. May 2006 A1
20060111896 Menezes et al. May 2006 A1
20060129424 Chan Jun 2006 A1
20060142995 Knight et al. Jun 2006 A1
20060150069 Chang Jul 2006 A1
20060167984 Fellenstein et al. Jul 2006 A1
20060190241 Goutte et al. Aug 2006 A1
20070016400 Soricutt et al. Jan 2007 A1
20070016401 Ehsani et al. Jan 2007 A1
20070033001 Muslea et al. Feb 2007 A1
20070050182 Sneddon et al. Mar 2007 A1
20070078654 Moore Apr 2007 A1
20070078845 Scott et al. Apr 2007 A1
20070083357 Moore et al. Apr 2007 A1
20070094169 Yamada et al. Apr 2007 A1
20070112553 Jacobson May 2007 A1
20070112555 Lavi et al. May 2007 A1
20070112556 Lavi et al. May 2007 A1
20070122792 Galley et al. May 2007 A1
20070168202 Changela et al. Jul 2007 A1
20070168450 Prajapat et al. Jul 2007 A1
20070180373 Bauman et al. Aug 2007 A1
20070219774 Quirk et al. Sep 2007 A1
20070250306 Marcu et al. Oct 2007 A1
20070265825 Cancedda et al. Nov 2007 A1
20070265826 Chen et al. Nov 2007 A1
20070269775 Andreev et al. Nov 2007 A1
20070294076 Shore et al. Dec 2007 A1
20080052061 Kim et al. Feb 2008 A1
20080065478 Kohlmeier et al. Mar 2008 A1
20080114583 Al-Onaizan et al. May 2008 A1
20080154581 Lavi et al. Jun 2008 A1
20080183555 Walk Jul 2008 A1
20080215418 Kolve et al. Sep 2008 A1
20080249760 Marcu et al. Oct 2008 A1
20080270109 Och Oct 2008 A1
20080270112 Shimohata Oct 2008 A1
20080281578 Kumaran et al. Nov 2008 A1
20080307481 Panje Dec 2008 A1
20090076792 Lawson-Tancred Mar 2009 A1
20090083023 Foster et al. Mar 2009 A1
20090119091 Sarig May 2009 A1
20090125497 Jiang et al. May 2009 A1
20090234634 Chen et al. Sep 2009 A1
20090241115 Raffo et al. Sep 2009 A1
20090326912 Ueffing Dec 2009 A1
20100017293 Lung et al. Jan 2010 A1
20100042398 Marcu et al. Feb 2010 A1
20100138213 Bicici et al. Jun 2010 A1
20100174524 Koehn Jul 2010 A1
20110029300 Marcu et al. Feb 2011 A1
20110066643 Cooper et al. Mar 2011 A1
20110082684 Soricut et al. Apr 2011 A1
20120323554 Hopkins et al. Dec 2012 A1
Foreign Referenced Citations (13)
Number Date Country
202005022113.9 Feb 2014 DE
0469884 Feb 1992 EP
0715265 Jun 1996 EP
0933712 Aug 1999 EP
0933712 Jan 2001 EP
07244666 Jan 1995 JP
10011447 Jan 1998 JP
11272672 Oct 1999 JP
2004501429 Jan 2004 JP
2004062726 Feb 2004 JP
2008101837 May 2008 JP
5452868 Jan 2014 JP
WO03083709 Oct 2003 WO
Non-Patent Literature Citations (255)
Entry
Lita, L. V., Ittycheriah, A., Roukos, S., and Kambhatla, N.(2003). tRuEcasing. In Hinrichs, E. and Roth, D., editors,Proceedings of the 41st Annual Meeting of the Associationfor Computational Linguistics, pp. 152-159.
McCallum, A. and Li, W. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of the Seventh Conference on Natural Language Learning At HLT-NAACL 2003—vol. 4 (Edmonton, Canada). Association for Computational Linguistics, Morristown, NJ, 188-191.
A. Agbago, R. Kuhn, and G. Foster. 2005. True-casing for the Portage system. In Recent Advances in Natural Language Processing, pp. 21-24, Borovets, Bulgaria, Sep. 21-23, 2005.
A. Arun, A. Axelrod, A. Birch Mayne, C. Callison-Burch, H. Hoang, P. Koehn, M. Osborne, and D. Talbot, "Edinburgh system description for the 2006 TC-STAR spoken language translation evaluation," in TC-STAR Workshop on Speech-to-Speech Translation, Barcelona, Spain, Jun. 2006, pp. 37-41.
Ruiqiang Zhang, Hirofumi Yamamoto, Michael Paul, Hideo Okuma, Keiji Yasuda, Yves Lepage, Etienne Denoual, Daichi Mochihashi, Andrew Finch, Eiichiro Sumita, "The NiCT-ATR Statistical Machine Translation System for the IWSLT 2006 Evaluation," submitted to IWSLT, 2006.
“Bangalore, S. and Rambow, O., “Using TAGs, a Tree Model, and a Language Model for Generation,” May 2000,Workshop TAG+5, Paris.”
Gale, W. and Church, K., "A Program for Aligning Sentences in Bilingual Corpora," 1993, Computational Linguistics, vol. 19, No. 1, pp. 75-102.
Ueffing et al., "Using POS Information for Statistical Machine Translation into Morphologically Rich Languages," In EACL, 2003: Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics, pp. 347-354.
Frederking et al., “Three Heads are Better Than One,” In Proceedings of the 4th Conference on Applied Natural Language Processing, Stuttgart, Germany, 1994, pp. 95-100.
Och et al., “Discriminative Training and Maximum Entropy Models for Statistical Machine Translation,” In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, 2002.
Yasuda et al., “Automatic Machine Translation Selection Scheme to Output the Best Result,” Proc of LREC, 2002, pp. 525-528.
Papineni et al., “Bleu: a Method for Automatic Evaluation of Machine Translation”, Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Jul. 2002, pp. 311-318.
Shaalan et al., “Machine Translation of English Noun Phrases into Arabic”, (2004), vol. 17, No. 2, International Journal of Computer Processing of Oriental Languages, 14 pages.
Isahara et al., “Analysis, Generation and Semantic Representation in CONTRAST—A Context-Based Machine Translation System”, 1995, Systems and Computers in Japan, vol. 26, No. 14, pp. 37-53.
Proz.com, Rates for proofreading versus Translating, http://www.proz.com/forum/business_issues/202-rates_for_proofreading_versus_translating.html, Apr. 23, 2009, retrieved Jul. 13, 2012.
Celine, Volume discounts on large translation projects, naked translations, http://www.nakedtranslations.com/en/2007/volume-discounts-on-large-translation-projects/, Aug. 1, 2007, retrieved Jul. 16, 2012.
Graehl, J. and Knight, K., May 2004, Training Tree Transducers, In NAACL-HLT (2004), pp. 105-112.
Niessen et al, “Statistical machine translation with scarce resources using morphosyntactic information”, Jun. 2004, Computational Linguistics, vol. 30, issue 2, pp. 181-204.
Liu et al., “Context Discovery Using Attenuated Bloom Filters in Ad-Hoc Networks,” Springer, pp. 13-25, 2006.
First Office Action mailed Jun. 7, 2004 in Canadian Patent Application 2408819, filed May 11, 2001.
First Office Action mailed Jun. 14, 2007 in Canadian Patent Application 2475857, filed Mar. 11, 2003.
Office Action mailed Mar. 26, 2012 in German Patent Application 10392450.7, filed Mar. 28, 2003.
First Office Action mailed Nov. 5, 2008 in Canadian Patent Application 2408398, filed Mar. 27, 2003.
Second Office Action mailed Sep. 25, 2009 in Canadian Patent Application 2408398, filed Mar. 27, 2003.
First Office Action mailed Mar. 1, 2005 in European Patent Application No. 03716920.8, filed Mar. 27, 2003.
Second Office Action mailed Nov. 9, 2006 in European Patent Application No. 03716920.8, filed Mar. 27, 2003.
Third Office Action mailed Apr. 30, 2008 in European Patent Application No. 03716920.8, filed Mar. 27, 2003.
Office Action mailed Oct. 25, 2011 in Japanese Patent Application 2007-536911 filed Oct. 12, 2005.
Office Action mailed Jul. 24, 2012 in Japanese Patent Application 2007-536911 filed Oct. 12, 2005.
Final Office Action mailed Apr. 9, 2013 in Japanese Patent Application 2007-536911 filed Oct. 12, 2005.
Office Action mailed May 13, 2005 in Chinese Patent Application 1812317.1, filed May 11, 2001.
Office Action mailed Apr. 21, 2006 in Chinese Patent Application 1812317.1, filed May 11, 2001.
Office Action mailed Jul. 19, 2006 in Japanese Patent Application 2003-577155, filed Mar. 11, 2003.
Office Action mailed Mar. 1, 2007 in Chinese Patent Application 3805749.2, filed Mar. 11, 2003.
Office Action mailed Feb. 27, 2007 in Japanese Patent Application 2002-590018, filed May 13, 2002.
Office Action mailed Jan. 26, 2007 in Chinese Patent Application 3807018.9, filed Mar. 27, 2003.
Office Action mailed Dec. 7, 2005 in Indian Patent Application 2283/DELNP/2004, filed Mar. 11, 2003.
Office Action mailed Mar. 31, 2009 in European Patent Application 3714080.3, filed Mar. 11, 2003.
Agichtein et al., “Snowball: Extracting Information from Large Plain-Text Collections,” ACM DL '00, the Fifth ACM Conference on Digital Libraries, Jun. 2, 2000, San Antonio, TX, USA.
Satake, Masaomi, "Anaphora Resolution for Named Entity Extraction in Japanese Newspaper Articles," Master's Thesis [online], Feb. 15, 2002, School of Information Science, JAIST, Nomi, Ishikawa, Japan.
Office Action mailed Aug. 29, 2006 in Japanese Patent Application 2003-581064, filed Mar. 27, 2003.
Office Action mailed Jan. 26, 2007 in Chinese Patent Application 3807027.8, filed Mar. 28, 2003.
Office Action mailed Jul. 25, 2006 in Japanese Patent Application 2003-581063, filed Mar. 28, 2003.
Huang et al., “A syntax-directed translator with extended domain of locality,” Jun. 9, 2006, In Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, pp. 1-8, New York City, New York, Association for Computational Linguistics.
Melamed et al., “Statistical machine translation by generalized parsing,” 2005, Technical Report 05-001, Proteus Project, New York University, http://nlp.cs.nyu.edu/pubs/.
Galley et al., “Scalable Inference and Training of Context-Rich Syntactic Translation Models,” Jul. 2006, In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pp. 961-968.
Huang et al., “Statistical syntax-directed translation with extended domain of locality,” Jun. 9, 2006, In Proceedings of AMTA, pp. 1-8.
Huang et al. Automatic Extraction of Named Entity Translingual Equivalence Based on Multi-Feature Cost Minimization. In Proceedings of the ACL 2003 Workshop on Multilingual and Mixed-Language Named Entity Recognition.
Langlais, P. et al., “TransType: a Computer-Aided Translation Typing System” EmbedMT '00 ANLP-NAACL 2000 Workshop: Embedded Machine Translation Systems, 2000, pp. 46-51. <http://acl.ldc.upenn.edu/W/W00/W00-0507.pdf>.
Alshawi, Hiyan, “Head Automata for Speech Translation”, Proceedings of the ICSLP 96, 1996, Philadelphia, Pennsylvania.
Ambati, V., "Dependency Structure Trees in Syntax Based Machine Translation," Spring 2008 Report <http://www.cs.cmu.edu/˜vamshi/publications/DependencyMT_report.pdf>, pp. 1-8.
Ballesteros, L. et al., "Phrasal Translation and Query Expansion Techniques for Cross-Language Information Retrieval," SIGIR 97, Philadelphia, PA, © 1997, pp. 84-91.
Bannard, C. and Callison-Burch, C., “Paraphrasing with Bilingual Parallel Corpora,” In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (Ann Arbor, MI, Jun. 25-30, 2005). Annual Meeting of the ACL Assoc. for Computational Linguistics, Morristown, NJ, 597-604. DOI=http://dx.doi.org/10.3115/1219840.
Berhe, G. et al., "Modeling Service-based Multimedia Content Adaptation in Pervasive Computing," CF '04 (Ischia, Italy) Apr. 14-16, 2004, pp. 60-69.
Boitet, C. et al., “Main Research Issues in Building Web Services,” Proc. of the 6th Symposium on Natural Language Processing, Human and Computer Processing of Language and Speech, © 2005, pp. 1-11.
Brill, Eric, “Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging”, 1995, Association for Computational Linguistics, vol. 21, No. 4, pp. 1-37.
Callison-Burch, C. et al., “Statistical Machine Translation with Word- and Sentence-aligned Parallel Corpora,” In Proceedings of the 42nd Meeting on Assoc. for Computational Linguistics (Barcelona, Spain, Jul. 21-26, 2004) Annual Meeting of the ACL. Assoc. for Computational Linguistics, Morristown, NJ, 1.
Cheng, P. et al., “Creating Multilingual Translation Lexicons with Regional Variations Using Web Corpora,” In Proceedings of the 42nd Annual Meeting on Assoc. for Computational Linguistics (Barcelona, Spain, Jul. 21-26, 2004). Annual Meeting of the ACL. Assoc. for Computational Linguistics, Morristown, NJ, 53.
Cheung et al., “Sentence Alignment in Parallel, Comparable, and Quasi-comparable Corpora”, In Proceedings of LREC, 2004, pp. 30-33.
Cohen et al., “Spectral Bloom Filters,” SIGMOD 2003, Jun. 9-12, 2003, ACM pp. 241-252.
Cohen, “Hardware-Assisted Algorithm for Full-text Large-dictionary String Matching Using n-gram Hashing,” 1998, Information Processing and Management, vol. 34, No. 4, pp. 443-464.
Covington, “An Algorithm to Align Words for Historical Comparison”, Computational Linguistics, 1996, 22(4), pp. 481-496.
Eisner, Jason, "Learning Non-Isomorphic Tree Mappings for Machine Translation," 2003, in Proc. of the 41st Meeting of the ACL, pp. 205-208.
Fleming, Michael et al., “Mixed-Initiative Translation of Web Pages,” AMTA 2000, LNAI 1934, Springer-Verlag, Berlin, Germany, 2000, pp. 25-29.
Franz Josef Och, Hermann Ney: "Improved Statistical Alignment Models" ACL00: Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, [Online] Oct. 2-6, 2000, pp. 440-447, XP002279144, Hong Kong, China. Retrieved from the Internet: <URL:http://www-i6.informatik.rwth-aachen.de/Colleagues/och/ACL00.ps> [retrieved on May 6, 2004] abstract.
Fuji, Ren and Hongchi Shi, “Parallel Machine Translation: Principles and Practice,” Engineering of Complex Computer Systems, 2001 Proceedings, Seventh IEEE Int'l Conference, pp. 249-259, 2001.
Fung et al., "Mining Very-Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and EM," In EMNLP 2004.
Galley et al., “What's in a translation rule?”, 2004, in Proc. of HLT/NAACL '04, pp. 1-8.
Gaussier et al., "A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora," In Proceedings of ACL, Jul. 2004.
Gildea, D., “Loosely Tree-based Alignment for Machine Translation,” In Proceedings of the 41st Annual Meeting on Assoc. for Computational Linguistics—vol. 1 (Sapporo, Japan, Jul. 7-12, 2003). Annual Meeting of the ACL Assoc. for Computational Linguistics, Morristown, NJ, 80-87. DOI=http://dx.doi.org/10.3115/1075096.1075107.
Grossi et al., "Suffix Trees and their applications in string algorithms," In Proceedings of the 1st South American Workshop on String Processing, Sep. 1993, pp. 57-76.
Gupta et al., "Kelips: Building an Efficient and Stable P2P DHT through Increased Memory and Background Overhead," 2003 IPTPS, LNCS 2735, pp. 160-169.
Habash, Nizar, "The Use of a Structural N-gram Language Model in Generation-Heavy Hybrid Machine Translation," University of Maryland, Univ. Institute for Advanced Computer Studies, Sep. 8, 2004.
Huang et al., “Relabeling Syntax Trees to Improve Syntax-Based Machine Translation Quality,” Jun. 4-9, 2006, in Proc. of the Human Language Technology Conference of the North American Chapter of the ACL, pp. 240-247.
Imamura et al., “Feedback Cleaning of Machine Translation Rules Using Automatic Evaluation,” 2003 Computational Linguistics, pp. 447-454.
Klein et al., "Accurate Unlexicalized Parsing," Jul. 2003, in Proc. of the 41st Annual Meeting of the ACL, pp. 423-430.
Koehn, Philipp, “Noun Phrase Translation,” A PhD Dissertation for the University of Southern California, pp. xiii, 23, 25-57, 72-81, Dec. 2003.
Kupiec, Julian, “An Algorithm for Finding Noun Phrase Correspondences in Bilingual Corpora,” In Proceedings of the 31st Annual Meeting of the ACL, 1993, pp. 17-22.
Lee, Y.S., "Neural Network Approach to Adaptive Learning: with an Application to Chinese Homophone Disambiguation," IEEE pp. 1521-1526, 2001.
Llitjos, A. F. et al., “The Translation Correction Tool: English-Spanish User Studies,” Citeseer © 2004, downloaded from: http://gs37.sp.cs.cmu.edu/ari/papers/lrec04/fontll, pp. 1-4.
McDevitt, K. et al., “Designing of a Community-based Translation Center,” Technical Report TR-03-30, Computer Science, Virginia Tech, © 2003, pp. 1-8.
Metze, F. et al., “The NESPOLE! Speech-to-Speech Translation System,” Proc. of the HLT 2002, 2nd Int'l Conf. on Human Language Technology (San Francisco, CA), © 2002, pp. 378-383.
Mohri, Mehryar, “Regular Approximation of Context Free Grammars Through Transformation”, 2000, pp. 251-261, “Robustness in Language and Speech Technology”, Chapter 9, Kluwer Academic Publishers.
Nagao, K. et al., “Semantic Annotation and Transcoding: Making Web Content More Accessible,” IEEE Multimedia, vol. 8, Issue 2 Apr.-Jun. 2001, pp. 69-81.
Norvig, Peter, "Techniques for Automatic Memoization with Applications to Context-Free Parsing," Computational Linguistics, 1991, pp. 91-98, vol. 17, No. 1.
Och et al. “A Smorgasbord of Features for Statistical Machine Translation.” HLTNAACL Conference. Mar. 2004, 8 pages.
Och, F., “Minimum Error Rate Training in Statistical Machine Translation,” In Proceedings of the 41st Annual Meeting on Assoc. for Computational Linguistics—vol. 1 (Sapporo, Japan, Jul. 7-12, 2003). Annual Meeting of the ACL. Assoc. for Computational Linguistics, Morristown, NJ, 160-167. DOI= http://dx.doi.org/10.3115/1075096.
Och, F. and Ney, H., “A Systematic Comparison of Various Statistical Alignment Models,” Computational Linguistics, 2003, 29:1, 19-51.
Perugini, Saverio et al., "Enhancing Usability in CITIDEL: Multimodal, Multilingual and Interactive Visualization Interfaces," JCDL '04, Tucson, AZ, Jun. 7-11, 2004, pp. 315-324.
Petrov et al., “Learning Accurate, Compact and Interpretable Tree Annotation,” Jun. 4-9, 2006, in Proc. of the Human Language Technology Conference of the North American Chapter of the ACL, pp. 433-440.
Qun, Liu, “A Chinese-English Machine Translation System Based on Micro-Engine Architecture,” An Int'l. Conference on Translation and Information Technology, Hong Kong, Dec. 2000, pp. 1-10.
Rayner et al.,“Hybrid Language Processing in the Spoken Language Translator,” IEEE, pp. 107-110, 1997.
Rogati et al., “Resource Selection for Domain-Specific Cross-Lingual IR,” ACM 2004, pp. 154-161.
Kumar, S. and Byrne, W., "Minimum Bayes-Risk Decoding for Statistical Machine Translation." HLT-NAACL Conference. Mar. 2004, 8 pages.
Shirai, S., “A Hybrid Rule and Example-based Method for Machine Translation,” NTT Communication Science Laboratories, pp. 1-5, 1997.
Tanaka, K. and Iwasaki, H. “Extraction of Lexical Translations from Non-Aligned Corpora,” Proceedings of COLING 1996.
Taskar, B., et al., “A Discriminative Matching Approach to Word Alignment,” In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (Vancouver, BC, Canada, Oct. 6-8, 2005). Human Language Technology Conference. Assoc. for Computational Linguistics, Morristown, NJ.
Tomas, J., "Binary Feature Classification for Word Disambiguation in Statistical Machine Translation," Proceedings of the 2nd Int'l. Workshop on Pattern Recognition, 2002, pp. 1-12.
Uchimoto, K. et al., “Word Translation by Combining Example-based Methods and Machine Learning Models,” Natural Language Processing (Shizen Gengo Shori), vol. 10, No. 3, Apr. 2003, pp. 87-114.
Uchimoto, K. et al., “Word Translation by Combining Example-based Methods and Machine Learning Models,”Natural Language Processing (Shizen Gengo Shori), vol. 10, No. 3, Apr. 2003, pp. 87-114. (English Translation).
Varga et al. “Parallel corpora for medium density languages”, In Proceedings of RANLP 2005, pp. 590-596.
Yamada K., “A Syntax-Based Statistical Translation Model,” 2002 PhD Dissertation, pp. 1-141.
Yamamoto et al., “Acquisition of Phrase-level Bilingual Correspondence using Dependency Structure” In Proceedings of COLING—2000, pp. 933-939.
Zhang et al., “Synchronous Binarization for Machine Translations,” Jun. 4-9, 2006, in Proc. of the Human Language Technology Conference of the North American Chapter of the ACL, pp. 256-263.
Zhang et al., “Distributed Language Modeling for N-best List Re-ranking,” In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (Sydney, Australia, Jul. 22-23, 2006). ACL Workshops. Assoc. for Computational Linguistics, Morristown, NJ, 216-223.
Patent Cooperation Treaty International Preliminary Report on Patentability and The Written Opinion, International application No. PCT/US2008/004296, Oct. 6, 2009, 5 pgs.
Document, Wikipedia.com, web.archive.org (Feb. 24, 2004) <http://web.archive.org/web/20040222202831/http://en.wikipedia.org/wiki/Document>, Feb. 24, 2004.
Identifying, Dictionary.com, wayback.archive.org (Feb. 28, 2007) <http://wayback.archive.org/web/20050101000000*/http://dictionary.reference.com/browse/identifying>, Feb. 28, 2005 <http://web.archive.org/web/20070228150533/http://dictionary.reference.com/browse/identifying>.
Koehn, P., et al., "Statistical Phrase-Based Translation," Proceedings of HLT-NAACL 2003 Main Papers, pp. 48-54, Edmonton, May-Jun. 2003.
Abney, S.P., "Stochastic Attribute Value Grammars," Association for Computational Linguistics, 1997, pp. 597-618.
Fox, H., “Phrasal Cohesion and Statistical Machine Translation” Proceedings of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, Jul. 2002, pp. 304-311. Association for Computational Linguistics. <URL: http://acl.ldc.upenn.edu/W/W02/W02-1039.pdf>.
Tillman, C., et al, “Word Reordering and a Dynamic Programming Beam Search Algorithm for Statistical Machine Translation” <URL: http://acl.ldc.upenn.edu/J/J03/J03-1005.pdf>, 2003.
Wang, W., et al. "Capitalizing Machine Translation" In HLT-NAACL '06 Proceedings Jun. 2006. <http://www.isi.edu/natural-language/mt/hlt-naacl-06-wang.pdf>.
Notice of Allowance mailed Dec. 10, 2013 in Japanese Patent Application 2007-536911, filed Oct. 12, 2005.
Abney, Stephen, “Parsing by Chunks,” 1991, Principle-Based Parsing: Computation and Psycholinguistics, vol. 44, pp. 257-279.
Al-Onaizan et al., “Statistical Machine Translation,” 1999, JHU Summer Tech Workshop, Final Report, pp. 1-42.
Al-Onaizan, Y. and Knight, K., “Named Entity Translation: Extended Abstract” 2002, Proceedings of HLT-02, San Diego, CA.
Al-Onaizan, Y. and Knight, K., “Translating Named Entities Using Monolingual and Bilingual Resources,” 2002, Proc. of the 40th Annual Meeting of the ACL,pp. 400-408.
Al-Onaizan et al., “Translating with Scarce Resources,” 2000, 17th National Conference of the American Association for Artificial Intelligence, Austin, TX, pp. 672-678.
Alshawi et al., “Learning Dependency Translation Models as Collections of Finite-State Head Transducers,” 2000, Computational Linguistics, vol. 26, pp. 45-60.
Arbabi et al., “Algorithms for Arabic name transliteration,” Mar. 1994, IBM Journal of Research and Development, vol. 38, Issue 2, pp. 183-194.
Barnett et al., “Knowledge and Natural Language Processing,” Aug. 1990, Communications of the ACM, vol. 33, Issue 8, pp. 50-71.
Bangalore, S. and Rambow, O., “Corpus-Based Lexical Choice in Natural Language Generation,” 2000, Proc. of the 38th Annual ACL, Hong Kong, pp. 464-471.
Bangalore, S. and Rambow, O., “Exploiting a Probabilistic Hierarchical Model for Generation, ” 2000, Proc. of 18th conf. on Computational Linguistics, vol. 1, pp. 42-48.
Bangalore, S. and Rambow, O., “Evaluation Metrics for Generation,” 2000, Proc. of the 1st International Natural Language Generation Conf., vol. 14, p. 1-8.
Bangalore, S. and Rambow, O., “Using TAGs, a Tree Model, and a Language Model for Generation,” May 2000, Workshop TAG+5, Paris.
Baum, Leonard, “An Inequality and Associated Maximization Technique in Statistical Estimation for Probabilistic Functions of Markov Processes”, 1972, Inequalities 3:1-8.
Bikel et al., “An Algorithm that Learns What's in a Name,” 1999, Machine Learning Journal Special Issue on Natural Language Learning, vol. 34, pp. 211-232.
Brants, Thorsten, “TnT—A Statistical Part-of-Speech Tagger,” 2000, Proc. of the 6th Applied Natural Language Processing Conference, Seattle.
Brill, Eric. “Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging”, 1995, Computational Linguistics, vol. 21, No. 4, pp. 543-565.
Brown et al., “A Statistical Approach to Machine Translation,” Jun. 1990, Computational Linguistics, vol. 16, No. 2, pp. 79-85.
Brown, Ralf, “Automated Dictionary Extraction for “Knowledge-Free” Example-Based Translation,” 1997, Proc. of 7th Int'l Conf. on Theoretical and Methodological Issues in MT, Santa Fe, NM, pp. 111-118.
Brown et al., “The Mathematics of Statistical Machine Translation: Parameter Estimation,” 1993, Computational Linguistics, vol. 19, Issue 2, pp. 263-311.
Brown et al., “Word-Sense Disambiguation Using Statistical Methods,” 1991, Proc. of 29th Annual ACL, pp. 264-270.
Carl, Michael. “A Constructivist Approach to Machine Translation,” 1998, New Methods of Language Processing and Computational Natural Language Learning, pp. 247-256.
Chen, K. and Chen, H., “Machine Translation: An Integrated Approach,” 1995, Proc. of 6th Int'l Conf. on Theoretical and Methodological Issue in MT, pp. 287-294.
Chinchor, Nancy, “MUC-7 Named Entity Task Definition,” 1997, Version 3.5.
Clarkson, P. and Rosenfeld, R., “Statistical Language Modeling Using the CMU-Cambridge Toolkit”, 1997, Proc. ESCA Eurospeech, Rhodes, Greece, pp. 2707-2710.
Corston-Oliver, Simon, "Beyond String Matching and Cue Phrases: Improving Efficiency and Coverage in Discourse Analysis," 1998, The AAAI Spring Symposium on Intelligent Text Summarization, pp. 9-15.
Dagan, I. and Itai, A., “Word Sense Disambiguation Using a Second Language Monolingual Corpus”, 1994, Computational Linguistics, vol. 20, No. 4, pp. 563-596.
Dempster et al., “Maximum Likelihood from Incomplete Data via the EM Algorithm”, 1977, Journal of the Royal Statistical Society, vol. 39, No. 1, pp. 1-38.
Diab, M. and Finch, S., "A Statistical Word-Level Translation Model for Comparable Corpora," 2000, In Proc. of the Conference on Content-Based Multimedia Information Access (RIAO).
Elhadad, M. and Robin, J., “An Overview of SURGE: a Reusable Comprehensive Syntactic Realization Component,” 1996, Technical Report 96-03, Department of Mathematics and Computer Science, Ben Gurion University, Beer Sheva, Israel.
Elhadad, M. and Robin, J., “Controlling Content Realization with Functional Unification Grammars”, 1992, Aspects of Automated Natural Language Generation, Dale et al. (eds)., Springer Verlag, pp. 89-104.
Elhadad et al., “Floating Constraints in Lexical Choice”, 1996, ACL, 23(2): 195-239.
Elhadad, Michael, “FUF: the Universal Unifier User Manual Version 5.2”, 1993, Department of Computer Science, Ben Gurion University, Beer Sheva, Israel.
Elhadad, M. and Robin, J., "SURGE: a Comprehensive Plug-in Syntactic Realization Component for Text Generation," 1999 (available at http://www.cs.bgu.ac.il/˜elhadad/pub.html).
Elhadad, Michael, “Using Argumentation to Control Lexical Choice: A Functional Unification Implementation”, 1992, Ph.D. Thesis, Graduate School of Arts and Sciences, Columbia University.
Fung, Pascale, “Compiling Bilingual Lexicon Entries From a Non-Parallel English-Chinese Corpus”, 1995, Proc. of the Third Workshop on Very Large Corpora, Boston, MA, pp. 173-183.
Fung, P. and Yee, L., “An IR Approach for Translating New Words from Nonparallel, Comparable Texts”, 1998, 36th Annual Meeting of the ACL, 17th International Conference on Computational Linguistics, pp. 414-420.
Gale, W. and Church, K., “A Program for Aligning Sentences in Bilingual Corpora,” 1991, 29th Annual Meeting of the ACL, pp. 177-183.
Germann, Ulrich, “Building a Statistical Machine Translation System from Scratch: How Much Bang for the Buck Can We Expect?” Proc. of the Data-Driven MT Workshop of ACL-01, Toulouse, France, 2001.
Germann et al., “Fast Decoding and Optimal Decoding for Machine Translation”, 2001, Proc. of the 39th Annual Meeting of the ACL, Toulouse, France, pp. 228-235.
Diab, Mona, “An Unsupervised Method for Multilingual Word Sense Tagging Using Parallel Corpora: A Preliminary Investigation”, 2000, SIGLEX Workshop on Word Senses and Multi-Linguality, pp. 1-9.
Grefenstette, Gregory, “The World Wide Web as a Resource for Example-Based Machine Translation Tasks”, 1999, Translating and the Computer 21, Proc. of the 21st International Conf. on Translating and the Computer, London, UK, 12 pp.
Hatzivassiloglou, V. et al., “Unification-Based Glossing”, 1995, Proc. of the International Joint Conference on Artificial Intelligence, pp. 1382-1389.
Ide, N. and Veronis, J., “Introduction to the Special Issue on Word Sense Disambiguation: The State of the Art”, Mar. 1998, Computational Linguistics, vol. 24, Issue 1, pp. 2-40.
Imamura, Kenji, “Hierarchical Phrase Alignment Harmonized with Parsing”, 2001, in Proc. of NLPRS, Tokyo.
Jelinek, F., “Fast Sequential Decoding Algorithm Using a Stack”, Nov. 1969, IBM J. Res. Develop., vol. 13, No. 6, pp. 675-685.
Jones, K. Sparck, “Experiments in Relevance Weighting of Search Terms”, 1979, Information Processing & Management, vol. 15, Pergamon Press Ltd., UK, pp. 133-144.
Knight, K. and Yamada, K., “A Computational Approach to Deciphering Unknown Scripts,” 1999, Proc. of the ACL Workshop on Unsupervised Learning in Natural Language Processing.
Knight, K. and Al-Onaizan, Y., “A Primer on Finite-State Software for Natural Language Processing”, 1999 (available at http://www.isi.edu/licensed-sw/carmel).
Knight, Kevin, “A Statistical MT Tutorial Workbook,” 1999, JHU Summer Workshop (available at http://www.isi.edu/natural-language/mt/wkbk.rtf).
Knight, Kevin, “Automating Knowledge Acquisition for Machine Translation,” 1997, Al Magazine 18(4).
Knight, K. and Chander, I., "Automated Postediting of Documents," 1994, Proc. of the 12th Conference on Artificial Intelligence, pp. 779-784.
Knight, K. and Luk, S., “Building a Large-Scale Knowledge Base for Machine Translation,” 1994, Proc. of the 12th Conference on Artificial Intelligence, pp. 773-778.
Knight, Kevin, “Connectionist Ideas and Algorithms,” Nov. 1990, Communications of the ACM, vol. 33, No. 11, pp. 59-74.
Knight, Kevin, “Decoding Complexity in Word-Replacement Translation Models”, 1999, Computational Linguistics, 25(4).
Knight et al., “Filling Knowledge Gaps in a Broad-Coverage Machine Translation System”, 1995, Proc. of the14th International Joint Conference on Artificial Intelligence, Montreal, Canada, vol. 2, pp. 1390-1396.
Knight, Kevin, “Integrating Knowledge Acquisition and Language Acquisition,” May 1992, Journal of Applied Intelligence, vol. 1, No. 4.
Knight et al., “Integrating Knowledge Bases and Statistics in MT,” 1994, Proc. of the Conference of the Association for Machine Translation in the Americas.
Knight, Kevin, “Learning Word Meanings by Instruction,”1996, Proc. of the National Conference on Artificial Intelligence, vol. 1, pp. 447-454.
Knight, K. and Graehl, J., “Machine Transliteration”, 1997, Proc. of the ACL-97, Madrid, Spain.
Knight, K. et al., “Machine Transliteration of Names in Arabic Text,” 2002, Proc. of the ACL Workshop on Computational Approaches to Semitic Languages.
Knight, K. and Marcu, D., “Statistics-Based Summarization—Step One: Sentence Compression,” 2000, American Association for Artificial Intelligence Conference, pp. 703-710.
Knight et al., “Translation with Finite-State Devices,” 1998, Proc. of the 3rd AMTA Conference, pp. 421-437.
Knight, K. and Hatzivassiloglou, V., “Two-Level, Many-Paths Generation,” 1995, Proc. of the 33rd Annual Conference of the ACL, pp. 252-260.
Knight, Kevin, “Unification: A Multidisciplinary Survey,” 1989, ACM Computing Surveys, vol. 21, No. 1.
Koehn, P. and Knight, K., “ChunkMT: Statistical Machine Translation with Richer Linguistic Knowledge,” Apr. 2002, Information Sciences Institution.
Koehn, P. and Knight, K., “Estimating Word Translation Probabilities from Unrelated Monolingual Corpora Using the EM Algorithm,” 2000, Proc. of the 17th meeting of the AAAI.
Koehn, P. and Knight, K., “Knowledge Sources for Word-Level Translation Models,” 2001, Conference on Empirical Methods in Natural Language Processing.
Kurohashi, S. and Nagao, M., "Automatic Detection of Discourse Structure by Checking Surface Information in Sentences," 1994, Proc. of COLING '94, vol. 2, pp. 1123-1127.
Langkilde-Geary, Irene, “An Empirical Verification of Coverage and Correctness for a General-Purpose Sentence Generator,” 1998, Proc. 2nd Int'l Natural Language Generation Conference.
Langkilde-Geary, Irene, “A Foundation for General-Purpose Natural Language Generation: Sentence Realization Using Probabilistic Models of Language,” 2002, Ph.D. Thesis, Faculty of the Graduate School, University of Southern California.
Langkilde, Irene, “Forest-Based Statistical Sentence Generation,” 2000, Proc. of the 1st Conference on North American chapter of the ACL, Seattle, WA, pp. 170-177.
Langkilde, I. and Knight, K., “The Practical Value of N-Grams in Generation,” 1998, Proc. of the 9th International Natural Language Generation Workshop, p. 248-255.
Langkilde, I. and Knight, K., “Generation that Exploits Corput-Based Statistical Knowledge,” 1998, Proc. of the COLING—ACL, pp. 704-710.
Mann, G. and Yarowsky, D., “Multipath Translation Lexicon Induction via Bridge Languages,” 2001, Proc. of the 2nd Conference of the North American Chapter of the ACL, Pittsburgh, PA, pp. 151-158.
Manning, C. and Schutze, H., “Foundations of Statistical Natural Language Processing,” 2000, The MIT Press, Cambridge, MA [redacted].
Marcu, D. and Wong, W., "A Phrase-Based, Joint Probability Model for Statistical Machine Translation," 2002, Proc. of ACL-02 conference on Empirical Methods in Natural Language Processing, vol. 10, pp. 133-139.
Marcu, Daniel, “Building Up Rhetorical Structure Trees,” 1996, Proc. of the National Conference on Artificial Intelligence and Innovative Applications of Artificial Intelligence Conference, vol. 2, pp. 1069-1074.
Marcu, Daniel, “Discourse trees are good indicators of importance in text,” 1999, Advances in Automatic Text Summarization, The MIT Press, Cambridge, MA.
Marcu, Daniel, “Instructions for Manually Annotating the Discourse Structures of Texts,” 1999, Discourse Annotation, pp. 1-49.
Marcu, Daniel, “The Rhetorical Parsing of Natural Language Texts,” 1997, Proceedings of ACL/EACL '97, pp. 96-103.
Marcu, Daniel, “The Rhetorical Parsing, Summarization, and Generation of Natural Language Texts,” 1997, Ph.D. Thesis, Graduate Department of Computer Science, University of Toronto.
Marcu, Daniel, “Towards a Unified Approach to Memory- and Statistical-Based Machine Translation,” 2001, Proc. of the 39th Annual Meeting of the ACL, pp. 378-385.
Melamed, I. Dan, “A Word-to-Word Model of Translational Equivalence,” 1997, Proc. of the 35th Annual Meeting of the ACL, Madrid, Spain, pp. 490-497.
Melamed, I. Dan, “Automatic Evaluation and Uniform Filter Cascades for Inducing N-Best Translation Lexicons,” 1995, Proc. of the 3rd Workshop on Very Large Corpora, Boston, MA, pp. 184-198.
Melamed, I. Dan, “Empirical Methods for Exploiting Parallel Texts,” 2001, MIT Press, Cambridge, MA [table of contents].
Meng et al., “Generating Phonetic Cognates to Handle Named Entities in English-Chinese Cross-Language Spoken Document Retrieval,” 2001, IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 311-314.
Miike et al., "A full-text retrieval system with a dynamic abstract generation function," 1994, Proceedings of SIGIR '94, pp. 152-161.
Mikheev et al., "Named Entity Recognition without Gazetteers," 1999, Proc. of European Chapter of the ACL, Bergen, Norway, pp. 1-8.
Monasson et al., “Determining computational complexity from characteristic ‘phase transitions’,” Jul. 1999, Nature Magazine, vol. 400, pp. 133-137.
Mooney, Raymond, “Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning,” 1996, Proc. of the Conference on Empirical Methods in Natural Language Processing, pp. 82-91.
Niessen, S. and Ney, H., "Toward hierarchical models for statistical machine translation of inflected languages," 2001, Data-Driven Machine Translation Workshop, Toulouse, France, pp. 47-54.
Och, F. and Ney, H., "Improved Statistical Alignment Models," 2000, 38th Annual Meeting of the ACL, Hong Kong, pp. 440-447.
Och et al., “Improved Alignment Models for Statistical Machine Translation,” 1999, Proc. of the Joint Conf. of Empirical Methods in Natural Language Processing and Very Large Corpora, pp. 20-28.
Papineni et al., “Bleu: a Method for Automatic Evaluation of Machine Translation,” 2001, IBM Research Report, RC22176.
Pla et al., “Tagging and Chunking with Bigrams,” 2000, Proc. of the 18th Conference on Computational Linguistics, vol. 2, pp. 614-620.
Rapp, Reinhard, "Automatic Identification of Word Translations from Unrelated English and German Corpora," 1999, 37th Annual Meeting of the ACL, pp. 519-526.
Rapp, Reinhard, “Identifying Word Translations in Non-Parallel Texts,” 1995, 33rd Annual Meeting of the ACL, pp. 320-322.
Resnik, P. and Yarowsky, D., “A Perspective on Word Sense Disambiguation Methods and Their Evaluation,” 1997, Proceedings of SIGLEX '97, Washington, DC, pp. 79-86.
Resnik, Philip, “Mining the Web for Bilingual Text,” 1999, 37th Annual Meeting of the ACL, College Park, MD, pp. 527-534.
Rich, E. and Knight, K., “Artificial Intelligence, Second Edition,” 1991, McGraw-Hill Book Company [redacted].
Richard et al., “Visiting the Traveling Salesman Problem with Petri nets and application in the glass industry,” Feb. 1996, IEEE Emerging Technologies and Factory Automation, pp. 238-242.
Robin, Jacques, “Revision-Based Generation of Natural Language Summaries Providing Historical Background: Corpus-Based Analysis, Design Implementation and Evaluation,” 1994, Ph.D. Thesis, Columbia University, New York.
Sang, E. and Buchholz, S., "Introduction to the CoNLL-2000 Shared Task: Chunking," 2000, Proc. of CoNLL-2000 and LLL-2000, Lisbon, Portugal, pp. 127-132.
Schmid, H., and Walde, S., “Robust German Noun Chunking With a Probabilistic Context-Free Grammar,” 2000, Proc. of the 18th Conference on Computational Linguistics, vol. 2, pp. 726-732.
Selman et al., “A New Method for Solving Hard Satisfiability Problems,” 1992, Proc. of the 10th National Conference on Artificial Intelligence, San Jose, CA, pp. 440-446.
Schutze, Hinrich, “Automatic Word Sense Discrimination,” 1998, Computational Linguistics, Special Issue on Word Sense Disambiguation, vol. 24, Issue 1, pp. 97-123.
Sobashima et al., “A Bidirectional Transfer-Driven Machine Translation System for Spoken Dialogues,” 1994, Proc. of 15th Conference on Computational Linguistics, vol. 1, pp. 64-68.
Shapiro, Stuart (ed.), “Encyclopedia of Artificial Intelligence, 2nd edition”, vol. 2, 1992, John Wiley & Sons Inc; “Unification” article, K. Knight, pp. 1630-1637.
Soricut et al., “Using a large monolingual corpus to improve translation accuracy,” 2002, Lecture Notes in Computer Science, vol. 2499, Proc. of the 5th Conference of the Association for Machine Translation in the Americas on Machine Translation: From Research to Real Users, pp. 155-164.
Stalls, B. and Knight, K., "Translating Names and Technical Terms in Arabic Text," 1998, Proc. of the COLING/ACL Workshop on Computational Approaches to Semitic Languages.
Sun et al., “Chinese Named Entity Identification Using Class-based Language Model,” 2002, Proc. of 19th International Conference on Computational Linguistics, Taipei, Taiwan, vol. 1, pp. 1-7.
Sumita et al., "A Discourse Structure Analyzer for Japanese Text," 1992, Proc. of the International Conference on Fifth Generation Computer Systems, vol. 2, pp. 1133-1140.
Taylor et al., “The Penn Treebank: An Overview,” in A. Abeill (ed.), Treebanks: Building and Using Parsed Corpora, 2003, pp. 5-22.
Tiedemann, Jorg, "Automatic Construction of Weighted String Similarity Measures," 1999, In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora.
Tillmann et al., “A DP based Search Using Monotone Alignments in Statistical Translation,” 1997, Proc. of the Annual Meeting of the ACL, pp. 366-372.
Tillman, C. and Xia, F., “A Phrase-Based Unigram Model for Statistical Machine Translation,” 2003, Proc. of the North American Chapter of the ACL on Human Language Technology, vol. 2, pp. 106-108.
Veale, T. and Way, A., “Gaijin: A Bootstrapping, Template-Driven Approach to Example-Based MT,” 1997, Proc. of New Methods in Natural Language Processing (NEMPLP97), Sofia, Bulgaria.
Vogel, S. and Ney, H., "Construction of a Hierarchical Translation Memory," 2000, Proc. of COLING 2000, Saarbrucken, Germany, pp. 1131-1135.
Vogel et al., “The CMU Statistical Machine Translation System,” 2003, Machine Translation Summit IX, New Orleans, LA.
Vogel et al., “The Statistical Translation Module in the Verbmobil System,” 2000, Workshop on Multi-Lingual Speech Communication, pp. 69-74.
Wang, Ye-Yi, "Grammar Inference and Statistical Machine Translation," 1998, Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA.
Watanabe et al., "Statistical Machine Translation Based on Hierarchical Phrase Alignment," 2002, 9th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-2002), Keihanna, Japan, pp. 188-198.
Witbrock, M. and Mittal, V., “Ultra-Summarization: A Statistical Approach to Generating Highly Condensed Non-Extractive Summaries,” 1999, Proc. of SIGIR '99, 22nd International Conference on Research and Development in Information Retrieval, Berkeley, CA, pp. 315-316.
Wang, Y. and Waibel, A., “Decoding Algorithm in Statistical Machine Translation,” 1996, Proc. of the 35th Annual Meeting of the ACL, pp. 366-372.
Wu, Dekai, “Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora,” 1997, Computational Linguistics, vol. 23, Issue 3, pp. 377-403.
Wu, Dekai, “A Polynomial-Time Algorithm for Statistical Machine Translation,” 1996, Proc. of 34th Annual Meeting of the ACL, pp. 152-158.
Yamada, K. and Knight, K., “A Decoder for Syntax-based Statistical MT,” 2001, Proceedings of the 40th Annual Meeting of the ACL, pp. 303-310.
Yamada, K. and Knight, K. “A Syntax-based Statistical Translation Model,” 2001, Proc. of the 39th Annual Meeting of the ACL, pp. 523-530.
Yamamoto et al., “A Comparative Study on Translation Units for Bilingual Lexicon Extraction,” 2001, Japan Academic Association for Copyright Clearance, Tokyo, Japan.
Yarowsky, David, “Unsupervised Word Sense Disambiguation Rivaling Supervised Methods,” 1995, 33rd Annual Meeting of the ACL, pp. 189-196.
Callan et al., “TREC and TIPSTER Experiments with INQUERY,” 1994, Information Processing and Management, vol. 31, Issue 3, pp. 327-343.
Cohen, Yossi, “Interpreter for FUF,” (available at ftp://ftp.cs.bgu.ac.il/pub/people/elhadad/fuf-life.lf).
Mohri, M. and Riley, M., “An Efficient Algorithm for the N-Best-Strings Problem,” 2002, Proc. of the 7th Int. Conf. on Spoken Language Processing (ICSLP'02), Denver, CO, pp. 1313-1316.
Nederhof, M. and Satta, G., “IDL-Expressions: A Formalism for Representing and Parsing Finite Languages in Natural Language Processing,” 2004, Journal of Artificial Intelligence Research, vol. 21, pp. 281-287.
Och, F. and Ney, H., “Discriminative Training and Maximum Entropy Models for Statistical Machine Translation,” 2002, Proc. of the 40th Annual Meeting of the ACL, Philadelphia, PA, pp. 295-302.
Resnik, P. and Smith, A., “The Web as a Parallel Corpus,” Sep. 2003, Computational Linguistics, Special Issue on Web as Corpus, vol. 29, Issue 3, pp. 349-380.
Russell, S. and Norvig, P., “Artificial Intelligence: A Modern Approach,” 1995, Prentice-Hall, Inc., New Jersey [redacted—table of contents].
Ueffing et al., “Generation of Word Graphs in Statistical Machine Translation,” 2002, Proc. of Empirical Methods in Natural Language Processing (EMNLP), pp. 156-163.
Kumar, R. and Li, H., “Integer Programming Approach to Printed Circuit Board Assembly Time Optimization,” 1995, IEEE Transactions on Components, Packaging, and Manufacturing, Part B: Advance Packaging, vol. 18, No. 4, pp. 720-727.