Efficient online domain adaptation

Information

  • Patent Grant
  • Patent Number
    9,213,694
  • Date Filed
    Thursday, October 10, 2013
  • Date Issued
    Tuesday, December 15, 2015
Abstract
Systems and methods for efficient online domain adaptation are provided herein. Methods may include receiving a post-edited machine translated sentence pair, updating a machine translation model by adjusting translation weights for a translation memory and a language model while generating test machine translations of the machine translated sentence pair until one of the test machine translations approximately matches the post-edits for the machine translated sentence pair, and retranslating the remaining machine translation sentence pairs that have yet to be post-edited using the updated machine translation model.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This Non-Provisional patent application is related to U.S. patent application Ser. No. 13/685,372, titled “Personalized Machine Translation via Online Adaptation”, which was filed on Nov. 26, 2012, which is hereby incorporated by reference herein in its entirety including all references cited therein.


FIELD OF THE TECHNOLOGY

Embodiments of the disclosure relate to machine translation systems and methods. More specifically, but not by way of limitation, the present technology includes systems and methods that provide efficient online domain adaptation where updates to a machine translation system occur as soon as post-edits to machine translations are received by the machine translation system.


BACKGROUND OF THE DISCLOSURE

Post-edit data, such as human translator feedback, may be created by human translators in order to correct a machine translation sentence pair. For example, a machine translation sentence pair may include a source sentence unit, such as a word or phrase, as well as a machine translation generated target sentence unit. If the target sentence unit generated by the machine translation system is incorrect, a human translator may generate post-edits that correct the error. While these post-edits are a valuable resource for customizing and adapting statistical machine translation models, updating the machine translation system with these post-edits remains a difficult endeavor.


SUMMARY OF THE DISCLOSURE

According to some embodiments, the present technology may be directed to a method of immediately updating a machine translation system with post-edits during translation of a document, using a machine translation system that comprises a processor and a memory for storing logic that is executed by the processor to perform the method, comprising: (a) receiving a post-edited machine translated sentence pair, wherein the post-edited machine translated sentence pair comprises a source sentence unit and a post-edited target sentence unit; (b) updating a machine translation model by: (i) performing an alignment of the post-edits of the machine translated sentence pair to generate phrases; and (ii) adding the phrases to the machine translation model; (c) adapting a language model from the target sentence unit of the post-edits; (d) calculating translation statistics for the post-edits; (e) adjusting translation weights using the translation statistics while generating test machine translations of the machine translated sentence pair until one of the test machine translations approximately matches the post-edits for the machine translated sentence pair; and (f) retranslating the remaining machine translation sentence pairs that have yet to be post-edited using the updated machine translation model and the adjusted translation weights.


According to other embodiments, the present technology may be directed to a machine translation system that immediately incorporates post-edits into a machine translation model during translation of a document, the machine translation system comprising: (1) a processor; and (2) a memory for storing logic that is executed by the processor to: (a) receive a post-edited machine translated sentence pair, wherein the post-edited machine translated sentence pair comprises a source sentence unit and a post-edited target sentence unit; (b) update a machine translation model by: (i) performing an alignment of the post-edits of the machine translated sentence pair to generate phrases; and (ii) adding the phrases to the machine translation model; (c) adapt a language model from the target sentence unit of the post-edits; (d) calculate translation statistics for the post-edits; (e) adjust translation weights using the translation statistics while generating test machine translations of the machine translated sentence pair until one of the test machine translations approximately matches the post-edits for the machine translated sentence pair; and (f) retranslate the remaining machine translation sentence pairs that have yet to be post-edited using the updated machine translation model.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed disclosure, and explain various principles and advantages of those embodiments.


The methods and systems disclosed herein have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.



FIG. 1 is an exemplary computing architecture that may be used to practice aspects of the present technology;



FIG. 2 is an example of an algorithm utilized by a machine translation system to immediately update a machine translation system with post-edits during translation of a document;



FIG. 3 is an example of an algorithm utilized by a machine translation system to update a probability table used by the machine translation system;



FIG. 4 is a flowchart of a method for immediately updating a machine translation system with post-edits during translation of a document; and



FIG. 5 illustrates an exemplary computing system that may be used to implement embodiments according to the present technology.





DETAILED DESCRIPTION

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be apparent, however, to one skilled in the art, that the disclosure may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form only in order to avoid obscuring the disclosure.


Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) at various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Furthermore, depending on the context of discussion herein, a singular term may include its plural forms and a plural term may include its singular form. Similarly, a hyphenated term (e.g., “on-demand”) may be occasionally interchangeably used with its non-hyphenated version (e.g., “on demand”), a capitalized entry (e.g., “Software”) may be interchangeably used with its non-capitalized version (e.g., “software”), a plural term may be indicated with or without an apostrophe (e.g., PE's or PEs), and an italicized term (e.g., “N+1”) may be interchangeably used with its non-italicized version (e.g., “N+1”). Such occasional interchangeable uses shall not be considered inconsistent with each other.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


It is noted at the outset that the terms “coupled,” “connected”, “connecting,” “electrically connected,” etc., are used interchangeably herein to generally refer to the condition of being electrically/electronically connected. Similarly, a first entity is considered to be in “communication” with a second entity (or entities) when the first entity electrically sends and/or receives (whether through wireline or wireless means) information signals (whether containing data information or non-data/control information) to the second entity regardless of the type (analog or digital) of those signals. It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purpose only, and are not drawn to scale.


The present technology provides for fast online updates to a machine translation system that uses language pairs, immediately after the machine translation system receives a new sentence pair. For example, the machine translation system may receive a new sentence pair that includes a source sentence unit and a post-edited target sentence unit. The post-edited target sentence unit includes changes to an original target sentence unit that was provided to a translator.


For context, the phrase “sentence unit” may be understood to include any sub-sentential unit that is obtained from a sentence in a document that is to be translated. In general, a machine translation system receives a document for translation and uses machine translation techniques to translate the document in a source language into a target language. This machine translation process includes breaking the source document into sentences and further dividing the sentences into sub-sentential units. Using methods that would be known to one of ordinary skill in the art, the machine translation system outputs sentence pairs for each sentence that include source sentence units and target sentence units that are machine translations of the source sentence units.
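
For illustration only, a sentence pair of this kind might be represented as in the minimal Python sketch below; the class and field names are assumptions made for exposition and are not structures disclosed herein.

```python
from dataclasses import dataclass

@dataclass
class SentencePair:
    """A source sentence unit paired with its machine-translated target."""
    source: str          # source-language sentence unit
    target: str          # target sentence unit produced by the MT engine
    post_edit: str = ""  # human-corrected target; empty until post-edited

# A pair awaiting post-editing, then corrected by a human translator.
pair = SentencePair(source="la casa azul", target="the house blue")
pair.post_edit = "the blue house"
```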


These sentence pairs are evaluated by human translators who may edit or change the target sentence unit of a sentence pair to correct errors in the machine translation. These changes are referred to as post-edits of target sentence units.


New information obtained from the post-edited target sentence unit is incorporated into the statistical models of the language pairs and may be passed to the decoder. With a fast online tuning method, the translation model of the present technology may utilize this new information in subsequent translations. This method yields significant bilingual evaluation understudy (BLEU) improvements over both small-scale and strong baselines when translating coherent in-domain sentences.


As opposed to adding new data to a static translation memory, the present technology may employ a translation model update procedure to modify relevant parts of probabilistic models of the translation model in order to update machine translations of subsequent sentences of a document.



FIG. 1 illustrates an exemplary architecture, which implements a methodology for rapidly adapting a machine translation system such that it can better anticipate the behavior of a human post-editor who is tasked with converting automatic translations of a source language text into high-quality translations in a desired target language. The architecture may include a plurality of clients 105A-C that are coupled with both a generic machine translation (MT) engine 115 and a machine translation system 100. Generally, the generic MT engine 115 initially generates machine translation pairs for a document received from one of the clients. The generic MT engine 115 utilizes a translation model (TM) 115A, a language model (LM) 115B and generic weights 115C that affect how the generic MT engine 115 translates the document received from the client, by affecting application of the translation model 115A and the language model 115B.


The machine translation system 100 is configured to utilize post-edits received from the clients to generate in-domain adapted and retuned machine translation models. For example, the machine translation system 100 generates an in-domain translation model (IDTM) 100A, an in-domain language model (IDLM) 100B, and an in-domain weighting (IDW) 100C. In some embodiments, each client may have their own IDTM, IDLM, and IDW due to the machine translation system's ability to re-translate remaining sentence pairs for a translator as post-edits from the translator are received by the machine translation system 100.


Thus, in some embodiments, the IDTM, IDLM, and IDW are invoked and utilized by the machine translation system 100 during a single translation session for a translator. In other embodiments, the IDTM, IDLM, and IDW may persist and can be utilized for multiple translation projects.


The following description provides an example of the use of the machine translation system 100 by a client, such as client 105A. Initially the post-editor associated with client 105A receives a source language text and automatic translations in the target language generated by a baseline machine translation engine 115. The source language text and automatic translations are also referred to as a sentence pair. The sentence pair includes a source sentence unit and a target sentence unit that is a machine translation of the source sentence unit.


Changes to the target sentence unit by the client 105A are referred to as a post-edited target sentence unit. As the post-editor corrects a target sentence unit, the post-edits are transmitted to the machine translation system 100. The machine translation system 100 then automatically learns new sub-sentential translation correspondences and enhances the generic MT engine 115 with these correspondences. The machine translation system 100 also adjusts the parameters of the generic MT engine 115 to optimize translation performance on the corrected translations. The result is a generic MT engine 115 that is better equipped to handle the vocabulary and phrasing choices desired by the post-editor on his/her current workflow.



FIG. 1 illustrates a server-side instantiation of the machine translation system 100 that continuously updates translations and personalizes on a per-user, per-document basis. It will be understood that many other variants of this approach are possible using the same technology. For instance, the updating may be done client side, the updating may be periodic instead of continuous, and the scope of personalization may be wider or narrower than per-user, per-document.


The translation model 115A and language model 115B are typically very large databases, while the weights are a vector of numbers. The machine translation system 100 has the ability to instantiate and reset the per-user, in-domain translation model IDTM 100A, in-domain language model IDLM 100B, and in-domain weighting IDW 100C. It will be understood that IDTM 100A, IDLM 100B, and IDW 100C may belong to client 105A and are illustrated as a set of personalized machine translation tools 100D. Indeed, each client is provided with their own IDTM, IDLM, and IDW. Upon instantiation or reset, the IDTM 100A and IDLM 100B are empty databases and the IDW 100C is equal to the generic weights 115C used by the generic MT engine 115. The IDTM 100A and IDLM 100B are typically very small databases compared to the generic TM 115A and LM 115B.


In some embodiments, the clients 105A-C communicate with the machine translation system 100 via a REST API over a network 110 such as the Internet. The API responds to requests for translation keyed to a user, requests to update the IDTM 100A, the IDLM 100B, and the IDW 100C, and requests to reset a per-user IDTM 100A, the IDLM 100B, and the IDW 100C.
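
Purely as a hypothetical illustration of such an API, the sketch below issues the three request types described above using Python's requests library; the base URL, endpoint paths, and payload fields are invented for exposition and are not part of the disclosed interface.

```python
import requests

BASE = "https://mt.example.com/api"  # hypothetical endpoint, for illustration only

# Request a translation keyed to a user.
resp = requests.post(f"{BASE}/translate",
                     json={"user": "client105A", "source": "la casa azul"})
print(resp.json())

# Request an update of the per-user IDTM, IDLM, and IDW with a post-edit.
requests.post(f"{BASE}/update",
              json={"user": "client105A",
                    "source": "la casa azul",
                    "post_edit": "the blue house"})

# Request a reset of the per-user IDTM, IDLM, and IDW before a new document.
requests.post(f"{BASE}/reset", json={"user": "client105A"})
```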


When a client 105A begins to post-edit a document, the client 105A requests that the machine translation system 100 instantiate or reset the per-user IDTM 100A, IDLM 100B, and IDW 100C, and requests translations for the document segments, which are carried out initially by the generic MT engine 115. The generic MT engine 115 responds to the client request with a set of machine translated sentence pairs.


When a user post-edits a translation, and specifically a target sentence unit, the client 105A requests that the machine translation system 100 update the IDTM 100A, IDLM 100B, and IDW 100C for the user with new translational correspondences, target language phrases, and parameter weights to remember the post-edit corrections made by the user. The client 105A then requests re-translation of the remaining sentence pairs by the generic MT engine 115. The generic MT engine 115 may then re-translate the document, consulting the user's IDTM 100A, IDLM 100B, and IDW 100C via calls to the machine translation system 100, and also using the generic models where appropriate. In some embodiments, once translation of a document is completed and a new document translation process begins, the client 105A may request a reset of their IDTM 100A, IDLM 100B, and IDW 100C.


In some embodiments, the machine translation system may execute test machine translations of the original sentence pairs and adjust the IDW 100C until the test machine translations generated by the generic MT engine 115 approximately match the post-edited sentence pair received from the client 105A, as determined by BLEU. That is, once the machine translation system 100 populates the translation model with words and phrases from a forced alignment of the post-edited sentence pair and creates the IDLM 100B, the machine translation system 100 may iteratively adjust the components of the IDW 100C until the translations generated by the generic MT engine 115 approximate the translation quality of the human translator.
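
The disclosure does not fix the exact match criterion; as a sketch, a smoothed sentence-level BLEU (BLEU+1, add-one smoothing of the n-gram precisions) can serve as the “approximately matches” test, with the threshold below being an illustrative assumption.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_plus_one(hyp, ref, max_n=4):
    """Sentence-level BLEU with add-one smoothing on the n-gram precisions."""
    hyp, ref = hyp.split(), ref.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        matches = sum(min(c, r[g]) for g, c in h.items())
        total = max(len(hyp) - n + 1, 0)
        log_prec += math.log((matches + 1) / (total + 1))  # add-one smoothing
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity * math.exp(log_prec / max_n)

THRESHOLD = 0.9  # illustrative stopping threshold; the patent specifies no value
test, post_edit = "the blue house", "the blue house"
print(bleu_plus_one(test, post_edit) >= THRESHOLD)  # True: close enough to stop
```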


While the above embodiments contemplate a separate generic MT engine 115 and machine translation system 100, it will be understood that the functionalities of these systems may be combined into a single machine translation system. Further, the functionalities of these systems may be executed on the client device 105A, rather than the client 105A interacting with the generic MT engine 115 and the machine translation system 100 over a network 110.


The following description provides details regarding the processes used by the machine translation system 100 to create and utilize the IDTM 100A, IDLM 100B, and IDW 100C for a client. The machine translation system 100 provides both a machine translation model adaptation process and a retuning of parameter weights. FIG. 2 illustrates an exemplary algorithm that includes both the machine translation model adaptation and weight retuning processes. Generally, new information obtained from the post-edits of machine translated sentence pairs is added to the model by updating vocabularies, databases, and phrase tables. Next, the algorithm adjusts the existing translation model weights to encourage usage of new in-domain phrases using an online discriminative ridge regression technique.


The following variables are defined for purposes of clarity: Vs is a vocabulary of source words; Vt is a vocabulary of target words encountered by the machine translation system; (ŝ, t̂) is a post-edited machine translated sentence pair that is used to adapt a translation model M (e.g., IDTM 100A); and (s, t) is a phrase pair generated from the post-edited machine translated sentence pair.


The following steps are performed in sequence, with parallelism where possible. First, (ŝ, t̂) is tokenized and lowercased. Next, Vt and Vs are updated with unknown words from (ŝ, t̂). Subsequently, the machine translation system 100 uses existing, static alignment models, which are pre-trained on the original training data in both directions, to “force align” (ŝ, t̂), and runs regular alignment refinement heuristics to produce a word alignment for (ŝ, t̂).
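
A simplified sketch of these first steps follows, assuming whitespace tokenization; the force_align call is a placeholder for the pre-trained alignment models and not a real API.

```python
def tokenize_lower(sentence):
    """Tokenize and lowercase; production systems use richer tokenizers."""
    return sentence.lower().split()

def update_vocabularies(Vs, Vt, src_tokens, tgt_tokens):
    """Add previously unseen words from (ŝ, t̂) to the vocabularies."""
    Vs.update(src_tokens)
    Vt.update(tgt_tokens)

Vs, Vt = set(), set()
src = tokenize_lower("La casa azul")    # ŝ
tgt = tokenize_lower("The blue house")  # t̂
update_vocabularies(Vs, Vt, src, tgt)
print(sorted(Vs), sorted(Vt))

# The final step would force-align (ŝ, t̂) with the static alignment models
# and refine the result; force_align below is hypothetical, shown for shape only.
# alignment = force_align(src, tgt)  # e.g. {(0, 0), (1, 2), (2, 1)}
```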


Next, the machine translation system 100 builds a small in-domain language model (IDLM 100B) during adaptation of the machine translation model. For each (ŝ, t̂), the machine translation system 100 performs an ngram-count to update an existing count file; it then recompiles the in-domain language model (IDLM 100B) using a smoothing algorithm, such as Witten-Bell smoothing. Since the amount of in-domain data is fairly small, the machine translation system 100 re-builds the IDLM 100B quickly and efficiently.
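
The disclosure does not provide code for this step. As one possible sketch, the tiny bigram model below accumulates counts sentence by sentence (the ngram-count step) and applies Witten-Bell smoothing at query time, so it can be rebuilt cheaply after each post-edit.

```python
from collections import Counter, defaultdict

class WittenBellBigramLM:
    """Small in-domain bigram LM with Witten-Bell smoothing."""

    def __init__(self):
        self.bigrams = defaultdict(Counter)  # c(h, w)
        self.unigrams = Counter()            # c(w)

    def add_sentence(self, tokens):
        """Fold one post-edited target sentence into the count file."""
        tokens = ["<s>"] + tokens + ["</s>"]
        self.unigrams.update(tokens)
        for h, w in zip(tokens, tokens[1:]):
            self.bigrams[h][w] += 1

    def prob(self, w, h):
        """Witten-Bell: interpolate the ML bigram with the unigram model."""
        c_h = sum(self.bigrams[h].values())
        t_h = len(self.bigrams[h])  # number of distinct continuations of h
        p_uni = self.unigrams[w] / max(sum(self.unigrams.values()), 1)
        if c_h == 0:
            return p_uni
        return (self.bigrams[h][w] + t_h * p_uni) / (c_h + t_h)

lm = WittenBellBigramLM()
lm.add_sentence("the blue house".split())
print(lm.prob("blue", "the"))  # 0.6 on this toy count file
```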


Next, the machine translation system 100 updates a fractional count table by extracting fractional counts from (ŝ, t̂) and adding these new counts to the fractional count table. In some embodiments, the machine translation system 100 extracts and filters lexicon entries from (ŝ, t̂) and its alignments. An exemplary fractional count table includes, but is not limited to, the IBM Model 4 table, M4 = {cf(t_j | s_i)}, of size |Vs| × |Vt|.


After updating the fractional count table, the machine translation system 100 then updates a probability distribution table to change the distributions for each source term s_i ∈ (ŝ, t̂). In some embodiments, this includes the determination of maximum likelihood word alignments from (ŝ, t̂) and its alignments.


To avoid dumping Model 1 databases to disk in text format, or storing counts and re-normalizing, the machine translation system 100 may perform a heuristic update. Assuming that Vs and Vt have already been updated, the machine translation system 100 updates the table M1_(ŝ,t̂) = {p̂(t_j | s_i)}, of size |Vs| × |Vt|, from the Viterbi alignments extracted from (ŝ, t̂). In some embodiments, the machine translation system 100 may utilize IBM Model 1 tables, M1 = {p(t_j | s_i)}, of size |Vs| × |Vt|.
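
The precise heuristic of FIG. 3 is not reproduced in this text; the sketch below shows one plausible form of such an in-memory update, shifting probability mass in p̂(t_j | s_i) toward the Viterbi-aligned word pairs and renormalizing only the rows that changed. The boost constant is an illustrative assumption.

```python
from collections import defaultdict

def heuristic_m1_update(m1, viterbi_pairs, boost=0.5):
    """Heuristically update a Model 1 table in place.

    m1: dict s_i -> dict t_j -> p(t_j | s_i)
    viterbi_pairs: word pairs (s_i, t_j) aligned in (ŝ, t̂)
    boost: illustrative mass added to each aligned pair before renormalizing
    """
    touched = set()
    for s, t in viterbi_pairs:
        m1[s][t] = m1[s].get(t, 0.0) + boost
        touched.add(s)
    for s in touched:  # renormalize only the rows the post-edit touched
        z = sum(m1[s].values())
        for t in m1[s]:
            m1[s][t] /= z

m1 = defaultdict(dict)
m1["casa"] = {"house": 0.7, "home": 0.3}
heuristic_m1_update(m1, [("casa", "house"), ("azul", "blue")])
print(m1["casa"])  # {'house': 0.8, 'home': 0.2}
print(m1["azul"])  # {'blue': 1.0}
```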


Also, FIG. 3 illustrates an example of a heuristic algorithm that may be utilized to alter the probability distributions for the translation model.


Next, the machine translation system 100 may be configured to adapt a phrase table used to generate phrases from sentence pairs. To obtain a set of new phrases P = {(e, f)}, the machine translation system 100 executes a phrase extraction pipeline, which includes forward/inverse phrase extraction and sorting.
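
One standard instance of such a pipeline is consistency-based phrase-pair extraction over a word alignment; the sketch below implements that textbook algorithm (without the usual extension over unaligned boundary words) as an illustration of how P = {(e, f)} could be obtained.

```python
def extract_phrases(src, tgt, alignment, max_len=4):
    """Extract phrase pairs consistent with a word alignment.

    src, tgt: token lists; alignment: set of (i, j) links into src and tgt.
    A source span is paired with the minimal target span covering its links
    and kept only if no alignment link crosses the phrase boundary.
    """
    phrases = set()
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            js = [j for (i, j) in alignment if i1 <= i <= i2]
            if not js:
                continue
            j1, j2 = min(js), max(js)
            if j2 - j1 >= max_len:
                continue
            # consistency check: no link may leave the (i1..i2, j1..j2) box
            if any(j1 <= j <= j2 and not (i1 <= i <= i2)
                   for (i, j) in alignment):
                continue
            phrases.add((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return phrases

src, tgt = "la casa azul".split(), "the blue house".split()
alignment = {(0, 0), (1, 2), (2, 1)}
print(sorted(extract_phrases(src, tgt, alignment)))
```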


The process of adapting the phrase table used by the machine translation system 100 may include pre-computing various phrase features for the phrases (e, f) that were extracted from the post-edited machine translated sentence pair. Features computed by the machine translation system 100 include “missing word,” for which the machine translation system 100 uses the previously updated IBM Model 4 table as described above. Also, the machine translation system 100 may use an “inverse IBM Model 1” feature, which employs the previously (heuristically) updated IBM Model 1 tables. Another exemplary feature is “inverse phrase probability,” which is computed using counts of inversely extracted phrases.
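
As a hedged illustration, one relative-frequency reading of the “inverse phrase probability” feature is sketched below from the counts of inversely extracted phrases; the direction of conditioning and the count structures are assumptions, since the disclosure does not define them.

```python
from collections import Counter
import math

def inverse_phrase_probability(pair_counts, e_counts, e, f):
    """Relative-frequency estimate p(f | e) from inverse extraction counts."""
    return pair_counts[(e, f)] / e_counts[e]

pair_counts = Counter({("blue house", "casa azul"): 3, ("blue house", "casa"): 1})
e_counts = Counter({"blue house": 4})
p = inverse_phrase_probability(pair_counts, e_counts, "blue house", "casa azul")
print(p, -math.log(p))  # the probability and the log-feature form decoders use
```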


The machine translation system 100 is also configured to execute a process of retuning model weights. The machine translation system 100 adjusts the model weights to encourage the inclusion of new phrases and adaptation to the target domain. In some instances, the machine translation system 100 may adjust the translation model weights using a tuning method such as discriminative ridge regression (DRR), although other translation model tuning methods known to one of ordinary skill in the art may likewise be utilized.


The discriminative ridge regression method includes the determination of an nbest(ŝ) list for the re-decode of a source sentence ŝ, ordered by decreasing derivation score. The machine translation system 100 builds an n × m matrix R_ŝ that contains the difference vectors between each feature vector h_i(ŝ) and h*(ŝ), the feature vector of the best hypothesis in terms of BLEU+1 with respect to the reference t̂.


The goal of this process is to allow the machine translation system 100 to find a vector w such that R_ŝ · w ∝ I_ŝ, where I_ŝ is a column vector of n rows containing the difference in BLEU+1 scores between each h_i(ŝ) and h*(ŝ). This may be expressed by the equation

    w = arg min_w ‖ R_ŝ · w − I_ŝ ‖²,

which is a regression problem with the exact solution w = (R′_ŝ · R_ŝ + βI)⁻¹ · R′_ŝ · I_ŝ, where β = 0.01 is a regularization parameter that stabilizes R′_ŝ · R_ŝ.
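
A compact numerical sketch of this exact solution using NumPy follows; the feature-difference matrix R_ŝ and score-difference vector I_ŝ are synthetic stand-ins for the quantities built from the n-best list.

```python
import numpy as np

def drr_weight_update(R, l, beta=0.01):
    """Exact ridge-regression solution w = (R'R + beta*I)^-1 R'l."""
    m = R.shape[1]
    return np.linalg.solve(R.T @ R + beta * np.eye(m), R.T @ l)

rng = np.random.default_rng(0)
R = rng.normal(size=(100, 8))  # n x m feature-vector differences (synthetic)
l = rng.normal(size=100)       # BLEU+1 score differences (synthetic)
w = drr_weight_update(R, l)
print(w.shape, w[:3])
```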


The solution w_t for sentence pair (ŝ, t̂)_t at update time t is interpolated with the previous weight vector w_(t−1) and used for re-translating sentence s_(t+1) in the next iteration (e.g., the remaining sentence pairs that have yet to be post-edited by the client 105A). In accordance with the present technology, the machine translation system 100 is configured to initially generate a set of machine translation pairs. After receiving a post-edit of one of these machine translation pairs, the machine translation system 100 executes the adaptation and tuning methods described herein to update the translation model and methodologies used by the machine translation system 100. Once updated, the machine translation system 100 retranslates any machine translations from the set that have yet to be post-edited. Each time post-edits are received, the machine translation system 100 updates and retranslates the remaining machine translations. Thus, for each post-edit received, the machine translation system 100 may iteratively update the translation model and retranslate any previous machine translations, provided those machine translations have not been post-edited by a human translator.


In some instances, the interpolation weight is set to 0.5. This allows the machine translation system 100 to utilize a combination of the generic weights 115C of the generic MT engine 115 and the IDW 100C: the machine translation system 100 multiplies both the generic weights 115C and the IDW 100C by 0.5 and uses the average of these values. The use of interpolation moderates large discrepancies between the generic weights 115C and the IDW 100C, which could otherwise lead to poor translations for the client.
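
In sketch form, this interpolation is a simple element-wise blend of the two weight vectors, with 0.5 giving the average described above.

```python
import numpy as np

def interpolate_weights(generic_w, idw, alpha=0.5):
    """Blend generic weights with in-domain weights; alpha=0.5 is the average."""
    return alpha * np.asarray(generic_w) + (1.0 - alpha) * np.asarray(idw)

print(interpolate_weights([0.2, 0.8], [0.6, 0.4]))  # -> [0.4 0.6]
```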



FIG. 4 is a flowchart of an exemplary method for immediately updating a machine translation system with post-edits during translation of a document. The method may be executed by the machine translation system 100. In some instances, the method includes receiving 405 a post-edited machine translated sentence pair. As mentioned above, the post-edited machine translated sentence pair comprises a source sentence unit and a post-edited target sentence unit. Again, this is a machine translated sentence pair that has been post-edited by a human translator in order to alter or modify the target sentence unit that was generated by a generic MT engine.


Next, the method includes updating 410 a machine translation model by performing an alignment of the post-edits of the machine translated sentence pair to generate phrases, and adding the phrases to the machine translation model.


In some embodiments, the method includes adapting 415 a language model from the target sentence unit of the post-edits, as well as calculating 420 translation statistics for the post-edits.


Next, the method includes adjusting 425 translation weights using the translation statistics while generating test machine translations of the machine translated sentence pair until one of the test machine translations approximately matches the post-edits for the machine translated sentence pair. Finally, the method includes retranslating 430 the remaining machine translation sentence pairs that have yet to be post-edited using the updated machine translation model. In some embodiments, a retranslated machine translation sentence pair may itself be post-edited by a human translator. Thus, the method may return to step 405 when a post-edit to a retranslated machine translation sentence pair is received, resulting in incremental retranslation of the remaining machine translation sentence pairs, even those pairs that have already been retranslated one or more times. Advantageously, each time the remaining sentence pairs are retranslated, it is expected that the retranslations will require less post-editing, or no post-editing at all.



FIG. 5 illustrates an exemplary computing device (also referred to as computing system or system) 1 that may be used to implement an embodiment of the present systems and methods. The system 1 of FIG. 5 may be implemented in the context of computing devices, radios, terminals, networks, servers, or combinations thereof. The computing device 1 of FIG. 5 includes a processor 10 and main memory 20. Main memory 20 stores, in part, instructions and data for execution by processor 10. Main memory 20 may store the executable code when in operation. The system 1 of FIG. 5 further includes a mass storage device 30, portable storage device 40, output devices 50, user input devices 60, a display system 70, and peripherals 80.


The components shown in FIG. 5 are depicted as being connected via a single bus 90. The components may be connected through one or more data transport means. Processor 10 and main memory 20 may be connected via a local microprocessor bus, and the mass storage device 30, peripherals 80, portable storage device 40, and display system 70 may be connected via one or more input/output (I/O) buses.


Mass storage device 30, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor 10. Mass storage device 30 can store the system software for implementing embodiments of the present technology for purposes of loading that software into main memory 20.


Portable storage device 40 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk or digital video disc, to input and output data and code to and from the computing system 1 of FIG. 5. The system software for implementing embodiments of the present technology may be stored on such a portable medium and input to the computing system 1 via the portable storage device 40.


Input devices 60 provide a portion of a user interface. Input devices 60 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, the system 1 as shown in FIG. 5 includes output devices 50. Suitable output devices include speakers, printers, network interfaces, and monitors.


Display system 70 may include a liquid crystal display (LCD) or other suitable display device. Display system 70 receives textual and graphical information, and processes the information for output to the display device.


Peripherals 80 may include any type of computer support device to add additional functionality to the computing system. Peripherals 80 may include a modem or a router.


The components contained in the computing system 1 of FIG. 5 are those typically found in computing systems that may be suitable for use with embodiments of the present technology and are intended to represent a broad category of such computer components that are well known in the art. Thus, the computing system 1 can be a personal computer, hand held computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system. The computer can also include different bus configurations, networked platforms, multi-processor platforms, etc. Various operating systems can be used including UNIX, Linux, Windows, Macintosh OS, Chrome OS, and other suitable operating systems.


Some of the above-described functions may be composed of instructions that are stored on storage media (e.g., computer-readable medium). The instructions may be retrieved and executed by the processor. Some examples of storage media are memory devices, tapes, disks, and the like. The instructions are operational when executed by the processor to direct the processor to operate in accord with the technology. Those skilled in the art are familiar with instructions, processor(s), and storage media.


It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the technology. The terms “computer-readable storage medium” and “computer-readable storage media” as used herein refer to any medium or media that participate in providing instructions to a CPU for execution. Such media can take many forms, including, but not limited to, non-volatile media, volatile media and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as a fixed disk. Volatile media include dynamic memory, such as system RAM. Transmission media include coaxial cables, copper wire and fiber optics, among others, including the wires that comprise one embodiment of a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, any other physical medium with patterns of marks or holes, a RAM, a PROM, an EPROM, an EEPROM, a FLASHEPROM, any other memory chip or data exchange adapter, a carrier wave, or any other medium from which a computer can read.


Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a CPU for execution. A bus carries the data to system RAM, from which a CPU retrieves and executes the instructions. The instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by a CPU.


Computer program code for carrying out operations for aspects of the present technology may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.


Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the technology to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the technology as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.

Claims
  • 1. A method of immediately updating a machine translation system with post-edits during translation of a document, using a machine translation system that comprises a processor and a memory for storing logic that is executed by the processor to perform the method, comprising: receiving a post-edited machine translated sentence pair, the post-edited machine translated sentence pair comprising a source sentence unit and a post-edited target sentence unit;updating a machine translation model by: performing an alignment of the post-edits of the machine translated sentence pair to generate phrases; andadding the phrases to the machine translation model;adapting a language model from the post-edited target sentence unit;calculating translation statistics for the post-edits;adjusting translation weights using the translation statistics while generating test machine translations of the machine translated sentence pair until one of the test machine translations approximately matches the post-edits for the machine translated sentence pair; andretranslating remaining machine translation sentence pairs that have yet to be post-edited using the updated machine translation model and the adjusted translation weights.
  • 2. The method according to claim 1, further comprising: receiving a document for translation from a source language into a target language; andperforming a machine translation of the document to generate a set of machine translated sentence pairs.
  • 3. The method according to claim 1, wherein the updating of the machine translation model further comprises tokenizing the post-edits of the machine translated sentence pair.
  • 4. The method according to claim 3, wherein the updating of the machine translation model further comprises updating a vocabulary of source sentence units with unknown source sentence units included in the post-edits of the machine translated sentence pair and updating a vocabulary of target sentence units with unknown target sentence units included in the post-edits of the machine translated sentence pair.
  • 5. The method according to claim 4, further comprising extracting fractional counts from the post-edits of the machine translated sentence pair and adding the extracted fractional counts to a fractional count table.
  • 6. The method according to claim 5, further comprising adjusting probability distributions for the source sentence unit of the post-edits of the machine translated sentence pair.
  • 7. The method according to claim 1, further comprising updating a phrase table with counts that define a number of occurrences of phrases in the phrase table; and reordering feature values for the phrases in the phrase table based upon the counts.
  • 8. The method according to claim 1, wherein the translation weights for the machine translation system are adjusted using discriminative ridge regression.
  • 9. The method according to claim 1, wherein the adapting of the language model includes executing an ngram-count of the post-edited machine translated sentence pair to update a count file that comprises counts for input sentence pairs; and recompiling the language model using a smoothing algorithm.
  • 10. The method according to claim 1, wherein the alignment includes both forward and reverse alignments of the post-edited machine translated sentence pair.
  • 11. A machine translation system that immediately incorporates post-edits into a machine translation model during translation of a document, the machine translation system comprising: a processor; anda memory for storing logic that is executed by the processor to: receive a post-edit of a target sentence unit of a machine translated sentence pair of a set of machine translated sentence pairs, wherein a machine translated sentence pair comprises a source sentence unit and the target sentence unit;receive a post-edited machine translated sentence pair, the post-edited machine translated sentence pair comprising a post-edited source sentence unit and a post-edited target sentence unit;update the machine translation model by: performing an alignment of the post-edits of the machine translated sentence pair to generate phrases; andadding the phrases to the machine translation model;adapt a language model from the post-edited target sentence unit;calculate translation statistics for the post-edits;adjust translation weights using the translation statistics while generating test machine translations of the machine translated sentence pair until one of the test machine translations approximately matches the post-edits for the machine translated sentence pair; andretranslate remaining machine translation sentence pairs that have yet to be post-edited using the updated machine translation model.
  • 12. The machine translation system according to claim 11, wherein the processor further executes the logic to: receive the document for translation from a source language into a target language; andperform a machine translation of the document to generate the set of machine translated sentence pairs.
  • 13. The machine translation system according to claim 11, wherein the machine translation system updates the machine translation model by further tokenizing the post-edits of the machine translated sentence pair.
  • 14. The machine translation system according to claim 13, wherein the machine translation system updates the machine translation model by updating a vocabulary of source sentence units with unknown source sentence units included in the post-edits of the machine translated sentence pair and updating a vocabulary of target sentence units with unknown target sentence units included in the post-edits of the machine translated sentence pair.
  • 15. The machine translation system according to claim 14, wherein the processor further executes the logic to extract fractional counts from the post-edits of the machine translated sentence pair and adding the extracted fractional counts to a fractional count table.
  • 16. The machine translation system according to claim 15, wherein the processor further executes the logic to adjust probability distributions for the source sentence unit of the post-edits of the machine translated sentence pair.
  • 17. The machine translation system according to claim 11, wherein the processor further executes the logic to update a phrase table with counts that define a number of occurrences of phrases in the phrase table; and reorder feature values for the phrases in the phrase table based upon the counts.
  • 18. The machine translation system according to claim 11, wherein the translation weights for the machine translation system are adjusted using discriminative ridge regression.
  • 19. The machine translation system according to claim 11, wherein the machine translation system is configured to adapt the language model by executing an ngram-count of the post-edited machine translated sentence pair to update a count file that comprises counts for input sentence pairs; and recompiling the language model using a smoothing algorithm.
  • 20. The machine translation system according to claim 11, wherein the machine translation system is configured to: receive post-edits for a retranslated machine translation sentence pair; re-update the machine translation model; calculate translation statistics for the post-edits of the retranslated machine translation sentence pair; adjust translation weights using the translation statistics while generating test machine translations of the retranslated machine translated sentence pair until one of the test machine translations approximately matches the post-edits for the retranslated machine translation sentence pair; and retranslate any remaining retranslated machine translation sentence pairs that have yet to be post-edited using the updated machine translation model and the translation weights.
20050149315 Flanagan et al. Jul 2005 A1
20050171757 Appleby Aug 2005 A1
20050204002 Friend Sep 2005 A1
20050228640 Aue et al. Oct 2005 A1
20050228642 Mau et al. Oct 2005 A1
20050228643 Munteanu et al. Oct 2005 A1
20050234701 Graehl et al. Oct 2005 A1
20050267738 Wilkinson et al. Dec 2005 A1
20060004563 Campbell et al. Jan 2006 A1
20060015320 Och Jan 2006 A1
20060015323 Udupa et al. Jan 2006 A1
20060018541 Chelba et al. Jan 2006 A1
20060020448 Chelba et al. Jan 2006 A1
20060041428 Fritsch et al. Feb 2006 A1
20060095248 Menezes et al. May 2006 A1
20060111891 Menezes et al. May 2006 A1
20060111892 Menezes et al. May 2006 A1
20060111896 Menezes et al. May 2006 A1
20060129424 Chan Jun 2006 A1
20060136824 Lin Jun 2006 A1
20060142995 Knight et al. Jun 2006 A1
20060150069 Chang Jul 2006 A1
20060167984 Fellenstein et al. Jul 2006 A1
20060190241 Goutte et al. Aug 2006 A1
20070015121 Johnson et al. Jan 2007 A1
20070016400 Soricut et al. Jan 2007 A1
20070016401 Ehsani et al. Jan 2007 A1
20070033001 Muslea et al. Feb 2007 A1
20070043553 Dolan Feb 2007 A1
20070050182 Sneddon et al. Mar 2007 A1
20070073532 Brockett Mar 2007 A1
20070078654 Moore Apr 2007 A1
20070078845 Scott et al. Apr 2007 A1
20070083357 Moore et al. Apr 2007 A1
20070094169 Yamada et al. Apr 2007 A1
20070112553 Jacobson May 2007 A1
20070112555 Lavi et al. May 2007 A1
20070112556 Lavi et al. May 2007 A1
20070122792 Galley et al. May 2007 A1
20070168202 Changela et al. Jul 2007 A1
20070168450 Prajapat et al. Jul 2007 A1
20070180373 Bauman et al. Aug 2007 A1
20070208719 Tran Sep 2007 A1
20070219774 Quirk et al. Sep 2007 A1
20070233460 Lancaster et al. Oct 2007 A1
20070233547 Younger et al. Oct 2007 A1
20070250306 Marcu et al. Oct 2007 A1
20070265825 Cancedda et al. Nov 2007 A1
20070265826 Chen et al. Nov 2007 A1
20070269775 Andreev et al. Nov 2007 A1
20070294076 Shore et al. Dec 2007 A1
20080040095 Sinha et al. Feb 2008 A1
20080052061 Kim et al. Feb 2008 A1
20080065478 Kohlmeier et al. Mar 2008 A1
20080109209 Fraser et al. May 2008 A1
20080114583 Al-Onaizan et al. May 2008 A1
20080154581 Lavi et al. Jun 2008 A1
20080183555 Walk Jul 2008 A1
20080195461 Li et al. Aug 2008 A1
20080215418 Kolve et al. Sep 2008 A1
20080249760 Marcu et al. Oct 2008 A1
20080270109 Och Oct 2008 A1
20080270112 Shimohata Oct 2008 A1
20080281578 Kumaran et al. Nov 2008 A1
20080300857 Barbaiani et al. Dec 2008 A1
20080307481 Panje Dec 2008 A1
20090076792 Lawson-Tancred Mar 2009 A1
20090083023 Foster et al. Mar 2009 A1
20090106017 D'Agostini Apr 2009 A1
20090119091 Sarig May 2009 A1
20090125497 Jiang et al. May 2009 A1
20090198487 Wong et al. Aug 2009 A1
20090234634 Chen et al. Sep 2009 A1
20090234635 Bhatt et al. Sep 2009 A1
20090241115 Raffo et al. Sep 2009 A1
20090248662 Murdock Oct 2009 A1
20090313006 Tang Dec 2009 A1
20090326912 Ueffing Dec 2009 A1
20090326913 Simard et al. Dec 2009 A1
20100005086 Wang et al. Jan 2010 A1
20100017293 Lung et al. Jan 2010 A1
20100042398 Marcu et al. Feb 2010 A1
20100138210 Seo et al. Jun 2010 A1
20100138213 Bicici et al. Jun 2010 A1
20100174524 Koehn Jul 2010 A1
20110029300 Marcu et al. Feb 2011 A1
20110066643 Cooper et al. Mar 2011 A1
20110082683 Soricut et al. Apr 2011 A1
20110082684 Soricut et al. Apr 2011 A1
20110184722 Sneddon et al. Jul 2011 A1
20110191096 Sarikaya et al. Aug 2011 A1
20110191410 Refuah et al. Aug 2011 A1
20110225104 Soricut et al. Sep 2011 A1
20120016657 He et al. Jan 2012 A1
20120096019 Manickam et al. Apr 2012 A1
20120116751 Bernardini et al. May 2012 A1
20120136646 Kraenzel et al. May 2012 A1
20120150441 Ma et al. Jun 2012 A1
20120150529 Kim et al. Jun 2012 A1
20120191457 Minnis et al. Jul 2012 A1
20120253783 Castelli et al. Oct 2012 A1
20120265711 Assche Oct 2012 A1
20120278302 Choudhury et al. Nov 2012 A1
20120323554 Hopkins et al. Dec 2012 A1
20130018650 Moore et al. Jan 2013 A1
20130024184 Vogel et al. Jan 2013 A1
20130103381 Assche Apr 2013 A1
20130124185 Sarr et al. May 2013 A1
20130144594 Bangalore et al. Jun 2013 A1
20130238310 Viswanathan Sep 2013 A1
20130290339 LuVogt et al. Oct 2013 A1
20140006003 Soricut et al. Jan 2014 A1
20140019114 Travieso et al. Jan 2014 A1
20140149102 Marcu et al. May 2014 A1
20140188453 Marcu et al. Jul 2014 A1
20140350931 Levit Nov 2014 A1
20150106076 Hieber et al. Apr 2015 A1
Foreign Referenced Citations (29)
Number Date Country
2408819 Nov 2006 CA
2475857 Dec 2008 CA
2480398 Jun 2011 CA
1488338 Apr 2010 DE
202005022113.9 Feb 2014 DE
0469884 Feb 1992 EP
0715265 Jun 1996 EP
0933712 Aug 1999 EP
0933712 Jan 2001 EP
1488338 Sep 2004 EP
1488338 Apr 2010 EP
1488338 Apr 2010 ES
1488338 Apr 2010 FR
1488338 Apr 2010 GB
1072987 Feb 2006 HK
1072987 Sep 2010 HK
07244666 Sep 1995 JP
10011447 Jan 1998 JP
11272672 Oct 1998 JP
2004501429 Jan 2004 JP
2004062726 Feb 2004 JP
2008101837 May 2008 JP
5452868 Jan 2014 JP
WO03083709 Oct 2003 WO
WO03083710 Oct 2003 WO
WO2004042615 May 2004 WO
WO2007056563 May 2007 WO
WO2011041675 Apr 2011 WO
WO2011162947 Dec 2011 WO
Non-Patent Literature Citations (512)
Gildea, D., “Loosely Tree-based Alignment for Machine Translation,” In Proceedings of the 41st Annual Meeting on Assoc. for Computational Linguistics—vol. 1 (Sapporo, Japan, Jul. 7-12, 2003). Annual Meeting of the ACL Assoc. For Computational Linguistics, Morristown, NJ, 80-87. DOI=http://dx.doi.org/10.3115/1075096.1075107.
Grefenstette, Gregory, “The World Wide Web as a Resource for Example-Based Machine Translation Tasks”, 1999, Translating and the Computer 21, Proc. of the 21st International Conf. on Translating and the Computer. London, UK, 12 pp.
Grossi et al., “Suffix Trees and Their Applications in String Algorithms”, In Proceedings of the 1st South American Workshop on String Processing, Sep. 1993, pp. 57-76.
Gupta et al., “Kelips: Building an Efficient and Stable P2P DHT through Increased Memory and Background Overhead,” 2003 IPTPS, LNCS 2735, pp. 160-169.
Habash, Nizar, “The Use of a Structural N-gram Language Model in Generation-Heavy Hybrid Machine Translation,” University of Maryland, Univ. Institute for Advanced Computer Studies, Sep. 8, 2004.
Hatzivassiloglou, V. et al., “Unification-Based Glossing”, 1995, Proc. of the International Joint Conference on Artificial Intelligence, pp. 1382-1389.
Huang et al., “Relabeling Syntax Trees to Improve Syntax-Based Machine Translation Quality,” Jun. 4-9, 2006, in Proc. of the Human Language Technology Conference of the North American Chapter of the ACL, pp. 240-247.
Ide, N. and Veronis, J., “Introduction to the Special Issue on Word Sense Disambiguation: The State of the Art”, Mar. 1998, Computational Linguistics, vol. 24, Issue 1, pp. 2-40.
Bikel, D., Schwartz, R., and Weischedel, R., “An Algorithm that Learns What's in a Name,” Machine Learning 34, 211-231 (1999).
Imamura et al., “Feedback Cleaning of Machine Translation Rules Using Automatic Evaluation,” 2003 Computational Linguistics, pp. 447-454.
Imamura, Kenji, “Hierarchical Phrase Alignment Harmonized with Parsing”, 2001, in Proc. of NLPRS, Tokyo.
Jelinek, F., “Fast Sequential Decoding Algorithm Using a Stack”, Nov. 1969, IBM J. Res. Develop., vol. 13, No. 6, pp. 675-685.
Jones, K. Sparck, “Experiments in Relevance Weighting of Search Terms”, 1979, Information Processing & Management, vol. 15, Pergamon Press Ltd., UK, pp. 133-144.
Klein et al., “Accurate Unlexicalized Parsing,” Jul. 2003, in Proc. of the 41st Annual Meeting of the ACL, pp. 423-430.
Knight et al., “Integrating Knowledge Bases and Statistics in MT,” 1994, Proc. of the Conference of the Association for Machine Translation in the Americas.
Knight et al., “Filling Knowledge Gaps in a Broad-Coverage Machine Translation System”, 1995, Proc. of the 14th International Joint Conference on Artificial Intelligence, Montreal, Canada, vol. 2, pp. 1390-1396.
Knight, K. and Al-Onaizan, Y., “A Primer on Finite-State Software for Natural Language Processing”, 1999 (available at http://www.isi.edu/licensed-sw/carmel).
Knight, K. and Al-Onaizan, Y., “Translation with Finite-State Devices,” Proceedings of the 4th AMTA Conference, 1998.
Knight, K. and Chander, I., “Automated Postediting of Documents,”1994, Proc. of the 12th Conference on Artificial Intelligence, pp. 779-784.
Knight, K. and Graehl, J., “Machine Transliteration”, 1997, Proc. of the ACL-97, Madrid, Spain, pp. 128-135.
Knight, K. and Hatzivassiloglou, V., “Two-Level, Many-Paths Generation,” 1995, Proc. of the 33rd Annual Conference of the ACL, pp. 252-260.
Knight, K. and Luk, S., “Building a Large-Scale Knowledge Base for Machine Translation,” 1994, Proc. of the 12th Conference on Artificial Intelligence, pp. 773-778.
Knight, K. and Marcu, D., “Statistics-Based Summarization—Step One: Sentence Compression,” 2000, American Association for Artificial Intelligence Conference, pp. 703-710.
Knight, K. and Yamada, K., “A Computational Approach to Deciphering Unknown Scripts,” 1999, Proc. of the ACL Workshop on Unsupervised Learning in Natural Language Processing.
Knight, Kevin, “A Statistical MT Tutorial Workbook,” 1999, JHU Summer Workshop (available at http://www.isi.edu/natural-language/mt/wkbk.rtf).
Knight, Kevin, “Automating Knowledge Acquisition for Machine Translation,” 1997, AI Magazine, vol. 18, No. 4.
Knight, Kevin, “Connectionist Ideas and Algorithms,” Nov. 1990, Communications of the ACM, vol. 33, No. 11, pp. 59-74.
Knight, Kevin, “Decoding Complexity in Word-Replacement Translation Models”, 1999, Computational Linguistics, vol. 25, No. 4.
Knight, Kevin, “Integrating Knowledge Acquisition and Language Acquisition”, May 1992, Journal of Applied Intelligence, vol. 1, No. 4.
Knight, Kevin, “Learning Word Meanings by Instruction,” 1996, Proc. of the National Conference on Artificial Intelligence, vol. 1, pp. 447-454.
Knight, Kevin, “Unification: A Multidisciplinary Survey,” 1989, ACM Computing Surveys, vol. 21, No. 1.
Koehn, Philipp, “Noun Phrase Translation,” A PhD Dissertation for the University of Southern California, pp. i-105, Dec. 2003.
Koehn, P. and Knight, K., “ChunkMT: Statistical Machine Translation with Richer Linguistic Knowledge,” Apr. 2002, Information Sciences Institution.
Koehn, P. and Knight, K., “Estimating Word Translation Probabilities from Unrelated Monolingual Corpora Using the EM Algorithm,” 2000, Proc. of the 17th meeting of the AAAI.
Koehn, P. and Knight, K., “Knowledge Sources for Word-Level Translation Models,” 2001, Conference on Empirical Methods in Natural Language Processing.
Kumar, R. and Li, H., “Integer Programming Approach to Printed Circuit Board Assembly Time Optimization,” 1995, IEEE Transactions on Components, Packaging, and Manufacturing, Part B: Advance Packaging, vol. 18, No. 4. pp. 720-727.
Kupiec, Julian, “An Algorithm for Finding Noun Phrase Correspondences in Bilingual Corpora,” In Proceedings of the 31st Annual Meeting of the ACL, 1993, pp. 17-22.
Kurohashi, S. and Nagao, M., “Automatic Detection of Discourse Structure by Checking Surface Information in Sentences,” 1994, Proc. of COLING '94, vol. 2, pp. 1123-1127.
Langkilde, I. and Knight, K., “Generation that Exploits Corpus-Based Statistical Knowledge,” 1998, Proc. of the COLING-ACL, pp. 704-710.
Langkilde, I. and Knight, K., “The Practical Value of N-Grams in Generation,” 1998, Proc. of the 9th International Natural Language Generation Workshop, pp. 248-255.
Langkilde, Irene, “Forest-Based Statistical Sentence Generation,” 2000, Proc. of the 1st Conference on North American chapter of the ACL, Seattle, WA, pp. 170-177.
Langkilde-Geary, Irene, “A Foundation for General-Purpose Natural Language Generation: Sentence Realization Using Probabilistic Models of Language,” 2002, Ph.D. Thesis, Faculty of the Graduate School, University of Southern California.
Langkilde-Geary, Irene, “An Empirical Verification of Coverage and Correctness for a General-Purpose Sentence Generator,” 1998, Proc. 2nd Int'l Natural Language Generation Conference.
Lee, Yue-Shi, “Neural Network Approach to Adaptive Learning: with an Application to Chinese Homophone Disambiguation,” IEEE 2001 pp. 1521-1526.
Lita, L., et al., “tRuEcasIng,” 2003 Proceedings of the 41st Annual Meeting of the Assoc. For Computational Linguistics (In Hinrichs, E. and Roth, D.—editors), pp. 152-159.
Llitjos, A. F. et al., “The Translation Correction Tool: English-Spanish User Studies,” Citeseer © 2004, downloaded from: http://gs37.sp.cs.cmu.edu/ari/papers/lrec04/fontll, pp. 1-4.
Mann, G. and Yarowsky, D., “Multipath Translation Lexicon Induction via Bridge Languages,” 2001, Proc. of the 2nd Conference of the North American Chapter of the ACL, Pittsburgh, PA, pp. 151-158.
Manning, C. and Schutze, H., “Foundations of Statistical Natural Language Processing,” 2000, The MIT Press, Cambridge, MA [Front Matter].
Marcu, D. and Wong, W., “A Phrase-Based, Joint Probability Model for Statistical Machine Translation,” 2002, Proc. of ACL-2 conference on Empirical Methods in Natural Language Processing, vol. 10, pp. 133-139.
Marcu, Daniel, “Building Up Rhetorical Structure Trees,” 1996, Proc. of the National Conference on Artificial Intelligence and Innovative Applications of Artificial Intelligence Conference, vol. 2, pp. 1069-1074.
Non-Final Office Action, Apr. 16, 2015, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Final Office Action, Nov. 19, 2013, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Non-Final Office Action, May 9, 2013, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Advisory Action, Nov. 29, 2011, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Final Office Action, Aug. 15, 2011, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Non-Final Office Action, Mar. 1, 2011, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Advisory Action, Sep. 30, 2010, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Final Office Action, Jul. 19, 2010, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Non-Final Office Action, Nov. 27, 2009, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Final Office Action, Sep. 24, 2009, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Non-Final Office Action, Mar. 3, 2009, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Final Office Action, Oct. 27, 2008, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Non-Final Office Action, Apr. 17, 2008, U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Notice of Allowance, Jul. 9, 2009, U.S. Appl. No. 11/223,823, filed Sep. 9, 2005.
Non-Final Office Action, Feb. 3, 2009, U.S. Appl. No. 11/223,823, filed Sep. 9, 2005.
Non-Final Office Action, Aug. 6, 2008, U.S. Appl. No. 11/223,823, filed Sep. 9, 2005.
Advisory Action, Jun. 9, 2008, U.S. Appl. No. 11/223,823, filed Sep. 9, 2005.
Non-Final Office Action, Sep. 20, 2007, U.S. Appl. No. 11/223,823, filed Sep. 9, 2005.
Final Office Action, Mar. 4, 2008, U.S. Appl. No. 11/223,823, filed Sep. 9, 2005.
Notice of Allowance, Jun. 10, 2010, U.S. Appl. No. 11/197,744, filed Aug. 3, 2005.
Non-Final Office Action, Dec. 15, 2009, U.S. Appl. No. 11/197,744, filed Aug. 3, 2005.
Final Office Action, Aug. 25, 2009, U.S. Appl. No. 11/197,744, filed Aug. 3, 2005.
Non-Final Office Action, Feb. 10, 2009, U.S. Appl. No. 11/197,744, filed Aug. 3, 2005.
Non-Final Office Action, Jun. 18, 2008, U.S. Appl. No. 11/197,744, filed Aug. 3, 2005.
Advisory Action, Aug. 5, 2013, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Final Office Action, May 7, 2013, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Non-Final Office Action, Oct. 3, 2012, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Final Office Action, Jan. 27, 2010, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Examiner's Answer, Jul. 23, 2009, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Advisory Action, Jan. 22, 2009, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Final Office Action, Oct. 7, 2008, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Non-Final Office Action, Mar. 10, 2008, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Non-Final Office Action, Feb. 3, 2014, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Final Office Action, May 21, 2014, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Non-Final Office Action, Mar. 25, 2015, U.S. Appl. No. 11/272,460, filed Nov. 9, 2005.
Notice of Allowance, Mar. 20, 2009, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Non-Final Office Action, Oct. 2, 2008, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Final Office Action, Dec. 14, 2007, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Non-Final Office Action, Jun. 6, 2007, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Advisory Action, Jan. 10, 2007, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Final Office Action, Sep. 18, 2006, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Non-Final Office Action, Mar. 17, 2006, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Non-Final Office Action, Sep. 15, 2005, U.S. Appl. No. 09/854,327, filed May 11, 2001.
Notice of Allowance, Jul. 30, 2007, U.S. Appl. No. 10/143,382, filed May 9, 2002.
Non-Final Office Action, Mar. 6, 2007, U.S. Appl. No. 10/143,382, filed May 9, 2002.
Non-Final Office Action, Aug. 8, 2006, U.S. Appl. No. 10/143,382, filed May 9, 2002.
Notice of Allowance, Nov. 16, 2009, U.S. Appl. No. 10/150,532, filed May 17, 2002.
Final Office Action, Jan. 12, 2009, U.S. Appl. No. 10/150,532, filed May 17, 2002.
Non-Final Office Action, Jul. 29, 2008, U.S. Appl. No. 10/150,532, filed May 17, 2002.
Non-Final Office Action, Jan. 9, 2008, U.S. Appl. No. 10/150,532, filed May 17, 2002.
Final Office Action, Jul. 19, 2007, U.S. Appl. No. 10/150,532, filed May 17, 2002.
Non-Final Office Action, Oct. 18, 2006, U.S. Appl. No. 10/150,532, filed May 17, 2002.
Non-Final Office Action, Apr. 17, 2006, U.S. Appl. No. 10/150,532, filed May 17, 2002.
Notice of Allowance, Apr. 28, 2006, U.S. Appl. No. 10/160,284, filed May 31, 2002.
Non-Final Office Action, Oct. 11, 2005, U.S. Appl. No. 10/160,284, filed May 31, 2002.
Notice of Allowance, Feb. 6, 2012, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Notice of Allowance, Oct. 25, 2011, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Final Office Action, Jan. 20, 2011, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Non-Final Office Action, Aug. 5, 2010, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Final Office Action, Aug. 18, 2009, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Non-Final Office Action, Feb. 26, 2009, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Non-Final Office Action, Aug. 4, 2008, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Advisory Action, Apr. 15, 2008, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Final Office Action, Dec. 7, 2007, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Non-Final Office Action, Jul. 19, 2007, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Advisory Action, Aug. 25, 2006, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Final Office Action, Jun. 8, 2006, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Non-Final Office Action, Feb. 14, 2006, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Non-Final Office Action, Sep. 29, 2005, U.S. Appl. No. 10/190,298, filed Jul. 3, 2002.
Notice of Allowance, Oct. 10, 2007, U.S. Appl. No. 10/401,134, filed Mar. 26, 2003.
Non-Final Office Action, Oct. 10, 2006, U.S. Appl. No. 10/401,134, filed Mar. 26, 2003.
Notice of Allowance, Jul. 10, 2009, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Final Office Action, Jun. 16, 2009, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Non-Final Office Action, Dec. 12, 2008, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Non-Final Office Action, May 13, 2008, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Non-Final Office Action, Oct. 12, 2007, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Advisory Action, Jul. 18, 2007, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Final Office Action, Apr. 3, 2007, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Non-Final Office Action, Oct. 11, 2006, U.S. Appl. No. 10/401,124, filed Mar. 26, 2003.
Notice of Allowance, Mar. 30, 2007, U.S. Appl. No. 10/387,032, filed Mar. 11, 2003.
Non-Final Office Action, Nov. 7, 2006, U.S. Appl. No. 10/387,032, filed Mar. 11, 2003.
Notice of Allowance, Jul. 9, 2009, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Non-Final Office Action, Nov. 13, 2008, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Advisory Action, Aug. 1, 2008, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Final Office Action, May 7, 2008, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Non-Final Office Action, Oct. 31, 2007, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Advisory Action, Jul. 30, 2007, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Final Office Action, May 9, 2007, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Non-Final Office Action, Nov. 8, 2006, U.S. Appl. No. 10/403,862, filed Mar. 28, 2003.
Notice of Allowance, Jul. 30, 2008, U.S. Appl. No. 10/402,350, filed Mar. 27, 2003.
Non-Final Office Action, Nov. 16, 2007, U.S. Appl. No. 10/402,350, filed Mar. 27, 2003.
Advisory Action, Aug. 15, 2007, U.S. Appl. No. 10/402,350, filed Mar. 27, 2003.
Final Office Action, May 30, 2007, U.S. Appl. No. 10/402,350, filed Mar. 27, 2003.
Non-Final Office Action, Nov. 8, 2006, U.S. Appl. No. 10/402,350, filed Mar. 27, 2003.
Notice of Allowance, May 15, 2013, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Advisory Action, Nov. 15, 2011, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Final Office Action, Aug. 29, 2011, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Non-Final Office Action, Feb. 4, 2011, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Advisory Action, May 3, 2010, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Final Office Action, Feb. 18, 2010, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Non-Final Office Action, Sep. 18, 2009, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Non-Final Office Action, Apr. 7, 2009, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Non-Final Office Action, Oct. 6, 2008, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Non-Final Office Action, Mar. 24, 2008, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Non-Final Office Action, Sep. 5, 2007, U.S. Appl. No. 10/884,175, filed Jul. 2, 2004.
Notice of Allowance, Dec. 31, 2009, U.S. Appl. No. 10/884,174, filed Jul. 2, 2004.
Non-Final Office Action, Aug. 11, 2009, U.S. Appl. No. 10/884,174, filed Jul. 2, 2004.
Final Office Action, Apr. 28, 2009, U.S. Appl. No. 10/884,174, filed Jul. 2, 2004.
Non-Final Office Action, Oct. 6, 2008, U.S. Appl. No. 10/884,174, filed Jul. 2, 2004.
Non-Final Office Action, Mar. 27, 2008, U.S. Appl. No. 10/884,174, filed Jul. 2, 2004.
Non-Final Office Action, Sep. 19, 2007, U.S. Appl. No. 10/884,174, filed Jul. 2, 2004.
Notice of Allowance, Jan. 13, 2010, U.S. Appl. No. 11/082,216, filed Mar. 15, 2005.
Notice of Allowance, Dec. 1, 2009, U.S. Appl. No. 11/082,216, filed Mar. 15, 2005.
Final Office Action, Oct. 9, 2009, U.S. Appl. No. 11/082,216, filed Mar. 15, 2005.
Non-Final Office Action, Mar. 31, 2009, U.S. Appl. No. 11/082,216, filed Mar. 15, 2005.
PTAB Decision, May 5, 2011, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Notice of Allowance, Jul. 23, 2012, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Notice of Allowance, Jun. 12, 2012, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Notice of Allowance, Jul. 13, 2011, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Examiner's Answer, Nov. 28, 2008, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Advisory Action, Feb. 22, 2008, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Final Office Action, Nov. 14, 2007, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Non-Final Office Action, May 24, 2007, U.S. Appl. No. 11/087,376, filed Mar. 22, 2005.
Notice of Allowance, Oct. 2, 2013, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Final Office Action, Apr. 9, 2012, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Non-Final Office Action, Aug. 30, 2011, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Final Office Action, Nov. 19, 2009, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Non-Final Office Action, May 13, 2009, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Advisory Action, Feb. 12, 2009, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Final Office Action, Dec. 4, 2008, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Non-Final Office Action, Jun. 9, 2008, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Non-Final Office Action, Dec. 21, 2007, U.S. Appl. No. 11/107,304, filed Apr. 15, 2005.
Notice of Allowance, Aug. 5, 2013, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Final Office Action, Aug. 29, 2012, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Non-Final Office Action, Dec. 2, 2011, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Final Office Action, Oct. 14, 2010, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Non-Final Office Action, May 13, 2010, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Final Office Action, Dec. 11, 2009, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Non-Final Office Action, May 13, 2009, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Non-Final Office Action, Oct. 28, 2008, U.S. Appl. No. 11/250,151, filed Oct. 12, 2005.
Notice of Allowance, Feb. 18, 2011, U.S. Appl. No. 11/158,897, filed Jun. 21, 2005.
Non-Final Office Action, Jun. 9, 2010, U.S. Appl. No. 11/158,897, filed Jun. 21, 2005.
Final Office Action, Sep. 10, 2009, U.S. Appl. No. 11/158,897, filed Jun. 21, 2005.
Non-Final Office Action, Mar. 17, 2009, U.S. Appl. No. 11/158,897, filed Jun. 21, 2005.
Notice of Allowance, Oct. 25, 2012, U.S. Appl. No. 11/592,450, filed Nov. 2, 2006.
Non-Final Office Action, Feb. 14, 2012, U.S. Appl. No. 11/592,450, filed Nov. 2, 2006.
Final Office Action, Feb. 28, 2011, U.S. Appl. No. 11/592,450, filed Nov. 2, 2006.
Non-Final Office Action, Sep. 28, 2010, U.S. Appl. No. 11/592,450, filed Nov. 2, 2006.
Non-Final Office Action, Apr. 1, 2010, U.S. Appl. No. 11/592,450, filed Nov. 2, 2006.
Notice of Allowance, Sep. 10, 2014, U.S. Appl. No. 11/635,248, filed Dec. 5, 2006.
Non-Final Office Action, Jul. 15, 2014, U.S. Appl. No. 11/635,248, filed Dec. 5, 2006.
Non-Final Office Action, Sep. 11, 2013, U.S. Appl. No. 11/635,248, filed Dec. 5, 2006.
Advisory Action, Nov. 1, 2011, U.S. Appl. No. 11/635,248, filed Dec. 5, 2006.
Final Office Action, Aug. 9, 2011, U.S. Appl. No. 11/635,248, filed Dec. 5, 2006.
Non-Final Office Action, Mar. 16, 2011, U.S. Appl. No. 11/635,248, filed Dec. 5, 2006.
Non-Final Office Action, Sep. 28, 2010, U.S. Appl. No. 11/635,248, filed Dec. 5, 2006.
Supplemental Notice of Allowability, Aug. 28, 2014, U.S. Appl. No. 11/501,189, filed Aug. 7, 2006.
Notice of Allowance, Jun. 26, 2014, U.S. Appl. No. 11/501,189, filed Aug. 7, 2006.
Advisory Action, Nov. 16, 2010, U.S. Appl. No. 11/501,189, filed Aug. 7, 2006.
Non-Final Office Action, Dec. 3, 2013, U.S. Appl. No. 11/501,189, filed Aug. 7, 2006.
Final Office Action, Sep. 2, 2010, U.S. Appl. No. 11/501,189, filed Aug. 7, 2006.
Non-Final Office Action, Apr. 13, 2010, U.S. Appl. No. 11/501,189, filed Aug. 7, 2006.
Notice of Allowance, Feb. 25, 2008, U.S. Appl. No. 11/412,307, filed Apr. 26, 2006.
Notice of Allowance, Feb. 19, 2008, U.S. Appl. No. 11/412,307, filed Apr. 26, 2006.
Non-Final Office Action, Aug. 7, 2007, U.S. Appl. No. 11/412,307, filed Apr. 26, 2006.
Final Office Action, Jul. 14, 2014, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Non-Final Office Action, Jan. 28, 2014, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Non-Final Office Action, Jul. 17, 2013, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Final Office Action, Dec. 4, 2012, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Non-Final Office Action, May 9, 2012, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Advisory Action, Nov. 17, 2011, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Final Office Action, Aug. 31, 2011, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Non-Final Office Action, Apr. 26, 2011, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Final Office Action, Sep. 1, 2010, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Non-Final Office Action, Jan. 21, 2010, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Non-Final Office Action, Jan. 29, 2015, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Notice of Allowance, Apr. 9, 2015, U.S. Appl. No. 11/640,157, filed Dec. 15, 2006.
Notice of Allowance, Feb. 11, 2013, U.S. Appl. No. 11/698,501, filed Jan. 26, 2007.
Non-Final Office Action, Jun. 7, 2012, U.S. Appl. No. 11/698,501, filed Jan. 26, 2007.
Final Office Action, Jul. 6, 2010, U.S. Appl. No. 11/698,501, filed Jan. 26, 2007.
Non-Final Office Action, Dec. 22, 2009, U.S. Appl. No. 11/698,501, filed Jan. 26, 2007.
Final Office Action, Aug. 4, 2009, U.S. Appl. No. 11/698,501, filed Jan. 26, 2007.
Non-Final Office Action, Dec. 24, 2008, U.S. Appl. No. 11/698,501, filed Jan. 26, 2007.
Non-Final Office Action, Jun. 4, 2013, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Final Office Action, Jul. 11, 2012, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Non-Final Office Action, Oct. 4, 2011, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Final Office Action, Oct. 13, 2010, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Non-Final Office Action, Apr. 26, 2010, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Final Office Action, Jan. 27, 2014, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Notice of Allowance, May 5, 2014, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Supplemental Notice of Allowance, Jul. 30, 2014, U.S. Appl. No. 11/784,161, filed Apr. 4, 2007.
Non-Final Office Action, Mar. 29, 2013, U.S. Appl. No. 12/077,005, filed Mar. 14, 2008.
Non-Final Office Action, Jul. 2, 2012, U.S. Appl. No. 12/077,005, filed Mar. 14, 2008.
Non-Final Office Action, Jun. 17, 2011, U.S. Appl. No. 12/077,005, filed Mar. 14, 2008.
Final Office Action, Dec. 14, 2011, U.S. Appl. No. 12/077,005, filed Mar. 14, 2008.
Notice of Allowance, Apr. 30, 2014, U.S. Appl. No. 11/811,228, filed Jun. 8, 2007.
Non-Final Office Action, Nov. 20, 2013, U.S. Appl. No. 11/811,228, filed Jun. 8, 2007.
Advisory Action, Sep. 27, 2013, U.S. Appl. No. 11/811,228, filed Jun. 8, 2007.
Final Office Action, Jul. 17, 2013, U.S. Appl. No. 11/811,228, filed Jun. 8, 2007.
Non-Final Office Action, Feb. 20, 2013, U.S. Appl. No. 11/811,228, filed Jun. 8, 2007.
Final Office Action, Feb. 1, 2011, U.S. Appl. No. 11/811,228, filed Jun. 8, 2007.
Non-Final Office Action, Jul. 7, 2010, U.S. Appl. No. 11/811,228, filed Jun. 8, 2007.
Notice of Allowance, Apr. 22, 2009, U.S. Appl. No. 11/811,384, filed Jul. 7, 2007.
Non-Final Office Action, Oct. 7, 2008, U.S. Appl. No. 11/811,384, filed Jul. 7, 2007.
Final Office Action, Mar. 27, 2012, U.S. Appl. No. 12/132,401, filed Jun. 3, 2008.
Non-Final Office Action, Aug. 23, 2011, U.S. Appl. No. 12/132,401, filed Jun. 3, 2008.
Notice of Allowance, Oct. 9, 2014, U.S. Appl. No. 12/132,401, filed Jun. 3, 2008.
Non-Final Office Action, Jun. 12, 2014, U.S. Appl. No. 12/218,859, filed Jul. 17, 2008.
Final Office Action, Apr. 24, 2012, U.S. Appl. No. 12/218,859, filed Jul. 17, 2008.
Non-Final Office Action, Aug. 5, 2011, U.S. Appl. No. 12/218,859, filed Jul. 17, 2008.
Final Office Action, Apr. 12, 2011, U.S. Appl. No. 12/218,859, filed Jul. 17, 2008.
Non-Final Office Action, Oct. 4, 2010, U.S. Appl. No. 12/218,859, filed Jul. 17, 2008.
Advisory Action, Jun. 20, 2013, U.S. Appl. No. 12/510,913, filed Jul. 28, 2009.
Final Office Action, Apr. 11, 2013, U.S. Appl. No. 12/510,913, filed Jul. 28, 2009.
Non-Final Office Action, Aug. 22, 2012, U.S. Appl. No. 12/510,913, filed Jul. 28, 2009.
Non-Final Office Action, Jun. 9, 2014, U.S. Appl. No. 12/510,913, filed Jul. 28, 2009.
Notice of Allowance, Oct. 7, 2014, U.S. Appl. No. 12/510,913, filed Jul. 28, 2009.
Supplemental Notice of Allowability, Jan. 26, 2015, U.S. Appl. No. 12/510,913, filed Jul. 28, 2009.
Supplemental Notice of Allowability, Feb. 2, 2015, U.S. Appl. No. 12/510,913, filed Jul. 28, 2009.
Notice of Allowance, Oct. 9, 2012, U.S. Appl. No. 12/572,021, filed Oct. 1, 2009.
Non-Final Office Action, Jun. 19, 2012, U.S. Appl. No. 12/572,021, filed Oct. 1, 2009.
Notice of Allowance, Mar. 13, 2012, U.S. Appl. No. 12/576,110, filed Oct. 8, 2009.
Non-Final Office Action, Jul. 7, 2011, U.S. Appl. No. 12/576,110, filed Oct. 8, 2009.
Non-Final Office Action, Sep. 24, 2013, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Non-Final Office Action, Jun. 27, 2012, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Advisory Action, Jun. 12, 2013, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Final Office Action, Apr. 24, 2013, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Final Office Action, Feb. 12, 2014, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Advisory Action, Apr. 23, 2014, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Non-Final Office Action, Jun. 23, 2014, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Non-Final Office Action, Mar. 25, 2015, U.S. Appl. No. 12/720,536, filed Mar. 9, 2010.
Non-Final Office Action, Sep. 23, 2013, U.S. Appl. No. 12/820,061, filed Jun. 21, 2010.
Final Office Action, Jun. 11, 2013, U.S. Appl. No. 12/820,061, filed Jun. 21, 2010.
Non-Final Office Action, Feb. 25, 2013, U.S. Appl. No. 12/820,061, filed Jun. 21, 2010.
Non-Final Office Action, Jun. 9, 2011, U.S. Appl. No. 12/722,470, filed Mar. 11, 2010.
Notice of Allowance, Aug. 18, 2014, U.S. Appl. No. 13/417,071, filed Mar. 9, 2012.
Office Action, Mar. 21, 2014, U.S. Appl. No. 13/417,071, filed Mar. 9, 2012.
Advisory Action, Jun. 26, 2013, U.S. Appl. No. 13/089,202, filed Apr. 18, 2011.
Final Office Action, Apr. 8, 2013, U.S. Appl. No. 13/089,202, filed Apr. 18, 2011.
Non-Final Office Action, Aug. 1, 2012, U.S. Appl. No. 13/089,202, filed Apr. 18, 2011.
Non-Final Office Action, Aug. 21, 2014, U.S. Appl. No. 13/089,202, filed Apr. 18, 2011.
Final Office Action, Jan. 21, 2015, U.S. Appl. No. 13/089,202, filed Apr. 18, 2011.
Advisory Action, Apr. 14, 2015, U.S. Appl. No. 13/089,202, filed Apr. 18, 2011.
Notice of Allowance, Nov. 14, 2013, U.S. Appl. No. 13/161,401, filed Jun. 15, 2011.
Notice of Allowance, Mar. 19, 2014, U.S. Appl. No. 13/277,149, filed Oct. 19, 2011.
Notice of Allowance, Jun. 13, 2014, U.S. Appl. No. 13/539,037, filed Jun. 29, 2012.
Non-Final Office Action, Mar. 19, 2015, U.S. Appl. No. 13/685,372, filed Nov. 26, 2012.
Non-Final Office Action, Jan. 8, 2015, U.S. Appl. No. 13/481,561, filed May 25, 2012.
Abney, Steven P., “Parsing by Chunks,” 1991, Principle-Based Parsing: Computation and Psycholinguistics, vol. 44, pp. 257-279.
Agbago, A., et al., “Truecasing for the Portage System,” In Recent Advances in Natural Language Processing (Borovets, Bulgaria), Sep. 21-23, 2005, pp. 21-24.
Al-Onaizan et al., “Statistical Machine Translation,” 1999, JHU Summer Tech Workshop, Final Report, pp. 1-42.
Al-Onaizan et al., “Translating with Scarce Resources,” 2000, 17th National Conference of the American Association for Artificial Intelligence, Austin, TX, pp. 672-678.
Al-Onaizan, Y. and Knight K., “Machine Transliteration of Names in Arabic Text,” Proceedings of ACL Workshop on Computational Approaches to Semitic Languages. Philadelphia, 2002.
Al-Onaizan, Y. and Knight, K., “Named Entity Translation: Extended Abstract”, 2002, Proceedings of HLT-02, San Diego, CA.
Al-Onaizan, Y. and Knight, K., “Translating Named Entities Using Monolingual and Bilingual Resources,” 2002, Proc. of the 40th Annual Meeting of the ACL, pp. 400-408.
Alshawi et al., “Learning Dependency Translation Models as Collections of Finite-State Head Transducers,” 2000, Computational Linguistics, vol. 26, pp. 45-60.
Alshawi, Hiyan, “Head Automata for Speech Translation”, Proceedings of the ICSLP 96, 1996, Philadelphia, Pennsylvania.
Ambati, V., “Dependency Structure Trees in Syntax Based Machine Translation,” Spring 2008 Report <http://www.cs.cmu.edu/˜vamshi/publications/DependencyMT_report.pdf>, pp. 1-8.
Arbabi et al., “Algorithms for Arabic name transliteration,” Mar. 1994, IBM Journal of Research and Development, vol. 38, Issue 2, pp. 183-194.
Arun, A., et al., “Edinburgh System Description for the 2006 TC-STAR Spoken Language Translation Evaluation,” in TC-STAR Workshop on Speech-to-Speech Translation (Barcelona, Spain), Jun. 2006, pp. 37-41.
Ballesteros, L. et al., “Phrasal Translation and Query Expansion Techniques for Cross-Language Information Retrieval,” SIGIR 97, Philadelphia, PA, © 1997, pp. 84-91.
Bangalore, S. and Rambow, O., “Evaluation Metrics for Generation,” 2000, Proc. of the 1st International Natural Language Generation Conf., vol. 14, pp. 1-8.
Bangalore, S. and Rambow, O., “Using TAGs, a Tree Model, and a Language Model for Generation,” May 2000, Workshop TAG+5, Paris.
Shapiro, Stuart (ed.), “Encyclopedia of Artificial Intelligence, 2nd edition”, vol. 2, 1992, John Wiley & Sons Inc; “Unification” article, K. Knight, pp. 1630-1637.
Shirai, S., “A Hybrid Rule and Example-based Method for Machine Translation,” 1997, NTT Communication Science Laboratories, pp. 1-5.
Sobashima et al., “A Bidirectional Transfer-Driven Machine Translation System for Spoken Dialogues,” 1994, Proc. of 15th Conference on Computational Linguistics, vol. 1, pp. 64-68.
Soricut et al., “Using a Large Monolingual Corpus to Improve Translation Accuracy,” 2002, Lecture Notes in Computer Science, vol. 2499, Proc. of the 5th Conference of the Association for Machine Translation in the Americas on Machine Translation: From Research to Real Users, pp. 155-164.
Stalls, B. and Knight, K., “Translating Names and Technical Terms in Arabic Text,” 1998, Proc. of the COLING/ACL Workshop on Computational Approaches to Semitic Languages.
Sumita et al., “A Discourse Structure Analyzer for Japanese Text,” 1992, Proc. of the International Conference on Fifth Generation Computer Systems, vol. 2, pp. 1133-1140.
Sun et al., “Chinese Named Entity Identification Using Class-based Language Model,” 2002, Proc. of 19th International Conference on Computational Linguistics, Taipei, Taiwan, vol. 1, pp. 1-7.
Tanaka, K. and Iwasaki, H. “Extraction of Lexical Translations from Non-Aligned Corpora,” Proceedings of COLING 1996.
Taskar, B., et al., “A Discriminative Matching Approach to Word Alignment,” In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (Vancouver, BC, Canada, Oct. 6-8, 2005). Human Language Technology Conference. Assoc. For Computational Linguistics, Morristown, NJ.
Taylor et al., “The Penn Treebank: An Overview,” in A. Abeillé (ed.), Treebanks: Building and Using Parsed Corpora, 2003, pp. 5-22.
Tiedemann, Jorg, “Automatic Construction of Weighted String Similarity Measures,” 1999, In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora.
Tillman, C. and Xia, F., “A Phrase-Based Unigram Model for Statistical Machine Translation,” 2003, Proc. of the North American Chapter of the ACL on Human Language Technology, vol. 2, pp. 106-108.
Tillmann et al., “A DP Based Search Using Monotone Alignments in Statistical Translation,” 1997, Proc. of the Annual Meeting of the ACL, pp. 366-372.
Tomas, J., “Binary Feature Classification for Word Disambiguation in Statistical Machine Translation,” Proceedings of the 2nd Int'l. Workshop on Pattern Recognition, 2002, pp. 1-12.
Uchimoto, K. et al., “Word Translation by Combining Example-Based Methods and Machine Learning Models,” Natural Language Processing (Shizen Gengo Shori), vol. 10, No. 3, Apr. 2003, pp. 87-114.
Uchimoto, K. et al., “Word Translation by Combining Example-based Methods and Machine Learning Models,” Natural Language Processing (Shizen Gengo Shori), vol. 10, No. 3, Apr. 2003, pp. 87-114. (English Translation).
Ueffing et al., “Generation of Word Graphs in Statistical Machine Translation,” 2002, Proc. of Empirical Methods in Natural Language Processing (EMNLP), pp. 156-163.
Varga et al., “Parallel Corpora for Medium Density Languages”, In Proceedings of RANLP 2005, pp. 590-596.
Veale, T. and Way, A., “Gaijin: A Bootstrapping, Template-Driven Approach to Example-Based MT,” 1997, Proc. of New Methods in Natural Language Processing (NEMPLP97), Sofia, Bulgaria.
Vogel et al., “The CMU Statistical Machine Translation System,” 2003, Machine Translation Summit IX, New Orleans, LA.
Vogel et al., “The Statistical Translation Module in the Verbmobil System,” 2000, Workshop on Multi-Lingual Speech Communication, pp. 69-74.
Vogel, S. and Ney, H., “Construction of a Hierarchical Translation Memory,” 2000, Proc. of COLING 2000, Saarbrucken, Germany, pp. 1131-1135.
Wang, Y. and Waibel, A., “Decoding Algorithm in Statistical Machine Translation,” 1996, Proc. of the 35th Annual Meeting of the ACL, pp. 366-372.
Wang, Ye-Yi, “Grammar Inference and Statistical Machine Translation,” 1998, Ph.D Thesis, Carnegie Mellon University, Pittsburgh, PA.
Watanabe et al., “Statistical Machine Translation Based on Hierarchical Phrase Alignment,” 2002, 9th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-2002), Keihanna, Japan, pp. 188-198.
Witbrock, M. and Mittal, V., “Ultra-Summarization: A Statistical Approach to Generating Highly Condensed Non-Extractive Summaries,” 1999, Proc. of SIGIR '99, 22nd International Conference on Research and Development in Information Retrieval, Berkeley, CA, pp. 315-316.
Wu, Dekai, “A Polynomial-Time Algorithm for Statistical Machine Translation,” 1996, Proc. of 34th Annual Meeting of the ACL, pp. 152-158.
Wu, Dekai, “Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora,” 1997, Computational Linguistics, vol. 23, Issue 3, pp. 377-403.
Yamada, K. and Knight, K. “A Syntax-Based Statistical Translation Model,” 2001, Proc. of the 39th Annual Meeting of the ACL, pp. 523-530.
Yamada, K. and Knight, K., “A Decoder for Syntax-Based Statistical MT,” 2001, Proceedings of the 40th Annual Meeting of the ACL, pp. 303-310.
Yamada, K., “A Syntax-Based Statistical Translation Model,” 2002 PhD Dissertation, pp. 1-141.
Yamamoto et al., “A Comparative Study on Translation Units for Bilingual Lexicon Extraction,” 2001, Japan Academic Association for Copyright Clearance, Tokyo, Japan.
Yamamoto et al., “Acquisition of Phrase-level Bilingual Correspondence using Dependency Structure,” In Proceedings of COLING-2000, pp. 933-939.
Yarowsky, David, “Unsupervised Word Sense Disambiguation Rivaling Supervised Methods,” 1995, 33rd Annual Meeting of the ACL, pp. 189-196.
Zhang et al., “Synchronous Binarization for Machine Translations,” Jun. 4-9, 2006, in Proc. of the Human Language Technology Conference of the North American Chapter of the ACL, pp. 256-263.
Zhang et al., “Distributed Language Modeling for N-best List Re-ranking,” In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (Sydney, Australia, Jul. 22-23, 2006). ACL Workshops. Assoc. for Computational Linguistics, Morristown, NJ, 216-223.
Patent Cooperation Treaty International Preliminary Report on Patentability and The Written Opinion, International application No. PCT/US2008/004296, Oct. 6, 2009, 5 pgs.
Document, Wikipedia.com, web.archive.org (Feb. 22, 2004) <http://en.wikipedia.org/wiki/Document>, Feb. 22, 2004.
Identifying, Dictionary.com, wayback.archive.org (Feb. 28, 2007) <http://dictionary.reference.com/browse/identifying>, accessed Oct. 27, 2011 <http://web.archive.org/web/20070228150533/http://dictionary.reference.com/browse/identifying>.
Koehn, P. et al., “Statistical Phrase-Based Translation,” Proceedings of HLT-NAACL 2003 Main Papers, pp. 48-54, Edmonton, May-Jun. 2003.
Abney, S.P., “Stochastic Attribute Value Grammars”, Association for Computational Linguistics, 1997, pp. 597-618.
Fox, H., “Phrasal Cohesion and Statistical Machine Translation” Proceedings of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, Jul. 2002, pp. 304-311. Association for Computational Linguistics. <URL: http://acl.ldc.upenn.edu/W/W02/W02-1039.pdf>.
Tillman, C., et al., “Word Reordering and a Dynamic Programming Beam Search Algorithm for Statistical Machine Translation,” 2003, Association for Computational Linguistics, vol. 29, No. 1, pp. 97-133 <URL: http://acl.ldc.upenn.edu/J/J03/J03-1005.pdf>.
Wang, W., et al. “Capitalizing Machine Translation” In HLT-NAACL '06 Proceedings Jun. 2006. <http://www.isi.edu/natural-language/mt/hlt-naac1-06-wang.pdf>.
Langlais, P. et al., “TransType: a Computer-Aided Translation Typing System” EmbedMT '00 ANLP-NAACL 2000 Workshop: Embedded Machine Translation Systems, 2000, pp. 46-51. <http://acl.ldc.upenn.edu/W/W00/W00-0507.pdf>.
Ueffing et al., “Using POS Information for Statistical Machine Translation into Morphologically Rich Languages,” In EACL, 2003: Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics, pp. 347-354.
Frederking et al., “Three Heads are Better Than One,” In Proceedings of the 4th Conference on Applied Natural Language Processing, Stuttgart, Germany, 1994, pp. 95-100.
Yasuda et al., “Automatic Machine Translation Selection Scheme to Output the Best Result,” Proc. of LREC, 2002, pp. 525-528.
Papineni et al., “Bleu: a Method for Automatic Evaluation of Machine Translation”, Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Jul. 2002, pp. 311-318.
Shaalan et al., “Machine Translation of English Noun Phrases into Arabic”, (2004), vol. 17, No. 2, International Journal of Computer Processing of Oriental Languages, 14 pages.
Bangalore, S. and Rambow, O., “Corpus-Based Lexical Choice in Natural Language Generation,” 2000, Proc. of the 38th Annual ACL, Hong Kong, pp. 464-471.
Bangalore, S. and Rambow, O., “Exploiting a Probabilistic Hierarchical Model for Generation,” 2000, Proc. of 18th conf. on Computational Linguistics, vol. 1, pp. 42-48.
Bannard, C. and Callison-Burch, C., “Paraphrasing with Bilingual Parallel Corpora,” In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (Ann Arbor, MI, Jun. 25-30, 2005), Annual Meeting of the ACL Assoc. for Computational Linguistics, Morristown, NJ, 597-604. DOI=http://dx.doi.org/10.3115/1219840.
Barnett et al., “Knowledge and Natural Language Processing,” Aug. 1990, Communications of the ACM, vol. 33, Issue 8, pp. 50-71.
Baum, L., “An Inequality and Associated Maximization Technique in Statistical Estimation for Probabilistic Functions of Markov Processes”, 1972, Inequalities 3:1-8.
Berhe, G. et al., “Modeling Service-based Multimedia Content Adaptation in Pervasive Computing,” CF '04 (Ischia, Italy) Apr. 14-16, 2004, pp. 60-69.
Boitet, C. et al., “Main Research Issues in Building Web Services for Mutualized, Non-Commercial Translation,” Proc. of the 6th Symposium on Natural Language Processing, Human and Computer Processing of Language and Speech, © 2005, pp. 1-11.
Brants, T., “TnT—A Statistical Part-of-Speech Tagger,” 2000, Proc. of the 6th Applied Natural Language Processing Conference, Seattle.
Brill, E., “Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging”, 1995, Computational Linguistics, vol. 21, No. 4, pp. 543-565.
Brown et al., “A Statistical Approach to Machine Translation,” Jun. 1990, Computational Linguistics, vol. 16, No. 2, pp. 79-85.
Brown et al., “Word-Sense Disambiguation Using Statistical Methods,” 1991, Proc. of 29th Annual ACL pp. 264-270.
Brown et al., “The Mathematics of Statistical Machine Translation: Parameter Estimation,” 1993, Computational Linguistics, vol. 19, Issue 2, pp. 263-311.
Brown, Ralf, “Automated Dictionary Extraction for “Knowledge-Free” Example-Based Translation,” 1997, Proc. of 7th Int'l Conf. on Theoretical and Methodological Issues in MT, Santa Fe, NM, pp. 111-118.
Callan et al., “TREC and TIPSTER Experiments with Inquery,” 1994, Information Processing and Management, vol. 31, Issue 3, pp. 327-343.
Callison-Burch, C. et al., “Statistical Machine Translation with Word- and Sentence-aligned Parallel Corpora,” In Proceedings of the 42nd Meeting on Assoc. For Computational Linguistics (Barcelona, Spain, Jul. 21-26, 2004). Annual Meeting of the ACL. Assoc. for Computational Linguistics, Morristown, NJ, 1.
Carl, M. “A Constructivist Approach to Machine Translation,” 1998, New Methods of Language Processing and Computational Natural Language Learning, pp. 247-256.
Chen, et al., “Machine Translation: An Integrated Approach,” 1995, Proc. of 6th Int'l Conf. on Theoretical and Methodological Issues in MT, pp. 287-294.
Cheng et al., “Creating Multilingual Translation Lexicons with Regional Variations Using Web Corpora,” In Proceedings of the 42nd Annual Meeting on Assoc. for Computational Linguistics (Barcelona, Spain, Jul. 21-26, 2004). Annual Meeting of the ACL. Assoc. For Computational Linguistics, Morristown, NJ, 53.
Cheung et al., “Sentence Alignment in Parallel, Comparable, and Quasi-comparable Corpora”, In Proceedings of LREC, 2004, pp. 30-33.
Chinchor, Nancy, “MUC-7 Named Entity Task Definition,” 1997, Version 3.5.
Clarkson, P. and Rosenfeld, R., “Statistical Language Modeling Using the CMU-Cambridge Toolkit”, 1997, Proc. ESCA Eurospeech, Rhodes, Greece, pp. 2707-2710.
Cohen et al., “Spectral Bloom Filters,” SIGMOD 2003, Jun. 9-12, 2003, ACM pp. 241-252.
Cohen, “Hardware-Assisted Algorithm for Full-text Large-Dictionary String Matching Using n-gram Hashing,” 1998, Information Processing and Management, vol. 34, No. 4, pp. 443-464.
Cohen, Yossi, “Interpreter for FUF,” available at URL <ftp://ftp.cs.bgu.ac.il/pub/people/elhadad/fuf-life.lf> (downloaded Jun. 1, 2008).
Corston-Oliver, S., “Beyond String Matching and Cue Phrases: Improving Efficiency and Coverage in Discourse Analysis”, 1998, The AAAI Spring Symposium on Intelligent Text Summarization, pp. 9-15.
Covington, “An Algorithm to Align Words for Historical Comparison”, Computational Linguistics, 1996, vol. 22, No. 4, pp. 481-496.
Dagan et al., “Word Sense Disambiguation Using a Second Language Monolingual Corpus”, 1994, Association for Computational Linguistics, vol. 20, No. 4, pp. 563-596.
Dempster et al., “Maximum Likelihood from Incomplete Data via the EM Algorithm”, 1977, Journal of the Royal Statistical Society, vol. 39, No. 1, pp. 1-38.
Diab et al., “A Statistical Word-Level Translation Model for Comparable Corpora,” 2000, In Proc. of the Conference on Content Based Multimedia Information Access (RIAO).
Diab, M., “An Unsupervised Method for Multilingual Word Sense Tagging Using Parallel Corpora: A Preliminary Investigation”, 2000, SIGLEX Workshop on Word Senses and Multi-Linguality, pp. 1-9.
Eisner, Jason, “Learning Non-Isomorphic Tree Mappings for Machine Translation,” 2003, in Proc. of the 41st Meeting of the ACL, pp. 205-208.
Elhadad et al., “Floating Constraints in Lexical Choice”, 1996, ACL, vol. 23, No. 2, pp. 195-239.
Elhadad, M. and Robin, J., “An Overview of SURGE: a Reusable Comprehensive Syntactic Realization Component,” 1996, Technical Report 96-03, Department of Mathematics and Computer Science, Ben Gurion University, Beer Sheva, Israel.
Elhadad, M. and Robin, J., “Controlling Content Realization with Functional Unification Grammars”, 1992, Aspects of Automated Natural Language Generation, Dale et al. (eds)., Springer Verlag, pp. 89-104.
Elhadad, Michael, “FUF: the Universal Unifier User Manual Version 5.2”, 1993, Department of Computer Science, Ben Gurion University, Beer Sheva, Israel.
Elhadad, Michael, “Using Argumentation to Control Lexical Choice: A Functional Unification Implementation”, 1992, Ph.D. Thesis, Graduate School of Arts and Sciences, Columbia University.
Elhadad, M. and Robin, J., “SURGE: a Comprehensive Plug-in Syntactic Realization Component for Text Generation”, 1999 (available at http://www.cs.bgu.ac.il/~elhadad/pub.html).
Fleming, Michael et al., “Mixed-Initiative Translation of Web Pages,” AMTA 2000, LNAI 1934, Springer-Verlag, Berlin, Germany, 2000, pp. 25-29.
Och, Franz Josef and Ney, Hermann, “Improved Statistical Alignment Models,” ACL00: Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, [Online] Oct. 2-6, 2000, pp. 440-447, XP002279144, Hong Kong, China. Retrieved from the Internet: <URL:http://www-i6.informatik.rwth-aachen.de/Colleagues/och/ACL00.ps>, retrieved on May 6, 2004, abstract.
Ren, Fuji and Shi, Hongchi, “Parallel Machine Translation: Principles and Practice,” Engineering of Complex Computer Systems, 2001 Proceedings, Seventh IEEE Int'l Conference, pp. 249-259, 2001.
Fung et al., “Mining Very-Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and EM”, In EMNLP 2004.
Fung, P. and Yee, L., “An IR Approach for Translating New Words from Nonparallel, Comparable Texts”, 1998, 36th Annual Meeting of the ACL, 17th International Conference on Computational Linguistics, pp. 414-420.
Fung, Pascale, “Compiling Bilingual Lexicon Entries From a Non-Parallel English-Chinese Corpus”, 1995, Proc. of the Third Workshop on Very Large Corpora, Boston, MA, pp. 173-183.
Gale, W. and Church, K., “A Program for Aligning Sentences in Bilingual Corpora,” 1991, 29th Annual Meeting of the ACL, pp. 177-183.
Gale, W. and Church, K., “A Program for Aligning Sentences in Bilingual Corpora,” 1993, Computational Linguistics, vol. 19, No. 1, pp. 75-102.
Galley et al., “Scalable Inference and Training of Context-Rich Syntactic Translation Models,” Jul. 2006, in Proc. of the 21st International Conference on Computational Linguistics, pp. 961-968.
Galley et al., “What's in a translation rule?”, 2004, in Proc. of HLT/NAACL '04, pp. 1-8.
Gaussier et al., “A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora”, In Proceedings of ACL 2004, July.
Germann et al., “Fast Decoding and Optimal Decoding for Machine Translation”, 2001, Proc. of the 39th Annual Meeting of the ACL, Toulouse, France, pp. 228-235.
Germann, Ulrich, “Building a Statistical Machine Translation System from Scratch: How Much Bang for the Buck Can We Expect?” Proc. of the Data-Driven MT Workshop of ACL-01, Toulouse, France, 2001.
Isahara et al., “Analysis, Generation and Semantic Representation in Contrast—A Context-Based Machine Translation System”, 1995, Systems and Computers in Japan, vol. 26, No. 14, pp. 37-53.
Proz.com, Rates for proofreading versus Translating, http://www.proz.com/forum/business_issues/202-rates_for_proofreading_versus_translating.html, Apr. 23, 2009, retrieved Jul. 13, 2012.
Graciet, C., Volume discounts on large translation projects, Naked Translations, http://www.nakedtranslations.com/en/2007/volume-discounts-on-large-translation-projects/, Aug. 1, 2007, retrieved Jul. 16, 2012.
Graehl, J. and Knight, K., “Training Tree Transducers,” May 2004, In NAACL-HLT (2004), pp. 105-112.
Niessen et al., “Statistical machine translation with scarce resources using morphosyntactic information”, Jun. 2004, Computational Linguistics, vol. 30, issue 2, pp. 181-204.
Liu et al., “Context Discovery Using Attenuated Bloom Filters in Ad-Hoc Networks,” Springer, pp. 13-25, 2006.
First Office Action mailed Jun. 7, 2004 in Canadian Patent Application 2408819, filed May 11, 2001.
First Office Action mailed Jun. 14, 2007 in Canadian Patent Application 2475857, filed Mar. 11, 2003.
Office Action mailed Mar. 26, 2012 in German Patent Application 10392450.7, filed Mar. 28, 2003.
First Office Action mailed Nov. 5, 2008 in Canadian Patent Application 2408398, filed Mar. 27, 2003.
Second Office Action mailed Sep. 25, 2009 in Canadian Patent Application 2408398, filed Mar. 27, 2003.
First Office Action mailed Mar. 1, 2005 in European Patent Application No. 03716920.8, filed Mar. 27, 2003.
Second Office Action mailed Nov. 9, 2006 in European Patent Application No. 03716920.8, filed Mar. 27, 2003.
Third Office Action mailed Apr. 30, 2008 in European Patent Application No. 03716920.8, filed Mar. 27, 2003.
Office Action mailed Oct. 25, 2011 in Japanese Patent Application 2007-536911, filed Oct. 12, 2005.
Office Action mailed Jul. 24, 2012 in Japanese Patent Application 2007-536911, filed Oct. 12, 2005.
Final Office Action mailed Apr. 9, 2013 in Japanese Patent Application 2007-536911, filed Oct. 12, 2005.
Office Action mailed May 13, 2005 in Chinese Patent Application 1812317.1, filed May 11, 2001.
Office Action mailed Apr. 21, 2006 in Chinese Patent Application 1812317.1, filed May 11, 2001.
Office Action mailed Jul. 19, 2006 in Japanese Patent Application 2003-577155, filed Mar. 11, 2003.
Office Action mailed Mar. 1, 2007 in Chinese Patent Application 3805749.2, filed Mar. 11, 2003.
Office Action mailed Feb. 27, 2007 in Japanese Patent Application 2002-590018, filed May 13, 2002.
Office Action mailed Jan. 26, 2007 in Chinese Patent Application 3807018.9, filed Mar. 27, 2003.
Office Action mailed Dec. 7, 2005 in Indian Patent Application 2283/DELNP/2004, filed Mar. 11, 2003.
Office Action mailed Mar. 31, 2009 in European Patent Application 3714080.3, filed Mar. 11, 2003.
Agichtein et al., “Snowball: Extracting Information from Large Plain-Text Collections,” ACM DL '00, the Fifth ACM Conference on Digital Libraries, Jun. 2, 2000, San Antonio, TX, USA.
Satake, Masaomi, “Anaphora Resolution for Named Entity Extraction in Japanese Newspaper Articles,” Master's Thesis [online], Feb. 15, 2002, School of Information Science, JAIST, Nomi, Ishikawa, Japan.
Office Action mailed Aug. 29, 2006 in Japanese Patent Application 2003-581064, filed Mar. 27, 2003.
Office Action mailed Jan. 26, 2007 in Chinese Patent Application 3807027.8, filed Mar. 28, 2003.
Office Action mailed Jul. 25, 2006 in Japanese Patent Application 2003-581063, filed Mar. 28, 2003.
Huang et al., “A syntax-directed translator with extended domain of locality,” Jun. 9, 2006, In Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, pp. 1-8, New York City, New York, Association for Computational Linguistics.
Melamed et al., “Statistical machine translation by generalized parsing,” 2005, Technical Report 05-001, Proteus Project, New York University, http://nlp.cs.nyu.edu/pubs/.
Huang et al., “Statistical syntax-directed translation with extended domain of locality,” Jun. 9, 2006, In Proceedings of AMTA, pp. 1-8.
Huang et al., “Automatic Extraction of Named Entity Translingual Equivalence Based on Multi-Feature Cost Minimization”. In Proceedings of the ACL 2003 Workshop on Multilingual and Mixed-Language Named Entity Recognition.
Notice of Allowance mailed Dec. 10, 2013 in Japanese Patent Application 2007-536911, filed Oct. 12, 2005.
Makoushina, J., “Translation Quality Assurance Tools: Current State and Future Approaches,” Translating and the Computer, Dec. 17, 2007, 29, 1-39, retrieved at <http://www.palex.ru/fc/98/Translation%20Quality%20Assurance%20Tools.pdf>.
Specia et al. “Improving the Confidence of Machine Translation Quality Estimates,” MT Summit XII, Ottawa, Canada, 2009, 8 pages.
Soricut et al., “TrustRank: Inducing Trust in Automatic Translations via Ranking”, published In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), Jul. 2010, pp. 612-621.
U.S. Appl. No. 11/454,212, filed Jun. 15, 2006.
Editorial Freelancer Association, Guidelines for Fees, https://web.archive.org/web/20090604130631/http://www.the-efa.org/res/code_2.php, Jun. 4, 2009, retrieved Aug. 9, 2014.
Wasnak, L., “Beyond the Basics: How Much Should I Charge”, https://web.archive.org/web/20070121231531/http://www.writersmarket.com/assets/pdf/How_Much_Should_I_Charge.pdf, Jan. 21, 2007, retrieved Aug. 19, 2014.
Summons to Attend Oral Proceedings mailed Sep. 18, 2014 in German Patent Application 10392450.7, filed Mar. 28, 2003.
Examination Report mailed Jul. 22, 2013 in German Patent Application 112005002534.9, filed Oct. 12, 2005.
Office Action mailed Feb. 2, 2015 in German Patent Application 10392450.7, filed Mar. 28, 2003.
Abney, Steven P., “Parsing by Chunks,” 1994, Bell Communications Research, pp. 1-18.
Leusch et al., “A Novel String-to-String Distance Measure with Applications to Machine Translation Evaluation”, 2003, https://www-i6.informatik.rwth-aachen.de, pp. 1-8.
Oflazer, Kemal, “Error-tolerant Finite-state Recognition with Application to Morphological Analysis and Spelling Correction”, 1996, https://www.ucrel.lancs.ac.uk, pp. 1-18.
Snover et al., “A Study of Translation Edit Rate with Targeted Human Annotation”, 2006, https://www.cs.umd.edu/~snover/pub/amta06/ter_amta.pdf, pp. 1-9.
Levenshtein, V.I., “Binary Codes Capable of Correcting Deletions, Insertions, and Reversals”, 1966, Doklady Akademii Nauk SSSR, vol. 163, No. 4, pp. 707-710.
Marcu, Daniel, “Discourse trees are good indicators of importance in text,” 1999, Advances in Automatic Text Summarization, The MIT Press, Cambridge, MA.
Marcu, Daniel, “Instructions for Manually Annotating the Discourse Structures of Texts,” 1999, Discourse Annotation, pp. 1-49.
Marcu, Daniel, “The Rhetorical Parsing of Natural Language Texts,” 1997, Proceedings of ACL/EACL '97, pp. 96-103.
Marcu, Daniel, “The Rhetorical Parsing, Summarization, and Generation of Natural Language Texts,” 1997, Ph.D. Thesis, Graduate Department of Computer Science, University of Toronto.
Marcu, Daniel, “Towards a Unified Approach to Memory- and Statistical-Based Machine Translation,” 2001, Proc. of the 39th Annual Meeting of the ACL, pp. 378-385.
McCallum, A. and Li, W., “Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-enhanced Lexicons,” In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL, 2003, vol. 4 (Edmonton, Canada), Assoc. for Computational Linguistics, Morristown, NJ, pp. 188-191.
McDevitt, K. et al., “Designing of a Community-based Translation Center,” Technical Report TR-03-30, Computer Science, Virginia Tech, © 2003, pp. 1-8.
Melamed, I. Dan, “A Word-to-Word Model of Translational Equivalence,” 1997, Proc. of the 35th Annual Meeting of the ACL, Madrid, Spain, pp. 490-497.
Melamed, I. Dan, “Automatic Evaluation and Uniform Filter Cascades for Inducing N-Best Translation Lexicons,” 1995, Proc. of the 3rd Workshop on Very Large Corpora, Boston, MA, pp. 184-198.
Melamed, I. Dan, “Empirical Methods for Exploiting Parallel Texts,” 2001, MIT Press, Cambridge, MA [table of contents].
Meng et al., “Generating Phonetic Cognates to Handle Named Entities in English-Chinese Cross-Language Spoken Document Retrieval,” 2001, IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 311-314.
Metze, F. et al., “The NESPOLE! Speech-to-Speech Translation System,” Proc. of the HLT 2002, 2nd Int'l Conf. on Human Language Technology (San Francisco, CA), © 2002, pp. 378-383.
Mikheev et al., “Named Entity Recognition without Gazetteers,” 1999, Proc. of European Chapter of the ACL, Bergen, Norway, pp. 1-8.
Miike et al., “A Full-Text Retrieval System with a Dynamic Abstract Generation Function,” 1994, Proceedings of SIGIR '94, pp. 152-161.
Mohri, M. and Riley, M., “An Efficient Algorithm for the N-Best-Strings Problem,” 2002, Proc. of the 7th Int. Conf. on Spoken Language Processing (ICSLP'02), Denver, CO, pp. 1313-1316.
Mohri, Mehryar, “Regular Approximation of Context-Free Grammars Through Transformation”, 2000, “Robustness in Language and Speech Technology”, Chapter 9, Kluwer Academic Publishers, pp. 251-261.
Monasson et al., “Determining Computational Complexity from Characteristic ‘Phase Transitions’,” Jul. 1999, Nature Magazine, vol. 400, pp. 133-137.
Mooney, Raymond, “Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning,” 1996, Proc. of the Conference on Empirical Methods in Natural Language Processing, pp. 82-91.
Nagao, K. et al., “Semantic Annotation and Transcoding: Making Web Content More Accessible,” IEEE Multimedia, vol. 8, Issue 2 Apr.-Jun. 2001, pp. 69-81.
Nederhof, M. and Satta, G., “IDL-Expressions: A Formalism for Representing and Parsing Finite Languages in Natural Language Processing,” 2004, Journal of Artificial Intelligence Research, vol. 21, pp. 281-287.
Niessen, S. and Ney, H., “Toward Hierarchical Models for Statistical Machine Translation of Inflected Languages,” 2001, Data-Driven Machine Translation Workshop, Toulouse, France, pp. 47-54.
Norvig, Peter, “Techniques for Automatic Memoization with Applications to Context-Free Parsing”, Computational Linguistics, 1991, vol. 17, No. 1, pp. 91-98.
Och et al., “Improved Alignment Models for Statistical Machine Translation,” 1999, Proc. of the Joint Conf. of Empirical Methods in Natural Language Processing and Very Large Corpora, pp. 20-28.
Och et al., “A Smorgasbord of Features for Statistical Machine Translation,” HLT-NAACL Conference, Mar. 2004, 8 pages.
Och, F., “Minimum Error Rate Training in Statistical Machine Translation,” In Proceedings of the 41st Annual Meeting on Assoc. for Computational Linguistics, vol. 1 (Sapporo, Japan, Jul. 7-12, 2003), Annual Meeting of the ACL, Assoc. for Computational Linguistics, Morristown, NJ, pp. 160-167. DOI=http://dx.doi.org/10.3115/1075096.
Och, F. and Ney, H., “Discriminative Training and Maximum Entropy Models for Statistical Machine Translation,” 2002, Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, pp. 295-302.
Kumar, S. and Byrne, W., “Minimum Bayes-Risk Decoding for Statistical Machine Translation,” HLT-NAACL Conference, Mar. 2004, 8 pages.
Papineni et al., “Bleu: a Method for Automatic Evaluation of Machine Translation,” 2001, IBM Research Report, RC22176 (W0109-022).
Perugini, Saverio et al., “Enhancing Usability in CITIDEL: Multimodal, Multilingual and Interactive Visualization Interfaces,” JCDL '04, Tucson, AZ, Jun. 7-11, 2004, pp. 315-324.
Petrov et al., “Learning Accurate, Compact and Interpretable Tree Annotation,” Jun. 4-9, 2006, in Proc. of the Human Language Technology Conference of the North American Chapter of the ACL, pp. 433-440.
Pla et al., “Tagging and Chunking with Bigrams,” 2000, Proc. of the 18th Conference on Computational Linguistics, vol. 2, pp. 614-620.
Qun, Liu, “A Chinese-English Machine Translation System Based on Micro-Engine Architecture,” An Int'l Conference on Translation and Information Technology, Hong Kong, Dec. 2000, pp. 1-10.
Rapp, Reinhard, “Automatic Identification of Word Translations from Unrelated English and German Corpora,” 1999, 37th Annual Meeting of the ACL, pp. 519-526.
Rapp, Reinhard, “Identifying Word Translations in Non-Parallel Texts,” 1995, 33rd Annual Meeting of the ACL, pp. 320-322.
Rayner et al., “Hybrid Language Processing in the Spoken Language Translator,” IEEE 1997, pp. 107-110.
Resnik, P. and Smith, A., “The Web as a Parallel Corpus,” Sep. 2003, Computational Linguistics, Special Issue on Web as Corpus, vol. 29, Issue 3, pp. 349-380.
Resnik, P. and Yarowsky, D. “A Perspective on Word Sense Disambiguation Methods and Their Evaluation,” 1997, Proceedings of SIGLEX '97, Washington, D.C., pp. 79-86.
Resnik, Philip, “Mining the Web for Bilingual Text,” 1999, 37th Annual Meeting of the ACL, College Park, MD, pp. 527-534.
Rich, E. and Knight, K., “Artificial Intelligence, Second Edition,” 1991, McGraw-Hill Book Company [Front Matter].
Richard et al., “Visiting the Traveling Salesman Problem with Petri nets and application in the glass industry,” Feb. 1996, IEEE Emerging Technologies and Factory Automation, pp. 238-242.
Robin, Jacques, “Revision-Based Generation of Natural Language Summaries Providing Historical Background: Corpus-Based Analysis, Design Implementation and Evaluation,” 1994, Ph.D. Thesis, Columbia University, New York.
Rogati et al., “Resource Selection for Domain-Specific Cross-Lingual IR,” ACM 2004, pp. 154-161.
Zhang, R. et al., “The NiCT-ATR Statistical Machine Translation System for the IWSLT 2006 Evaluation,” submitted to IWSLT, 2006.
Russell, S. and Norvig, P., “Artificial Intelligence: A Modern Approach,” 1995, Prentice-Hall, Inc., New Jersey [Front Matter].
Sang, E. and Buchholz, S., “Introduction to the CoNLL-2000 Shared Task: Chunking,” 2000, Proc. of CoNLL-2000 and LLL-2000, Lisbon, Portugal, pp. 127-132.
Schmid, H., and Schulte im Walde, S., “Robust German Noun Chunking With a Probabilistic Context-Free Grammar,” 2000, Proc. of the 18th Conference on Computational Linguistics, vol. 2, pp. 726-732.
Schutze, Hinrich, “Automatic Word Sense Discrimination,” 1998, Computational Linguistics, Special Issue on Word Sense Disambiguation, vol. 24, Issue 1, pp. 97-123.
Selman et al., “A New Method for Solving Hard Satisfiability Problems,” 1992, Proc. of the 10th National Conference on Artificial Intelligence, San Jose, CA, pp. 440-446.
Kumar, Shankar, “Minimum Bayes-Risk Techniques in Automatic Speech Recognition and Statistical Machine Translation: A dissertation submitted to the Johns Hopkins University in conformity with the requirements for the degree of Doctor of Philosophy,” Baltimore, MD, Oct. 2004.
Related Publications (1)
Number: 20150106076 A1; Date: Apr. 2015; Country: US