In-context exact (ICE) matching

Information

  • Patent Grant
  • 10248650
  • Patent Number
    10,248,650
  • Date Filed
    Monday, May 9, 2016
  • Date Issued
    Tuesday, April 2, 2019
  • Field of Search
    • US
    • 704/9
    • 704/260
    • 704/251
    • 704/2
    • 704/235
    • 704/270
    • 704/277
    • 704/4
    • 704/5
    • 341/107
    • 715/708
    • 715/532
    • 715/268
    • 707/3
    • 707/797
    • 707/5
    • CPC
    • G06F17/274
    • G06F17/2785
    • G06F17/271
    • G06F17/3071
    • G06F17/2755
    • G06F17/30
    • G06F17/5068
    • G06F17/30949
    • G10L15/18
  • International Classifications
    • G06F17/28
    • G06F16/903
  • Disclaimer
    This patent is subject to a terminal disclaimer.
Abstract
Methods, systems and program product are disclosed for determining matching level of a text lookup segment with a plurality of source texts in a translation memory in terms of context. The invention determines exact matches for the lookup segment in the plurality of source texts, and determines, in the case that at least one exact match is determined, that a respective exact match is an in-context exact match for the lookup segment in the case that a context of the lookup segment matches that of the respective exact match. Degree of context matching required can be predetermined, and results prioritized. The invention also includes methods, systems and program products for storing a translation pair of source text and target text in a translation memory including context, and the translation memory so formed. The invention ensures that content is translated the same as previously translated content and reduces translator intervention.
Description
BACKGROUND OF THE INVENTION

Technical Field


This invention relates generally to processing content, and more particularly, to ensuring an exact translation match to source content including context to simplify and otherwise facilitate translation and other processing functions associated with the content.


Related Art


As information becomes more accessible on a global basis, especially given the advent and rapid utilization of the Internet and the World-Wide-Web, the role of translation has shifted away from simple transcription of source text into a target language. Translators today must ensure the timely and accurate deployment of the translated content to designated sites and customers. As such, the increased need for content translation has prompted numerous companies to develop tools that automate and aid in part of the translation process. Given that translators seek to translate content as quickly as possible, translation can be made more efficient with greater flexibility in software functionality and the ability to save previous translations for future use. Therefore, tools have been created to save translations, including blocks and/or segments of translations, in computer memory (“translation memory” or “TM”).


Translation memories, also known as translation databases, are collections of entries where a source text is associated with its corresponding translation in one or more target languages. Translation memory includes a database that stores source and target language pairs of text segments that can be retrieved for use with present texts and texts to be translated in the future. Typically, TMs are used in translation tools: when the translator “opens” a segment, the application searches the database for equivalent source text. The result is a list of matches usually ranked with a score expressing the percentage of similarity between the source text in the document and in the TM. The translator or a different TM system provides the target text segments that are paired with the lookup segments so that the end product is a quality translation.


There are many computer-assisted translation (“CAT”) tools available to assist the translator, such as bilingual and multilingual dictionaries, grammar and spell checkers and terminology software, but TM goes one step further by making use of these other CAT tools while at the same time matching up the original source document stored in its database with the updated or revised document through exact and fuzzy matching. An exact match (100% match) is a match where there is no difference (or no difference that cannot be handled automatically by the tool) between the source text in the document and the source text in the TM. A fuzzy match (less than 100% match) is a match where the source text in the document is very similar, but not exactly the same, as the source text in the TM. Duplicated exact matches are also often treated as fuzzy matches. A TM system is used as a translator's aid, storing a human translator's text in a database for future use. For instance, TM can be utilized when a translator translates the original text, using translation memory to store the paired source and target segments. The translator could then reuse the stored texts to translate the revised or updated version of the text. Only the segments of the new text that do not match the old one would have to be translated. The alternative would be to use a manual translation system or a different CAT system to translate the original text. The TM system could then be used by a translator to translate the revision or update by aligning the texts produced by a translator or other CAT system and storing them in the TM database for present and future work. The translator could then proceed to translate only the segments of the new text, using TM as described above.


There are many advantages in using TMs: the translation can go much faster, unnecessary re-typing of existing translations can be avoided, and/or a translator can change only certain parts of the text. TMs also allow better control over the quality of the translation. In the related art, TM was employed to speed the translation step in large batch projects. For example, a software company may release version 1 of its software product and need to translate the accompanying documentation. The documentation is broken into sentences and translated, with all sentence pairs captured in TM. Two years later the company releases version 2 of its software. The documentation has changed significantly, but there is also a significant portion similar to the original documentation. This time, as translators translate the documentation, their work is reduced through leveraging exact and fuzzy matches from the TM. As this example illustrates, TM is typically used as an aid in a pipeline process. In the related art, there are also some limitations with the utilization of TM.


Automatically leveraging translation using exact matches (without validating them) can generate incorrect translations, since there is no verification of the context in which the new segment is used compared to where the original one was used: this is the difference between true reuse and recycling. In the related art, TM systems are recycling systems. With Web content, and now with many types of content, it is common for a document to be translated, then have minor changes made to it, and then need to be translated again. For example, a web document listing the advantages of a product might be translated, but then a new advantage might be added and the document would therefore need to be translated again. In the related art, TM would reduce the effort of translating the document a second time. Exact matches for most sentences would exist where the source text was identical to one or more entries in the TM. The translator then makes sure that the right exact match is chosen for each segment by evaluating the appropriateness of a match against contextual information. However, the related art does not provide for a determination of content context. In addition, within the related art, there is no automated process for accurately choosing the best exact match for a given segment or validating whether a given exact match is an appropriate match for the context to which it is being applied. As such, a translator is required to validate matches. Because, under the related art, a segment may be translated differently under different circumstances or contexts, the translator needs to validate and possibly perform an action for every sentence even when just a few words may have changed, which is grossly inefficient.


In view of the foregoing, there is a need in the art for an automated process which accurately validates whether a given exact match is an appropriate match for the context to which it is being applied.


SUMMARY OF THE INVENTION

The invention includes methods, systems and program product for determining a matching level of a text lookup segment with a plurality of source texts in a translation memory in terms of context. In particular, the invention determines any exact matches for the lookup segment in the plurality of source texts, and determines, in the case that at least one exact match is determined, that a respective exact match is an in-context exact (ICE) match for the lookup segment in the case that a context of the lookup segment matches that of the respective exact match. The degree of context matching required can be predetermined, and results prioritized. The invention also includes methods, systems and program products for storing a translation pair of source text and target text in a translation memory including context, and the translation memory so formed. The invention ensures that content is translated the same as previously translated content and reduces translator intervention.


A first aspect of the invention is directed to a method of determining a matching level of a plurality of source texts stored in a translation memory to a lookup segment to be translated, the method comprising the steps of: determining any exact matches for the lookup segment in the plurality of source texts; and determining, in the case that at least one exact match is determined, that a respective exact match is an in-context exact (ICE) match for the lookup segment in the case that a context of the lookup segment matches that of the respective exact match.


A second aspect of the invention includes a system for determining a matching level of a plurality of source texts stored in a translation memory to a lookup segment to be translated, the system comprising: means for determining any exact matches for the lookup segment in the plurality of source texts; and means for determining, in the case that at least one exact match is determined, that a respective exact match is an in-context exact (ICE) match for the lookup segment in the case that a context of the lookup segment matches that of the respective exact match.


A third aspect of the invention relates to a program product stored on a computer readable medium for determining a matching level of a plurality of source texts stored in a translation memory to a lookup segment to be translated, the computer readable medium comprising program code for performing the following steps: determining any exact matches for the lookup segment in the plurality of source texts; and determining, in the case that at least one exact match is determined, that a respective exact match is an in-context exact (ICE) match for the lookup segment in the case that a context of the lookup segment matches that of the respective exact match.


A fourth aspect of the invention is directed to a method of storing a translation pair of source text and target text in a translation memory, the method comprising the steps of: assigning a context to the translation pair; and storing the context with the translation pair.


A fifth aspect of the invention is directed to a system for storing a translation pair of source text and target text in a translation memory, the system comprising: means for assigning a context to the translation pair; and means for storing the context with the translation pair.


A sixth aspect of the invention relates to a program product stored on a computer readable medium for storing a translated text segment for storage in a translation memory, the computer readable medium comprising program code for performing the following steps: assigning a context to the translated text segment; and storing the context with the translated text segment.


A seventh aspect of the invention includes a translation memory comprising: a plurality of source texts for comparison to a lookup segment; and a context identifier for each source text.


An eighth aspect of the invention relates to a client-side system for interacting with a translation system including a translation memory, the system comprising: means for assigning a segment identifier to a segment to be translated by the translation system, the segment identifier indicating a usage context of the segment; and means for communicating the segment identifier assignment for storage as part of the translation memory.


A ninth aspect of the invention includes a program product stored on a computer readable medium for interacting with a translation system including a translation memory having a plurality of source texts, the computer readable medium comprising program code for performing the following steps: assigning a segment identifier to a segment to be translated by the translation system, the segment identifier indicating a usage context of the segment; and communicating the segment identifier assignment for storage as part of the translation memory.


The foregoing and other features of the invention will be apparent from the following more particular description of embodiments of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of this invention will be described in detail, with reference to the following figures, wherein like designations denote like elements, and wherein:



FIG. 1 shows a block diagram of a computer system using an ICE match translation system according to the invention.



FIGS. 2A-2B show a flow diagram of one embodiment of an operational methodology of the system of FIG. 1.



FIG. 3 shows a couple of entries in an illustrative translation memory.



FIG. 4 shows an illustrative source asset including the entries of FIG. 3.



FIG. 5 shows a flow diagram of one embodiment for translation memory generation according to the invention.



FIG. 6 shows a flow diagram of an alternative embodiment for translation memory generation according to the invention.





DETAILED DESCRIPTION

The detailed description includes the following headings for convenience purposes only: I. Definitions, II. General Overview, III. System Overview, IV. Operational Methodology, and V. Conclusion.


I. DEFINITIONS

“Asset” means a content source defining a bound collection of related content or grouping of text segments, e.g., by context, usage, size, etc. In general, an asset is associable to a document, such as a hypertext markup language (HTML) file, a Microsoft® Word® document, or a simple text file. However, some assets do not correspond to file system files. The asset may in fact be defined from the columns of a database table or the structures within an extensible markup language (XML) repository. Regardless of how they are represented physically, they all share the common purpose—defining a bound collection of related content that can be accessed, manipulated, and ultimately, translated. An asset may contain content, formatting information, and internal structural data that depends on the nature of the asset.


“Source asset” refers to the asset from which a lookup segment is drawn.


“Segment” includes a translatable chunk of content, e.g., a phrase, sentence, paragraph, etc. It represents the smallest unit of translation work. In practice, a segment can represent a paragraph, a sentence or even a sentence fragment. Segments typically are not single words, though single word segments can be used.


“Source text” refers to the text within the translation memory that corresponds to the original (source) language, which is the language being translated. The source text is compared to the lookup segment from the asset during the match lookup process in order to find a match.


“Target text” includes the translation of the source text for a particular locale, i.e., it is one half of a translation memory (TM) entry.


“Translation memory” (abbreviated TM) includes a repository including TM entries. A TM can include TM entries for any number of locales. For example, it can contain entries for English-to-French, Greek-to-Russian, Albanian-to-Turkish, etc.


“TM entry” includes a translation pair stored in the translation memory that maps source text to target text. It is specific for a given translation pair, which includes a source text and target text locale pair, and is usually associated with the asset whose translation produced this translation pair. In effect, a TM entry represents a previous translation, which can be reused later. In addition, each TM entry according to the invention includes a context portion that identifies the context of the related source text and target text pair.
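
To make the definitions above concrete, here is a minimal sketch of how a TM entry carrying a context portion might be represented in code. The field names, types, and the use of a Python dataclass are assumptions made purely for illustration; they are not the patent's own data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TMEntry:
    """One translation-memory entry: a source/target pair plus its context portion."""
    source_text: str          # text in the source locale, compared against lookup segments
    target_text: str          # previously produced translation in the target locale
    source_locale: str        # e.g. "en-US"
    target_locale: str        # e.g. "fr-FR"
    preceding_uc_hash: int    # usage-context hash of the segment preceding the source text
    post_uc_hash: int         # usage-context hash of the segment following the source text
    asset_code: Optional[str] = None  # asset-context identifier; may be omitted
```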


“Exact match” means a TM entry whose source text is completely identical to the lookup text from the asset at the moment it comes out of a translation memory. As used herein, exact matches also include 100% matches, which are similar to exact matches, but do not necessarily result from exact matches because of differences that exist in the translation memory entry. For example, a match can be scored as 100% without having been an exact match for one of the following reasons: 1) unscored whitespace differences—using a different type of space character from that of the TM entry will prevent it from being selected as an exact match, 2) configured penalties through which the invention effectively ignores certain differences between the source and lookup text, or 3) segment repair through which repair heuristics can be applied to fix differences between the TM match and the original lookup text.


“Context” means discourse that surrounds a text segment and helps to determine its interpretation. Context, as used herein, may include different levels. For example, context may include: a usage context level and an asset context level. Each different context may require different verbiage depending on the intended audience of the content.


“Usage context” refers to discourse that surrounds a segment and influences how the invention derives the appropriate translation for content—by considering the text surrounding the text to be translated. Typically, the usage context is defined in conjunction with surrounding content, which provides insight into the meaning of the segment to be translated. Usage context can also have levels in terms of text that precedes a particular segment and text that follows (post) a particular segment.


“Asset context” refers to discourse relative to the asset environment in which the segment exists, i.e., background and perspective framework of the overall content in which a text segment appears.


“In-context exact (ICE) match” for a lookup segment means that the source text is an exact match and that the lookup segment shares at least one context level with the TM entry providing the match.


“Lookup text” refers to the segment of text from the source asset for which a TM match is to be sought.


“Segment identifier” (SID) includes a label that defines the usage context in which a given segment is to be translated, and is associated with content at creation of the content. A SID provides a context identification for the given segment. A SID may include marker tags that define segment boundaries. As described below, a SID is an alternative to basing the usage context on surrounding segments.


II. GENERAL OVERVIEW

The present invention provides methods, systems and program products for, inter alia, determining a matching level of a plurality of source texts stored in a translation memory to a lookup segment to be translated. The invention generates high quality matches for source content from previously stored translations in a translation memory (TM). In the related art, the best matches available were exact matches, i.e., matches where the source text was identical to one or more entries in the TM. However, there was no automated process for accurately choosing the best exact match for a given segment or validating whether a given exact match was an appropriate match for the context to which it was being applied. In particular, a segment may be translated differently under different circumstances or contexts. The appropriateness of an exact match requires evaluation of contextual information, which is based on the content usage (as defined by the sentences or segments surrounding it) as well as the asset context (which may require different verbiage depending on the intended audience).


The current invention does not replace the exact match process. Instead, it provides a new level of matching above exact matches, thus providing a true-reuse TM system that eliminates the need for manual validation and helps create a TM that is as valuable as possible. In particular, one embodiment of the invention determines a matching level of a plurality of source texts stored in a translation memory to a lookup segment to be translated by determining any exact matches for the lookup segment in the plurality of source texts; and determining, in the case that at least one exact match is determined, that a respective exact match is an in-context exact (ICE) match for the lookup segment in the case that a context of the lookup segment matches that of the respective exact match. Accordingly, the ICE match determination establishes the appropriateness of an exact match based on the context of the lookup segment. Those source texts that are exact matches and have a matching context are referred to as “in-context exact (ICE) matches.” An ICE match is considered superior to an exact match in that it guarantees that the translation applied is appropriate for the context in which it is used. An ICE match is a translation match that guarantees a high level of appropriateness by virtue of the match having been previously translated in the same context as the segment currently being translated.


The invention leverages context information in order to: 1) determine the appropriateness of an exact match as a high quality (non-review requiring) match for new content, 2) select the best context match for a given lookup segment, and 3) guarantee that previously assigned translations for formerly translated content are always restorable. For new content, the invention leverages context information to find a high quality match from the TM based on segment usage context. The invention also ensures that the same content will always be translated the same way given its context—both on the asset and content level.


In terms of translation of a given lookup segment, suppose a source document is translated and all segments are stored in TM. If the exact same source document is then put through a second time, the document, including all its content, will be fully matched and the resulting translated document will be exactly the same as the first translated document. This behavior is straightforward and expected. However, this can only be guaranteed as a result of using context information. To further understand the significance of this guarantee, consider a source document that has the same exact sentence repeated twice in two different places. Because the second usage may not have the same implied intentions as the first, it is translated differently. Now again consider an identical document being leveraged against the TM. Should the duplicated sentences have the same translation, or should they differ as they did in the original document? Without taking the context of their usage into account, these sentences most likely would be translated the same by the TM. However, according to the invention, the context is considered, which guarantees that the two sentences will continue to have different translations as long as their usage context dictates such.


In terms of restoring previously translated text segments, the invention also ensures that the translations of new documents will not impact the ability to restore the translation of a formerly translated document, and provides a method of ensuring that translations are perfectly repeatable by leveraging a previously translated document against TM so that it will always result in the same translations as stored by the translator. Consider again two identical documents. The first document is translated, and the results are stored in the TM. When the second document is leveraged against the same TM, the document is presented as being fully translated with ICE matches. The usage context is identical to that of the first document. Now consider that the second document is targeted for a different audience. The source language text is not changed in this example since it is deemed suitable for both audiences. However, the translation into the target language requires some alterations. The translator updates the translations for this document, and stores the results into the TM. Time passes, and copies of both translated documents are again required. For space reasons, the original translated documents were deleted. Neither of the source documents has been altered, and thus, they still contain identical source text. The invention facilitates the regeneration of the original translated documents, each being identical to the originally translated documents (which themselves were not identical). Even though the content of the source documents is identical, the invention is able to leverage asset context information to ensure that the document specific translations are recoverable.


The invention may be exploited as part of a content management system such as Idiom's WorldServer™, or as a separate system. WorldServer™, for example, is a Web-based application that enables enterprises to manage their content in the context of the whole globalization process while leveraging established Web architecture, content management and workflow systems. Content management systems simplify the multiple complexities arising from deploying, for example, a global Web strategy, enabling a company's Web-site to efficiently support multiple countries and also different languages, locations and cultures. They provide structures and processes for collaboration among site managers, Web developers, content owners, translators and editors, resulting in a streamlined process, a synchronized global Web strategy and a coordinated global Web team. A translator uses a content management system to see what content he or she has to translate. In WorldServer™, the translator can either export the content needing translation to a third party editing tool, or use a translation workbench to perform the actual translation. A translator can be an individual contributor, including users that are adapting but not translating content and/or reviewers who review content. Content management systems store translated phrases into TM for later recall.


III. SYSTEM OVERVIEW

With reference to the accompanying drawings, FIG. 1 is a block diagram of an in-context exact match translation system 100 in accordance with the invention. It should be recognized that while system 100 is shown as a separate system, it may be implemented as part of a larger content management or translation system such as Idiom's WorldServer™. In this regard, description of system 100 may include certain functionality of a translation system, but omit other functionality for clarity. In addition, it should be recognized that while system 100 is shown in a client-server (e.g., Web-based) environment, other arrangements are also possible.


System 100 is shown implemented on a computer 102 as computer program code. To this extent, computer 102 is shown including a memory 112, a processing unit 114, an input/output (I/O) interface 116, and a bus 118. Further, computer 102 is shown in communication with an external I/O device 120 and a storage system 122. In general, processing unit 114 executes computer program code, such as system 100, that is stored in memory 112 and/or storage system 122. While executing computer program code, processing unit 114 can read and/or write data to/from memory 112, storage system 122, and/or I/O device 120. Bus 118 provides a communication link between each of the components in computer 102, and I/O device 120 can comprise any device that enables a user to interact with computer 102 (e.g., keyboard, pointing device, display, etc.).


Alternatively, a user can interact with another computing device (not shown) in communication with computer 102. In this case, I/O interface 116 can comprise any device that enables computer 102 to communicate with one or more other computing devices over a network (e.g., a network system, network adapter, I/O port, modem, etc.). The network can comprise any combination of various types of communications links. For example, the network can comprise addressable connections that may utilize any combination of wire line and/or wireless transmission methods. In this instance, the computing devices (e.g., computer 102) may utilize conventional network connectivity, such as Token Ring, Ethernet, WiFi or other conventional communications standards. Further, the network can comprise one or more of any type of network, including the Internet, a wide area network (WAN), a local area network (LAN), a virtual private network (VPN), etc. Where communications occur via the Internet, connectivity could be provided by conventional TCP/IP sockets-based protocol, and a computing device could utilize an Internet service provider to establish connectivity to the Internet.


Computer 102 is only representative of various possible combinations of hardware and software. For example, processing unit 114 may comprise a single processing unit, or be distributed across one or more processing units in one or more locations, e.g., on a client and server. Similarly, memory 112 and/or storage system 122 may reside at one or more physical locations. Memory 112 and/or storage system 122 can comprise any combination of various types of computer-readable media and/or transmission media including magnetic media, optical media, random access memory (RAM), read only memory (ROM), a data object, etc. I/O interface 116 can comprise any system for exchanging information with one or more I/O devices. Further, it is understood that one or more additional components (e.g., system software, math co-processing unit, etc.) not shown in FIG. 1 can be included in computer 102. To this extent, computer 102 can comprise any type of computing device such as a network server, a desktop computer, a laptop, a handheld device, a mobile phone, a pager, a personal data assistant, etc. However, if computer 102 comprises a handheld device or the like, it is understood that one or more I/O devices (e.g., a display) and/or storage system 122 could be contained within computer 102, not externally as shown.


As discussed further below, system 100 is shown including an exact match determinator 130, an in-context exact (ICE) match determinator 132, a hash algorithm 133, a fuzzy match determinator 134, a translation memory TM generator 136, a segment retriever 138 and other system components (Sys. Comp.) 140. ICE match determinator 132 includes a context identifier 142, a match evaluator 144 and an ICE match prioritizer 146. Other system components 140 may include other functionality necessary for operation of the invention, but not expressly described herein. For example, other system components 140 may include an auto-translation system and/or content management system functionality such as that provided by Idiom's WorldServer™.


Although not shown for clarity in FIG. 1, it should be understood that client-side system 150 may include similar structure to computer 102, and include program code for providing functionality as described below.



FIG. 1 also shows a translation memory 128 (hereinafter “TM 128”) for use by system 100. As shown in FIG. 3, TM 128 includes a plurality of TM entries 148 including stored target texts 162, 164 that have been previously translated into a particular language for particular source texts 152A, 152B, respectively (only two shown for brevity). For example, stored source text “global enterprises” 152A has been translated into a number of German translations 162, i.e., target texts, and stored source text “team of visionaries” 152B has been translated into a number of French translations 164, i.e., target texts. Each source text 152A, 152B is for comparison to a lookup segment. Each TM entry 148 also includes context identification 166 (only two labeled for clarity). In one embodiment, context identification 166 includes indications of different context levels such as a usage context portion 168 and an asset context portion (AC) 170. Asset context portion 170 includes an asset code, e.g., “33333,” that identifies a particular asset to system 100. Other context levels may also be provided. In some cases, asset context portion 170 may be omitted.


In one preferred embodiment, each usage context portion 168 includes a preceding usage context (UC) hash code 172 and a post usage context (UC) hash code 174. Preceding UC hash code 172 is generated using hash algorithm 133 based on the text stream generated by a preceding segment that the respective source text appeared next to during translation. Similarly, post UC hash code 174 is generated using hash algorithm 133 based on the text stream generated by a following (post) segment that the respective source text appeared next to during translation. Hash algorithm 133 includes any now known or later developed hash algorithm that can convert a text stream into a unique numerical identifier. (It should be recognized that the hash codes shown are simplified for clarity.) Accordingly, each UC hash code indicates a unique usage context level for the respective source text. In an alternative embodiment, only one usage context hash code may be employed for a particular source text 152 and its preceding and following segments.
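
The patent leaves the hash algorithm open (“any now known or later developed hash algorithm”), so the following sketch illustrates just one plausible choice: a truncated SHA-1 digest over a normalized text stream. The function name, the normalization step, and the truncation width are assumptions for illustration only.

```python
import hashlib

def usage_context_hash(segment_text: str) -> int:
    """Convert a neighboring segment's text stream into a numeric usage-context code.

    Whitespace is normalized first so insignificant spacing differences do not
    change the code (an assumption, not something the patent mandates).
    """
    normalized = " ".join(segment_text.split())
    digest = hashlib.sha1(normalized.encode("utf-8")).hexdigest()
    return int(digest[:12], 16)  # truncated for readability; any stable hash would do

# Illustrative use with placeholder neighbor text:
preceding_uc_hash = usage_context_hash("...text of the immediately preceding segment...")
post_uc_hash = usage_context_hash("...text of the immediately following segment...")
```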


Where a lookup segment 154 is assigned a context at creation, context identifications 166 may be generated using a user-specified SID, as described above, rather than hash algorithm 133.


It should be recognized that the particular codes used herein are for illustration purposes only.


IV. OPERATIONAL METHODOLOGY

Turning to FIGS. 2A-2B, a flow diagram of one embodiment of operational methodology of the invention will now be described. Discussion of FIGS. 2A-2B will be made in conjunction with FIGS. 1, 3 and 4.


A. Preliminary Steps


Starting with FIG. 2A, as a preliminary step PS, in one embodiment, a lookup segment 154 is loaded by way of client computer system 150 directly linked or linked by a network of some type, e.g. a LAN, a WAN, or the Internet, to ICE match system 100. For example, lookup segment 154 may be loaded via a translation workflow application server (not shown), e.g., Idiom's WorldServer™, which ICE match system 100 may be a part of. Lookup segment 154 may be loaded as part of a larger asset. In this case, system 100 may conduct segmentation of the larger asset in any now known or later developed fashion to create lookup segments 154, e.g., as provided by Idiom's WorldServer™. Segmentation is the process through which an asset's content is parsed and exposed as translatable segments. The size of the segment depends on segmentation rules, which may be user defined.


B. General Methodology


The steps S1-S12 represent analysis for each lookup segment 154.


In a first step S1, any exact matches for lookup segment 154 in the plurality of source texts 152 in TM 128 are determined by exact match determinator 130. Exact match determinator 130 may function as in most conventional TM systems, which employ a string comparison algorithm to gauge the appropriateness of a translation stored within TM 128, where scores are awarded based on how closely the two strings match. A score of 100% typically indicates that an exact match has been found. For example, as shown in FIG. 3, lookup segment “global enterprises,” when translated into German, would result in three exact matches: 1) globale Wesen, 2) globale Unternehmen, and 3) globale Geschafte. Lookup segment “team of visionaries,” when translated into French, would result in four exact matches: 1) equipe de visionnaires, 2) groupe de visionnaires, 3) bande des visionnaires, and 4) groupe de futurologues. More than one exact match may exist within TM 128 for a given lookup segment 154 because multiple translations may exist for any given segment; the meaning of a statement in a given language is derived not only from the words, but also from the context in which they are used. Accordingly, each previous translation can result in many target text translations 162, 164 for a particular source text 152, and hence, for an identical lookup segment 154.
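
As a rough illustration of step S1, the sketch below indexes TM entries by their source text so that every exact match for a lookup segment can be collected in a single lookup. It assumes entry objects shaped like the TMEntry sketch in the Definitions section; a production TM would use a database index rather than an in-memory dict.

```python
from collections import defaultdict

def build_exact_index(tm_entries):
    """Index TM entries by source text. Each entry is assumed to expose
    .source_text and .target_locale attributes (see the TMEntry sketch above)."""
    index = defaultdict(list)
    for entry in tm_entries:
        index[entry.source_text].append(entry)
    return index

def find_exact_matches(index, lookup_segment, target_locale):
    """Step S1 (sketch): return every stored entry whose source text is identical
    to the lookup segment, for the locale being translated into."""
    return [e for e in index.get(lookup_segment, []) if e.target_locale == target_locale]
```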


In step S2, a determination is made as to whether at least one exact match is determined, i.e., found in TM 128. If NO, at step S2, processing proceeds to step S3 at which fuzzy match determinator 134 determines whether there are any fuzzy matches for lookup segment 154 in any now known or later developed fashion. Any fuzzy matches for lookup segment 154 are reported at step S4. “Reporting” as used herein, can mean displaying results to a user, transferring and/or storing results. Although not shown, if fuzzy matches are not found, then conventional auto-translation may be instigated.


If YES at step S2, at step S5, ICE match determinator 132 determines whether a respective exact match is an in-context exact (ICE) match for lookup segment 154. As stated above, an “ICE match” means source text 152 must be an exact match and that it also shares a common context with lookup segment 154. In other words, an exact match that has a context identification 166 that matches that of lookup segment 154 is an ICE match. In one embodiment, the context for purposes of this determination includes only the usage context. However, other context matching levels may be employed, as will be described below.


Step S5 includes two sub-steps. First, in sub-step S5A, context identifier 142 identifies a context of lookup segment 154. In one embodiment, context identifier 142 identifies a context based on surrounding segments of lookup segment 154 in its source asset. In this case, hash algorithm 133 is implemented to determine a usage context for lookup segment 154 by calculating a lookup segment (LS) preceding UC hash code and a lookup segment (LS) post UC hash code. Again, hash algorithm 133 includes any now known or later developed hash algorithm that can convert a text stream into a unique numerical identifier. Referring to FIG. 4, an illustrative source asset 180 including lookup segment 154A in the form of “team of visionaries” is shown. An LS preceding UC hash code is formed based on the immediately preceding segment 190. For example, as shown in FIG. 4, an LS preceding UC hash code would be calculated for “Idiom was founded in January 1998 by a team of visionaries.” Similarly, an LS post UC hash code would be calculated for the immediately following segment 192, i.e., “team of visionaries who recognized the need for an enterprise-class software product that would meet the globalization.” An asset context for source asset 180 can be identified by context identifier 142 based on an asset hash, which is based on the system's identification of a particular asset, e.g., asset name, location within system, etc.
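
Sub-step S5A can be pictured as follows: given the segmented source asset and the lookup segment's position within it, compute the LS preceding and post UC hash codes from the neighboring segments, and an asset hash from the asset's identity. The list-based segmentation, the zero sentinel for missing neighbors, and the reuse of the usage_context_hash helper sketched earlier are all assumptions.

```python
def lookup_segment_context(segments, index, asset_name):
    """Return (preceding UC hash, post UC hash, asset hash) for segments[index].

    A missing neighbor at a document boundary is encoded as 0 here; the patent
    does not specify how boundaries are handled, so this is an assumption.
    """
    preceding = segments[index - 1] if index > 0 else ""
    following = segments[index + 1] if index + 1 < len(segments) else ""
    pre_code = usage_context_hash(preceding) if preceding else 0
    post_code = usage_context_hash(following) if following else 0
    asset_code = usage_context_hash(asset_name)  # asset context from the asset's identity
    return pre_code, post_code, asset_code
```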


In an alternative embodiment, context identifier 142 identifies a context of lookup segment 154 according to a segment identifier (SID) associated with lookup segment 154, which as stated above, includes a label that defines the usage context in which a segment is to be translated. A SID may include marker tags that define segment boundaries. Preferably, a SID is associated with a source text 152 and/or lookup segment 154 during creation of the segment, i.e., by a content creator. However, a SID may be associated with a source text 152 and/or lookup segment 154, or overwritten at a later time, e.g., by a previous content translator.


In sub-step S5B, ICE match evaluator 144 makes an evaluation for each exact match for a lookup segment 154 by using context identification 166 stored with each candidate to determine whether it has been used in the same context as lookup segment 154, i.e., whether each exact match is an ICE match. The degree of context matching required in order for an exact match to be considered an ICE match can be predetermined. In one embodiment, ICE match evaluator 144 indicates that a respective exact match is an ICE match for lookup segment 154 only in the case that each context level of lookup segment 154 matches that of the respective exact match. For example, where context includes a usage context level and an asset context level, the determining step may indicate that a respective exact match is an ICE match for the lookup segment only in the case that both the usage context level and the asset context level of the lookup segment matches that of the respective exact match.
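
Sub-step S5B reduces to a comparison of context levels. The sketch below assumes the attribute names from the earlier TMEntry sketch; the require_all_levels flag stands in for the predetermined degree of context matching, covering both the strict embodiment (every level must match) and the alternative embodiment discussed below (any single level suffices).

```python
def is_ice_match(exact_match, lookup_preceding_uc, lookup_post_uc, lookup_asset_code,
                 require_all_levels=True):
    """Decide whether an exact match is an in-context exact (ICE) match (sketch)."""
    level_matches = [
        exact_match.preceding_uc_hash == lookup_preceding_uc,
        exact_match.post_uc_hash == lookup_post_uc,
        exact_match.asset_code == lookup_asset_code,
    ]
    # Strict embodiment: every context level must match.
    # Alternative embodiment: sharing any one context level is enough.
    return all(level_matches) if require_all_levels else any(level_matches)
```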


Example

Referring to FIG. 3, assume an illustrative lookup segment 154 includes the text “team of visionaries,” and that it is to be translated into French. Also, assume the lookup segment “team of visionaries” has an LS preceding UC hash code of 333 and an LS post UC hash code of 4444, and an asset context of 666666. (It should be understood that all hash codes in FIG. 3 are simplified for purposes of clarity). As shown in FIG. 3, exact match determinator 130 would determine four exact matches for lookup segment “team of visionaries,” when translated into French: 1) equipe de visionnaires, 2) groupe de visionnaires, 3) bande des visionnaires, and 4) groupe de futurologues. ICE match evaluator 144 reviews the exact matches, and as shown in FIG. 3, would determine that when lookup segment “team of visionaries” is translated into French, the source text “groupe de visionnaires” has the same context because it has the same asset context 170 and usage context (hash codes) 172, 174. Accordingly, “groupe de visionnaires” would be an ICE match. The other source texts would not qualify as ICE matches because at least one of their context codes does not match those of lookup segment “team of visionaries.”


In an alternative embodiment, ICE match evaluator 144 may indicate that a respective exact match is an ICE match for lookup segment 154 even if only some context levels of the lookup segment match those of the respective exact match.


Example

Referring to FIG. 3, assume an illustrative lookup segment 154 includes the text “global enterprises,” and that it is to be translated into German. Also, assume the lookup segment “global enterprises” has an LS preceding UC hash code of 1234 and an LS post UC hash code of 4321, and an asset context of 7890. As shown in FIG. 3, exact match determinator 130 would determine three exact matches for lookup segment “global enterprises,” when translated into German: 1) globale Wesen, 2) globale Unternehmen, and 3) globale Geschafte. Assuming that only one usage context level is required for an exact match to be an ICE match, ICE match evaluator 144 reviews the exact matches, and as shown in FIG. 3, would determine that when lookup segment “global enterprises” is translated into German, the source texts “globale Wesen” and “globale Unternehmen” have a matching context because they each have one UC hash code that matches one of the LS UC hash codes. That is, “globale Wesen” has the same preceding UC hash code as the lookup segment, and “globale Unternehmen” has the same post UC hash code as the lookup segment. The other source texts would not qualify as ICE matches because they do not share at least one context level with lookup segment “global enterprises.” Details of how system 100 prioritizes multiple ICE matches will be described below.


If no ICE matches are determined, i.e., NO at step S6, at step S7, any exact matches are reported. Subsequently, at step S8, exact matches and fuzzy matches, i.e., from step S3-S4, can be validated by a user in any now known or later developed fashion. In this case, exact matches and fuzzy matches are retrieved to their respective caches, and are made available to the translator by means of a client computer system 150 where the translator must validate each exact match in order to ensure that such match is the best match given the source asset 180 content and update each fuzzy match in order to match the source asset 180 content.


If ICE matches are determined, i.e., YES at step S6, then as shown in FIG. 2B, at step S9, ICE match prioritizer 146 determines whether more than one ICE match is found. If only one ICE match is determined, then at step S10, the single ICE match is reported. Once an ICE match is automatically reported, system 100 allows retrieval of the target text 162, 164 via segment retriever 138.


C. Multiple ICE Match Prioritization


Returning to FIG. 2B, steps S11-S12 represent optional steps for addressing the situation in which multiple ICE matches are determined in step S5, i.e., YES at step S9. In one embodiment (not shown), ICE match determinator 132 may simply allow a user to select an ICE match from a list of ICE matches. However, this is not preferred because it defeats one purpose of ICE matches, i.e., not having to validate an exact match. In the preferred embodiment shown in FIG. 2B, if more than one ICE match is determined, then ICE match prioritizer 146 prioritizes (ranks) each ICE match according to a degree of context matching at step S11. As described above, the “degree of context matching” can be predetermined. This step ranks the ICE matches by their degree of context matching and either presents the ICE matches to a user for selection or automatically selects the highest ranked ICE match, at step S12. It should be understood that various formulas for prioritizing multiple ICE matches are possible depending on the number of context levels. The following example illustrates one embodiment for prioritizing multiple ICE matches.


Example

Assume the context includes a usage context level and an asset context level, and the lookup segment “team of visionaries” is to be translated into French using TM 128 of FIG. 3 based on a source asset 180, as shown in FIG. 4. In this case, “team of visionaries” has four exact matches: 1) equipe de visionnaires, 2) groupe de visionnaires, 3) bande des visionnaires, and 4) groupe de futurologues, based on previously stored translations. Assume also that lookup segment “team of visionaries” has an LS previous UC hash code 333, an LS post UC hash code 4444 and an asset code 666666. Assume also that for an exact match to be indicated by ICE match determinator 132 as an ICE match, only one context level needs to match that of the lookup segment. In this case, each exact match is an ICE match. In particular, 1) “equipe de visionnaires” has matching previous UC hash code and asset code, 2) “groupe de visionnaires” has all matching context levels, 3) “bande des visionnaires” has a matching asset code, and 4) “groupe de futurologues” has a matching post UC hash code.


It should be recognized that, by definition, ICE matches are prioritized above unmatched lookup segments (i.e., those that require manual or machine translation), fuzzy matches, and exact matches that are not ICE matches. One prioritization rubric for ICE matches is shown below (and sketched in code after the rubric). In this rubric, the usage context (UC) level includes a preceding UC level and a post UC level, and the preferences are applied in the order listed, a later preference coming into play only when the earlier ones are non-conclusive:


1. Full Usage Context (UC) Matches are Preferred Over Partial Usage Context (UC) Matches:


Assuming that exact matches that have only one UC hash code matching a hash code of the lookup segment are considered ICE matches (referred to as “partial matches”), preference is given to those ICE matches that have both the same preceding and post UC hash codes as the lookup segment (referred to as “full UC matches”) over the partial matches. In other words, an ICE match having both preceding and post UC levels that match those of the lookup segment is preferred over an ICE match having only one of the preceding and post UC levels matching those of the lookup segment. In the example, ICE match 2) “groupe de visionnaires” would be preferred over all others because it has matching preceding (333) and post (4444) UC hash codes with the lookup segment.


2. ICE Matches from Same Asset as Lookup Segment are Preferred Over Those from Other Assets:


In the case that two or more ICE matches cannot be differentiated by the above-described full-over-partial matching preference, i.e., the first preference is non-conclusive, preference is given to the ICE match that is from the same asset as the lookup segment, based on the asset code. In other words, where the first preference is non-conclusive, an ICE match from the same asset as the lookup segment is preferred over an ICE match from a different asset. In the example, ICE matches 1) “equipe de visionnaires” and 4) “groupe de futurologues” are both partial ICE matches, but ICE match 1) “equipe de visionnaires” is from the same asset “666666” as the lookup segment, and would be preferred.


3. Where Two or More ICE Matches from the Same Asset are Determined for a Lookup Segment, the ICE Match with a Closer Position to the Position of Lookup Segment within the Asset is Preferred:


This prioritization addresses the situation in which a lookup segment 154 exists in numerous locations within a single source asset 180, and as a result two or more ICE matches exist for a particular asset. For example, FIG. 4 shows source asset 180 including two occurrences of lookup segment “team of visionaries” 154A. In this case, ICE match prioritizer 146 evaluates the position within the asset of the particular lookup segment and will prefer the ICE match that is closest in position within the asset to the lookup segment over the other ICE matches from the same asset. In other words, where the second preference is non-conclusive, an ICE match with a closest position to a position of the lookup segment within the asset is preferred over the other ICE matches. This evaluation of position can be repeated for any number of repetitions of a lookup segment within a particular asset.
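
The three preferences above can be folded into a single sort key, as in the sketch below. It assumes each ICE-match candidate carries the attributes from the earlier sketches plus a position within its asset; the position attribute and the distance measure are assumptions introduced only to illustrate the third preference.

```python
def prioritize_ice_matches(ice_matches, lookup_preceding_uc, lookup_post_uc,
                           lookup_asset_code, lookup_position):
    """Rank ICE matches: full UC match first, then same asset, then nearest position."""
    def sort_key(match):
        full_uc = (match.preceding_uc_hash == lookup_preceding_uc and
                   match.post_uc_hash == lookup_post_uc)
        same_asset = (match.asset_code == lookup_asset_code)
        distance = abs(getattr(match, "position", 0) - lookup_position)
        # Tuples sort left to right; 'not' turns True (preferred) into False so it sorts first.
        return (not full_uc, not same_asset, distance)
    return sorted(ice_matches, key=sort_key)
```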


Once the prioritization is complete, at step S12, ICE match prioritizer 146 allows a user to select the ICE match based on the rank in any now known or later developed fashion, e.g., via a graphical user interface of client computer system 150, or automatically selects the highest prioritized ICE match. Once an ICE match is selected, system 100 allows retrieval of at least one target text 162, 164 via segment retriever 138.


Because of the high-level match quality provided by an ICE match, source texts that are determined to be ICE matches do not need to be reviewed or validated by the translator. They can be automatically accepted, thus decreasing the translation cycle time and resulting in cheaper translation costs. In addition, system 100 addresses the situation in which a plurality of lookup segments 154 that are substantially identical in terms of content are present in a single source asset 180. In this case, system 100 is capable of determining an ICE match for each lookup segment 154 based on a matching level. Typically, at least one lookup segment has a different ICE match than at least one other lookup segment to assist in this determination. If not, multiple ICE matches can be reported to a user for selection, as described above. System 100 also facilitates the translation of sections of content, which are repeated across different assets with minimal effort, including without limitation retrieving matches even when segments of content have been split or merged and/or allowing content blocks to be translated differently within a single asset.


The above-described operation can continue to process further lookup segments of source asset 180 against TM 128, or provide output to a user once an entire asset is completed.


D. Generating the Translation Memory


The existence of context information for TM entries is required for system 100 operation. As such, implementation of the invention requires storage of context information with every new translation added to the TM. This allows the context information of lookup segments to be effectively compared to the context information of previously translated segments without requiring access to the previously translated documents.


Toward this end, in another embodiment, the invention provides a way through which the context information is stored along with each translation when translations are saved into TM 128, thus, not requiring a translator to keep any files around, such as the previously translated documents, for the invention to function. Turning to FIG. 5, the invention also includes a method of storing a translation pair of source text and target text in TM 128. In a first step S100, a context is assigned to the translation pair using TM generator 136. Context may be assigned, for example, by implementation of the above-described SIDs during creation of content or via operation of hash algorithm 133 during a translation pass. Next, in step S101, the context is stored with the translation pair in TM 128 by TM generator 136. As described above, the context may include a usage context level and an asset context level.
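
Steps S100 and S101 amount to computing the context at save time and persisting it beside the translation pair. The sketch below reuses the TMEntry and usage_context_hash helpers sketched earlier and treats the TM as a simple appendable store; these stand-ins are assumptions, not the structure of TM generator 136 or TM 128.

```python
def store_translation_pair(tm, source_text, target_text, source_locale, target_locale,
                           preceding_segment, post_segment, asset_code=None):
    """Step S100 (sketch): assign a context to the new translation pair;
    step S101: store that context together with the pair in the TM."""
    entry = TMEntry(
        source_text=source_text,
        target_text=target_text,
        source_locale=source_locale,
        target_locale=target_locale,
        preceding_uc_hash=usage_context_hash(preceding_segment),
        post_uc_hash=usage_context_hash(post_segment),
        asset_code=asset_code,
    )
    tm.append(entry)  # 'tm' stands in for TM 128; any persistent store would do
    return entry
```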


It should be recognized that the above-described TM generation may also be implemented on a client-side system 150 for when an asset (segment) is created. In this embodiment, the invention includes a client-side system 150 for interacting with a translation system (i.e., system 100 along with other content management system components 140) including TM 128. Turning to FIGS. 1 and 6, in this case, the client-side system 150 may operate by providing a SID assigner 200 for assigning (step S200) a segment identifier (SID) to a lookup segment 154 to be translated by TM 128, the SID indicating a usage context of the segment. SID assigner 200 may allow a user to associate predetermined SIDs or SIDs may be generated using, for example, a hash algorithm 133. In addition, system 150 may include a communicator 202 for communicating (step S201) the SID assignment for storage as part of TM 128, e.g., by TM generator 136 of system 100.


V. CONCLUSION

The above-described invention provides value for translators by giving them the ability to perfectly match source content with that of the TM, alleviating the need to validate the source content with the TM and creating a truly reusable TM system, which allows for a more efficient translation process.


It is understood that the order of the above-described steps is only illustrative. To this extent, one or more steps can be performed in parallel, in a different order, at a remote time, etc. Further, one or more of the steps may not be performed in various embodiments of the invention.


It is understood that the present invention can be realized in hardware, software, a propagated signal, or any combination thereof, and may be compartmentalized other than as shown. Any kind of computer/server system(s)—or other apparatus adapted for carrying out the methods described herein—is suitable. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when loaded and executed, carries out the respective methods described herein. Alternatively, a specific use computer, containing specialized hardware for carrying out one or more of the functional tasks of the invention (e.g., system 100), could be utilized. The present invention also can be embedded in a computer program product or a propagated signal, which comprises all the respective features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods. Computer program, propagated signal, software program, program, or software, in the present context mean any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form. Furthermore, it should be appreciated that the teachings of the present invention could be offered as a business method on a subscription or fee basis. For example, the system and/or computer could be created, maintained, supported and/or deployed by a service provider that offers the functions described herein for customers. That is, a service provider could offer the functionality described above.


The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. It is to be understood that the above-described embodiments are simply illustrative and not restrictive of the principles of the invention. Various modifications and changes may be made by those skilled in the art that embody the principles of the invention and fall within its spirit and scope, and all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.

Claims
  • 1. A method for determining an in-context exact (ICE) match from context matching levels of a plurality of source texts stored in a translation memory to a lookup segment to be translated, the method comprising: generating a preceding usage context hash code for a preceding segment, based on a text stream for the preceding segment next to the lookup segment; generating a post usage context hash code for a post segment, based on a text stream for the post segment next to the lookup segment; determining any exact matches for the lookup segment in the plurality of translation memory source texts; calculating for each exact match a context matching level based on: a match between the preceding usage context hash code for the lookup segment and a preceding usage context hash code generated for a segment of a translation memory source text, and a match between the post usage context hash code for the lookup segment and a post usage context hash code generated for the segment of the translation memory source text; and determining, for each exact match, if the segment of the translation memory source text providing the exact match is an ICE match for the lookup segment based on the calculated context matching level.
  • 2. The method of claim 1, wherein the plurality of translation memory source texts providing the exact match includes at least two calculated context matching levels for the generated hash codes.
  • 3. The method of claim 2, wherein the ICE match determining step indicates that a respective exact match is an ICE match for the lookup segment only in the case that each of the generated preceding and post usage context hash codes of the lookup segment matches that of the generated preceding and post usage context hash codes for the respective exact match of the translation memory.
  • 4. The method of claim 3, wherein, in the case that greater than one ICE match is determined, the ICE match determining step includes prioritizing each ICE match according to a degree of context matching.
  • 5. The method of claim 1, wherein the ICE match determining step indicates that a respective exact match is an ICE match for the lookup segment only in the case that both a preceding usage context hash code level and a post usage context hash code level of the lookup segment matches that of the respective exact match.
  • 6. The method of claim 1, further comprising, in the case that greater than one ICE match is determined, prioritizing each ICE match according to a degree of context hash code matching.
  • 7. The method of claim 6, wherein the prioritizing step includes: first preferring an ICE match having both previous and post UC (usage context) levels that match those of the lookup segment over an ICE match having only one of the previous and post UC levels matching those of the lookup segment; where the first preferring step is non-conclusive, second preferring an ICE match from a same asset as the lookup segment over an ICE match from a different asset; and where the second preferring step is non-conclusive, third preferring an ICE match with a closest position to a position of the lookup segment within the asset.
  • 8. The method of claim 7, further comprising allowing a user to select the ICE match based on the prioritization.
  • 9. The method of claim 1, wherein the lookup segment includes a plurality of lookup segments that are substantially identical in terms of content, and wherein the ICE match determining step includes determining an ICE match for each lookup segment.
  • 10. The method of claim 9, wherein at least one lookup segment has a different ICE match than at least one other lookup segment.
  • 11. The method of claim 1, further comprising the step of allowing retrieval of at least one source text based on the context matching level.
  • 12. A system for determining an in-context exact (ICE) match from context matching levels of a plurality of source texts stored in a translation memory to a lookup segment to be translated, the system comprising: means for generating a preceding usage context hash code for a preceding segment, based on a text stream for the preceding segment next to the lookup segment; means for generating a post usage context hash code for a post segment, based on a text stream for the post segment next to the lookup segment; means for determining any exact matches for the lookup segment in the plurality of translation memory source texts; means for calculating for each exact match a context matching level based on: a match between the preceding usage context hash code for the lookup segment and a preceding usage context hash code generated for a segment of a translation memory source text, and a match between the post usage context hash code for the lookup segment and a post usage context hash code generated for the segment of the translation memory source text; and means for determining, for each exact match, if the segment of the translation memory source text providing the exact match is an ICE match for the lookup segment based on the calculated context matching level.
  • 13. The system of claim 12, wherein the ICE match determining means indicates that a respective exact match is an ICE match for the lookup segment only in the case that each of the calculated preceding and post usage context hash codes of the lookup segment matches that of the respective hash codes of the translation memory.
  • 14. The system of claim 13, further comprising means for, in the case that greater than one exact match is determined, ranking each exact match according to a degree of context matching.
  • 15. The system of claim 12, wherein the ICE match determining means indicates that a respective exact match is an ICE match for the lookup segment only in the case that both the preceding usage context level and the post usage context level of the lookup segment matches that of the respective exact match.
  • 16. The system of claim 12, further comprising means for, in the case that greater than one ICE match is determined, prioritizing each ICE match according to a degree of context hash code matching.
  • 17. The system of claim 16, wherein the prioritizing means: first prefers an ICE match having both previous and post UC (usage context) levels that match those of the lookup segment over an ICE match having only one of the previous and post UC levels matching those of the lookup segment; where the first preference is non-conclusive, second prefers an ICE match from a same asset as the lookup segment over an ICE match from a different asset; and where the second preference is non-conclusive, third prefers an ICE match with a closest position to a position of the lookup segment within the asset.
  • 18. The system of claim 17, further comprising means for allowing a user to select the ICE match based on the rank.
  • 19. The system of claim 18, wherein the lookup segment includes a plurality of lookup segments that are substantially identical in terms of content, and wherein the ICE match determining means determines an ICE match for each lookup segment.
  • 20. The system of claim 19, wherein at least one lookup segment has a different ICE match than at least one other lookup segment.
  • 21. The system of claim 12, further comprising means for allowing retrieval of at least one source text based on the context matching level.
  • 22. A program product stored on a non-transitory computer readable medium for determining an in-context exact (ICE) match from context matching levels of a plurality of translation memory source texts stored in a translation memory to a lookup segment to be translated, the computer readable medium comprising program code for performing the following steps: generating a preceding usage context hash code for a preceding segment, based on a text stream for the preceding segment next to the lookup segment; generating a post usage context hash code for a post segment, based on a text stream for the post segment next to the lookup segment; determining any exact matches for the lookup segment in the plurality of translation memory source texts; calculating for each exact match a context matching level based on: a match between the preceding usage context hash code for the lookup segment and a preceding usage context hash code generated for a segment of a translation memory source text, and a match between the post usage context hash code for the lookup segment and a post usage context hash code generated for the segment of the translation memory source text; and determining, for each exact match, if the segment of the translation memory source text providing the exact match is an ICE match for the lookup segment based on the calculated context matching level.
  • 23. The program product of claim 22, wherein the plurality of translation memory source texts providing the exact match includes at least two calculated context matching levels for the generated hash codes.
  • 24. The program product of claim 23, wherein the ICE match determining step indicates if each exact match is an ICE match for the lookup segment only in the case that both the calculated preceding and calculated post context matching level of the lookup segment matches that of the respective exact match.
  • 25. The program product of claim 24, further comprising, in the case that greater than one exact match is determined, ranking each exact match according to a degree of context matching.
  • 26. The program product of claim 22, wherein the ICE match determining step indicates that a respective exact match is an ICE match for the lookup segment only in the case that both the preceding usage context hash code and the post usage context hash code of the lookup segment matches that of the respective exact match.
  • 27. The program product of claim 23, further comprising, in the case that greater than one exact match is determined, ranking each exact match according to a degree of context hash code matching.
  • 28. The program product of claim 26, wherein the prioritizing step includes: first preferring an ICE match having both previous and post UC (usage context) hash codes that match those of the lookup segment over an ICE match having only one of the previous and post UC hash codes matching those of the lookup segment; where the first preferring step is non-conclusive, second preferring an ICE match from a same asset as the lookup segment over an ICE match from a different asset; and where the second preferring step is non-conclusive, third preferring an ICE match with a closest position to a position of the lookup segment within the asset.
  • 29. The program product of claim 28, further comprising allowing a user to select the ICE match based on the prioritization.
  • 30. The program product of claim 22, wherein the lookup segment includes a plurality of lookup segments that are substantially identical in terms of content, and wherein the ICE match determining step includes determining an ICE match for each lookup segment.
  • 31. The program product of claim 30, wherein at least one lookup segment has a different ICE match than at least one other lookup segment.
  • 32. The program product of claim 22, further comprising the step of allowing retrieval of at least one source text based on the context matching level.
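
As a non-authoritative illustration of the matching and prioritization logic recited in claims 1 and 7, the following sketch assumes TM entries shaped like the records in the earlier storage sketch and a lookup dictionary with "source", "preceding_hash", "post_hash", "asset_id", and "position" keys; the 0-2 context matching level and the tuple-based sort key are choices made for this example only.

    def find_ice_matches(lookup, tm_entries):
        # Claim 1: keep only exact text matches, then compute a context matching
        # level from how many usage-context hash codes (preceding, post) also match.
        candidates = []
        for entry in tm_entries:
            if entry["source"] != lookup["source"]:
                continue  # not an exact match
            uc = entry["usage_context"]
            level = ((uc["preceding_hash"] == lookup["preceding_hash"]) +
                     (uc["post_hash"] == lookup["post_hash"]))
            if level == 0:
                continue  # exact match, but out of context
            candidates.append((entry, level))

        # Claim 7: prefer matches whose preceding and post hashes both match, then
        # matches from the same asset, then the match closest to the lookup's position.
        def priority(item):
            entry, level = item
            asset = entry["asset_context"]
            same_asset = asset["asset_id"] == lookup["asset_id"]
            distance = abs(asset["position"] - lookup["position"]) if same_asset else float("inf")
            return (-level, not same_asset, distance)

        return [entry for entry, _ in sorted(candidates, key=priority)]

In this sketch a one-sided context match still qualifies as a candidate and is simply ranked lower, consistent with the prioritization of claim 7; the stricter embodiment of claim 3 would instead require both hash codes to match (level == 2).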
Parent Case Info

This U.S. Non-Provisional Patent Application is a continuation application and claims the benefit of U.S. Non-Provisional patent application Ser. No. 14/519,077 filed on Oct. 20, 2014, titled “In-Context Exact (ICE) Matching”, (Issued as U.S. Pat. No. 9,342,506 on May 17, 2016), which is a continuation application and claims the benefit of U.S. Non-Provisional patent application Ser. No. 13/175,783 filed on Jul. 1, 2011, titled “In-Context Exact (ICE) Matching” (Issued as U.S. Pat. No. 8,874,427 on Oct. 28, 2014), which is a continuation application and claims the benefit of U.S. Non-Provisional patent application Ser. No. 11/071,706 filed on Mar. 3, 2005, titled “In-Context Exact (ICE) Matching” (Issued as U.S. Pat. No. 7,983,896 on Jul. 19, 2011), which claims the benefit of U.S. Provisional Patent Application No. 60/550,795, filed on Mar. 5, 2004, titled “In-Context Exact (ICE) Matching”, all of which are hereby incorporated by reference in their entirety including all references cited therein.

US Referenced Citations (380)
Number Name Date Kind
4661924 Okamoto et al. Apr 1987 A
4674044 Kalmus et al. Jun 1987 A
4677552 Sibley, Jr. Jun 1987 A
4789928 Fujisaki Dec 1988 A
4845658 Gifford Jul 1989 A
4903201 Wagner Feb 1990 A
4916614 Kaji et al. Apr 1990 A
4920499 Skeirik Apr 1990 A
4962452 Nogami et al. Oct 1990 A
4992940 Dworkin Feb 1991 A
5005127 Kugimiya et al. Apr 1991 A
5020021 Kaji et al. May 1991 A
5075850 Asahioka et al. Dec 1991 A
5093788 Shiotani et al. Mar 1992 A
5111398 Nunberg et al. May 1992 A
5140522 Ito et al. Aug 1992 A
5146405 Church Sep 1992 A
5168446 Wiseman Dec 1992 A
5224040 Tou Jun 1993 A
5243515 Lee Sep 1993 A
5243520 Jacobs et al. Sep 1993 A
5283731 Lalonde et al. Feb 1994 A
5295068 Nishino et al. Mar 1994 A
5301109 Landauer Apr 1994 A
5325298 Gallant Jun 1994 A
5349368 Takeda et al. Sep 1994 A
5351189 Doi Sep 1994 A
5408410 Kaji Apr 1995 A
5418717 Su May 1995 A
5423032 Byrd et al. Jun 1995 A
5477451 Brown et al. Dec 1995 A
5490061 Tolin Feb 1996 A
5497319 Chong et al. Mar 1996 A
5510981 Berger Apr 1996 A
5541836 Church et al. Jul 1996 A
5548508 Nagami Aug 1996 A
5555343 Luther Sep 1996 A
5587902 Kugimiya Dec 1996 A
5640575 Maruyama et al. Jun 1997 A
5642522 Zaenen Jun 1997 A
5644775 Thompson et al. Jul 1997 A
5687384 Nagase Nov 1997 A
5708780 Levergood et al. Jan 1998 A
5708825 Sotomayor Jan 1998 A
5710562 Gormish Jan 1998 A
5715314 Payne et al. Feb 1998 A
5715402 Popolo Feb 1998 A
5724424 Gifford Mar 1998 A
5724593 Hargrave, III et al. Mar 1998 A
5751957 Hiroya et al. May 1998 A
5764906 Edelstein et al. Jun 1998 A
5765138 Aycock et al. Jun 1998 A
5794219 Brown Aug 1998 A
5799269 Schabes et al. Aug 1998 A
5802502 Gell et al. Sep 1998 A
5802525 Rigoutsos Sep 1998 A
5812776 Gifford Sep 1998 A
5818914 Fujisaki Oct 1998 A
5819265 Ravin et al. Oct 1998 A
5826244 Huberman Oct 1998 A
5842204 Andrews Nov 1998 A
5844798 Uramoto Dec 1998 A
5845143 Yamauchi et al. Dec 1998 A
5845306 Schabes et al. Dec 1998 A
5848386 Motoyama Dec 1998 A
5850442 Muftic Dec 1998 A
5850561 Church et al. Dec 1998 A
5864788 Kutsumi Jan 1999 A
5873056 Liddy Feb 1999 A
5884246 Boucher et al. Mar 1999 A
5895446 Takeda et al. Apr 1999 A
5909492 Payne et al. Jun 1999 A
5917484 Mullaney Jun 1999 A
5950194 Bennett Sep 1999 A
5956711 Sullivan et al. Sep 1999 A
5956740 Nosohara Sep 1999 A
5960382 Steiner Sep 1999 A
5966685 Flanagan et al. Oct 1999 A
5974371 Hirai et al. Oct 1999 A
5974372 Barnes Oct 1999 A
5974413 Beauregard et al. Oct 1999 A
5987401 Trudeau Nov 1999 A
5987403 Sugimura Nov 1999 A
6044344 Kanevsky Mar 2000 A
6044363 Mori et al. Mar 2000 A
6047299 Kaijima Apr 2000 A
6049785 Gifford Apr 2000 A
6070138 Iwata May 2000 A
6085162 Cherny Jul 2000 A
6092034 McCarley et al. Jul 2000 A
6092035 Kurachi et al. Jul 2000 A
6131082 Hargrave, III et al. Oct 2000 A
6139201 Carbonell et al. Oct 2000 A
6154720 Onishi et al. Nov 2000 A
6161082 Goldberg et al. Dec 2000 A
6163785 Carbonell et al. Dec 2000 A
6195649 Gifford Feb 2001 B1
6199051 Gifford Mar 2001 B1
6205437 Gifford Mar 2001 B1
6212634 Geer et al. Apr 2001 B1
6260008 Sanfilippo Jul 2001 B1
6278969 King et al. Aug 2001 B1
6279112 O'toole, Jr. et al. Aug 2001 B1
6285978 Bernth et al. Sep 2001 B1
6301574 Thomas et al. Oct 2001 B1
6304846 George Oct 2001 B1
6338033 Bourbonnais et al. Jan 2002 B1
6341372 Datig Jan 2002 B1
6345244 Clark Feb 2002 B1
6345245 Sugiyama et al. Feb 2002 B1
6347316 Redpath Feb 2002 B1
6353824 Boguraev Mar 2002 B1
6356865 Franz et al. Mar 2002 B1
6385568 Brandon et al. May 2002 B1
6393389 Chanod et al. May 2002 B1
6401105 Carlin et al. Jun 2002 B1
6415257 Junqua Jul 2002 B1
6442524 Ecker Aug 2002 B1
6449599 Payne et al. Sep 2002 B1
6470306 Pringle et al. Oct 2002 B1
6473729 Gastaldo et al. Oct 2002 B1
6477524 Taskiran Nov 2002 B1
6490358 Geer et al. Dec 2002 B1
6490563 Hon Dec 2002 B2
6526426 Lakritz Feb 2003 B1
6622121 Crepy Sep 2003 B1
6623529 Lakritz Sep 2003 B1
6658627 Gallup et al. Dec 2003 B1
6687671 Gudorf Feb 2004 B2
6731625 Eastep et al. May 2004 B1
6782384 Sloan et al. Aug 2004 B2
6865528 Huang Mar 2005 B1
6920419 Kitamura Jul 2005 B2
6952691 Drissi et al. Oct 2005 B2
6976207 Rujan Dec 2005 B1
6990439 Xun Jan 2006 B2
6993473 Cartus Jan 2006 B2
7013264 Dolan Mar 2006 B2
7020601 Hummel et al. Mar 2006 B1
7031908 Huang Apr 2006 B1
7050964 Menzes May 2006 B2
7089493 Hatori et al. Aug 2006 B2
7100117 Chwa Aug 2006 B1
7110938 Cheng et al. Sep 2006 B1
7124092 O'toole, Jr. et al. Oct 2006 B2
7155440 Kronmiller Dec 2006 B1
7177702 Knight Feb 2007 B2
7185276 Keswa Feb 2007 B2
7191447 Ellis et al. Mar 2007 B1
7194403 Okura et al. Mar 2007 B2
7207005 Laktritz Apr 2007 B2
7209875 Quirk et al. Apr 2007 B2
7249013 Al-Onaizan Jul 2007 B2
7266767 Parker Sep 2007 B2
7272639 Levergood et al. Sep 2007 B1
7295962 Marcu Nov 2007 B2
7295963 Richardson Nov 2007 B2
7333927 Lee Feb 2008 B2
7340388 Soricut Mar 2008 B2
7343551 Bourdev Mar 2008 B1
7353165 Zhou et al. Apr 2008 B2
7369984 Fairweather May 2008 B2
7389222 Langmead Jun 2008 B1
7389223 Atkin Jun 2008 B2
7448040 Ellis et al. Nov 2008 B2
7454326 Marcu Nov 2008 B2
7509313 Colledge Mar 2009 B2
7516062 Chen et al. Apr 2009 B2
7533013 Marcu May 2009 B2
7533338 Duncan May 2009 B2
7580960 Travieso et al. Aug 2009 B2
7587307 Cancedda et al. Sep 2009 B2
7594176 English Sep 2009 B1
7596606 Codignotto Sep 2009 B2
7620538 Marcu Nov 2009 B2
7620549 Di Cristo Nov 2009 B2
7624005 Koehn Nov 2009 B2
7627479 Travieso Dec 2009 B2
7640158 Detlef Dec 2009 B2
7668782 Reistad et al. Feb 2010 B1
7680647 Moore Mar 2010 B2
7693717 Kahn et al. Apr 2010 B2
7698124 Menezes et al. Apr 2010 B2
7716037 Precoda May 2010 B2
7734459 Menezes Jun 2010 B2
7739102 Bender Jun 2010 B2
7739286 Sethy Jun 2010 B2
7788087 Corston-Oliver Aug 2010 B2
7813918 Muslea Oct 2010 B2
7865358 Green Jan 2011 B2
7925493 Watanabe Apr 2011 B2
7925494 Cheng et al. Apr 2011 B2
7945437 Mount et al. May 2011 B2
7903097 Chin Jul 2011 B2
7983896 Ross et al. Jul 2011 B2
7983903 Gao Jul 2011 B2
8050906 Zimmerman Nov 2011 B1
8078450 Anisimovich et al. Dec 2011 B2
8135575 Dean Mar 2012 B1
8195447 Anismovich Jun 2012 B2
8214196 Yamada Jul 2012 B2
8239186 Chin Aug 2012 B2
8239207 Seligman Aug 2012 B2
8249855 Zhou et al. Aug 2012 B2
8275604 Jiang et al. Sep 2012 B2
8286185 Ellis et al. Oct 2012 B2
8296127 Marcu Oct 2012 B2
8352244 Gao et al. Jan 2013 B2
8364463 Miyamoto Jan 2013 B2
8386234 Uchimoto et al. Feb 2013 B2
8423346 Seo et al. Apr 2013 B2
8442812 Ehsani May 2013 B2
8521506 Lancaster et al. Aug 2013 B2
8527260 Best Sep 2013 B2
8548794 Koehn Oct 2013 B2
8554591 Reistad et al. Oct 2013 B2
8594992 Kuhn et al. Nov 2013 B2
8600728 Knight Dec 2013 B2
8606900 Levergood et al. Dec 2013 B1
8612203 Foster Dec 2013 B2
8615388 Li Dec 2013 B2
8620793 Knyphausen et al. Dec 2013 B2
8635327 Levergood et al. Jan 2014 B1
8635539 Young Jan 2014 B2
8666725 Och Mar 2014 B2
8688454 Zheng Apr 2014 B2
8725496 Zhao May 2014 B2
8768686 Sarikaya et al. Jul 2014 B2
8775154 Clinchant Jul 2014 B2
8818790 He et al. Aug 2014 B2
8843359 Lauder Sep 2014 B2
8862456 Krack et al. Oct 2014 B2
8874427 Ross et al. Oct 2014 B2
8898052 Waibel Nov 2014 B2
8903707 Zhao Dec 2014 B2
8930176 Li Jan 2015 B2
8935148 Christ Jan 2015 B2
8935149 Zhang Jan 2015 B2
8935150 Christ Jan 2015 B2
8935706 Ellis et al. Jan 2015 B2
8972268 Waibel Mar 2015 B2
9026425 Nikoulina May 2015 B2
9053202 Viswanadha Jun 2015 B2
9081762 Wu Jul 2015 B2
9128929 Albat Sep 2015 B2
9141606 Marciano Sep 2015 B2
9176952 Aikawa Nov 2015 B2
9183192 Ruby, Jr. Nov 2015 B1
9183198 Shen et al. Nov 2015 B2
9201870 Jurach Dec 2015 B2
9208144 Abdulnasyrov Dec 2015 B1
9262403 Christ Feb 2016 B2
9342506 Ross et al. May 2016 B2
9396184 Roy Jul 2016 B2
9400786 Lancaster et al. Jul 2016 B2
9465797 Ji Oct 2016 B2
9471563 Trese Oct 2016 B2
9519640 Perez Dec 2016 B2
9552355 Dymetman Jan 2017 B2
9600472 Cheng et al. Mar 2017 B2
9600473 Leydon Mar 2017 B2
9613026 Hodson Apr 2017 B2
20020002461 Tetsumoto Jan 2002 A1
20020046018 Marcu Apr 2002 A1
20020083103 Ballance Jun 2002 A1
20020093416 Goers et al. Jul 2002 A1
20020099547 Chu Jul 2002 A1
20020103632 Dutta et al. Aug 2002 A1
20020107684 Gao Aug 2002 A1
20020110248 Kovales Aug 2002 A1
20020111787 Knyphausen et al. Aug 2002 A1
20020124109 Brown Sep 2002 A1
20020138250 Okura et al. Sep 2002 A1
20020165708 Kumhyr Nov 2002 A1
20020169592 Aityan Nov 2002 A1
20020188439 Marcu Dec 2002 A1
20020198701 Moore Dec 2002 A1
20030004702 Higinbotham Jan 2003 A1
20030009320 Furuta Jan 2003 A1
20030016147 Evans Jan 2003 A1
20030040900 D'Agostini Feb 2003 A1
20030069879 Sloan et al. Apr 2003 A1
20030078766 Appelt Apr 2003 A1
20030105621 Mercier Jun 2003 A1
20030120479 Parkinson et al. Jun 2003 A1
20030158723 Masuichi et al. Aug 2003 A1
20030182279 Willows Sep 2003 A1
20030194080 Michaelis Oct 2003 A1
20030200094 Gupta Oct 2003 A1
20030229622 Middelfart Dec 2003 A1
20030233222 Soricut et al. Dec 2003 A1
20040024581 Koehn et al. Feb 2004 A1
20040034520 Langkilde-Geary Feb 2004 A1
20040044517 Palmquist Mar 2004 A1
20040122656 Abir Jun 2004 A1
20040172235 Pinkham et al. Sep 2004 A1
20040255281 Imamura et al. Dec 2004 A1
20050021323 Li Jan 2005 A1
20050055212 Nagao Mar 2005 A1
20050075858 Pournasseh et al. Apr 2005 A1
20050076342 Levins et al. Apr 2005 A1
20050094475 Naoi May 2005 A1
20050149316 Ushioda et al. Jul 2005 A1
20050171758 Palmquist Aug 2005 A1
20050171944 Palmquist Aug 2005 A1
20050197827 Ross et al. Sep 2005 A1
20050222837 Deane Oct 2005 A1
20050222973 Kaiser Oct 2005 A1
20050273314 Chang et al. Dec 2005 A1
20060015320 Och Jan 2006 A1
20060095526 Levergood et al. May 2006 A1
20060095848 Naik May 2006 A1
20060136277 Perry Jun 2006 A1
20060256139 Gikandi Nov 2006 A1
20060282266 Lu Dec 2006 A1
20060287844 Rich Dec 2006 A1
20070043553 Dolan Feb 2007 A1
20070112553 Jacobson May 2007 A1
20070118378 Skuratovsky May 2007 A1
20070136470 Chikkareddy et al. Jun 2007 A1
20070150257 Cancedda et al. Jun 2007 A1
20070192110 Mizutani et al. Aug 2007 A1
20070230729 Naylor Oct 2007 A1
20070233460 Lancaster et al. Oct 2007 A1
20070233463 Sparre Oct 2007 A1
20070244702 Kahn Oct 2007 A1
20070294076 Shore et al. Dec 2007 A1
20080077395 Lancaster et al. Mar 2008 A1
20080086298 Anismovich Apr 2008 A1
20080109374 Levergood et al. May 2008 A1
20080141180 Reed Jun 2008 A1
20080147378 Hall Jun 2008 A1
20080154577 Kim Jun 2008 A1
20080201344 Levergood et al. Aug 2008 A1
20080243834 Rieman et al. Oct 2008 A1
20080288240 D'Agostini et al. Nov 2008 A1
20080294982 Leung et al. Nov 2008 A1
20090094017 Chen et al. Apr 2009 A1
20090132230 Kanevsky et al. May 2009 A1
20090187577 Reznik Jul 2009 A1
20090204385 Cheng et al. Aug 2009 A1
20090217196 Neff et al. Aug 2009 A1
20090240539 Slawson Sep 2009 A1
20090248182 Logan Oct 2009 A1
20090248482 Knyphausen et al. Oct 2009 A1
20090313005 Jaquinta Dec 2009 A1
20090326917 Hegenberger Dec 2009 A1
20100057439 Ideuchi et al. Mar 2010 A1
20100057561 Gifford Mar 2010 A1
20100121630 Mende et al. May 2010 A1
20100179803 Sawaf Jul 2010 A1
20100223047 Christ Sep 2010 A1
20100241482 Knyphausen et al. Sep 2010 A1
20100262621 Ross et al. Oct 2010 A1
20110066469 Kadosh Mar 2011 A1
20110097693 Crawford Apr 2011 A1
20110184719 Christ Jul 2011 A1
20120022852 Tregaskis Jan 2012 A1
20120046934 Cheng et al. Feb 2012 A1
20120095747 Ross et al. Apr 2012 A1
20120185235 Albat Jul 2012 A1
20120330990 Chen et al. Dec 2012 A1
20130173247 Hodson Jul 2013 A1
20130325442 Dahlmeier Dec 2013 A1
20130346062 Lancaster et al. Dec 2013 A1
20140006006 Christ Jan 2014 A1
20140012565 Lancaster et al. Jan 2014 A1
20140058718 Kunchukuttan Feb 2014 A1
20140142917 D'Penha May 2014 A1
20140142918 Dotterer May 2014 A1
20140229257 Reistad et al. Aug 2014 A1
20140297252 Prasad et al. Oct 2014 A1
20140358519 Mirkin Dec 2014 A1
20140358524 Papula Dec 2014 A1
20140365201 Gao Dec 2014 A1
20150051896 Simard Feb 2015 A1
20150142415 Cheng et al. May 2015 A1
20150169554 Ross et al. Jun 2015 A1
20150186362 Li Jul 2015 A1
20170132214 Cheng et al. May 2017 A1
Foreign Referenced Citations (111)
Number Date Country
5240198 May 1998 AU
694367 Jul 1998 AU
5202299 Oct 1999 AU
199938259 Nov 1999 AU
761311 Sep 2003 AU
1076861 Jun 2005 BE
2221506 Dec 1996 CA
231184 Jul 2009 CA
1076861 Jun 2005 CH
1179289 Dec 2004 CN
1770144 May 2006 CN
101019113 Aug 2007 CN
101826072 Sep 2010 CN
101248415 Oct 2010 CN
102053958 May 2011 CN
102193914 Sep 2011 CN
102662935 Sep 2012 CN
102902667 Jan 2013 CN
69525374 Aug 2002 DE
69431306 May 2003 DE
69925831 Jun 2005 DE
69633564 Nov 2005 DE
2317447 Jan 2014 DE
0262938 Apr 1988 EP
0668558 Aug 1995 EP
0830774 Feb 1998 EP
0830774 Mar 1998 EP
0887748 Dec 1998 EP
1076861 Feb 2001 EP
1128301 Aug 2001 EP
1128302 Aug 2001 EP
1128303 Aug 2001 EP
0803103 Feb 2002 EP
1235177 Aug 2002 EP
0734556 Sep 2002 EP
1266313 Dec 2002 EP
1489523 Dec 2004 EP
1076861 Jun 2005 EP
1787221 May 2007 EP
1889149 Feb 2008 EP
2226733 Sep 2010 EP
2299369 Mar 2011 EP
2317447 May 2011 EP
2336899 Jun 2011 EP
2317447 Jan 2014 EP
1076861 Jun 2005 FR
2241359 Aug 1991 GB
1076861 Jun 2005 GB
2433403 Jun 2007 GB
2468278 Sep 2010 GB
2474839 May 2011 GB
2317447 Jan 2014 GB
1076861 Jun 2005 IE
H04-152466 May 1992 JP
H05-135095 Jun 1993 JP
H05-197746 Aug 1993 JP
H06-035962 Feb 1994 JP
H06-259487 Sep 1994 JP
H07-093331 Apr 1995 JP
H08-055123 Feb 1996 JP
H09-114907 May 1997 JP
H10-063747 Mar 1998 JP
H10-097530 Apr 1998 JP
H10509543 Sep 1998 JP
H11507752 Jul 1999 JP
3190881 Jul 2001 JP
3190882 Jul 2001 JP
3260693 Feb 2002 JP
2002513970 May 2002 JP
3367675 Jan 2003 JP
2003150623 May 2003 JP
2003157402 May 2003 JP
2004318510 Nov 2004 JP
2005107597 Apr 2005 JP
2005197827 Jul 2005 JP
3762882 Apr 2006 JP
2006216073 Aug 2006 JP
2007042127 Feb 2007 JP
2007249606 Sep 2007 JP
2008152670 Jul 2008 JP
2008152760 Jul 2008 JP
4485548 Jun 2010 JP
4669373 Apr 2011 JP
4669430 Apr 2011 JP
4718687 Apr 2011 JP
2011095841 May 2011 JP
5473533 Feb 2014 JP
244945 Apr 2007 MX
2317447 Jan 2014 NL
WO199406086 Mar 1994 WO
WO9516971 Jun 1995 WO
WO9613013 May 1996 WO
WO9642041 Dec 1996 WO
WO9715885 May 1997 WO
WO9804061 Jan 1998 WO
WO9819224 May 1998 WO
WO9952628 Oct 1999 WO
WO199957651 Nov 1999 WO
WO2000057320 Sep 2000 WO
WO200101289 Jan 2001 WO
WO200129696 Apr 2001 WO
WO2002029622 Apr 2002 WO
WO2002039318 May 2002 WO
WO2006016171 Feb 2006 WO
WO2006121849 Nov 2006 WO
WO2007068123 Jun 2007 WO
WO2008055360 May 2008 WO
WO2008083503 Jul 2008 WO
WO2008147647 Dec 2008 WO
WO2010062540 Jun 2010 WO
WO2010062542 Jun 2010 WO
Non-Patent Literature Citations (126)
Entry
First Examination Report dated Nov. 26, 2009 for European Patent Application 05772051.8, filed May 8, 2006.
Second Examination Report dated Feb. 19, 2013 for European Patent Application 06759147.9, filed May 8, 2006.
Langlais, et al. “TransType: a Computer-Aided Translation Typing System”, in Conference on Language Resources and Evaluation, 2000.
First Notice of Reasons for Rejection dated Jun. 18, 2013 for Japanese Patent Application 2009-246729, filed Oct. 27, 2009.
First Notice of Reasons for Rejection dated Jun. 4, 2013 for Japanese Patent Application 2010-045531, filed Oct. 27, 2009.
Rejection Decision dated May 14, 2013 for Chinese Patent Application 200910253192.6, filed Dec. 14, 2009.
Matsunaga, et al. “Sentence Matching Algorithm of Revised Documents with Considering Context Information,” IEICE Technical Report, 2003, pp. 43-48.
Pennington, Paula K. Improving Quality in Translation Through an Awareness of Process and Self-Editing Skills. Eastern Michigan University, ProQuest, UMI Dissertations Publishing, 1994.
Notice of Allowance dated Jan. 7, 2014 for Japanese Patent Application 2009-246729, filed Oct. 27, 2009.
Kumano et al., “Japanese-English Translation Selection Using Vector Space Model,” Journal of Natural Language Processing; vol. 10; No. 3; (2003); pp. 39-59.
Final Rejection and a Decision to Dismiss the Amendment dated Jan. 7, 2014 for Japanese Patent Application 2010-045531, filed Mar. 2, 2010.
Office Action dated Feb. 24, 2014 for Chinese Patent Application No. 201010521841.9, filed Oct. 25, 2010.
Extended European Search Report dated Oct. 24, 2014 for European Patent Application 10185842.1, filed Oct. 1, 2010.
Summons to attend oral proceeding pursuant to Rule 115(1)(EPC) mailed Oct. 13, 2014 in European Patent Application 00902634.5 filed Jan. 26, 2000.
Summons to attend oral proceeding pursuant to Rule 115(1)(EPC) mailed Feb. 3, 2015 in European Patent Application 06759147.9 filed May 8, 2006.
Decision to Refuse dated Mar. 2, 2015 in European Patent Application 00902634.5 filed Jan. 26, 2000.
Brief Communication dated Jun. 17, 2015 in European Patent Application 06759147.9 filed May 8, 2006.
Somers, H. “EBMT Seen as Case-based Reasoning” Mt Summit VIII Workshop on Example-Based Machine Translation, 2001, pp. 56-65, XP055196025.
The Minutes of Oral Proceedings dated Mar. 2, 2015 in European Patent Application 00902634.5 filed Jan. 26, 2000.
Notification of Reexamination dated Aug. 18, 2015 in Chinese Patent Application 200910253192.6, filed Dec. 14, 2009.
Decision to Refuse dated Aug. 24, 2015 in European Patent Application 06759147.9, filed May 8, 2006.
Komatsu, H et al, "Corpus-based predictive text input", "Proceedings of the 2005 International Conference on Active Media Technology", 2005, IEEE, pp. 75-80, ISBN 0-7803-9035-0.
Saiz, Jorge Civera: "Novel statistical approaches to text classification, machine translation and computer-assisted translation" Doctor En Informatica Thesis, May 22, 2008, XP002575820 Universidad Politécnica de Valencia, Spain. Retrieved from Internet: http://dspace.upv.es/manakin/handle/10251/2502 [retrieved on Mar. 30, 2010]. pp. 111-131.
De Gispert, A., Marino, J.B. and Crego, J.M.: “Phrase-Based Alignment Combining Corpus Cooccurrences and Linguistic Knowledge” Proc. of the Int. Workshop on Spoken Language Translation (IWSLT'04), Oct. 1, 2004, XP002575821 Kyoto, Japan. Retrieved from the Internet: http://mi.eng.cam.ac.uk/˜ad465/agispert/docs/papers/TP_gispert.pdf [retrieved on Mar. 30, 2010].
Planas, Emmanuel: “Similis Second-generation translation memory software,” Translating and the Computer Nov. 27, 2005 [London: Aslib, 2005].
Web Page New Auction Art Preview, www.netauction.net/dragonart.html, “Come bid on original illustrations,” by Greg & Tim Hildebrandt, Feb. 3, 2001. (last accessed Nov. 16, 2011).
Web Pages—BidNet, www.bidnet.com, “Your link to the State and Local Government Market,” including Bid Alert Service, Feb. 7, 2009. (last accessed Nov. 16, 2011).
Web Pages Christie's, www.christies.com, including “How to Buy,” and “How to Sell,” Apr. 23, 2009. (last accessed Nov. 16, 2011).
Web Pages Artrock Auction, www.commerce.com, Auction Gallery, Apr. 7, 2007. (last accessed Nov. 16, 2011).
Trados Translator's Workbench for Windows, 1994-1995, Trados GmbH, Stuttgart, Germany, pp. 9-13 and 27-96. Copy unavailable.
Notification of Reasons for Refusal for Japanese Application No. 2000-607125 dated Nov. 10, 2009 (Abstract Only).
Ross et al., U.S. Appl. No. 11/071,706, filed Mar. 3, 2005, Office Communication dated Dec. 13, 2007.
Ross et al., U.S. Appl. No. 11/071,706, filed Mar. 3, 2005, Office Communication dated Oct. 6, 2008.
Ross et al., U.S. Appl. No. 11/071,706, filed Mar. 3, 2005, Office Communication dated Jun. 9, 2009.
Ross et al., U.S. Appl. No. 11/071,706, filed Mar. 3, 2005, Office Communication dated Feb. 18, 2010.
Colucci, Office Communication for U.S. Appl. No. 11/071,706 dated Sep. 24, 2010.
Och, et al., “Improved Alignment Models for Statistical Machine Translation,” In: Proceedings of the Joint Workshop on Empirical Methods in NLP and Very Large Corporations, 1999, p. 20-28, downloaded from http://www.actweb.org/anthology-new/W/W99/W99-0604.pdf.
International Search Report and Written Opinion dated Sep. 4, 2007 in Patent Cooperation Treaty Application No. PCT/US06/17398.
XP 002112717 Machine translation software for the Internet, Harada K.; et al, vol. 28, Nr:2, pp. 66-74. Sanyo Technical Review San'yo Denki Giho, Hirakata, JP ISSN 0285-516X, Oct. 1, 1996.
XP 000033460 Method to Make a Translated Text File Have the Same Printer Control Tags as the Original Text File, vol. 32, Nr:2, pp. 375-377, IBM Technical Disclosure Bulletin, International Business Machines Corp. (Thornwood), US ISSN 0018-8689, Jul. 1, 1989.
XP 002565038—Integrating Machine Translation into Translation Memory Systems, Matthias Heyn, pp. 113-126, TKE. Terminology and Knowledge Engineering. Proceedings International Congress on Terminology and Knowledge Engineering, Aug. 29-30, 1996.
XP 002565039—Linking translation memories with example-based machine translation, Michael Carl; Silvia Hansen, pp. 617-624, Machine Translation Summit. Proceedings, Sep. 1, 1999.
XP 55024828 TransType2 an Innovative Computer-Assisted Translation System, ACL 2004, Jul. 21, 2004, Retrieved from the Internet: http://www.mt-archive.info/ACL-2004-Esteban.pdf [retrieved on Apr. 18, 2012].
Bourigault, Surface Grammatical Analysis for the Extraction of Terminological Noun Phrases, Proc. of Coling-92, Aug. 23, 1992, pp. 977-981, Nantes, France.
Thurmair, Making Term Extraction Tools Usable, The Joint Conference of the 8th International Workshop of the European Association for Machine Translation, May 15, 2003, Dublin, Ireland.
Sanfillipo, Section 5.2 Multiword Recognition and Extraction, Eagles LE3-4244, Preliminary Recommendations on Lexical Semantic Encoding, Jan. 7, 1999.
Hindle et al., Structural Ambiguity and lexical Relations, 1993, Association for Computational Linguistics, vol. 19, No. 1, pp. 103-120.
Ratnaparkhi, A Maximum Entropy Model for Part-Of-Speech Tagging, 1996, Proceedings for the conference on empirical methods in natural language processing, V.1, pp. 133-142.
Somers, H. “Review Article: Example-based Machine Translation,” Machine Translation, Issue 14, pp. 113-157, 1999.
Civera, et al. “Computer-Assisted Translation Tool Based on Finite-State Technology,” In: Proc. of EAMT, 2006, pp. 33-40 (2006).
Okura, Seiji et al., “Translation Assistance by Autocomplete,” The Association for Natural Language Processing, Publication 13th Annual Meeting Proceedings, Mar. 2007, p. 678-679.
Soricut, R, et al., “Using a Large Monolingual Corpus to Improve Translation Accuracy,” Proc. of the Conference of the Association for Machine Translation in the Americas (Amta-2002), Aug. 10, 2002, pp. 155-164, XP002275656.
Fung et al. "An IR Approach for Translating New Words from Nonparallel, Comparable Texts," Proceedings of COLING '98, the 17th International Conference on Computational Linguistics, 1998.
First Office Action dated Dec. 26, 2008 in Chinese Patent Application 200580027102.1, filed Aug. 11, 2005.
Second Office Action dated Aug. 28, 2009 in Chinese Patent Application 200580027102.1, filed Aug. 11, 2005.
Third Office Action dated Apr. 28, 2010 in Chinese Patent Application 200580027102.1, filed Aug. 11, 2005.
Summons to attend oral proceeding pursuant to Rule 115(1)(EPC) mailed Mar. 20, 2012 in European Patent Application 05772051.8 filed Aug. 11, 2005.
Notification of Reasons for Rejection dated Jan. 9, 2007 for Japanese Patent Application 2000-547557, filed Apr. 30, 1999.
Decision of Rejection dated Jul. 3, 2007 for Japanese Patent Application 2000-547557, filed Apr. 30, 1999.
Extended European Search Report and Written Opinion dated Jan. 26, 2011 for European Patent Application 10189145.5, filed on Oct. 27, 2010.
Notice of Reasons for Rejection dated Jun. 26, 2012 for Japanese Patent Application P2009-246729, filed Oct. 27, 2009.
Search Report dated Jan. 22, 2010 for United Kingdom Application GB0918765.9, filed Oct. 27, 2009.
Notice of Reasons for Rejection dated Mar. 30, 2010 for Japanese Patent Application 2007-282902, filed Apr. 30, 1999.
Decision of Rejection dated Mar. 15, 2011 for Japanese Patent Application 2007-282902, filed Apr. 30, 1999.
First Office Action dated Oct. 18, 2011 for Chinese Patent Application 2009102531926, filed Dec. 14, 2009.
Second Office Action dated Aug. 14, 2012 for Chinese Patent Application 2009102531926, filed Dec. 14, 2009.
European Search Report dated Apr. 12, 2010 for European Patent Application 09179150.9, filed Dec. 14, 2009.
First Examination Report dated Jun. 16, 2011 for European Patent Application 09179150.9, filed Dec. 14, 2009.
Notice of Reasons for Rejection dated Jul. 31, 2012 for Japanese Patent Application 2010-045531, filed Mar. 2, 2010.
First Examination Report dated Oct. 26, 2012 for United Kingdom Patent Application GB0903418.2, filed Mar. 2, 2009.
First Office Action dated Jun. 19, 2009 for Chinese Patent Application 200680015388.6, filed May 8, 2006.
“Office Action,” European Patent Application No. 10185842.1, dated Dec. 8, 2016, 7 pages.
Papineni, Kishore, et al., “BLEU: A Method for Automatic Evaluation of Machine Translation,” Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2002, pp. 311-318.
“Decision to Refuse,” European Patent Application 10185842.1, dated Mar. 22, 2018, 16 pages.
Westfall, Edith R., “Integrating Tools with the Translation Process North American Sales and Support,” Jan. 1, 1998, pp. 501-505, XP055392884. Retrieved from the Internet: <URL:https://rd.springer.com/content/pdf/10.1007/3-540-49478-2_46.pdf>.
“Summons to attend oral proceeding pursuant to Rule 115(1)(EPC),” European Patent Application 09179150.9, Dec. 14, 2017, 17 pages.
Nepveu et al. “Adaptive Language and Translation Models for Interactive Machine Translation” Conference on Empirical Methods in Natural Language Processing, Jul. 25, 2004, 8 pages. Retrieved from: http://www.cs.jhu.edu/˜yarowsky/sigdat.html.
Ortiz-Martinez et al. “Online Learning for Interactive Statistical Machine Translation” Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, Jun. 10, 2010, pp. 546-554. Retrieved from: https://www.researchgate.net/publication/220817231_Online_Learning_for_Interactive_Statistical_Machine_Translation.
Callison-Burch et al. “Proceedings of the Seventh Workshop on Statistical Machine Translation” [W12-3100] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 10-51. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Lopez, Adam. “Putting Human Assessments of Machine Translation Systems in Order” [W12-3101] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 1-9. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Avramidis, Eleftherios. “Quality estimation for Machine Translation output using linguistic analysis and decoding features” [W12-3108] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 84-90. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Buck, Christian. “Black Box Features for the WMT 2012 Quality Estimation Shared Task” [W12-3109] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 91-95. Retrieved from: Proceedings of the Seventh Workshop on Statistical Machine Translation. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Felice et al. “Linguistic Features for Quality Estimation” [W12-3110] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 96-103. Retrieved at: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Gonzalez-Rubio et al. “PRHLT Submission to the WMT12 Quality Estimation Task” [W12-3111] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 104-108. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Hardmeier et al. “Tree Kernels for Machine Translation Quality Estimation” [W12-3112] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 109-113. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Langlois et al. “LORIA System for the WMT12 Quality Estimation Shared Task” [W12-3113] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 114-119. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Moreau et al. “Quality Estimation: an experimental study using unsupervised similarity measures” [W12-3114] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 120-126. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Gonzalez et al. “The UPC Submission to the WMT 2012 Shared Task on Quality Estimation” [W12-3115] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 127-132. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Popovic, Maja. “Morpheme- and POS-based IBM1 and language model scores for translation quality estimation” Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 133-137. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatisticalmachine-translation.
Rubino et al. “DCU-Symantec Submission for the WMT 2012 Quality Estimation Task” [W12-3117] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 138-144. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Soricut et al. "The Sdl Language Weaver Systems in the WMT12 Quality Estimation Shared Task" [W12-3118] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 145-151. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Wu et al. “Regression with Phrase Indicators for Estimating MT Quality” [W12-3119] Proceedings of the Seventh Workshop on Statistical Machine Translation, Jun. 7, 2012, pp. 152-156. Retrieved from: http://aclanthology.info/volumes/proceedings-of-the-seventh-workshop-onstatistical-machine-translation.
Wuebker et al. “Hierarchical Incremental Adaptation for Statistical Machine Translation” Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1059-1065, Lisbon, Portugal, Sep. 17-21, 2015.
“Best Practices—Knowledge Base,” Lilt website [online], Mar. 6, 2017 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/kb/translators/best-practices>, 2 pages.
“Data Security—Knowledge Base,” Lilt website [online], Oct. 14, 2016 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/kb/security>, 1 pages.
“Data Security and Confidentiality,” Lilt website [online], 2017 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/security>, 7 pages.
“Memories—Knowledge Base,” Lilt website [online], Jun. 7, 2017 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/kb/project-managers/memory>, 4 pages.
“Memories (API)—Knowledge Base,” Lilt website [online], Jun. 2, 2017 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/kb/api/memories>, 1 page.
“Quoting—Knowledge Base,” Lilt website [online], Jun. 7, 2017 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/kb/project-managers/quoting>, 4 pages.
“The Editor—Knowledge Base,” Lilt website [online], Aug. 15, 2017 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/kb/translators/editor>, 5 pages.
“Training Lilt—Knowledge Base,” Lilt website [online], Oct. 14, 2016 [retrieved on Oct. 20, 2017], Retrieved from the Internet<https://lilt.com/kb/troubleshooting/training-lilt>, 1 page.
“What is Lilt_—Knowledge Base,” Lilt website [online], Dec. 15, 2016 [retrieved on Oct. 19, 2017], Retrieved from the Internet:<https://lilt.com/kb/what-is-lilt>, 1 page.
“Getting Started—Knowledge Base,” Lilt website [online], Apr. 11, 2017 [retrieved on Oct. 20, 2017], Retrieved from the Internet:<https://lilt.com/kb/translators/getting-started>, 2 pages.
“The Lexicon—Knowledge Base,” Lilt website [online], Jun. 7, 2017 [retrieved on Oct. 20, 2017], Retrieved from the Internet:<https://lilt.com/kb/translators/lexicon>, 4 pages.
“Simple Translation—Knowledge Base,” Lilt website [online], Aug. 17, 2017 [retrieved on Oct. 20, 2017], Retrieved from the Internet:<https://lllt.com/kb/apl/simple-translation>, 3 pages.
“Split and Merge—Knowledge Base,” Lilt website [online], Oct. 14, 2016 [retrieved on Oct. 20, 2017], Retrieved from the Internet:<https://lilt.com/kb/translators/split-merge>, 4 pages.
“Lilt API _ API Reference,” Lilt website [online], retrieved on Oct. 20, 2017, Retrieved from the Internet:<https://lilt.com/docs/api>, 53 pages.
“Automatic Translation Quality—Knowledge Base”, Lilt website [online], Dec. 1, 2016, retrieved on Oct. 20, 2017, Retrieved from the Internet:<https://lilt.com/kb/evaluation/evaluate-mt>, 4 pages.
“Projects—Knowledge Base,” Lilt website [online], Jun. 7, 2017, retrieved on Oct. 20, 2017, Retrieved from the Internet: <https://lilt.com/kb/project-managers/projects>, 3 pages.
“Getting Started with lilt,” Lilt website [online], May 30, 2017, retrieved on Oct. 20, 2017, Retrieved from the Internet:<https://lilt.com/kb/api/lilt-js>, 6 pages.
“Interactive Translation—Knowledge Base,” Lilt website [online], Aug. 17, 2017, retrieved on Oct. 20, 2017, Retrieved from the Internet:<https://lilt.com/kb/api/interactive-translation>, 2 pages.
Hildebrand et al., “Adaptation of the Translation Model for Statistical Machine Translation based on Information Retrieval,” EAMT 2005 Conference Proceedings, May 2005, pp. 133-142. Retrieved from https://www.researchgate.net/publication/228634956_Adaptation_of the_translation_model_for_statistical_machine_translation_based_on_information_retrieval.
Och et al., “The Alignment Template Approach to Statistical Machine Translation Machine Translation,” Computational Linguistics, vol. 30. No. 4, Dec. 1, 2004, pp. 417-442 (39 pages with citations). Retrieved from http://dl.acm.org/citation.cfm?id=1105589.
Sethy et al., “Building Topic Specific Language Models Fromwebdata Using Competitive Models,” Interspeech 2005—Eurospeech, 9th European Conference on Speech Communication and Technology, Lisbon, Portugal, Sep. 4-8, 2005, 4 pages. Retrieved from https://www.researchgate.net/publication/221490916_Building_topic_specific_language_models_from_webdata_using_competitive_models.
Dobrinkat, “Domain Adaptation in Statistical Machine Translation Systems via User Feedback,” Master's Thesis, University of Helsinki, Nov. 25, 2008, 103 pages. Retrieved from http://users.ics.aalto.fi/mdobrink/online-papers/dobrinkat08mt.pdf.
Business Wire, "Language Weaver Introduces User-Managed Customization Tool," Oct. 25, 2005, 3 pages. Retrieved from ProQuest.
Winiwarter, W., “Learning Transfer Rules for Machine Translation from Parallel Corpora,” Journal of Digital Information Management, vol. 6 No. 4, Aug. 2008, pp. 285-293. Retrieved from https://www.researchgate.net/publication/220608987_Learning_Transfer_Rules_for_Machine_Translation_from_Parallel_Corpora.
Potet et al., “Preliminary Experiments on Using Users' Post-Editions to Enhance a SMT System,” Proceedings of the European Association for Machine Translation (EAMT), May 2011, pp. 161-168. Retreived from Retrieved at http://www.mt-archive.info/EAMT-2011-Potet.pdf.
Ortiz-Martinez et al., “An Interactive Machine Translation System with Online Learning” Proceedings of the ACL-HLT 2011 System Demonstrations, Jun. 21, 2011, pp. 68-73. Retrieved from http://www.aclweb.org/anthology/P11-4012.
Lopez-Salcedo et al., “Online Learning of Log-Linear Weights in Interactive Machine Translation,” Communications in Computer and Information Science, vol. 328, 2011, pp. 1-10. Retrieved from http://www.casmacat.eu/uploads/Main/iberspeech2.pdf.
Blanchon et al., “A Web Service Enabling Gradable Post-edition of Pre-translations Pro duced by Existing Translation Tools: Practical Use to Provide High quality Translation of an Online Encyclopedia” Jan. 2009, 9 pages. Retrieved from http://www.mt-archive.info/MTS-2009-Blanchon.pdf.
Levenberg et al. “Stream-based Translation Models for Statistical Machine Translation” Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, Dec. 31, 2010, pp. 394 402.
Lagarda et al. “Statistical Post-Editing of a Rule Based Machine Translation System” Proceedings of NAACL HLT 2009: Short Papers, Jun. 2009, pp. 217-220.
Ehara, “Rule Based Machine Translation Combined with Statistical Post Editor for Japanese to English Patent Translation,” MT Summit XI, 2007, pp. 13-18.
Bechara et al. “Statistical Post-Editing for a Statistical MT System” Proceedings of the 13th Machine Translation Summit, 2011, pp. 308-315.
“Summons to attend oral proceeding pursuant to Rule 115(1)(EPC),” European Patent Application 10185842.1, Aug. 11, 2017, 9 pages.
Related Publications (1)
Number Date Country
20160253319 A1 Sep 2016 US
Provisional Applications (1)
Number Date Country
60550795 Mar 2004 US
Continuations (3)
Number Date Country
Parent 14519077 Oct 2014 US
Child 15150377 US
Parent 13175783 Jul 2011 US
Child 14519077 US
Parent 11071706 Mar 2005 US
Child 13175783 US