Translation of a selected text fragment of a screen

Information

  • Patent Grant
  • 9262409
  • Patent Number
    9,262,409
  • Date Filed
    Wednesday, July 18, 2012
  • Date Issued
    Tuesday, February 16, 2016
Abstract
Disclosed is a method for translating text fragments displayed on a screen from an input language into an output language and displaying the result. Translation may use electronic dictionaries, machine translation, natural language processing, control systems, information searches (e.g., a search engine accessed via an Internet protocol), semantic searches, computer-aided learning, and expert systems. For a word combination, appropriate local or network-accessible dictionaries are consulted. The disclosed method provides a translation that is in grammatical agreement with the grammatical rules of the output language, taking into account the context of the text.
Description

The United States Patent Office (USPTO) has published a notice effectively stating that the USPTO's computer programs require that patent applicants reference both a serial number and indicate whether an application is a continuation or continuation-in-part. See Stephen G. Kunin, Benefit of Prior-Filed Application, USPTO Official Gazette 18 Mar. 2003. The present Applicant Entity (hereinafter “Applicant”) has provided above a specific reference to the application(s) from which priority is being claimed as recited by statute. Applicant understands that the statute is unambiguous in its specific reference language and does not require either a serial number or any characterization, such as “continuation” or “continuation-in-part,” for claiming priority to U.S. patent applications. Notwithstanding the foregoing, Applicant understands that the USPTO's computer programs have certain data entry requirements, and hence Applicant is designating the present application as a continuation-in-part of its parent applications as set forth above, but expressly points out that such designations are not to be construed in any way as any type of commentary and/or admission as to whether or not the present application contains any new matter in addition to the matter of its parent application(s).


All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.


FIELD

Embodiments of the invention generally relate to the field of automated translation of word combinations or natural-language sentences using dictionaries and/or linguistic descriptions, with applications in such areas as automated abstracting, machine translation, electronic dictionaries, natural language processing, control systems, information search (including use of search engines accessible via an Internet protocol), the semantic Web, computer-aided learning, expert systems, speech recognition/synthesis, and others. The present disclosure is directed towards looking up translations of word combinations, sentences, or parts of text shown on a display screen.


BACKGROUND

Many modern translation systems rely only on a machine translation system or only on electronic dictionaries for looking up word combinations or natural-language sentences while a user reads printed text or text on a display screen. However, dictionaries are generally limited to words and word combinations, while using a machine translation system requires more time and more electronic storage or a connection to the Internet. Thus, there is substantial room for improvement of existing systems for translation of blocks or fragments of text.


There is a plethora of electronic devices with display screens capable of displaying text, including touch-screen devices and many mobile devices such as laptops, tablet computers, netbooks, smartphones, mobile phones, personal digital assistants (PDAs), e-book readers, and photo and video cameras. These devices are suitable for using electronic dictionaries or a machine translation system, which may be installed locally, provided on a local area network, or available over the Internet.


SUMMARY

The present disclosure generally relates to methods, computer-readable media, devices, and systems for translating text fragments from an input language into an output language and displaying translations of text selections using electronic dictionaries, machine translation, natural language processing, etc., in such applications as control systems, information searches (including use of search engines accessible via an Internet protocol), semantic Web systems, computer-aided learning systems, and expert systems.


A user can quickly obtain a translation of any selected text fragment shown on an electronic display screen. The translation is in grammatical agreement with the context of the surrounding portions of the text (the text from which the fragment originates). The disclosed method is especially useful for translating not only particular words but any text fragment, without expending substantial computational resources of the device on translation of an entire document. The disclosed method can also reduce the time required for translation. The disclosed method may use a variety of means, and combinations of them, for translating. For example, a plurality of different types of dictionaries, translation memories, and different types of machine translation systems may be used. Machine translation systems may include statistical (example-based) systems and model-based MT systems.


A “fragment” may include the notion of a “sentence” and may also describe any small portion of text, such as a paragraph, a sentence, a title, a part of a sentence, or a word combination (e.g., a noun group or a verb-adverb combination).


In one embodiment, the method comprises: selecting a portion of text to be translated, for example by using a gesture to point at an area of the display or a motion of a finger or cursor on the screen; establishing coordinates of the selected location; performing optical character recognition, if needed; identifying the words, word combination, or set of sentences chosen by the user; and translating the identified text using electronic dictionaries, machine translation, or another technique. The method may also comprise displaying a translation of the selected portion of text, for example in a balloon, in a pop-up window, or in another manner on a screen of an electronic device.
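
The following is a minimal, self-contained sketch of this flow in Python. All class and function names here (Selection, run_ocr, translate_fragment, and so on) are hypothetical placeholders introduced for illustration, not identifiers from the disclosure; a real implementation would hook into the device's own selection, OCR, and translation facilities.

```python
# Hypothetical sketch of the selection-to-translation flow described above.
from dataclasses import dataclass

@dataclass
class Selection:
    coordinates: tuple      # (x, y) of the user's touch or cursor position
    text: str               # encoded text under the selection, or "" for images
    is_image: bool = False  # True when OCR is required (.PDF/.JPG/.TIFF, etc.)

def run_ocr(selection: Selection) -> str:
    # Placeholder: a real device would recognize the selected image region here.
    return "recognized text"

def translate_fragment(fragment: str) -> str:
    # Placeholder for dictionary lookup, translation-database search, or MT.
    return f"<translation of: {fragment}>"

def translate_selection(selection: Selection) -> str:
    # Steps 1-2: establish the selected location and apply OCR if needed.
    text = run_ocr(selection) if selection.is_image else selection.text
    # Step 3: identify the words or sentences chosen by the user (trivial here).
    fragment = text.strip()
    # Steps 4-5: translate and return the result for display in a balloon/pop-up.
    return translate_fragment(fragment)

print(translate_selection(Selection((120, 80), "with virtual memory")))
```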


An electronic device may include a client dictionary application and one or more local dictionaries. Additionally or alternatively to a local dictionary, the application may be able to access one or more remote dictionaries located on a remote server via a network connection to the server, e.g., over an Internet protocol, a wireless network protocol, or a cellular telephone network.


Electronic dictionaries may comprise a software program and dictionary data. The program may include a shell that provides a graphical user interface, morphology models to provide inflected forms, a context search that uses an index, a teaching module, and other features.


An electronic device may also connect to a machine translation system, databases of previous translations (hereinafter “translation databases”), and a terminology or translation dictionary.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an example of displaying an English translation of a selected fragment of a Russian text, the English translation shown in a pop-up balloon-style user interface element.



FIG. 1A shows the same example as FIG. 1, wherein the underlying text is translated from Russian into English.



FIG. 2 shows a flowchart of operations performed during translation of a selected fragment of text.



FIG. 3 shows a flowchart of operations performed during translation of a word combination.



FIG. 4 shows an exemplary architecture for implementing an electronic device that is capable of performing the operations of the invention.





DETAILED DESCRIPTION

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details.


Reference in this specification to “one embodiment” or “an implementation” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase “in one embodiment” or “in one implementation” in various places in the specification are not necessarily all referring to the same embodiment or implementation, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.


Advantageously, the present invention discloses an electronic device with a display screen that allows a user to obtain translations of a selected fragment, for example a word combination or natural-language sentence, while reading a text on the display screen of the electronic device. Specifically, the translation may be displayed in a balloon, in a pop-up window, as subscript, as superscript, or in any other suitable manner when the user selects a part of a text on the display screen.


In one embodiment, when the selected fragment is not excessively large, only dictionary translation of a word or word combination may be used. In another embodiment, if the selected fragment is relatively large, a combination of different techniques may be used. For example, in addition to dictionary translation and databases of previous translations (“translation databases”), a method of translating a text selection may include analyzing the source fragment using linguistic descriptions, constructing a language-independent semantic structure to represent the meaning of the source sentence, and generating an output sentence in the output language. To improve the efficiency and speed of translation, hybrid systems and different (successive) methods may be used.


Translation databases may be created as a result of previous translation processes performed by one or more persons (translation databases may be the result of personal or group work or in-house translations, may be acquired from third parties, may be associated with or derived from particular types of documents or documents that belong to some subject domain, etc.). Translation databases may also be obtained as a result of the alignment (segmentation) of existing parallel texts. The alignment of parallel texts may be performed manually or through some automatic method.


First, the system searches one or more databases for fragments identical to the fragment selected by the user. If a translation for a fragment is found, the source fragment is substituted with its translation. If more than one translation variant is found (for example, if more than one database is used), the best variant may be selected either automatically or manually from the available variants. Automatic selection may be done on the basis of statistical ratings, a priori ratings, or other types of ratings.
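
A minimal sketch of this exact-match lookup, assuming each translation database is a Python dict mapping source fragments to (translation, rating) pairs; the numeric rating scheme is an illustrative assumption standing in for statistical or a priori ratings.

```python
# Sketch: exact-match lookup across several translation databases, with the
# best variant chosen automatically by an assumed numeric rating.

def exact_match(fragment, databases):
    """Return the highest-rated translation of `fragment`, or None."""
    candidates = []
    for db in databases:                      # each db: {source: (target, rating)}
        if fragment in db:
            candidates.append(db[fragment])
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[1])[0]   # best-rated variant

dbs = [
    {"virtual memory": ("virtueller Speicher", 0.9)},
    {"virtual memory": ("virtueller Arbeitsspeicher", 0.7)},
]
print(exact_match("virtual memory", dbs))     # -> virtueller Speicher
```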


For the rest of the fragments (hereinafter the “untranslated fragments”), a fuzzy search may be used. A “fuzzy search” comprises looking for similar sentences in one or more databases. The sentences identified through a fuzzy search may differ from an “ideal match” in one or more words, but the matching words are arranged in the same order as in the source sentence. In this case, the differing parts can be translated using some other method, e.g., a terminology or translation dictionary, and substituted into the near-match translation that has been found.
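
One simple way to realize such a fuzzy search, assuming the database stores whole source sentences: accept a stored sentence when it has the same number of words as the query and differs in no more than a fixed number of positions, so that the matching words keep their order. The threshold and data layout are assumptions made for illustration only.

```python
# Sketch of a fuzzy search: a candidate sentence may differ from the query in a
# few words, but the matching words must occur in the same order/positions.

def fuzzy_matches(query, stored_sentences, max_diff=1):
    q_words = query.split()
    hits = []
    for sentence in stored_sentences:
        s_words = sentence.split()
        if len(s_words) != len(q_words):
            continue
        # Positions where the sentences differ; all other words match in order.
        diffs = [i for i, (a, b) in enumerate(zip(q_words, s_words)) if a != b]
        if 0 < len(diffs) <= max_diff:
            hits.append((sentence, diffs))
    return hits

db = ["the file was saved to disk", "the file was copied to disk"]
print(fuzzy_matches("the file was written to disk", db))
```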


For the remaining untranslated fragments (i.e., the fragments which have not been translated by means of the exact search or the fuzzy search), a machine translation (MT) system, for example a model-based MT system such as those disclosed in U.S. Pat. Nos. 8,195,447 and 8,214,199, may be used. The system ideally provides syntactically coherent output. Syntactic and morphological descriptions of the input and output languages are used for this purpose.



FIG. 1 shows an example of displaying a translation of a selected fragment of a Russian text into English. The English translation is shown in a pop-up balloon-style user interface element. With reference to FIG. 1, the electronic device 102 may comprise a computer system, such as a general purpose computer, embodied in different configurations such as a desktop personal computer (PC), laptop computer, smartphone, cell phone, tablet computer, digital camera, or another gadget or device having a display screen, display projector or display element. FIG. 4 of the drawings shows an exemplary hardware and system architecture for implementing an electronic device 102 in accordance with one embodiment (described further below).


To look up a fragment that appears in non-text or non-text-encoded files, for example in .JPG, .TIFF, or .PDF files, the user's electronic device may include optical character recognition (OCR) algorithms, hardware, firmware, or software that identifies the relevant region of the image where the selected fragment is located and then converts the image in that region into a text format. OCR may also be performed using a remote server or other device, which receives an image and an indication of the identified area from the device displaying the image of text, applies OCR processing so as to ascertain the word or words at issue, and returns the recognized fragment to the device displaying the image of text. The remote server may be accessible via a data path that includes a wired or wireless data path, an Internet connection, a Bluetooth® connection, etc.



FIG. 1 of the drawings illustrates an example of an electronic device 102 comprising a display screen 104. The content on the display screen or touch screen 104 may be presented in a container or software application such as a text browser, a text or word processor, a document viewer (e.g., Adobe® Acrobat®), an e-book reader, a Web browser, an e-mail client, a text message user interface, an image display application, or another appropriate application that provides text to the display screen 104. In the case when text is presented in a non-editable format, such as a .PDF, .JPG, or .TIFF file, an OCR operation may be required. In FIG. 1, a balloon user interface element appears on the screen 104 and includes a translation of a selected portion of the underlying text.



FIG. 2 shows a flowchart of operations performed by an application during translation of a selected fragment of text. When the user reads text on the display screen 104 of the electronic device 102 and wishes to look up a translation of a fragment, the user simply points to an area containing text with a mouse cursor, or touches and moves a finger, a stylus, or any other suitable object or mechanism on the screen, thereby selecting the corresponding region on the display screen 104.


The mode of selecting text may be preset by the user. For example, selection of a text fragment may be made by directing a cursor to, or touching the screen at, a point at or near a word, word combination, sentence, or paragraph. Alternatively, a text fragment may be selected by using a gesture to point at an area of the display, for example by moving a finger or cursor on the screen.


Selecting a fragment of text at step 210 initiates a process that ultimately enables the user to see a translation of the text fragment.


If the coordinates point to an area of an image (e.g., a .PDF, .JPG, .TIF, or other picture or image format in which words are not stored as collections of encoded characters), an OCR algorithm, process, or software is applied. At step 220, the OCR software identifies a rectangular region corresponding to the user input that contains text. To speed up recognition, the OCR software may identify the smallest rectangular image that includes an image of the word or word string in the area touched or indicated by the user.
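
A hedged sketch of this step, cropping a small rectangle around the touched coordinates and recognizing only that region. Pillow and pytesseract are used here as stand-in OCR tooling (the description does not name a specific OCR engine), and the margin sizes and language code are illustrative assumptions.

```python
# Sketch: crop the (assumed) smallest rectangle around the touch point and run
# OCR on it instead of recognizing the whole page image.
# Requires the third-party packages Pillow and pytesseract.
from PIL import Image
import pytesseract

def recognize_region(image_path, x, y, half_width=120, half_height=20, lang="rus"):
    image = Image.open(image_path)
    left = max(0, x - half_width)
    top = max(0, y - half_height)
    right = min(image.width, x + half_width)
    bottom = min(image.height, y + half_height)
    region = image.crop((left, top, right, bottom))   # small rectangle around touch
    return pytesseract.image_to_string(region, lang=lang).strip()

# Example usage (assumes a page image and touch coordinates are available):
# print(recognize_region("page.jpg", x=350, y=420))
```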


At step 230, the OCR software operates on the identified rectangular region. The result of the OCR processing is recognized words, word combinations, or sentences represented by a string of encoded characters, i.e., a text fragment. For example, the encoded characters may be a set of ASCII-encoded characters. Thus, at step 240 a text fragment (e.g., a plurality of words) has been identified, selected, and made available for translation.


Next, it is determined whether the selected and recognized fragment of text is a word combination. The word combination may be a string containing no more than several words, for example four. If the selected fragment is a word combination, a dictionary translation is performed at step 250. Otherwise, at step 260 the translation of the selected and recognized fragment of text is performed by, for example, using a “translation database” or terminology dictionary and, if those efforts are not successful, by means of a machine translation system. The terminology dictionary may include relatively long or specific word combinations, such as words or expressions used in a specific field (e.g., medicine, law, law enforcement, military), or may include a phrase or expression that should be or can be divided into smaller word combinations. The process of step 260 is also run in a case where an appropriate translation variant is not found through dictionary translation.
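
A minimal sketch of this branch, assuming the four-word threshold mentioned above; dictionary_translate and database_or_mt_translate are hypothetical stand-ins for steps 250 and 260/270.

```python
# Sketch of the branch between steps 250 and 260: short fragments go to
# dictionary lookup, longer (or unresolved) fragments go to the
# translation-database / terminology / machine-translation path.

MAX_WORD_COMBINATION = 4         # illustrative threshold from the description

def dictionary_translate(fragment):
    return None                  # placeholder for step 250 (dictionary lookup)

def database_or_mt_translate(fragment):
    return f"<MT: {fragment}>"   # placeholder for steps 260/270

def translate(fragment):
    if len(fragment.split()) <= MAX_WORD_COMBINATION:
        result = dictionary_translate(fragment)
        if result is not None:
            return result
    # Fall through when the fragment is long or the dictionary lookup failed.
    return database_or_mt_translate(fragment)

print(translate("with virtual memory"))
```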


At step 260, the document fragments that are already available in a translation database are substituted with their translations. The selected text may be divided into smaller fragments automatically by employing a special algorithm. For example, fragments may be identified based on sentence boundaries or paragraph boundaries. Finally, the following search method may be used to find the required fragment: the system repeatedly looks in the database for the longest fragment that starts with the current word. Searches in the translation database may be performed in many different ways depending on the structure of the database. In particular, if the translation database is supplied with an index, the index may be used to improve the speed of searches.
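
A sketch of this greedy search, assuming the translation database is a dict keyed by source fragments: starting at each word, the longest stored fragment beginning with that word is taken, and any words not covered are left for the dictionary and machine-translation stages.

```python
# Sketch: greedy longest-fragment matching against a translation database.

def segment_and_translate(text, translation_db):
    words = text.split()
    output, i = [], 0
    while i < len(words):
        match = None
        # Look for the longest stored fragment that starts with the current word.
        for j in range(len(words), i, -1):
            candidate = " ".join(words[i:j])
            if candidate in translation_db:
                match = (translation_db[candidate], j)
                break
        if match:
            output.append(match[0])
            i = match[1]
        else:
            output.append(words[i])   # untranslated; handled by later stages
            i += 1
    return " ".join(output)

db = {"virtual memory": "virtueller Speicher"}
print(segment_and_translate("with virtual memory", db))
```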


Additionally, for those fragments for which no translation is available in the translation database, a terminology dictionary or a translation dictionary may be used. In particular, a terminology dictionary or a translation dictionary may be used to translate headings, titles, and table cells where specialized words, shortened and non-standard expressions are more common than in complete sentences.


Then, if the selected text fragment still includes untranslated portions or words, a machine translation system is used at step 270. For example, machine translation systems such as those disclosed in U.S. Pat. Nos. 8,195,447 and 8,214,199 may be used. These systems exploit exhaustive linguistic descriptions of the source and output languages. The linguistic descriptions useful for translating the source text into another language may include morphological descriptions, syntactic descriptions, lexical descriptions, and semantic descriptions. In some cases, all available linguistic models and knowledge about natural languages may be arranged in a database and used during analysis of a source sentence and synthesis of an output sentence. Integral models for describing the syntax and semantics of the source language are used in order to (1) recognize the meanings of the source sentence being analyzed, (2) translate complex language structures, and (3) correctly convey information encoded in the source text fragment. After that, all pieces of translation, produced in different ways, are combined for display.


At step 280, the system displays a translation in the output language. Specifically, the translation may be displayed in a balloon, in a pop-up window, as subscript, as superscript, or in any other suitable manner when the user selects part of a relatively lengthy text on the display screen.



FIG. 3 shows a flowchart of operations performed by an application or set of computer processor instructions during translation of a word combination in step 250 of FIG. 2. With reference to FIG. 3, step 310 is performed first: identifying several words to the left and several words to the right of the base word from the surrounding text, assuming there is some text surrounding the text fragment selected by the user. If the selected text fragment is a single word, this word is considered to be the base word. At least two words on each side of the base word are identified. If the selected text fragment includes two or more words, any word from the selected text fragment may be considered the base word. For example, the first word of the selected fragment may be considered the base word.


Next, at step 320, combinations containing the base word are constructed from the words identified at step 310. As a result of this process, an array containing all possible word combinations is generated.
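
A sketch of steps 310 and 320, assuming a window of two words on each side of the base word and that the generated array contains every contiguous word combination that includes the base word; the window size and exact enumeration are illustrative assumptions.

```python
# Sketch of steps 310-320: take surrounding words around the base word and
# build all contiguous word combinations that contain the base word.

def combinations_around(words, base_index, window=2):
    start = max(0, base_index - window)             # leftmost word considered
    end = min(len(words), base_index + window + 1)  # one past the rightmost word
    combos = []
    for i in range(start, base_index + 1):
        for j in range(base_index + 1, end + 1):
            combos.append(" ".join(words[i:j]))
    return combos

context = "a process running with virtual memory enabled on the device".split()
print(combinations_around(context, base_index=3))   # base word: "with"
```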


Then, at step 330, each element of the array constructed in the preceding step is processed. Array processing includes, for each combination (word combination), inserting a hyphen in place of each space character and inserting a space character in place of each hyphen. Thus, all possible spellings are taken into account.
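
A minimal sketch of this spelling-variant generation, covering both substitutions described above.

```python
# Sketch of step 330: for each word combination, add variants with hyphens in
# place of spaces and spaces in place of hyphens, so that alternative
# spellings such as "e mail" / "e-mail" are all looked up.

def spelling_variants(combination):
    variants = {combination}
    variants.add(combination.replace(" ", "-"))   # spaces -> hyphens
    variants.add(combination.replace("-", " "))   # hyphens -> spaces
    return sorted(variants)

print(spelling_variants("e mail"))       # -> ['e mail', 'e-mail']
print(spelling_variants("read-only"))    # -> ['read only', 'read-only']
```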


After that, the two arrays resulting from steps 320 and 330 are merged into one array at step 340. At step 350, each word combination in the merged array from step 340 is searched for in an available dictionary or set of dictionaries, which may be selected in advance by the user or may be made available programmatically or automatically. Dictionary software may use one or more dictionaries by default, or a user may specify one or more desired dictionaries. A default dictionary on a given subject may be selected if the dictionary software determines that the text belongs to a specialized subject (e.g., medicine, law, automobiles, computers). Additionally, the electronic dictionary includes a morphology module, so that the query word or word combination need not be in a base, or “dictionary,” form; the morphology module identifies the base form of an inflected form. If more than one base form is possible, the morphology module identifies the possible alternatives. Also, in some cases, the morphology module may determine a grammatical form of the source word, for example the number of a noun or the form of a verb, in order to select a proper form for the translation in the output language.
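
A sketch of step 350, assuming each dictionary is a plain dict keyed by base ("dictionary") forms and that a tiny hand-written table stands in for the morphology module that real dictionary software would supply.

```python
# Sketch of step 350: look up each candidate combination in the selected
# dictionaries, reducing inflected words to base forms first. The one-line
# base-form table is a stand-in for a real morphology module.

BASE_FORMS = {"memories": "memory"}                  # illustrative only

def normalize(combination):
    return " ".join(BASE_FORMS.get(w.lower(), w.lower())
                    for w in combination.split())

def lookup(combinations, dictionaries):
    results = {}
    for combo in combinations:
        key = normalize(combo)
        for dictionary in dictionaries:              # e.g., general + subject-specific
            if key in dictionary:
                results[combo] = dictionary[key]
                break
    return results

computing_dict = {"virtual memory": "virtueller Speicher"}
print(lookup(["virtual memories", "with virtual memory"], [computing_dict]))
```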


In some cases, the system may determine the part of speech of a selected word by using the words to the left and to the right of the base word. This option can considerably reduce the list of translation variants by offering only those variants whose part of speech corresponds to the part of speech of the base word in the input language. For example, if a machine translation system establishes that the selected word is a noun, it will offer only nouns as translation variants. This option helps to save time, storage or memory space, bandwidth, etc.


Some rules may be assigned in accordance with which the system determines a part of speech. For example, if there is an article (“a” or “the”) or a possessive pronoun (“my,” “your,” etc.) to the left of a base word, the system may conclude that the base word is a noun.
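
A minimal sketch of such a rule, using only the small set of left-context cues mentioned above (articles and possessive pronouns); a real system would use many more rules or a full tagger.

```python
# Sketch: a simple left-context rule for guessing the part of speech of the
# base word. An article or possessive pronoun immediately to the left
# suggests that the base word is a noun (or heads a noun group).

NOUN_CUES = {"a", "an", "the", "my", "your", "his", "her", "its", "our", "their"}

def looks_like_noun(words, base_index):
    return base_index > 0 and words[base_index - 1].lower() in NOUN_CUES

sentence = "open the file with the editor".split()
print(looks_like_noun(sentence, base_index=2))   # "file" after "the" -> True
```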


In addition, the disclosed method may utilize information about the subject area of the translation in accordance with a context. For example, the notion of “context” refers to the previously translated fragment or to the words located to the left or right of the base word(s). In most cases, a word can have several meanings and therefore several alternative translations into another language. So it is necessary to determine, in accordance with the context, which meaning is present in the source text and which translation variant should be selected. For example, to translate the English word “file,” which the system assumes to be a noun, the system will select the translation equivalent meaning a “computer file” if it is translating a text about computers and software. If the system is translating a text about document management, it will select the translation equivalent meaning a “dossier, a folder containing paper documents.” And if the system is translating a text about tools, it will select the translation that means “a machinist's file.” The subject area may be determined in different ways, for example by using previously translated and cached fragments, by using the title of the text, or by the user manually specifying a particular subject matter.
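
A sketch of this domain-driven sense selection for the "file" example, assuming the subject area has already been determined (e.g., from cached translations or the document title). The German glosses are illustrative target-language equivalents, not taken from the disclosure.

```python
# Sketch: choosing among translation variants of the noun "file" according to
# the detected subject area of the text.

FILE_SENSES = {
    "computers": "Datei",            # computer file
    "document management": "Akte",   # dossier, folder of paper documents
    "tools": "Feile",                # machinist's file
}

def translate_file(subject_area, default="Datei"):
    return FILE_SENSES.get(subject_area, default)

print(translate_file("tools"))       # -> Feile
print(translate_file("computers"))   # -> Datei
```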


At step 360, if no translation is found in the dictionaries for a particular word or word combination, a search for linguistically similar words is performed. It is possible that the word or word combination is a misprint, for example a typographical error present in the text fragment. In such a case, variants of substitution for the particular word or word combination may be suggested to the user.
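
One simple way to find such linguistically similar candidates is a string-similarity search over the dictionary headwords, for example with the standard-library difflib; the similarity cutoff here is an assumption.

```python
# Sketch of step 360: when a word is not found in any dictionary, suggest
# spelling-wise similar headwords as possible corrections of a misprint.
import difflib

def similar_headwords(word, headwords, n=3, cutoff=0.8):
    return difflib.get_close_matches(word, headwords, n=n, cutoff=cutoff)

headwords = ["memory", "memorial", "memoir", "mammary"]
print(similar_headwords("memmory", headwords))   # -> ['memory']
```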


At step 370, a morphologic synthesizer is activated. The morphologic synthesizer receives as input a set of word-by-word translated words in their initial forms, analyzes them based on morphological rules, and provides as output a translation in a form that is in grammatical agreement with the context of the translated fragment.


For example, suppose the word combination “with virtual memory” is selected by a user as a fragment for translation from English into German. As a result of performing the above-described steps, the word combination “virtual memory” is identified in a dictionary and translated into German as “virtueller Speicher.” Both words in this word combination are in their initial forms. Moreover, the word combination “virtueller Speicher” is not in grammatical agreement with the German preposition “mit.” That is one example showing why the morphologic synthesizer is activated at this stage: it is highly useful for correctly translating specific word combinations.


Without such a morphologic synthesizer, a direct word-by-word translation of the word combination “with virtual memory” would be incorrectly rendered as “mit virtueller Speicher.”


A given word may have several translation variants. The choice of a translation variant for each word in a word combination may be made, for example, in accordance with the subject field of the source text, if it is known, with markup in a dictionary, or in accordance with the most frequent usage. Other methods may also be used to select the translation variant.


Further, returning to the “with virtual memory” example, in accordance with morphological rules the morphologic synthesizer determines that the German preposition “mit” governs the dative case. As a result, the noun and adjective combination takes the dative form “virtuellem Speicher.” The morphologic synthesizer is thus able to deliver a more grammatically consistent word combination, “mit virtuellem Speicher,” by being programmed to process and respond appropriately to parts of speech, cases, number, tenses, etc.
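
A toy sketch of this agreement step, assuming a hard-coded rule that “mit” governs the dative case and a one-entry table of dative adjective forms; a real morphologic synthesizer would rely on full morphological descriptions rather than such tables.

```python
# Toy sketch of the agreement step for the "mit virtueller Speicher" example:
# the preposition "mit" governs the dative, so the adjective is re-inflected.

DATIVE_PREPOSITIONS = {"mit", "aus", "bei", "nach", "von", "zu"}
DATIVE_FORMS = {"virtueller": "virtuellem"}   # masc. sing. dative, illustrative

def agree_with_preposition(preposition, phrase):
    words = phrase.split()
    if preposition.lower() in DATIVE_PREPOSITIONS:
        words = [DATIVE_FORMS.get(w, w) for w in words]
    return f"{preposition} {' '.join(words)}"

print(agree_with_preposition("mit", "virtueller Speicher"))
# -> mit virtuellem Speicher
```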


With reference to FIG. 2, if the morphologic synthesizer and the dictionary translation at step 250 are successful, the translation of the word combination in the output language is displayed at step 280, for example in a balloon, in a pop-up window, or in another manner on the screen of the electronic device.



FIG. 4 of the drawings shows hardware and system architecture 400 that may be used to implement the user electronic device 102 in accordance with one embodiment of the invention, in order to translate a word or word combination, display the found translations to the user, choose an alternative translation and its word form, and insert the choice into the displayed text. Referring to FIG. 4, the exemplary system 400 includes at least one processor 402 coupled to a memory 404 and has a touch screen among its output devices 408 which, in this case, also serves as an input device 406. The processor 402 may be any commercially available CPU. The processor 402 may represent one or more processors (e.g., microprocessors), and the memory 404 may represent random access memory (RAM) devices comprising the main storage of the system 400, as well as any supplemental levels of memory, e.g., cache memories, non-volatile or back-up memories (e.g., programmable or flash memories), read-only memories, etc. In addition, the memory 404 may be considered to include memory storage physically located elsewhere in the hardware 400, e.g., any cache memory in the processor 402 as well as any storage capacity used as virtual memory, e.g., as stored on a mass storage device 410.


The system 400 also typically receives a number of inputs and outputs for communicating information externally. For interface with a user or operator, the hardware 400 usually includes one or more user input devices 406 (e.g., a keyboard, a mouse, a touch screen, an imaging device, a scanner, etc.) and one or more output devices 408, e.g., a display device and a sound playback device (speaker). To embody the present invention, the system 400 may include a touch screen device, an interactive whiteboard, or another device that allows the user to interact with a computer by touching areas on the screen. A keyboard is not obligatory for embodiments of the present invention.


For additional storage, the hardware 400 may also include one or more mass storage devices 410, e.g., a removable drive, a hard disk drive, a Direct Access Storage Device (DASD), or an optical drive such as a Compact Disk (CD) drive or a Digital Versatile Disk (DVD) drive. Furthermore, the system 400 may include an interface to one or more networks 412 (e.g., a local area network (LAN), a wide area network (WAN), a wireless network, and/or the Internet, among others) to permit the communication of information with other computers coupled to the networks. It should be appreciated that the system 400 typically includes suitable analog and/or digital interfaces between the processor 402 and each of the components 404, 406, 408, and 412, as is well known in the art.


The system 400 operates under the control of an operating system 414 and executes various computer software applications 416, components, programs, objects, modules, etc. to implement the techniques described above. In particular, the computer software applications include the client dictionary application and also other installed applications for displaying text and/or text image content, such as a word processor, a dedicated e-book reader, etc. Moreover, various applications, components, programs, objects, etc., collectively indicated by reference 416 in FIG. 4, may also execute on one or more processors in another computer coupled to the system 400 via a network 412, e.g., in a distributed computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers over a network.


In general, the routines executed to implement the embodiments of the invention may be implemented as part of an operating system or as a specific application, component, program, object, module, or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer that, when read and executed by one or more processors in the computer, cause the computer to perform operations necessary to execute elements involving the various aspects of the invention. Moreover, while the invention has been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of computer-readable media used to actually effect the distribution. Examples of computer-readable media include, but are not limited to, recordable-type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs) and Digital Versatile Disks (DVDs)), flash memory, etc., among others. Another type of distribution may be implemented as Internet downloads.


While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the broad invention and that this invention is not limited to the specific constructions and arrangements shown and described.

Claims
  • 1. A computer-implemented method for translating a text fragment in an input language into an output language, the method comprising: receiving an indication of a selection of an area that includes text displayed on a screen of a device; identifying the text fragment based on the indication of the selection of the area that includes text; performing a dictionary lookup of the text fragment, wherein performing the dictionary lookup of the text fragment comprises: identifying several words in a left direction and identifying several words in a right direction from a base word of the text fragment; constructing combinations of the identified words with the base word; transforming each element of an array by inserting hyphens and space characters between portions of the base word and identified words; searching for each element of the array in an electronic dictionary; and searching for a linguistically similar word or word combination in an electronic dictionary; and providing an output of translation in a form that is in grammatical agreement with a context of the translated fragment using a morphologic synthesis; for each remaining untranslated portion of the text fragment for which a translation is not readily available, translating said untranslated portion of the text fragment based on a machine translation technique; and displaying the translation of the text fragment on the screen of the device, wherein the translation is displayed upon selecting the text fragment on the screen.
  • 2. The method of claim 1, wherein the identifying the text fragment includes performing an optical character recognition.
  • 3. The method of claim 1, wherein the translation alternatives are in a language different from the language of the text fragment.
  • 4. The method of claim 1, wherein selecting the text fragment may be done manually in a haptic-based or in a cursor-based manner.
  • 5. The method of claim 1, wherein identifying the text fragment includes identifying a sentence boundary or a paragraph boundary.
  • 6. The method of claim 1, wherein the method further comprises, prior to performing the dictionary lookup of the text fragment, identifying variants of the text fragment using linguistic descriptions and constructing a language-independent semantic structure to represent a meaning of a source sentence associated with the text fragment.
  • 7. The method of claim 1 wherein performing the dictionary lookup further comprises selecting and using a translation dictionary for a given subject domain.
  • 8. The method of claim 1, wherein performing the dictionary lookup of the text fragment further comprises performing a morphological analysis for identifying a base form of words of the text fragment.
  • 9. The method of claim 1, wherein performing the dictionary lookup of the text fragment further comprises performing a morphological synthesis that determines grammatical forms of words in the text fragment and provides a grammatically agreed translation that is in grammatical agreement in accordance with grammatical rules of the output language.
  • 10. The method of claim 1, wherein performing the dictionary lookup of the text fragment further comprises selecting a variant of translation based on a most frequent usage or in accordance with a subject field of the source text.
  • 11. The method of claim 1, wherein performing the translation of the text fragment further comprises selecting and using a translation database, and wherein the translation database is derived from previous translation processes or segmentation of existing parallel texts.
  • 12. The method of claim 1, wherein performing the translation of the text fragment includes usage of terminology dictionaries for a given subject domain.
  • 13. The method of claim 1, wherein the machine translation technique includes a model-based machine translation technique, wherein the model-based machine translation technique comprises using linguistic descriptions to build a semantic structure to represent a meaning of each untranslated fragment, and wherein the method further comprises providing a syntactically coherent translation.
  • 14. The method of claim 1, wherein the displaying the translation of the text fragment comprises displaying a translation in the form of one of a pop-up window, a superscript text, a subscript text, and a text balloon.
  • 15. A device for translating a text fragment in an input language into an output language, the device comprising: a processor; a memory in electronic communication with the processor, wherein the memory is configured with instructions to cause the processor to perform actions comprising: receiving an indication of a selection of an area that includes text displayed on a screen of a device; identifying the text fragment based on the indication of the selection of the area that includes text; performing a dictionary lookup of the text fragment, wherein performing the dictionary lookup of the text fragment comprises: identifying several words in a left direction and identifying several words in a right direction from base words of the text fragment; constructing combinations of the identified words with base words; transforming each element of an array by inserting hyphens and space characters between portions of the base words and identified words; searching for each element of the array in an electronic dictionary; and searching for a linguistically similar word or word combination in an electronic dictionary; and providing output of translation in a form that is in grammatical agreement with a context of the translated fragment using a morphologic synthesis; for each portion of the text fragment for which a translation is readily available, translating said portion of the text fragment based on said readily available translation; for each remaining untranslated portion of the text fragment for which a translation is not readily available, translating said untranslated portion of the text fragment based on a machine translation technique; and displaying the translation of the text fragment on the screen of the device, wherein the translation is displayed upon selecting the text fragment on the screen.
  • 16. The device of claim 15, wherein selecting text fragment may be done manually in a haptic-based or in a cursor-based manner.
  • 17. The device of claim 15, wherein identifying the text fragment includes identifying a sentence boundary or a paragraph boundary.
  • 18. The device of claim 15, wherein dictionary lookup includes selecting the translation dictionary for a given subject domain.
  • 19. The device of claim 15, wherein performing the dictionary lookup of the text fragment includes morphological analysis for identifying a base form of words of the text fragment.
  • 20. The device of claim 15, wherein performing the dictionary lookup of the text fragment includes performing a morphological synthesis that determines a grammatical form of words in the text fragment, and wherein the method further comprises providing a translation that is in grammatical agreement with grammatical rules of the output language.
  • 21. The device of claim 15, wherein performing the dictionary lookup of the text fragment includes selecting a variant of translation based on a frequency of use or in accordance with a topic field of the source text.
  • 22. The device of claim 15, wherein performing the translation of the text fragment includes selection and use of a translation database, and wherein the translation database is derived from a previous translation or segmenting existing parallel texts.
  • 23. The device of claim 15, wherein performing the translation of the text fragment includes use of a terminology dictionary for a subject domain.
  • 24. The device of claim 15, wherein the machine translation technique includes a model-based machine translation technique, wherein the model-based machine translation technique comprises use of linguistic descriptions to build a semantic structure to represent a meaning of each untranslated fragment, and wherein the method further comprises providing a syntactically coherent translation.
  • 25. The device of claim 15, wherein the displaying the translation of the text fragment comprises displaying a translation of the text fragment in the form of one of a pop-up window, a superscript text, a subscript text, and a text balloon.
CROSS-REFERENCE TO RELATED APPLICATIONS

For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of U.S. patent application Ser. No. 12/187,131 filed on Aug. 6, 2008, which is currently co-pending, or is an application of which a currently co-pending application is entitled to the benefit of the filing date.

US Referenced Citations (172)
Number Name Date Kind
4706212 Toma Nov 1987 A
5068789 Van Vliembergen Nov 1991 A
5075851 Kugimiya Dec 1991 A
5128865 Sadler Jul 1992 A
5146405 Church Sep 1992 A
5175684 Chong Dec 1992 A
5268839 Kaji Dec 1993 A
5301109 Landauer et al. Apr 1994 A
5386556 Hedin et al. Jan 1995 A
5418717 Su May 1995 A
5426583 Uribe-Echebarria Diaz De Mendibil Jun 1995 A
5475587 Anick et al. Dec 1995 A
5477451 Brown et al. Dec 1995 A
5490061 Tolin et al. Feb 1996 A
5497319 Chong et al. Mar 1996 A
5510981 Berger et al. Apr 1996 A
5550934 Van Vliembergen et al. Aug 1996 A
5559693 Anick et al. Sep 1996 A
5677835 Carbonell et al. Oct 1997 A
5678051 Aoyama Oct 1997 A
5687383 Nakayama et al. Nov 1997 A
5696980 Brew Dec 1997 A
5715468 Budzinski Feb 1998 A
5721938 Stuckey Feb 1998 A
5724593 Hargrave, III et al. Mar 1998 A
5737617 Bernth et al. Apr 1998 A
5752051 Cohen May 1998 A
5768603 Brown et al. Jun 1998 A
5784489 Van Vliembergen et al. Jul 1998 A
5787410 McMahon Jul 1998 A
5794050 Dahlgren et al. Aug 1998 A
5794177 Carus et al. Aug 1998 A
5826219 Kutsumi Oct 1998 A
5826220 Takeda et al. Oct 1998 A
5848385 Poznanski et al. Dec 1998 A
5873056 Liddy et al. Feb 1999 A
5884247 Christy Mar 1999 A
5895446 Takeda et al. Apr 1999 A
5966686 Heidorn et al. Oct 1999 A
6006221 Liddy et al. Dec 1999 A
6055528 Evans Apr 2000 A
6076051 Messerly et al. Jun 2000 A
6081774 de Hita et al. Jun 2000 A
6139201 Carbonell et al. Oct 2000 A
6151570 Fuji Nov 2000 A
6182028 Karaali et al. Jan 2001 B1
6223150 Duan et al. Apr 2001 B1
6233544 Alshawi May 2001 B1
6243669 Horiguchi et al. Jun 2001 B1
6243670 Bessho et al. Jun 2001 B1
6246977 Messerly et al. Jun 2001 B1
6260008 Sanfilippo Jul 2001 B1
6266642 Franz Jul 2001 B1
6275789 Moser et al. Aug 2001 B1
6278967 Akers et al. Aug 2001 B1
6282507 Horiguchi et al. Aug 2001 B1
6285978 Bernth et al. Sep 2001 B1
6330530 Horiguchi et al. Dec 2001 B1
6345244 Clark Feb 2002 B1
6356864 Foltz et al. Mar 2002 B1
6356865 Franz et al. Mar 2002 B1
6381598 Williamowski et al. Apr 2002 B1
6463404 Appleby Oct 2002 B1
6470306 Pringle et al. Oct 2002 B1
6601026 Appelt et al. Jul 2003 B2
6604101 Chan et al. Aug 2003 B1
6658627 Gallup et al. Dec 2003 B1
6721697 Duan et al. Apr 2004 B1
6760695 Kuno et al. Jul 2004 B1
6778949 Duan et al. Aug 2004 B2
6871174 Dolan et al. Mar 2005 B1
6871199 Binnig et al. Mar 2005 B1
6901399 Corston et al. May 2005 B1
6901402 Corston-Oliver et al. May 2005 B1
6928448 Franz et al. Aug 2005 B1
6937974 D'agostini Aug 2005 B1
6947923 Cha et al. Sep 2005 B2
6965857 Decary Nov 2005 B1
6983240 Ait-Mokhtar et al. Jan 2006 B2
6986104 Green et al. Jan 2006 B2
7013264 Dolan et al. Mar 2006 B2
7020601 Hummel et al. Mar 2006 B1
7027974 Busch et al. Apr 2006 B1
7050964 Menzes et al. May 2006 B2
7085708 Manson Aug 2006 B2
7146358 Gravano et al. Dec 2006 B1
7149681 Hu Dec 2006 B2
7167824 Kallulli Jan 2007 B2
7191115 Moore Mar 2007 B2
7200550 Menezes et al. Apr 2007 B2
7263488 Chu et al. Aug 2007 B2
7269594 Corston-Oliver et al. Sep 2007 B2
7346493 Ringger et al. Mar 2008 B2
7356457 Pinkham et al. Apr 2008 B2
7447624 Fuhrmann Nov 2008 B2
7475015 Epstein et al. Jan 2009 B2
7596485 Campbell et al. Sep 2009 B2
7672831 Todhunter et al. Mar 2010 B2
7707025 Whitelock Apr 2010 B2
8078450 Anisimovich et al. Dec 2011 B2
8145473 Anisimovich et al. Mar 2012 B2
8214199 Anismovich et al. Jul 2012 B2
8229730 Van Den Berg et al. Jul 2012 B2
8229944 Latzina et al. Jul 2012 B2
8271453 Pasca et al. Sep 2012 B1
8285728 Rubin Oct 2012 B1
8301633 Cheslow Oct 2012 B2
8402036 Blair-Goldensohn et al. Mar 2013 B2
8533188 Yan et al. Sep 2013 B2
8548951 Solmer et al. Oct 2013 B2
8577907 Singhal et al. Nov 2013 B1
8996994 Alonichau Mar 2015 B2
20010014902 Hu et al. Aug 2001 A1
20010029455 Chin et al. Oct 2001 A1
20020040292 Marcu Apr 2002 A1
20030004702 Higinbotham Jan 2003 A1
20030158723 Masuichi et al. Aug 2003 A1
20030176999 Calcagno et al. Sep 2003 A1
20030182102 Corston-Oliver et al. Sep 2003 A1
20030204392 Finnigan et al. Oct 2003 A1
20040098247 Moore May 2004 A1
20040122656 Abir Jun 2004 A1
20040167768 Travieso et al. Aug 2004 A1
20040172235 Pinkham et al. Sep 2004 A1
20040193401 Ringger et al. Sep 2004 A1
20040199373 Shieh Oct 2004 A1
20040254781 Appleby Dec 2004 A1
20050010421 Watanabe et al. Jan 2005 A1
20050015240 Appleby Jan 2005 A1
20050021322 Richardson et al. Jan 2005 A1
20050055198 Xun Mar 2005 A1
20050080613 Colledge et al. Apr 2005 A1
20050086047 Uchimoto et al. Apr 2005 A1
20050091030 Jessee et al. Apr 2005 A1
20050137853 Appleby Jun 2005 A1
20050155017 Berstis Jul 2005 A1
20050171757 Appleby Aug 2005 A1
20050209844 Wu et al. Sep 2005 A1
20050240392 Munro, Jr. et al. Oct 2005 A1
20060004563 Campbell et al. Jan 2006 A1
20060080079 Yamabama Apr 2006 A1
20060095250 Chen et al. May 2006 A1
20060136193 Lux-Pogodalla et al. Jun 2006 A1
20060217964 Kamatani et al. Sep 2006 A1
20060224378 Chino et al. Oct 2006 A1
20060293876 Kamatani et al. Dec 2006 A1
20070010990 Woo Jan 2007 A1
20070016398 Buchholz Jan 2007 A1
20070083359 Bender Apr 2007 A1
20070100601 Kimura May 2007 A1
20080195372 Chin et al. Aug 2008 A1
20090182548 Zwolinski Jul 2009 A1
20110055188 Gras Mar 2011 A1
20110301941 De Vocht Dec 2011 A1
20120023104 Johnson et al. Jan 2012 A1
20120030226 Holt et al. Feb 2012 A1
20120131060 Heidasch et al. May 2012 A1
20120197885 Patterson Aug 2012 A1
20120203777 Laroco, Jr. et al. Aug 2012 A1
20120221553 Wittmer et al. Aug 2012 A1
20120246153 Pehle Sep 2012 A1
20120296897 Xin-Jing et al. Nov 2012 A1
20130013291 Bullock et al. Jan 2013 A1
20130054589 Cheslow Feb 2013 A1
20130091113 Gras Apr 2013 A1
20130138696 Turdakov et al. May 2013 A1
20130185307 El-Yaniv et al. Jul 2013 A1
20130254209 Kang et al. Sep 2013 A1
20130282703 Puterman-Sobe et al. Oct 2013 A1
20130311487 Moore et al. Nov 2013 A1
20130318095 Harold Nov 2013 A1
20140012842 Yan et al. Jan 2014 A1
Foreign Referenced Citations (2)
Number Date Country
2400400 Dec 2001 EP
2011160204 Dec 2011 WO
Non-Patent Literature Citations (2)
Entry
Bolshakov, Igor A., Co-Ordinative Ellipsis in Russian Texts: Problems of Description and Restoration, Viniti, Academy of Sciences of USSR, Moscow, 125219, USSR, pp. 65-67.
Hutchins, “Machine Translation: Past, Present, Future,” New York: Halsted Press, 1986, Chapters 1, 3, and 9, pp. 1-36.
Related Publications (1)
Number Date Country
20130191108 A1 Jul 2013 US
Continuation in Parts (1)
Number Date Country
Parent 12187131 Aug 2008 US
Child 13552601 US