Field
Implementations of the present invention relate to natural language processing. In particular, implementations of the present invention relate to classifying text documents written in one or many languages.
Related Art
Many natural language processing systems involve classifying texts into predefined categories. For example, in order to sort the huge amount of news available online into some meaningful categories, e.g., politics, cultural events, sport etc., a text classification method may be applied.
Nowadays, there is a great desire to be able to analyze multi-language data. However, existing text processing systems are usually language-dependent, i.e., they are able to analyze text written only in one particular language.
The very few existing cross-language systems are based on machine translation techniques: they choose a so-called target language, translate all documents into that language with machine translation techniques, and then construct document representations and apply classification. The machine translation introduces additional errors and, moreover, the analysis is usually based on low-level properties of documents, so the meanings of the documents are not reflected in the utilized representation.
Thus, there is a need for systems that can improve cross-language document classification, systems that would take into account not only the symbolic information but also the semantics, i.e., the meaning, of documents.
So that the manner in which the recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details.
Reference in this specification to “one embodiment” or “an implementation” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase “in one embodiment” or “in one implementation” in various places in the specification are not necessarily all referring to the same embodiment or implementation, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
Implementations of the present invention disclose techniques for cross-language natural language text processing, such as text classification, based on exhaustive syntactic and semantic analyses of texts and on language-independent semantic structures. Numerous lexical, grammatical, syntactical, pragmatic, semantic and other features of the texts may be identified, extracted and effectively used to solve said task.
A classifier is an instrument to perform classification. One implementation of document classification may be formulated as follows: given a finite set of categories {C1, C2, . . . , Cn} and an input document D, a classifier has to assign the document D to one (or more) of the categories {C1, C2, . . . , Cn} or produce an output representing a set of pairs (a so-called classification spectrum) {(C1, w1), (C2, w2), . . . , (Cn, wn)}, where for each integer i from 1 to n, Ci is the category and wi is a weight (e.g., a real number in the interval [0,1]) defining to what extent the document D belongs to the category Ci. A threshold value may be defined in order to omit the categories whose weights fall below the threshold. For example, given the categories {Sport, TV, Business, Art} and a document to be classified describing a TV show about football, an adequate classifier could produce the following classification spectrum for the document: {(Sport, 0.7), (TV, 0.8), (Business, 0.2), (Art, 0.05)}. If the threshold is 0.3, only the Sport and TV categories will be considered.
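The classification spectrum and threshold just described can be illustrated with a small sketch; the category names, weights, and threshold value are the ones from the example above, and the function name is merely illustrative.

```python
# A minimal sketch of the classification-spectrum idea: keep only the
# categories whose weight meets or exceeds the threshold.

def filter_spectrum(spectrum, threshold=0.3):
    """spectrum: {category: weight}; returns the categories above the threshold."""
    return {category: weight for category, weight in spectrum.items() if weight >= threshold}

if __name__ == "__main__":
    # Spectrum produced by a hypothetical classifier for a document
    # describing a TV show about football.
    spectrum = {"Sport": 0.7, "TV": 0.8, "Business": 0.2, "Art": 0.05}
    print(filter_spectrum(spectrum))  # {'Sport': 0.7, 'TV': 0.8}
```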
Classification is a task of supervised learning, i.e., supervised (training) data is required. Training data is a set of labeled documents, i.e., each document is labeled with its category or classification spectrum. By analyzing this labeled data, a so-called classification function or classification model is constructed. This function or model should predict an output (a category or a classification spectrum) for an input document.
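As a generic illustration of this supervised setup (and not the specific method of the disclosure), the following sketch fits a simple bag-of-words classifier on a tiny labeled set, assuming the scikit-learn library is available; the training texts and labels are purely illustrative.

```python
# Supervised learning sketch: labeled documents are used to fit a model
# that predicts a category for an unseen document.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_texts = [
    "the team won the match in the final minute",        # Sport
    "parliament passed the new budget law",              # Politics
    "the gallery opened a new exhibition of paintings",  # Art
]
training_labels = ["Sport", "Politics", "Art"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_texts, training_labels)

print(model.predict(["the striker scored twice in the cup final"]))  # ['Sport']
```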
Many natural language processing (NLP) problems may be formulated as a task of classification. For example, authorship attribution is the problem of assigning authors to anonymous texts, where the authors are to be chosen from a predefined list of possible authors. For each possible author, one or more documents written by that author are available. Thus, these documents constitute the training data, and a classifier may be trained in order to assign an author to the anonymous texts. Another problem formulated as a task of classification is determining a document's genre or topic(s) out of lists of possible genres and topics when training data are available for each genre or topic.
Classification is usually performed on documents represented as vectors of so-called features. Features represent characteristics of the documents to be classified and should reflect the characteristics essential for the particular task. The naïve approach is to create features out of words: each word in a document may be a feature, so vectors containing the frequencies of each word may be utilized in classification. Another common way to create this vector space model is the term frequency–inverse document frequency (TF-IDF) document representation (such as described by Salton, 1988); in this approach a value in a document vector is not only proportional to the corresponding word's frequency in the document but is also inversely proportional to its frequency in the entire document corpus. Thus, words that occur frequently across documents (e.g., and, but, the, a, etc.) do not receive high values.
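The following sketch illustrates one common TF-IDF variant (raw term frequency multiplied by a logarithmic inverse document frequency), using only the Python standard library; it is not the exact weighting of any particular system.

```python
# TF-IDF sketch: a term's weight in a document grows with its frequency in
# that document and shrinks the more documents it occurs in.
import math
from collections import Counter

def tfidf_vectors(documents):
    """documents: list of token lists; returns one {term: weight} dict per document."""
    n_docs = len(documents)
    # Document frequency: in how many documents each term occurs.
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({
            term: count * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return vectors

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "stock markets fell sharply".split(),
]
for vec in tfidf_vectors(docs):
    print(vec)  # terms occurring in many documents receive lower weights
```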
Of course, some tasks require more sophisticated features for representing and processing documents, since document features should reflect the characteristics that are helpful for the particular task.
For example, the topic of a document could hardly be reflected by a feature such as average sentence length. Sentence length could, however, be useful in authorship analysis, because some authors are known for using very long sentences (e.g., L. Tolstoy) while others prefer shorter ones (e.g., E. Hemingway).
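As a small illustration, average sentence length (in words) can be computed as follows; the naive splitting on sentence-final punctuation is an assumption made for brevity.

```python
# Average sentence length in words, a simple stylometric feature.
import re

def average_sentence_length(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(word_counts)

print(average_sentence_length("He went out. The night was cold and the road was long."))  # 6.0
```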
Some widely used features are primarily lexical and character features, i.e., those that treat a text as a sequence of words and characters respectively: word frequencies, word n-grams, letter frequencies, character n-grams, etc. A big advantage of these features is that they are easy to extract automatically. However, they are language-dependent and do not capture a document's semantics. Therefore, these lexical features do not allow performing cross-language, semantically rich document analysis.
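Two of the lexical and character feature types named above, word frequencies and character n-grams, can be extracted as in the following sketch; real systems would normalize and prune these raw counts.

```python
# Lexical and character features extracted with the standard library only.
from collections import Counter

def word_frequencies(text):
    return Counter(text.lower().split())

def char_ngrams(text, n=3):
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

sample = "Bob has a spaniel"
print(word_frequencies(sample))
print(char_ngrams(sample, n=3).most_common(5))
```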
Language-independent features capturing not only the symbolic information but also the semantics of a text often appear to be more promising for solving various tasks. For example, such features are valuable in authorship analysis, since many authors write in different languages or their texts are translated, and features of the original author can be lost in translation. A language-independent system could fairly compare authors across different languages and should capture an author's writing style even when the author's work is translated. Language-independent systems would also be highly useful for grouping online news by topic across languages, since a large amount of news in different languages is available on the Internet.
Previous cross-language systems do not provide accurate extraction of language-independent, semantically rich features of text. Therefore these systems were very rarely exploited or adopted by a large user base. Existing systems for text document processing are typically limited to analyzing documents written in a single language. However, for some tasks, such as topic detection in online news or authorship attribution of translated texts, cross-language analysis techniques are required. The existing systems dealing with documents written in different languages usually translate them into one particular language (e.g., English) with machine translation systems and then apply classification. Therefore syntactic and semantic properties of the source sentences are not taken into account.
Advantageously, the problems associated with existing text processing systems are overcome or at least reduced by the techniques and systems disclosed herein.
Implementations of the invention allow a user to perform classification of natural language texts written in one or many natural languages. The disclosed classification method may take into account lexical, grammatical, syntactical, pragmatic, semantic and other features of the texts.
These features are extracted for constructing language-independent semantic structures. The system employs automatic syntactic and semantic analyses when processing texts. It indexes and stores syntactic and semantic information about each sentence, as well as parsing results and lexical choices, including results obtained when resolving ambiguities. The system analyzes sentences using linguistic descriptions of a given natural language that reflect the real complexities of the natural language, rather than simplified or artificial descriptions. A principle of integral and purpose-driven recognition, where hypotheses about the structure of a part of a sentence are verified within the hypotheses about the structure of the whole sentence, is implemented during the analysis stage. This avoids the need to analyze numerous parsings of anomalous variants.
In one implementation, a plurality of linguistic models and knowledge about natural languages may be arranged in a database and applied for analyzing each text or source sentence such as at step 106. Such a plurality of linguistic models may include morphology models, syntax models, grammar models and lexical-semantic models (not shown in
Accordingly, a rough syntactic analysis is performed on the source sentence to generate a graph of generalized constituents 532 for further syntactic analysis. All reasonably possible surface syntactic models for each element of lexical-morphological structure are applied, and all the possible constituents are built and generalized to represent all the possible variants of parsing the sentence syntactically.
Following the rough syntactic analysis, a precise syntactic analysis is performed on the graph of generalized constituents to generate one or more syntactic trees 542 to represent the source sentence. In one implementation, generating the syntactic tree 542 comprises choosing between lexical options and choosing between relations from the graphs. Many prior and statistical ratings may be used during the process of choosing between lexical options and choosing between relations from the graph. The prior and statistical ratings may also be used for assessment of parts of the generated tree and of the whole tree. In one implementation, the one or more syntactic trees may be generated or arranged in order of decreasing assessment, so that the best syntactic tree is generated first. Non-tree links are also checked and generated for each syntactic tree at this time. If the first generated syntactic tree fails, for example, because it is impossible to establish non-tree links, the next syntactic tree is taken as the best, and so on.
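The "best tree first, fall back on the next" selection described above can be pictured with a highly simplified sketch; the rating values and the non-tree-link check below are hypothetical stand-ins for the much richer assessments used during precise syntactic analysis.

```python
# Simplified selection of the best-rated syntactic tree with fallback when
# non-tree links cannot be established for a candidate.
def choose_syntactic_tree(candidate_trees, can_establish_non_tree_links):
    """candidate_trees: iterable of (tree, rating) pairs.
    Returns the highest-rated tree whose non-tree links can be established,
    or None if every candidate fails."""
    for tree, rating in sorted(candidate_trees, key=lambda pair: pair[1], reverse=True):
        if can_establish_non_tree_links(tree):
            return tree
    return None

# Hypothetical usage: the second-best tree is chosen because the best one fails.
candidates = [("tree_a", 0.92), ("tree_b", 0.87), ("tree_c", 0.55)]
print(choose_syntactic_tree(candidates, lambda tree: tree != "tree_a"))  # tree_b
```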
With reference to
With reference to
The analysis methods ensure that the maximum accuracy in conveying or understanding the meaning of the sentence is achieved.
With reference to
A semantic hierarchy may include semantic notions or semantic entities referred to herein as “semantic classes.” The semantic classes may be arranged into a semantic hierarchy comprising hierarchical parent-child relationships. In general, a child semantic class inherits many or most properties of its direct parent and all ancestral semantic classes. For example, semantic class SUBSTANCE is a child of semantic class ENTITY and at the same time it is a parent of semantic classes GAS, LIQUID, METAL, WOOD_MATERIAL, etc.
Each semantic class in the semantic hierarchy is supplied with a deep model. The deep model of the semantic class is a set of deep slots. Deep slots reflect the semantic roles of child constituents in various sentences with objects of the semantic class as the core of a parent constituent and the possible semantic classes as fillers of deep slots. The deep slots express semantic relationships between constituents, including, for example, “agent,” “addressee,” “instrument,” “quantity,” etc. A child semantic class inherits and adjusts the deep model of its direct parent semantic class.
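As one way to picture the hierarchy and deep models described above, the following sketch models semantic classes as nodes whose deep slots are inherited from parent classes. The class and slot names echo the examples in the text; the API itself is a hypothetical illustration, not the actual system's data structures.

```python
# Sketch of a semantic hierarchy: each class has a parent and a deep model
# (a set of deep slots) inherited, and possibly adjusted, from its parent.
class SemanticClass:
    def __init__(self, name, parent=None, deep_slots=None):
        self.name = name
        self.parent = parent
        self.own_slots = dict(deep_slots or {})  # slot name -> allowed filler classes

    def deep_model(self):
        """A child class inherits (and may adjust) the deep model of its parent."""
        model = dict(self.parent.deep_model()) if self.parent else {}
        model.update(self.own_slots)
        return model

    def ancestors(self):
        node, result = self.parent, []
        while node:
            result.append(node)
            node = node.parent
        return result

ENTITY = SemanticClass("ENTITY", deep_slots={"quantity": ["NUMBER"]})
SUBSTANCE = SemanticClass("SUBSTANCE", parent=ENTITY)
LIQUID = SemanticClass("LIQUID", parent=SUBSTANCE)

print([c.name for c in LIQUID.ancestors()])  # ['SUBSTANCE', 'ENTITY']
print(LIQUID.deep_model())                   # inherits {'quantity': ['NUMBER']}
```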
Semantic descriptions 104 are language-independent. Semantic descriptions 104 may provide descriptions of deep constituents, and may comprise a semantic hierarchy, deep slots descriptions, a system of semantemes, and pragmatic descriptions.
With reference to
With reference to
With reference to
With reference to
With reference to
With reference to
One implementation of the disclosed methods is a method of supervised learning such as the one shown in
One classification approach is based on the concept of similarity. There are many ways to calculate similarity between two texts. One naive way to find out whether two texts are similar is to count how many words they have in common. There are also more advanced versions of this approach, such as techniques involving lemmatization, stemming, weighting, etc. For example, a vector space model (G. Salton, 1975) may be built, and vector similarity measures, such as cosine similarity, may be utilized. During the text processing described here, documents may be represented with language-independent semantic classes that in their turn may be considered as lexical features. Therefore, the similarity measures mentioned above may be applied to such representations.
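For illustration, here is a sketch of the cosine similarity measure mentioned above, computed over simple bag-of-words vectors; in the processing described here, the same measure could equally be applied to vectors over language-independent semantic classes.

```python
# Cosine similarity between two sparse term-frequency vectors.
import math
from collections import Counter

def cosine_similarity(vec_a, vec_b):
    dot = sum(vec_a[t] * vec_b.get(t, 0) for t in vec_a)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

text_a = Counter("the cat sat on the mat".split())
text_b = Counter("the cat lay on the rug".split())
print(cosine_similarity(text_a, text_b))  # 0.75
```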
Such similarity measures have a drawback in that they do not actually capture the semantics. For example, the two sentences "Bob has a spaniel" and "Richard owns a dog" are semantically similar, but they do not share any words except an article. Therefore, a purely lexical text similarity measure will fail to find that these sentences are similar. To capture this type of similarity, knowledge-based semantic similarity measures may be used. These measures require a semantic hierarchy in order to be calculated. Similarity between two words usually depends on the shortest path between the corresponding concepts in a corresponding semantic hierarchy. For example, "spaniel" in the semantic hierarchy corresponding to the first sentence above appears as a child node (hyponym) of "dog"; therefore the semantic similarity between the concepts will be high. Word-to-word similarity measures may be generalized to text-to-text similarities by combining the similarity values of word pairs. The semantic classes described here represent nodes of the semantic hierarchy. Therefore, the knowledge-based semantic similarity measures described above, and their generalizations to text-to-text similarity measures, may be utilized within document processing.
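A minimal sketch of such a knowledge-based measure follows, under the assumptions that the hierarchy is a simple child-to-parent map and that similarity decays as 1/(1+distance); the actual measures and hierarchy are far richer. The text-to-text generalization here averages, for each word of one text, its best match in the other.

```python
# Path-based semantic similarity over a toy child -> parent hierarchy.
def path_length(hierarchy, a, b):
    """Shortest path between two nodes in a parent-map hierarchy (tree)."""
    def ancestors(node):
        chain = [node]
        while node in hierarchy:
            node = hierarchy[node]
            chain.append(node)
        return chain
    chain_a, chain_b = ancestors(a), ancestors(b)
    for i, node in enumerate(chain_a):
        if node in chain_b:
            return i + chain_b.index(node)
    return None  # no common ancestor

def concept_similarity(hierarchy, a, b):
    distance = path_length(hierarchy, a, b)
    return 0.0 if distance is None else 1.0 / (1.0 + distance)

def text_similarity(hierarchy, words_a, words_b):
    """Average, for each word in one text, its best match in the other."""
    scores = [max(concept_similarity(hierarchy, a, b) for b in words_b) for a in words_a]
    return sum(scores) / len(scores)

hierarchy = {"spaniel": "dog", "dog": "animal", "cat": "animal"}
print(concept_similarity(hierarchy, "spaniel", "dog"))  # 0.5  (distance 1)
print(concept_similarity(hierarchy, "spaniel", "cat"))  # 0.25 (distance 3)
print(text_similarity(hierarchy, ["spaniel"], ["dog", "cat"]))  # 0.5
```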
The hardware 1400 also typically receives a number of inputs and outputs for communicating information externally. For interface with a user or operator, the hardware 1400 may include one or more user input devices 1406 (e.g., a keyboard, a mouse, an imaging device, a scanner, a microphone) and one or more output devices 1408 (e.g., a Liquid Crystal Display (LCD) panel, a sound playback device (speaker)). To embody the present invention, the hardware 1400 typically includes at least one screen device.
For additional storage, the hardware 1400 may also include one or more mass storage devices 1410, e.g., a floppy or other removable disk drive, a hard disk drive, a Direct Access Storage Device (DASD), an optical drive (e.g. a Compact Disk (CD) drive, a Digital Versatile Disk (DVD) drive) and/or a tape drive, among others. Furthermore, the hardware 1400 may include an interface with one or more networks 1412 (e.g., a local area network (LAN), a wide area network (WAN), a wireless network, and/or the Internet among others) to permit the communication of information with other computers coupled to the networks. It should be appreciated that the hardware 1400 typically includes suitable analog and/or digital interfaces between the processor 1402 and each of the components 1404, 1406, 1408, and 1412 as is well known in the art.
The hardware 1400 operates under the control of an operating system 1414, and executes various computer software applications, components, programs, objects, modules, etc. to implement the techniques described above. Moreover, various applications, components, programs, objects, etc., collectively indicated by application software 1416 in
In general, the routines executed to implement the embodiments of the invention may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as a "computer program." A computer program typically comprises one or more sets of instructions resident at various times in various memory and storage devices in a computer that, when read and executed by one or more processors in the computer, cause the computer to perform operations necessary to execute elements involving the various aspects of the invention. Moreover, while the invention has been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of computer-readable media used to actually effect the distribution. Examples of computer-readable media include but are not limited to recordable-type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs)), flash memory, etc., among others. Another type of distribution may be implemented as Internet downloads.
While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the broad invention, and that this invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modified or rearranged in one or more of their details as facilitated by enabling technological advancements without departing from the principles of the present disclosure.
For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of U.S. patent application Ser. No. 12/983,220, filed on 31 Dec. 2010, which is a continuation-in-part of U.S. Ser. No. 11/548,214, filed on 10 Oct. 2006, which is currently co-pending, or is an application of which a currently co-pending application is entitled to the benefit of the filing date. The United States Patent Office (USPTO) has published a notice effectively stating that the USPTO's computer programs require that patent applicants reference both a serial number and indicate whether an application is a continuation or continuation-in-part. See Stephen G. Kunin, Benefit of Prior-Filed Application, USPTO Official Gazette 18 Mar. 2003. The Applicant has provided above a specific reference to the application(s) from which priority is being claimed as recited by statute. Applicant understands that the statute is unambiguous in its specific reference language and does not require either a serial number or any characterization, such as "continuation" or "continuation-in-part," for claiming priority to U.S. patent applications. Notwithstanding the foregoing, Applicant understands that the USPTO's computer programs have certain data entry requirements, and hence Applicant is designating the present application as a continuation-in-part of its parent applications as set forth above, but points out that the designations are not to be construed as commentary or admission as to whether or not the present application contains any new matter in addition to the matter of its parent application(s). All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.