A problem today for many individuals, particularly practitioners in the disciplines involving information analysis, is the scarcity of time and/or resources to review the large volumes of information that are available and potentially relevant. Effective and timely use of such large amounts of information is often impossible using traditional approaches, such as lists, tables, and simple graphs. Tools that can help individuals automatically identify and/or understand the themes, topics, and/or trends within a body of information are useful and necessary for handling these large volumes of information. Many traditional text analysis techniques focus on selecting features that distinguish documents within a document corpus. However, these techniques may fail to select features that characterize or describe the majority or a minor subset of documents within the corpus. Furthermore, when the information is streaming and/or updated over time, the corpus is dynamic and can change significantly over time. Therefore, most of the current tools are limited in that they only allow information consumers to interact with snapshots of an information space that is often continually changing.
Most information sources, such as news syndicates and information services, deliver information streams and/or provide a variety of mechanisms for feeding the latest information by region, subject, and/or user-defined search interests. When traditional text analysis tools are used on such sources, newly arriving information can eclipse prior information. As a result, temporal context is typically lost when employing corpus-oriented text analysis tools that do not accommodate dynamic corpora. Accurately identifying and intelligently describing change in an information space requires a context that relates new information with old. Accordingly, a need exists for systems and computer-implemented processes for computation and analysis of significant themes within a corpus of documents, particularly when the corpus is dynamic and changes over time.
Aspects of the present invention provide systems and computer-implemented processes for determining coherent clusters of individual lexical units, such as keywords, keyphrases and other document features. These clusters embody distinct themes within a corpus of documents. Furthermore, some embodiments can provide processes and systems that enable identification and tracking of related themes across time within a dynamic corpus of documents. The grouping of documents into themes through their essential content, such as lexical units, can enable exploration of associations between documents independently of a static and/or pre-defined corpus.
As used herein, lexical units can refer to significant words, symbols, numbers, and/or phrases that reflect and/or represent the content of a document. A lexical unit can comprise a single term or multiple words and phrases. An exemplary lexical unit can include, but is not limited to, a keyword or keyphrase that provides a compact summary of a document. Additional examples can include, but are not limited to, entities, query terms, and terms or phrases of interest. The lexical units can be provided by a user, by an external source, by an automated tool that extracts the lexical units from documents, or by a combination thereof.
A theme, as used herein, can refer to a group of lexical units that are predominantly associated with a distinct set of documents in the corpus. A corpus may have multiple themes, each theme relating strongly to a unique, but not necessarily exclusive, set of documents.
Embodiments of the present invention can compute and analyze significant themes within a corpus of documents. The corpus can be maintained in a storage device and/or streamed through communications hardware. Computation and analysis of significant themes can be executed on a processor and can comprise generating a lexical unit document association (LUDA) vector for each lexical unit that has been provided and quantifying similarities between each unique pair of lexical units. The LUDA vector characterizes a measure of association between its corresponding lexical unit and documents in the corpus. The lexical units can then be grouped into clusters such that each cluster contains a set of lexical units that are most similar, as determined by the LUDA vectors and a predetermined clustering threshold. A theme label, comprising the lexical unit within each cluster that has the greatest measure of association, can then be assigned to each cluster.
In preferred embodiments, the steps of providing lexical units, generating LUDA vectors, quantifying similarities between lexical units, and grouping lexical units into clusters are repeated at pre-defined intervals if the corpus of documents is not static. Accordingly, the present invention can operate on streaming information to extract content from documents as they are received and calculate clusters and themes at defined intervals. The clusters and/or themes calculated at a given interval can be persisted allowing for evaluation of overlap and differences with themes and/or clusters from previous and future intervals.
In some embodiments, the lexical units can be provided after having been automatically extracted from individual documents within the corpus of documents. In a particular instance, extraction of lexical units from the corpus of documents can comprise parsing words in an individual document by delimiters, stopwords, or both to identify candidate lexical units. Co-occurrences of words within the candidate lexical units are determined, and word scores are calculated for each word within the candidate lexical units based on a function of co-occurrence degree, co-occurrence and frequency, or both. A lexical unit score is then calculated for each candidate lexical unit based on a function of the word scores for words within the candidate lexical unit. Lexical unit scores for each candidate lexical unit can comprise a sum of the word scores for each word within the candidate lexical unit. A portion of the candidate lexical units can then be selected for extraction as actual lexical units based, at least in part, on the candidate lexical units with the highest lexical unit scores. In some embodiments, a predetermined number, T, of candidate lexical units having the highest lexical unit scores are extracted as the lexical units.
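The candidate identification and scoring steps above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the stopword list is a tiny assumed subset, the function names are invented for the example, and the word score uses the degree/frequency ratio, which is one of the scoring functions the text permits.

```python
import re
from collections import defaultdict

# Illustrative stopword subset; a real implementation would use a full list.
STOPWORDS = {"the", "of", "and", "a", "in", "is", "are", "to", "for"}

def candidate_phrases(text):
    """Split text into candidate lexical units at delimiters and stopwords."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(tuple(current))
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))
    return phrases

def score_candidates(phrases):
    """Score each word as co-occurrence degree / frequency, then sum the
    word scores of a phrase's members to get its lexical unit score."""
    freq = defaultdict(int)
    degree = defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)  # words co-occurring within the candidate
    word_score = {w: degree[w] / freq[w] for w in freq}
    return {p: sum(word_score[w] for w in p) for p in set(phrases)}
```

Selecting the top-T candidates by score then yields the extracted lexical units.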
In preferred embodiments, co-occurrences of words are stored within a co-occurrence graph. Furthermore, candidate lexical units that adjoin one another at least twice in the individual document, and in the same order, can be joined along with any interior stopwords to create a new candidate lexical unit.
When grouping the lexical units into clusters, the measure of association can be determined by submitting each lexical unit as a query to the corpus of documents and then storing document responses from the queries as the measures. Alternatively, the measure of association can be determined by quantifying frequencies of each lexical unit within each document in the corpus and storing the frequencies as the measures. In yet another embodiment, the measure of association is a function of frequencies of each word within the lexical units within each document in the corpus. In specific instances, the similarities between lexical units can be quantified using Sorensen similarity coefficients of respective LUDA vectors. Alternatively, the similarity between lexical units can be quantified using pointwise mutual information of respective LUDA vectors.
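The frequency-based measure of association can be sketched as a sparse LUDA vector keyed by document identifier. The function name and the plain substring count are illustrative assumptions, not part of the claimed process.

```python
def luda_vector(lexical_unit, corpus):
    """Sparse LUDA vector: frequency of the lexical unit in each document
    where it occurs. `corpus` maps document ids to lowercase text."""
    vec = {}
    for doc_id, text in corpus.items():
        count = text.count(lexical_unit)
        if count > 0:
            vec[doc_id] = count
    return vec
```

Documents in which the lexical unit never occurs are simply absent from the vector, which keeps the representation compact for large corpora.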
In preferred embodiments, grouping of lexical units comprises applying hierarchical agglomerative clustering to successively join similar pairs of lexical units into a hierarchy. In a specific instance, the hierarchical clustering is Ward's hierarchical clustering, and clusters are defined using a coherence threshold of 0.65.
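A minimal sketch of the agglomerative step follows. Average linkage is used here as a simple stand-in for Ward's criterion, and all names are assumptions for illustration; the point is only the successive joining of the two closest clusters into a hierarchy, recording each decision distance.

```python
def cluster_hierarchy(n, dist):
    """Agglomerative clustering sketch (average linkage as a stand-in for
    Ward's method). `dist(i, j)` returns the distance between elements
    i and j. Returns merge records (cluster_id_a, cluster_id_b, decision_distance)."""
    clusters = {i: frozenset([i]) for i in range(n)}
    merges = []
    next_id = n
    while len(clusters) > 1:
        ids = sorted(clusters)
        best = None
        for x in range(len(ids)):
            for y in range(x + 1, len(ids)):
                ca, cb = clusters[ids[x]], clusters[ids[y]]
                # average pairwise distance between the two clusters
                d = sum(dist(i, j) for i in ca for j in cb) / (len(ca) * len(cb))
                if best is None or d < best[0]:
                    best = (d, ids[x], ids[y])
        d, ia, ib = best
        clusters[next_id] = clusters.pop(ia) | clusters.pop(ib)
        merges.append((ia, ib, d))
        next_id += 1
    return merges
```

The decision distance of the final merge serves as the maximum against which lower-level clusters' coherence can be evaluated.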
The corpus of documents can be static or dynamic. A static corpus refers to the more traditional understanding in which the corpus is fixed with respect to content in time. Alternatively, a dynamic corpus can refer to streamed information that is updated periodically, regularly, and/or continuously. Stories, which can refer to dynamic sets of documents that are associated with the same themes across multiple intervals, can emerge from analysis of a dynamic corpus. Stories can span multiple documents across time intervals and can develop, merge, and split as they intersect and overlap with other stories over time.
When operating on a dynamic corpus of documents, embodiments of the present invention can maintain a sliding window over time, removing old documents as time moves onward. The duration of the sliding window can be pre-defined to minimize any problems associated with scalability and the size of the corpus. Since the sliding window can limit how far back in time a user can analyze data, preferred embodiments allow a user to save to a storage device a copy of any current increment of analysis.
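The sliding window can be sketched as a queue holding one document list per interval; the class and method names are assumptions made for this illustration.

```python
from collections import deque

class SlidingCorpus:
    """Maintain the documents from the most recent n intervals of a
    dynamic corpus, aging out older intervals as time advances."""

    def __init__(self, n_intervals):
        self.n = n_intervals
        self.windows = deque()  # one list of documents per interval

    def advance(self, new_docs):
        """Start a new interval with its documents; drop the oldest interval
        once more than n intervals are held."""
        self.windows.append(list(new_docs))
        while len(self.windows) > self.n:
            self.windows.popleft()

    def documents(self):
        """All documents currently inside the window."""
        return [d for w in self.windows for d in w]
```

Theme computation at each interval would then operate on `documents()`, so the corpus analyzed is always bounded in size.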
The purpose of the foregoing abstract is to enable the United States Patent and Trademark Office and the public generally, especially the scientists, engineers, and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The abstract is neither intended to define the invention of the application, which is measured by the claims, nor is it intended to be limiting as to the scope of the invention in any way.
Various advantages and novel features of the present invention are described herein and will become readily apparent to those skilled in this art from the following detailed description. In the preceding and following descriptions, the various embodiments, including the preferred embodiments, have been shown and described. Included herein is a description of the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of modification in various respects without departing from the invention. Accordingly, the drawings and description of the preferred embodiments set forth hereafter are to be regarded as illustrative in nature, and not as restrictive.
Embodiments of the invention are described below with reference to the following accompanying drawings.
The following description includes at least the best mode of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore the present description should be seen as illustrative and not limiting. While the invention is susceptible of various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.
Many current text analysis techniques focus on identifying features that distinguish documents from each other within an encompassing document corpus. These techniques may fail to select features that characterize or describe the majority or a minor subset of the corpus. Furthermore, when the information is streaming, the corpus is dynamic and can change significantly over time. Techniques that evaluate documents by discriminating features are only valid for a snapshot in time.
To more accurately characterize documents within a corpus, preferred embodiments of the present invention apply computational methods for characterizing each document individually. Such methods produce information on what a document is about, independent of its current context. Analyzing documents individually also further enables analysis of massive information streams, as multiple documents can be analyzed in parallel or across a distributed architecture. In order to extract content that is readily identifiable by users, techniques for automatically extracting lexical units can be applied. Rapid Automatic Keyword Extraction (RAKE) is one such technique that can take a simple set of input parameters to automatically extract keywords as lexical units from a single document. Details regarding RAKE are described in U.S. patent application Ser. No. 12/555,916, filed on Sep. 9, 2009, which details are incorporated herein by reference. Briefly, RAKE is a computer-implemented process that parses words in an individual document by delimiters, stopwords, or both to identify lexical units. Co-occurrences of words within the lexical units are determined, and word scores are calculated for each word within the lexical units based on a function of co-occurrence degree, co-occurrence and frequency, or both. A lexical unit score is then calculated for each lexical unit based on a function of word scores for words within the lexical units. Lexical unit scores for each lexical unit can comprise a sum of the word scores for each word within the lexical unit. A portion of the lexical units can then be selected for extraction as essential lexical units based, at least in part, on the lexical units with highest lexical unit scores. In some embodiments, a predetermined number, T, of lexical units having the highest lexical unit scores are extracted as the essential lexical units, or keywords.
Keywords (i.e., lexical units), which may comprise one or more words, provide an advantage over other types of signatures as they are readily accessible to a user and can be easily applied to search other information spaces. The value of any particular keyword can be readily evaluated by a user for their particular interests and applied in multiple contexts. Furthermore, the direct correspondence of extracted keywords with the document text improves the system's accessibility to users.
For a given corpus, whether static or representing documents within an interval of time, a set of extracted lexical units is selected and grouped into coherent themes by applying a hierarchical agglomerative clustering algorithm to a lexical unit similarity matrix based on lexical unit document associations in the corpus. Lexical units that are selected for the set can have a higher ratio of extracted document frequency (the number of documents from which the lexical unit was extracted as a keyword) to total document frequency, or are otherwise considered representative of a set of documents within the corpus.
The association of each lexical unit within this set to documents within the corpus is measured as the document's response to the lexical unit, which is obtained by submitting each lexical unit as a query to a Lucene index populated with documents from the corpus. The query response of each document hit that is greater than 0.1 is accumulated in the lexical unit's document association vector. Lucene calculates document similarity according to a vector space model. In most cases the number of document hits for a particular lexical unit query is a small subset of the total number of documents in the index. Lexical unit document association vectors therefore typically have fewer entries than there are documents in the corpus and are very heterogeneous.
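The accumulation of query responses above the 0.1 cutoff can be sketched generically as follows. The `score` callable stands in for the Lucene vector-space similarity described in the text, and all names are assumptions for the example.

```python
def association_vector(lexical_unit, corpus, score, min_score=0.1):
    """Build a lexical unit's document association vector by accumulating
    each document's query response above `min_score`. `score(query, doc)`
    stands in for the index's vector-space similarity."""
    return {doc_id: s
            for doc_id, doc in corpus.items()
            if (s := score(lexical_unit, doc)) > min_score}
```

Because most documents score zero or near zero for any given lexical unit query, the resulting vector is sparse, as noted above.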
The similarity between each unique pair of lexical units is calculated as the Sorensen similarity coefficient of the lexical units' respective document association vectors. The Sorensen similarity coefficient is used due to its effectiveness on heterogeneous vectors and is identical to 1.0 minus the Bray-Curtis distance, as shown in equation (1).
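On sparse association vectors, the Sorensen coefficient expressed as 1.0 minus the Bray-Curtis distance can be sketched as:

```python
def sorensen(u, v):
    """Sorensen similarity of two sparse association vectors, computed as
    1.0 minus the Bray-Curtis distance:
        1 - sum(|u_k - v_k|) / sum(u_k + v_k)
    over all keys appearing in either vector."""
    keys = set(u) | set(v)
    total = sum(u.get(k, 0.0) + v.get(k, 0.0) for k in keys)
    if total == 0:
        return 0.0
    diff = sum(abs(u.get(k, 0.0) - v.get(k, 0.0)) for k in keys)
    return 1.0 - diff / total
```

Identical vectors score 1.0, disjoint vectors score 0.0, and partially overlapping vectors fall in between, which suits the heterogeneous vectors described above.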
Coherent groups of lexical units can then be calculated by clustering lexical units by their similarity. Because the number of coherent groups may be independent of the number of lexical units extracted, Ward's hierarchical agglomerative clustering algorithm, which does not require a pre-defined number of clusters, can be applied.
Ward's hierarchical clustering begins by assigning each element to its own cluster and then successively joins the two most similar clusters into a new, higher-level, cluster until a single top-level cluster is created from the two remaining, least similar, ones. The decision distance dd_ij between these last two clusters is typically retained as the maximum decision distance dd_max for the hierarchy and can be used to evaluate the coherence c_n of lower-level clusters in the hierarchy as shown in equation (2).
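The text does not reproduce equation (2) itself; assuming it takes the form c_n = 1 - dd_n / dd_max, which is consistent with the statements that tighter clusters have higher coherence and that the threshold is applied on a 0-to-1 scale, the computation is:

```python
def coherence(dd_n, dd_max):
    """Coherence of a cluster under the assumed form of equation (2):
    c_n = 1 - dd_n / dd_max. Clusters joined at small decision distances
    (high internal similarity) score close to 1.0."""
    return 1.0 - dd_n / dd_max
```

Under this reading, a cluster joined at a decision distance of 3.0 in a hierarchy whose final merge distance is 10.0 has coherence 0.7 and would pass the 0.65 threshold.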
Clusters that have greater internal similarity will have higher coherence. Using a high coherence threshold prevents clusters from including broadly used lexical units, such as "president", that are likely to appear in multiple themes. In preferred embodiments, clusters with a coherence of 0.65 or greater are selected as candidate themes for the corpus.
Each candidate theme comprises lexical units that typically return the same set of documents when applied as a query to the document corpus. These lexical units occur in multiple documents together and may intersect other stories singly or together.
We select the final set of themes for the corpus by assigning documents to their most highly associated theme. The association of a document to a theme is calculated as the sum of the document's associations to lexical units that comprise the theme. After all documents in the corpus have been assigned, we filter out any candidate themes for which no documents have been assigned. Lexical units within each theme are then ranked by their associations to documents assigned within the theme. Hence the top ranked lexical unit for each theme best represents documents assigned to the theme and is used as the theme's label.
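The assignment, filtering, and labeling steps above can be sketched as follows; the data-structure shapes (`themes` as a mapping from theme id to lexical units, `assoc` as a mapping from (lexical unit, document) pairs to association scores) are assumptions made for this illustration.

```python
def assign_documents(themes, assoc):
    """Assign each document to its most highly associated theme, where a
    document's association to a theme is the sum of its associations to
    the theme's member lexical units."""
    doc_ids = {d for (_, d) in assoc}
    assignment = {}
    for d in doc_ids:
        assignment[d] = max(
            themes,
            key=lambda t: sum(assoc.get((lu, d), 0.0) for lu in themes[t]))
    return assignment

def theme_labels(themes, assoc, assignment):
    """Filter out themes with no assigned documents and label each
    surviving theme with its top-ranked lexical unit."""
    labels = {}
    for t, lus in themes.items():
        docs = [d for d, a in assignment.items() if a == t]
        if not docs:
            continue  # candidate theme received no documents
        labels[t] = max(
            lus, key=lambda lu: sum(assoc.get((lu, d), 0.0) for d in docs))
    return labels
```

The label chosen this way is the lexical unit most strongly associated with the theme's own documents, matching the ranking rule described above.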
The MPQA Corpus consists of 535 news articles provided by the Center for the Extraction and Summarization of Events and Opinions in Text (CERATOPS). Articles in the MPQA Corpus are from 187 different foreign and U.S. news sources and date from June 2001 to May 2002.
RAKE was applied to extract keywords as lexical units from the title and text fields of documents in the MPQA Corpus. Lexical units that occurred in at least two documents were selected from those that were extracted. Embodiments of the present invention were then applied to compute themes for the corpus. Of the 535 documents in the MPQA Corpus, 327 were assigned to 10 themes which align well with the 10 defined topics for the corpus as shown in
The majority of the remaining themes computed in the instant example had fewer than four documents assigned, an expected result given the random selection of the remainder of documents in the MPQA Corpus.
As described elsewhere herein, embodiments of the present invention can operate on streaming information to extract essential content from documents as they are received and to calculate themes at defined time intervals. When the current time interval ends, a set of lexical units is selected from the extracted lexical units and lexical unit document associations are measured for all documents published or received within the current and previous n intervals. Lexical units are clustered into themes according to the similarity of their document associations, and each document occurring over the past n intervals is assigned to the theme for which it has the highest total association.
The set of themes computed for the current interval are persisted along with their member lexical units and document assignments. Overlap with previous and future themes may be evaluated against previous or future intervals by comparing overlap of lexical units and document assignments. Themes that overlap with others across time together relate to the same story.
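One plausible way to evaluate the overlap described above is the Jaccard overlap of two themes' lexical units and of their document assignments; the text does not fix a formula, so this measure and all names are assumptions for illustration.

```python
def theme_overlap(theme_a, theme_b):
    """Jaccard overlap of two persisted themes' lexical units and document
    assignments. Each theme is a dict with 'lexical_units' and 'documents'
    sets. Returns (lexical_unit_overlap, document_overlap)."""
    def jaccard(x, y):
        union = x | y
        return len(x & y) / len(union) if union else 0.0
    return (jaccard(theme_a["lexical_units"], theme_b["lexical_units"]),
            jaccard(theme_a["documents"], theme_b["documents"]))
```

Themes from adjacent intervals with nonzero overlap under such a measure would be linked to the same story.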
Repeated co-occurrences of documents within themes computed for multiple distinct intervals are meaningful as they indicate real similarity and relevance of content between those documents for those intervals.
In addition to the expected addition of new documents to an existing story and aging out of documents older than n intervals, it is not uncommon for stories to gain or lose documents to other stories. Documents assigned to the same theme within one interval may be assigned to different themes in the next interval. Defining themes at each interval enables embodiments of the present invention to automatically adapt to future thematic changes and accommodate the reality that stories often intersect, split, and merge.
To demonstrate utility, embodiments of the present invention were applied to documents within the Topic Detection and Tracking (TDT-2) corpus tagged as originating from the Associated Press's (AP) World Stream program, which was chosen due to its similarity to other news sources and information services of interest.
Clusters, documents, themes, and/or stories can be represented visually according to embodiments of the present invention. Two such visual representations, which can provide greater insight into the characteristics of themes and stories in a temporal context, are described below.
The first view, a portion of which is shown in
To provide a temporal context we developed the Story Flow Visualization (SFV). The Story Flow visualization, a portion of which is shown in
For a given interval, each theme is labeled with its top lexical unit in italics and lists its assigned documents in descending order by date. Each document is labeled with its title on the day that it is first published (or received), and rendered as a line connecting its positions across multiple days. This preserves space and reinforces the importance and time of each document, as the document title is only shown in one location. Similar lines across time intervals represent flows of documents assigned to the same themes, related to the same story. As stories grow over days, they add more lines. A document's line ends when it is no longer associated with any themes.
Referring to
Some embodiments can employ ordering schemes that take into account relative positions of related groups across days in order to minimize line crossings at interval boundaries. However, consistently ordering themes for each interval by their number of assigned documents, as is done in the present embodiment, can help ensure that the theme order for each day is unaffected by future days. This preserves the organization of themes in the story flow visualization across days and supports information consumers' extended interaction over days and weeks. An individual or team would therefore be able to print out each day's story flow column with document titles and lines, and post it next to the previous day's columns. Such an approach would be unrestricted by monitor resolution and would support interaction and collaboration through manual edits and notes on the paper hard copies. Each foot of wall space could hold up to seven daily columns, enabling a nine-foot wall to hold two months' worth of temporal context along a single horizontal span.
On a single high-resolution monitor, seven days can be rendered, as each daily column can be allocated a width of 300 pixels, which accommodates most document titles. Longer time periods can be made accessible through the application of a scrolling function.
While a number of embodiments of the present invention have been shown and described, it will be apparent to those skilled in the art that many changes and modifications may be made without departing from the invention in its broader aspects. The appended claims, therefore, are intended to cover all such changes and modifications as they fall within the true spirit and scope of the invention.
This invention claims priority from U.S. Provisional Patent Application No. 61/222,737, entitled “Feature Extraction Methods and Apparatus for Information Retrieval and Analysis,” filed Jul. 2, 2009.
This invention was made with Government support under Contract DE-AC05-76RL01830 awarded by the U.S. Department of Energy. The Government has certain rights in the invention.