As provided for under 35 U.S.C. §120, this patent claims benefit of the filing date of the following U.S. patent application, herein incorporated by reference in its entirety:
“Methods and Apparatuses for Sentiment Analysis,” filed 2012 May 14 (y/m/d), having inventors Lisa Joy Rosner, Jens Erik Tellefsen, Michael Jacob Osofsky, Jonathan Spier, Ranjeet Singh Bhatia, Malcolm Arthur De Leo, and Karl Long, and application Ser. No. 13/471,417.
This application is related to the following U.S. patent application(s), which are herein incorporated by reference in their entirety:
“Methods and Apparatuses for Clustered Storage of Information and Query Formulation,” filed 2011 Oct. 24 (y/m/d), having inventors Mark Edward Bowles, Jens Erik Tellefsen, and Ranjeet Singh Bhatia and application Ser. No. 13/280,294 (“the '294 application”);
“Method and Apparatus for Frame-Based Search,” filed 2008 Jul. 21 (y/m/d), having inventors Wei Li, Michael Jacob Osofsky and Lokesh Pooranmal Bajaj and application Ser. No. 12/177,122 (“the '122 application”);
“Method and Apparatus for Frame-Based Analysis of Search Results,” filed 2008 Jul. 21 (y/m/d), having inventors Wei Li, Michael Jacob Osofsky and Lokesh Pooranmal Bajaj and application Ser. No. 12/177,127 (“the '127 application”);
“Method and Apparatus for Determining Search Result Demographics,” filed 2010 Apr. 22 (y/m/d), having inventors Michael Jacob Osofsky, Jens Erik Tellefsen and Wei Li and application Ser. No. 12/765,848 (“the '848 application”);
“Method and Apparatus for HealthCare Search,” filed 2010 May 30 (y/m/d), having inventors Jens Erik Tellefsen, Michael Jacob Osofsky, and Wei Li and application Ser. No. 12/790,837 (“the '837 application”); and
“Method and Apparatus for Automated Generation of Entity Profiles Using Frames,” filed 2010 Jul. 20 (y/m/d), having inventors Wei Li, Michael Jacob Osofsky and Lokesh Pooranmal Bajaj and application Ser. No. 12/839,819 (“the '819 application”).
Collectively, the above-listed related applications can be referred to herein as “the Related Applications.”
The present invention relates generally to the analysis of sentiment, and more particularly to analysis across various types of corpuses of statements.
It is well known that tracking customer satisfaction is an important technique for sustained competitive advantage. Measures of customer satisfaction, based on a variety of survey techniques, have been developed and are well known. Survey techniques include telephone interviews, emailed survey forms, and web site “intercepts.” All such techniques share the drawbacks of being time-consuming and expensive to perform. A well-known example measure of customer satisfaction is the American Customer Satisfaction Index (ACSI). The ACSI is produced by ACSI LLC, a private company based in Ann Arbor, Mich., U.S.A.
More recently, however, customers are using online tools to express their opinions about a wide range of products and services. Many such online tools can be described as being under the general category of “Social Media” (or SM). Online tools in this category include, but are not limited to, the following:
The availability of such SM content raises the question of whether, with appropriate technology, it can be used to provide information similar to traditional customer satisfaction measures.
The accompanying drawings, that are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and, together with the description, serve to explain the principles of the invention:
Reference will now be made in detail to various embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
Please refer to the Section 3 (“Glossary of Selected Terms”) for the definition of selected terms used below.
1 Sentiment Measurement
1.1 Overview and Related Applications
1.2 Consumer Sentiment Search
1.3 Sentiment Analysis
1.4 Brand Analysis
1.5 Temporal Dimension
1.6 Correlation With Known Metrics
1.7 Lack of Correlation Between Metrics
1.8 SWOT-type Analysis
2 Additional Information
2.1 Word Lists
2.2 Computing Environment
3 Glossary of Selected Terms
1 Sentiment Measurement
1.1 Overview and Related Applications
In addition to being incorporated by reference in their entirety, the Related Applications are specifically relied upon, in many of their sections, by the description presented herein. A specific Related Application can be referred to as “the '123 application,” where '123 represents the last three digits of the Application Number of that Related Application. A specific section of a Related Application can also be referenced herein by using any of the following conventions:
Section 4, '837 (“FBSE”) describes a Frame-Based Search Engine (or FBSE). This FBSE is a more generic form of the kind of search described herein in Section 1.2 (“Consumer Sentiment Search”).
Section 4.2, '837 discusses frames as a form of concept representation (Section 4.2.1) and the use of frame extraction rules to produce instances of frames (Section 4.2.2). A pseudo-code format for frame extraction rules is presented in Section 6.2, '837 (“Frame Extraction Rules”).
Snippets are discussed in Section 6.4, '837.
Parts of the '837 application are repeated, in this section, for convenience to the reader.
In general, a frame is a structure for representing a concept, wherein such concept is also referred to herein as a “frame concept.” A frame specifies a concept in terms of a set of “roles.” Any type of concept can be represented by a frame, as long as the concept can be meaningfully decomposed (or modeled), for the particular application, by a set of roles.
When a frame concept is detected, for a particular UNL (see the below Glossary of Selected Terms for a definition) in a corpus of natural language, a frame “instance” is created. The instance has, for each applicable frame role, a “role value” assigned. A role value represents a particular aspect of how the frame concept is being used by the UNL in which the frame concept is detected.
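As a minimal illustration of these definitions, a frame and a frame instance can be modeled as simple data structures. The Python sketch below is an assumption for exposition only (the class and field names are not taken from the Related Applications); it borrows the “Sentiment” frame roles and the fictitious “Barnstorm” brand discussed below in Section 1.2:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Frame:
    """A frame: a concept represented by a named set of roles."""
    concept: str       # the frame concept, e.g., "Sentiment"
    roles: List[str]   # the roles that decompose (model) the concept

@dataclass
class FrameInstance:
    """Created when the frame concept is detected in a particular UNL."""
    frame: Frame
    unl: str                                        # e.g., a single sentence
    role_values: Dict[str, str] = field(default_factory=dict)

# Hypothetical two-role Sentiment frame and one instance of it.
sentiment_frame = Frame(concept="Sentiment",
                        roles=["Object_Role", "Sentiment_Role"])
instance = FrameInstance(
    frame=sentiment_frame,
    unl="I love Barnstorm soda.",
    role_values={"Object_Role": "Barnstorm", "Sentiment_Role": "love"})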
If a large corpus of interest (or “C_of_I”) is to be searched, such as a significant portion of the online information available on the Internet, in order to have a practical Frame-Based Search Engine (or FBSE) it is typically necessary to perform a large amount of pre-query scanning and indexing of the C_of_I. An overview of this process is discussed at Section 4.1, '837 and Section 4.3.2.1, '837. The basic steps, of an FBSE, are:
The above-described steps can be accomplished using, for example, the computing environment described in Section 2.2. Regarding ordering of the steps, Instance Generation is performed before the steps of Instance Merging or Instance Selection. Instance Merging and Instance Selection, however, can be performed in either order (or even concurrently), depending upon the particular application.
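As a rough sketch of the ordering just described (Instance Generation first; Instance Merging and Instance Selection in either order), the basic steps can be organized as a simple pipeline. The function bodies below are placeholders, not the algorithms of the '837 application:

from typing import Callable, Iterable

def generate_instances(corpus: Iterable[str],
                       extract: Callable[[str], list]) -> list:
    """Instance Generation: apply frame extraction rules to each UNL of the C_of_I."""
    instances = []
    for unl in corpus:
        instances.extend(extract(unl))
    return instances

def merge_instances(instances: list) -> list:
    """Instance Merging: combine instances describing the same occurrence (placeholder: no-op)."""
    return instances

def select_instances(instances: list, keep: Callable[[object], bool]) -> list:
    """Instance Selection: retain only the instances relevant to a query."""
    return [inst for inst in instances if keep(inst)]

def fbse_pipeline(corpus, extract, keep):
    # Generation must come first; Merging and Selection could be swapped or run concurrently.
    return select_instances(merge_instances(generate_instances(corpus, extract)), keep)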
1.2 Consumer Sentiment Search
An example application, within which the present invention can be utilized, is the performance by a “brand manager” of a “consumer sentiment search.” In relation to a brand of consumer product (or a “C_Brand”), a brand manager is a person responsible for the continued success of her or his brand. Such brand managers are often interested in consumer sentiment toward his or her C_Brand. A type of search to accomplish this, described in Section 2.1, '294 (“Consumer Sentiment Search”), is called a “consumer sentiment search.”
The '294 application describes the searching of a database that includes the collection, in a large scale and comprehensive way, of postings (such as “tweets” on Twitter) to Social Media (SM) web sites or services. Such a Social Media inclusive database is referred to as “SM_db,” and its basic elements are called documents (even though, in Social Media, an individual posting may be quite short). To create a suitably fast search tool, the '294 application describes comprehensive pre-query scanning, of the documents of SM_db, for instances of a suitable frame.
An example frame, suitable for a consumer sentiment search, is the following “Sentiment” frame (each role name has the suffix “_Role,” with a brief explanation following):
The above “Sentiment” frame is typically applied to the analysis of an individual sentence. Following is an example focus sentence, to which appropriate Natural Language Processing (NLP) can be applied to produce an instance of the “Sentiment” frame. The following example sentence discusses a fictitious brand of soda called “Barnstorm”:
Given a suitable NLP analysis, by application of suitable frame extraction rules, the following instance of the “Sentiment” frame can be produced:
For each instance of the “Sentiment” frame found, the '294 application describes the sentence, where the frame concept is found, as a “focus sentence.” In addition, a three-sentence “snippet” is created, composed of the focus sentence along with the sentences immediately before and after it. A single type of record, called a “SentenceObj,” is described as being created, with the SentenceObj including both the focus sentence and the snippet as fields. (The snippet described in Section 6.4, '837, “Snippet Formation,” has a length of five sentences but, because of the general brevity of SM, a three-sentence snippet has been found to be effective.) Section 2.1, '294, states that these fields (i.e., the fields for the focus sentence and snippet) can be called, respectively, “FocusSentence” and “Snippet.” Each field can be indexed and therefore available for queries. Further, each SentenceObj can be the root of its own “cluster” of records. Including the SentenceObj, the cluster can be hierarchically organized to contain the following three record types:
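Separately from those record types, the two indexed fields named above can be sketched as follows. Only the field names FocusSentence and Snippet come from the '294 application; the record class and the snippet-forming helper are assumptions for illustration:

from dataclasses import dataclass
from typing import List

@dataclass
class SentenceObj:
    FocusSentence: str   # the focus sentence in which the frame concept was found
    Snippet: str         # the three-sentence snippet built around it

def make_sentence_obj(sentences: List[str], focus_index: int) -> SentenceObj:
    """Form a three-sentence snippet: the focus sentence plus the sentences
    immediately before and after it (when such neighbors exist)."""
    start = max(0, focus_index - 1)
    end = min(len(sentences), focus_index + 2)
    return SentenceObj(FocusSentence=sentences[focus_index],
                       Snippet=" ".join(sentences[start:end]))

# Example: the middle sentence is the focus sentence.
doc = ["I tried a new soda yesterday.", "I love Barnstorm.", "I will buy it again."]
print(make_sentence_obj(doc, 1).Snippet)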
An example User Interface (or UI), for entering a consumer sentiment search, is shown in
The CSH provides a number of options, regarding exactly how a consumer sentiment search is performed, including the following:
Regardless of how it is performed, an example result, of the consumer sentiment search input of
The emotion-listing half 1111 consists of two columns: a column that summarizes the emotions found (column 1112) and a column that shows, for each emotion listed, the number of records in which it occurs (column 1113). A plus sign (i.e., “+”) is shown next to each emotion of column 1112. This indicates that each emotion is representative of a group of related expressions, with the members of the group made visible by selecting the plus sign with mouse pointer 1100. If desired by the user, more emotions can be listed by selecting link 1114 (“view more emotions”) with mouse pointer 1100.
Focus-sentence listing half 1112 shows 6 example focus sentences of the search result, numbered 1114-1119. If desired by the user, more focus sentences can be listed by selecting link 1120 (“view more focus sentences”) with mouse pointer 1100. Each focus sentence is shown with two parts emphasized:
1.3 Sentiment Analysis
A first embodiment of the present invention permits further analysis, for example, of a consumer sentiment search, such as the kind discussed in the previous section.
Stated more generally, a first embodiment of the present invention permits the sentiment of each statement to be evaluated along at least one, or both, of the following two (relatively independent) dimensions:
As used herein, each statement of the first corpus can be any type of UNL, although the type of UNL focused upon herein consists of a single sentence. As used herein, the first corpus can consist of UNL's drawn from any source or sources, although the sources focused upon herein are those of Social Media (as discussed above).
As used herein, the kind of object, about which statements can be subject to sentiment analysis, can be almost anything capable of being given a name. Some example types of objects, not intended to be limiting, include:
For each UNL of the first corpus, for which at least one type of instance is extracted of a frame “F,” that frame “F” is regarded (for purposes of example) as containing at least the following two roles (as these roles are described above, in Section 1.2, “Consumer Sentiment Search”):
A first corpus, once it has had an instance of “F” extracted for each of its statements, can be referred to herein as an “instanced corpus.”
1. Object_Role (e.g., instance for S1 has a value of “X1” for Object_Role 110)
It can be observed that, for the 8 statements shown, instanced corpus 102 addresses the following three objects:
A following stage of analysis is shown in
Object-specific corpus 103 can be related to the consumer sentiment search result, described in above Section 1.2 and illustrated by
A next stage of analysis is shown in
In
Instances that express a positive sentiment (regardless of other relatively independent variables, such as the intensity of the positive sentiment) are put in the POSITIVE category (and can be referred to as a positive corpus). In the case of
Distinguishing between positive and negative sentiment can be accomplished by any suitable technique, including matching against positive and negative word (or lexical unit) lists. A closest match can be sought, between the contents of a Sentiment_Role and a word of the two word lists.
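One simple way to implement such matching is sketched below. The abbreviated word lists are stand-ins for the categorization word lists of Section 2.1, and longer entries are tried first so that a negated expression such as “not bad” is not mistaken for “bad”; the matching actually used may be more sophisticated:

# Abbreviated stand-ins for the positive/negative categorization word lists of Section 2.1.
POSITIVE_WORDS = ["love", "great", "good", "not bad"]
NEGATIVE_WORDS = ["hate", "awful", "bad", "not good"]

def categorize_polarity(sentiment_role_text: str) -> str:
    """Return 'POSITIVE', 'NEGATIVE', or 'UNKNOWN' by matching the contents
    of a Sentiment_Role against the two categorization word lists."""
    text = sentiment_role_text.lower()
    for entry in sorted(POSITIVE_WORDS + NEGATIVE_WORDS, key=len, reverse=True):
        if entry in text:
            return "POSITIVE" if entry in POSITIVE_WORDS else "NEGATIVE"
    return "UNKNOWN"

print(categorize_polarity("not bad at all"))  # POSITIVE
print(categorize_polarity("I hate this"))     # NEGATIVE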
Another categorization is shown in
Instances that express a strong sentiment (regardless of other relatively independent variables, such as whether that intensity is in the positive or negative direction) are put in the STRONG category (and can be referred to as a strong corpus). In the case of
As discussed in section 2.1 below (“Word Lists”), the categorization word lists, for determining positive or negative polarity, can be created as follows:
As with polarity, distinguishing between strong and weak sentiment can be accomplished by any suitable technique, including matching against strong and weak categorization word lists. A closest match can be sought, between the contents of a Sentiment_Role and a word of the two word lists. As discussed in section 2.1 below (“Word Lists”), the categorization word lists, for determining strong or weak intensity, can be created as follows:
Once the instances, of an object-specific corpus, have been categorized, the categorization data can be input to at least one metric. In general, a metric accepts a categorized corpus as input and produces a value (or values) that represent a summarization, with respect to the category, of such corpus.
A polarity metric accepts, as input, positive and negative corpuses. While any suitable function can be used, in general, the output of a polarity metric indicates the extent to which, overall (or in “net”), the positive or negative corpus dominates. Such a metric is also referred to herein as a “Net Polarity Metric” or NPM.
Using Np and Nn as input, Mp can be a function that produces the following range of values:
Following is a suitable function for Mp:
An intensity metric accepts, as input, strong and weak corpuses. While any suitable function can be used, in general, the output of an intensity metric indicates the extent to which, overall (or in “net”), the strong or weak corpus dominates. Such a metric is also referred to herein as a “Net Intensity Metric” or NIM.
Using Ns and Nw as input, Mi can be a function that produces the following range of values:
Following is a suitable function for Mi:
Given a specified range of values for an NPM and/or NIM, such as the ranges provided above, a graphical representation, of an object-specific corpus, can be produced by assigning an axis to each metric utilized. For example, for the ranges specified above:
In
Starting from the quadrant of the upper-left (quadrant 410), and proceeding in a clockwise fashion, each quadrant of
If the consumer sentiment search result of Section 1.2 (and shown in
Multiple object-specific corpuses can be shown on a single IPS graph.
With regard to graphing a COS, the area of each object-specific corpus can be made proportional to the number of statements it represents. Another possibility is to calculate a total number of statements, across all the object-specific corpuses of a COS, and then to make the area of each object-specific corpus proportional to its relative share of such total.
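Purely as an illustrative sketch, an IPS-style graph can be drawn by assuming simple normalized net-difference forms for NPM and NIM (the particular metric functions actually used may differ) and making the area of each circle proportional to its share of the total statement count; the plotting library choice and the sample counts below are assumptions, and the data is fictitious:

import matplotlib.pyplot as plt

def net_polarity_metric(n_pos: int, n_neg: int) -> float:
    """Assumed NPM form: net dominance of the positive corpus, in [-1, +1]."""
    total = n_pos + n_neg
    return 0.0 if total == 0 else (n_pos - n_neg) / total

def net_intensity_metric(n_strong: int, n_weak: int) -> float:
    """Assumed NIM form: net dominance of the strong corpus, in [-1, +1]."""
    total = n_strong + n_weak
    return 0.0 if total == 0 else (n_strong - n_weak) / total

# Fictitious COS: (name, Np, Nn, Ns, Nw, total number of statements).
cos = [("P1", 60, 40, 30, 70, 100),
       ("P2", 20, 80, 55, 45, 100),
       ("P3", 70, 30, 80, 20, 200)]
grand_total = sum(entry[5] for entry in cos)

fig, ax = plt.subplots()
for name, n_pos, n_neg, n_strong, n_weak, count in cos:
    x = net_polarity_metric(n_pos, n_neg)
    y = net_intensity_metric(n_strong, n_weak)
    ax.scatter([x], [y], s=2000 * count / grand_total, alpha=0.5)  # area ~ share of total
    ax.annotate(name, (x, y))
ax.axhline(0.0)  # divide into quadrants at the midpoint of each axis
ax.axvline(0.0)
ax.set_xlabel("NPM (net polarity)")
ax.set_ylabel("NIM (net intensity)")
plt.show()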
With respect to the quadrants of
A second technique, for dividing an IPS graph into quadrants, is shown in
The medians plotted can be determined with respect to any suitable COS (also referred to herein as the COSm). In some situations, it may be best for COSm to be the same COS (also referred to herein as the “display COS”) shown by the IPS graph itself. For example, in the case of
A third technique, for dividing an IPS graph into quadrants, is shown in
1.4 Brand Analysis
While any type of object (or entity) can be the subject of an IPS graph, an example area of application is the analysis of product brands.
In the case of
If, for example, the product category is “chocolate,” an example brand, for each of P1-P6, can be as follows (while the following brands are actual names, used to aid illustration of an example brand analysis within the U.S. market, the data presented herein is intended to be fictitious and for example purposes only):
In the case of
Of course, medians 610 and 611 can be based upon any suitable level of exhaustive determination of metrics NPM and NIM. This broad range of options is made possible, at least in part, by the fact that an automated, frame-based, natural language processing system is used for the analysis. Thus, for example, the NPM and NIM values can be determined for hundreds of brands, across dozens of diverse industries. Medians 610 and 611 can then be used to place the particular brands, displayed in an IPS graph, within a very broad context.
In addition to providing a broader context, such broad-based medians can also provide a relatively stable context in which to evaluate the evolution, over time, of sentiment towards a brand.
1.5 Temporal Dimension
A temporal dimension can be added to a sentiment analysis. Section 1.3 discusses sentiment analysis as applied to a single “first corpus.” Sentiment analysis, however, can be applied to a series of related corpuses. If each member of the series differs according to a time range, over which its constituent statements apply, then a time-varying sentiment analysis can be produced.
Objectives served, by a time-varying sentiment analysis, can include either or both of the following:
Assuming cross-hatched is negative and clear is positive,
Another use, for a time-series of related corpuses, is to maintain an updated sentiment analysis. This can be useful even if the sentiment analysis itself is not displayed with a temporal axis (as in the case of an IPS graph).
Regardless of the time period used, such updates to metric values can be applied to the object-specific corpuses of an IPS graph. Any suitable timescale can be used, for updating the positions of the object-specific corpuses, dependent upon the particular application. For example, in the case of a typical consumer product brand “B1,” a brand manager may wish to refer back to his or her IPS graph (showing B1 in relation to other competing brands of greatest interest) every few days. On the other hand, if an IPS graph is being used to depict the relative position of two candidates for political office, in “real time” during a political debate event, it may be necessary to update the metric values used every few minutes.
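As a sketch of such periodic recomputation (the timestamped input, the bucketing helper, and the normalized net-difference metric form are all assumptions), the statements of an object-specific corpus can be grouped by time period and an NPM-style value recomputed per period:

from collections import defaultdict
from datetime import datetime, timedelta

def npm_over_time(statements, period=timedelta(days=7)):
    """Given (timestamp, polarity) pairs, with polarity 'POSITIVE' or 'NEGATIVE',
    return an NPM-style value (assumed normalized net-difference form) per time bucket."""
    epoch = datetime(1970, 1, 1)
    buckets = defaultdict(lambda: [0, 0])                 # bucket -> [Np, Nn]
    for timestamp, polarity in statements:
        bucket = int((timestamp - epoch) / period)
        buckets[bucket][0 if polarity == "POSITIVE" else 1] += 1
    return {b: (np_ - nn) / (np_ + nn) for b, (np_, nn) in sorted(buckets.items())}

print(npm_over_time([(datetime(2012, 5, 1), "POSITIVE"),
                     (datetime(2012, 5, 2), "NEGATIVE"),
                     (datetime(2012, 5, 9), "POSITIVE")]))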
1.6 Correlation with Known Metrics
An empirical analysis has been performed, between values of NPM (as determined from well-known sources of Social Media, such as FACEBOOK) and conventionally-determined values from the American Customer Satisfaction Index (ACSI).
In a first empirical analysis, NPM scores were determined for 12 retail businesses (all of which serve the U.S. market) on the basis of 12 months of Social Media data. For each of these businesses, its corresponding ACSI value was retrieved. An analysis was performed to compare these two sets of 12 data points: one set based on the NPM and the other from ACSI. In particular, the “Pearson Product-Moment Correlation Coefficient” (PPMCC) was determined. The PPMCC, also referred to as an “r” value, is widely used in the sciences as a measure of the strength of linear dependence between two variables. It ranges between +1 and −1 inclusive, with +1 meaning perfect positive correlation, −1 meaning perfect negative correlation, and 0 meaning no correlation. There was found to be a strong correlation between the two sets of 12 data points, with r=0.773. The data points, used in determining the PPMCC, are shown in the following table:
In a second empirical analysis, NPM scores were determined on the basis of the same 12 months of Social Media data, but across a wide range of industries. The industries studied included the following: automotive, airline, financial, Internet retail, Internet travel, Consumer Products & Goods (CPG), and grocery store. For each of the businesses studied, its corresponding ACSI value was retrieved. An analysis of the two sets of data points, one based on the NPM and the other from ACSI, showed strong correlation. In particular, the PPMCC was determined to be r=0.714.
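The Pearson Product-Moment Correlation Coefficient used in these analyses follows its standard definition and can be computed directly, as sketched below; the values shown are placeholders, not the data points of the studies:

from math import sqrt

def pearson_r(xs, ys):
    """Pearson Product-Moment Correlation Coefficient (PPMCC) between two
    equal-length sequences, per the standard definition."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder values only -- not the NPM or ACSI data points discussed above.
npm_scores = [0.20, 0.50, 0.10, 0.70, 0.40]
acsi_scores = [72, 80, 70, 85, 76]
print(pearson_r(npm_scores, acsi_scores))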
1.7 Lack of Correlation Between Metrics
An empirical analysis has been performed, between values of NPM and NIM (as determined from well-known sources of Social Media, such as FACEBOOK). Specifically, for the same 12 retail businesses discussed above (Section 1.6 “Correlation With Known Metrics”), two sets of 12 values were produced: one set of NPM values and another set of NIM values. A correlation analysis, between the two sets of values, produced a very low PPMCC, with r=0.100. Thus, empirical evidence indicates that the polarity of a statement about an object is relatively independent of the intensity with which that polarity will be expressed (and intensity of a statement about an object is relatively independent of the polarity with which that intensity will be expressed). The study indicates that it is indeed valid to combine NPM and NIM as orthogonal axes, for purposes of creating an IPS graph.
Further, empirical evidence indicates that the volume of statements about an object is relatively independent of either the polarity or intensity of such statements. As discussed above, volume can be indicated in an IPS graph by the area of a circle, where the circle is representative of an object-specific corpus. A correlation analysis, between volume and polarity, for a collection of object-specific corpuses, found a very low PPMCC, with r=0.012.
1.8 SWOT-Type Analysis
As is well known in the business management community, SWOT is an acronym for a strategic planning technique that seeks to identify, with respect to a company's competitive situation, its Strengths, Weaknesses, Opportunities, and Threats.
Regardless of how the quadrants of an IPS graph are identified (e.g., dividing along the middle of each axis, dividing along median lines, or dividing relative to a subset of a COS), such quadrants can be used to perform a “SWOT” type analysis with respect to an object's competitive situation.
The first step is to identify, for some object X1, a list of factors or characteristics, “L1,” that can (or is expected to) affect X1's competitive situation. The list L1 can be identified by any suitable technique.
As discussed above in Section 1.3 (“Sentiment Analysis”), an object-specific corpus P1, specific to some object X1, is produced by selecting for X1 in the Object_Role in an instanced corpus. A SWOT type analysis, for X1, can be accomplished by determining a subset of P1 for each member of L1. The subsets of P1 can be determined with any suitable technique, including the following. For each member “m” of L1, in addition to selecting each statement “s” where X1 appears in the Object_Role of its instance, it is further required that “m” appear in “s” within some pre-defined proximity of where X1 occurs in “s.” Sufficient proximity, of “m” and X1 in a statement “s,” can also be referred to as a co-occurrence. Each such subset of P1 can be called a “characteristic subset.” If L1 has “n” members, then n characteristic subsets of P1 are produced.
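A minimal sketch of forming characteristic subsets by co-occurrence follows; the whitespace tokenization, the proximity window of ten tokens, and the example statements are simplistic assumptions:

def characteristic_subsets(p1, x1, l1, window=10):
    """For each characteristic m of list L1, select the statements of the
    object-specific corpus P1 in which m occurs within `window` tokens of
    an occurrence of the object X1 (a co-occurrence)."""
    subsets = {m: [] for m in l1}
    for statement in p1:
        tokens = statement.lower().split()
        x1_positions = [i for i, t in enumerate(tokens) if t == x1.lower()]
        if not x1_positions:
            continue
        for m in l1:
            m_positions = [i for i, t in enumerate(tokens) if t == m.lower()]
            if any(abs(i - j) <= window for i in x1_positions for j in m_positions):
                subsets[m].append(statement)
    return subsets

example_corpus = ["The taste of Barnstorm is great",
                  "The price of Barnstorm keeps going up"]
print(characteristic_subsets(example_corpus, "Barnstorm", ["price", "taste"]))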
To accomplish a SWOT type analysis, a collection of characteristic subsets (or a “CCS”) can be placed on an IPS graph in essentially the same way as described above for a COS:
The quadrants of the resulting IPS graph can be related, as follows, to SWOT. In the following list, the object being subjected to SWOT analysis is assumed to be a brand (as discussed above in Section 1.4 “Brand Analysis”):
For example, consider a fictitious brand “STREAMER,” that is a service for streaming movies over the Internet. Following is an example list of characteristics, determined by any suitable technique, for performing a SWOT analysis of STREAMER:
It will now be assumed that, for each of the six above-listed characteristics, a corresponding STREAMER subset (i.e., the subset of STREAMER's object-specific corpus that is produced when co-occurrence of a characteristic is also required) is represented by P1-P6 of
2 Additional Information
2.1 Word Lists
As discussed above with respect to Section 1.3, for purposes of categorizing the sentiment of a statement, the following four word lists can be used:
Since the above four lists are used for sentiment categorization, we can refer to them herein as the “categorization word lists.” Rather than maintain the above-described categorization word lists, each of which contains words that overlap with those of another categorization word list, it can be more efficient to maintain the four below-listed lists. Since each of the below four lists corresponds to a quadrant, such as the quadrants of
Each categorization word list can be created by the following unions of two of the quadrant word lists (a brief sketch follows the list below):
1. Positive Words: union of Positive-Strong and Positive-Weak
2. Negative Words: union of Negative-Strong and Negative-Weak
3. Strong Words: union of Positive-Strong and Negative-Strong
4. Weak Words: union of Positive-Weak and Negative-Weak
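A brief sketch of these unions follows; only a few entries from each quadrant word list are reproduced here (the full lists appear in Sections 2.1.1-2.1.4):

# Abbreviated quadrant word lists (the full lists appear in Sections 2.1.1-2.1.4).
POSITIVE_WEAK   = {"adequate", "decent", "fine", "nice", "not bad"}
POSITIVE_STRONG = {"awesome", "fantastic", "love", "terrific"}
NEGATIVE_STRONG = {"awful", "hate", "terrible", "worst"}
NEGATIVE_WEAK   = {"annoy", "bad", "disappoint", "not good"}

# The four categorization word lists, each the union of two quadrant word lists.
POSITIVE_WORDS = POSITIVE_STRONG | POSITIVE_WEAK
NEGATIVE_WORDS = NEGATIVE_STRONG | NEGATIVE_WEAK
STRONG_WORDS   = POSITIVE_STRONG | NEGATIVE_STRONG
WEAK_WORDS     = POSITIVE_WEAK | NEGATIVE_WEAK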
2.1.1 Example Quadrant Word List: Positive-Weak
adequate, admirable, appreciate, appreciative, appropriate, attract, attractive, not bad, better, classy, comfortable, confidence, covet, cute, decent, desirable, desire, not disappoint, dope, elegant, enjoy, not evil, favor, favorable, fine, fond, fun, good, grateful, happy, hook, important, interest, interesting, not irritate, like, lovely, miss, neat, nice, not offend, pleasant, please, precious, prefer, preferable, pretty, no problem, satisfy, not stupid, thankful, treasure, trust, want, no weakness, no worry, not worse
2.1.2 Example Quadrant Word List: Positive-Strong
adorable, adore, amaze, awesome, beautiful, best, brilliant, cool, crave, delight, excellent, exceptional, excite, exciting, fabulous, fan, fantastic, fave, favorite, first rate, gorgeous, great, ideal, impress, impressive, incredible, long for, love, luv, magnificent, outstanding, perfect, priceless, revolutionary, sexy, stun, super, superb, superior, terrific, thrill, top notch, vital, wonderful, world class
2.1.3 Example Quadrant Word List: Negative-Strong
abhor, abysmal, anger, angry, awful, crap, crappy, despicable, despise, detest, disaster, disastrous, disgraceful, disgust, dreadful, eff, enrage, evil, fed up, fiasco, fuck, fucking, furious, garbage, hate, hatred, hideous, horrible, horrific, horrify, junk, loathe, nasty, not tolerate, offend, offensive, outrage, phucking, repulsive, rubbish, screw, shit, shitty, not stand, terrible, terrify, terrorize, trash, trashy, ugly, unacceptable, unbearable, useless, woefully, worst, worthless, yucky
2.1.4 Example Quadrant Word List: Negative-Weak
not adequate, not advantageous, alarm, not amaze, annoy, not appreciate, not appropriate, not attractive, not awesome, bad, baffle, not beautiful, not beneficial, not best, not better, bewilder, blame, bore, bother, not brilliant, bug, not charming, not classy, not comfortable, not commendable, concern, not confidence, confuse, not cool, not crave, not craving, criticize, not cute, not decent, deficient, depress, not desire, detrimental, disappoint, disappointment, dislike, displease, dissatisfy, distrust, doubt, dread, dumb, not elegant, not enjoy, not enough, not essential, not excellent, not exceptional, not excite, not exciting, not fan, not fantastic, not favor, not favorite, fear, not fine, not flawless, not fond, foolish, not friend, frighten, frustrate, not fun, not good, not gorgeous, not great, not happy, not helpful, not ideal, imperfect, not important, not impress, not impressive, inferior, not interest, not interesting, intimidate, irk, irritate, joke, let down, not like, not love, not lovely, not luv, not magnificent, not need, not nice, not outstanding, not perfect, not pleasant, not please, poor, not prefer, not pretty, not priceless, problematic, regret, resent, not revolutionary, ridiculous, sadden, not satisfactory, not satisfy, scare, scorn, not sexy, sick, silly, skeptical, spook, stupid, not super, not superior, not sure, not terrific, not thrill, tire, not top notch, unattractive, unhappy, unimpressive, unpleasant, unsatisfactory, not valuable, not want, worry, worse, wrong
2.2 Computing Environment
Cloud 930 represents data, such as online opinion data, available via the Internet. Computer 910 can execute a web crawling program, such as Heritrix, that finds appropriate web pages and collects them in an input database 900. An alternative, or additional, route for collecting input database 900 is to use user-supplied data 931. For example, such user-supplied data 931 can include the following: any non-volatile media (e.g., a hard drive, CD-ROM or DVD), record-oriented databases (relational or otherwise), an Intranet or a document repository. A computer 911 can be used to process (e.g., reformat) such user-supplied data 931 for input database 900.
Computer 912 can perform the indexing needed for formation of an appropriate frame-based database (FBDB). FBDB's are discussed above (Section 1.1 “Overview and Related Applications”) and in the Related Applications. The indexing phase scans the input database for sentences that refer to an organizing frame (such as the “Sentiment” frame), produces a snippet around each such sentence and adds the snippet to the appropriate frame-based database.
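A highly simplified sketch of this indexing phase follows; the database and extraction interfaces are placeholders (passed in as plain Python callables and lists), not the actual FBDB implementation:

def index_input_database(documents, extract_sentiment_instances, make_snippet, fbdb):
    """Scan each document's sentences for the organizing frame (here, "Sentiment"),
    form a snippet around each matching focus sentence, and add it to the FBDB."""
    for doc in documents:                     # each doc is a list of sentences
        for i, sentence in enumerate(doc):
            instances = extract_sentiment_instances(sentence)
            if instances:
                fbdb.append({"FocusSentence": sentence,
                             "Snippet": make_snippet(doc, i),
                             "Instances": instances})

# Hypothetical usage with trivial stand-ins for the real components.
fbdb = []
index_input_database(
    documents=[["I tried a new soda.", "I love Barnstorm.", "Will buy it again."]],
    extract_sentiment_instances=lambda s: ["Sentiment"] if "love" in s else [],
    make_snippet=lambda doc, i: " ".join(doc[max(0, i - 1): i + 2]),
    fbdb=fbdb)
print(fbdb)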
Databases 920 and 921 represent, respectively, stable “snapshots” of databases 900 and 901. Databases 920 and 921 can provide stable databases that are available for searching, about an object of interest in a first corpus, in response to queries entered by a user at computer 933. Such user queries can travel over the Internet (indicated by cloud 932) to a web interfacing computer 914 that can also run a firewall program. Computer 913 can receive the user query, collect snippet and frame instance data from the contents of the appropriate FBDB (e.g., FBDB 921), and transmit the results back to computer 933 for display to the user. The results from computer 913 can also be stored in a database 902 that is private to the individual user. When it is desired to see the snippets, on which a graphical representation is based, FBDB 921 is available. If it is further desired to see the full documents, on which snippets are based, input database 920 is also available to the user.
In accordance with what is ordinarily known by those in the art, computers 910, 911, 912, 913, 914 and 933 contain computing hardware, and programmable memories, of various types.
The information (such as data and/or instructions) stored on computer-readable media or programmable memories can be accessed through the use of computer-readable code devices embodied therein. A computer-readable code device can represent that portion of a device wherein a defined unit of information (such as a bit) is stored and/or read.
3 Glossary of Selected Terms
While the invention has been described in conjunction with specific embodiments, it is evident that many alternatives, modifications and variations will be apparent in light of the foregoing description. Accordingly, the invention is intended to embrace all such alternatives, modifications and variations as fall within the spirit and scope of the appended claims and equivalents.
Relation | Number | Date | Country
---|---|---|---
Parent | 13471417 | May 2012 | US
Child | 14613324 | | US