Anchor text summarization for corroboration

Information

  • Patent Grant
  • Patent Number
    9,208,229
  • Date Filed
    Friday, March 31, 2006
  • Date Issued
    Tuesday, December 8, 2015
Abstract
A system and method for corroborating a set of facts. If the anchor text of the references to a document matches the name of a set of facts, the referenced document is used to corroborate the set of facts. By analyzing the anchor text of the references to the document, the system is capable of determining if a document is relevant to the set of facts. These documents can then be used to corroborate or refute the facts, thereby improving their overall quality.
Description
TECHNICAL FIELD

The disclosed embodiments relate generally to fact databases. More particularly, the disclosed embodiments relate to corroboration of facts extracted from multiple sources.


BACKGROUND

When information is collected from potentially contradictory, ambiguous, or untrustworthy sources, it is useful to have a mechanism for comparing information from multiple documents to ensure the accuracy of the collected information. Comparing information collected from multiple sources allows errant or ambiguous information to be detected and removed, and for the confidence in affirmed information to be consequently increased.


In order to perform comparisons of information collected from multiple sources, it is necessary to identify independent sources that are relevant to each other. For the comparison of information from different sources to be meaningful, the independent sources should be related to the same topic.


What is needed is a method for finding sources relevant to a topic so that information related to that topic can be reliably confirmed or rejected.


SUMMARY

The invention is a system and method for corroborating a set of facts. If the anchor text of the references to a document matches the name of a set of facts, the referenced document is used to corroborate the set of facts. By analyzing the anchor text of the references to the document, the system is capable of determining if a document is relevant to the set of facts. The document can then be used to corroborate or refute the facts, thereby improving their overall quality.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a network, in accordance with a preferred embodiment of the invention.



FIGS. 2(a)-2(d) are block diagrams illustrating a data structure for facts within a repository of FIG. 1 in accordance with preferred embodiments of the invention.



FIG. 2(e) is a block diagram illustrating an alternate data structure for facts and objects in accordance with preferred embodiments of the invention.



FIG. 3 is a data flow diagram illustrating a corroboration janitor, according to one embodiment of the present invention.



FIG. 4 is a flow chart illustrating a method for corroborating facts, according to one embodiment of the present invention.



FIG. 5(a) is a flow chart illustrating a method for selecting anchor text from a set of candidate anchor texts.



FIG. 5(b) is an example illustrating a method for selecting anchor text from a set of candidate anchor texts.



FIG. 6 is a flow chart illustrating a method for determining if a document contains valid data.





DESCRIPTION OF EMBODIMENTS

Embodiments of the present invention are now described with reference to the figures where like reference numbers indicate identical or functionally similar elements.



FIG. 1 shows a system architecture 100 adapted to support one embodiment of the invention. FIG. 1 shows components used to add facts into, and retrieve facts from a repository 115. The system architecture 100 includes a network 104, through which any number of document hosts 102 communicate with a data processing system 106, along with any number of object requesters 152, 154.


Document hosts 102 store documents and provide access to documents. A document is comprised of any machine-readable data including any combination of text, graphics, multimedia content, etc. A document may be encoded in a markup language, such as Hypertext Markup Language (HTML), i.e., a web page, in an interpreted language (e.g., JavaScript) or in any other computer readable or executable format. A document can include one or more hyperlinks to other documents. A typical document will include one or more facts within its content. A document stored in a document host 102 may be located and/or identified by a Uniform Resource Locator (URL), or Web address, or any other appropriate form of identification and/or location. A document host 102 is implemented by a computer system, and typically includes a server adapted to communicate over the network 104 via networking protocols (e.g., TCP/IP), as well as application and presentation protocols (e.g., HTTP, HTML, SOAP, D-HTML, Java). The documents stored by a host 102 are typically held in a file directory, a database, or other data repository. A host 102 can be implemented in any computing device (e.g., from a PDA or personal computer, a workstation, mini-computer, or mainframe, to a cluster or grid of computers), as well as in any processor architecture or operating system.



FIG. 1 shows components used to manage facts in a fact repository 115. Data processing system 106 includes one or more importers 108, one or more janitors 110, a build engine 112, a service engine 114, and a fact repository 115 (also called simply a “repository”). Each of the foregoing is implemented, in one embodiment, as a software module (or program) executed by processor 116. Importers 108 operate to process documents received from the document hosts, read the data content of documents, and extract facts (as operationally and programmatically defined within the data processing system 106) from such documents. The importers 108 also determine the subject or subjects with which the facts are associated, and extract such facts into individual items of data, for storage in the fact repository 115. In one embodiment, there are different types of importers 108 for different types of documents, for example, dependent on the format or document type.


Janitors 110 operate to process facts extracted by importer 108. This processing can include, but is not limited to, data cleansing, object merging, and fact induction. In one embodiment, there are a number of different janitors 110 that perform different types of data management operations on the facts. For example, one janitor 110 may traverse some set of facts in the repository 115 to find duplicate facts (that is, facts that convey the same factual information) and merge them. Another janitor 110 may also normalize facts into standard formats. Another janitor 110 may also remove unwanted facts from repository 115, such as facts related to pornographic content. Other types of janitors 110 may be implemented, depending on the types of data management functions desired, such as translation, compression, spelling or grammar correction, and the like.


Various janitors 110 act on facts to normalize attribute names and values, and to delete duplicate and near-duplicate facts so an object does not have redundant information. For example, we might find on one page that Britney Spears' birthday is “12/2/1981” while on another page that her date of birth is “Dec. 2, 1981.” Birthday and Date of Birth might both be rewritten as Birthdate by one janitor, and then another janitor might notice that 12/2/1981 and Dec. 2, 1981 are different forms of the same date. It would choose the preferred form, remove the other fact, and combine the source lists for the two facts. As a result, when you look at the source pages for this fact, on some you'll find an exact match of the fact and on others text that is considered to be synonymous with the fact.
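
For purposes of illustration only, the following Python sketch shows how such normalization and merging might look; the attribute synonym table, the date formats, and the record layout are assumptions made for readability and are not part of the described system.

from datetime import datetime

# Hypothetical synonym table: rewrite attribute name variants to one canonical form.
ATTRIBUTE_SYNONYMS = {"birthday": "birthdate", "date of birth": "birthdate"}

def normalize_attribute(attribute):
    return ATTRIBUTE_SYNONYMS.get(attribute.lower(), attribute.lower())

def normalize_date(value):
    """Try a few common date formats and return one preferred form."""
    for fmt in ("%m/%d/%Y", "%b. %d, %Y", "%B %d, %Y"):
        try:
            return datetime.strptime(value, fmt).strftime("%b. %d, %Y")
        except ValueError:
            continue
    return value  # leave values that are not dates untouched

def merge_duplicate_facts(facts):
    """Collapse facts that convey the same information, combining their source lists."""
    merged = {}
    for fact in facts:
        key = (fact["object_id"], normalize_attribute(fact["attribute"]),
               normalize_date(fact["value"]))
        if key in merged:
            merged[key]["sources"].extend(fact["sources"])
        else:
            merged[key] = {"object_id": key[0], "attribute": key[1],
                           "value": key[2], "sources": list(fact["sources"])}
    return list(merged.values())

# The two facts below convey the same birthdate and collapse into one merged fact.
facts = [
    {"object_id": 7, "attribute": "Birthday", "value": "12/2/1981", "sources": ["page-a"]},
    {"object_id": 7, "attribute": "Date of Birth", "value": "Dec. 2, 1981", "sources": ["page-b"]},
]
print(merge_duplicate_facts(facts))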


Build engine 112 builds and manages the repository 115. Service engine 114 is an interface for querying the repository 115. Service engine 114's main function is to process queries, score matching objects, and return them to the caller, but it is also used by janitor 110.


Repository 115 stores factual information extracted from a plurality of documents that are located on document hosts 102. A document from which a particular fact may be extracted is a source document (or “source”) of that particular fact. In other words, a source of a fact includes that fact (or a synonymous fact) within its contents.


Repository 115 contains one or more facts. In one embodiment, each fact is associated with exactly one object. One implementation for this association includes in each fact an object ID that uniquely identifies the object of the association. In this manner, any number of facts may be associated with an individual object, by including the object ID for that object in the facts. In one embodiment, objects themselves are not physically stored in the repository 115, but rather are defined by the set or group of facts with the same associated object ID, as described below. Further details about facts in repository 115 are described below, in relation to FIGS. 2(a)-2(d).


It should be appreciated that in practice at least some of the components of the data processing system 106 will be distributed over multiple computers, communicating over a network. For example, repository 115 may be deployed over multiple servers. As another example, the janitors 110 may be located on any number of different computers. For convenience of explanation, however, the components of the data processing system 106 are discussed as though they were implemented on a single computer.


In another embodiment, some or all of document hosts 102 are located on data processing system 106 instead of being coupled to data processing system 106 by a network. For example, importer 108 may import facts from a database that is a part of or associated with data processing system 106.



FIG. 1 also includes components to access repository 115 on behalf of one or more object requesters 152, 154. Object requesters are entities that request objects from repository 115. Object requesters 152, 154 may be understood as clients of the system 106, and can be implemented in any computer device or architecture. As shown in FIG. 1, a first object requester 152 is located remotely from system 106, while a second object requester 154 is located in data processing system 106. For example, in a computer system hosting a blog, the blog may include a reference to an object whose facts are in repository 115. An object requester 152, such as a browser displaying the blog, will access data processing system 106 so that the information of the facts associated with the object can be displayed as part of the blog web page. As a second example, janitor 110 or other entity considered to be part of data processing system 106 can function as object requester 154, requesting the facts of objects from repository 115.



FIG. 1 shows that data processing system 106 includes a memory 107 and one or more processors 116. Memory 107 includes importers 108, janitors 110, build engine 112, service engine 114, and requester 154, each of which are preferably implemented as instructions stored in memory 107 and executable by processor 116. Memory 107 also includes repository 115. Repository 115 can be stored in a memory of one or more computer systems or in a type of memory such as a disk. FIG. 1 also includes a computer readable medium 118 containing, for example, at least one of importers 108, janitors 110, build engine 112, service engine 114, requester 154, and at least some portions of repository 115. FIG. 1 also includes one or more input/output devices 120 that allow data to be input and output to and from data processing system 106. It will be understood that data processing system 106 preferably also includes standard software components such as operating systems and the like and further preferably includes standard hardware components not shown in the figure for clarity of example.



FIG. 2(a) shows an example format of a data structure for facts within repository 115, according to some embodiments of the invention. As described above, the repository 115 includes facts 204. Each fact 204 includes a unique identifier for that fact, such as a fact ID 210. Each fact 204 includes at least an attribute 212 and a value 214. For example, a fact associated with an object representing George Washington may include an attribute of “date of birth” and a value of “Feb. 22, 1732.” In one embodiment, all facts are stored as alphanumeric characters since they are extracted from web pages. In another embodiment, facts also can store binary data values. Other embodiments, however, may store fact values as mixed types, or in encoded formats.


As described above, each fact is associated with an object ID 209 that identifies the object that the fact describes. Thus, each fact that is associated with the same entity (such as George Washington) will have the same object ID 209. In one embodiment, objects are not stored as separate data entities in memory. In this embodiment, the facts associated with an object contain the same object ID, but no physical object exists. In another embodiment, objects are stored as data entities in memory, and include references (for example, pointers or IDs) to the facts associated with the object. The logical data structure of a fact can take various forms; in general, a fact is represented by a tuple that includes a fact ID, an attribute, a value, and an object ID. The storage implementation of a fact can be in any underlying physical data structure.
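
As a purely illustrative rendering of the tuple just described, the Python sketch below models a fact as a record carrying a fact ID, object ID, attribute, and value, and recovers an object as the group of facts sharing one object ID; the field names and sample values are assumptions chosen to mirror the example of FIG. 2(b), not a prescribed storage format.

from dataclasses import dataclass

@dataclass
class Fact:
    fact_id: int
    object_id: int
    attribute: str
    value: str

# A repository can be pictured as a collection of facts; an "object" is then
# simply the set of facts that share an object ID.
repository = [
    Fact(fact_id=10, object_id=1, attribute="Name", value="China"),
    Fact(fact_id=20, object_id=1, attribute="Category", value="Country"),
    Fact(fact_id=30, object_id=2, attribute="Property",
         value="Bill Clinton was the 42nd President of the United States from 1993 to 2001."),
]

def facts_for_object(facts, object_id):
    """Return all facts associated with the given object ID."""
    return [fact for fact in facts if fact.object_id == object_id]

print(facts_for_object(repository, 1))  # the two facts describing the "China" object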



FIG. 2(b) shows an example of facts having respective fact IDs of 10, 20, and 30 in repository 115. Facts 10 and 20 are associated with an object identified by object ID “1.” Fact 10 has an attribute of “Name” and a value of “China.” Fact 20 has an attribute of “Category” and a value of “Country.” Thus, the object identified by object ID “1” has a name fact 205 with a value of “China” and a category fact 206 with a value of “Country.” Fact 30 208 has an attribute of “Property” and a value of “Bill Clinton was the 42nd President of the United States from 1993 to 2001.” Thus, the object identified by object ID “2” has a property fact with a fact ID of 30 and a value of “Bill Clinton was the 42nd President of the United States from 1993 to 2001.” In the illustrated embodiment, each fact has one attribute and one value. The number of facts associated with an object is not limited; thus while only two facts are shown for the “China” object, in practice there may be dozens, even hundreds of facts associated with a given object. Also, the value fields of a fact need not be limited in size or content. For example, a fact about the economy of “China” with an attribute of “Economy” would have a value including several paragraphs of text, numbers, perhaps even tables of figures. This content can be formatted, for example, in a markup language. For example, a fact having an attribute “original html” might have a value of the original html text taken from the source web page.


Also, while the illustration of FIG. 2(b) shows the explicit coding of object ID, fact ID, attribute, and value, in practice the content of the fact can be implicitly coded as well (e.g., the first field being the object ID, the second field being the fact ID, the third field being the attribute, and the fourth field being the value). Other fields include but are not limited to: the language used to state the fact (English, etc.), how important the fact is, the source of the fact, a confidence value for the fact, and so on.



FIG. 2(c) shows an example object reference table 210 that is used in some embodiments. Not all embodiments include an object reference table. The object reference table 210 functions to efficiently maintain the associations between object IDs and fact IDs. In the absence of an object reference table 210, it is also possible to find all facts for a given object ID by querying the repository to find all facts with a particular object ID. While FIGS. 2(b) and 2(c) illustrate the object reference table 210 with explicit coding of object and fact IDs, the table also may contain just the ID values themselves in column or pair-wise arrangements.
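
One way to picture such an object reference table, offered here only as a sketch, is as an index from object IDs to fact IDs built in a single pass over the facts, so that the facts of an object can be looked up without scanning the whole repository; the dictionary representation is an assumption of this example.

from collections import defaultdict

def build_object_reference_table(facts):
    """Index fact IDs by object ID, analogous to the object reference table 210."""
    table = defaultdict(list)
    for fact in facts:
        table[fact["object_id"]].append(fact["fact_id"])
    return table

facts = [
    {"fact_id": 10, "object_id": 1},
    {"fact_id": 20, "object_id": 1},
    {"fact_id": 30, "object_id": 2},
]
print(build_object_reference_table(facts)[1])  # [10, 20], the fact IDs of object "1"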



FIG. 2(d) shows an example of a data structure for facts within repository 115, according to some embodiments of the invention showing an extended format of facts. In this example, the fields include an object reference link 216 to another object. The object reference link 216 can be an object ID of another object in the repository 115, or a reference to the location (e.g., table row) for the object in the object reference table 210. The object reference link 216 allows facts to have as values other objects. For example, for an object “United States,” there may be a fact with the attribute of “president” and the value of “George W. Bush,” with “George W. Bush” being an object having its own facts in repository 115. In some embodiments, the value field 214 stores the name of the linked object and the link 216 stores the object identifier of the linked object. Thus, this “president” fact would include the value 214 of “George W. Bush”, and an object reference link 216 that contains the object ID for the “George W. Bush” object. In some other embodiments, facts 204 do not include a link field 216 because the value 214 of a fact 204 may store a link to another object.


Each fact 204 also may include one or more metrics 218. A metric provides an indication of some quality of the fact. In some embodiments, the metrics include a confidence level and an importance level. The confidence level indicates the likelihood that the fact is correct. The importance level indicates the relevance of the fact to the object, compared to other facts for the same object. The importance level may optionally be viewed as a measure of how vital a fact is to an understanding of the entity or concept represented by the object.


Each fact 204 includes a list of one or more sources 220 that include the fact and from which the fact was extracted. Each source may be identified by a Uniform Resource Locator (URL), or Web address, or any other appropriate form of identification and/or location, such as a unique document identifier.


The facts illustrated in FIG. 2(d) include an agent field 222 that identifies the importer 108 that extracted the fact. For example, the importer 108 may be a specialized importer that extracts facts from a specific source (e.g., the pages of a particular web site, or family of web sites) or type of source (e.g., web pages that present factual information in tabular form), or an importer 108 that extracts facts from free text in documents throughout the Web, and so forth.


Some embodiments include one or more specialized facts, such as a name fact 207 and a property fact 208. A name fact 207 is a fact that conveys a name for the entity or concept represented by the object ID. A name fact 207 includes an attribute 224 of “name” and a value, which is the name of the object. For example, for an object representing the country Spain, a name fact would have the value “Spain.” A name fact 207, being a special instance of a general fact 204, includes the same fields as any other fact 204; it has an attribute, a value, a fact ID, metrics, sources, etc. The attribute 224 of a name fact 207 indicates that the fact is a name fact, and the value is the actual name. The name may be a string of characters. An object ID may have one or more associated name facts, as many entities or concepts can have more than one name. For example, an object ID representing Spain may have associated name facts conveying the country's common name “Spain” and the official name “Kingdom of Spain.” As another example, an object ID representing the U.S. Patent and Trademark Office may have associated name facts conveying the agency's acronyms “PTO” and “USPTO” as well as the official name “United States Patent and Trademark Office.” If an object does have more than one associated name fact, one of the name facts may be designated as a primary name and other name facts may be designated as secondary names, either implicitly or explicitly.


A property fact 208 is a fact that conveys a statement about the entity or concept represented by the object ID. Property facts are generally used for summary information about an object. A property fact 208, being a special instance of a general fact 204, also includes the same parameters (such as attribute, value, fact ID, etc.) as other facts 204. The attribute field 226 of a property fact 208 indicates that the fact is a property fact (e.g., attribute is “property”) and the value is a string of text that conveys the statement of interest. For example, for the object ID representing Bill Clinton, the value of a property fact may be the text string “Bill Clinton was the 42nd President of the United States from 1993 to 2001.” Some object IDs may have one or more associated property facts while other objects may have no associated property facts. It should be appreciated that the data structures shown in FIGS. 2(a)-2(d) and described above are merely exemplary. The data structure of the repository 115 may take on other forms. Other fields may be included in facts and some of the fields described above may be omitted. Additionally, each object ID may have additional special facts aside from name facts and property facts, such as facts conveying a type or category (for example, person, place, movie, actor, organization, etc.) for categorizing the entity or concept represented by the object ID. In some embodiments, an object's name(s) and/or properties may be represented by special records that have a different format than the general facts records 204.


As described previously, a collection of facts is associated with an object ID of an object. An object may become a null or empty object when facts are disassociated from the object. A null object can arise in a number of different ways. One type of null object is an object that has had all of its facts (including name facts) removed, leaving no facts associated with its object ID. Another type of null object is an object that has all of its associated facts other than name facts removed, leaving only its name fact(s). Alternatively, the object may be a null object only if all of its associated name facts are removed. A null object represents an entity or concept for which the data processing system 106 has no factual information and, as far as the data processing system 106 is concerned, does not exist. In some embodiments, facts of a null object may be left in the repository 115, but have their object ID values cleared (or have their importance set to a negative value). However, the facts of the null object are treated as if they were removed from the repository 115. In some other embodiments, facts of null objects are physically removed from repository 115.



FIG. 2(e) is a block diagram illustrating an alternate data structure 290 for facts and objects in accordance with preferred embodiments of the invention. In this data structure, an object 290 contains an object ID 292 and references or points to facts 294. Each fact includes a fact ID 295, an attribute 297, and a value 299. In this embodiment, an object 290 actually exists in memory 107.



FIG. 3 is a data flow diagram illustrating a corroboration janitor, according to one embodiment of the present invention. As described above, a document 301 is processed by an importer 302, resulting in an object 304. The facts of the object 304 are corroborated by the corroboration janitor 306 through consultation of a document 307. The resulting information is then stored in the object 304, or, alternatively, in a separate corroborated object (not shown). In a preferred embodiment, the document 301 is distinct from the document 307.


As described above, the object 304 may explicitly exist in an object repository, or it may exist merely as a collection of facts with a common object ID. Reference is made to particular objects for the purposes of illustration; one of skill in the art will recognize that the systems and methods described herein are applicable to a variety of implementations and that such references are not limiting.


The object 304 has a name 305. According to one embodiment of the present invention, the name of the object 304 is implemented as a fact associated with the object 304. In another embodiment, the object exists as a set of facts and the name 305 is a fact associated with the same object ID as the set of facts.


The document 307 contains information which may or may not be relevant to the object 304. If the corroboration janitor 306 determines that the document 307 is relevant to the object 304, the corroboration janitor 306 uses the document 307 to corroborate the object 304. According to one embodiment of the present invention, the corroboration janitor 306 iterates over a collection of documents in a repository. Each document and the anchor text of the references pointing to it are analyzed to determine if the document describes an object in the object repository. If the corroboration janitor 306 determines that the document describes an object in the object repository, such as the object 304, the corroboration janitor 306 corroborates the object 304 using the document.


To facilitate determining if the document 307 is relevant to the object 304, the corroboration janitor 306 receives a plurality of documents 309. Each document 309 includes a reference 311 to document 307. References 311 may include, for example, hyperlinks, pointers, or descriptors, but other examples of references may be used without departing from the scope of the present invention. The documents 309 may further contain references (not shown) to documents other than document 307.


Each reference 311 includes some anchor text. Anchor text is text that is presented to a user in association with the reference. For example, the reference can be an HTML hyperlink:


<A HREF="http://maps.google.com"> Revolutionary User Interface </A>


In this example, “Revolutionary User Interface” would be the anchor text of the reference. According to the HTML protocol, “Revolutionary User Interface” would be presented to the user in association with a reference to the document found at “http://maps.google.com”.
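
For illustration only, the following Python sketch shows one way anchor text and the referenced URL could be pulled out of markup such as the hyperlink above, using the standard library's html.parser; the specification does not prescribe any particular extraction mechanism.

from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    """Collect (href, anchor text) pairs from HTML markup."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.anchors = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.anchors.append((self._href, "".join(self._text).strip()))
            self._href = None

parser = AnchorExtractor()
parser.feed('<A HREF="http://maps.google.com"> Revolutionary User Interface </A>')
print(parser.anchors)  # [('http://maps.google.com', 'Revolutionary User Interface')]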


The references 311 are similar in that they refer to document 307, but the anchor text of each may vary among the various references 311. For example, the reference 311A may have anchor text “Banff” while the reference 311B has anchor text “ski resort”. Also, the anchor text may be common among some of the various references 311. For example, references 311A and 311C may both have the anchor text “Banff”. The set of all anchor text for references to the document 307 forms a set of candidate anchor texts.


The corroboration janitor 306 may also receive other documents and inputs not shown. A method used by the corroboration janitor, according to one embodiment of the present invention, is described herein with reference to FIGS. 4-6. By consulting the object name 305, the document 307, and the references 311 to document 307, the corroboration janitor is capable of using anchor text to more accurately corroborate the object 304.


For the purposes of illustration, a single document 307 is shown for corroborating the object 304. In a preferred embodiment, a plurality of documents 307 and documents 309 are used by the corroboration janitor 306. Corroboration using a plurality of documents 307 may be performed iteratively, in parallel, or both. According to one embodiment of the present invention, the documents 307 may be filtered to select documents likely to be relevant to the object 304. For example, the documents 307 may be documents that contain certain distinguishing facts of the object 304, making them likely candidates for corroboration. According to another embodiment of the present invention, a group of objects could be filtered to select an object 304 to which the document 307 (or set of documents 307) will be relevant.



FIG. 4 is a flow chart illustrating a method for corroborating facts, according to one embodiment of the present invention. While the method is described herein for the purposes of illustration as being performed by a corroboration janitor, the method is also useful in other contexts in which it is desired to corroborate information with an identifier such as a name against other information, for example, information gathered from the world wide web.


According to one embodiment of the present invention, a set of candidate anchor texts is received by the corroboration janitor 306 and the corroboration janitor 306 selects 402 anchor text from the set of candidate anchor texts. The set of candidate anchor texts is the set of all the anchor texts of the references to the document described herein with reference to FIG. 3. A method for selecting 402 anchor text, according to one embodiment of the present invention, is described herein with reference to FIG. 5.


According to another embodiment of the present invention, selecting 402 anchor text is optional and the received anchor text is equivalent to the selected anchor text. Selecting 402 anchor text may be superfluous, for example, when only one anchor text is contained in the references 311, or when all of the anchor texts in the references 311 are to be iteratively used for the purposes of corroboration.


The corroboration janitor 306 determines 404 if the selected anchor text matches the name 305 of the object 304. Determining 404 if the selected anchor text matches the name 305 of the object 304 may be performed using a variety of methods. For example, the corroboration janitor 306 may determine 404 if the selected anchor text matches the name of the object by comparing the selected anchor text to the name 305. Such a comparison may be performed using a variety of thresholds. Different thresholds may be useful for different purposes. For example, in one application it may be desirable to require that the selected anchor text be a character-by-character duplicate of the name, while in another application more variance between the selected anchor text and the name may be tolerated while still considering the two a match.
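
A minimal sketch of such a comparison follows, assuming normalized casing and whitespace and using a similarity ratio from Python's standard library as the tolerance knob; the specific threshold values and the normalization steps are illustrative assumptions rather than requirements of the method.

from difflib import SequenceMatcher

def normalize(text):
    """Lower-case and collapse whitespace before comparing."""
    return " ".join(text.lower().split())

def anchor_matches_name(anchor_text, object_name, threshold=1.0):
    """With threshold=1.0 this demands an exact (normalized) duplicate;
    lowering the threshold tolerates more variance between the two strings."""
    ratio = SequenceMatcher(None, normalize(anchor_text), normalize(object_name)).ratio()
    return ratio >= threshold

print(anchor_matches_name("Banff", "Banff"))                  # True: exact duplicate
print(anchor_matches_name("Banff ski resort", "Banff", 0.4))  # True under a looser threshold
print(anchor_matches_name("click here", "Banff"))             # False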


If the corroboration janitor 306 determines 404 that the selected anchor text does not match the name 305 of the object 304, the corroboration janitor 306 returns 410. FIG. 4 illustrates corroboration using a single document 307. In a preferred embodiment, the corroboration janitor 306 iteratively attempts to corroborate the object 304 using a variety of documents, and returning 410 causes the corroboration janitor 306 to attempt to corroborate the object 304 using a different document 307.


If the corroboration janitor 306 determines 404 that the selected anchor text matches the name 305 of the object 304, the corroboration janitor 306 determines 406 if the document 307 contains valid data. The corroboration janitor 306 may determine 406 if the document 307 contains valid data, for example, by analyzing either individually or in combination the selected anchor text, the document 307, and the name 305 of the object 304. A method for determining if a document contains valid data, according to one embodiment of the present invention, is described herein with reference to FIG. 6.


If the corroboration janitor 306 determines 406 that the document 307 does not contain valid data, the corroboration janitor 306 returns 410. If the corroboration janitor 306 determines 406 that the document 307 contains valid data, the corroboration janitor 306 corroborates the facts of the object 304 using the document 307. A method for corroborating facts is described in U.S. application Ser. No. 11/097,688, entitled “Corroborating Facts Extracted from Multiple Sources”, incorporated by reference above. Further techniques relevant to the corroboration of facts may be found in the other applications incorporated by reference above.
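
Pulling the steps of FIG. 4 together, the following Python sketch illustrates the control flow only; the callables passed in stand for the selection, matching, validity, and corroboration techniques described with reference to FIGS. 5 and 6 and the incorporated applications, and are not defined by this specification.

def corroborate_object(obj, document, candidate_anchor_texts,
                       select_anchor_text, anchor_matches_name,
                       contains_valid_data, corroborate_facts):
    """Sketch of the FIG. 4 flow; returns None when the document is not used."""
    anchor_text = select_anchor_text(candidate_anchor_texts)          # select 402
    if not anchor_matches_name(anchor_text, obj["name"]):             # determine 404
        return None                                                   # return 410
    if not contains_valid_data(anchor_text, obj["name"], document):   # determine 406
        return None                                                   # return 410
    return corroborate_facts(obj, document)                           # corroborate using document 307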


By determining if the anchor text matches the name associated with the object 304, the corroboration janitor 306 beneficially limits corroboration to documents likely to be relevant to the object 304, thereby increasing the effectiveness and trustworthiness of the corroboration.



FIG. 5(a) is a flow chart illustrating a method for selecting anchor text from a set of candidate anchor texts. FIG. 5(b) is an example illustrating a method for selecting anchor text from a set of candidate anchor texts. According to one embodiment of the present invention, the method is performed by the corroboration janitor 306. While the method is described herein for the purposes of illustration as being performed by a corroboration janitor, the method is also useful in other contexts in which it is desired to select an instance of text from a set of candidate texts. The method described with reference to FIGS. 5(a) and 5(b) is herein referred to as n-gram clustering. A similar method for n-gram clustering, also applicable to selecting anchor text from a set of candidate anchor texts, is described in U.S. application Ser. No. 11/142,765, entitled “Identifying the Unifying Subject of a Set of Facts”, incorporated by reference above.


The method starts with a set of candidate anchor texts. In the described embodiment, all the candidate anchor texts refer to the same document 307. The corroboration janitor 306 aggregates 502 the instances of anchor text within the set. Aggregating 502 the instances of anchor text within the set tallies the number of repeated occurrences of the same anchor text. For example, in a set having hundreds of candidate anchor texts, it is quite likely that some of those anchor texts may be repeated. These repeated anchor texts may be identified and organized by their frequency.



FIG. 5(b) illustrates the aggregation of 42,239 instances of anchor text in a set of candidate anchor texts. The anchor text has been aggregated, and the set of candidate anchor texts is represented as five unique instances: “Delicious cheese”, occurring 6112 times, “Eggplant Parmesan”, occurring 10917 times, “Eggplant Parmesan Recipe”, occurring 25192 times, “Eggplant Parmesiane Recipe”, occurring 17 times, and “EGG PARM!”, occurring 1 time.


The corroboration janitor 306 maps 504 the aggregated anchor texts in n-dimensional space based on the similarity among the aggregated anchor texts, where n can be any integer. According to a preferred embodiment, the corroboration janitor 306 maps the aggregated anchor text in 2023-dimensional space. More similar aggregated anchor text is mapped more closely, and less similar aggregated anchor text is mapped farther apart. FIG. 5(b) illustrates a two-dimensional mapping by similarity of the aggregated anchor texts. For example, “Eggplant Parmesan” and “Eggplant Parmesan Recipe” are comparatively close, while “Delicious cheese” and “Eggplant Parmesiane Recipe” are comparatively remote.


The corroboration janitor 306 finds 506 the center of mass of the aggregated and mapped anchor texts, wherein each aggregated and mapped anchor text is weighted by the number of instances of that anchor text. FIG. 5(b) illustrates the center of mass 510 that has been calculated. The center of mass 510 reflects both the relative n-space proximity and the frequency of the anchor texts in the set of candidate anchor texts. “Eggplant Parmesan”, for example, occurs thousands of times, and therefore will greatly influence the center of mass 510. “EGG PARM!”, however, occurs only once, and therefore will have minimal impact on the center of mass 510.


According to one embodiment of the present invention, various metrics may be used to weight the various anchor texts for the calculation of the center of mass. For example, the various anchor texts can be weighted according to some score (such as PageRank) based on their source. Other metrics will be apparent to one of skill in the art without departing from the scope of the present invention.


The corroboration janitor 306 selects 508 the anchor text closest in proximity to the center of mass. In the example illustrated in FIG. 5(b), “Eggplant Parmesan Recipe” would be selected, because it is the closest anchor text to the center of mass 510.
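
A minimal Python sketch of this selection procedure is given below, under the assumption that each aggregated anchor text is mapped into vector space using counts of its character trigrams (one plausible reading of n-gram clustering); the weighting by frequency, the weighted center of mass, and the choice of the nearest anchor text follow the steps above, but the exact representation used by the corroboration janitor is not specified here.

import math
from collections import Counter

def ngram_vector(text, n=3):
    """Map a text to a sparse vector of character n-gram counts."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(max(len(text) - n + 1, 1)))

def select_anchor_text(candidate_anchor_texts, n=3):
    """Aggregate 502 the instances, map 504 them to n-gram vectors, find 506 the
    frequency-weighted center of mass, and select 508 the closest anchor text."""
    counts = Counter(candidate_anchor_texts)
    vectors = {text: ngram_vector(text, n) for text in counts}

    # Weighted center of mass over all n-gram dimensions.
    total = sum(counts.values())
    center = Counter()
    for text, weight in counts.items():
        for gram, value in vectors[text].items():
            center[gram] += weight * value / total

    def distance(vector):
        dims = set(vector) | set(center)
        return math.sqrt(sum((vector.get(d, 0) - center.get(d, 0)) ** 2 for d in dims))

    return min(counts, key=lambda text: distance(vectors[text]))

anchors = (["Eggplant Parmesan Recipe"] * 25192 + ["Eggplant Parmesan"] * 10917 +
           ["Delicious cheese"] * 6112 + ["Eggplant Parmesiane Recipe"] * 17 +
           ["EGG PARM!"])
print(select_anchor_text(anchors))  # "Eggplant Parmesan Recipe", as in FIG. 5(b)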


Selecting anchor text based on an n-gram clustering method such as the one described herein is beneficially capable of considering the influence of a large set of candidate anchor texts. By weighting each anchor text by its frequency, the influence of numerical outliers is reduced and the likelihood of an accurate summarization of the set of candidate anchor texts is increased. By determining the center of mass based on the similarity among the anchor texts, there is a high likelihood of selecting an anchor text that has significant features in common with the other members of the set of candidate anchor texts.


According to another embodiment of the present invention, the corroboration janitor 306 aggregates the instances of anchor text within the set and selects the most frequent instance.
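
This alternative can be sketched in a couple of lines; the use of Python's Counter here is, again, purely illustrative.

from collections import Counter

def select_most_frequent_anchor_text(candidate_anchor_texts):
    """Alternative selection: pick the anchor text that occurs most often in the set."""
    return Counter(candidate_anchor_texts).most_common(1)[0][0]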



FIG. 6 is a flow chart illustrating a method for determining if a document contains valid data. According to one embodiment of the present invention, the method is performed by the corroboration janitor 306. While the method is described herein for the purposes of illustration as being performed by a corroboration janitor, the method is also useful in other contexts in which it is desired to determine if a source document contains valid data.


The method begins with anchor text (according to one embodiment of the present invention, the selected anchor text from 402), an object name (the name 305 of the object 304), and the document 307. The corroboration janitor 306 determines 602 if the anchor text contains known noise. For example, the corroboration janitor 306 may analyze the anchor text and attempt to recognize known noise terms such as “click here”.


If the corroboration janitor 306 determines 602 that the anchor text contains known noise, the corroboration janitor 306 returns 608 an indication of noise, beneficially preventing the corroboration janitor 306 from using the document 307 for the purposes of corroboration. Alternatively, returning 608 an indication of noise may cause the corroboration janitor 306 to select a different anchor text. According to one embodiment of the present invention, despite the indication returned by the corroboration janitor 306, the document 307 may be used by other janitors.


If the corroboration janitor 306 determines 602 that the anchor text does not contain known noise, the corroboration janitor 306 determines 604 if the object name and/or anchor text is found in the document 307. According to one embodiment of the present invention, if the document 307 is an HTML document, the corroboration janitor 306 looks specifically in the HTML header to determine if the header contains the object name and/or anchor text. For example, the corroboration janitor 306 may search for text in between tags indicating a title of the document 307.


If the corroboration janitor 306 determines 604 that the object name and/or anchor text is not found in the document 307, the corroboration janitor 306 returns 608 an indication of noise. If the corroboration janitor 306 determines 604 that the object name and/or anchor text is found in the document 307, the corroboration janitor 306 returns 606 an indication of valid data. According to one embodiment of the present invention, returning 606 an indication of valid data would allow the corroboration janitor 306 to use the document 307 for the purposes of corroboration. By determining that the object name and/or anchor text is found in the document before corroborating the object, the corroboration janitor 306 beneficially ensures that the document is relevant to the object in question, thereby increasing confidence in the document and improving the quality of corroboration.
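
The checks of FIG. 6 can be sketched as follows; the particular noise terms, the use of the HTML title element as the header text, and the case-insensitive containment test are illustrative assumptions of this example.

import re

KNOWN_NOISE = {"click here", "read more", "home"}   # hypothetical noise terms

def contains_valid_data(anchor_text, object_name, html_document):
    """Return True if the document appears usable for corroboration (FIG. 6)."""
    if anchor_text.strip().lower() in KNOWN_NOISE:                        # determine 602
        return False                                                      # return 608: noise
    match = re.search(r"<title>(.*?)</title>", html_document,
                      re.IGNORECASE | re.DOTALL)
    header = match.group(1).lower() if match else html_document.lower()
    if object_name.lower() in header or anchor_text.lower() in header:    # determine 604
        return True                                                       # return 606: valid data
    return False                                                          # return 608: noise

doc = "<html><head><title>Banff National Park travel guide</title></head><body>...</body></html>"
print(contains_valid_data("Banff", "Banff", doc))        # True
print(contains_valid_data("click here", "Banff", doc))   # False: known noise term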


Because the anchor text used in references to a document typically describes the content of that document, comparing anchor text to an object name helps to determine if the document and the object refer to or describe the same entity, and therefore if the document can reliably be used to corroborate the facts known about that object. Anchor text, however, can often contain noise, or can misleadingly suggest that a document is relevant to a topic to which it is not in fact relevant. By determining if the document contains the object name (or the anchor text), the impact of such misleading anchor text can be beneficially reduced, and the most relevant documents applied to an object for the purposes of corroboration.


Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.


Some portions of the above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.


Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention can be embodied in software, firmware or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.


The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.


The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.


While the invention has been particularly shown and described with reference to a preferred embodiment and several alternate embodiments, it will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention.


Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims
  • 1. A computer-implemented method for corroborating a set of facts included in a fact repository, the method comprising: at a back end computer system including one or more processors and memory storing one or more programs, the one or more processors executing the one or more programs to perform the operations of:identifying a set of facts associated with an object, the set of facts having been previously extracted from multiple documents, each fact comprising an attribute-value pair, including a fact attribute type and a fact value, and wherein the object is associated with an entity having a name fact attribute type;receiving a first document not included in the multiple documents and a reference to the first document, said reference comprising user-viewable anchor text extracted from a second document;determining that the user-viewable anchor text matches the name of the entity associated with the object;determining that one or both of the name of the entity associated with the object or the user-viewable anchor text appears in the first document; andresponsive to determining that the user-viewable anchor text matches the name of the entity associated with the object and that one or both of the name of the entity associated with the object or the user-viewable anchor text appears in the first document, corroborating the set of facts using the first document;the corroborating comprising:identifying one or more facts in the first document, each identified fact having an attribute-value pair; andcomparing a respective attribute-value pair of the set of facts to an identified attribute-value pair in the first document; andupdating the set of facts in accordance with the corroborating, wherein the updating includes one or both of storing an attribute-value pair from the first document in the set of facts in association with the object or adjusting a status of an attribute-value pair of the set of facts.
  • 2. The method of claim 1, wherein receiving the reference to the document comprises receiving a set of candidate anchor texts, and wherein determining if the anchor text matches the name of the entity associated with the object comprises: selecting an anchor text from the set of candidate anchor texts; anddetermining if the selected anchor text matches the name of the entity associated with the object.
  • 3. The method of claim 2, wherein selecting the anchor text from the set of candidate anchor texts comprises aggregating the set of candidate anchor texts.
  • 4. The method of claim 2, wherein selecting the anchor text from the set of candidate anchor texts comprises performing n-gram clustering on at least one member of the set of candidate anchor texts.
  • 5. The method of claim 1, further comprising: analyzing the first document to determine if the first document contains data likely to be relevant to the object.
  • 6. The method of claim 5, wherein the data likely to be relevant to the object comprises the name of the entity associated with the object.
  • 7. The method of claim 6, wherein the first document comprises an HTML document comprising a header, and wherein analyzing the first document includes searching the header of the HTML document for the name of the entity associated with the object.
  • 8. The method of claim 5, wherein the data likely to be relevant to the object comprises anchor text.
  • 9. The method of claim 8, wherein the first document comprises an HTML document comprising a header, and wherein analyzing the first document includes searching the header of the HTML document for the anchor text.
  • 10. The method of claim 1, further comprising: analyzing the anchor text to determine if the first document contains data likely to be relevant to the object.
  • 11. The method of claim 10, wherein analyzing the anchor text to determine if the first document contains data likely to be relevant to the object comprises comparing the anchor text to a list of known noise text.
  • 12. The method of claim 1, wherein the name of the entity associated with the object comprises an object name.
  • 13. The method of claim 1, wherein the set of facts comprises an object.
  • 14. The method of claim 1, wherein adjusting the status includes marking the attribute-value pair as proper for retention in the set of facts.
  • 15. The method of claim 1, wherein adjusting the status includes marking the attribute-value pair for deletion from the set of facts.
  • 16. The method of claim 1, wherein adjusting the status includes marking the attribute-value pair as proper for access from the set of facts.
  • 17. The method of claim 1, wherein the name of the entity is selected from the group consisting of: a person's name, a place name, a company name, and an organization name.
  • 18. A back end computer system for corroborating a set of facts included in a fact repository, the system comprising: one or more processors; memory; and one or more programs stored in the memory, the one or more programs comprising instructions to:identify a set of facts associated with an object, the set of facts having been previously extracted from multiple documents of a collection of documents, each fact comprising an attribute-value pair, including a fact attribute type and a fact value, and wherein the object is associated with an entity having a name fact attribute type;receive a first document not included in the multiple documents from the collection of documents and a reference to the first document, said reference comprising user-viewable anchor text extracted from a second document from the collection of documents;determine that the user-viewable anchor text matches the name of the entity associated with the object;determine that one or both of the name of the entity associated with the object or the user-viewable anchor text appears in the first document; andresponsive to determining that the user-viewable anchor text matches the name of the entity associated with the object and that one or both of the name of the entity associated with the object or the user-viewable anchor text appears in the first document, corroborate the set of facts using the first document;the corroborating comprising: identifying one or more facts in the first document, each identified fact having an attribute-value pair; andcomparing a respective attribute-value pair of the set of facts to an identified attribute-value pair in the first document; andupdate the set of facts in accordance with the corroborating, wherein the updating includes one or both of storing an attribute-value pair from the first document in the set of facts in association with the object or adjusting a status of an attribute-value pair of the set of facts.
  • 19. The system of claim 18, wherein said set of facts comprises an object, and wherein said name comprises an object name.
  • 20. A non-transitory computer readable storage medium storing one or more programs for execution by one or more processors of a back end computer system, the one or more programs and comprising instructions for: identifying a set of facts associated with an object, the set of facts having been previously extracted from multiple documents of a collection of documents, each fact comprising an attribute-value pair, including a fact attribute type and a fact value, and wherein the object is associated with an entity having a name fact attribute type;receiving a first document not included in the multiple documents from the collection of documents and a reference to the first document, said reference comprising user-viewable anchor text extracted from a second document from the collection of documents;determining that the user-viewable anchor text matches the name of the entity associated with the object;determining that one or both of the name of the entity associated with the object or the user-viewable anchor text appears in the first document; andresponsive to determining that the user-viewable anchor text matches the name of the entity associated with the object and that one or both of the name of the entity associated with the object or the user-viewable anchor text appears in the first document, corroborating the set of facts using the first document;
  • 21. The non-transitory computer readable storage medium of claim 20, said computer-readable storage medium further comprising: program code for analyzing the anchor text to determine if the first document contains valid data.
  • 22. The non-transitory computer readable storage medium of claim 21, wherein said program code for analyzing the anchor text to determine if the first document contains valid data comprises program code for comparing the anchor text to a list of known noise text.
  • 23. The non-transitory computer readable storage medium of claim 20, wherein the name associated with the object comprises an object name.
  • 24. The non-transitory computer readable storage medium of claim 20, wherein the set of facts comprises an object.
  • 25. The non-transitory computer readable storage medium of claim 20, wherein the name of the entity is selected from the group consisting of: a person's name, a place name, a company name, and an organization name.
RELATED APPLICATIONS

This application is a continuation-in-part of U.S. application Ser. No. 11/097,688, entitled, “Corroborating Facts Extracted from Multiple Sources”, by Jonathan T. Betz, filed on Mar. 31, 2005 now U.S. Pat. No. 8,682,913, which is hereby incorporated by reference. This application is related to the following applications, all of which are hereby incorporated by reference: U.S. application Ser. No. 11/142,765, entitled, “Identifying the Unifying Subject of a Set of Facts”, by Jonathan Betz, filed on May 31, 2005;U.S. application Ser. No. 11/366,162, entitled “Generating Structured Information,” filed Mar. 1, 2006, by Egon Pasztor and Daniel Egnor;U.S. application Ser. No. 11/357,748, entitled “Support for Object Search”, filed Feb. 17, 2006, by Alex Kehlenbeck, Andrew W. Hogue;U.S. application Ser. No. 11/342,290, entitled “Data Object Visualization”, filed on Jan. 27, 2006, by Andrew W. Hogue, David Vespe, Alex Kehlenbeck, Mike Gordon, Jeffrey C. Reynar, David Alpert;U.S. application Ser. No. 11/342,293, entitled “Data Object Visualization Using Maps”, filed on Jan. 27, 2006, by Andrew W. Hogue, David Vespe, Alex Kehlenbeck, Mike Gordon, Jeffrey C. Reynar, David Alpert;U.S. application Ser. No. 11/356,679, entitled “Query Language”, filed Feb. 17, 2006, by Andrew W. Hogue, Doug Rohde;U.S. application Ser. No. 11/356,837, entitled “Automatic Object Reference Identification and Linking in a Browseable Fact Repository”, filed Feb. 17, 2006, by Andrew W. Hogue;U.S. application Ser. No. 11/356,851, entitled “Browseable Fact Repository”, filed Feb. 17, 2006, by Andrew W. Hogue, Jonathan T. Betz;U.S. application Ser. No. 11/356,842, entitled “ID Persistence Through Normalization”, filed Feb. 17, 2006, by Jonathan T. Betz, Andrew W. Hogue;U.S. application Ser. No. 11/356,728, entitled “Annotation Framework”, filed Feb. 17, 2006, by Tom Richford, Jonathan T. Betz;U.S. application Ser. No. 11/341,069, entitled “Object Categorization for Information Extraction”, filed on Jan. 27, 2006, by Jonathan T. Betz;U.S. application Ser. No. 11/356,838, entitled “Modular Architecture for Entity Normalization”, filed Feb. 17, 2006, by Jonathan T. Betz, Farhan Shamsi;U.S. application Ser. No. 11/356,765, entitled “Attribute Entropy as a Signal in Object Normalization”, filed Feb. 17, 2006, by Jonathan T. Betz, Vivek Menezes;U.S. application Ser. No. 11/341,907, entitled “Designating Data Objects for Analysis”, filed on Jan. 27, 2006, by Andrew W. Hogue, David Vespe, Alex Kehlenbeck, Mike Gordon, Jeffrey C. Reynar, David Alpert;U.S. application Ser. No. 11/342,277, entitled “Data Object Visualization Using Graphs”, filed on Jan. 27, 2006, by Andrew W. Hogue, David Vespe, Alex Kehlenbeck, Mike Gordon, Jeffrey C. Reynar, David Alpert;U.S. application Ser. No. 11/394,508 entitled “Entity Normalization Via Name Normalization”, filed on Mar. 31, 2006, by Jonathan T. Betz;U.S. application Ser. No. 11/394,610 entitled “Determining Document Subject by Using Title and Anchor Text of Related Documents”, filed on Mar. 31, 2006, by Shubin Zhao;U.S. application Ser. No. 11/394,414 entitled “Unsupervised Extraction of Facts”, filed on Mar. 31, 2006, by Jonathan T. Betz and Shubin Zhao;

US Referenced Citations (304)
Number Name Date Kind
5010478 Deran Apr 1991 A
5133075 Risch Jul 1992 A
5347653 Flynn et al. Sep 1994 A
5440730 Elmasri et al. Aug 1995 A
5475819 Miller et al. Dec 1995 A
5519608 Kupiec May 1996 A
5546507 Staub Aug 1996 A
5560005 Hoover et al. Sep 1996 A
5574898 Leblang et al. Nov 1996 A
5675785 Hall et al. Oct 1997 A
5680622 Even Oct 1997 A
5694590 Thuraisingham et al. Dec 1997 A
5701470 Joy et al. Dec 1997 A
5717911 Madrid et al. Feb 1998 A
5717951 Yabumoto Feb 1998 A
5724571 Woods Mar 1998 A
5778373 Levy et al. Jul 1998 A
5778378 Rubin Jul 1998 A
5787413 Kauffman et al. Jul 1998 A
5793966 Amstein et al. Aug 1998 A
5802299 Logan et al. Sep 1998 A
5815415 Bentley et al. Sep 1998 A
5819210 Maxwell, III et al. Oct 1998 A
5819265 Ravin et al. Oct 1998 A
5822743 Gupta et al. Oct 1998 A
5826258 Gupta et al. Oct 1998 A
5838979 Hart et al. Nov 1998 A
5909689 Van Ryzin Jun 1999 A
5920859 Li Jul 1999 A
5943670 Prager Aug 1999 A
5956718 Prasad et al. Sep 1999 A
5974254 Hsu Oct 1999 A
5987460 Niwa et al. Nov 1999 A
6006221 Liddy et al. Dec 1999 A
6018741 Howland Jan 2000 A
6038560 Wical Mar 2000 A
6044366 Graffe et al. Mar 2000 A
6052693 Smith et al. Apr 2000 A
6064952 Imanaka et al. May 2000 A
6073130 Jacobson et al. Jun 2000 A
6078918 Allen et al. Jun 2000 A
6112203 Bharat et al. Aug 2000 A
6112210 Nori et al. Aug 2000 A
6122647 Horowitz et al. Sep 2000 A
6134555 Chadha et al. Oct 2000 A
6138270 Hsu Oct 2000 A
6182063 Woods Jan 2001 B1
6202065 Wills Mar 2001 B1
6212526 Chaudhuri et al. Apr 2001 B1
6240546 Lee et al. May 2001 B1
6263328 Coden et al. Jul 2001 B1
6263358 Lee et al. Jul 2001 B1
6266805 Nwana et al. Jul 2001 B1
6285999 Page Sep 2001 B1
6289338 Stoffel et al. Sep 2001 B1
6311194 Sheth et al. Oct 2001 B1
6314555 Ndumu et al. Nov 2001 B1
6327574 Kramer et al. Dec 2001 B1
6349275 Schumacher et al. Feb 2002 B1
6377943 Jakobsson Apr 2002 B1
6397228 Lamburt et al. May 2002 B1
6438543 Kazi et al. Aug 2002 B1
6470330 Das et al. Oct 2002 B1
6473898 Waugh et al. Oct 2002 B1
6487495 Gale et al. Nov 2002 B1
6502102 Haswell et al. Dec 2002 B1
6519631 Rosenschein et al. Feb 2003 B1
6556991 Borkovsky Apr 2003 B1
6565610 Wang et al. May 2003 B1
6567846 Garg et al. May 2003 B1
6567936 Yang et al. May 2003 B1
6572661 Stern Jun 2003 B1
6578032 Chandrasekar et al. Jun 2003 B1
6584464 Warthen Jun 2003 B1
6584646 Fujita Jul 2003 B2
6594658 Woods Jul 2003 B2
6606625 Muslea et al. Aug 2003 B1
6606659 Hegli et al. Aug 2003 B1
6609123 Cazemier et al. Aug 2003 B1
6636742 Torkki et al. Oct 2003 B1
6643641 Snyder Nov 2003 B1
6665659 Logan Dec 2003 B1
6665666 Brown et al. Dec 2003 B1
6665837 Dean et al. Dec 2003 B1
6675159 Lin et al. Jan 2004 B1
6684205 Modha et al. Jan 2004 B1
6693651 Biebesheimer et al. Feb 2004 B2
6704726 Amouroux Mar 2004 B1
6738767 Chung et al. May 2004 B1
6745189 Schreiber Jun 2004 B2
6754873 Law et al. Jun 2004 B1
6763496 Hennings et al. Jul 2004 B1
6799176 Page Sep 2004 B1
6804667 Martin Oct 2004 B1
6820081 Kawai et al. Nov 2004 B1
6820093 de la Huerga Nov 2004 B2
6823495 Vedula et al. Nov 2004 B1
6832218 Emens et al. Dec 2004 B1
6845354 Kuo et al. Jan 2005 B1
6850896 Kelman et al. Feb 2005 B1
6868411 Shanahan Mar 2005 B2
6873982 Bates et al. Mar 2005 B1
6873993 Charlesworth et al. Mar 2005 B2
6886005 Davis Apr 2005 B2
6886010 Kostoff Apr 2005 B2
6901403 Bata et al. May 2005 B1
6904429 Sako et al. Jun 2005 B2
6957213 Yuret Oct 2005 B1
6963880 Pingte et al. Nov 2005 B1
6965900 Srinivasa et al. Nov 2005 B2
6996572 Chakrabarti et al. Feb 2006 B1
7003506 Fisk et al. Feb 2006 B1
7003522 Reynar et al. Feb 2006 B1
7003719 Rosenoff et al. Feb 2006 B1
7007228 Carro Feb 2006 B1
7013308 Tunstall-Pedoe Mar 2006 B1
7020662 Boreham et al. Mar 2006 B2
7043521 Eitel May 2006 B2
7051023 Kapur et al. May 2006 B2
7076491 Tsao Jul 2006 B2
7080073 Jiang et al. Jul 2006 B1
7080085 Choy et al. Jul 2006 B1
7100082 Little et al. Aug 2006 B2
7143099 Leeheler-Moore et al. Nov 2006 B2
7146536 Bingham et al. Dec 2006 B2
7158980 Shen Jan 2007 B2
7162499 Lees et al. Jan 2007 B2
7165024 Glover et al. Jan 2007 B2
7174504 Tsao Feb 2007 B2
7181471 Ibuki et al. Feb 2007 B1
7194380 Barrow et al. Mar 2007 B2
7197449 Hu et al. Mar 2007 B2
7216073 Lavi et al. May 2007 B2
7233943 Modha et al. Jun 2007 B2
7260573 Jeh et al. Aug 2007 B1
7263565 Tawara et al. Aug 2007 B2
7269587 Page Sep 2007 B1
7277879 Varadarajan Oct 2007 B2
7302646 Nomiyama et al. Nov 2007 B2
7305380 Hoelzle et al. Dec 2007 B1
7325160 Tsao Jan 2008 B2
7363312 Goldsack Apr 2008 B2
7376895 Tsao May 2008 B2
7398461 Broder et al. Jul 2008 B1
7409381 Steel et al. Aug 2008 B1
7418736 Ghanea-Hercock Aug 2008 B2
7454430 Komissarchik et al. Nov 2008 B1
7472182 Young et al. Dec 2008 B1
7483829 Murakami et al. Jan 2009 B2
7493308 Bair, Jr. et al. Feb 2009 B1
7493317 Geva Feb 2009 B2
7587387 Hogue Sep 2009 B2
7644076 Ramesh et al. Jan 2010 B1
7672971 Betz et al. Mar 2010 B2
7685201 Zeng et al. Mar 2010 B2
7698303 Goodwin et al. Apr 2010 B2
7716225 Dean et al. May 2010 B1
7747571 Boggs Jun 2010 B2
7756823 Young et al. Jul 2010 B2
7797282 Kirshenbaum et al. Sep 2010 B1
7885918 Statchuk Feb 2011 B2
7917154 Fortescue et al. Mar 2011 B2
7953720 Rohde et al. May 2011 B1
8024281 Proctor et al. Sep 2011 B2
8065290 Hogue Nov 2011 B2
8108501 Birnie et al. Jan 2012 B2
20010021935 Mills Sep 2001 A1
20020022956 Ukrainczyk et al. Feb 2002 A1
20020038307 Obradovic et al. Mar 2002 A1
20020042707 Zhao et al. Apr 2002 A1
20020065845 Naito et al. May 2002 A1
20020073115 Davis Jun 2002 A1
20020083039 Ferrari et al. Jun 2002 A1
20020087567 Spiegler et al. Jul 2002 A1
20020107861 Clendinning et al. Aug 2002 A1
20020147738 Reader Oct 2002 A1
20020169770 Kim et al. Nov 2002 A1
20020174099 Raj et al. Nov 2002 A1
20020178448 Te Kiefte et al. Nov 2002 A1
20020194172 Schreiber Dec 2002 A1
20030018652 Heckerman et al. Jan 2003 A1
20030058706 Okamoto et al. Mar 2003 A1
20030069880 Harrison et al. Apr 2003 A1
20030078902 Leong et al. Apr 2003 A1
20030088607 Ruellan et al. May 2003 A1
20030097357 Ferrari et al. May 2003 A1
20030120644 Shirota Jun 2003 A1
20030120675 Stauber et al. Jun 2003 A1
20030126102 Borthwick Jul 2003 A1
20030126152 Rajak Jul 2003 A1
20030149567 Schmitz et al. Aug 2003 A1
20030149699 Tsao Aug 2003 A1
20030154071 Shreve Aug 2003 A1
20030167163 Glover et al. Sep 2003 A1
20030177110 Okamoto et al. Sep 2003 A1
20030182310 Charnock et al. Sep 2003 A1
20030195872 Senn Oct 2003 A1
20030195877 Ford et al. Oct 2003 A1
20030196052 Bolik et al. Oct 2003 A1
20030204481 Lau Oct 2003 A1
20030208354 Lin et al. Nov 2003 A1
20040003067 Ferrin Jan 2004 A1
20040015481 Zinda Jan 2004 A1
20040024739 Copperman et al. Feb 2004 A1
20040049503 Modha et al. Mar 2004 A1
20040059726 Hunter et al. Mar 2004 A1
20040064447 Simske et al. Apr 2004 A1
20040069880 Samelson et al. Apr 2004 A1
20040088292 Dettinger et al. May 2004 A1
20040107125 Guheen et al. Jun 2004 A1
20040122844 Malloy et al. Jun 2004 A1
20040122846 Chess et al. Jun 2004 A1
20040123240 Gerstl et al. Jun 2004 A1
20040128624 Arellano et al. Jul 2004 A1
20040143600 Musgrove et al. Jul 2004 A1
20040153456 Charnock et al. Aug 2004 A1
20040167870 Wakefield et al. Aug 2004 A1
20040167907 Wakefield et al. Aug 2004 A1
20040167911 Wakefield et al. Aug 2004 A1
20040177015 Galai et al. Sep 2004 A1
20040177080 Doise et al. Sep 2004 A1
20040199923 Russek Oct 2004 A1
20040236655 Scumniotales et al. Nov 2004 A1
20040243552 Titemore et al. Dec 2004 A1
20040243614 Boone et al. Dec 2004 A1
20040255237 Tong Dec 2004 A1
20040260979 Kumai Dec 2004 A1
20040267700 Dumais et al. Dec 2004 A1
20040268237 Jones et al. Dec 2004 A1
20050055365 Ramakrishnan et al. Mar 2005 A1
20050076012 Manber et al. Apr 2005 A1
20050080613 Colledge et al. Apr 2005 A1
20050086211 Mayer Apr 2005 A1
20050086222 Wang et al. Apr 2005 A1
20050086251 Hatscher et al. Apr 2005 A1
20050097150 McKeon et al. May 2005 A1
20050108630 Wasson et al. May 2005 A1
20050114324 Mayer May 2005 A1
20050125311 Chidiac et al. Jun 2005 A1
20050138007 Amitay Jun 2005 A1
20050149576 Marmaros et al. Jul 2005 A1
20050149851 Mittal Jul 2005 A1
20050159851 Engstrom et al. Jul 2005 A1
20050165781 Kraft et al. Jul 2005 A1
20050187923 Cipollone Aug 2005 A1
20050188217 Ghanea-Hercock Aug 2005 A1
20050240615 Barness et al. Oct 2005 A1
20050256825 Dettinger et al. Nov 2005 A1
20060036504 Allocca et al. Feb 2006 A1
20060041597 Conrad et al. Feb 2006 A1
20060047691 Humphreys et al. Mar 2006 A1
20060047838 Chauhan Mar 2006 A1
20060053171 Eldridge et al. Mar 2006 A1
20060053175 Gardner et al. Mar 2006 A1
20060064411 Gross et al. Mar 2006 A1
20060074824 Li Apr 2006 A1
20060074910 Yun et al. Apr 2006 A1
20060085465 Nori et al. Apr 2006 A1
20060112110 Maymir-Ducharme et al. May 2006 A1
20060123046 Doise et al. Jun 2006 A1
20060129843 Srinivasa et al. Jun 2006 A1
20060136585 Mayfield et al. Jun 2006 A1
20060143227 Helm et al. Jun 2006 A1
20060143603 Kalthoff et al. Jun 2006 A1
20060149800 Egnor et al. Jul 2006 A1
20060152755 Curtis et al. Jul 2006 A1
20060167991 Heikes et al. Jul 2006 A1
20060224582 Hogue Oct 2006 A1
20060238919 Bradley et al. Oct 2006 A1
20060242180 Graf et al. Oct 2006 A1
20060248045 Toledano et al. Nov 2006 A1
20060248456 Bender et al. Nov 2006 A1
20060253418 Charnock et al. Nov 2006 A1
20060259462 Timmons Nov 2006 A1
20060277169 Lunt et al. Dec 2006 A1
20060288268 Srinivasan et al. Dec 2006 A1
20060293879 Zhao et al. Dec 2006 A1
20070005593 Self et al. Jan 2007 A1
20070005639 Gaussier et al. Jan 2007 A1
20070016890 Brunner et al. Jan 2007 A1
20070038610 Omoigui Feb 2007 A1
20070043708 Tunstall-Pedoe Feb 2007 A1
20070055656 Tunstall-Pedoe Mar 2007 A1
20070073768 Goradia Mar 2007 A1
20070094246 Dill et al. Apr 2007 A1
20070100814 Lee et al. May 2007 A1
20070130123 Majumder Jun 2007 A1
20070143282 Betz et al. Jun 2007 A1
20070143317 Hogue et al. Jun 2007 A1
20070150800 Betz et al. Jun 2007 A1
20070198451 Kehlenbeck et al. Aug 2007 A1
20070198480 Hogue et al. Aug 2007 A1
20070198481 Hogue et al. Aug 2007 A1
20070198503 Hogue et al. Aug 2007 A1
20070198577 Betz et al. Aug 2007 A1
20070198598 Betz et al. Aug 2007 A1
20070198600 Betz Aug 2007 A1
20070203867 Hogue et al. Aug 2007 A1
20070208773 Tsao Sep 2007 A1
20070271268 Fontoura et al. Nov 2007 A1
20080071739 Kumar et al. Mar 2008 A1
20080104019 Nath May 2008 A1
20090006359 Liao Jan 2009 A1
20090119255 Frank et al. May 2009 A1
Foreign Referenced Citations (8)
Number Date Country
5-174020 Jul 1993 JP
11-265400 Sep 1999 JP
2002-157276 May 2002 JP
2002-540506 Nov 2002 JP
2003-281173 Oct 2003 JP
WO 0127713 Apr 2001 WO
WO 2004114163 Dec 2004 WO
WO 2006104951 Oct 2006 WO
Non-Patent Literature Citations (196)
Entry
Agichtein, E., et al., “Snowball: Extracting Relations from Large Plain-Text Collections,” Columbia Univ. Computer Science Dept. Technical Report CUCS-033-99, Dec. 1999, pp. 1-13.
Brin, S., et al., “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” 7th Int'l World Wide Web Conference, Brisbane, Australia, Apr. 14-18, 1998, pp. 1-26.
Bunescu, R., et al., “Using Encyclopedia Knowledge for Named Entity Disambiguation,” Department of Computer Sciences, University of Texas, retrieved from internet Dec. 28, 2006, 8 pages.
Craswell, N., et al., “Effective Site Finding using Link Anchor Information,” SIGIR '01, Sep. 9-12, 2001, pp. 250-257.
Dong, X., et al., “Reference Reconciliation in Complex Information Spaces,” SIGACM-SIGMOD, 2005, 12 pages.
Downey, D., et al., “Learning Text Patterns for Web Information Extraction and Assessment,” American Association for Artificial Intelligence, 2002, 6 pages.
Gray, R.M., “Entropy and Information Theory,” Springer-Verlag, New York, NY, 1990, pp. 17-46.
Haveliwala, T.H., “Topic-Sensitive PageRank,” Proceedings of the 11th Int'l World Wide Web Conference, Honolulu, Hawaii, May 7-11, 2002, pp. 1-23.
International Search Report and Written Opinion for International Application No. PCT/US2007/61156, mailed Feb. 11, 2008, 7 pages.
International Search Report and Written Opinion for International Application No. PCT/US2006/019807, mailed Dec. 18, 2006, 4 pages.
Jeh, G., et al., “Scaling Personalized Web Search,” Proceedings of the 12th Int'l World Wide Web Conference, Budapest, Hungary, May 20-24, 2003, pp. 1-24.
Ji, H., et al., “Re-Ranking Algorithms for Name Tagging,” Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, Jun. 2006, 8 pages.
Kolodner, J., “Indexing and Retrieval Strategies for Natural Language Fact Retrieval,” ACM Trans. Database Syst. 8.3., Sep. 1983, 434-464.
MacKay, D.J.C., “Information Theory, Inference and Learning Algorithms,” Cambridge University Press, 2003, pp. 22-33, 138-140.
Mann, G. et al., “Unsupervised Personal Name Disambiguation,” Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL, 2003, 8 pages.
Page, L., et al., “The PageRank Citation Ranking: Bringing Order to the Web,” Stanford Digital Libraries Working Paper, 1998, pp. 1-17.
Pawson, D., “Sorting and Grouping,” www.dpawson.co.uk/xsl/sect2/N6280.html, Feb. 7, 2004, pp. 1-19.
Richardson, M., et al., “Beyond PageRank: Machine Learning for Static Ranking,” International World Wide Web Conference Committee, May 23, 2006, 9 pages.
Richardson, M., et al., “The Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank,” Advances in Neural Information Processing Systems, vol. 14, MIT Press, Cambridge, MA, 2002, 8 pages.
Riloff, E., et al., “Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping,” American Association for Artificial Intelligence, 1999, 6 pages.
Shannon, C.E., et al., “A Mathematical Theory of Communication,” The Bell System Technical Journal, vol. 27, July, Oct. 1948, pp. 1-55.
Sun Microsystems, “Attribute Names,” http://java.sun.com/products/jndi/tutorial/basics/directory/attrnames.html, Feb. 17, 2004, pp. 1-2.
Wang, Y., et al., “C4-2: Combining Link and Contents in Clustering Web Search to Improve Information Interpretation,” The University of Tokyo, 2002, pp. 1-9.
Brill, E. et al., “An Analysis of the AskMSR Question-Answering System,” Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Jul. 2002, pp. 257-264.
Brin, S., “Extracting Patterns and Relations from the World Wide Web,” 12 pages.
Chang, C. et al., “IEPAD: Information Extraction Based on Pattern Discovery,” WWW10 '01, ACM, May 1-5, 2001, pp. 681-688.
Chu-Carroll, J. et al., “A Multi-Strategy with Multi-Source Approach to Question Answering,” 8 pages.
Dean, J. et al., “MapReduce: Simplified Data Processing on Large Clusters,” to appear in OSDI 2004, pp. 1-13.
Etzioni, O. et al., “Web-scale Information Extraction in KnowItAll (Preliminary Results),” WWW2004, ACM, May 17-20, 2004, 11 pages.
Freitag, D. et al., “Boosted Wrapper Induction,” American Association for Artificial Intelligence, 2000, 7 pages.
Guha, R. et al., “Disambiguating People in Search,” WWW2004, ACM, May 17-22, 2004, 9 pages.
Guha, R., “Object Co-identification on the Semantic Web,” WWW2004, ACM, May 17-22, 2004, 9 pages.
Hogue, A.W., “Tree Pattern Inference and Matching for Wrapper Induction on the World Wide Web,” Master of Engineering in Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Jun. 2004, pp. 1-106.
“Information Entropy—Wikipedia, the free encyclopedia,” [online] [Retrieved on May 3, 2006] Retrieved from the Internet: <URL:http://en.wikipedia.org/wiki/Information_entropy>.
“Information Theory—Wikipedia, the free encyclopedia,” [online] [Retrieved on May 3, 2006] Retrieved from the Internet: <URL:http://en.wikipedia.org/wiki/Information_theory>.
Jones, R. et al., “Bootstrapping for Text Learning Tasks,” 12 pages.
Kosseim, L., et al., “Answer Formulation for Question-Answering,” 11 pages.
Liu, B. et al., “Mining Data Records in Web Pages,” Conference '00, ACM, 2000, pp. 1-10.
McCallum, A. et al., “Object Consolidation by Graph Partitioning with a Conditionally-Trained Distance Metric,” SIGKDD '03, ACM, Aug. 24-27, 2003, 6 pages.
Mihalcea, R. et al., “PageRank on Semantic Networks, with Application to Word Sense Disambiguation,” 7 pages.
Mihalcea, R. et al., “TextRank: Bringing Order into Texts,” 8 pages.
PCT International Search Report and Written Opinion, PCT/US06/07639, Sep. 13, 2006, 6 pages.
Prager, J. et al., “IBM's PIQUANT in TREC2003,” 10 pages.
Prager, J. et al., “Question Answering using Constraint Satisfaction: QA-by-Dossier-with-Constraints,” 8 pages.
Ramakrishnan, G. et al., “Is Question Answering an Acquired Skill?”, WWW2004, ACM, May 17, 2004, pp. 111-120.
Andritsos, Information-Theoretic Tools for Mining Database Structure from Large Data Sets, ACM SIGMOD, Jun. 13-18, 2004, 12 pgs.
Chen, A Scheme for Inference Problems Using Rough Sets and Entropy, Lecture Notes in Computer Science, vol. 3642/2005, Regina, Canada, Aug. 31-Sep. 3, 2005, pp. 558-567.
Dean, Using Design Recovery Techniques to Transform Legacy Systems, Software Maintenance, Nov. 7-9, 2001, Proceedings, IEEE International Conference, 10 pgs.
Etzioni, Unsupervised Named-Entity Extraction from the Web: An Experimental Study, Dept. of Computer Science and Engineering, Univ. of Washington, Seattle, WA, Feb. 28, 2005, 42 pgs.
Google, Canadian Patent Application 2610208, Office Action, Sep. 21, 2011, 3 pgs.
Google, European Patent Application 06784449.8, Office Action, Mar. 26, 2012, 7 pgs.
Google, Japanese Patent Application 2008-504204, Office Action, Oct. 12, 2011, 4 pgs.
Koeller, Approximate Matching of Textual Domain Attributes for Information Source Integration, IQIS '05 Proceedings of the 2nd International Workshop on Information Source Integration, Jun. 17, 2005, 10 pgs.
Merriam Webster Dictionary defines “normalize” as “to make conform to or reduce to a norm or standard”, date: 1865, 2 pages.
Merriam Webster Dictionary defines “value” as “a numerical quantity that is assigned or is determined by calculation or measurement”, date: 1300, 2 pages.
Microsoft Computer Dictionary defines “normalize” as “adjust number within specific range”, May 1, 2002, 4 pages.
Microsoft Computer Dictionary Defines “quantity” as a “number”, May 1, 2002, 4 pages.
Microsoft Computer Dictionary defines “value” as “a quantity”, May 1, 2002, 4 pages.
Nadeau, Unsupervised Named-Entity Recognition: Generating Gazetteers and Resolving Ambiguity, Inst. for Information Technology, National Research Council Canada, Gatineau and Ottawa, Canada, Aug. 1, 2006, 12 pgs.
Betz, Examiner's Answer, U.S. Appl. No. 11/097,688, Jul. 8, 2010, 18 pgs.
Betz, Examiner's Answer, U.S. Appl. No. 11/394,414, Jan. 24, 2011, 31 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 11/142,740, Apr. 16, 2009, 7 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 11/142,765, Jul. 1, 2010, 14 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 11/341,069, Sep. 8, 2008, 6 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 12/939,981, Aug. 11, 2011, 7 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 12/939,981, Apr. 26, 2011, 11 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,740, Aug. 13, 2007, 12 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,740, May 17, 2007, 12 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,740, Jul. 23, 2008, 11 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,740, Dec. 26, 2007, 12 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,740, Jan. 27, 2009, 11 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,740, Apr. 30, 2008, 14 pgs.
Betz, Office Action, U.S. Appl. No. 11/097,688, Mar. 18, 2009, 13 pgs.
Betz, Office Action, U.S. Appl. No. 11/097,688, Oct. 29, 2009, 11 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,765, Jan. 8, 2010, 17 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,765, May 9, 2008, 20 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,765, Jan. 17, 2008, 16 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,765, Oct. 17, 2007, 14 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,765, Oct. 17, 2008, 17 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,765, Jun. 18, 2007, 13 pgs.
Betz, Office Action, U.S. Appl. No. 11/142,765, Apr. 28, 2009, 16 pgs.
Betz, Office Action, U.S. Appl. No. 11/341,069, Apr. 1, 2008, 8 pgs.
Betz, Office Action, U.S. Appl. No. 11/394,414, Mar. 5, 2010, 24 pgs.
Betz, Office Action, U.S. Appl. No. 11/394,414, Sep. 15, 2009, 16 pgs.
Betz, Office Action, U.S. Appl. No. 12/939,981, Dec. 9, 2010, 12 pgs.
Betz, Office Action, U.S. Appl. No. 13/302,755, Mar. 25, 2012, 15 pgs.
Google Inc., Office Action, CA 2603085, Sep. 18, 2012, 2 pgs.
Hogue, Examiner's Answer, U.S. Appl. No. 11/142,748, Oct. 3, 2011, 23 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 11/097,689, Apr. 30, 2009, 8 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 11/356,837, Jan. 6, 2012, 12 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 11/356,837, Apr. 27, 2012, 7 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 12/546,578, Jan. 6, 2011, 8 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 12/546,578, Jul. 12, 2011, 10 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 13/206,457, Mar. 14, 2012, 9 pgs.
Hogue, Office Action, U.S. Appl. No. 11/097,689, Oct. 3, 2008, 13 pgs.
Hogue, Office Action, U.S. Appl. No. 11/097,689, Apr. 9, 2008, 11 pgs.
Hogue, Office Action, U.S. Appl. No. 11/097,689, Jun. 21, 2007, 9 pgs.
Hogue, Office Action, U.S. Appl. No. 11/097,689, Nov. 27, 2007, 10 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, Dec. 7, 2007, 13 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, Jul. 13, 2010, 12 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, Aug. 17, 2009, 14 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, Nov. 17, 2010, 14 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, May 18, 2007, 9 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, Jul. 22, 2008, 18 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, Aug. 23, 2007, 13 pgs.
Hogue, Office Action, U.S. Appl. No. 11/142,748, Jan. 27, 2009, 17 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, Jun. 3, 2011, 18 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, Aug. 4, 2010, 20 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, Feb. 8, 2011, 14 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, May 11, 2009, 18 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, Feb. 19, 2010, 20 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, Mar. 21, 2008, 15 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, Oct. 27, 2009, 20 pgs.
Hogue, Office Action, U.S. Appl. No. 11/356,837, Sep. 30, 2008, 20 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Mar. 1, 2012, 25 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Mar. 3, 2011, 15 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Jan. 5, 2009, 21 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Jun. 8, 2009, 14 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Sep. 13, 2010, 13 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Jun. 24, 2011, 14 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Dec. 28, 2009, 11 pgs.
Hogue, Office Action, U.S. Appl. No. 11/399,857, Mar. 31, 2008, 23 pgs.
Hogue, Office Action, U.S. Appl. No. 12/546,578, Aug. 4, 2010, 10 pgs.
Hogue, Office Action, U.S. Appl. No. 13/206,457, Oct. 28, 2011, 6 pgs.
Hogue, Office Action, U.S. Appl. No. 13/549,361, Oct. 4, 2012, 18 pgs.
Hogue, Office Action, U.S. Appl. No. 13/549,361, Mar. 6, 2013, 13 pgs.
Hogue, Office Action, U.S. Appl. No. 13/603,354, Jan. 9, 2013, 5 pgs.
Laroco, Notice of Allowance, U.S. Appl. No. 11/551,657, May 13, 2011, 8 pgs.
Laroco, Notice of Allowance, U.S. Appl. No. 11/551,657, Sep. 28, 2011, 8 pgs.
Laroco, Office Action, U.S. Appl. No. 11/551,657, Aug. 1, 2008, 15 pgs.
Laroco, Office Action, U.S. Appl. No. 11/551,657, Aug. 13, 2009, 16 pgs.
Laroco, Office Action, U.S. Appl. No. 11/551,657, Nov. 17, 2010, 20 pgs.
Laroco, Office Action, U.S. Appl. No. 11/551,657, Feb. 24, 2010, 17 pgs.
Laroco, Office Action, U.S. Appl. No. 11/551,657, Jan. 28, 2009, 17 pgs.
Laroco, Office Action, U.S. Appl. No. 13/364,244, Jan. 30, 2013, 8 pgs.
Rohde, Notice of Allowance, U.S. Appl. No. 11/097,690, Dec. 23, 2010, 8 pgs.
Rohde, Office Action, U.S. Appl. No. 11/097,690, May 1, 2008, 21 pgs.
Rohde, Office Action, U.S. Appl. No. 11/097,690, Jun. 9, 2010, 11 pgs.
Rohde, Office Action, U.S. Appl. No. 11/097,690, Oct. 15, 2008, 22 pgs.
Rohde, Office Action, U.S. Appl. No. 11/097,690, Aug. 27, 2009, 13 pgs.
Rohde, Office Action, U.S. Appl. No. 11/097,690, Apr. 28, 2009, 9 pgs.
Rohde, Office Action, U.S. Appl. No. 11/097,690, Sep. 28, 2007, 17 pgs.
Shamsi, Notice of Allowance, U.S. Appl. No. 11/781,891, Oct. 25, 2010, 7 pgs.
Shamsi, Notice of Allowance, U.S. Appl. No. 11/781,891, May 27, 2010, 6 pgs.
Shamsi, Office Action, U.S. Appl. No. 11/781,891, Nov. 16, 2009, 10 pgs.
Shamsi, Office Action, U.S. Appl. No. 13/171,296, Apr. 3, 2013, 7 pgs.
Vespe, Notice of Allowance, U.S. Appl. No. 11/686,217, Aug. 27, 2012, 11 pgs.
Vespe, Notice of Allowance, U.S. Appl. No. 11/745,605, Jun. 13, 2011, 9 pgs.
Vespe, Notice of Allowance, U.S. Appl. No. 11/745,605, Sep. 22, 2011, 9 pgs.
Vespe, Notice of Allowance, U.S. Appl. No. 11/745,605, Mar. 28, 2012, 10 pgs.
Vespe, Office Action, U.S. Appl. No. 11/686,217, Sep. 10, 2010, 14 pgs.
Vespe, Office Action, U.S. Appl. No. 11/686,217, Jan. 26, 2012, 12 pgs.
Vespe, Office Action, U.S. Appl. No. 11/686,217, Mar. 26, 2010, 13 pgs.
Vespe, Office Action, U.S. Appl. No. 11/745,605, Apr. 8, 2010, 15 pgs.
Vespe, Office Action, U.S. Appl. No. 11/745,605, Jul. 30, 2009, 17 pgs.
Zhao, Notice of Allowance, U.S. Appl. No. 11/394,610, May 11, 2009, 15 pgs.
Zhao, Office Action, U.S. Appl. No. 11/142,853, Oct. 2, 2009, 10 pgs.
Zhao, Office Action, U.S. Appl. No. 11/142,853, Sep. 5, 2008, 9 pgs.
Zhao, Office Action, U.S. Appl. No. 11/142,853, Mar. 17, 2009, 9 pgs.
Zhao, Office Action, U.S. Appl. No. 11/142,853, Jan. 25, 2008, 7 pgs.
Zhao, Office Action, U.S. Appl. No. 11/394,610, Apr. 1, 2008, 18 pgs.
Zhao, Office Action, U.S. Appl. No. 11/394,610, Nov. 13, 2008, 18 pgs.
Zhao, Office Action, U.S. Appl. No. 11/941,382, Sep. 8, 2011, 28 pgs.
Zhao, Office Action, U.S. Appl. No. 11/941,382, Aug. 12, 2010, 23 pgs.
Zhao, Office Action, U.S. Appl. No. 11/941,382, May 24, 2012, 26 pgs.
Zhao, Office Action, U.S. Appl. No. 11/941,382, Nov. 26, 2012, 24 pgs.
Zhao, Office Action, U.S. Appl. No. 11/941,382, Jan. 27, 2011, 24 pgs.
Zhao, Office Action, U.S. Appl. No. 11/941,382, Dec. 29, 2009, 25 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 11/097,688, Nov. 19, 2013, 17 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 13/302,755, Jan. 6, 2014, 9 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 13/302,755, Aug. 28, 2013, 6 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 13/549,361, Oct. 2, 2013, 9 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 13/549,361, Jun. 26, 2013, 8 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 13/603,354, Nov. 12, 2013, 9 pgs.
Hogue, Notice of Allowance, U.S. Appl. No. 13/603,354, Jun. 26, 2013, 8 pgs.
Laroco, Notice of Allowance, U.S. Appl. No. 13/364,244, Aug. 6, 2013, 6 pgs.
Laroco, Notice of Allowance, U.S. Appl. No. 13/364,244, Feb. 7, 2014, 5 pgs.
Laroco, Office Action, U.S. Appl. No. 13/364,244, Dec. 19, 2013, 5 pgs.
Shamsi, Final Office Action, U.S. Appl. No. 13/171,296, Nov. 4, 2013, 29 pgs.
Zhao, Office Action, U.S. Appl. No. 11/941,382, Sep. 27, 2013, 30 pgs.
Betz, Notice of Allowance, U.S. Appl. No. 11/394,414, Apr. 30, 2014, 12 pgs.
Zhao, Notice of Allowance, U.S. Appl. No. 11/941,382, Apr. 14, 2014, 5 pgs.
Cover, T.M., et al., “Elements of Information Theory,” Wiley-InterScience, New York, NY, 1991, pp. 12-23.
Gao, X., et al., “Learning Information Extraction Patterns from Tabular Web Pages Without Manual Labelling,” Proceedings of IEEE/WIC Int'l Conf. on Web Intelligence (WI'03), Oct. 13-17, 2003, pp. 495-498.
Gigablast, Web/Directory, http://www.gigablast.com/?c=dmoz3, printed Aug. 24, 2010, 1 page.
Gilster, P., “Get Fast Answers, Easily,” The News Observer, May 14, 2003, 2 pages.
Hsu, C. et al., “Finite-State Transducers for Semi-Structured Text Mining,” IJCAI-99 Workshop on Text Mining: Foundations, Techniques and Applications, 1999, 12 pages.
Ilyas, I. et al., “Rank-aware Query Optimization,” SIGMOD 2004, Jun. 13-18, 2004, 12 pages.
International Search Report and Written Opinion for International Application No. PCT/US2006/010965, mailed Jul. 5, 2006, 4 pages.
Kosala, R., et al., “Web Mining Research: A Survey,” SIGKDD Explorations, vol. 2, Issue 1, p. 1, Jul. 2000, 15 pages.
Lin, J. et al., Question Answering from the Web Using Knowledge Annotation and Knowledge Mining Techniques, CIKM '03, Nov. 3-8, 2003, 8 pages.
Nyberg, E. et al., “The Javelin Question-Answering System at TREC 2003: A Multi-Strategy Approach with Dynamic Planning,” TREC 2003, 9 pages.
Ogden, W. et al., “Improving Cross-Language Text Retrieval with Human Interactions,” Proceedings of the 33rd Hawaii International Conference on System Sciences, IEEE 2000, 9 pages.
Plaisant, C. et al. “Interface and Data Architecture for Query Preview in Networked Information Systems,” ACM Transaction on Information Systems, vol. 17, Issue 3, Jul. 1999, 18 pages.
Wirzenius, Lars, “C Preprocessor Trick for Implementing Similar Data Types,” Jan. 17, 2000, pp. 1-9.
Zhao, S. et al., “Corroborate and Learn Facts from the Web,” KDD'07, Aug. 12-15, 2007, 9 pages.
Related Publications (1)
Number Date Country
20070143282 A1 Jun 2007 US
Continuation in Parts (1)
Number Date Country
Parent 11097688 Mar 2005 US
Child 11394552 US