The World-Wide-Web (“Web”) has become immensely popular largely due to the ease of distributing information to a large audience. However, with the volume of data available on the Internet increasing at an exponential rate, the effort required to obtain meaningful results on the Internet is also increasing. To help find, locate or navigate information on the Web, tools known as Internet search engines can be used. On the Internet, for example, most search engines provide a keyword search interface to enable their users to quickly scan the vast array of known documents on the Web for documents which are most relevant to the user's interest. Typically, they provide Boolean and other advanced search techniques that work with their private catalog or database of Web sites.
As noted in application Ser. No. 20030120654, examples of search engines include Yahoo (http://www.yahoo.com), Google (www.google.com) and others. Some search engines give special weighting to words or keywords: (i) in the title; (ii) in subject descriptions; (iii) listed in HTML META tags; (iv) positioned first on a page; and (v) by counting the number of occurrences or recurrences (up to a limit) of a word on a page. In its simplest form, the input to a keyword search in a search engine is a string of text that represents all the keywords separated by spaces. When the “search” button is pressed by the user, the search engine finds all the documents which match all the keywords and returns the total number that match, along with brief summaries of a few such documents.
There have been a number of technologies that have been developed to improve the searching of a corpus of documents and navigating through the results. One useful technique has been the approach of building concept hierarchies, for example using a statistical approach or using a natural language processing-based model. Another well-researched area has been text clustering, in which related documents and/or terms in documents are grouped together using a wide range of algorithms. Most search engines used on the Internet today use ranking strategies that take advantage of relevance functions that can range from the simplistic to the very sophisticated.
Current content-based relevance functions typically consider the individual query keywords (or possibly related words) and their appearance in a target document. Unfortunately, a document may sometimes contain the keywords yet not be meaningfully relevant because it is too specific. For example, a document about “The Architecture of the Sistine Chapel” is statistically relevant to a query of “architecture”, even though it does not address the more general concept of architecture; instead it is overly specific. A document about “Buildings throughout the ages—An architectural history” is more general and hence more relevant, even if it contains the word “architecture” fewer times than the document about the Sistine Chapel.
To improve the accuracy of search engines, a ranked list of features (words or phrases) can be generated from a collection of documents (web or otherwise). Conventional methods use a “bag of words” model to determine the set of possible features. However, when adding phrases to the bag of words model, component words may be double-counted, causing the ranking of the phrases to appear incorrect. For example, a category of documents about “martial arts” may be named “arts” or “martial” because those two words are very common; however, the phrase “martial arts” is a better name for the set than either of its component terms.
In one aspect, systems and methods are disclosed for automatically improving web document ranking and analysis by predicting how “general” a document is with respect to a larger topic area. This is accomplished by improving the feature set through modification of feature group statistics; identifying one or more child concepts from the improved feature concept group; grouping the one or more child concepts; and determining the child concept group coverage for each document.
In another aspect, systems and methods are also disclosed for updating histogram statistics of keyword features by building a positive set histogram; selecting phrases from the positive set histogram; and modifying the frequency statistics in the histogram using the selected phrases.
In another aspect, systems and methods are disclosed for updating search features by building a positive set histogram; selecting phrases from the positive set; and updating the counts for the selected phrases in the positive set histogram.
In yet another aspect, the systems and methods update search features by identifying one or more potential phrase-acronym pairs; selecting a best phrase-acronym pair from the potential pairs; and updating the positive set histogram with the best phrase-acronym pair.
In another aspect, systems and methods are disclosed for analyzing a set of documents by building a positive set histogram; selecting phrases from the positive set histogram; modifying the frequency statistics in the histogram using the selected phrases; identifying one or more potential phrase-acronym pairs; selecting a subset of phrase-acronym pairs from the potential pairs; adding a new feature for each selected phrase-acronym (phrase ∥ acronym) pair to a positive set histogram; determining a value for each new feature; identifying one or more child concepts based on an updated histogram; grouping the one or more child concepts; and determining a child concept group coverage for one or more documents.
Advantages of the system may include one or more of the following. The system improves search and relevance by improving the ability to rank, understand and describe documents or document clusters. The system improves the meaningfulness of features to describe a group of documents (phrasal compensation) and uses the proper features to discover groups of related (compensated) features, and to use these groups to predict documents that, although they may contain a user's query, are not actually relevant. In addition, this method provides information in the form of named concept groups, which facilitates a human deciding which documents to examine. Phrasal compensation (which includes acronym feature addition) allows for improved grouping, more meaningful names, and hence an improved ability to compute negative relevance to select documents that are not relevant. The phrasal compensation provides a simple method to compensate for the statistical errors caused by considering phrases—resulting in improvements in feature ranking. Phrasal compensation can operate in a language independent manner and requires no special knowledge. In addition, it can be done very efficiently without having to re-analyze the collection for each application—even though the important phrases vary between applications.
The system uses a combination of phrases and acronyms to enhance searching. Acronyms are combined with their appropriate phrases to produce a more meaningful name for a cluster. For example, a better name for the “computer science” community should be “computer science OR cs”, however the community “martial arts” should not be called “martial arts OR ma”. Efficiency is enhanced in that the system avoids the need to rescan the entire collection to compensate for phrases or acronyms of a cluster or community of web pages.
The system produces significantly improved results, allowing for superior automatic naming or descriptions of communities. When performing classification, phrasal compensation and acronym detection can be used to improve query expansion, classification, feature selection, feature ranking and other tasks that are fundamental to text-based document analysis. The phrasal compensation system is language independent, so it could be applied to documents in virtually any language. Moreover, the system efficiently predicts appropriate acronym phrase combinations.
The system can automatically predict how “general” a document is with respect to a larger topic area. In addition to locating documents that are “relevant”, it is sometimes desirable to rank known-relevant documents based on how general or specific they are for a given topic. A user who wants to learn about biology might prefer a page with many links and broad coverage to one with less topic-aligned content. The methodology disclosed in the present invention can provide a set of what the inventors refer to as “important child concept groups”. These concept groups can be used to improve a search by showing users more meaningful information about the documents and the larger topic.
The system can improve relevance ranking over existing mechanisms. Documents that are statistically relevant can be corrected automatically. Search engines and any information retrieval system can be improved by utilizing the above system. The system can also advantageously aid users in searching by improving how results are presented to the user. Enhancing the concept grouping with acronyms and phrases can aid in presenting a short document overview (of the topic areas covered), as well as ranking documents to maximize the overall value to the user. The extra information can aid the user in formulating new queries and filtering through a smaller set of more relevant results.
The invention will be more fully understood from the description of the preferred embodiment with reference to the accompanying drawings, in which:
FIG. 5 shows an exemplary process for phrasal compensation.
During a search operation, in addition to locating documents that are “relevant”, it is sometimes desirable to rank known-relevant documents based on how general or specific they are for a given topic. A user who wants to learn about biology might prefer a page with many links and broad coverage to one with less topic-aligned content. Although “generality” can be a very subjective concept, certain characteristics of documents can help statistically identify how general or specific a given document is. An advantageous definition of generality is as follows. A document can be considered “general” if it satisfies the following properties: (1) it covers many of the important sub-topics for the given search category; (2) it is not overly focused on only a few of these sub-topics; and (3) it has enough information about the topic and does not merely mention the topic.
In one embodiment, the system automatically predicts how “general” a document is with respect to a larger topic area. “Important child groups” are used to improve a search by showing users more meaningful information about the documents and the larger topic. In this embodiment of the invention, the search procedure can be divided into a three-step approach, each step providing its own unique advantages, with the combination being the most useful. The process starts with an initial set of “probably” relevant documents. An existing relevance function can be utilized for this.
As shown in
In a second exemplary embodiment shown in
Details of the foregoing operations are described next.
In order to know how “general” a document is, it is advantageous to first identify the “concept groups” associated with the results. In a previous patent filing (Ser. No. 10/209,594, entitled “INFERRING HIERARCHICAL DESCRIPTIONS OF A SET OF DOCUMENTS”, filed on Mar. 31, 2002, the contents of which are hereby incorporated by reference herein), an advantageous method was disclosed for discovering a local topic hierarchy from a set of initial documents, the topic hierarchy containing “parent”, “self” and “child” concepts. Thus, it is possible to statistically determine the terms used to describe the parents, self and children for a given category. These techniques can be utilized to find the list of features that describe the child terms associated with the given search results.
In choosing the child concepts, the process starts with a “collection histogram”, and then builds a “positive set histogram.” For each feature in the positive set histogram, the process examines the positive set frequency (percent) and the collection frequency (percent). If the positive set frequency percentage and collection frequency percentage of a given feature are within predefined ranges, that feature is selected as a child; otherwise it is skipped. For the present application, a range of (X1, 0)-(X2, Y2) can be used, where the x co-ordinate refers to the positive set frequency, and the y co-ordinate refers to the collection frequency. X1 is the minimum positive set frequency, denoted herein minChildPositive; X2 is the maximum positive set frequency, denoted herein maxChildPositive; and Y2 is the maximum collection frequency, denoted herein maxChildNegative. Once the set of children is obtained, one can further rank these children to determine the likely “best” or “primary” children. This is done by ranking on a function of the positive set frequency and collection frequency. One such function is (Fp*(Fc+e)), where Fp refers to the positive set frequency, Fc is the collection frequency, and e is epsilon (a small constant). If the boundaries for selecting the “child” concepts are not known, a static guess value can be used.
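As an illustrative sketch only (the threshold values, data structures and function names below are assumptions for illustration, not the described embodiment), the child selection and ranking described above might be implemented as follows:

# Illustrative sketch: select and rank "child" features from a positive set histogram
# and a collection histogram.  The threshold values below are hypothetical examples.
def select_child_features(positive_hist, collection_hist,
                          min_child_positive=0.10,   # X1: minimum positive set frequency
                          max_child_positive=0.60,   # X2: maximum positive set frequency
                          max_child_negative=0.05,   # Y2: maximum collection frequency
                          epsilon=0.004):
    """Histograms map feature -> fraction of documents containing that feature."""
    children = []
    for feature, fp in positive_hist.items():
        fc = collection_hist.get(feature, 0.0)
        # Keep features whose (positive, collection) frequencies fall inside the box
        # (X1, 0)-(X2, Y2) described in the text.
        if min_child_positive <= fp <= max_child_positive and fc <= max_child_negative:
            children.append(feature)
    # Rank candidate children on a function of both frequencies, e.g. Fp*(Fc + e).
    children.sort(key=lambda f: positive_hist[f] * (collection_hist.get(f, 0.0) + epsilon),
                  reverse=True)
    return children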
The following threshold S can be used for identifying the child features:
Any term that satisfies the above threshold S can be considered a child term. For purposes of the preferred embodiment described herein, the self and parent terms are not considered in identifying generality. Self terms are not considered since it is desirable to distinguish documents that merely mention the self concepts but do not cover enough child groups to qualify as general documents.
The above step provides a list of child concepts, although some of these concepts might be similar or related. At times, for a given search result there may be a number of child terms present for each of the sub-topics. For example, for a query “digital cameras” there are features for different companies, say “Nikon”, “Olympus”, “Canon”, etc. Also, in some cases there might be terms that mean the same thing but are written differently, for example “Megapixel”, “Mega Pixel” or “MP”. Considering such child features independently can be confusing and misleading: a document mentioning both “MP” and “Mega Pixel” would count as containing two features, even though they are strongly related. Counting related features separately can thus skew the percentage coverage.
In order to overcome this problem, it is advantageous to automatically discover features that should be grouped together. Child concept grouping is a method for discovering such features, and can be performed using the method shown in
The methodology shown in
The most similar pairs are then grouped until no pairs (or groups of features) have more than some minimum similarity S. In one implementation, the threshold S can be between 0.4 and 0.6.
For the SimMatrix computation, one way of identifying term relationships is by using statistical co-occurrence information. Each child concept present in the result set is associated with a k-dimensional vector, where k is the number of documents in the result set. Each element in this vector is a 1 or a 0 indicating the presence or absence of the term in the corresponding document. The similarity score between two child concepts can be computed by taking the cosine of the vectors associated with each of the terms. A score closer to 1 indicates that the two words frequently occur together in the same document.
Thus an N*N upper triangular matrix called SimMatrix is constructed, where N is the number of child concepts present in the result set. Each element in this matrix represents how closely two terms are related to each other.
An agglomerative clustering technique is used to group the most similar child concepts together. A dimensionally reduced feature set consisting only of the important child concepts is used so that term similarities for large result sets can be determined efficiently. Considering only the child concepts also results in clusters that are conceptually more related, since concepts that are related often co-occur. For example, say that “mp” co-occurs often with “price”, “review”, “sensor” and “jpeg”. The feature “megapixel” also co-occurs with the same features, so the two are considered similar.
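A minimal sketch of this grouping step is given below. It computes cosine similarities over binary occurrence vectors and greedily merges groups until no pair exceeds the threshold; the similarity matrix is recomputed on the fly rather than stored as an explicit upper triangular SimMatrix, and the average-link merging choice and all names are illustrative assumptions.

import numpy as np

# Illustrative sketch: cosine similarity over binary occurrence vectors, followed by
# greedy agglomerative merging until no pair of groups exceeds a minimum similarity S
# (the text suggests S between 0.4 and 0.6).
def occurrence_vectors(child_concepts, documents):
    """documents: list of sets of features; returns {concept: binary k-dim vector}."""
    return {c: np.array([1.0 if c in doc else 0.0 for doc in documents])
            for c in child_concepts}

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def group_child_concepts(child_concepts, documents, min_similarity=0.5):
    vecs = occurrence_vectors(child_concepts, documents)
    groups = [[c] for c in child_concepts]              # start with singleton groups
    while True:
        best, best_pair = min_similarity, None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                # average-link similarity between two groups (an assumed choice)
                sims = [cosine(vecs[a], vecs[b]) for a in groups[i] for b in groups[j]]
                s = sum(sims) / len(sims)
                if s > best:
                    best, best_pair = s, (i, j)
        if best_pair is None:
            break                                        # no pair above the threshold remains
        i, j = best_pair
        groups[i].extend(groups.pop(j))                  # merge the most similar pair
    return groups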
The clusters obtained in the previous step can be ranked based on the popularity of their members in both the negative set histogram and the positive set histogram. The following formula can be used to rank the clusters:
where F+ is the positive set frequency, F− is the negative set frequency, n is the number of concepts in the group, and epsilon is a constant (typically 0.004). The negative set popularity is used to make the system resistant to some of the statistical biases that may exist in the positive set. Thus terms that are equally popular in the positive set can be ranked by also taking into account their popularity in the negative set. The term that has the highest score based on this ranking function is used to represent the cluster. The assumption here is that the term that best represents the group is the one that is most popular in the overall collection.
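The ranking formula itself does not appear in the text above. Purely as an illustration, the sketch below assumes a score built from the quantities just described (the mean of F+*(F− + epsilon) over a group's n members) and names each group by its highest-scoring member; the actual function of the described embodiment may differ.

# Illustrative sketch only: the exact ranking formula is not reproduced in the text,
# so the score below (mean of F+ * (F- + epsilon) over the group's n members) is an
# assumed stand-in consistent with the variables described (F+, F-, n, epsilon ~ 0.004).
def rank_concept_groups(groups, positive_hist, negative_hist, epsilon=0.004):
    ranked = []
    for group in groups:
        n = len(group)
        member_score = lambda t: positive_hist.get(t, 0.0) * (negative_hist.get(t, 0.0) + epsilon)
        score = sum(member_score(t) for t in group) / n
        # The member most popular in the overall collection names the group.
        name = max(group, key=member_score)
        ranked.append((score, name, group))
    ranked.sort(reverse=True)
    return ranked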
An example of the clustering of the concepts and their group names is shown in
Since generality is a very subjective concept, it may be difficult to compare and rank the individual documents in the order of generality. Instead, a set that contains the most general documents from the results is generated.
A determination can be made as to the child concept group coverage for each document, and this can be utilized to produce a generality score. The goal is to determine a “generality score” for each document; this is accomplished by identifying the concept groups covered by each document. There are several ways to do this, ranging from the simplistic to the more complicated. A simple methodology is as follows: if any of the keywords from a child concept group occur anywhere in a document, that child concept group is considered covered, and the total score is the total number of “covered child concept groups”. A better methodology is to examine the regions of the document covered by a given child concept group: concepts can be given scores based on the amount of coverage, and a document score can consider both. For example, a dictionary document contains all words and hence would appear “perfect” by the simple measure; by considering the relative coverage, the score would be lower, since only a tiny fraction of the dictionary addresses each of the covered concepts.
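Both variants can be sketched as follows (a simplified illustration; the tokenization and helper names are assumptions):

# Illustrative sketch: score how "general" a document is by its child concept group
# coverage.  The simple variant counts a group as covered if any of its keywords occurs;
# the refined variant weights each group by the fraction of the document that mentions it.
def simple_generality_score(document_terms, concept_groups):
    """document_terms: set of terms in the document; concept_groups: list of keyword lists."""
    return sum(1 for group in concept_groups
               if any(term in document_terms for term in group))

def coverage_weighted_score(document_tokens, concept_groups):
    """document_tokens: ordered list of tokens; weights each group by its relative coverage."""
    total = len(document_tokens) or 1
    score = 0.0
    for group in concept_groups:
        group_set = set(group)
        covered = sum(1 for tok in document_tokens if tok in group_set)
        score += covered / total        # a dictionary mentions every group, but only thinly
    return score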
A new method for improving relevance ranking is disclosed called “negative relevance”, where one removes documents that are either too specific or do not address any of the primary themes of the topic area. Such documents are likely not useful for the purpose of identifying generality. First, all of the documents that are likely relevant (by an existing method) to the query are found and then the results are pruned by eliminating the documents that do not satisfy the negative relevance maximizing function. Thus, the negative relevance technique can be used to retrieve the result set and then a negative scoring function is applied. Any result that doesn't satisfy a minimum cutoff for the negative scoring function can be eliminated from the result set.
In the simplest case the function can aim to select documents that cover the most concept groups; i.e., documents that cover a large number of concept groups qualify as general documents since they discuss most of the sub-topics. However, in some cases there might be documents that mention most of the concepts but are biased towards a specific concept. In order to distinguish such documents, partial group membership can be introduced, or the documents can be evaluated based on the frequency distribution of the terms belonging to each of the concept groups. Documents that are heavily biased towards only a few of the child concepts are deemed overly specific and hence can be eliminated.
A better negative relevance function would be to examine the region of the document covered by a given child-concept group. Concepts can be given scores based on the amount of coverage, and thus partial group memberships can be defined. A document can be said to belong “x % to group A”. This would be beneficial for example in cases where the document contains all the terms but only a small fraction of the document actually talks about the subject.
Thus, using the negative relevance function, existing ranking schemes can be retained, and if certain attributes described by the function are covered below a certain threshold the document is defined as “bad”. This differs from typical “positive relevance functions,” which try to include “good documents”; here the aim is to exclude the “obviously bad documents”.
Negative Relevance
For each URL in the ResultSet
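The listing above appears only in truncated form here; as an illustrative stand-in (not the original pseudo-code), a negative-relevance filter over a result set might look like:

# Illustrative stand-in for the truncated listing above (not the original pseudo-code).
# Documents whose negative-relevance score falls below a minimum cutoff are dropped.
def filter_by_negative_relevance(result_set, concept_groups, score_fn, min_cutoff):
    """result_set: {url: document terms}; score_fn: e.g. a concept-group coverage score
    such as the coverage_weighted_score sketched earlier."""
    kept = {}
    for url, doc_terms in result_set.items():
        score = score_fn(doc_terms, concept_groups)
        if score >= min_cutoff:            # exclude the "obviously bad" documents
            kept[url] = doc_terms
    return kept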
The end result provides a much smaller set of documents with which one could gather enough information about the topic without having to look through a large number of results. The technique also provides added benefits of identifying the subtopics present in the results and can be applied to any type of collections.
The system can be used to remove “spam” documents or documents that are written to contain many keywords that a user is likely to type in. The present approach considers much more than just the user's query words, thereby making it much more difficult to make a document that will score highly unless the document is actually relevant to the topic area.
In another aspect of the system, the feature rankings are improved through special consideration of phrases, their component words, and acronyms. Phrases influence the frequency counts of their component words. For example, “computer science” documents contain phrases such as “computer programming” and “computer languages.” All three phrases add to the count of “computer” and cause a shift in statistical significance from the phrase to the component feature. To compensate for this shift, an exemplary process to perform phrasal compensation is shown in FIG. 5.
During initial set-up, the process of
All phrases and words are always included in the set histograms. A new set is selected to be used in a subsequent pass to modify other features in the histogram. Thus, phrasal compensation can be performed by determining “key phrases” or “selected phrases” (112). The determination of key phrases is discussed in more detail in
The process applies the updated histogram for subsequent use (116), for example for use in a ranking method in response to a search query, or for local hierarchy generation as discussed in co-pending, commonly-assigned U.S. application Ser. No. 10/209,594, entitled “INFERRING HIERARCHICAL DESCRIPTIONS OF A SET OF DOCUMENTS”, filed on Mar. 31, 2002, the contents of which are hereby incorporated by reference herein. Thus, the updated histograms can be used in discovering a local topic hierarchy from a set of initial documents, the topic hierarchy containing “parent”, “self” and “child” concepts, and to statistically determine the terms used to describe the parents, self and children of a given category. These techniques can be utilized to find the list of features that describe the child terms associated with the given search results.
Turning now to
Next, the process of
Alternatively, a second method for determining key phrases is illustrated in
First, the process of
A number of methods can be used for histogram updating. In one embodiment, which can be slow but is the most accurate, the process rebuilds the positive set histogram but considers the “key phrase” list as atomic features, not permitting them to be broken down. For example, in the exemplary sentence “I am a computer science student, and I study computer programming in a computer science laboratory environment,” if the key phrase list were blank, then the term “computer” occurs three times and the phrase “computer science” occurs twice. If the key phrase list included “computer science” and “computer science laboratory”, then the term “computer” is only counted once, since the occurrences of “computer” as part of a key phrase are discounted. Likewise, “computer science” only counts once, since its second appearance in the sentence is part of the key phrase “computer science laboratory”.
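A simplified sketch of counting with key phrases treated as atomic is given below; the tokenization and longest-match strategy are assumptions made for illustration:

from collections import Counter

# Illustrative sketch: count features while treating "key phrases" as atomic, so
# occurrences of a term inside a longer key phrase are not double-counted.
def count_with_key_phrases(tokens, key_phrases):
    """tokens: list of words; key_phrases: iterable of phrases (longest matched first)."""
    phrases = sorted((p.split() for p in key_phrases), key=len, reverse=True)
    counts, i = Counter(), 0
    while i < len(tokens):
        for words in phrases:
            if tokens[i:i + len(words)] == words:     # longest key phrase wins
                counts[" ".join(words)] += 1
                i += len(words)
                break
        else:
            counts[tokens[i]] += 1
            i += 1
    return counts

# For the example sentence above with key phrases "computer science" and
# "computer science laboratory", this yields counts of 1 for "computer",
# 1 for "computer science", and 1 for "computer science laboratory" (a simplified
# unigram-plus-key-phrase view of the counting described in the text).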
The above method regenerates the document vectors by reprocessing each positive document. In some cases, the reprocessing of documents can be computationally expensive. As an alternate approach, to avoid reprocessing, the original document vectors can be saved, and re-used as described below.
In an alternative method to update the histogram, the positive set document vectors are cached for a performance boost. In this alternative, the process sorts the features in the key-phrase list in order by number of terms, with the largest number of terms first. For each key phrase P, the process obtains the current count Pc from the histogram. For each component term or phrase from the key phrase, the process subtracts Pc.
For example, using the example document vector described above, if the key phrases were “computer science laboratory” and “computer science”, the key phrase “computer science laboratory” has a count of 1. The component terms and phrases are all sub-phrases (and single terms); in this example these are: computer, computer science, science, science laboratory, and laboratory. The process then subtracts 1 from each of these counts to yield an updated document vector of:
Then the process continues to the next key phrase “computer science”, and subtracts 1 (the updated number) from the terms “computer” and “science”. The positive set histogram can be built by adding a count of one for each present feature (with a count greater than zero)—in this case the features “laboratory” and “science” are effectively removed.
The advantage of this method is that it is not necessary to reprocess every positive document, only their pre-processed vectors. The disadvantage is that if a term occurs as part of multiple key-phrases (but in different places in them) the counts could go negative. To adjust for this, a count of less than zero is treated as zero.
For example, if the key phrases included “computer science laboratory” and “laboratory environment”, then “laboratory” might end up with a negative count, since it is discounted from both of these phrases even though it actually occurred only once in the original text.
In yet another embodiment, the process rescans the negative set to compensate for the phrases. If a phrase is deemed significant for a specific community only, then it is likely to be rare in the collection as a whole, and the actual numerical difference in the negative set is likely to be small. If very few phrases are common in large collections, an entropy constraint can be applied before selecting a phrase for correction. Alternatively, commonly occurring phrases for the negative set can be preprocessed and act as an initial key-phrase list. Hence, if the system were looking at a category called “artificial intelligence” which is a subset of “computer science” and the phrase “computer science” was common for the whole collection (in addition to the specific sub category) the negative set histogram could be updated prior to processing—and the key phrase list will always contain “computer science”.
Pseudo-code for one embodiment of the phrasal compensation system is as follows:
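The pseudo-code listing itself is not reproduced in this text. As an illustrative stand-in, the sketch below follows the cached document-vector approach described above: each key phrase's count is subtracted from its component terms and sub-phrases (longest phrases first), negative counts are clamped to zero, and the positive set histogram is rebuilt from the compensated vectors. All function names are hypothetical.

# Illustrative stand-in for the embodiment described above (not the original listing).
def sub_features(phrase):
    """All contiguous sub-phrases and single terms of a phrase, excluding the phrase itself."""
    words = phrase.split()
    subs = []
    for length in range(1, len(words)):
        for start in range(len(words) - length + 1):
            subs.append(" ".join(words[start:start + length]))
    return subs

def compensate_vector(doc_vector, key_phrases):
    """doc_vector: {feature: count}.  Key phrases are processed longest-first."""
    vec = dict(doc_vector)
    for phrase in sorted(key_phrases, key=lambda p: len(p.split()), reverse=True):
        pc = vec.get(phrase, 0)
        if pc <= 0:
            continue
        for sub in sub_features(phrase):
            if sub in vec:
                vec[sub] = max(0, vec[sub] - pc)   # clamp negative counts to zero
    return vec

def build_positive_histogram(doc_vectors, key_phrases):
    """Positive set histogram: number of documents in which each feature remains present."""
    hist = {}
    for dv in doc_vectors:
        for feature, count in compensate_vector(dv, key_phrases).items():
            if count > 0:
                hist[feature] = hist.get(feature, 0) + 1
    return hist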
Acronym Detection
Phrasal compensation can improve feature ranking by compensating for statistical anomalies due to overcounting of component terms. A second problem in determining a human-understandable name arises when multiple features should be grouped. For example, calling the “computer science” community “computer science or CS” is better than “computer science” alone; CS is an acronym for “computer science”.
In acronym discovery operation 302, multiple phrases are matched to a given acronym as follows:
In one embodiment of step 350 to select the best acronym for a given phrase, the most appropriate acronym match is the one with the highest frequency in the positive set. The system then introduces an “OR” feature of the form (phrase ∥ acronym) e.g. (artificial intelligence ∥ ai) in step 350 of
In operation 380 to update histograms for the OR features, the positive and negative frequencies of the new feature (phrase ∥ acronym) are determined. The positive frequency can be computed by rescanning raw document data. However, obtaining the negative frequency by rescanning can be computationally expensive. One approach is to use the negative set frequencies nphr and nacro together with results from the positive set. Based on this information, one embodiment predicts (nphr U nacro).
First, the embodiment computes the co-occurrence probability for each acronym phrase pair, i.e. (pphr ∩ pacro). The probability of occurrence of OR features in the positive set can be computed as follows
(pphr U pacro) = pphr + pacro − (pphr ∩ pacro).
Also, from the positive set:
pacro/phr=(pphr ∩ pacro)/pphr
where pacro=Probability of the acronym occurring in the positive collection
To simplify computations, it is assumed that the probability pacro/phr remains constant for all documents in both the positive and negative sets, and that nacro/phr = pacro/phr. The probability of occurrence of the OR feature in the negative set is then determined as follows:
(nphr U nacro) = nphr + nacro − (nphr*nacro/phr).
where nacro=Probability of the acronym occurring in the negative collection
During the identification of potential phrase-acronym pairs (302), in one implementation, the process builds a hash structure where the key is an acronym and the value is a list of possibly matching phrases. Next, for each feature, if it could be a possible acronym, it is inserted into the hash, with the list it points to initialized as blank. In one embodiment, case-insensitive features may be used, so “CS” and “cs” are not distinguishable. In another embodiment, a case-sensitive approach is used. After the possible acronyms are inserted (their lists initialized as blank), for each feature that is a phrase, the process checks whether the corresponding possible acronym (or acronyms) is defined in the hash (308). If the possible acronym is defined, that phrase is added to the acronym's list (310). For example, “atm” may be a possible acronym, so it is inserted into the hash; later, when “automatic teller machine” is encountered, this phrase is added to the list for “atm”. However, if the process of
During the selection of the best acronym for each phrase (340), after the lists have been populated, the best phrase is chosen for each list. In one embodiment, the phrase with the highest positive set frequency is chosen. Thus, if the list includes “computer science” and “cognitive science”, the phrase that occurs in the greatest number of positive documents (which depends on the positive set) is selected. For each acronym, the best phrase is selected (if one exists). Next, the process adds a new feature: (phrase ∥ acronym). For example, an exemplary entry may be “computer science ∥ cs” or “automatic teller machine ∥ atm”.
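A sketch of this pairing step is shown below; the test for what could be a possible acronym and the initial-letter matching rule are simplified assumptions, not the described embodiment's exact criteria:

# Illustrative sketch: build a hash keyed by candidate acronyms whose values are lists
# of matching phrases, then keep the phrase with the highest positive set frequency.
def looks_like_acronym(feature):
    # Simplified, assumed heuristic: a short single token.
    return " " not in feature and 2 <= len(feature) <= 5 and feature.isalpha()

def acronym_of(phrase):
    return "".join(word[0] for word in phrase.split()).lower()

def find_phrase_acronym_pairs(features, positive_hist):
    """features: iterable of terms and phrases; positive_hist: feature -> positive frequency."""
    candidates = {f.lower(): [] for f in features if looks_like_acronym(f)}
    for f in features:
        if " " in f and acronym_of(f) in candidates:     # phrase whose initials match a key
            candidates[acronym_of(f)].append(f)
    pairs = []
    for acro, phrases in candidates.items():
        if phrases:
            best = max(phrases, key=lambda p: positive_hist.get(p, 0.0))
            pairs.append((best, acro))                   # e.g. ("computer science", "cs")
    return pairs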
During the updating of the histograms (380), one embodiment rescans every document and includes the new logical features. The new logical features represent an OR, so if either component (the phrase or the acronym) is present the whole feature is present. Since the original document vectors probably do not consider co-occurrence, it may not be possible to re-use the previously computed document vectors. Hence, this embodiment can be quite expensive in terms of computational and time costs.
In another embodiment, Bayes rule is used to estimate the new values. The probability of the acronym occurring given that the phrase is present is assumed to be constant for all documents. Given this constant, the probability can be computed from the positive set and then used to adjust the negative set without rescanning all documents. The positive set, which is typically much smaller than the collection or negative set, is rescanned. During rescanning, the value for each new logical-OR feature is determined and the probability of the acronym occurring given that the phrase is present is also computed. Using Bayes rule, the following equation (discussed above) is computed:
(nphr U nacro)=nphr+nacro−(nphr*nacro/phr).
where nphr is the probability of the phrase occurring in a random document from the negative set (the negative set frequency of the phrase), nacro is the probability of the acronym occurring in a random document from the negative set (the negative set frequency of the acronym), and nacro/phr is the probability of the acronym occurring given that the phrase occurred in the negative set, which by the assumption above is the same as pacro/phr, computed by rescanning the positive set. The resulting value (nphr U nacro) is the frequency of the new logical-OR feature PHRASE ∥ ACRONYM, obtained without having to rescan the negative set or the collection.
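A small sketch of this estimate, using only quantities already available from the positive set and the existing negative set histogram, is given below (parameter names are illustrative):

# Illustrative sketch of the estimate described above: the negative set frequency of a
# (phrase || acronym) feature is approximated without rescanning the negative set,
# assuming P(acronym | phrase) is the same in the positive and negative sets.
def or_feature_negative_frequency(n_phr, n_acro, p_phr, p_phr_and_acro):
    """
    n_phr, n_acro:    negative set frequencies of the phrase and the acronym
    p_phr:            positive set frequency of the phrase
    p_phr_and_acro:   positive set co-occurrence frequency of phrase and acronym
    """
    p_acro_given_phr = p_phr_and_acro / p_phr if p_phr else 0.0
    # (nphr U nacro) = nphr + nacro - nphr * nacro/phr, with nacro/phr ~ pacro/phr
    return n_phr + n_acro - n_phr * p_acro_given_phr

# Example: if the phrase occurs in 2% of negative documents, the acronym in 1%, and the
# acronym appears in 80% of the positive documents that contain the phrase, the OR
# feature's estimated negative frequency is 0.02 + 0.01 - 0.02*0.8 = 0.014.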
The phrasal compensation and the acronym extension improve the quality of the top ranked features. To illustrate, the feature list before and after for an exemplary artificial intelligence corpus is shown below.
Thus, a search for “artificial intelligence” or “ai” will return better results given the enhanced feature list. In addition to phrasal compensation and acronym extension, it is contemplated that synonyms and other information from a thesaurus can be used to promote certain features. Additionally, negative probabilities for the OR features can be computed. The resulting improved feature list can be used in automatic naming and hierarchy discovery.
The present invention is applicable to a wide range of uses including, without limitation, any search engine, information retrieval system, or text analysis system that performs document ranking. Embodiments of the present invention can be readily implemented, for example, into a search engine such as the architecture disclosed in U.S. Utility patent application Ser. No. 10/404,939, entitled “METASEARCH ENGINE ARCHITECTURE,” filed on Apr. 1, 2003, the contents of which are incorporated by reference herein. A result processor module can be readily developed that supplements the feature list with phrases and acronyms and identifies and ranks the documents based on the enhanced feature list.
The invention has been described in terms of specific examples which are illustrative only and are not to be construed as limiting. The invention may be implemented in digital electronic circuitry or in computer hardware, firmware, software, or in combinations of them. Apparatus of the invention may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor; and method steps of the invention may be performed by a computer processor executing a program to perform functions of the invention by operating on input data and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Storage devices suitable for tangibly embodying computer program instructions include all forms of non-volatile memory including, but not limited to: semiconductor memory devices such as EPROM, EEPROM, and flash devices; magnetic disks (fixed, floppy, and removable); other magnetic media such as tape; optical media such as CD-ROM disks; and magneto-optic devices. Any of the foregoing may be supplemented by, or incorporated in, specially-designed application-specific integrated circuits (ASICs) or suitably programmed field programmable gate arrays (FPGAs).
From the foregoing disclosure and certain variations and modifications already disclosed therein for purposes of illustration, it will be evident to one skilled in the relevant art that the present inventive concept can be embodied in forms different from those described, and it will be understood that the invention is intended to extend to such further variations. While the preferred forms of the invention have been shown in the drawings and described herein, the invention should not be construed as limited to the specific forms shown and described, since variations of the preferred forms will be apparent to those skilled in the art. Thus the scope of the invention is defined by the following claims and their equivalents.
This Application claims priority to Provisional Application Ser. No. 60/523,851, filed on Nov. 20, 2003 and entitled “Method and System for Improving Document Relevance Ranking by Discovering General and Specific Documents”, the content of which is incorporated by reference. This application is also related to U.S. Utility patent application, Ser. No. 10/209,594, entitled “INFERRING HIERARCHICAL DESCRIPTIONS OF A SET OF DOCUMENTS”, filed on Mar. 31, 2002, the contents of which are hereby incorporated by reference herein.