This application is related by subject matter to the inventions disclosed in the following commonly assigned applications filed on even date herewith: U.S. application Ser. No. 12/951,528, entitled “MATCHING FUNNEL FOR LARGE DOCUMENT INDEX”; U.S. application Ser. No. 12/951,659, entitled “DECOMPOSABLE RANKING FOR EFFICIENT PRECOMPUTING”; U.S. application Ser. No. 12/951,747, entitled “EFFICIENT FORWARD RANKING IN A SEARCH ENGINE”; and U.S. application Ser. No. 12/951,799, entitled “TIERING OF POSTING LISTS IN SEARCH ENGINE INDEX.” Each of the aforementioned applications is herein incorporated by reference in its entirety.
The amount of information and content available on the Internet continues to grow rapidly. Given the vast amount of information, search engines have been developed to facilitate searching for electronic documents. In particular, users may search for information and documents by entering search queries comprising one or more terms that may be of interest to the user. After receiving a search query from a user, a search engine identifies documents and/or web pages that are relevant based on the search query. Because of its utility, web searching, that is, the process of finding relevant web pages and documents for user-issued search queries, has arguably become one of the most popular services on the Internet today.
Further, search engines typically use a one-step process that utilizes a search index to identify relevant documents to return to a user based on a received search query. Search engine ranking functions, however, have evolved into very complex functions that can be both time consuming and expensive if used for every document that is indexed. Additionally, the storage of data needed for these complex functions can also present issues, especially when that data is stored in reverse indexes that are typically indexed by words or phrases. The extraction of relevant data needed for the complex functions, when stored in reverse indexes, is inefficient.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Embodiments of the present invention relate to the employment of both atom-sharded and document-sharded distribution across the same set of nodes such that each node, or machine, stores both a portion of a reverse index (e.g., sharded by atom) and a portion of a forward index (e.g., sharded by document). A segment may be assigned a group of documents for which it is responsible. The group of documents is indexed both by atom and document so that there is a reverse index and forward index associated with that group of documents. Each segment comprises multiple nodes, and each node may be assigned a different portion of both the reverse and forward indexes. Further, each node is responsible for performing multiple ranking calculations using both the reverse and forward index portions stored thereon. For instance, a preliminary ranking process may utilize the reverse index and a final ranking process may utilize the forward index. These ranking processes form an overall ranking process that is employed to identify the most relevant documents based on a received search query.
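For illustration, the hybrid layout described above may be sketched as follows. This is a minimal sketch under stated assumptions: the names `Node` and `Segment` and the modulo placement of atoms and document identifications are hypothetical, not taken from the disclosure.

```python
# Sketch of the hybrid-distribution layout: each node in a segment holds
# both a reverse-index portion (sharded by atom) and a forward-index
# portion (sharded by document). Names and the modulo sharding scheme
# are illustrative assumptions.

class Node:
    def __init__(self):
        self.reverse_index = {}  # atom -> posting list of document ids
        self.forward_index = {}  # document id -> per-document data

class Segment:
    def __init__(self, num_nodes):
        self.nodes = [Node() for _ in range(num_nodes)]

    def _node_for_atom(self, atom):
        # Atom-sharded placement of reverse-index entries.
        return self.nodes[hash(atom) % len(self.nodes)]

    def _node_for_doc(self, doc_id):
        # Document-sharded placement of forward-index entries.
        return self.nodes[doc_id % len(self.nodes)]

    def index_document(self, doc_id, atoms, doc_data):
        # Every document contributes postings to the reverse index
        # (possibly on several nodes) and one record to the forward
        # index (on exactly one node).
        for atom in atoms:
            node = self._node_for_atom(atom)
            node.reverse_index.setdefault(atom, []).append(doc_id)
        self._node_for_doc(doc_id).forward_index[doc_id] = doc_data
```

In this sketch, a single node may thus serve the preliminary stage for one query (via its reverse-index portion) and the final stage for another (via its forward-index portion).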
The present invention is described in detail below with reference to the attached drawing figures, wherein:
The subject matter of the present invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
As noted above, embodiments of the present invention provide for nodes that form a segment to each store a portion of a reverse index and a forward index for that segment. For instance, out of a total quantity of documents (e.g., one trillion) that are to be indexed, each segment may be allotted a certain portion of documents such that the segment is responsible for indexing and performing ranking calculations for those documents. The portion of the reverse and forward indexes stored on that particular segment is the complete reverse and forward index with respect to the documents assigned to that segment. Each segment is comprised of multiple nodes, which are essentially machines or computational devices with storage capabilities. An independent portion of the reverse index and forward index is assigned to each node in the segment such that each node may be employed to perform various ranking calculations. As such, each node has stored thereon a subset of the segment's reverse index and the forward index, and is responsible for accessing each in various ranking processes within the segment. For instance, an overall ranking process may comprise a matching phase, a preliminary ranking phase, and a final ranking phase. The matching/preliminary phase may require that those nodes whose reverse indexes have indexed a certain atom from a search query be employed to identify a first set of documents that is relevant to the search query. The first set of documents is a set of documents from the documents allocated to the segment. Subsequently, those nodes whose forward indexes have indexed a document identification associated with a document in the first set of documents may be employed to identify a second set of documents that are even more relevant to the search query. The second set of documents, in one embodiment, is a subset of the first set of documents.
This overall process may be employed to limit a set of documents to those that are found to be relevant so that the final ranking process, which is typically more time consuming and costly than the preliminary ranking process, is employed to rank fewer documents than it would be if ranking every document in the index, whether relevant or not.
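The narrowing funnel just described may be illustrated as follows. This sketch is not the disclosed ranking functions; `cheap_score` and `expensive_score` stand in for the preliminary and final ranking functions, and the cutoff parameter `keep` is an assumption.

```python
# Illustrative matching funnel: an inexpensive match (L0), a simplified
# preliminary score (L1) that retains only the best candidates, and a
# more expensive final ranker (L2) applied to the small survivor set.
# The scoring functions are placeholders, not the disclosed formulas.

def funnel(docs, query_atoms, cheap_score, expensive_score, keep=100):
    # L0: keep only documents containing every query atom.
    matched = [d for d in docs if all(a in d["atoms"] for a in query_atoms)]
    # L1: preliminary ranking with a simplified function; keep top `keep`.
    matched.sort(key=lambda d: cheap_score(d, query_atoms), reverse=True)
    candidates = matched[:keep]
    # L2: costly final ranking applied only to the survivors.
    candidates.sort(key=lambda d: expensive_score(d, query_atoms), reverse=True)
    return candidates
```

The point of the funnel is visible in the shapes: the expensive function runs over at most `keep` documents, however large the original corpus is.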
Accordingly, in one aspect, an embodiment of the present invention is directed to one or more computer-storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform a method for utilizing a hybrid-distribution system for identifying relevant documents based on a search query. The method includes allocating a group of documents to a segment, the group of documents being indexed by atom in a reverse index and indexed by document in a forward index and storing a different portion of the reverse index and the forward index on each of a plurality of nodes that form the segment. Further, the method includes accessing the reverse index portion stored on each of a first set of nodes to identify a first set of documents that is relevant to the search query. The method additionally includes, based on document identifications associated with the first set of documents, accessing the forward index portion stored on each of a second set of nodes to limit a quantity of relevant documents in the first set of documents to a second set of documents.
In another embodiment, an aspect of the invention is directed to one or more computer-storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform a method for generating a hybrid-distribution system for a multi-process document-retrieval system. The method includes receiving an indication of a group of documents assigned to a segment, the segment comprising a plurality of nodes. For the segment, the method further includes indexing the allocated group of documents by atom to generate a reverse index and indexing the allocated group of documents by document to generate a forward index. The method additionally includes assigning a portion of the reverse index and a portion of the forward index to each of a plurality of nodes that form the segment such that each of the plurality of nodes has stored a different portion of the forward index and a different portion of the reverse index.
A further embodiment of the invention is directed to one or more computer-storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform a method for utilizing a hybrid-distribution system for identifying relevant documents based on a search query. The method includes receiving a search query, identifying one or more atoms in the search query, and communicating the one or more atoms to a plurality of segments that have each been assigned a group of documents that is indexed both by atom and by document such that a reverse index and a forward index are generated and stored at each of the plurality of segments. Each of the plurality of segments is comprised of a plurality of nodes that are each assigned a portion of the forward index and the reverse index. Based on the one or more atoms, the method identifies a first set of nodes at a first segment whose reverse index portions contain at least one of the one or more atoms from the search query. Additionally, the method includes accessing the reverse index portion stored at each of the first set of nodes to identify a first set of documents that is found to be relevant to the one or more atoms and based on document identifications associated with the first set of documents, identifying a second set of nodes whose forward index portions contain one or more of the document identifications associated with the first set of documents. The method also includes accessing the forward index portion stored at each of the second set of nodes to identify a second set of documents that is a subset of the first set of documents.
Having briefly described an overview of embodiments of the present invention, an exemplary operating environment in which embodiments of the present invention may be implemented is described below in order to provide a general context for various aspects of the present invention. Referring initially to
The invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, more specialty computing devices, etc. The invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
With reference to
Computing device 100 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 100 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer-storage media and communication media. Computer-storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer-storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 100. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 112 includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 100 includes one or more processors that read data from various entities such as memory 112 or I/O components 120. Presentation component(s) 116 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc.
I/O ports 118 allow computing device 100 to be logically coupled to other devices including I/O components 120, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
Referring now to
Among other components not shown, the system 200 includes a user device 202, a segment 204, and a hybrid-distribution system server 206. Each of the components shown in
The user device 202 may be any type of computing device owned and/or operated by an end user that can access network 208. For instance, the user device 202 may be a desktop computer, a laptop computer, a tablet computer, a mobile device, or any other device having network access. Generally, an end user may employ the user device 202 to, among other things, access electronic documents by submitting a search query to a search engine. For instance, the end user may employ a web browser on the user device 202 to access and view electronic documents stored in the system.
The segment 204 typically comprises multiple nodes, also termed leaves. In
As mentioned, an index may be indexed or sharded by atom (reverse index) or sharded by document (forward index). As used herein, sharding refers to the process of indexing a set of documents, whether by atom or by document. There are pros and cons to using each approach separately without the other. For instance, when sharding by document, the pros include the isolation of processing between shards such that only merging of results is needed. Further, per-document information is easily aligned with the matching. Even further, network traffic is small. To the contrary, the cons include that every shard is needed to process any particular query. A minimum of O(KN) disk seeks is needed for a K-atom query on N shards if the reverse index data is placed on a disk. When sharding by atom, the pros include reduced computation such that only K shards are needed to process a K-atom query. O(K) disk seeks are required for a K-atom query if the reverse index data is placed on a disk. But, to the contrary, the cons include the need for connected processing such that all shards storing atoms that participate in a query need to collaborate. Network traffic is significant, in addition to per-document information not being easily managed. Embodiments of the present invention require less management of per-document data than traditional approaches. Reasons for this include that some scores are precomputed and stored in the indexes, such as the reverse index, and further refinement and filtering of documents also happens subsequent to the matching phase (L0). As such, the cons described above are greatly reduced with respect to management of per-document data.
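The shard-contact trade-off described above can be made concrete with a small sketch. The modulo placement is an assumption used only for illustration; the point is that document sharding touches all N shards for any query, while atom sharding touches at most K shards for a K-atom query.

```python
# Illustration of the shard-contact trade-off: document-sharded indexes
# require every one of N shards to see every query, while atom-sharded
# indexes contact at most K shards for a K-atom query. Modulo placement
# is an illustrative assumption, not the disclosed scheme.

def shards_contacted_doc_sharded(num_shards, query_atoms):
    # Any document could live on any shard, so all shards participate.
    return num_shards

def shards_contacted_atom_sharded(num_shards, query_atoms):
    # Only the shards owning one of the K query atoms participate.
    return len({hash(atom) % num_shards for atom in query_atoms})
```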
Further, each node in a particular segment is capable of performing various functions, including ranking functions that allow for relevant search results to be identified. In some embodiments, the search engine may employ a staged process to select search results for a search query, such as the staged approach described in U.S. patent application Ser. No. 12/951,528, entitled “MATCHING FUNNEL FOR LARGE DOCUMENT INDEX.” Here, each node may be capable of employing multiple stages of an overall ranking process. An exemplary ranking process is described below, but is simply one example of ranking processes that may be employed by each node. An overall ranking process may be employed when a search query is received to pare the quantity of matching documents down to a manageable size. When a search query is received, the search query is analyzed to identify atoms. The atoms are then used during the various stages of the overall ranking process. The first of these stages may be referred to as the L0 stage (matching stage), which queries the search index and identifies an initial set of matching documents that contain the atoms from the search query. This initial process may reduce the number of candidate documents from all documents indexed in the search index to those documents matching the atoms from the search query. For instance, a search engine may search through millions or even trillions of documents to determine those that are most relevant to a particular search query. Once the L0 matching stage is complete, the number of candidate documents is greatly reduced. Many algorithms for locating the most relevant documents, however, are costly and time-consuming. As such, two other stages may be employed, including a preliminary ranking stage and a final ranking stage.
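One common way to realize an L0 matching stage of the kind described above is posting-list intersection over a reverse index; the following is a minimal in-memory sketch (the real on-disk posting-list layout is not shown, and the dict-based index is an assumption).

```python
# Minimal sketch of L0 matching: intersect the posting lists of the
# query atoms to obtain the initial candidate set. The reverse index
# here is a plain dict of atom -> list of document ids; the on-disk
# representation used in practice is not modeled.

def l0_match(reverse_index, query_atoms):
    postings = [reverse_index.get(atom, []) for atom in query_atoms]
    if not postings:
        return []
    # Intersect smallest-first to keep the working set small.
    postings.sort(key=len)
    result = set(postings[0])
    for plist in postings[1:]:
        result &= set(plist)
    return sorted(result)
```

A query whose atoms include one with an empty posting list correctly yields no candidates, since the intersection starts from the smallest list.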
The preliminary ranking stage, also termed the L1 stage, employs a simplified scoring function used to compute a preliminary score or ranking for candidate documents retained from the L0 matching stage described above. The preliminary ranking component 210, as such, is responsible for providing preliminary rankings for each of the candidate documents retained from the L0 matching stage. Alternatively, candidate documents may be scored, and as such given absolute numbers instead of a ranking. The preliminary ranking stage is simplified when compared to the final ranking stage as it employs only a subset of the ranking features used by the final ranking stage. For instance, one or more, but in some embodiments not all, of the ranking features used in the final ranking stage are employed by the preliminary ranking stage. Additionally, features not employed by the final ranking stage may be employed by the preliminary ranking stage. In embodiments of the present invention, the ranking features used by the preliminary ranking stage do not have atom-interdependencies, such as term closeness and term co-occurrence. For example, the ranking features used in the preliminary ranking stage may include, for exemplary purposes only, static features and dynamic atom-isolated components. Static features, generally, are those components that only look into features that are query-independent. Examples of static features include page rank, spam ratings of a particular web page, etc. Dynamic atom-isolated components are components that only look at features that are related to single atoms at a time. Examples may include, for instance, BM25f, frequency of a certain atom in a document, location (context) of the atom in the document (e.g., title, URL, anchor, header, body, traffic, class, attributes), etc.
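A preliminary score with the property described above (no atom interdependencies) might be sketched as follows. The weights, field names, and the saturating per-atom term are illustrative placeholders, not the disclosed ranking features.

```python
# Sketch of a preliminary (L1) score with no atom interdependencies:
# a query-independent static component (e.g., page rank, spam rating)
# plus a per-atom component summed independently over the query atoms.
# The formula is an illustrative placeholder.

def preliminary_score(doc, query_atoms):
    score = doc["static_score"]              # query-independent features
    for atom in query_atoms:
        tf = doc["atom_freq"].get(atom, 0)   # frequency of this atom alone
        score += tf / (tf + 1.0)             # saturating, BM25-like term
    return score
```

Because each atom contributes independently, such a score can be computed from reverse-index data for a single atom at a time, which is what makes it cheap relative to the final stage.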
Once the number of candidate documents has again been reduced by the preliminary ranking stage, the final ranking stage, also termed the L2 stage, ranks the candidate documents provided to it by the preliminary ranking stage. The algorithm used in conjunction with the final ranking stage is a more expensive operation with a larger number of ranking features when compared to the ranking features used in the preliminary ranking stage. The final ranking algorithm, however, is applied to a much smaller number of candidate documents. The final ranking algorithm provides a set of ranked documents, and search results are provided in response to the original search query based on the set of ranked documents. In some embodiments, the final ranking stage as described herein may employ a forward index, as described in U.S. patent application Ser. No. 12/951,747, entitled “EFFICIENT FORWARD RANKING IN A SEARCH ENGINE.”
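The final stage's use of the forward index might be sketched as follows; `expensive_score` stands in for the richer model with many features (including atom-interdependent ones), which is not specified here.

```python
# Sketch of the final (L2) stage: for each surviving document id, pull
# the full per-document record from the forward index and apply a more
# expensive scoring function. `expensive_score` is a placeholder for a
# richer ranking model.

def l2_rank(forward_index, candidate_ids, query_atoms, expensive_score):
    scored = [(expensive_score(forward_index[d], query_atoms), d)
              for d in candidate_ids if d in forward_index]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored]
```

Note that lookups here are keyed by document identification, which is why a document-sharded (forward) index, rather than the atom-sharded reverse index, is the natural structure for this stage.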
Returning to
When a search query is received via a user interface on the user device 202, for instance, the query parsing component 224 operates to reformulate the query. The query is reformulated from its free text form into a format that facilitates querying the search indexes, such as the reverse indexes and forward indexes, based on how data is indexed in the search indexes. In embodiments, the terms of the search query are parsed and analyzed to identify atoms that may be used to query the search indexes. The atoms may be identified using similar techniques that were used to identify atoms in documents when indexing the documents in the search indexes. For instance, atoms may be identified based on the statistics of terms and query distribution information. The query parsing component 224 may provide a set of conjunction of atoms and cascading variants of these atoms.
An atom, or an atomic unit, as used herein, may refer to a variety of units of a query or a document. These units may include, for example, a term, an n-gram, an n-tuple, a k-near n-tuple, etc. A term maps down to a single symbol or word as defined by the particular tokenizer technology being used. A term, in one embodiment, is a single character. In another embodiment, a term is a single word or grouping of words. An n-gram is a sequence of “n” number of consecutive or almost consecutive terms that may be extracted from a document. An n-gram is said to be “tight” if it corresponds to a run of consecutive terms and is “loose” if it contains terms in the order they appear in the document, but the terms are not necessarily consecutive. Loose n-grams are typically used to represent a class of equivalent phrases that differ by insignificant words (e.g., “if it rains I'll get wet” and “if it rains then I'll get wet”). An n-tuple, as used herein, is a set of “n” terms that co-occur (order independent) in a document. Further, a k-near n-tuple, as used herein, refers to a set of “n” terms that co-occur within a window of “k” terms in a document. Thus, an atom is generally defined as a generalization of all of the above. Implementations of embodiments of the present invention may use different varieties of atoms, but as used herein, the term “atom” generally describes each of the above-described varieties.
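Two of the atom varieties defined above can be sketched directly from their definitions. This assumes tokenization has already produced a term sequence; the function names are hypothetical.

```python
from itertools import combinations

# Sketch of extracting two atom varieties from a token sequence:
# tight n-grams (runs of "n" consecutive terms) and k-near n-tuples
# (order-independent sets of "n" terms co-occurring within a window
# of "k" terms). Tokenization is assumed already done.

def tight_ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def k_near_ntuples(tokens, n, k):
    found = set()
    for i in range(len(tokens)):
        window = tokens[i:i + k]
        # Canonicalize each combination in sorted order, since the
        # definition is order independent.
        for combo in combinations(sorted(set(window)), n):
            found.add(combo)
    return found
```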
The query distribution component 226 is essentially responsible for receiving a submitted search query and distributing it amongst the segments. In one embodiment, every search query is distributed to every segment such that each segment provides a preliminary set of search results. For instance, when a segment receives a search query, the segment or a component within the segment determines which nodes will be tasked with performing a preliminary ranking function that utilizes a reverse index portion stored on the nodes. In one case, the selected nodes that are a part of a first set of nodes are those whose reverse index has indexed one or more of the atoms that have been parsed from the search query, as described above. As such, when the search query is reformulated, one or more atoms are identified and sent to each segment. Each of the first set of nodes returns a first set of documents that are found to be relevant to the search query based on a preliminary ranking function, as briefly described above. Subsequently, a second set of nodes is determined. In one embodiment, each of these nodes has stored in its respective forward index at least one of the documents in the first set of documents. Each of the second set of nodes performs a final ranking function using forward index data and other considerations and as a result, a second set of documents is identified. In one embodiment, each of the documents in the second set is included in the first set, as the document identifications associated with the first set of documents are used in the final ranking stage.
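The two node selections described above, the first set chosen by query atoms and the second chosen by surviving document identifications, may be sketched as follows. The modulo placement mirrors the layout assumption used earlier and is not the disclosed scheme.

```python
# Sketch of the two node selections: the first set of nodes owns the
# reverse-index entries for the query atoms; the second set owns the
# forward-index records for the documents surviving the preliminary
# stage. Modulo placement is an illustrative assumption.

def first_set_of_nodes(query_atoms, num_nodes):
    # At most one node per atom, so at most K nodes for a K-atom query.
    return {hash(atom) % num_nodes for atom in query_atoms}

def second_set_of_nodes(doc_ids, num_nodes):
    # One node per surviving document id; with many survivors this
    # tends to cover most or all nodes in the segment.
    return {doc_id % num_nodes for doc_id in doc_ids}
```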
The result merging component 228 is given the search results (e.g., document identifications and snippets) from each segment and from those results, a merged and final search results list is formed. There are various ways that the final search results list is formed, including simply removing any duplicate documents and putting each document in a list in an order determined by the final rankings. In one embodiment, a component similar to the result merging component 228 is present on each segment such that the results produced by each node are merged into a single list at the segment before that list is sent to the result merging component 228.
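A minimal merge of per-segment results, deduplicating by document identification and ordering by final score, might look like the following sketch (the tuple shape of each result is an assumption).

```python
# Sketch of result merging: concatenate per-segment result lists of
# (document id, score) pairs, drop duplicate document identifications
# by keeping the highest score, and order by score descending.

def merge_results(segment_results):
    best = {}
    for results in segment_results:
        for doc_id, score in results:
            if doc_id not in best or score > best[doc_id]:
                best[doc_id] = score
    return [d for d, _ in sorted(best.items(), key=lambda kv: kv[1], reverse=True)]
```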
Turning now to
As shown, each segment root is comprised of multiple nodes. Because of space constraints, three nodes are illustrated for segment root 320 and segment root 332. Segment root 320 is comprised of node 322, node 324, and node 326. Ellipses 328 indicates that more than three nodes are contemplated to be within the scope of the present invention. Segment root 332 comprises node 334, node 336, and node 338. As any number of nodes may comprise a segment root, ellipses 340 indicates any additional quantity of nodes. As mentioned, each node is a machine or computational device that is capable of performing multiple calculations, such as ranking functions. For instance, in one embodiment, each node comprises an L01 matcher 322A and an L2 ranker 322B, as shown at node 322. Similarly, node 334 comprises an L01 matcher 334A and an L2 ranker 334B. These are described in more detail above, but the L0 matching and L1 ranking phases (preliminary ranking phase) of an overall ranking process may be combined and collectively called the L01 matcher. As each of the nodes comprises an L01 matcher and an L2 ranker, each node must also have stored a portion of a reverse index and forward index, as the L01 matcher, in one embodiment, utilizes a reverse index and the L2 ranker utilizes a forward index. As mentioned, each of the nodes may be assigned a portion of the reverse and forward indexes that belong to the segment. The segment communication bus 330 associated with segment 314 and the segment communication bus 342 associated with segment 316 allow for each of the nodes to communicate when necessary, such as with the segment root.
The group of documents that is allocated to a particular segment is indexed or sharded both by atom (reverse index) and by document (forward index). These indexes are divided into portions equal to the number of nodes that comprise that particular segment. In one embodiment, there are forty nodes, and thus each of the reverse index and the forward index is divided into forty portions and stored at each of the respective nodes. When a search query is submitted to a search engine, the query is sent to each segment. It is the segment's responsibility to identify a first set of nodes whose reverse index has one or more of the atoms from the query indexed. Using this method, if the query is parsed into two atoms, for instance “William” and “Shakespeare” from the query “William Shakespeare,” the greatest number of nodes in a segment that would be engaged for the L01 matcher is two. This is shown in
This first set of documents is collected at the segment root 410 from each of the nodes from the first set of nodes, including nodes 412 and 416. These results are combined in any of a number of ways so that the segment root 410 can next identify a second set of nodes that will be used in conjunction with the final ranking stage. As shown, each L2 ranker is employed in the final ranking stage, or the L2 stage. This is because each node has stored a portion of the forward index for that segment, and as such there is a good chance that most or all forward indexes will need to be accessed in the final stage of ranking. In the final ranking stage, each node in the second set of nodes is given the document identification that it contains in its forward index so that the node can rank that document based on, at least, data found in the forward index. Because most or all of the nodes are employed in the final ranking stage, as shown in the system 400 of
Referring to
Step 516 indicates that the reverse index portion is accessed at each node of a first set of nodes. Each node in the first set of nodes has been identified as having indexed one of the atoms of a received search query. A first set of documents is identified at step 518. These documents, in one embodiment, have been ranked using a preliminary ranking function so that the most relevant documents can be identified. This step may correspond, for instance, to the L1 preliminary ranking phase and/or the L0 matching phase. Based on document identifications associated with the documents in the first set of documents, the forward index portion is accessed at each of a second set of nodes, shown at step 520. This step may correspond to the L2 final ranking stage. This effectively limits the quantity of relevant documents for a particular search query. As such, the quantity of documents is limited to a second set of documents, shown at step 522. In many or most instances, the number of nodes in the second set is greater than the number of nodes in the first set, as described in greater detail above. This is because a search query may have only two atoms such that two nodes, at the most, are needed for the L01 matching phase, but thousands of documents may be identified as being relevant to the two atoms of the search query, and as such many more nodes may be employed to use their respective forward indexes to perform the final ranking computations to identify the second set of documents. Further, in embodiments, because the final ranking function utilizes the document identifications produced from the preliminary ranking function, the number of documents in the second set is less than the number of documents in the first set such that each document in the second set is also contained in the first set.
In one embodiment, the overall process may involve receiving a search query. One or more atoms in the search query are identified, and once each segment is aware of the one or more atoms, a first set of nodes is identified in the segment that contains at least one of the one or more atoms from the search query. Each of the nodes in the first set of nodes sends a first set of documents (e.g., document identifications) to the segment root, for example, so that the segment root can consolidate (e.g., delete duplicates) and merge the results. A second set of nodes then sends the segment root a second set of documents. Similarly, the segment root consolidates and merges the results to generate a final set of documents that is presented to the user in response to the search query.
Turning to
In embodiments, at the segment, an indication is received of one or more atoms that have been identified from a search query. A first set of nodes is identified whose reverse index portions include at least one of the one or more atoms. These nodes are each capable of performing various ranking functions. A first set of documents is identified based on the reverse index portions of the first set of nodes. Each node in the first set may produce a first set and send it to the segment root such that the various first sets of documents can be consolidated and merged. In one instance, the first set of documents is produced by way of a preliminary ranking process of a multistage ranking process that utilizes the reverse index portions stored thereon. Further, a second set of nodes may then be identified whose forward index portions have indexed one or more document identifications corresponding to the first set of documents. A second set of documents may then be identified based, partially, on data stored in the forward index, and may compute features in real-time rather than using precomputed scores. The second set of documents may be identified based on a final ranking process of a multistage ranking process that utilizes the forward index. Once the second set of documents from each node in the second set of nodes is consolidated and merged, it is also merged with the second sets of documents from all of the other segments so that a final set of documents is formed and returned to the user as search results.
The present invention has been described in relation to particular embodiments, which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present invention pertains without departing from its scope.
From the foregoing, it will be seen that this invention is one well adapted to attain all the ends and objects set forth above, together with other advantages which are obvious and inherent to the system and method. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations. This is contemplated by and is within the scope of the claims.
Number | Name | Date | Kind |
---|---|---|---|
4769772 | Dwyer | Sep 1988 | A |
5193180 | Hastings | Mar 1993 | A |
5640487 | Lau et al. | Jun 1997 | A |
5983216 | Kirsch et al. | Nov 1999 | A |
6167397 | Jacobson et al. | Dec 2000 | A |
6173298 | Smadja | Jan 2001 | B1 |
6507829 | Richards et al. | Jan 2003 | B1 |
6571251 | Koski et al. | May 2003 | B1 |
6704729 | Klein et al. | Mar 2004 | B1 |
6807545 | VanDamme | Oct 2004 | B1 |
6901411 | Li et al. | May 2005 | B2 |
6999958 | Carlson et al. | Feb 2006 | B2 |
7039631 | Finger, II | May 2006 | B1 |
7072889 | Ogawa | Jul 2006 | B2 |
7152064 | Bourdoncle et al. | Dec 2006 | B2 |
7305385 | Dzikiewicz et al. | Dec 2007 | B1 |
7330857 | Svingen et al. | Feb 2008 | B1 |
7421418 | Nakano | Sep 2008 | B2 |
7433893 | Lowry | Oct 2008 | B2 |
7593934 | Li et al. | Sep 2009 | B2 |
7596745 | Dignum et al. | Sep 2009 | B2 |
7693813 | Cao et al. | Apr 2010 | B1 |
7702614 | Shah et al. | Apr 2010 | B1 |
7761407 | Stern | Jul 2010 | B1 |
7765215 | Hsu et al. | Jul 2010 | B2 |
7783644 | Petrou et al. | Aug 2010 | B1 |
7792846 | Raffill et al. | Sep 2010 | B1 |
7930290 | Farouki | Apr 2011 | B2 |
7966307 | Iwayama et al. | Jun 2011 | B2 |
7984043 | Waas | Jul 2011 | B1 |
8010482 | Andersen et al. | Aug 2011 | B2 |
8166203 | Yang | Apr 2012 | B1 |
8255386 | Annau et al. | Aug 2012 | B1 |
8527523 | Ravid | Sep 2013 | B1 |
20020032772 | Olstad | Mar 2002 | A1 |
20020091671 | Prokoph | Jul 2002 | A1 |
20020129015 | Caudill et al. | Sep 2002 | A1 |
20030191737 | Steele et al. | Oct 2003 | A1 |
20030217052 | Rubenczyk et al. | Nov 2003 | A1 |
20040044952 | Jiang et al. | Mar 2004 | A1 |
20040098399 | Risberg et al. | May 2004 | A1 |
20040133557 | Wen et al. | Jul 2004 | A1 |
20040139167 | Edsall et al. | Jul 2004 | A1 |
20050010560 | Altevogt et al. | Jan 2005 | A1 |
20050038866 | Noguchi et al. | Feb 2005 | A1 |
20050210383 | Cucerzan | Sep 2005 | A1 |
20050283526 | O'Neal et al. | Dec 2005 | A1 |
20060018551 | Patterson | Jan 2006 | A1 |
20060020571 | Patterson | Jan 2006 | A1 |
20060080311 | Potok et al. | Apr 2006 | A1 |
20060155690 | Wen et al. | Jul 2006 | A1 |
20060195440 | Burges et al. | Aug 2006 | A1 |
20060248066 | Brewer | Nov 2006 | A1 |
20070040813 | Kushler et al. | Feb 2007 | A1 |
20070067274 | Han et al. | Mar 2007 | A1 |
20070078653 | Olsen | Apr 2007 | A1 |
20070150467 | Beyer et al. | Jun 2007 | A1 |
20070250501 | Grubb | Oct 2007 | A1 |
20080027912 | Liu et al. | Jan 2008 | A1 |
20080027920 | Schipunov et al. | Jan 2008 | A1 |
20080028010 | Ramsey | Jan 2008 | A1 |
20080059187 | Roitblat et al. | Mar 2008 | A1 |
20080059489 | Han et al. | Mar 2008 | A1 |
20080082520 | Bohn et al. | Apr 2008 | A1 |
20080114750 | Saxena et al. | May 2008 | A1 |
20080208836 | Zheng et al. | Aug 2008 | A1 |
20080216715 | Langford | Sep 2008 | A1 |
20080294634 | Fontoura et al. | Nov 2008 | A1 |
20090012956 | Wen et al. | Jan 2009 | A1 |
20090070322 | Salvetti et al. | Mar 2009 | A1 |
20090083262 | Chang et al. | Mar 2009 | A1 |
20090106232 | Burges | Apr 2009 | A1 |
20090112843 | Hsu et al. | Apr 2009 | A1 |
20090132515 | Lu et al. | May 2009 | A1 |
20090132541 | Barsness et al. | May 2009 | A1 |
20090187550 | Mowatt et al. | Jul 2009 | A1 |
20090187555 | Liu et al. | Jul 2009 | A1 |
20090216715 | Dexter | Aug 2009 | A1 |
20090216740 | Ramakrishnan et al. | Aug 2009 | A1 |
20090248669 | Shetti et al. | Oct 2009 | A1 |
20090254523 | Lang et al. | Oct 2009 | A1 |
20090271385 | Krishnamoorthy et al. | Oct 2009 | A1 |
20090327274 | Kejariwal et al. | Dec 2009 | A1 |
20100057718 | Kulkarni | Mar 2010 | A1 |
20100082617 | Liu et al. | Apr 2010 | A1 |
20100114561 | Yasin | May 2010 | A1 |
20100121838 | Tankovich et al. | May 2010 | A1 |
20100138426 | Nakayama et al. | Jun 2010 | A1 |
20100179933 | Bai et al. | Jul 2010 | A1 |
20100198857 | Metzler et al. | Aug 2010 | A1 |
20100205172 | Luk | Aug 2010 | A1 |
20100318516 | Kolen et al. | Dec 2010 | A1 |
20100318519 | Hadjieleftheriou et al. | Dec 2010 | A1 |
20110093459 | Dong et al. | Apr 2011 | A1 |
20110191310 | Liao et al. | Aug 2011 | A1 |
20120130925 | Risvik et al. | May 2012 | A1 |
Number | Date | Country |
---|---|---|
1517914 | Aug 2004 | CN |
1670723 | Sep 2005 | CN |
1728143 | Feb 2006 | CN |
101246492 | Aug 2008 | CN |
101322125 | Dec 2008 | CN |
101388026 | Mar 2009 | CN |
101393565 | Mar 2009 | CN |
101437031 | May 2009 | CN |
101583945 | Nov 2009 | CN |
101635741 | Jan 2010 | CN |
101950300 | Jan 2011 | CN |
0952535 | Oct 1999 | EP |
Entry |
---|
Ganti, et al., “Precomputing Search Features for Fast and Accurate Query Classification,” In: Third ACM International Conference on Web Search and Data Mining, Feb. 4-6, 2010, 10 pages, New York City, NY. |
Tandon, et al., “Information Extraction from Web-Scale N-Gram Data,” In: Special Interest Group on Information Retrieval Web N-Gram Workshop, 2010, 8 pages. |
Zobel, et al., “Finding Approximate Matches in Large Lexicons,” Software—Practice and Experience, Mar. 1995, by John Wiley & Sons, Ltd., pp. 331-345, vol. 25, Issue 3, Australia. |
Pike, et al., “Interpreting the Data: Parallel Analysis with Sawzall,” In Scientific Programming—Dynamic Grids and Worldwide Computing, vol. 13, Issue 4, 2005, pp. 1-33. |
Shah, et al., “Flux: An Adaptive Partitioning Operator for Continuous Query Systems,” 19th International Conference on Data Engineering (ICDE'03), 2003, 16 pp. |
Tamura, et al., “Parallel Database Processing on a 100 Node PC Cluster: Cases for Decision Support Query Processing and Data Mining,” In Proceedings of the 1997 ACM/IEEE conference on Supercomputing (CDROM), 1997, 16 pp. |
Zhaohui Zheng, et al., Query-Level Learning to Rank Using Isotonic Regression—Pub. Date: Sep. 26, 2008 http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04797684. |
Ke Zhou, Gui-Rong Xue, et al., Learning to Rank with Ties—Pub. Date: Jul. 24, 2008 http://sites.google.com/site/archkzhou/home/p275-zhou.pdf. |
Tao Qin, et al., Learning to Rank Relational Objects and Its Application to Web Search—Pub. Date: Apr. 25, 2008 http://www2008.org/papers/pdf/p407-qinA.pdf. |
Andrew Cencini, SQL Server 2005 Full-Text Search: Internals and Enhancements—Pub. Date: Dec. 2003 http://msdn.microsoft.com/en-us/library/ms345119%28SQL.90%29.aspx. |
Mark Bennett, Do You Need Synthetic Tokens? (part 2)—Published Date: Dec. 2009 http://www.ideaeng.com/tabld/98/itemId/209/Do-You-Need-Synthetic-Tokens-part-2.aspx. |
Steven Burrows, et al., Efficient and Effective Plagiarism Detection for Large Code Repositories—Pub. Date: 2004 http://www.cs.berkeley.edu/˜benr/publications/auscc04/papers/burrows-auscc04.pdf. |
Andrew Kane, Simulation of Distributed Search Engines: Comparing Term, Document and Hybrid Distribution—Published Date: Feb. 18, 2009 http://www.cs.uwaterloo.ca/research/tr/2009/CS-2009-10.pdf. |
Lei Zheng, et al., Document-Oriented Pruning of the Inverted Index in Information Retrieval Systems—Pub. Date: 2009 http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5136730. |
Ahmad Abusukhon, et al., Comparison Between Document-based, Term-based and Hybrid Partitioning—Pub. Date: Aug. 4, 2008 http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04664324. |
Chunqiang Tang, et al., Hybrid Global-Local Indexing for Efficient Peer-To-Peer Information Retrieval—Pub. Date: 2004 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.5268&rep=rep1&type=ps. |
Kuansan Wang, et al., Multi-Style Language Model for Web Scale Information Retrieval—Pub. Date: Jul. 23, 2010 http://research.microsoft.com/en-us/um/people/jfgao/paper/fp580-wang.pdf. |
David Carmel, et al., Juru at TREC 10—Experiments with Index Pruning RD—Retrieved Date: Aug. 12, 2010 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.6833&rep=rep1&type=pdf. |
Using and storing the index—Retrieved Date: Aug. 13, 2010 http://www.cs.princeton.edu/courses/archive/spring10/cos435/Notes/indexing—topost.pdf. |
Matthias Bender, et al., Design Alternatives for Large-Scale Web Search: Alexander was Great, Aeneas a Pioneer, and Anakin has the Force—Retrieved Date: Aug. 16, 2010 http://qid3.mmci.uni-saarland.de/publications/lsds2007.pdf. |
Parallel Information Retrieval—Retrieved Date: Aug. 16, 2010 http://www.ir.uwaterloo.ca/book/14-parallel-information-retrieval.pdf. |
Diego Puppin, et al., Query-Driven Document Partitioning and Collection Selection—Retrieved Date: Aug. 16, 2010 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.6421&rep=rep1&type=pdf. |
Ohm Sornil, et al., Hybrid Partitioned Inverted Indices for Large-Scale Digital Libraries—Retrieved Date: Aug. 16, 2010 http://ncsi-net.ncsi.iisc.ernet.in/gsdl/collect/icco/index/assoc/HASH472d.dir/doc.pdf. |
Non-Final Office Action mailed Jan. 31, 2012, in U.S. Appl. No. 13/045,278. |
Non-Final Office Action mailed Apr. 13, 2012 in U.S. Appl. No. 12/951,799. |
Non-Final Office Action mailed Apr. 5, 2012 in U.S. Appl. No. 12/951,747. |
Non-final Office Action mailed Apr. 11, 2012 in U.S. Appl. No. 12/951,528. |
International Search Report and Written Opinion in PCT/US2011/059650, mailed Apr. 10, 2012. |
Final Office Action in U.S. Appl. No. 13/045,278, mailed Jul. 19, 2012. |
International Search Report and Written Opinion in PCT/US2011/059834, mailed May 23, 2012. |
Non-Final Office Action in U.S. Appl. No. 12/951,747, mailed Nov. 1, 2012. |
Final Office Action in U.S. Appl. No. 13/072,419, mailed Aug. 9, 2013, 45 pages. |
Notice of Allowance in U.S. Appl. No. 12/951,528 mailed Aug. 26, 2013, 33 pages. |
Final Office Action in U.S. Appl. No. 12/951,747, mailed Apr. 9, 2013, 16 pages. |
Final Office Action in U.S. Appl. No. 12/951,528, mailed Apr. 8, 2013, 22 pages. |
Notice of Allowance and Fee(s) Due in U.S. Appl. No. 12/951,747, mailed Dec. 11, 2013, 35 pages. |
Non Final Office Action in U.S. Appl. No. 13/932,866, mailed Dec. 20, 2013, 19 pages. |
Non-Final Office Action dated Sep. 25, 2012 in U.S. Appl. No. 12/951,528, 15 pages. |
Chinese Office Action dated Sep. 16, 2014 in Chinese Application No. 201110373395.6, 6 pages. |
Non-Final Office Action dated Aug. 19, 2014 in U.S. Appl. No. 12/951,799, 11 pages. |
Final Office Action dated Nov. 2, 2012 in U.S. Appl. No. 12/951,799, 11 pages. |
Notice of Allowance dated Mar. 8, 2013 in U.S. Appl. No. 12/951,659, 10 pages. |
Chinese Office Action dated Aug. 11, 2014 in Chinese Application No. 201110373345.8, 6 pages. |
Non-Final Office Action dated Jan. 15, 2013 in U.S. Appl. No. 13/072,419, 22 pages. |
Notice of Allowance dated Apr. 11, 2014 in U.S. Appl. No. 13/932,866, 7 pages. |
Notice of Allowance dated Jul. 10, 2015 in U.S. Appl. No. 13/072,419, 16 pages. |
Final Office Action dated Mar. 12, 2015 in U.S. Appl. No. 12/951,799, 12 pages. |
Non-Final Office Action dated Mar. 25, 2015 in U.S. Appl. No. 13/045,278, 33 pages. |
Notice of Allowance dated Nov. 25, 2015 in U.S. Appl. No. 13/045,278, 5 pages. |
Non-Final Office Action dated Jan. 29, 2016 in U.S. Appl. No. 12/951,799, 12 pages. |
Chinese Office Action dated May 5, 2016 with Search Report dated Apr. 18, 2016 in Chinese Patent Application No. 201210060934.5, 11 pages. |
Chinese Office Action dated Jun. 8, 2016 with Search Report dated May 27, 2016 in Chinese Patent Application No. 201210079487.8, 13 pages. |
Number | Date | Country |
---|---|---|
20120130997 A1 | May 2012 | US |