The architecture of contemporary web search engines is process-centric. A crawler crawls documents (web pages) from the web and feeds them into a document processor. The document processor performs webpage parsing, word-breaking and feature extraction. Based on the extracted data, the pages are indexed for ranking, so that the results returned for a query correspond to the pages the ranker deems most relevant.
However, in this architecture, there is only a limited amount of time to process the web pages. As a result, only relatively lightweight processing is possible, whereby the analysis of web documents is relatively shallow. Further, the architecture provides a tightly coupled system that in general lacks flexibility and extensibility. For example, a small change in one part of the system may create an adverse effect (a “ripple effect”) in the entire system. Also, there is a general lack of capabilities with respect to data management, as web data is scattered and often transient, making it difficult to accumulate knowledge about the data.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards a data-centric web search engine technology/architecture, in which metadata, including offline-extracted metadata, is part of an indexing and ranking pipeline. A large central data store is used to manage the data in the search engine. Other components retrieve data from the store and store the processed results back into the store, rather than operating via a pipeline method. For example, the crawling component retrieves URLs to crawl from the store, extracts document metadata from the downloaded documents, and stores these data back into the store. An indexing component retrieves the document metadata to build an index for the documents. A serving component uses the index and the document metadata to serve content, e.g., search results.
Other advantages may become apparent from the following detailed description when taken in conjunction with the drawings.
The present invention is illustrated by way of example and not limitation in the accompanying figures, in which like reference numerals indicate similar elements and in which:
Various aspects of the technology described herein are generally directed towards a data-centric web search engine architecture, in which a web data management component based upon a data store (repository) including metadata is part of the indexing and ranking pipeline. As will be understood, this facilitates deeper document understanding and query understanding.
A crawler crawls pages from the web and places them into the repository. The document processor reads web pages from the repository, extracts page features and then stores the newly extracted features back into the repository. The indexer retrieves pages from the repository to build an inverted index. As a result, significant knowledge may be accumulated, and otherwise time-consuming feature extraction tasks may be performed to provide more desirable search results for queries.
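By way of example, and not limitation, this repository-centric flow may be sketched in simplified form as follows; the names used (Repository, crawl, process_documents, build_inverted_index) are hypothetical and merely illustrate that each component reads from and writes back to the shared repository rather than handing data down a fixed pipeline:

# Minimal sketch of the data-centric flow: each component reads from and
# writes back to a shared repository rather than passing data down a pipeline.

class Repository:
    """Hypothetical central data store holding crawled pages and metadata."""
    def __init__(self):
        self.pages = {}      # url -> raw page content
        self.metadata = {}   # url -> extracted page features

    def put_page(self, url, content):
        self.pages[url] = content

    def put_metadata(self, url, features):
        self.metadata[url] = features


def crawl(repository, urls):
    # The crawler downloads pages and places them into the repository.
    for url in urls:
        repository.put_page(url, "<html>content of %s</html>" % url)


def process_documents(repository):
    # The document processor reads pages from the repository, extracts page
    # features, and stores the newly extracted features back into the repository.
    for url, content in repository.pages.items():
        features = {"length": len(content)}   # placeholder feature extraction
        repository.put_metadata(url, features)


def build_inverted_index(repository):
    # The indexer retrieves pages from the repository to build an inverted
    # index mapping each term to the documents that contain it.
    index = {}
    for url, content in repository.pages.items():
        for term in content.lower().split():
            index.setdefault(term, set()).add(url)
    return index


repo = Repository()
crawl(repo, ["http://example.com/a", "http://example.com/b"])
process_documents(repo)
index = build_inverted_index(repo)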
It should be understood that any of the examples herein are non-limiting. Indeed, one particular architecture is described; however, this is only one example. As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computing and webpage processing/search technology in general.
Turning to
A crawling component 104 (as also represented in
As described below, the chunk builder nodes 2241-224m build chunks that are sent to the web data management component 102. Note that the crawling component 104 also provides changes (deltas) 106 to an indexing component 108 for updating the inverted index over time. In one alternative, the crawler directly outputs the delta changes to the indexer. In another alternative, the crawler stores the data into the data store and the indexer fetches the delta changes from the store, e.g., by a timestamp. (Note that the former is generally more efficient, while the latter is more flexible as it is a synchronized method.)
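By way of example, and not limitation, the second (store-based) alternative may be sketched as follows; the function name fetch_deltas and the in-memory list standing in for the data store are hypothetical:

from datetime import datetime

# Sketch: the crawler stamps each document with the time it was added to the
# store, and the indexer fetches only documents newer than its last update.

def fetch_deltas(stored_documents, last_index_time):
    """Return the documents added to the store after last_index_time."""
    return [doc for doc in stored_documents if doc["add_time"] > last_index_time]


store = [
    {"url": "http://example.com/a", "add_time": datetime(2008, 12, 5, 18, 0)},
    {"url": "http://example.com/b", "add_time": datetime(2008, 12, 5, 19, 30)},
]
deltas = fetch_deltas(store, last_index_time=datetime(2008, 12, 5, 19, 0))
# Only the second document is returned and indexed in this update cycle.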
The web data management component 102 performs data mining on the crawled pages to obtain metadata for those pages. The mined metadata is provided to the indexing component 108 to build the inverted index. The mined metadata is also provided to a serving component 110 that uses the index and metadata to serve search results. Note that the indexing component 108 and serving component 110 can be considered/run together as an index generating and serving (IGS) component (e.g., IGS nodes 2281-228i,
Among other actions, the web data management component 102 manages chunks by groups (e.g., groups 2301-230j,
The crawling component 104 thus acts as the content provider, and continuously (or regularly) provides content chunks to the rest of the system. In general, the crawler, chunk builder and static ranker modules of the crawling component 104 are the same as, or very similar to, those in the existing process-centric architecture; however, page-download-failure information may be recorded so that the index generation and serving node has sufficient information to remove outdated pages. More particularly, the download-failure information of a URL may be used; e.g., if a URL keeps returning an HTTP 404 response (the path does not exist), it is not indexed, or is removed from the index, even if it has a large static rank.
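By way of example only, such a policy may be sketched as follows; the function name should_index and the threshold of three consecutive failures are assumptions made solely for illustration:

# Sketch: a URL that repeatedly returns HTTP 404 is excluded from (or removed
# from) the index regardless of its static rank.  The failure threshold used
# here is an assumed value, not one prescribed by the architecture.

def should_index(url_record, failure_threshold=3):
    if url_record["consecutive_404s"] >= failure_threshold:
        return False
    return True


record = {"url": "http://example.com/gone", "static_rank": 0.9, "consecutive_404s": 4}
assert should_index(record) is False   # excluded despite its large static rank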
In addition, a document-adding tool may be built on each crawler and chunk builder machine to import the crawler-generated chunks into the data store 103, namely to monitor a content chunk folder and add new content chunks into the data store 103. Note that once a chunk is successfully added into the data store, it can be deleted from the crawler and chunk builder machine.
The data store 103 thus provides persistent storage for crawled documents and re-organizes them into the groups. In one implementation, an interface AddDocuments is provided for use by the document-adding tool to feed chunks into the store. Another interface, RetrieveDocs, is provided for each index generation and serving node to retrieve newly-crawled documents according to any specified conditions, e.g., a time range. To identify newly crawled or added documents, the crawling time or adding time of each document is used. For example, CrawlingTime>=#2008-12-05 19:10:00# specifies documents crawled after 19:10:00, Dec. 5, 2008; AddTime>=#2008-12-05 19:00:00# AND AddTime<#2008-12-05 20:00:00# specifies documents added in one hour (from 19 to 20 o'clock, on Dec. 5, 2008). When no condition is specified, all documents in the group are retrieved.
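By way of example, and not limitation, the two interfaces may be sketched as follows; the Python names (DataStore, add_documents, retrieve_docs) and the in-memory representation are hypothetical, and an actual data store would persist the groups and parse the textual condition syntax shown above:

from datetime import datetime

# Sketch of the two data store interfaces.  The document-adding tool on each
# crawler/chunk builder machine would call add_documents when a new content
# chunk appears in the monitored folder; each index generation and serving
# node would call retrieve_docs to obtain newly-crawled documents.

class DataStore:
    def __init__(self):
        self.groups = {}   # group id -> list of document records

    def add_documents(self, group_id, documents):
        # Each document is stamped with the time it was added to the store.
        now = datetime.now()
        for doc in documents:
            doc["add_time"] = now
        self.groups.setdefault(group_id, []).extend(documents)

    def retrieve_docs(self, group_id, condition=None):
        # condition is a predicate over a document record; when no condition
        # is specified, all documents in the group are returned.
        docs = self.groups.get(group_id, [])
        if condition is None:
            return list(docs)
        return [doc for doc in docs if condition(doc)]


store = DataStore()
store.add_documents("group-1", [{"url": "http://example.com/a",
                                 "crawl_time": datetime(2008, 12, 5, 19, 20)}])

# Equivalent of the condition: CrawlingTime >= #2008-12-05 19:10:00#
new_docs = store.retrieve_docs(
    "group-1",
    condition=lambda d: d["crawl_time"] >= datetime(2008, 12, 5, 19, 10))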
The data of each group may be replicated to multiple machines to avoid data loss and to improve availability. The data store alternatively may be designed as a data transformation component that is only utilized for processing chunks into groups, without persistently storing the documents.
In one implementation, each index serving and generating node 2281-228i indexes approximately 10 million documents and performs index serving based on the index. As can be seen in
Similar to chunks in the existing pipeline, each group has a primary hosting node and two secondary nodes in each row. Only primary replicas are used for serving search. When some machines in a row are offline, one secondary replica of the corresponding groups participates in index serving. An index serving and generating node provides service most of the time; however, after a certain period of time (e.g., one day), it may stop serving and perform an index update. To perform an index update, an index serving and generating node retrieves newly-crawled web pages from the data store module by calling its interfaces. Then, indices of the newly-crawled pages are built, one index per group. Thereafter, the new and existing indices of all groups are merged to generate a large index.
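By way of example only, the update cycle of an index serving and generating node may be sketched as follows; the function names are illustrative, retrieve_docs stands in for the data store interface sketched above, and merging with the previously built group indices is omitted for brevity:

# Sketch of one index update cycle for an index serving and generating node.
# Each document record is assumed to carry its text under the "content" key.

def build_group_index(documents):
    index = {}
    for offset, doc in enumerate(documents):
        for term in doc["content"].lower().split():
            index.setdefault(term, []).append(offset)
    return index


def merge_indices(group_indices):
    merged = {}
    for group_id, index in group_indices.items():
        for term, postings in index.items():
            merged.setdefault(term, []).extend((group_id, p) for p in postings)
    return merged


def update_node(store, group_ids, last_update_time):
    group_indices = {}
    for group_id in group_ids:
        # Retrieve only documents crawled since the node's last index update.
        new_docs = store.retrieve_docs(
            group_id, condition=lambda d: d["crawl_time"] > last_update_time)
        group_indices[group_id] = build_group_index(new_docs)
    # Merge the per-group indices into one large index (merging with the
    # existing indices of all groups is omitted in this sketch).
    return merge_indices(group_indices)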
Note that newer versions of existing documents may be contained in the newly-crawled documents; however, they will be in the same group as the previous version. Further, some existing documents may need to be deleted because they have disappeared from the web. Still further, with the newly-crawled documents being added, the overall number of documents may exceed the desired number (e.g., 10 million) for one node, whereby additional nodes may be needed. A URL table may be built and used to deal with these situations.
More particularly, to select 10 million pages (in this example) for each IGS node, a URL table is maintained for each group, referred to as the group URL table, containing the URLs of the group that are in the current index. In one implementation, the URL table contains the following information:
StaticRank, RefreshFrequency, and DiscardCoefficient are used to calculate a score (referred to as PageF) for each URL, with PageF then used as a measure to select documents. Other information also may be used in the PageF calculation.
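By way of example, and not limitation, a group URL table entry and the PageF computation may be sketched as follows; the field set follows the information referenced herein, but the particular combination of StaticRank, RefreshFrequency and DiscardCoefficient shown below is an assumed weighting for illustration only, as no specific formula is mandated:

from dataclasses import dataclass

# Sketch of a group URL table entry and an assumed PageF scoring function.

@dataclass
class UrlTableEntry:
    url: str
    static_rank: float          # query-independent page quality
    refresh_frequency: float    # how often the page changes / is recrawled
    discard_coefficient: float  # penalty making the page easier to discard
    group_id: int
    doc_offset: int             # offset of the document in the group content file


def page_f(entry: UrlTableEntry) -> float:
    # Assumed combination: higher static rank and refresh frequency raise the
    # score, while a higher discard coefficient lowers it.
    return entry.static_rank * entry.refresh_frequency * (1.0 - entry.discard_coefficient)


entry = UrlTableEntry("http://example.com/a", 0.8, 0.5, 0.1, group_id=3, doc_offset=42)
score = page_f(entry)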
The system gets the new documents for each group. The information to calculate PageF may be included in the content header as current hdc header:
To select 10 million pages, a threshold for PageF is determined. The pages whose PageF is larger than the threshold are selected for the index. Note that indexing may include the URLs of new documents never seen before as well as new versions of current pages. The PageF for these pages is based on the new information. One suitable update process is described below:
After getting the threshold PageFthres for PageF, the system updates each group URL table with certain changes, namely, some entries are removed because their PageF is below PageFthres, some entries are updated because they have new versions, and some entries are added because they have not been seen before. Note that the GroupId and DocOffset are updated for new versions so that the new-version documents are appended to the group content file during the content update. For a new document, the correct GroupId and DocOffset are provided.
Because not all of the new documents are selected, information about which pages are selected in this update is output, referred to as group NADInfo. Group NADInfo contains two types of information, namely documents which have not been seen before in this group and have a PageF larger than PageFthres, and documents which have been seen in this group and whose new PageF is larger than PageFthres, that is, new versions of current documents.
Another output is the information about which pages need to be removed for each group, referred to as group NRDInfo. Group NRDInfo contains documents having a new version, and documents that do not have a new version but whose PageF is below PageFthres.
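By way of example only, the selection and update step, together with the production of group NADInfo and group NRDInfo, may be sketched as follows; the dictionary-based data structures and names are illustrative:

# Sketch of updating one group URL table against newly-crawled documents.
# Entries below the PageF threshold are removed, entries with new versions are
# updated, and unseen documents above the threshold are added.  The same pass
# produces NADInfo (documents to add to the index) and NRDInfo (documents to
# remove from the index).  url_table maps a URL to its entry dictionary.

def update_group(url_table, new_docs, page_f_thres):
    nad_info = []   # newly added / new-version documents above the threshold
    nrd_info = []   # outdated or below-threshold documents to remove

    for doc in new_docs:
        if doc["page_f"] <= page_f_thres:
            continue
        old = url_table.get(doc["url"])
        if old is not None:
            # New version of a current document: the old version is removed
            # from the index and the new version is appended to the group.
            nrd_info.append(old)
        url_table[doc["url"]] = doc
        nad_info.append(doc)

    # Existing entries whose PageF falls below the threshold are also removed.
    for url, entry in list(url_table.items()):
        if entry["page_f"] < page_f_thres:
            nrd_info.append(entry)
            del url_table[url]

    return nad_info, nrd_info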
For content update, a content file is maintained for each group that contains the documents in the group index file. The group content files can be used to generate captions for search results. Based on group NADInfo, corresponding documents from the new documents may be appended to the old group content file.
For index update, represented in
Based on the NADInfo of the group, the documents that need to be added to the new index are known. The system first builds an index for these new documents for adding to the group index file later. To refresh group index files, the system removes the information of outdated documents using group NRDInfo and adds information from the index file of new documents.
In one implementation, three types of information are updated: DocData, in which corresponding document data is removed from the array based on group NRDInfo and corresponding document data is added using group NADInfo; EndDocLocations, in which corresponding locations are removed based on group NRDInfo, other locations affected by the removal are adjusted, and new locations are added for the new documents; and TermLocations, which is handled similarly to EndDocLocations.
A suitable group index file refresh process is shown below:
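By way of example, and not limitation, one simplified sketch of such a refresh is given below; simple in-memory Python structures stand in for DocData, EndDocLocations and TermLocations, and the actual on-disk index format is not captured:

# Sketch of refreshing one group index file using group NRDInfo and NADInfo.
# group_index and new_doc_index are dictionaries with "doc_data",
# "end_doc_locations" and "term_locations" entries.

def refresh_group_index(group_index, new_doc_index, nrd_info, nad_info):
    removed = {doc["doc_offset"] for doc in nrd_info}

    # 1. DocData: drop data of outdated documents, append data of new ones.
    group_index["doc_data"] = [
        d for d in group_index["doc_data"] if d["doc_offset"] not in removed
    ] + new_doc_index["doc_data"]

    # 2. EndDocLocations: remove locations of outdated documents (which shifts
    #    the remaining locations), then append locations for the new documents.
    kept = [loc for off, loc in enumerate(group_index["end_doc_locations"])
            if off not in removed]
    group_index["end_doc_locations"] = kept + new_doc_index["end_doc_locations"]

    # 3. TermLocations: handled analogously, dropping postings of removed
    #    documents and appending postings from the new-document index.
    all_terms = set(group_index["term_locations"]) | set(new_doc_index["term_locations"])
    for term in all_terms:
        old = [p for p in group_index["term_locations"].get(term, []) if p not in removed]
        new = new_doc_index["term_locations"].get(term, [])
        group_index["term_locations"][term] = old + new

    return group_index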
After group index files are updated, a known index merger merges the index files to provide an updated, large index for serving.
Note that the above steps do not actually remove the outdated documents from the group content files. These documents are “dead” because they will disappear from the index after the index file is updated. In one implementation, these documents are not immediately removed from the group content files because removing them results in a write that is close to a backup of all the old group content files, even though the number of these documents is relatively small. Instead, the documents are cleaned out when there are a sufficient number of them (e.g., more than 5 million). Note that updating the group URL table to remove dead documents causes changes to the DocOffset of some documents. Also, with respect to the group content file, the dead documents of each group are documents that do not exist in the group URL table, and these are removed from the group content file. For the group index file, the DocId of some documents is updated because their DocOffset numbers in the group content file are changed due to the removal of dead documents. With respect to the large index, instead of doing extra updates after the incremental indexing process, the system may make small changes to each step (document selection, content update, and index update) to do a clean update.
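By way of example only, the deferred cleanup may be sketched as follows; the threshold value corresponds to the example above, and the compaction logic is deliberately simplified:

# Sketch of deferred cleanup of "dead" documents.  Dead documents stay in the
# group content file until enough of them accumulate; compaction then rewrites
# the content file, which changes the DocOffset (and hence DocId) of the
# surviving documents.

DEAD_DOC_CLEANUP_THRESHOLD = 5_000_000   # example value from the description above

def compact_group(content_file, url_table, dead_count):
    if dead_count <= DEAD_DOC_CLEANUP_THRESHOLD:
        return content_file                   # not yet worth the large rewrite

    live_urls = set(url_table)                # dead docs are absent from the URL table
    new_content = []
    for doc in content_file:
        if doc["url"] in live_urls:
            # Surviving documents get a new offset in the compacted file; the
            # group URL table (and the group index) must be updated accordingly.
            url_table[doc["url"]]["doc_offset"] = len(new_content)
            new_content.append(doc)
    return new_content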
Thus, in general, as part of the above-described nodes that provide a distributed programming and execution environment 446, a document understanding process 447 provides the document metadata 442, while a query understanding process 448 provides the query metadata 445. Via a flexible indexing and ranking mechanism 449, evaluation 450 may be performed on the data and metadata, such as described in U.S. patent application Ser. No. ______ (attorney docket no. 327766.01) and Ser. No. ______ (attorney docket no. 327768.01), entitled “Experimental Web Search System” and “Flexible Indexing and Ranking for Search,” respectively, hereby incorporated by reference.
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to: personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
With reference to
The computer 710 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 710 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 710. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above may also be included within the scope of computer-readable media.
The system memory 730 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 731 and random access memory (RAM) 732. A basic input/output system 733 (BIOS), containing the basic routines that help to transfer information between elements within computer 710, such as during start-up, is typically stored in ROM 731. RAM 732 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 720. By way of example, and not limitation,
The computer 710 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media, described above and illustrated in
The computer 710 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 780. The remote computer 780 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 710, although only a memory storage device 781 has been illustrated in
When used in a LAN networking environment, the computer 710 is connected to the LAN 771 through a network interface or adapter 770. When used in a WAN networking environment, the computer 710 typically includes a modem 772 or other means for establishing communications over the WAN 773, such as the Internet. The modem 772, which may be internal or external, may be connected to the system bus 721 via the user input interface 760 or other appropriate mechanism. A wireless networking component such as comprising an interface and antenna may be coupled through a suitable device such as an access point or peer computer to a WAN or LAN. In a networked environment, program modules depicted relative to the computer 710, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
An auxiliary subsystem 799 (e.g., for auxiliary display of content) may be connected via the user interface 760 to allow data such as program content, system status and event notifications to be provided to the user, even if the main portions of the computer system are in a low power state. The auxiliary subsystem 799 may be connected to the modem 772 and/or network interface 770 to allow communication between these systems while the main processing unit 720 is in a low power state.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.