Many types of search engine indexing algorithms utilize inverted indexes. An inverted index is a data structure that is utilized to store a mapping between terms and the location of the terms within a database, document, or set of documents. For instance, an inverted index may be utilized to store a mapping between words and World Wide Web (“Web”) pages in which the words are utilized. Data identifying the particular location at which each term appears within a document might also be stored in an inverted index. The list of documents in which a particular term appears is commonly referred to as a posting list.
Some types of indexing algorithms generate a separate entry in the inverted index for each semantic role in which a term occurs. This results in a separate posting list and a separate entry in the index to the posting lists, called the lexicon, for each term-role pair. For instance, one posting list may be created in the index for the word “dog” and the role “subject.” Another posting list may be created for the word “cake” and the role “object.” In order to identify documents where a dog is the subject and a cake is the object, such as where a dog is described as eating a cake, an intersection operation is performed between the two posting lists. Semantically based search engines may utilize this type of indexing and document retrieval.
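By way of example, and not limitation, the following Python sketch illustrates this style of indexing; the documents, role labels, and helper structures shown here are hypothetical and are provided only to clarify the term-role pairing and the intersection operation described above.

```python
# Illustrative sketch only: an inverted index keyed by word-role pairs,
# with one posting list (a sorted list of document ids) per pair.
from collections import defaultdict

# Hypothetical corpus: document id -> (word, semantic role) pairs found in it.
documents = {
    1: [("dog", "subject"), ("eat", "relation"), ("cake", "object")],
    2: [("cake", "subject"), ("contain", "relation"), ("sugar", "object")],
    3: [("dog", "subject"), ("chase", "relation"), ("cat", "object")],
}

index = defaultdict(set)
for doc_id, pairs in documents.items():
    for word, role in pairs:
        index[f"{word}.{role}"].add(doc_id)          # one entry per term-role pair
posting_lists = {term: sorted(ids) for term, ids in index.items()}

# Documents in which a dog is the subject and a cake is the object are found
# by intersecting the two posting lists.
matches = sorted(set(posting_lists["dog.subject"]) & set(posting_lists["cake.object"]))
print(matches)  # [1]
```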
Because inverted indices can grow very large in size, they are often stored on disk. Portions of the inverted index may be read from disk into main memory for quicker access. Regardless of the type of physical storage medium an inverted index is stored upon, it is often the case that no particular assumption is made about the layout of posting lists on the physical storage medium relative to one another. However, an arbitrary layout of posting lists on a physical storage medium can lead to poor performance, especially in systems using an inverted index where runtime operations are applied to the intersection of posting lists for multiple terms that are related to each other in a strict dominance relation, such as semantically based search engines.
It is with respect to these considerations and others that the disclosure made herein is presented.
Technologies are described herein for efficient storage and retrieval of posting lists. Through the use of the concepts and technologies presented herein, posting lists are stored in a manner that allows posting lists for related semantic roles of a term to be retrieved from a physical storage medium, such as a mass storage device or random access memory, as a single contiguous block.
According to one aspect presented herein, a hierarchy of semantic roles is defined. For instance, a role tree having nodes corresponding to the semantic roles may be defined. In one embodiment, the nodes of the role tree are related to one another in a strict dominance relation. This means that there is a single node at the top of the hierarchy (the root), and that every other node is directly dominated by exactly one other node and is dominated by the root, either directly or indirectly. The semantic roles defined by the hierarchy are associated with a term. For instance, if the word “dog” is utilized in the semantic role of subject, then a semantic role within the hierarchy for subject will be associated with the word “dog” (e.g. “dog.subject”).
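By way of example, and not limitation, a role tree of this kind might be represented as follows; the specific sub-roles shown beneath “object” are hypothetical and are included only to illustrate the strict dominance relation.

```python
# Illustrative sketch only: a role tree in which every node other than the
# root is directly dominated by exactly one parent and is dominated by the
# root either directly or indirectly.
class RoleNode:
    def __init__(self, role, children=()):
        self.role = role
        self.children = list(children)

role_tree = RoleNode("any", [                 # single node at the top of the hierarchy
    RoleNode("subject"),
    RoleNode("object", [
        RoleNode("direct_object"),            # hypothetical sub-roles, each dominated
        RoleNode("indirect_object"),          # directly by "object" only
    ]),
    RoleNode("relation"),
])

# Associating the word "dog" with the "subject" role yields the index term:
term = f"dog.{role_tree.children[0].role}"    # "dog.subject"
```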
A posting list is also generated for each association of a term and a semantic role in the hierarchy. The posting list includes data that identifies one or more documents that include the usage of the term in the associated semantic role. For instance, using the example above, the posting list for the term “dog.subject” would include data identifying those documents wherein the word “dog” is used in the semantic role of subject. The posting lists may also include additional data such as data identifying the locations in the document at which the word is utilized.
Once the posting lists have been generated, they are stored contiguously on a physical storage medium such that a subtree of the hierarchy of semantic roles can be loaded from the storage medium as a single, contiguous block. For instance, in one embodiment, the posting lists are stored by performing a pre-order, depth-first traversal of the nodes of the role tree. At each of the nodes, the posting list is written to the physical storage medium for the term associated with the semantic role corresponding to the node. As an example, the posting list for the term “dog.subject” would be written to disk when the node in the hierarchy corresponding to the semantic role subject is encountered during the pre-order, depth-first traversal of the role tree. In this manner, the posting lists for the various semantic uses of a term are written to the physical storage medium in a contiguous manner.
In order to assist with the retrieval of the posting lists from the physical storage medium, data may also be written during the traversal of the role tree that identifies the starting position of the posting list for each node in the role tree and that indicates the total size for the posting lists under each node. This data may be stored in an index to the posting lists, also referred to herein as a “lexicon”, or in another location.
In order to retrieve the posting lists for a subtree of the hierarchy, the data identifying the beginning location on the physical storage medium of the posting lists for the term at the top of a desired subtree of the hierarchy is retrieved. The data identifying the length of the posting lists in the desired subtree of the hierarchy is also retrieved. A single contiguous block that includes the posting lists for the desired subtree of the hierarchy is then retrieved from the beginning location through the specified length.
According to one embodiment, a natural language engine utilizes the posting lists in a semantic index. It should be appreciated, however, that other types of search engines might also utilize the concepts and technologies presented herein for efficiently storing posting lists. It should also be appreciated that the above-described subject matter may also be implemented as a computer-controlled apparatus, a computer process, a computing system, or as an article of manufacture such as a computer-readable medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
The following detailed description is directed to technologies for efficiently storing and retrieving posting lists. While the subject matter described herein is presented in the general context of program modules that execute in conjunction with the execution of an operating system and application programs on a computer system, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the subject matter described herein may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments or examples. Referring now to the drawings, in which like numerals represent like elements through the several figures, aspects of a computing system and methodology for efficiently storing and retrieving posting lists will be described.
Turning now to FIG. 1, details will be provided regarding an illustrative operating environment for the embodiments presented herein. In particular, FIG. 1 shows a network architecture in which client computers 110A-110D communicate over a network 140 with a server 120 that hosts a natural language engine 130.
According to one or more embodiments, the natural language engine 130 may support search engine functionality. In a search engine scenario, a user query may be issued from a client computer 110A-110D through the network 140 and on to the server 120. The user query may be in a natural language format. At the server, the natural language engine 130 may process the natural language query to support a search based upon syntax and semantics extracted from the natural language query. Results of such a search may be provided from the server 120 through the network 140 back to the client computers 110A-110D.
One or more search indexes may be stored at, or in association with, the server 120. Information in a search index may be populated from a set of source information, or a corpus. For example, in a web search implementation, content may be collected and indexed from various web sites on various web servers (not illustrated) across the network 140. Such collection and indexing may be performed by software executing on the server 120, or on another computer (not illustrated). The collection may be performed by web crawlers or spider applications. The natural language engine 130 may be applied to the collected information such that natural language content collected from the corpus may be indexed based on syntax and semantics extracted by the natural language engine 130. Indexing and searching are discussed in further detail below with respect to FIG. 2.
The client computers 110A-110D may act as terminal clients, hypertext browser clients, graphical display clients, or other networked clients to the server 120. For example, a web browser application at the client computers 110A-110D may support interfacing with a web server application at the server 120. Such a browser may use controls, plug-ins, or applets to support interfacing to the server 120. The client computers 110A-110D can also use other customized programs, applications, or modules to interface with the server 120. The client computers 110A-110D can be desktop computers, laptops, handhelds, mobile terminals, mobile telephones, television set-top boxes, kiosks, servers, terminals, thin-clients, or any other computerized devices.
The network 140 may be any communications network capable of supporting communications between the client computers 110A-110D and the server 120. The network 140 may be wired, wireless, optical, radio, packet switched, circuit switched, or any combination thereof. The network 140 may use any topology, and links of the network 140 may support any networking technology, protocol, or bandwidth such as Ethernet, DSL, cable modem, ATM, SONET, MPLS, PSTN, POTS modem, PONs, HFC, satellite, ISDN, WiFi, WiMax, mobile cellular, any combination thereof, or any other data interconnection or networking mechanism. The network 140 may be an intranet, an internet, the Internet, the World Wide Web, a LAN, a WAN, a MAN, or any other network for interconnecting computer systems.
It should be appreciated that, in addition to the illustrated network environment, the natural language engine 130 can be operated locally. For example, a server 120 and a client computer 110A-110D may be combined onto a single computing device. Such a combined system can support search indexes stored locally or remotely.
Referring now to FIG. 2, additional details will be provided regarding the operation of the natural language engine 130. In particular, FIG. 2 illustrates a content acquisition process 200 that operates on text content 210 and a user search process 205 that operates on natural language questions 260.
The text content 210 may comprise documents in a very general sense. Examples of such documents can include web pages, textual documents, scanned documents, databases, information listings, other Internet content, or any other information source. This text content 210 can provide a corpus of information to be searched. Processing the text content 210 can occur in two stages: syntactic parsing 215 and semantic mapping 225. Preliminary language processing steps may occur before, or at the beginning of, parsing 215. For example, the text content 210 may be separated at sentence boundaries. Proper nouns may be identified as the names of particular people, places, objects, or events. Also, the grammatical properties of meaningful word endings may be determined. For example, in English, a noun ending in “s” is likely to be a plural noun, while a verb ending in “s” may be a third person singular verb.
Parsing 215 may be performed by a syntactic analysis system such as the Xerox Linguistic Environment (XLE). Parsing 215 can convert sentences to representations that make explicit the syntactic relations among words. Parsing 215 can apply a grammar 220 associated with the specific language in use. For example, parsing 215 can apply a grammar 220 for English. The grammar 220 may be formalized, for example, as a lexical functional grammar (LFG). The grammar 220 can specify possible ways for constructing meaningful sentences in a given language. Parsing 215 may apply the rules of the grammar 220 to the strings of the text content 210.
A grammar 220 may be provided for various languages. For example, LFG grammars have been created for English, French, German, Chinese, and Japanese. Other grammars may be provided as well. A grammar 220 may be developed by manual acquisition, where grammatical rules are defined by a linguist or dictionary writer. Alternatively, machine learning acquisition can involve the automated observation and analysis of many examples of text from a large corpus to automatically determine grammatical rules. A combination of manual definition and machine learning may also be used in acquiring the rules of a grammar 220.
Parsing 215 can apply the grammar 220 to the text content 210 to determine constituent structures (c-structures) and functional structures (f-structures). The c-structure can represent a hierarchy of constituent phrases and words. The f-structure can encode roles and relationships between the various constituents of the c-structure. The f-structure can also represent information derived from the forms of the words. For example, the plurality of a noun or the tense of a verb may be specified in the f-structure.
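By way of example, and not limitation, the following toy rendering suggests the kind of information a c-structure and an f-structure might carry for the sentence “The dog ate the cake”; the notation is greatly simplified and is not the actual output of XLE or of any particular LFG grammar.

```python
# Toy illustration only: simplified stand-ins for a c-structure and an
# f-structure of "The dog ate the cake."
c_structure = (
    "S",
    ("NP", ("D", "the"), ("N", "dog")),            # hierarchy of constituent phrases
    ("VP", ("V", "ate"),
           ("NP", ("D", "the"), ("N", "cake"))),
)

f_structure = {                                    # roles, relationships, and word-form
    "PRED": "eat",                                 # information such as tense and number
    "TENSE": "past",
    "SUBJ": {"PRED": "dog", "NUM": "sg"},
    "OBJ":  {"PRED": "cake", "NUM": "sg"},
}
```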
During a semantic mapping process 225 that follows the parsing 215, information can be extracted from the f-structures and combined with information about the meanings of the words in the sentence. A semantic map or semantic representation of a sentence can be provided as content semantics 240. Semantic mapping 225 can augment the syntactic relationships provided by the parsing 215 with conceptual properties of individual words. The results can be transformed into representations of the meaning of sentences from the text content 210. Semantic mapping 225 can determine the roles played by words in a sentence, for example, the subject performing an action, something used to carry out the action, or something being affected by the action. For the purposes of search indexing, words can be stored in a semantic index 250 along with their roles. Thus, retrieval from the semantic index 250 can depend not merely on a word in isolation, but also on the meaning of the word in the sentences in which it appears within the text content 210. Semantic mapping 225 can support disambiguation of terms, determination of antecedent relationships, and expansion of terms by synonym, hypernym, or hyponym.
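Continuing the toy illustration above, and again by way of example only, the role assignments produced by semantic mapping might contribute word-role entries to the semantic index 250 along the following lines; the entry format and document identifier are hypothetical.

```python
# Toy illustration only: turning role assignments for one sentence into
# word-role entries for a semantic index.
role_assignments = {        # hypothetical output of semantic mapping
    "dog": "subject",       # the thing performing the action
    "eat": "relation",      # the action itself (lemmatized from "ate")
    "cake": "object",       # the thing affected by the action
}

doc_id = 42                 # hypothetical document identifier
index_entries = [(f"{word}.{role}", doc_id) for word, role in role_assignments.items()]
# [("dog.subject", 42), ("eat.relation", 42), ("cake.object", 42)]
```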
Semantic mapping 225 can apply knowledge resources 230 as rules and techniques for extracting semantics from sentences. The knowledge resources can be acquired through both manual definition and machine learning, as discussed with respect to acquisition of grammars 220. The semantic mapping 225 process can provide content semantics 240 in a semantic extensible markup language (semantic XML or semxml) representation. Content semantics 240 can specify roles played by words in the sentences of the text content 210. The content semantics 240 can be provided to an indexing process 245.
An index can support representing a large corpus of information so that the locations of words and phrases can be rapidly identified within the index. A traditional search engine may use keywords as search terms such that the index maps from keywords specified by a user to articles or documents where those keywords appear. The semantic index 250 can represent the semantic meanings of words in addition to the words themselves. Semantic relationships can be assigned to words during both content acquisition 200 and user search 205. Queries against the semantic index 250 can be based on not only words, but words in specific roles. The roles are those played by the word in the sentence or phrase as stored in the semantic index 250. The semantic index 250 can be considered an inverted index that is a rapidly searchable database whose entries are semantic words (i.e. word in a given role) with pointers to the documents, or web pages, on which those words occur. The semantic index 250 can support hybrid indexing. Such hybrid indexing can combine features and functions of both keyword indexing and semantic indexing.
User entry of queries can be supported in the form of natural language questions 260. The query can be analyzed through a natural language pipeline similar, or identical, to that used in content acquisition 200. That is, the natural language question 260 can be processed by syntactic parsing 265 to extract syntactic structure. Following syntactic parsing 265, the natural language question 260 can be processed for semantic mapping 270. The semantic mapping 270 can provide question semantics 275 to be used in a retrieval process 280 against the semantic index 250 as discussed above. The retrieval process 280 can support hybrid index queries where both keyword index retrieval and semantic index retrieval may be provided alone or in combination.
In response to a user query, results of the retrieval process 280 from the semantic index 250 along with the question semantics 275 can inform a ranking process 285. Ranking can leverage both keyword and semantic information. During ranking 285, the results obtained by the retrieval process 280 can be ordered by various metrics in an attempt to place the most desirable results closer to the top of the retrieved information to be provided to the user as a result presentation 290.
Turning now to FIG. 3, additional details will be provided regarding an indexing scheme utilized in embodiments presented herein. In particular, FIG. 3 illustrates a lexicon 302 and its associated posting lists 306.
As will be described in greater detail below, one embodiment presented herein operates in the context of an indexing scheme wherein a separate term 304 is generated for a word in each of the roles that word occurs in. This results in a separate posting list 306 and a separate entry in a lexicon 302, also referred to herein as the index of posting lists, for each word-role pair. In this regard, words identified within the content 210 are decorated with their semantic role. A semantic role represents the particular role that a word played in the context in which it was found and/or analyzed. This decoration splits the lexicon 302 into multiple entries for each word, one for each semantic role in which the word occurs. For example, the word “dog” might occur in the role of “subject”, “object”, or “relation” in various contexts. Therefore, in this example, the lexicon 302 shown in FIG. 3 includes separate entries for the terms “dog.subject” 304A, “dog.object” 304B, and “dog.relation” 304C, each of which has its own posting list 306A-306C.
As discussed briefly above, a posting list 306 is generated for each association of a term and a semantic role in a hierarchy. The posting list 306 includes data that identifies one or more documents that include the usage of the term in the associated semantic role. For instance, using the example above, the posting list 306A for the term “dog.subject” 304A would include data identifying those documents wherein the word “dog” is used in the semantic role of subject. The posting lists may also include additional data such as data identifying the locations in the document at which the word is utilized. Data is stored in the lexicon indicating the initial starting position of each of the posting lists 306A-306C on a physical storage medium. It should be appreciated that, as used herein, the term document refers to any type of text content including, but not limited to, Web pages, text documents, office documents, database entries, and others.
Turning now to FIG. 4, an illustrative role tree 400 utilized in embodiments presented herein will be described.
The example role tree 400 shown in FIG. 4 includes a number of nodes, each of which corresponds to a semantic role. As discussed above, the nodes of the role tree 400 are related to one another in a strict dominance relation.
Referring now to FIG. 5, additional details will be provided regarding the manner in which the posting lists 306 are stored on a physical storage medium. As discussed briefly above, the posting lists 306 are stored contiguously, such that the posting lists for a subtree of the role tree 400 can be loaded from the physical storage medium as a single contiguous block.
As will be described in greater detail below with respect to FIGS. 6 and 7, routines are provided herein for storing the posting lists 306 in this manner and for retrieving the posting lists for a desired subtree of the role tree 400 using a single read operation.
Referring now to FIG. 6, an illustrative routine 600 will be described for storing posting lists on a physical storage medium such that the posting lists for a subtree of the hierarchy of semantic roles can be retrieved as a single contiguous block.
It should be appreciated that the logical operations described herein are implemented (1) as a sequence of computer implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, and in any combination thereof. It should also be appreciated that more or fewer operations may be performed than shown in the figures and described herein. These operations may also be performed in a different order than those described herein.
The routine 600 begins at operation 602, where a pre-order depth-first traversal of the role tree 400 is performed. At each of the nodes of the role tree 400, the posting list 306 for the corresponding role and current word is written to a physical storage medium. If no posting list exists for the current node, no posting list will be written to physical storage. This occurs at operation 604. From operation 604, the routine 600 proceeds to operation 606.
At operation 606, data is stored indicating the starting position of the posting list at each node in the role tree 400. The routine 600 then continues to operation 608, where data is stored indicating the total size of the posting lists under each node. As mentioned above, this data may be stored in the lexicon 302 or in another location. From operation 608, the routine 600 proceeds to operation 610, where it ends. It should be appreciated that once the routine 600 has completed its execution, the posting lists 306 in a subtree of the hierarchy defined by the role tree 400 can be loaded from the physical storage medium as a single contiguous block.
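By way of example, and not limitation, operations 602 through 608 might be realized along the following lines; the byte encoding, helper names, and example role tree used here are illustrative assumptions rather than part of the routine 600 itself.

```python
# Illustrative sketch only: write posting lists during a pre-order,
# depth-first traversal of a role tree and record, for each node, the
# starting position and the total size of the posting lists under it.
import io
import struct

class RoleNode:
    def __init__(self, role, children=()):
        self.role, self.children = role, list(children)

def encode_postings(doc_ids):
    # Hypothetical encoding: a count followed by 32-bit document ids.
    return struct.pack(f"<I{len(doc_ids)}I", len(doc_ids), *doc_ids)

def write_subtree(node, word, posting_lists, out, lexicon):
    term = f"{word}.{node.role}"
    start = out.tell()                                   # operation 606: starting position
    if term in posting_lists:                            # operation 604: write the posting
        out.write(encode_postings(posting_lists[term]))  # list, if one exists
    for child in node.children:                          # operation 602: pre-order traversal
        write_subtree(child, word, posting_lists, out, lexicon)
    lexicon[term] = (start, out.tell() - start)          # operation 608: total size under node

tree = RoleNode("any", [RoleNode("subject"), RoleNode("object"), RoleNode("relation")])
postings = {"dog.subject": [1, 3], "dog.object": [7], "dog.relation": [2, 5, 9]}
storage, lexicon = io.BytesIO(), {}
write_subtree(tree, "dog", postings, storage, lexicon)
```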
Turning now to FIG. 7, an illustrative routine 700 will be described for retrieving the posting lists for a desired subtree of the role tree 400 from a physical storage medium. The routine 700 begins at operation 702, where the data identifying the beginning location on the physical storage medium of the posting lists for the term at the top of the desired subtree is retrieved. The routine 700 then proceeds to operation 704, where the data identifying the total length of the posting lists in the desired subtree is retrieved. As discussed above, this data may be retrieved from the lexicon 302 or from another location.
From operation 704, the routine 700 proceeds to operation 706, where the ending position of the desired posting lists is determined by adding the length of the posting lists to the indicated beginning location on the physical storage medium. Once the ending position has been determined, the routine 700 proceeds to operation 708, where a single contiguous block is retrieved from the physical storage medium, beginning at the beginning location of the desired posting lists for the subtree and continuing through the computed end of the posting lists. In this manner, the posting lists for the desired subtree can be retrieved using a single read operation. From operation 708, the routine 700 proceeds to operation 710, where it ends.
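By way of example, and not limitation, the corresponding retrieval might look as follows, reusing the storage and lexicon objects produced by the preceding sketch; as before, the helper names are illustrative only.

```python
# Illustrative sketch only: fetch every posting list in a subtree of the role
# tree with a single contiguous read.
def read_subtree(storage, lexicon, term):
    start, length = lexicon[term]      # operations 702/704: beginning location and length
    end = start + length               # operation 706: compute the ending position
    storage.seek(start)
    return storage.read(end - start)   # operation 708: one contiguous read

block = read_subtree(storage, lexicon, "dog.any")   # all roles of "dog" in one read
```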
The computer architecture shown in FIG. 8 illustrates a conventional desktop, laptop, or server computer 800 capable of executing the software components described herein. The computer 800 includes a central processing unit 802 (“CPU”), a random access memory 814 (“RAM”), and a system bus 804 that couples the memory to the CPU 802. The computer 800 further includes a mass storage device 810 for storing an operating system 818 and other program modules, which are described in greater detail below.
The mass storage device 810 is connected to the CPU 802 through a mass storage controller (not shown) connected to the bus 804. The mass storage device 810 and its associated computer-readable media provide non-volatile storage for the computer 800. Although the description of computer-readable media contained herein refers to a mass storage device, such as a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available computer storage media that can be accessed by the computer 800.
By way of example, and not limitation, computer-readable media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 800. It should be appreciated that the term physical storage medium as utilized herein is synonymous with the term computer-readable media, defined above.
According to various embodiments, the computer 800 may operate in a networked environment using logical connections to remote computers through a network such as the network 820. The computer 800 may connect to the network 820 through a network interface unit 806 connected to the bus 804. It should be appreciated that the network interface unit 806 may also be utilized to connect to other types of networks and remote computer systems. The computer 800 may also include an input/output controller 812 for receiving and processing input from a number of other devices, including a keyboard, mouse, or electronic stylus (not shown in FIG. 8).
As mentioned briefly above, a number of program modules and data files may be stored in the mass storage device 810 and RAM 814 of the computer 800, including an operating system 818 suitable for controlling the operation of a networked desktop, laptop, or server computer. The mass storage device 810 and RAM 814 may also store one or more program modules 820 and data 822, such as the program modules presented herein and described above.
Based on the foregoing, it should be appreciated that technologies for efficiently storing and retrieving posting lists are provided herein. It should also be appreciated that although the concepts and technologies presented herein are described in the context of a natural language search system that utilizes a semantic index 250, these concepts and technologies may be utilized in conjunction with any inverted index wherein it is useful to apply runtime operations to the union of posting lists for multiple terms that are related to each other in a strict dominance relation.
Although the subject matter presented herein has been described in language specific to computer structural features, methodological acts, and computer readable media, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts, and media are disclosed as example forms of implementing the claims.
The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes may be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the present invention, which is set forth in the following claims.
This application claims the benefit of U.S. provisional patent application No. 60/969,495, which was filed on Aug. 31, 2007, and entitled “Efficient Posting Layout for Retrieval of Terms in Dominance Hierarchies” and U.S. provisional patent application 60/969,486, which was filed on Aug. 31, 2007, and entitled “Fact-Based Indexing for Natural Language Search”, both of which are expressly incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
20090132521 A1 | May 2009 | US

Number | Date | Country
---|---|---
60969495 | Aug 2007 | US
60969486 | Aug 2007 | US