Systems and methods for aggregating search results

Information

  • Patent Application
  • Publication Number
    20080071742
  • Date Filed
    September 19, 2006
  • Date Published
    March 20, 2008
Abstract
Systems and methods for aggregating search results are disclosed herein. The systems and methods include receiving a user search query, analyzing the user search query to identify a plurality of properties of the user search query, identifying a plurality of search results that match the user search query, each search result being based on a different scheme, and aggregating the search results to produce a search results list. The search results list may be a combined and selected results list. Feedback-based optimization is also disclosed.
Description

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is described by way of example with reference to the accompanying drawings, wherein:



FIG. 1 is a block diagram illustrating a system for searching in accordance with one embodiment of the invention;



FIG. 2 is a flow diagram illustrating a method for analyzing queries and aggregating rankings;



FIG. 3 is a flow diagram illustrating a method for analyzing queries;



FIG. 4 is a block diagram illustrating a method for analyzing query context;



FIG. 5 is a block diagram illustrating a method for matching queries to databases;



FIG. 6 is a flow diagram illustrating a method for providing search results in response to a user query;



FIG. 7 is a flow diagram illustrating a method for optimizing search results using user feedback; and



FIG. 8 is a block diagram illustrating a method for analyzing queries, aggregating rankings and optimizing results.





DETAILED DESCRIPTION OF THE INVENTION


FIG. 1 of the accompanying drawings shows a network system 10 which can be used in accordance with one embodiment of the present invention. The network system 10 includes a search system 12, a search engine 14, a network 16, and a plurality of client systems 18. The search system 12 includes a server 20, a database 22, an indexer 24, and a crawler 26. The plurality of client systems 18 includes a plurality of web search applications 28a-f, located on each of the plurality of client systems 18. The server 20 includes a plurality of databases 30a-d.


The search system 12 is connected to the search engine 14. The search engine 14 is connected to the plurality of client systems 18 via the network 16. The server 20 is in communication with the database 22, which is in communication with the indexer 24. The indexer 24 is in communication with the crawler 26. The crawler 26 is likewise capable of communicating with the plurality of client systems 18 via the network 16.


The web search server 20 is typically a computer system, and may be an HTTP server. It is envisioned that the search engine 14 may be located at the web search server 20. The web search server 20 typically includes at least processing logic and memory.


The indexer 24 is typically a software program used to create an index, which is then stored in storage media. The index is typically a table of alphanumeric terms with a corresponding list of related documents or the locations of related documents (e.g., pointers). An exemplary pointer is a Uniform Resource Locator (URL). The indexer 24 may build a hash table, in which a numerical value is attached to each of the terms. The database 22 is stored in storage media and typically includes the documents indexed by the indexer 24. The index may be included in the same storage media as the database 22 or in different storage media. The storage media may be volatile or non-volatile memory, including, for example, read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices and zip drives.
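The indexing step described above can be sketched as a small inverted index: a table of terms, each mapped to the URLs of the documents containing it. This is an illustrative sketch, not the patent's implementation; the document set and helper names are invented.

```python
from collections import defaultdict

def build_index(documents):
    """documents: mapping of URL -> document text.
    Returns an inverted index: term -> sorted list of URLs containing it."""
    index = defaultdict(set)
    for url, text in documents.items():
        for term in text.lower().split():
            index[term].add(url)
    # Sort URL lists so lookups are deterministic.
    return {term: sorted(urls) for term, urls in index.items()}

# Hypothetical crawled pages.
docs = {
    "http://a.example/page1": "used cars for sale",
    "http://b.example/page2": "new cars and trucks",
}
index = build_index(docs)
```

A production indexer would also store term positions and use a hash of each term as described above; this sketch keeps only the term-to-location table.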


The crawler 26 is a software program or software robot, which is typically used to build lists of the information found on Web sites. Another common term for the crawler 26 is a spider. The crawler 26 typically searches Web sites on the Internet and keeps track of the information located in its search and the location of the information.


The network 16 is a local area network (LAN), wide area network (WAN), a telephone network, such as the Public Switched Telephone Network (PSTN), an intranet, the Internet, or combinations thereof.


The plurality of client systems 18 may be mainframes, minicomputers, personal computers, laptops, personal digital assistants (PDAs), cell phones, and the like. The plurality of client systems 18 are capable of being connected to the network 16. Web sites may also be located on the client systems 18. The web search applications 28a-f are typically Internet browsers or other software.


The databases 30a-d are stored in storage media located at the server 20. The storage media may be volatile or non-volatile memory that includes, for example, read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices and zip drives.


In use, the crawler 26 crawls websites, such as the websites of the plurality of client systems 18, to locate information on the web. The crawler 26 employs software robots to build lists of the information. The crawler 26 may include one or more crawlers to search the web. The crawler 26 typically extracts the information and stores it in the database 22. The indexer 24 creates an index of the information stored in the database 22. Alternatively, if a database 22 is not used, the indexer 24 creates an index of the located information and the location of the information on the Internet (typically a URL).


When a user of one of the plurality of client systems 18 enters a search on the web search application 28, the search is communicated to the search engine 14 over the network 16. The search engine 14 communicates the search to the server 20 at the search system 12. The server 20 matches the query to one or more of the databases 30a-d to identify a search result. The server 20 communicates the search result to the user via the search engine 14 and network 16.



FIG. 2 shows a method for analyzing user queries and aggregating search results. The process 40 begins at block 42 where a user search query is received. The user search query may be a natural language search query.


At block 44, an intention and property analysis is performed on the user search query. Any number of techniques may be used to identify the user intent. The user intent is quantified by identifying properties of the search query. Exemplary properties include location (e.g., local vs. national vs. global), time (e.g., recent vs. historical), commerce (e.g., buying/selling products/services), news, language, homepage, and the like.


At block 46, the query is matched with databases based on the intention and property analysis. The databases may be the databases 30a-d located at the server 20. Each of the databases is related to a different property or contains documents with multiple properties matchable with the selected property. Those databases relating to properties identified as query properties are searched.


At block 48, the results are aggregated. The search results from each of the databases searched may be combined to produce an aggregated, ranked list of search results. At block 50, the ranked list of the search results is provided to the user.



FIG. 3 shows the process for identifying a user's intent and the property analysis of the user search query in more detail. The process 60 begins at block 62 where the user search query is analyzed. The query analysis is performed to classify the query. By classifying the query, the intention of the user and the type of content that should be matched and/or the ranking scheme can be identified.


At block 64, the query is classified by one or more properties Pj and a confidence level Fj. Exemplary properties include location, time, commerce, news, language, homepage and the like.


The query can be classified by identifying concepts that differentiate attributes of the query (block 66). This can be done by identifying terminology relating to the property (block 68). For example, if the user query is “Infiniti Silicon Valley” or “Who sells Infinitis in the Silicon Valley?”, relevant properties that may characterize the search include location (e.g., Silicon Valley indicates a region in northern California), commerce (e.g., Infiniti is a well-known brand of cars and “sells” is a common commerce term), etc. Other well-known Natural Language Processing (NLP)-based text matching techniques may also be used to classify the query.


Alternatively, the query can be classified by matching the query to keywords of databases containing documents with the property Pj (block 70). For example, a database relating to commerce may include the keywords: buy, sell, product, service, price, certain brand names, and the like. A query that includes terms matching (exact matches or similar matches) the keywords is likely to be related to that property. The confidence Fj can be determined based on the degree of matching between the query and the keywords (block 72). An offline web data mining system can scan through all web sites on the Internet and identify home pages of persons or organizations. Keywords associated with each page may be stored with the database.
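The keyword-matching classification of blocks 70 and 72 can be sketched as follows. The keyword sets and the overlap-based confidence measure (fraction of query terms matching a property's keywords) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical per-property keyword sets, as in the commerce example above.
PROPERTY_KEYWORDS = {
    "commerce": {"buy", "sell", "sells", "product", "service", "price"},
    "location": {"valley", "city", "near", "local"},
}

def classify_query(query):
    """Return {property: confidence}, where confidence is the fraction
    of query terms found in that property's keyword set (block 72)."""
    terms = query.lower().replace("?", "").split()
    scores = {}
    for prop, keywords in PROPERTY_KEYWORDS.items():
        hits = sum(1 for t in terms if t in keywords)
        if hits:
            scores[prop] = hits / len(terms)
    return scores

scores = classify_query("Who sells Infinitis in the Silicon Valley?")
```

A real system would use stemming and similar-match scoring rather than exact term membership; this sketch shows only the degree-of-matching idea.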


Alternatively, a content hit ratio can be identified. A content hit ratio is a relative measurement of hits based on a degree of matching towards each property. That is, the number of results matched to the query in a database containing documents with the property Pj is compared to the total number of results in that database (block 74). The following formula can be used in this analysis:

H_j / Σ_{k=1}^{n} H_k

where H_j is the number of query hits for the database with property P_j and the sum runs over all n databases.
A high value exceeding a certain threshold indicates that it is more appropriate to match the query with database content of property Pj. For example, if a query is compared against all of the possible databases and the hit ratios range from 1% to 60%, the databases with more than, for example, a 30% hit ratio are considered related to a relevant property for the query. The hit ratio threshold may be any value or range of values between 1% and 100%.
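The hit-ratio test can be sketched as below, assuming per-database hit counts for the query are available. The counts and the 30% threshold are illustrative.

```python
def hit_ratios(hits_per_db):
    """hits_per_db: {property: number of query hits in that database}.
    Returns {property: H_j / sum_k H_k}."""
    total = sum(hits_per_db.values())
    return {p: h / total for p, h in hits_per_db.items()} if total else {}

def relevant_properties(hits_per_db, threshold=0.30):
    """Properties whose databases exceed the hit-ratio threshold."""
    return {p for p, r in hit_ratios(hits_per_db).items() if r > threshold}

# Hypothetical hit counts for one query across three property databases.
hits = {"commerce": 60, "news": 30, "homepage": 10}
props = relevant_properties(hits)
```

Here only the commerce database clears the 30% bar, so commerce is treated as a relevant property for the query.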


After the query has been classified with the property Pj and confidence Fj, the confidence level can, optionally, be adjusted (block 76). The confidence level can be adjusted, for example, by analyzing historical query logs. The historical query logs identify past search queries by any number of users, the search results provided, and may also identify the links/documents selected by the users.



FIG. 4 shows an exemplary method 80 for adjusting the confidence level Fj. Using a historical query log, at least a first query 82a and a second query 82b are examined within a user session.


A transition probability matrix is computed for property changes for two consecutive queries (e.g., query 82a and query 82b) from the same user. The transition probability matrix includes element X_{i,j}, which represents the probability that the second query 82b has property Pj 84 when the first query 82a has property Pi 84 in a query context. This element X_{i,j} may be represented by the following formula:

X_{i,j} = ( # query pairs with property pair (P_i, P_j) / Total # query pairs ) · ( Total # queries / # queries with property P_j )

The confidence level can be determined using the following equation:

F_j^{new} = Σ_i X_{i,j} · F_i^{old}

For example, if the user first searches for Britney Spears and subsequently searches for Christina Aguilera, the relevant properties may include, for example, time and news. The probability that the search for Christina Aguilera includes similar properties is very high, and, therefore, the confidence level that the same ranking schemes should be used for the related queries can be increased. Thus, certain ranking schemes can be associated with certain properties, and the confidence that a particular ranking scheme should be associated with a particular property can be adjusted.
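The adjustment F_j^{new} = Σ_i X_{i,j} · F_i^{old} can be sketched as follows. The transition matrix entries and old confidence values are illustrative, not derived from any real query log.

```python
def adjust_confidence(X, F_old):
    """X: transition matrix as nested dict, X[i][j] = P(next query has
    property j | previous query had property i).
    F_old: {property: old confidence}.
    Returns the adjusted confidence F_new for each property j."""
    props = list(F_old)
    return {j: sum(X[i][j] * F_old[i] for i in props) for j in props}

# Hypothetical transition probabilities between two properties.
X = {
    "news": {"news": 0.8, "time": 0.2},
    "time": {"news": 0.5, "time": 0.5},
}
F_old = {"news": 0.9, "time": 0.4}
F_new = adjust_confidence(X, F_old)
```

In this example, F_new["news"] = 0.8·0.9 + 0.5·0.4 = 0.92, so strong news-to-news transitions in the log raise the news confidence for the follow-up query.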



FIG. 5 shows a method for matching the query with databases. The process 90 includes identifying a first ranking scheme 92a, second ranking scheme 92b, and any number of additional ranking schemes 92c. The ranking schemes represent different methodologies for ranking the documents in a database. Exemplary ranking schemes include link popularity, page popularity, frequency and location of words in a document, link analysis and the like.


Each ranking scheme 92 is associated with a database 94. The databases 94a-c are each related to a different property P1, P2 and Pn, respectively. The databases 94 include a plurality of documents having the associated property.


Exemplary properties include location, time, commerce, news, language, homepage and the like. Thus, the documents can be classified by the language of the documents, the geographical location of document publishers, the publishing period of a document, the extent of commercial content, the likelihood the site is a home page, etc.


A query having a set of attributes Pi of confidence Fi, as determined with the user intent and property analysis described above with reference to FIG. 3, is matched against the databases 94 to produce a list 96. Each list 96a-c contains documents relating to each property associated with the database and ranked according to the associated ranking scheme 92.


In particular, for each property Pj and its ranked page list Lj, a specific ranking score Si,j is provided for each matched document di, representing relevancy of the document with respect to the desired property. The ranking score Si,j is determined by the ranking scheme.


As described above, a confidence ratio Fj is also determined for matching the intention of the query with the property Pj. A confidence ratio Gi,j is also determined for classifying the document di for property Pj. The combined confidence for selecting such a document di for the property Pj is Ci,j = X(Gi,j, Fj), where X is a combining function; for example, Ci,j = Gi,j × Fj. The combined confidence can be used to improve the ranking of the documents.



FIG. 6 shows an alternative process 100 for identifying user intention and identifying search results. The process 100 begins at block 102 where a user enters a search query.


User intention is identified for property Pi with confidence Fi (block 104). For each document, a confidence for selecting it for property Pi is calculated (block 106). The property Pi and confidence Fi can be identified as described above with reference to FIG. 3 (and, optionally, FIG. 4).


A list of results, separated into multiple zones, is returned for each property Pi (block 108). The list of results is returned by matching the query to databases having the identified properties, as described above with reference to FIG. 5. The multiple zones represent tiers of quality. There may be any number of zones including as few as one zone. For example, the multiple zones may include a “highly relevant” zone, a “relevant” zone, and a “probably relevant” zone. In general, for each list Lj, a criterion can be used to divide the list into t zones. The sublist in zone k for list Lj is Lj,k. Each sublist Lj,k includes all of the documents that match the query in that zone.


Aggregated results are calculated for each zone based on user intention (block 110). Results from each list are selected and ranked at each zone. For each zone k, there are n sublists: L_{1,k}, L_{2,k}, . . . , L_{n,k}. The documents may be sorted based on an aggregated ranking score. The aggregated ranking score for d_i at zone k is:

R_{i,k} = Σ_{j=1}^{n} W_j · C_{i,j} · S^k_{i,j}
where W_j is a weighting factor to be adjusted based on the final ranking need, C_{i,j} is the combined confidence for selecting document d_i for property P_j, and S^k_{i,j} is the ranking score in the list L_j for document d_i at zone k. For each zone, the results from each sublist are combined.


At block 112, the multiple zones are combined into a final result. The combined lists from each zone are then combined together into a final result including all the results from all the lists organized by the zones. This approach allows matched results with different properties and confidence scores to be selected and combined to produce an aggregated list. In one embodiment, the system may set a limit to select the top k results to present to the user, based on application needs.
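The zone-based aggregation of blocks 110 and 112 can be sketched as below: score each document in a zone with R_{i,k} = Σ_j W_j · C_{i,j} · S^k_{i,j}, sort within the zone, then concatenate the zones in quality order. The weights, confidences and scores are illustrative values.

```python
def aggregate_zone(sublists, W, C):
    """sublists: {property j: {doc i: ranking score S^k_{i,j}}} for one zone.
    W: {property: weight}; C: {(doc, property): combined confidence}.
    Returns (docs sorted best-first, aggregated scores R)."""
    docs = {d for scores in sublists.values() for d in scores}
    R = {
        d: sum(W[j] * C.get((d, j), 0.0) * sublists[j].get(d, 0.0)
               for j in sublists)
        for d in docs
    }
    return sorted(docs, key=lambda d: R[d], reverse=True), R

def combine_zones(zones):
    """zones: per-zone ranked doc lists, highest-quality zone first.
    Concatenates them, dropping duplicates from lower zones."""
    final = []
    for ranked in zones:
        final.extend(d for d in ranked if d not in final)
    return final

# Hypothetical "highly relevant" zone with two property sublists.
zone1 = {"commerce": {"d1": 0.9, "d2": 0.7}, "location": {"d1": 0.5}}
W = {"commerce": 1.0, "location": 0.8}
C = {("d1", "commerce"): 0.9, ("d2", "commerce"): 0.6, ("d1", "location"): 0.7}
ranked, R = aggregate_zone(zone1, W, C)
```

Because the top zone is emitted first, a highly relevant document from any one list can reach the top positions even if other lists never rank it, which is the diversification property noted later in the description.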



FIG. 7 shows a detailed method 120 for optimizing search results. At block 122, search results are provided to a user. At block 124, the user's selections of links or documents in the search results are monitored.









At block 126, the satisfaction level towards a scheme choice is identified. Exemplary scheme choices include link popularity, page popularity, frequency and location of words in a document, link analysis and the like. The satisfaction level towards a scheme choice can be identified by selecting results from different result lists L_j and computing a satisfactory degree B_j for the use of each list L_j. The satisfactory degree B_j, measured as the percentage of times that the list L_j contributes to a final ranking confirmed by users, can be identified by the following formula:

B_j = ( Σ_{s=1}^{m} Σ_{t=1}^{h_s} F(T_{s,t}, j) ) / ( Σ_{s=1}^{m} h_s )
where F(T_{s,t}, j) = 1 if j ∈ T_{s,t} and 0 otherwise, T_{s,t} is the set of list names which have contributed positively to ranking position t for the s-th query in the log, h_s is the number of results selected by users for a query, m is the number of times the same query was asked in the log, and

B_avg = ( Σ_{j=1}^{n} B_j ) / n ,  B_max = Max_{j=1,...,n} B_j
where n is the number of possible properties. If the absolute value of (B_j − B_avg)/B_max exceeds a threshold, then the feedback is strong enough to influence a new ranking. The adjusted weighting factor with the feedback can be:

Q_j = W_j · ( 1 + (B_j − B_avg)/B_max )^{2α+1} ,  if | (B_j − B_avg)/B_max | > δ
where δ is a threshold and α is an adjustable parameter.


At block 128, the satisfaction level towards individual results is identified. The satisfaction level can be determined by calculating a satisfactory degree V_i, which is assessed for each individual result for document d_i:

V_i = ( Σ_{s=1}^{m} Σ_{t=1}^{h_s} Z(T_{s,t}, d_i) ) / m
where Z(T_{s,t}, d_i) = 1 if D(T_{s,t}) = d_i and 0 otherwise.
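The per-document satisfaction V_i and the blended confidence of block 130 can be sketched together. The click-log structure here (a list of clicked documents per query occurrence) and the β value are illustrative assumptions.

```python
def individual_satisfaction(log, d_i):
    """V_i: number of times document d_i was among the clicked results,
    averaged over the m occurrences of the query.
    log: per-occurrence lists of clicked documents."""
    m = len(log)
    hits = sum(1 for clicks in log for d in clicks if d == d_i)
    return hits / m if m else 0.0

def blended_confidence(C_ij, V_i, beta=0.7):
    """U_{i,j} = C_{i,j}*beta + V_i*(1 - beta), per block 130."""
    return C_ij * beta + V_i * (1 - beta)

# Hypothetical clicks for three occurrences of the same query.
log = [["d1", "d2"], ["d1"], []]
V1 = individual_satisfaction(log, "d1")
U = blended_confidence(0.8, V1)
```

With β = 0.7 the adjusted confidence U stays dominated by the original combined confidence C_{i,j}, while repeated clicks on d_i nudge it upward.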


At block 130, the combined confidence level C_{i,j} can be adjusted based on the satisfaction level towards the scheme choice and/or the satisfaction level towards individual results. The combined confidence level C_{i,j} can be adjusted as follows:

U_{i,j} = C_{i,j} · β + V_i · (1 − β)


The final ranking with feedback for zone k can be computed as follows:

R_{i,k} = Σ_{j=1}^{n} Q_j · U_{i,j} · S^k_{i,j}

For the results clicked by a user, an algorithmic choice may be kept in the query log history. Thus, personalized confidence scores can be identified to improve ranking.



FIG. 8 shows a detailed method 140 for analyzing a query, aggregating results and optimizing the results with user feedback. The process 140 begins at block 142 where a user query is received.


At block 144, an intention and property analysis is performed. The intention and property analysis is performed as described above with reference to FIG. 3 (and, optionally, FIG. 4).


At block 146, the query is matched against the databases having different properties. As described above with reference to FIG. 5, a first ranking schema 148a, second ranking schema 148b and any additional number of ranking schemas 148c may be included. Each ranking schema 148 includes a respective database 150a, 150b and 150c that relates to a specific property. Results in the database that match the query are presented in respective lists 152a, 152b and 152c.


At block 154, the results are aggregated. In one embodiment, the results are aggregated by zones 156a, 156b, 156c, as described above with reference to FIG. 6.


At block 158, the aggregated ranking results are presented to the user, as described above with reference to FIG. 6.


At block 160, user feedback may be monitored. At block 162, a satisfaction assessment may be performed. The user feedback and satisfaction assessment are performed as described above with reference to FIG. 7. In particular, the schema choice (block 164), URL (document) choice (block 166) and context evaluation (block 168) can be monitored and assessed, as described above with reference to FIG. 7. The assessment may be used to modify the result aggregation (block 154).


Systems and methods described herein are advantageous because they produce better search results. For example, the systems and methods described herein can automatically determine whether the search query relates to a local search vs. a global search, commercial products, temporal content, mixed languages, etc. Systems and methods described herein also provide for personalization of search methods.


Multiple ranking schemes can be used with the same database, allowing a combination from different ranking strategies and properties. The results with the different ranking strategies can then be aggregated.


The zoning-based aggregation scheme allows for diversification of the results and ensures that documents from each list have a chance to appear in the top positions if they are highly relevant. Thus, one list does not necessarily dominate the final ranking.


The foregoing description with attached drawings is only illustrative of possible embodiments of the described method and should only be construed as such. Other persons of ordinary skill in the art will realize that many other specific embodiments are possible that fall within the scope and spirit of the present idea. The scope of the invention is indicated by the following claims rather than by the foregoing description. Any and all modifications which come within the meaning and range of equivalency of the following claims are to be considered within their scope.

Claims
  • 1. A method for aggregating search results comprising: receiving a user search query;analyzing the user search query to identify a plurality of properties of the user search query;identifying a plurality of search results that match the user search query, each search result being based on a different scheme utilizing the search query;determining a relevance factor for each scheme; andaggregating the search results from each scheme to produce a search results list.
  • 2. The method of claim 1, wherein analyzing the user search query to identify a plurality of properties comprises: identifying concepts that differentiate attributes of the query.
  • 3. The method of claim 1, wherein analyzing the user search query to identify a plurality of properties comprises: matching the user search query to keywords of the databases.
  • 4. The method of claim 1, wherein analyzing the user search query to identify a plurality of properties comprises: identifying documents in a plurality of databases, each database associated with a scheme, that match the user search query;comparing the number of documents that match the user search query with the total number of results in the database to produce a content hit ratio; andif the content hit ratio exceeds a threshold, then determining the user search query has the property of the database.
  • 5. The method of claim 1, wherein the properties are selected from the group consisting of location, time, commerce, news, language and homepage.
  • 6. The method of claim 5, wherein the properties are determined through query and log analysis.
  • 7. The method of claim 1, wherein the user search query is a natural language query.
  • 8. The method of claim 1, wherein aggregating the search results to produce a search results list comprises: dividing each of the search results from each scheme into a plurality of zones;combining each of the search results from each scheme in each zone; andcombining the search results from each zone.
  • 9. The method of claim 8, further comprising selecting a portion of the search results to present to the user, wherein the portion presented corresponds to one of the plurality of zones.
  • 10. The method of claim 9, wherein the portion presented corresponds to the highest ranked zone.
  • 11. The method of claim 1, further comprising providing the search results list to a user.
  • 12. The method of claim 11, further comprising optimizing the aggregation and ranking of search results that match a search query with user feedback.
  • 13. The method of claim 12, wherein optimizing aggregation of search results comprises: assessing a user's satisfaction with each scheme.
  • 14. The method of claim 12, wherein optimizing ranking of search results comprises: assessing a user's satisfaction with a document in the search results list.
  • 15. The method of claim 1, wherein each scheme has a database associated therewith.
  • 16. The method of claim 1, wherein the search results list is a combined and selected results list.
  • 17. A search system comprising: a search engine to receive a user search query;a plurality of databases to store a plurality of search results, each database related to a scheme; anda server to analyze the search query to identify a plurality of properties of the search query, match the user search query with search results in the plurality of databases based on the plurality of properties and aggregate the search results from each of the plurality of databases to produce a search results list.
  • 18. The search system of claim 17, wherein the search engine is further to provide the search results list to a user.
  • 19. The search system of claim 17, wherein the plurality of databases each have one of the plurality of properties associated therewith.
  • 20. The search system of claim 17, wherein the server is further to divide each of the search results from each database into a plurality of zones, combine each of the search results from each database in each zone, and combine the search results from each zone.
  • 21. A method of integrating multiple ranking strategies comprising: matching a user search query with a plurality of databases, each database relating to one of a plurality of properties and a ranking scheme;producing a list of search results matching the query ranked according to the ranking scheme;aggregating the list of search results from each database to produce a final search results list; andpresenting the final search results list to a user.
  • 22. The method of claim 21, further comprising: dividing each of the search results from each database into a plurality of zones;combining each of the search results from each database in each zone; andcombining the search results from each zone.
  • 23. The method of claim 21, further comprising optimizing the identification of search results that match a search query with user feedback.
  • 24. The method of claim 23, wherein optimizing the identification of search results that match a search query with user feedback comprises assessing a user's satisfaction with a ranking scheme or assessing a user's satisfaction with a document in the search results list with feedback optimization.