Web searching has become a common technique for finding information. Popular search engines allow users to perform broad-based web searches according to search terms entered by the users in user interfaces provided by the search engines (e.g., search engine web pages displayed at client devices). A broad-based search can return results that include information from a wide variety of domains (where a domain refers to a particular category of information).
In some cases, users may wish to search for information that is specific to a particular domain. For example, a user may seek to perform a job search or a product search. Such searches (referred to as "query intent searches") are examples of searches in which a user has a specific query intent, for information from a specific domain, in mind when performing the search (e.g., a search for a particular type of job, a search for a particular product, and so forth). Query intent searching can be provided by a vertical search service, which can be a service offered by a general-purpose search engine or, alternatively, by a vertical search engine. A vertical search service provides search results from a particular domain, and typically does not return search results from domains unrelated to the particular domain.
A query intent classifier can be used to determine whether or not a query received by a search engine should trigger a vertical search service. For example, a job intent classifier is able to determine whether or not a received query is related to a job search. If the received query is classified as relating to a job search, then the corresponding vertical search service can be invoked to identify search results in the job search domain (which can include websites relating to job searching, for example). In one specific example, a job intent classifier may classify a query containing the search phrase "trucking jobs" as being positive for a job intent search, which would therefore trigger a vertical search for information relating to jobs in the trucking industry. On the other hand, the job intent classifier would classify a query containing the search phrase "bob jobs" (which is the name of a person) as being negative for a job intent search, and therefore would not trigger a vertical search service. Because "bob jobs" is the name of a person, the presence of "jobs" in the search phrase should not trigger a job-related query intent search.
A challenge faced by developers of query intent classifiers is that typical training techniques (for training the query intent classifiers) require an adequate amount of labeled training data (training data that has been labeled as either positive or negative for a query intent) for proper training of the query intent classifiers. Building a classifier with insufficient labeled training data can lead to an inaccurate classifier.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In general, a classifier is constructed by receiving a data structure that correlates queries to items identified by the queries, where the data structure contains initial labeled queries that have been labeled with respect to predetermined classes, and unlabeled queries that have not been labeled with respect to the predetermined classes. The data structure is then used to label at least some of the unlabeled queries with respect to the predetermined classes. The queries in the data structure that have been labeled with respect to the predetermined classes are then used as training data to train the classifier.
Other or alternative features will become apparent from the following description, from the drawings, and from the claims.
Some embodiments of the invention are described with respect to the following figures:
FIG. 1 illustrates an example click graph that correlates queries to uniform resource locators (URLs), in accordance with an embodiment;
FIG. 2 is a flow diagram of a procedure for constructing a query intent classifier, in accordance with an embodiment;
FIG. 3 is a flow diagram of a procedure in which graph-based learning is performed jointly with classifier learning, in accordance with another embodiment; and
FIG. 4 is a block diagram of an example computer in which software according to some embodiments is executable.
In accordance with some embodiments, a technique or mechanism of constructing a query intent classifier includes receiving a data structure that correlates queries to items that are identified by the queries, and producing training data based on the data structure for training the query intent classifier (also referred to as "learning the query intent classifier"). A query intent classifier is a classifier used to assign queries to classes that represent whether or not corresponding queries are associated with particular intents of users to search for information from particular domains (e.g., intent to perform a search for a job, intent to perform a search for a particular product, intent to search for music, intent to search for movies, etc.). Such classes are referred to as "query intent classes." A "domain" (or alternatively, a "query intent domain") refers to a particular category of information in which a user wishes to perform a search.
The term “query” refers to any type of request containing one or more search terms that can be submitted to a search engine (or multiple search engines) for identifying search results based on the search term(s) contained in the query. The “items” that are identified by the queries in the data structure are representations of search results produced in response to the queries. For example, the items can be uniform resource locators (URLs) or other information that identify addresses or other identifiers of locations (e.g. websites) that contain the search results (e.g., web pages).
In one embodiment, the data structure that correlates queries to items identified by the queries can be a click graph that correlates queries to URLs based on click-through data. “Click-through data” (or more simply, “click data”) refers to data representing selections made by one or more users in search results identified by one or more queries. A click graph contains links (edges) from nodes representing queries to nodes representing URLs, where each link between a particular query and a particular URL represents at least one occurrence of a user making a selection (a click in a web browser, for example) to navigate to the particular URL from search results identified by the particular query. The click graph may also include some queries and URLs that are not linked, which means that no correlation between such queries and URLs has been identified.
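As an illustration, a click graph of this kind can be represented with ordinary dictionaries that map each query to the URLs it is linked to, weighted by click counts. The following is a minimal Python sketch; the click records are hypothetical, with the queries and the URL jobs.abcusajobs.com echoing the examples used in this description:

```python
from collections import defaultdict

# Hypothetical click-through records: one (query, clicked URL) pair per click.
clicks = [
    ("trucking jobs", "jobs.abcusajobs.com"),
    ("trucking jobs", "trucking.example.com"),
    ("warehouse job opening", "jobs.abcusajobs.com"),
    ("bob jobs", "bobjobs.example.com"),
]

# The click graph as a weighted bipartite adjacency structure:
# weight[query][url] = number of clicks from that query's results to that URL.
weight = defaultdict(lambda: defaultdict(int))
for query, url in clicks:
    weight[query][url] += 1

# A link (edge) exists between a query node and a URL node exactly when
# at least one click connects them.
print(dict(weight["warehouse job opening"]))  # {'jobs.abcusajobs.com': 1}
```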
In the ensuing discussion, reference is made to click graphs that contain representations of queries and URLs, with at least some of the queries and URLs correlated (connected by links). However, it is noted that the same or similar techniques can be applied to types of data structures other than click graphs.
The click graph correlating queries to URLs initially includes a relatively small set of queries that have been labeled (such as by one or more humans) with respect to query intent classes. In one implementation, the query intent classes can be binary classes that include a positive class and a negative class with respect to a particular query intent. A query labeled with a “positive class” indicates that the query is positive with respect to the particular query intent, whereas a query labeled with the “negative class” means that the query is negative with respect to the query intent.
In addition to queries that are labeled with respect to query intent classes, the click graph initially can also contain a relatively large number of queries that are unlabeled with respect to query intent classes. The unlabeled queries are those queries that have not been assigned to any of the query intent classes. The small set of labeled queries initially in the click graph is referred to as the "seed queries." The seed queries are provided to enable iterative propagation of query intent class labels to the unlabeled queries in the click graph.
An example of a click graph 100 is depicted in FIG. 1. The click graph 100 includes query nodes 102 that represent respective queries, and URL nodes 104 that represent respective URLs.
Links (or edges) 106 connect certain pairs of query nodes 102 and URL nodes 104. Note that not all of the query nodes 102 and URL nodes 104 are linked. For example, the query node 102 corresponding to the search phrase "warehouse+job+opening" is linked to just the URL node "jobs.abcusajobs.com," and to no other URL nodes in the click graph 100. What this means is that, in the search results identified by the query containing the search phrase "warehouse+job+opening," the user made a selection to navigate to the URL "jobs.abcusajobs.com," and did not make selections to navigate to the other URLs depicted in FIG. 1.
Each of the links 106 in FIG. 1 represents at least one occurrence of a user making a selection to navigate from search results identified by the corresponding query to the corresponding URL. A link 106 can also be associated with a weight, such as a click count representing the number of such selections.
In accordance with some embodiments, the unlabeled queries in the click graph 100 are labeled with respect to the query intent classes based on correlations between the URLs and labeled queries in the click graph 100. Query intent class memberships of unlabeled query nodes in the click graph 100 are inferred from the class memberships of labeled query nodes according to the proximities of the labeled query nodes to the unlabeled query nodes in the click graph 100. Proximity of a labeled query node to an unlabeled query node is based on similarity of click patterns to corresponding URL nodes.
Using techniques according to some embodiments, a relatively large portion (or even all) of the unlabeled queries in the click graph can be labeled with the query intent classes. The labeled queries (including the seed queries as well as queries later labeled based on the seed queries) in the click graph are then used as training data to train the query intent classifier.
As can be seen from the example of FIG. 1, queries that exhibit similar click patterns (links to the same or similar URL nodes) tend to be related to the same query intent. In the example of FIG. 1, some of the query nodes 102 correspond to seed queries that have been labeled with respect to the query intent classes, while the remaining query nodes 102 correspond to unlabeled queries whose query intent class memberships are to be inferred.
The click graph 100 shows URL nodes that represent corresponding individual URLs. Note that in an alternative embodiment, instead of each URL node representing an individual URL, a node 104 can represent a cluster of URLs that have been clustered together based on some similarity metric.
Using some embodiments of the invention, the amount of training data that is available for training a query intent classifier can be expanded in an automated fashion, for more effective training and improved performance of the query intent classifier. In some cases, with the large amounts of training data that can be obtained in accordance with some embodiments, query intent classifiers that use just query words or phrases as features can be relatively accurate. Consequently, selection of other features (other than the search terms or phrases in the queries) does not have to be performed to improve the performance of query intent classifiers.
More formally, according to an embodiment, from a collection of click data (selections or clicks made by users in search results for queries to navigate to selected URLs), a bipartite graph (one example type of a click graph) G = (X ∪ Z, E) is constructed, where X = {x_i}, i = 1, . . . , m, represents a set of m queries (nodes 102 in FIG. 1), Z = {z_j}, j = 1, . . . , n, represents a set of n URLs (nodes 104), and E represents the set of edges (links 106) connecting queries to URLs. Further, let W denote an m×n weight matrix, in which element w_{i,j} is the click count associated with the edge connecting query x_i and URL z_j, and let X_L ⊂ X denote the set of labeled seed queries.
Also, let F denote an m×2 matrix, in which element f_{i,y} is a non-negative real number indicating the "likelihood" (expressed as a probability or some other value) that query x_i belongs to class y (a positive or negative query intent class). The two columns of the matrix F correspond to the two (binary) possible query intent classes: (1) the positive query intent class and (2) the negative query intent class. In alternative implementations, instead of binary query intent classes, more than two query intent classes can be considered. In the latter case, the matrix F will have more than two columns. Although F is assumed to be a matrix in this discussion, note that in other implementations, F can be any other type of collection of elements that can be assigned values representing likelihoods that queries belong to query intent classes.
Note that initially, each element f_{i,y} of F has either a "0" value (the probability of query x_i belonging to class y is 0) or a "1" value (the probability is 1). The seed queries (queries that have been labeled by one or more humans with respect to the query intent classes) are each assigned either the positive or the negative query intent class, and thus the value of the corresponding element f_{i,y} is assigned a "1" or "0" accordingly. Moreover, any unlabeled query is assigned a probability of "0" in both columns of the m×2 matrix F.
F^0 is used to denote an instantiation of F that is consistent with the manual labels of the seed queries in X_L: for queries x_i ∈ X_L that are labeled as positive, f^0_{i,+1} = 1 − f^0_{i,−1} = 1; for those labeled as negative, f^0_{i,+1} = 1 − f^0_{i,−1} = 0; and for all unlabeled queries x_i ∈ X\X_L, f^0_{i,+1} = f^0_{i,−1} = 0. The technique according to some embodiments is to estimate F given G and F^0.
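As an illustration, the construction of F^0 from seed labels can be sketched in Python as follows; the queries and seed labels are hypothetical, and assigning column 0 to the positive class and column 1 to the negative class is an arbitrary convention:

```python
import numpy as np

# Hypothetical query list and seed labels (+1 = positive query intent,
# -1 = negative query intent); the remaining queries are unlabeled.
queries = ["trucking jobs", "warehouse job opening", "bob jobs", "job fair dates"]
seed_labels = {"trucking jobs": +1, "bob jobs": -1}

m = len(queries)
F0 = np.zeros((m, 2))  # column 0: positive class, column 1: negative class
for i, q in enumerate(queries):
    if q in seed_labels:
        F0[i, 0 if seed_labels[q] > 0 else 1] = 1.0
# Unlabeled queries keep "0" in both columns, as described above.
```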
In accordance with one embodiment, a first algorithm (referred to as a "graph-based learning algorithm") is used for assigning values to elements of F using an iterative process that continues until F converges to a final solution (in other words, F satisfies a convergence criterion). The graph-based learning algorithm is performed in the context of a procedure depicted in FIG. 2.
The FIG. 2 procedure receives (at 202) the click graph, which contains the labeled seed queries along with the unlabeled queries. The graph-based learning algorithm is then applied (at 206) to compute values of the elements of F.
Based on F, the unlabeled queries in the click graph are automatically labeled (at 208). Recall that F contains elements f_{i,y}, each indicating the probability that the corresponding query belongs to the positive query intent class or the negative query intent class. If the probability exceeds some predefined threshold for the corresponding query intent class, then the corresponding unlabeled query can be assigned to that class and labeled accordingly.
Once the queries of the click graph have been labeled, such labeled queries are used as training data for training (at 210) a query intent classifier. In one exemplary embodiment, the query intent classifier is a maximum entropy classifier. Alternatively, other types of classifiers can be used as a query intent classifier.
Once the query intent classifier has been trained, the query intent classifier is output (at 212) for use in classifying queries. For example, the query intent classifier can be used in connection with a search engine. The query intent classifier is able to classify a query received at the search engine as being positive or negative with respect to a query intent. If positive, then the search engine can invoke a vertical search service. On the other hand, if the query intent classifier categorizes a received query as being negative for a query intent, then the search engine can perform a general-purpose search.
Details associated with one exemplary embodiment of the graph-based learning algorithm applied at 206 in FIG. 2 are discussed below.
In performing the graph-based learning algorithm, a normalized click count matrix B = D^{−1/2} W is defined. Here D is a diagonal matrix, in which element d_{i,i} equals the sum of all elements in the i-th row (or column) of WW^T. The element d_{i,i} can be understood as the "volume" of all length-of-two paths that start at x_i. A length-of-two path starts at query node x_i, proceeds to a URL node, and then returns either to query node x_i or to another query node. The "volume" of all length-of-two paths that start at x_i is the sum of the weights of all such paths, where the weight of a length-of-two path is the product of the weights of its two edges.
The diagonal matrix D (and more specifically its inverse square root, D^{−1/2}) normalizes the weight matrix W.
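A minimal Python sketch of this normalization, under the definitions above, is shown below; the click counts in W are hypothetical, and the guard against zero-volume rows is an added safeguard not discussed in the text:

```python
import numpy as np

def normalized_click_matrix(W):
    """Compute B = D^{-1/2} W, where d_ii is the sum of row i of W W^T."""
    WWt = W @ W.T
    d = WWt.sum(axis=1)             # "volume" of length-of-two paths from x_i
    d = np.where(d > 0, d, 1.0)     # guard: avoid division by zero for isolated queries
    return W / np.sqrt(d)[:, None]  # left-multiply by D^{-1/2}

# Hypothetical 3-query x 2-URL click count matrix.
W = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])
B = normalized_click_matrix(W)
```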
Given the above definitions, the graph-based learning algorithm is as follows:
Input: matrix F^0 and matrix B = D^{−1/2} W
Output: F*
Step 1: Initialize F with F^0 (in other words, set F_0 = F^0), and set i = 1.
Step 2: Compute H_i = B^T F_{i−1}.
Step 3: Compute F_i = αBH_i + (1 − α)F^0, where α is a parameter between 0 and 1.
Step 4: If the convergence criterion is not met, increment i and repeat steps 2 and 3; otherwise, output F* = F_i.
The algorithm above iteratively proceeds through calculating different F_i matrices (for i = 1, 2, . . . ) until a convergence criterion is satisfied.
In step 1, the matrix F is initialized with F^0, which as explained above contains elements assigned "0" or "1" based on the labels assigned to seed queries, and contains elements assigned "0" for unlabeled queries.
In step 2, the transpose of the normalized click count matrix (B^T) is multiplied by the previous matrix F_{i−1} to produce a matrix H_i, where the matrix H_i is used in step 3 to calculate the present matrix F_i. Each element of the matrix H_i has a value indicating the likelihood that a corresponding URL has a particular query intent class membership. Effectively, calculating the matrix H_i causes class memberships of the query nodes to be propagated to the URL nodes in the click graph.
In step 3, the value of α is selected empirically, with α determining the tradeoff between selecting values of the matrix F_i to be consistent with the internal structure of the click graph (patterns of links from queries to URLs) and selecting values of the matrix F_i to be consistent with the manual labels assigned to the seed queries. In the equation F_i = αBH_i + (1 − α)F^0 calculated in step 3, the term αBH_i has a value that is dependent upon the patterns of clicks between query nodes and URL nodes in the click graph, whereas the term (1 − α)F^0 has a value that is dependent upon the initial values of F assigned based on the seed queries.
Step 4 causes steps 2 and 3 to be repeated if the convergence criterion is not met (F_i has not converged to F*).
It is noted that steps 2 and 3 above can be merged into a single step, as follows: F_i = αAF_{i−1} + (1 − α)F^0, where A = BB^T, in an alternative implementation.
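A minimal Python sketch of steps 1 through 4, reusing the matrices F0 and B from the earlier sketches, might look as follows; the value of α, the tolerance, and the element-wise convergence test are illustrative choices (the text itself defines convergence in terms of the objective function of Eq. 1 below):

```python
import numpy as np

def graph_based_learning(F0, B, alpha=0.9, tol=1e-6, max_iter=1000):
    """Iterate F_i = alpha * B (B^T F_{i-1}) + (1 - alpha) * F0 to a fixed point."""
    F = F0.copy()                    # step 1: initialize F with F^0
    for _ in range(max_iter):
        H = B.T @ F                  # step 2: propagate class mass to URL nodes
        F_next = alpha * (B @ H) + (1 - alpha) * F0   # step 3
        if np.abs(F_next - F).max() < tol:            # step 4: convergence check
            return F_next
        F = F_next
    return F
```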
F* is an optimal solution of minimizing the following objective function:
Q(F) = αQ1(F) + (1 − α)Q2(F),   (Eq. 1)
where
Q1(F) = ½ Σ_{i,j} (WW^T)_{i,j} ‖ f_i/√(d_{i,i}) − f_j/√(d_{j,j}) ‖^2, and
Q2(F) = ‖ F − F^0 ‖^2,
with f_i denoting the i-th row of F.
The Q1(F) term in the objective function of Eq. 1 specifies that if two queries are close to each other in the click graph (in terms of closeness of click patterns from the corresponding query nodes to URL nodes), then the two queries should be assigned the same query intent class. The Q2(F) term, on the other hand, regularizes the estimate of F towards F^0 (the values based on the seed queries). In this regard, Q(F) represents a tradeoff between consistency with the intrinsic structure of the click graph and consistency with the manual labels. In other words, the values of F are computed based on an objective function (Eq. 1) that balances computing the values of F based on patterns of correlations between the queries and URLs in the click graph and computing the values of F based on the initially labeled seed queries.
In step 4 of the graph-based learning algorithm above, F_i has converged to F* if the value of the objective function of Eq. 1 no longer changes between iterations.
Once F* is obtained, its elements can be normalized to posterior probabilities p(y|x_i). A posterior probability p(y|x_i) can be computed as the normalized quantity f_{i,y}/(f_{i,+1} + f_{i,−1}). In training the query intent classifier, these posterior probabilities can be used directly. Alternatively, the posterior probabilities can be clamped to either 0 or 1 values (e.g., a posterior probability less than some threshold is clamped to 0, whereas a posterior probability greater than the threshold is clamped to 1). An alternative convergence criterion in step 4 discussed above is that the clamped F values do not change.
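In Python, the normalization and optional clamping can be sketched as follows; the guard for all-zero rows and the example threshold are illustrative additions:

```python
import numpy as np

def posteriors(F_star, clamp_threshold=None):
    """Normalize each row of F* to p(y|x_i) = f_{i,y} / (f_{i,+1} + f_{i,-1});
    optionally clamp the result to {0, 1} using a threshold."""
    totals = F_star.sum(axis=1, keepdims=True)
    totals = np.where(totals > 0, totals, 1.0)   # guard: leave all-zero rows at 0
    P = F_star / totals
    if clamp_threshold is not None:
        P = (P > clamp_threshold).astype(float)  # e.g. clamp_threshold = 0.5
    return P
```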
An issue associated with click graphs is that they can be sparse; in other words, there may be missing edges (links) between queries and URLs that are relevant. In addition, user clicks are often noisy (such as when a user inadvertently clicks on the hyperlink corresponding to a URL that was not intended), which can result in edges between queries and URLs that are unrelated. Missing edges in a click graph can prevent correct label information from being propagated, while spurious edges between queries and URLs in a click graph can result in classification errors.
To compensate for sparsity and noise of a click graph, techniques according to some embodiments can regularize the learning of the classifier with a click graph using content-based classification. In other words, the graph-based learning (in which labels are assigned to unlabeled queries in the click graph) and the classifier learning (training of the classifier based on the automatically labeled queries and seed queries) are performed jointly. This is contrasted to the procedure of FIG. 2, in which the labeling of queries in the click graph is completed before the classifier is trained.
Combining graph-based learning with content-based learning allows a trained classifier to provide an output that is then fed back to the graph-based learning algorithm to modify the output of the graph-based learning algorithm. Such feedback from the trained classifier to the graph-based learning algorithm addresses the issues of sparsity and noise. Sparsity is addressed since labeling performed by the classifier is able to cause additional links to be added to the click graph, whereas noise is addressed since re-labeling of classes made by the classifier can cause unrelated links to be removed.
A procedure in which graph-based learning is performed jointly with classifier learning is depicted in FIG. 3.
As depicted in FIG. 3, the graph-based learning algorithm is first applied to label queries in the click graph, the query intent classifier is trained based on the labeled queries, and the output of the trained classifier is then fed back to the graph-based learning algorithm to re-label the queries in the click graph.
The feedback iteration (feeding back classifier output to the graph-based learning algorithm) can be performed a number of times (e.g., two or greater).
More formally, F^c(λ) is used to denote an m×2 matrix representing the output of a maximum entropy classifier (assuming that the query intent classifier is implemented with a maximum entropy classifier), where λ represents the parameters of the classifier. Each element f^c_{i,y} = p_λ(y|x_i) of F^c(λ) is given by the classification function defined in Eq. 2:
p_λ(y|x) = exp( Σ_j λ_j φ_j(x, y) ) / Σ_{y′} exp( Σ_j λ_j φ_j(x, y′) ),   (Eq. 2)
where x denotes an input query, y denotes a class, and φ_j(x, y), j = 1, 2, . . . , represent a set of lexicon features extracted from queries. The lexicon features can be n-grams, where an n-gram refers to n consecutive word tokens that appear together. For example, the query "trucking jobs" has 1) unigrams: "trucking" and "jobs"; and 2) bigrams: "<s>+trucking", "trucking+jobs", and "jobs+</s>", where <s> represents the beginning of a sentence and </s> represents the end of a sentence. The classifier is parameterized by the weights λ_j, collectively represented by λ.
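The lexicon feature extraction described above can be sketched in Python as follows; the sketch reproduces the unigram and bigram features given for the query "trucking jobs":

```python
def ngram_features(query):
    """Extract unigram and bigram lexicon features from a query,
    padding with <s> (sentence start) and </s> (sentence end) markers."""
    tokens = query.split()
    padded = ["<s>"] + tokens + ["</s>"]
    bigrams = [padded[i] + "+" + padded[i + 1] for i in range(len(padded) - 1)]
    return tokens + bigrams

print(ngram_features("trucking jobs"))
# ['trucking', 'jobs', '<s>+trucking', 'trucking+jobs', 'jobs+</s>']
```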
Then F^c can be treated as a prior of F (in other words, the re-labeling of the queries made by the classifier is provided as an input to the graph-based learning algorithm for calculating the next F), and the objective function of Eq. 1 can be modified accordingly:
Q(F, λ) = αQ1(F) + (1 − α)Q2(F, λ),   (Eq. 3)
where Q1(F) is the same as that in Eq. 1, and Q2(F, λ) has the following form:
Q2(F, λ) = ‖ F − F^c(λ) ‖^2.   (Eq. 4)
The new objective Q(F,λ) asks F to be consistent with the output of the maximum entropy classifier, while staying consistent with the intrinsic structure of the click graph.
The FIG. 3 procedure can be expressed as the following algorithm:
Input: matrix F^0; and matrix B = D^{−1/2} W
Output: F* and λ* (where F* is the optimal F, and λ* represents the trained maximum entropy classifier)
Step 1: Apply the graph-based learning algorithm to F^0 and B to obtain an initial F*.
Step 2: Compute λ* = argmin_λ Q(F*, λ), using stochastic gradient descent (or other maximum entropy model training algorithm such as generalized iterative scaling);
Step 3: Compute F* = argmin_F Q(F, λ*), using the graph-based learning algorithm, where the inputs are F^c(λ*) and B;
Step 4: Repeat steps 2 and 3 until a stopping criterion is satisfied.
In step 2 above, λ* = argmin_λ Q(F*, λ) means that a value of λ is found that minimizes Q(F*, λ) according to Eq. 3, using a general optimization algorithm such as the stochastic gradient descent algorithm. In step 3 above, F* = argmin_F Q(F, λ*) means that a value of F is found that minimizes Q(F, λ*) according to Eq. 3, using the graph-based learning algorithm.
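The alternation of steps 2 and 3 can be sketched in Python as follows, reusing the graph_based_learning() sketch from above; train_max_ent() is a hypothetical routine that fits the classifier parameters λ to the current soft labels and returns the fitted model together with its output matrix F^c(λ):

```python
def joint_learning(F0, B, queries, train_max_ent, alpha=0.9, feedback_rounds=2):
    """Alternate classifier training (step 2) and graph-based learning (step 3)."""
    F_star = graph_based_learning(F0, B, alpha)        # step 1: initial F*
    model = None
    for _ in range(feedback_rounds):                   # step 4: repeat steps 2 and 3
        model, Fc = train_max_ent(queries, F_star)     # step 2: fit lambda* to F*
        F_star = graph_based_learning(Fc, B, alpha)    # step 3: F^c(lambda*) as prior
    return F_star, model
```

Passing Fc in place of F0 makes the classifier output serve as the regularization target of the graph-based learning, mirroring the substitution of Q2(F, λ) for Q2(F) in Eq. 3.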
One way of constructing a click graph is to simply form a relatively large click graph based on collected click data. In some scenarios, this may be inefficient. A more efficient manner of constructing a click graph is to start by building a compact click graph and then iteratively expanding the click graph until the click graph reaches a target size.
To build a compact click graph, a technique according to some embodiments removes navigational queries and clusters related URLs. A query is considered navigational when the user submitting the query has a specific web page in mind. For example, "youtube" is likely to be a navigational query that refers to the URL "www.youtube.com." Such a query usually has a skewed click count (a relatively large click count) on one URL, and the class membership of that URL can be excessively influenced by this single query. To avoid this adverse effect on the learning algorithms discussed above, navigational queries can be identified and removed from click graphs.
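One simple way to flag such queries is to test how concentrated a query's clicks are on its single most-clicked URL; the sketch below does this in Python (the skew and minimum-click thresholds are hypothetical tuning values, not taken from the description above):

```python
def is_navigational(url_counts, skew_threshold=0.9, min_clicks=10):
    """Flag a query whose clicks concentrate on one URL (skewed click count)."""
    total = sum(url_counts.values())
    if total < min_clicks:
        return False                 # too little evidence to call it navigational
    return max(url_counts.values()) / total >= skew_threshold

# The query "youtube" would likely be flagged and removed from the click graph:
print(is_navigational({"www.youtube.com": 98, "misc.example.com": 2}))  # True
```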
Moreover, related URLs can be clustered into a merged URL to compensate for sparsity of a click graph. Specifically, if a set of URLs have exactly the same top-, second-, and third-level domain names, such URLs can be grouped into a single node and their click counts can be summed accordingly. For example, the individual URLs nurse.jobs.123usajobs.com, finance.jobs.123usajobs.com, and miami.jobs.123usajobs.com can be grouped into a cluster URL referred to as "jobs.123usajobs.com."
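In Python, the grouping key can be formed from the three trailing domain labels of each URL's host name, as in the following sketch (which assumes the scheme and path have already been stripped, and uses hypothetical click counts):

```python
from collections import defaultdict

def cluster_key(host):
    """Key a host by its top-, second-, and third-level domain names."""
    parts = host.split(".")
    return ".".join(parts[-3:]) if len(parts) >= 3 else host

# Cluster the example URLs and sum their click counts.
counts = {"nurse.jobs.123usajobs.com": 5,
          "finance.jobs.123usajobs.com": 3,
          "miami.jobs.123usajobs.com": 2}
clusters = defaultdict(int)
for host, n in counts.items():
    clusters[cluster_key(host)] += n
print(dict(clusters))  # {'jobs.123usajobs.com': 10}
```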
Finally, since the most reliable information about query classes resides in the seed queries, it is more efficient to apply the learning algorithms discussed above only to a relatively compact click graph that covers those queries. To this end, construction of the click graph starts from the seed queries, and the click graph is iteratively expanded in the following fashion: the URLs connected to the seed queries in the click data are first added to the graph; additional queries connected to those URLs are then added; and these two expansion steps are repeated until the click graph reaches the target size.
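A sketch of this expansion in Python follows; query_to_urls() and url_to_queries() are hypothetical lookups into the collected click data, and target_size is the desired total number of nodes:

```python
def expand_click_graph(seed_queries, query_to_urls, url_to_queries, target_size):
    """Grow a click graph outward from the seed queries, one hop at a time."""
    queries, urls = set(seed_queries), set()
    while len(queries) + len(urls) < target_size:
        # Add URLs linked to the queries gathered so far, then queries linked
        # to the newly added URLs.
        new_urls = {u for q in queries for u in query_to_urls(q)} - urls
        new_queries = {q for u in new_urls for q in url_to_queries(u)} - queries
        if not new_urls and not new_queries:
            break                    # nothing left to add
        urls |= new_urls
        queries |= new_queries
    return queries, urls
```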
FIG. 4 shows an example computer in which a graph-based learning module 402 and a query intent classifier 404 are executable on one or more central processing units (CPUs) 406, with the CPU(s) 406 connected to a storage 408. The computer can also be coupled to a search engine system 412.
The storage 408 also includes click data 416, which may have been collected based on monitoring the search engine system 412 (and possibly other search engines). From the click data 416, a click graph 418 is developed. The storage 408 also stores seed queries 420 that are provided to the graph-based learning module 402 along with the click graph 418 for labeling unlabeled queries in the click graph 418 to produce training data 422. The training data 422 is used for training the query intent classifier 404.
Instructions of the software described above (including the graph-based learning module 402 and the query intent classifier 404) are loaded for execution on a processor (such as the one or more CPUs 406). The processor can include a microprocessor, a microcontroller, a processor module or subsystem (including one or more microprocessors or microcontrollers), or another control or computing device. A "processor" can refer to a single component or to plural components.
Data and instructions (of the software) are stored in respective storage devices, which are implemented as one or more computer-readable or computer-usable storage media. The storage media include different forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; and optical media such as compact disks (CDs) or digital video disks (DVDs).
In the foregoing description, numerous details are set forth to provide an understanding of the present invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these details. While the invention has been disclosed with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover such modifications and variations as fall within the true spirit and scope of the invention.