Information retrieval systems, such as internet search systems, use ranking functions to generate document scores which are then sorted to produce a ranking. Typically these functions have had only a small number of free parameters (e.g. two free parameters in BM25) and as a result they are easy to tune for a given collection of documents (or other search objects), requiring few training queries and little computation to find reasonable parameter settings.
These functions typically rank a document based on the occurrence of search terms within it. More complex functions are, however, required in order to take more features into account when ranking documents, such as where search terms occur in a document (e.g. in the title or in the body of text), link-graph features and usage features. As the number of features is increased, so is the number of parameters which are required, and this considerably increases the complexity of learning the parameters.
Machine learning may be used to learn the parameters within a ranking function (which may also be referred to as a ranking model). The machine learning takes an objective function and optimizes it. There are many known metrics which are used to evaluate information retrieval systems, such as Normalized Discounted Cumulative Gain (NDCG), Mean Average Precision (MAP) and RPrec (Precision at rank R, where R is the total number of relevant documents), all of which depend only on the ranks of documents and as a result are not suitable for use as training objectives. This is because the metrics are not smooth with respect to the parameters within the ranking function (or model): if small changes are made to the model parameters, the document scores will change smoothly; however, this will typically not affect the ranking of the documents until one document's score passes another's, at which point the information retrieval metric makes a discontinuous change.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known information retrieval systems.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
Methods to enable optimization of discontinuous rank metrics are described. The search scores associated with a number of search objects are written as score distributions and these are converted into rank distributions for each object in an iterative process. Each object is selected in turn and the score distribution of the selected object is compared to the score distributions of each other object in turn to generate a probability that the selected object is ranked in a particular position. For example, with three documents the rank distribution may give a 20% probability that a document is ranked first, a 60% probability that the document is ranked second and a 20% probability that the document is ranked third. In some embodiments, the rank distributions may then be used in the optimization of discontinuous rank metrics.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings.
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
The scores 102 which are generated by the model 10 and label data 107 (or other data which indicates the relevance of each search object to a particular search query) are input to a training module 11 which generates an objective function 105 based on an information retrieval metric 106. For the purposes of the following explanation, NDCG is used as the information retrieval metric 106. However, other metrics may alternatively be used, such as Average Precision. The generation of the objective function 105 is described in more detail below. The label data 107 may be generated by judges who assign relevance levels to documents for a particular search query. In an example, gradient based learning may be used.
The model 10 uses the objective function 105 to optimize the values of the parameters in the ranking function 103. The optimization uses label data 107 for the particular search objects or other data which indicates the relevance of each search object to a particular search query. The model 10 outputs a set of learned parameters 108, which may be based on many iterations (as described below) and on a number of different training queries. The learned parameters 108 and the ranking function 103 may then be used in a search tool (i.e. the above method is performed off-line and therefore the computation time in learning the parameters does not affect the speed of searching). Different learned parameters may be generated for different collections of search objects or an existing set of learned parameters may be used on a new collection of search objects. A collection of search objects is also known as a corpus.
In addition to inputting the scores 102 to a training module 11, the scores may also be input to an evaluation module 12 which uses an information retrieval metric 109 and the label data 107 to evaluate the effectiveness of the ranking function 103 (including the current values of the parameters). The metrics 106, 109 used for training and evaluation may be the same or different. In some examples, a first set of search objects may be used for training and a second set of search objects may be used for evaluation.
For the purposes of the following explanation the following notation is used: for a given training query, it is assumed that there are N documents, each with a known human-defined rating (label data 107), and an individual document indexed by j is denoted $doc_j$. A ranking function f with weights (parameters) w is assumed to take in document features $\mathbf{x}_j$ and produce a score $s_j$. The score is denoted:

$$s_j = f(\mathbf{w}, \mathbf{x}_j) \qquad (1)$$
It will be appreciated that a search may be for items other than documents, such as images, sound files, etc.; however, for the purposes of the following explanation only, the search objects are referred to as documents.
The conversion (in block 201) of deterministic scores to smoothed scores (or score distributions) may be performed by treating them as random variables. Each score may be given the same smoothing using equal variance Gaussian distributions. Hence the deterministic score $s_j$ in equation (1) becomes the mean $\bar{s}_j$ of a Gaussian score distribution (so that $\bar{s}_j = f(\mathbf{w}, \mathbf{x}_j)$), with a shared smoothing variance $\sigma_s^2$:

$$p(s_j) = \mathcal{N}(s_j \mid \bar{s}_j, \sigma_s^2) \qquad (2)$$

using:

$$\mathcal{N}(x \mid \mu, \sigma^2) = (2\pi\sigma^2)^{-1/2} \exp\left[-(x-\mu)^2 / (2\sigma^2)\right]$$
An alternative motivation would be to consider the source of noise as an inherent uncertainty in the model parameters w, arising from inconsistency between the ranking model and the training data. This would be the result of a Bayesian approach to the learning task.
Deterministic scores, as shown in graph 301, result in deterministic rank distributions, as shown in graph 302. When the scores are smoothed, however, the rank of each document becomes a random variable with its own distribution. The true rank distributions may be generated by repeatedly performing the following steps (a sketch of this sampling process is given below):
a) sample a vector of N scores, one from each score distribution,
b) sort the score samples and
c) accumulate histograms of the resulting ranks for each document.
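By way of illustration, the following sketch estimates the true rank distributions by exactly this sampling process (a minimal example assuming NumPy; the three score means and the smoothing variance are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
s_mean = np.array([2.0, 1.5, 1.4])   # hypothetical score means s_j
sigma_s = 0.5                        # shared smoothing standard deviation
N, T = len(s_mean), 100_000          # T = number of Monte Carlo repetitions

# a) sample a vector of N scores per repetition, one from each score distribution
samples = rng.normal(s_mean, sigma_s, size=(T, N))
# b) sort the score samples: a double argsort yields each document's rank,
#    with rank 0 being the best rank
ranks = np.argsort(np.argsort(-samples, axis=1), axis=1)
# c) accumulate histograms of the resulting ranks for each document
rank_dist = np.stack([np.bincount(ranks[:, j], minlength=N) for j in range(N)]) / T
print(rank_dist)                     # row j estimates the rank distribution of doc j
```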
However, the use of sorting (in step b) results in a discontinuous function which causes problems for gradient based optimizations.
Alternatively, an approximate algorithm for generating the rank distributions may be used that avoids an explicit sort, as shown in FIG. 5.
For a given $doc_j$ (selected in block 501), the probability that another $doc_i$ (selected in block 502) will rank above $doc_j$ is determined (in block 503). Denoting $S_j$ as a draw (i.e. a sampled selection) from $p(s_j)$, the probability that $S_i > S_j$ is required, or equivalently $\Pr(S_i - S_j > 0)$. The difference of two Gaussian random variables is itself Gaussian, so the required probability is the integral of a Gaussian density; the probability that document i beats document j, denoted $\pi_{ij}$, is therefore:

$$\pi_{ij} \equiv \Pr(S_i - S_j > 0) = \int_0^\infty \mathcal{N}\left(s \mid \bar{s}_i - \bar{s}_j, 2\sigma_s^2\right) ds \qquad (3)$$
This quantity represents the fractional number of times that $doc_i$ would be expected to rank higher than $doc_j$ on repeated pairwise samplings from the two Gaussian score distributions, as illustrated in the sketch below.
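A minimal sketch of this computation (assuming SciPy; the integral in equation (3) reduces to a Gaussian cumulative distribution function evaluated at the difference of the score means):

```python
import numpy as np
from scipy.stats import norm

def pairwise_pi(s_mean, sigma_s):
    """pi[i, j] = Pr(S_i > S_j) for Gaussian scores with shared variance sigma_s**2."""
    diff = s_mean[:, None] - s_mean[None, :]          # s_bar_i - s_bar_j
    # The difference of two independent Gaussians has variance 2 * sigma_s**2,
    # so the integral in equation (3) is a Gaussian CDF.
    return norm.cdf(diff / np.sqrt(2.0 * sigma_s**2))

pi = pairwise_pi(np.array([2.0, 1.5, 1.4]), 0.5)
print(pi[0, 1])   # fraction of pairwise samplings in which doc 0 beats doc 1
```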
These pairwise probabilities may then be used to generate ranks (in block 504). If the probabilities of a document being beaten by each of the other documents were added up, this would give a quantity that is related to the expected rank of the document being beaten, i.e. if a document is never beaten, its rank will be 0, the best rank. More generally, using the pairwise contest trick, an expression for the expected rank $r_j$ of document j can be written as:

$$\mathbb{E}[r_j] = \sum_{i \neq j} \pi_{ij} \qquad (4)$$
which can be easily computed using equation (3).
The actual distribution of the rank $r_j$ of a document j under the pairwise contest approximation is obtained by considering the rank $r_j$ as a Binomial-like random variable, equal to the number of successes in N−1 Bernoulli trials, where the probability of success in trial i is the probability that document j is beaten by document i, namely $\pi_{ij}$. If i beats j then $r_j$ goes up by one.
However, because the probability of success is different for each trial, it is a more complex discrete distribution than the Binomial: it is referred to herein as the Rank-Binomial distribution. Like the Binomial, it has a combinatoric flavour: there are few ways that a document can end up with the top (or bottom) rank, and many ways of ranking in the middle. Unlike the Binomial, it does not have an analytic form. However, it can be computed using a standard result from basic probability theory, that the probability density function (pdf) of a sum of independent random variables is the convolution of the individual pdfs. In this case it is a sum of N−1 independent Bernoulli (coin-flip) distributions, each with a probability of success $\pi_{ij}$. This yields an exact recursive computation for the distribution of ranks as follows.
If the initial rank distribution for document j is defined as $p_j^{(1)}(r)$, where the superscript number identifies the stage of the recursion and there is just the document j, then the rank can only have value zero (the best rank) with probability one:

$$p_j^{(1)}(r) = \delta(r) \qquad (5)$$
where $\delta(x) = 1$ only when $x = 0$ and zero otherwise. There are N−1 other documents that contribute to the rank distribution and these may be indexed with $i = 2 \ldots N$. Each time a new document i is added, the event space of the rank distribution grows by one, taking the r variable to a maximum of N−1 on the last iteration. The new distribution over the ranks is updated by applying the convolution process described above, giving the following recursive relation:
$$p_j^{(i)}(r) = p_j^{(i-1)}(r-1)\,\pi_{ij} + p_j^{(i-1)}(r)\,(1 - \pi_{ij}) \qquad (6)$$
The recursive relation shown in equation (6) can be interpreted in the following manner. If document i is added, the probability of rank $r_j$ can be written as a sum of two parts corresponding to the new document i beating document j or not. If i beats j then the probability of being in rank r at this iteration is equal to the probability of being in rank r−1 on the previous iteration, and this situation is covered by the first term on the right of equation (6). Conversely, if the new document leaves the rank of j unchanged (it loses), the probability of being in rank r is the same as it was in the last iteration, corresponding to the second term on the right of equation (6).
For $r < 0$, $p_j^{(i)}(r)$ is defined to be zero. The final rank distribution is defined as $p_j(r) \equiv p_j^{(N)}(r)$.
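Equations (5) and (6) translate directly into a short routine. A sketch (assuming NumPy and reusing the hypothetical pairwise_pi helper above):

```python
import numpy as np

def rank_binomial(pi, j):
    """Rank distribution p_j(r) for document j via the recursion of equation (6)."""
    N = pi.shape[0]
    p = np.zeros(N)
    p[0] = 1.0                                      # equation (5): rank 0 w.p. one
    for i in range(N):
        if i == j:
            continue                                # only the other documents contribute
        shifted = np.concatenate(([0.0], p[:-1]))   # p_j(r - 1), zero for r < 0
        p = shifted * pi[i, j] + p * (1.0 - pi[i, j])   # equation (6)
    return p
```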
The pairwise contest trick yields Rank-Binomial rank distributions, which are an approximation to the true rank distributions. Their computation does not require an explicit sort. Simulations have shown that this gives similar rank distributions to the true generative process. These approximations can, in some examples, be improved further as shown in FIG. 7, in which an N×N matrix is formed whose row j holds the approximate rank distribution $p_j(r)$ of document j (block 701).
A sequence of column and row operations is then performed on this matrix (block 702). The operations comprise dividing each column by the column sums, then dividing each row of the resulting matrix by the row sums, and iterating to convergence. This process is known as Sinkhorn scaling; its purpose is to convert the original matrix to a doubly-stochastic matrix. The solution can be shown to minimize the Kullback-Leibler distance of the scaled matrix from the original matrix.
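A minimal sketch of the Sinkhorn scaling step, assuming the matrix rows hold the approximate rank distributions (a fixed iteration count stands in for a convergence test):

```python
import numpy as np

def sinkhorn(P, iters=50, eps=1e-12):
    """Alternate column and row normalizations until P is (nearly) doubly stochastic."""
    P = P.copy()
    for _ in range(iters):
        P /= P.sum(axis=0, keepdims=True) + eps   # divide each column by its sum
        P /= P.sum(axis=1, keepdims=True) + eps   # divide each row by its sum
    return P
```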
Having generated the rank distributions (block 202, FIG. 2), a smoothed version of an information retrieval metric, such as NDCG, may be generated from them, as described below.
NDCG is a metric which provides a reasonable way of dealing with multiple relevance levels in datasets. It is often truncated at a rank position R (indexed from 0, e.g. R=10) and is defined as:

$$G_R = G_{R,\max}^{-1} \sum_{r=0}^{R-1} g(r)\, D(r) \qquad (7)$$

where the gain g(r) of the document at rank r is usually an exponential function $g(r) = 2^{l(r)}$ of the label l(r) (or rating) of the document at rank r. The labels identify the relevance of a particular document to a particular search query and typically take values from 0 (bad) to 4 (perfect). The rank discount D(r) has the effect of concentrating on documents with high scores and may be defined in many different ways; for the purposes of this description $D(r) = 1/\log(2+r)$. $G_{R,\max}$ is the maximum value of $\sum_{r=0}^{R-1} g(r) D(r)$, obtained when the documents are optimally ordered by decreasing label value, and serves as a normalization factor. Where no subscript is defined, it should be assumed that R=N.
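For reference, a direct implementation of equation (7) under these conventions (a sketch assuming NumPy; ranks are indexed from 0 and labels are listed in ranked order):

```python
import numpy as np

def ndcg(labels_by_rank, R=None):
    """Deterministic NDCG of equation (7); labels_by_rank[r] is the label at rank r."""
    labels = np.asarray(labels_by_rank, dtype=float)
    R = len(labels) if R is None else R
    discount = 1.0 / np.log(2.0 + np.arange(R))     # D(r) = 1 / log(2 + r)
    gain = 2.0 ** labels[:R]                        # g(r) = 2**l(r)
    ideal_gain = np.sort(2.0 ** labels)[::-1][:R]   # optimal ordering by label
    g_max = np.sum(ideal_gain * discount)           # normalization factor G_{R,max}
    return np.sum(gain * discount) / g_max
```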
The expression for deterministic NDCG is given in equation (7). Based on this expression, the expected NDCG can be computed given the rank distributions described above. Rewriting NDCG as a sum over document indices j rather than document ranks r gives:

$$G = G_{\max}^{-1} \sum_{j=1}^{N} g(j)\, D(r_j) \qquad (8)$$

where g(j) is the gain of document j and $r_j$ is its rank.
The deterministic discount $D(r_j)$ is replaced by the expected discount $\mathbb{E}[D(r_j)]$, giving:

$$G_{\text{soft}} = G_{\max}^{-1} \sum_{j=1}^{N} g(j)\, \mathbb{E}[D(r_j)] \qquad (9)$$

This is referred to herein as 'SoftNDCG'. Writing the expectation as a sum over the rank distribution gives:

$$G_{\text{soft}} = G_{\max}^{-1} \sum_{j=1}^{N} g(j) \sum_{r=0}^{N-1} D(r)\, p_j(r) \qquad (10)$$

where the rank distribution $p_j(r)$ is given in equation (6) above. The variable $G_{\text{soft}}$ provides a single value per query, which may be averaged over several queries, and which evaluates the performance of the ranking function. Equation (10) may then be used as an objective function to learn parameters (in block 204 of FIG. 2); a sketch is given below.
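A sketch of the resulting objective, reusing the hypothetical pairwise_pi and rank_binomial helpers above:

```python
import numpy as np

def soft_ndcg(s_mean, sigma_s, labels):
    """SoftNDCG G_soft of equation (10) for a single query."""
    N = len(s_mean)
    pi = pairwise_pi(s_mean, sigma_s)                  # equation (3)
    discount = 1.0 / np.log(2.0 + np.arange(N))        # D(r)
    gain = 2.0 ** np.asarray(labels, dtype=float)      # g(j)
    g_max = np.sum(np.sort(gain)[::-1] * discount)     # optimal-ordering normalizer
    expected_discount = np.array(
        [rank_binomial(pi, j) @ discount for j in range(N)])   # E[D(r_j)]
    return np.sum(gain * expected_discount) / g_max
```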
The use of SoftNDCG to learn parameters is described below using one particular learning method. It will be appreciated that the objective function given above in equation (10) may be used in other ways to learn the parameters in a ranking function.
Having derived an expression for SoftNDCG, it is differentiated with respect to the weight vector w. The derivative with respect to the weight vector with K elements is:

$$\frac{\partial G_{\text{soft}}}{\partial \mathbf{w}} = \left(\frac{\partial \bar{\mathbf{s}}}{\partial \mathbf{w}}\right)^{T} \frac{\partial G_{\text{soft}}}{\partial \bar{\mathbf{s}}} \qquad (11)$$

where $\partial \bar{\mathbf{s}} / \partial \mathbf{w}$ is the N×K matrix of derivatives of the score means with respect to the weights. The first matrix is defined by the neural net model and is computed via backpropagation (e.g. as described in a paper by Y. LeCun et al. entitled 'Efficient Backprop' published in 1998). The second vector is the gradient of the objective function (equation (10)) with respect to the score means $\bar{\mathbf{s}}$. The task is to define this gradient vector for each document in a training query.
Taking a single element of this gradient vector corresponding to a document with index m ($1 \le m \le N$), equation (10) can be differentiated to obtain:

$$\frac{\partial G_{\text{soft}}}{\partial \bar{s}_m} = G_{\max}^{-1} \sum_{j=1}^{N} g(j) \sum_{r=0}^{N-1} D(r)\, \frac{\partial p_j(r)}{\partial \bar{s}_m} \qquad (12)$$

This says that changing the score mean $\bar{s}_m$ changes the rank distributions of all of the documents, so the derivative of each rank distribution $p_j(r)$ with respect to $\bar{s}_m$ is required.
Hence a parallel recursive computation is used to obtain the required derivative of $p_j(r)$. Denoting the derivative of the rank distribution at stage i of the recursion by $\partial p_j^{(i)}(r)/\partial \bar{s}_m$, it can be shown from equation (6) that:

$$\frac{\partial p_j^{(i)}(r)}{\partial \bar{s}_m} = \frac{\partial p_j^{(i-1)}(r-1)}{\partial \bar{s}_m}\,\pi_{ij} + p_j^{(i-1)}(r-1)\,\frac{\partial \pi_{ij}}{\partial \bar{s}_m} + \frac{\partial p_j^{(i-1)}(r)}{\partial \bar{s}_m}\,(1-\pi_{ij}) - p_j^{(i-1)}(r)\,\frac{\partial \pi_{ij}}{\partial \bar{s}_m} \qquad (13)$$

where the recursive process runs over the remaining documents $i = 2 \ldots N$, starting from the initial condition:

$$\frac{\partial p_j^{(1)}(r)}{\partial \bar{s}_m} = 0 \qquad (14)$$

Considering now the last term on the right of equation (13), differentiating $\pi_{ij}$ with respect to $\bar{s}_m$, it can be shown from equation (3) that:

$$\frac{\partial \pi_{ij}}{\partial \bar{s}_m} = \mathcal{N}\left(\bar{s}_j \mid \bar{s}_i, 2\sigma_s^2\right)\left(\delta_{im} - \delta_{jm}\right) \qquad (15)$$

and so, substituting equation (15) in equation (13), the recursion for the derivatives can be run. The result of this computation can be defined as the N-vector over ranks:

$$\boldsymbol{\psi}_{jm} \equiv \left[\frac{\partial p_j(r)}{\partial \bar{s}_m}\right]_{r=0}^{N-1} \qquad (16)$$
Using this matrix notation, the result can be substituted in equation (12):

$$\frac{\partial G_{\text{soft}}}{\partial \bar{s}_m} = G_{\max}^{-1} \sum_{j=1}^{N} g(j)\, \boldsymbol{\psi}_{jm}^{T}\, \mathbf{d} \qquad (17)$$

The following are now defined: the gain vector $\mathbf{g}$ (indexed by document), the discount vector $\mathbf{d}$ (indexed by rank) and the N×N square matrix $\Psi_m$ whose rows are the rank distribution derivatives implied above:

$$\mathbf{g} = \left[g(j)\right]_{j=1}^{N}, \qquad \mathbf{d} = \left[D(r)\right]_{r=0}^{N-1}, \qquad (\Psi_m)_{jr} = \frac{\partial p_j(r)}{\partial \bar{s}_m} \qquad (18)$$

So to compute the N-vector gradient of $G_{\text{soft}}$, which is defined as:

$$\frac{\partial G_{\text{soft}}}{\partial \bar{\mathbf{s}}} = G_{\max}^{-1}\left[\mathbf{g}^{T}\,\Psi_m\,\mathbf{d}\right]_{m=1}^{N}$$

the value of $\Psi_m$ is computed for each document m, as in the sketch below.
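The following sketch assembles this gradient by running the derivative recursion of equation (13) alongside equation (6) (same assumptions and helpers as above; a finite-difference check against soft_ndcg is a sensible validation):

```python
import numpy as np
from scipy.stats import norm

def soft_ndcg_grad(s_mean, sigma_s, labels):
    """Gradient of G_soft with respect to each score mean, via equations (6) and (13)."""
    N = len(s_mean)
    pi = pairwise_pi(s_mean, sigma_s)
    diff = s_mean[:, None] - s_mean[None, :]
    dens = norm.pdf(diff, loc=0.0, scale=np.sqrt(2.0) * sigma_s)  # N(s_i - s_j | 0, 2 sigma_s^2)
    discount = 1.0 / np.log(2.0 + np.arange(N))
    gain = 2.0 ** np.asarray(labels, dtype=float)
    g_max = np.sum(np.sort(gain)[::-1] * discount)

    grad = np.zeros(N)
    for m in range(N):
        for j in range(N):
            p = np.zeros(N)
            p[0] = 1.0                           # equation (5)
            dp = np.zeros(N)                     # equation (14): derivatives start at zero
            for i in range(N):
                if i == j:
                    continue
                dpi = dens[i, j] * ((i == m) - (j == m))        # equation (15)
                p_sh = np.concatenate(([0.0], p[:-1]))
                dp_sh = np.concatenate(([0.0], dp[:-1]))
                dp = (dp_sh * pi[i, j] + p_sh * dpi
                      + dp * (1.0 - pi[i, j]) - p * dpi)        # equation (13)
                p = p_sh * pi[i, j] + p * (1.0 - pi[i, j])      # equation (6)
            grad[m] += gain[j] * (discount @ dp)                # equation (12)
    return grad / g_max
```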
For a given query of N documents, calculation of the $\pi_{ij}$ is $O(N^2)$, calculation of all the $p_j(r)$ is $O(N^3)$, and calculation of the SoftNDCG is $O(N^2)$. Similar complexity arises for the gradient calculations. So the calculations are dominated by the recursions in equations (6) and (13).
A substantial computational saving can be made by approximating all but a few of the Rank-Binomial distributions. The motivation for this is that a true binomial distribution, with N samples and probability of success π, can be approximated by a normal distribution with mean Nπ and variance Nπ(1−π) when Nπ is large enough. For the rank binomial distribution, π is not constant, but simulations confirm that it can be approximated similarly, for a given j, by a normal distribution with mean equal to the expected rank $\sum_{i \neq j} \pi_{ij}$ and variance equal to $\sum_{i \neq j} \pi_{ij}(1 - \pi_{ij})$. As the approximation is an explicit function of the $\pi_{ij}$, the gradients of the approximated $p_j(r)$ with respect to $\pi_{ij}$ can be calculated, and therefore they can also be calculated with respect to the score means $\bar{s}_m$.
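A sketch of this approximation, discretizing the normal density over the integer ranks and renormalizing (the discretization scheme is an assumption):

```python
import numpy as np
from scipy.stats import norm

def rank_binomial_normal_approx(pi, j):
    """Approximate p_j(r) by a discretized normal with the moments given above."""
    N = pi.shape[0]
    others = [i for i in range(N) if i != j]
    mean = sum(pi[i, j] for i in others)                    # expected rank of doc j
    var = sum(pi[i, j] * (1.0 - pi[i, j]) for i in others)  # variance of the rank
    p = norm.pdf(np.arange(N), loc=mean, scale=np.sqrt(max(var, 1e-9)))
    return p / p.sum()                                      # renormalize over ranks 0..N-1
```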
As described above, the NDCG discount function has the effect of concentrating on high-ranking documents (e.g. approximately the top 10 documents). In some implementations, however, a different discount function may be used for training purposes in order to exploit more of the training data (e.g. to also consider lower ranked documents).
The description above used a neural net as the model 10 by way of example. In another example, a Gaussian Process (GP) regression model may be used. The following is a summary of GPs for regression; more detail can be found in 'Gaussian Processes for Machine Learning' by Rasmussen and Williams (MIT Press, 2006). A Gaussian process defines a prior distribution over functions $f(\mathbf{x})$, such that any finite subset of function values $\mathbf{f} = \{f_n\}_{n=1}^{N}$ is multivariate Gaussian distributed given the corresponding feature vectors $X = \{\mathbf{x}_n\}_{n=1}^{N}$:
$$p(\mathbf{f} \mid X) = \mathcal{N}(\mathbf{f} \mid \mathbf{0}, K(X,X)) \qquad (19)$$
The covariance matrix K(X,X) is constructed by evaluating a covariance or kernel function between all pairs of feature vectors: $K(X,X)_{ij} = K(\mathbf{x}_i, \mathbf{x}_j)$.
The covariance function $K(\mathbf{x}, \mathbf{x}')$ expresses some general properties of the functions $f(\mathbf{x})$, such as their smoothness, scale, etc. It is usually a function of a number of hyperparameters θ which control aspects of these properties. A standard choice is the ARD+Linear kernel, reconstructed here in a form consistent with the hyperparameter set given below:

$$K(\mathbf{x}, \mathbf{x}') = c \exp\left(-\frac{1}{2} \sum_{d=1}^{D} \frac{(x_d - x'_d)^2}{\lambda_d^2}\right) + w_0 + \sum_{d=1}^{D} w_d\, x_d\, x'_d \qquad (20)$$

where $\theta = \{c, \lambda_1, \ldots, \lambda_D, w_0, \ldots, w_D\}$. This kernel allows for smoothly varying functions with linear trends. There is an individual lengthscale hyperparameter $\lambda_d$ for each input dimension, allowing each feature to have a differing effect on the regression.
In standard GP regression the actual observations $\mathbf{y} = \{y_n\}_{n=1}^{N}$ are assumed to lie with Gaussian noise around the underlying function:

$$p(y_n \mid f_n) = \mathcal{N}(y_n \mid f_n, \sigma^2)$$
Integrating out the latent function values we obtain the marginal likelihood:
$$p(\mathbf{y} \mid X, \theta, \sigma^2) = \mathcal{N}\left(\mathbf{y} \mid \mathbf{0}, K(X,X) + \sigma^2 I\right) \qquad (21)$$
which is typically used to train the GP by finding a (local) maximum with respect to the hyperparameters θ and noise variance σ2.
Prediction is made by considering a new input point $\mathbf{x}$ and conditioning on the observed data and hyperparameters. The distribution of the output value at the new point is then:

$$p(f_* \mid \mathbf{x}, X, \mathbf{y}) = \mathcal{N}\left(f_* \mid \mu(\mathbf{x}), \sigma^2(\mathbf{x})\right)$$
$$\mu(\mathbf{x}) = K(\mathbf{x}, X)\left[K(X,X) + \sigma^2 I\right]^{-1}\mathbf{y}$$
$$\sigma^2(\mathbf{x}) = K(\mathbf{x}, \mathbf{x}) - K(\mathbf{x}, X)\left[K(X,X) + \sigma^2 I\right]^{-1} K(X, \mathbf{x}) \qquad (22)$$
where K(x,X) is the kernel evaluated between the new input point and the N training inputs. The GP is a nonparametric model, because the training data are explicitly required at test time in order to construct the predictive distribution.
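A minimal GP regression sketch of equations (19) to (22), using the ARD+Linear kernel in the form reconstructed at equation (20) (all data and hyperparameter values below are made up for illustration):

```python
import numpy as np

def ard_linear_kernel(X1, X2, c, lam, w0, w):
    """ARD+Linear kernel of equation (20); lam and w hold one entry per feature."""
    sq = (((X1[:, None, :] - X2[None, :, :]) ** 2) / lam**2).sum(-1)
    return c * np.exp(-0.5 * sq) + w0 + (X1 * w) @ X2.T

def gp_predict(x_new, X, y, kern, noise_var):
    """Predictive mean and variance of equation (22) at a new input point x_new."""
    K = kern(X, X) + noise_var * np.eye(len(X))
    k_star = kern(x_new[None, :], X)[0]                  # K(x, X)
    mean = k_star @ np.linalg.solve(K, y)
    var = kern(x_new[None, :], x_new[None, :])[0, 0] - k_star @ np.linalg.solve(K, k_star)
    return mean, var

# usage with hypothetical two-feature data:
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=20)
kern = lambda A, B: ard_linear_kernel(A, B, c=1.0, lam=np.ones(2), w0=0.1, w=np.full(2, 0.1))
mean, var = gp_predict(np.array([0.3, -0.2]), X, y, kern, noise_var=0.01)
```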
There are several ways in which Gaussian processes could be applied to ranking and the following example describes a combination of a GP model with the smoothed ranking training scheme (as described above).
The GP predictive mean and variance functions (as shown in equation (22)) are of exactly the right form to be used as the score means and uncertainties in equation (2) above:

$$p(s_j) = \mathcal{N}\left(s_j \mid \mu(\mathbf{x}_j), \sigma^2(\mathbf{x}_j)\right) \qquad (23)$$
The GP mean and variance functions, from equation (22), are regarded as parameterized functions to be optimized in the same way as the neural net in the methods described above.
Equation (22) shows that the regression outputs y can be made into virtual or prototype observations: they are free parameters to be optimized. This is because all training information enters through the NDCG objective, rather than directly as regression labels. In fact the corresponding set of input vectors X on which the GP predictions are based do not have to correspond to the actual training inputs, but can be a much smaller set of free inputs. To summarize, the mean and variance for the score of document j are given by:

$$\bar{s}_j = K(\mathbf{x}_j, X_u)\left[K(X_u, X_u) + \sigma^2 I\right]^{-1}\mathbf{y}_u$$
$$\sigma_j^2 = K(\mathbf{x}_j, \mathbf{x}_j) - K(\mathbf{x}_j, X_u)\left[K(X_u, X_u) + \sigma^2 I\right]^{-1} K(X_u, \mathbf{x}_j) \qquad (24)$$
where $(X_u, \mathbf{y}_u)$ is a small set of M prototype feature-vector/score pairs. These prototype points are free parameters that are optimized along with the hyperparameters θ using the SoftNDCG gradient training.
Using a small set of M prototypes gives a sparse model and reduces the training time from $O(N^3)$ to $O(NM^2 + NMD)$. If these prototypes are positioned well, then they can mimic the effect of using all the training data.
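Under this scheme each document's score mean and variance come from GP prediction against the M prototypes only. A sketch of equation (24), reusing the hypothetical kernel above (X_u and y_u are the free prototype inputs and outputs):

```python
import numpy as np

def prototype_scores(X_docs, X_u, y_u, kern, noise_var):
    """Score means and variances for N documents from M prototype pairs (X_u, y_u)."""
    K_uu = kern(X_u, X_u) + noise_var * np.eye(len(X_u))   # M x M
    K_du = kern(X_docs, X_u)                               # N x M cross-covariance
    means = K_du @ np.linalg.solve(K_uu, y_u)              # equation (24), means
    A = np.linalg.solve(K_uu, K_du.T)                      # M x N
    k_dd = np.array([kern(x[None, :], x[None, :])[0, 0] for x in X_docs])
    variances = k_dd - np.einsum('nm,mn->n', K_du, A)      # equation (24), variances
    return means, variances
```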
To implement the SoftNDCG optimization, the $\pi_{ij}$ from equation (3) are now a function of both the score means and the score variances:

$$\pi_{ij} = \int_0^\infty \mathcal{N}\left(s \mid \bar{s}_i - \bar{s}_j,\; \sigma_i^2 + \sigma_j^2\right) ds \qquad (25)$$
Derivatives of $\pi_{ij}$ may then be taken with respect to the hyperparameters θ and the prototype pairs $(X_u, \mathbf{y}_u)$, so that the SoftNDCG gradient training described above can be applied.
Computing-based device 1200 comprises one or more processors 1201 which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to generate score distributions and/or generate smoothed metrics, as described above. Platform software comprising an operating system 1202 or any other suitable platform software may be provided at the computing-based device to enable application software 1203-1205 to be executed on the device. The application software may comprise a model 1204 and a training module 1205.
The computer executable instructions may be provided using any computer-readable media, such as memory 1206. The memory is of any suitable type such as random access memory (RAM), a disk storage device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM may also be used.
The computing-based device 1200 may further comprise one or more inputs which are of any suitable type for receiving media content, Internet Protocol (IP) input, etc, a communication interface and one or more outputs, such as an audio and/or video output to a display system integral with or in communication with the computing-based device. The display system may provide a graphical user interface, or other user interface of any suitable type.
Although the present examples are described and illustrated herein as being implemented in a system as shown in FIG. 12, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of computing systems.
The term ‘computer’ is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
The methods described herein may be performed by software in machine readable form on a tangible storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.