The invention is a matching pursuit method for sparse Gaussian process (GP) regression. Specifically, the invention provides a new basis selection system and method for building sparse GP regression models that provides gains in both accuracy and efficiency over previous methods.
Many problems in information processing involve the selection or ranking of items in a large data set. For example, a search engine that locates documents meeting a search query must often select items from a large result set of documents to display to the user, and often must rank those documents based on their relevance or other criteria. Similar exercises of selecting a set of items from a larger set of possible items are undertaken in other fields such as weather pattern prediction, analyzing commercial markets, and the like. Some complex mathematical models and processes have been developed to perform this analysis.
One such set of processes, Bayesian Gaussian processes, provides a probabilistic kernel approach to supervised learning tasks. The advantage of Gaussian process (GP) models over non-Bayesian kernel methods, such as support vector machines, comes from the explicit probabilistic formulation that yields predictive distributions for test instances and allows standard Bayesian techniques for model selection. The cost of training GP models is O(n^3), where n is the number of training instances, which results in a huge computational cost for large data sets. Furthermore, when predicting a test case, a GP model requires O(n) cost for computing the mean and O(n^2) cost for computing the variance. These heavy scaling properties obstruct the use of GPs in large scale problems.
Sparse GP models bring down the complexity of training as well as testing. The Nyström method has been applied to calculate a reduced-rank approximation of the original n×n kernel matrix. One on-line algorithm maintains a sparse representation of the GP models. Another algorithm uses a forward selection scheme to approximate the log posterior probability. Another fast and greedy selection method builds sparse GP regression models. All of these attempt to select an informative subset of the training instances for the predictive model. This subset is usually referred to as the set of basis vectors, denoted as I. The maximal size of I is usually limited by a value dmax. Since dmax << n, the sparseness greatly alleviates the computational burden in both training and prediction of the GP models. The performance of the resulting sparse GP models depends on the criterion used in the basis vector selection.
It would be desirable to provide a system and method for greedy forward selection for sparse GP models. Accordingly, there is a need for a system and method that yields better generalization performance while essentially not affecting algorithm complexity. The preferred embodiments of the system and method described herein clearly address this and other needs.
In a preferred embodiment, a system and method is provided for supervised learning. A training set is provided to the system. The system selects a training element from the provided training set, and adds the training element to a set of basis elements I (basis element set). The system conducts an optimization test on the basis element set I with the selected training element to produce a selection score. The system determines whether the selection score indicates an improvement in optimization for the basis element set I. The system discards the selected element if the selection score does not indicate an improvement, and keeps the selected element if the selection score does indicate improvement. The process may then be repeated for other training elements until either the specified maximum number of basis functions is reached or improvement in optimization is below a threshold. At that point, the chosen set I should represent an optimized basis set.
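By way of illustration only, the keep-or-discard loop just described can be sketched in Python as follows; the names score_basis_set, d_max, and the random shuffling of candidates are placeholders introduced here, and the scoring routine is assumed to be the sparse GP criterion developed later in this description.

```python
import random

def greedy_forward_selection(training_set, score_basis_set, d_max):
    """Greedy forward selection of a basis element set I.

    score_basis_set(I) is assumed to return a selection score in which
    larger values indicate a better-optimized basis set.
    """
    I = []                       # current basis element set
    best_score = float("-inf")
    candidates = list(training_set)
    random.shuffle(candidates)   # one embodiment selects elements at random

    for element in candidates:
        if len(I) >= d_max:
            break                # maximum number of basis functions reached
        I.append(element)
        score = score_basis_set(I)
        if score > best_score:
            best_score = score   # keep the element: optimization improved
        else:
            I.pop()              # discard the element: no improvement
    return I
```

In this sketch the loop runs until d_max elements are kept or the candidates are exhausted; the embodiment that stops once the improvement falls below a threshold would add an early exit based on the computed improvement.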
In one preferred embodiment, the selected element is selected at random. In another preferred embodiment, the system adds, in succession, a plurality of selected elements. In this embodiment, the scoring is performed on the basis element set I after every t-th addition of a selected element to the basis element set I, with t comprising a positive integer. This selection score is then used to decide whether to keep or discard the added selected elements.
In another preferred embodiment, the process of scoring the basis element comprises a sparse Gaussian process.
In another preferred embodiment, the process of determining the selection score comprises using post-backfitting.
In another preferred embodiment, the training set is for categorizing indexed web pages for a search engine.
A preferred embodiment of a matching pursuit approach to a sparse Gaussian process regression system, constructed in accordance with the claimed invention, is directed towards a criterion of greedy forward selection for sparse GP models. The criterion used is more effective than previous methods, while not affecting algorithm complexity. While the method is described as pertaining to regression, the method is also applicable to other supervised learning tasks.
In one embodiment, as an example, and not by way of limitation, an improvement in Internet search engine categorization and scoring of web pages is provided. The World Wide Web is a distributed database comprising billions of data records accessible through the Internet. Search engines are commonly used to search the information available on computer networks, such as the World Wide Web, to enable users to locate data records of interest. A search engine system 100 is shown in the accompanying drawings.
To use search engine 100, a user 112 typically enters one or more search terms or keywords, which are sent to a dispatcher 110. Dispatcher 110 compiles a list of search nodes in cluster 106 to execute the query and forwards the query to those selected search nodes. The search nodes in search node cluster 106 search respective parts of the primary index produced by indexer 104 and return sorted search results, along with a document identifier and a score, to dispatcher 110. Dispatcher 110 merges the received results to produce a final result set displayed to user 112, sorted by relevance scores. The relevance score is a function of the query itself and the type of document produced. Factors that affect the relevance score may include: a static relevance score for the document, such as link cardinality and page quality; placement of the search terms in the document, such as titles, metadata, and document web address; document rank, such as the number of external data records referring to the document and the "level" of those data records; and document statistics, such as query term frequency in the document, global term frequency, and term distances within the document. For example, Term Frequency Inverse Document Frequency (TFIDF) is a statistical technique suitable for evaluating how important a word is to a document: the importance increases proportionally to the number of times the word appears in the document but is offset by how common the word is in all of the documents in the collection.
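As a concrete illustration of the TFIDF statistic mentioned above, the following sketch computes one term's weight for one document from raw counts. The function and variable names are illustrative, and the particular smoothing inside the logarithm is an assumption, since the description does not fix a specific TFIDF variant.

```python
import math

def tfidf(term_count_in_doc, doc_length, num_docs, num_docs_containing_term):
    """TFIDF: frequency of the term in the document, offset by how common
    the term is across the whole collection."""
    tf = term_count_in_doc / max(doc_length, 1)
    idf = math.log(num_docs / (1 + num_docs_containing_term))  # assumed smoothing
    return tf * idf

# Example: a term appearing 5 times in a 200-word page,
# present in 100 of 1,000,000 documents in the collection.
score = tfidf(5, 200, 1_000_000, 100)
```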
Usually the number of web pages in the result set is very large, sometimes even as large as a million. It is important to ensure that the documents displayed to the user are ordered according to relevance, with the most relevant displayed at the top. In one embodiment, for a given query-webpage pair, the Gaussian process (GP) nonlinear regression method described herein is used as a relevance scoring function.
According to one embodiment, the input vector (x) associated with a query-webpage pair is set to be selected features that help in prediction. By way of example, and not by way of limitation, those features include: query term occurrence frequency in the web page, web page length, eigenrank of the web page, spam index of the web page, first occurrence location of the query terms, family friendly rating of the web page, proximity of occurrences of the query terms in the web page, and the like. The relevance score is the output variable, y, that is to be predicted.
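A hypothetical sketch of assembling one training pair (x, y) from such features is given below; the dictionary keys mirror the example features listed above and are not taken from the original disclosure.

```python
import numpy as np

def make_training_pair(features, relevance_score):
    """Assemble (x, y) for one query-webpage pair."""
    x = np.array([
        features["query_term_frequency"],  # query term occurrence frequency in the page
        features["page_length"],           # web page length
        features["eigenrank"],             # eigenrank of the web page
        features["spam_index"],            # spam index of the web page
        features["first_occurrence"],      # first occurrence location of the query terms
        features["family_friendly"],       # family friendly rating of the web page
        features["term_proximity"],        # proximity of query term occurrences
    ], dtype=float)
    y = float(relevance_score)             # output variable to be predicted
    return x, y
```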
In one embodiment, the method is implemented in the indexer/categoriser 104 as a supervised learning task. One of the advantages of the system 100 is that it results in faster categorization by using smaller and more accurate basis sets. In one embodiment, the method is implemented in software on a server or personal computer. However, those skilled in the art would recognize that the method can be implemented on any combination of hardware or software platforms that can be programmed to implement the methods described herein. Further, the method is not limited to use in search engine technology, but is generally useful in any kind of training or regression problem.
In regression problems, a training data set composed of n samples is provided. Each sample is a pair having an input vector x_i ∈ ℝ^m and its corresponding target y_i ∈ ℝ, where ℝ is the set of reals and ℝ^m is the m-dimensional real vector space. The true function value at x_i is represented as an unobservable latent variable f(x_i), and the target y_i is a noisy measurement of f(x_i). The goal is to construct a predictive model that estimates the relationship x → f(x).
In standard GPs for regression, the latent variables {f(xi)} are random variables in a zero mean Gaussian process indexed by {xi}. The prior distribution of {f(xi)} is a multivariate joint Gaussian, denoted as
P(f)=N(f;0,K)
where f=[f(x1), . . . , f(xn)]T and K is the n×n covariance matrix whose ij-th element is K(xi, xj), K being the kernel function. P denotes the probability function. The likelihood is essentially a model of the measurement noise, which is conventionally evaluated as a product of independent Gaussian noises,
P(y|f) = N(y; f, σ^2 I)
where y = [y_1, ..., y_n]^T and σ^2 is the noise variance. N(f; μ, σ^2) denotes the normal density function with mean μ and variance σ^2. The posterior distribution P(f|y) ∝ P(y|f) P(f) is also exactly a Gaussian:
P(f|y) = N(f; Kα*, σ^2 K (K + σ^2 I)^{-1})    (1)

where α* = (K + σ^2 I)^{-1} y. For any test instance x, the predictive model is represented by the predictive distribution

P(f(x)|y) = ∫ P(f(x)|f) P(f|y) df = N(f(x); μ_x, σ_x^2)
where
μ_x = k^T (K + σ^2 I)^{-1} y = k^T α*    (2)

σ_x^2 = K(x, x) − k^T (K + σ^2 I)^{-1} k    (3)

and k = [K(x_1, x), ..., K(x_n, x)]^T. The computational cost of training is O(n^3), which mainly comes from the need to invert the matrix (K + σ^2 I) and obtain the vector α*. For predicting a test instance, the cost is O(n) to compute the mean (2) and O(n^2) to compute the variance (3). This heavy scaling with respect to n makes the use of standard GPs computationally prohibitive on large datasets.
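The standard GP equations (1) through (3) can be transcribed directly into NumPy, as sketched below for illustration; the squared-exponential kernel, its length scale, and the noise level are stand-in choices, and the code makes no attempt to avoid the O(n^3) matrix inversion discussed above.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential covariance K(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def full_gp_predict(X, y, x_test, sigma2=0.1):
    """Predictive mean (eq. 2) and variance (eq. 3) of the standard GP."""
    n = len(X)
    K = rbf_kernel(X, X)                                      # n x n covariance matrix
    alpha_star = np.linalg.solve(K + sigma2 * np.eye(n), y)   # (K + sigma^2 I)^{-1} y
    k = rbf_kernel(X, x_test[None, :])[:, 0]                  # [K(x_1, x), ..., K(x_n, x)]^T
    mean = k @ alpha_star                                     # eq. (2)
    var = (rbf_kernel(x_test[None, :], x_test[None, :])[0, 0]
           - k @ np.linalg.solve(K + sigma2 * np.eye(n), k))  # eq. (3)
    return mean, var
```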
Instead of assuming n latent variables for all the training instances, sparse GP models assume only d latent variables placed at some chosen basis vectors {x̃_i}, denoted as a column vector f_I = [f(x̃_1), ..., f(x̃_d)]^T. The prior distribution of the sparse GP is a joint Gaussian over f_I only, i.e.,

P(f_I) = N(f_I; 0, K_I)    (4)

where K_I is the d×d covariance matrix of the basis vectors whose ij-th element is K(x̃_i, x̃_j).

These latent variables are then projected to all the training instances. The conditional mean at the training instances is K_{I,·}^T K_I^{-1} f_I, where K_{I,·} is the d×n matrix of the covariance functions between the basis vectors and all the training instances. The likelihood can be evaluated by these projected latent variables as follows

P(y|f_I) = N(y; K_{I,·}^T K_I^{-1} f_I, σ^2 I)    (5)
The posterior distribution can then be written as
P(f_I|y) = N(f_I; K_I α_I*, σ^2 K_I (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_I)    (6)

where α_I* = (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_{I,·} y.
The predictive distribution at any test instance x is given by
P(f(x)|y) = ∫ P(f(x)|f_I) P(f_I|y) df_I = N(f(x); μ̃_x, σ̃_x^2)
where
μ̃_x = k̃^T α_I*    (7)

σ̃_x^2 = K(x, x) − k̃^T K_I^{-1} k̃ + σ^2 k̃^T (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} k̃    (8)

and k̃ is a column vector of the covariance functions between the basis vectors and the test instance x, i.e. k̃ = [K(x̃_1, x), ..., K(x̃_d, x)]^T.
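Equations (7) and (8), together with α_I* as defined above, translate into the following sketch; the kernel function is passed in (for example, the rbf_kernel from the earlier sketch), and basis_idx is an assumed representation of I as indices into the training set.

```python
import numpy as np

def sparse_gp_predict(X, y, basis_idx, x_test, kernel, sigma2=0.1):
    """Sparse GP predictive mean (eq. 7) and variance (eq. 8) for one test point."""
    Xb = X[basis_idx]                         # the d chosen basis vectors
    K_I = kernel(Xb, Xb)                      # d x d covariance of the basis vectors
    K_In = kernel(Xb, X)                      # K_{I,.}: d x n covariances to training data
    A = sigma2 * K_I + K_In @ K_In.T          # sigma^2 K_I + K_{I,.} K_{I,.}^T
    alpha_I = np.linalg.solve(A, K_In @ y)    # alpha_I*, as defined above
    k_t = kernel(Xb, x_test[None, :])[:, 0]   # covariances between basis vectors and x
    mean = k_t @ alpha_I                      # eq. (7)
    var = (kernel(x_test[None, :], x_test[None, :])[0, 0]
           - k_t @ np.linalg.solve(K_I, k_t)
           + sigma2 * k_t @ np.linalg.solve(A, k_t))   # eq. (8)
    return mean, var
```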
In large scale regression methods, the maximal size of the basis vectors is limited by a relatively small value dmax that is much less than the number of the training instances n. A value for dmax can be decided based on the CPU time that is available during training and/or testing.
While the cost of training the full GP model is O(n^3), the training complexity of sparse GP models is only O(n dmax^2). This corresponds to the cost of forming K_I^{-1}, (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} and α_I*. Thus, if dmax is not big, learning on large datasets can be accomplished via sparse GP models. Also, for these sparse models, prediction for each test instance costs O(dmax) for the mean (7) and O(dmax^2) for the variance (8).
The mean of the posterior distribution is the maximum a posteriori (MAP) estimate, and it is possible to give an equivalent parametric representation of the latent variables as f = Kα, where α = [α_1, ..., α_n]^T. The MAP estimate of the full GP is equivalent to minimizing the negative logarithm of the posterior (1), which, up to an additive constant, is

π(α) = (1/(2σ^2)) ||y − Kα||^2 + (1/2) α^T K α    (9)
Similarly, using f_I = K_I α_I for sparse GP models, the MAP estimate of the sparse GP is equivalent to minimizing the negative logarithm of the posterior (6), which, up to an additive constant, is

π̃(α_I) = (1/(2σ^2)) ||y − K_{I,·}^T α_I||^2 + (1/2) α_I^T K_I α_I    (10)
Suppose α in equation 9 is composed of two parts, i.e. α = [α_I; α_R], where I denotes the set of basis vectors and R denotes the remaining instances. The optimization problem of equation 10 is the same as minimizing π(α) in equation 9 using α_I only, namely, with the constraint α_R = 0. In other words, the basis vectors of the sparse GPs can be selected to minimize the negative log-posterior of the full GP, π(α), defined as in equation 9.
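This equivalence can be checked numerically: with π(α) as in equation 9, the gradient with respect to α_I should vanish at the α_I* of equation 13 when α_R is held at zero. The sketch below is purely illustrative, uses randomly generated data, and inlines the same squared-exponential kernel helper as the earlier sketch.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):   # same helper as in the earlier sketch
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

rng = np.random.default_rng(0)
n, m, sigma2 = 50, 3, 0.1
X = rng.normal(size=(n, m))
y = rng.normal(size=n)
I = [0, 7, 19, 33]                           # an arbitrary basis set for the check

K = rbf_kernel(X, X)
K_In = K[I, :]                               # K_{I,.}
K_I = K[np.ix_(I, I)]

# Sparse solution alpha_I*, eq. (13)
alpha_I = np.linalg.solve(sigma2 * K_I + K_In @ K_In.T, K_In @ y)

# Gradient of pi(alpha) in eq. (9), restricted to the coordinates in I, at alpha_R = 0
f = K_In.T @ alpha_I                          # K alpha with alpha_R = 0
grad_I = -(1.0 / sigma2) * K_In @ (y - f) + K_I @ alpha_I
assert np.allclose(grad_I, 0.0, atol=1e-6)    # alpha_I* minimizes pi over alpha_I
```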
In the sparse GP approach described above, the choice of I, the set of basis vectors, can make a difference. Generally, the basis vectors can be placed anywhere in the input space ℝ^m. Since training instances usually cover the input space of interest adequately, in some methods basis vectors are selected from just the set of training instances. For a given problem, dmax is chosen to be as large as possible subject to constraints on computational time in training and/or testing. A basis selection method is then used to find I of size dmax.
One cheap method (in terms of processing time requirements) is to select the basis vectors at random from the training data set. However, such a choice may not work well when dmax is much smaller than n. In one embodiment, an I is selected that makes the corresponding sparse GP approximate well the posterior distribution of the full GP. In one embodiment, the optimization formulation described above can be used. In one embodiment, it is preferable to choose, among all subsets I of size dmax, the one that gives the best value of π̃ in (10). Such a combinatorial search is expensive. In another embodiment, a cheaper approach is the greedy forward selection used in prior methods.
With regard to time complexities associated with forward selection, there are two costs involved. There is a basic cost associated with updating the sparse GP solution, given a sequence of chosen basis functions, referred to as Tbasic. This cost is the same for all forward selection methods, and is O(n dmax^2). Then, depending on the basis selection method, there is the cost associated with basis selection. The accumulated value of this cost for choosing all dmax basis functions is referred to as Tselection. Forward basis selection methods differ in the way they choose effective basis functions while keeping Tselection small. It is noted that the total cost associated with the random basis selection method mentioned earlier is Trandom = Tbasic = O(n dmax^2). This cost forms a baseline for comparison.
One example is a typical situation in forward selection having a current working set I, where the next basis vector, x_i, is to be selected. One method evaluates each given x_i ∉ I by trying its complete inclusion, i.e., setting I′ = I ∪ {x_i} and optimizing π(α) using α_{I′} = [α_I; α_i]. Thus, the selection criterion for the instance x_i ∉ I is the decrease in π(α) that can be obtained by allowing both α_I and α_i to vary and be non-zero. The minimal value of π(α) can be obtained by solving the minimization of π over α_{I′}.
In one relatively cheap heuristic criterion for basis selection, the “informativeness” of an input vector xi∉I is scored by the information gain
where I′ denotes the new set of basis vectors after including a new element x_i into the current set I, and f_{I′} denotes the vector of latent variables of I′. P(f_{I′}|y) is the true posterior distribution of f_{I′} defined as in equation 6. Q(f_{I′}|y) ∝ Q(y|f_{I′}) P(f_{I′}) is a posterior approximation, in which the "pseudoinclusion" likelihood is defined as

Q(y|f_{I′}) ∝ N(y_{\i}; K_{I,\i}^T K_I^{-1} f_I, σ^2 I_{n−1}) N(y_i; f(x_i), σ^2)    (12)

where y_{\i} denotes the target vector after removing y_i, K_{I,\i} denotes the d×(n−1) covariance matrix after removing, from K_{I,·}, the column corresponding to x_i, and f(x_i) denotes the latent variable at x_i. In the proper inclusion as in equation 5, the latent variable f(x_i) would be coupled with the targets y_{\i} in the likelihood, whereas the approximation (equation 12) ignores these dependencies. Due to this simplification, the score in equation 11 can be computed in O(1) time, given the current predictive model represented by I. Thus, the scores of all instances outside I can be efficiently evaluated in O(n) time, which makes this algorithm almost as fast as random selection. In one embodiment, the correlation among the remaining instances {x_i : x_i ∉ I} is not used, as the criterion (equation 11) is evaluated upon the current model.
The two methods presented above are extremes in efficiency. In one method, Tselection is disproportionately larger than Tbasic while, in the other method, Tselection is very much smaller than Tbasic. Now described is a moderate method that is effective and whose complexity is in between the two above described methods. This method uses a kernel matching pursuit approach.
Kernel matching pursuit is a sparse method for ordinary least squares that includes two general greedy sparse approximation schemes, called “pre-backfitting” and “post-backfitting.” The presently described method illustrates that both methods can be generalized to select the basis vectors for sparse GPs. This method is an efficient selection criterion that is based on post-backfitting methodology.
At each stage of the forward selection, the sparse GP solution for the current working set I is given by
α_I* = (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_{I,·} y    (13)
The scoring criterion for an instance x_i ∉ I is based on optimizing π(α) by fixing α_I = α_I* and changing α_i only. In step 402, the one-dimensional minimizer can be found as

α_i = (K_{i,·}^T (y − K_{I,·}^T α_I*) − σ^2 k̃_i^T α_I*) / (K_{i,·}^T K_{i,·} + σ^2 K(x_i, x_i))    (14)
where K_{i,·} is the n×1 matrix of covariance functions between x_i and all the training data, and k̃_i is a d-dimensional vector having K(x̃_j, x_i) for each basis vector x̃_j in I. In step 404, the selection score of the instance x_i is the decrease in π(α) achieved by the one-dimensional optimization of α_i, which can be written in closed form as

Δ_i = (K_{i,·}^T (y − K_{I,·}^T α_I*) − σ^2 k̃_i^T α_I*)^2 / (2 σ^2 (K_{i,·}^T K_{i,·} + σ^2 K(x_i, x_i)))    (15)
Note that a full kernel column K_{i,·} is used, so it costs O(n) time to compute (15) for one instance. In contrast, previous methods require, for example, O(nd) time or O(1) time for scoring one instance.
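Under the forms of equations (14) and (15) given above, the score for one candidate can be computed as in the following sketch; the function takes precomputed kernel quantities as arguments so that the O(n) cost per candidate is explicit, and the argument names are illustrative.

```python
import numpy as np

def post_backfit_score(K_i, k_tilde_i, K_ii, y, mu, alpha_I, sigma2):
    """Decrease in pi(alpha) from optimizing alpha_i alone (eqs. 14 and 15).

    K_i       : length-n vector of covariances between x_i and all training data (K_{i,.})
    k_tilde_i : length-d vector of covariances between x_i and the basis vectors
    K_ii      : scalar K(x_i, x_i)
    mu        : current model predictions K_{I,.}^T alpha_I* at the training data
    """
    residual = y - mu
    numer = K_i @ residual - sigma2 * (k_tilde_i @ alpha_I)
    denom = K_i @ K_i + sigma2 * K_ii
    alpha_i = numer / denom                        # eq. (14): one-dimensional minimizer
    delta_i = numer ** 2 / (2.0 * sigma2 * denom)  # eq. (15): decrease in pi(alpha)
    return delta_i, alpha_i
```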
In one embodiment, processing is performed on all x_i ∉ I, step 406, and the instance which gives the largest decrease is selected, step 408. This uses O(n^2) effort per basis selection step. Summing the cost until dmax basis vectors are selected gives an overall complexity of O(n^2 dmax), which is much higher than Tbasic.
In one embodiment, to restrict the overall complexity of Tselection, the method randomly selects candidate basis vectors, which provides a relatively good selection rather than the absolute best selection. Since it costs only O(n) time to evaluate the selection criterion in equation 15 for one instance, the method can choose the next basis vector from a set of κ instances randomly selected from outside of I. Such a selection method keeps the overall complexity of Tselection to O(n dmax^2). From a practical point of view the scheme can still be expensive because the selection criterion (equation 15) preferably computes a full kernel row K_{i,·} for each instance to be evaluated.
As kernel evaluations could be very expensive, in one embodiment, a modified method is used to keep the number of such evaluations small.
Each step corresponding to a new basis vector selection proceeds as follows, using a cache matrix C that holds the full kernel rows of c candidate instances. First, Δ_i is computed for the c instances corresponding to the rows of C, step 502, and the instance with the highest score is selected for inclusion in I, step 504. Let x_j denote the chosen basis vector. Then the remaining instances that define C are sorted according to their Δ_i values. Finally, the system selects κ fresh instances (from outside of I and the vectors that define C) to replace x_j and the κ−1 cached instances with the lowest scores. Thus, in each basis selection step, the system computes the criterion scores for c instances, but evaluates full kernel rows only for the κ fresh instances, step 508.
One advantage of the above scheme is that those basis elements which have very good scores, but are overtaken by another better element in a particular step, continue to remain in C and are likely to get selected in future basis selection steps. In one embodiment, the method uses κ = 59. The value of c can be set to be any integer between κ and dmax. For any c in this range, the complexity of Tselection remains at most O(n dmax^2).
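A sketch of one cache-based selection step is given below. It represents the cache C as a list of candidate indices whose kernel rows are assumed to be held elsewhere, and the bookkeeping details (how fresh candidates are sampled, how ties are broken) are illustrative choices rather than requirements of the described method.

```python
import random

def select_next_basis(cache, score, all_indices, I, kappa):
    """One cache-based basis selection step.

    cache : list of c candidate indices whose full kernel rows are already available
    score : callable returning the selection score (eq. 15) for an index
    """
    scores = {i: score(i) for i in cache}        # score the c cached candidates
    chosen = max(scores, key=scores.get)         # best-scoring instance joins I
    I.append(chosen)

    # Keep the best-scoring survivors; drop the chosen instance and the
    # kappa - 1 cached instances with the lowest scores.
    survivors = sorted((i for i in cache if i != chosen),
                       key=lambda i: scores[i], reverse=True)[:len(cache) - kappa]

    # Replace them with kappa fresh instances from outside I and the cache
    # (their full kernel rows would be computed at this point).
    outside = [i for i in all_indices if i not in I and i not in cache]
    fresh = random.sample(outside, min(kappa, len(outside)))
    cache[:] = survivors + fresh
    return chosen
```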
The above cache scheme is specific to the present criterion, in the sense that it cannot be used with previously known selection methods without unduly increasing their complexity. In one embodiment, an extra cache is used for storing kernel rows of instances which get discarded in one step, but which are to be considered again in a future step.
With respect to model adaptation, once dmax basis vectors are selected from the training instances according to a criterion, the marginal likelihood of the sparse GP model is computed for model selection. The sparse GP model is conditional on the parameters in the kernel function and the Gaussian noise level σ^2, which can be collected together in θ, the hyperparameter vector. The likelihood approximation with projected latent variables (equation 5), together with the prior (equation 4), leads to the following marginal likelihood

P(y|θ) = ∫ P(y|f_I) P(f_I) df_I = N(y; 0, K_{I,·}^T K_I^{-1} K_{I,·} + σ^2 I)    (16)
which is known as the evidence, a yardstick for model selection. The optimal values of θ can be inferred by maximizing the evidence. Equivalently, φ(θ)=−log P(y|θ) can be minimized. This can be performed by using gradient-based optimization techniques.
One of the problems in using gradient-based methods is the dependence of I on θ, which makes φ a non-differentiable function. In one embodiment, this is handled by alternating between basis vector selection with the hyperparameters held fixed, step 600, and gradient-based adaptation of θ with the basis vector set I held fixed, step 602.
Because of the alternation between steps 600 and 602, in some instances the gradients and I will not stabilize cleanly. Hence, smooth convergence cannot be expected in the model adaptation process. In one embodiment, to resolve this issue, a convergence criterion based on the relative improvement of the evidence over two iteration periods is used. The relative improvement at iteration t is defined as
where τ_1 and τ_2 are lag parameters satisfying 0 < τ_1 ≤ τ_2, φ(t) is the minimal value of φ after the t-th major iteration, and φ(τ, t) = min{φ(t−τ+1), ..., φ(t)}, step 608. In one embodiment, it is preferable to use τ_1 = τ_2 = 10, and to stop the adaptation procedure when Λ(t) ≤ 0.001, step 610. In one embodiment, a minimum of τ_1 + τ_2 iterations are first performed before the convergence criterion is used.
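Because the expression for Λ(t) is not reproduced above, the sketch below uses one plausible instantiation of "relative improvement over two iteration periods", explicitly marked as an assumption: Λ(t) = [φ(τ_1, t−τ_2) − φ(τ_2, t)] / |φ(τ_1, t−τ_2)|, combined with the stated defaults τ_1 = τ_2 = 10 and the 0.001 threshold.

```python
def lagged_min(phi_history, tau, t):
    """phi(tau, t) = min{phi(t - tau + 1), ..., phi(t)}; iterations are 1-indexed,
    phi_history[k - 1] stores phi(k)."""
    return min(phi_history[t - tau:t])

def should_stop(phi_history, tau1=10, tau2=10, tol=1e-3):
    """Convergence test for the alternating adaptation loop.

    The expression for Lambda(t) below is an ASSUMED instantiation of the
    relative improvement over two iteration periods; the exact formula is
    not reproduced in the text above.
    """
    t = len(phi_history)
    if t < tau1 + tau2:
        return False                      # perform at least tau1 + tau2 iterations first
    previous_best = lagged_min(phi_history, tau1, t - tau2)
    recent_best = lagged_min(phi_history, tau2, t)
    lam = (previous_best - recent_best) / abs(previous_best)
    return lam <= tol
```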
In one embodiment, in forward selection of basis functions, the following quantities are incrementally updated: the Cholesky factorization L̃ L̃^T of A = σ^2 K_I + K_{I,·} K_{I,·}^T, the matrix K_{I,·}, β = L̃^{-1} K_{I,·} y, α_I* = L̃^{-T} β, and μ = K_{I,·}^T α_I*.
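For reference, the listed quantities can be computed non-incrementally with standard triangular solves, as sketched below (assuming SciPy is available); an actual implementation would update L̃ by one row and column as each basis vector is added.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def sparse_gp_quantities(K_I, K_In, y, sigma2):
    """Compute L-tilde, beta, alpha_I* and mu for a fixed basis set I."""
    A = sigma2 * K_I + K_In @ K_In.T
    L_tilde = cholesky(A, lower=True)                          # A = L~ L~^T
    beta = solve_triangular(L_tilde, K_In @ y, lower=True)     # beta = L~^{-1} K_{I,.} y
    alpha_I = solve_triangular(L_tilde.T, beta, lower=False)   # alpha_I* = L~^{-T} beta
    mu = K_In.T @ alpha_I                                      # predictions at training data
    return L_tilde, beta, alpha_I, mu
```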
The negative logarithm of the evidence can be computed as
The derivative of φ with respect to log σ^2 is
In the above, tr(L^T A^{-1} L), where L is the Cholesky factor of K_I (i.e., K_I = L L^T), can be computed by first computing L̃^{-1} L and then computing the sum of squares of all elements of that matrix. The derivative with respect to a general hyperparameter v in the kernel function is:
where B = A^{-1} K_{I,·}. The i-th column p_i of B can be computed by solving two lower triangular systems: L̃ z = q_i, where q_i is the i-th column of K_{I,·}, and then L̃^T p_i = z. Computation of B is one of the most expensive computations of the algorithm; it costs O(n dmax^2). Note that B and the cache matrix C can occupy the same storage space. Also, note that the derivative of K_I with respect to v can be obtained from the derivative of K_{I,·} with respect to v, and that this derivative matrix does not require storage space, as the kernel values in K_{I,·} can be used to compute its elements directly.
Thus, a more efficient system and method for regression and other supervised learning tasks has been described. While the system and method is generally described herein in the context of text categorization, specifically with respect to training vectors for categorizing indexed web pages, those skilled in the art would recognize that the system and method can be applied to any problem or system requiring selection of training vectors or elements, such as in weather pattern prediction, commercial market analysis, and the like.
Although the invention has been described in language specific to computer structural features, methodological acts, and by computer readable media, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific structures, acts, or media described. Therefore, the specific structural features, acts and mediums are disclosed as exemplary embodiments implementing the claimed invention.
Furthermore, the various embodiments described above are provided by way of illustration only and should not be construed to limit the invention. Those skilled in the art will readily recognize various modifications and changes that may be made to the claimed invention without following the example embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the claimed invention, which is set forth in the following claims.
This application claims priority to U.S. Ser. No. 60/685,660, filed on May 26, 2005 entitled “A MATCHING PURSUIT APPROACH TO SPARSE GAUSSIAN PROCESS REGRESSION.”