The principles of the present invention will be discussed herein in the context of a transductive support vector machine (TSVM) algorithm solved in dual formulation, with an emphasis on two-class classification problems. One skilled in the art will recognize that the principles of the present invention may be applied to alternative problems, such as regression or multi-class classification, in a similar manner. The transduction methods described herein allow for large scale training with high dimensionality and a large number of examples.
Consider a set of L labeled training pairs Λ={(x1,y1), . . . ,(xL,yL)}, x∈ℝn, y∈{1,−1}, and an unlabeled set of U test vectors 𝒰={xL+1, . . . ,xL+U}. Here, y is the label for the labeled data. SVMs have a decision function fθ(·) of the form:
fθ(x)=ω·Φ(x)+b,
where θ=(ω,b) represents parameters of the hyperplane classifier that classifies the data, and Φ(x) is a feature map which maps real world data to a high dimensional feature space. SVMs and TSVMs can be used to classify any type of real world data into two classes. However, the real world data often cannot be classified with a linear hyperplane classifier, so the real world data is transformed to labeled training data and unlabeled test data in a high dimensional feature space. For example, in text classification, pieces of text are transformed into data points in a high dimensional feature space, so that a hyperplane classifier can be used to classify the data points corresponding to the text into two classes. Other examples of real world data that can be transformed into training data for classification include, but are not limited to, images (e.g., faces, objects, digits, etc.), sounds (speech, music signals, etc.), and biological components (proteins, types of cells, etc.). The transformation of data to the high dimensional feature space can be performed using a kernel function k(x1,x2)=Φ(x1)·Φ(x2), which implicitly defines the mapping into the higher dimensional feature space. The use of the kernel function to map data from a lower to a higher dimensionality is well known in the art, for example as described in V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
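By way of illustration only, the following Python sketch shows how a kernel function such as the widely used RBF kernel k(x1,x2)=exp(−γ‖x1−x2‖²) implicitly defines the mapping Φ without ever computing Φ itself; the function name and the value of γ are arbitrary choices made for this example and are not part of the method described herein.

import numpy as np

def rbf_kernel_matrix(X1, X2, gamma=0.5):
    # Pairwise squared Euclidean distances between the rows of X1 and X2.
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    # k(x1, x2) = exp(-gamma * ||x1 - x2||^2) = Phi(x1) . Phi(x2)
    return np.exp(-gamma * sq)

# Example: kernel matrix over labeled and unlabeled points stacked together.
X = np.vstack([np.random.randn(5, 3), np.random.randn(4, 3)])  # 5 labeled + 4 unlabeled
K = rbf_kernel_matrix(X, X)  # shape (9, 9), symmetric, positive semi-definite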
Given the training set Λ and the test set 𝒰, the TSVM optimization problem attempts to find, among the possible binary label vectors
{Y=(yL+1, . . . ,yL+U)},
the one for which an SVM trained on Λ∪(𝒰×Y) yields the largest margin. Accordingly, the TSVM optimization problem attempts to label each example of the test set 𝒰 with the label that maximizes the margin.
The TSVM optimization problem is a combinatorial problem, but it can be approximated as finding an SVM separating the training set under constraints which force the unlabeled examples to be as far as possible from the margin. This can be expressed as minimizing:

½‖ω‖² + C Σi=1, . . . ,L ζi + C* Σi=L+1, . . . ,L+U ζi
subject to:
yifθ(xi)≧1−ζi, i=1, . . . ,L
|fθ(xi)|≧1−ζi, i=L+1, . . . ,L+U
where ζi is a slack variable measuring how far a data point falls inside or beyond the margin, C is the cost parameter for the labeled data, and C* is the cost parameter for the unlabeled data. These cost terms assign a cost or a penalty to each data point based on its location relative to the margin. This minimization problem is equivalent to minimizing:

J(θ) = ½‖ω‖² + C Σi=1, . . . ,L H1(yifθ(xi)) + C* Σi=L+1, . . . ,L+U H1(|fθ(xi)|),  (1)
where the function H1(·)=max(0,1−·) is the classical Hinge Loss function for the labeled data. This classical Hinge Loss function is shown in graph 704.
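For illustration only, the loss terms of Equation (1) can be evaluated for a given classifier as in the following sketch; the linear decision function and the helper names used below are assumptions of the example, not part of the method itself.

import numpy as np

def hinge(z):
    # Classical Hinge Loss H1(z) = max(0, 1 - z).
    return np.maximum(0.0, 1.0 - z)

def tsvm_objective_eq1(w, b, X_lab, y_lab, X_unl, C=1.0, C_star=1.0):
    # Equation (1): 0.5*||w||^2 + C * sum of H1(y_i f(x_i)) over the labeled data
    # plus C* * sum of H1(|f(x_i)|) over the unlabeled data (linear f for illustration).
    f_lab = X_lab @ w + b
    f_unl = X_unl @ w + b
    return (0.5 * np.dot(w, w)
            + C * np.sum(hinge(y_lab * f_lab))
            + C_star * np.sum(hinge(np.abs(f_unl))))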
Graph 606 shows a Symmetric Ramp Loss function, which replaces the symmetric Hinge Loss on the unlabeled data and can be expressed as:
z↦Rs(z)+Rs(−z),  (2)
where s<1 is a hyper-parameter and Rs refers to a “Ramp Loss” function, which is a “cut” version of the Hinge Loss function.
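A minimal numerical sketch of the Ramp Loss and of the Symmetric Ramp Loss of Equation (2) is given below; the value s=−0.3 is merely one example of a hyper-parameter satisfying s<1.

import numpy as np

def hinge(z, t=1.0):
    # Ht(z) = max(0, t - z); H1 is the classical Hinge Loss.
    return np.maximum(0.0, t - z)

def ramp(z, s=-0.3):
    # Ramp Loss Rs(z): the Hinge Loss clipped ("cut") at the value 1 - s,
    # equivalently Rs(z) = H1(z) - Hs(z).
    return np.minimum(1.0 - s, hinge(z))

def symmetric_ramp(z, s=-0.3):
    # Symmetric Ramp Loss of Equation (2): z -> Rs(z) + Rs(-z).
    return ramp(z, s) + ramp(-z, s)

z = np.linspace(-3, 3, 7)
print(symmetric_ramp(z))  # peaked near z = 0, flat far away from the margin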
As illustrated in graph 702, the Ramp Loss function Rs(·) is a "cut" version of the Hinge Loss function that is clipped (flat) for values of z less than s.
Training a TSVM using the Symmetric Ramp Loss function 606 expressed in Equation (2) is equivalent to training an SVM using the Hinge Loss function H1(·) 704 for labeled examples, and using the Ramp Loss Rs(·) 702 for unlabeled examples, where each unlabeled example appears as two examples labeled with both possible classes. Accordingly, by introducing:
yi=1, i∈[L+1 . . . L+U]
yi=−1, i∈[L+U+1 . . . L+2U]
xi=xi−U, i∈[L+U+1 . . . L+2U],
it is possible to rewrite Equation (1) as:

Js(θ) = ½‖ω‖² + C Σi=1, . . . ,L H1(yifθ(xi)) + C* Σi=L+1, . . . ,L+2U Rs(yifθ(xi)).  (3)
Accordingly, each of the unlabeled examples is duplicated in order to associate a cost with assigning each of the two classes to that example. The minimization of the TSVM objective function expressed by Equation (3) will be considered hereinafter.
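The duplication of the unlabeled examples can be illustrated by the following sketch, which builds the extended set of L+2U examples appearing in Equation (3); the array names and sizes are arbitrary choices for the example.

import numpy as np

def duplicate_unlabeled(X_lab, y_lab, X_unl):
    # Each unlabeled example appears twice: once with label +1 (indices
    # L+1 . . . L+U) and once with label -1 (indices L+U+1 . . . L+2U).
    X_ext = np.vstack([X_lab, X_unl, X_unl])
    y_ext = np.concatenate([y_lab,
                            np.ones(len(X_unl)),
                            -np.ones(len(X_unl))])
    return X_ext, y_ext

X_lab = np.random.randn(5, 3)
y_lab = np.where(np.random.randn(5) > 0, 1.0, -1.0)
X_unl = np.random.randn(4, 3)
X_ext, y_ext = duplicate_unlabeled(X_lab, y_lab, X_unl)  # 5 + 2*4 = 13 examples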
One problem with TSVMs is that, in high dimensions with few training data, it is possible to classify all of the unlabeled examples as belonging to only one of the classes with a very large margin. This can lead to poor performance of a TSVM. In response to this problem, it is possible to constrain the solution of the TSVM objective function by introducing a balancing constraint, which ensures that data are assigned to both classes. The balancing constraint enforces that the fraction of positives and negatives assigned to the unlabeled data matches the fraction found in the labeled data. An example of a possible balancing constraint can be expressed as:

(1/U) Σi=L+1, . . . ,L+U fθ(xi) = (1/L) Σi=1, . . . ,L yi.  (4)
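As one hedged illustration, the quantity constrained by Equation (4) can be computed as follows; a linear decision function is assumed purely for the example, and the function name is arbitrary.

import numpy as np

def balancing_gap(w, b, X_lab, y_lab, X_unl):
    # Equation (4): the mean prediction over the unlabeled data should equal
    # the mean label over the labeled data; this returns the current mismatch.
    mean_unlabeled_prediction = np.mean(X_unl @ w + b)
    mean_labeled_label = np.mean(y_lab)
    return mean_unlabeled_prediction - mean_labeled_label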
The TSVM optimization problem as expressed in Equation (3) is not convex, and minimizing a non-convex objective function can be very difficult. The "Concave-Convex Procedure" (CCCP) is a procedure for solving non-convex problems that can be expressed as the sum of a convex function and a concave function. The CCCP procedure is generally described in A. L. Yuille and A. Rangarajan, "The Concave-Convex Procedure (CCCP)", Advances in Neural Information Processing Systems 14, MIT Press, Cambridge, Mass., 2002. However, in order to apply the CCCP procedure to the TSVM optimization problem, it is necessary to express the objective function in terms of a convex function and a concave function, and to take into account the balancing constraint.
At step 802, the TSVM objective function is decomposed into a convex function and a concave function. A convex function is a function that always lies over its tangent, and a concave function is a function that always lies under its tangent. The TSVM objective function is expressed in Equation (3). As described above and illustrated in graph 702, the Ramp Loss function Rs(·) used for the unlabeled data can be decomposed as the difference of two Hinge Loss functions:
Rs(z)=H1(z)−Hs(z),  (5)
where Hs(·)=max(0,s−·) is a Hinge Loss function shifted to s.
Based on this decomposition of the Ramp Loss function for the unlabeled data, the TSVM objective function Js(θ) can be decomposed into the sum of a convex function Jvexs(θ) and a concave function Jcavs(θ) as follows:

Js(θ) = Jvexs(θ) + Jcavs(θ), where
Jvexs(θ) = ½‖ω‖² + C Σi=1, . . . ,L H1(yifθ(xi)) + C* Σi=L+1, . . . ,L+2U H1(yifθ(xi)),
Jcavs(θ) = −C* Σi=L+1, . . . ,L+2U Hs(yifθ(xi)).  (6)
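The decomposition underlying Equations (5) and (6) can be checked numerically: the Ramp Loss equals a convex Hinge Loss term plus a concave term (the negative of a second Hinge Loss). The short sketch below, with an arbitrary value of s, is illustrative only.

import numpy as np

s = -0.3
z = np.linspace(-3, 3, 201)

H1 = np.maximum(0.0, 1.0 - z)  # convex Hinge Loss H1(z)
Hs = np.maximum(0.0, s - z)    # convex Hinge Loss Hs(z)
Rs = np.minimum(1.0 - s, H1)   # Ramp Loss Rs(z)

# Equation (5): Rs(z) = H1(z) - Hs(z), i.e. a convex part plus a concave part (-Hs).
assert np.allclose(Rs, H1 - Hs)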
The convex function Jvexs(θ) of Equation (6) can be reformulated and expressed using dual variables α, following the standard notation of SVMs. The dual variables α are Lagrangian variables corresponding to the constraints of the SVM.
At step 804, the balancing constraint is introduced into the decomposed objective function for the CCCP procedure. Enforcing the balancing constraint of Equation (4) can be achieved by introducing an extra Lagrangian variable α0 and an example (or data point) x0 explicitly defined in the feature space by:

Φ(x0) = (1/U) Σi=L+1, . . . ,L+U Φ(xi),
with label y0=1. Thus, if we denote by K the kernel matrix such that
Kij=Φ(xi)·Φ(xj),
the column corresponding to the example x0 is calculated as follows:

Ki0 = (1/U) Σj=L+1, . . . ,L+U Φ(xi)·Φ(xj), for all i.  (7)
The computation of this column can be achieved efficiently by calculating it one time, or by approximating Equation (7) using a known sampling method.
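A hedged sketch of Equation (7) follows: the kernel column for the balancing example x0 is simply the average of the kernel columns of the unlabeled points, so it can be computed once from an existing kernel matrix. The indexing convention below (labeled points first, then unlabeled points) is an assumption of this example.

import numpy as np

def balancing_kernel_column(K, L, U):
    # K is the (L+U) x (L+U) kernel matrix with Kij = Phi(x_i) . Phi(x_j),
    # labeled points in rows 0..L-1 and unlabeled points in rows L..L+U-1.
    # Equation (7): K_i0 = (1/U) * sum over unlabeled j of K_ij.
    return K[:, L:L + U].mean(axis=1)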
At step 806, a hyperplane classifier is initialized. As described above, a hyperplane classifier which classifies the data into two classes is defined in terms of θ=(ω,b). An initial estimate for the hyperplane classifier can be determined using an SVM solution on only the labeled data points. This step is shown at 850 in the pseudo code.
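One convenient (though not required) way to obtain this initial classifier is to train a standard SVM on the labeled examples only, for instance with the scikit-learn library as sketched below; the data and parameter values are arbitrary.

import numpy as np
from sklearn.svm import SVC

# Illustrative labeled data only; the unlabeled data plays no role at this stage.
X_lab = np.random.randn(20, 5)
y_lab = np.where(np.random.randn(20) > 0, 1, -1)

initial_svm = SVC(kernel="rbf", C=1.0)
initial_svm.fit(X_lab, y_lab)  # yields an initial theta_0 = (omega_0, b_0) in dual form
f_unlabeled_initial = initial_svm.decision_function(np.random.randn(4, 5))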
At step 808, an initial approximation of the concave function is calculated. This initial approximation is a local approximation of the concave function at the initialized hyperplane classifier. For example, a tangent of the concave function Jcavs(θ) at the initial hyperplane classifier θ0 can be used as an initial estimate of the concave function Jcavs(θ). A first order approximation of the concave part Jcavs(θ) of the TSVM objective function can be calculated as:

βi = yi ∂Jcavs(θ)/∂fθ(xi), which equals C* if yifθ(xi)<s and 0 otherwise,  (8)

for unlabeled examples (i.e., i≧L+1). The concave function Jcavs(θ) does not depend on the labeled examples (i≦L), so βi=0 for all i≦L. The initial approximation βi0 can be calculated using Equation (8) at θ0. This step is shown at 852 in the pseudo code.
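A hedged sketch of Equation (8) is shown below: βi=C* whenever yifθ(xi)<s for a (duplicated) unlabeled example, βi=0 otherwise, and βi=0 for every labeled example. The L+2U ordering introduced earlier is assumed, and the function name is a placeholder.

import numpy as np

def concave_approximation_beta(f_values, y_ext, L, C_star, s=-0.3):
    # f_values[i] = f_theta(x_i) and y_ext[i] is the (possibly duplicated) label,
    # for i = 0 .. L+2U-1.  Labeled examples (i < L) always receive beta_i = 0.
    beta = np.where(y_ext * f_values < s, C_star, 0.0)
    beta[:L] = 0.0
    return beta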
At step 810, a convex problem combining the convex function Jvexs(θ) of the TSVM objective function and the approximation of the concave function at the current hyperplane is solved. The convex function Jvexs(θ) is combined with the approximation of the concave function such that the resulting function remains convex. Since the resulting problem is convex, any efficient convex optimization algorithm can be applied. For example, the resulting function may be solved using a known SVM algorithm. Therefore, as in known algorithms used in SVMs, the primal minimization problem can be transformed into a dual maximization problem. This step is shown at 854 in the pseudo code.
At step 812, an updated hyperplane classifier is determined based on the solution to the convex problem. The parameters θ=(ω,b) of the hyperplane classifier are updated based on the solution to the convex problem of step 810. This step is shown at 856 and 858 in the pseudo code.
At step 814, an updated approximation for the concave function of the TSVM objective function is calculated based on the updated hyperplane classifier. Using Equation (8), a first order local approximation (i.e., the tangent) of the concave function is calculated at the updated hyperplane. This step is shown at 860 in the pseudo code.
At step 816, it is determined whether the solution has converged. It is possible to determine whether the solution has converged by comparing the updated approximation for the concave function (βt+1) based on the updated hyperplane with the previous approximation for the concave function (βt). If the updated approximation is equal to the previous approximation, then the solution has converged. This step is shown at 862 in the pseudo code.
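Putting steps 806 through 816 together, the outer CCCP loop can be sketched as follows. This is a schematic under the assumption that a routine solve_convex_svm_subproblem is available that solves the convex problem of step 810 for a fixed β (for instance, a standard dual SVM solver over the L+2U examples with the extra balancing variable α0); that routine, the decision_function callable, and all names below are placeholders rather than a definitive implementation of the method.

import numpy as np

def cccp_tsvm_train(solve_convex_svm_subproblem, decision_function,
                    X_ext, y_ext, L, C_star, s=-0.3, max_iter=100):
    # Step 806: initialize theta, e.g. from an SVM trained on the labeled data only.
    theta = solve_convex_svm_subproblem(X_ext[:L], y_ext[:L], beta=np.zeros(L))
    # Step 808: initial first order approximation of the concave part.
    f = decision_function(theta, X_ext)
    beta = np.where(y_ext * f < s, C_star, 0.0)
    beta[:L] = 0.0

    for _ in range(max_iter):
        # Steps 810 and 812: solve the convex problem for fixed beta, update theta.
        theta = solve_convex_svm_subproblem(X_ext, y_ext, beta=beta)
        # Step 814: recompute the tangent approximation at the updated hyperplane.
        f = decision_function(theta, X_ext)
        new_beta = np.where(y_ext * f < s, C_star, 0.0)
        new_beta[:L] = 0.0
        # Step 816: converged once the approximation stops changing.
        if np.array_equal(new_beta, beta):
            break
        beta = new_beta
    return theta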
This method is guaranteed to converge to a solution in finite time because the variable β can only take a finite number of values and because Js(θt) decreases with every iteration. As described herein, a CCCP-TSVM is used to classify data. Training a CCCP-TSVM amounts to solving a series of SVM optimization problems with L+2U variables. Although conventional SVM training has a worst case complexity of O((L+2U)³), it typically scales quadratically, and the CCCP-TSVM method described above scales quadratically in the same way. This is faster than conventional TSVM training methods.
The steps of the method described herein may be performed by computers containing processors which are executing computer program code which defines the functionality described herein. Such computers are well known in the art, and may be implemented, for example, using well known computer processors, memory units, storage devices, computer software, and other components. A high level block diagram of such a computer is shown in the accompanying figures.
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 60/747,225 filed May 15, 2006, the disclosure of which is herein incorporated by reference.