The classification problem can be formulated as a verification scheme where the objective is to determine whether a pair of points is positive or negative, i.e. positive if the points belong to the same class and negative otherwise. Given a set of labeled data, one can try to learn the metric that gives the most discriminative distance in a given feature space for this task, that is, a distance that is low between the points of a positive pair and high between the points of a negative pair. A detailed overview of this type of method can be found in [1].
Global metric learning has become popular as a way to improve classification algorithms such as the K-nearest neighbors (KNN) classifier [2]. It typically consists of estimating the optimal covariance matrix of a Mahalanobis distance that will be used for classification. While such global metrics have shown impressive improvements for classification, they do not capture local properties of the feature space, which may be relevant for complex data distributions. To overcome this difficulty, a two-step approach is generally employed [3, 4]: first, a global metric is learned and the training points in the feature space are transformed accordingly; second, a local metric is estimated in the neighborhood of each transformed training point. These local metrics adapt better to the data but often require a heuristic choice of locality.
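As background for the Mahalanobis formulation (a minimal illustration, not taken from [2]): a learned positive definite matrix M defines the distance d(x, y) = sqrt((x − y)^T M (x − y)), and factoring M = L^T L turns it into a plain Euclidean distance after transforming points by L. This transformation is what is meant by a "metric transformation" throughout this description.

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """Distance d(x, y) = sqrt((x - y)^T M (x - y)) for a PSD matrix M."""
    d = x - y
    return np.sqrt(d @ M @ d)

def metric_transform(M):
    """Factor M = L^T L (via Cholesky) so that the Euclidean distance
    between transformed points L x equals the Mahalanobis distance."""
    return np.linalg.cholesky(M).T

# Illustration: with M set to the inverse sample covariance this is the
# classical Mahalanobis distance; a learned M would replace it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
M = np.linalg.inv(np.cov(X, rowvar=False))
L = metric_transform(M)
x, y = X[0], X[1]
assert np.isclose(mahalanobis_distance(x, y, M),
                  np.linalg.norm(L @ x - L @ y))
```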
The proposed invention is instead a hierarchical global-to-local approach, where the metric is iteratively refined and learned using the data distribution itself. The approach starts by estimating a global metric and applying the corresponding metric transformation to the data. The transformed points are then clustered to obtain a set of K clusters. These two steps are applied recursively to each cluster until a termination criterion is satisfied on the cluster. Such criteria can be, e.g., a maximal height in the tree, a minimal variance of the data points in the cluster, or a minimum number of data points in the cluster. This forms a tree with a metric associated with each node.
Below follows a detailed description of the invention.
Given a set of labeled points $\{x_i,\ i = 1, \dots, N\}$, we successively apply a global metric learning algorithm to sets of hierarchically clustered points. This has the effect of forming a metric tree. For simplicity of presentation, we assume the number of clusters for each instance of the clustering algorithm to be constant and equal to K. Let $n_{i,j}$ be the jth cluster at level i and $A_{i,j}$ be the associated metric transformation matrix. The metric is learned on the set of transformed points $\{y_{i,j} = \prod_{k=0}^{i-1} A_{i-1-k,\,\lfloor j/K^{k+1}\rfloor}\, x_{i,j}\} = \{A_{i-1,\lfloor j/K\rfloor} \cdots A_{0,0}\, x_{i,j}\}$, i.e. each point transformed by the matrices of all ancestor nodes of $n_{i,j}$, e.g. using, but not restricted to, the Information Theoretic Metric Learning (ITML) algorithm proposed in [5].
Before applying the clustering algorithm to a node $n_{i,j}$, we apply the transformation associated with the metric of that node to the points handed down by its parent nodes.
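A minimal sketch of this construction, assuming scikit-learn's KMeans for the clustering step. The helper learn_metric is a hypothetical stand-in for a supervised metric learner such as ITML [5] (here it merely whitens the data and ignores the labels), and names such as MetricNode and build_metric_tree are illustrative, not part of the specification:

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_metric(X, labels):
    """Hypothetical stand-in for a supervised metric learner such as
    ITML [5], which would use pairwise label constraints; this toy
    version ignores labels and simply whitens (L^T L = Sigma^-1)."""
    if len(X) <= X.shape[1]:               # too few points: identity metric
        return np.eye(X.shape[1])
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return np.linalg.cholesky(np.linalg.inv(cov)).T

class MetricNode:
    """One node of the metric tree: a transformation plus K subtrees."""
    def __init__(self, L):
        self.L = L            # transformation matrix of this node's metric
        self.centers = None   # cluster centers in the transformed space
        self.children = []    # one child node per cluster

def build_metric_tree(X, labels, K=2, min_points=20, max_depth=4, depth=0):
    """Learn a metric, transform the points, cluster them, and recurse
    into each cluster until a termination criterion is met."""
    node = MetricNode(learn_metric(X, labels))
    Y = X @ node.L.T                       # apply this node's transformation
    if depth >= max_depth or len(X) < K * min_points:
        return node                        # leaf: termination criterion met
    km = KMeans(n_clusters=K, n_init=10).fit(Y)
    node.centers = km.cluster_centers_
    for j in range(K):
        mask = km.labels_ == j
        node.children.append(build_metric_tree(
            Y[mask], labels[mask], K, min_points, max_depth, depth + 1))
    return node
```

Note that each recursive call receives points already transformed by all ancestor metrics, which realizes the compounded transformation $y_{i,j}$ given above.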
We can now use our metric tree to evaluate the distance between any two data points, e.g. image descriptors, face feature vectors or other feature representations. First, each point is injected into the tree and its path is recovered. In particular, we identify the deepest node of the tree that contains both points. The distance between the points is then the one obtained using the metric associated with this node, possibly compounded with the metrics of its parent nodes.
In the worst case, the last common node is the root of the tree, and the distance then reduces to the global metric. The deeper in the tree the common node lies, the more local the metric is. Compared to purely local or purely global methods, this approach has the advantage of refining the metric in dense or complex areas, according to the termination criterion (e.g. maximum leaf size, maximum leaf variance, or maximum tree height).
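Continuing the sketch above (and inheriting its illustrative names), distance evaluation routes both points down the tree until their paths diverge, compounding the node transformations along the shared path:

```python
def tree_distance(tree, x1, x2):
    """Distance under the metric of the deepest node containing both
    points, compounding the transformations met along the common path."""
    node = tree
    y1, y2 = x1, x2
    while True:
        y1, y2 = node.L @ y1, node.L @ y2   # compound this node's metric
        if not node.children:
            break                           # reached a leaf
        # route each point to its nearest cluster center
        j1 = np.argmin(np.linalg.norm(node.centers - y1, axis=1))
        j2 = np.argmin(np.linalg.norm(node.centers - y2, axis=1))
        if j1 != j2:
            break                           # paths diverge: stop here
        node = node.children[j1]
    return np.linalg.norm(y1 - y2)
```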
A possible issue with this formulation is its strong dependence on the clustering boundaries: two points can be close to each other and yet be separated quite early in the clustering tree. To reduce the influence of this decision, it is possible to construct multiple metric trees and average the distances given by each one of them.
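A short sketch of this averaging, again using the illustrative helpers above. The source of diversity between trees (here, the random initialization of KMeans across calls) is an assumption; the text only requires that multiple trees be built, and bootstrap resampling of the training points would serve equally well. X_train, y_train, x_a and x_b are placeholders.

```python
def forest_distance(trees, x1, x2):
    """Average the tree distances to soften hard clustering boundaries."""
    return np.mean([tree_distance(t, x1, x2) for t in trees])

# Hypothetical usage: tree diversity comes from KMeans's random
# initialization differing between calls.
trees = [build_metric_tree(X_train, y_train, K=2) for _ in range(5)]
d = forest_distance(trees, x_a, x_b)
```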
In a preferred embodiment of the invention, a method for global-to-local metric learning is presented, the method comprising the steps of: learning a metric tree; and classifying or comparing test points using this metric tree.
In another embodiment of the present invention, a computer program, stored on a computer readable storage medium and executed in a computational unit, performs global-to-local metric learning comprising the steps of: learning a metric tree; and classifying or comparing test points using this metric tree.
In yet another embodiment of the present invention, a system for global-to-local metric learning and classification contains a computer program for global-to-local metric learning comprising the steps of: learning a metric tree; and classifying or comparing test points using this metric tree.
In another embodiment of the present invention, a system or device is used for obtaining images, analyzing them, and responding to results from classification using a global-to-local metric, as may be seen in the accompanying figures.
We have described the underlying method used for the present invention together with a list of embodiments. Possible application areas for the above described invention include, but are not restricted to, object recognition, face recognition, and classification of image content.
References:
[1] Ramanan et al., "Local Distance Functions: A Taxonomy, New Algorithms, and an Evaluation", 2009 IEEE 12th International Conference on Computer Vision (ICCV), pp. 301-308, Kyoto, Japan, Sep. 2009.
[2] Weinberger et al., "Distance Metric Learning for Large Margin Nearest Neighbor Classification", Advances in Neural Information Processing Systems 18, MIT Press, Cambridge, MA, pp. 1473-1480, 2006.
[3] Domeniconi et al., "Locally Adaptive Metric Nearest-Neighbor Classification", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1281-1285, Sep. 2002.
[4] Hastie et al., "Discriminant Adaptive Nearest Neighbor Classification", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 607-616, Jun. 1996.
[5] Davis et al., "Information-Theoretic Metric Learning", Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, pp. 209-216, Jun. 2007.
[6] Davis et al., "Information-Theoretic Metric Learning", retrieved from the Internet Jan. 21, 2011, http://doi.acm.org/10.1145/1273496.1273523, pp. 209-216, XP002617812 (2007).
[7] Fu et al., "Locally adaptive subspace and similarity metric learning for visual data clustering and retrieval", Computer Vision and Image Understanding, vol. 110, no. 3, Jun. 1, 2008 (retrieved Dec. 23, 2007), pp. 390-402, XP022652947.
[8] Authorized Officer Henriette Huysing-Solles, European Patent Office, International Search Report and Written Opinion in Application No. PCT/EP2010/064723, mailed Feb. 4, 2011, 13 pages.