Face Recognition Using Discriminatively Trained Orthogonal Tensor Projections

Information

  • Patent Application
  • Publication Number: 20080310687
  • Date Filed: June 15, 2007
  • Date Published: December 18, 2008
Abstract
Systems and methods are described for face recognition using discriminatively trained orthogonal rank one tensor projections. In an exemplary system, images are treated as tensors, rather than as conventional vectors of pixels. During runtime, the system designs visual features—embodied as tensor projections—that minimize intraclass differences between instances of the same face while maximizing interclass differences between the face and faces of different people. Tensor projections are pursued sequentially over a training set of images and take the form of a rank one tensor, i.e., the outer product of a set of vectors. An exemplary technique ensures that the tensor projections are orthogonal to one another, thereby increasing the ability to generalize and discriminate image features relative to conventional techniques. Orthogonality among tensor projections is maintained by iteratively solving an ortho-constrained eigenvalue problem in one dimension of a tensor while solving unconstrained eigenvalue problems in additional dimensions of the tensor.
Description
BACKGROUND

Appearance-based face recognition is often formulated as a problem of comparing labeled example images with unlabeled probe images. Viewed in terms of conventional machine learning, the dimensionality of the data is very high, the number of examples is very small, and the data is corrupted with large confounding influences such as changes in lighting and pose. As a result, conventional techniques such as nearest neighbor classification are not very effective.


A predominant conventional solution is to find a projective embedding of the original data into a lower dimensional space that preserves discriminant information and discards confounding information. These conventional solutions must address three challenges: high dimensionality, learning capacity, and generalization ability. Learning capacity, sometimes called inductive bias or discriminant ability, is the capacity of an algorithm to represent arbitrary class boundaries. Generalization ability is a measure of the expected errors on data outside of the training set, e.g., as measured by classification margin. While tradeoffs of these factors apply in any practical machine learning approach, face recognition presents extreme challenges.


The conventional face recognition technologies can be categorized into two classes: biometric-based methods and learning-based methods. The biometric-based methods match invariant geometrical facial metrics such as the relative distances between the eyes and nose. Learning-based methods use machine learning techniques to extract discriminant facial features for recognition.


In general, complex models with more parameters (e.g., neural networks) have higher learning capacity but are prone to over-fitting and thus have low generalization ability. When available, a large quantity of diversified training data can be used to better constrain the parameters. Simpler models with fewer parameters tend to yield better generalization, but have limited learning capacity. How to balance this tradeoff, especially with high dimensional visual data, remains an open issue.


Many discriminant learning methods treat image data as vectors. These approaches have difficulty with high dimensionality, a problem made worse when only a small set of training data is available. Many conventional methods involve solving an eigenvalue problem in the high dimensional input vector space (e.g., 1024 dimensions for 32×32 pixel images). Solving an eigendecomposition in high dimensions is not only computationally intensive, but also prone to numerical difficulties in which the best discriminative projections may be discarded. Vector-based representations also ignore the spatial structure of image data, which may be very useful for visual recognition.


SUMMARY

Systems and methods are described for face recognition using discriminatively trained orthogonal rank one tensor projections. In an exemplary system, images are treated as tensors, rather than as conventional vectors of pixels. During runtime, the system designs visual features—embodied as tensor projections—that minimize intraclass differences between instances of the same face while maximizing interclass differences between the face and faces of different people. Tensor projections are pursued sequentially over a training set of images and take the form of a rank one tensor, i.e., the outer product of a set of vectors. An exemplary technique ensures that the tensor projections are orthogonal to one another, thereby increasing the ability to generalize and discriminate image features relative to conventional techniques. Orthogonality among tensor projections is maintained by iteratively solving an ortho-constrained eigenvalue problem in one dimension of a tensor while solving unconstrained eigenvalue problems in additional dimensions of the tensor.


This summary is provided to introduce the subject matter of face recognition using discriminatively trained orthogonal rank one tensor projections, which is further described below in the Detailed Description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of an exemplary face recognition system.



FIG. 2 is a block diagram of an exemplary face recognition engine.



FIG. 3 is a diagram of an example visualization of a face represented as tensor projections.



FIG. 4 is a diagram of exemplary tensor rearrangement.



FIG. 5 is a diagram showing a discriminative ability of consecutively pursued orthogonal rank one tensor projections.



FIG. 6 is a flow diagram of an exemplary method of face recognition using orthogonal tensor projections.





DETAILED DESCRIPTION

Overview


This disclosure describes systems and methods for face recognition using discriminatively trained orthogonal rank one tensor projections. Exemplary systems base face recognition on discriminative linear projection. In the exemplary system, images are treated as tensors, rather than as conventional vectors of pixels. A tensor can be considered a multidimensional array, which for purposes of face recognition is used herein to exploit high-dimensional information inherent in an image that vector representations leave untapped. In an exemplary system, an extracted series of orthogonal rank one tensor projections maximizes discriminative information for recognizing faces.


In one implementation, projections are pursued sequentially in the form of a rank one tensor, i.e., a tensor which is the outer product of a set of vectors. An exemplary method effectively ensures that the rank one tensor projections are orthogonal to one another. The orthogonality constraints on the tensor projections provide a strong inductive bias and result in better generalization on small training sets. That is, the orthogonality increases the ability of the exemplary system to generalize relevant image features for recognizing a face (e.g., one that changes somewhat across multiple images) while at the same time increasing the ability of the system to discriminate between visual nuances.


In one implementation, the exemplary system achieves orthogonal rank one projections by pursuing consecutive projections in the complement space of previous projections. Although pursuing consecutive projections in a complement space is conventionally meaningful for applications such as image reconstruction, it is less straightforward for pursuing discriminant projections. The exemplary system nevertheless achieves such consecutive projections, as described in greater detail below, by iteratively solving an eigenvalue problem with orthogonality constraints on one tensor dimension, while solving unconstrained eigenvalue problems on the other dimensions.


Experiments demonstrate that on small and medium-sized face recognition datasets, the exemplary system utilizing the exemplary orthogonality technique outperforms previous conventional embedding methods. On large face datasets the exemplary system achieves results comparable with the best conventional techniques while often using fewer discriminant projections.


Exemplary System



FIG. 1 shows an exemplary system 100 for performing face recognition using discriminatively trained orthogonal tensor projections. A computing device 102 hosts an exemplary face recognition engine 104. The face recognition engine 104 learns a set of orthogonal rank one tensor projections from a set of training images 106. The set of training images 106 includes multiple images of a first person 108, multiple images of a second person 110 . . . and multiple images of other people—to the “nth” person 112.


The set of orthogonal rank one tensor projections extracted from the set of training images 106 by the face recognition engine 104 depends on the actual faces portrayed in the images 106. That is, the face recognition engine 104 derives image features to be embodied in the orthogonal tensor projections from pixel-level and/or human perceptual level visual features, such that differences across multiple instances of a person's face in multiple images are minimized, while differences between that person's face and other faces (i.e., of other people) are maximized. Thus, the face recognition engine 104 performs an optimization that results in the "best discriminative" tensor projections being extracted. The discriminative quality of the tensor projections—the ability to recognize a face as belonging to a particular person—is greatly enhanced by the exemplary technique of maintaining the projections in the set of tensor projections as orthogonal (ortho-normal) to each other.


Once trained on a set of training images 106, the face recognition engine 104 can discriminate faces in test images 114 from different faces in other images, and can recognize a face that matches a known face in the training set 106.


Exemplary Engine



FIG. 2 shows the face recognition engine 104 of FIG. 1 in greater detail. The illustrated implementation is only one example configuration, for descriptive purposes. Many other arrangements of the components of an exemplary face recognition engine 104 are possible within the scope of the subject matter. Such an exemplary face recognition engine 104 can be executed in hardware, software, or combinations of hardware, software, firmware, etc.


The illustrated face recognition engine 104 includes an input or access path to the set of training images 106 of FIG. 1. The face recognition engine 104 includes a learning engine 202, which extracts a set of "most discriminant" orthogonal rank one tensor projections 204 from the set of training images 106. The face recognition engine 104 further includes a test image input 206 and a recognition tester 208 to determine if a test image 114 is recognizable from the set of training images 106. The test image input 206 may retrieve a test image 114 input by a user, from a file, from the Internet, etc.; for example, the test image 114 may be an image search criterion for searching a vast database of stored images. Or, the test image 114 may be obtained by a camera or face scanner 210.


The learning engine 202 may further include a best-discriminant-feature extraction engine 212, which in turn includes a sequential projection generator 214 to learn the set of orthogonal tensor projections 204 from example image pairs 216. A tensor rearranger 218 may unfold or reconfigure the dimensionality of a given tensor to enable the sequential projection generator 214 to pursue a greater number of projections for the set of orthogonal tensor projections 204. For example, the tensor rearranger 218 may utilize or comprise a GLOCAL transform to reduce the number of dimensions needed to organize the same tensor data. (See, H.-T. Chen, T.-L. Liu, and C.-S. Fuh, “Learning effective image metrics from few pairwise examples,” IEEE ICCV, pages 1371-1378, Beijing, China, October 2005, which is incorporated herein by reference.)


The sequential projection generator 214 may include a discriminative efficacy optimizer 220, including a “same-face” intraclass dissimilarity minimizer 222 and a “different-face” interclass dissimilarity maximizer 224. An associated orthogonality engine 226 includes an iterative eigenvalue engine 228 that effects an optimization which finds the most discriminant tensor projections 204 via an ortho-constrained one-dimension eigenvalue engine 230 and an unconstrained multi-dimension eigenvalue engine 232. A random dimension selector 234 chooses one dimension out of multiple tensor space dimensions 236 for the ortho-constrained one-dimension eigenvalue engine 230 to operate on. Then the unconstrained multi-dimension eigenvalue engine 232 operates on the remaining dimensions of the tensor space dimensions 236. Operation of the above components will be described in greater detail below.


The recognition tester 208 includes a projection embedder 238, a similarity comparator 240, and an image categorizer 242. The projection embedder 238 applies the set of orthogonal rank one tensor projections 204 to a test image 114, while the similarity comparator 240 decides whether the test image 114 matches a known image or another test image 114, and the image categorizer 242 classifies test images 114 according to recognized faces they may contain. Operation of the components just introduced will now be described.


Operation of the Exemplary Face Recognition Engine


The exemplary face recognition engine 104 regards image data as a tensor (i.e., a multiple dimensional array). FIG. 3 shows some aspects of representing an image as a tensor. For example, the tensor rearranger 218 may represent a tensor as a GLOCAL image with 4×2 blocks. Each row (e.g., 310 and 312) visualizes one rank one projection. The terms G(•) and IG(•) denote a GLOCAL transform and its inverse transform, respectively. FIG. 3(a) shows a source image "X" 302. For a 2-dimensional tensor, a rank one projection is defined with left and right projection vectors "l" and "r." FIG. 3(b) shows a reconstructed left projection IG(ll^T G(X)) 304; FIG. 3(c) shows a full bilinear projection IG(l^T G(X) r) 306; and FIG. 3(d) shows a reconstructed right projection IG(G(X) rr^T) 308.
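
As a concrete illustration, the bilinear forms above are one-line matrix expressions. The following sketch, with hypothetical sizes and random data standing in for G(X), computes the three quantities that FIG. 3 visualizes (before the inverse transform IG(•) maps them back to image layout):

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 128))   # stand-in for G(X), an order-2 tensor
l = rng.standard_normal(8)          # left projection vector "l"
r = rng.standard_normal(128)        # right projection vector "r"

y = l @ Y @ r                       # full bilinear projection l^T G(X) r (a scalar)
left = np.outer(l, l) @ Y           # l l^T G(X), the quantity behind FIG. 3(b)
right = Y @ np.outer(r, r)          # G(X) r r^T, the quantity behind FIG. 3(d)
```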


Using tensor representation of image data, the best-discriminant-feature extraction engine 212 can pursue discriminant multilinear projections (e.g., bi-linear projections, for a 2-dimensional tensor) to construct discriminant embedding. In many cases, the sequential projection generator 214 obtains discriminant multilinear projections by solving the eigenvalue problems iteratively on n different dimensions of the tensor space 236. The exemplary techniques described herein are different from conventional "rank one projections with adaptive margins" (RPAM). First, the rank one projections pursued by the exemplary face recognition engine 104 are orthogonal, while those learned from RPAM are not. Previous research has shown that orthogonality increases the discriminative power of the projections. Although in one implementation the face recognition engine 104 does not use adaptive margins, they could easily be incorporated into the exemplary system.


Tensor representations of images do not suffer from the same "curse-of-dimensionality" as vector space representations. Tensor projections are represented as the outer product of n lower dimensional vectors. For example, rather than expending 1024 parameters for each projection, 2-dimensional tensors can operate with as few as 64 parameters per projection (i.e., 32 + 32 for a 32×32 image). As discussed below, a tensor rearranger 218, e.g., utilizing a GLOCAL tensor representation, has the added benefit of respecting the geometric structure inherent in images.


Most previous tensor-based learning methods for discriminant embedding constrain the spanning set of multi-linear projections that are to be formed by combination of outer products of a small number of column vectors. This may over-constrain the learning capacity of the projection vectors.


The exemplary face recognition engine 104 addresses the conflicting goals of capacity and generalization by learning a projection which is a combination of orthogonal rank one tensors. It is worth noting that two rank one tensors are orthogonal if and only if they are orthogonal on at least one dimension of the tensor space. The exemplary orthogonality engine 226 uses this insight in achieving orthogonality among projections. The iterative eigenvalue engine 228 iteratively solves an eigenvalue problem with orthogonality constraints on one dimension, and solves unconstrained eigenvalue problems on the other dimensions of the tensor space 236.


Rank One Projection and Orthogonality


In linear algebra, an order n real-valued tensor is a multiple dimensional array X ∈ R^{m_0×m_1×⋯×m_{n−1}}, where x_{i_0 i_1 ⋯ i_{n−1}} is the element at position (i_0, i_1, …, i_{n−1}). Next, a rank one projection is defined.


Definition: Given an order n tensor X, a rank one projection is a mapping X ∈ R^{m_0×m_1×⋯×m_{n−1}} → y ∈ R defined by P̃ = {p_0, p_1, …, p_{n−1}}, where each p_i is a column vector of dimension m_i with kth element p_{ik}, such that in Equation (1):

y = \sum_{i_{n-1}} \cdots \left( \sum_{i_1} \left( \sum_{i_0} x_{i_0 i_1 \cdots i_{n-1}} \, p_{0 i_0} \right) p_{1 i_1} \right) \cdots \, p_{n-1,\, i_{n-1}}   (1)







The notation can be simplified using a k-mode product, i.e.:


Definition: The k-mode product of a tensor X ∈ R^{m_0×⋯×m_k×⋯×m_{n−1}} and a matrix (i.e., an order 2 tensor) B ∈ R^{m_k×m′_k} is a mapping X ∈ R^{m_0×⋯×m_k×⋯×m_{n−1}} → Y ∈ R^{m_0×⋯×m′_k×⋯×m_{n−1}}, i.e., Y = X ×_k B, where, as shown in Equation (2):

y_{i_0 \cdots i_{k-1}\, i'_k\, i_{k+1} \cdots i_{n-1}} = \sum_{j=0}^{m_k - 1} x_{i_0 \cdots i_{k-1}\, j\, i_{k+1} \cdots i_{n-1}} \, b_{j i'_k}   (2)
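
In code, the k-mode product is a single contraction. The sketch below is an illustrative implementation (the function name is ours, not the patent's); note that projecting with a vector, viewed as an m_k × 1 matrix, shrinks mode k to size one, which is how the rank one projection is assembled mode by mode:

```python
import numpy as np

def k_mode_product(X, B, k):
    """Equation (2): contract dimension k of X against the rows of B."""
    Y = np.tensordot(X, B, axes=(k, 0))  # the new m'_k axis lands last
    return np.moveaxis(Y, -1, k)         # move it back to position k

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 5, 6))
B = rng.standard_normal((5, 3))          # maps mode 1 from size 5 to size 3
assert k_mode_product(X, B, 1).shape == (4, 3, 6)
```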







Equation (1) can then be written as y = X ×_0 p_0 ×_1 p_1 ⋯ ×_{n−1} p_{n−1}, or in short y = X Θ P̃. Let P̃_d = {P̃^{(0)}, …, P̃^{(d−1)}} be a set of d rank one projections; then the mapping from X to y = [y_0, y_1, …, y_{d−1}]^T ∈ R^d is denoted as in Equation (3):

y = \left[ X \Theta \tilde{P}^{(0)}, \ldots, X \Theta \tilde{P}^{(d-1)} \right]^T \triangleq X \Theta \tilde{P}_d   (3)
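
Numerically, y = X Θ P̃ is a full contraction of X against one vector per mode, and Equation (3) simply stacks d such scalars into an embedding vector. A minimal sketch for an order 3 tensor (names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 5, 6))                    # an order-3 tensor

def rank_one_projection(X, P):
    """y = X (Theta) P~: contract every mode of X against one vector
    (written for tensors up to order 3 in this sketch)."""
    subs = 'ijk'[:X.ndim]
    return np.einsum(subs + ',' + ','.join(subs) + '->', X, *P)

# d = 2 projections stacked into an embedding, as in Equation (3).
P0 = [rng.standard_normal(m) for m in X.shape]
P1 = [rng.standard_normal(m) for m in X.shape]
y = np.array([rank_one_projection(X, P0), rank_one_projection(X, P1)])

# Cross-check against the literal nested sums of Equation (1).
y_seq = ((X * P0[0][:, None, None]).sum(0) * P0[1][:, None]).sum(0) @ P0[2]
assert np.isclose(y[0], y_seq)
```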


A rank one projection is also the sum of the element-wise product of X and the reconstruction tensor of P̃.


Definition: The reconstruction tensor of P̃ is P′ ∈ R^{m_0×m_1×⋯×m_{n−1}} such that, as in Equation (4):

P' = \left[ p'_{i_0 i_1 \cdots i_{n-1}} \right] = \left[ \prod_{k=0}^{n-1} p_{k i_k} \right]   (4)

Then y = X Θ P̃ = Σ_{i_0 i_1 ⋯ i_{n−1}} x_{i_0 i_1 ⋯ i_{n−1}} p′_{i_0 i_1 ⋯ i_{n−1}}. An order n rank one projection is in fact a constrained vector space linear projection x ∈ R^{Π_i m_i} → y ∈ R such that y = p̂^T x, where x is the vector scanned dimension-by-dimension from X and p̂ is defined as in Equation (5):





\hat{p} = p_{n-1} \otimes p_{n-2} \otimes \cdots \otimes p_0   (5)

where ⊗ denotes the Kronecker product of matrices. Next, the orthogonality of two rank one projections is defined:


Definition: Two rank one projections P̃^{(1)} and P̃^{(2)} are orthogonal if and only if the corresponding vectors p̂_1 and p̂_2 calculated from Equation (5) are orthogonal. Similarly, P̃ can be called a normal rank one projection if and only if p̂ is a normal (unit) vector. If all p_i of P̃ are normal vectors, then P̃ is a normal rank one projection.
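
This equivalence is easy to verify numerically. One caveat: for the Kronecker ordering p_{n−1} ⊗ ⋯ ⊗ p_0 of Equation (5) to line up, "scanned dimension-by-dimension" is read here as column-major order (index i_0 varying fastest); that reading is an assumption of this sketch rather than something the text pins down:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 5, 6))
P = [rng.standard_normal(m) for m in X.shape]          # p_0, p_1, p_2

# Equation (5): p_hat = p_{n-1} (x) ... (x) p_0 (Kronecker product).
p_hat = reduce(np.kron, reversed(P))

y_tensor = np.einsum('ijk,i,j,k->', X, *P)             # y = X (Theta) P~
y_vector = p_hat @ X.ravel(order='F')                  # y = p_hat^T x
assert np.isclose(y_tensor, y_vector)

# Orthogonality of two rank one projections reduces to p_hat1 @ p_hat2 == 0.
```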


Ortho Rank One Discriminant Analysis


Given a training set {X_i ∈ R^{m_0×m_1×⋯×m_{n−1}}}_{i=0}^{N−1} 106 and a set of pairwise labels L = {l(i,j) : i < j; i,j ∈ {0, …, N−1}}, where l(i,j) = 1 if X_i and X_j are in the same category 242, and l(i,j) = 0 otherwise. Let N_k(i) be the set of k-nearest neighbors of X_i; then:






D = {(i,j) | i < j, l(i,j) = 0, X_i ∈ N_k(j) or X_j ∈ N_k(i)}

S = {(i,j) | i < j, l(i,j) = 1, X_i ∈ N_k(j) or X_j ∈ N_k(i)}

are the index sets of all example pairs 216 which are k-nearest neighbors of one another and are from different and same categories, respectively. The objective is to learn a set of K ortho-normal rank one projections P̃_K = (P̃^{(0)}, P̃^{(1)}, …, P̃^{(K−1)}) such that in the projective embedding space, the distances of the example pairs 216 in S (same face) are minimized, while the distances of the example pairs 216 in D (different faces) are maximized.


To achieve this, the sequential projection generator 214 maximizes a series of local weighted discriminant cost functions. Suppose the sequential projection generator 214 has obtained k discriminant rank one projections indexed from 0 to k−1, then to pursue the (k+1)th rank one projection, the sequential projection generator 214 solves the following constrained optimization problem in Equations (6) and (7):










\max_{\tilde{P}^{(k)}} \; \frac{ \sum_{(i,j) \in D} \omega_{ij} \left\| X_i \Theta \tilde{P}^{(k)} - X_j \Theta \tilde{P}^{(k)} \right\|^2 }{ \sum_{(i,j) \in S} \omega_{ij} \left\| X_i \Theta \tilde{P}^{(k)} - X_j \Theta \tilde{P}^{(k)} \right\|^2 }   (6)

\text{s.t.} \quad \tilde{P}^{(k)} \perp \tilde{P}^{(k-1)}, \; \ldots, \; \tilde{P}^{(k)} \perp \tilde{P}^{(0)}   (7)







where ∥•∥ is the Euclidean distance, and ωij is a weight assigned according to the importance of the example pair (Xi,Xj) 216. In one implementation, the heat kernel weight is used, i.e.,







\omega_{ij} = \exp\left\{ - \frac{ \left\| X_i - X_j \right\|_F^2 }{ t } \right\}

where ‖·‖_F denotes the Frobenius norm, and t is a constant parameter. The heat kernel weight introduces heavy penalties to the cost function for example pairs 216 that are close to one another. Notice that for k = 0, only the unconstrained optimization problem of Equation (6) is solved.
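
The pair sets D and S and the heat kernel weights might be assembled as in the sketch below; the brute-force neighbor search and the function name are illustrative choices, not the patent's implementation:

```python
import numpy as np

def pair_sets_and_weights(X, labels, k, t):
    """Build the k-nearest-neighbor pair sets D (different category) and
    S (same category) plus the heat kernel weights. X is an (N, ...) stack
    of tensor examples; labels is a length-N category vector."""
    N = X.shape[0]
    flat = X.reshape(N, -1)
    diff = flat[:, None, :] - flat[None, :, :]
    d2 = (diff ** 2).sum(-1)                 # squared Frobenius distances
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]  # k nearest neighbors of each i
    D, S, w = [], [], {}
    for i in range(N):
        for j in range(i + 1, N):
            if j in nn[i] or i in nn[j]:     # neighbors of one another
                (S if labels[i] == labels[j] else D).append((i, j))
                w[(i, j)] = np.exp(-d2[i, j] / t)
    return D, S, w
```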


There are two difficulties in the constrained maximization of Equation (6). First, it is generally difficult to maintain both the rank one and orthogonality properties. Second, there is no closed-form solution to the unconstrained optimization problem of Equation (6). It is well known that the second problem can be addressed numerically using a sequential, iterative optimization scheme. A solution to the first problem is described in the following section.


Exemplary Learning Engine


In one implementation, the exemplary learning engine 202 uses the following proposition:


Proposition: Two rank one projections P̃^{(1)} and P̃^{(2)} are orthogonal to each other if and only if, for at least one i, p_i^{(1)} ∈ P̃^{(1)} is orthogonal to p_i^{(2)} ∈ P̃^{(2)}, i.e., p_i^{(1)} ⊥ p_i^{(2)}.


From this Proposition, a set of constraints equivalent to Equation (7) is given in Equation (8):

\exists \, \{ j_l : l \in \{0, \ldots, k-1\}; \; j_l \in \{0, \ldots, n-1\} \} : \; p_{j_{k-1}}^{(k)} \perp p_{j_{k-1}}^{(k-1)}, \; \ldots, \; p_{j_0}^{(k)} \perp p_{j_0}^{(0)}.   (8)


To make the optimization more tractable, the constraints on Equation (7) can be replaced with the following stronger constraints.





\exists \, j \in \{0, \ldots, n-1\} : \; p_j^{(k)} \perp p_j^{(k-1)}, \; \ldots, \; p_j^{(k)} \perp p_j^{(0)}.   (9)


These constraints are stronger because they require all jl in Equation (8) to be the same. The constraints in Equation (9) are sufficient conditions for the constraints in Equation (7).


It is well known that the unconstrained problem in Equation (6) can be solved numerically in a sequentially iterative manner. That is, at each iteration, the iterative eigenvalue engine 228 fixes P̃_i^{(k)} = {p_0^{(k)}, …, p_{i−1}^{(k)}, p_{i+1}^{(k)}, …, p_{n−1}^{(k)}} for one i ∈ {0, …, n−1}, and maximizes Equation (6) with respect to p_i^{(k)}. To simplify notation, Equation (10) denotes:






y
(k)
=X×
0
p
0
(k) . . . ×i−1pi−1(k)×i+1pi+1(k) . . . pn−1(k) ΔXΘ{tilde over (P)}i(k)   (10)


which is a mi dimensional vector. Then Equation (11) is to be solved:










\max_p \; \frac{ p^T A_d^{(i)} \, p }{ p^T A_s^{(i)} \, p }   (11)







where, in Equations (12), (13), and (14):










A_d^{(i)} = \sum_{(o,p) \in D} \omega_{op} \left( y_o^{(k)} - y_p^{(k)} \right) \left( y_o^{(k)} - y_p^{(k)} \right)^T   (12)

A_s^{(i)} = \sum_{(o,p) \in S} \omega_{op} \left( y_o^{(k)} - y_p^{(k)} \right) \left( y_o^{(k)} - y_p^{(k)} \right)^T   (13)

y_o^{(k)} = X_o \, \Theta \, \tilde{P}_i^{(k)}, \quad o = 1, \ldots, N   (14)







The iterative eigenvalue engine 228 obtains the optimal solution of Equation (11) by solving the generalized eigenvalue problem in Equation (15):






A_d^{(i)} \, p = \lambda \, A_s^{(i)} \, p,   (15)


and the optimal solution p_i^{(k)*} is the eigenvector associated with the largest eigenvalue. Equation (15) is solved iteratively over i = 0, 1, …, n−1 one-by-one until convergence. The final output P̃^{(k)*} = {p_0^{(k)*}, p_1^{(k)*}, …, p_{n−1}^{(k)*}} is regarded as the optimal solution to the unconstrained Equation (6). This iterative technique can only guarantee a locally optimal solution.
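
One unconstrained update therefore reduces to a single symmetric generalized eigensolve. A sketch follows; the small ridge term that keeps A_s^{(i)} invertible is an assumption of this sketch:

```python
import numpy as np
from scipy.linalg import eigh

def unconstrained_step(Ad, As, eps=1e-8):
    """Equation (15): maximize p^T Ad p / p^T As p over p."""
    As = As + eps * np.eye(As.shape[0])  # ridge: As may be rank deficient
    vals, vecs = eigh(Ad, As)            # eigenvalues in ascending order
    p = vecs[:, -1]                      # eigenvector of largest eigenvalue
    return p / np.linalg.norm(p)         # normalized, as in Table 1 below
```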


To solve Equation (6) with the constraints in Equation (9), suppose j has been selected; the iteration steps that optimize the p_i^{(k)} with i ≠ j remain unchanged, since the constraints do not apply to them.


Now Equation (11) is to be solved for i = j such that the constraints in Equation (9) hold. This is equivalent to solving the problem in Equation set (16):












\max_{p_j^{(k)}} \; \left( p_j^{(k)} \right)^T A_d^{(j)} \, p_j^{(k)}

\text{s.t.} \quad \left( p_j^{(k)} \right)^T A_s^{(j)} \, p_j^{(k)} = 1

\left( p_j^{(k)} \right)^T p_j^{(k-1)} = 0

\vdots

\left( p_j^{(k)} \right)^T p_j^{(0)} = 0.   (16)







In one implementation, the iterative eigenvalue engine 228 obtains the solution by solving the following eigenvalue problem in Equation (17):






\tilde{M} \, p_j^{(k)} = \left( M \left( A_s^{(j)} \right)^{-1} A_d^{(j)} \right) p_j^{(k)} = \lambda \, p_j^{(k)}   (17)


where, in Equations (18), (19), and (20):






M = I - \left( A_s^{(j)} \right)^{-1} A \, B^{-1} A^T   (18)

A = \left[ p_j^{(0)}, p_j^{(1)}, \ldots, p_j^{(k-1)} \right]   (19)

B = \left[ b_{uv} \right] = A^T \left( A_s^{(j)} \right)^{-1} A   (20)


The optimal p_j^{(k)*} is the eigenvector corresponding to the largest eigenvalue of M̃.
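
The constrained update can be sketched the same way. Because A^T M = 0 by construction, every eigenvector of M̃ with a nonzero eigenvalue is automatically orthogonal to the columns of A, which is exactly what Equation (16) demands; as above, the ridge term is an assumption to keep A_s^{(j)} invertible:

```python
import numpy as np
from scipy.linalg import eig, inv

def constrained_step(Ad, As, A, eps=1e-8):
    """Equations (17)-(20): A holds columns p_j^(0), ..., p_j^(k-1)."""
    m = As.shape[0]
    As_inv = inv(As + eps * np.eye(m))
    B = A.T @ As_inv @ A                         # Equation (20)
    M = np.eye(m) - As_inv @ A @ inv(B) @ A.T    # Equation (18)
    vals, vecs = eig(M @ As_inv @ Ad)            # M~ is not symmetric
    p = np.real(vecs[:, np.argmax(np.real(vals))])
    return p / np.linalg.norm(p)
```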


Table 1 below summarizes the exemplary techniques iteratively executed by the learning engine 202, namely, exemplary orthogonal rank one tensor discriminant analysis. It too can only guarantee a locally optimal solution.









TABLE 1
Orthogonal Rank One Tensor Discriminant Analysis

INPUT: {X_i}_{i=0}^{N−1}, S and D
OUTPUT: P̃_K = (P̃^{(0)}, P̃^{(1)}, …, P̃^{(K−1)})

  1. k = 0: iteratively solve Equation (15) over i = 0, 1, …, n−1 to obtain P̃^{(0)}. Set k = k + 1.
  2. Randomly initialize each p_i^{(k)} as a normal vector; randomly generate a number j ∈ {l | l = 0, …, n−1 and m_l > k}.
    a. For each i = [j, 0, 1, …, j−1, j+1, …, n−1], fix all other p_m^{(k)}, m ≠ i. If i = j, update p_i^{(k)} by solving Equation (17); otherwise update p_i^{(k)} by solving Equation (15). Then normalize p_i^{(k)}.
    b. Repeat Step 2a until the optimization of Equation (6) has converged, to obtain P̃^{(k)}.
  3. k = k + 1. If k < K, repeat Step 2; else output P̃_K = (P̃^{(0)}, P̃^{(1)}, …, P̃^{(K−1)}).
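
For orientation, the outer loop of Table 1 might look like the following sketch. Here build_Ad_As is a hypothetical helper that accumulates Equations (12)-(14) for one dimension, and unconstrained_step and constrained_step are the sketches given above:

```python
import numpy as np

def learn_projections(X, build_Ad_As, K, n_iter=20, seed=0):
    """Table 1 (sketch): pursue K orthogonal rank one projections over a
    stack X of shape (N, m_0, ..., m_{n-1})."""
    rng = np.random.default_rng(seed)
    dims, n = X.shape[1:], X.ndim - 1
    projections = []
    for k in range(K):
        P = [rng.standard_normal(m) for m in dims]
        P = [p / np.linalg.norm(p) for p in P]
        # Step 2: pick a random dimension j with room for k constraints.
        j = int(rng.choice([i for i in range(n) if dims[i] > k])) if k else None
        order = list(range(n)) if j is None else [j] + [i for i in range(n) if i != j]
        for _ in range(n_iter):                      # Steps 2a and 2b
            for i in order:
                Ad, As = build_Ad_As(X, P, i)        # Equations (12)-(14)
                if i == j:
                    A = np.column_stack([Pk[j] for Pk in projections])
                    P[i] = constrained_step(Ad, As, A)   # Equation (17)
                else:
                    P[i] = unconstrained_step(Ad, As)    # Equation (15)
        projections.append(P)
    return projections
```

A fixed iteration count stands in here for Table 1's convergence test on Equation (6).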









There is no theoretical guidance on how to choose j in Equation (9). In one implementation, it is desirable not to place too many constraints on any one specific dimension; therefore the random dimension selector 234 chooses one dimension j at random when pursuing each one of the K rank one projections.


Second, the orthogonality engine 226 performs the constrained optimization on pj(k) first. This ensures that the constraints in Equation (7) hold in all the iterations.


Third, when k rank one projections have been obtained and k ≥ m_i, orthogonality constraints can no longer be imposed on the ith dimension. The reason is that {p_i^{(l)} | l = 0, …, k−1} already spans R^{m_i}; so only m = max{m_i}_{i=0}^{n−1} orthogonal rank one projections can be pursued. However, the tensor rearranger 218 can address this issue by transforming the tensor space from R^{m_0×m_1×⋯×m_{n−1}} to R^{m′_0×m′_1×⋯×m′_{n−1}}, where m′ = max{m′_i}_{i=0}^{n−1}. In this new transformed space, the learning engine 202 can then find a maximum of m′ rank one projections. For example, for second order tensors, a GLOCAL transform can be used in the tensor rearranger 218.



FIG. 4 shows an example GLOCAL transform 400. The GLOCAL transform 400 partitions a tensor of size m_0 × m_1 into

m'_1 = \frac{ m_0 \times m_1 }{ l_0 \times l_1 }

non-overlapping blocks of size l_0 × l_1. The blocks are ordered by a raster scan. Each block i is then itself raster scanned to become a vector of dimension m′_0 = l_0 × l_1, and put into the ith column of the target tensor of size m′_0 × m′_1. The GLOCAL transform 400 can be interpreted in the following manner: the column space 402 expresses local features at the pixel level, and the row space 404 expresses global features at the appearance level.
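
A possible implementation of the GLOCAL transform and its inverse is sketched below. It reproduces the 8×128 example cited for FIG. 5 (a 32×32 image with 4×2 blocks); the exact raster conventions of FIG. 4 may differ:

```python
import numpy as np

def glocal(X, l0, l1):
    """GLOCAL transform G(.): partition an m0 x m1 tensor into l0 x l1
    blocks in raster order, raster scanning each block into one column."""
    m0, m1 = X.shape
    blocks = [X[r:r + l0, c:c + l1].ravel()
              for r in range(0, m0, l0)
              for c in range(0, m1, l1)]
    return np.column_stack(blocks)       # (l0*l1) x (m0*m1 / (l0*l1))

def inverse_glocal(Y, m0, m1, l0, l1):
    """Inverse transform IG(.): put each column back into its block."""
    X = np.empty((m0, m1))
    cols = iter(Y.T)
    for r in range(0, m0, l0):
        for c in range(0, m1, l1):
            X[r:r + l0, c:c + l1] = next(cols).reshape(l0, l1)
    return X

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
Y = glocal(img, 4, 2)
assert Y.shape == (8, 128)               # the 8 x 128 example from the text
assert np.allclose(inverse_glocal(Y, 32, 32, 4, 2), img)
```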


Effectiveness of Projections



FIG. 5 shows the discriminative power of consecutively pursued orthogonal rank one tensor projections. The discriminant power (evaluated by the quotient in Equation (6)) does not strictly decrease over the sequentially pursued projections. In order to explore the effectiveness of varying numbers of projections, they are first sorted according to discriminant power. FIG. 5 displays the discriminant power of the orthogonal rank one projections obtained on a training set of images 106, such as the CMU Pose, Illumination, and Expression (PIE) benchmark database (Carnegie Mellon University, Pittsburgh, Pa.). Curve 502 shows the unsorted quotients, and curve 504 displays the sorted quotients. In one implementation, the learning engine 202 performs a GLOCAL transform with 4×2 blocks to form a tensor of size 8×128, allowing a total of 128 orthogonal projections.


Exemplary Method



FIG. 6 shows an exemplary method 600 of face recognition using orthogonal tensor projections. In the flow diagram, the operations are summarized in individual blocks. The exemplary method 600 may be performed by hardware, software, or combinations of hardware, software, firmware, etc., for example, by components of the exemplary face recognition engine 104.


At block 602, an image that includes a face is represented as a tensor. When images are treated as tensors, rather than as conventional vectors of pixels, optimal discriminant features can be extracted as discriminant multilinear projections. The discriminant multilinear projections can be obtained by solving eigenvalue problems iteratively on n different dimensions of the tensor space.


At block 604, orthogonal tensor projections are derived from the image for recognizing the face, e.g., in other, different images. During runtime, the system "designs" visual features on the fly—embodied as tensor projections—that minimize intraclass differences between instances of the same face while maximizing interclass differences between the face and faces of different people. In one implementation, tensor projections are pursued sequentially over a training set of images and take the form of a rank one tensor, i.e., the outer product of a set of vectors. An exemplary technique ensures that the tensor projections are orthogonal to one another, thereby increasing the ability to generalize and discriminate image features relative to conventional techniques. Orthogonality among tensor projections can be maintained by iteratively solving an ortho-constrained eigenvalue problem in one dimension of a tensor while solving unconstrained eigenvalue problems in additional dimensions of the tensor.


Conclusion


Although exemplary systems and methods have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed methods, devices, systems, etc.

Claims
  • 1. A method, comprising: representing images as tensors; and deriving orthogonal tensor projections to characterize image features for recognizing a person's face across different images containing the person's face.
  • 2. The method as recited in claim 1, wherein the orthogonal tensor projections comprise a set of orthogonal rank one tensor projections.
  • 3. The method as recited in claim 1, wherein the orthogonal tensor projections are optimized such that dissimilarities between different instances of the person's face are minimized while dissimilarities between the person's face and faces of other people are maximized.
  • 4. The method as recited in claim 1, wherein the deriving the orthogonal tensor projections further includes learning the orthogonal tensor projections from a set of training images, wherein the set of training images portrays multiple faces and multiple instances of each face.
  • 5. The method as recited in claim 4, further comprising: receiving a test image; scanning the test image for an associated orthogonal tensor projection; comparing the corresponding orthogonal tensor projection with the orthogonal tensor projections learned from the images in the training set of images, to categorize the test image.
  • 6. The method as recited in claim 5, wherein the deriving further includes learning a set of orthogonal rank one projections, such that in a projective embedding space distances between instances of a person's face across multiple images are minimized, while distances between the person's face and corresponding faces of other people across multiple images are maximized.
  • 7. The method as recited in claim 1, wherein deriving the orthogonal tensor projections to recognize an image feature across different images includes extracting or deriving the image features from visual attributes of the image, wherein the visual attributes include pixel-level attributes and human perceptual-level attributes.
  • 8. The method as recited in claim 1, wherein deriving the orthogonal tensor projections includes pursuing a consecutive projection in a complement space of previous projections.
  • 9. The method as recited in claim 1, wherein deriving the orthogonal tensor projections further includes preserving an orthogonality among consecutively pursued tensor projections to increase abilities of a set of the orthogonal tensor projections to generalize and discriminate image features.
  • 10. The method as recited in claim 9, wherein the preserving the orthogonality of each projection within the set of orthogonal rank one tensor projections includes iteratively solving for an eigenvalue with orthogonality constraints on one dimension and solving for unconstrained eigenvalues on other dimensions.
  • 11. The method as recited in claim 1, further comprising rearranging a tensor in order to achieve a higher number of orthogonal tensor projections.
  • 12. The method as recited in claim 11, wherein the rearranging uses a GLOCAL schema.
  • 13. A system, comprising: a face recognition engine to represent faces in images as orthogonal rank one tensor projections; and a learning engine in the face recognition engine to learn a set of the orthogonal rank one tensor projections from a set of training images in order to differentiate faces.
  • 14. The system as recited in claim 13, wherein the learning engine includes a feature derivation engine to extract an image feature for differentiating faces via the orthogonal rank one tensor projections.
  • 15. The system as recited in claim 13, wherein the learning engine includes a sequential projection generator for consecutively pursuing orthogonal tensor projections in a complement space of previous projections.
  • 16. The system as recited in claim 13, wherein the learning engine includes a discriminative efficacy optimizer to learn the set of the orthogonal rank one tensor projections from the set of training images such that dissimilarities between different instances of a person's face are minimized across multiple images while dissimilarities between the person's face and faces of other people are maximized across multiple images.
  • 17. The system as recited in claim 16, wherein the learning engine includes an orthogonality engine to maintain orthogonality between consecutively pursued tensor projections in order to increase abilities of a set of the orthogonal tensor projections to generalize and discriminate image features.
  • 18. The system as recited in claim 17, wherein the orthogonality engine includes an eigenvalue engine to maintain the orthogonality by iteratively solving for an ortho-constrained eigenvalue in one dimension of a tensor and by iteratively solving for unconstrained eigenvalues in other dimensions of the tensor.
  • 19. The system as recited in claim 13, further comprising a tensor rearranger to achieve a higher number of orthogonal tensor projections.
  • 20. A system, comprising: means for representing images as tensors; andmeans for discriminating faces in the images via orthogonal tensor projections.