The present invention relates generally to image recognition. In particular, the present invention relates to a logic arrangement, data structure, system and method for acquiring data, and more particularly to a logic arrangement, data structure, system and method for acquiring data describing at least one characteristic of an object, synthesizing new data, recognizing acquired data and reducing the amount of data describing one or more characteristics of the object (e.g., a human being).
An important problem in data analysis for pattern recognition and signal processing is finding a suitable representation. For reasons of historical precedent and computational simplicity, linear models that optimally encode particular statistical properties of the data have been desirable. In particular, the linear, appearance-based face recognition method known as “Eigenfaces” is based on the principal component analysis (“PCA”) of facial image ensembles. See L. Sirovich et al., “Low dimensional procedure for the characterization of human faces,” Journal of the Optical Society of America A, 4:519-524, 1987, and M. A. Turk and A. P. Pentland, “Face recognition using eigenfaces,” Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586-590, Hawaii, 1991, both of which are hereby incorporated by this reference. The PCA technique encodes the pairwise relationships between pixels (the second-order statistics, or correlational structure, of the training image ensemble), but it ignores all higher-order pixel relationships (the higher-order statistical dependencies). In contrast, a generalization of the PCA technique known as independent component analysis (“ICA”) learns a set of statistically independent components by analyzing the higher-order dependencies in the training data in addition to the correlations. See A. Hyvarinen et al., Independent Component Analysis, Wiley, New York, 2001, which is hereby incorporated by this reference. However, the ICA technique does not distinguish between higher-order statistics that arise from the different factors inherent to image formation, namely factors pertaining to scene structure, illumination and imaging.
The ICA technique has been employed in face recognition and, like the PCA technique, it works best when person identity is the only factor that is permitted to vary. See M. S. Bartlett, “Face Image Analysis by Unsupervised Learning,” Kluwer Academic, Boston, 2001, and M. S. Bartlett et al., “Face recognition by independent component analysis,” IEEE Transactions on Neural Networks, 13(6):1450-1464, 2002, both of which are hereby incorporated by this reference. When additional factors such as illumination, viewpoint and expression are permitted to modify facial images, recognition rates can decrease dramatically. Multilinear analysis addresses this problem, but the specific recognition algorithm proposed in M. A. O. Vasilescu et al., “Multilinear analysis for facial image recognition,” Proc. Int. Conf. on Pattern Recognition, Quebec City, August 2002, was based on linear algebra and does not fully exploit the multilinear approach.
One of the objects of exemplary embodiments of the present invention is to overcome the above-described deficiencies. Another object of the present invention is to provide a method, system, storage medium, and data structure for generating an object descriptor.
According to an exemplary embodiment of the present invention, such method can include steps of computing a response of an image to a basis tensor, flattening the image response, extracting a coefficient vector from the image response, and comparing the extracted coefficient vector to a plurality of different parameters stored in rows of a coefficient matrix.
In another exemplary embodiment of the present invention, a computer system can be provided which includes a storage arrangement (e.g., a memory) and a processor which is capable of receiving data associated with an image and is provided in communication with the storage arrangement. The storage arrangement can store computer-executable instructions for performing a method of processing data. For example, a response of an image to a basis tensor may be determined, the image response can be flattened, a coefficient vector can be extracted from the image response, and the extracted coefficient vector may be compared to a plurality of different parameters stored in rows of a coefficient matrix.
In yet another exemplary embodiment of the present invention, a computer-readable medium is provided having stored thereon computer-executable instructions for performing a method. The method includes steps of computing a response of an image to a basis tensor, flattening the image response, extracting a coefficient vector from the image response, and comparing the extracted coefficient vector to a plurality of different parameters stored in rows of a coefficient matrix.
In yet another exemplary embodiment of the present invention, a method of processing data is provided. The method includes steps of applying a multilinear independent component analysis to image data to create a factorial code; and generating a representation of the data having a plurality of sets of coefficients that encode people, viewpoints, and illuminations, wherein each set is statistically independent.
In yet another exemplary embodiment of the present invention, a data structure is provided having a set of coefficient vectors for a target image. In particular, the vectors include an identifier of the target, a viewpoint of the target, and an illumination direction of the target.
Further objects, features and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying figures showing illustrative embodiments of the invention.
Throughout the figures, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present invention will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments. It is intended that changes and modifications can be made to the described embodiments without departing from the true scope and spirit of the subject invention as defined by the appended claims.
Exemplary embodiments of the present invention relate to, and may utilize, a nonlinear, multifactor model of image ensembles that generalizes the conventional ICA technique. Whereas the ICA technique employs linear (matrix) algebra, the exemplary Multilinear ICA (“MICA”) procedure according to the present invention uses multilinear (tensor) algebra. Unlike its conventional, linear counterpart, the MICA procedure is able to learn the interactions of the multiple factors inherent to image formation and separately encode the higher-order statistics of each of these factors. Unlike the multilinear generalization of “Eigenfaces,” referred to as “TensorFaces,” which encodes only the second-order statistics associated with the different factors inherent to image formation, the MICA procedure can also encode the higher-order dependencies associated with the different factors.
The multilinear ICA procedure of the exemplary embodiment of the present invention can be understood in the context of the mathematics of the PCA, multilinear PCA, and ICA techniques.
For example, the principal component analysis of an ensemble of I2 images can be determined by performing a singular value decomposition (“SVD”) on an I1×I2 data matrix D whose columns are the “vectored” I1-pixel “centered” images.
In a factor analysis of D, the SVD technique orthogonalizes these two spaces and decomposes the matrix as

D = UΣV^T, (1)

the product of an orthogonal column space represented by the left matrix U ∈ IR^(I1×J1), a diagonal singular value matrix Σ ∈ IR^(J1×J2), and an orthogonal row space represented by the right matrix V ∈ IR^(I2×J2).
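For illustration, the following is a minimal numpy sketch of this factor-analysis step (the dimensions and variable names are illustrative, not taken from the specification): it centers an ensemble of vectorized images and computes the decomposition of equation (1).

```python
import numpy as np

# Illustrative ensemble: I2 = 100 images, each with I1 = 32*32 = 1024 pixels.
rng = np.random.default_rng(0)
images = rng.random((1024, 100))           # columns are vectorized images

# "Centering": subtract the mean image from every column.
mean_image = images.mean(axis=1, keepdims=True)
D = images - mean_image                    # I1 x I2 data matrix of equation (1)

# D = U Sigma V^T; the columns of U are the PCA basis ("eigenfaces").
U, sigma, Vt = np.linalg.svd(D, full_matrices=False)

# Each image is represented by its coefficient vector in the PCA basis.
coefficients = np.diag(sigma) @ Vt         # one column of coefficients per image
reconstruction = U @ coefficients + mean_image
assert np.allclose(reconstruction, images)
```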
The analysis of an ensemble of images resulting from the confluence of multiple factors related to scene structure, illumination, and viewpoint is a problem of multilinear algebra. See M. A. O. Vasilescu et al., “Multilinear analysis of image ensembles: Tensorfaces,” In Proc. European Conf. on Computer Vision (ECCV 2002), pp. 447-460, Copenhagen, Denmark, May 2002. Within this mathematical framework, the image ensemble can be represented as a higher-order tensor. This image data tensor should be decomposed in order to separate and parsimoniously represent the constituent factors. To this end, an N-mode SVD procedure may be used as a multilinear extension of the above-mentioned conventional matrix SVD technique.
A tensor, also known as an n-way array, multidimensional matrix or n-mode matrix, is a higher-order generalization of a vector (first-order tensor) and a matrix (second-order tensor). The description and use of tensors is described in greater detail in International patent application publication no. WO 03/055119A3, filed Dec. 6, 2002, entitled “Logic Arrangement, Data Structure, System and Method for Multilinear Representation of Multimodal Data Ensembles for Synthesis, Recognition and Compression,” which is hereby incorporated by this reference as though set forth fully herein. For example, a tensor can be defined as a multilinear mapping over a set of vector spaces, and an Nth-order tensor can be denoted 𝒜 ∈ IR^(I1×I2×···×IN).
In tensor terminology, column vectors can be referred to as mode-1 vectors, and row vectors are referred to as mode-2 vectors. The mode-n vectors of an Nth-order tensor 𝒜 ∈ IR^(I1×I2×···×IN) are the In-dimensional vectors obtained from 𝒜 by varying the index in while keeping the other indices fixed; they are the column vectors of the matrix A(n) that results from flattening the tensor 𝒜 in mode n. The n-rank of 𝒜, denoted Rn, is the dimension of the vector space generated by the mode-n vectors:

Rn = rank_n(𝒜) = rank(A(n)).
A generalization of the product of two matrices can be the product of a tensor and a matrix. The mode-n product of a tensor 𝒜 ∈ IR^(I1×···×In×···×IN) by a matrix M ∈ IR^(Jn×In), denoted 𝒜 ×n M, is the tensor ℬ ∈ IR^(I1×···×In−1×Jn×In+1×···×IN) whose entries are obtained by summing products over the shared index in:

b_(i1···in−1 jn in+1···iN) = Σ_(in) a_(i1···in···iN) m_(jn in).

The mode-n product can be expressed as ℬ = 𝒜 ×n M, or in terms of flattened matrices as B(n) = M A(n). The mode-n product of a tensor and a matrix is a special case of the inner product in multilinear algebra and tensor analysis. The mode-n product is often denoted using Einstein summation notation, but for purposes of clarity, the mode-n product symbol can be used. The mode-n product may have the following properties:
1. Given a tensor 𝒜 ∈ IR^(I1×···×In×···×Im×···×IN) and two matrices U ∈ IR^(Jn×In) and V ∈ IR^(Jm×Im), the order of products in distinct modes (n ≠ m) is irrelevant:

(𝒜 ×n U) ×m V = (𝒜 ×m V) ×n U = 𝒜 ×n U ×m V.

2. Given a tensor 𝒜 ∈ IR^(I1×···×In×···×IN) and two matrices U ∈ IR^(Jn×In) and V ∈ IR^(Kn×Jn), successive products in the same mode compose as

(𝒜 ×n U) ×n V = 𝒜 ×n (VU),

as checked numerically in the sketch below.
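The following minimal numpy sketch implements mode-n flattening and the mode-n product, and verifies property 2. The moveaxis-then-reshape column ordering is one common flattening convention, which the text above does not fix; all names are illustrative.

```python
import numpy as np

def flatten(A, n):
    """Mode-n flattening: the mode-n vectors of A become the columns of A_(n)."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_n_product(A, M, n):
    """Mode-n product B = A x_n M, computed as B_(n) = M @ A_(n), then refolded."""
    rest = [s for i, s in enumerate(A.shape) if i != n]
    B_flat = M @ flatten(A, n)
    return np.moveaxis(B_flat.reshape([M.shape[0]] + rest), 0, n)

# Numerical check of property 2: (A x_n U) x_n V == A x_n (V @ U).
rng = np.random.default_rng(0)
A = rng.random((3, 4, 5))
U = rng.random((6, 4))
V = rng.random((2, 6))
lhs = mode_n_product(mode_n_product(A, U, 1), V, 1)
rhs = mode_n_product(A, V @ U, 1)
assert np.allclose(lhs, rhs)
```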
An Nth-order tensor 𝒜 ∈ IR^(I1×I2×···×IN) has rank 1 when it can be expressed as the outer product of N vectors, 𝒜 = u1 ∘ u2 ∘ ··· ∘ uN. The rank of an Nth-order tensor, denoted R = rank(𝒜), is the minimal number of rank-1 tensors that yield 𝒜 in a linear combination.
A singular value decomposition (SVD) can be expressed as a rank decomposition, as is shown in the following simple expansion:

D = UΣV^T = Σ_(i=1)^R σi ui ∘ vi,

a sum of R mutually orthogonal rank-1 matrices. It should be noted that an SVD is a combinatorial orthogonal rank decomposition, but the reverse is not true; in general, a rank decomposition is not necessarily a singular value decomposition. Also, the N-mode SVD can be expressed as an expansion of mutually orthogonal rank-1 tensors, as follows:

𝒟 = Σ_(i1=1)^(R1) ··· Σ_(iN=1)^(RN) z_(i1···iN) U1^(i1) ∘ U2^(i2) ∘ ··· ∘ UN^(iN),

where Un^(in) is the in-th column vector of the matrix Un and z_(i1···iN) is the corresponding entry of the core tensor 𝒵. This is analogous to the matrix rank decomposition above.
For example, an order N > 2 tensor, or N-way array, is an N-dimensional matrix comprising N spaces. The N-mode SVD is a “generalization” of the conventional matrix (i.e., 2-mode) SVD. It orthogonalizes these N spaces and decomposes the tensor as the mode-n product, denoted ×n, of N orthogonal spaces, as follows:

𝒟 = 𝒵 ×1 U1 ×2 U2 ··· ×n Un ··· ×N UN. (2)
Tensor 𝒵, known as the core tensor, is analogous to the diagonal singular value matrix in conventional matrix SVD (although it does not have a simple, diagonal structure). Using mode-n products, the conventional SVD in equation (1) can be rewritten as D = Σ ×1 U ×2 V. The core tensor governs the interaction between the mode matrices U1, . . . , UN. Mode matrix Un contains the orthonormal vectors spanning the column space of the matrix D(n) resulting from the mode-n flattening of 𝒟.
An N-mode SVD technique can be used for decomposing 𝒟 according to equation (2):
1. For n = 1, . . . , N, compute matrix Un in equation (2) by computing the SVD of the flattened matrix D(n) and setting Un to be the left matrix of the SVD. When D(n) is a non-square matrix, the computation of Un in the singular value decomposition (“SVD”) D(n) = Un Σ Vn^T can be performed efficiently, depending on which dimension of D(n) is smaller, by decomposing either D(n) D(n)^T = Un Σ^2 Un^T and then computing Vn^T = Σ^+ Un^T D(n), or by decomposing D(n)^T D(n) = Vn Σ^2 Vn^T and then computing Un = D(n) Vn Σ^+.
2. Solve for the core tensor as follows:

𝒵 = 𝒟 ×1 U1^T ×2 U2^T ··· ×n Un^T ··· ×N UN^T. (3)
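Under the assumption that the flatten and mode_n_product helpers from the earlier sketch are in scope, this two-step N-mode SVD procedure can be sketched as follows (an illustration, not the specification's implementation):

```python
import numpy as np

def n_mode_svd(D):
    """N-mode SVD: returns mode matrices U1..UN and core tensor Z such that
    D = Z x_1 U1 x_2 U2 ... x_N UN (equations (2) and (3))."""
    # Step 1: Un is the left matrix of the SVD of the mode-n flattening D_(n).
    Us = [np.linalg.svd(flatten(D, n), full_matrices=False)[0]
          for n in range(D.ndim)]
    # Step 2: Z = D x_1 U1^T x_2 U2^T ... x_N UN^T (equation (3)).
    Z = D
    for n, U in enumerate(Us):
        Z = mode_n_product(Z, U.T, n)
    return Us, Z

# Round trip: rebuild D from the core tensor and mode matrices (equation (2)).
rng = np.random.default_rng(0)
D = rng.random((5, 4, 3))
Us, Z = n_mode_svd(D)
D_hat = Z
for n, U in enumerate(Us):
    D_hat = mode_n_product(D_hat, U, n)
assert np.allclose(D, D_hat)
```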
A dimensionality reduction in the linear case does not have a trivial multilinear counterpart. A useful generalization to tensors can involve an optimal rank-(R1, R2, . . . , RN) approximation which iteratively optimizes each of the modes of the given tensor, where each optimization step involves a best reduced-rank approximation of a positive semi-definite symmetric matrix. See L. de Lathauwer et al., “On the best rank-1 and rank-(R1, R2, . . . , RN) approximation of higher order tensors,” SIAM Journal on Matrix Analysis and Applications, 21(4):1324-1342, 2000. This technique is a higher-order extension of the orthogonal iteration for matrices.
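A minimal sketch of this alternating optimization (a higher-order orthogonal iteration), reusing the flatten and mode_n_product helpers from the earlier sketch; the fixed iteration count, initialization, and absence of a convergence test are illustrative simplifications.

```python
import numpy as np

def rank_r_approx(D, ranks, n_iters=10):
    """Best rank-(R1, ..., RN) approximation of tensor D by alternating
    per-mode updates, in the spirit of de Lathauwer et al."""
    # Initialize each mode matrix from the truncated mode-n SVD.
    Us = [np.linalg.svd(flatten(D, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]
    for _ in range(n_iters):
        for n in range(D.ndim):
            # Project D onto the current subspaces of all other modes ...
            P = D
            for m in range(D.ndim):
                if m != n:
                    P = mode_n_product(P, Us[m].T, m)
            # ... then keep the dominant R_n left singular vectors in mode n.
            Us[n] = np.linalg.svd(flatten(P, n),
                                  full_matrices=False)[0][:, :ranks[n]]
    core = D
    for n in range(D.ndim):
        core = mode_n_product(core, Us[n].T, n)
    return Us, core
```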
The independent component analysis (“ICA”) of multivariate data looks for a sequence of projections such that the projected data appear as far from Gaussian as possible. The ICA technique can be applied in two exemplary ways. Architecture I applies the ICA technique to D^T, each of whose rows is a different vectorized image, and finds a spatially independent basis set that reflects the local properties of faces. Architecture II, on the other hand, applies the ICA technique to D and finds a set of coefficients that are statistically independent while the basis reflects the global properties of faces.
Architecture I: the ICA technique starts essentially from the factor analysis or PCA solution shown in equation (1) of a pre-whitened data set, and computes a rotation of the principal components such that they become independent components. See J. Friedman et al., “The Elements of Statistical Learning: Data Mining, Inference, and Prediction,” Springer, New York, 2001. That is, the ICA technique rotates the principal component directions in equation (1) as follows:

D^T = VΣ^T U^T (4)
    = VΣ^T (W^{-1} W) U^T (5)
    = (VΣ^T W^{-1}) (W U^T) (6)
    = K^T C^T, (7)

so that

D = CK, (8)

where every column of D is a different image, W is an invertible transformation matrix that is computed by the ICA technique, C = UW^T are the independent components, and K = W^{-T}ΣV^T are the coefficients.
Alternatively, in architecture II, the ICA technique can be applied to D, rotating the principal component directions such that the coefficients are statistically independent, as follows:

D = UΣV^T (9)
  = (UW^{-1}) (WΣV^T) (10)
  = CK, (11)

where C = UW^{-1} is the basis and K = WΣV^T are the statistically independent coefficients. Note that C, K and W are computed differently in the two architectures.
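As an illustration of the two architectures, the following sketch applies scikit-learn's FastICA (one possible ICA estimator; the data and dimensions are illustrative) to a synthetic matrix D whose columns are vectorized images. Note that scikit-learn treats rows as observations, so architecture I (pixel locations as observations) sees D itself, while architecture II (images as observations) sees D transposed.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
D = rng.random((1024, 100))     # illustrative: columns are 1024-pixel images

# Architecture I: spatially independent basis images. The basis images are
# the random variables and the pixel locations are the observations.
ica1 = FastICA(n_components=20, random_state=0)
C1 = ica1.fit_transform(D)      # (1024, 20) spatially independent basis
K1 = ica1.mixing_.T             # (20, 100) coefficients: D ~ C1 @ K1 + mean

# Architecture II: statistically independent coefficients. Here the images
# are the observations, so FastICA sees the transpose of D.
ica2 = FastICA(n_components=20, random_state=0)
K2 = ica2.fit_transform(D.T).T  # (20, 100) independent coefficient vectors
C2 = ica2.mixing_               # (1024, 20) basis reflecting global properties
```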
As indicated above, the exemplary MICA procedure according to the present invention may be implemented in the exemplary embodiments of the present invention. For example, the MICA procedure applied in architecture I may result in a factorial code: it locates a representation in which each set of coefficients that encodes people, viewpoints, illuminations, etc., is statistically independent. Architecture II finds a set of independent bases across people, viewpoints, illuminations, etc.
Architecture I: Transposing the flattened data tensor 𝒟 in the nth mode and computing the ICA as in equations (4)-(8):

D(n)^T = (Vn Σn^T Wn^{-1}) (Wn Un^T) = Kn^T Cn^T, (12)

where Cn = Un Wn^T. Thus, the N-mode ICA can be derived from the N-mode SVD of equation (2) as follows:

𝒟 = 𝒵 ×1 U1 ··· ×N UN (13)
  = 𝒵 ×1 (C1 W1^{-T}) ··· ×N (CN WN^{-T}) (14)
  = 𝒮 ×1 C1 ··· ×N CN, (15)

where the core tensor 𝒮 = 𝒵 ×1 W1^{-T} ··· ×N WN^{-T}. The rows associated with each of the mode matrices Ci, where i = 1, . . . , N, are statistically independent.
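A sketch of this architecture I computation, again reusing the helpers above and scikit-learn's FastICA as one possible estimator. Recovering each Cn as the ICA mixing matrix of the transposed mode-n flattening is one reasonable reading of equation (12), not the specification's prescribed implementation; computing the core via pseudo-inverses is likewise an illustrative shortcut.

```python
import numpy as np
from sklearn.decomposition import FastICA

def multilinear_ica(D):
    """Architecture I MICA sketch: decompose D as S x_1 C1 ... x_N CN
    (equation (15)), one mode-wise ICA per mode of D."""
    Cs = []
    for n in range(D.ndim):
        # D_(n)^T = Kn^T Cn^T (equation (12)): the mode-n mixing matrix
        # recovered by ICA plays the role of Cn.
        ica = FastICA(n_components=D.shape[n], random_state=0, max_iter=2000)
        ica.fit(flatten(D, n).T)
        Cs.append(ica.mixing_)                 # square (I_n x I_n) here
    # Counter-rotate the core so the mode products reconstruct D
    # (exact when each Cn is square and invertible).
    S = D
    for n, C in enumerate(Cs):
        S = mode_n_product(S, np.linalg.pinv(C), n)
    return Cs, S

rng = np.random.default_rng(0)
D = rng.random((5, 4, 3))
Cs, S = multilinear_ica(D)
D_hat = S
for n, C in enumerate(Cs):
    D_hat = mode_n_product(D_hat, C, n)
print(np.allclose(D, D_hat))   # True up to numerical precision
```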
A multilinear ICA decomposition can be performed on the tensor 𝒟 of vectored training images d_d,

𝒟 = ℬ ×1 Cpeople ×2 Cviews ×3 Cillums, (16)

extracting a set of mode matrices (the matrix Cpeople containing row vectors cp^T of coefficients for each person p, the matrix Cviews containing row vectors cv^T of coefficients for each view direction v, and the matrix Cillums containing row vectors cl^T of coefficients for each illumination direction l) and an MICA basis tensor ℬ = 𝒮 ×4 Cpixels that governs the interaction between the different mode matrices.
For architecture I, each of the mode matrices contains a set of statistically independent coefficients, while architecture II yields ℬ, a set of independent bases across people, viewpoints, illuminations, etc.
Architecture II: MICA has the same mathematical form as in equation (15). However, the core tensor 𝒮 and the mode matrices C1, . . . , CN are computed according to equations (9)-(11). This architecture results in a set of basis vectors that are statistically independent across the different modes.
In the PCA or eigenface technique, a data matrix D of known “training” facial images d_d is decomposed into a reduced-dimensional basis matrix B_PCA and a matrix C containing a vector of coefficients c_d associated with each vectored image d_d. Given an unknown facial image d_new, the projection operator B_PCA^{-1} linearly projects this new image into the reduced-dimensional space of image coefficients: c_new = B_PCA^{-1} d_new.
The recognition procedure described in the reference immediately following was based on this linear projection approach, and thus it does not fully exploit the multilinear framework. See M. A. O. Vasilescu et al., “Multilinear analysis for facial image recognition,” Proc. Int. Conf. on Pattern Recognition, Quebec City, August 2002. One exemplary embodiment of the present invention addresses the fundamental problem of inferring the identity, illumination, viewpoint, and expression labels of an unlabeled test image. Given a solution to this problem, a simple recognition algorithm is obtained that is based on the statistical independence properties of the ICA technique and the multilinear structure of the tensor framework.
In the tensor framework, each image can be written as a product of the basis tensor and a set of coefficient vectors,

d = ℬ ×1 cp^T ×2 cv^T ×3 cl^T.

The first coefficient vector encodes the person's identity, the second encodes the viewpoint, the third encodes the illumination direction, etc.
Given an unknown test image d_new, its multimodal response to the MICA basis can be computed as

R(pixels) = (Cpixels S(pixels))^{-1} d_new, (17)

where ℛ is a multimodal response to the different factors that make up the image. This tensorial response has a particular structure: flattened along the people mode, it becomes

R(people) = [l1v1 cp ··· l1vn cp ··· lnv1 cp ··· lnvn cp], (18)

a matrix each of whose columns is the person coefficient vector cp scaled by scalar components of the illumination and view coefficient vectors.
The image response can be reorganized as a matrix whose columns are multiples of the people parameters cp, as in equation (18). This reorganization is achieved by flattening ℛ along the people mode. On closer inspection, the matrix R(people) has rank 1; its columns are multiples of cp. Therefore, the people rank of ℛ is 1, and an SVD of R(people) can extract cp. Similarly, flattening along the viewpoint mode or the illumination mode yields the matrices R(viewpoints) and R(illumination), whose columns are multiples of cv and cl, respectively. These coefficient vectors (the viewpoint coefficient vector, the illumination coefficient vector, etc.) are extracted by computing a singular value decomposition of the respective matrices.
Therefore, as indicated above, all of the constituent factors, and all of the coefficient vectors, associated with a test image can be extracted by computing the N-mode SVD on the multimodal response tensor ℛ, whose rank-(R1, R2, . . . , RN) = rank-(1, 1, . . . , 1).
Using the extracted cp, it is possible to perform individual recognition: cp can be compared against the people parameters that are stored in the rows of Cpeople using a cosine function, which is equivalent to a normalized nearest-neighbor measure. Assuming one of the image factors is facial expression, expression recognition can be performed similarly; in this manner, all of the factors associated with an image can be recognized. In one exemplary embodiment of the present invention, the exemplary procedure can be applied to gray-level facial images of 75 subjects, each imaged from 15 different viewpoints (θ = −35° to +35° in 5° steps on the horizontal plane, φ = 0°) under 15 different illuminations (θ = −35° to +35° in 5° steps on an inclined plane, φ = 45°).
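To make the recognition procedure concrete, the following is a minimal sketch, reusing the flatten helper above. The dimensions, variable names, and the precomputed pseudo-inverse B_pinv of Cpixels S(pixels) are illustrative assumptions rather than values from the specification.

```python
import numpy as np

def recognize_person(d_new, B_pinv, mode_shape, C_people):
    """Sketch of the recognition procedure of equations (17)-(18): project
    the new image through the inverted basis, refold the multimodal
    response, flatten along the people mode, and extract the person
    coefficient vector by SVD of the resulting rank-1 matrix."""
    r = B_pinv @ d_new                 # multimodal response, equation (17)
    R = r.reshape(mode_shape)          # refold to people x views x illums
    R_people = flatten(R, 0)           # rank 1: columns are multiples of c_p
    c_p = np.linalg.svd(R_people, full_matrices=False)[0][:, 0]
    # Cosine similarity (normalized nearest neighbor) against stored rows;
    # the absolute value is taken because a singular vector's sign is arbitrary.
    sims = C_people @ c_p
    sims /= np.linalg.norm(C_people, axis=1) * np.linalg.norm(c_p)
    return int(np.argmax(np.abs(sims)))

# Illustrative usage with hypothetical dimensions (10 people, 15 views,
# 15 illuminations, 1024 pixels); B_pinv would be pinv(Cpixels @ S_(pixels)).
rng = np.random.default_rng(0)
B_pinv = rng.random((10 * 15 * 15, 1024))
C_people = rng.random((10, 10))
person = recognize_person(rng.random(1024), B_pinv, (10, 15, 15), C_people)
```

Flattening ℛ along the view or illumination mode and taking the dominant left singular vector would extract cv and cl in the same way.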
In summary, the Independent Component Analysis (“ICA”) technique may minimize the statistical dependence of the representational components of a training image ensemble. However, the ICA technique generally is not able to distinguish between the different factors related to scene structure, illumination and imaging, which are inherent to image formation. A nonlinear, multifactor ICA procedure according to the present invention can be utilized (as described above) that generalizes the ICA technique. For example, the exemplary Multilinear ICA (“MICA”) procedure according to the present invention can learn the statistically independent components of the multiple factors of an image ensemble. Whereas the ICA technique employs linear (matrix) algebra, the MICA procedure according to the present invention generally exploits multilinear (tensor) algebra. In the context of facial image ensembles, the statistical regularities learned by the exemplary MICA procedure can capture information that improves automatic face recognition. In this context, an issue fundamental to the multilinear framework for recognition is also addressed: the inference of the mode labels (person, viewpoint, illumination, expression, etc.) of an unlabeled test image.
While the invention has been described in connection with preferred embodiments, it will be understood by those of ordinary skill in the art that other variations and modifications of the preferred embodiments described above may be made without departing from the scope of the invention. Other embodiments will be apparent to those of ordinary skill in the art from a consideration of the specification or practice of the invention disclosed herein. It is intended that the specification and the described examples be considered as exemplary only, with the true scope and spirit of the invention indicated by the following claims. Additionally, all references cited herein are hereby incorporated by this reference as though set forth fully herein.
The present application claims priority from U.S. Patent Application Ser. No. 60/536,210, filed Jan. 13, 2004, entitled “Face Recognition Using Multilinear Independent Component Analysis,” the entire disclosure of which is incorporated herein by reference.