The field of the disclosure is that of the processing of images and image sequences, such as video sequences. More specifically, the disclosure relates to a technique for the recognition of faces from a set of facial images of one or more persons.
The disclosure can be applied especially but not exclusively in the fields of biometrics, video surveillance or video indexing in which it is important to recognize a face from a still image or a video sequence (for example to authorize a recognized person to obtain access to a protected place).
There are several techniques to date for face recognition from sequences of still or moving images. These techniques classically rely on a first learning phase, in which a learning base is built out of facial images of different persons (possibly extracted from learning video sequences), and on a second phase of recognition, during which the images of the learning base are used to recognize a person.
These techniques generally use statistical methods for the computation, on the basis of the learning base, of a description space in which the similarity between two faces is evaluated. The goal then is to express the notion of resemblance between two faces as faithfully as possible as a simple notion of spatial proximity between the projections of the faces in the description space.
The main differences between the different existing techniques lie in the processing performed during the recognition phase.
Thus, A. W. Senior in “Recognizing Faces in Broadcast Video”, Proc. of Int. Workshop on Recognition, Analysis and Tracking of Faces and Gestures in Real Time Systems, Corfu, Greece, September 1999, pp. 105-110, proposes the use, during the recognition phase, of either all the facial images extracted from a video sequence or a single key facial image, namely the one to which the face detector has assigned the highest confidence score.
In another approach, A. Hadid and M. Pietikäinen in “From Still Image to Video-Based Face Recognition: An Experimental Analysis”, Proc. of 6th Int. Conf. on Automatic Face and Gesture Recognition, Seoul, Korea, May 2004, pp. 813-818, for their part propose selecting key images from the video sequence without analyzing the faces that they contain, and then performing the recognition considering solely the faces extracted from the key images. Since each face returns a different result, a classic procedure of a posteriori merging of the results is then used.
Finally, E. Acosta et al., for their part, in “An Automatic Face Detection and Recognition System for Video Indexing Applications”, Proc. of the Int. Conf. on Acoustic Speech and Signal Processing (vol. 4), Orlando, Fla., May 2002, pp. IV-3644-IV-3647, use all the faces extracted from the query video sequence during the recognition. To evaluate the proximity between the query and the model of one of the persons stored in the learning base, a measurement of similarity is computed between each facial image extracted from the query sequence and the model. The final value of the similarity is the median value of all the measurements computed, which amounts to considering only one facial image from among all those that had been extracted.
These different techniques of the prior art all rely on statistical methods enabling the building of a description space in which facial images are projected. Now these projections must be capable of absorbing the variations that may affect the facial images, i.e. they must be capable of highlighting the resemblances between facial images despite variations that may affect the images.
These variations may be of two types. There are first of all variations inherent in changes in facial expression (for example smiling) and forms of concealment (e.g. wearing glasses, a beard, etc.). Then, there are variations due to the conditions of acquisition of the image (e.g. lighting conditions) and to the segmentation of the face (i.e. the extraction and centering of the image portion containing the face).
While the prior art methods for the recognition of faces are efficient when the facial images are well framed and taken in good lighting conditions, their performance deteriorates sharply when the facial images used for learning or during recognition are not very well aligned (i.e. the different attributes of the faces, the eyes, the mouth, the nose etc. are not in the same place in all the facial images) and/or are not of good quality.
Now, in the context of facial recognition from video sequences, these conditions of alignment and high quality of the facial images are generally not verified. On the one hand, the acquisition of the sequences is not subjected to very great constraints, and the person to be recognized does not generally remain in a frontal position facing the camera throughout the acquisition time. On the other hand, the facial images are extracted automatically from the video sequences by means of face detection techniques, which may generate false detections and are imprecise in terms of framing. The facial images used in this context may therefore be of poor quality, badly framed, and may include false detections.
The inventors of the present patent application have therefore identified the fact that one of the major drawbacks of existing methods for the recognition of faces from video sequences lies in the fact that the quality of the facial images used is not taken into account.
Thus, for example, all the facial images available (for example all the facial images extracted from video sequences) are routinely taken into account during the learning stage. This considerably reduces the performance of these techniques, because the statistical methods (of the PCA or principal component analysis type) used for face recognition are extremely sensitive to noise, since they rely on the computation of a covariance matrix (i.e. first and second order moments).
Similarly, according to these prior art methods, the choice of the facial images used during the recognition phase is not optimal. Now, the choice of these images strongly influences the performance of these face recognition techniques: they have to be well framed and of good quality. However, none of the prior art methods referred to here above proposes a mode of selection of the images that takes account of their “quality”.
An embodiment of the invention relates to a method of identification of at least one face from a group of at least two facial images associated with at least one person, said method comprising a phase of learning and a phase of recognition of said at least one face.
According to an embodiment of the invention, the learning phase comprises at least one first step of filtering said images, using a group of at least two learning facial images associated with said at least one person, enabling the selection of at least one learning image representing said face to be identified, the recognition phase using solely said learning images selected during the learning phase. The filtering is done using at least one of the thresholds belonging to the group comprising:
a maximum orthogonal distance, written DOmax;
a maximum robust distance to the center, written DRCmax.
Thus, an embodiment of the invention relies on a wholly novel and inventive approach to face recognition from still images or images extracted from video sequences. Indeed, an embodiment of the invention proposes not to take account of the whole set of available facial images to identify the face of a person, but to carry out a filtering of the images in order to select solely good-quality images, i.e. images representative of the face to be identified (because the face is in a frontal pose, is well framed, etc.). This filtering is done by means of one or two filtering thresholds, which bear on the robust distance to the center or DRC and/or the orthogonal distance or DO. A filtering of this kind is done on the vectors associated with the images and, after analysis of the distribution and statistical properties of these vectors, enables the detection and isolation of the aberrant vector or vectors. It is based on the assumption that the majority of the images available are good-quality images. This enables the identification, as aberrant vectors, of all the vectors that do not follow the distribution properties of the set of available vectors and that are therefore associated with lower-quality images, or in any case with images poorly representative of the face to be identified.
The robust distance to the center or DRC takes account of the distance of a vector from the center of the cloud of vectors and of the membership of the vector considered in this cloud. The orthogonal distance or DO is the distance between a vector and the vector obtained after projection of the original vector in a space associated with the cloud of vectors, followed by inverse projection.
Thus, unlike in the methods of the prior art in which all the available images were systematically taken into account during the learning process, an embodiment of the invention proposes the selection of only a part of the learning images, as a function of their quality, so as to keep only the most representative facial images.
According to a first advantageous characteristic of an embodiment of the invention, at least one of said thresholds is determined from vectors associated with said learning images.
Advantageously, said learning phase also comprises a step of building a vector space of description of said at least one person from said representative learning image or images. This building step uses a technique belonging to the group comprising:
principal component analysis (PCA);
linear discriminant analysis (LDA);
two-dimensional principal component analysis (2DPCA);
two-dimensional linear discriminant analysis (2DLDA).
In a second advantageous characteristic of an embodiment of the invention, said recognition phase implements a second filtering step, using a group of at least two facial images associated with said at least one person, called query images, and enables the selection of at least one query image representing said face to be identified, at least one of said thresholds being determined during said learning phase from vectors associated with learning facial images.
Thus, the query images are filtered as a function of their quality so as to carry out the recognition only on the basis of the least noisy and most representative faces. Thus, facial identification performance is considerably improved as compared with performance in prior art techniques. This second filtering done during the recognition phase is thus complementary to the first filtering done during the learning phase. Furthermore, it is particularly advantageous to use the thresholds computed during the learning phase because the learning images are generally of higher quality than the query images owing to their conditions of acquisition.
In one variant of an embodiment of the invention, at least one of said thresholds is determined during said recognition phase, using vectors associated with a set of images comprising at least two facial images associated with said at least one person, called query images and at least two learning images representing said face to be identified, selected during said learning phase, and said recognition phase implements a second filtering step, using said query images, and enables the selection of at least one query image representative of said face to be identified.
Thus, both the least noisy learning images and the least noisy query images are selected, greatly improving face recognition performance as compared with the prior art techniques.
In this variant, filtering is carried out also on the query images during the recognition phase in using the results of the learning phase but this time in the form of learning images representing the face or faces to be identified and no longer in the form of thresholds.
Preferably, said recognition phase also includes a step of comparison of projections, in a vector space of description of said at least one person built during said learning phase, of vectors associated with said at least one representative query image and with at least one representative learning image selected during said learning phase so as to identify said face. The notion of resemblance between two faces is then expressed as a simple notion of spatial proximity between the projections of the faces in the description space.
During this comparison step:
the vector associated with each representative query image is projected in the description space;
among the projected vectors associated with the representative learning images, the one closest to this projection is searched for, and the query image votes for the person associated with it;
the votes obtained for the different representative query images are merged so as to identify said face.
Preferably, said first step of filtering said learning images and/or said second step of filtering said query images apply said two thresholds, namely DOmax and DRCmax (computed for all the images or sequence by sequence).
For a preferred application of an embodiment of the invention, at least certain of said images are extracted from at least one video sequence by implementation of a face detection algorithm well known to those skilled in the art.
The identification method of an embodiment of the invention also comprises a step of resizing said images so that said images are all of the same size. More specifically, in the presence of an image or a video sequence, a face detector enables the extraction of a facial image of a fixed size (all the images coming from the detector are thus of a same size). Then, during the processing of this facial image of a fixed size, a first resizing is performed on the image during the filtering of the learning phase so as to reduce its size. This averts the need to take account of the details and removes the noise (for example, only one in every three pixels of the original image is kept). A second resizing of the image is also done during the building of the description space.
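By way of illustration only, the first resizing described above may be sketched as follows in Python/NumPy (a minimal sketch under the "one pixel in three" example given above; the function name and the example sizes are ours):

```python
import numpy as np

def decimate_face(face: np.ndarray, step: int = 3) -> np.ndarray:
    """Keep one pixel in every `step` along each axis.

    Discarding fine detail in this way acts as a crude low-pass filter,
    which is the stated goal of the first resizing."""
    return face[::step, ::step]

# Example: a detector output of fixed size, reduced before the filtering step.
face = np.random.rand(84, 93)   # hypothetical fixed-size detector output
small = decimate_face(face)     # -> 28 x 31, the size used in section 1.5 below
```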
Advantageously, said vectors associated with said images are obtained by concatenation of rows and/or columns of said images.
According to a first advantageous variant of an embodiment of the invention, said learning phase being implemented for learning images associated with at least two persons, said thresholds associated with the learning images of each of said at least two persons are determined and, during said recognition phase, said query images are filtered using said thresholds associated with each of said at least two persons. There are then as many thresholds DO(j)max and DRC(j)max computed as there are persons j in the learning base.
According to a second advantageous variant of an embodiment of the invention, said learning phase being implemented for learning images associated with at least two persons, said thresholds associated with the learning images of the set of said at least two persons are determined and, during said recognition phase, said query images are filtered using said thresholds associated with the set of said at least two persons. Then, only two thresholds DOmax and DRCmax are computed for the set of persons of the learning base.
According to an advantageous characteristic of an embodiment of the invention, said thresholds DOmax and DRCmax are determined at the end of a Robust Principal Component Analysis (RobPCA) applied to said vectors associated with said learning images, enabling the determining also of a robust mean μ associated with said vectors, and a projection matrix P built from eigen vectors of a robust covariance matrix associated with said vectors,
and said thresholds are associated with the following distances:
the orthogonal distance DOi = ∥xi − μ − Pd×k yit∥ of a vector xi associated with one of said learning images, where yi is the projection of xi;
the robust distance to the center DRCi = √(Σj=1 . . . k yij2/lj), where l1, . . . , lk are the eigenvalues of the robust covariance matrix.
The values of DOmax and DRCmax are determined by analysis of the distribution of the values of DOi and DRCi for the set of vectors xi.
It will be noted that, throughout this document, the following notations are used:
S(j) denotes a video sequence associated with the person of index j;
Ii(j) denotes the i-th facial image extracted from the sequence S(j);
vi(j) denotes the vector associated with the image Ii(j).
An embodiment of the invention also pertains to a system for the identification of at least one face from a group of at least two facial images associated with at least one person, said system comprising a learning device and a device for the recognition of said at least one face.
In such a system, the learning device comprises means for determining at least one of the thresholds belonging to the group comprising:
a maximum orthogonal distance, written DOmax;
a maximum robust distance to the center, written DRCmax.
An embodiment of the invention also pertains to a learning device of a system for the identification of at least one face from a group of at least two facial images associated with at least one person.
Such a device comprises:
means of analysis of said learning images that make it possible, using vectors associated with said learning images, to determine at least one of the thresholds belonging to the group comprising:
a maximum orthogonal distance, written DOmax,
a maximum robust distance to the center, written DRCmax;
first means of filtering said learning images, using at least one of said thresholds, so as to select at least one learning image representing said face to be identified;
means of building a vector space of description of said at least one person from said representative learning image or images,
so that only said learning images selected by said learning device are used by a recognition device.
An embodiment of the invention also pertains to a device for the recognition of at least one face from a group of at least two facial images associated with at least one person, called query images, said recognition device belonging to a system of identification of said at least one face also comprising a learning device.
A recognition device of this kind comprises:
second means of filtering said query images, enabling the selection of at least one query image representing said face to be identified;
means of comparison of projections, in a vector space of description built by the learning device, of vectors associated with said at least one representative query image and with at least one representative learning image, so as to identify said face.
An embodiment of the invention also relates to a computer program comprising program code instructions for the execution of the learning phase of the method of identification of at least one face described here above when said program is executed by a processor.
An embodiment of the invention finally concerns a computer program comprising program code instructions for the execution of the steps of the phase of recognition of the method of identification of at least one face described here above when said program is executed by a processor.
Other features and advantages shall appear more clearly from the following description of a preferred embodiment, given by way of a simple illustrative and non-restrictive example and from the appended drawings.
The general principle of an embodiment of the invention relies on the selection of a subset of images to be used during the learning phase and/or the recognition phase, by the use of a Robust Principal Component Analysis or RobPCA. An embodiment of the invention can be used, for example, to isolate the noisy facial images during the learning and to deduce parameters enabling the facial images to be filtered also during the recognition. This enables a description space to be built without taking account of the noise, and the recognition to be done on the basis of several examples of facial images that are also non-noisy. The proposed approach thus enables a considerable increase in the recognition rates, as compared with an approach that would take account of all the images of the sequence.
We shall strive, throughout the rest of the document, to describe an example of an embodiment of the invention in the context of the recognition of faces from video sequences both during the learning phase and during the recognition phase. An embodiment of the invention can be applied, naturally, also to the recognition of facial images from a set of still images obtained for example by means of a camera in burst mode.
Furthermore, we shall strive to describe a particular embodiment in which the noisy images are filtered both during the learning phase and during the recognition phase, in which the results of the learning phase are used. These two phases may of course also be implemented independently of each other.
It is of course possible, although infrequent, that no image is of a quality good enough to be kept as a representative image during the filtering. It is then necessary to select at least one image, according to a criterion to be defined: for example it can be chosen to select the first image of the sequence.
Here below, these different main steps are presented in greater detail.
Each person 40 (also identified by the index j) has an associated video sequence S(j). A sequence S(j) may be acquired by filming the person 40 by means of a camera 41 for a determined duration. By the application of a face detector 42 to each of the images of the sequence S(j) (according to a technique well known to those skilled in the art, which is not an object of an embodiment of the present invention and shall therefore not be described in greater detail), a set of facial images (I1(j), . . . IN(j)) is extracted from the sequence S(j). An embodiment of the invention then enables the selection of solely the facial images that are in a frontal position and are well framed, and this is done by analyzing the images of the faces themselves. To this end, an embodiment of the invention uses a robust principal component analysis (RobPCA), as described by M. Hubert, P. J. Rousseeuw, and K. Vanden Branden in “ROBPCA: A New Approach to Robust Principal Component Analysis”, Technometrics, 47(1):64-79, February 2005.
The idea here is to consider each of the facial images Ii(j) as a vector vi(j) and to liken the problem to a problem of detection of aberrant vectors, assuming that the majority of the faces extracted from the sequence S(j) are of good quality (i.e. well framed and in a frontal pose). This is a reasonable assumption, because it may be considered that the acquisition of the video of the person 40 who is being learned can be performed under well-controlled conditions. For each set of facial images (I1(j), . . . IN(j)) extracted from a video sequence S(j), the following procedure is followed:
a vector vi(j) is associated with each facial image Ii(j);
a RobPCA is applied to the set of these vectors, yielding a projection matrix P(j), a robust mean μ(j) and the two decision thresholds DOmax(j) and DRCmax(j);
the facial images whose vectors have an orthogonal distance greater than DOmax(j) or a robust distance to the center greater than DRCmax(j) are filtered out, the remaining images being kept as learning images representative of the face to be identified.
In one variant of an embodiment of this step of selection of the learning images representative of the face to be identified, simultaneous consideration is given to the set of facial images extracted from all the learning video sequences S(j). In this case, a single projection P, a single robust mean μ, a single decision threshold DOmax and a single decision threshold DRCmax are computed during the learning phase. The learning facial images are therefore filtered using P, μ, DOmax and DRCmax. An image I′i is filtered if:
DOi > DOmax or DRCi > DRCmax
where DOi and DRCi are respectively the orthogonal distance and the robust distance to the center of v′i (the vector associated with I′i), computed using P and μ.
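By way of illustration only, this filtering rule may be sketched as follows in Python/NumPy (a minimal sketch and not the claimed implementation; the names are ours, and P, μ, the eigenvalues and the two thresholds are assumed to come from a RobPCA step such as the one described in appendix 1, anticipating the distance formulas detailed in §1.5 below):

```python
import numpy as np

def is_aberrant(x, mu, P, eigvals, do_max, drc_max):
    """Return True if the facial image (given as vector x) should be filtered out.

    x       : (d,)  vector built from a resized facial image
    mu      : (d,)  robust mean produced by RobPCA
    P       : (d,k) projection matrix (eigenvectors of the robust covariance)
    eigvals : (k,)  corresponding robust eigenvalues l_1..l_k
    """
    y = P.T @ (x - mu)                         # projection in the robust subspace
    do = np.linalg.norm(x - mu - P @ y)        # orthogonal distance DO
    drc = np.sqrt(np.sum(y ** 2 / eigvals))    # robust distance to the center DRC
    return do > do_max or drc > drc_max
```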
Only the facial images selected 50 during the previous step are included in the learning base 51 used for the building of the description space. This space is computed by using one of the known statistical techniques such as the PCA (principal component analysis), LDA (linear discriminant analysis), 2DPCA or 2DLDA (i.e. two-dimensional PCA or LDA). The goal of these techniques is to find a space of reduced size in which the vectors vi(j) associated with the facial images are projected and compared.
Once the projection has been computed, all the vectors vi(j) associated with the facial images Ii(j) of the learning base 51 are projected in the description space. Their projections are then saved and used during the recognition phase.
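By way of illustration only, the building of this description space by classic PCA and the projection of the learning vectors may be sketched as follows in Python/NumPy (a minimal sketch; the names are ours, and the LDA, 2DPCA and 2DLDA alternatives mentioned above would replace this step):

```python
import numpy as np

def build_description_space(X: np.ndarray, r: int):
    """Classic PCA on the filtered learning base X (one image vector per row).

    Returns the mean mu, the description space V (d x r) and the projections Y
    of the learning vectors, which are saved for the recognition phase."""
    mu = X.mean(axis=0)
    # Right singular vectors of the centered data are the eigenvectors
    # of its covariance matrix.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:r].T
    Y = (X - mu) @ V
    return mu, V, Y
```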
These learning images 53 representative of the faces to be identified are used to build 54 a description space 55, or model, associated with the persons to be identified, and to carry out the projection 56 of the vectors associated with the representative learning images 53.
Here below, we present the processing operations performed during the recognition phase of the identification method of an embodiment of the invention.
1.3 Selection of the Representative Images from the Query Sequence
In a sub-optimal variant of the invention, it is possible however to choose to carry out a processing operation, on the query images, that is identical to the one made on learning images during the learning phase, by RobPCA type analysis.
In the preferred embodiment of the invention, two variants can be envisaged, depending on whether the selection of the query images representative of the face to be identified is done on the basis of filtering thresholds DOmax and DRCmax computed during the learning, or directly from the representative learning images.
In a first variant, it is chosen to use the decision parameters 52 computed during the learning stage (§1.1, thresholds DOmax and DRCmax). A vector vq is associated (by concatenation of the rows, or else of the columns, of the image) with each facial image Iq extracted from the query sequence S, and the following algorithm 80 is applied to decide whether or not to keep the facial image Iq and use it during the identification. For each of the video sequences S(j) used during the learning:
the orthogonal distance DOq and the robust distance to the center DRCq of vq are computed using the projection P(j) and the robust mean μ(j) associated with S(j);
vq is considered to be representative with respect to S(j) if DOq≦DOmax(j) and DRCq≦DRCmax(j).
The facial image Iq is kept if vq is representative with respect to at least one of the sequences S(j).
In the variant of an embodiment in which consideration is given, during the learning, to only one set in which all the learning images are grouped together, and in which only one projection P, only one robust mean μ, only one decision threshold DOmax and only one decision threshold DRCmax are computed, the facial query images are also filtered using P, μ, DOmax and DRCmax during the recognition phase. As in the case of the learning, a query image I is filtered (i.e. considered to be aberrant) if:
DOq > DOmax or DRCq > DRCmax
where DOq and DRCq are respectively the orthogonal distance and the robust distance to the center of v′ (where v′ is the vector associated with I′, the image resulting from the resizing of I), computed using P and μ.
A second variant uses the representative learning images 53 coming from the learning phase. With each facial image Iq extracted (42) from the query sequence S, a vector vq is associated (by concatenation of the rows, or else of the columns, of the image), and this vector is inserted into each of the sets of vectors associated with the representative learning images 53 coming from the video sequences S(j) used during the learning. There are thus as many sets available as there are learning sequences S(j). A filtering procedure is then applied to each of these sets. This filtering procedure is similar to the one used during the learning, and computes the thresholds DOmax and DRCmax associated with each of these sets. The facial image Iq is selected 80 if it is chosen as being a representative image by at least one of the filtering procedures applied (i.e. if, for at least one of the sets, we have DOq≦DOmax and DRCq≦DRCmax).
This procedure of selection 80 of the representative query images may also be applied by inserting one or more images Iq into the set of facial images made up of all the representative learning images coming from the learning phase (all learning sequences without distinction). However, it is desirable that the number of images Iq inserted should remain smaller than the number of representative learning images. The filtering procedure is then executed only once, and the facial image Iq is selected if it is chosen as a representative image. In this case, only two thresholds DOmax and DRCmax are computed for the set constituted by all the representative learning images and the image or images Iq.
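By way of illustration only, the following Python/NumPy sketch outlines this second variant. It is a simplified stand-in and not the claimed implementation: a classical PCA replaces the RobPCA of appendix 1, the threshold choices are crude approximations, and all names are ours.

```python
import numpy as np
from scipy.stats import chi2

def pca_filter_mask(Z: np.ndarray, k: int = 5) -> np.ndarray:
    """Boolean mask of the rows of Z kept as representative.

    Classical PCA is used here as a simplified stand-in for RobPCA."""
    mu = Z.mean(axis=0)
    _, s, Vt = np.linalg.svd(Z - mu, full_matrices=False)
    k = min(k, Vt.shape[0])
    P = Vt[:k].T                                       # (d, k) projection matrix
    l = np.maximum(s[:k] ** 2 / (len(Z) - 1), 1e-12)   # eigenvalues (guarded)
    Y = (Z - mu) @ P
    do = np.linalg.norm(Z - mu - Y @ P.T, axis=1)      # orthogonal distances
    drc = np.sqrt(np.sum(Y ** 2 / l, axis=1))          # robust distances to center
    do_max = np.quantile(do, 0.975)                    # crude threshold choice
    drc_max = np.sqrt(chi2.ppf(0.975, df=k))
    return (do <= do_max) & (drc <= drc_max)

def query_is_representative(vq: np.ndarray, learning_sets) -> bool:
    """Insert vq into each per-sequence set of representative learning vectors;
    keep vq if at least one of the filtering procedures retains it."""
    for L in learning_sets:                 # one (n_j, d) array per sequence S(j)
        Z = np.vstack([L, vq])
        if pca_filter_mask(Z)[-1]:          # last row is the query vector
            return True
    return False
```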
The set of facial images selected from the query sequence is noted as follows
Q=[q1, q2, . . . , qs]
The identification of a query image qi is done in two steps. First of all, the representative query image qi is projected 81 in the description space 55 (computed during the learning), in the same way as the images of the learning base (step 54). Then, a search 82 is made for the closest neighbor in the description space 55. This involves searching, among the projected vectors 56 corresponding to the images of the learning base, for the one closest to the projected query vector. The query image qi is assigned to the same person as the one associated with the closest neighbor retrieved. Each image qi thus votes for a given person, i.e. designates a person among those stored in the learning base. Then, the results obtained for each of the representative query images of the set Q are merged 83, and the face of the query sequence is finally recognized 84 as that of the person who has obtained the largest number of votes.
Other identification procedures on the basis of the images of the set Q may be applied.
Here below, a more detailed description is provided of the practical implementation of an embodiment of the invention, as well as the mathematical processing operations performed in the set of steps described here above in § 1.1 to 1.4.
It is assumed that there is a set of video sequences S(1), . . . , S(r) available, each associated with one of the persons for whom the learning is being done. Each sequence is acquired for example by filming the associated person by means of a camera for a determined duration.
As presented in §1.1, from each learning sequence S(j), a set of facial images I1, I2, . . . , In is extracted by means of an automatic face detector applied to each of the images of the video sequence. The operation uses for example the CFF detector described by C. Garcia and M. Delakis in “Convolutional Face Finder: A Neural Architecture for Fast and Robust Face Detection”, IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(11):1408-1423, November 2004. These images are then resized so that they all have the same size (28×31). This resolution makes it possible to avoid taking account of the details in the images, for only the pose of the face (whether frontal or not) and its positioning in the image matter.
A procedure for the selection of the representative learning images is then applied. This procedure starts with a robust principal component analysis (RobPCA) on the matrix Xn×d of the data, formed by vectors associated with the extracted facial images (d=28×31). The row j of the matrix corresponds to the vector associated with the image Ij. This vector is built by concatenation of the rows of the image Ij after resizing.
The RobPCA can be used to compute a robust mean μ (a vector with dimension d) and a robust covariance matrix Cd×d, in considering only a subset of the vectors (namely vectors sized d associated with the facial images, each vector corresponding to a row of the matrix X). It also enables the reduction of the size of the images by projecting them in a much smaller-sized space k (k<d) defined by the eigen vectors of the robust covariance matrix C. According to the principle of the RobPCA, and as described in detail in appendix 1 which is an integral part of the present description, if:
Cd×d = P L Pt (1)
where P is the matrix of the eigen vectors and L is a diagonal matrix of the eigenvalues (L=diag (l1, l2, . . . , ld)), then the projection of the matrix X is given by:
Yn×k = (Xn×d − 1n μt) Pd×k
where Pd×k is formed by the first k columns of P.
In the matrix Y, the row i represents the projection of the row i of the matrix X. It is therefore the projection of the image Ii. The computation details of the matrix C and of the robust mean μ by the RobPCA are given in appendix 1 which forms an integral part of the present description.
To select the representative learning images (and therefore filter the noisy images), two distances are computed for each image Ii: the orthogonal distance (DOi) and the robust distance to the center (DRCi). These two distances are computed as follows:
DOi = ∥xi − μ − Pd×k yit∥
DRCi = √(Σj=1 . . . k yij2/lj)
where xi is the vector associated with Ii (row i of the matrix X) and yi is the ith row of the matrix Y.
To isolate the aberrant vectors, the distributions of these two distances are studied. The threshold associated with the robust distance to the center is defined by √(χ2k,0.975) if k>1 and by √(χ21,0.975) if k=1 (for the square of the robust distance to the center, a Mahalanobis-type distance, approximately follows a χ2k law on standard distributions; see the above-mentioned article by M. Hubert et al.). Let this threshold be written as DRCmax(j), j being the number of the learning sequence. The threshold of the orthogonal distance is, on the contrary, more difficult to fix, because the distribution of the values DOi is not known. The method proposed in the article by M. Hubert et al. is used again for the computation of this threshold: the distribution of the squared orthogonal distances is approximated by a g1χ2g2 distribution, so that, by the Wilson-Hilferty approximation, the values DOi2/3 approximately follow a Gaussian distribution of mean m and variance σ2. In estimating the mean m̂ and the variance σ̂2 from the values DOi2/3 by means of the MCD estimator (see the article by M. Hubert et al.), the threshold associated with the orthogonal distance for the learning sequence j is given by: DOmax(j) = (m̂ + σ̂ z0.975)3/2, where z0.975 = Φ−1(0.975) is the quantile at 97.5% of a Gaussian distribution.
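By way of illustration only, these two threshold computations may be sketched as follows in Python (a minimal sketch under stated assumptions: the univariate MCD estimator of the cited article is replaced here by a median/MAD stand-in, and the names are ours):

```python
import numpy as np
from scipy.stats import chi2, norm

def drc_threshold(k: int) -> float:
    """DRCmax: 97.5% quantile of the chi_k distribution (square root of chi^2_k)."""
    return float(np.sqrt(chi2.ppf(0.975, df=k)))

def do_threshold(DO: np.ndarray) -> float:
    """DOmax via the Wilson-Hilferty device: DO^(2/3) treated as Gaussian.

    The cited article estimates (m, sigma) with the univariate MCD estimator;
    a median/MAD pair is used here as a simple robust stand-in."""
    t = DO ** (2.0 / 3.0)
    m = np.median(t)
    sigma = 1.4826 * np.median(np.abs(t - m))   # MAD, scaled for consistency
    return float((m + sigma * norm.ppf(0.975)) ** 1.5)
```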
Learning images representative of the face to be identified are thus selected.
After selection of the representative learning images, the description space can be built by principal component analysis (PCA). In taking up the selected representative learning images, first of all a learning base is built in the form of a matrix. Each facial image is resized so that all the images have the same size. The chosen size is for example 63×57. The size may be the one obtained directly at output of the face detector. Each image then has an associated vector sized 63×57 built by concatenation of rows of the image. Each vector is then positioned in a row of the data matrix written as Xm,d, where m is the number of facial images selected and d the size of the vectors (in this case d=63×57).
It should be noted that, throughout the rest of this §1.5, the notations used for the different variables are independent of the notations used hitherto in this document.
To compute the description space, X is first of all centered and a spectral decomposition is done:
Xm,d − 1m μt = Um,d Dd,d Vd,dt (12)
where μ is the mean of the vectors associated with the images of the selected faces (rows of the matrix X) and D is a diagonal matrix D = diag(l1, l2, . . . , ld).
The description space is defined by the vectors of the matrix V, which are also the eigen vectors of the covariance matrix of X. The number of vectors chosen defines the dimension r of the description space. This number may be fixed by analyzing the eigenvalues (D) by the criterion of the proportion of the inertia expressed, i.e. such that:
(Σi=1 . . . r li)/(Σi=1 . . . d li) ≧ α (13)
where α is an a priori fixed parameter.
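By way of illustration only, this criterion may be sketched as follows in Python/NumPy (a minimal sketch; the function name is ours and the eigenvalues are assumed to be sorted in descending order):

```python
import numpy as np

def choose_dimension(eigvals: np.ndarray, alpha: float = 0.9) -> int:
    """Smallest r such that the r leading eigenvalues carry a proportion
    alpha of the total inertia (equation (13))."""
    ratios = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(ratios, alpha) + 1)
```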
Thus, the vectors projected in the space of the description are defined by:
Ym,r = (Xm,d − 1m μt) Vd,r (14)
Y, μ and V are saved for the recognition phase.
During the recognition phase, the query images representative of the face to be identified are selected from the query sequence following the procedure described in §1.3. Let these images be written as q1, . . . , qs. These images are first of all resized so that they have the same size as the images used in the learning phase (63×57 in the above case). A vector is then associated with each of these images. Let these vectors be written as v1, . . . , vs. Each vector is then projected into the description space as follows:
bi = (vi − μ)t Vd,r (15)
For each projected vector bi, the row of the matrix Y which is closest to it is retrieved by computing the distance between bi and all the rows of Y. The facial image associated with bi is therefore recognized as being that of the person associated with the image represented by the closest neighbor retrieved. It is said that bi has voted for the person identified. Once this has been done for all the bi, the face of the query sequence is finally recognized as being that of the person who has obtained the greatest number of votes.
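By way of illustration only, this projection and nearest-neighbor vote may be sketched as follows in Python/NumPy (a minimal sketch; the function and variable names are ours):

```python
import numpy as np
from collections import Counter

def identify(query_vecs, mu, V, Y, labels):
    """Project each representative query vector (equation (15)), vote by
    nearest neighbor in the description space, and return the person
    having obtained the greatest number of votes.

    query_vecs : (s, d) vectors v_1..v_s from the query sequence
    mu, V      : mean and description space saved from the learning phase
    Y          : (m, r) projected learning vectors
    labels     : person identifier for each row of Y
    """
    votes = []
    for v in query_vecs:
        b = (v - mu) @ V                                   # projection b_i
        nearest = int(np.argmin(np.linalg.norm(Y - b, axis=1)))
        votes.append(labels[nearest])
    return Counter(votes).most_common(1)[0][0]
```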
In the variant already mentioned here above, the thresholds 68 at input of the recognition device are replaced by the representative learning images 64, and the processor μP of the processing unit 70 performs a filtering identical to the one made by the learning device, from the set constituted by a query image 73 and the representative learning images 64.
It will be noted that this description has focused on a technique implementing a RobPCA type analysis. Naturally, it would be equally possible to use any other filtering technique based on two thresholds similar to the thresholds DOmax and DRCmax.
An aspect of the disclosure provides a technique for the recognition of faces from still facial images or video sequences with improved performance as compared with prior art techniques. In particular, an aspect proposes a technique of this kind that gives satisfactory results even when the facial images to be processed are noisy, poorly framed and/or show poor lighting conditions.
An aspect of the disclosure proposes a technique of this kind that can be used to optimize the recognition capacities of the statistical methods on which they rely.
An aspect of the disclosure provides a technique of this kind that takes account of the quality of the facial images used.
An aspect of the disclosure proposes a technique of this kind that is well adapted to the recognition of several distinct persons, in the context of applications of biometrics, video surveillance and video indexing for example.
An aspect of the disclosure provides a technique of this kind that is simple and costs little to implement.
Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the disclosure and/or the appended claims.
RobPCA can be used to perform principal component analysis while considering solely a subset of the vectors. The idea is to avoid including in the analysis noisy data, which risks affecting the computation of the mean and of the covariance matrix (first and second order moments, which are known to be highly sensitive to noise). To this end, RobPCA is based on the following property: a subset A is less noisy than another subset B if the vectors of A are less dispersed than those of B. In statistical terms, the least noisy set is the one for which the determinant of the covariance matrix is the smallest.
Take a set of n vectors sized d arranged in the form of a matrix Xn,d. RobPCA is performed in four steps:
1. The data of the learning base is pre-processed by means of a classic PCA (Principal Component Analysis). The aim is not to reduce its size, because all the principal components are kept; it is simply to eliminate the superfluous dimensions. To this end, a decomposition into singular values is done:
Xn,d − 1n m0t = Un,r0 Dr0,r0 Vd,r0t (2)
where m0 is a classic mean and r0 the rank of the matrix Xn,d−1nm0t.
The data matrix X is then transformed as follows:
Zn,r0 = (Xn,d − 1n m0t) Vd,r0 (3)
It is the matrix Z that is used in the following steps. Here below, the matrix Z is considered to be a set of vectors where each vector corresponds to a row of the matrix and is associated with one of the facial images extracted from a sequence.
2. The aim of the second step is to retrieve the h least noisy vectors. It may be recalled that a vector refers here to a row of the matrix Z, corresponds to a facial image and is written as zi.
The value of h could be chosen by the user but n−h must be greater than the total number of aberrant vectors. Since the number of aberrant vectors is generally unknown, h is chosen as follows:
h = max{[αn], [(n + kmax + 1)/2]}, (4)
where kmax is the maximum number of principal components that will be chosen and α is a parameter ranging from 0.5 to 1. It represents the proportion of the non-noisy vectors. In the present case, this parameter corresponds to the proportion of the learning facial images extracted from a sequence that are of good quality and could be included in the learning base. The value of this parameter could therefore be fixed as a function of the conditions of acquisition of the learning sequences and of the quality of the facial images extracted from the sequences. The default value is 0.75.
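By way of illustration, equation (4) translates directly into Python (the function name is ours):

```python
def choose_h(n: int, kmax: int, alpha: float = 0.75) -> int:
    """Equation (4): h = max(floor(alpha * n), floor((n + kmax + 1) / 2))."""
    return max(int(alpha * n), (n + kmax + 1) // 2)
```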
The following is the method used to find the h least noisy vectors:
First of all a computation is made, for each vector zi, of its degree of noisiness, defined by:
outl(zi) = maxv∈B |zit v − tMCD(zjt v)|/sMCD(zjt v) (5)
where B is the set of all the directions passing through two different vectors. If the number of directions is greater than 250, a subset of 250 directions is chosen randomly. tMCD(zjtv) and sMCD(zjtv) are respectively the robust mean and the robust standard deviation of the projections of all the vectors along the direction defined by v, i.e. the mean and standard deviation of the h projected values having the smallest variance. These two values are computed by the one-dimensional MCD estimator described by Hubert et al. in the above-mentioned article.
If all the sMCD are greater than zero, the degree of noisiness outl is computed for all the vectors, and the h vectors having the smallest values of the degree of noisiness are considered. The indices of these vectors are stored in the set H0.
If along one of the directions, sMCD(zjtv) is zero, it means that there is a hyperplane Hv orthogonal to v which contains h vectors. In this case, all the vectors are projected on Hv, which has the effect of reducing the size of the vectors to one, and the computation of the degrees of noisiness is resumed. It must be noted here that this can possibly occur several times.
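By way of illustration only, the computation of the degrees of noisiness may be sketched as follows in Python/NumPy. It is a simplified stand-in and not the claimed implementation: the univariate MCD estimator is replaced by the mean and standard deviation of the h projected values closest to the median, and the hyperplane special case above is simply skipped.

```python
import numpy as np

def outlyingness(Z: np.ndarray, h: int, n_dirs: int = 250, seed: int = 0):
    """Degree of noisiness of each row of Z along random two-point directions."""
    rng = np.random.default_rng(seed)
    n = len(Z)
    outl = np.zeros(n)
    for _ in range(n_dirs):
        i, j = rng.choice(n, size=2, replace=False)
        v = Z[i] - Z[j]
        norm = np.linalg.norm(v)
        if norm == 0:
            continue
        proj = Z @ (v / norm)
        # Simplified robust location/scale: h projected values nearest the median.
        keep = np.argsort(np.abs(proj - np.median(proj)))[:h]
        t, s = proj[keep].mean(), proj[keep].std()
        if s > 0:
            outl = np.maximum(outl, np.abs(proj - t) / s)
    return outl  # the h rows with the smallest outl form the set H0
```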
At the end of this step, there is a set H0 of the least noisy vectors and, as the case may be, a new set of data Zn,r1 of reduced dimension r1 (r1≦r0).
Then, the mean m1 and the covariance matrix S0 of the h vectors previously selected are considered, to perform a principal component analysis and reduce the size of the vectors. The matrix S0 is broken down as follows: S0 = P0 L0 P0t, with L0 the diagonal matrix of the eigenvalues: L0 = diag(l̃1, . . . , l̃r) and r≦r1. All the l̃j are deemed to be non-null and set in descending order. This decomposition makes it possible to decide on the number of principal components k0 to be kept for the remainder of the analysis. This can be done in different ways. For example, k0 could be chosen such that:
(Σj=1 . . . k0 l̃j)/(Σj=1 . . . r l̃j) ≧ 90% (6)
or else such that:
l̃k0/l̃1 ≧ 10−3. (7)
Finally, the vectors are projected in the space defined by the first k0 eigen vectors of S0. The new matrix of vectors is given by:
Zn,k0* = (Zn,r1 − 1n m1t) P0,k0
where P0,k0 is formed by the first k0 columns of P0.
3. In the third step, the covariance matrix of the vectors of Zn,k0* is estimated robustly, as follows.
3.1 Let m0 and C0 be respectively the mean and the covariance matrix of h vectors selected in the step 2 (set H0):
dm0,C0(i) = √((zi* − m0)t C0−1 (zi* − m0)) (8)
This distance is computed for each vector zi* (row i of Zn,k0*); the h vectors having the smallest distances are then selected, and their mean and covariance matrix are recomputed.
This procedure, called C-Step, is therefore executed iteratively until the determinant of the covariance matrix of the h selected vectors no longer decreases.
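By way of illustration only, the C-Step iteration may be sketched as follows in Python/NumPy (a minimal sketch under stated assumptions; the names are ours, and degenerate covariance matrices are not handled):

```python
import numpy as np

def c_steps(Z: np.ndarray, idx0, h: int, max_iter: int = 100):
    """Iterate the C-Step: recompute the mean/covariance of the h vectors with
    the smallest Mahalanobis distances until the determinant of the covariance
    matrix of the selected vectors no longer decreases."""
    idx = np.asarray(idx0)
    prev_det = np.inf
    for _ in range(max_iter):
        m = Z[idx].mean(axis=0)
        C = np.cov(Z[idx], rowvar=False)
        det = np.linalg.det(C)
        if not det < prev_det:                   # determinant stopped decreasing
            break
        prev_det = det
        diff = Z - m
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(C), diff)
        idx = np.argsort(d2)[:h]                 # h smallest squared distances
    return m, C, idx
```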
At convergence, we obtain a data matrix which will be written as Zn,k1*, as well as m2 and S2, the mean and the covariance matrix of the h selected vectors.
3.2 The algorithm FAST-MCD proposed by Rousseeuw and Van Driessen in 1999, slightly modified, is applied to the matrix Zn,k1*.
Let us write Z̃n,k* (with k≦k1) the set of data obtained at the end of the application of the FAST-MCD algorithm, and m3 and S3 the mean and the covariance matrix of the h vectors selected. If det(S2)<det(S3), then the computation is continued in considering the h vectors obtained from the step 3.1, i.e. m4=m2 and S4=S2; else, the results obtained by FAST-MCD, i.e. m4=m3 and S4=S3, are considered.
3.3. In order to increase statistical efficiency, a weighted mean and a weighted covariance matrix are computed from m4 and S4. First of all, S4 is multiplied by a consistency factor c1 computed as follows:
c1 = medi d2m4,S4(i)/χ2k,0.5 (9)
where the dm4,S4(i) are the distances of the vectors zi* relative to m4 and S4, computed as in equation (8), and medi denotes the median over all the vectors. Each vector is then assigned a weight wi, equal to 1 if dm4,c1S4(i)≦√(χ2k,0.975) and to 0 otherwise, and the weighted mean m5 and the weighted covariance matrix S5 are computed:
m5 = (Σi wi zi*)/(Σi wi) (10)
S5 = (Σi wi (zi* − m5)(zi* − m5)t)/(Σi wi − 1) (11)
4. The purpose of this last step is to deduce the final mean and covariance matrix. First of all, a spectral decomposition of the covariance matrix S5 is performed:
S5 = P2 L2 P2t
where P2 is a matrix k×k that contains the eigen vectors of S5 and L2 a diagonal matrix with the corresponding eigenvalues.
The matrix P2 is then projected back by applying the inverses of the transforms applied throughout the preceding steps. This gives the final matrix of the eigen vectors Pd,k. Similarly for the mean: m5 is projected back, thus giving μ. Furthermore, the final covariance matrix C can be computed by means of equation (1).
Number | Date | Country | Kind |
---|---|---|---|
0503047 | Mar 2005 | FR | national |
This application is a Section 371 National Stage Application of International Application No. PCT/EP2006/061109, filed Mar. 28, 2006 and published as WO 2006/103240 A1 on Oct. 5, 2006, not in English.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2006/061109 | 3/28/2006 | WO | 00 | 6/2/2008 |