Method and component for image recognition

Information

  • Patent Grant
  • Patent Number
    8,335,355
  • Date Filed
    Wednesday, April 21, 2010
  • Date Issued
    Tuesday, December 18, 2012
Abstract
A method and system for image recognition in a collection of digital images includes training image classifiers and retrieving a sub-set of images from the collection. For each image in the collection, any regions within the image that correspond to a face are identified. For each face region and any associated peripheral region, feature vectors are determined for each of the image classifiers. The feature vectors are stored in association with data relating to the associated face region. At least one reference region including a face to be recognized is selected from an image. At least one classifier on which said retrieval is to be based is selected from the image classifiers. A respective feature vector for each selected classifier is determined for the reference region. The sub-set of images is retrieved from within the image collection in accordance with the distance between the feature vectors determined for the reference region and the feature vectors for face regions of the image collection.
Description
FIELD OF THE INVENTION

The invention relates to a method and component for image recognition in a collection of digital images. In particular the invention provides improved image sorting, image retrieval, pattern recognition and pattern combination methods associated with image recognition.


DESCRIPTION OF THE RELATED ART

A useful review of face detection is provided by Yang et al., in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 1, pages 34-58, January 2002. A review of face recognition techniques is given in Zhang et al., Proceedings of the IEEE, Vol. 85, No. 9, pages 1423-1435, September 1997.


US Application No. 2003/0210808 to Chen et al describes a method of organizing images of human faces in digital images into clusters comprising the steps of locating face regions using a face detector, extracting and normalizing the located face regions and then forming clusters of said face regions, each cluster representing an individual person.


U.S. Pat. No. 6,246,790 to Huang et al discloses image indexing using a colour correlogram technique. A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database.


U.S. Pat. No. 6,430,312 also to Huang et al discloses distinguishing objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram. Many other techniques for colour pattern matching are described in the prior art.


In “Face annotation for family photo album management” to Chen et al., published in the International Journal of Image and Graphics, Vol. 3, No. 1 (2003), techniques, including the colour correlogram, are employed to match persons within an image collection and facilitate the annotation of images based on said matching. Chen et al. select a single colour region around a person and use a combination of multiple colour pattern matching methods to improve the accuracy of the annotation process.


US 2002/0136433 to Lin et al describes an adaptive face recognition system and method. The system includes a database configured to store a plurality of face classes; an image capturing system for capturing images; a detection system, wherein the detection system detects face images by comparing captured images with a generic face image; a search engine for determining if a detected face image belongs to one of a plurality of known face classes; and a system for generating a new face class for the detected face image if the search engine determines that the detected face image does not belong to one of the known face classes. In the event that the search engine determines that the detected face image belongs to one of the known face classes, an adaptive training system adds the detected face to the associated face class.


In the field of multi-classifier pattern matching, U.S. Pat. No. 6,567,775 to Maali et al discloses a method for identifying a speaker in an audio-video source using both audio and video information. An audio-based speaker identification system identifies one or more potential speakers for a given segment using an enrolled speaker database. A video-based speaker identification system identifies one or more potential speakers for a given segment using a face detector/recognizer and an enrolled face database. An audio-video decision fusion process evaluates the individuals identified by the audio-based and video-based speaker identification systems and determines the speaker of an utterance. A linear variation is imposed on the ranked-lists produced using the audio and video information.


The decision fusion scheme of Maali is based on a linear combination of the audio and the video ranked-lists. The line with the higher slope is assumed to convey more discriminative information. The normalized slopes of the two lines are used as the weights of the respective results when combining the scores from the audio-based and video-based speaker analysis. In this manner, the weights are derived from the data itself, but it is assumed that the ranks and the scores for each method vary linearly (i.e. they are points on a line, and the equation of that line is estimated).


SUMMARY OF THE INVENTION

According to the present invention there is provided a method for image recognition in a collection of digital images that includes training image classifiers and retrieving a sub-set of images from the collection. A system is also provided including a training module and image retrieval module.


The training of the image classifiers preferably includes the following: For each image in the collection, any regions within the image that correspond to a face are identified. For each face region and any associated peripheral region, feature vectors are determined for each of the image classifiers. The feature vectors are stored in association with data relating to the associated face region.


The retrieval of the sub-set of images from the collection preferably includes the following: At least one reference region including a face to be recognized is selected from an image. At least one classifier on which said retrieval is to be based is selected from the image classifiers. A respective feature vector for each selected classifier is determined for the reference region. The sub-set of images is retrieved from within the image collection in accordance with the distance between the feature vectors determined for the reference region and the feature vectors for face regions of the image collection.


A component for image recognition in a collection of digital images is further provided including a training module for training image classifiers and a retrieval module for retrieving a sub-set of images from the collection.


The training module is preferably configured according to the following: For each image in the collection, any regions are identified in the image that correspond to a face. For each face region and any associated peripheral region, feature vectors are determined for each of the image classifiers. The feature vectors are stored in association with data relating to the associated face region.


The retrieval module is preferably configured according to the following: At least one reference region including a face to be recognized is selected from an image. At least one image classifier is selected on which the retrieval is to be based. A respective feature vector is determined for each selected classifier of the reference region. A sub-set of images is selected from within the image collection in accordance with the distance between the feature vectors determined for the reference region and the feature vectors for face regions of the image collection.


In a further aspect there is provided a corresponding component for image recognition.


In the embodiment, the training process cycles automatically through each image in an image collection, employing a face detector to determine the location of face regions within an image. It then extracts and normalizes these regions and associated non-face peripheral regions which are indicative of, for example, the hair, clothing and/or pose of the person associated with the determined face region(s). Initial training data is used to determine a basis vector set for each face classifier.


A basis vector set comprises a selected set of attributes and reference values for these attributes for a particular classifier. For example, for a DCT classifier, a basis vector could comprise a selected set of frequencies by which selected image regions are best characterised for future matching and/or discrimination and a reference value for each frequency. For other classifiers, the reference value can simply be the origin (zero value) within a vector space.


Next, for each determined, extracted and normalized face region, at least one feature vector is generated for at least one face-region based classifier and, where an associated non-face region is available, at least one further feature vector is generated for a respective non-face region based classifier.


A feature vector can be thought of as an identified region's coordinates within the basis vector space relative to the reference value.


These data are then associated with the relevant image and face/peripheral region and are stored for future reference.


In the embodiment, image retrieval may either employ a user selected face region or may automatically determine and select face regions in a newly acquired image for comparing with other face regions within the selected image collection. Once at least one face region has been selected, the retrieval process determines (or if the image was previously “trained”, loads) feature vectors associated with at least one face-based classifier and at least one non-face based classifier. A comparison between the selected face region and all other face regions in the current image collection will next yield a set of distance measures for each classifier. Further, while calculating this set of distance measures, mean and variance values associated with the statistical distribution of the distance measures for each classifier are calculated. Finally these distance measures are preferably normalized using the mean and variance data for each classifier and are summed to provide a combined distance measure which is used to generate a final ranked similarity list.


In the preferred embodiment, the classifiers include a combination of a wavelet domain PCA (principal component analysis) classifier and a 2D-DCT (discrete cosine transform) classifier for recognising face regions.


These classifiers do not require a training stage for each new image that is added to an image collection. In contrast, techniques such as ICA (independent component analysis) or the Fisher Face technique, which employs LDA (linear discriminant analysis), are well-known face recognition techniques which adjust the basis vectors during a training stage to cluster similar images and optimize the separation of these clusters.


The combination of these classifiers is robust to changes in face pose, illumination, facial expression, and image quality and focus (sharpness).


PCA (principal component analysis) is also known as the eigenface method. A summary of conventional techniques that utilize this method is found in Eigenfaces for Recognition, Journal of Cognitive Neuroscience, 3(1), 1991 to Turk et al., which is hereby incorporated by reference. This method is sensitive to facial expression, small degrees of rotation and different illuminations. In the preferred embodiment, high frequency components from the image that are responsible for slight changes in face appearance are filtered out. Features obtained from low pass filtered sub-bands of the wavelet decomposition are significantly more robust to facial expression, small degrees of rotation and different illuminations than conventional PCA.


In general, the steps involved in implementing the PCA/Wavelet technique include: (i) the extracted, normalized face region is transformed into gray scale; (ii) wavelet decomposition is applied using Daubechies wavelets; (iii) histogram equalization is performed on the grayscale LL sub-band representation; next, (iv) the mean LL sub-band is calculated and subtracted from all faces and (v) the 1st level LL sub-band is used for calculating the covariance matrix and the principal components (eigenvectors). The resulting eigenvectors (basis vector set) and the mean face are stored in a file after training so they can be used in determining the principal components for the feature vectors for detected face regions. Alternative embodiments may be discerned from the discussion in H. Lai, P. C. Yuen, and G. C. Feng, “Face recognition using holistic Fourier invariant features,” Pattern Recognition, vol. 34, pp. 95-109, 2001, which is hereby incorporated by reference.
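By way of a hedged illustration only, the following sketch shows how steps (i)-(v) might be implemented, assuming NumPy and PyWavelets are available and that the face regions have already been extracted and normalized to a common size; all function names are illustrative rather than taken from the patent.

```python
# Illustrative sketch of the Wavelet/PCA training steps (i)-(v); not the
# patented implementation itself.
import numpy as np
import pywt

def to_grayscale(rgb):
    # (i) transform the extracted, normalized face region into gray scale
    return rgb @ np.array([0.299, 0.587, 0.114])

def ll_subband(gray):
    # (ii) wavelet decomposition using a Daubechies wavelet; keep the LL band
    ll, _details = pywt.dwt2(gray, 'db2')
    return ll

def hist_equalize(band, bins=256):
    # (iii) histogram equalization of the grayscale LL sub-band
    hist, edges = np.histogram(band.ravel(), bins=bins)
    cdf = hist.cumsum() / band.size
    return np.interp(band.ravel(), edges[:-1], cdf).reshape(band.shape)

def train_wavelet_pca(face_regions, n_components=20):
    lls = np.stack([hist_equalize(ll_subband(to_grayscale(f))).ravel()
                    for f in face_regions])
    mean_face = lls.mean(axis=0)             # (iv) mean LL sub-band ("mean face")
    centered = lls - mean_face
    cov = np.cov(centered, rowvar=False)     # (v) covariance matrix
    _vals, vecs = np.linalg.eigh(cov)
    basis = vecs[:, ::-1][:, :n_components]  # principal components (basis vector set)
    return mean_face, basis

def wavelet_pca_feature_vector(face, mean_face, basis):
    ll = hist_equalize(ll_subband(to_grayscale(face))).ravel()
    return (ll - mean_face) @ basis          # feature vector relative to the mean face
```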


In the 2D Discrete Cosine Transform classifier, the spectrum for the DCT transform of the face region can be further processed to obtain more robustness (see also, Application of the DCT Energy Histogram for Face Recognition, in Proceedings of the 2nd International Conference on Information Technology for Application (ICITA 2004) to Tjahyadi et al., hereby incorporated by reference).


The steps involved in this technique are generally as follows: (i) the resized face is transformed to an indexed image using a 256 color gif colormap; (ii) the 2D DCT transform is applied; (iii) the resulting spectrum is used for classification; (iv) for comparing similarity between DCT spectra the Euclidean distance is used.
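A minimal sketch of these four steps follows, assuming NumPy, Pillow and SciPy; Pillow's default 256-colour quantization stands in for the gif colormap mentioned above, and the restriction to a low-frequency block of the spectrum is an illustrative choice rather than a detail from the patent.

```python
# Illustrative sketch of the 2D-DCT classifier steps (i)-(iv).
import numpy as np
from PIL import Image
from scipy.fft import dctn

def dct_feature_vector(face_rgb, size=(64, 64), keep=32):
    # face_rgb: H x W x 3 uint8 array for the resized/normalized face region
    # (i) resize the face and convert it to an indexed (palette) image;
    #     Pillow's 256-colour quantization approximates the gif colormap
    indexed = Image.fromarray(face_rgb).resize(size).quantize(colors=256)
    pixels = np.asarray(indexed, dtype=float)
    # (ii) apply the 2D DCT transform
    spectrum = dctn(pixels, norm='ortho')
    # (iii) use the resulting spectrum (here its low-frequency block) for classification
    return spectrum[:keep, :keep].ravel()

def dct_distance(fv1, fv2):
    # (iv) Euclidean distance between DCT spectra
    return float(np.linalg.norm(fv1 - fv2))
```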


Examples of non-face based classifiers are based on color histogram, color moment, colour correlogram, banded colour correlogram, and wavelet texture analysis techniques. An implementation of the color histogram is described in “CBIR method based on color-spatial feature,” IEEE Region 10th Ann. Int. Conf. 1999 (TENCON'99, Cheju, Korea, 1999). Use of the colour histogram is, however, typically restricted to classification based on the color information contained within sub-regions of the image.


Color moment may be used to avoid the quantization effects which are found when using the color histogram as a classifier (see also “Similarity of color images,” SPIE Proc. pp. 2420 (1995) to Stricker et al., hereby incorporated by reference). The first three moments (mean, standard deviation and skewness) are extracted from the three color channels and therefore form a 9-dimensional feature vector.
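A minimal sketch of this 9-dimensional colour-moment feature, assuming NumPy and an H×W×3 RGB array for the region of interest (names illustrative):

```python
# Illustrative colour-moment feature: mean, standard deviation and skewness
# of each colour channel, concatenated into a 9-dimensional vector.
import numpy as np

def color_moment_feature_vector(region_rgb):
    pixels = region_rgb.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    centered = pixels - mean
    # third standardized moment (skewness), guarding against flat channels
    skew = (centered ** 3).mean(axis=0) / np.where(std > 0, std ** 3, 1.0)
    return np.concatenate([mean, std, skew])   # shape (9,)
```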


The colour auto-correlogram (see U.S. Pat. No. 6,246,790 to Huang et al., hereby incorporated by reference) provides an image analysis technique that is based on a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. It is effective in combining the color and texture features together in a single classifier (see also “Image indexing using color correlograms,” in IEEE Conf. Computer Vision and Pattern Recognition, pp. 762 et seq. (1997) to Huang et al., hereby incorporated by reference).


In the preferred embodiment, the color correlogram is implemented by transforming the image from RGB color space, and reducing the image colour map using dithering techniques based on minimum variance quantization. Variations and alternative embodiments may be discerned from “Variance based color image quantization for frame buffer display,” Color Res. Applicat., vol. 15, no. 1, pp. 52-58, 1990 by Wan et al., which is hereby incorporated by reference. Reduced colour maps of 16, 64 or 256 colors are achievable. For 16 colors the vga colormap may be used and for 64 and 256 colors, a gif colormap may be used. A maximum distance set D=1; 3; 5; 7 may be used for computing the auto-correlogram to build an N×D dimension feature vector, where N is the number of colors and D is the maximum distance.
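As a hedged illustration, the naive sketch below quantizes colours uniformly (a simplification of the minimum-variance, dithered quantization described above) and computes an auto-correlogram over the distance set {1, 3, 5, 7}; the fast algorithm cited in the next paragraph would be preferred in practice, and all names here are illustrative.

```python
# Naive, illustrative colour auto-correlogram over the distance set {1, 3, 5, 7}.
import numpy as np

def quantize_uniform(image_rgb, levels=4):
    # crude stand-in for minimum-variance quantization: 4 levels per channel
    # gives a 64-colour map
    q = (np.asarray(image_rgb) // (256 // levels)).astype(int)
    return q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]

def autocorrelogram(indexed, n_colors=64, distances=(1, 3, 5, 7)):
    h, w = indexed.shape
    counts = np.bincount(indexed.ravel(), minlength=n_colors).astype(float)
    feature = np.zeros((n_colors, len(distances)))
    for di, d in enumerate(distances):
        # offsets on the L-infinity ring of radius d
        ring = [(dy, dx) for dy in range(-d, d + 1) for dx in range(-d, d + 1)
                if max(abs(dy), abs(dx)) == d]
        same = np.zeros(n_colors)
        for dy, dx in ring:
            a = indexed[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            b = indexed[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            match = a == b
            same += np.bincount(a[match], minlength=n_colors)
        # probability that a pixel at L-infinity distance d has the same colour
        feature[:, di] = same / np.maximum(counts * len(ring), 1.0)
    return feature.ravel()
```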


The color autocorrelogram and banded correlogram may be calculated using a fast algorithm (see, e.g., “Image Indexing Using Color Correlograms” from the Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97) to Huang et al., hereby incorporated by reference).


Wavelet texture analysis techniques (see, e.g., “Texture analysis and classification with tree-structured wavelet transform,” IEEE Trans. Image Processing 2(4), 429 (1993) to Chang et al., hereby incorporated by reference) may also be advantageously used. In order to extract the wavelet based texture, the original image is decomposed into 10 de-correlated sub-bands through a 3-level wavelet transform. In each sub-band, the standard deviation of the wavelet coefficients is extracted, resulting in a 10-dimensional feature vector.
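A minimal sketch of this texture feature, assuming NumPy and PyWavelets (the wavelet family is an illustrative choice):

```python
# Illustrative wavelet texture feature: a 3-level decomposition yields
# 10 sub-bands; their standard deviations form a 10-dimensional vector.
import numpy as np
import pywt

def wavelet_texture_feature_vector(gray_image, wavelet='db1', levels=3):
    coeffs = pywt.wavedec2(gray_image, wavelet, level=levels)
    # coeffs = [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)]
    subbands = [coeffs[0]]
    for details in coeffs[1:]:
        subbands.extend(details)
    return np.array([band.std() for band in subbands])   # shape (10,)
```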





BRIEF DESCRIPTION OF THE DRAWINGS





    • The file of this patent contains at least one drawing executed in color. Copies of this patent with color drawing(s) will be provided by the Patent and Trademark Office upon request and payment of the necessary fee.





Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:



FIG. 1(a) is a block diagram of an image processing system according to a preferred embodiment of the present invention;



FIG. 1(b) illustrates the determining of a training method to be applied to an Image Collection by the training module of FIG. 1(a);



FIG. 1(c) illustrates an overview of the main retrieval process applied by the retrieval module of FIG. 1(a);



FIG. 2(a) illustrates the operation of the main image analysis module of FIG. 1(a);



FIG. 2(b) illustrates the full training workflow which is implemented on an image collection;



FIG. 2(c) illustrates the incremental training workflow which allows image subsets to be integrated with a previously trained image collection;



FIG. 2(d) illustrates the operation of the training module for combining pre-trained image collections;



FIG. 3(a) illustrates additional details of the main image sorting/retrieval workflow following from FIG. 1(c);



FIG. 4(a) illustrates an exemplary data storage structure for an image collection data set determined from the training process(es) illustrated in FIG. 2;



FIGS. 4(b) and 4(c) illustrate additional details of the image data records, including information stored on the extracted face & peripheral regions of an image;



FIG. 4(d) illustrates the manner by which image collection data sets may be combined;



FIGS. 5(a) and (c) illustrate the principal aspects of an image classifier where the feature vectors for individual patterns can be determined relative to an “averaged” pattern (mean face) and where feature vectors for individual patterns are determined in absolute terms (colour correlogram) respectively;



FIGS. 5(b) and (d) illustrate the calculation of respective sets of similarity measure distances from a selected classifier pattern to all other classifier patterns within images of the Image Collection;



FIG. 5(e) illustrates how multiple classifiers can be normalized and their similarity measures combined to provide a single similarity measure;



FIGS. 6(a), (b) & (c) illustrate statistical distribution patterns of the sets of similarity measures described in FIG. 5 for (a) Wavelet based PCA feature vectors; (b) DCT based feature vectors and (c) colour correlogram based feature vectors;



FIG. 7 illustrates a face region determined by a face detector module and the associated peripheral regions which are used for colour pattern matching of a person's hair and upper body clothing;



FIGS. 8(a), (b), (c) and (d) illustrate user interface aspects in accordance with a preferred embodiment;



FIG. 9 illustrates user interface aspects in accordance with a preferred embodiment.



FIG. 10 illustrates the manner in which images are ranked according to their similarity to multiple reference regions.



FIG. 11 is a block diagram of an in-camera image processing system according to an alternative embodiment.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT(S)

The main preferred embodiment of the present invention will be described in relation to FIG. 1(a). This takes the form of a set of software modules 162 implemented on a desktop computer 150.


A second preferred embodiment provides an implementation within an embedded imaging appliance such as a digital camera.


Main Embodiment: Software Modules on a Desktop Computer


In this principal embodiment, the present invention is described in the context of a desktop computer environment and may either be run as a stand-alone program, or alternatively may be integrated into existing applications or operating system (OS) components to improve their functionality.


1. Main Image Analysis Module 156


This module cycles through a set of images 170-1 . . . 180-2 and determines, extracts, normalizes and analyzes face regions and associated peripheral regions to determine feature vectors for a plurality of face and non-face classifiers. The module then records this extracted information in an image data set record. The operation of the module is next described in FIG. 2(a). As will be explained later, components of this module are also used in both training and sorting/retrieval modes of the embodiment. The module is called from a higher level workflow and in its normal mode of usage is passed a set of images which must be analyzed [202]. The module loads/acquires the next image [204] and detects any face regions in said image [210]. If no face regions were found [212] then flags in the image data record for that image are updated to indicate that no face regions were found [280]. If the current image is not the last image in the image set being analyzed [298] the next image is loaded/acquired [204]. If this was the last image [298] then the module will exit [299] to the calling module. Where at least one face region is detected the module next extracts and normalizes each detected face region and, where possible, any associated peripheral regions [214].
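For orientation only, the following sketch shows one way the per-image loop of this module might be structured; the face detector, normalization routine and classifier functions are assumed to exist elsewhere, and every name here is hypothetical rather than taken from the patent.

```python
# Illustrative structure of the main image analysis loop; not the patented code.
from dataclasses import dataclass, field

@dataclass
class ImageDataRecord:
    path: str
    face_regions_found: bool = False
    regions: list = field(default_factory=list)   # per-region locations and feature vectors

def analyze_image_set(image_paths, load_image, detect_faces,
                      extract_and_normalize, classifiers):
    records = []
    for path in image_paths:                        # [204] load/acquire the next image
        image = load_image(path)
        record = ImageDataRecord(path)
        faces = detect_faces(image)                 # [210] detect face regions
        record.face_regions_found = bool(faces)     # [280] flag whether any were found
        for face_box in faces:                      # [214] extract & normalize each region
            face, peripheral = extract_and_normalize(image, face_box)
            features = {name: fn(face, peripheral)  # [220] feature vector per classifier
                        for name, fn in classifiers.items()}
            record.regions.append({'location': face_box, 'features': features})
        records.append(record)                      # [281] record the image data record
    return records
```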


Face region normalization techniques can range from a simple re-sizing of a face region to more sophisticated 2D rotational and affine transformation techniques and to highly sophisticated 3D face modeling methods.



FIG. 7 shows a determined face region [701], and its associated peripheral regions [702, 703]. The dimensions and relative locations of these regions are exemplary and may be adapted according to additional determining steps after the main face region is detected. Further, we remark that additional peripheral regions may be added to specifically identify items such as ear-rings, necklaces, scarves, ties and hats.


Both the face region and a full body region may also be employed for color/texture analysis and can be used as additional classifiers for the sorting/retrieval process (see also Chen et al in “Face annotation for family photo album management”, published in the International Journal of Image and Graphics Vol. 3, No. 1 (2003), hereby incorporated by reference).


Other examples of associated peripheral regions are given in FIG. 9 and are described below.


Returning to FIG. 2(a), we next discuss the analyzing of a set of extracted, normalized regions associated with a detected face region. Essentially this is the process of determining feature vectors for a plurality of face and non-face image classifiers. In this embodiment, we have confined our analysis to two face-based classification techniques and one non-face means of classification based on the banded color correlogram. In fact many different combinations between these and alternative techniques may be used.


In FIG. 2(a) we illustrate this feature vector determination process [220] as a combination of three parallel processes [220-1, 220-2 and 220-3]. In a practical embodiment within a desktop computer, each feature extraction process will be performed in a sequential manner. However, representing these processes in parallel indicates that (i) they are independent of one another and (ii) that alternative hardware-based embodiments of the present invention may advantageously perform these classification processes in parallel. Once the feature vectors for the present face region and associated peripheral regions have been determined, they are retained in temporary memory storage and a determination is made as to whether this is the last face region in the current image [218]. If other face regions remain then these must be extracted, normalized and analyzed in turn [214, 220]. When all face regions within an image have had their feature vectors extracted, these data, together with additional information on the location of each face/peripheral region within the image, are recorded in an image data record for the current image [281]. An exemplary description of such an image data record is given in FIG. 4(b) and will be described shortly. After updating of the current image data record, the main image analysis module next determines if that was the last image [298] and, if this is the case, it exits [299]. If, however, additional images exist it continues to cycle through each in turn and creates/updates an image data record for each.


We also remark that if a face region is near the edge of an image it may not be possible to properly define peripheral regions such as the body region or the hair region [216]. In this case a flag is modified in the image data record to indicate this. During the sorting/retrieval process (described later), if the user selects a search method which includes body or hair regions then the faces without those regions are either not considered in the search or are given statistically determined maximal feature vector values for these regions during the classification process.


2. Image Collection Training Process


Before the modules 162 can perform their main function of image sorting and retrieval, it is first necessary to initiate a training process on an image collection. In this principal embodiment we will assume that an exemplary image collection is a set of images contained within a subdirectory of the file system on a desktop PC. Thus, when a process controlling the modules 162 is active and a user switches into a subdirectory containing images, the module 156 must load this new image collection and determine firstly whether there are images which have not contributed to the training process and secondly whether the number of such unutilized images warrants a full retraining of the image collection or if, alternatively, an incremental training process can be successfully employed.



FIG. 1(b) illustrates this process of determining which training method (full, incremental or no training) is to be applied to an image collection; thus, in response to some external event [100] (examples include user input, switching to a file system directory containing images, or a timed, periodic check of known image collections) the training mode determination process first checks if new, unutilized images have been added to the image collection since the last determination of training mode [101]. If no new images have been added, or the number of new images is less than a predetermined threshold value or percentage, then no training is required and the training mode determination process may exit [106]. However, if enough unutilized new images have been added the next step is to determine if incremental training is possible [104]. This decision will depend partly on the nature of the classifiers used in the person recognition process, partly on the number of unutilized images and partly on the number of images and determined face regions in the previously trained image collection.


In this preferred embodiment all of the face and non-face recognition techniques employed can be combined linearly which allows incremental training even for quite large additional subsets of new images which are added to a previously trained main image collection. However the present invention does not preclude the use of alternative face or non-face recognition methods which may not support linear combination, or may only support such combinations over small incremental steps. If it is determined that incremental training is possible then the training mode determination step exits to the incremental training step [110] which is further described in FIG. 2(c). Alternatively, if there are too many new images, or the classifiers employed in the present invention are not susceptible to linear combination between image sets then a full retraining must be undertaken [120]. This step is further described in FIG. 2(b).


A system in accordance with a preferred embodiment represents an improvement over the system described at US published application number 2002/0136433 to Lin, which is hereby incorporated by reference, and which describes an “adaptive facial recognition system”. The approach described by Lin requires the determination of feature vectors based on a fixed set of basis vectors and a “generic” or “mean” face previously determined through offline training. The present invention allows for incremental retraining based on the automatic determination of face regions within newly acquired images or sets of such images.


A further improvement is that the facial regions determined and normalized by the module 156 are preferably re-utilized in subsequent re-training operations. As the automated determination of valid face regions within an image and the normalization of such regions is the most time-consuming part of the training process—typically representing 90-95% of the time required for training a typical image collection—this means that subsequent combining of several image collections into a “super-collection” and re-training of this “super-collection” can be achieved with a substantially reduced time lag.


2.1 Full Training Mode Workflow



FIG. 2(b) illustrates the full training workflow which is implemented on an image collection; this module is initiated from the training mode determination module [100] described in FIG. 1(b). Once it is determined that an entire image collection must be trained, the next step is to load a set of data/memory pointers or file handles which will allow all of the individual images of a collection to be accessed as required [232]. Next the main image analysis module is called with the full image collection as an input [200].


In full training mode, it may not be possible to complete all steps in the feature vector extraction process [220] in the main image analysis module [200], because the relevant basis vectors may not yet have been determined. In the preferred embodiment, the Wavelet/PCA classifier method [220-2b] cannot readily be completed until all images have been analyzed. Two alternatives are as follows. First, the main image analysis module may be called a second time to repeat those steps [220-2b] which may not have been completed on the first pass. Second, the incomplete feature vector extraction steps may be performed externally to the main image analysis module.


The latter case is illustrated in FIG. 2(b). Thus, after applying the main image analysis module [200] the mean wavelet face can be calculated [234] and the PCA basis vector set can subsequently be determined [236]. Following these operations it is now possible to explicitly complete the extraction of the feature vectors for the PCA/Wavelet method of face recognition [220-2b], or alternatively to call the main image analysis module a second time, with input flags set to skip most of the internal processing steps apart from [220-2b]. As both the colour correlogram and DCT face recognition techniques chosen for the preferred embodiments use predetermined basis vectors, the feature vectors associated with these classifiers can always be calculated within the main image analysis module. Having determined the feature vectors for PCA, we next use these to calculate the vector displacement, in PCA classifier space, of each extracted face region relative to the mean face [236-1]. This “relative” set of feature vectors is then added to the relevant image data record [236-2]. Finally we exit the full training module [297], returning to the calling module.


2.2 Incremental Training Mode Workflow


Normally an image collection will only need to go through this (automated) full training procedure once. After initial training, it will normally be possible to add and analyze new images using the determined basis vector set for the classifier, for example, PCA. When a larger subset of new images is added to a collection, in the case of PCA/Wavelet face recognition classifiers, it will generally be possible to incrementally modify existing basis vectors by only training the newly added image subset and subsequently modifying the existing location of the mean face and the previously determined basis vector set. FIG. 2(c) describes this process in detail illustrating the incremental training workflow which allows image subsets to be integrated with a previously trained image collection. This is the normal mode of image collection training.


It begins with a determination from the workflow of FIG. 1(b) which initiates the incremental training mode [110]. Next, a set of data/memory pointers or file handles which will allow all of the individual images of the image subset to be accessed [232] is loaded. Alternatively, within an image acquisition device at least one newly acquired image may be loaded. The main image analysis module [200] is now applied to the loaded image subset, using the existing basis vectors for extracting feature vectors for each classifier. After the main image analysis module [200] has finished, the incremental change in the mean wavelet face and the PCA basis vector set for the combined image collection (original collection+new subset collection) can now be estimated [234a, 236a].


Note that if the size of the new image subset (plus any previous subsets which were unused for training (and marked accordingly)) is small relative to the size of the main image collection (say <10%) then these steps may optionally be deferred [244] and the images in the image subset are temporarily marked as “unused for training” [246]. Subsequently when a larger set of images is available, the incremental training module will take all of these images marked as “unused for training” and perform incremental training using a larger combined image superset. In that case the next step is to calculate the incremental change in the previously determined mean face location which will be produced by combining the new image (super)set with the previously determined training data [234a]. Once the new mean face location is determined, the incremental changes in the basis vector set for this classifier should next be determined [236a].


If either incremental change is greater than a predetermined threshold [250], as further illustrated [502, 505] in FIG. 5(a), then the mean wavelet face must be recalculated [262]. The relevant basis vectors must also be recalculated [264] and finally the actual feature vectors for each affected classifier must be recalculated for all the determined face regions in each image [266]. We remark that if the classifiers are chosen, as they are in our preferred embodiment, so that the superposition theorem (linear combination) applies to the classifier space from which a feature vector describing a pattern is derived, then it is a simple matter to incrementally adjust the feature vectors for each image without a need to call the main image analysis module. (Note that if it were necessary to call the main image analysis module this would, in turn, require that each image is reloaded and the necessary face & peripheral regions are extracted, normalized and analyzed.) After steps [262], [264] and [266] are completed the incremental training module can now exit, returning to the calling module [297].
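A hedged sketch of the incremental mean-face update and threshold test follows; the relative-change measure and the threshold value are illustrative assumptions, and a basis-vector update would follow the same linear-combination pattern.

```python
# Illustrative incremental update of the mean face and threshold test [250].
import numpy as np

def incremental_mean_face(old_mean, n_old, new_faces):
    # linear combination of the previously trained mean with the new subset
    new_subset_mean = np.mean(new_faces, axis=0)
    n_new = len(new_faces)
    combined = (n_old * old_mean + n_new * new_subset_mean) / (n_old + n_new)
    # relative size of the incremental change (illustrative measure)
    change = np.linalg.norm(combined - old_mean) / max(np.linalg.norm(old_mean), 1e-9)
    return combined, change

def needs_recalculation(change, threshold=0.05):
    # recalculate basis vectors and feature vectors only when the change
    # exceeds the predetermined threshold; otherwise the subset may be
    # marked "unused for training"
    return change > threshold
```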


If these incremental changes are less than their predetermined thresholds, then the effects of completing incremental training will be minimal and it does not make sense to do so. In this case the current subset is marked as “unused for training” and the determined incremental changes are also recorded in the global collection data set [252], which is further described in FIG. 4(a). In this case the old mean face and basis vectors are retained [254] and are next used to calculate the feature vectors relative to the old mean face [256]. The incremental training module can now exit, returning to the calling module [297].


In a variation on the above workflow the determining of step [244] can be limited to the current subset (i.e. no account is taken of additional subsets which were not yet used in training) and the additional set of steps marked “alternative” can be used. In this case, if the incremental change determined from the current subset is below the predetermined threshold, then the workflow moves to block [251] which determines if additional unused image subsets are available. If this is not the case the workflow continues as before, moving to step [252]. However, when additional subsets are available these are combined with the current image subset and the combined incremental change in mean face is determined [234b] followed by a determination of the combined incremental change in the basis vector set for this classifier [236b]. The workflow next returns to the determining step [250], repeating the previous analysis for the combined image superset comprising the current image subset and any previously unused image subsets. In this manner the incremental training module can reduce the need for retraining except when it will significantly affect the recognition process.


In other embodiments, it may be desirable to combine previously trained image collections into a “super-collection” comprising at least two such collections. In this case it is desirable to re-use image collection data which is fixed, i.e. data which is not dependent on the actual set of images. In particular this includes the determined locations of face/peripheral regions within each image and the normalization data pertaining to each such predetermined face/peripheral region. The determination and normalization of such regions is, typically, very time consuming for a consumer image collection, taking 90-95% of the time required by the training process. For a collection of several hundred images, with an average size of 3 megapixels, this can take of the order of tens of minutes, whereas the actual training engines which extract classifier data from the determined face regions will normally require of the order of several seconds per training engine.


In particular, this makes a system in accordance with a preferred embodiment suitable for use with interactive image browsing software which in turn employs the modules 162. Through a user interface, the user selects different groups of images, for example, through interaction with a folder structure, either by selecting one or more folders, each containing images, or by selecting groups of images within a folder. As these images will have been incrementally added to the storage source (local 170 or remote 180) which the user is accessing, it is likely that face and non-face region information will already have been detected and determined by the module 156 or another copy running remotely. The user can select a candidate region within an image and then selectively determine which types of classifiers are to be used for sorting and retrieving images from the selected groups of images. Generating basis vector and/or feature vector information for all images within the selected group of images, as well as for the candidate region, prior to sorting/retrieval can then be performed relatively quickly and in line with the user response expectations of an interactive application.


A modified variant of the main image analysis module [286], suitable for use in such an embodiment is illustrated in FIG. 2(d). In this variant the face region detection step and the subsequent normalization step are omitted. Instead at least one image collection data set is loaded [282]—this process could also be used for re-training an image collection which has been added to incrementally and has gradually grown large enough to require retraining. Then each image data record, as illustrated in FIG. 4(b) is loaded in turn [284] and the previously determined face and peripheral regions are read from this loaded image data record [286].


The remainder of the analysis process is similar to that described in the main image analysis module of FIG. 2(a) and comprises the extraction of feature vectors determined by each of the classifier engines.


3. Image Sorting and Retrieval


Now that the training process for an image collection has been described, we consider how the image sorting/retrieval module functions.


3.1 Image Selection Process



FIG. 1(c) illustrates an overview of the image selection process which should occur before the image sorting/retrieval process. A selected image will either be a newly selected/acquired image [128], in which case it must be loaded, selected or acquired [130b] and then subjected to face (pattern) detection [132]. This is followed by a feature vector extraction process [134] which may additionally incorporate related peripheral region extraction and region normalization steps. The extracted feature vector will be used for comparing with pre-determined feature vectors obtained from an image collection data set [138]. Alternatively, if an image is a member of an existing image collection [129], then the relevant feature vectors will have been previously extracted and it is only necessary to load the previously acquired image [130a] and the appropriate image data record and image collection data set [136]. The image sorting/retrieval module [140] may now be called.


3.2 Main Image Sorting/Retrieval Process


The workflow for this module is described in FIG. 3(a) and is initiated from the image selection or acquisition process described in FIG. 1(c) as the final process step [140]. It is assumed that when the image sorting/retrieval module is activated [140] it will also be provided with at least two input parameters providing access to (i) the image to be used for determining the search/sort/classification criteria, and (ii) the image collection data set against which the search is to be performed. If a data record has not already been determined for the search image [308] the main image analysis module is next applied to it to generate this data record [200]. The image is next displayed to a user who must make certain selections of the face regions to be used for searching and also of the classifiers to be used in the search [308]. Alternatively, the search criteria may be predetermined through a configuration file and step [308] may thus be automatic. The user interface aspects of the preferred embodiment are illustrated in FIGS. 8 & 9 and will be discussed shortly.


After a reference region comprising the face and/or peripheral regions to be used in the retrieval process is selected (or determined automatically) the main retrieval process is initiated [310], either by user interaction or automatically where the search criteria have been determined from a configuration file. The main retrieval process is described in step [312] and comprises three main sub-processes which are iteratively performed for each classifier to be used in the sorting/retrieval process (a sketch of these steps follows the list):

    • (i) Distances are calculated in the current classifier space between the feature vector for the reference region and corresponding feature vector(s) for the face/peripheral regions for all images in the image collection to be searched [312-1]. In the preferred embodiment, the Euclidean distance is used to calculate these distances, which serve as a measure of similarity between the reference region and face/peripheral regions in the image collection.
    • (ii) The statistical mean and standard deviation of the distribution of these calculated distances are determined and stored temporarily [312-2].
    • (iii) The determined distances between the reference region and the face/peripheral regions in the image collection are next normalized [312-3] using the mean and standard deviation determined in step [312-2].
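As referenced above, a minimal sketch of sub-processes (i)-(iii) for a single classifier is given below, assuming NumPy, a reference-region feature vector and an array of stored feature vectors for the collection; the mean-based normalization mirrors section 4.1, with the standard deviation returned for the more sophisticated variant (all names illustrative).

```python
# Illustrative per-classifier distance calculation and normalization.
import numpy as np

def classifier_distances(ref_fv, collection_fvs):
    ref_fv = np.asarray(ref_fv, dtype=float)
    collection_fvs = np.asarray(collection_fvs, dtype=float)
    # (i) Euclidean distances in the current classifier space [312-1]
    d = np.linalg.norm(collection_fvs - ref_fv, axis=1)
    # (ii) mean and standard deviation of the distance distribution [312-2]
    mean, std = d.mean(), d.std()
    # (iii) normalized distances [312-3]; here division by the mean, with the
    #       standard deviation available for the optional refinement of section 4.1
    return d / max(mean, 1e-9), mean, std
```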


These normalized data sets may now be combined in a decision fusion process [314] which generates a ranked output list of images. These may then be displayed by a UI module [316].


An additional perspective on the process steps [312-1, 312-2 and 312-3] is given in FIG. 5. FIG. 5(a) illustrates the classifier space [500] for a classifier such as the Wavelet/PCA face recognition used in this preferred embodiment. The basis vector set, [λ1, λ2, . . . , λn] is used to determine feature vectors for this classifier. The average or mean face is calculated [501] during the training phase and its vector position [507] in classifier space [500] is subtracted from the absolute position of all face regions. Thus, exemplary face regions [504-1a, 504-2a and 504-3a] have their positions [504-1b, 504-2b and 504-3b] in classifier space defined in vector terms relative to the mean face [501].


The result of performing step [312-1] on the classifier space of FIG. 5(a) is illustrated in FIG. 5(b). Thus, after a particular face region [504-2a] is selected by the user [308] the distances to all other face regions within a particular image collection are calculated. The face regions [504-1a] and [504-3a] are shown as illustrative examples. The associated distances (or unnormalized rankings) are given as [504-1c] and [504-3c].



FIGS. 5(c) and 5(d) illustrate the analogous case to FIGS. 5(a) and 5(b) when the distances in classifier space are measured in absolute terms from the origin, rather than being measured relative to the position of an averaged, or mean, face. For example, the color correlogram technique as used in our preferred embodiment is a classifier of this type which does not have the equivalent of a mean face.


We remark that the distances from the feature vector for the reference region [504-2a] and [509-2a] to the feature vectors for all other face regions in FIGS. 5(b) & (d) may be calculated in a number of ways. In the preferred embodiment we use the Euclidean distance but other distance metrics may be advantageously employed for certain classifiers other than those described here.


4. Methods for Combining Classifier Similarity Measures


4.1 Statistical Normalization Method



FIG. 5(e) illustrates our primary technique for normalizing and combining the multiple classifiers described in this disclosure to reach a final similarity ranking.


The process is described for a set of multiple classifiers, C1, C2 . . . CN and is based on a statistical determination of the distribution of the distances of all patterns relevant to the current classifier (face or peripheral regions in our embodiment) from the selected reference region. For most classifiers, this statistical analysis typically yields a normal distribution with a mean value MCn and a variance VCn as shown in FIG. 5(e). This is further illustrated in FIGS. 6(a), (b) & (c) which illustrate exemplary statistical distributions determined using the Wavelet/PCA technique of face recognition, FIG. 6(a); the DCT technique of face recognition, FIG. 6(b) and the banded correlogram technique as applied to both hair and top-body clothing regions, FIG. 6(c). We remark that the determined statistical distribution is not always a normal distribution as illustrated by FIG. 6(b). We further remark that the bimodal form of the distribution illustrated in FIG. 6(c) occurs because it combines the distributions of hair and top-body regions; if these are separated and considered as two distinct classifiers then two separate normal distributions would result.


The classifier similarity ranking measures (or distances) are then combined by normalizing each classifier by the determined mean similarity ranking measure (distance) for that classifier, based on the reference region.


Thus the combined similarity ranking measure can now be determined quite simply as:

Dtot = D1/MC1 + D2/MC2 + . . . + DN/MCN

A more sophisticated determination may optionally incorporate the standard deviation of the statistical distribution into the normalization process.
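A minimal sketch of this combination step, assuming the per-classifier distance arrays have already been computed as above; the standard-deviation variant shown is one possible reading of the optional refinement mentioned in the preceding sentence (all names illustrative).

```python
# Illustrative decision fusion: Dtot = D1/MC1 + D2/MC2 + . . . + DN/MCN
import numpy as np

def combined_similarity(distances_per_classifier, use_std=False):
    # distances_per_classifier: list of 1-D arrays, one per classifier C1..CN,
    # each holding distances from the reference region to every stored pattern
    total = np.zeros(len(distances_per_classifier[0]), dtype=float)
    for d in distances_per_classifier:
        d = np.asarray(d, dtype=float)
        if use_std:
            total += (d - d.mean()) / max(d.std(), 1e-9)   # optional std variant
        else:
            total += d / max(d.mean(), 1e-9)               # normalize by mean MCn
    return total

# smallest combined distance = most similar; e.g.
# ranking = np.argsort(combined_similarity([d_pca, d_dct, d_correlogram]))
```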


4.2 Determining Similarity Measures for Heterogeneous Classifier Sets


So far we have been primarily concerned with cases where all classifiers are available for each reference region. In the context of our principal embodiment this implies that both face recognition classifiers, the top-body correlogram classifier and the hair region correlogram classifier are available. However this is not always the case. We can say that the face region classifiers should always be available once a face region is successfully detected. Hereafter we refer to such classifiers as primary classifiers. In contrast, the hair and clothing classifiers are not always available for close-up shots or where a face region is towards the periphery of an image. Hereafter we refer to such classifiers as secondary classifiers.


Thus when the decision fusion process [824] performs a similarity determination across all stored patterns using all available classifiers, some patterns may not have associated secondary classifiers.


This may be dealt with in one of several ways:

    • (i) stored patterns without an associated secondary classifier may have the missing similarity measure for that classifier replaced with the maximum measure determined for that classifier; or
    • (ii) such stored patterns may have said similarity measure replaced with the determined statistical mean measure for said classifier; or
    • (iii) such patterns may be simply ignored in the search.


In case (i) these patterns will appear after patterns which contain all classifiers; in (ii) the missing classifier does not affect the ranking of the pattern, which may appear interspersed with patterns which contain all classifiers; while in (iii) these patterns will not appear in the ranked list determined by the decision fusion process.


A selection between these alternatives may be based on pre-determined criteria, on a user selection, or on statistical analysis of the distribution of the classifier across the pattern set.
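The three options above might be sketched as follows, assuming the normalized distances are held in an array with NaN marking a missing secondary classifier (a hypothetical convention, not from the patent); for the "ignore" strategy the surviving pattern indices would also be tracked in practice.

```python
# Illustrative handling of patterns that lack a secondary classifier.
import numpy as np

def handle_missing(distances, strategy='max'):
    d = np.array(distances, dtype=float)   # copy so the caller's data is untouched
    missing = np.isnan(d)
    if strategy == 'max':        # (i) substitute the maximum measure for that classifier
        d[missing] = np.nanmax(d)
    elif strategy == 'mean':     # (ii) substitute the statistical mean measure
        d[missing] = np.nanmean(d)
    elif strategy == 'ignore':   # (iii) drop such patterns from the search
        d = d[~missing]
    return d
```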


4.3 Determining Similarity Measures for Multiple Reference Regions


A second modification of the decision fusion process arises when we wish to search for a combination of two, or more, reference regions co-occurring within the same image. In this case we process the first reference region according to the previously described methods to obtain a first set of similarity measures. The second reference region is then processed to yield a second set of similarity measures. This process yields multiple sets of similarity measures.


We next cycle through each image and determine the closest pattern to the first reference region; if only one pattern exists within an image then that image will not normally be considered. For each image where at least two patterns are present we next determine the closest pattern to the second reference region. These two similarity measures are next combined as illustrated in FIG. 10 where the normalized classifier similarity measures for reference region No. 1, D′11 [1101], D′21[1102] and D′31[1103] are combined with the normalized classifier similarity measures for reference region No. 2, D′12[1105], D′22[1106] and D′32[1107]. This provides a combined similarity measure, Dtot, for that image (pattern grouping) and is recorded accordingly. After each image in the image collection is thus analyzed, a ranking list based on these combined similarity measures can be created and the relevant images sorted and displayed accordingly. A user interface for this decision fusion method is illustrated in FIGS. 8(a)-(d).
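One possible sketch of this two-reference-region fusion is given below; the assumption that the second reference region must match a different pattern than the first within the same image is an interpretation for illustration, and all names are hypothetical.

```python
# Illustrative fusion of similarity measures for two reference regions (FIG. 10).
def two_person_similarity(region_measures):
    # region_measures: for one image, a list of dicts, one per detected pattern,
    #   e.g. {'ref1': combined distance to reference region No. 1,
    #         'ref2': combined distance to reference region No. 2}
    if len(region_measures) < 2:
        return None                                  # single-pattern images are not considered
    best1 = min(region_measures, key=lambda r: r['ref1'])
    remaining = [r for r in region_measures if r is not best1]
    best2 = min(remaining, key=lambda r: r['ref2'])
    return best1['ref1'] + best2['ref2']             # combined Dtot for this image

# Images are then ranked by their combined Dtot (smallest first) and displayed.
```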


4.4 Employing User Input in the Combination Process


From the descriptions in 4.2 and 4.3 of the various methods of combining the normalized classifiers it is clear that, once the normalized classifiers for each pattern are determined, the main decision fusion process can combine these classifiers in a variety of ways and that the resulting images (pattern groupings) can be correspondingly sorted in a variety of ways with differing results.


Accordingly we illustrate in FIG. 9 an image browser user interface for selecting between different combinations of classifiers. The user may select between using face recognition classifiers only [1002], a combination of face and top-body classifiers [1003], a full body region classifier [1004] or a body pose classifier [1005] (see also “Face annotation for family photo album management” to Chen et al., published in the International Journal of Image and Graphics Vol. 3, No. 1 (2003), hereby incorporated by reference).


Those skilled in the art will realize that alternative user interface embodiments are possible. Further, the activation buttons for these exemplary classifiers [1002, 1003, 1004 and 1005] may operate in a combinative manner. Thus, if multiple user interface components [1002, 1003, 1004 and 1005] are selected together, the decision fusion process within the image browser application can be modified to combine these classifiers accordingly. Further, additional UI components, such as sliders or percentage scales, can be used to determine weightings between the selected classifiers to allow the user additional flexibility in sorting & retrieving images.


5. User Interface Aspects



FIGS. 8(a) . . . (d) illustrate the UI aspects of an alternative application which employs various software components in accordance with a preferred embodiment. The various steps of the image sorting/retrieval process are illustrated, starting with FIG. 8(a) which illustrates face regions [951-1], [951-2], [951-3] and [951-4] detected within an image. This image can be selected from a subdirectory on the computer file system containing an image collection, or, alternatively, through accessing a list of image links, preferably stored in a database, which define a set of images which are members of a currently selected image collection. Images from the collection are randomly sorted at this stage [952].


Next, in FIG. 8(b), at least one of the detected face regions [951-1], [951-2], [951-3] and [951-4] within an image is selected by the user [953]. The image collection is next sorted based on the selected classifier and, in this instance, as only the face recognition mode was selected [960], a set of sorted images ranked according to face region similarity is obtained [954].


In FIG. 8(c) we illustrate the results when the peripheral regions of hair and upper body are included in the set of classifiers [961] used for sorting and ranking images within the collection. This selection may also be made by clicking on the face region [955] a second time, which causes the marked region in the selected image to expand to include the shoulders and hair of the person selected. In this case we see that the images returned by the sorting & retrieval process are now determined not only by the person's face, but are also sorted according to the clothing and/or hairstyle in each image [956] in which that person occurs.



FIG. 8(d) illustrates a further aspect of the present invention which allows a collection to be searched for a co-occurrence of two, or more, persons within an image. In this case two (or more) face regions are selected from an image [957-1] and [957-2] and the image collection is searched for a co-occurrence of both faces. This is achieved by only considering images in which there are at least two determined face regions. Similarity measures are then determined between each face region selected for retrieval purposes and the face regions in each image which has at least two face regions; this leads to two sets of classifiers, [C11, C12 . . . C1N] and [C21, C22 . . . C2N]. A statistical distribution is associated with each classifier as previously explained and illustrated in FIG. 5(e). These are now combined as illustrated in FIG. 10 to yield a combined similarity measure (distance) between the selected pair of faces and each image in the collection. The closest images are then displayed in the UI [958].


First Alternative Embodiment: Integration into OS Components


An alternative embodiment involving UI aspects is illustrated in FIG. 9. In this case, a system in accordance with a preferred embodiment has been integrated with an existing component of the operating system which performs the function of browsing the file/directory subsystem of a computer. In this embodiment, each subdirectory may be considered as containing an image collection and the training determination component described in FIG. 1(a) can be activated when a user switches to a subdirectory containing images. Further, both incremental and full training processes may be implemented as background processes so that they do not interfere with the normal activities of a user who simply wishes to browse files using the normal OS tools.


However, if the user selects a mode to sort images based on the faces occurring in them [1002], the faces & clothing/hair features [1003], the full body clothing [1004], or the body pose of a person [1005], the training mode may then switch to a foreground process in order to accelerate completion of the training process for the selected subdirectory (image collection). The image regions associated with each of these exemplary classifiers are shown as [1012], [1013], [1014] and [1015] in FIG. 9.
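The background-to-foreground transition of the training process could be realized, for example, with a worker thread whose pacing changes when the user enters one of the sorting modes. The following is a simplified sketch under that assumption; the class and method names are hypothetical and do not reflect the actual implementation.

```python
import threading
import time

class CollectionTrainer:
    """Trains classifiers for one image collection, lazily or at full speed."""

    def __init__(self, images, train_one):
        self.images = list(images)      # images in the collection still to train on
        self.train_one = train_one      # callback that trains on a single image
        self.foreground = threading.Event()
        self.done = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start_background(self):
        # Started when the user browses into a subdirectory containing images.
        self._thread.start()

    def promote_to_foreground(self):
        # Called when the user selects a face/clothing/pose sorting mode.
        self.foreground.set()

    def _run(self):
        for image in self.images:
            self.train_one(image)
            if not self.foreground.is_set():
                time.sleep(0.05)        # yield so normal file browsing stays responsive
        self.done.set()
```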


Once the training process is completed, the face regions for the currently selected image become active as UI elements and a user may now select one or more persons from the image by clicking on their faces. The sorted images are then displayed as thumbnails [1010] and the user may combine (or eliminate) additional classifiers from the UI by selecting/deselecting [1002], [1003], [1004] and [1005].


The image browser application illustrated in FIG. 9 further illustrates how the invention may be advantageously employed to allow sorting of images across multiple image collections. In the embodiment illustrated in FIG. 9, each subdirectory or folder of the left-hand browser window [1001] either contains a previously trained image collection, or a training process is activated upon a user selection of an untrained image folder. In the discussion that follows each subdirectory is assumed to contain a previously trained image collection and an image collection data set comprising an image data record for each image which is a member of that collection.


The browser application supports two distinct means of searching multiple collections to find the nearest match to one or more face regions selected within the main browser image [1012]. In the context of this embodiment of the invention, this may be achieved by selecting multiple image collections in the left-hand window of the image browser [1001].


In the first method the user selects multiple collections from the left-hand browser window [1001]. The selected face regions within the main image are next analyzed and feature vectors are extracted for each classifier based on the basis sets determined within the first selected image collection. Similarity measures are determined between the one or more selected face regions of the main image and each of the face regions within said first selected image collection for each of the classifier sets for which basis vectors are provided within that image collection data set. Normalization measures are determined and combined similarity measures are next determined for each face region within said first selected image collection. A list of these normalized combined similarity measures is stored temporarily.


This process is repeated for each of the selected image collections and an associated list of normalized combined similarity measures is determined. These lists are next combined and all images from the selected image collections are displayed according to their relative similarity measures in the bottom right-hand window [1010] of the image browser.
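The first search method might be summarized in code roughly as follows: feature vectors for the selected regions are re-extracted against each collection's own basis sets, combined similarity measures are normalized within each collection, and the per-collection lists are then merged into a single ranking. The attribute and helper names below are assumptions made for the sake of a self-contained sketch.

```python
import statistics

def search_across_collections(selected_regions, collections, project, combine):
    """Rank images drawn from several independently trained collections.

    selected_regions: face regions chosen in the main browser image.
    collections: iterable of collection data sets, each assumed to expose
        `basis_vectors` (per classifier) and `image_records`
        (image_id -> per-image face feature data).
    project(region, basis_vectors) -> feature vectors for that region.
    combine(selected_vectors, record) -> combined similarity measure (distance).
    """
    merged = []
    for collection in collections:
        # Feature vectors are extracted against THIS collection's basis sets.
        selected_vectors = [project(r, collection.basis_vectors)
                            for r in selected_regions]
        raw = [(combine(selected_vectors, record), image_id)
               for image_id, record in collection.image_records.items()]

        # Normalize within the collection so that lists produced by differently
        # trained collections can be merged on a comparable scale.
        values = [v for v, _ in raw]
        mu, sigma = statistics.fmean(values), statistics.pstdev(values) or 1.0
        merged.extend(((v - mu) / sigma, image_id) for v, image_id in raw)

    merged.sort()                       # smaller normalized distance ranks first
    return [image_id for _, image_id in merged]
```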


A second method of searching multiple collections combines these image collections into a new “super-collection”. The collection data sets for each of the selected image collections are then loaded and merged to form a combined collection data set for this “super-collection”. Certain data from the combined data set will now be invalid because it is dependent on the results of the training process. This is illustrated in FIGS. 4(c) & 4(d). Fortunately, the most time-consuming data to determine is that pertaining to the location of valid face regions and the normalization of these regions. All of this data can be reused.
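A sketch of forming the "super-collection" data set is given below: the reusable, training-independent results (face-region locations and the normalized regions) are carried over unchanged, while training-dependent data such as basis vectors and feature vectors are discarded so that the retraining step can regenerate them. The dictionary keys are illustrative assumptions about how a collection data set might be organized.

```python
def merge_into_super_collection(collection_data_sets):
    """Merge collection data sets, keeping only training-independent data.

    Each data set is assumed to be a dict with:
      "basis_vectors": per-classifier basis vectors          (training dependent)
      "image_records": image_id -> {"face_regions", "normalized_regions",
                                     "feature_vectors"}
    where face_regions and normalized_regions can be reused and
    feature_vectors depend on the old basis sets.
    """
    super_records = {}
    for data_set in collection_data_sets:
        for image_id, record in data_set["image_records"].items():
            super_records[image_id] = {
                # Expensive detection and normalization results are reused as-is.
                "face_regions": record["face_regions"],
                "normalized_regions": record["normalized_regions"],
                # Feature vectors are invalidated; retraining regenerates them.
                "feature_vectors": None,
            }
    return {"basis_vectors": None, "image_records": super_records}
```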


The modified retraining process for such a “super-collection” is described above with reference to FIG. 2(d).


Thus, upon a user selection of multiple image collections, the present invention allows a fast retraining of the combined image "super-collection". In this case the primary selection image presented in the main browser window [1012] will be from the combined image "super-collection" and the sorted images presented in the lower right-hand window [1010] are also taken from this combined "super-collection".


Second Alternative Embodiment: In-Camera Implementation


As imaging appliances continue to increase in computing power, memory and non-volatile storage, it will be evident to those skilled in the art of digital camera design that many aspects of the present invention could be advantageously embodied as an in-camera image sorting sub-system. An exemplary embodiment is illustrated in FIG. 11.


Following the main image acquisition process [1202], a copy of the acquired image is saved to the main image collection [1212], which will typically be stored on a removable compact-flash or multimedia data card [1214]. The acquired image may also be passed to an image subsampler [1232] which generates an optimized subsampled copy of the main image and stores it in a subsampled image collection [1216]. These subsampled images may advantageously be employed in the analysis of the acquired image.


The acquired image (or a subsampled copy thereof) is also passed to a face detector module [1204] followed by a face and peripheral region extraction module [1206] and a region normalization module [1207]. The extracted, normalized regions are next passed to the main image analysis module [1208] which generates an image data record [409] for the current image. The main image analysis module may also be called from the training module [1230] and the image sorting/retrieval module [1218].
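The flow through these in-camera modules can be summarized by the following sketch, in which each processing stage is supplied as a callback; the reference numerals in the comments mirror FIG. 11, but the function signatures themselves are assumptions for illustration.

```python
def process_acquired_image(image, main_store, subsampled_store, subsample,
                           detect_faces, extract_regions, normalize_region,
                           analyze_regions):
    """One pass of the in-camera sorting sub-system for a newly acquired image."""
    main_store.save(image)                        # main image collection [1212]/[1214]

    small = subsample(image)                      # image subsampler [1232]
    subsampled_store.save(small)                  # subsampled image collection [1216]

    faces = detect_faces(small)                   # face detector module [1204]
    regions = [extract_regions(small, f) for f in faces]     # face + peripheral [1206]
    normalized = [normalize_region(r) for r in regions]      # region normalization [1207]

    return analyze_regions(normalized)            # image data record [409] from [1208]
```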


A UI module [1220] facilitates the browsing & selection of images [1222] and the selection of one or more face regions [1224] to use in the sorting/retrieval process [1218]. In addition, classifiers may be selected and combined [1226] from the UI module [1220].


Those skilled in the art will realize that various combinations are possible where certain modules are implemented in a digital camera and others are implemented on a desktop computer.

Claims
  • 1. A digital image acquisition device, including a lens, an image sensor and a processor, and having an operating system including a component embodied within a processor-readable medium for programming the processor to perform an image recognition method: a) training a plurality of image classifiers, including: for a plurality of images in the collection, identifying one or more regions corresponding to a face region; for each image identified as having multiple face regions, for each of a plurality of image classifiers, determining combination feature vectors corresponding to the multiple face regions; and storing said combination feature vectors in association with certain recognizable data relating to at least one of the multiple face regions, and b) retrieving a sub-set of images from said collection or a different collection that includes one or more images including both a face associated with certain recognizable data and a second face, or a subset of said collection, or a combination thereof, including: selecting from said plurality of image classifiers at least one classifier on which said retrieving is to be based, said at least one classifier being configured for programming the processor to select images containing at least two reference face regions including a first face to be recognized and a second face; determining, for said at least two reference face regions, a respective feature vector for one or more selected classifiers; and retrieving said sub-set of images from within said collection or said different collection that includes one or more images including both said face associated with certain recognizable data and said second face, or said subset of said collection, or said combination thereof, in accordance with the distance between the feature vectors determined for said reference region and the feature vectors for face regions of said image collection; and wherein said determining comprises: a) for each face region, extracting respective features representative of the region; b) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and c) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
  • 2. A method for image recognition in a collection of digital images comprising: a) training a plurality of image classifiers, including: for a plurality of images in the collection, identifying one or more regions corresponding to a face region; for each image identified as having multiple face regions, for each of a plurality of image classifiers, determining combination feature vectors corresponding to the multiple face regions; and storing said combination feature vectors in association with certain recognizable data relating to at least one of the multiple face regions, and b) retrieving a sub-set of images from said collection or a different collection that includes one or more images including both a face associated with certain recognizable data and a second face, or a subset of said collection, or a combination thereof, including: selecting from said plurality of image classifiers at least one classifier on which said retrieving is to be based, said at least one classifier being configured for programming the processor to select images containing at least two reference face regions including a first face to be recognized and a second face; determining, for said at least two reference face regions, a respective feature vector for one or more selected classifiers; and retrieving said sub-set of images from within said collection or said different collection that includes one or more images including both said face associated with certain recognizable data and said second face, or said subset of said collection, or said combination thereof, in accordance with the distance between the feature vectors determined for said reference region and the feature vectors for face regions of said image collection; and wherein said determining comprises: a) for each face region, extracting respective features representative of the region; b) for each of said plurality of image classifiers determining respective basis vectors according to said extracted features; and c) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
  • 3. A method as claimed in claim 2, wherein said determining further comprises: a) for each associated peripheral region for said each face region, extracting respective features representative of the peripheral region; b) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and c) for the extracted features for each peripheral region, for each classifier, determining said feature vectors, based on each determined basis vector.
  • 4. A method as claimed in claim 2 wherein each basis vector for a classifier comprises a selected set of attributes and respective reference values for these attributes.
  • 5. A method as claimed in claim 2, wherein said determining feature vectors for said reference region, is responsive to determining that feature vectors have previously been determined for said reference region for said classifier, for retrieving said feature vectors from storage.
  • 6. A method as claimed in claim 2, wherein retrieving said sub-set of images comprises, for each classifier, comparing feature vectors for the selected face region with feature vectors for face regions in the image collection to provide a set of distance measures.
  • 7. A method as in claim 2, further comprising calculating for each set of distance measures, mean and variance values.
  • 8. A component embodied within a non-transitory processor-readable medium for programming a processor to perform an image recognition method including image recognition in a collection of digital images, wherein the method comprises: a) training a plurality of image classifiers, including: for a plurality of images in the collection, identifying one or more regions corresponding to a face region; for each image identified as having multiple face regions, for each of a plurality of image classifiers, determining combination feature vectors corresponding to the multiple face regions; and storing said combination feature vectors in association with certain recognizable data relating to at least one of the multiple face regions, and b) retrieving a sub-set of images from said collection or a different collection that includes one or more images including both a face associated with certain recognizable data and a second face, or a subset of said collection, or a combination thereof, including: selecting from said plurality of image classifiers at least one classifier on which said retrieving is to be based, said at least one classifier being configured for programming the processor to select images containing at least two reference face regions including a first face to be recognized and a second face; determining, for said at least two reference face regions, a respective feature vector for one or more selected classifiers; and retrieving said sub-set of images from within said collection or said different collection that includes one or more images including both said face associated with certain recognizable data and said second face, or said subset of said collection, or said combination thereof, in accordance with the distance between the feature vectors determined for said reference region and the feature vectors for face regions of said image collection; and wherein said determining comprises: a) for each face region, extracting respective features representative of the region; b) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and c) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
  • 9. A component as claimed in claim 8, wherein said determining further comprises: d) for each associated peripheral region for said each face region, extracting respective features representative of the peripheral region; e) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and f) for the extracted features for each peripheral region, for each classifier, determining said feature vectors, based on each determined basis vector.
  • 10. A component as claimed in claim 8, wherein each basis vector for a classifier comprises a selected set of attributes and respective reference values for these attributes.
  • 11. A component as claimed in claim 8, wherein said determining feature vectors for said reference region, is responsive to determining that feature vectors have previously been determined for said reference region for said classifier, for retrieving said feature vectors from storage.
  • 12. A component as claimed in claim 8, wherein retrieving said sub-set of images comprises, for each classifier, comparing feature vectors for the selected face region with feature vectors for face regions in the image collection to provide a set of distance measures.
  • 13. A component as claimed in claim 8, wherein the method further comprises calculating for each set of distance measures, mean and variance values.
  • 14. A device as claimed in claim 1, wherein the method further comprises calculating for each set of distance measures, mean and variance values.
  • 15. A component as claimed in claim 8, wherein said determining further comprises: g) for each face region and any associated peripheral region, extracting respective features representative of the region; h) for each of said plurality of image classifiers, determining respective basis vectors according to said extracted features; and i) for the extracted features for each region, for each classifier, determining said feature vectors, based on each determined basis vector.
  • 16. A device as claimed in claim 1, wherein each basis vector for a classifier comprises a selected set of attributes and respective reference values for these attributes.
  • 17. A device as claimed in claim 1, wherein said determining feature vectors for said reference region, is responsive to determining that feature vectors have previously been determined for said reference region for said classifier, for retrieving said feature vectors from storage.
  • 18. A device as claimed in claim 1, wherein retrieving said sub-set of images comprises, for each classifier, comparing feature vectors for the selected face region with feature vectors for face regions in the image collection to provide a set of distance measures.
PRIORITY

This application is a Continuation of U.S. patent application Ser. No. 11/027,001, filed Dec. 29, 2004, now U.S. Pat. No. 7,715,597; which is hereby incorporated by reference.

US Referenced Citations (420)
Number Name Date Kind
4047187 Mashimo et al. Sep 1977 A
4317991 Stauffer Mar 1982 A
4376027 Smith et al. Mar 1983 A
RE31370 Mashimo et al. Sep 1983 E
4638364 Hiramatsu Jan 1987 A
5018017 Sasaki et al. May 1991 A
RE33682 Hiramatsu Sep 1991 E
5063603 Burt Nov 1991 A
5164831 Kuchta et al. Nov 1992 A
5164992 Turk et al. Nov 1992 A
5227837 Terashita Jul 1993 A
5280530 Trew et al. Jan 1994 A
5291234 Shindo et al. Mar 1994 A
5311240 Wheeler May 1994 A
5384912 Ogrinc et al. Jan 1995 A
5430809 Tomitaka Jul 1995 A
5432863 Benati et al. Jul 1995 A
5488429 Kojima et al. Jan 1996 A
5496106 Anderson Mar 1996 A
5500671 Andersson et al. Mar 1996 A
5576759 Kawamura et al. Nov 1996 A
5633678 Parulski et al. May 1997 A
5638136 Kojima et al. Jun 1997 A
5642431 Poggio et al. Jun 1997 A
5680481 Prasad et al. Oct 1997 A
5684509 Hatanaka et al. Nov 1997 A
5706362 Yabe Jan 1998 A
5710833 Moghaddam et al. Jan 1998 A
5724456 Boyack et al. Mar 1998 A
5744129 Dobbs et al. Apr 1998 A
5745668 Poggio et al. Apr 1998 A
5774129 Poggio et al. Jun 1998 A
5774747 Ishihara et al. Jun 1998 A
5774754 Ootsuka Jun 1998 A
5781650 Lobo et al. Jul 1998 A
5802208 Podilchuk et al. Sep 1998 A
5812193 Tomitaka et al. Sep 1998 A
5818975 Goodwin et al. Oct 1998 A
5835616 Lobo et al. Nov 1998 A
5842194 Arbuckle Nov 1998 A
5844573 Poggio et al. Dec 1998 A
5852823 De Bonet Dec 1998 A
5870138 Smith et al. Feb 1999 A
5911139 Jain et al. Jun 1999 A
5911456 Tsubouchi et al. Jun 1999 A
5978519 Bollman et al. Nov 1999 A
5991456 Rahman et al. Nov 1999 A
6035072 Read Mar 2000 A
6053268 Yamada Apr 2000 A
6072904 Desai et al. Jun 2000 A
6097470 Buhr et al. Aug 2000 A
6101271 Yamashita et al. Aug 2000 A
6128397 Baluja et al. Oct 2000 A
6142876 Cumbers Nov 2000 A
6148092 Qian Nov 2000 A
6188777 Darrell et al. Feb 2001 B1
6192149 Eschbach et al. Feb 2001 B1
6234900 Cumbers May 2001 B1
6246790 Huang et al. Jun 2001 B1
6249315 Holm Jun 2001 B1
6263113 Abdel-Mottaleb et al. Jul 2001 B1
6268939 Klassen et al. Jul 2001 B1
6282317 Luo et al. Aug 2001 B1
6301370 Steffens et al. Oct 2001 B1
6332033 Qian Dec 2001 B1
6349373 Sitka et al. Feb 2002 B2
6351556 Loui et al. Feb 2002 B1
6389181 Shaffer et al. May 2002 B2
6393148 Bhaskar May 2002 B1
6400470 Takaragi et al. Jun 2002 B1
6400830 Christian et al. Jun 2002 B1
6404900 Qian et al. Jun 2002 B1
6407777 DeLuca Jun 2002 B1
6418235 Morimoto et al. Jul 2002 B1
6421468 Ratnakar et al. Jul 2002 B1
6430307 Souma et al. Aug 2002 B1
6430312 Huang et al. Aug 2002 B1
6438264 Gallagher et al. Aug 2002 B1
6456732 Kimbell et al. Sep 2002 B1
6459436 Kumada et al. Oct 2002 B1
6473199 Gilman et al. Oct 2002 B1
6501857 Gotsman et al. Dec 2002 B1
6502107 Nishida Dec 2002 B1
6504942 Hong et al. Jan 2003 B1
6504951 Luo et al. Jan 2003 B1
6516154 Parulski et al. Feb 2003 B1
6526161 Yan Feb 2003 B1
6554705 Cumbers Apr 2003 B1
6556708 Christian et al. Apr 2003 B1
6564225 Brogliatti et al. May 2003 B1
6567775 Maali et al. May 2003 B1
6567983 Shiimori May 2003 B1
6606398 Cooper Aug 2003 B2
6633655 Hong et al. Oct 2003 B1
6661907 Ho et al. Dec 2003 B2
6697503 Matsuo et al. Feb 2004 B2
6697504 Tsai Feb 2004 B2
6754389 Dimitrova et al. Jun 2004 B1
6760465 McVeigh et al. Jul 2004 B2
6765612 Anderson et al. Jul 2004 B1
6783459 Cumbers Aug 2004 B2
6801250 Miyashita Oct 2004 B1
6826300 Liu et al. Nov 2004 B2
6850274 Silverbrook et al. Feb 2005 B1
6876755 Taylor et al. Apr 2005 B1
6879705 Tao et al. Apr 2005 B1
6928231 Tajima Aug 2005 B2
6940545 Ray et al. Sep 2005 B1
6965684 Chen et al. Nov 2005 B2
6993157 Oue et al. Jan 2006 B1
7003135 Hsieh et al. Feb 2006 B2
7020337 Viola et al. Mar 2006 B2
7027619 Pavlidis et al. Apr 2006 B2
7035456 Lestideau Apr 2006 B2
7035467 Nicponski Apr 2006 B2
7038709 Verghese May 2006 B1
7038715 Flinchbaugh May 2006 B1
7042505 DeLuca May 2006 B1
7046339 Stanton et al. May 2006 B2
7050607 Li et al. May 2006 B2
7064776 Sumi et al. Jun 2006 B2
7082212 Liu et al. Jul 2006 B2
7092555 Lee et al. Aug 2006 B2
7099510 Jones et al. Aug 2006 B2
7110575 Chen et al. Sep 2006 B2
7113641 Eckes et al. Sep 2006 B1
7119838 Zanzucchi et al. Oct 2006 B2
7120279 Chen et al. Oct 2006 B2
7151843 Rui et al. Dec 2006 B2
7158680 Pace Jan 2007 B2
7162076 Liu Jan 2007 B2
7162101 Itokawa et al. Jan 2007 B2
7171023 Kim et al. Jan 2007 B2
7171025 Rui et al. Jan 2007 B2
7175528 Cumbers Feb 2007 B1
7187786 Kee Mar 2007 B2
7190829 Zhang et al. Mar 2007 B2
7200249 Okubo et al. Apr 2007 B2
7206461 Steinberg et al. Apr 2007 B2
7218759 Ho et al. May 2007 B1
7227976 Jung et al. Jun 2007 B1
7254257 Kim et al. Aug 2007 B2
7269292 Steinberg Sep 2007 B2
7274822 Zhang et al. Sep 2007 B2
7274832 Nicponski Sep 2007 B2
7295233 Steinberg et al. Nov 2007 B2
7308156 Steinberg et al. Dec 2007 B2
7310450 Steinberg et al. Dec 2007 B2
7315630 Steinberg et al. Jan 2008 B2
7315631 Corcoran et al. Jan 2008 B1
7315658 Steinberg et al. Jan 2008 B2
7317815 Steinberg et al. Jan 2008 B2
7317816 Ray et al. Jan 2008 B2
7324670 Kozakaya et al. Jan 2008 B2
7330570 Sogo et al. Feb 2008 B2
7336821 Ciuc et al. Feb 2008 B2
7340109 Steinberg et al. Mar 2008 B2
7352394 DeLuca et al. Apr 2008 B1
7357717 Cumbers Apr 2008 B1
7362368 Steinberg et al. Apr 2008 B2
7369712 Steinberg et al. May 2008 B2
7403643 Ianculescu et al. Jul 2008 B2
7424170 Steinberg et al. Sep 2008 B2
7436998 Steinberg et al. Oct 2008 B2
7440593 Steinberg et al. Oct 2008 B1
7440594 Takenaka Oct 2008 B2
7460694 Corcoran et al. Dec 2008 B2
7460695 Steinberg et al. Dec 2008 B2
7466866 Steinberg Dec 2008 B2
7469055 Corcoran et al. Dec 2008 B2
7469071 Drimbarean et al. Dec 2008 B2
7471846 Steinberg et al. Dec 2008 B2
7474341 DeLuca et al. Jan 2009 B2
7506057 Bigioi et al. Mar 2009 B2
7515740 Corcoran et al. Apr 2009 B2
7536036 Steinberg et al. May 2009 B2
7536060 Steinberg et al. May 2009 B2
7536061 Steinberg et al. May 2009 B2
7545995 Steinberg et al. Jun 2009 B2
7551754 Steinberg et al. Jun 2009 B2
7551755 Steinberg et al. Jun 2009 B1
7551800 Corcoran et al. Jun 2009 B2
7555148 Steinberg et al. Jun 2009 B1
7558408 Steinberg et al. Jul 2009 B1
7564994 Steinberg et al. Jul 2009 B1
7565030 Steinberg et al. Jul 2009 B2
7574016 Steinberg et al. Aug 2009 B2
7587068 Steinberg et al. Sep 2009 B1
7587085 Steinberg et al. Sep 2009 B2
7590305 Steinberg et al. Sep 2009 B2
7599577 Ciuc et al. Oct 2009 B2
7606417 Steinberg et al. Oct 2009 B2
7616233 Steinberg et al. Nov 2009 B2
7619665 DeLuca Nov 2009 B1
7620218 Steinberg et al. Nov 2009 B2
7630006 DeLuca et al. Dec 2009 B2
7630527 Steinberg et al. Dec 2009 B2
7634109 Steinberg et al. Dec 2009 B2
7636486 Steinberg et al. Dec 2009 B2
7639888 Steinberg et al. Dec 2009 B2
7639889 Steinberg et al. Dec 2009 B2
7660478 Steinberg et al. Feb 2010 B2
7676108 Steinberg et al. Mar 2010 B2
7676110 Steinberg et al. Mar 2010 B2
7680342 Steinberg et al. Mar 2010 B2
7683946 Steinberg et al. Mar 2010 B2
7684630 Steinberg Mar 2010 B2
7685341 Steinberg et al. Mar 2010 B2
7689009 Corcoran et al. Mar 2010 B2
7692696 Steinberg et al. Apr 2010 B2
7693311 Steinberg et al. Apr 2010 B2
7694048 Steinberg et al. Apr 2010 B2
7697778 Steinberg et al. Apr 2010 B2
7702136 Steinberg et al. Apr 2010 B2
7702236 Steinberg et al. Apr 2010 B2
7715597 Costache et al. May 2010 B2
7738015 Steinberg et al. Jun 2010 B2
7746385 Steinberg et al. Jun 2010 B2
7747596 Bigioi et al. Jun 2010 B2
7773118 Florea et al. Aug 2010 B2
7783085 Perlmutter et al. Aug 2010 B2
7787022 Steinberg et al. Aug 2010 B2
7792335 Steinberg et al. Sep 2010 B2
7792970 Bigioi et al. Sep 2010 B2
7796816 Steinberg et al. Sep 2010 B2
7796822 Steinberg et al. Sep 2010 B2
7804531 DeLuca et al. Sep 2010 B2
7804983 Steinberg et al. Sep 2010 B2
7809162 Steinberg et al. Oct 2010 B2
7822234 Steinberg et al. Oct 2010 B2
7822235 Steinberg et al. Oct 2010 B2
7844076 Corcoran et al. Nov 2010 B2
7844135 Steinberg et al. Nov 2010 B2
7847839 DeLuca et al. Dec 2010 B2
7847840 DeLuca et al. Dec 2010 B2
7848549 Steinberg et al. Dec 2010 B2
7852384 DeLuca et al. Dec 2010 B2
7853043 Steinberg et al. Dec 2010 B2
7855737 Petrescu et al. Dec 2010 B2
7860274 Steinberg et al. Dec 2010 B2
7864990 Corcoran et al. Jan 2011 B2
7865036 Ciuc et al. Jan 2011 B2
7868922 Ciuc et al. Jan 2011 B2
7869628 Corcoran et al. Jan 2011 B2
20010028731 Covell et al. Oct 2001 A1
20010031129 Tajima Oct 2001 A1
20010031142 Whiteside Oct 2001 A1
20020105662 Patton et al. Aug 2002 A1
20020106114 Yan et al. Aug 2002 A1
20020113879 Battle et al. Aug 2002 A1
20020114535 Luo Aug 2002 A1
20020132663 Cumbers Sep 2002 A1
20020136433 Lin Sep 2002 A1
20020141586 Margalit et al. Oct 2002 A1
20020154793 Hillhouse et al. Oct 2002 A1
20020168108 Loui et al. Nov 2002 A1
20020172419 Lin et al. Nov 2002 A1
20030025812 Slatter Feb 2003 A1
20030035573 Duta et al. Feb 2003 A1
20030043160 Elfving et al. Mar 2003 A1
20030048926 Watanabe Mar 2003 A1
20030048950 Savakis et al. Mar 2003 A1
20030052991 Stavely et al. Mar 2003 A1
20030059107 Sun et al. Mar 2003 A1
20030059121 Savakis et al. Mar 2003 A1
20030084065 Lin et al. May 2003 A1
20030086134 Enomoto May 2003 A1
20030086593 Liu et al. May 2003 A1
20030107649 Flickner et al. Jun 2003 A1
20030118216 Goldberg Jun 2003 A1
20030118218 Wendt et al. Jun 2003 A1
20030122839 Matraszek et al. Jul 2003 A1
20030128877 Nicponski Jul 2003 A1
20030156202 Van Zee Aug 2003 A1
20030158838 Okusa Aug 2003 A1
20030198368 Kee Oct 2003 A1
20030210808 Chen et al. Nov 2003 A1
20040008258 Aas et al. Jan 2004 A1
20040136574 Kozakaya et al. Jul 2004 A1
20040145660 Kusaka Jul 2004 A1
20040207722 Koyama et al. Oct 2004 A1
20040210763 Jonas Oct 2004 A1
20040213454 Lai et al. Oct 2004 A1
20040223063 DeLuca et al. Nov 2004 A1
20040264780 Zhang et al. Dec 2004 A1
20050013479 Xiao et al. Jan 2005 A1
20050031224 Prilutsky et al. Feb 2005 A1
20050036676 Heisele Feb 2005 A1
20050063569 Colbert et al. Mar 2005 A1
20050069208 Morisada Mar 2005 A1
20050129278 Rui et al. Jun 2005 A1
20050140801 Prilutsky et al. Jun 2005 A1
20050226509 Maurer et al. Oct 2005 A1
20060006077 Mosher et al. Jan 2006 A1
20060018521 Avidan Jan 2006 A1
20060093238 Steinberg et al. May 2006 A1
20060104488 Bazakos et al. May 2006 A1
20060120599 Steinberg et al. Jun 2006 A1
20060140055 Ehrsam et al. Jun 2006 A1
20060140455 Costache et al. Jun 2006 A1
20060177100 Zhu et al. Aug 2006 A1
20060177131 Porikli Aug 2006 A1
20060204034 Steinberg et al. Sep 2006 A1
20060204053 Mori et al. Sep 2006 A1
20060228040 Simon et al. Oct 2006 A1
20060239515 Zhang et al. Oct 2006 A1
20060251292 Gokturk et al. Nov 2006 A1
20070011651 Wagner Jan 2007 A1
20070053335 Onyon et al. Mar 2007 A1
20070091203 Peker et al. Apr 2007 A1
20070098303 Gallagher et al. May 2007 A1
20070154095 Cao et al. Jul 2007 A1
20070154096 Cao et al. Jul 2007 A1
20070160307 Steinberg et al. Jul 2007 A1
20070253638 Steinberg et al. Nov 2007 A1
20070269108 Steinberg et al. Nov 2007 A1
20070296833 Corcoran et al. Dec 2007 A1
20080013798 Ionita et al. Jan 2008 A1
20080013799 Steinberg et al. Jan 2008 A1
20080043121 Prilutsky et al. Feb 2008 A1
20080049970 Ciuc et al. Feb 2008 A1
20080075385 David et al. Mar 2008 A1
20080089561 Zhang Apr 2008 A1
20080112599 Nanu et al. May 2008 A1
20080137919 Kozakaya et al. Jun 2008 A1
20080143854 Steinberg et al. Jun 2008 A1
20080144966 Steinberg et al. Jun 2008 A1
20080175481 Petrescu et al. Jul 2008 A1
20080186389 DeLuca et al. Aug 2008 A1
20080205712 Ionita et al. Aug 2008 A1
20080219517 Blonk et al. Sep 2008 A1
20080219518 Steinberg et al. Sep 2008 A1
20080219581 Albu et al. Sep 2008 A1
20080220750 Steinberg et al. Sep 2008 A1
20080232711 Prilutsky et al. Sep 2008 A1
20080240555 Nanu et al. Oct 2008 A1
20080266419 Drimbarean et al. Oct 2008 A1
20080267461 Ianculescu et al. Oct 2008 A1
20080292193 Bigioi et al. Nov 2008 A1
20080309769 Albu et al. Dec 2008 A1
20080309770 Florea et al. Dec 2008 A1
20080316327 Steinberg et al. Dec 2008 A1
20080316328 Steinberg et al. Dec 2008 A1
20080317339 Steinberg et al. Dec 2008 A1
20080317357 Steinberg et al. Dec 2008 A1
20080317378 Steinberg et al. Dec 2008 A1
20080317379 Steinberg et al. Dec 2008 A1
20090002514 Steinberg et al. Jan 2009 A1
20090003661 Ionita et al. Jan 2009 A1
20090003708 Steinberg et al. Jan 2009 A1
20090040342 Drimbarean et al. Feb 2009 A1
20090080713 Bigioi et al. Mar 2009 A1
20090080796 Capata et al. Mar 2009 A1
20090080797 Nanu et al. Mar 2009 A1
20090115915 Steinberg et al. May 2009 A1
20090123063 Ciuc May 2009 A1
20090167893 Susanu et al. Jul 2009 A1
20090179998 Steinberg et al. Jul 2009 A1
20090179999 Albu et al. Jul 2009 A1
20090185753 Albu et al. Jul 2009 A1
20090189997 Stec et al. Jul 2009 A1
20090189998 Nanu et al. Jul 2009 A1
20090190803 Neghina et al. Jul 2009 A1
20090196466 Capata et al. Aug 2009 A1
20090238410 Corcoran et al. Sep 2009 A1
20090238419 Steinberg et al. Sep 2009 A1
20090263022 Petrescu et al. Oct 2009 A1
20090303342 Corcoran et al. Dec 2009 A1
20090303343 Drimbarean et al. Dec 2009 A1
20090304278 Steinberg et al. Dec 2009 A1
20100014721 Steinberg et al. Jan 2010 A1
20100026831 Ciuc et al. Feb 2010 A1
20100026832 Ciuc et al. Feb 2010 A1
20100026833 Ciuc et al. Feb 2010 A1
20100039520 Nanu et al. Feb 2010 A1
20100039525 Steinberg et al. Feb 2010 A1
20100053362 Nanu et al. Mar 2010 A1
20100053367 Nanu et al. Mar 2010 A1
20100053368 Nanu et al. Mar 2010 A1
20100054533 Steinberg et al. Mar 2010 A1
20100054549 Steinberg et al. Mar 2010 A1
20100054592 Nanu et al. Mar 2010 A1
20100060727 Steinberg et al. Mar 2010 A1
20100066822 Steinberg et al. Mar 2010 A1
20100141786 Bigioi et al. Jun 2010 A1
20100141787 Bigioi et al. Jun 2010 A1
20100141798 Steinberg et al. Jun 2010 A1
20100146165 Steinberg et al. Jun 2010 A1
20100165140 Steinberg Jul 2010 A1
20100165150 Steinberg et al. Jul 2010 A1
20100182458 Steinberg et al. Jul 2010 A1
20100194895 Steinberg Aug 2010 A1
20100201826 Steinberg et al. Aug 2010 A1
20100201827 Steinberg et al. Aug 2010 A1
20100220899 Steinberg et al. Sep 2010 A1
20100231727 Steinberg et al. Sep 2010 A1
20100238309 Florea et al. Sep 2010 A1
20100259622 Steinberg et al. Oct 2010 A1
20100260414 Ciuc Oct 2010 A1
20100271499 Steinberg et al. Oct 2010 A1
20100272363 Steinberg et al. Oct 2010 A1
20100295959 Steinberg et al. Nov 2010 A1
20100321537 Zamfir Dec 2010 A1
20100328472 Steinberg et al. Dec 2010 A1
20100328486 Steinberg et al. Dec 2010 A1
20100329549 Steinberg et al. Dec 2010 A1
20100329582 Albu et al. Dec 2010 A1
20110002506 Ciuc et al. Jan 2011 A1
20110002545 Steinberg et al. Jan 2011 A1
20110007174 Bacivarov et al. Jan 2011 A1
20110013043 Corcoran et al. Jan 2011 A1
20110013044 Steinberg et al. Jan 2011 A1
20110025859 Steinberg et al. Feb 2011 A1
20110025886 Steinberg et al. Feb 2011 A1
20110026780 Corcoran et al. Feb 2011 A1
20110033112 Steinberg et al. Feb 2011 A1
20110043648 Albu et al. Feb 2011 A1
20110050919 Albu et al. Mar 2011 A1
20110053654 Petrescu et al. Mar 2011 A1
20110055354 Bigioi et al. Mar 2011 A1
Foreign Referenced Citations (9)
Number Date Country
2370438 Jun 2002 GB
5260360 Oct 1993 JP
WO2007142621 Dec 2007 WO
WO2008015586 Feb 2008 WO
WO2008107112 Sep 2008 WO
WO2008109622 Sep 2008 WO
WO2008107112 Jan 2009 WO
WO2010063463 Jun 2010 WO
WO2010063463 Jul 2010 WO
Related Publications (1)
Number Date Country
20100202707 A1 Aug 2010 US
Continuations (1)
Number Date Country
Parent 11027001 Dec 2004 US
Child 12764650 US