Face recognition performance using additional image features

Information

  • Patent Grant
  • Patent Number
    8,379,917
  • Date Filed
    Friday, October 2, 2009
  • Date Issued
    Tuesday, February 19, 2013
Abstract
A technique is provided for recognizing faces in an image stream using a digital image acquisition device. A first acquired image is received from an image stream. A first face region is detected within the first acquired image having a given size and a respective location within the first acquired image. First faceprint data uniquely identifying the first face region are extracted along with first peripheral region data around the first face region. The first faceprint and peripheral region data are stored, and the first peripheral region data are associated with the first face region. The first face region is tracked until a face lock is lost. A second face region is detected within a second acquired image from the image stream. Second peripheral region data around the second face region are extracted. The second face region is identified upon matching the first and second peripheral region data.
Description
BACKGROUND

Face tracking is a recent innovation in digital cameras and related consumer imaging devices such as camera phones and handheld video cameras. Face tracking technologies have been improving to where they can detect and track faces at up to 60 fps (see, e.g., U.S. Pat. Nos. 7,403,643, 7,460,695, 7,315,631, 7,460,694, and 7,469,055, and US publications 2009/0208056, 2008/0267461 and U.S. Ser. No. 12/063,089, which are all assigned to the same assignee and are incorporated by reference). Users have now come to expect high levels of performance from such in-camera technology.


Faces are initially detected using a face detector, which may use a technique such as that described by Viola-Jones, which uses rectangular Haar classifiers (see, e.g., US 2002/0102024 and Jones, M. and Viola, P., "Fast multi-view face detection," Mitsubishi Electric Research Laboratories, 2003).


Once a face is detected, its location is recorded and a localized region around that face is scanned by a face detector in the next frame. Thus, once a face is initially detected, it can be accurately tracked from frame to frame without a need to run a face detector across the entire image. The "located" face is said to be "locked" by the face tracker. Note that it is still desirable to scan the entire image, or at least selected portions of the image, with a face detector as a background task in order to locate new faces entering the field of view of the camera. However, even when a "face-lock" has been achieved, the localized search with a face detector may return a negative result even though the face is still within the detection region. This can happen because the face has turned to a non-frontal or profile pose, or is facing too far up, down, left or right to be detected; a typical face detector can only accurately detect faces in a semi-frontal pose. Face lock may also be lost due to sudden changes in illumination conditions, e.g. backlighting of a face as it passes in front of a source of illumination, among other possibilities such as facial distortions and occlusions by other persons or objects in a scene.
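
The localized search around a previously detected face can be illustrated with a short sketch. The following is only a minimal illustration of the idea, not the patented implementation; the OpenCV Haar cascade file, the padding factor, and the detection parameters are assumptions chosen for the example.

```python
import cv2

# A minimal sketch of the localized-search tracking idea described above.
# The cascade file and the padding factor are illustrative assumptions.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    """Full-frame detection, used to acquire the initial face lock."""
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))

def track_face(gray, last_box, pad=0.5):
    """Re-detect only inside a padded window around the last known location."""
    x, y, w, h = last_box
    px, py = int(w * pad), int(h * pad)
    x0, y0 = max(0, x - px), max(0, y - py)
    x1 = min(gray.shape[1], x + w + px)
    y1 = min(gray.shape[0], y + h + py)
    hits = cascade.detectMultiScale(gray[y0:y1, x0:x1], 1.1, 5)
    if len(hits) == 0:
        return None  # face lock lost (e.g. profile pose, backlighting)
    fx, fy, fw, fh = hits[0]
    return (x0 + fx, y0 + fy, fw, fh)
```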


Face Recognition in Cameras

Now that face detection has been quickly adopted as a “must-have” technology for digital cameras, many engineers have begun to consider the problem of performing more sophisticated face analysis within portable imaging devices. Perhaps the most desired of these is to recognize and distinguish between the individual subjects within an image, or to pick out the identity of a particular individual, for example, from a stored set of friends & family members. These applications are examples of what may be referred to generically as face recognition.


Forensic Face Recognition

Face recognition is known from other fields of research. In particular, it is known from law enforcement and from applications relating to immigration control and the recognition of suspected terrorists at border crossings. Face recognition is also used in gaming casinos and in a range of commercial security applications.


Most of these applications fall into a sub-category of face recognition that may be referred to as forensic face recognition. In such applications a large database of images acquired under controlled conditions—in particular controlled frontal pose and regulated diffuse illumination—is used to train a set of optimal basis functions for a face. When a new face is acquired, it is analyzed in terms of these basis functions. A facial pattern, or faceprint, is obtained and matched against the recorded patterns of all other faces from this large database (see, e.g., U.S. Pat. Nos. 7,551,755, 7,558,408, 7,587,068, 7,555,148, and 7,564,994, which are assigned to the same assignee and incorporated by reference). Thus, an individual can be compared with the many individuals from a law-enforcement database of known criminals, or some other domestic or international law-enforcement database of known persons of interest. Such a system relies on a large back-end database and a powerful computing infrastructure to facilitate a large number of pattern matching comparisons.
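
As a rough illustration of basis-function training and faceprint matching, the sketch below uses a plain PCA (eigenface-style) decomposition. The number of components and the distance threshold are illustrative assumptions; the patent does not prescribe this particular decomposition.

```python
import numpy as np

# A sketch of basis-function ("eigenface") style faceprint matching.
def train_basis(gallery, n_components=50):
    """gallery: (N, P) matrix of N vectorized, aligned face images."""
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, vt[:n_components]            # mean face and basis functions

def faceprint(face_vec, mean, basis):
    """Project a face onto the trained basis to obtain its pattern."""
    return basis @ (face_vec - mean)

def match(probe_print, enrolled_prints, threshold=1e3):
    """Return the index of the closest enrolled pattern, or None if too far."""
    dists = np.linalg.norm(enrolled_prints - probe_print, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None
```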


Face Recognition in Consumer Devices

In consumer electronics, the use of face recognition is clearly somewhat different in nature than in, say, law enforcement or security. To begin with, the nature and implementation of face recognition is influenced by a range of factors which cannot be controlled to the same extent as in forensic applications, such as: (i) there are significant variations in the image acquisition characteristics of individual handheld imaging devices, resulting in variable quality of face regions; (ii) image acquisition is uncontrolled and thus faces are frequently acquired at extremes of pose and illumination; (iii) there is no suitable large database of pre-acquired images to facilitate training; (iv) there is no control over the facial appearance of individuals, so people can wear different make-up, hairstyles, glasses, beards, etc.; (v) devices are often calibrated to match local demographics or climate conditions—e.g. cameras in California normally expect sunny outdoor conditions, whereas in Northern Europe cloudy, overcast conditions are the norm, and thus the default white balance in these locations will be calibrated differently; (vi) faces are typically acquired against a cluttered background, making it difficult to accurately extract face regions for subsequent analysis. Additional discussion can be found, for example, in Corcoran, P. and Costache, G., "Automated sorting of consumer image collections using face and peripheral region image classifiers," IEEE Transactions on Consumer Electronics, Vol. 51, No. 3, August 2005, pp. 747-754, and in US 2006/0140455, Method and component for image recognition, to Corcoran et al.


From the above discussion, it can be understood that the face recognition process on a handheld imaging device may be significantly less reliable than in a typical forensic face recognition system. The acquisition process yields less reliable face regions for analysis, the handheld device can hold a much smaller dataset of face patterns for comparison, and the training of new faces typically relies on data input from the user of a camera, who may not be professionally trained and may not capture optimal images for training.


SUMMARY OF THE INVENTION

A technique is provided for recognizing faces in an image stream using a digital image acquisition device. A first acquired image is received from an image stream. A first face region is detected within the first acquired image having a given size and a respective location within the first acquired image. First faceprint data uniquely identifying the first face region are extracted along with first peripheral region data around the first face region. The first faceprint and peripheral region data are stored, and the first peripheral region data are associated with the first face region. The first face region is tracked until a face lock is lost. A second face region is detected within a second acquired image from the image stream. Second peripheral region data around the second face region are extracted. The second face region is identified upon matching the first and second peripheral region data.


The storing of peripheral region data may occur within volatile memory.


A database may be provided with an identifier and associated parameters for each of a number of one or more faces to be recognized. Using the database, face recognition may be selectively applied to the first face region to provide an identifier for the first face region.


Second faceprint data uniquely identifying the second face region may be extracted. The first and second faceprint data may be matched, thereby confirming the identifying of the second face region. If they do not match, the identifying of the second face region as being the same as the first face region is discontinued.


Texture information may be retrieved and matched from the first and second peripheral region data.


The detecting and identifying of the second face region may occur within two seconds, or within one second, or less.


A same identifier may be displayed along with the first and second face regions. The identifier may include a nickname of a person associated with the first and second face regions.


Another method is provided for recognizing faces in an image stream using a digital image acquisition device. A first acquired image is acquired from an image stream. A first face region is detected within the first acquired image having a given size and a respective location within the first acquired image. First faceprint data uniquely identifying the first face region are extracted along with first peripheral region data around the first face region. The first faceprint and peripheral region data are stored, including associating the first peripheral region data with the first face region. A first face region combination, including the first face region and peripheral region data, is tracked until face lock is lost. The first face region is identified.


A database may be provided with an identifier and associated parameters for each of a number of one or more faces to be recognized. Using the database, face recognition may be selectively applied to the first face region to provide an identifier for the first face region.


Texture information may be retrieved and matched from the first and second peripheral region data.


The image stream may include two or more relatively low resolution reference images.


The image stream may include a series of reference images of nominally a same scene of a main acquired image, such as two or more preview or postview images, and/or one or more images whose exposure period overlaps some part of the exposure duration of a main acquired image.


One or more processor-readable storage media are also provided herein having code embedded therein for programming a processor to perform any of the methods described herein.


A portable, digital image acquisition device is also provided herein, including a lens and image sensor for acquiring digital images, a processor, and a processor-readable medium having code embedded therein for programming the processor to perform any of the methods described herein.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates exemplary processes in accordance with certain embodiments.



FIG. 2 illustrates exemplary processes in accordance with certain other embodiments.





DETAILED DESCRIPTION OF THE EMBODIMENTS

Techniques are described below for improving the performance of an internal person recognition algorithm by using a combination of features extracted from face regions together with extra features extracted from peripheral regions. Herein, the term "peripheral regions" is intended to include regions adjacent to the face, around the face, surrounding the face, proximate to the face, just above, below, in front, behind or to the side of the face, and/or at the periphery of the face. These extra, peripheral features tend to be quite invariant to acquisition conditions and can be used, on a temporary basis, for person identification, and/or specifically to maintain a face-lock on a tracked face that is temporarily not detected but will reappear in the image stream upon resuming a more suitable illumination, direction, pose, orientation or tilt relative to the camera and/or lighting.
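
By way of illustration only, peripheral regions can be carved out of a frame relative to the detected face rectangle. In the sketch below, the choice of regions (hair above, clothing below, and the two sides) and their relative sizes are assumptions made for the example; the patent does not fix particular region geometries.

```python
import numpy as np

# A sketch of carving out peripheral regions around a detected face box.
def peripheral_regions(frame, face_box):
    x, y, w, h = face_box
    H, W = frame.shape[:2]

    def crop(x0, y0, x1, y1):
        x0, y0 = max(0, x0), max(0, y0)
        x1, y1 = min(W, x1), min(H, y1)
        return frame[y0:y1, x0:x1] if x1 > x0 and y1 > y0 else None

    return {
        "hair":     crop(x, y - h // 2, x + w, y),                   # above the face
        "clothing": crop(x - w // 2, y + h, x + w + w // 2, y + 2 * h),
        "left":     crop(x - w // 2, y, x, y + h),
        "right":    crop(x + w, y, x + w + w // 2, y + h),
    }
```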


The performance of in-camera face identification technology will typically decrease when there are differences in illumination, face pose and/or expression between the current face to be recognized and the face image used for learning the identity of a person. To improve the identification performance, we propose to use information from peripheral regions of the image surrounding the main face. For example, these regions may contain information about clothing, jewelry or electronic equipment being used, a cigarette, gum chewing, whistling, shouting, singing, the shape of the neck and/or shoulders relative to the face, the shape of the face or of a facial feature, a scar, tattoo or bandage, the brightness, color or tone of the face, of a facial feature or of a peripheral region, hairstyle, hair color, degree of hair loss, a hair covering or toupee, hair on the face, in the ears or nose or on the chest or neck, a hat or helmet, face mask, glasses, umbrella, name tag, label on clothing, or any of a wide variety of other items or features that may distinguish a "peripheral region" around a person's face, and which can be used to complement the identification of an individual.


Certain embodiments described herein avoid relying solely on facial features, which are sensitive to small variations in face pose, tilt, expression and/or size, among other factors. An advantage is robustness to face variations in pose, tilt, expression and/or size, as well as to facial occlusions. Very short identification times are achieved.


Current problems with existing solutions are mitigated in certain embodiments described herein by using smart processing techniques. In certain embodiments, these take advantage of technologies now available within state-of-art digital cameras. In particular, multiple face regions may be extracted and pre-filtered from a preview image stream until face regions which are suitable (e.g., with regard to frontal pose and constant illumination) are obtained. Then, the actual pattern matching process of face recognition is initiated.
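
As one possible illustration of such pre-filtering of preview face regions, the sketch below accepts a face crop only when a crude frontal-pose proxy (left/right brightness symmetry) and an illumination-stability check both pass. The specific measures and thresholds are assumptions for the example, not the filters specified in the patent.

```python
import numpy as np

# A sketch of pre-filtering preview face regions before recognition starts.
def is_suitable(gray_face, prev_mean, sym_tol=0.15, illum_tol=20.0):
    h, w = gray_face.shape
    left = gray_face[:, : w // 2].mean()
    right = gray_face[:, w - w // 2 :].mean()
    frontal = abs(left - right) / max(left, right, 1.0) < sym_tol   # crude pose proxy
    mean = gray_face.mean()
    stable = prev_mean is None or abs(mean - prev_mean) < illum_tol # illumination check
    return (frontal and stable), mean

def first_suitable_face(preview_faces):
    """preview_faces: iterable of grayscale face crops from the preview stream."""
    prev_mean = None
    for face in preview_faces:
        ok, prev_mean = is_suitable(face, prev_mean)
        if ok:
            return face          # hand this region to the recognition step
    return None
```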


User Expectations for Face Recognition

Challenges are introduced by the expectations of camera users. It is desired to have a camera with a real-time face tracking facility that can detect and independently track up to nine or more faces in real-time. User expectations for face recognition are and will continue to be quite high. It is further desired not only that a camera will correctly recognize a face captured in a still image, but also that the camera can recognize a face as it tracks the face prior to capturing a still image.


Now, an in-camera recognition algorithm may typically take several tens of seconds to extract, pre-process and then achieve a reliable initial recognition of a face region. After a camera is first pointed at a scene with several face images, it has been acceptable to have a delay of this order of magnitude. It has been understood by the user that the camera has to “think” about the different faces before it reaches a decision as to the identity of each. However, once the camera has identified a person, it is desired that subsequent recognitions and/or maintaining recognition during tracking should not present serious further delays.


By analogy with face detection versus face tracking, where the initial detection takes more time than tracking in subsequent frames, initially recognizing a face region in a preview stream may present some delay, but it is desired to continue to hold a "face-lock" on a recognized face with the tracking algorithm, maintaining a "memory" of that face. However, when a recognized face leaves the current imaging scene and then re-enters it a short while later, it is desired not to have to repeat the same initial face recognition delays. In the past, a background face detection algorithm would find the face within a second or two, but the face recognition algorithm could not recall the identity of that person within the same time-frame and would have to initiate recognition again, with unacceptable delays. Thus, the camera is considered to have forgotten the person even though they only left the imaging scene for a few seconds.


In embodiments herein, information contained within peripheral regions, that is, regions of the image surrounding the main face region, is used to improve and refine the sorting of images containing faces (see Corcoran, P. and Costache, G., "Automated sorting of consumer image collections using face and peripheral region image classifiers," IEEE Transactions on Consumer Electronics, Vol. 51, No. 3, August 2005, pp. 747-754, and US 2006/0140455, Method and component for image recognition, to Corcoran et al., incorporated by reference). In particular, these regions contain information that can be used to complement the identification of an individual, so that sufficient information is maintained throughout face tracking to maintain face-lock even when the face region itself is not optimally directed, is blocked or partially blocked, and/or is unevenly, insufficiently or overly illuminated.


It is noted that in certain embodiments the peripheral regions may be tracked along with the face regions, and not lost in the first place, such that recovery is not necessary. In other embodiments, the peripheral regions are quickly detected after loss of face lock, such that the face detector and face recognition components know to look for the specific face that was lost in the vicinity of the detected peripheral regions. Either way, re-initiating face detection and recognition for the previously identified face is obviated by an advantageous embodiment described herein.


A challenge is to increase the speed at which the identity of a face is recovered after a "face-lock" is lost, or, if a lost face-lock entails initiating face recognition from the beginning, to increase the duration, proportion and/or probability of maintaining face-lock over the image stream at least while the particular face actually remains within the scene being imaged with the camera. Again, as re-initiating face recognition is too slow and can lead to gaps of tens of seconds while a suitable new face region is obtained and recognized, it is advantageous, as described in embodiments herein, to detect and utilize information contained in one or more peripheral regions to maintain face-lock and/or to recover the identity of a face quickly.


Of note, peripheral regions are generally more texture-based than face regions, making them more invariant to acquisition conditions. Thus, when a face is first recognized, a process in accordance with certain embodiments extracts and records the textures of one or more such peripheral regions, and associates them, at least on a temporary basis, with the particular recognition profile.
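
The following is a minimal sketch of recording and comparing peripheral-region textures. A local binary pattern (LBP) histogram is used here purely as an example texture measure, and the chi-square threshold is an assumption; the patent does not mandate a specific texture descriptor.

```python
import numpy as np

# A sketch of a texture descriptor for peripheral regions and its comparison.
def lbp_histogram(gray):
    """8-neighbour LBP codes, pooled into a 256-bin normalized histogram."""
    g = gray.astype(np.int16)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int16) << bit)
    hist = np.bincount(code.ravel().astype(int), minlength=256).astype(float)
    return hist / max(hist.sum(), 1.0)

def textures_match(h1, h2, threshold=0.25):
    """Chi-square distance between two LBP histograms; small means similar."""
    chi2 = 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))
    return chi2 < threshold
```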


In certain embodiments, the association of peripheral regions with a face region may be temporary in nature, such that the peripheral region data may include volatile data that will eventually be discarded, e.g., when the camera is switched off, or after a fixed time interval (e.g. 1 hour, 1 day), or when the camera moves to a new location which is removed from the current location by a certain distance (e.g. 1000 meters), or when manually reset by the user, or when the identified person is confirmed to have left the scene, or based on combinations of these and some other criteria.
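
A possible, simplified bookkeeping scheme for such volatile peripheral-region data is sketched below. The one-hour lifetime and 1000-meter radius mirror the examples given above, while the store layout and the distance helper are implementation assumptions.

```python
import time
import math

# A sketch of the "extinction event" logic for volatile peripheral-region data.
class VolatilePeripheralStore:
    def __init__(self, max_age_s=3600, max_move_m=1000):
        self.records = {}                 # identity -> (timestamp, location, data)
        self.max_age_s = max_age_s
        self.max_move_m = max_move_m

    def add(self, identity, data, location):
        self.records[identity] = (time.time(), location, data)

    def purge(self, now, current_location, power_off=False, manual_reset=False):
        if power_off or manual_reset:     # explicit extinction events
            self.records.clear()
            return
        keep = {}
        for ident, (t, loc, data) in self.records.items():
            aged = now - t > self.max_age_s
            moved = _distance_m(loc, current_location) > self.max_move_m
            if not (aged or moved):
                keep[ident] = (t, loc, data)
        self.records = keep

def _distance_m(a, b):
    """Approximate great-circle distance between two (lat, lon) points, in meters."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))
```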


An exemplary process is now described with reference to FIG. 1. At S1, a face is detected in a preview stream and tracking of this face is initiated. At S2, after a good face region FR1 is obtained from the tracked face, a face pattern FP1 (referred to as a "faceprint") is extracted. Pattern matching is performed to recognize the face region FR1 if its faceprint FP1 matches a stored faceprint. The person corresponding to FR1 is thus identified as ID1 from a set of known persons. If no match is found, then the person FR1 may be marked as "unknown" unless or until a nickname is provided (a user may be prompted, or otherwise given an opportunity, to provide a nickname).


At S3, peripheral region data PRD1 are obtained and analyzed. This peripheral data PRD1 is stored and associated with the person identified at step S2. At S4, the camera continues to track the face region until "face lock" is lost. Before face lock is lost, the identified person may be displayed with a tag or other identifier (e.g. writing a nickname beside their face in the display, or using a symbol identifier of selected and/or arbitrary type designated by a user). At S5, a new face FR2 is detected and tracking is initiated. The peripheral regions PRD2 around this face FR2 are extracted at S6. At S7, PRD2 are compared with peripheral region data which is currently stored, e.g., PRD1 associated with the lost face region FR1. If a match of PRD1 and PRD2 is determined at S7, then at S9 the face FR2 is temporarily identified (and displayed) as being the person ID1 associated with this peripheral region data PRD2/PRD1, which is the same identifier ID1 used for FR1 before it was lost. If no match can be determined, then the process stops at S8 and no identity information for this face is provided at this time.
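
A minimal sketch of steps S5-S9 is shown below: when a new face FR2 is detected after face lock is lost, its peripheral-region data PRD2 is compared against the stored volatile data PRD1 and, on a match, the previous identity ID1 is reused immediately instead of re-running full recognition. The color-histogram descriptor and the 0.8 similarity threshold are illustrative assumptions.

```python
import numpy as np

# A sketch of recovering an identity from stored peripheral-region data.
def region_descriptor(region, bins=16):
    """Joint color histogram of one peripheral region, L1-normalized."""
    hist, _ = np.histogramdd(region.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / max(hist.sum(), 1.0)

def recover_identity(prd2, volatile_store, threshold=0.8):
    """prd2: {region name -> descriptor}; volatile_store: {identity -> PRD dict}."""
    for identity, prd1 in volatile_store.items():
        sims = [np.minimum(prd1[name], desc).sum()       # histogram intersection
                for name, desc in prd2.items() if name in prd1]
        if sims and np.mean(sims) > threshold:
            return identity          # S9: temporarily identify FR2 as ID1
    return None                      # S8: no identity information yet
```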


At S10, after a good face region is obtained from the tracked face FR2, a face pattern or faceprint FP2 is extracted. Pattern matching is performed to recognize the face FR2, and the person is thus identified at S12 from a set of known persons, or marked as "unknown" at S11. Where an identification ID1 was already provided by the volatile data, this can be confirmed at S12, or replaced at S11 by the identity from the recognition algorithm.


After S11, if the peripheral regions and the recognized identity did not match, then a new set of volatile peripheral region data PRD2 is created and associated with the new identity ID2. The camera continues to track this face region, displaying the identified person (e.g. writing a nickname beside their face in the display), until "face lock" is lost, the process essentially returning to S4 with regard to FR2 and ID2. The volatile data PRD1 and PRD2 are stored within the camera until an extinction event (power-down, count-down, change of location, manual intervention, or combinations thereof) occurs, whereupon the volatile data are deleted.


Referring now to FIG. 2, S1-S3, S5-S6 and S8-S11 are the same as or similar to those described with reference to FIG. 1, and will not be reiterated here. At S44 in FIG. 2, peripheral region data PRD1 is tracked along with face region FR1. In this embodiment, PRD1 will be stored along with faceprint FP1 and not deleted with volatile memory on power-down. Either the combination of FR1 and PRD1 can be identified as ID1, or FR1 can be identified as ID1 as in FIG. 1, while the combination of FR1 and PRD1 is differently identified, e.g., as ID11. At S77, PRD1 and PRD2 are compared if face lock of FR1 and PRD1 is lost, although the face lock of this combined data is less likely to be lost than in the embodiment where only FR1 is tracked.
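
To illustrate the difference from the volatile scheme of FIG. 1, a sketch of persisting a faceprint together with its associated peripheral-region data is shown below. The JSON layout and the ID1/ID11 labels are assumptions for the example, not a storage format specified by the patent.

```python
import json
import numpy as np

# A sketch of persisting the face region / peripheral region combination.
def persist_profile(path, identity, faceprint, prd):
    record = {
        "identity": identity,                          # e.g. "ID1" or combined "ID11"
        "faceprint": np.asarray(faceprint).tolist(),
        "peripheral": {name: np.asarray(d).tolist() for name, d in prd.items()},
    }
    with open(path, "w") as f:
        json.dump(record, f)

def load_profile(path):
    with open(path) as f:
        record = json.load(f)
    record["faceprint"] = np.array(record["faceprint"])
    record["peripheral"] = {k: np.array(v) for k, v in record["peripheral"].items()}
    return record
```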


Alternative Embodiments

When a face detector fails, it may still be possible to retain a “face-lock” using other techniques than those described above, or a combination of techniques. For example, face regions have a skin color or skin tone that can be segmented from the background scene, such that even when a face region turns into a profile pose, it can still exhibit a relatively large region of skin color pixels. Thus a skin-color filter can be used to augment the face detector and improve the reliability of a face tracker. Other augmentation techniques can include luminance and edge filters. It is also possible to use stereo audio analysis to track the relative location of a face of a speaker in a scene without visually detecting the face. This can be useful in video conferencing systems, where it is desired to locate the speaker in a group of more than one person. The face tracker may in fact use a combination of techniques to constantly and reliably maintain a coherent lock on a tracked face region.
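
A skin-color filter of the kind mentioned above can be sketched as follows. The BT.601 YCbCr conversion is standard; the Cb/Cr bounds are commonly cited illustrative values rather than figures taken from the patent.

```python
import numpy as np

# A sketch of a skin-color filter used to augment the face detector.
def skin_mask(rgb):
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b        # BT.601 chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)  # assumed skin bounds

def skin_fraction(rgb, box):
    """Fraction of skin-colored pixels inside a candidate face box."""
    x, y, w, h = box
    patch = skin_mask(rgb[y:y + h, x:x + w])
    return patch.mean() if patch.size else 0.0
```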


The face detection process can tend to be relatively time-consuming, and so in some embodiments it is not performed on every frame. In some cases, only a portion of each frame is scanned for faces with the face detector (see, e.g., U.S. Ser. No. 12/479,593, filed Jun. 5, 2009, which is assigned to the same assignee and is incorporated by reference), such that the entire frame is scanned over a number of frames, e.g., 5 or 10 frames. The frame portion to be scanned may be changed from frame to frame so that over a sequence of several frames the entire field of view is covered. Alternatively, after initial face detection is performed on the entirety of the image, either in a single frame or over a sequence of frames, the face detector can change to being applied just to the outside of the frame to locate new faces entering the field of view, while previously detected faces, which may also be recognized and identified, within the frame are tracked.
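
One simple way to cover the full field of view over several frames is to cycle the detector over horizontal strips, as in the sketch below. The five-strip split and the generic detector callable are assumptions for the example.

```python
# A sketch of scanning only a portion of each frame with the face detector,
# so that the whole field of view is covered over a cycle of several frames.
def scan_strip(frame, frame_index, detector, n_strips=5):
    """Run the detector on one horizontal strip, cycling through the frame."""
    H = frame.shape[0]
    i = frame_index % n_strips
    y0, y1 = (i * H) // n_strips, ((i + 1) * H) // n_strips
    hits = detector(frame[y0:y1])
    # Map strip-local coordinates back to full-frame coordinates.
    return [(x, y + y0, w, h) for (x, y, w, h) in hits]
```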


In certain face tracking systems within digital cameras, multiple faces (e.g., up to 9 in some cameras) can be independently tracked in a single scene. Such a face tracker can still be very responsive, exhibiting a time lag of less than a few seconds to detect a new face. The face tracker will independently track the movements of a relatively large number (e.g., nine or more) of faces. Tracking is smooth and highly responsive in almost all acquisition conditions, although at very low light levels performance may be significantly degraded.


While exemplary drawings and specific embodiments of the present invention have been described and illustrated, it is to be understood that the scope of the present invention is not to be limited to the particular embodiments discussed. Thus, the embodiments shall be regarded as illustrative rather than restrictive, and it should be understood that variations may be made in those embodiments by workers skilled in the art without departing from the scope of the present invention.


In addition, in methods that may be performed according to preferred embodiments herein and that may have been described above, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations, except for those where a particular order may be expressly set forth or where those of ordinary skill in the art may deem a particular order to be necessary.


In addition, all references cited above and below herein, as well as the background, invention summary, abstract and brief description of the drawings, are all incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments.


The following are incorporated by reference: U.S. Pat. Nos. 7,587,085, 7,587,068, 7,574,016, 7,565,030, 7,564,994, 7,558,408, 7,555,148, 7,551,755, 7,551,754, 7,545,995, 7,515,740, 7,471,846, 7,469,071, 7,469,055, 7,466,866, 7,460,695, 7,460,694, 7,440,593, 7,436,998, 7,403,643, 7,352,394, 6,407,777, 7,269,292, 7,308,156, 7,315,631, 7,336,821, 7,295,233, 6,571,003, 7,212,657, 7,039,222, 7,082,211, 7,184,578, 7,187,788, 6,639,685, 6,628,842, 6,256,058, 5,579,063, 6,480,300, 5,781,650, 7,362,368 and 5,978,519; and


U.S. published application nos. 2008/0175481, 2008/0013798, 2008/0031498, 2005/0041121, 2007/0110305, 2006/0204110, PCT/US2006/021393, 2005/0068452, 2006/0120599, 2006/0098890, 2006/0140455, 2006/0285754, 2008/0031498, 2007/0147820, 2007/0189748, 2008/0037840, 2007/0269108, 2007/0201724, 2002/0081003, 2003/0198384, 2006/0276698, 2004/0080631, 2008/0106615, 2006/0077261 and 2007/0071347; and


U.S. patent applications Ser. Nos. 10/764,339, 11/861,854, 11/573,713, 11/462,035, 12/042,335, 12/063,089, 11/761,647, 11/753,098, 12/038,777, 12/043,025, 11/752,925, 11/767,412, 11/624,683, 60/829,127, 12/042,104, 11/856,721, 11/936,085, 12/142,773, 60/914,962, 12/038,147, 11/861,257, 12/026,484, 11/861,854, 61/024,551, 61/019,370, 61/023,946, 61/024,508, 61/023,774, 61/023,855, 61/221,467, 61/221,425, 61/221,417, 61/091,700, 61/182,625, 61/221,455, 11/319,766, 11/673,560, 12/485,316, 12/374,040, 12/479,658, 12/479,593, 12/362,399, 12/191,304, 11/937,377, 12/038,147, 12/055,958, 12/026,484, 12/554,258, 12/437,464, 12/042,104, 12/485,316, and 12/302,493.

Claims
  • 1. A method of recognizing faces in an image stream using a digital image acquisition device, comprising: receiving a first acquired image from an image stream; detecting a first face region within the first acquired image having a given size and a respective location within the first acquired image; extracting first faceprint data uniquely identifying the first face region along with first peripheral region data around the first face region; storing the first faceprint and peripheral region data, including associating the first peripheral region data with the first face region; tracking the first face region until face lock is lost; detecting a second face region within a second acquired image from the image stream; extracting second peripheral region data around the second face region; comparing the first and second peripheral region data; identifying the second face region as a same identity as the first face region when the first peripheral region data matches the second peripheral region data; and confirming the second face region as said same identity as the first face region when the first and second face regions match.
  • 2. The method of claim 1, wherein the storing of peripheral region data occurs only within volatile memory.
  • 3. The method of claim 1, further comprising providing a database comprising an identifier and associated parameters for each of a number of one or more faces to be recognized; and selectively applying face recognition using said database to said first face region to provide an identifier for the first face region.
  • 4. The method of claim 1, further comprising extracting second faceprint data uniquely identifying the second face region, and matching the first and second faceprint data, thereby confirming the identifying of the second face region.
  • 5. The method of claim 1, wherein the detecting and identifying of the second face region occur within two seconds or less.
  • 6. The method of claim 1, wherein the detecting and identifying of the second face region occur within one second or less.
  • 7. The method of claim 1, further comprising displaying a same identifier with the first and second face regions.
  • 8. The method of claim 7, wherein the identifier comprises a nickname of a person associated with the first and second face regions.
  • 9. The method of claim 1, further comprising receiving a third acquired image from the image stream that has a higher resolution than the first and second acquired images, and identifying a third face region detected therein based upon matching the first faceprint with a third faceprint extracted from the third acquired image.
  • 10. One or more non-transitory processor-readable storage media having code embedded therein for programming a processor to perform a method of recognizing faces in an image stream using a digital image acquisition device, wherein the method comprises: receiving a first acquired image from an image stream; detecting a first face region within the first acquired image having a given size and a respective location within the first acquired image; extracting first faceprint data uniquely identifying the first face region along with first peripheral region data around the first face region; storing the first faceprint and peripheral region data, including associating the first peripheral region data with the first face region; tracking the first face region until face lock is lost; detecting a second face region within a second acquired image from the image stream; extracting second peripheral region data around the second face region; and comparing the first and second peripheral region data; identifying the second face region as a same identity as the first face region when the first peripheral region data matches the second peripheral region data; and confirming the second face region as said same identity as the first face region when the first and second face regions match.
  • 11. The one or more processor-readable media of claim 10, wherein the storing of peripheral region data occurs only within volatile memory.
  • 12. The one or more processor-readable media of claim 10, wherein the method further comprises providing a database comprising an identifier and associated parameters for each of a number of one or more faces to be recognized; and selectively applying face recognition using said database to said first face region to provide an identifier for the first face region.
  • 13. The one or more processor-readable media of claim 10, wherein the method further comprises extracting second faceprint data uniquely identifying the second face region, and matching the first and second faceprint data, thereby confirming the identifying of the second face region.
  • 14. The one or more processor-readable media of claim 10, wherein the detecting and identifying of the second face region occur within two seconds or less.
  • 15. The one or more processor-readable media of claim 10, wherein the detecting and identifying of the second face region occur within one second or less.
  • 16. The one or more processor-readable media of claim 10, wherein the method further comprises displaying a same identifier with the first and second face regions.
  • 17. The one or more processor-readable media of claim 16, wherein the identifier comprises a nickname of a person associated with the first and second face regions.
  • 18. The one or more processor-readable media of claim 10, wherein the method further comprises receiving a third acquired image from the image stream that has a higher resolution than the first and second acquired images, and identifying a third face region detected therein based upon matching the first faceprint with a third faceprint extracted from the third acquired image.
  • 19. A portable, digital image acquisition device, comprising a lens and image sensor for acquiring digital images, a processor, and a processor-readable medium having code embedded therein for programming the processor to perform a method of recognizing faces in an image stream using a digital image acquisition device, wherein the method comprises: receiving a first acquired image from an image stream; detecting a first face region within the first acquired image having a given size and a respective location within the first acquired image; extracting first faceprint data uniquely identifying the first face region along with first peripheral region data around the first face region; storing the first faceprint and peripheral region data, including associating the first peripheral region data with the first face region; tracking the first face region until face lock is lost; detecting a second face region within a second acquired image from the image stream; extracting second peripheral region data around the second face region; comparing the first and second peripheral region data; identifying the second face region as a same identity as the first face region when the first peripheral region data matches the second peripheral region data; and confirming the second face region as said same identity as the first face region when the first and second face regions match.
  • 20. The device of claim 19, wherein the storing of peripheral region data occurs only within volatile memory.
  • 21. The device of claim 19, further comprising providing a database comprising an identifier and associated parameters for each of a number of one or more faces to be recognized; and selectively applying face recognition using said database to said first face region to provide an identifier for the first face region.
  • 22. The device of claim 19, further comprising extracting second faceprint data uniquely identifying the second face region, and matching the first and second faceprint data, thereby confirming the identifying of the second face region.
  • 23. The device of claim 19, wherein the detecting and identifying of the second face region occur within two seconds or less.
  • 24. The device of claim 19, wherein the detecting and identifying of the second face region occur within one second or less.
  • 25. The device of claim 19, further comprising displaying a same identifier with the first and second face regions.
  • 26. The device of claim 25, wherein the identifier comprises a nickname of a person associated with the first and second face regions.
  • 27. The device of claim 19, wherein the method further comprises receiving a third acquired image from the image stream that has a higher resolution than the first and second acquired images, and identifying a third face region detected therein based upon matching the first faceprint with a third faceprint extracted from the third acquired image.
  • 28. A method of recognizing faces in an image stream using a digital image acquisition device, comprising: receiving a first acquired image from an image stream; detecting a first face region within the first acquired image having a given size and a respective location within the first acquired image; extracting first faceprint data uniquely identifying the first face region along with first peripheral region data, including texture information, around the first face region; storing the first faceprint and peripheral region data, including associating the first peripheral region data with the first face region; tracking a first face region combination, including the first face region and peripheral region data, until face lock is lost; identifying the first face region as a first identity based at least in part on the texture information; extracting second peripheral region data for a second detected face region after said face lock is lost; comparing the first and second peripheral region data; retrieving texture information from the first and second peripheral region data; identifying the second face region upon matching the texture information from the first and second peripheral region data; identifying the second face region as a same identity as the first face region when the first peripheral region data matches the second peripheral region data; and confirming the second face region as said same identity as the first face region when the first and second face regions match.
  • 29. The method of claim 28, further comprising providing a database comprising an identifier and associated parameters for each of a number of one or more faces to be recognized; and selectively applying face recognition using said database to said first face region combination to provide an identifier for the first face region combination.
  • 30. The method of claim 29, further comprising retrieving and matching texture information from the first peripheral region data for comparing with texture information from a stored peripheral region.
  • 31. The method of claim 28, wherein the method further comprises receiving a third acquired image from the image stream that has a higher resolution than the first and second acquired images, and identifying a third face region detected therein based upon matching the first faceprint with a third faceprint extracted from the third acquired image.
  • 32. One or more non-transitory processor-readable storage media having code embedded therein for programming a processor to perform a method of recognizing faces in an image stream using a digital image acquisition device, wherein the method comprises: receiving a first acquired image from an image stream; detecting a first face region within the first acquired image having a given size and a respective location within the first acquired image; extracting first faceprint data uniquely identifying the first face region along with first peripheral region data, including texture information, around the first face region; storing the first faceprint and peripheral region data, including associating the first peripheral region data with the first face region; tracking a first face region combination, including the first face region and peripheral region data, until face lock is lost; and identifying the first face region as a first identity based at least in part on the texture information; extracting second peripheral region data for a second detected face region after said face lock is lost; comparing the first and second peripheral region data; identifying the second face region as a same identity as the first face region when the first peripheral region data matches the second peripheral region data; and confirming the second face region as said same identity as the first face region when the first and second face regions match.
  • 33. The one or more processor-readable media of claim 32, wherein the method further comprises providing a database comprising an identifier and associated parameters for each of a number of one or more faces to be recognized; and selectively applying face recognition using said database to said first face region combination to provide an identifier for the first face region combination.
  • 34. The one or more processor-readable media of claim 33, wherein the method further comprises retrieving and matching texture information from the first peripheral region data for comparing with texture information from a stored peripheral region.
  • 35. The one or more processor-readable media of claim 32, wherein the method further comprises receiving a third acquired image from the image stream that has a higher resolution than the first and second acquired images, and identifying a third face region detected therein based upon matching the first faceprint with a third faceprint extracted from the third acquired images.
  • 36. A portable, digital image acquisition device, comprising a lens and image sensor for acquiring digital images, a processor, and a processor-readable medium having code embedded therein for programming the processor to perform a method of recognizing faces in an image stream using a digital image acquisition device, wherein the method comprises: receiving a first acquired image from an image stream; detecting a first face region within the first acquired image having a given size and a respective location within the first acquired image; extracting first faceprint data uniquely identifying the first face region along with first peripheral region data, including texture information, around the first face region; storing the first faceprint and peripheral region data, including associating the first peripheral region data with the first face region; tracking a first face region combination, including the first face region and peripheral region data, until face lock is lost; and identifying the first face region as a first identity based at least in part on the texture information; extracting second peripheral region data for a second detected face region after said face lock is lost; comparing the first and second peripheral region data; identifying the second face region as a same identity as the first face region when the first peripheral region data matches the second peripheral region data; and confirming the second face region as said same identity as the first face region when the first and second face regions match.
  • 37. The device of claim 36, wherein the method further comprises providing a database comprising an identifier and associated parameters for each of a number of one or more faces to be recognized; and selectively applying face recognition using said database to said first face region combination to provide an identifier for the first face region combination.
  • 38. The device of claim 37, wherein the method further comprises retrieving and matching texture information from the first peripheral region data for comparing with texture information from a stored peripheral region.
  • 39. The device of claim 36, wherein the method further comprises receiving a third acquired image from the image stream that has a higher resolution than the first and second acquired images, and identifying a third face region detected therein based upon matching the first faceprint with a third faceprint extracted from the third acquired image.
US Referenced Citations (390)
Number Name Date Kind
4047187 Mashimo et al. Sep 1977 A
4317991 Stauffer Mar 1982 A
4367027 Stauffer Jan 1983 A
RE31370 Mashimo et al. Sep 1983 E
4448510 Murakoshi May 1984 A
4638364 Hiramatsu Jan 1987 A
4796043 Izumi et al. Jan 1989 A
4970663 Bedell et al. Nov 1990 A
4970683 Harshaw et al. Nov 1990 A
4975969 Tal Dec 1990 A
5008946 Ando Apr 1991 A
5018017 Sasaki et al. May 1991 A
RE33682 Hiramatsu Sep 1991 E
5051770 Cornuejols Sep 1991 A
5063603 Burt Nov 1991 A
5111231 Tokunaga May 1992 A
5150432 Ueno et al. Sep 1992 A
5161204 Hutcheson et al. Nov 1992 A
5164831 Kuchta et al. Nov 1992 A
5164992 Turk et al. Nov 1992 A
5227837 Terashita Jul 1993 A
5278923 Nazarathy et al. Jan 1994 A
5280530 Trew et al. Jan 1994 A
5291234 Shindo et al. Mar 1994 A
5305048 Suzuki et al. Apr 1994 A
5311240 Wheeler May 1994 A
5331544 Lu et al. Jul 1994 A
5353058 Takei Oct 1994 A
5384615 Hsieh et al. Jan 1995 A
5384912 Ogrinc et al. Jan 1995 A
5430809 Tomitaka Jul 1995 A
5432863 Benati et al. Jul 1995 A
5450504 Calia Sep 1995 A
5465308 Hutcheson et al. Nov 1995 A
5488429 Kojima et al. Jan 1996 A
5493409 Maeda et al. Feb 1996 A
5496106 Anderson Mar 1996 A
5543952 Yonenaga et al. Aug 1996 A
5576759 Kawamura et al. Nov 1996 A
5633678 Parulski et al. May 1997 A
5638136 Kojima et al. Jun 1997 A
5638139 Clatanoff et al. Jun 1997 A
5652669 Liedenbaum Jul 1997 A
5680481 Prasad et al. Oct 1997 A
5684509 Hatanaka et al. Nov 1997 A
5706362 Yabe Jan 1998 A
5710833 Moghaddam et al. Jan 1998 A
5715325 Bang et al. Feb 1998 A
5724456 Boyack et al. Mar 1998 A
5745668 Poggio et al. Apr 1998 A
5748764 Benati et al. May 1998 A
5764790 Brunelli et al. Jun 1998 A
5764803 Jacquin et al. Jun 1998 A
5771307 Lu et al. Jun 1998 A
5774129 Poggio et al. Jun 1998 A
5774591 Black et al. Jun 1998 A
5774747 Ishihara et al. Jun 1998 A
5774754 Ootsuka Jun 1998 A
5781650 Lobo et al. Jul 1998 A
5802208 Podilchuk et al. Sep 1998 A
5812193 Tomitaka et al. Sep 1998 A
5818975 Goodwin et al. Oct 1998 A
5835616 Lobo et al. Nov 1998 A
5842194 Arbuckle Nov 1998 A
5844573 Poggio et al. Dec 1998 A
5850470 Kung et al. Dec 1998 A
5852669 Eleftheriadis et al. Dec 1998 A
5852823 De Bonet Dec 1998 A
RE36041 Turk et al. Jan 1999 E
5870138 Smith et al. Feb 1999 A
5905807 Kado et al. May 1999 A
5911139 Jain et al. Jun 1999 A
5912980 Hunke Jun 1999 A
5966549 Hara et al. Oct 1999 A
5978519 Bollman et al. Nov 1999 A
5990973 Sakamoto Nov 1999 A
5991456 Rahman et al. Nov 1999 A
6009209 Acker et al. Dec 1999 A
6016354 Lin et al. Jan 2000 A
6028960 Graf et al. Feb 2000 A
6035074 Fujimoto et al. Mar 2000 A
6053268 Yamada Apr 2000 A
6061055 Marks May 2000 A
6072094 Karady et al. Jun 2000 A
6097470 Buhr et al. Aug 2000 A
6101271 Yamashita et al. Aug 2000 A
6108437 Lin Aug 2000 A
6115052 Freeman et al. Sep 2000 A
6128397 Baluja et al. Oct 2000 A
6128398 Kuperstein et al. Oct 2000 A
6134339 Luo Oct 2000 A
6148092 Qian Nov 2000 A
6151073 Steinberg et al. Nov 2000 A
6173068 Prokoski Jan 2001 B1
6188777 Darrell et al. Feb 2001 B1
6192149 Eschbach et al. Feb 2001 B1
6240198 Rehg et al. May 2001 B1
6246779 Fukui et al. Jun 2001 B1
6246790 Huang et al. Jun 2001 B1
6249315 Holm Jun 2001 B1
6252976 Schildkraut et al. Jun 2001 B1
6263113 Abdel-Mottaleb et al. Jul 2001 B1
6268939 Klassen et al. Jul 2001 B1
6278491 Wang et al. Aug 2001 B1
6282317 Luo et al. Aug 2001 B1
6292575 Bortolussi et al. Sep 2001 B1
6301370 Steffens et al. Oct 2001 B1
6301440 Bolle et al. Oct 2001 B1
6332033 Qian Dec 2001 B1
6334008 Nakabayashi Dec 2001 B2
6349373 Sitka et al. Feb 2002 B2
6351556 Loui et al. Feb 2002 B1
6393148 Bhaskar May 2002 B1
6400830 Christian et al. Jun 2002 B1
6404900 Qian et al. Jun 2002 B1
6407777 DeLuca Jun 2002 B1
6421468 Ratnakar et al. Jul 2002 B1
6426779 Noguchi et al. Jul 2002 B1
6438234 Gisin et al. Aug 2002 B1
6438264 Gallagher et al. Aug 2002 B1
6441854 Fellegara et al. Aug 2002 B2
6445810 Darrell et al. Sep 2002 B2
6456732 Kimbell et al. Sep 2002 B1
6459436 Kumada et al. Oct 2002 B1
6463163 Kresch Oct 2002 B1
6473199 Gilman et al. Oct 2002 B1
6501857 Gotsman et al. Dec 2002 B1
6502107 Nishida Dec 2002 B1
6504942 Hong et al. Jan 2003 B1
6504951 Luo et al. Jan 2003 B1
6516154 Parulski et al. Feb 2003 B1
6526156 Black et al. Feb 2003 B1
6526161 Yan Feb 2003 B1
6529630 Kinjo Mar 2003 B1
6549641 Ishikawa et al. Apr 2003 B2
6556708 Christian et al. Apr 2003 B1
6564225 Brogliatti et al. May 2003 B1
6567983 Shiimori May 2003 B1
6587119 Anderson et al. Jul 2003 B1
6606398 Cooper Aug 2003 B2
6633655 Hong et al. Oct 2003 B1
6661907 Ho et al. Dec 2003 B2
6697503 Matsuo et al. Feb 2004 B2
6697504 Tsai Feb 2004 B2
6700999 Yang Mar 2004 B1
6714665 Hanna et al. Mar 2004 B1
6747690 Molgaard Jun 2004 B2
6754368 Cohen Jun 2004 B1
6754389 Dimitrova et al. Jun 2004 B1
6760465 McVeigh et al. Jul 2004 B2
6760485 Gilman et al. Jul 2004 B1
6765612 Anderson et al. Jul 2004 B1
6778216 Lin Aug 2004 B1
6792135 Toyama Sep 2004 B1
6798834 Murakami et al. Sep 2004 B1
6801250 Miyashita Oct 2004 B1
6801642 Gorday et al. Oct 2004 B2
6816611 Hagiwara et al. Nov 2004 B1
6829009 Sugimoto Dec 2004 B2
6850274 Silverbrook et al. Feb 2005 B1
6876755 Taylor et al. Apr 2005 B1
6879705 Tao et al. Apr 2005 B1
6900840 Schinner et al. May 2005 B1
6937773 Nozawa et al. Aug 2005 B1
6940545 Ray et al. Sep 2005 B1
6947601 Aoki et al. Sep 2005 B2
6959109 Moustafa Oct 2005 B2
6965684 Chen et al. Nov 2005 B2
6967680 Kagle et al. Nov 2005 B1
6977687 Suh Dec 2005 B1
6980691 Nesterov et al. Dec 2005 B2
6993157 Oue et al. Jan 2006 B1
7003135 Hsieh et al. Feb 2006 B2
7020337 Viola et al. Mar 2006 B2
7027619 Pavlidis et al. Apr 2006 B2
7027621 Prokoski Apr 2006 B1
7034848 Sobol Apr 2006 B2
7035456 Lestideau Apr 2006 B2
7035462 White et al. Apr 2006 B2
7035467 Nicponski Apr 2006 B2
7038709 Verghese May 2006 B1
7038715 Flinchbaugh May 2006 B1
7039222 Simon et al. May 2006 B2
7042501 Matama May 2006 B1
7042505 DeLuca May 2006 B1
7042511 Lin May 2006 B2
7043056 Edwards et al. May 2006 B2
7043465 Pirim May 2006 B2
7050607 Li et al. May 2006 B2
7057653 Kubo Jun 2006 B1
7064776 Sumi et al. Jun 2006 B2
7082212 Liu et al. Jul 2006 B2
7099510 Jones et al. Aug 2006 B2
7106374 Bandera et al. Sep 2006 B1
7106887 Kinjo Sep 2006 B2
7110569 Brodsky et al. Sep 2006 B2
7110575 Chen et al. Sep 2006 B2
7113641 Eckes et al. Sep 2006 B1
7119838 Zanzucchi et al. Oct 2006 B2
7120279 Chen et al. Oct 2006 B2
7146026 Russon et al. Dec 2006 B2
7151843 Rui et al. Dec 2006 B2
7158680 Pace Jan 2007 B2
7162076 Liu Jan 2007 B2
7162101 Itokawa et al. Jan 2007 B2
7171023 Kim et al. Jan 2007 B2
7171025 Rui et al. Jan 2007 B2
7190829 Zhang et al. Mar 2007 B2
7194114 Schneiderman Mar 2007 B2
7200249 Okubo et al. Apr 2007 B2
7218759 Ho et al. May 2007 B1
7227976 Jung et al. Jun 2007 B1
7254257 Kim et al. Aug 2007 B2
7274822 Zhang et al. Sep 2007 B2
7274832 Nicponski Sep 2007 B2
7289664 Enomoto Oct 2007 B2
7295233 Steinberg et al. Nov 2007 B2
7315630 Steinberg et al. Jan 2008 B2
7315631 Corcoran et al. Jan 2008 B1
7317815 Steinberg et al. Jan 2008 B2
7321670 Yoon et al. Jan 2008 B2
7324670 Kozakaya et al. Jan 2008 B2
7324671 Li et al. Jan 2008 B2
7336821 Ciuc et al. Feb 2008 B2
7336830 Porter et al. Feb 2008 B2
7352394 DeLuca et al. Apr 2008 B1
7362210 Bazakos et al. Apr 2008 B2
7362368 Steinberg et al. Apr 2008 B2
7403643 Ianculescu et al. Jul 2008 B2
7437998 Burger et al. Oct 2008 B2
7440593 Steinberg et al. Oct 2008 B1
7460695 Steinberg et al. Dec 2008 B2
7469055 Corcoran et al. Dec 2008 B2
7515740 Corcoran et al. Apr 2009 B2
7683946 Steinberg et al. Mar 2010 B2
7738015 Steinberg et al. Jun 2010 B2
7809162 Steinberg et al. Oct 2010 B2
20010000025 Darrell et al. Mar 2001 A1
20010005222 Yamaguchi Jun 2001 A1
20010028731 Covell et al. Oct 2001 A1
20010031142 Whiteside Oct 2001 A1
20010038712 Loce et al. Nov 2001 A1
20010038714 Masumoto et al. Nov 2001 A1
20020105662 Patton et al. Aug 2002 A1
20020106114 Yan et al. Aug 2002 A1
20020114535 Luo Aug 2002 A1
20020118287 Grosvenor et al. Aug 2002 A1
20020136433 Lin Sep 2002 A1
20020141640 Kraft Oct 2002 A1
20020150662 Dewis et al. Oct 2002 A1
20020168108 Loui et al. Nov 2002 A1
20020172419 Lin et al. Nov 2002 A1
20020181801 Needham et al. Dec 2002 A1
20020191861 Cheatle Dec 2002 A1
20030012414 Luo Jan 2003 A1
20030023974 Dagtas et al. Jan 2003 A1
20030025812 Slatter Feb 2003 A1
20030035573 Duta et al. Feb 2003 A1
20030044070 Fuersich et al. Mar 2003 A1
20030044177 Oberhardt et al. Mar 2003 A1
20030048950 Savakis et al. Mar 2003 A1
20030052991 Stavely et al. Mar 2003 A1
20030059107 Sun et al. Mar 2003 A1
20030059121 Savakis et al. Mar 2003 A1
20030071908 Sannoh et al. Apr 2003 A1
20030084065 Lin et al. May 2003 A1
20030095197 Wheeler et al. May 2003 A1
20030107649 Flickner et al. Jun 2003 A1
20030118216 Goldberg Jun 2003 A1
20030123713 Geng Jul 2003 A1
20030123751 Krishnamurthy et al. Jul 2003 A1
20030142209 Yamazaki et al. Jul 2003 A1
20030151674 Lin Aug 2003 A1
20030174773 Comaniciu et al. Sep 2003 A1
20030202715 Kinjo Oct 2003 A1
20040022435 Ishida Feb 2004 A1
20040041121 Yoshida et al. Mar 2004 A1
20040095359 Simon et al. May 2004 A1
20040114904 Sun et al. Jun 2004 A1
20040120391 Lin et al. Jun 2004 A1
20040120399 Kato Jun 2004 A1
20040125387 Nagao et al. Jul 2004 A1
20040170397 Ono Sep 2004 A1
20040175021 Porter et al. Sep 2004 A1
20040179719 Chen et al. Sep 2004 A1
20040218832 Luo et al. Nov 2004 A1
20040223063 DeLuca et al. Nov 2004 A1
20040228505 Sugimoto Nov 2004 A1
20040233301 Nakata et al. Nov 2004 A1
20040234156 Watanabe et al. Nov 2004 A1
20050013479 Xiao et al. Jan 2005 A1
20050013603 Ichimasa Jan 2005 A1
20050018923 Messina et al. Jan 2005 A1
20050031224 Prilutsky et al. Feb 2005 A1
20050041121 Steinberg et al. Feb 2005 A1
20050068446 Steinberg et al. Mar 2005 A1
20050068452 Steinberg et al. Mar 2005 A1
20050069208 Morisada Mar 2005 A1
20050089218 Chiba Apr 2005 A1
20050104848 Yamaguchi et al. May 2005 A1
20050105780 Ioffe May 2005 A1
20050128518 Tsue et al. Jun 2005 A1
20050140801 Prilutsky et al. Jun 2005 A1
20050185054 Edwards et al. Aug 2005 A1
20050275721 Ishii Dec 2005 A1
20060006077 Mosher et al. Jan 2006 A1
20060008152 Kumar et al. Jan 2006 A1
20060008171 Petschnigg et al. Jan 2006 A1
20060008173 Matsugu et al. Jan 2006 A1
20060018517 Chen et al. Jan 2006 A1
20060029265 Kim et al. Feb 2006 A1
20060039690 Steinberg et al. Feb 2006 A1
20060050933 Adam et al. Mar 2006 A1
20060056655 Wen et al. Mar 2006 A1
20060093212 Steinberg et al. May 2006 A1
20060093213 Steinberg et al. May 2006 A1
20060093238 Steinberg et al. May 2006 A1
20060098875 Sugimoto May 2006 A1
20060098890 Steinberg et al. May 2006 A1
20060120599 Steinberg et al. Jun 2006 A1
20060133699 Widrow et al. Jun 2006 A1
20060140455 Costache et al. Jun 2006 A1
20060147192 Zhang et al. Jul 2006 A1
20060153472 Sakata et al. Jul 2006 A1
20060177100 Zhu et al. Aug 2006 A1
20060177131 Porikli Aug 2006 A1
20060187305 Trivedi et al. Aug 2006 A1
20060203106 Lawrence et al. Sep 2006 A1
20060203107 Steinberg et al. Sep 2006 A1
20060204034 Steinberg et al. Sep 2006 A1
20060204055 Steinberg et al. Sep 2006 A1
20060204056 Steinberg et al. Sep 2006 A1
20060204058 Kim et al. Sep 2006 A1
20060204110 Steinberg et al. Sep 2006 A1
20060210264 Saga Sep 2006 A1
20060227997 Au et al. Oct 2006 A1
20060257047 Kameyama et al. Nov 2006 A1
20060268150 Kameyama et al. Nov 2006 A1
20060269270 Yoda et al. Nov 2006 A1
20060280380 Li Dec 2006 A1
20060285754 Steinberg et al. Dec 2006 A1
20060291739 Li et al. Dec 2006 A1
20070047768 Gordon et al. Mar 2007 A1
20070053614 Mori et al. Mar 2007 A1
20070070440 Li et al. Mar 2007 A1
20070071347 Li et al. Mar 2007 A1
20070076921 Living Apr 2007 A1
20070091203 Peker et al. Apr 2007 A1
20070098218 Zhang et al. May 2007 A1
20070098303 Gallagher et al. May 2007 A1
20070110305 Corcoran et al. May 2007 A1
20070110417 Itokawa May 2007 A1
20070116379 Corcoran et al. May 2007 A1
20070116380 Ciuc et al. May 2007 A1
20070154095 Cao et al. Jul 2007 A1
20070154096 Cao et al. Jul 2007 A1
20070160307 Steinberg et al. Jul 2007 A1
20070189748 Drimbarean et al. Aug 2007 A1
20070189757 Steinberg et al. Aug 2007 A1
20070201724 Steinberg et al. Aug 2007 A1
20070201725 Steinberg et al. Aug 2007 A1
20070201726 Steinberg et al. Aug 2007 A1
20070263104 DeLuca et al. Nov 2007 A1
20070273504 Tran Nov 2007 A1
20070296833 Corcoran et al. Dec 2007 A1
20080002060 DeLuca et al. Jan 2008 A1
20080013798 Ionita et al. Jan 2008 A1
20080013799 Steinberg et al. Jan 2008 A1
20080013800 Steinberg et al. Jan 2008 A1
20080019565 Steinberg Jan 2008 A1
20080037839 Corcoran et al. Feb 2008 A1
20080043121 Prilutsky et al. Feb 2008 A1
20080043122 Steinberg et al. Feb 2008 A1
20080049970 Ciuc et al. Feb 2008 A1
20080055433 Steinberg et al. Mar 2008 A1
20080075385 David et al. Mar 2008 A1
20080144966 Steinberg et al. Jun 2008 A1
20080175481 Petrescu et al. Jul 2008 A1
20080186389 DeLuca et al. Aug 2008 A1
20080205712 Ionita et al. Aug 2008 A1
20080219517 Blonk et al. Sep 2008 A1
20080240555 Nanu et al. Oct 2008 A1
20080267461 Ianculescu et al. Oct 2008 A1
20090002514 Steinberg et al. Jan 2009 A1
20090003652 Steinberg et al. Jan 2009 A1
20090003708 Steinberg et al. Jan 2009 A1
20090052749 Steinberg et al. Feb 2009 A1
20090087030 Steinberg et al. Apr 2009 A1
20090324008 Kongqiao et al. Dec 2009 A1
20110157370 Livesey Jun 2011 A1
Foreign Referenced Citations (1)
Number Date Country
578508 Jan 1994 EP
Non-Patent Literature Citations (19)
Entry
Batur et al., “Adaptive Active Appearance Models”, IEEE Transactions on Image Processing, 2005, pp. 1707-1721, vol. 14-Issue 11.
Beymer, David, “Pose-Invariant face Recognition Using Real and Virtual Views, A.I. Technical Report No. 1574”, Massachusetts Institute of Technology Artificial Intelligence Laboratory, 1996, pp. 1-176.
Bradski Gary et al., “Learning-Based Computer Vision with Intel's Open Source Computer Vision Library”, Intel Technology, 2005, pp. 119-130, vol. 9-Issue 2.
Corcoran, P. et al., "Automatic Indexing of Consumer Image Collections Using Person Recognition Techniques", Digest of Technical Papers. International Conference on Consumer Electronics, 2005, pp. 127-128.
Costache, G. et al., “In-Camera Person Indexing of Digital Images”, Digest of Technical Papers. International Conference on Consumer Electronics, 2006, pp. 339-340.
Demirkir, C. et al., “Face detection using boosted tree classifier stages”, Proceedings of the IEEE 12th Signal Processing and Communications Applications Conference, 2004, pp. 575-578.
Drimbarean, A.F. et al., “Image Processing Techniques to Detect and Filter Objectionable Images based on Skin Tone and Shape Recognition”, International Conference on Consumer Electronics, 2001, pp. 278-279.
Goodall, C., "Procrustes Methods in the Statistical Analysis of Shape, Stable URL: http://www.jstor.org/stable/2345744", Journal of the Royal Statistical Society. Series B (Methodological), 1991, pp. 285-339, vol. 53-Issue 2, Blackwell Publishing for the Royal Statistical Society.
Heisele, B. et al., “Hierarchical Classification and Feature Reduction for Fast Face Detection with Support Vector Machines”, Pattern Recognition, 2003, pp. 2007-2017, vol. 36-Issue 9, Elsevier.
Kouzani, A.Z., “Illumination-Effects Compensation in Facial Images Systems”, Man and Cybernetics, IEEE SMC '99 Conference Proceedings, 1999, pp. VI-840-VI-844, vol. 6.
Nayak et al., "Automatic illumination correction for scene enhancement and object tracking, XP005600656, ISSN: 0262-8856", Image and Vision Computing, 2006, pp. 949-959, vol. 24-Issue 9.
Turk, Matthew et al., “Eigenfaces for Recognition”, Journal of Cognitive Neuroscience, 1991, 17 pgs, vol. 3-Issue 1.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2010/063293, dated Nov. 12, 2010, 13 pages.
Charay Lerdsudwichai, Mohamed Abdel-Mottaleb and A-Nasser Ansari, Tracking multiple people with recovery from partial and total occlusion, Pattern Recognition, vol. 38, Issue 7, Jul. 2005, pp. 1059-1070, ISSN: 0031-3203 doi>10.1016/j.patcog.2004.11.022.
Xuefeng Song, and Ram Nevatia, Combined Face-Body Tracking in Indoor Environment, Pattern Recognition, 2004. ICPR 2004, Proceedings of the 17th International Conference on Cambridge, UK Aug. 23-26, 2004, Piscataway, NJ, USA, IEEE, LNKDDOI: 10.1109/Icpr2004.1333728, vol. 4, Aug. 23, 2004, pp. 159-162, Xp010723886, ISBN: 978-0-7695-2128-2.
Mark Everingham , Josef Sivic and Andrew Zisserman, “Hello! My name is . . . Buffy”—Automatic Naming of Characters in TV Video, Proceedings of the British Machine Vision, Conference (2006), Jan. 1, 2006, pp. 1-10, XP009100754.
Ming Yang, Fengjun LV, Wei XU, and Yihong Gong, Detection Driven Adaptive Multi-cue Integration for Multiple Human Tracking, IEEE 12th International Conference on Computer Vision (ICCV), Sep. 30, 2009, pp. 1554-1561, XP002607967.
Hieu T. Nguyen, and Arnold W. Smeulders, Robust Tracking Using Foreground-Background Texture Discrimination, International Journal of Computer Vision, Kluwer Academic Publishers, BO LNKDDOI:10.1007/S11263-006-7067-X, vol. 69, No. 3, May 1, 2006, pp. 277-293, XP019410143, ISSN: 1573-1405.
Gael Jaffre and Philippe Joly, Improvement of a Person Labelling Method Using Extracted Knowledge on Costume, Jan. 1, 2005, Computer Analysis of Images and Patterns Lecture Notes in Computer Science; LNCS, Springer, Berlin, DE, pp. 489-497, XP019019231, ISBN: 978-3-540-28969-2.
Related Publications (1)
Number Date Country
20110081052 A1 Apr 2011 US