Face detection in mid-shot digital images

Information

  • Patent Grant
  • Patent Number
    8,494,286
  • Date Filed
    Tuesday, February 5, 2008
  • Date Issued
    Tuesday, July 23, 2013
Abstract
A method for detecting a face in a mid-shot digital image of a person comprises capturing first and second mid-shot digital images of nominally the same scene using different capture settings such that the foreground is differently differentiated from the background in each image, and comparing the first and second images to determine the foreground region of the images. A portion of the foreground region likely to correspond to a face is estimated based upon the geometry of the foreground region.
Description

The present invention relates to a method and system for detecting a face in a digital image, and in particular a method and apparatus for detecting a face in a mid-shot digital image of a person. In this context a mid-shot image of a person is an image having a single human figure in the foreground orientated in a generally upright position.


BACKGROUND OF THE INVENTION

Known face tracking applications for digital image acquisition devices include methods of marking human faces in a series of images such as a video stream or a camera preview. Face tracking can be used to indicate to a photographer the locations of faces in an image or to allow post-processing of the images based on knowledge of the locations of the faces. Also, face tracker applications can be used in adaptive adjustment of acquisition parameters of an image, such as focus, exposure and white balance, based on face information in order to improve the quality of acquired images.


A well-known method of fast face detection is disclosed in US 2002/0102024, hereinafter Viola-Jones. In Viola-Jones, a chain (cascade) of 32 classifiers based on rectangular (and increasingly refined) Haar features is used with an integral image, derived from an acquired image, by applying the classifiers to a sub-window within the integral image. For a complete analysis of an acquired image, this sub-window is shifted incrementally across the integral image until the entire image has been covered.
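By way of illustration only, the sketch below shows the integral-image mechanics just described: once the cumulative sums are built, the sum of any rectangle of pixels costs four array lookups, and a fixed-size sub-window is shifted incrementally across the image. The 24-pixel window, 4-pixel step and the stub classify callback are assumptions of the sketch; the actual Viola-Jones cascade of Haar-feature classifiers is not reproduced here.

```python
import numpy as np

def integral_image(gray):
    """Cumulative row/column sums: rectangle sums then cost four lookups."""
    return gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def scan(gray, classify, win=24, step=4):
    """Shift a win-by-win sub-window across the integral image; a real
    detector would run the classifier cascade at each position."""
    ii = integral_image(gray)
    hits = []
    for y in range(0, gray.shape[0] - win + 1, step):
        for x in range(0, gray.shape[1] - win + 1, step):
            if classify(ii, x, y, win):
                hits.append((x, y))
    return hits
```

A two-rectangle Haar feature is then simply a difference of two rect_sum calls on adjacent rectangles within the sub-window, which is what makes the cascade fast despite the number of features evaluated.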


A number of variants of the original Viola-Jones algorithm are known in the literature, such as disclosed in U.S. patent application Ser. No. 11/464,083 (FN143). However, such face detection applications are computationally expensive.


It is an object of the present invention to provide an alternative and computationally efficient method of face detection in mid-shot digital images.


DISCLOSURE OF THE INVENTION

The present invention provides a method for detecting a face in a mid-shot digital image of a person as claimed in claim 1.


The invention is based upon the recognition that, for mid-shot digital images, a simple geometric analysis of the foreground can locate the face to a high degree of accuracy, thereby dispensing with the need for complex calculations.


If desired, the presence of a face can be confirmed or denied by, for example, looking for a preponderance of flesh tones within the portion of the foreground identified by the inventive method and presumed to include a face, but this is still far less computationally intensive than the prior art techniques.
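As an illustration of such a confirmation step, the sketch below accepts a candidate face rectangle when the fraction of skin-toned pixels within it exceeds a threshold. The YCbCr bounds and the 0.5 threshold are common rules of thumb assumed for the sketch, not values taken from this patent.

```python
import cv2

def looks_like_face(bgr_image, rect, min_skin_fraction=0.5):
    """Confirm a candidate rectangle by a preponderance of flesh tones.

    rect is (x, y, w, h) in pixels. The Cr/Cb bounds are a widely used
    skin-colour rule of thumb, assumed here for illustration.
    """
    x, y, w, h = rect
    patch = bgr_image[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    skin = (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
    return skin.mean() >= min_skin_fraction
```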





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention will now be described by way of example with reference to the accompanying drawings, in which:



FIG. 1 is a block diagram of a digital image acquisition device operating in accordance with an embodiment of the present invention;



FIG. 2 is a flow diagram of face detection software in the image acquisition device of FIG. 1; and



FIG. 3 shows images processed according to two alternate image analysis algorithms which may be used in the geometrical analysis step of FIG. 2.





DESCRIPTION OF THE PREFERRED EMBODIMENT


FIG. 1 is a block diagram of a digital image acquisition device 20 which in the present embodiment is a portable digital camera, and includes a processor 120. It can be appreciated that many of the processes implemented in the digital camera may be implemented in or controlled by software operating in a microprocessor, central processing unit, controller, digital signal processor and/or an application specific integrated circuit, collectively depicted as processor 120. Generically, all user interface and control of peripheral components such as buttons and display are handled by a microcontroller 122. The processor 120, in response to a user input at 122, such as half pressing a shutter button (pre-capture mode 32), initiates and controls the digital photographic process. Ambient light exposure is monitored using light sensor 40 in order to automatically determine if a flash is to be used. A distance to the subject is determined using a focus component 50 which also focuses the image on image capture component 60. If a flash is to be used, processor 120 causes the flash 70 to generate a photographic flash in substantial coincidence with the recording of the image by image capture component 60 upon full depression of the shutter button. The image capture component 60 digitally records the image in colour. The image capture component preferably includes a CCD (charge coupled device) or CMOS sensor to facilitate digital recording. The flash may be selectively generated either in response to the light sensor 40 or a manual input 72 from the user of the image acquisition device. The high resolution image recorded by image capture component 60 is stored in an image store 80 which may comprise computer memory such as dynamic random access memory or a non-volatile memory. The camera is equipped with a display 100, such as an LCD, for preview and post-view of images.


In the case of preview images which are generated in the pre-capture mode 32 with the shutter button half-pressed, the display 100 can assist the user in composing the image, as well as being used to determine focusing and exposure. Temporary storage 82 is used to store one or more of the preview images and can be part of the image store 80 or a separate component. The preview image is preferably generated by the image capture component 60. For speed and memory efficiency reasons, preview images preferably have a lower pixel resolution than the main image taken when the shutter button is fully depressed, and are generated by sub-sampling a raw captured image using sub-sampler software 124 which can be part of the general processor 120, dedicated hardware, or a combination thereof. Depending on the settings of this hardware subsystem, the pre-acquisition image processing may require that a preview image satisfy some predetermined test criteria before it is stored. Such test criteria may be chronological, such as constantly replacing the previously saved preview image with a newly captured preview image every 0.5 seconds during the pre-capture mode 32, until the high resolution main image is captured by full depression of the shutter button. More sophisticated criteria may involve analysis of the preview image content, for example, testing the image for changes, before deciding whether the new preview image should replace a previously saved image. Other criteria may be based on image analysis such as sharpness, or metadata analysis such as the exposure condition, whether a flash will be used, and/or the distance to the subject.


If test criteria are not met, the camera continues by capturing the next preview image without saving the current one. The process continues until the final high resolution main image is acquired and saved by fully depressing the shutter button.


Where multiple preview images can be saved, a new preview image will be placed on a chronological First In First Out (FIFO) stack, until the user takes the final picture. The reason for storing multiple preview images is that the last preview image, or any single preview image, may not be the best reference image for comparison with the final high resolution image in, for example, a red-eye correction process or, in the present embodiment, mid-shot mode processing. By storing multiple images, a better reference image can be achieved, and a closer alignment between the preview and the final captured image can be achieved in an alignment stage discussed later.
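One way to realise such a rolling store is a fixed-capacity FIFO that evicts the oldest preview whenever a new one is saved. The sketch below is a minimal illustration; the three-frame capacity and the pluggable test and scoring callbacks are assumptions.

```python
from collections import deque

class PreviewStore:
    """Chronological FIFO of preview frames: pushing onto a full store
    silently evicts the oldest frame."""

    def __init__(self, capacity=3):
        self.frames = deque(maxlen=capacity)

    def push(self, frame, passes_test=lambda f: True):
        # Save the new preview only if it meets the predetermined test
        # criteria (chronological, content change, sharpness, metadata...).
        if passes_test(frame):
            self.frames.append(frame)

    def best_reference(self, score):
        # Choose the stored preview that best serves as a reference for
        # the final image; the scoring function is application-dependent.
        return max(self.frames, key=score) if self.frames else None
```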


The camera is also able to capture and store in the temporary storage 82 one or more low resolution post-view images. Post-view images are low resolution images essentially the same as preview images, except that they occur after the main high resolution image is captured.


The image acquisition device 20 has a user-selectable mid-shot mode 30. In mid-shot mode, when the shutter button is depressed the camera is caused to automatically capture and store a series of images at close intervals so that the images are nominally of the same scene. A mid-shot mode face detecting processor 90 analyzes and processes the stored images according to a workflow to be described. The processor 90 can be integral to the image acquisition device 20—indeed, it could be the processor 120 with suitable programming—or part of an external processing device 10 such as a desktop computer. As will be described, the particular number, resolution and sequence of images, whether flash is used or not, and whether the images are in or out of focus, depends upon the particular embodiment. However, in this embodiment the processor 90 receives a main high resolution image from the image store 80 as well as a low resolution post-view image from the temporary storage 82.


Where the mid-shot mode face detecting processor 90 is integral to the image acquisition device 20, the final processed image may be displayed on image display 100, saved on a persistent storage 112 which can be internal or a removable storage such as CF card, SD card or the like, or downloaded to another device, such as a personal computer, server or printer via image output means 110 which can be connected via wire, fiber, or other data transmission means including wireless means. In embodiments where the processor 90 is implemented in an external device 10, such as a desktop computer, the final processed image may be returned to the image acquisition device 20 for storage and display as described above, or stored and displayed externally of the camera.



FIG. 2 shows the workflow of a first embodiment of mid-shot mode processing according to the invention.


First, mid-shot mode is selected, step 200. Now, when the shutter button is fully depressed, the camera automatically captures and stores two digital images:

    • a main, high pixel resolution, flash image (image A), step 202.
    • a post-view, low pixel resolution, non-flash image (image B), step 204.


The post-view image B is captured immediately after the main image A, so that the scene captured by each image is nominally the same. If desired, image A could be non-flash and image B taken with flash; the important thing, for this embodiment, is that one of them is taken with flash and one without. Normally, for a mid-shot image of a person, the main image A would be the flash image, but this will depend on the ambient lighting. An example of a mid-shot image A is shown in FIG. 3(a); the post-view image B will be substantially the same but of lower resolution.


Steps 200 to 204 just described necessarily take place in the image acquisition device 20. The remaining steps now to be described are performed by the mid-shot processor 90 and can take place in the camera or in an external device 10.


Images A and B are aligned in step 206, to compensate for any slight movement in the subject or camera between taking these images. Techniques for aligning images in this way are well-known. Then, step 208, the images A and B are matched in pixel resolution by up-sampling image B and/or down-sampling image A. Again, this is well-known in the art.
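A sketch of steps 206 and 208 using OpenCV (in code, the resolution matching is done first so the frames can be compared pixel for pixel). The particular routines, linear up-sampling and phase-correlation alignment, are assumptions standing in for the well-known techniques the text refers to.

```python
import cv2
import numpy as np

def match_and_align(image_a, image_b):
    """Up-sample BGR image B to image A's pixel resolution, then register
    B to A by a global translation estimated with phase correlation."""
    h, w = image_a.shape[:2]
    b_up = cv2.resize(image_b, (w, h), interpolation=cv2.INTER_LINEAR)

    gray_a = cv2.cvtColor(image_a, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray_b = cv2.cvtColor(b_up, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # phaseCorrelate reports the shift of the second array relative to the
    # first, so B is translated back by (-dx, -dy) to register it with A.
    (dx, dy), _response = cv2.phaseCorrelate(gray_a, gray_b)
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(b_up, m, (w, h))
```

A single global translation only compensates for small camera movement between the two exposures; subject motion would call for a more local alignment.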


Next, in step 210, the flash and non-flash images A and B are used to construct a foreground map, that is, a set of data defining those regions of the aligned images which belong to the foreground. FIG. 3(b) represents the foreground map for the image of FIG. 3(a), although it is to be understood that the map is not necessarily produced as a visible image. The foreground map locates the foreground subject within the boundaries of the overall image.


In this embodiment, steps 206 to 210 may be carried out in accordance with the method disclosed in U.S. patent application Ser. No. 11/217,788 and PCT Application No. PCT/EP2006/005109 (Ref: FN122), which are hereby incorporated by reference.
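The incorporated applications specify the actual separation method; purely to illustrate the principle, the sketch below labels as foreground those pixels whose luminance rises markedly under flash, since flash illumination falls off rapidly with subject distance. The ratio threshold and the morphological clean-up are assumptions.

```python
import cv2
import numpy as np

def foreground_map(flash_bgr, nonflash_bgr, ratio_thresh=1.25):
    """Rough flash/no-flash foreground map for two aligned, size-matched
    frames: the nearby subject brightens far more under flash than the
    distant background does."""
    lum_f = cv2.cvtColor(flash_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    lum_n = cv2.cvtColor(nonflash_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    ratio = (lum_f + 1.0) / (lum_n + 1.0)          # +1 avoids division by zero
    mask = (ratio > ratio_thresh).astype(np.uint8)

    # Morphological opening removes speckle so the map forms usable regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```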


Finally, step 212, the portion of the foreground region likely to correspond to a face is identified by analysis of the size and shape of the foreground region. It will be appreciated that such a simple geometric approach to face detection can be used where the approximate size and shape of the subject is known in advance, as is the case for a mid-shot of a single human figure. Two algorithms for detecting the face region will now be described, with reference to FIGS. 3(c) and 3(d).


First, and common to both algorithms, the orientation of the foreground subject in the image relative to the camera is determined, as disclosed in International Patent Application No. PCT/EP2006/008229 (Ref: FN119), which is hereby incorporated by reference. This method is based on the observation that in a normally orientated camera for a normally orientated scene, the close image foreground, in this case, the subject, is at the bottom of the image and the far background is at the top of the image. Alternatively, the orientation of the subject in the image may be ascertained using motion sensors as is well known in the art.
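The stated observation can be illustrated with a simple vote: the image edge touching the densest foreground is taken to be "down". This stand-in is an assumption of the sketch, not the incorporated method.

```python
def estimate_down_edge(fg_mask, band=20):
    """fg_mask: NumPy array of 0/1 foreground labels. Returns the image edge
    ('top', 'bottom', 'left' or 'right') touching the most foreground; for a
    normally oriented mid-shot that edge corresponds to 'down'."""
    density = {
        "top": fg_mask[:band].mean(),
        "bottom": fg_mask[-band:].mean(),
        "left": fg_mask[:, :band].mean(),
        "right": fg_mask[:, -band:].mean(),
    }
    return max(density, key=density.get)
```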


In the first algorithm, FIG. 3(c), the width of the body is estimated using a band of N pixel rows in the lower (with respect to orientation) part of the image, bounded to the left and the right by the extent of the foreground. This rectangle is then grown upwardly until it reaches the shoulders (rectangle 300). The line of the shoulders is determined by computing, line by line, the ratio of foreground to background pixels. When the ratio of background pixels reaches a certain threshold and remains above it for a number of consecutive lines, the first such line encountered is taken to be the shoulder line. From this rectangle 300 the position and size of the face area is estimated (rectangle 302) and verified to be fully covered by foreground.
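A sketch of this first algorithm, operating on a binary foreground map already rotated so the subject is upright. The band height N, the background-ratio threshold, the consecutive-line count and the face-box proportions are all illustrative assumptions.

```python
import numpy as np

def find_face_algorithm_one(fg, n_band=40, bg_thresh=0.35, run=5):
    """fg: binary foreground map (1 = foreground), subject upright.
    Returns (top, bottom, left, right) of the estimated face box, or None."""
    h, _w = fg.shape

    # 1. Bound the body left/right using a band of n_band rows at the bottom.
    band = fg[h - n_band:]
    cols = np.flatnonzero(band.any(axis=0))
    if cols.size == 0:
        return None
    left, right = cols[0], cols[-1]

    # 2. Grow the rectangle upward to the shoulder line: scanning up, find the
    #    first row whose background ratio inside the rectangle exceeds the
    #    threshold and stays above it for `run` consecutive rows.
    consec, shoulders = 0, 0
    for y in range(h - 1, -1, -1):
        bg_ratio = 1.0 - fg[y, left:right + 1].mean()
        if bg_ratio > bg_thresh:
            consec += 1
            if consec >= run:
                shoulders = y + run - 1   # the first qualifying row met
                break
        else:
            consec = 0

    # 3. Estimate the face box above the shoulders from the body width (a face
    #    width of one third of the body width is an assumption) and verify
    #    that it is essentially covered by foreground.
    face_w = max(1, (right - left) // 3)
    cx = (left + right) // 2
    fl, fr = cx - face_w // 2, cx + face_w // 2
    ft, fb = max(0, shoulders - int(1.4 * face_w)), shoulders
    if fb <= ft or fg[ft:fb, fl:fr + 1].mean() < 0.9:
        return None
    return ft, fb, fl, fr
```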


In the second algorithm, FIG. 3(d), the rectangle 304 bounding the foreground is calculated. Then, using the orientation information, the top portion 306 of the bounding rectangle 304 is selected, where the head is assumed to be. For this purpose the general position of the head is computed from a head/body ratio, loosened such that it will contain the head no matter what position it is in (e.g. straight, bent, and so on). The top of the rectangle 306 is coincident with the top of the rectangle 304 and extends down ⅜ of the height of the latter. It is also narrower, being ¾ the width of the rectangle 304 and centred within it.


Now the top rectangle 306 is reduced in width to include only the face. First, the bounding box of the foreground in the previously found rectangle 306 is computed by shrinking rectangle 306 until it contains only foreground pixels. This bounding box may contain hands or false positives from the background, which are filtered out by selecting the largest rectangle 308 that is of the right shape and size and consists only of foreground. More particularly, the height of rectangle 308 is computed using body proportions: the face height is estimated to be 2/7 of the height of rectangle 304. The vertical displacement between rectangles 308 and 306 is the presumed forehead height.
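A sketch of this second algorithm using the 3/8, 3/4 and 2/7 proportions given above. It simplifies one step: rectangle 306 is shrunk to the bounding box of the foreground it contains, without the further filtering of hands and background false positives described in the text.

```python
import numpy as np

def find_face_algorithm_two(fg):
    """fg: binary foreground map (1 = foreground), subject upright.
    Returns (top, bottom, left, right) of the estimated face box, or None."""
    rows = np.flatnonzero(fg.any(axis=1))
    cols = np.flatnonzero(fg.any(axis=0))
    if rows.size == 0:
        return None

    # Rectangle 304: the bounding box of the whole foreground.
    top, bottom, left, right = rows[0], rows[-1], cols[0], cols[-1]
    h304, w304 = bottom - top + 1, right - left + 1

    # Rectangle 306: top 3/8 of 304's height, 3/4 of its width, centred.
    t306, b306 = top, top + (3 * h304) // 8
    l306, r306 = left + w304 // 8, right - w304 // 8

    # Shrink 306 to the bounding box of the foreground it contains.
    sub = fg[t306:b306, l306:r306 + 1]
    srows = np.flatnonzero(sub.any(axis=1))
    scols = np.flatnonzero(sub.any(axis=0))
    if srows.size == 0:
        return None
    face_top = t306 + srows[0]            # offset from t306: presumed forehead
    face_left, face_right = l306 + scols[0], l306 + scols[-1]
    face_bottom = face_top + (2 * h304) // 7   # face height: 2/7 of 304
    return face_top, face_bottom, face_left, face_right
```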


Variations of the foregoing embodiment are possible. For example, image B could be a preview image rather than a post-view image. Alternatively, both images A and B could be low resolution pre- and/or post-view images, and the foreground map derived therefrom used to identify the face region in a third, high resolution main image. In such a case all three images, i.e. images A and B and the main image, will need to be nominally of the same scene. In another embodiment image B could have the same high resolution as the main image A. This would avoid the need to match image resolution at step 208.


In a further embodiment, where the use of flash for one of the images is not desirable, foreground/background separation may be carried out in accordance with the method disclosed in PCT Application No. PCT/EP2006/008229 (Ref: FN119). In this case, the main image A is taken with the foreground more in focus than the background and the other image B is taken with the background more in focus than the foreground. Using the focused and non-focused images, foreground/background separation is carried out to identify the mid-shot subject.
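Again, the incorporated application specifies the actual method; the sketch below merely illustrates the idea, labelling as foreground the pixels that are locally sharper in the foreground-focused image A than in the background-focused image B. The Laplacian energy measure and the averaging window are assumptions.

```python
import cv2
import numpy as np

def focus_foreground_map(fg_focused_bgr, bg_focused_bgr, window=15):
    """Label pixels whose local high-frequency energy is greater in the
    foreground-focused frame than in the background-focused frame; both
    frames must already be aligned and size-matched."""
    def local_sharpness(bgr):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lap = cv2.Laplacian(gray, cv2.CV_32F)
        # Local energy: squared Laplacian, box-averaged over a neighbourhood.
        return cv2.blur(lap * lap, (window, window))

    sharper = local_sharpness(fg_focused_bgr) > local_sharpness(bg_focused_bgr)
    return sharper.astype(np.uint8)
```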


The invention is not limited to the embodiment(s) described herein but can be amended or modified without departing from the scope of the present invention.

Claims
  • 1. A method for detecting a face in a mid-shot digital image of a person, the method comprising: using a lens, image sensor and processor of a digital image acquisition and processing apparatus, capturing first and second mid-shot digital images of approximately a same scene that is known to include at least a face and a portion of a torso of a human person, including using different capture settings such that the foreground is differently differentiated from the background in each image, comparing the first and second images to determine the foreground region of the images, and identifying a portion of the foreground region likely to correspond to a face based upon matching one or more known face-to-torso geometric relationships to one or more geometric relationships between regions of the foreground region of the first and second images.
  • 2. The method of claim 1, wherein the different capture settings comprise taking one image using a flash and taking the other image without using a flash.
  • 3. The method of claim 1, wherein the different capture settings comprise taking one image with the foreground more in focus than the background and taking the other image with the background more in focus than the foreground.
  • 4. The method of claim 1, wherein the first and second images have different pixel resolutions.
  • 5. The method of claim 4, further comprising matching the pixel resolutions of the first and second images prior to the comparing.
  • 6. The method of claim 1, wherein the first and second images are captured by a digital camera.
  • 7. The method of claim 6, wherein the first image is a relatively high resolution main image, and wherein the second image is a relatively low resolution pre- or post-view version of the first image.
  • 8. The method of claim 6, wherein the first and second images are relatively low resolution pre- and/or post-view versions of a higher resolution main image of said scene also captured by the camera.
  • 9. A digital image acquisition and processing apparatus comprising: means for capturing first and second mid-shot digital images of approximately a same scene that is known to include at least a face and a portion of a torso of a human person, including using different capture settings such that the foreground is differently differentiated from the background in each image, means for comparing the first and second images to determine the foreground region of the images, and means for identifying a portion of the foreground region likely to correspond to a face based upon matching one or more known face-to-torso geometric relationships to one or more geometric relationships between regions of the foreground region of the first and second images.
  • 10. One or more non-transitory processor-readable media having code embedded therein for programming a processor to perform a method for detecting a face in a mid-shot digital image of a person, wherein the method comprises: using a lens, image sensor and processor of a digital image acquisition and processing apparatus, capturing first and second mid-shot digital images of approximately a same scene that is known to include at least a face and a portion of a torso of a human person, including using different capture settings such that the foreground is differently differentiated from the background in each image, comparing the first and second images to determine the foreground region of the images, and identifying a portion of the foreground region likely to correspond to a face based upon matching one or more known face-to-torso geometric relationships to one or more geometric relationships between regions of the foreground region of the first and second images.
  • 11. The one or more processor-readable media of claim 10, wherein the different capture settings comprise taking one image using a flash and taking the other image without using a flash.
  • 12. The one or more processor-readable media of claim 10, wherein the different capture settings comprise taking one image with the foreground more in focus than the background and taking the other image with the background more in focus than the foreground.
  • 13. The one or more processor-readable media of claim 10, wherein the first and second images have different pixel resolutions.
  • 14. The one or more processor-readable media of claim 13, wherein the method further comprises matching the pixel resolutions of the first and second images prior to the comparing.
  • 15. The one or more processor-readable media of claim 10, wherein the first and second images are captured by a digital camera.
  • 16. The one or more processor-readable media of claim 15, wherein the first image is a relatively high resolution main image, and wherein the second image is a relatively low resolution pre- or post-view version of the first image.
  • 17. The one or more processor-readable media of claim 15, wherein the first and second images are relatively low resolution pre- and/or post-view versions of a higher resolution main image of said scene also captured by the camera.
  • 18. A digital image acquisition and processing apparatus, comprising: a lens; an image sensor; a processor; and a processor-readable medium having code embedded therein for programming the processor to perform a method for detecting a face in a mid-shot digital image of a person, wherein the method comprises: capturing first and second mid-shot digital images of approximately a same scene that is known to include at least a face and a portion of a torso of a human person, including using different capture settings such that the foreground is differently differentiated from the background in each image, comparing the first and second images to determine the foreground region of the images, and identifying a portion of the foreground region likely to correspond to a face based upon matching one or more known face-to-torso geometric relationships to one or more geometric relationships between regions of the foreground region of the first and second images.
  • 19. The apparatus of claim 18, wherein the different capture settings comprise taking one image using a flash and taking the other image without using a flash.
  • 20. The apparatus of claim 18, wherein the different capture settings comprise taking one image with the foreground more in focus than the background and taking the other image with the background more in focus than the foreground.
  • 21. The apparatus of claim 18, wherein the first and second images have different pixel resolutions.
  • 22. The apparatus of claim 21, wherein the method further comprises matching the pixel resolutions of the first and second images prior to the comparing.
  • 23. The apparatus of claim 18, wherein the first and second images are captured by a digital camera.
  • 24. The apparatus of claim 23, wherein the first image is a relatively high resolution main image, and wherein the second image is a relatively low resolution pre- or post-view version of the first image.
  • 25. The apparatus of claim 23, wherein the first and second images are relatively low resolution pre- and/or post-view versions of a higher resolution main image of said scene also captured by the camera.
US Referenced Citations (436)
Number Name Date Kind
4047187 Mashimo et al. Sep 1977 A
4168510 Kaiser Sep 1979 A
4317991 Stauffer Mar 1982 A
4367027 Stauffer Jan 1983 A
RE31370 Mashimo et al. Sep 1983 E
4448510 Murakoshi May 1984 A
4456354 Mizokami Jun 1984 A
4469417 Masunaga et al. Sep 1984 A
4562346 Hayashi et al. Dec 1985 A
4638364 Hiramatsu Jan 1987 A
4673276 Yoshida et al. Jun 1987 A
4690536 Nakai et al. Sep 1987 A
4796043 Izumi et al. Jan 1989 A
4970663 Bedell et al. Nov 1990 A
4970683 Harshaw et al. Nov 1990 A
4975969 Tal Dec 1990 A
5008946 Ando Apr 1991 A
5018017 Sasaki et al. May 1991 A
RE33682 Hiramatsu Sep 1991 E
5051770 Cornuejols Sep 1991 A
5061951 Higashihara et al. Oct 1991 A
5063603 Burt Nov 1991 A
5111231 Tokunaga May 1992 A
5130935 Takiguchi Jul 1992 A
5150432 Ueno et al. Sep 1992 A
5161204 Hutcheson et al. Nov 1992 A
5164831 Kuchta et al. Nov 1992 A
5164992 Turk et al. Nov 1992 A
5227837 Terashita Jul 1993 A
5262820 Tamai et al. Nov 1993 A
5280530 Trew et al. Jan 1994 A
5291234 Shindo et al. Mar 1994 A
5305048 Suzuki et al. Apr 1994 A
5311240 Wheeler May 1994 A
5331544 Lu et al. Jul 1994 A
5353058 Takei Oct 1994 A
5384615 Hsieh et al. Jan 1995 A
5384912 Ogrinc et al. Jan 1995 A
5430809 Tomitaka Jul 1995 A
5432863 Benati et al. Jul 1995 A
5450504 Calia Sep 1995 A
5465308 Hutcheson et al. Nov 1995 A
5488429 Kojima et al. Jan 1996 A
5493409 Maeda et al. Feb 1996 A
5496106 Anderson Mar 1996 A
5576759 Kawamura et al. Nov 1996 A
5629752 Kinjo May 1997 A
5633678 Parulski et al. May 1997 A
5638136 Kojima et al. Jun 1997 A
5638139 Clatanoff et al. Jun 1997 A
5652669 Liedenbaum Jul 1997 A
5680481 Prasad et al. Oct 1997 A
5684509 Hatanaka et al. Nov 1997 A
5706362 Yabe Jan 1998 A
5710833 Moghaddam et al. Jan 1998 A
5715325 Bang et al. Feb 1998 A
5724456 Boyack et al. Mar 1998 A
5745668 Poggio et al. Apr 1998 A
5764790 Brunelli et al. Jun 1998 A
5764803 Jacquin et al. Jun 1998 A
5771307 Lu et al. Jun 1998 A
5774129 Poggio et al. Jun 1998 A
5774591 Black et al. Jun 1998 A
5774747 Ishihara et al. Jun 1998 A
5774754 Ootsuka Jun 1998 A
5781650 Lobo et al. Jul 1998 A
5802208 Podilchuk et al. Sep 1998 A
5802220 Black et al. Sep 1998 A
5812193 Tomitaka et al. Sep 1998 A
5818975 Goodwin et al. Oct 1998 A
5835616 Lobo et al. Nov 1998 A
5842194 Arbuckle Nov 1998 A
5844573 Poggio et al. Dec 1998 A
5850470 Kung et al. Dec 1998 A
5852669 Eleftheriadis et al. Dec 1998 A
5852823 De Bonet Dec 1998 A
RE36041 Turk et al. Jan 1999 E
5870138 Smith et al. Feb 1999 A
5905807 Kado et al. May 1999 A
5911139 Jain et al. Jun 1999 A
5963670 Lipson et al. Oct 1999 A
5966549 Hara et al. Oct 1999 A
5978519 Bollman et al. Nov 1999 A
5991456 Rahman et al. Nov 1999 A
6028960 Graf et al. Feb 2000 A
6035074 Fujimoto et al. Mar 2000 A
6053268 Yamada Apr 2000 A
6061055 Marks May 2000 A
6072094 Karady et al. Jun 2000 A
6097470 Buhr et al. Aug 2000 A
6101271 Yamashita et al. Aug 2000 A
6108437 Lin Aug 2000 A
6128397 Baluja et al. Oct 2000 A
6128398 Kuperstein et al. Oct 2000 A
6134339 Luo Oct 2000 A
6148092 Qian Nov 2000 A
6151073 Steinberg et al. Nov 2000 A
6157677 Martens et al. Dec 2000 A
6173068 Prokoski Jan 2001 B1
6181805 Koike et al. Jan 2001 B1
6184926 Khosravi et al. Feb 2001 B1
6188777 Darrell et al. Feb 2001 B1
6192149 Eschbach et al. Feb 2001 B1
6240198 Rehg et al. May 2001 B1
6246779 Fukui et al. Jun 2001 B1
6246790 Huang et al. Jun 2001 B1
6249315 Holm Jun 2001 B1
6252976 Schildkraut et al. Jun 2001 B1
6263113 Abdel-Mottaleb et al. Jul 2001 B1
6268939 Klassen et al. Jul 2001 B1
6275614 Krishnamurthy et al. Aug 2001 B1
6278491 Wang et al. Aug 2001 B1
6282317 Luo et al. Aug 2001 B1
6285410 Marni Sep 2001 B1
6301370 Steffens et al. Oct 2001 B1
6301440 Bolle et al. Oct 2001 B1
6332033 Qian Dec 2001 B1
6349373 Sitka et al. Feb 2002 B2
6351556 Loui et al. Feb 2002 B1
6393136 Amir et al. May 2002 B1
6393148 Bhaskar May 2002 B1
6400830 Christian et al. Jun 2002 B1
6404900 Qian et al. Jun 2002 B1
6407777 DeLuca Jun 2002 B1
6421468 Ratnakar et al. Jul 2002 B1
6426779 Noguchi et al. Jul 2002 B1
6438234 Gisin et al. Aug 2002 B1
6438264 Gallagher et al. Aug 2002 B1
6456732 Kimbell et al. Sep 2002 B1
6459436 Kumada et al. Oct 2002 B1
6463163 Kresch Oct 2002 B1
6473199 Gilman et al. Oct 2002 B1
6483521 Takahashi et al. Nov 2002 B1
6501857 Gotsman et al. Dec 2002 B1
6502107 Nishida Dec 2002 B1
6504546 Cosatto et al. Jan 2003 B1
6504942 Hong et al. Jan 2003 B1
6504951 Luo et al. Jan 2003 B1
6516154 Parulski et al. Feb 2003 B1
6526161 Yan Feb 2003 B1
6529630 Kinjo Mar 2003 B1
6549641 Ishikawa et al. Apr 2003 B2
6556708 Christian et al. Apr 2003 B1
6560029 Dobbie et al. May 2003 B1
6564225 Brogliatti et al. May 2003 B1
6567983 Shiimori May 2003 B1
6587119 Anderson et al. Jul 2003 B1
6606117 Windle Aug 2003 B1
6606398 Cooper Aug 2003 B2
6633655 Hong et al. Oct 2003 B1
6661907 Ho et al. Dec 2003 B2
6661918 Gordon et al. Dec 2003 B1
6678407 Tajima Jan 2004 B1
6697503 Matsuo et al. Feb 2004 B2
6697504 Tsai Feb 2004 B2
6700999 Yang Mar 2004 B1
6747690 Molgaard Jun 2004 B2
6754368 Cohen Jun 2004 B1
6754389 Dimitrova et al. Jun 2004 B1
6760465 McVeigh et al. Jul 2004 B2
6760485 Gilman et al. Jul 2004 B1
6765612 Anderson et al. Jul 2004 B1
6778216 Lin Aug 2004 B1
6792135 Toyama Sep 2004 B1
6801250 Miyashita Oct 2004 B1
6816156 Sukeno et al. Nov 2004 B2
6816611 Hagiwara et al. Nov 2004 B1
6829009 Sugimoto Dec 2004 B2
6850274 Silverbrook et al. Feb 2005 B1
6859565 Baron Feb 2005 B2
6876755 Taylor et al. Apr 2005 B1
6879705 Tao et al. Apr 2005 B1
6885760 Yamada et al. Apr 2005 B2
6900840 Schinner et al. May 2005 B1
6937773 Nozawa et al. Aug 2005 B1
6940545 Ray et al. Sep 2005 B1
6959109 Moustafa Oct 2005 B2
6965684 Chen et al. Nov 2005 B2
6977687 Suh Dec 2005 B1
6993157 Oue et al. Jan 2006 B1
7003135 Hsieh et al. Feb 2006 B2
7020337 Viola et al. Mar 2006 B2
7024053 Enomoto Apr 2006 B2
7027619 Pavlidis et al. Apr 2006 B2
7027621 Prokoski Apr 2006 B1
7034848 Sobol Apr 2006 B2
7035456 Lestideau Apr 2006 B2
7035462 White et al. Apr 2006 B2
7035467 Nicponski Apr 2006 B2
7038709 Verghese May 2006 B1
7038715 Flinchbaugh May 2006 B1
7039222 Simon et al. May 2006 B2
7042511 Lin May 2006 B2
7043465 Pirim May 2006 B2
7050607 Li et al. May 2006 B2
7057653 Kubo Jun 2006 B1
7064776 Sumi et al. Jun 2006 B2
7082212 Liu et al. Jul 2006 B2
7088386 Goto Aug 2006 B2
7099510 Jones et al. Aug 2006 B2
7103218 Chen et al. Sep 2006 B2
7106374 Bandera et al. Sep 2006 B1
7106887 Kinjo Sep 2006 B2
7110569 Brodsky et al. Sep 2006 B2
7110575 Chen et al. Sep 2006 B2
7113641 Eckes et al. Sep 2006 B1
7119838 Zanzucchi et al. Oct 2006 B2
7120279 Chen et al. Oct 2006 B2
7151843 Rui et al. Dec 2006 B2
7158680 Pace Jan 2007 B2
7162076 Liu Jan 2007 B2
7162101 Itokawa et al. Jan 2007 B2
7171023 Kim et al. Jan 2007 B2
7171025 Rui et al. Jan 2007 B2
7190806 Cazier Mar 2007 B2
7190829 Zhang et al. Mar 2007 B2
7194114 Schneiderman Mar 2007 B2
7200249 Okubo et al. Apr 2007 B2
7200266 Ozer et al. Apr 2007 B2
7218759 Ho et al. May 2007 B1
7227976 Jung et al. Jun 2007 B1
7254257 Kim et al. Aug 2007 B2
7269292 Steinberg Sep 2007 B2
7274822 Zhang et al. Sep 2007 B2
7274832 Nicponski Sep 2007 B2
7295233 Steinberg et al. Nov 2007 B2
7315630 Steinberg et al. Jan 2008 B2
7315631 Corcoran et al. Jan 2008 B1
7317815 Steinberg et al. Jan 2008 B2
7321391 Ishige Jan 2008 B2
7324693 Chen Jan 2008 B2
7336821 Ciuc et al. Feb 2008 B2
7340110 Lim et al. Mar 2008 B2
7352393 Sakamoto Apr 2008 B2
7362368 Steinberg et al. Apr 2008 B2
7403643 Ianculescu et al. Jul 2008 B2
7430335 Dumitras et al. Sep 2008 B2
7436998 Steinberg et al. Oct 2008 B2
7440593 Steinberg et al. Oct 2008 B1
7457477 Petschnigg et al. Nov 2008 B2
7460695 Steinberg et al. Dec 2008 B2
7469055 Corcoran et al. Dec 2008 B2
7502494 Tafuku et al. Mar 2009 B2
7512262 Criminisi et al. Mar 2009 B2
7515739 Porter et al. Apr 2009 B2
7515740 Corcoran et al. Apr 2009 B2
7522772 Porter et al. Apr 2009 B2
7536036 Steinberg et al. May 2009 B2
7551211 Taguchi et al. Jun 2009 B2
7565030 Steinberg et al. Jul 2009 B2
7587085 Steinberg et al. Sep 2009 B2
7612794 He et al. Nov 2009 B2
7620214 Chen et al. Nov 2009 B2
7620218 Steinberg et al. Nov 2009 B2
7623177 Nakamura et al. Nov 2009 B2
7623733 Hirosawa Nov 2009 B2
7630561 Porter et al. Dec 2009 B2
7636485 Simon et al. Dec 2009 B2
7636486 Steinberg et al. Dec 2009 B2
7652693 Miyashita et al. Jan 2010 B2
7689009 Corcoran et al. Mar 2010 B2
7733388 Asaeda Jun 2010 B2
7738015 Steinberg et al. Jun 2010 B2
7792335 Steinberg et al. Sep 2010 B2
7868922 Ciuc et al. Jan 2011 B2
7920723 Nanu et al. Apr 2011 B2
7953251 Steinberg et al. May 2011 B1
8055029 Petrescu et al. Nov 2011 B2
8073286 David et al. Dec 2011 B2
8135184 Steinberg et al. Mar 2012 B2
8306283 Zhang et al. Nov 2012 B2
20010005222 Yamaguchi Jun 2001 A1
20010028731 Covell et al. Oct 2001 A1
20010031142 Whiteside Oct 2001 A1
20010038712 Loce et al. Nov 2001 A1
20010038714 Masumoto et al. Nov 2001 A1
20020081003 Sobol Jun 2002 A1
20020105662 Patton et al. Aug 2002 A1
20020106114 Yan et al. Aug 2002 A1
20020114535 Luo Aug 2002 A1
20020118287 Grosvenor et al. Aug 2002 A1
20020136433 Lin Sep 2002 A1
20020150291 Naf et al. Oct 2002 A1
20020150662 Dewis et al. Oct 2002 A1
20020168108 Loui et al. Nov 2002 A1
20020172419 Lin et al. Nov 2002 A1
20020176609 Hsieh et al. Nov 2002 A1
20020176610 Okazaki et al. Nov 2002 A1
20020181801 Needham et al. Dec 2002 A1
20020191861 Cheatle Dec 2002 A1
20030023974 Dagtas et al. Jan 2003 A1
20030025812 Slatter Feb 2003 A1
20030035573 Duta et al. Feb 2003 A1
20030048950 Savakis et al. Mar 2003 A1
20030052991 Stavely et al. Mar 2003 A1
20030059107 Sun et al. Mar 2003 A1
20030059121 Savakis et al. Mar 2003 A1
20030071908 Sannoh et al. Apr 2003 A1
20030084065 Lin et al. May 2003 A1
20030107649 Flickner et al. Jun 2003 A1
20030117501 Shirakawa Jun 2003 A1
20030118216 Goldberg Jun 2003 A1
20030123713 Geng Jul 2003 A1
20030123751 Krishnamurthy et al. Jul 2003 A1
20030142209 Yamazaki et al. Jul 2003 A1
20030151674 Lin Aug 2003 A1
20030169907 Edwards et al. Sep 2003 A1
20030174773 Comaniciu et al. Sep 2003 A1
20030202715 Kinjo Oct 2003 A1
20040022435 Ishida Feb 2004 A1
20040095359 Simon et al. May 2004 A1
20040120391 Lin et al. Jun 2004 A1
20040120399 Kato Jun 2004 A1
20040170397 Ono Sep 2004 A1
20040175021 Porter et al. Sep 2004 A1
20040179719 Chen et al. Sep 2004 A1
20040218832 Luo et al. Nov 2004 A1
20040223649 Zacks et al. Nov 2004 A1
20040227978 Enomoto Nov 2004 A1
20040228505 Sugimoto Nov 2004 A1
20040264744 Zhang et al. Dec 2004 A1
20050007486 Fujii et al. Jan 2005 A1
20050013479 Xiao et al. Jan 2005 A1
20050036044 Funakura Feb 2005 A1
20050041121 Steinberg et al. Feb 2005 A1
20050068446 Steinberg et al. Mar 2005 A1
20050068452 Steinberg et al. Mar 2005 A1
20050069208 Morisada Mar 2005 A1
20050089218 Chiba Apr 2005 A1
20050104848 Yamaguchi et al. May 2005 A1
20050105780 Ioffe May 2005 A1
20050129278 Rui et al. Jun 2005 A1
20050140801 Prilutsky et al. Jun 2005 A1
20050185054 Edwards et al. Aug 2005 A1
20050195317 Myoga Sep 2005 A1
20050248664 Enge Nov 2005 A1
20050275721 Ishii Dec 2005 A1
20060006077 Mosher et al. Jan 2006 A1
20060008152 Kumar et al. Jan 2006 A1
20060008173 Matsugu et al. Jan 2006 A1
20060018517 Chen et al. Jan 2006 A1
20060029265 Kim et al. Feb 2006 A1
20060039690 Steinberg et al. Feb 2006 A1
20060050933 Adam et al. Mar 2006 A1
20060093238 Steinberg et al. May 2006 A1
20060098875 Sugimoto May 2006 A1
20060098890 Steinberg et al. May 2006 A1
20060120599 Steinberg et al. Jun 2006 A1
20060140455 Costache et al. Jun 2006 A1
20060147192 Zhang et al. Jul 2006 A1
20060177100 Zhu et al. Aug 2006 A1
20060177131 Porikli Aug 2006 A1
20060203106 Lawrence et al. Sep 2006 A1
20060203107 Steinberg et al. Sep 2006 A1
20060203108 Steinberg et al. Sep 2006 A1
20060204034 Steinberg et al. Sep 2006 A1
20060204054 Steinberg et al. Sep 2006 A1
20060204055 Steinberg et al. Sep 2006 A1
20060204056 Steinberg et al. Sep 2006 A1
20060204057 Steinberg Sep 2006 A1
20060204058 Kim et al. Sep 2006 A1
20060204110 Steinberg et al. Sep 2006 A1
20060210264 Saga Sep 2006 A1
20060215924 Steinberg et al. Sep 2006 A1
20060228037 Simon et al. Oct 2006 A1
20060257047 Kameyama et al. Nov 2006 A1
20060268150 Kameyama et al. Nov 2006 A1
20060269270 Yoda et al. Nov 2006 A1
20060280380 Li Dec 2006 A1
20060285754 Steinberg et al. Dec 2006 A1
20060291739 Li et al. Dec 2006 A1
20070018966 Blythe et al. Jan 2007 A1
20070035628 Kanai Feb 2007 A1
20070047768 Gordon et al. Mar 2007 A1
20070070440 Li et al. Mar 2007 A1
20070071347 Li et al. Mar 2007 A1
20070091203 Peker et al. Apr 2007 A1
20070098303 Gallagher et al. May 2007 A1
20070110305 Corcoran et al. May 2007 A1
20070116379 Corcoran et al. May 2007 A1
20070116380 Ciuc et al. May 2007 A1
20070122056 Steinberg et al. May 2007 A1
20070133901 Aiso Jun 2007 A1
20070154095 Cao et al. Jul 2007 A1
20070154096 Cao et al. Jul 2007 A1
20070160307 Steinberg et al. Jul 2007 A1
20070189606 Ciuc et al. Aug 2007 A1
20070189748 Drimbarean et al. Aug 2007 A1
20070189757 Steinberg et al. Aug 2007 A1
20070201724 Steinberg et al. Aug 2007 A1
20070201725 Steinberg et al. Aug 2007 A1
20070201726 Steinberg et al. Aug 2007 A1
20070201853 Petschnigg Aug 2007 A1
20070296833 Corcoran et al. Dec 2007 A1
20080013798 Ionita et al. Jan 2008 A1
20080013799 Steinberg et al. Jan 2008 A1
20080013800 Steinberg et al. Jan 2008 A1
20080019565 Steinberg Jan 2008 A1
20080019669 Girshick et al. Jan 2008 A1
20080037827 Corcoran et al. Feb 2008 A1
20080037838 Ianculescu et al. Feb 2008 A1
20080037839 Corcoran et al. Feb 2008 A1
20080037840 Steinberg et al. Feb 2008 A1
20080043122 Steinberg et al. Feb 2008 A1
20080049970 Ciuc et al. Feb 2008 A1
20080055433 Steinberg et al. Mar 2008 A1
20080075385 David et al. Mar 2008 A1
20080143866 Nakahara Jun 2008 A1
20080144966 Steinberg et al. Jun 2008 A1
20080158407 Funamoto Jul 2008 A1
20080175481 Petrescu et al. Jul 2008 A1
20080205712 Ionita et al. Aug 2008 A1
20080219517 Blonk et al. Sep 2008 A1
20080240555 Nanu et al. Oct 2008 A1
20080267461 Ianculescu et al. Oct 2008 A1
20080316327 Steinberg et al. Dec 2008 A1
20080316328 Steinberg et al. Dec 2008 A1
20080317339 Steinberg et al. Dec 2008 A1
20080317378 Steinberg et al. Dec 2008 A1
20080317379 Steinberg et al. Dec 2008 A1
20090002514 Steinberg et al. Jan 2009 A1
20090003652 Steinberg et al. Jan 2009 A1
20090003708 Steinberg et al. Jan 2009 A1
20090052749 Steinberg et al. Feb 2009 A1
20090052750 Steinberg et al. Feb 2009 A1
20090087030 Steinberg et al. Apr 2009 A1
20090087042 Steinberg et al. Apr 2009 A1
20090116698 Zhang et al. May 2009 A1
20090128644 Camp et al. May 2009 A1
20090175609 Tan Jul 2009 A1
20100060727 Steinberg et al. Mar 2010 A1
20110069888 Lim et al. Mar 2011 A1
20110122297 Steinberg et al. May 2011 A1
20110221936 Steinberg et al. Sep 2011 A1
20120069198 Steinberg et al. Mar 2012 A1
20120069222 Steinberg et al. Mar 2012 A1
Foreign Referenced Citations (27)
Number Date Country
1128316 Aug 2001 EP
1441497 Jul 2004 EP
1453002 Sep 2004 EP
1626569 Feb 2006 EP
1887511 Feb 2008 EP
2370438 Jun 2002 GB
5260360 Oct 1993 JP
2003-030647 Jan 2003 JP
2005-164475 Jun 2005 JP
2006-005662 Jan 2006 JP
2006-254358 Sep 2006 JP
WO 0076398 Dec 2000 WO
WO-02052835 Jul 2002 WO
WO-2007095477 Aug 2007 WO
WO-2007095477 Aug 2007 WO
WO-2007095483 Aug 2007 WO
WO-2007095553 Aug 2007 WO
WO-2007095553 Aug 2007 WO
WO 2007128117 Nov 2007 WO
WO 2007142621 Dec 2007 WO
WO-2007142621 Dec 2007 WO
WO-2008015586 Feb 2008 WO
WO-2008015586 Feb 2008 WO
WO-2008018887 Feb 2008 WO
WO-2008023280 Feb 2008 WO
WO-2008104549 Sep 2008 WO
WO 2008157792 Dec 2008 WO
Non-Patent Literature Citations (119)
Entry
Final Office Action mailed Mar. 23, 2010, for U.S. Appl. No. 11/688,236, filed Mar. 19, 2007.
Final Office Action mailed Nov. 18, 2009, for U.S. Appl. No. 11/554,539, filed Oct. 30, 2006.
Machin, et al., “Real Time Facial Motion Analysis for Virtual Teleconferencing,” IEEE, 1996, pp. 340-344.
Ming, et al., “Human Face Orientation Estimation Using Symmetry and Feature Points Analysis,” IEEE, 2000, pp. 1419-1422.
Non-Final Office Action mailed Apr. 2, 2010, for U.S. Appl. No. 10/608,784, filed Jun. 26, 2003.
Non-Final Office Action mailed Apr. 30, 2010, for U.S. Appl. No. 11/765,899, filed Jun. 20, 2007.
Non-Final Office Action mailed Aug. 19, 2009, for U.S. Appl. No. 11/773,815, filed Jul. 5, 2007.
Non-Final Office Action mailed Aug. 20, 2009, for U.S. Appl. No. 11/773,855, filed Jul. 5, 2007.
Non-Final Office Action mailed Jan. 20, 2010, for U.S. Appl. No. 12/262,024, filed Oct. 30, 2008.
Non-Final Office Action mailed Jun. 14, 2010, for U.S. Appl. No. 11/624,683, filed Jan. 18, 2007.
Non-Final Office Action mailed Jun. 16, 2010, for U.S. Appl. No. 12/482,305, filed Jun. 10, 2009.
Non-Final Office Action mailed Jun. 22, 2010, for U.S. Appl. No. 12/055,958, filed Mar. 26, 2008.
Non-Final Office Action mailed Jun. 23, 2010, for U.S. Appl. No. 11/941,156, filed Nov. 18, 2007.
Non-Final Office Action mailed May 12, 2010, for U.S. Appl. No. 11/554,539, filed Oct. 30, 2007.
Non-Final Office Action mailed Sep. 8, 2009, for U.S. Appl. No. 11/688,236, filed Mar. 19, 2007.
Notice of Allowance mailed Sep. 28, 2009, for U.S. Appl. No. 12/262,037, filed Oct. 30, 2008.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2009/005461, dated Apr. 20, 2010, 12 pages.
Yao, Christina: “Image Cosmetics: An Automatic Skin Exfoliation Framework on Static Images”, UCSB Four Eyes Lab Imaging, Interaction, and Innovative Interfaces Publications Thesis, Master of Science in Media Arts and Technology, Dec. 2005, pp. 1-83. Retrieved from the Internet: URL: http://ilab.cs.ucsb.edu/publications/YaoMS.pdf
Aoki, Hiroyuki et al., “An Image Storage System Using Complex-Valued Associative Memories. Abstract printed from http://csdl.computer.org/comp/proceedings/icpr/2000/0750/02/07502626abs.htm”, International Conference on Pattern Recognition (ICPR '00), 2000, vol. 2.
Batur et al., “Adaptive Active Appearance Models”, IEEE Transactions on Image Processing, 2005. pp. 1707-1721, vol. 14—Issue 11.
Beraldin, J.A. et al., “Object Model Creation from Multiple Range Images: Acquisition, Calibration, Model Building and Verification, Abstract printed from http://csdl.computer.org/comp/proccedings/nrc/1997/7943/00/79430326abs.htm”, International Conference on Recent Advances in 3-D Digital Imaging and Modeling, 1997.
Beymer, David, “Pose-Invariant face Recognition Using Real and Virtual Views, A.I. Technical Report No. 1574”, Mass. Inst. of Technology Artificial Intelligence Laboratory, 1996, pp. 1-176.
Bradski Gary et al., “Learning-Based Computer Vision with Intel's Open Source Computer Vision Library”, Intel Technology, 2005, pp. 119-130, vol. 9—Issue 2.
Buenaposada, J., “Efficiently estimating 1-3,16 facial expression and illumination in appearance—based tracking, Retrieved from the Internet: http://www.bmva.ac.uk/binvc/2006/ [retrieved on Sep. 1, 2008]”, Proc. British machine vision conference. 2006.
Chang, T., “Texture Analysis and Classification with Tree-Structured Wavelet Transform”, IEEE Transactions on Image Processing, 1993, pp. 429-441, vol. 2—Issue 4.
Cootes T. et al., “Modeling Facial Shape and Appearance, S. Li and K. K. Jain (Eds.): “Handbook of face recognition”, XP002494037”, 2005, Chapter 3, Springer.
Cootes, T.F. et al., “A comparative evaluation of active appearance model algorithms”, Proc. 9th British Machine Vison Conference. British Machine Vision Association, 1998, pp. 680-689.
Cootes, T.F. et al., “On representing edge structure for model matching”, Proc. IEEE Computer Vision and Pattern Recognition, 2001, pp. 1114-1119.
Corcoran, P. et al., “Automatic Indexing of Consumer Image Collections Using Person Recognition Techniques”, Digest of Technical Papers. International Conference on Consumer Electronics, 2005, pp. 127-128.
Costache, G. et al., “In-Camera Person-indexing of Digital Images”, Digest of Technical Papers. International Conference on Consumer Electronics, 2006, pp. 339-340.
Crowley, J. et al., “Multi-modal tracking of faces for video communication, http://citeseer.ist.psu.edu/crowley97multimodal.html”, In Comp. Vision and Pattern Rec., 1997.
Dalton, John, “Digital Cameras and Electronic Color Image Acquisition, Abstract printed from http://csdl.computer.org/comp/proceedings/compcon/1996/7414/00/74140431abs.htm”, COMPCON Spring '96—41st IEEE International Conference, 1996.
Demirkir, C. et al., “Face detection using boosted tree classifier stages”, Proceedings of the IEEE 12th Signal Processing and Communications Applications Conference, 2004, pp. 575-578.
Donner, Rene et al., “Fast Active Appearance Model Search Using Canonical Correlation Analysis”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, pp. 1690-1694, vol. 28—Issue 10.
Drimbarean, A.F. et al., “Image Processing Techniques to Detect and Filter Objectionable Images based on Skin Tone and Shape Recognition”, International Conference on Consumer Electronics, 2001, pp. 278-279.
Edwards, G.J. et al., “Advances in active appearance models”, International Conference on Computer Vision (ICCV'99), 1999, pp. 137-142.
Edwards, G.J. et al., “Learning to identify and track faces in image sequences, Automatic Face and Gesture Recognition”, IEEE Comput. Soc, 1998, pp. 260-265.
Feraud, R. et al., “A Fast and Accurate Face Detector Based on Neural Networks”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, pp. 42-53, vol. 23—Issue I.
Fernandez, Anna T. et al., “Synthetic Elevation Beamforming and Image Acquisition Capabilities Using an 8x 128 1.75D Array, Abstract Printed from http://www.ieee-uffc.org/archive/uffc/trans/toc/abs/03/t0310040.htm”, The Technical Institute of Electrical and Electronics Engineers.
Froba, B. et al., “Face detection with the modified census transform”, Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004, pp. 91-96.
Froba, B. et al., “Real time face detection, Kauai, Hawai Retrieved from the Internet:URL:http://www.embassi.de/publi/veroeffent/Froeba.pdf [retrieved on Oct. 23, 2007]”, Dept. of Applied Electronics, Proceedings of lasted “Signal and Image Processing”, 2002, pp. 1-6.
Garnaoui, H.H. et al., “Visual Masking and the Design of Magnetic Resonance Image Acquisition, Abstract printed from http://csdl.computer.org/comp/proceedings/icip/1995/7310/01/73100625abs.htm”, International Conference on Image Processing, 1995, vol. 1.
Gaubatz, Matthew et al., “Automatic Red-Eye Detection and Correction”, IEEE ICIP, Proceedings 2002 Intl Conf on Image Processing, 2002, pp. I-804-I-807, vol. 2—Issue 3.
Gerbrands, J., “On the Relationships Between SVD, KLT, and PCA”, Pattern Recognition, 1981, pp. 375-381, vol. 14, Nos. 1-6.
Goodall, C., “Procrustes Methods in the Statistical Analysis of Shape, Stable URL: http://www.jstor.org/stable/2345744”, Journal of the Royal Statistical Society. Series B (Methodological), 1991, pp. 285-339, vol. 53—Issue 2, Blackwell Publishing for the Royal Statistical Society.
Hou, Xinwen et al., “Direct Appearance Models”, IEEE, 2001, pp. I-828-I-833.
Hu, Wen-Chen et al., “A Line String Image Representation for Image Storage and Retrieval, Abstract printed from http://csdl.computer.oro/comp/proceedings/icmcs/1997/7819/00/78190434abs.htm”, International Conference on Multimedia Computing and systems, 1997.
Huang et al., “Image Indexing Using Color Correlograms”, Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97), 1997, pp. 762.
Huang, J. et al., “Detection of human faces using decision trees, http://doi.ieeecomputersociety.org/10.1109/Recognition”, 2nd International Conference on Automatic Face and Gesture Recognition (FG '96), IEEE Xplore, 2001, p. 248.
Huber, Reinhold et al., “Adaptive Aperture Control for Image Acquisition, Abstract printed from http://csdl.computer.org/comp/proceedings/wacv/2002/1858/00/18580320abs.htm. cited by other”, Sixth IEEE Workshop on Applications of Computer Vision, 2002.
Jebara, Tony S. et al., “3D Pose Estimation and Normalization for Face Recognition, A Thesis submitted to the Faculty of Graduate Studies and Research in Partial fulfillment of the requirements of the degree of Bachelor of Engineering”, Department of Electrical Engineering, 1996, pp. 1-121, McGill University.
Jones, M et al., “Fast multi-view face detection, http://www.merl.com/papers/docs/TR2003-96.pdf”, Mitsubishi Electric Research Lab, 2003, 10 pgs.
Kang, Sing Bing et al., “A Multibaseline Stereo System with Active Illumination and Real-Time Image Acquisition, Abstract printed from http://csdl.computer.org/comp/proceedings/iccv/1995/7042/00/70420088abs.htm”, Fifth International Conference on Computer Vision, 1995.
Kita, Nobuyuki et al., “Archiving Technology for Plant Inspection Images Captured by Mobile Active Cameras—4D Visible Memory, Abstract printed from http://csdl.computer.org/comp/proceedings/3dpvt/2002/1521/00/15210208abs.htm”, 1st Intl Symposium on 3D Data Processing Visualization and Transmission (3DPVT '02), 2002.
Kouzani, A.Z., “Illumination-Effects Compensation in Facial Images Systems”, Man and Cybernetics, IEEE SMC '99 Conference Proceedings, 1999, pp. VI-840-VI-844, vol. 6.
Kozubek, Michal et al., “Automated Multi-view 3D Image Acquisition in Human Genome Research, Abstract printed from http://csdl.computer.org/comp/proccedings/3pvt/2002/1521/00/15210091abs.htm”, 1st International Symposium on 3D Data Processing Visualization and Transmission (3DPVT '02), 2002.
Krishnan, Arun, “Panoramic Image Acquisition, Abstract printed from http://csdl.computer.org/comp/proceedings/cypr/1996/7258/00/72580379abs.htm”, Conference on Computer Vision and Pattern Recognition (CVPR '96), 1996.
Lai, J.H. et al., “Face recognition using holistic Fourier in variant features, http://digitalimaging.inf.brad.ac.uk/publication/pr34-1.pdf.”, Pattern Recognition, 2001, pp. 95-109, vol. 34.
Lei et al., “A CBIR Method Based on Color-Spatial Feature”, IEEE Region 10th Ann. Int. Conf., 1999.
Lienhart, R. et al., “A Detector Tree of Boosted Classifiers for Real-Time Object Detection and Tracking”, Proceedings of the 2003 International Conference on Multimedia and Expo, 2003, pp. 277-280, vol. 1, IEEE Computer Society.
Matkovic, Kresimir et al., “The 3D Wunderkammer an Indexing by Placing Approach to the Image Storage and Retrieval, Abstract printed from http://csdl.computer.org/comp/proceedings/tocg/2003/1942/00/19420034abs.htm”, Theory and Practice of Computer Graphics, 2003, University of Birmingham.
Matthews, I. et al., “Active appearance models revisited. Retrieved from http://www.d.cmu.edu/pub_files/pub4/matthews_iain_2004_2/matthews_iain_2004_2.pdf”, International Journal of Computer Vision, 2004, pp. 135-164, vol. 60—Issue 2.
Mekuz, N. et al., “Adaptive Step Size Window Matching for Detection”, Proceedings of the 18th International Conference on Pattern Recognition, 2006, pp. 259-262, vol. 2.
Mitra, S. et al., “Gaussian Mixture Models Based on the Frequency Spectra for Human Identification and Illumination Classification”, Proceedings of the Fourth IEEE Workshop on Automatic Identification Advanced Technologies, 2005, pp. 245-250.
Nordstrom, M.M. et al., “The IMM face database an annotated dataset of 240 face images. http://www2.imm.dtu.dk/pubdb/p.php?3160”, Informatics and Mathematical Modelling, 2004.
Ohta, Y-I et al., “Color Information for Region Segmentation, XP008026458”, Computer Graphics and Image Processing, 1980, pp. 222-241, vol. 13—Issue 3, Academic Press.
Park, Daechul et al., “Lenticular Stereoscopic Imaging and Displaying Techniques with no Special Glasses, Abstract printed from http://csdl.computer.org/comp/proceedings/icip/1995/7310/03/73103137abs.htm”, International Conference on Image Processing, 1995, vol. 3.
PCT International Search Report and Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/US2006/021393, filed Jun. 2, 2006, paper dated Mar. 29, 2007, 12 pgs.
PCT International Search Report and Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/US2006/060392, filed Oct. 31, 2006, paper dated Sep. 19, 2008, 9 pgs.
PCT Invitation to Pay Additional Fees and, Where Applicable Protest Fee, for PCT Application No. PCT/EP2008/001578, paper dated Jul. 8, 2008, 5 Pages.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2007/006540, Nov. 8, 2007, 11 pgs.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2008/001510, dated May 29, 2008, 13 pages.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2008/052329, dated Sep. 15, 2008, 12 pages.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/IB2007/003724, dated Aug. 28, 2008, 9 pages.
Romdhani, S. et al., “Face Identification by Fitting a 3D Morphable Model using linear Shape and Texture Error Functions, XP003018283”, European Conference on Computer Vision, 2002, pp. 1-15.
Rowley, Henry A. et al., “Neural network-based face detection, ISSN: 0162-8828, DOI: 10.1109/34.655647, Posted online: Aug. 6, 2002. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=655647&isnumber=14286”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, pp. 23-38, p. 92, vol. 20—Issue 1.
Ryu, Hanjin et al., “Coarse-to-Fine Classification for Image-Based Face Detection”, Image and video retrieval lecture notes in Computer science, 2006, pp. 291-299, vol. 4071, Springer-Verlag.
Shand, M., “Flexible Image Acquisition Using Rcconfigurable Hardware, Abstract printed from http://csdl.computer.org/comp/proceedings/fccm/1995/7086/00/70860125abs.htm”, IEEE Symposium of FPGA's for Custom Computing Machines (FCCM '95), 1995.
Sharma, G. et al., “Digital color imaging, [Online]. Available: citeseer.ist.psu.edu/sharma97digital.html”, IEEE Transactions on Image Processing, 1997, pp. 901-932, vol. 6—Issue 7.
Shock, D. et al., “Comparison of Rural Remote Site Production of Digital Images Employing a film Digitizer or a Computed Radiography (CR) System, Abstract printed from http://csdl/computer.org/comp/proceedings/imac/1995/7560/00/75600071abs.htm”, 4th International Conference on Image Management and Communication (IMAC '95), 1995.
Sim, T. et al., “The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces Robotics Institute, Tech. Report, CMU-RI-TR-01-02”, 2001, 18 pgs, Carnegie Mellon University.
Sim, T. et al., “The CMU Pose, Illumination, and Expression (PIE) database, Automatic Face and Gesture Recognition”, Fifth IEEE Intl. Conf, IEEE Piscataway, NJ, USA, 2002, 6 pages.
Skocaj, Danijel, “Range Image Acquisition of Objects with Non-Uniform Albedo Using Structured Light Range Sensor, Abstract printed from http://csdl.computer.org/comp/proceedings/icpr/2000/0750/01/07501778abs.htm”, Intl Conf on Pattern Recognition (ICPR '00), 2000, vol. 1.
Smeraldi, F. et al., “Facial feature detection by saccadic exploration of the Gabor decomposition, XP010586874”, Image Processing, ICIP 98. Proceedings International Conference on Chicago, IL, USA, IEEE Comput. Soc, 1998, pp. 163-167, vol. 3.
Soriano, M. et al., “Making Saturated Facial Images Useful Again, XP002325961, ISSN: 0277-786X”, Proceedings of the SPIE, 1999, pp. 113-121, vol. 3826.
Stegmann, M.B. et al., “A flexible appearance modelling environment, Available: http://www2.imm.dtu.dk/pubdb/p.php?1918”, IEEE Transactions on Medical Imaging, 2003, pp. 1319-1331, vol. 22—Issue 10.
Stegmann, M.B. et al., “Multi-band modelling of appearance, XP009104697”, Image and Vision Computing, 2003, pp. 61-67, vol. 21—Issue 1.
Stricker et al., “Similarity of color images”, SPIE Proc, 1995, pp. 1-12, vol. 2420.
Sublett, J.W. et al., “Design and Implementation of a Digital Teleultrasound System for Real-Time Remote Diagnosis, Abstract printed from http://csdl.computer.org/comp/proceedings/cbms/1995/7117/00/71170292abs.htm”, Eight Annual IEEE Symposium on Computer-Based Medical Systems (CBMS '95), 1995.
Tang, Yuan Y. et al., “Information Acquisition and Storage of Forms in Document Processing, Abstract printed from http://csdl.computer.org/comp/proceedings/icdar/1997/7898/00/78980170abs.htm”, 4th International Conference Document Analysis and Recognition, 1997, vol. I and II.
Tjahyadi et al., “Application of the DCT Energy Histogram for Face Recognition”, Proceedings of the 2nd Intl Conference on Information Technology for Application, 2004. pp. 305-310.
Tkalcic, M. et al., “Colour spaces perceptual, historical and applicational background. ISBN: 0-7803-7763-X”, IEEE, EUROCON, 2003, pp. 304-308, vol. 1.
Turk, Matthew et al., “Eigenfaces for Recognition”, Journal of Cognitive Neuroscience, 1991, 17 pgs, vol. 3—Issue 1.
Twins Crack Face Recognition Puzzle, Internet article http://www.cnn.com/2003/TECH/ptech/03/10/israel.twins.reut/ index.html, printed Mar. 10, 2003, 3 pages.
U.S. Appl. No. 11/554,539, filed Oct. 30, 2006, entitled Digital Image Processing Using Face Detection and Skin Tone Information.
Viola, P. et al., “Rapid Object Detection using a Boosted Cascade of Simple Features”, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001, pp. I-511-I-518, vol. 1.
Viola, P. et al., “Robust Real-Time Face Detection”, International Journal of Computer Vision, 2004, pp. 137-154, vol. 57—Issue 2, Kluwer Academic Publishers.
Vuylsteke, P. et al., “Range Image Acquisition with a Single Binary-Encoded Light Pattern, abstract printed from http://csdl.computer.org/comp/trans/tp/1990/02/i0148abs.htm”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 1 page.
Wan, S.J. et al., “Variance-based color image quantization for frame buffer display”, S. K. M. Wong Color Research and Application, 1990, pp. 52-58, vol. 15—Issue 1.
Xin He et al., “Real-Time Human Face Detection in Color Image”, International Conference on Machine Learning and Cybernetics, 2003, pp. 2915-2920, vol. 5.
Yang, Ming-Hsuan et al., “Detecting Faces in Images: A Survey, ISSN: 0162-8828, http://portal.acm.org/citation.cfm?id=505621&coll=GUIDE&dl=GUIDE&CFID=6809268&CFTOKEN=82843223”, IEEE Transactions on Pattern Analysis and Machine Intelligence archive, 2002, pp. 34-58, vol. 24—Issue 1, IEEE Computer Society.
Zhang, Jun et al., “Face Recognition: Eigenface, Elastic Matching, and Neural Nets”, Proceedings of the IEEE, 1997, pp. 1423-1435, vol. 85—Issue 9.
Zhao, W. et al., “Face recognition: A literature survey, ISSN: 0360-0300, http://portal.acm.org/citation.cfm?id=954342&coll=GUIDE&dl=GUIDE&CFID=6809268&CFTOKEN=82843223”, ACM Computing Surveys (CSUR) archive, 2003, pp. 399-458, vol. 35—Issue 4, ACM Press.
Zhu Qiang et al., “Fast Human Detection Using a Cascade of Histograms of Oriented Gradients”, Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, pp. 1491-1498, IEEE Computer Society.
PCT International Preliminary Report on Patentability Chapter I (IB/373), for PCT Application No. PCT/US2008/067746, report dated Dec. 22, 2009, 6 pages.
PCT Written Opinion of the International Search Authority, for PCT Application No. PCT/US2008/067746, report dated Sep. 10, 2008, 5 pages.
Swain et al. (1995) “Defocus-based image segmentation.” Proc. 1995 Int'l Conf. on Acoustics, Speech, and Signal Processing, vol. 4 pp. 2403-2406.
Final Rejection, dated Mar. 28, 2012, for U.S. Appl. No. 12/140,827, filed Jun. 17, 2008.
Final Rejection, dated Nov. 21, 2011, for U.S. Appl. No. 12/140,125, filed Jun. 16, 2008.
Final Rejection, dated Nov. 9, 2011, for U.S. Appl. No. 12/140,532, filed Jun. 17, 2008.
Non-final Rejection, dated Aug. 4, 2011, for U.S. Appl. No. 12/140,827, filed Jun. 17, 2008.
Non-final Rejection, dated Dec. 29, 2011, for U.S. Appl. No. 12/140,950, filed Jun. 17, 2008.
Non-final Rejection, dated Feb. 24, 2012, for U.S. Appl. No. 12/141,134, filed Jun. 19, 2008.
Non-final Rejection, dated Jul. 5, 2011, for U.S. Appl. No. 12/140,125, filed Jun. 16, 2008.
Non-final Rejection, dated Mar. 31, 2011, for U.S. Appl. No. 12/140,532, filed Jun. 17, 2008.
Non-final Rejection, dated May 15, 2009, for U.S. Appl. No. 12/141,042, filed Jun. 17, 2008.
Notice of Allowance, dated Apr. 19, 2011, for U.S. Appl. No. 12/947,731, filed Nov. 16, 2010.
Notice of Allowance, dated Jan. 9, 2012, for U.S. Appl. No. 13/113,648, filed May 23, 2011.
Notice of Allowance, dated Sep. 23, 2009, for U.S. Appl. No. 12/141,042, filed Jun. 17, 2008.
Related Publications (1)
Number Date Country
20090196466 A1 Aug 2009 US