Digital cameras today often include an auto focus mechanism. Two kinds of conventional auto focus mechanisms are contrast detect auto focus and phase detect auto focus.
With contrast detect auto focus, the camera lens is initially positioned at a closest focus point. The lens is then shifted incrementally, and image sharpness is evaluated at each step. When a peak in sharpness is reached, the lens shifting is stopped. Contrast detect auto focus is used in conventional digital still cameras (DSCs), camcorders, camera phones, webcams, and surveillance cameras. These mechanisms are very precise, being based on pixel-level measurements and fine scanning, and they can in principle focus anywhere inside a frame, although in practice focus is typically evaluated only around the center of the frame. However, contrast detect auto focus mechanisms are slow, because they involve scanning a focus range. Also, they do not allow tracking of an acquired subject: further scanning, known as focus hunting, is involved to determine whether the subject has moved into front or back focus. Contrast detect auto focus mechanisms are typically inexpensive and rugged.
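For illustration only, the following Python sketch shows the kind of hill-climbing loop a contrast detect mechanism performs; the capture_frame and set_focus_position callables and the gradient-based sharpness measure are assumptions for this sketch, not details taken from the present disclosure.

```python
import numpy as np

def sharpness_score(image: np.ndarray) -> float:
    """Mean squared gradient as a simple pixel-level contrast/sharpness measure."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx * gx + gy * gy))

def contrast_detect_scan(capture_frame, set_focus_position, num_steps=50):
    """Scan from the closest focus point outwards and stop after the sharpness peak.

    capture_frame() returns a grayscale frame; set_focus_position(i) moves the lens.
    """
    best_pos, best_score, prev_score = 0, -1.0, -1.0
    for pos in range(num_steps):
        set_focus_position(pos)
        score = sharpness_score(capture_frame())
        if score > best_score:
            best_pos, best_score = pos, score
        elif score < prev_score:
            break                      # sharpness has started to fall: peak passed
        prev_score = score
    set_focus_position(best_pos)       # park the lens at the sharpest position found
    return best_pos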
Phase detect auto focus generally involves special optoelectronics including a secondary mirror, separator lenses and a focus sensor. The separator lenses direct light coming from opposite sides of the lens towards the auto focus sensor, and a phase difference between the two resulting images is measured. The lens is then shifted to the position corresponding to that phase difference. Phase detect auto focus is used in conventional single lens reflex cameras (SLRs). These mechanisms are generally less precise than contrast detect auto focus mechanisms, because the phase difference cannot always be assessed very accurately, and they can only acquire focus at fixed points inside a frame, which are typically indicated manually by the camera user. They are, however, typically fast, as the relative position of the subject can be detected from a single measurement. They allow tracking, because they can determine whether the subject has moved into front or back focus, but only by hopping from one focus point to another. Phase detect auto focus mechanisms are typically expensive and fragile.
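Again for illustration only, the sketch below estimates the offset between two sampled 1-D intensity profiles formed by the separator lenses using cross-correlation, and converts it to a lens displacement; the profiles and the calibration constant shift_per_sample are assumptions of this sketch.

```python
import numpy as np

def phase_difference(profile_a: np.ndarray, profile_b: np.ndarray) -> int:
    """Offset (in samples) that best aligns profile_b with profile_a; zero means in focus."""
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

def lens_shift_from_phase(offset_samples: int, shift_per_sample: float = 0.01) -> float:
    """Convert the measured offset to a lens displacement (mm); the sign indicates
    front focus versus back focus. shift_per_sample is a hypothetical calibration."""
    return offset_samples * shift_per_sample
```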
It is desired to have an improved auto focus mechanism that does not have the significant drawbacks of either the contrast detect or the phase detect auto focus mechanism. United States published patent application no. 20100208091, incorporated by reference, describes a camera that detects a face in an image captured by the camera and calculates the size of the face. It selects, from amongst a number of previously stored face sizes, the one that is closest to the calculated face size. It retrieves a previously stored lens focus position that is associated with the selected, previously stored face size. It then signals a moveable lens system of the digital camera to move to a final focus position given by the retrieved, previously stored lens focus position. A problem with the technique described in the 20100208091 published US patent application is that it will have a significant rate of non-detection of faces due to their blurry, out of focus state. Unless a further enhancement is provided, this will result in an unsatisfactorily slow image capture process.
FIG. 1a is a plot of sharpness versus focus position of a lens in a digital image acquisition device in accordance with certain embodiments.
FIG. 1b illustrates a digital image including a face that is out of focus and yet still has a face detection box around a detected face region.
FIGS. 3a-3b illustrate a first example of digital images including sharp and out of focus faces, respectively, that are each detected in accordance with certain embodiments.
FIGS. 4a-4b illustrate a second example of digital images including sharp and out of focus faces, respectively, that are each detected in accordance with certain embodiments.
FIGS. 8a-8d illustrate digital images including relatively sharp faces (8a and 8c) and relatively out of focus faces (8b and 8d).
An autofocus method is provided for a digital image acquisition device based on face detection. The method involves use of a lens, image sensor and processor of the device. A digital image is acquired of a scene that includes one or more out of focus faces and/or partial faces. The method includes detecting one or more of the out of focus faces and/or partial faces within the digital image by applying one or more sets of classifiers trained on faces that are out of focus. One or more sizes of the one or more respective out of focus faces and/or partial faces is/are determined within the digital image. One or more respective depths is/are determined to the one or more out of focus faces and/or partial faces based on the one or more sizes of the one or more faces and/or partial faces within the digital image. One or more respective focus positions of the lens is/are adjusted to focus approximately at the determined one or more respective depths. One or more further images is/are acquired of scenes that include at least one of the one or more faces and/or partial faces with the lens focused at the one or more respectively adjusted focus positions. Upon adjusting the one or more respective focus positions, the method may further include performing a fine scan, and fine adjusting the one or more respective focus positions based on the fine scan. The scene may include at least one out of focus face and/or partial face not detected by applying the one or more sets of face classifiers, and wherein the method further comprises applying a contrast detect scan or a phase detect scan or both for acquiring said at least one out of focus face or partial face or both not detected by applying said one or more sets of classifiers trained on faces that are out of focus.
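A minimal sketch of the summarized flow is given below under stated assumptions: detect_faces_blur_trained, estimate_depth_from_face_size, fine_contrast_scan and the lens object are hypothetical placeholders standing in for the components described above, not the claimed implementation.

```python
def autofocus_on_faces(frame, lens, detect_faces_blur_trained,
                       estimate_depth_from_face_size, fine_contrast_scan=None):
    """Detect out-of-focus faces, move the lens to the estimated depth, optionally fine-tune."""
    faces = detect_faces_blur_trained(frame)          # classifiers trained on blurry faces
    focus_positions = []
    for face in faces:
        depth_m = estimate_depth_from_face_size(face.width_px)   # size -> approximate depth
        lens.focus_at(depth_m)                        # coarse move, no focus-range scanning
        if fine_contrast_scan is not None:
            fine_contrast_scan(lens, region=face)     # small contrast scan around depth_m
        focus_positions.append(lens.position)
    return focus_positions
```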
The one or more partial faces may include an eye region.
The method may include adjusting at least one of the one or more respective focus positions when at least one of the one or more faces and/or partial faces changes size at least a threshold amount. The method may also include tracking the at least one of the faces and/or partial faces, and determining the change in size of the at least one of the one or more faces and/or partial faces based on the tracking.
The method may further include increasing depth of field by stopping down the aperture of the digital image acquisition device.
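The depth-of-field benefit of stopping down can be illustrated with standard thin-lens approximations; the sketch below is not taken from the disclosure, and the numerical example values are assumptions.

```python
def depth_of_field(f: float, N: float, c: float, s: float) -> float:
    """Depth of field (metres) for focal length f, f-number N, circle of confusion c,
    and subject distance s, all in metres, using the usual thin-lens approximations."""
    H = f * f / (N * c) + f                       # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return far - near

# Example (assumed values): a 50 mm lens focused at 2 m roughly doubles its
# depth of field when stopped down from f/2.8 (~0.26 m) to f/5.6 (~0.53 m).
print(depth_of_field(0.05, 2.8, 3e-5, 2.0))
print(depth_of_field(0.05, 5.6, 3e-5, 2.0))
```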
The one or more faces and/or partial faces may include multiple faces and/or partial faces located respectively at multiple different depths approximately determined based on their different determined sizes.
The determining of the one or more depths may include assigning at least one average depth corresponding to at least one of the one or more determined sizes.
The determining of the one or more depths may include recognizing a detected face or partial face or both as belonging to a specific individual, calling a known face or partial face spatial parameter from a memory that corresponds to the specific face or partial face or both, and determining a depth corresponding to a determined size and the known face or partial face spatial parameter.
The adjusting of the one or more respective focus positions may utilize a MEMS (microelectromechanical system) component.
One or more processor readable media are also provided that have code embodied therein for programming a processor to perform any of the methods described herein.
A digital image acquisition device is also provided including a lens, an image sensor, a processor and a memory that has code embedded therein for programming the processor to perform any of the methods described herein.
Normal contrast detect autofocus is slow and hunts when a subject moves out of focus. Falling back to contrast detect auto focus whenever a blurry face is not detected could too often slow the process provided by the 20100208091 publication. A method is therefore provided that uses face detection to speed up focusing and to reduce focus hunting in continuous autofocus. Highly reliable face detection is provided, even when the face is not in focus, by providing one or more sets of classifiers trained on out of focus faces and/or partial faces. For example, three sets of face classifiers may be provided: one trained on sharp faces, another trained on somewhat blurry faces, and a third trained on faces that are even more blurry and out of focus. A different number of classifier sets may be trained and used. This advantageous technique will have far fewer non-detection events than the technique of the 20100208091 publication, leading to a faster and more reliable image capture process. Face detection, particularly by training face classifiers on faces that may or may not be evenly illuminated, front facing and sharply focused, has been widely researched and developed by the assignee of the present application, e.g., as described in U.S. Pat. Nos. 7,362,368, 7,616,233, 7,315,630, 7,269,292, 7,471,846, 7,574,016, 7,440,593, 7,317,815, 7,551,755, 7,558,408, 7,587,068, 7,555,148, 7,564,994, 7,565,030, 7,715,597, 7,606,417, 7,692,696, 7,680,342, 7,792,335, 7,551,754, 7,315,631, 7,469,071, 7,403,643, 7,460,695, 7,630,527, 7,469,055, 7,460,694, 7,515,740, 7,466,866, 7,693,311, 7,702,136, 7,620,218, 7,634,109, 7,684,630, 7,796,816 and 7,796,822, and U.S. published patent applications nos. US 2006-0204034, US 2007-0201725, US 2007-0110305, US 2009-0273685, US 2008-0175481, US 2007-0160307, US 2008-0292193, US 2007-0269108, US 2008-0013798, US 2008-0013799, US 2009-0080713, US 2009-0196466, US 2008-0143854, US 2008-0220750, US 2008-0219517, US 2008-0205712, US 2009-0185753, US 2008-0266419, US 2009-0263022, US 2009-0244296, US 2009-0003708, US 2008-0316328, US 2008-0267461, US 2010-0054549, US 2010-0054533, US 2009-0179998, US 2009-0052750, US 2009-0052749, US 2009-0087042, US 2009-0040342, US 2009-0002514, US 2009-0003661, US 2009-0208056, US 2009-0190803, US 2009-0245693, US 2009-0303342, US 2009-0238419, US 2009-0238410, US 2010-0014721, US 2010-0066822, US 2010-0039525, US 2010-0165150, US 2010-0060727, US 2010-0141787, US 2010-0141786, US 2010-0220899, US 2010-0092039, US 2010-0188530, US 2010-0188525, US 2010-0182458, US 2010-0165140 and US 2010-0202707, which are all incorporated by reference.
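As an illustration of using several classifier sets, one per blur level, the sketch below runs hypothetical cascade files in turn until one of them detects a face. OpenCV's CascadeClassifier is used only as a convenient stand-in for the trained detectors described above, and the three file names are assumptions.

```python
import cv2

# Hypothetical cascade files, one per blur level (these names are assumptions).
CASCADE_FILES = [
    "cascade_sharp_faces.xml",        # trained on in-focus faces
    "cascade_mild_blur_faces.xml",    # trained on somewhat blurry faces
    "cascade_strong_blur_faces.xml",  # trained on strongly out-of-focus faces
]
DETECTORS = [cv2.CascadeClassifier(path) for path in CASCADE_FILES]

def detect_faces_any_blur(gray_frame):
    """Return (blur_level_index, face_rectangles) from the first classifier set that fires."""
    for level, detector in enumerate(DETECTORS):
        faces = detector.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            return level, faces
    return None, []
```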
After detection of an out of focus face and/or partial face, the technique relies on face size to determine at what distance the subject is located. That is, when the focus position of the lens is not disposed to provide an optimally sharp image at the value of the sharpness peak illustrated at
FIGS. 3a-3b illustrate a first example of digital images including sharp and out of focus faces, respectively, that are each detected in accordance with certain embodiments. In
FIGS. 4a-4b illustrate a second example of digital images including sharp and out of focus faces, respectively, that are each detected in accordance with certain embodiments. In
Once the distance to the subject is determined by computation, or by estimation in accordance with a look-up table based, for example, on the formula provided at
As mentioned, a highly advantageous feature is provided whereby the face detection process is performed reliably even on faces that are out of focus. This enables advantageous auto focus techniques in accordance with embodiments described herein to detect faces before actually starting to focus on them. Once a blurry, out of focus face is detected, a rough distance to the subject may be calculated. This is possible because human faces do not vary considerably in size. Further precision may be provided by using face recognition, whereby a specific person's face is recognized by comparison with other face data of that person stored in a database, or by manual user indication, or because one or more pictures of the same person have recently been taken, or by combinations of these and other face recognition techniques. Then, the specifically known face size of that person can be used.
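A rough-distance sketch based on the pinhole-camera relation implied above: a face of approximately known physical width that appears a certain number of pixels wide, on a sensor of known width, sits at a distance proportional to the focal length times the physical width divided by the width projected onto the sensor. The average face width, the per-person override and the numeric example are illustrative assumptions.

```python
AVERAGE_FACE_WIDTH_M = 0.16   # assumed average adult face width in metres

def distance_to_face(face_width_px, image_width_px, focal_length_m, sensor_width_m,
                     known_face_width_m=None):
    """Estimate subject distance (metres) from the apparent face width in the image."""
    face_width_m = known_face_width_m or AVERAGE_FACE_WIDTH_M   # recognition can refine this
    face_width_on_sensor = face_width_px / image_width_px * sensor_width_m
    return focal_length_m * face_width_m / face_width_on_sensor

# Example (assumed values): a face spanning 300 of 4000 pixels on a 6.4 mm sensor
# behind a 4.5 mm lens is roughly 1.5 m away.
print(distance_to_face(300, 4000, 0.0045, 0.0064))
```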
The distance to the subject may be calculated by also taking into account the focal length of the lens (and the sensor size, if this is not the 35 mm equivalent). When the distance to the subject is known, the focusing element may be moved directly to the corresponding position without any additional scanning. Only a fine contrast-detect scan is then optionally performed around that distance. The contrast is measured over the face area, or over the eye region of the face if the face is too large and/or only partially detected; this is advantageous in reducing the computational effort of calculating contrast. In video mode, the same may be performed every time focus is to be achieved on a new face. Once focus is achieved on a certain face, the variation in face size is monitored in accordance with certain embodiments. If the changes are not significant, the algorithm measures the contrast over the face rectangle (or eye area or other partial face region), and if this does not drop below a certain value, the focus position is not adjusted. Conversely, if the contrast drops but the face size does not change, a fine refocusing may be done around the current focus distance. If the face size is found to change by more than a certain margin, the new size is compared against the old size to determine whether the subject has moved into front or back focus. Based on this, the focusing element is moved in the appropriate direction (back or forth) until focus is reacquired. Advantageously, focus tracking is provided without hunting. For example, if the face size has increased, it can be determined that the subject has moved into front focus, so the focusing element is moved so that it focuses closer.
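The monitoring logic described in this paragraph might be sketched as follows; the measure/refocus helpers, the lens interface and the threshold values are hypothetical, illustrative assumptions rather than parameters from the disclosure.

```python
SIZE_CHANGE_MARGIN = 0.10     # illustrative: 10% change in face width triggers refocus
CONTRAST_DROP_RATIO = 0.8     # illustrative: refocus if contrast falls below 80% of reference

def track_focus(prev_face_width, face_width, contrast, ref_contrast, lens, fine_refocus):
    """One monitoring step: decide whether, and in which direction, to refocus."""
    size_ratio = face_width / prev_face_width
    if abs(size_ratio - 1.0) < SIZE_CHANGE_MARGIN:
        # Face size roughly constant: refocus only if sharpness on the face rectangle drops.
        if contrast < CONTRAST_DROP_RATIO * ref_contrast:
            fine_refocus(lens)            # small scan around the current focus position
    elif size_ratio > 1.0:
        lens.focus_closer()               # face grew: subject moved into front focus
    else:
        lens.focus_farther()              # face shrank: subject moved into back focus
```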
This method can be generalized to any object of known size, as previously mentioned. For example, face detection can be exchanged for pet detection. Furthermore, the method may be generalized to objects of unknown size. Once focus is acquired on a certain object using a normal contrast-detect and/or phase-detect algorithm, that object can be tracked and monitored with regard to its variations in size. The method involves determining whether the object has become larger or smaller and by how much, so that continuous focusing is provided without hunting, even on objects of unknown size.
When a scene includes multiple faces as illustrated at
A digital camera image pipeline is illustrated at
The technique in accordance with embodiments described herein scores high in many categories. For example, it is fast, requires very low power and produces very low motor wear. In video mode, it knows whether the subject has moved into front or back focus, so it does not need to hunt. This feature can enable continuous autofocus in movie mode for DSCs and camera phones, which is not available in current technologies. Furthermore, the technique does not require any additional hardware, so it is cheap to implement, it is rugged (it passes any drop test), and it does all this without compromising the quality of the focus in any way. It is also highly accurate. Multiface autofocus is also provided, which enables the camera to focus on multiple faces located at various depths. With multiface AF in accordance with embodiments described herein, this can be done by assessing the sizes of the faces, calculating the distance to each of the faces, and then deciding on a virtual focus distance that maximizes sharpness across all these faces, or otherwise as described above. Moreover, focus will then be achieved almost instantly, without having to scan the focus range or measure sharpness on multiple areas in the image, which can otherwise be very slow if the faces together cover a large area of the frame.
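One possible way (an assumption of this sketch, not a rule stated in the disclosure) to choose a single virtual focus distance for faces at several depths is to take the midpoint between the nearest and farthest estimated distances in diopter (reciprocal-distance) space, which balances the defocus between them.

```python
def virtual_focus_distance(face_distances_m):
    """face_distances_m: estimated distances (metres) to each detected face."""
    nearest, farthest = min(face_distances_m), max(face_distances_m)
    mean_diopters = 0.5 * (1.0 / nearest + 1.0 / farthest)
    return 1.0 / mean_diopters

# Example: faces estimated at 1.2 m, 2.0 m and 3.5 m -> focus at about 1.8 m.
print(virtual_focus_distance([1.2, 2.0, 3.5]))
```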
While exemplary drawings and specific embodiments of the present invention have been described and illustrated, it is to be understood that the scope of the present invention is not to be limited to the particular embodiments discussed. Thus, the embodiments shall be regarded as illustrative rather than restrictive, and it should be understood that variations may be made in those embodiments by workers skilled in the arts without departing from the scope of the present invention.
In addition, in methods that may be performed according to preferred embodiments herein and that may have been described above, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations, except for those where a particular order may be expressly set forth or where those of ordinary skill in the art may deem a particular order to be necessary.
In addition, all references cited above and below herein, as well as the background, invention summary, abstract and brief description of the drawings, are all incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments.
This application claims the benefit of priority to U.S. provisional patent application No. 61/387,449, filed Sep. 28, 2010.
Number | Name | Date | Kind |
---|---|---|---|
4448510 | Murakoshi | May 1984 | A |
5926212 | Kondo | Jul 1999 | A |
6122004 | Hwang | Sep 2000 | A |
6246790 | Huang et al. | Jun 2001 | B1 |
6407777 | DeLuca | Jun 2002 | B1 |
6747690 | Molgaard | Jun 2004 | B2 |
6934087 | Gutierrez et al. | Aug 2005 | B1 |
7113688 | Calvet et al. | Sep 2006 | B2 |
7266272 | Calvet et al. | Sep 2007 | B1 |
7269292 | Steinberg | Sep 2007 | B2 |
7315630 | Steinberg et al. | Jan 2008 | B2 |
7315631 | Corcoran et al. | Jan 2008 | B1 |
7317815 | Steinberg et al. | Jan 2008 | B2 |
7345827 | Tang et al. | Mar 2008 | B1 |
7359130 | Calvet | Apr 2008 | B1 |
7359131 | Gutierrez et al. | Apr 2008 | B1 |
7362368 | Steinberg et al. | Apr 2008 | B2 |
7403344 | Xu et al. | Jul 2008 | B2 |
7403643 | Ianculescu et al. | Jul 2008 | B2 |
7440593 | Steinberg et al. | Oct 2008 | B1 |
7460694 | Corcoran et al. | Dec 2008 | B2 |
7460695 | Steinberg et al. | Dec 2008 | B2 |
7466866 | Steinberg | Dec 2008 | B2 |
7469055 | Corcoran et al. | Dec 2008 | B2 |
7469071 | Drimbarean et al. | Dec 2008 | B2 |
7471846 | Steinberg et al. | Dec 2008 | B2 |
7477400 | Gutierrez et al. | Jan 2009 | B2 |
7477842 | Gutierrez | Jan 2009 | B2 |
7495852 | Gutierrez | Feb 2009 | B2 |
7515362 | Gutierrez et al. | Apr 2009 | B1 |
7515740 | Corcoran et al. | Apr 2009 | B2 |
7518635 | Kawahara et al. | Apr 2009 | B2 |
7545591 | Tong et al. | Jun 2009 | B1 |
7551754 | Steinberg et al. | Jun 2009 | B2 |
7551755 | Steinberg et al. | Jun 2009 | B1 |
7555148 | Steinberg et al. | Jun 2009 | B1 |
7555210 | Calvet | Jun 2009 | B2 |
7558408 | Steinberg et al. | Jul 2009 | B1 |
7560679 | Gutierrez | Jul 2009 | B1 |
7564994 | Steinberg et al. | Jul 2009 | B1 |
7565030 | Steinberg et al. | Jul 2009 | B2 |
7565070 | Gutierrez | Jul 2009 | B1 |
7574016 | Steinberg et al. | Aug 2009 | B2 |
7583006 | Calvet et al. | Sep 2009 | B2 |
7587068 | Steinberg et al. | Sep 2009 | B1 |
7606417 | Steinberg et al. | Oct 2009 | B2 |
7616233 | Steinberg et al. | Nov 2009 | B2 |
7620218 | Steinberg et al. | Nov 2009 | B2 |
7630527 | Steinberg et al. | Dec 2009 | B2 |
7634109 | Steinberg et al. | Dec 2009 | B2 |
7640803 | Gutierrez et al. | Jan 2010 | B1 |
7646969 | Calvet et al. | Jan 2010 | B2 |
7660056 | Tang et al. | Feb 2010 | B1 |
7663289 | Gutierrez | Feb 2010 | B1 |
7663817 | Xu et al. | Feb 2010 | B1 |
7680342 | Steinberg et al. | Mar 2010 | B2 |
7684630 | Steinberg | Mar 2010 | B2 |
7692696 | Steinberg et al. | Apr 2010 | B2 |
7693311 | Steinberg et al. | Apr 2010 | B2 |
7693408 | Tsai et al. | Apr 2010 | B1 |
7697829 | Gutierrez et al. | Apr 2010 | B1 |
7697831 | Tsai et al. | Apr 2010 | B1 |
7697834 | Tsai | Apr 2010 | B1 |
7702136 | Steinberg et al. | Apr 2010 | B2 |
7702226 | Gutierrez | Apr 2010 | B1 |
7715597 | Costache et al. | May 2010 | B2 |
7729601 | Tsai | Jun 2010 | B1 |
7729603 | Xu et al. | Jun 2010 | B2 |
7747155 | Gutierrez | Jun 2010 | B1 |
7769281 | Gutierrez | Aug 2010 | B1 |
7792335 | Steinberg et al. | Sep 2010 | B2 |
7796816 | Steinberg et al. | Sep 2010 | B2 |
7796822 | Steinberg et al. | Sep 2010 | B2 |
8212882 | Florea et al. | Jul 2012 | B2 |
20010026632 | Tamai | Oct 2001 | A1 |
20040090551 | Yata | May 2004 | A1 |
20050270410 | Takayama | Dec 2005 | A1 |
20060140455 | Costache et al. | Jun 2006 | A1 |
20060204034 | Steinberg et al. | Sep 2006 | A1 |
20060239579 | Ritter | Oct 2006 | A1 |
20070030381 | Maeda | Feb 2007 | A1 |
20070110305 | Corcoran et al. | May 2007 | A1 |
20070160307 | Steinberg et al. | Jul 2007 | A1 |
20070201725 | Steinberg et al. | Aug 2007 | A1 |
20070269108 | Steinberg et al. | Nov 2007 | A1 |
20070285528 | Mise et al. | Dec 2007 | A1 |
20070296833 | Corcoran et al. | Dec 2007 | A1 |
20080013798 | Ionita et al. | Jan 2008 | A1 |
20080013799 | Steinberg et al. | Jan 2008 | A1 |
20080043121 | Prilutsky et al. | Feb 2008 | A1 |
20080143854 | Steinberg et al. | Jun 2008 | A1 |
20080175481 | Petrescu et al. | Jul 2008 | A1 |
20080205712 | Ionita et al. | Aug 2008 | A1 |
20080218868 | Hillis et al. | Sep 2008 | A1 |
20080219517 | Blonk et al. | Sep 2008 | A1 |
20080219581 | Albu et al. | Sep 2008 | A1 |
20080220750 | Steinberg et al. | Sep 2008 | A1 |
20080252773 | Oishi | Oct 2008 | A1 |
20080259176 | Tamaru | Oct 2008 | A1 |
20080266419 | Drimbarean et al. | Oct 2008 | A1 |
20080267461 | Ianculescu et al. | Oct 2008 | A1 |
20080292193 | Bigioi | Nov 2008 | A1 |
20080309769 | Albu et al. | Dec 2008 | A1 |
20080316328 | Steinberg et al. | Dec 2008 | A1 |
20090002514 | Steinberg et al. | Jan 2009 | A1 |
20090003661 | Ionita et al. | Jan 2009 | A1 |
20090003708 | Steinberg et al. | Jan 2009 | A1 |
20090040342 | Drimbarean et al. | Feb 2009 | A1 |
20090052749 | Steinberg et al. | Feb 2009 | A1 |
20090052750 | Steinberg et al. | Feb 2009 | A1 |
20090059061 | Yu et al. | Mar 2009 | A1 |
20090080713 | Bigioi et al. | Mar 2009 | A1 |
20090087042 | Steinberg et al. | Apr 2009 | A1 |
20090153725 | Kawahara | Jun 2009 | A1 |
20090167893 | Susanu et al. | Jul 2009 | A1 |
20090179998 | Steinberg et al. | Jul 2009 | A1 |
20090179999 | Albu et al. | Jul 2009 | A1 |
20090185753 | Albu et al. | Jul 2009 | A1 |
20090190803 | Neghina et al. | Jul 2009 | A1 |
20090196466 | Capata et al. | Aug 2009 | A1 |
20090208056 | Corcoran et al. | Aug 2009 | A1 |
20090238410 | Corcoran et al. | Sep 2009 | A1 |
20090238419 | Steinberg et al. | Sep 2009 | A1 |
20090238487 | Nakagawa | Sep 2009 | A1 |
20090244296 | Petrescu et al. | Oct 2009 | A1 |
20090245693 | Steinberg et al. | Oct 2009 | A1 |
20090263022 | Petrescu et al. | Oct 2009 | A1 |
20090268080 | Song et al. | Oct 2009 | A1 |
20090273685 | Ciuc et al. | Nov 2009 | A1 |
20090303342 | Corcoran et al. | Dec 2009 | A1 |
20090303343 | Drimbarean et al. | Dec 2009 | A1 |
20090310885 | Tamaru | Dec 2009 | A1 |
20100014721 | Steinberg et al. | Jan 2010 | A1 |
20100039525 | Steinberg et al. | Feb 2010 | A1 |
20100054533 | Steinberg et al. | Mar 2010 | A1 |
20100054549 | Steinberg et al. | Mar 2010 | A1 |
20100060727 | Steinberg et al. | Mar 2010 | A1 |
20100066822 | Steinberg et al. | Mar 2010 | A1 |
20100085436 | Ohno | Apr 2010 | A1 |
20100092039 | Steinberg et al. | Apr 2010 | A1 |
20100128163 | Nagasaka et al. | May 2010 | A1 |
20100141786 | Bigioi et al. | Jun 2010 | A1 |
20100141787 | Bigioi et al. | Jun 2010 | A1 |
20100165140 | Steinberg | Jul 2010 | A1 |
20100165150 | Steinberg et al. | Jul 2010 | A1 |
20100182458 | Steinberg et al. | Jul 2010 | A1 |
20100188525 | Steinberg et al. | Jul 2010 | A1 |
20100188530 | Steinberg et al. | Jul 2010 | A1 |
20100188535 | Kurata et al. | Jul 2010 | A1 |
20100194869 | Matsuzaki | Aug 2010 | A1 |
20100202707 | Costache | Aug 2010 | A1 |
20100208091 | Chang | Aug 2010 | A1 |
20100220899 | Steinberg et al. | Sep 2010 | A1 |
20100271494 | Miyasako | Oct 2010 | A1 |
20100283868 | Clark et al. | Nov 2010 | A1 |
20100329582 | Albu et al. | Dec 2010 | A1 |
20110135208 | Atanassov et al. | Jun 2011 | A1 |
20110249173 | Li et al. | Oct 2011 | A1 |
20120120269 | Capata et al. | May 2012 | A1 |
20120120283 | Capata et al. | May 2012 | A1 |
20120218461 | Sugimoto | Aug 2012 | A1 |
20130027536 | Nozaki et al. | Jan 2013 | A1 |
Number | Date | Country |
---|---|---|
2037320 | Mar 2009 | EP |
2007093199 | Aug 2007 | WO |
2009036793 | Mar 2009 | WO |
2012041892 | Apr 2012 | WO |
2012062893 | May 2012 | WO |
2012062893 | Jul 2012 | WO |
Entry |
---|
PCT Communication in Cases for Which No Other Form is Applicable, Form PCT/ISA/224, for PCT Application No. PCT/EP2011/069904, dated Jun. 22, 2012, 1 Page. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2011/069904, report dated Jun. 22, 2012, 21 Pages. |
Karen P Hollingsworth, Kevin W. Bowyer, Patrick J. Flynn: “Image Averaging for Improved Iris Recognition”, Advances in Biometrics, by M. Tistarelli, M.S. Nixon (Eds.): ICB 2009, LNCS 5558, Springer Berlin Heidelberg, Berlin Heidelberg, Jun. 2, 2009, pp. 1112-1121, XP019117900, ISBN: 978-3-642-01792-6. |
Sang Ku Kim, Sang Rae Park and Joon Ki Paik: Simultaneous Out-Of-Focus Blur Estimation and Restoration for Digital Auto-Focusing System, Aug. 1, 1998, vol. 44, No. 3, pp. 1071-1075, XP011083715. |
Hamm P., Schulz J. and Englmeier K.-H.: “Content-Based Autofocusing in Automated Microscopy”, Image Analysis Stereology, vol. 29, Nov. 1, 2010, pp. 173-180, XP002677721. |
PCT Notification of the Transmittal of the International Search Report and the Written Opinion of the International Search Authority, or the Declaration, for PCT Application No. PCT/EP2011/066835, report dated Jan. 18, 2012, 12 Pages. |
Co-pending U.S. Appl. No. 12/944,701, filed Nov. 11, 2010. |
Co-pending U.S. Appl. No. 12/944,703, filed Nov. 11, 2010. |
Co-pending U.S. Appl. No. 13/020,805, filed Feb. 3, 2011. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2011/069904, report dated May 16, 2012, 17 Pages. |
Sinjini Mitra, Marios Savvides, Gaussian Mixture Models Based on the Frequency Spectra for Human Identification and Illumination Classification, 4th IEEE Workshop on Automatic Identification Advanced Technologies, 2005, Buffalo, NY, USA Oct. 17-18, 2005, pp. 245-250. |
Kouzani, A Z, Illumination-effects compensation in facial images, IEEE International Conference on Systems, Man, and Cybernetics, 1999, IEEE SMS '99 Conference Proceedings, Tokyo, Japan Oct. 12-15, 1999, vol. 6, pp. 840-844. |
Matthew Turk, Alex Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience, 1991, vol. 3, Issue 1, pp. 71-86. |
H. Lai, P. C. Yuen, and G. C. Feng, Face recognition using holistic Fourier invariant features, Pattern Recognition, 2001, vol. 34, pp. 95-109, 2001. Retrieved from URL: http://digitalimaging.inf.brad.ac.uk/publication/pr34-1.pdf. |
Jing Huang, S. Ravi Kumar, Mandar Mitra, Wei-Jing Zhu, Ramin Zabih, Image Indexing Using Color Correlograms, IEEE Conf. Computer Vision and Pattern Recognition, (CVPR '97) pp. 762-768, 1997. |
Markus Stricker and Markus Orengo, Similarity of color images, SPIE Proc., vol. 2420, pp. 381-392, 1995. |
Ronny Tjahyadi, Wanquan Liu, Svetha Venkatesh, Application of the DCT Energy Histogram for Face Recognition, in Proceedings of the 2nd International Conference on Information Technology for Application (ICITA 2004), 2004, pp. 305-310. |
S. J. Wan, P. Prusinkiewicz, S. L. M. Wong, Variance-based color image quantization for frame buffer display, Color Research and Application, vol. 15, No. 1, pp. 52-58, 1990. |
Tianhorng Chang and C.-C. Jay Kuo, Texture Analysis and Classification with Tree-Structured Wavelet Transform, IEEE Trans. Image Processing, vol. 2. No. 4, Oct. 1993, pp. 429-441. |
Zhang Lei, Lin Fuzong, Zhang Bo, A CBIR method based on color-spatial feature. IEEE Region 10th Ann. Int. Conf. 1999, TENCON'99, Cheju, Korea, 1999. |
Wikipedia reference: Autofocus, retrieved on Feb. 3, 2011, URL: http://en.wikipedia.org/wiki/Autofocus, 5 Pages. |
Wei Huang, Zhongliang Jing, Evaluation of focus measures in multi-focus image fusion, Pattern Recognition Letters, 2007, 28 (4), 493-500. |
Mohsen Ebrahimi Moghaddam, Out of focus blur estimation using genetic algorithm, in Proc. 15th International Conf. on Systems, Signals and Image Processing, IWSSIP 2008, 417-420. |
Mohsen Ebrahimi Moghaddam, Out of Focus Blur Estimation Using Genetic Algorithm, Journal of Computer Science, 2008, Science Publications, vol. 4 (4), ISSN 1549-3636, pp. 298-304. |
Ken Sauer and Brian Schwartz, Efficient Block Motion Estimation Using Integral Projections, IEEE Trans. Circuits, Systems for video Tech, 1996, pp. 513-518, vol. 6—Issue 5. |
Sang-Yong Lee, Jae-Tack Yoo, Yogendera Kumar, and Soo-Won Kim, Reduced Energy-Ratio Measure for Robust Autofocusing in Digital Camera, IEEE Signal Processing Letters, vol. 16, No. 2, Feb. 2009, pp. 133-136. |
Jaehwan Jeon, Jinhee Lee, and Joonki Paik, Robust Focus Measure for Unsupervised Auto-Focusing Based on Optimum Discrete Cosine Transform Coefficients, IEEE Transactions on Consumer Electronics, vol. 57, No. 1, Feb. 2011, pp. 1-5. |
Aaron Deever, In-Camera All-Digital Video Stabilization, ICIS '06 International Congress of Imaging Science, Final Program and Proceedings, Society for Imaging Science and Technology, pp. 190-193. |
Felix Albu, Corneliu Florea, Adrian Zamfir, Alexandru Drimbarean, 10.4-3 Low Complexity Global Motion Estimation Techniques for Image Stabilization, IEEE, Aug. 2008, pp. 1-4244-1459. |
Masahiro Watanabe, Shree K. Nayar: Short Papers Telecentric Optics for Focus Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 12, Dec. 1997, pp. 1360-1365. |
A. Ben Hamza, Yun He, Hamid Krim, and Alan Willsky: A multiscale approach to pixel-level image fusion, Integrated Computer-Aided Engineering, IOS Press, No. 12, 2005 pp. 135-146. |
Paul Viola and Michael Jones: Rapid Object Detection Using a Boosted Cascade of Simple Features, Mitsubishi Electric Research Laboratories, Inc., 2004, TR-2004-043 May 2004, 13 pages. |
Non-Final Rejection, dated Apr. 17, 2013, for U.S. Appl. No. 12/944,701, filed Nov. 11, 2010. |
Non-Final Rejection, dated Apr. 15, 2013, for U.S. Appl. No. 12/944,703, filed Nov. 11, 2010. |
PCT Notification of Transmittal of International Preliminary Report on Patentability Chapter I, International Preliminary Report on Patentability Chapter I, for PCT Application No. PCT/EP2011/066835, report dated Apr. 11, 2013, 9 Pages. |
Non-Final Rejection, dated Apr. 5, 2013, for U.S. Appl. No. 13/020,805, filed Feb. 3, 2011. |
Notice of Allowance, dated Jul. 10, 2013, for U.S. Appl. No. 13/020,805, filed Feb. 3, 2011. |
Number | Date | Country | |
---|---|---|---|
20120075492 A1 | Mar 2012 | US |
Number | Date | Country | |
---|---|---|---|
61387449 | Sep 2010 | US |