Method and system for biometric recognition

Information

  • Patent Grant
  • Patent Number
    8,953,849
  • Date Filed
    Thursday, October 15, 2009
  • Date Issued
    Tuesday, February 10, 2015
Abstract
High quality, sharply focused images of an iris and the face of a person are acquired in rapid succession, in either order, by a single sensor and one or more illuminators, preferably within less than one second of each other, by changing the sensor settings or illumination levels between acquisitions.
Description
BACKGROUND OF THE INVENTION

This invention relates generally to systems and methods wherein imagery is acquired primarily to determine or verify the identity of an individual person using biometric recognition.


Biometric recognition methods are widespread and of great interest in security, access protection, and financial transaction verification, and in settings such as airports and office buildings, but prior to the invention their ability to correctly identify individuals, even when searching through a small reference database of faces, has always been limited. There are typically false positives (the wrong person is identified) or false negatives (the correct person is not identified).


There are several reasons for such poor performance of biometric recognition methods.


First, when comparing two faces (from the same person or from different persons), it is important that the biometric templates or features are registered so that corresponding features (nose position for example) can be compared accurately. Even small errors in registration can result in matching errors even if the faces being compared are from the same person.


Second, for facial or iris recognition, it is important that the recognized face or iris and the reference face or iris have the same, or very similar, pose. Pose in this context means orientation (pan, tilt, yaw) and zoom with respect to the camera. Variations in pose between the images again result in matching errors even if the faces being compared are from the same person.


Third, the dynamic range or sensitivity of the sensor may not be sufficient to capture biometric information related to the face. For example, some biometric systems are multi-modal, meaning they use several biometrics (for example, iris and face) to improve the accuracy of recognition. In such multiple biometric systems and methods there are problems in assuring that each of the sets of data is from the same person; for example, the system may unintentionally capture the face of a first individual and the iris of a second individual, resulting in an identification or match failure. Another problem with such multiple biometric systems is the difficulty of obtaining good data for each of the separate biometrics, e.g., face and iris, because the albedo or reflectance of one biometric material (the iris, for example) may be very different from the albedo of a second biometric (the face, for example). The result is that the signal reflected off one of the two biometrics falls outside the dynamic range or sensitivity of the camera and is either saturated, in the dark current region of the camera's sensitivity, or simply appears as a uniform gray scale with very poor contrast of biometric features, while the second biometric signal is within the dynamic range or sensitivity of the camera and has a sufficient signal to noise ratio to enable accurate biometric or manual recognition.


Fourth, the illumination may vary between the images being matched in the face recognition system. Changes in illumination can result in poor match results since detected differences are due to the illumination changes and not to the fact that a different person is being matched.


Since the reflectance of a face is different from that of an iris, acquiring an image of an iris and a face from the same person with a single sensor according to prior methods and systems has yielded poor results. Past practice required two cameras or sensors or, in the case of a single sensor, operated the sensor and illuminators at constant settings.


For example, Adam, et al., U.S. Pat. Publ. 20060050933 aims to address the problem of acquiring data for use in face and iris recognition using one sensor, but does not address the problem of optimizing the image acquisition such that the data acquired is optimal for each of the face and iris recognition components separately.


Determan, et al., U.S. Pat. Publ. 20080075334 and Saitoh, et al., US Pat. Publ. 20050270386 disclose acquiring face and iris imagery for recognition using a separate sensor for the face and a separate sensor for the iris. Saitoh claims a method for performing iris recognition that includes identifying the position of the iris using a face and iris image, but uses two separate sensors that focus separately on the face and iris respectively and acquires data simultaneously such that user motion is not a concern.


Determan also discusses using one sensor for both the face and iris, but does not address the problem of optimizing the image acquisition such that the data acquired is optimal for each of the face and iris recognition components separately.


Jacobson, et al., in U.S. Pat. Publ. 20070206840 also describes a system that includes acquiring imagery of the face and iris, but does not address the problem of optimizing the image acquisition such that the data acquired is optimal for each of the face and iris recognition components separately.


SUMMARY OF THE INVENTION

We have discovered a method, and a related system for carrying out the method, which captures a high quality image of an iris and the face of a person with a single camera having a single sensor, by acquiring at least two images with a small time elapse between acquisitions and by changing the sensor or camera settings and/or illumination settings between the iris acquisition(s) and the face acquisition(s).


The system comprises a sensor, illuminator, and processor adapted to acquire a high quality image of an iris of the person at a first set of parameters. The parameters which can be varied between the first biometric recognition step and the second biometric recognition step, for example between iris and face recognition steps, can include one or more of the following, by way of example: illumination power setting, camera integration time, and wavelength. The acquisitions of the first biometric and the second biometric are within one second of each other, preferably within less than one second of each other. For example, the elapsed time between recognition steps where the parameters are varied can be as little as 0.5, 0.25, 0.1, or 0.05 seconds, or even less, depending on the capability of the sensor, illuminators, and processor.


The settings on the illuminator and/or sensor are also changed within one second, and depending on the embodiment within one half, one quarter, or even one tenth of a second.


Some embodiments include the steps of, and related system components or modules for, identifying one or more acquired images containing the iris or face, performing registration across a captured sequence with respect to the identified acquired image, and constraining the search for the iris or face in the remainder of the sequence in response to the results from the original identified image and the recovered registration parameters across the sequence.


Certain embodiments of the invention include determining a distance from the sensor by comparing a diameter of the iris in the iris image with a reference table and/or comparing a separation value between two eyes of the person with a reference table.


The system and method in some cases can adjust focus as a function of a measured distance between two eyes of the person and/or adjust illumination based on the distance from the sensor, the distance calculated by comparing a diameter of the iris in an iris image with a reference table.
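As an illustration of this reference-table approach, the sketch below interpolates a subject distance from a measured iris diameter (in pixels) and then looks up an illumination level for that distance. The table values, function names, and linear interpolation are illustrative assumptions, not values from the patent.

```python
import bisect

# Hypothetical reference table: iris diameter in pixels -> subject distance in cm.
# A larger apparent diameter means the subject is closer to the sensor.
IRIS_DIAMETER_TO_DISTANCE = [(80, 60), (95, 50), (120, 40), (160, 30), (220, 20)]

# Hypothetical table: subject distance in cm -> illuminator drive current in mA.
DISTANCE_TO_ILLUMINATION = [(20, 150), (30, 250), (40, 400), (50, 600), (60, 800)]


def interpolate(table, x):
    """Linearly interpolate y for x in a table of (x, y) pairs sorted by x."""
    xs = [p[0] for p in table]
    ys = [p[1] for p in table]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_left(xs, x)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)


def distance_from_iris_diameter(diameter_px):
    return interpolate(IRIS_DIAMETER_TO_DISTANCE, diameter_px)


def illumination_for_distance(distance_cm):
    return interpolate(DISTANCE_TO_ILLUMINATION, distance_cm)


if __name__ == "__main__":
    d = distance_from_iris_diameter(140)   # measured iris diameter in pixels
    print(f"estimated distance: {d:.1f} cm")
    print(f"illuminator current: {illumination_for_distance(d):.0f} mA")
```

The same lookup idea applies to eye separation: a measured inter-eye distance in pixels can replace the iris diameter as the key into the first table.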


In certain cases the method comprises changing one or more sensor settings selected from the group consisting of integration time, illumination, shutter speed, aperture, and gain between the acquisitions of the face and the iris. The parameters which can be varied between the first image and the second image can be, for example, illumination pulse setting, illumination amplitude setting, camera integration time, camera gain setting, camera offset setting, and camera wavelength.


It is sometimes beneficial for the system to compute the diameter of the iris upon acquisition of the image of the iris with the sensor, to estimate eye separation, pose of the iris, and/or pose of the face.





BRIEF DESCRIPTION OF THE DRAWINGS

The features and advantages of the invention will be appreciated by reference to the detailed description when considered in connection with the attached drawings wherein:



FIG. 1 is a schematic of a face of a person wherein the face features are captured within the dynamic range of the sensor or image grabbing device with sufficient contrast for accurate facial recognition, while the iris features are captured either outside the dynamic range of the sensor or image grabbing device or without sufficient contrast for accurate iris recognition.



FIG. 2 is a schematic of the face of the person of FIG. 1 wherein the iris features are captured within the dynamic range of the sensor or image grabbing device and with sufficient contrast for accurate iris recognition, while the face features are captured either outside the dynamic range of the sensor or image grabbing device or without sufficient contrast for accurate facial recognition.



FIG. 3 is a plan view of an image acquisition system comprising a single sensor in a camera and a set of two illuminators.





DETAILED DESCRIPTION OF THE INVENTION

In the following detailed description, certain embodiments will be illustrated by reference to the drawings, although it will be apparent that many other embodiments are possible according to the invention.


Referring first to FIG. 1, a face 10 is illustrated wherein face features, including corners of eyes 11, wrinkles 12, 13, 16, and 17, and corners of mouth 14 and nose 18 are visible but iris 15 is of low contrast and in poor detail. Such an image could be obtained with low illumination settings with the sensor.



FIG. 2 illustrates the same face 10 as in FIG. 1 but with the iris 15 captured at high contrast, while the face features seen in FIG. 1 are not acquired by the sensor or are of low contrast 19.



FIG. 3 is a side view of an embodiment of a system according to the invention wherein a subject 10 is to the left of the figure. Images of the subject are captured by means of the camera 31 and illumination is provided by means of an illuminator or, in the illustrated embodiment, two sets of illuminators wherein first illuminator set 32 are infra-red wavelength illuminators, for example an Axis ACC IR Illuminator model 20812, and second illuminator 33 is a set of Light Emitting Diode (LED) illuminators which have a broad wavelength spectrum, for example Silicon Imaging white LED illuminator 2-61617. In the illustrated embodiment a 2 megapixel resolution CCD camera 31 such as the Aegis PL-8956F model is used with the two sets of illuminators 32, 33. The illumination and camera settings are controlled by the camera and illumination setting controller. The specific parameters controlled by the controller include but are not limited to: camera gain, camera offset, camera integration time, camera look-up table selection, illumination pulse width for illuminator 1, illumination pulse width for illuminator 2, illumination amplitude for illuminator 1, and illumination amplitude for illuminator 2.
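For concreteness, the sketch below models the kind of per-frame parameter set such a controller might send to the camera and the two illuminators, with one preset tuned for iris capture and one for face capture. The field names, preset values, and the apply_settings hook are assumptions for illustration; they are not the controller interface of the actual system.

```python
from dataclasses import dataclass

@dataclass
class FrameSettings:
    """One complete configuration sent to the camera and illuminators for a frame."""
    camera_gain: float            # sensor gain (arbitrary units)
    camera_offset: float          # sensor black-level offset
    integration_time_ms: float    # exposure / integration time in milliseconds
    lut: str                      # camera look-up table selection
    ir_pulse_width_ms: float      # pulse width of IR illuminator 32
    ir_amplitude_ma: float        # pulse amplitude of IR illuminator 32
    visible_amplitude_ma: float   # drive current of visible LED illuminator 33

# Hypothetical presets: one tuned for the iris, one tuned for the face.
IRIS_PRESET = FrameSettings(camera_gain=2.0, camera_offset=10, integration_time_ms=6.0,
                            lut="linear", ir_pulse_width_ms=6.0, ir_amplitude_ma=400,
                            visible_amplitude_ma=50)
FACE_PRESET = FrameSettings(camera_gain=1.0, camera_offset=10, integration_time_ms=6.0,
                            lut="linear", ir_pulse_width_ms=6.0, ir_amplitude_ma=0,
                            visible_amplitude_ma=50)

def apply_settings(settings: FrameSettings) -> None:
    """Placeholder for writing the settings to real camera/illuminator drivers."""
    print(f"configuring hardware: {settings}")

if __name__ == "__main__":
    for preset in (IRIS_PRESET, FACE_PRESET):
        apply_settings(preset)   # a frame capture would be triggered after each call
```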


An optional range measurement module 30 can also be included. This module 30 measures the approximate distance between the camera and/or illumination and the subject 10. There are many devices known for measuring range. These include stereo depth recovery methods, such as that described by Horn in “Robot Vision”, MIT Press, pages 202-242, or an acoustic/ultrasonic range sensor such as those supplied by Campbell Scientific Inc.


A second method to determine range is to measure the eye separation or iris diameter.


In one embodiment, the system comprises a sensor, lens system, illuminator, and processor adapted to acquire a high quality image of an iris of the person at a first set of illumination power, camera integration time, wavelength, and lens settings, and to acquire with the same sensor a high quality image of the face of the person at a second set of illumination power, camera integration time, wavelength, and lens settings, wherein the acquisitions of the face image and iris image are within one second of each other, preferably within less than one second of each other.


The settings on the illuminator and/or sensor and/or lens system are also changed within one second, and depending on the embodiment within one half, one quarter, or even one tenth of a second.


Some embodiments include the steps of, and related system components or modules for, identifying one or more acquired images containing the biometric data, for example the iris or the face, performing registration across a captured sequence with respect to the identified acquired image, and constraining the search for the biometric data, including the iris or face, in the remainder of the sequence in response to the results from the original identified image and the recovered registration parameters across the sequence. The recovered motion between the images may be due to the motion of the person in the scene as they approach or recede from the camera, or may be from motion induced by changes in the lens parameters, such as zooming of the lens or pan and tilt control of the camera.


Certain embodiments of the invention include determining a distance from the sensor by comparing a diameter of the iris in the iris image with a reference table and/or comparing a separation value between two eyes of the person with a reference table.


The system and method in some cases can adjust focus or zoom as a function of a measured distance between two eyes of the person and/or adjust illumination based on the distance from the sensor, the distance calculated by comparing a diameter of the iris in an iris image with a reference table.


In certain cases the method comprises changing one or more sensor and lens settings selected from the group consisting of integration time, illumination, shutter speed, aperture, and gain between the acquisitions of the face and the iris.


It is sometimes beneficial for the system to compute the diameter of the iris upon acquisition of the image of the iris with the sensor, to estimate eye separation, pose of the iris, and/or pose of the face. In one embodiment, the wavelength and brightness of the illumination are varied. More specifically, the camera and illumination parameters are controlled as follows: The visible illuminator 33 is set to provide constant illumination for all acquired frames with a magnitude of 50 milliamps. The IR illuminator 32 is set to a constant pulse width of 6 msecs, but to a pulse magnitude that alternates between 0 milliamps and 400 milliamps on alternate frames acquired by the camera. The camera may be set to acquire frames at 3 frames per second. The camera integration time may be set at 6 msecs, and the camera gain, offset, and lookup table may be set to constant values. Those constant values are chosen such that the image of the iris captured when current is being passed to the infra-red illuminator has enough signal to noise for accurate biometric matching. In this embodiment, the images acquired when current is not being passed to the infra-red illuminator are suitable for accurate facial recognition.


In a second embodiment, the camera integration time is varied. More specifically, the camera and illumination parameters are controlled as follows: The visible illuminator is set to provide no illumination for any frame, or a constant illumination for all acquired frames with a magnitude of 50 milliamps. The IR illuminator is set to a constant pulse width of 6 msecs and a constant pulse magnitude of 400 milliamps. The camera may be set to acquire frames at 3 frames per second. The camera integration time is set to alternate between adjacent frames between 1.5 msecs and 6 msecs, and the camera gain, offset, and lookup table may be set to constant values. Those constant values are chosen such that the image of the iris captured with the longer integration time has enough signal to noise for accurate biometric matching. In this embodiment, the images acquired with the shorter integration time are suitable for accurate facial recognition.
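A minimal sketch of the alternation used in these first two embodiments, assuming a placeholder capture loop and driver interface: even-numbered frames use the iris-oriented settings (IR pulse on, longer integration) and odd-numbered frames the face-oriented settings, with the numeric values taken from the text above.

```python
import time

FRAME_PERIOD_S = 1.0 / 3.0  # 3 frames per second, as in the embodiments above

def configure_frame(frame_index, vary="illumination"):
    """Return (ir_amplitude_ma, integration_time_ms) for the next frame.

    vary="illumination" follows the first embodiment (IR amplitude alternates);
    vary="integration" follows the second embodiment (integration time alternates).
    """
    iris_frame = (frame_index % 2 == 0)
    if vary == "illumination":
        ir_amplitude_ma = 400 if iris_frame else 0
        integration_time_ms = 6.0
    else:  # vary == "integration"
        ir_amplitude_ma = 400
        integration_time_ms = 6.0 if iris_frame else 1.5
    return ir_amplitude_ma, integration_time_ms

def capture_frame():
    """Placeholder for a real frame grab from the sensor."""
    return None

if __name__ == "__main__":
    iris_frames, face_frames = [], []
    for i in range(6):
        ir_ma, t_ms = configure_frame(i, vary="illumination")
        # Hardware driver calls would go here (set IR amplitude and integration time).
        frame = capture_frame()
        (iris_frames if i % 2 == 0 else face_frames).append(frame)
        time.sleep(FRAME_PERIOD_S)
    print(f"collected {len(iris_frames)} iris frames and {len(face_frames)} face frames")
```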


A third embodiment is the same as the first embodiment, excepting that the magnitude of one or both of the visible illuminators (set at 50 milliamps in embodiment 1) and IR illuminators (set at 400 milliamps in embodiment 1) is adjusted in response to an output of either the processor and/or the depth measurement sensor. More specifically, the processor or range measurement sensor provides an estimate of the range of the subject. This estimate is then used to look-up a preferred intensity magnitude for each of the visible and infra-red illuminators which is then provided to the illuminators. These preferred values are selected by acquiring data from a wide range of subjects under different intensity magnitude settings and at different distances, and by empirically finding the settings that provide the best performance for biometric recognition of the face and iris respectively.


A fourth embodiment first acquires data as described in the first embodiment. In a second step however the images that are optimized for acquiring data of each of the face and iris are aligned using the methods described in this invention, in order to remove any subject or camera motion that may have occurred between the two time instants that each of the optimized data was acquired from the sensor. In this way the features that are optimal for facial recognition in one image can be corresponded to features that are optimal for iris recognition in the other image. This allows processing performed on one image to be used to constrain the results of processing on the other image. For example, recovery of the approximate position and orientation of the face in one image can then be used to constrain the possible position and orientation of the iris in the second image. Similarly, recovery of the position of the iris in one image constrains the possible position of the face in the second image. This can assist in reducing the processing time for one or other of the biometric match processes, for example. In another example, some facial features are most accurately localized and have best signal to noise properties under one set of camera or illumination conditions, whereas another set of facial features are most accurately localized and have best signal to noise properties under another set of camera or illumination settings. This method allows the most accurately localized features of all facial features to be used for facial recognition, thereby providing improved recognition performance.
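The alignment step might be sketched as follows, assuming the motion between the two acquisitions is approximately a pure translation; cv2.phaseCorrelate estimates the shift, and eye coordinates found in the face-optimized frame are then mapped into the iris-optimized frame. A real implementation would likely use a richer motion model (e.g., affine), so this is an illustration only.

```python
import cv2
import numpy as np

def register_translation(face_img, iris_img):
    """Estimate the (dx, dy) shift of the iris-optimized frame relative to the
    face-optimized frame, assuming translation-only motion between them."""
    a = np.float32(face_img) / 255.0
    b = np.float32(iris_img) / 255.0
    (dx, dy), _response = cv2.phaseCorrelate(a, b)
    return dx, dy

def map_point(point_xy, shift_xy):
    """Map a feature located in the face-optimized frame into the iris-optimized frame."""
    return point_xy[0] + shift_xy[0], point_xy[1] + shift_xy[1]

if __name__ == "__main__":
    # Synthetic example: the "iris" frame is the "face" frame shifted by (5, 3) pixels.
    face = np.zeros((240, 320), np.uint8)
    cv2.circle(face, (160, 120), 20, 255, -1)
    iris = np.roll(np.roll(face, 3, axis=0), 5, axis=1)

    shift = register_translation(face, iris)
    eye_in_face = (160, 120)
    print("estimated shift:", shift)
    print("predicted eye location in iris frame:", map_point(eye_in_face, shift))
```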


The image resolution typically required for face recognition is recognized to be approximately 320×240 pixels, as documented in an ISO standard. As part of the image acquisition system, however, we use a camera capable of imaging the face with a much higher resolution, for example 1024×1024 pixels. This higher resolution data enables the detection and localization of features that cannot be detected reliably in the lower resolution data, and also enables more precise and robust detection of features that could be seen in the lower resolution imagery. For example, the precise location of the pupil boundary can be recovered in the high resolution imagery and typically cannot be recovered accurately in the lower resolution imagery. One method for detecting specularities is to threshold the image, for example. One method for detecting the pupil/iris boundary is to perform a Hough transform, for example as described in U.S. Pat. No. 3,069,654. The face recognition algorithm may use the same high-resolution data that is being captured, or data from an additional low resolution camera. An additional method for performing registration is to perform alignment algorithms over the eye region. In this case the eye region in one face image is aligned to sub-pixel precision to the eye region in another face image. Registration can be performed, for example, as described by Horn, “Robot Vision”, MIT Press, pp. 278-299. The precise localization information can be passed to the face recognition algorithm in order to improve its performance.
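As a hedged illustration of those two detection steps on the high-resolution image, the snippet below thresholds bright pixels to flag specular highlights and applies OpenCV's circular Hough transform to propose a pupil boundary; the threshold and Hough parameters are placeholders that would need tuning for a real sensor.

```python
import cv2
import numpy as np

def find_specularities(gray, thresh=250):
    """Specular highlights saturate the sensor, so a simple intensity threshold
    flags them; returns a binary mask of candidate specularity pixels."""
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask

def find_pupil(gray):
    """Propose a pupil boundary with a circular Hough transform.
    Returns (x, y, radius) of the strongest circle, or None."""
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=20, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return float(x), float(y), float(r)

if __name__ == "__main__":
    # Synthetic eye image: dark pupil disk with a small bright specularity.
    img = np.full((256, 256), 180, np.uint8)
    cv2.circle(img, (128, 128), 40, 30, -1)     # pupil
    cv2.circle(img, (140, 116), 4, 255, -1)     # specular highlight
    print("specular pixels:", int(np.count_nonzero(find_specularities(img))))
    print("pupil estimate:", find_pupil(img))
```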


In addition to eye location, the image acquisition system also recovers the zoom or distance of the person. This is accomplished by setting the high-resolution camera to have a very narrow depth of field. This means that features of the face only appear sharply in focus at a specific distance from the camera. Methods can be performed to detect when those features are sharply in focus, and then only those images are selected for face recognition. If a second lower resolution camera is used for acquiring the data used for face recognition, then processing performed on the high-resolution imagery to detect sharply-focused features is used to trigger image acquisition on the lower resolution camera. This ensures that the face images used for face recognition are all at the identical scale. There are several methods available to detect sharply-focused features. For example, an edge filter can be performed over the image (see Sobel, I., Feldman, G., “A 3×3 Isotropic Gradient Operator for Image Processing”, Pattern Classification and Scene Analysis, Duda, R. and Hart, P., John Wiley and Sons, '73, pp 271-272) and then squared at each pixel, and then averaged over the image in order to compute an edge energy score. When the score is maximal or exceeds a threshold, then the person is within the depth of field of the high resolution camera.
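A hedged sketch of that edge-energy focus score: each frame's Sobel gradients are squared and averaged, and the frame with the maximal score (or a score above an empirically tuned threshold) is taken as lying within the depth of field. The synthetic frames below are stand-ins for real captures.

```python
import cv2
import numpy as np

def edge_energy(gray):
    """Mean squared Sobel gradient magnitude; larger values indicate sharper focus."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx * gx + gy * gy))

def sharpest_frame(frames):
    """Return the index of the frame with the maximal edge-energy score."""
    scores = [edge_energy(f) for f in frames]
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    crisp = np.zeros((128, 128), np.uint8)
    cv2.circle(crisp, (64, 64), 30, 255, 2)          # sharp edges (in focus)
    soft = cv2.GaussianBlur(crisp, (15, 15), 0)      # defocused version
    idx, scores = sharpest_frame([soft, crisp])
    print("edge-energy scores:", [round(s, 1) for s in scores])
    print("selected frame index:", idx)              # expect 1 (the crisp frame)
```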


Knowledge of the eye location as well as the zoom of the face allows specific sub-regions of the face to be selected and used for face recognition. For example, one or more rectangular regions of a certain size (in pixels) can be cut out from the high-resolution imagery and used as an input to a face recognition engine, even though only part of the face is being presented. The locations of certain areas, such as the nose, can be predicted using a model of a standard face, and using knowledge of the eye location and the zoom. The face recognition engine is informed that only one or more specific subsets of the face are being presented. In this case we only provide a face recognition database to the face recognition engine that comprises only the same specific subset regions.
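The sub-region prediction can be sketched with simple geometry: given the two eye centers in pixels, a standard face model expressed in units of the inter-eye distance (which also encodes the zoom) places, for example, a nose rectangle. The model ratios below are rough illustrative guesses, not values from the patent.

```python
import numpy as np

# Rough, illustrative face-model ratios expressed in units of inter-eye distance.
NOSE_CENTER_BELOW_EYES = 0.9   # nose region centered ~0.9 eye-distances below the eye line
NOSE_BOX_WIDTH = 0.7
NOSE_BOX_HEIGHT = 0.8

def nose_region(left_eye, right_eye):
    """Predict an (x0, y0, x1, y1) pixel rectangle for the nose from the eye centers.
    The inter-eye distance also encodes the zoom, so the box scales with distance."""
    left = np.asarray(left_eye, float)
    right = np.asarray(right_eye, float)
    eye_dist = np.linalg.norm(right - left)
    mid = (left + right) / 2.0
    center = mid + np.array([0.0, NOSE_CENTER_BELOW_EYES * eye_dist])
    half = np.array([NOSE_BOX_WIDTH, NOSE_BOX_HEIGHT]) * eye_dist / 2.0
    x0, y0 = center - half
    x1, y1 = center + half
    return int(x0), int(y0), int(x1), int(y1)

if __name__ == "__main__":
    # Eye centers found in the high-resolution image; the box can be cropped
    # and passed to the face recognition engine as a partial-face input.
    print(nose_region(left_eye=(400, 380), right_eye=(620, 380)))
```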


If a second camera is used to acquire data for face recognition (or any other biometric recognition, such as ear recognition), then because the location of the first camera is different from that of the second camera, recovering a precise pixel location in the first camera does not simply translate into a corresponding pixel location in the second camera. We accomplish this translation using knowledge of the location of the depth of field of the first camera, which in turn provides a very precise depth of the face with respect to the first camera. Given a pixel location in the first camera, the depth of the person, and camera parameters (such as focal length and relative camera translation) that can be calibrated in advance (see for example, “An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision”, Roger Y. Tsai, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, Fla., 1986, pages 364-374), it is known how to compute the precise pixel location of the corresponding feature in the second camera (see Horn, “Robot Vision”, MIT Press, pp. 202-242, for example).
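Under a pinhole camera model that transfer can be written out directly: back-project the pixel using the first camera's intrinsics and the known depth, apply the calibrated rotation and translation between the cameras, and re-project with the second camera's intrinsics. The intrinsic and extrinsic values below are made-up calibration numbers for illustration.

```python
import numpy as np

def transfer_pixel(px, depth, K1, K2, R, t):
    """Map a pixel (u, v) seen by camera 1 at a known depth (along camera 1's
    optical axis) to the corresponding pixel in camera 2.

    K1, K2: 3x3 intrinsic matrices; R, t: rotation and translation taking
    camera-1 coordinates into camera-2 coordinates (from prior calibration).
    """
    u, v = px
    # Back-project into a 3D point in camera 1's coordinate frame.
    ray = np.linalg.inv(K1) @ np.array([u, v, 1.0])
    X1 = ray * (depth / ray[2])
    # Express the point in camera 2's frame and project.
    X2 = R @ X1 + t
    uvw = K2 @ X2
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

if __name__ == "__main__":
    # Hypothetical calibration: both cameras share orientation; camera 2 sits
    # 10 cm to the right of camera 1.
    K1 = np.array([[2400.0, 0, 512], [0, 2400.0, 512], [0, 0, 1]])   # narrow-field camera
    K2 = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])     # wide-field camera
    R = np.eye(3)
    t = np.array([-0.10, 0.0, 0.0])  # camera-1 origin expressed in camera-2 frame (metres)
    print(transfer_pixel((600, 520), depth=0.75, K1=K1, K2=K2, R=R, t=t))
```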


In addition to ensuring consistent zoom, we also take steps to ensure consistent pose by detecting features in the high-resolution image that would not otherwise be visible with precision in the low resolution imagery. For example, the pupil boundary is only near-circular if the person is looking in the direction of the camera. A method for detecting circular or non-circular boundaries is described in U.S. Pat. No. 3,069,654. If imagery of the iris is near circular in the narrow field of view imagery, then the imagery of the face in the lower resolution camera is more likely to be of a frontal view and is passed to the facial recognition module. Similarly, the pose of the face can be recovered and used to constrain the expected pose of the iris for subsequent processing.
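One hedged way to apply that circularity test: extract the pupil contour from a binary pupil mask, fit an ellipse, and compare the minor-to-major axis ratio against a threshold; ratios near 1 suggest a near-frontal gaze, so the paired lower-resolution face frame can be forwarded for recognition. The threshold and the assumption of a clean pupil mask are simplifications.

```python
import cv2
import numpy as np

def pupil_axis_ratio(pupil_mask):
    """Fit an ellipse to the largest contour in a binary pupil mask and return
    the minor/major axis ratio (1.0 = perfect circle)."""
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                 # fitEllipse needs at least 5 points
        return 0.0
    (_cx, _cy), (w, h), _angle = cv2.fitEllipse(largest)
    return min(w, h) / max(w, h)

def is_frontal_gaze(pupil_mask, min_ratio=0.8):
    return pupil_axis_ratio(pupil_mask) >= min_ratio

if __name__ == "__main__":
    frontal = np.zeros((200, 200), np.uint8)
    cv2.circle(frontal, (100, 100), 40, 255, -1)                      # circular pupil
    averted = np.zeros((200, 200), np.uint8)
    cv2.ellipse(averted, (100, 100), (40, 15), 0, 0, 360, 255, -1)    # foreshortened pupil
    print("frontal gaze:", is_frontal_gaze(frontal))   # expect True
    print("averted gaze:", is_frontal_gaze(averted))   # expect False
```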


The image acquisition system also has a dynamic range control module. This module addresses the problem where data from two different biometrics (e.g. iris and face) cannot be reliably acquired because the dynamic range of the sensor is limited. We address this by two methods.


First, we acquire data at two different but controlled times in such a way that at the first time instance we expect the first biometric imagery (e.g., face imagery) to be within the dynamic range of the sensor given the specific illumination configuration. We then acquire data at a second time instance where we expect the second biometric imagery (e.g., iris imagery) to be within the dynamic range or sensitivity of the sensor. For example, consider a configuration where a camera and an illuminator lie close to each other, and a person is approaching the configuration. Images are continuously captured. As the person approaches the configuration, the reflectance off the biometric tissue (face or iris) increases since the distance from the person to the camera and illumination configuration is decreasing. At one distance it can be expected that data corresponding to one biometric (e.g., the face) will be within the dynamic range, while at a different distance it can be expected that data corresponding to a second biometric (e.g., the iris) will be within the dynamic range. The camera may have a small depth of field due to the resolution requirements of obtaining one biometric (e.g., the iris). However, the resolution required for the other biometric may be much coarser, so that blurring due to imagery lying outside the depth of field has negligible impact on the quality of data acquired for the other biometric (e.g., the face).


A specific implementation of this approach is to a) acquire all images into a stored buffer, b) detect the presence of an eye in the depth of field of the camera using the methods described earlier, c) compute the number of frames back in time where the person was situated at a further distance from the depth of field region (and therefore illuminated less), based on a prediction of their expected motion (which can be, for example, a fixed number based on walking speed), and d) select that imagery from the buffer to be used for face recognition. The eye and face location can be registered over time in the buffer to maintain knowledge of the precise position of the eyes and face throughout the sequence. Registration can be performed, for example, as described by Horn, “Robot Vision”, MIT Press, pp. 278-299.
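A minimal sketch of steps a) through d), assuming a fixed frame rate and a fixed assumed walking speed: a ring buffer holds recent frames, and when an in-focus eye is detected the frame from roughly the computed number of frames earlier (when the subject was farther away and less strongly illuminated) is pulled out for face recognition. The numbers and the eye_in_focus stub are placeholders.

```python
from collections import deque

FRAME_RATE_HZ = 15.0
WALKING_SPEED_M_S = 1.0        # assumed approach speed of the subject
FACE_STANDOFF_M = 0.5          # how much farther away the face frame should be taken

def frames_back_for_face():
    """Step c): number of frames to rewind so the subject was ~FACE_STANDOFF_M
    farther from the depth-of-field region, given the assumed walking speed."""
    seconds_back = FACE_STANDOFF_M / WALKING_SPEED_M_S
    return int(round(seconds_back * FRAME_RATE_HZ))

def eye_in_focus(frame):
    """Placeholder for the focus/eye-detection test described earlier."""
    return frame.get("eye_sharp", False)

def run(buffer_size=64):
    buffer = deque(maxlen=buffer_size)               # step a): rolling store of frames
    face_frame = None
    for i in range(40):
        frame = {"index": i, "eye_sharp": (i == 30)}  # stand-in for a real capture
        buffer.append(frame)
        if eye_in_focus(frame):                       # step b)
            k = frames_back_for_face()                # step c)
            if len(buffer) > k:
                face_frame = buffer[-1 - k]           # step d)
            break
    return face_frame

if __name__ == "__main__":
    print("selected face frame:", run())
```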


The second method for ensuring that data lies within the dynamic range of the camera is to modulate the magnitude of the illumination over a temporal sequence. For example, in one frame the illumination can be controlled to be much brighter than in a subsequent frame. In one implementation, images are always acquired at a low illumination level suitable for one biometric. Features are detected that would only be observed when the face is fully in focus and within the depth of field region. For example, the Laplacian image focus measure can be used. When the face is near or within the depth of field region, based for example on a threshold of the focus measure, then the illumination can be increased in order to obtain imagery of the second biometric (e.g. the iris) within the dynamic range of the camera. When the face has left the depth of field region, then the illumination can revert back to the lower magnitude level.
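A compact sketch of that trigger, assuming grayscale frames and using the variance of the Laplacian as the focus measure: when the measure exceeds a placeholder threshold the IR drive level is raised for iris capture, and otherwise the lower, face-friendly level is used.

```python
import cv2
import numpy as np

LOW_IR_MA = 0             # face-friendly illumination level
HIGH_IR_MA = 400          # iris-friendly illumination level
FOCUS_THRESHOLD = 100.0   # placeholder; tuned empirically for a real sensor

def laplacian_focus(gray):
    """Variance of the Laplacian: rises sharply when the face is in focus."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def next_ir_level(gray):
    """Choose the IR drive current for the next frame from the current frame's focus."""
    return HIGH_IR_MA if laplacian_focus(gray) >= FOCUS_THRESHOLD else LOW_IR_MA

if __name__ == "__main__":
    sharp = np.zeros((128, 128), np.uint8)
    cv2.rectangle(sharp, (32, 32), (96, 96), 255, 2)
    out_of_focus = cv2.GaussianBlur(sharp, (21, 21), 0)
    print("in focus  -> IR level:", next_ir_level(sharp), "mA")
    print("defocused -> IR level:", next_ir_level(out_of_focus), "mA")
```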


In addition to modulating the magnitude of the illumination, we also modulate the wavelength of the illumination. This allows multiple datasets corresponding to the same biometric to be acquired and matched independently, but using the constraint that the data belongs to the same person. For example, person A may match datasets B and C using data captured at one wavelength, but person A may match datasets C and D using data captured at a second wavelength. This gives evidence that person A matches dataset C. This approach is extended to include not only fusing the results of face recognition after processing, but also using the multi-spectral data as a high-dimensional feature vector as an input to the face recognition engine.
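The decision-level part of that fusion can be illustrated with a small sketch: each wavelength yields its own candidate match set, and only identities that survive the intersection across wavelengths are retained, mirroring the person A / dataset C example above. The match sets are hard-coded stand-ins for real matcher output.

```python
def fuse_matches(matches_by_wavelength):
    """Keep only the enrolled identities matched at every wavelength."""
    sets = [set(m) for m in matches_by_wavelength.values()]
    if not sets:
        return set()
    fused = sets[0]
    for s in sets[1:]:
        fused &= s
    return fused

if __name__ == "__main__":
    # Example from the text: one wavelength matches datasets B and C,
    # another matches datasets C and D -> evidence for dataset C.
    matches = {
        "ir_850nm": {"B", "C"},
        "visible":  {"C", "D"},
    }
    print("fused identity candidates:", fuse_matches(matches))
```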


In some embodiments there is an advantage in aligning the acquired images of the face and iris with the processor, thereby reducing the effect of camera or subject motion that may have occurred between the two time instants that each of the images was acquired from the sensor. In this way the features that are optimal for facial recognition in one image can be corresponded to features that are optimal for iris recognition in the other image. This allows processing performed on one image to be used to constrain the results of processing on the other image. For example, recovery of the approximate position and orientation of the face in one image can then be used to constrain the possible position and orientation of the iris in the second image. Similarly, recovery of the position of the iris in one image constrains the possible position of the face in the second image. This can assist in reducing the processing time for one or other of the biometric match processes, for example. In another example, some facial features are most accurately localized and have best signal to noise properties under one set of camera or illumination conditions, whereas another set of facial features are most accurately localized and have best signal to noise properties under another set of camera or illumination settings. This method allows the most accurately localized features of all facial features to be used for facial recognition, thereby providing improved recognition performance.


The present invention, therefore, is well adapted to carry out the objects and attain the ends and advantages mentioned, as well as others inherent therein. While the invention has been depicted and described and is defined by reference to particular preferred embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts. The depicted and described preferred embodiments of the invention are exemplary only and are not exhaustive of the scope of the invention. Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.

Claims
  • 1. A method for acquiring images of an iris and a face from a single person, the method comprising: acquiring in either order and in rapid succession, with a sensor and one or more illuminators, (A) an image of an iris, the sensor and the one or more illuminators having a first parameter set adapted to acquire the image of the iris; (B) an image of the face at a second parameter set; and (C) obtaining an orientation of the face from the image of the face, wherein steps (A) and (B) are within a sufficiently short time of each other to assure that the image of the iris and the image of the face are of the same single person.
  • 2. The method of claim 1 wherein the first parameter set comprises an illumination pulse setting, an illumination amplitude setting, a camera integration time, a camera gain setting, a camera offset setting, and a camera wavelength.
  • 3. The method of claim 1 wherein the acquisitions of the image of the face and the image of the iris are within less than 1.0 second of each other.
  • 4. The method of claim 1 wherein the acquisitions of the image of the face and the image of the iris are within less than 0.25 seconds of each other.
  • 5. The method of claim 1 wherein steps (A) and (B) are repeated at least 2 times per second.
  • 6. The method of claim 1 wherein steps (A) and (B) are repeated at least 10 times per second.
  • 7. The method of claim 1 wherein the illumination settings in steps (A) and (B) are different and the sensor settings in steps (A) and (B) are the same.
  • 8. The method of claim 1 comprising: selecting the acquired image of the iris or the image of the face, performing registration over a captured sequence with respect to the selected image, constraining a search for the iris or face in another image of the iris or face with respect to the selected image.
  • 9. The method of claim 1 further comprising: determining a distance from the sensor to the iris by comparing a diameter of the iris in the image of the iris with a diameter in a reference table and/or comparing a separation value between two eyes of the person in the image of the face with a separation value in the reference table; and adjusting one or more of the parameters in at least one of the first parameter set or the second parameter set in response to the distance and/or the separation value.
  • 10. The method of claim 1 comprising adjusting focus as a function of a measured distance between two eyes of the person in the image of the face.
  • 11. The method of claim 1 comprising adjusting illumination based on a distance from the sensor to the person.
  • 12. The method of claim 1 comprising changing one or more sensor parameters between steps (A) and (B) selected from the group consisting of integration time, illumination, shutter speed, aperture, and gain.
  • 13. The method of claim 1 comprising computing the diameter of the iris upon acquisition of the image of the iris with the sensor.
  • 14. The method of claim 1, further comprising: estimating eye separation, a pose of the iris, and/or a pose of the face based on at least one of the image of the iris or the image of the face.
  • 15. The method of claim 1, further comprising: determining a subject distance and fixed illumination levels; and acquiring face imagery at lower sensitivity and iris imagery at higher sensitivity, the lower or higher sensitivity being adjusted between frames by either increasing the integration time of the sensor or increasing the gain of the sensor.
  • 16. The method of claim 1 comprising determining subject distance and camera setting, and acquiring iris imagery at a relatively high illumination setting and acquiring face imagery at a relatively low illumination setting.
  • 17. The method of claim 1 further comprising aligning the image of the face and the image of the iris.
  • 18. The method of claim 1, wherein (C) includes determining an orientation of at least one of a pan, a tilt, a zoom, or a yaw of the face with respect to the sensor.
  • 19. The method of claim 1, further comprising: (D) estimating an orientation of the iris in the image of the iris based on the orientation of the face; and(E) constraining an analysis of the image of the iris based on the orientation of the iris.
  • 20. A system for acquiring images of an iris and a face of a person and determining that the images of the iris and face are from the same person, the system comprising: a sensor to acquire an image of the iris and an image of the face, an illuminator to illuminate the iris and/or the face during acquisition of the image of the iris and/or the image of the face, and a processor, operably coupled to the sensor and to the illuminator, to: vary at least one of an illumination pulse setting, an illumination amplitude setting, a sensor integration time, a sensor gain setting, a sensor offset setting, and a sensor wavelength, during the acquisition of the image of the iris and/or the image of the face; and obtain an orientation of the face from the image of the face.
  • 21. The system of claim 20, further comprising another illuminator, and wherein the system is adapted to flash one of the illuminators for iris acquisition and the other of the illuminators for face acquisition.
  • 22. The system of claim 20, wherein the system is adapted to change sensor parameters between acquisition of the image of the face and acquisition of the image of the iris.
  • 23. The system of claim 20, wherein the orientation of the face includes an orientation of at least one of a pan, a tilt, a zoom, or a yaw of the face with respect to the sensor.
  • 24. The system of claim 20, wherein the processor is further configured to: estimate an orientation of the iris in the image of the iris based on the orientation of the face; andconstrain an analysis of the image of the iris based on the orientation of the iris.
  • 25. A method of iris data analysis, comprising: (A) acquiring, with a sensor, an image of a face including an iris; (B) obtaining an orientation of the face based on the image acquired in (A), the orientation of the face including an orientation of at least one of a pan, a tilt, or a yaw of the face with respect to the sensor; (C) estimating an orientation of the iris in the image acquired in (A) based on the orientation of the face obtained in (B); and (D) constraining an analysis of the iris in the image acquired in (A) based on the orientation of the iris estimated in (C).
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims benefit from provisional application 60/925259 filed Apr. 19, 2007, which is hereby incorporated by reference in its entirety.

US Referenced Citations (185)
Number Name Date Kind
4641349 Flom et al. Feb 1987 A
5259040 Hanna Nov 1993 A
5291560 Daugman Mar 1994 A
5488675 Hanna Jan 1996 A
5572596 Wildes et al. Nov 1996 A
5581629 Hanna et al. Dec 1996 A
5613012 Hoffman et al. Mar 1997 A
5615277 Hoffman Mar 1997 A
5737439 Lapsley et al. Apr 1998 A
5751836 Wildes et al. May 1998 A
5764789 Pare et al. Jun 1998 A
5802199 Pare et al. Sep 1998 A
5805719 Pare et al. Sep 1998 A
5838812 Pare et al. Nov 1998 A
5901238 Matsushita May 1999 A
5953440 Zhang et al. Sep 1999 A
5978494 Zhang Nov 1999 A
6021210 Camus et al. Feb 2000 A
6028949 McKendall Feb 2000 A
6055322 Salganicoff et al. Apr 2000 A
6064752 Rozmus et al. May 2000 A
6069967 Rozmus et al. May 2000 A
6088470 Camus et al. Jul 2000 A
6144754 Okano et al. Nov 2000 A
6192142 Pare et al. Feb 2001 B1
6246751 Bergl et al. Jun 2001 B1
6247813 Kim et al. Jun 2001 B1
6252977 Salganicoff et al. Jun 2001 B1
6289113 McHugh et al. Sep 2001 B1
6366682 Hoffman et al. Apr 2002 B1
6373968 Okano et al. Apr 2002 B2
6377699 Musgrave et al. Apr 2002 B1
6424727 Musgrave et al. Jul 2002 B1
6483930 Musgrave et al. Nov 2002 B1
6532298 Cambier et al. Mar 2003 B1
6542624 Oda Apr 2003 B1
6546121 Oda Apr 2003 B1
6554705 Cumbers Apr 2003 B1
6594376 Hoffman et al. Jul 2003 B2
6594377 Kim et al. Jul 2003 B1
6652099 Chae et al. Nov 2003 B2
6700998 Murata Mar 2004 B1
6714665 Hanna et al. Mar 2004 B1
6760467 Min et al. Jul 2004 B1
6819219 Bolle et al. Nov 2004 B1
6850631 Oda et al. Feb 2005 B1
6917695 Teng et al. Jul 2005 B2
6944318 Takata et al. Sep 2005 B1
6950536 Houvener Sep 2005 B2
6980670 Hoffman et al. Dec 2005 B1
6985608 Hoffman et al. Jan 2006 B2
7007298 Shinzaki et al. Feb 2006 B1
7020351 Kumar Mar 2006 B1
7047418 Ferren et al. May 2006 B1
7095901 Lee et al. Aug 2006 B2
7146027 Kim et al. Dec 2006 B2
7152782 Shenker et al. Dec 2006 B2
7248719 Hoffman et al. Jul 2007 B2
7271939 Kono Sep 2007 B2
7346472 Moskowitz et al. Mar 2008 B1
7385626 Aggarwal et al. Jun 2008 B2
7398925 Tidwell et al. Jul 2008 B2
7414737 Cottard et al. Aug 2008 B2
7418115 Northcott et al. Aug 2008 B2
7428320 Northcott et al. Sep 2008 B2
7542590 Robinson et al. Jun 2009 B1
7545962 Peirce et al. Jun 2009 B2
7558406 Robinson et al. Jul 2009 B1
7558407 Hoffman et al. Jul 2009 B2
7574021 Matey Aug 2009 B2
7583822 Guillemot et al. Sep 2009 B2
7606401 Hoffman et al. Oct 2009 B2
7616788 Hsieh et al. Nov 2009 B2
7639840 Hanna et al. Dec 2009 B2
7660700 Moskowitz et al. Feb 2010 B2
7693307 Rieul et al. Apr 2010 B2
7697786 Camus et al. Apr 2010 B2
7715595 Kim et al. May 2010 B2
7719566 Guichard May 2010 B2
7770019 Ferren et al. Aug 2010 B2
7787762 Abe Aug 2010 B2
7797606 Chabanne Sep 2010 B2
7801335 Hanna Sep 2010 B2
7847688 Bernard et al. Dec 2010 B2
7869627 Northcott et al. Jan 2011 B2
7925059 Hoyos et al. Apr 2011 B2
7929017 Aggarwal Apr 2011 B2
7929732 Bringer et al. Apr 2011 B2
7949295 Kumar May 2011 B2
7949494 Moskowitz et al. May 2011 B2
7978883 Rouh et al. Jul 2011 B2
8009876 Kim et al. Aug 2011 B2
8025399 Northcott et al. Sep 2011 B2
8028896 Carter et al. Oct 2011 B2
8090246 Jelinek Jan 2012 B2
8092021 Northcott et al. Jan 2012 B1
8132912 Northcott et al. Mar 2012 B1
8159328 Luckhardt Apr 2012 B2
8170295 Fujii et al. May 2012 B2
8181858 Carter et al. May 2012 B2
8195044 Hanna Jun 2012 B2
8212870 Hanna Jul 2012 B2
8214175 Moskowitz et al. Jul 2012 B2
8233680 Bringer et al. Jul 2012 B2
8243133 Northcott et al. Aug 2012 B1
8260008 Hanna Sep 2012 B2
8279042 Beenau et al. Oct 2012 B2
8280120 Hoyos Oct 2012 B2
8285005 Hamza Oct 2012 B2
8289390 Aggarwal Oct 2012 B2
8306279 Hanna Nov 2012 B2
8317325 Raguin et al. Nov 2012 B2
8364646 Hanna Jan 2013 B2
8411909 Zhao et al. Apr 2013 B1
8442339 Martin et al. May 2013 B2
8443202 White et al. May 2013 B2
8553948 Hanna Oct 2013 B2
8604901 Hoyos Dec 2013 B2
8606097 Hanna Dec 2013 B2
8719584 Mullin May 2014 B2
20050084137 Kim et al. Apr 2005 A1
20050084179 Hanna Apr 2005 A1
20060026427 Jefferson Feb 2006 A1
20060028552 Aggarwal et al. Feb 2006 A1
20060073449 Kumar et al. Apr 2006 A1
20060074986 Mallalieu et al. Apr 2006 A1
20060279630 Aggarwal et al. Dec 2006 A1
20060280344 Kee et al. Dec 2006 A1
20070110285 Hanna May 2007 A1
20070206839 Hanna Sep 2007 A1
20070211922 Crowley et al. Sep 2007 A1
20080122578 Hoyos et al. May 2008 A1
20080291279 Samarasekera et al. Nov 2008 A1
20090074256 Haddad Mar 2009 A1
20090097715 Cottard et al. Apr 2009 A1
20090161925 Cottard et al. Jun 2009 A1
20090231096 Bringer et al. Sep 2009 A1
20090274345 Hanna Nov 2009 A1
20100014720 Hoyos et al. Jan 2010 A1
20100021016 Cottard et al. Jan 2010 A1
20100074477 Fujii et al. Mar 2010 A1
20100127826 Saliba et al. May 2010 A1
20100232655 Hanna Sep 2010 A1
20100246903 Cottard Sep 2010 A1
20100253816 Hanna Oct 2010 A1
20100278394 Raguin et al. Nov 2010 A1
20100310070 Bringer et al. Dec 2010 A1
20110002510 Hanna Jan 2011 A1
20110007949 Hanna Jan 2011 A1
20110119111 Hanna May 2011 A1
20110119141 Hoyos et al. May 2011 A1
20110158486 Bringer et al. Jun 2011 A1
20110194738 Choi et al. Aug 2011 A1
20110211054 Hanna Sep 2011 A1
20110277518 Lais et al. Nov 2011 A1
20120127295 Hanna May 2012 A9
20120187838 Hanna Jul 2012 A1
20120212597 Hanna Aug 2012 A1
20120219279 Hanna Aug 2012 A1
20120239458 Hanna Sep 2012 A9
20120240223 Tu Sep 2012 A1
20120242820 Hanna Sep 2012 A1
20120242821 Hanna Sep 2012 A1
20120243749 Hanna Sep 2012 A1
20120257797 Leyvand et al. Oct 2012 A1
20120268241 Hanna Oct 2012 A1
20120293643 Hanna Nov 2012 A1
20120300052 Hanna Nov 2012 A1
20120300990 Hanna Nov 2012 A1
20120321141 Hoyos et al. Dec 2012 A1
20120328164 Hoyos et al. Dec 2012 A1
20130051631 Hanna Feb 2013 A1
20130110859 Hanna May 2013 A1
20130162798 Hanna Jun 2013 A1
20130162799 Hanna Jun 2013 A1
20130182093 Hanna Jul 2013 A1
20130182094 Hanna Jul 2013 A1
20130182095 Hanna Jul 2013 A1
20130182913 Hoyos et al. Jul 2013 A1
20130182915 Hanna Jul 2013 A1
20130194408 Hanna Aug 2013 A1
20130212655 Hoyos et al. Aug 2013 A1
20130294659 Hanna Nov 2013 A1
20140064574 Hanna Mar 2014 A1
20140072183 Hanna Mar 2014 A1
Foreign Referenced Citations (31)
Number Date Country
1020020078225 Oct 2002 KR
1020030005113 Jan 2003 KR
1003738500000 Feb 2003 KR
1020030034258 May 2003 KR
1020030051970 Jun 2003 KR
2003216700000 Jul 2003 KR
1004160650000 Jan 2004 KR
2003402730000 Jan 2004 KR
2003411370000 Jan 2004 KR
2003526690000 May 2004 KR
2003552790000 Jun 2004 KR
2003620320000 Sep 2004 KR
2003679170000 Nov 2004 KR
1020050005336 Jan 2005 KR
2003838080000 May 2005 KR
1020050051861 Jun 2005 KR
1020050102445 Oct 2005 KR
2004046500000 Dec 2005 KR
1005726260000 Apr 2006 KR
1020060081380 Jul 2006 KR
1011976780000 Oct 2012 KR
1013667480000 Feb 2014 KR
1013740490000 Mar 2014 KR
1020140028950 Mar 2014 KR
1020140039803 Apr 2014 KR
1020140050501 Apr 2014 KR
WO 03060814 Jul 2003 WO
WO 2005008567 Jan 2005 WO
WO 2008131201 Oct 2008 WO
WO 2010062371 Jun 2010 WO
WO 2011093538 Aug 2011 WO
Non-Patent Literature Citations (9)
Entry
Bergen, J.R. et al., Hierarchical Model-Based Motion Estimation, European Conf. on Computer Vision (1993).
Daugman, John, “How Iris Recognition Works,” IEEE Transaction on Circuits and Systems for Video Technology, vol. 14, No. 1, pp. 21-30 (Jan. 2004).
Galvin, B., et al., Recovering Motion Fields: An Evaluation of Eight Optical Flow Algorithms, Proc. of the British Machine Vision Conf. (1998).
Kumar, R., et al., Direct recovery of shape from multiple views: a parallax based approach, 12th IAPR Intl Conf. on Pattern Recognition, (Oct. 1994).
Nishino, K., et al. The World in an Eye, IEEE Conf. on Pattern Recognition, vol. 1, at pp. 444-451 (Jun. 2004).
Wildes, R.P., Iris Recognition: An Emerging Biometric Technology, Proc. IEEE 85(9) at pp. 1348-1363 (Sep. 1997).
Written Opinion of the International Searching Authority in PCT/US2008/060791 mailed Aug. 27, 2008.
International Search Report in PCT/US2008/060791 mailed Aug. 27, 2008.
International Preliminary Report on Patentability in PCT/US2008/060791 dated Oct. 20, 2009.
Related Publications (1)
Number Date Country
20140112550 A1 Apr 2014 US
Provisional Applications (1)
Number Date Country
60925259 Apr 2007 US
Continuations (1)
Number Date Country
Parent PCT/US2008/060791 Apr 2008 US
Child 12596019 US