The invention relates to apparatuses and methods for image based biometric recognition. The invention particularly enables obtaining and processing images of one or more biometric features corresponding to a subject, for the purposes of biometric recognition.
Methods for biometric recognition based on facial features, including features of the eye are known. Methods for eye or iris based recognition implement pattern-recognition techniques to compare an acquired image of a subject's eye or iris against a previously stored image of the subject's eye or iris, and thereby determine or verify identity of the subject. A digital feature set corresponding to an acquired image is encoded based on the image, using mathematical or statistical algorithms. The digital feature set or template is thereafter compared with databases of previously encoded digital templates (stored feature sets corresponding to previously acquired eye or iris images), for locating a match and determining or verifying identity of the subject.
Apparatuses for eye based biometric recognition such as iris recognition typically comprise an imaging apparatus for capturing an image of the subject's eye(s) or iris(es) and an image processing apparatus for comparing the captured image against previously stored eye or iris image information. The imaging apparatus and image processing apparatus may comprise separate devices, or may be combined within a single device.
While eye or iris based recognition apparatuses have been previously available as dedicated or stand alone devices, it is increasingly desirable to incorporate recognition capabilities into handheld devices or mobile communication devices or mobile computing devices (collectively referred to as “mobile devices”) having inbuilt cameras, such as for example, mobile phones, smart phones, personal digital assistants, tablets, laptops, or wearable computing devices.
Implementing eye or iris based recognition in mobile devices is convenient and non-invasive and gives individuals access to compact ubiquitous devices capable of acquiring images of sufficient quality to enable recognition (identification or verification) of identity of an individual. By incorporating eye or iris imaging apparatuses into mobile devices, such mobile devices achieve biometric recognition capabilities, which capabilities may be put to a variety of uses, including access control for the mobile device itself.
Processing of eye or iris based images for the purpose of biometric recognition requires clear, well focused images of a subject's eye. In addition to other image quality criteria, eye or iris recognition requires sufficient usable eye or iris area to enable accurate image recognition. Usable eye or iris area is measured as the percentage of the eye or iris that is not occluded by eyelash(es), eyelid(s), specular reflections, ambient specular reflections or otherwise. Occlusion of the eye or iris not only reduces available textural information for comparison, but also decreases accuracy of the eye or iris segmentation process, both of which increase recognition errors.
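By way of non-limiting illustration, the usable eye or iris area measure described above may be computed from a pair of segmentation masks along the following lines; the function and mask names, and the example threshold, are illustrative only and do not form part of any claim:

```python
import numpy as np

def usable_iris_fraction(iris_mask, occlusion_mask):
    """Fraction of segmented iris pixels not occluded by eyelids, eyelashes
    or specular reflections (both inputs are boolean arrays of equal shape)."""
    iris_pixels = np.count_nonzero(iris_mask)
    if iris_pixels == 0:
        return 0.0
    usable = np.count_nonzero(iris_mask & ~occlusion_mask)
    return usable / iris_pixels

# Toy masks: a 6x6 iris region, half of which is marked occluded.
iris = np.zeros((10, 10), bool); iris[2:8, 2:8] = True      # 36 iris pixels
occl = np.zeros((10, 10), bool); occl[2:8, 2:5] = True      # 18 occluded pixels
frac = usable_iris_fraction(iris, occl)                     # 0.5
ok = frac >= 0.5   # e.g. compare against a minimum-usable-area threshold
```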
In a significant percentage of instances where an eye image is found to have insufficient usable eye or iris area, reduction of or interference with usable eye or iris area is found to have been caused by specular reflections from surfaces of eyewear such as eyeglasses, which specular reflections arise from light generated by illuminators associated with the imaging apparatus. Owing to the fact that intensity of light from a specular reflection is several times greater than intensity of diffuse reflections of light off a surface of a subject's eye, light from a specular reflection obscures the area covered by the specular reflection, and interferes with capture of eye or iris texture information corresponding to the area covered by the specular reflection.
Prior art solutions for addressing this problem have relied on positioning a plurality of illuminators within (or in the vicinity of) an imaging apparatus, and obtaining multiple images of a subject's eye or iris—wherein each image is obtained under illumination from an illuminator selected from the plurality of illuminators. The multiple images are thereafter examined and images in which specular reflections are found to obscure part of the subject's eye or iris are discarded. Images in which specular reflections do not obscure the eye or iris are selected for further image processing and image comparison steps. The prior art approach suffers from multiple drawbacks. A first drawback is that positioning of illuminators to ensure that at least one image of the multiple images is not obscured by specular reflections requires knowledge of an approximate diameter of the area to be imaged, and also the approximate distance of the subject's eye from the illuminators—both of which limit the usefulness and adaptability of the imaging apparatus. Additionally, in cases where specular reflections are caused by eyeglass lenses positioned in front of a subject's eye, the position of a specular reflection formed on an eyeglass lens is affected by the curvature of the eyeglass lens—forcing manufacturers of the prior art imaging apparatus to position illuminators based on certain assumptions regarding curvature of eyeglass lenses that are worn by a majority of eyeglass wearing populations. Variations in diameter of a subject's eye, distance of the subject's eye from illuminators, and curvature of an eyeglass lens interposed between illuminators and a subject's eye, reduce the likelihood that any given image will show the subject's eye free of at least partial obscuration by a specular reflection—which in turn results in prior art image processing apparatuses having to acquire and examine a larger number of iris images before a suitable unobscured eye image can be located.
It would be understood that the process of acquiring and examining a larger number of iris images results in an increased consumption of processing resources, a corresponding increase in the time necessary to arrive at an identity decision, and a poor user experience.
There is accordingly a need for improved eye based biometric recognition systems that can minimize the impact of specular reflections while obtaining and processing images of a subject's eye.
The present invention provides a method for image based biometric recognition. The method comprises the steps of (a) performing a first biometric determination based on a first image, the first biometric determination comprising comparing information extracted from the first image against stored biometric information, (b) performing a second biometric determination based on a second image, the second biometric determination comprising comparing information extracted from the second image against stored biometric information, (c) combining outputs of the first biometric determination and the second biometric determination based on at least one predefined method for combining outputs, and (d) rendering a match decision or a non-match decision based on an output of the combining of outputs. The first image may be an image of a field of view of an imaging apparatus under illumination from a first illumination source located at a first position relative to the field of view. The second image may be an image of the field of view under illumination from a second illumination source located at a second position relative to the field of view.
Acquisition of at least one of the first image and the second image may occur under illumination from only one of the first and second illumination sources. The first and second illumination sources may be alternately pulsed during acquisition of the first and second images. In an embodiment, timing of illumination pulses generated by the first and second illumination sources may be substantially synchronized with an exposure timing of an imaging apparatus at which the first and second images are acquired.
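The alternating pulsing described above, with pulses substantially synchronized to the sensor's exposure windows, may be sketched purely for illustration as a schedule that assigns one illumination source to each frame; the source names and timing values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    exposure_start_ms: float
    exposure_end_ms: float
    active_source: str   # which illuminator is pulsed during this exposure

def pulse_schedule(n_frames, frame_period_ms=33.3, exposure_ms=8.0):
    """Alternate two illumination sources frame by frame, each pulse timed
    to cover the sensor's exposure window (values are illustrative)."""
    frames = []
    for i in range(n_frames):
        start = i * frame_period_ms
        source = 'IL1' if i % 2 == 0 else 'IL2'
        frames.append(Frame(i, start, start + exposure_ms, source))
    return frames

sched = pulse_schedule(4)
# Even-numbered frames are lit by IL1, odd-numbered frames by IL2.
```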
In an exemplary embodiment, at least one predefined method for combining outputs may be selected from a group of predefined methods for combining outputs from more than one biometric determination. Selection of the at least one predefined method may be based on an assessment of quality of at least one of the first and second images in accordance with a predetermined criteria.
According to a specific embodiment of the method, the first and second biometric determinations are eye or iris based biometric determinations, while the predetermined criteria is a specified minimum threshold requirement corresponding to usable eye or iris area within an image under assessment. In another embodiment, the first and second biometric determinations are eye or iris based biometric determinations, while at least one of the first and second images includes two partially or wholly imaged eyes or irises.
The at least one predefined rule for combining outputs may be selected from a group of predefined rules for combining outputs from first and second biometric determinations. Selection may be based on an assessment of quality of at least one of the first and second images in accordance with a predetermined criteria, which predetermined criteria may be a specified minimum threshold requirement corresponding to usable eye or iris area within an image under assessment.
Steps (a) to (d) of the method may be repeated in respect of successively acquired image pairs each comprising first and second images, until occurrence of a termination event. The termination event may comprise any of (i) expiry of a predetermined time interval, (ii) performance of step (a) to (d) in respect of a predetermined number of image pairs, (iii) rendering of a match decision, (iv) distance between an image sensor and a biometric feature of interest exceeding a predetermined maximum distance or (v) a determination that at least one image within an image pair does not include a biometric feature of interest.
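The repetition of steps (a) to (d) until a termination event may be illustrated by the following non-limiting sketch, in which `acquire_pair`, `determine` and `combine` are hypothetical caller-supplied callables standing in for the acquisition, biometric determination and combining steps:

```python
import time

def recognition_loop(acquire_pair, determine, combine, max_pairs=10,
                     timeout_s=5.0, max_distance_mm=400):
    """Repeat steps (a)-(d) over successive image pairs until a termination
    event occurs; the interfaces and limits here are illustrative only."""
    deadline = time.monotonic() + timeout_s
    for _ in range(max_pairs):                       # (ii) pair-count limit
        if time.monotonic() > deadline:              # (i) time limit expired
            return 'non-match'
        pair = acquire_pair()
        if pair is None:                             # (v) no biometric feature found
            return 'non-match'
        first, second, distance_mm = pair
        if distance_mm > max_distance_mm:            # (iv) subject too far away
            return 'non-match'
        out = combine(determine(first), determine(second))
        if out == 'match':                           # (iii) match decision rendered
            return 'match'
    return 'non-match'
```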
The invention additionally provides a system for image based biometric recognition. The system may comprise an imaging apparatus comprising at least one image sensor, a first illumination source located at a first position relative to a field of view corresponding to the imaging apparatus, a second illumination source located at a second position relative to the field of view, an illumination controller, and a processing device. The processing device may be configured for (a) performing a first biometric determination based on a first image, the first biometric determination comprising comparing information extracted from the first image against stored biometric information, (b) performing a second biometric determination based on a second image, the second biometric determination comprising comparing information extracted from the second image against stored biometric information, (c) combining outputs of the first biometric determination and the second biometric determination based on at least one predefined rule for combining outputs, and (d) rendering a match decision or a non-match decision based on an output of the combining of outputs. The first image may be an image of the field of view under illumination from the first illumination source, while the second image may be an image of the field of view under illumination from the second illumination source.
The imaging apparatus may acquire at least one of the first image and the second image under illumination from only one of the first and second illumination sources.
The illumination controller may be configured to alternately pulse the first and second illumination sources during acquisition of the first and second images. The illumination controller may additionally or alternately be configured to substantially synchronize illumination pulses generated by the first and second illumination source with an exposure timing of the imaging apparatus.
The processing device may be configured such that at least one predefined rule for combining outputs may be selected from a group of predefined rules for combining outputs from more than one biometric determination. Selection of the at least one predefined rule may be based on an assessment of quality of at least one of the first and second images in accordance with a predetermined criteria.
In a system embodiment, the first and second biometric determinations are eye or iris based biometric determinations, and the predetermined criteria may be a specified minimum threshold requirement corresponding to usable eye or iris area within an image under assessment. In another embodiment, the first and second biometric determinations are eye or iris based biometric determinations, and at least one of the first and second images includes two partially or wholly imaged eyes or irises.
The processing device may be configured such that at least one predefined rule for combining outputs is selected from a group of predefined rules for combining outputs from first and second biometric determinations, which selection is based on an assessment of quality of at least one of the first and second images in accordance with a predetermined criteria. The predetermined criteria may comprise a specified minimum threshold requirement corresponding to usable eye or iris area within an image under assessment.
The processing device may be configured to repeat steps (a) to (d) in respect of successively acquired image pairs each comprising first and second images, until occurrence of a termination event, which termination event may comprise any of (i) expiry of a predetermined time interval, (ii) performance of step (a) to (d) in respect of a predetermined number of image pairs, (iii) rendering of a match decision, (iv) distance between an image sensor and a biometric feature of interest exceeding a predetermined maximum distance or (v) a determination that at least one image within an image pair does not include a biometric feature of interest.
The invention additionally presents a computer program product for iris based biometric recognition, comprising a computer usable medium having a computer readable program code embodied therein, the computer readable program code comprising instructions for performing a method in accordance with one or more of the method embodiments described herein.
The present invention is directed to apparatuses and methods configured for biometric recognition, for example based on eye or iris imaging and processing. In an embodiment, the apparatus of the present invention comprises a mobile device having an eye or iris based recognition system implemented therein.
Although not illustrated in
Imaging apparatuses for eye or iris based biometric recognition may comprise an image sensor and an optical assembly. The imaging apparatus may comprise a conventional solid state still camera or video camera, and the image sensor may comprise a charged coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) device. The optical assembly may comprise a single unitarily formed element, or may comprise an assembly of optical elements selected and configured for achieving desired image forming properties. The imaging apparatus may have a fixed focus, or a variable focus.
The optical assembly and image sensor may be configured and disposed relative to each other, such that (i) one surface of the image sensor coincides with the image plane of the optical assembly and (ii) the object plane of the optical assembly substantially coincides with an intended position of a subject's eye for iris image acquisition. When the subject's eye is positioned at the object plane, an in-focus image of the eye is formed on the image sensor.
The imaging apparatus may additionally comprise one or more illuminators used to illuminate the eye of the subject being identified. The illuminator may comprise any source of illumination including an incandescent light or a light emitting diode (LED).
Segmentation is performed on the acquired image at step 204. Segmentation refers to the step of locating the boundaries of the eye or iris within the acquired image, and cropping the portion of the image which corresponds to the eye or iris. In the case of an iris, since the iris is annular in shape, segmentation typically involves identifying two substantially concentric circular boundaries within the acquired image—which circular boundaries correspond to the inner and outer boundaries of the iris. Several techniques for iris segmentation may be implemented to this end, including for example Daugman's iris segmentation algorithm. Eye or iris segmentation may additionally include cropping of eyelids and eyelashes from the acquired image. It would be understood that segmentation is an optional step prior to feature extraction and comparison that may be avoided entirely. Segmentation is at times understood to comprise a part of feature extraction operations, and is not always described separately.
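A greatly simplified, non-limiting sketch of locating a circular iris boundary in the spirit of Daugman's integro-differential operator follows; the synthetic image, the fixed centre and the radius search ranges are illustrative assumptions, not a statement of the segmentation actually employed:

```python
import numpy as np

def circle_mean(img, cx, cy, r, n=360):
    """Mean intensity sampled along a circle of radius r centred at (cx, cy)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def find_boundary_radius(img, cx, cy, r_min, r_max):
    """Radius at which the circular mean intensity changes most sharply
    (a crude stand-in for Daugman's integro-differential operator)."""
    radii = np.arange(r_min, r_max)
    means = np.array([circle_mean(img, cx, cy, r) for r in radii])
    grad = np.abs(np.diff(means))
    return radii[int(np.argmax(grad)) + 1]

# Synthetic eye-like image: dark pupil (r<10), mid-grey iris (10<=r<25), bright sclera.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
rr = np.hypot(xx - 32, yy - 32)
img = np.where(rr < 10, 0.05, np.where(rr < 25, 0.4, 0.9))

inner = find_boundary_radius(img, 32, 32, 3, 20)   # pupil/iris boundary, near r=10
outer = find_boundary_radius(img, 32, 32, 15, 40)  # iris/sclera boundary, near r=25
```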
Subsequently, feature extraction is performed at step 206—comprising processing image data corresponding to the cropped eye or iris image, to extract and encode salient and discriminatory features that represent an underlying biometric trait. For iris images, features may be extracted by applying digital filters to examine texture of the segmented iris images. Application of digital filters may result in a binarized output (also referred to as an “iris code” or “feature set”) comprising a representation of salient and discriminatory features of the iris. Multiple techniques for iris feature extraction may be implemented, including by way of example, application of Gabor filters.
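By way of illustration, extraction of a binarized iris code by phase-quantizing Gabor filter responses may be sketched as follows; the kernel parameters and the random array standing in for an unwrapped, normalized iris strip are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(length=15, sigma=3.0, freq=0.25):
    """Complex 1-D Gabor kernel: Gaussian envelope times a complex exponential."""
    x = np.arange(length) - length // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)

def iris_code(strip, kernel):
    """Quantize the phase of the Gabor response into two bits per sample:
    one bit for the sign of the real part, one for the imaginary part."""
    rows = [np.convolve(row, kernel, mode='same') for row in strip]
    resp = np.stack(rows)
    return np.stack([(resp.real > 0), (resp.imag > 0)], axis=-1).astype(np.uint8)

rng = np.random.default_rng(0)
strip = rng.random((8, 64))          # stand-in for a normalized iris strip
code = iris_code(strip, gabor_kernel())
# Each sample of the strip contributes two bits to the resulting iris code.
```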
At step 208, a comparison algorithm compares the feature set corresponding to the acquired eye or iris image against previously stored eye or iris image templates from a database, to generate scores that represent a difference (i.e. degree of similarity or dissimilarity) between the input image and the database templates. The comparison algorithm may for example involve calculation of a Hamming distance between the feature sets of two images, wherein the calculated normalized Hamming distance represents a measure of dissimilarity between two images.
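The normalized Hamming distance comparison described above may be illustrated by the following non-limiting sketch, in which optional masks exclude occluded bits from the count (the function name and the fallback value for an empty mask are illustrative):

```python
import numpy as np

def normalized_hamming(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits, counted only where both masks mark
    the corresponding bits as usable (i.e. not occluded)."""
    valid = np.ones(code_a.shape, bool)
    if mask_a is not None:
        valid &= mask_a
    if mask_b is not None:
        valid &= mask_b
    n = valid.sum()
    if n == 0:
        return 1.0   # no usable bits in common: treat as maximally dissimilar
    return np.count_nonzero((code_a != code_b) & valid) / n

a = np.array([0, 1, 1, 0, 1, 0, 1, 0], np.uint8)
b = np.array([0, 1, 0, 0, 1, 1, 1, 0], np.uint8)
hd = normalized_hamming(a, b)   # 2 of 8 bits differ -> 0.25
```

A lower distance indicates greater similarity; a match decision would typically compare this score against a threshold chosen for the required accuracy.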
The feature extraction and comparison steps may be integrated into a single step. Equally, the feature extraction step may be omitted entirely, in which case the comparison step may comprise comparing iris image information corresponding to the received frame, with stored eye or iris information corresponding to at least one eye or iris image. For the purposes of this invention, any references to the step of comparison shall be understood to apply equally to (i) comparison between a feature set derived from a feature extraction step and one or more stored image templates, and (ii) comparison performed by comparing image information corresponding to the received frame, with stored information corresponding to at least one eye or iris image.
At step 210, results of the comparison step are used to arrive at a decision (identity decision) regarding identity of the acquired eye or iris image.
For the purposes of this specification, an identity decision may comprise either a positive decision or a negative decision. A positive decision (a “match” or “match decision”) comprises a determination that the acquired eye or iris image (i) matches an eye or iris image or an eye or iris template already registered or enrolled within the system or (ii) satisfies a predetermined degree of similarity with an eye or iris image or an eye or iris template already registered or enrolled within the system. A negative decision (a “non-match” or “non-match decision”) comprises a determination that the acquired eye or iris image (i) does not match any eye or iris image or any eye or iris template already registered or enrolled within the system or (ii) does not satisfy a predetermined degree of similarity with any eye or iris image or any eye or iris template registered or enrolled within the system. In embodiments where a match (or a non-match) relies on satisfaction of (or failure to satisfy) a predetermined degree of similarity with eye or iris images or templates registered or enrolled within the system—the predetermined degree of similarity may be varied depending on the application and requirements for accuracy. In certain devices (e.g. mobile devices) validation of an identity could result in unlocking of, access authorization or consent for the mobile device or its communications, while failure to recognize an eye or iris image could result in refusal to unlock or refusal to allow access. In an embodiment of the invention, the match (or non-match) determination may be communicated to another device or apparatus which may be configured to authorize or deny a transaction, or to authorize or deny access to a device, apparatus, premises or information, in response to the communicated determination.
Each of
In
Likewise, each of
In
In
In
It has accordingly been discovered that multiple illumination sources associated with an iris imaging apparatus (such as illumination sources Il1 and Il2), rather than being operated at the same time, can be pulsed or flashed alternately (and preferably synchronized with exposure times of the imaging apparatus) for the purpose of iris image capture. By turning off a first illumination source (such as Il2) that is responsible for generating a specular reflection (such as SR8) that interferes with or reduces usable iris area, and simultaneously illuminating the subject's eye with a second illumination source (such as Il1) which generates a specular reflection (such as SR7) that does not interfere with or reduce usable iris area, the accuracy of image recognition processes can be improved.
While not illustrated in
While the embodiment illustrated in
The invention additionally relies on the discovery that results of two or more biometric tests may be combined to provide an enhanced test.
At step 602, the illumination controller activates a first illumination source to generate illumination therefrom. Step 604 comprises acquiring a first image of a subject's eye or iris under illumination from the first illumination source (i.e. while the first illumination source is on). In an embodiment of the invention, a second illumination source is deactivated or turned off for the duration of acquisition of the first image at step 604.
At step 606, the illumination controller activates the second illumination source to generate illumination therefrom. Step 608 comprises acquiring a second image of a subject's eye or iris under illumination from the second illumination source (i.e. while the second illumination source is on). In an embodiment, the first illumination source is deactivated for the duration of acquisition of the second image at step 608.
Step 610 comprises combining (i) results of eye or iris based biometric testing based on the first image with (ii) results of eye or iris based biometric testing based on the second image, and generating a match or non-match decision based on the combined results of eye or iris recognition testing.
The combining of results at step 610 may be based on one or more rules for combining results of biometric testing. In an embodiment, a rule for combining results may be selected from among a plurality of different rules, each of which specify a unique method of combining results of eye or iris recognition testing based on a first eye or iris image with results of eye or iris recognition testing based on a second eye or iris image.
Each rule for combining results of testing using first and second eye or iris images, may describe a method of combining results of two independent biometric tests. Exemplary rules for combining results may include:
A decision regarding selection of a specific rule from among the plurality of rules for combining results may be arrived at in response to a determination that assessed image quality of one or both of the first and second imaged eyes meets a predetermined criteria. Exemplary predetermined criteria include:
In an embodiment of the invention, each predefined criteria may be mapped to one or more specific rules for combining results of eye or iris recognition testing respectively based on the first and second eye or iris images—such that said one or more specific rules for combining results would be invoked responsive to a determination that the corresponding predefined criteria has been met.
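A non-limiting sketch of mapping quality criteria to rules for combining results follows; the OR/AND fusion rules, the usable-area threshold and the rule names are illustrative assumptions rather than an exhaustive statement of the rules contemplated herein:

```python
def fuse_or(d1, d2):
    """Match if either determination matches (liberal combining rule)."""
    return 'match' if 'match' in (d1, d2) else 'non-match'

def fuse_and(d1, d2):
    """Match only if both determinations match (conservative combining rule)."""
    return 'match' if d1 == d2 == 'match' else 'non-match'

RULES = {'both_images_good': fuse_and, 'one_image_good': fuse_or}

def select_rule(quality_1, quality_2, min_usable_area=0.5):
    """Map an image-quality assessment to a combining rule: when both images
    meet the usable-area threshold, demand agreement; otherwise accept either."""
    if quality_1 >= min_usable_area and quality_2 >= min_usable_area:
        return RULES['both_images_good']
    return RULES['one_image_good']

rule = select_rule(0.8, 0.3)            # second image below threshold
decision = rule('match', 'non-match')   # liberal rule applies -> 'match'
```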
In implementation, it would be understood that the first image frame of a subject's eye and the second image frame of the subject's eye may comprise successive image frames acquired or received from the imaging apparatus (for example, successive image frames within a video clip of the subject's eye(s)). In another embodiment, the first image frame and second image frame of the subject's eye are selected from a set of image frames sequentially generated by the imaging apparatus, such that the first image frame is separated from the second image frame by at least one intermediate image frame—which intermediate image frame is generated by the imaging apparatus as a sequentially intermediate frame between the first image frame and the second image frame. Selection of the first and second image frames from among the set of sequentially generated image frames may be based on one or more predetermined criteria, including (i) a prescribed number of intermediate frames separating the first image frame and the second image frame, (ii) a predefined time interval separating time of generation (at an image sensor) or time of receiving (at a processor or memory) of the first image frame and the second image frame respectively (in a preferred embodiment, this time interval may be any interval between 5 milliseconds and 2 seconds), or (iii) availability status of a resource required for performing image processing or image comparison or combining of results of image comparisons.
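The frame selection criteria described above may be illustrated as follows; the gap thresholds and the intermediate-frame count are illustrative values only:

```python
def select_frame_pair(timestamps_ms, min_gap_ms=5, max_gap_ms=2000,
                      min_intermediate=1):
    """Pick the first pair (i, j) of frame indices separated by at least
    `min_intermediate` intermediate frames and by a capture-time gap
    within [min_gap_ms, max_gap_ms]; return None if no pair qualifies."""
    n = len(timestamps_ms)
    for i in range(n):
        for j in range(i + 1 + min_intermediate, n):
            gap = timestamps_ms[j] - timestamps_ms[i]
            if min_gap_ms <= gap <= max_gap_ms:
                return i, j
    return None

ts = [0, 33, 66, 99, 132]           # capture timestamps at roughly 30 fps
pair = select_frame_pair(ts)        # (0, 2): one intermediate frame, 66 ms gap
```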
While not illustrated in
At step 702, the illumination controller activates a first illumination source to generate illumination therefrom. Step 704 comprises acquiring a first image of a subject's eye or iris under illumination from the first illumination source (i.e. while the first illumination source is on). In an embodiment of the invention, a second illumination source is deactivated or turned off for the duration of acquisition of the first image at step 704.
At step 706, the illumination controller activates the second illumination source to generate illumination therefrom. Step 708 comprises acquiring a second image of a subject's eye or iris under illumination from the second illumination source (i.e. while the second illumination source is on). In an embodiment, the first illumination source is deactivated for the duration of acquisition of the second image at step 708.
Step 710 comprises assessing image quality of each of the first image and the second image.
Responsive to the assessed image quality of at least one of the first image and the second image matching at least one predefined criteria, step 712 selects a corresponding rule for combining results of eye or iris recognition testing based on the first image with results of eye or iris recognition testing based on the second image. The rule for combining results may be selected from among a plurality of different rules, each of which specify a unique method of combining results of eye or iris recognition testing based on the first image with results of eye or iris recognition testing based on the second image. Step 714 combines results of eye or iris recognition testing respectively based on the first and second images and generates a match/non-match decision based on the combined results of the eye or iris recognition testing. For the purposes of the embodiment of
While the illustrations of
While the embodiments of the present invention have been described in terms of eye or iris based biometric recognition, the invention can alternately be implemented or adapted for any image based biometric recognition techniques, or for that matter any techniques that rely on image analysis or image recognition.
The system 802 comprises at least one processor 804 and at least one memory 806. The processor 804 executes program instructions and may be a real processor. The processor 804 may also be a virtual processor. The computer system 802 is not intended to suggest any limitation as to scope of use or functionality of described embodiments. For example, the computer system 802 may include, but is not limited to, one or more of a general-purpose computer, a programmed microprocessor, a micro-controller, an integrated circuit, and other devices or arrangements of devices that are capable of implementing the steps that constitute the method of the present invention. In an embodiment of the present invention, the memory 806 may store software for implementing various embodiments of the present invention. The computer system 802 may have additional components. For example, the computer system 802 includes one or more communication channels 808, one or more input devices 810, one or more output devices 812, and storage 814. An interconnection mechanism (not shown) such as a bus, controller, or network, interconnects the components of the computer system 802. In various embodiments of the present invention, operating system software (not shown) provides an operating environment for various software executing in the computer system 802, and manages different functionalities of the components of the computer system 802.
The communication channel(s) 808 allow communication over a communication medium to various other computing entities. The communication medium conveys information such as program instructions or other data over a communication media. The communication media includes, but is not limited to, wired or wireless methodologies implemented with an electrical, optical, RF, infrared, acoustic, microwave, Bluetooth or other transmission media.
The input device(s) 810 may include, but are not limited to, a touch screen, a keyboard, mouse, pen, joystick, trackball, a voice device, a scanning device, or any other device that is capable of providing input to the computer system 802. In an embodiment of the present invention, the input device(s) 810 may be a sound card or similar device that accepts audio input in analog or digital form. The output device(s) 812 may include, but are not limited to, a user interface on CRT or LCD, printer, speaker, CD/DVD writer, LED, actuator, or any other device that provides output from the computer system 802.
The storage 814 may include, but is not limited to, magnetic disks, magnetic tapes, flash memory, CD-ROMs, CD-RWs, DVDs, any types of computer memory, magnetic stripes, smart cards, printed barcodes or any other transitory or non-transitory medium which can be used to store information and can be accessed by the computer system 802. In various embodiments of the present invention, the storage 814 contains program instructions for implementing the described embodiments.
While not illustrated in
The present invention may be implemented in numerous ways including as a system, a method, or a computer program product such as a computer readable storage medium or a computer network wherein programming instructions are communicated from a remote location.
The present invention may suitably be embodied as a computer program product for use with the computer system 802. The method described herein is typically implemented as a computer program product, comprising a set of program instructions which is executed by the computer system 802 or any other similar device. The set of program instructions may be a series of computer readable codes stored on a tangible medium, such as a computer readable storage medium (storage 814), for example, diskette, CD-ROM, ROM, flash drives or hard disk, or transmittable to the computer system 802, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analog communications channel(s) 808, or implemented in hardware such as in an integrated circuit. The implementation of the invention as a computer program product may be in an intangible form using wireless techniques, including but not limited to microwave, infrared, Bluetooth or other transmission techniques. These instructions can be preloaded into a system or recorded on a storage medium such as a CD-ROM, or made available for downloading over a network such as the Internet or a mobile telephone network. The series of computer readable instructions may embody all or part of the functionality previously described herein.
While the exemplary embodiments of the present invention are described and illustrated herein, it will be appreciated that they are merely illustrative. It will be understood by those skilled in the art that various modifications in form and detail may be made therein without departing from or offending the spirit and scope of the invention as defined by the appended claims.
This application is a continuation-in-part application of U.S. patent application Ser. No. 14/738,505, filed on Jun. 12, 2015, now pending, which is herein incorporated by reference in its entirety.
Number | Date | Country
---|---|---
Parent 14738505 | Jun 2015 | US
Child 14885247 | | US