Invariant radial iris segmentation

Information

  • Patent Grant
    8442276
  • Patent Number
    8,442,276
  • Date Filed
    Friday, March 10, 2006
  • Date Issued
    Tuesday, May 14, 2013
Abstract
A method and computer product are presented for identifying a subject by biometric analysis of an eye. First, an image of the iris of a subject to be identified is acquired. Texture enhancements may be applied to the image as desired, but are not necessary. Next, the iris image is radially segmented into a selected number of radial segments, for example 200 segments, each segment representing 1.8° of the iris scan. After segmenting, each radial segment is analyzed, and the peaks and valleys of color intensity are detected in the iris radial segment. These detected peaks and valleys are mathematically transformed into a data set used to construct a template. The template, constructed from the transformed data sets of all the radial segments, represents the subject's scanned and analyzed iris. After construction, this template may be stored in a database, or used for matching purposes if the subject is already registered in the database.
Description
RELATED APPLICATIONS

This application is related to U.S. Non-Provisional patent application Ser. No. 11/043,366, entitled “A 1D Polar Based Segmentation Approach,” filed Jan. 26, 2005. The disclosure of the related document is hereby fully incorporated by reference.


FIELD OF THE INVENTION

The invention is directed towards biometric recognition, specifically to an improved approach to radial iris segmentation.


BACKGROUND OF THE INVENTION

Biometrics is the study of automated methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits. In information technology, biometric authentication refers to technologies that measure and analyze human physical characteristics for authentication purposes. Examples of physical characteristics include fingerprints, eye retinas and irises, facial patterns and hand measurements.


A leading concern of existing biometric systems is that the individual features that distinguish one human from another can easily be missed due to inaccurate acquisition of the biometric data or due to deviations in operational conditions. Iris recognition has been seen as a low-error, high-success method of retrieving biometric data. However, iris scanning and image processing have been costly and time consuming. Fingerprinting, facial patterns and hand measurements have afforded cheaper, quicker solutions.


During the past few years, iris recognition has matured sufficiently to allow it to compete economically with other biometric methods. However, inconsistency of acquisition conditions of iris images has led to rejecting valid subjects or validating imposters, especially when the scan is done under uncontrolled environmental conditions.


In contrast, under controlled conditions, iris recognition has proven to be very effective. This is true because iris recognition systems rely on more distinct features than other biometric techniques, such as facial patterns and hand measurements, and therefore provide a reliable solution by offering a much more discriminating biometric data set.


Although prototype systems and techniques had been proposed in the early 1980s, it was not until research in the 1990s that autonomous iris recognition systems were developed. The concepts discovered in this research have since been implemented in field devices. The overall approach is based on the conversion of a raw iris image into a numerical code that can be easily manipulated. The robustness of this approach, and of the alternative approaches that followed, relies heavily on accurate iris segmentation. Iris segmentation is the process of locating and isolating the iris from the other parts of the eye, and it is essential to the system's operation. Computing iris features requires a high quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such an acquisition process is sensitive to the acquisition conditions and has proven to be a very challenging problem. Current systems try to maximize segmentation accuracy by constraining the operating conditions. Constraints may be placed on the lighting levels, position of the scanned eye, and environmental temperature. These constraints can lead to a more accurate iris acquisition, but are not practical in all real-time operations.


Significant progress has been made to mitigate this problem; however, these developments were mostly built around the original methodology, namely, circular/elliptical contour segmentation that has proven to be problematic under uncontrolled conditions. Other work introduces concepts which compete with the above discussed methodology, but still suffer similar issues with segmentation robustness under uncontrolled conditions.


Thus, it would be desirable to have an iris recognition technique that is well suited for iris-at-a-distance applications, i.e., a system operating under unconstrained conditions, while still providing an accurate, real-time result based on the collected biometric data.


SUMMARY OF THE INVENTION

In accordance with the principles of the present invention, a new feature extraction technique is presented along with a new encoding scheme resulting in an improved biometric algorithm. This new extraction technique is based on a simplified polar segmentation (POSE). The new encoding scheme utilizes the new extraction technique to extract actual local iris features using a process with low computational load.


The encoding scheme does not rely on accurate segmentation of the outer bounds of the iris region, which is essential to prior art techniques. Rather, it relies on the identification of peaks and valleys in the iris (i.e., the noticeable points of change in color intensity in the iris). Advantageously, regardless of a chosen filter, the encoding scheme does not rely on the exact location of the occurrence of peaks detected in the iris, but rather relies on the magnitude of detected peaks relative to a referenced first peak. Since this algorithm does not rely on the exact location of pattern peaks/valleys, it does not require accurate segmentation of the outer boundary of the iris, which in turn eliminates the need for a normalization process.


The overall function of the present invention can be summarized as follows. First, the iris is preprocessed and then localized using an enhanced segmentation process based on a POSE approach, herein referred to as invariant radial POSE segmentation. During the segmentation process, all obscurant parts (i.e. pupil, eyelid, eyelashes, sclera and other non-essential parts of the eye) are dropped out of the analysis if the obscuration reaches the inner border of the iris. Lighting correction and contrast improvement are applied to compensate for differences in image lighting and reflective conditions. The captured iris image is unwrapped into several radial segments and each segment is analyzed to generate a one dimensional dataset representing the peak and/or valley data for that segment. The peak and/or valley data is one dimensional in the sense that peaks and/or valleys are ordered in accordance with their position along a straight line directed radially outward from the center of the iris. In one embodiment, the iris image is unwrapped into a one-dimensional polar representation of the iris signature, in which the data for only a single peak per radial segment is stored. In one implementation, the magnitude of the outermost peak from the pupil-iris border per segment is stored. In another implementation, the magnitude of the largest peak in the segment is stored. In another embodiment, the data for a plurality of peaks and/or valleys is stored per radial segment. In this embodiment, each peak and/or valley is recorded as a one-bit value indicating its magnitude relative to another peak and/or valley in the segment, such as the immediately preceding peak/valley along the one dimensional direction. The data for all of the radial segments is concatenated into a template representing the data for the entire iris scan. That template can be compared to stored templates to find a match.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a scanned iris image based on existing techniques.



FIG. 2A illustrates a scanned iris image utilizing the principles of the present invention.



FIG. 2B illustrates the scanned iris image of FIG. 2A mapped into a one dimensional iris map.



FIG. 3 illustrates a flow chart showing one embodiment of the present invention.



FIG. 4A illustrates a mapping of the iris segmentation process according to the principles of the present invention.



FIG. 4B illustrates an enhanced mapping of the iris scan according to principles of the present invention.



FIG. 5A illustrates a first encoding scheme according to principles of the present invention.



FIG. 5B illustrates a second encoding scheme according to principles of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

A leading concern of existing biometric systems is that individual features which identify humans from others can be easily missed due to the lack of accurate data acquisition or due to deviations in operational conditions. During the past few years, iris recognition has matured to a point that allows it to compete with more common biometric means, such as fingerprinting. However, inconsistencies in acquisition conditions of iris images often lead to rejecting valid subjects or validating imposters, especially under uncontrolled operational environments, such as environments where the lighting is not closely controlled. In contrast, under controlled conditions, iris recognition has proven to be very effective. This is so because iris recognition systems rely on more distinct features than other common biometric means, providing a reliable solution by offering a more discriminating biometric.



FIG. 1 shows a scanned eye image with the borders identified according to conventional prior art segmentation techniques. Here, iris 105 is defined by outer iris border 110. However, outer iris border 110 is obstructed by the eyelid at 107 and a true border cannot be determined. The system must estimate the missing portion of the outer iris border 110. Computing iris features requires a high-quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such a process is sensitive to the acquisition conditions and has proven to be a challenging problem (especially for uncooperative subjects that are captured at a distance). By constraining operational conditions, such as carefully controlling lighting and the position of a subject's eye, current systems attempt to resolve segmentation problems, but these approaches are not always practical.


The major downfall of these prior art techniques is that the system focuses on the outer border of the iris to normalize the iris scaling and allow for uniform matching. Due to many factors, including eyelids and eyelashes that may obscure the outer iris border and lightly colored irises that may be difficult to distinguish from the sclera, the outer border may be impossible to map accurately. This results in an incorrect segmentation of the subject's iris, which, in turn, negatively impacts the rest of the biometric recognition process. In addition, when applied to uncontrolled conditions, these segmentation techniques produce many errors. Such conditions may include subjects captured at various ranges from the acquisition device or subjects who may not have their eye directly aligned with the imaging equipment.



FIG. 2A shows a similarly scanned eye image as in FIG. 1. In this figure, principles of the present invention are applied. This approach is based on a simplified polar segmentation (POSE) and a newer encoding scheme that does not rely on accurate segmentation of the outer boundary of the iris region. A detailed explanation of POSE can be found in the previously mentioned, related U.S. Non-Provisional patent application Ser. No. 11/043,366, entitled “A 1D Polar Based Segmentation Approach”. The present invention utilizes an enhanced POSE technique. This enhanced POSE technique, or invariant radial POSE, focuses on detecting the peaks and valleys of the iris, i.e., the significant discontinuities in color intensity between the pupil and the sclera within defined radial segments of the iris. In other words, a peak is a point where the color intensity on either side of that point (in the selected direction) is less than the color intensity at that point (and the discontinuity exceeds some predetermined threshold, so as to prevent every small discontinuity from being registered as a recorded peak). Likewise, a valley is a point where the color intensity on either side of that point in the selected direction is greater than the color intensity at that point (with the same qualifications).


This technique is referred to as being one dimensional because, rather than collecting two dimensional image data per radial segment as in the prior art, the collected iris data per radial segment has only one signal dimension. This process eliminates the need to: estimate an obstructed outer boundary of the iris; segment the outer bound of the iris; and calculate exact parameters of circles, ellipses, or any other shapes needed to estimate a missing portion of the outer boundary.


Iris 205 is scanned utilizing the invariant radial POSE process. Rather than concentrating on the outer border of the iris as the process in FIG. 1 does, the invariant radial POSE process locates and identifies the peaks and valleys present in the scanned iris and creates an iris map. FIG. 2A helps illustrate one form of iris map that can represent the peak and/or valley data in an iris scan. In FIG. 2A, the data for only one peak is stored per radial segment. To construct an iris map in accordance with this embodiment of the invention, first the iris is segmented into a set number of radial segments, for example 200 segments. Thus, each segment represents a 1.8 degree slice of a complete 360 degree scan of the iris. After each of the 200 segments is analyzed, the data for one characteristic peak in the segment is stored. In the embodiment illustrated in FIGS. 2A and 2B, the peak selected for representation in each radial segment is the peak 210 that is outermost from the pupil-iris border. In alternative embodiments, the selected peak may be the greatest peak (other than the peak at the pupil-iris border), the sharpest peak, or the innermost peak. If the criterion is the outermost peak, it is preferable to use the outermost peak within a predefined distance of the pupil-iris border since, as one gets closer to the iris-sclera border, the peaks and valleys tend to become less distinct and, therefore, less reliable as a criterion for identifying subjects.


Alternately, the data corresponding to valleys instead of peaks may be recorded. In fact, the recorded data need not necessarily even be a peak or valley, but may be any other readily identifiable color or contrast characteristic. The distance from the center of the pupil to whichever peak or valley (or other characteristic) is selected for representation is stored. In a preferred embodiment, the radial distance is reported relative to the radial distance of a reference peak from the center of the pupil. In this manner, the process does not require a normalization procedure of the iris scan in order to compensate for changes to the iris due to environmental conditions (e.g., pupil dilation, ambient light). In a preferred embodiment of the invention, the reference peak is the peak at the pupil-iris border in that segment, which usually, if not always, will be the greatest peak in the segment.



FIG. 2B shows the scanned iris mapped into a one dimensional iris map. To construct this iris map, first the iris is segmented into a predetermined number of radial segments, for example 200 segments, each segment representing 1.8 degrees of a complete 360 degree scan of the iris. After each of the 200 segments is analyzed, a reference peak is selected in each segment, the reference peak being the peak at the pupil-iris border in the analyzed radial segment (which usually, if not always, will be the greatest peak in the segment). The iris is unwrapped to create the graph shown in FIG. 2B, with each point 215 representing the aforementioned relative radial distance of the corresponding peak for each of the radial segments.
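The following sketch makes the per-segment analysis concrete. It is a minimal illustration in Python/NumPy, not the patented implementation: the function names, the peak-detection rule, and parameters such as the search depth and threshold are assumptions. For each of 200 radial segments it samples intensity outward from the pupil-iris border, detects significant peaks in the intensity derivative, and stores the radial position of the outermost qualifying peak relative to the reference point at the pupil-iris border.

```python
import numpy as np

def radial_profile(img, cx, cy, theta, r_start, r_end, n_samples=64):
    """Sample gray-level intensity along a ray from (cx, cy) at angle theta."""
    radii = np.linspace(r_start, r_end, n_samples)
    xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return radii, img[ys, xs].astype(float)

def detect_peaks(signal, threshold):
    """Indices of local maxima that rise above both neighbors by more than threshold."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] - signal[i - 1] > threshold and
                signal[i] - signal[i + 1] > threshold):
            peaks.append(i)
    return peaks

def build_iris_map(img, cx, cy, pupil_radius, n_segments=200,
                   search_depth=40, threshold=5.0):
    """Relative radial position of the outermost qualifying peak per segment."""
    iris_map = np.full(n_segments, np.nan)
    for k in range(n_segments):
        theta = 2 * np.pi * k / n_segments           # 1.8 degrees per segment for 200 segments
        radii, prof = radial_profile(img, cx, cy, theta,
                                     pupil_radius, pupil_radius + search_depth)
        deriv = np.gradient(prof)                    # color-intensity variation along the ray
        peaks = detect_peaks(np.abs(deriv), threshold)
        if peaks:
            # reference: the pupil-iris border at the start of the profile; store the
            # outermost peak's radial distance relative to that reference
            iris_map[k] = radii[peaks[-1]] - radii[0]
    return iris_map
```

Plotting the resulting array against segment index yields a curve analogous to the points 215 of FIG. 2B.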


For purposes of visualization, one may consider the conversion of the peak and valley data into the graph shown in FIG. 2B to be an “unwrapping” of the iris about the normal of the pupil-iris border (i.e., perpendicular to the border). For example, the pupil-iris border is essentially a circular border. Imagine that the border is a string; unwrapping that string into a straight line leaves the reference peak from each radial segment represented as a discrete point 215, as shown in FIG. 2B.


The preceding explanation is merely for the purposes of illustration in helping a person unskilled in the related arts appreciate the process viscerally. Those of skill in the related arts will understand that the conversion of the peaks and valleys information into a one dimensional dataset is actually a rather simple mathematical transformation.


Regardless of conditions such as lighting and temperature (which affect pupil dilation or contraction), this one dimensional iris representation will be unchanged with respect to the relative locations of the peaks in each angular segment, although such conditions may shift the entire curve 215 upwards or downwards. While pupil dilation and other factors may affect the absolute locations of the peaks or valleys (i.e., their actual distances from the pupil border), they will not affect the locations of the peaks and valleys in the iris relative to the reference peaks (or valleys).



FIG. 4A helps illustrate the formation of an alternative and more robust representation of the scanned iris image data in which the data for multiple peaks, rather than just one characteristic peak, is recorded per radial segment. The center of the pupil is indicated by cross 405. The horizontal or x-axis represents the radial distance from the pupil-iris border (i.e., perpendicular to the pupil-iris border), and the vertical or y-axis represents the derivative of the color intensity. The peak at the pupil-iris border is indicated at 411. All other peaks and valleys in the segment are represented graphically relative to the reference peak so that no data normalization will be necessary.


Note that each radial segment usually will be several pixels wide at the pupil border 410, and becomes wider as the distance from the pupil-iris border increases. Therefore, in order to generate the one dimensional data represented in the graph of FIG. 4A, the color intensity derivative data represented by the y-axis should be averaged or interpolated over the width of the segment. This interpolated data is represented by line 415, in which each significant data peak is marked by reference numeral 420.
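As a sketch of that averaging step (again with hypothetical names and parameters, assuming NumPy), one can sample several rays spanning a segment's angular width and average their intensity-derivative profiles into a single one dimensional signal:

```python
import numpy as np

def segment_profile(img, cx, cy, theta0, theta1, r_start, r_end,
                    n_rays=5, n_samples=64):
    """Average the radial intensity derivative over several rays spanning one segment."""
    radii = np.linspace(r_start, r_end, n_samples)
    acc = np.zeros(n_samples)
    for theta in np.linspace(theta0, theta1, n_rays):
        xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        acc += np.gradient(img[ys, xs].astype(float))
    return radii, acc / n_rays    # one-dimensional derivative profile for the segment
```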



FIG. 4B helps illustrate a further embodiment. Graph 425 shows a graphical representation of the iris, such as the one illustrated in FIG. 4A. As with the FIG. 4A embodiment, in the feature extraction step, preferably, each individual peak is isolated and recorded with respect to the reference peak. In addition, however, in order to focus solely on the peaks, enhancement curve 430 is removed from the one dimensional iris representation. Enhancement curve 430 is the component of the graph that can be removed without affecting the magnitude of each peak relative to the next, resulting in a normalized data set that focuses solely on the relative peak magnitudes. Using standard wavelet analysis well known to one of ordinary skill in the art, the enhancement curve can be calculated as the approximation (DC) component of the decomposition of the graph of FIG. 4A. Once the enhancement curve is removed, a segmented graph 435 results, where each peak is represented as a point 437 on the graph. With the removal of the enhancement curve, graph 425 is now normalized based on peak occurrence. As will be discussed in more detail below, in at least one embodiment of the invention, the peak data will be encoded very efficiently by encoding each peak relative to an adjacent peak using as few as one or two bits per peak. Accordingly, removing the enhancement curve simplifies the processing while preserving all needed information.
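A minimal sketch of the enhancement-curve removal, assuming the PyWavelets package and an arbitrary choice of wavelet and decomposition level (neither is specified here), computes the approximation (DC) component of the segment profile and subtracts it:

```python
import numpy as np
import pywt

def remove_enhancement_curve(profile, wavelet="db4", level=3):
    """Subtract the low-frequency approximation so only relative peak structure remains."""
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    # keep only the approximation coefficients; zero out all detail coefficients
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    enhancement_curve = pywt.waverec(approx_only, wavelet)[:len(profile)]
    return profile - enhancement_curve
```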



FIG. 3 illustrates a flow chart showing an embodiment of the present invention.


In Step 305, a preprocessing step takes place. The preprocessing may be essentially conventional. In this step, texture enhancements are performed on the scanned image. Obscurant parts of the image, such as the pupil, eyelids, eyelashes, sclera and other non-essential parts of the eye, are dropped out of the analysis. In order to reduce the side effects of outside illumination, gray scale variations and other artifacts (e.g. colored contact lenses), the system preprocesses the image using a local radial texture pattern (LRTP). However, it should be noted that the texture enhancements are not essential to the operation of the system.


The image is preprocessed using a local radial texture pattern similar to, but revised from, the one proposed in Y. Du, R. Ives, D. Etter, T. Welch, C.-I. Chang, “A one-dimensional approach for iris identification”, EE Dept., US Naval Academy, Annapolis, Md., 2004:







\[ \mathrm{LRTP}(x, y) = I(x, y) - \frac{1}{A}\sum_{\omega} I(x, y) \]

where

    • I(x, y)=the color intensity of the pixel located at the two dimensional coordinate x, y;
    • ω=the curve that determines the neighboring points of the pixel x, y; and
    • A=the area (number of pixels) of ω.


This LRTP approach differs from that method in that it avoids the discontinuities introduced by the block analysis adopted in the aforementioned reference, while preserving an approximation of the true mean value by using a window mean instead. The mean of each window constitutes a coarse estimate of the background illumination, and it is therefore subtracted from the actual intensity values, as shown in the equation above.
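For illustration, the windowed mean subtraction of the equation above can be sketched with a uniform filter; the square window size stands in for the neighborhood curve ω and is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lrtp(image, window=15):
    """LRTP(x, y) = I(x, y) minus the mean intensity of the neighborhood around (x, y)."""
    image = image.astype(float)
    local_mean = uniform_filter(image, size=window)  # coarse background-illumination estimate
    return image - local_mean
```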


In Step 310, the Invariant Radial POSE segmentation process is performed. This approach differs from traditional techniques as it does not require iris segmentation at the outer border of the iris, i.e., the iris-sclera border.


Particularly, the process first roughly determines the iris center in the original image, and then refines the center estimate and extracts the edges of the pupil. A technique for locating the center of the pupil is disclosed in the aforementioned U.S. patent application Ser. No. 11/043,366, which is incorporated by reference, and need not be discussed further. Techniques for locating the pupil-iris border also are disclosed in the aforementioned patent application and need not be discussed further.


Once the pupil edge has been found, the segmentation process begins. The radial scan of the iris is done in radial segments, e.g., 200 segments of 1.8 degrees each.


After the segmentation and scanning in Step 310, the process proceeds to Step 315. In Step 315, the actual feature extraction occurs based on the segmented image obtained in Step 310. The feature extraction process can be performed, for example, in accordance with any of the three embodiments previously described in connection with FIGS. 2A and 2B, 4A, and 4B, respectively, which detect changes in the graphical representation of the iris while not relying on the absolute locations at which those changes occur. Particularly, the absolute locations change as a function of the natural dilation and contraction of the human iris when exposed to variations in environmental light conditions. Therefore, the feature extraction process relies on detecting the relative variations in peak and valley magnitude and their relative locations rather than focusing on their absolute magnitudes or locations. A key advantage of this approach is that it does not require a normalization procedure of the iris scan in order to compensate for changes to the iris due to environmental conditions. Such a normalization procedure is crucial to prior art iris recognition techniques.


Next, in Step 320, the resulting peak data represented in graph 435 is encoded into an encoded template so that it can later be efficiently compared with stored templates of iris data for known persons. Two encoding alternatives are discussed below in connection with FIGS. 5A and 5B, respectively. These two are shown only for example and are not meant to limit the scope of the present invention.



FIGS. 5A and 5B help illustrate the encoding of the peak/valley data set for one radial segment of a scanned iris in accordance with two embodiments of the invention, respectively. As will be discussed in further detail below, each template will comprise a plurality of such data sets, the number of such sets in a template being equal to the number of radial segments. Thus, for instance, if each segment is 1.8°, each template will comprise 200 such data sets.



FIG. 5A illustrates a first encoding scheme which focuses on relative peak amplitude versus the amplitude of the immediately previous peak. FIG. 5A illustrates encoding of the peak data for a single radial segment and shows a data set for that segment. Each data set comprises I×K bits, where K is a number of peaks per radial segment for which we wish to record data and I is the number of bits used to encode each peak. K may be any reasonable number and should be selected to be close to the typical number of peaks expected in a radial segment. In FIG. 5A, K=8.


In this encoding scheme, the first I bits of every data set represent the selected reference peak (the pupil-iris border) and are always set to a first value, e.g., 11, where I=2. As one moves from left to right within the data set, the bits represent peaks that are farther radially outward from the pupil-iris border (i.e., along the x axis in FIGS. 2B, 4A, and 4B, which represents distance from the pupil-iris border). If the magnitude of a peak is greater than the magnitude of the previous peak in the graph, such as graph 435, the bits representing that peak are set to 11. Otherwise, the bits are set to a second value, e.g., 00. Therefore, the second I bits are essentially guaranteed to be 00 since, in this example, the reference peak is essentially guaranteed to have the greatest magnitude in the segment, and will thus always be larger than the next peak. Accordingly, in this encoding scheme, the first four bits of each data set are irrelevant to matching and will not be considered during matching since they will always be identical, namely 1100. In cases where the radial segment does not have at least K peaks, the end of the data set is filled with one or more bit sets of a third value, e.g., 10 or 01, that will eventually be masked in the matching step. In the case where the radial segment has more than K peaks, only the K peaks closest to the pupil-iris border are encoded.


Thus, referring to the iris segment shown in the first, left-hand graph of FIG. 5A, the sequence representing the peak/valley information for this segment of the iris is 1100110011001010. Particularly, the first two bits represent the magnitude of the reference peak 501 and are always 11; the second two bits represent the magnitude of the first peak 503 in the segment and are essentially guaranteed to be 00 because that peak will always be smaller than the reference peak; the fifth and sixth bits are 11 because the next peak 505 is greater than the preceding peak 503; the seventh and eighth bits are 00 because the next peak 507 is less than the immediately preceding peak 505; the ninth and tenth bits are 11 because the next peak 509 is greater than the preceding peak 507; the eleventh and twelfth bits are 00 because the next peak 511 is less than the immediately preceding peak 509; and the last four bits are 1010, corresponding to two sets of unknowns, because this segment has only five peaks in addition to the reference peak (six peaks are represented in the data set).


As another example, referring to the iris segment shown in the second, right-hand graph of FIG. 5A, the sequence representing the peak/valley information for this segment of the iris is 1100000011101010. The first two bits represent the magnitude of the reference peak 501 and are always 11; the next two bits represent the magnitude of the first peak 513 in the segment and are 00 because it is smaller than the reference peak; the next two bits are 00 because the next peak 515 is less than the preceding peak 513; the next two bits are 00 because the next peak 517 is less than the immediately preceding peak 515; the next two bits are 11 because the next peak 519 is greater than the preceding peak 517; and the last six bits are 101010 because this segment has only five peaks (including the reference peak).
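A minimal sketch of this first encoding scheme, with hypothetical names and inputs, encodes each segment two bits per peak relative to the immediately preceding peak and pads short segments with the unknown code 10:

```python
def encode_segment(peak_magnitudes, k=8):
    """2-bit-per-peak encoding relative to the immediately preceding peak.

    peak_magnitudes: magnitudes ordered radially outward, with the reference
    peak (pupil-iris border) first.  Returns a string of 2*k bits.
    """
    bits = ["11"]                                    # reference peak is always 11
    for prev, cur in zip(peak_magnitudes, peak_magnitudes[1:k]):
        bits.append("11" if cur > prev else "00")
    bits.extend("10" for _ in range(k - len(bits)))  # pad unknowns (masked during matching)
    return "".join(bits)

# Hypothetical magnitudes chosen to follow the pattern described for the
# left-hand graph of FIG. 5A; the result is '1100110011001010'.
print(encode_segment([10.0, 3.0, 5.0, 2.0, 6.0, 1.0]))
```

The example magnitudes are invented, but they follow the greater/lesser pattern worked through above, so the printed string matches the first sequence described.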



FIG. 5B illustrates a second exemplary encoding scheme according to principles of the present invention. This second encoding scheme also is based on a 2-bit quantization of the magnitude of the filtered peaks, but the peak magnitudes are quantized into three magnitude levels, namely, Low (L), High (H), and Medium (M). Low level magnitude L is assigned 2-bit pattern 00, and High level magnitude H is assigned 2-bit pattern 11. To account for quantization errors, the levels are structured in a way that allows only a one-bit error tolerance to move from one quantization level to the adjacent one. Per this constraint, the scheme has two combinations to represent the medium level, i.e., Ml=10 and Mr=01. Ml represents a case where the valley to the left of the corresponding peak is lower than the valley to the right of the peak, and Mr represents a case where the valley to the right is lower than the valley to the left. Peak 520, for example, would be labeled Ml because the valley to its left is lower than the valley to its right. Peak 521, on the other hand, is an example of a peak that would be labeled Mr because the valley to its left is higher than the valley to its right. However, the two resulting values for Medium level magnitude are treated as equivalent in the matching process. Again, filler bits are used to complete the bit-vector to reach a predefined vector length, e.g., 10 bits, if there are not enough peaks to fill the data set. Although not shown in the figure, the bits corresponding to unknown peaks may be identified by any reasonable means, such as appending a flag to the end of the data set indicating the number of bits that correspond to unknown peaks. Alternately, the levels may be encoded with three-bit quantization in order to provide additional bit combinations for representing unknowns. Even further, only one value, e.g., 10, can be assigned to the Medium level, which leaves the two-bit combination 01 for representing unknowns. The unknown bits will be masked during matching, as discussed below. Likewise, if the number of peaks in a radial segment exceeds the number of peaks needed to fill the data set, then the peaks farthest from the pupil are dropped.
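A comparable sketch of the second scheme (hypothetical names; the quantization thresholds and the handling of filler bits are assumptions) maps each filtered peak to L, H, Ml, or Mr and reports how many trailing peaks are padding so they can be masked later:

```python
def quantize_peak(peak, left_valley, right_valley, low_thresh, high_thresh):
    """2-bit, 3-level quantization of one peak: L=00, H=11, Ml=10, Mr=01."""
    if peak < low_thresh:
        return "00"                                  # Low
    if peak > high_thresh:
        return "11"                                  # High
    # Medium: the side with the lower adjacent valley picks Ml (10) or Mr (01);
    # both codes are treated as equivalent during matching
    return "10" if left_valley < right_valley else "01"

def encode_segment_3level(peaks, valleys, low_thresh, high_thresh, k=8):
    """Encode up to k peaks; peaks[i] is assumed to lie between valleys[i] and valleys[i + 1].

    Returns the bit string plus a count of padded (unknown) peaks, which a
    matcher can use to mask the trailing filler bits.
    """
    coded = [quantize_peak(p, valleys[i], valleys[i + 1], low_thresh, high_thresh)
             for i, p in enumerate(peaks[:k])]
    n_unknown = k - len(coded)
    coded.extend("00" for _ in range(n_unknown))     # filler value; masked via n_unknown
    return "".join(coded), n_unknown
```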


Next, in Step 325, a template is constructed by concatenating all of the data sets corresponding to all of the radial segments in the iris scan. Thus, for example, if there are 200 radial segments and the number of bits used for each data set in the encoding scheme to represent the detected peaks is 16 bits, all encoded binary strings are concatenated into a template of 16×200=3200 bits.


Once the data is encoded, the process continues to Step 330. The process determines whether a scanned iris template matches a stored iris template by comparing the similarity between the corresponding bit-templates. A weighted Hamming distance can be used as a metric for recognition to execute the bit-wise comparisons. The comparison algorithm can incorporate a noise mask to mask out the unknown bits so that only significant bits are used in calculating the information measure distance (e.g. Hamming distance). The algorithm reports a value based on the comparison. A higher value reflects fewer similarities in the templates. Therefore, the lowest value is considered to be the best matching score of two templates.


To account for rotational inconsistencies and imaging misalignment, when the information measure of two templates is calculated, one template is shifted left and right bit-wise (along the angular axis) and a number of information measure distance values are calculated from successive shifts. This bit-wise shifting in the angular direction corresponds to rotation of the original iris region by an angular resolution unit. From the calculated information measure distances, only the lowest value is considered to be the best matching score of two templates.


A weighting mechanism can be used in connection with the above mentioned matching. The bits representing the peaks closest to the pupillary region (the pupil borders) are the most reliable/distinct data points and may be weighted higher as they represent more accurate data. All unknown bits, whether present in the template to be matched or in the stored templates, are weighted zero in the matching. This may be done using any reasonable technique. In one embodiment, when two templates are being compared, the bit positions corresponding to unknown bits of one of the two templates are always filled in with bits that match the corresponding bits of the other template.
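To make the matching and weighting concrete, the sketch below (hypothetical names; the shift range and weight profile are assumptions, and the use of 10 as the unknown code follows the first encoding scheme) computes a weighted, masked Hamming distance and keeps the minimum over circular shifts along the angular axis:

```python
import numpy as np

UNKNOWN = "10"   # 2-bit code assumed to mark padded/unknown peaks

def template_to_arrays(template, bits_per_peak=2):
    """Split a bit-string template into a bit array and a mask of known bits."""
    bits = np.array([int(b) for b in template], dtype=np.uint8)
    mask = np.ones(len(bits), dtype=bool)
    for i in range(0, len(bits), bits_per_peak):
        if template[i:i + bits_per_peak] == UNKNOWN:
            mask[i:i + bits_per_peak] = False
    return bits, mask

def match_score(t1, t2, bits_per_segment=16, max_shift=4, weights=None):
    """Minimum weighted, masked Hamming distance over circular segment shifts."""
    b1, m1 = template_to_arrays(t1)
    b2, m2 = template_to_arrays(t2)
    if weights is None:
        weights = np.ones(len(b1))
    best = np.inf
    for shift in range(-max_shift, max_shift + 1):
        s = shift * bits_per_segment                 # shift by whole segments (angular units)
        b2s, m2s = np.roll(b2, s), np.roll(m2, s)
        valid = m1 & m2s                             # unknown bits contribute nothing
        denom = np.sum(weights[valid])
        if denom > 0:
            dist = np.sum(weights[valid] * (b1[valid] != b2s[valid])) / denom
            best = min(best, dist)
    return best                                      # lower score = better match
```

A caller could pass weights that decrease with distance from the pupil-iris border to favor the more reliable inner-iris bits, as described above.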


While the above described embodiments rely on the detection and analysis of a peak in the iris, this is merely shown as an example. Other embodiments can rely on the detection of valleys in the iris, or any other noticeable feature in the iris.


It should be clear to persons familiar with the related arts that the process, procedures and/or steps of the invention described herein can be performed by a programmed computing device running software designed to cause the computing device to perform the processes, procedures and/or steps described herein. These processes, procedures and/or steps also could be performed by other forms of circuitry including, but not limited to, application-specific integrated circuits, logic circuits, and state machines.


Having thus described a particular embodiment of the invention, various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements as are made obvious by this disclosure are intended to be part of this description though not expressly stated herein, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and not limiting. The invention is limited only as defined in the following claims and equivalents thereto.

Claims
  • 1. A method of identifying a subject by biometric analysis of the iris of an eye, the method comprising the steps of: (1) acquiring an image of an iris of the subject;(2) radially segmenting the iris image into a plurality of radial segments;(3) for each radial segment, determining data for a data set for a single predetermined one dimensional feature within said segment relative to a reference value of said feature within said image; and(4) constructing a template for said subject comprising said data for each of said radial segments; andwherein:at least steps 2-4 are performed using a programmed computing device; andsaid feature is peaks of color intensity within the iris; andsaid data comprises magnitudes of said peaks and their relative locations along a direction radially outward from a center of a pupil in the image of the iris of the subject.
  • 2. The method of claim 1 wherein step (2) comprises the steps of: (2.1) determining the center of the pupil of said subject; and(2.2) determining a pupil-iris border in said image; and(2.3) radially segmenting said iris into a plurality of radial segments of equal angular size.
  • 3. The method of claim 2 wherein said data comprises a distance from said pupil-iris border of a one of said peaks that is the farthest from said pupil-iris border within a predetermined distance of said pupil-iris border.
  • 4. The method of claim 2, wherein said one-dimensional data comprises a distance from said pupil-iris border of a largest one of said peaks in said radial segment.
  • 5. A method of identifying a subject by biometric analysis of the iris of an eye, the method comprising the steps of: (1) acquiring an image of an iris of the subject;(2) radially segmenting the iris image into a plurality of radial segments;(3) for each radial segment, determining data for peaks of color intensity within the radial segment of the iris relative to a reference value of a peak of color intensity within the iris image; and(4) constructing a template for the subject comprising the data for each of the radial segments;(5) encoding the data into a data set; the encoding comprising: determining an order of the peaks in the radial segment;assigning a first value to each peak having a magnitude that is greater than a preceding peak in the order;assigning a second value to each peak having a magnitude that is lesser than a preceding peak in the order; andplacing the values in said data set in accordance with the orderwherein: at least steps 2-4 are performed using a programmed computing device.
  • 6. The method of claim 1 wherein said magnitudes are determined by interpolating data across a width of each radial segment.
  • 7. The method of claim 1 wherein said magnitudes are determined by averaging data across a width of each radial segment.
  • 8. The method of claim 1 wherein said reference value is a value of a one of said peaks corresponding to color intensity at said pupil-iris border.
  • 9. The method of claim 8 wherein step (3) includes the step of removing a decomposition curve from said detected peaks and valleys.
  • 10. The method of claim 1 further comprising the step of: (5) encoding said data into a data set.
  • 11. The method of claim 10 wherein, in step (5), a first predetermined number of bits are used to represent data of each peak in said radial segment and each said data set comprises a second predetermined number of bits corresponding to a predetermined number of peaks that can be encoded in said data set, and: if the number of peaks in said radial segment is less than said predetermined number of peaks that can be encoded, filling said data set with bits indicating an unknown peak; andif the number of peaks in said radial segment is greater than said predetermined number of peaks that can be encoded, encoding a subset of said peaks in said radial segment.
  • 12. The method of claim 11 wherein said subset of peaks comprises said peaks in said radial segment that are closest to a pupil of said subject.
  • 13. The method of claim 11 wherein said subset of peaks comprises the largest peaks detected in said radial segment.
  • 14. The method of claim 10 further comprising the step of: (6) comparing said subject's template to at least one stored template to determine if said subject's template matches said at least one stored template.
  • 15. The method of claim 14 wherein step (6) further comprises the step of weighting each encoded data set such that bits corresponding to peaks closer to said subject's pupil are more heavily weighted than bits farther from said subject's pupil.
  • 16. The method of claim 10 wherein step (5) comprises: determining an order of said peaks in said radial segment;assigning a first value to each peak having a magnitude that is greater than a preceding peak in said order;assigning a second value to each peak having a magnitude that is lesser than a preceding peak in said order; andplacing said values in said data set in accordance with said order.
  • 17. The method of claim 16, wherein said reference value is a value of one of said peaks corresponding to color intensity at said pupil-iris border and wherein said peaks are ordered in accordance with their distance from the subject's pupil.
  • 18. The method of claim 10, wherein said encoding comprises encoding each peak in said dataset as a two bit sequence wherein a first two bit sequence represents a peak with a high magnitude, a second two bit sequence represents a peak with a low magnitude, and third and fourth two bit sequences both represent a peak with a medium magnitude.
  • 19. The method of claim 18, wherein said encoding is performed with one bit error tolerance.
  • 20. The method of claim 1 wherein step (1) comprises preprocessing said image by performing texture enhancement on said image and dropping out parts of said image that obscure said iris.
  • 21. A non-transitory computer-readable medium for identifying a subject by biometric analysis of the iris of an eye, the product comprising: a first computer executable instruction media for acquiring an image of an iris of the subject;a second computer executable instruction media for radially segmenting the iris image into a plurality of radial segments;a third computer executable instruction media for determining data for a single predetermined one dimensional feature, wherein said feature comprises a peak and/or valley of color intensity within said segment of the iris relative to a reference value of said feature within said image, wherein said reference value is a peak or valley of color intensity selected to represent each radial segment, and wherein said data comprises relative magnitudes of at least one peak and/or valley and relative locations of said at least one peak and/or valley along a direction radially outward from a center of a pupil of said subject; anda fourth computer executable instruction media for constructing and storing a template for said subject comprising said data set for each of said radial segments.
  • 22. The non-transitory computer-readable medium of claim 21 wherein said second computer executable instructions comprises: instructions for determining a center of a pupil of said subject; andinstructions for determining a pupil-iris border in said image; andinstructions for radially segmenting said iris into a plurality of radial segments of equal angular size.
  • 23. The non-transitory computer-readable medium of claim 22 wherein said data comprises a distance from said pupil-iris border of one of said peaks that is the farthest from said pupil-iris border within predetermined distance of said pupil-iris border.
  • 24. The non-transitory computer-readable medium of claim 22 wherein said one-dimensional data comprises a distance from said pupil-iris border of a largest one of said peaks in said radial segment.
  • 25. The non-transitory computer readable medium of claim 21 wherein said relative magnitudes are determined by interpolating data across a width of each said radial segment.
  • 26. The non-transitory computer-readable medium of claim 21 wherein said relative magnitudes are determined by averaging data across a width of each radial segment.
  • 27. The non-transitory computer-readable medium of claim 22 wherein said reference value is a value of one of said peaks corresponding to color intensity at said pupil-iris border.
  • 28. The non-transitory computer-readable medium of claim 27 wherein said third computer executable instruction media include instructions for removing a decomposition curve from said detected peaks and valleys.
  • 29. The non-transitory computer-readable medium of claim 28 further comprising: a fifth computer executable instruction media include instructions for encoding said data sets.
  • 30. The non-transitory computer-readable medium of claim 29 wherein, in said fifth computer executable instruction media, a first predetermined number of bits are used to represent data of each peak in said radial segment and each said data set comprises a second predetermined number of bits corresponding to a predetermined number of peaks that can be encoded in said data set, and: if the number of peaks in said radial segment is less than said predetermined number of peaks that can be encoded, filling said data set with bits indicating an unknown peak; andif the number of peaks in said radial segment is greater than said predetermined number of peaks that can be encoded, encoding a subset of said peaks in said radial segment.
  • 31. The non-transitory computer-readable medium of claim 30 wherein said subset of peaks comprises the peaks in said radial segment that are closest to a pupil of said subject.
  • 32. The non-transitory computer-readable medium of claim 30 wherein said subset of peaks comprises the largest peaks detected in said radial segment.
  • 33. The non-transitory computer-readable medium of claim 30 further comprising: sixth computer executable instruction media for comparing said subject's template to at least one stored template to determine if said subject's template matches said at least one stored template.
  • 34. The non-transitory computer-readable medium of claim 33 wherein said sixth computer executable instruction further comprise instructions for weighting each encoded data set such that bits corresponding to peaks closer to said subject's pupil are more heavily weighted than bits farther from said subject's pupil.
  • 35. The non-transitory computer-readable medium of claim 29 wherein said fifth computer executable instruction media comprises: instructions for determining an order of said peaks in said radial segment;instructions for assigning a first value to each peak having a magnitude that is greater than a preceding peak in said order;instructions for assigning a second value to each peak having a magnitude that is lesser than a preceding peak in said order; andinstructions for placing said values in said data set in accordance with said order.
  • 36. The non-transitory computer-readable medium of claim 35, wherein said reference value is a value of a one of said peaks corresponding to color intensity at said pupil-iris border and wherein said peaks are ordered in accordance with their distance from the subject's pupil.
Parent Case Info

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/778,770, filed Mar. 3, 2006.

STATEMENT OF GOVERNMENT INTEREST

This invention was made with government support under Contract No. F10801 EE5.2. The Government has certain rights in the invention.

1028398 Aug 2000 EP
1041506 Oct 2000 EP
1041523 Oct 2000 EP
1126403 Aug 2001 EP
1139270 Oct 2001 EP
1237117 Sep 2002 EP
1477925 Nov 2004 EP
1635307 Mar 2006 EP
2369205 May 2002 GB
2371396 Jul 2002 GB
2375913 Nov 2002 GB
2402840 Dec 2004 GB
2411980 Sep 2005 GB
9161135 Jun 1997 JP
9198545 Jul 1997 JP
9201348 Aug 1997 JP
9147233 Sep 1997 JP
9234264 Sep 1997 JP
9305765 Nov 1997 JP
9319927 Dec 1997 JP
10021392 Jan 1998 JP
10040386 Feb 1998 JP
10049728 Feb 1998 JP
10137219 May 1998 JP
10137221 May 1998 JP
10137222 May 1998 JP
10137223 May 1998 JP
10248827 Sep 1998 JP
10269183 Oct 1998 JP
11047117 Feb 1999 JP
11089820 Apr 1999 JP
11200684 Jul 1999 JP
11203478 Jul 1999 JP
11213047 Aug 1999 JP
11339037 Dec 1999 JP
2000005149 Jan 2000 JP
2000005150 Jan 2000 JP
2000011163 Jan 2000 JP
2000023946 Jan 2000 JP
2000083930 Mar 2000 JP
2000102510 Apr 2000 JP
2000102524 Apr 2000 JP
2000105830 Apr 2000 JP
2000107156 Apr 2000 JP
2000139878 May 2000 JP
2000155863 Jun 2000 JP
2000182050 Jun 2000 JP
2000185031 Jul 2000 JP
2000194972 Jul 2000 JP
2000237167 Sep 2000 JP
2000242788 Sep 2000 JP
2000259817 Sep 2000 JP
2000356059 Dec 2000 JP
2000357232 Dec 2000 JP
2001005948 Jan 2001 JP
2001067399 Mar 2001 JP
2001101429 Apr 2001 JP
2001167275 Jun 2001 JP
2001222661 Aug 2001 JP
2001292981 Oct 2001 JP
2001297177 Oct 2001 JP
2001358987 Dec 2001 JP
2002119477 Apr 2002 JP
2002133415 May 2002 JP
2002153444 May 2002 JP
2002153445 May 2002 JP
2002260071 Sep 2002 JP
2002271689 Sep 2002 JP
2002286650 Oct 2002 JP
2002312772 Oct 2002 JP
2002329204 Nov 2002 JP
2003006628 Jan 2003 JP
2003036434 Feb 2003 JP
2003108720 Apr 2003 JP
2003108983 Apr 2003 JP
2003132355 May 2003 JP
2003150942 May 2003 JP
2003153880 May 2003 JP
2003242125 Aug 2003 JP
2003271565 Sep 2003 JP
2003271940 Sep 2003 JP
2003308522 Oct 2003 JP
2003308523 Oct 2003 JP
2003317102 Nov 2003 JP
2003331265 Nov 2003 JP
2004005167 Jan 2004 JP
2004021406 Jan 2004 JP
2004030334 Jan 2004 JP
2004038305 Feb 2004 JP
2004094575 Mar 2004 JP
2004152046 May 2004 JP
2004163356 Jun 2004 JP
2004164483 Jun 2004 JP
2004171350 Jun 2004 JP
2004171602 Jun 2004 JP
2004206444 Jul 2004 JP
2004220376 Aug 2004 JP
2004261515 Sep 2004 JP
2004280221 Oct 2004 JP
2004280547 Oct 2004 JP
2004287621 Oct 2004 JP
2004315127 Nov 2004 JP
2004318248 Nov 2004 JP
2005004524 Jan 2005 JP
2005011207 Jan 2005 JP
2005025577 Jan 2005 JP
2005038257 Feb 2005 JP
2005062990 Mar 2005 JP
2005115961 Apr 2005 JP
2005148883 Jun 2005 JP
2005242677 Sep 2005 JP
WO 9717674 May 1997 WO
WO 9721188 Jun 1997 WO
WO 9802083 Jan 1998 WO
WO 9808439 Mar 1998 WO
WO 9932317 Jul 1999 WO
WO 9952422 Oct 1999 WO
WO 9965175 Dec 1999 WO
WO 0028484 May 2000 WO
WO 0029986 May 2000 WO
WO 0031677 Jun 2000 WO
WO 0036605 Jun 2000 WO
WO 0062239 Oct 2000 WO
WO 0101329 Jan 2001 WO
WO 0103100 Jan 2001 WO
WO 0128476 Apr 2001 WO
WO 0135348 May 2001 WO
WO 0135349 May 2001 WO
WO 0140982 Jun 2001 WO
WO 0163994 Aug 2001 WO
WO 0169490 Sep 2001 WO
WO 0186599 Nov 2001 WO
WO 0201451 Jan 2002 WO
WO 0219030 Mar 2002 WO
WO 0235452 May 2002 WO
WO 0235480 May 2002 WO
WO 02091735 Nov 2002 WO
WO 02095657 Nov 2002 WO
WO 03002387 Jan 2003 WO
WO 03003910 Jan 2003 WO
WO 03054777 Jul 2003 WO
WO 03077077 Sep 2003 WO
WO 2004029863 Apr 2004 WO
WO 2004042646 May 2004 WO
WO 2004055737 Jul 2004 WO
WO 2004089214 Oct 2004 WO
WO 2004097743 Nov 2004 WO
WO 2005008567 Jan 2005 WO
WO 2005013181 Feb 2005 WO
WO 2005024698 Mar 2005 WO
WO 2005024708 Mar 2005 WO
WO 2005024709 Mar 2005 WO
WO 2005029388 Mar 2005 WO
WO 2005062235 Jul 2005 WO
WO 2005069252 Jul 2005 WO
WO 2005093510 Oct 2005 WO
WO 2005093681 Oct 2005 WO
WO 2005096962 Oct 2005 WO
WO 2005098531 Oct 2005 WO
WO 2005104704 Nov 2005 WO
WO 2005109344 Nov 2005 WO
WO 2006012645 Feb 2006 WO
WO 2006023046 Mar 2006 WO
WO 2006051462 May 2006 WO
WO 2006063076 Jun 2006 WO
WO 2006081209 Aug 2006 WO
WO 2006081505 Aug 2006 WO
WO 2007101269 Sep 2007 WO
WO 2007101275 Sep 2007 WO
WO 2007101276 Sep 2007 WO
WO 2007103698 Sep 2007 WO
WO 2007103701 Sep 2007 WO
WO 2007103833 Sep 2007 WO
WO 2007103834 Sep 2007 WO
WO 2008016724 Feb 2008 WO
WO 2008019168 Feb 2008 WO
WO 2008019169 Feb 2008 WO
WO 2008021584 Feb 2008 WO
WO 2008031089 Mar 2008 WO
WO 2008040026 Apr 2008 WO
Non-Patent Literature Citations (92)
Entry
Ma et al., “Local Intensity Variation Analysis for Iris Recognition,” Pattern Recognition Society37, pp. 1287-1298, 2004.
Avcibas et al., “Steganalysis Using Image Quality Metrics,” IEEE Transactions on Image Processing, vol. 12, No. 2, pp. 221-229, Feb. 2003.
Boles, “A Security System Based on Human Iris Identification Using Wavelet Transform,” IEEE First International Conference on Knowledge-Based Intelligent Electronic Systems, May 21-23, Adelaide, Australia, pp. 533-541, 1997.
Carson et al., “Blobworld: Image Segmentation Using Expectation-Maximization and Its Application to Image Querying,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, No. 8, pp. 1026-1038, Aug. 2002.
Daugman, “How Iris Recognition Works,” IEEE 2002 International Conference on Image Processing, vol. I of III, 6 pages, Sep. 22-25, 2002.
Guo et al., “A System for Automatic Iris Capturing,” Mitsubishi Electric Research Laboratories, Inc., 10 pages, 2005.
Guo, “Face, Expression, and Iris Recognition Using Learning-Based Approaches,” 132 pages, 2006.
http://www.newscientisttech.com/article/dn11110-invention-covert-iris-sc, “Invention: Covert Iris Scanner,” 3 pages, printed Feb. 8, 2007.
Jalaja et al., “Texture Element Feature Characterizations for CBIR,” IEEE, pp. 733-736, 2005.
Kalka et al., “Image Quality Assessment for Iris Biometric,” Proc. of SPIE vol. 6202 62020D, 11 pages, 2006.
Ko et al., “Monitoring and Reporting of Fingerprint Image Quality and Match Accuracy for a Large User Application,” IEEE Computer Society, Proceedings of the 33rd Applied Imagery Pattern Recognition Workshop, 6 pages, 2004.
Lau et al., “Finding a Small Number of Regions in an Image Using Low-Level Features,” Pattern Recognition 35, pp. 2323-2339, 2002.
Maurer et al., “Tracking and Learning Graphs and Pose on Image Sequences of Faces,” IEEE Computer Society Press, International Conference on Automatic Face and Gesture Recognition, pp. 176-181, Oct. 14-16, 1996.
Oppenheim et al., "The Importance of Phase in Signals," Proceedings of the IEEE, vol. 69, No. 5, pp. 529-541, 1981.
Ratha et al., “A Real-Time Matching System for Large Fingerprint Databases,” IEEE Transactions on Pattern Analysis, and Machine Intelligence, vol. 18, No. 8, pp. 799-812, Aug. 1996.
Sony, “Network Color Camera, SNC-RZ30N (NTSC),” 6 pages, Aug. 2002.
Wang et al, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Transactions on Image Processing, vol. 13, No. 4, pp. 600-612, Apr. 2004.
Wang et al., “A Universal Image Quality Index,” IEEE Signal Processing Letters, vol. 9, No. 3, pp. 81-84, Mar. 2002.
Wang et al., “Local Phase Coherence and the Perception of Blur,” Advances in Nueral Information Processing Systems 16, pp. 1435-1442, 2004.
Bonney et al., “Iris Pattern Extraction Using Bit Planes and Standard Deviations,” IEEE, pp. 582-586, 2004.
Camus et al.; “Reliable and Fast Eye Finding in Close-up Images,” IEEE, pp. 389-394, 2002.
Cui et al., “An Iris Detection Method Based on Structure Information,” Advances in Biometric Person Authentication, International Workshop on Biometric Recognition Systems, IWBRS 2005, Beijing China, 10 pages, Oct. 22-23, 2005.
Cui et al., “An Iris Image Synthesis Method Based on PCA and Super-Resolution,” IEEE Computer Society, Proceedings of the 17th International Conference on Pattern Recognition, 6 pages, Aug. 23-26, 2004.
Cui et al., “An Iris Recognition Algorithm Using Local Extreme Points,” Biometric Authentication, First International Conference, ICBA 2004, Hong Kong, China, 10 pages, Jul. 15-17, 2004.
L. Ma, et al.: Personal Identification Based on Iris Texture Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 12 (Dec. 2003) (pp. 1519-1533).
L. Masek: Recognition of Human Iris Patterns for Biometric Identification (Report submitted for Bachelor of Engineering degree, School of Computer Science and Software Engineering, The University of Western Australia) (2003) (56 pp.).
Y. Du, et al.: A One-Dimensional Approach for Iris Identification (undated) (11 pp.).
J. Daugman, Results from 200 billion iris cross-comparisons, Technical Report No. 635, University of Cambridge Computer Laboratory (Jun. 2005) (8 pp.).
Z. Sun, Robust Encoding of Local Ordinal Measures: A General Framework of Iris Recognition, (undated) (13 pp.).
J. Huang, et al.: Iris Model Based on Local Orientation Description (undated) (5 pp.).
Y-A Huang, An Efficient Iris Recognition System, Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, Nov. 4-5, 2002 (pp. 450-454).
J. Cui, et al.: An Appearance-Based Method for Iris Detection (undated) (6 pp.).
J. Cui, et al.: A Fast and Robust Iris Localization Method Based on Texture Segmentation (undated) (8 pp.).
U.S. Appl. No. 13/077,821, filed Mar. 30, 2011.
Freeboy, “Adaptive Optics Speeds Up Airport Immigration,” Optics.org/ole, 2 pages, Jan. 2009.
http://www.imagine-eyes.com/content/view/100/115/, “INOVEO—Ultra-High Resolution Retinal Imaging with Adaptive Optics,” 2 pages, printed Feb. 22, 2010.
AOptix Technologies, “Introducing the AOptix InSight 2 Meter Iris Recognition System,” 6 pages, 2010.
Belhumeur et al., “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” 14 pages, prior to Jun. 11, 2010.
Bentley et al., “Multidimensional Binary Search Trees Used for Associative Searching,” Communications of the ACM, vol. 18, No. 9, pp. 509-517, Sep. 1975.
Blackman et al., “Chapter 9, Multiple Sensor Tracking: Issues and Methods,” Design and Analysis of Modern Tracking Systems, Artech House, pp. 595-659, 1999.
Brasnett et al., “A Robust Visual Identifier Using the Trace Transform,” 6 pages, prior to Jun. 11, 2010.
Buades et al., “A Review of Image Denoising Algorithms, with a New One,” Multiscale Modeling & Simulation, vol. 4, No. 2, pp. 490-530, 2005.
Chen et al., “Localized Iris Image Quality Using 2-D Wavelets,” LNCS vol. 3832, pp. 373-381, 2005.
Chow et al., “Towards a System for Automatic Facial Feature Detection,” Pattern Recognition vol. 26, No. 12, pp. 1739-1755, 1993.
U.S. Appl. No. 12/792,498, filed Jun. 2, 2010.
U.S. Appl. No. 12/814,232, filed Jun. 11, 2010.
U.S. Appl. No. 12/814,272, filed Jun. 11, 2010.
Cula et al., “Bidirectional Imaging and Modeling of Skin Texture,” Proceedings of Texture 2003, 6 pages, Oct. 17, 2003.
Cula et al., “Bidirectional Imaging and Modeling of Skin Texture,” IEEE Transactions on Biomedical Engineering, vol. 51, No. 12, pp. 2148-2159, 2004.
Cula et al., “Compact Representation of Bidirectional Texture Functions,” Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2001, 8 pages, 2001.
Cula et al., “Skin Texture Modeling,” International Journal of Computer Vision 2004, 34 pages, 2004.
Dabov et al., “Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering,” IEEE Transactions on Image Processing, vol. 16, No. 8, pp. 2080-2095, Aug. 2007.
Dabov et al., “Image Restoration by Sparse 3D Transform Collaborative Filtering,” SPIE vol. 6812 681207-1, 12 pages, 2008.
Daugman, “High Confidence Visual Recognition of Persons by a Test of Statistical Independence,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 11, pp. 1148-1161, 1993.
Daugman, “Probing the Uniqueness and Randomness of Iris Codes: Results from 200 Billion Iris Pair Comparisons,” Proceedings of the IEEE vol. 94, No. 11, pp. 1928-1935, Nov. 2006.
Fooprateepsiri et al., “A Highly Robust Method for Face Authentication,” IEEE 2009 First Asian Conference on Intelligent Information and Database Systems, pp. 380-385, 2009.
Fooprateepsiri et al., “Face Verification Base-On Hausdorff-Shape Context,” IEEE 2009 Asia Conference on Informatics in Control, Automation and Robotics, pp. 240-244, 2009.
Forstner et al., “A Metric for Covariance Matrices,” 16 pages, prior to Jun. 11, 2010.
Gan et al., “Applications of Wavelet Packets Decomposition in Iris Recognition,” LNCS vol. 3832, pp. 443-449, 2005.
Hampapur et al., “Smart Surveillance: Applications, Technologies and Implications,” IEEE, 6 pages, Dec. 15-18, 2003.
Hamza et al., “Standoff Iris Recognition Usin Non-Iterative Polar Based Segmentation,” Proceedings of SPIE vol. 6944, 8 pages, 2008.
Hanna et al., “A System for Non-Intrusive Human Iris Acquisition and Identification,” IAPR Workshop on Machine Vision Applications, pp. 200-203, Nov. 12-14, 1996.
http://en.wikipedia.org/wiki/Radon_transform, "Radon Transform," 5 pages, printed May 14, 2010.
Ivins et al., “A Deformable Model of the Human Iris for Measuring Small Three-Dimensional Eye Movements,” Machine Vision and Applications, vol. 11, pp. 42-51, 1998.
Kadyrov et al., “The Trace Transform and Its Applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, No. 8, pp. 811-828, Aug. 2001.
Kadyrov et al., “The Trace Transform as a Tool to Invariant Feature Construction,” 3 pages, prior to Jun. 11, 2010.
Kang et al., “Improved Dual Action Contour for Iris Recognition,” 10 pages, prior to Jun. 11, 2010.
Kawaguchi et al., “Detection of Eyes from Human Faces by Hough Transform and Separability Filter,” IEEE, 4 pages, 2000.
Kong et al., “Detecting Eyelash and Reflection for Accurate Iris Segmentation,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 17, No. 6, pp. 1025-1034, 2003.
Li et al., “Appearance Modeling Using a Geometric Transform,” IEEE Transactions on Image Processing, 17 pages, 2008.
Li et al., “Appearance Modeling Using a Geometric Transform,” Journal Preparation for IEEE Transactions on Image Processing, 30 pages, Nov. 5, 2006.
Ma et al., “Video Sequence Querying Using Clustering of Objects' Appearance Models,” Advances in Visual Computing Third Annual Symposium, ISVC 2007, 14 pages, 2007.
Monro et al., “DCT-Based Iris Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, No. 4, Apr. 2007.
Noh et al., “A Novel Method to Extract Features for Iris Recognition System,” AVBPA 2003, LNCS 2688, pp. 862-868, 2003.
Ojala et al., “Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, No. 7, 18 pages, Jul. 2002.
Pamudurthy et al., “Dynamic Approach for Face Recognition Using Digital Image Skin Correlation,” Audio and Video Based Person Authentication 5th International Conference, AVBPA 2005, Hilton Rye Town, NY, USA, 11 pages, Jul. 20-22, 2005.
Petrou et al., “The Trace Transform in a Nutshell,” 9 pages, prior to Jun. 11, 2010.
Phillips et al., “FRVT 2006 and ICE 2006 Large-Scale Results,” 56 pages, Mar. 2007.
Porikli et al., “Covariance Tracking Using Model Update Based on Means on Riemannian Manifolds,” 8 pages, prior to Jun. 11, 2010.
Proenca et al., “Toward Noncooperative Iris Recognition: A Classification Approach Using Multiple Signatures,” IEEE Transactions on Patern Analysis and Machine Intellingence, vol. 29, No. 4, pp. 607-612, Apr. 2007.
Ross et al., “Segmenting Non-Ideal Irises Using Geodesic Active Contours,” IEEE 2006 Biometrics Symposium, 3 pages, 2006.
Shapiro et al., pp. 556-559 in "Computer Vision," Prentice Hall, prior to Jun. 11, 2010.
Stillman et al., “A System for Tracking and Recognizing Multiple People with Multiple Cameras,” 6 pages, Aug. 1998.
Sun et al., “Iris Recognition Based on Non-local Comparisons,” Sinobiometrics 2004, LNCS 3338, pp. 67-77, 2004.
Suzaki et al., “A Horse Identification System Using Biometrics,” Systems and Computer in Japan, vol. 32, No. 14, pp. 12-23, 2001.
Trucco et al., “Robust Iris Location in Close-up Images of the Eye,” Pattern Anal. Applic. vol. 8, pp. 247-255, 2005.
Turan et al., “Trace Transform Based Invariant Object Recognition System,” 4 pages, prior to Jun. 11, 2010.
Turk et al., “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, vol. 3, No. 1, 16 pages, 1991.
Wang et al., “Recent Developments in Human Motion Analysis,” Pattern Recognition, vol. 36, pp. 585-601, 2003.
Wei et al., “Robust and Fast Assessment of Iris Image Quality,” LNCS vol. 3832, pp. 464-471, 2005.
Zhao et al., “Dynamic Texture Recognition Using Local Binary Patterns with an Application to Facial Expressions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, No. 6, pp. 915-928, Jun. 2007.
Zhi-Hui et al., “Research Iris Serial Images Quality Assessment Method Based on HVS,” Proceedings of SPIE, vol. 6034, 6 pages, 2006.
Related Publications (1)
Number Date Country
20070211924 A1 Sep 2007 US
Provisional Applications (1)
Number Date Country
60778770 Mar 2006 US