The system and method disclosed below relate to personal identification through facial feature recognition, and specifically, to personal identification through iris identification.
Systems for identifying persons through intrinsic human traits have been developed. These systems operate by taking images of a physiological trait of a person and comparing the information stored in the image to data that corresponds to the trait for a particular person. When the information stored in the image has a high degree of correlation to the relevant data previously obtained for a particular person's trait, the person can be positively identified. These biometric systems obtain and compare data for physical features, such as fingerprints, voice, and facial characteristics. Different traits impose different constraints on these systems. For example, fingerprint recognition systems require the person being identified to touch an object directly so that fingerprint data can be obtained from the object. Facial feature recognition systems, however, do not require direct contact with a person, and these biometric systems are capable of capturing identification data without the cooperation of the person to be identified.
One trait especially suited for non-cooperative identification is an iris pattern in a person's eye. The human eye iris provides a unique trait that changes little over a person's lifetime. For cooperative iris recognition, the person to be identified is aware of an image being taken and the captured image is a frontal view of the eye. Non-cooperative iris image capture systems, on the other hand, obtain an iris image without a person's knowledge of the data capture. Thus, the subject's head is likely moving and his or her eyes are probably blinking during iris image acquisition. Consequently, the captured image is not necessarily a frontal view of the eye.
Identification of a person from an iris image requires iris image segmentation. Segmentation refers to the relative isolation of the iris in the eye image from the other features of an eye or features that are near an eye. For example, eyelashes and eyelids are a portion of an eye image, but they do not contribute iris information that may be used to identify a person. Likewise, the pupil does not provide information that may be used to identify a person. Consequently, effective segmentation to locate the portions of a captured eye image that contain iris pattern information is necessary for reliable identification of a person. Because previously known iris identification systems rely upon the acquisition of eye images from cooperative subjects, iris segmentation techniques have focused on frontal eye images. What is needed is a more robust method of iris segmentation that correctly identifies those portions of an eye image that contain iris pattern data in an eye image obtained from a non-cooperative eye image acquisition system.
To address some of the issues related to the capture of quality eye images in a non-cooperative environment, video-based methods of image acquisition have been used. Multiple frames of eye image data provide more information that is useful for overcoming limitations present in a non-cooperative environment. Not all frames of eye image data, however, are useful for identification purposes. For example, some frames may be out of focus or contain low contrast images. Additionally, movement of the subject's head or eyes may result in the image of a closed eye from blinking, or no eye at all because the subject turned his or her head. These problems with frame data require that the quality of the image data and the accuracy of the segmentation be verified. Otherwise, recognition accuracy may be drastically reduced. Previously known image quality assessment methods, unfortunately, are directed to individual iris images and do not adequately evaluate image data on a frame basis in a video sequence. Additionally, no methods for iris segmentation evaluation are currently known. Thus, development of image quality assessment methods for video frame data of eye images and of segmentation evaluation methods is desirable.
A method segments iris images from frames of eye image data captured from non-cooperative subjects for a biometric identification system. The method includes receiving a frame of eye image data and determining whether a pupil exists in the image by detecting glare areas in the image. Upon finding a pupil, subsequent images are processed with reference to the pupil location and radius calculated for the pupil in the first frame in which a pupil was located. A K-means clustering method and principal component analysis with basis vector calculations are used to locate pupil boundary points. These points are fitted to a conic, which is preferably an ellipse. Using the pupil boundary, an angular derivative of the eye image data is computed in each frame having a pupil, and iris boundary points are identified. These iris boundary points are fitted to a conic to identify the iris region as lying between the iris boundary and the pupil boundary. Noise data are then removed from the iris region to generate an iris segment for further identification processing.
A method for use in a non-cooperative environment provides a quality score for video frame data of eye images and a segmentation score for an iris segmentation. The method includes generating a quality score for frames in a video sequence of eye images, discarding frames from the video sequence for segmentation processing in response to the quality score for a frame being less than a threshold, and generating a segmentation score for iris segmentations generated by the segmentation processing. The quality scores are used to eliminate frames from iris segmentation processing that may be blurred, out of focus, or without an image of an eye having an iris image. The segmentation score indicates the accuracy of the pupil boundary and center as well as the iris boundary location.
In a non-cooperative eye image acquisition system, the captured images are not necessarily useful for personal identification. For example,
To evaluate a frame of eye image data, a test for glare detection is performed. The basis for attempting to detect glare is the assumption that an image of a relatively open eye not obscured by motion blur contains glare. This correlation is demonstrated in
If the output shows peak points in the image, then image processing for segmentation continues; if no peak points are present, the image is discarded. Once a frame is deemed acceptable for processing, iris segmentation begins. Iris segmentation is a process in which unnecessary information in the eye image is removed so the region of interest in the image becomes smaller. This culling of the eye image helps improve processing speed for the iris segmentation process. For the images of the human eye that underwent glare detection, a clustering algorithm is used to identify the coarse location of the pupil in the image. Clustering is performed with a K-means clustering algorithm in which K represents the number of clusters, which are characterized by their centroids. A principal component analysis and unit vector calculation, described in more detail below, are applied only to the first frame in which a pupil exists.
In more detail, the clusters are determined by minimizing the sum of the squared error.
J_ik = ∥x_i − m_k∥^2
where
is the centroid of class C_k, n_k is the number of elements in class C_k, and x_i is the input data. ∥x∥^2 is the squared two-norm of x. The equation above finds the distance from the input data to each cluster centroid and classifies the input data to the closest cluster.
C_i = min_k{J_ik}, k=1, . . . , K
where K is the number of classes. After data is classified to a cluster, the cluster should be updated, where the centroid of the cluster is recalculated with the new information.
After the centroid of the cluster is updated, the number of elements of the cluster is incremented by one. The centroid of the clusters is updated as the iterations progress through the enrollment data.
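The classify-then-update loop described above can be sketched as a minimal K-means routine. This is an illustrative Python version on one-dimensional toy intensity data, with assumed helper and variable names, not the patent's implementation:

```python
import numpy as np

def kmeans_1d(data, k, iters=20, seed=0):
    """Minimal K-means sketch: assign each sample to the nearest centroid
    (squared-error criterion), then recalculate the centroids."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(data, size=k, replace=False).astype(float)
    for _ in range(iters):
        # classify each sample to the closest cluster
        labels = np.argmin((data[:, None] - centroids[None, :]) ** 2, axis=1)
        # update each centroid from its current members
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean()
    return centroids, labels

# toy intensity data: a dark "pupil" cluster and a bright background cluster
pixels = np.array([10.0, 12.0, 11.0, 200.0, 205.0, 198.0])
centroids, labels = kmeans_1d(pixels, k=2)
```

On this toy input the dark samples and bright samples each end up in their own cluster, with centroids near the two group means.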
Principal Component Analysis (PCA) is an orthogonal linear transformation of given data to a new coordinate system according to a variance of the projection of the data. PCA is used to reduce the dimensions of the cluster by picking out the dimensions with the largest variances, which means finding the best low rank approximation of the data by minimizing the least squares.
First, the mean of cluster centers is found.
The sample mean is a zero-dimensional representation of the clustered data. By projecting the data onto a line passing through the sample mean, we obtain:
x_k = m̄ + α_k e
where e is a unit vector in the direction of the line and α_k is a scalar corresponding to the distance of x_k from the sample mean m̄.
The norm of unit vector e is equal to 1. The partial derivative of the previous equation with respect to αk is:
To minimize the least squared error, the derivative of the error criterion function should be zero. When the last equation is set to zero, we obtain,
α_k = e^T(x_k − m̄)
This equation provides the least squares solution of xk projected onto the line passing through the sample mean in the direction of e. For finding the optimum direction e, a scatter matrix S is defined as,
By substituting the two previous equations into the last equation, we can rewrite the squared error criterion function as,
By using the method of Lagrange multipliers, e^TSe can be maximized subject to the constraint ∥e∥=1,
u = e^TSe − λ(e^Te − 1)
where λ is the undetermined Lagrange multiplier. After partially differentiating this equation with respect to e, and setting the derivative to zero, we obtain,
Se=λe.
In order to maximize e^TSe, the eigenvectors corresponding to the largest eigenvalues of the scatter matrix should be selected. In this way, the best one-dimensional projection of the data in a least-sum-of-squared-error sense can be found. Then the d′-dimensional projection of the clustered data can be rewritten as,
where e_i for i=1, . . . , d′ are the eigenvectors of the scatter matrix corresponding to the d′ largest eigenvalues, and the coefficients α_i for i=1, . . . , d′ are the principal components of x in the basis of e.
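The PCA step above — mean-centering, forming the scatter matrix, and keeping the eigenvectors with the largest eigenvalues — can be sketched in Python. This is only an illustration of the technique on toy data; the function name `pca_projection` and the data are assumptions, not the disclosed implementation:

```python
import numpy as np

def pca_projection(X, d_prime):
    """Project data onto the d' eigenvectors of the scatter matrix
    with the largest eigenvalues (least-squared-error sense)."""
    m = X.mean(axis=0)                    # sample mean
    centered = X - m
    S = centered.T @ centered             # scatter matrix S
    eigvals, eigvecs = np.linalg.eigh(S)  # ascending eigenvalues
    E = eigvecs[:, ::-1][:, :d_prime]     # top-d' basis vectors e_i
    alphas = centered @ E                 # principal components a_k = e^T (x_k - m)
    return m, E, alphas

# toy 2-D data stretched along one direction
X = np.array([[0.0, 0.0], [2.0, 0.1], [4.0, -0.1], [6.0, 0.0]])
m, E, alphas = pca_projection(X, d_prime=1)
recon = m + alphas @ E.T                  # best rank-1 reconstruction
```

Because the data vary almost entirely along the first axis, the single retained eigenvector nearly aligns with it and the rank-1 reconstruction error stays small.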
The clustering method described above is applied to a frame of eye image data to find dark regions of the image, one of which may be a pupil. For application of the K-means clustering, each pixel is clustered into 5 classes according to the intensity values of the 8 pixels immediately adjacent to a selected pixel. Only for the first frame in which a pupil is detected are the PCA and basis vectors calculated as well. For subsequent frames, the clustering is performed using the previously calculated means of the clustered data (m̄), the principal components (α_i), and the basis vectors of the projected eigenspace (e_i). An example of a sequence of clustered frames is shown in
After clustering a frame, previously detected glare areas are removed from the clustered image and an edge detection algorithm is applied to the first cluster. In order to increase the efficiency of the coarse pupil detection method, two different boundary detection methods are proposed according to the condition of the previous frame. Because the frames of a video sequence are ordered in time, the pupil location cannot change dramatically between two consecutive frames. If the pupil was found in the previous frame, the pupil boundary points are searched around the previous pupil center. For each angle, the search is applied along a radial axis until an edge is found. Because the glare area has already been removed, the first edges around the previous pupil center are assumed to belong to the current pupil boundary. The average of the detected pupil boundary points on the clustered image is assigned as the coarse pupil location.
On the other hand, if the pupil could not be found in the previous frame, then the edge map is filtered with a circular mask, and the highest convolution point is assigned as the coarse pupil location. The circular mask is created for different radius values and normalized. In order to increase the speed of the filtering process, the radius values are limited by using previously detected pupil radius values.
After the coarse pupil location is found, the Euclidean distance between the pupil location and the glare points is calculated. As noted above, at least one of the glare points can be assumed to be close to the pupil area. If the distances between the pupil center and the glare points are all high, then the pupil was not detected correctly, and the current frame is discarded. Otherwise, iris segmentation processing continues to locate the pupil more precisely.
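This consistency check can be sketched in a few lines. The helper name `pupil_consistent` and the distance threshold `max_dist` are illustrative assumptions standing in for whatever tolerance an implementation would use:

```python
import numpy as np

def pupil_consistent(pupil_center, glare_points, max_dist=50.0):
    """Accept the coarse pupil only if at least one detected glare
    point lies within max_dist pixels of the pupil center."""
    d = np.linalg.norm(np.asarray(glare_points, dtype=float)
                       - np.asarray(pupil_center, dtype=float), axis=1)
    return bool(d.min() <= max_dist)

# one glare point near the pupil -> accepted; only a distant glare -> rejected
ok = pupil_consistent((100.0, 100.0), [(110.0, 95.0), (300.0, 40.0)])
bad = pupil_consistent((100.0, 100.0), [(300.0, 40.0)])
```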
Pupil boundary detection is now discussed in greater detail.
x^2/r_1^2 + y^2/r_2^2 = 1
where r_1 and r_2 are the radii corresponding to the x and y coordinates, respectively. When r_1 and r_2 are equal, the equation represents a circle. However, this equation does not model rotation of an ellipse. For frontal iris images, the pupil can be modeled as a perfect circle or ellipse because the pupil rotates very little, if at all, with respect to the camera axis. On the other hand, with non-cooperative iris images, this assumption regarding rotation cannot be used. Each position of the iris, except the center position, creates a rotation of the pupil. In order to model a moving eye, a more complex elliptical model is needed. In the method disclosed herein, the direct least-squares ellipse fitting method is used to model the pupil boundary mathematically. This approach provides: ellipse-specificity, which gives useful results under occlusion; invariance to affine transformation; high robustness to noise; and high computational efficiency.
A general conic can be represented by a second order polynomial;
F(a, x) = a · x = ax^2 + bxy + cy^2 + dx + ey + f = 0
where a = [a b c d e f]^T and x = [x^2 xy y^2 x y 1]^T. This model represents any two-dimensional elliptical section of a three-dimensional cone. The x and y values are the pixel coordinates of the edge points of the pupil boundary. However, a second order polynomial representation assumes the coordinates are Cartesian coordinates. The pupil boundary coordinates should be normalized before fitting.
A simple constraint applied to the linear least-squares problem provides the high efficiency of the ellipse fitting algorithm. The constraint fixes the discriminant of the conic, which must be negative for an ellipse, so that
4ac − b^2 = 1
This quadratic constraint can be expressed in matrix form as a^T C a = 1.
Now the constrained ellipse fitting problem reduces to minimizing ∥Da∥^2 subject to a^T C a = 1, where D is the design matrix built from the boundary points; this constrained minimization can be solved as a generalized eigenvalue problem.
The direct ellipse-specific fitting algorithm assumes the input is normalized, so the estimated coefficients found from minimizing the least squares are normalized coefficients. Estimated coefficients need to be un-normalized before being applied to the pupil boundary. The fitted ellipse boundary points over detected boundary points can be seen in
The advantage of the direct least-squares ellipse fitting method is the robustness of the fit against noisy data, occlusion, and rotation. As seen in
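The direct fit described above can be sketched as a generalized eigenvalue problem, following Fitzgibbon et al.'s formulation with the 4ac − b^2 = 1 constraint. The code below is an illustrative Python version on noisy circle samples, not the disclosed implementation; the function name and tolerances are assumptions:

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Direct least-squares ellipse fit: minimize ||Da||^2 subject to
    a^T C a = 1 (i.e. 4ac - b^2 = 1), solved via an eigenproblem."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                        # scatter of the design matrix
    C = np.zeros((6, 6))               # constraint matrix: a^T C a = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S, C))
    i = np.argmax(eigvals.real)        # the ellipse is the positive eigenvalue
    return eigvecs[:, i].real

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
x = 1.0 + 2.0 * np.cos(t) + 0.02 * rng.standard_normal(t.size)
y = 1.0 + 2.0 * np.sin(t) + 0.02 * rng.standard_normal(t.size)
a, b, c, d, e, f = fit_ellipse_direct(x, y)
cx = (2.0 * c * d - b * e) / (b * b - 4.0 * a * c)   # conic center
cy = (2.0 * a * e - b * d) / (b * b - 4.0 * a * c)
```

The recovered center should land near the true center (1, 1), and the discriminant b^2 − 4ac must be negative, confirming the ellipse-specificity of the constraint.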
After the pupil boundary is modeled and the pupil area is detected, an intensity and variance test is applied to the pupil region to evaluate whether the detected pupil region is correct. The basic assumption is that the pupil region should have lower intensity than its four neighboring regions and that the variance inside the pupil area should be low after the glare region is removed. If the detected pupil area's average intensity is higher than that of one of its four neighbors, or the variance inside the detected pupil area is higher than a given threshold, then the frame is discarded. Otherwise, the iris segmentation method continues with processing to detect the iris boundary.
Iris boundary detection and location is similar to pupil boundary detection and location. The iris boundary is almost a perfect circle when the eye is looking to the front. Because there is no muscle controlling the shape of the iris boundary, the radius of the iris boundary only changes with the distance of the camera to the eye. However, for non-cooperative video frames, the shape of the iris also changes with the motion of the eye. Even if it is almost a perfect circle in a frontal view, the iris boundary has an elliptical shape when the eye is looking in another direction. The rotation of the eye relative to the camera axis creates a projective transformation of the iris boundary.
The same ellipse fitting algorithm is used for modeling the iris boundary. However, finding the iris boundary points is more difficult than finding the pupil boundary points. The pupil boundary has stronger edges than the iris boundary, and the occlusion by the eyelids is less for the pupil boundary than it is for the iris boundary. An angular derivative is used instead of the gradient image for detecting the iris boundary points in order to separate the vertically strong eyelids and horizontally strong eyelashes from the iris boundary points.
where D = [−2, −1, 0, 1, 2], Im is the original image, x and y are the pixel coordinates, and θ is the angle. The iris area has a lower intensity than the sclera area of the eye. Consequently, the signed value of the angular derivative is used rather than the absolute value, and only positive values are taken into consideration. The strongest positive angular derivatives, excluding the pupil edge locations, are stored as iris boundary points. The top part of the iris (45° to 135°) is ignored because of severe occlusion of the iris region by the top eyelid.
The angular derivative of the original image, after removal of the detected pupil region, can be seen in
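One way to picture the radial search with the D = [−2, −1, 0, 1, 2] kernel is the sketch below, which samples a synthetic eye image along one ray and takes the strongest positive derivative as the dark-iris-to-bright-sclera transition. The sampling geometry and the helper name `radial_edge` are illustrative assumptions:

```python
import numpy as np

D = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # derivative kernel from the text

def radial_edge(image, cx, cy, theta, r_max):
    """Sample the image along a ray from (cx, cy) at angle theta and
    return the radius of the strongest positive derivative response."""
    radii = np.arange(r_max)
    xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    profile = image[ys, xs].astype(float)               # radial intensity profile
    deriv = np.convolve(profile, D[::-1], mode='same')  # signed derivative (correlation with D)
    return int(np.argmax(deriv))                        # only positive peaks matter

img = np.full((101, 101), 200.0)                        # bright sclera
yy, xx = np.mgrid[0:101, 0:101]
img[(xx - 50) ** 2 + (yy - 50) ** 2 <= 400] = 60.0      # dark iris disc, radius 20
r = radial_edge(img, 50, 50, 0.0, 40)
```

Because the intensity steps up from 60 to 200 at radius 20, the peak of the signed derivative lands at the disc boundary.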
One of the key steps of iris recognition is accurately segmenting the iris region from the rest of the image. Locating the eye and extracting the pupil and iris boundaries in the image is not trivial for non-cooperative iris video frames. The location and orientation of the eye and occlusion by the eyelids and eyelashes create non-ideal circumstances for segmentation. By using the robustness of the direct least-squares ellipse fitting method, the negative effects of occluded and noisy data are minimized, and the efficiency of the algorithm makes the proposed method applicable to video-based image processing.
After segmenting the iris area from the rest of the frame, the noisy information inside the iris region should be cleared. Additionally, because the iris area is generally occluded by eyelids and eyelashes, the occluded portion of the image should be eliminated before being used for identification. The removal of the noise and occlusion data is more difficult with non-cooperative images than with cooperative images for a number of reasons. For one, the eyelids in frontal gaze images are assumed to be the horizontally strongest edges. Also, eyelid information exists on both sides of the iris area in these images. However, for non-cooperative iris images in which the eye can look in any direction, the shape of the eyelid changes. This change can cause mixing of the pupil and iris boundaries with eyelid information. Moreover, the assumption of having eyelid information on both sides of the iris area is not always true in non-cooperative images. Furthermore, the environment is more homogeneous in cooperative image capture systems, so the contrast of the image is higher. This contrast enables the eyelashes to be more easily distinguished inside the iris region. For non-cooperative images, on the other hand, the illumination is inconsistent. This factor, coupled with movement of the eye, makes the contrast in a frame of non-cooperative image data low. Thus, detection of the eyelashes is more challenging.
In order to separate noisy data inside the iris region from the iris pattern, a window based variance and intensity thresholding method is used. This method is based on the assumption that noisy data has more variance and edge information than the iris patterns and that the intensity of noise is either higher or lower than the iris patterns. The variance and intensity thresholding method uses a W×W window that is captured around a pixel (x,y) inside the iris region. The variance inside the window can be described as:
where W is the size of the window, μ is the mean inside the window, and
The output of the window based variance calculation method can be seen in
The mean intensity of the window is used to remove dark regions and bright regions inside the iris area. To calculate the thresholds, the median intensity inside the detected iris region is calculated, and the lower and upper thresholds are derived from that median intensity. All of the variance, edge, and intensity thresholds are adaptive, set according to the contrast and intensity of the whole frame.
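The window-based variance and intensity test might look like the following sketch. The window size, variance threshold, and intensity band are illustrative placeholders, not values from the text, and the synthetic "eyelash" patch is toy data:

```python
import numpy as np

def noise_mask(region, W=5, var_thresh=400.0, band=60.0):
    """Flag pixels whose W x W neighborhood has high variance or whose
    local mean falls outside an intensity band around the region median."""
    h, w = region.shape
    half = W // 2
    mask = np.zeros_like(region, dtype=bool)
    med = np.median(region)
    lo, hi = med - band, med + band            # adaptive intensity thresholds
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = region[y - half:y + half + 1, x - half:x + half + 1]
            if win.var() > var_thresh or not (lo <= win.mean() <= hi):
                mask[y, x] = True
    return mask

# synthetic iris patch (intensity 120) with a dark eyelash streak (intensity 10)
patch = np.full((20, 20), 120.0)
patch[8:12, :] = 10.0
mask = noise_mask(patch)
```

Pixels inside the dark streak fail the intensity band, pixels on its edge fail the variance test, and pixels in the clean iris area pass both.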
As seen in
After detecting the boundary points for the top and bottom eyelids separately, as shown in
Besides the iris segmentation, noise cancellation is also more challenging for non-cooperative iris images because the image capturing environment is heterogeneous and eye movement affects the iris image. The proposed noise cancellation method removes high variance, low intensity, and high intensity regions from the iris region by using a window based variance and intensity thresholding method. Then, eyelid boundaries are detected from the noise mask and modeled as a second degree polynomial with convex and concave restrictions corresponding to the top or bottom eyelid. As shown in
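The eyelid model described above — a second-degree polynomial accepted only when its curvature matches the expected lid shape — can be sketched as follows. The sign convention (image y growing downward, so the top lid's parabola opens with a negative leading coefficient) is an assumption for illustration:

```python
import numpy as np

def fit_eyelid(xs, ys, top=True):
    """Fit boundary points with y = a x^2 + b x + c and keep the fit only
    if the curvature sign matches the expected lid shape."""
    a, b, c = np.polyfit(xs, ys, 2)
    # assumed convention: image y grows downward, so the top lid's arc has
    # its largest y at the center, i.e. its parabola must open with a < 0
    if top and a >= 0:
        return None
    if not top and a <= 0:
        return None
    return float(a), float(b), float(c)

xs = np.arange(0.0, 21.0, 2.0)
ys = -0.05 * xs ** 2 + 1.0 * xs + 10.0   # synthetic top-eyelid boundary points
coeffs = fit_eyelid(xs, ys, top=True)
rejected = fit_eyelid(xs, ys, top=False)
```

The same points fit cleanly as a top lid but are rejected as a bottom lid, which is exactly the convex/concave restriction at work.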
In order to improve the efficiency of iris recognition systems using images captured in a non-cooperative environment, a method processes video images to eliminate poor-quality frames and to assess the accuracy of segmented iris images generated from the processed frames. Thus, a typical iris recognition system 10, shown in
As shown in
There are a number of factors that affect iris image quality. These factors include the illumination and movement of the subject, the quality of the camera, out-of-focus conditions, and closed eyes, among others. In the quality filter 14, the goal is to identify problematic images quickly and to eliminate the iris images that fail to possess sufficient quality for further processing, such as images without eyes, interlaced frames, severely blurred images, and motion-blurred images. Previously known quality measure methods are applied to a single image, and most of them require iris segmentation before quality evaluation. Iris segmentation consumes significant processing resources, including time. In order to improve the efficiency and accuracy of iris recognition, poor quality images need to be detected quickly and eliminated before processing resources are expended on them. In the quality filter module 14, the quality of the incoming video frames is determined without requiring iris segmentation.
Eye tracking has been previously used to detect video frames that contain a blinking eye or those that have no eye image in them. Eye tracking includes calculating the difference image between two consecutive frames, using edge detection to find the region of interest, and object classification to detect if a valid eye is present. Such a method is shown in
The quality filter module 14 uses the eye's specular reflection as an indicator of the presence of a useful eye image in a frame. This specular reflection is shown in various eye images in
Another problem detected by the quality filter 14 is an interlaced frame. In a video sequence, each frame represents a unique instance in time. In an interlaced frame, a single frame of video does not correspond to a single instant in time. Instead, half of the frame shows an object at one point in time and the other half of the frame shows the object at a later time in the sequence.
Quality filter 14 determines whether a frame is interlaced or not by calculating the correlation coefficient of the odd portion and the even portion in a frame.
ρ(X, Y) = cov(X, Y)/(σ_X σ_Y) = E[(X − μ_X)(Y − μ_Y)]/(σ_X σ_Y)
where X and Y represent the frame's odd portion and even portion, respectively, E is the expected-value operator, cov denotes covariance, σ denotes standard deviation, and μ denotes the mean. Since the odd portion and the even portion of an interlaced frame are captured at two different points in time, the correlation coefficient scores for interlaced frames should be lower than those for non-interlaced frames. Detection of the lower coefficient score in a video sequence enables the quality filter module 14 to identify interlaced frames and remove them from the sequence so they do not undergo iris segmentation processing.
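As a rough illustration of this test, the sketch below correlates the odd and even fields of two synthetic frames; `field_correlation` is an assumed helper name, not the module's actual code:

```python
import numpy as np

def field_correlation(frame):
    """Pearson correlation between a frame's even and odd scan lines."""
    even, odd = frame[0::2, :].ravel(), frame[1::2, :].ravel()
    n = min(even.size, odd.size)
    return float(np.corrcoef(even[:n], odd[:n])[0, 1])

rng = np.random.default_rng(0)
# progressive frame: each scan line duplicated, so the two fields match
progressive = np.repeat(rng.uniform(0.0, 255.0, size=(16, 32)), 2, axis=0)
# "interlaced" frame: odd field replaced with content from a different instant
interlaced = progressive.copy()
interlaced[1::2, :] = rng.uniform(0.0, 255.0, size=(16, 32))
score_prog = field_correlation(progressive)
score_int = field_correlation(interlaced)
```

The matched fields correlate almost perfectly, while the time-offset fields correlate near zero, which is the gap the filter thresholds on.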
When acquiring iris images, movement of the eye and improper focus of the camera may cause blurriness in the acquired image. The three most frequent kinds of blurriness encountered are motion blur, out-of-focus blur, and a combination of the two. To detect out-of-focus blur, the quality filter module 14 smooths an image with a blur filter and compares the energy of the higher-frequency image components with that of a clear image. Application of a blur filter suppresses the energy of the higher-frequency components, so the higher frequencies in the entire image are decreased. Modeling the defocus as a blurring process applied to the entire image works efficiently for focus assessment. The following spatial 8-by-8 filter is used to extract the higher-frequency components.
As explained above, the energy of the blur-filtered image is compared with that of a clear image to detect out-of-focus blur. Following this idea, the summed focus score is passed through a compressive function:
where the parameter c is the half-power point of the clearest frames, in order to generate a normalized focus score in the range of 0 to 1 for any frame. The highest score in the sequence of frames is selected for normalization. If the normalized score of a frame is lower than a threshold T, the frame is classified as a blurred frame.
Illumination, however, causes the half-power to vary from one video sequence to another. More importantly, in a real-time situation, evaluation of the value of c from an entire sequence is impossible. To address this issue, quality filter module 14 updates the value sequentially. The on-line method for this updating is:
In this on-line method, each frame is convolved with a high-pass filter, and the sum of the squares of the convolved frame is calculated as the focus score. The updated highest focus score is then used as the parameter to normalize these scores from 0 to 1. T is used as the threshold to separate out-of-focus frames from clear frames.
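A toy version of this focus measure might look like the following; a 3x3 high-pass kernel stands in for the 8-by-8 filter of the text, and the compressive normalization follows the s^2/(s^2 + c^2) form with c set from the clearest frame. All names and parameter values are illustrative assumptions:

```python
import numpy as np

HIGH_PASS = np.array([[-1.0, -1.0, -1.0],
                      [-1.0,  8.0, -1.0],
                      [-1.0, -1.0, -1.0]])

def convolve_valid(image, kernel):
    """Direct valid-mode 2-D convolution (no padding)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * image[dy:dy + oh, dx:dx + ow]
    return out

def focus_energy(image):
    """Sum of squares of the high-pass response: the raw focus score."""
    return float(np.sum(convolve_valid(image, HIGH_PASS) ** 2))

def normalized_score(energy, c):
    return energy ** 2 / (energy ** 2 + c ** 2)   # compressive function, 0..1

rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 1.0, size=(64, 64))                   # high-frequency content
blurred = convolve_valid(sharp, np.full((5, 5), 1.0 / 25.0))   # box-blurred copy
e_sharp, e_blur = focus_energy(sharp), focus_energy(blurred)
c = e_sharp                     # half-power set by the clearest frame seen so far
s_sharp, s_blur = normalized_score(e_sharp, c), normalized_score(e_blur, c)
```

With c pegged to the clearest frame, that frame scores exactly 0.5 and blurrier frames fall toward 0, so a single threshold T separates them.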
Some motion blur images are also out-of-focus, most typically from head movement. This type of blurriness can be detected using an out-of-focus filter. If the motion blur is only in a small area with focus, such as occurs from eye movement, use of a simple out-of-focus filter may not be adequate. In order to detect this type of blurriness, the quality filter module 14 uses a modified Cepstral technique. To better understand the modified technique, the nature of motion blur is now discussed. Motion blur occurs when relative motion occurs between the camera and the object being captured. In linear motion blur, blur is in one direction. The motion blurred image is usually modeled as a convolution with a blurring kernel (or Point Spread Function, PSF) followed by noise addition.
g(x,y)=ƒ(x,y)*h(x,y)+n(x,y),
where, ƒ(x,y) is the original image, h(x,y) is the point-spread function, n(x,y) is a noise function, and g(x,y) is the degraded image. The method used to identify linear motion blur is to compute the two-dimensional spectrum of the blurred image g(x,y). The point spread function for motion blur with a length of L and angle θ is given by
where
The Fourier transform of the function h(x, y) in the equation above is a sinc function. Multiplying this sinc function by F(u, v) in the frequency domain preserves the ripples of the sinc function. By identifying the ripples in G(u, v), the blur angle and blur length can be estimated.
If the unmodified Cepstral method were used to identify linear motion blur, the two-dimensional Cepstrum of the blurred image g(x,y) would be computed as shown:
C{g(x, y)} = F^−1(log|F(g(x, y))|)
C{g(x, y)} = C{ƒ(x, y)} + C{h(x, y)},
In this computation, the noise function is ignored. The Fourier transform of the point-spread function, H(u, v), is a sinc function with zeros. Thus, log(|H(u, v)|) has large negative spikes. In this way, the spikes may be used to identify the motion blur of an image. This method assumes that the entire image is corrupted with the same point-spread function.
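The spike behavior can be demonstrated on synthetic data: blurring with a linear motion point-spread function introduces near-zeros in the spectrum, so the minimum of the log-magnitude spectrum drops sharply. This is an illustrative sketch of the principle, not the module's implementation; the PSF length and image size are assumptions:

```python
import numpy as np

def log_spectrum_min(image):
    """Most negative value of the log-magnitude spectrum (the 'spike')."""
    mag = np.abs(np.fft.fft2(image))
    return float(np.log(mag + 1e-12).min())

rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 1.0, size=(64, 64))
psf = np.zeros((1, 8))
psf[0, :] = 1.0 / 8.0                          # horizontal motion PSF, length L = 8
# blur via frequency-domain multiplication with the PSF's transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
spike_sharp = log_spectrum_min(sharp)
spike_blur = log_spectrum_min(blurred)
```

Because the length-8 PSF's transfer function has exact zeros at every eighth frequency bin, the blurred image's log spectrum plunges far below that of the unblurred image.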
This assumption, however, does not always hold true for an iris image. When the subject's head is still and only the eyeball is moving, the captured image is not entirely blurred by the motion. In such a case, applying the Cepstral method to the entire image cannot detect the motion blur. For example,
To overcome this limitation, this technique may be modified to compute a difference image between the current frame and the previous frame to roughly identify the iris area. This operation is depicted pictorially in
To overcome this shortcoming of the Cepstral method, the quality filter module 14 uses glare area detection for local motion blur assessment. The glare area is the brightest area of an iris image. Glare inside a pupil or nearby a pupil area within a good quality image can be modeled as a bright object on a much darker background with sharp edges as shown in
While glare detection is useful, care must be taken in a non-cooperative image acquisition situation, because multiple areas may have very bright illumination and unwanted glare areas (glares that are not inside the eye ball). When a subject moves the eye, only the glare in the iris region may be used as a reference to detect iris local motion blur. For example, as shown in
After applying a Sobel filter to an image, such as the one shown in
Otsu's method assumes the image contains two classes of pixels and finds the optimum threshold separating the two classes so that their within-class variance is minimal. The first thresholding process separates the bright area from the dark area. The second thresholding process is applied to the bright area, which was segmented by the first threshold, to see if there is a gray area. If the image is a motion blurred image, the second thresholding finds a large amount of gray area. For example,
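A sketch of this two-pass thresholding on a synthetic glare profile follows; the histogram-based `otsu_threshold` helper and the pixel counts are illustrative, and a real implementation would operate on image windows rather than a flat value list:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance (equivalently, minimizes within-class variance)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = values.size
    best_t, best_var = edges[1], -1.0
    for i in range(1, bins):
        w0 = hist[:i].sum()
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if between > best_var:
            best_var, best_t = between, edges[i]
    return best_t

values = np.concatenate([np.full(500, 20.0),    # dark background
                         np.full(60, 250.0),    # bright glare core
                         np.full(90, 140.0)])   # gray motion smear
t1 = otsu_threshold(values)                     # first pass: dark vs. bright
bright = values[values > t1]
t2 = otsu_threshold(bright)                     # second pass on the bright side
gray_fraction = float(np.mean(bright <= t2))    # gray area found by pass two
```

Here the first threshold splits off the dark background, and the second pass exposes the gray smear as a large fraction of the bright region, the signature of a motion-blurred glare spot.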
In some cases, the glare may be on both the pupil and the iris area as shown in
After the frames have been filtered to remove those with images that are not useful for iris segmentation, the remaining frames are subjected to the iris segmentation process, and the segmented iris image is evaluated. This evaluation is important because some good-quality images may not be segmented correctly. For example, many segmentation methods assume that pupil and limbic boundaries are circular, even though non-circular boundaries for these features are commonly acknowledged, even in images obtained in a cooperative environment. As a result, the segmentation process may not always achieve highly accurate results. A number of improvements have been made in iris image segmentation, but evaluation of the results remains important.
To evaluate iris segmentation accuracy, the center of the pupil, the pupil boundary, the outer iris boundary, which could include limbic boundaries, and the output mask from the segmentation processing are needed. These parameters are the standard outputs from the processing for iris image normalization and polar transformation. Therefore, adding an evaluation step to the iris segmentation processing does not impose an additional burden on the iris recognition system. Using the center coordinates, a horizontal rectangular area, which includes the pupil, the iris, and the background, is cut as shown in
Comparing the segmented pupil boundary with the segmentation result, the accuracy of the center and the pupillary boundary can be determined efficiently.
In
In addition to the pupil center and boundary evaluation, the detection of the outer boundary of the iris region is also tested for accuracy. The iris boundary could include the limbic boundary as well as the eyelid boundaries. The correct iris boundary should separate the iris from outside noise regions, such as eyelids, eyelashes, glare, and sclera. Compared to the noise regions, the iris region should be more homogeneous, and this characteristic may be used as a quality measurement if the iris boundary detection is accurate. A homogeneity measure function useful for measuring local homogeneity is:
where BK(c) is a ball of any radius k centered at c, [BK(c)−Bk-1(c)] is the number of pixels in BK(c)−Bk-1(c), ƒ(c) is the pixel value of the center, and ƒ(d) is the pixel value at location d. And W is the homogeneity function:
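A minimal sketch of such a ring-based local homogeneity measure follows. The Gaussian form of W, the small integer radii, and all function and parameter names are assumptions made for illustration.

```python
import math

def ring_offsets(k):
    # Pixel offsets in B_k(c) - B_{k-1}(c): the discrete ring of pixels
    # whose distance from the center c lies in (k-1, k].
    return [(dy, dx)
            for dy in range(-k, k + 1)
            for dx in range(-k, k + 1)
            if (k - 1) ** 2 < dy * dy + dx * dx <= k * k]

def local_homogeneity(img, cy, cx, max_radius=3, sigma=10.0):
    # Sum over rings k = 1..K of the mean weight W(f(c) - f(d)) over the
    # ring's pixels, where W (assumed Gaussian) is near 1 for pixels
    # similar to the center and near 0 for dissimilar ones.
    fc = img[cy][cx]
    total = 0.0
    for k in range(1, max_radius + 1):
        ring = ring_offsets(k)
        total += sum(
            math.exp(-((fc - img[cy + dy][cx + dx]) ** 2) / (2 * sigma ** 2))
            for dy, dx in ring) / len(ring)
    return total

uniform = [[100] * 9 for _ in range(9)]   # perfectly homogeneous patch
checker = [[255 if (y + x) % 2 else 0 for x in range(9)] for y in range(9)]
```

On the uniform patch every ring contributes its maximum of 1, so the score equals the number of rings; the checkerboard patch scores much lower, illustrating how the measure separates homogeneous iris texture from noisy regions.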
The entire outer iris boundary may separate different regions. For example, the upper iris boundary may separate the upper eyelids from the iris, while the left and right iris boundaries may separate the sclera from the iris. If the local homogeneity measure function were applied directly to the entire iris region, it could give incorrect results. Based on this observation of iris boundaries, the boundary is divided into four regions: the upper, lower, left, and right boundaries. For each region, 20 pixels are selected inside and outside of the detected iris boundary, respectively. The homogeneity measure is then calculated as:

FO(k) = (1/N(E(k))) Σ_{d∈E(k)} W(c − ƒ(d))

where E(k) is the edge at a kth pixel distance from the detected iris boundary, c is the estimated iris area pixel value, N is the function that counts the number of pixels in E(k), and W is the homogeneity function:

W(x) = e^(−x²/(2σ²))

where σ is the standard deviation of the iris part given by the segmentation result. Therefore, if a selected pixel d is inside the iris, then the value of W(c − ƒ(d)) would be high. If the edge is inside the iris, then FO(k) should be high; otherwise, the value should be low. If the value of k is negative, then the edge is outside of the detected boundary; if the value of k is positive, then the edge is inside the boundary. If the detected iris boundary is correct, then the inside edges should have high values while the outside edges should have lower values. In other words, by measuring the FO values of the outer edges and the inner edges, the accuracy of the detected boundary can be evaluated. This boundary accuracy is defined as:

A = [Σ_{k=−D}^{−1} FO(k)] / [Σ_{k=1}^{D} FO(k)]
where D is the biggest pixel distance selected in the calculation. In the segmentation evaluation module 18, D is set to a value of 20. Thus, if the detected boundary is correct, the ratio of the summed FO values of the outer edges to those of the inner edges would be low, which confirms that the accuracy is high. If the detected boundary has a smaller scale than the actual boundary, this ratio is still low, but the summed FO value of the inner edges is also low, which means the accuracy of the boundary is not good. Here, the boundary accuracy is redefined for checking the upper boundary as shown in
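The edge homogeneity measure FO(k) and the boundary accuracy check described above can be sketched as follows. The Gaussian form of W follows the definition of σ in the text; the outer/inner ratio used for the accuracy, and all function names, are assumptions consistent with the description rather than the source's exact expressions.

```python
import math

def W(x, sigma):
    # Homogeneity weight: near 1 when |x| is small relative to sigma,
    # i.e. when a pixel value matches the estimated iris value c.
    return math.exp(-(x * x) / (2.0 * sigma * sigma))

def FO(edge_pixel_values, c, sigma):
    # FO(k) = (1 / N(E(k))) * sum over d in E(k) of W(c - f(d)):
    # the mean homogeneity of one edge relative to the iris value c.
    return sum(W(c - f, sigma) for f in edge_pixel_values) / len(edge_pixel_values)

def boundary_accuracy(fo_by_offset, D=20):
    # Compare summed FO values outside (k < 0) and inside (k > 0) the
    # detected boundary.  A low outer/inner ratio together with a high
    # inner sum indicates an accurate boundary; a low inner sum means
    # the detection is poor even when the ratio is low.
    outer = sum(fo_by_offset[k] for k in range(-D, 0))
    inner = sum(fo_by_offset[k] for k in range(1, D + 1))
    return outer / inner, inner

# Correct boundary: iris-like (homogeneous) edges inside, noise outside.
iris_edge = FO([118, 122, 120, 119, 121], c=120.0, sigma=8.0)    # high
sclera_edge = FO([200, 210, 205, 198, 202], c=120.0, sigma=8.0)  # low
good = {k: (iris_edge if k > 0 else sclera_edge)
        for k in range(-20, 21) if k != 0}
ratio_good, inner_good = boundary_accuracy(good)
```

In the good case the inner edges score near 1 and the outer edges near 0, so the ratio is small while the inner sum is large, which is the signature of a correct boundary.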
The iris boundary detection score for the entire iris boundary is the combination of the four parts. As is known, if any part of the segmentation is wrong, the iris recognition accuracy may be adversely affected. Therefore, the accuracy of the entire iris boundary detection is calculated as the average accuracy score of the four regions:

I = (A_upper + A_lower + A_left + A_right)/4

where A_upper, A_lower, A_left, and A_right are the boundary accuracy scores of the upper, lower, left, and right regions, respectively.
The segmentation evaluation module 18 generates a composite score for each segmented image. The composite score combines the pupil boundary score, the iris boundary score, and the image quality score, and may be expressed as:

SQ = F1(P)·F2(I)·Q

where P is the pupil boundary score, I is the iris boundary score, Q is the quality score, and the F1 and F2 functions normalize the pupil boundary score and the iris boundary score from 0 to 1.
The relationship between the available iris patterns and the iris recognition accuracy is exponential rather than linear. The F1 and F2 functions are thus expressed as sigmoid functions:

F1(P) = 1/(1 + e^(−(κ1·P−λ1)))

F2(I) = 1/(1 + e^(−(κ2·I−λ2)))

Because an error in the pupil boundary incurs a greater penalty than an error in the iris boundary, the two functions cannot be exactly the same; the rising slope of F1(P) should be steeper than that of F2(I). From empirical analysis, the following values have been derived: κ1=0.025, λ1=3.7, κ2=0.011, and λ2=4.5.
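A sketch of the composite scoring step follows. The κ and λ values and the product SQ = F1(P)·F2(I)·Q come from the text; the logistic form of the normalization functions is an assumption (the text specifies only exponential-like maps into [0, 1] with F1 rising more steeply than F2), and the sample scores are illustrative.

```python
import math

KAPPA1, LAMBDA1 = 0.025, 3.7   # pupil-boundary normalization (values from the text)
KAPPA2, LAMBDA2 = 0.011, 4.5   # iris-boundary normalization (values from the text)

def normalize(score, kappa, lam):
    # Logistic map of a raw boundary score into (0, 1); the logistic
    # form is an assumption, chosen because its slope grows with kappa,
    # matching the requirement that F1 rise more steeply than F2.
    return 1.0 / (1.0 + math.exp(-(kappa * score - lam)))

def composite_score(pupil_score, region_scores, quality_score):
    # SQ = F1(P) * F2(I) * Q, where I averages the four iris-boundary
    # region scores (upper, lower, left, right).
    iris_score = sum(region_scores) / len(region_scores)
    return (normalize(pupil_score, KAPPA1, LAMBDA1)
            * normalize(iris_score, KAPPA2, LAMBDA2)
            * quality_score)

sq = composite_score(pupil_score=200.0,
                     region_scores=[300.0, 350.0, 400.0, 380.0],
                     quality_score=0.9)
```

Because each factor lies in [0, 1], a failure in any one component (pupil boundary, iris boundary, or image quality) drags the composite score down, which matches the observation that any wrong part of the segmentation harms recognition accuracy.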
Using the implementations described above, a quality filter module 14 and a segmentation evaluation module 18 are coupled to an iris identification system as shown in
Those skilled in the art will recognize that numerous modifications can be made to the specific implementations described above. While the embodiments above have been described with reference to specific applications, embodiments addressing other applications may be developed without departing from the principles of the invention described above. Therefore, the following claims are not to be limited to the specific embodiments illustrated and described above. The claims, as originally presented and as they may be amended, encompass variations, alternatives, modifications, improvements, equivalents, and substantial equivalents of the embodiments and teachings disclosed herein, including those that are presently unforeseen or unappreciated, and that, for example, may arise from applicants/patentees and others.
This application claims priority from International Application PCT/US09/51452, which is entitled “System And Method For A Non-Cooperative Iris Image Acquisition System,” and was filed on Jul. 22, 2009. This application further claims priority to U.S. Provisional Patent Application 61/083,034, which was filed on Jul. 23, 2008 and entitled “System And Method For Iris Segmentation In A Non-Cooperative Iris Image Acquisition System,” and to U.S. Provisional Patent Application 61/083,628, which was filed on Jul. 25, 2008 and entitled “System And Method For Evaluating Frame Quality And Iris Image Segmentation Quality In A Non-Cooperative Iris Image Acquisition System.”
This invention was made with government support under contract N00014-07-1-0788 awarded by the Office of Naval Research and under contract 2007-DE-BX-K182 awarded by the National Institute of Justice. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2009/051452 | 7/22/2009 | WO | 00 | 2/25/2011 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2010/011785 | 1/28/2010 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5291560 | Daugman | Mar 1994 | A |
5481622 | Gerhardt et al. | Jan 1996 | A |
5572596 | Wildes et al. | Nov 1996 | A |
5835639 | Honsinger et al. | Nov 1998 | A |
6005983 | Anderson et al. | Dec 1999 | A |
6026174 | Palcic et al. | Feb 2000 | A |
6120461 | Smyth | Sep 2000 | A |
6526160 | Ito | Feb 2003 | B1 |
6639628 | Lee et al. | Oct 2003 | B1 |
6714665 | Hanna et al. | Mar 2004 | B1 |
7123751 | Fujieda | Oct 2006 | B1 |
8014571 | Friedman et al. | Sep 2011 | B2 |
20030223037 | Chernyak | Dec 2003 | A1 |
20040174496 | Ji et al. | Sep 2004 | A1 |
20040184670 | Jarman et al. | Sep 2004 | A1 |
20060038892 | Zanzucchi et al. | Feb 2006 | A1 |
20060093998 | Vertegaal | May 2006 | A1 |
20060098867 | Gallagher | May 2006 | A1 |
20060147094 | Yoo | Jul 2006 | A1 |
20060161141 | Chernyak | Jul 2006 | A1 |
20060280249 | Poon | Dec 2006 | A1 |
20070135999 | Kolatt | Jun 2007 | A1 |
20070153233 | Campin et al. | Jul 2007 | A1 |
20070189582 | Hamza et al. | Aug 2007 | A1 |
20070262574 | Breed et al. | Nov 2007 | A1 |
20080253622 | Tosa et al. | Oct 2008 | A1 |
Number | Date | Country |
---|---|---|
WO 9925239 | May 1999 | WO |
Entry |
---|
International Searching Authority; International Search Report for PCT/US2009/051452; Mailed Nov. 20, 2009 (6 pages). |
International Searching Authority; Written Opinion of the International Searching Authority for PCT/US2009/051452; Mailed Nov. 20, 2009 (7 pages). |
Number | Date | Country | |
---|---|---|---|
20110150334 A1 | Jun 2011 | US |
Number | Date | Country | |
---|---|---|---|
61083034 | Jul 2008 | US | |
61083628 | Jul 2008 | US |