The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout.
Referring initially to
The finger sensor 31 illustratively includes a processor 33 cooperating with the finger sensing area 32 for collecting image data therefrom. In some embodiments, the processor 33 may be provided by processing circuitry included on the integrated circuit substrate with the finger sensing area 32, and a host processor (not shown) as typically carried by the housing 21. Such a host processor for the cellular telephone 20 may typically perform the traditional processing for telephone functions, and may also have additional processing capability available for finger matching, finger navigation, etc. as will be appreciated by those skilled in the art. In other embodiments, the processor 33 may be implemented totally along with the finger sensing area 32 or in a separate integrated circuit as will also be appreciated by those skilled in the art.
The finger sensing area 32 illustratively includes an array of sensing pixels, such as electric field sensing pixels 37 formed on an integrated circuit substrate of the type as described in U.S. Pat. No. 5,963,679 to Setlak et al., assigned to the assignee of the present invention, and the entire contents of which are incorporated herein by reference. Of course, the finger sensing device 30 may be based upon other types of finger sensing as will be appreciated by those skilled in the art. For example, the finger sensing area 32 may comprise at least one of an electric field finger sensing area, a capacitive finger sensing area, an optical finger sensing area, and a thermal finger sensing area.
The processor 33 cooperates with the finger sensing area 32 for reducing a number of possible match combinations between a sensed finger data set and each of a plurality of enrolled finger data sets. The processor 33 illustratively includes a memory 34 for storing the enrolled finger data sets, coupled to the schematically illustrated indexer 35 that reduces the possible matching combinations. Accordingly, the processor 33 may reduce the number of possible match combinations by generating a plurality of overlap hypotheses for each possible match combination, generating a co-occurrence matrix score based upon the plurality of overlap hypotheses for each possible match combination, and comparing the co-occurrence matrix scores to thereby reduce the number of possible match combinations. The co-occurrence matrix scores may be compared to each other and a top percentage (e.g., the top ten percent) selected, or each score may be compared to a threshold score for selection, for example.
The processor 33 also illustratively includes a matcher 36 to perform a match operation for the sensed finger data set based upon the reduced number of possible match combinations. The sensed finger data set may comprise a sensed finger ridge flow data set, and each enrolled finger data set may comprise an enrolled finger ridge flow data set.
Reducing the possible match combinations, as performed by the processor 33, may further comprise applying at least one filter to the plurality of overlap hypotheses prior to generating the co-occurrence matrix score. For example, the at least one filter may comprise one or more of an overlap area filter, an overlap content filter, and a histogram based distance filter. These filters are discussed in greater detail below.
Since the sensed finger data set may comprise a sensed finger ridge flow data set, and the enrolled finger data sets may comprise enrolled finger ridge flow data sets, the processor 33 may generate the co-occurrence matrix score by first reducing a number of matrix entries based upon ridge flow directions. For example, reducing the number of matrix entries may include reducing the number of matrix entries based upon ridge flow directions at a plurality of anchor points. The processor 33 may also cooperate with the finger sensing area 32 to generate the enrolled finger data sets, and the enrolled finger data sets may comprise data relating to the plurality of anchor points.
The finger sensing device 30 may be readily included in an electronic device, such as the illustrated cellphone 20 (
Referring now briefly and additionally to the flowchart 40 of
There are a number of constraints that may be helpful to review. Since partial fingerprints are available for indexing, the features used should be able to capture local properties of the fingerprint. The features used should be simple, and should be computed rather inexpensively. Comparing two fingerprints based on these features should be computationally inexpensive. To allow for low-end, low-memory processors, the approach should not inherently use memory-expensive procedures, such as hash tables. The approach may make use of the ridge flow maps as much as possible, because a typical sensor system is available to give a clear picture of ridges, which may then also be used by the final matching algorithm.
There are also certain assumptions that may be helpful to consider. For example, a small set of “essential” data may be added to the template size. This will essentially be the information regarding the four anchor points, for example, (or points of significance) per enrolled data set or template. For faster computations (during indexing), an auxiliary set of “non-essential” data can be added to the template. Typically, for a fingerprint image of size N×N, this is an array of size (N/8)×(N/8) for every enroll template. Hence, the storage need per node is (N/8)² bytes.
Those of skill in the art will recognize that co-occurrence matrices have been used widely in comparing textures, and for web based image indexing. They generalize the one dimensional histograms computed using the grayscale values in an image. This concept has been significantly extended for indexing using ridge flow maps computed from the fingerprint images as described in detail herein. A co-occurrence matrix of an image is a 3D array, indexed by C1, C2, and D, where C1 and C2 are the grayscale value axes, and D is the distance axis. In general, a co-occurrence matrix A is constructed from an image IM of size N×N, for example, using the following algorithm:
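The referenced construction may be sketched as follows. This is a hedged illustration rather than the patent's exact algorithm: the choice of 32 levels, Dmax=3, and the Manhattan Dist( ) convention are assumptions consistent with the surrounding description.

```python
def cooccurrence_matrix(im, levels=32, d_max=3):
    """Build the 3D co-occurrence matrix A, indexed by C1, C2, and D.

    A[c1][c2][d] counts ordered pixel pairs whose values are c1 and c2
    and whose Dist() is d (here Manhattan distance, 0 <= d <= d_max).
    For D=0 the diagonal (C1=C2) reduces to the ordinary histogram.
    """
    n = len(im)
    A = [[[0] * (d_max + 1) for _ in range(levels)] for _ in range(levels)]
    for y in range(n):
        for x in range(n):
            for dy in range(-d_max, d_max + 1):
                for dx in range(-d_max, d_max + 1):
                    d = abs(dx) + abs(dy)  # Manhattan Dist(); Euclidean is another option
                    y2, x2 = y + dy, x + dx
                    if d <= d_max and 0 <= y2 < n and 0 <= x2 < n:
                        A[im[y][x]][im[y2][x2]][d] += 1
    return A
```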
In the above steps, the Dist( ) function computes the distance between two pixel locations. This could be the Euclidean distance, the Manhattan distance, or any other convention one wishes to follow. Typically, Dmax is 3 in most applications. Note that for D=0, the diagonal (C1=C2) of the co-occurrence matrix is the same as the histogram. Co-occurrence matrices have more descriptive power than histograms, because they capture the spatial relationship between two pixels as well.
To illustrate the point, consider the example in
There are several different types of discriminant functions that can be defined, given two co-occurrence matrices A1 and A2, corresponding to two images, respectively. One of the most frequently used measures is the intersection based distance, defined as
Dist(A1,A2)=1−pop(A1∩A2)/min(pop(A1),pop(A2)).
Here the pop( ) function indicates the population of the matrix, and is the sum of all the entries in the matrix. The (i,j,k)th element of the matrix A1∩A2, also known as the intersection co-occurrence matrix, is the minimum of A1(i,j,k) and A2(i,j,k). If the first image is a subset of the second image (or vice versa), then A1∩A2=A1, and the distance value is 0.
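Both intersection based distances (the min-normalized form above and the average-normalized variant described next) may be sketched as follows; the list-of-lists matrix layout is an illustrative assumption.

```python
def pop(A):
    """pop(): the population of a co-occurrence matrix (sum of all entries)."""
    return sum(v for plane in A for row in plane for v in row)

def pop_intersection(A1, A2):
    """pop(A1 ∩ A2): the (i,j,k)th intersection entry is min(A1(i,j,k), A2(i,j,k))."""
    return sum(min(a, b)
               for p1, p2 in zip(A1, A2)
               for r1, r2 in zip(p1, p2)
               for a, b in zip(r1, r2))

def dist_min(A1, A2):
    """Dist(A1,A2) = 1 - pop(A1∩A2)/min(pop(A1), pop(A2))."""
    return 1.0 - pop_intersection(A1, A2) / min(pop(A1), pop(A2))

def dist_avg(A1, A2):
    """Average-normalized variant: 1 - pop(A1∩A2)/(0.5*(pop(A1)+pop(A2)))."""
    return 1.0 - pop_intersection(A1, A2) / (0.5 * (pop(A1) + pop(A2)))
```

Note that if A1 is a subset of A2, `dist_min` returns 0 while `dist_avg` does not, which is why the latter better tests similarity rather than containment.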
In the present finger sensing device 30, which is driven by anchor point based overlap hypothesis generation and verification, the hypothesis tested is that the first image is similar to the second image. The distance measure used is:
Dist(A1,A2)=1−pop(A1∩A2)/0.5(pop(A1)+pop(A2)).
The description now turns to the adaptation of co-occurrence matrices and their comparison for ridge flow maps. Indeed, the following substantial modifications were made to extend the concept of co-occurrence matrices for ridge flow map comparison. First, a modified distance function to deal with rotation is now described. Instead of computing co-occurrence matrices for fingerprint images, the processor computes the co-occurrence matrices of the ridge flow angles of the fingerprints. Comparing two ridge flow co-occurrence matrices using a distance measure is not trivial, as the fingerprints can undergo significant rotation. Hence, a particular ridge flow value at a particular block in the enroll image may have a value of i1, while the corresponding block in the match image can have a value of i2=i1+θ.
For this, a new co-occurrence matrix A2θ is defined which is the co-occurrence matrix of ridge flow RF2θ constructed from the ridge flow map RF2 by adding θ to each of its elements. The modified distance is computed for the θ value that generates the minimal distance value.
Dist(A1,A2)=minθDist(A1,A2θ)
In the described embodiment, the search is conducted for θ in the range [−30°, 30°]. Also, in this embodiment, A2θ is not computed explicitly. Instead, the following algorithm is adopted to compute pop(A1∩A2θ).
Note that the ridge flow angles for the purpose of co-occurrence matrix computation are quantized into 32 levels, implying that the range of θ (quantized) is [−6,6]. Also, note that ⊕ indicates a circular summation, since the ridge flow angle can vary between 0° and 180°, or between 0 and 31 in the quantized space (i.e., i1⊕θ=(i1+θ) mod 32).
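The algorithm referred to above is not reproduced in this text; under the stated conventions (32 quantization levels, quantized θ in [−6, 6], circular ⊕), it might be sketched as follows. The key observation is that adding θ to every ridge flow value shifts both value axes of the co-occurrence matrix circularly, so A2θ(i1⊕θ, i2⊕θ, d) = A2(i1, i2, d) and pop(A1∩A2θ) can be computed without materializing A2θ.

```python
def pop_intersection_shifted(A1, A2, theta, levels=32):
    """pop(A1 ∩ A2θ) without building A2θ explicitly.

    Uses A2θ[i1⊕θ][i2⊕θ][d] = A2[i1][i2][d], where ⊕ is circular summation.
    """
    d_levels = len(A1[0][0])
    total = 0
    for i1 in range(levels):
        for i2 in range(levels):
            s1 = (i1 + theta) % levels  # circular summation i1 ⊕ θ
            s2 = (i2 + theta) % levels
            for d in range(d_levels):
                total += min(A1[s1][s2][d], A2[i1][i2][d])
    return total

def pop(A):
    """Population of a co-occurrence matrix: sum of all entries."""
    return sum(v for plane in A for row in plane for v in row)

def dist_rotation(A1, A2, levels=32, thetas=range(-6, 7)):
    """Dist(A1,A2) = min over θ of Dist(A1,A2θ), average-population normalized."""
    denom = 0.5 * (pop(A1) + pop(A2))
    return min(1.0 - pop_intersection_shifted(A1, A2, t, levels) / denom
               for t in thetas)
```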
Another aspect relates to making the co-occurrence matrices less crowded to reduce false acceptance. Indexing involves comparing the match image with several enroll images, most of which do not belong to the same finger. It is well known in the literature that clutter in data can lead to high scores while comparing images that do not match, increasing the probability of false matches. This may be especially true for co-occurrence intersection based distance computation, because cluttered (or crowded) matrices generally tend to generate a crowded intersection co-occurrence matrix, leading to a low distance value. To reduce the “crowd” by keeping only meaningful, non-redundant information in the co-occurrence matrix, one more modification may be made to its construction. Instead of visiting all the blocks at a distance “d” away from the block located at (i,j), the ridge direction at the block (i,j) is followed to visit the block at a distance d, only in this direction. This choice is made because there are more variations in the ridge flow values along the ridge direction, as opposed to the direction perpendicular to the ridge. The process is illustrated in
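The direction-following construction may be sketched as below. The decoding of a quantized value to an angle over 0°..180°, the rounding of the step to a block index, and the good-block mask are illustrative assumptions.

```python
import math

def directional_cooccurrence(rf, good, levels=32, d_max=3):
    """Less crowded construction: from block (i,j), visit only the block at
    distance d along the ridge direction at (i,j), rather than every block
    at distance d."""
    n = len(rf)
    A = [[[0] * (d_max + 1) for _ in range(levels)] for _ in range(levels)]
    for i in range(n):
        for j in range(n):
            if not good[i][j]:
                continue  # skip bad (unreliable) blocks
            angle = rf[i][j] * math.pi / levels  # quantized ridge direction
            for d in range(d_max + 1):
                i2 = i + int(round(d * math.sin(angle)))
                j2 = j + int(round(d * math.cos(angle)))
                if 0 <= i2 < n and 0 <= j2 < n and good[i2][j2]:
                    A[rf[i][j]][rf[i2][j2]][d] += 1
    return A
```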
A modification to deal with non-overlap is now described. To make the indexing work with small sensors, it is helpful to assume that the images Im1 and Im2 have significant non-overlap. Hence, the distance measure on their co-occurrence matrices will fail to provide an accurate discrimination between the two images. However, if one had a significant point of interest, formally called the anchor point (for example, a core point at location (xe, ye) in the enroll image, and (xm, ym) in the match image), one can generate the overlap hypothesis, and compare the co-occurrence matrices of the overlap region only.
As illustrated in
The hypotheses pruning is now further described with additional reference to
The first filter 61 corresponds to the minimum overlap area that is required before the next stage is invoked. The threshold τ1 is set to n²/3, where n×n is the size of the ridge flow map. The second stage, or overlap content filter 62, corresponds to the overlap content, or the number of good blocks that are present in both the enroll and the match ridge flow maps. The number of good blocks should exceed τ2, which may be set to n²/3 in one embodiment. The third stage, or histogram based distance filter 64, involves computing the histograms of the overlap areas and comparing them using a histogram intersection based distance d(H1,H2) (analogous to the co-occurrence matrix intersection based distance), where H1 and H2 are the enroll and the match (or sensed data set) histograms, respectively, computed on the overlap region. Since the co-occurrence matrix is a generalized version of the histogram, no further discussion is needed. The only point of note is that the rotation angle for the overlap hypothesis may be computed as
θopt=arg minθ Dist(H1,H2θ).
If Dist(H1,H2) is less than τ3 (=0.200 for an implementation), then this hypothesis is passed to the co-occurrence matrix based distance computation stage 65, along with θopt. Note that this θopt is re-used for co-occurrence matrix based distance computation. That is,
Dist(A1,A2)=Dist(A1,A2θopt)
Passing the rotation value from the histogram based distance computation stage saves a lot of computation time (by sacrificing very little in the indexing accuracy), especially since the co-occurrence matrix intersection based distance computation is usually more involved than the histogram intersection based distance computation.
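The four-stage pruning just described may be sketched as the following pipeline, with the histogram and co-occurrence computations abstracted as callables. The helper names and the exact threshold wiring are illustrative assumptions, not the patent's literal implementation.

```python
def evaluate_hypothesis(overlap_area, good_enroll, good_match,
                        hist_stage, cooc_stage, n=24, tau3=0.200):
    """Evaluate one overlap hypothesis through the four pruning stages.

    hist_stage() -> (hist_distance, theta_opt)
    cooc_stage(theta_opt) -> co-occurrence matrix based distance
    Returns the final distance, or None if any filter rejects the hypothesis.
    """
    tau1 = n * n / 3.0                       # minimum overlap area
    tau2 = n * n / 3.0                       # minimum good-block content
    if overlap_area < tau1:                  # stage 1: overlap area filter
        return None
    if min(good_enroll, good_match) < tau2:  # stage 2: overlap content filter
        return None
    d_hist, theta_opt = hist_stage()         # stage 3: histogram distance + θopt
    if d_hist >= tau3:
        return None
    return cooc_stage(theta_opt)             # stage 4: reuse θopt, no re-search
```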
The processor 33 operates upon the enroll image data sets 66 and the match or sensed image data set 67. The indexer 35 includes a block for performing overlap hypotheses generation 70 and the downstream pruning, evaluation and best hypothesis selection block 71 connected thereto. Note that the four stage pruning process is repeated for all (typically, sixteen) hypotheses. The hypothesis leading to the least distance value is chosen, and this distance value is recorded as the indexing distance between the enroll and the match image.
To further reduce the computation time in indexing, the following options have been introduced and may be used in various embodiments of the finger sensing device 30 as will be appreciated by those skilled in the art. First, local properties of anchor points may be used for pruning. For example, local properties of the ridge flow map, around the anchor points, can be used to further prune the number of overlap hypotheses. In one embodiment, one can use the Harris Cornerness Measure (an output of the Harris Corner Detector that is used for anchor point generation) at each of the anchor points. This comes at no extra cost, and its strength is an indication of the “rate of change” of the ridge flow angle values at that point. If a match anchor point's cornerness strength is less than half or more than double the strength of an enroll image's anchor point, the pair is not evaluated any further. This particular filtering leads to a three-fold increase in the speed of the indexing process, for example.
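The cornerness compatibility test reduces to a two-sided ratio check; a minimal sketch (boundary handling at exactly half or double is an assumption):

```python
def cornerness_compatible(enroll_strength, match_strength):
    """Keep an anchor-point pair only if the match point's Harris cornerness
    strength is within [half, double] of the enroll point's strength."""
    return 0.5 * enroll_strength <= match_strength <= 2.0 * enroll_strength
```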
Second, integral images may be used for fast computation of the number of good blocks in a ridge flow map subregion. Since this counting routine is called most often in the indexing process, it can account for a substantial amount of indexing time, although it looks benign. If GB is the good block image, where GB(x,y)=1 for a good block at location (x,y), and 0 for a bad block, the integral good block image, IGB is defined as
IGB(x,y)=Σ(i≤x)Σ(j≤y)GB(i,j)
A fast recursive solution to computing the integral good block images may be used, as shown in the following equations.
S(x,y)=S(x,y−1)+GB(x,y),
IGB(x,y)=IGB(x−1,y)+S(x,y).
As understood with additional reference to
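The recursion above, together with the standard O(1) rectangle lookup that the referenced figure presumably illustrates, may be sketched as:

```python
def integral_good_blocks(gb):
    """IGB(x,y) = sum of GB(i,j) over i<=x, j<=y, via the recursion
    S(x,y) = S(x,y-1) + GB(x,y); IGB(x,y) = IGB(x-1,y) + S(x,y)."""
    n = len(gb)
    igb = [[0] * n for _ in range(n)]
    for x in range(n):
        s = 0                              # running column sum S(x, y)
        for y in range(n):
            s += gb[x][y]
            igb[x][y] = (igb[x - 1][y] if x > 0 else 0) + s
    return igb

def count_good(igb, x1, y1, x2, y2):
    """Good blocks in the rectangle [x1..x2] x [y1..y2], four IGB lookups."""
    total = igb[x2][y2]
    if x1 > 0:
        total -= igb[x1 - 1][y2]
    if y1 > 0:
        total -= igb[x2][y1 - 1]
    if x1 > 0 and y1 > 0:
        total += igb[x1 - 1][y1 - 1]
    return total
```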
An efficient computation of the intersection of co-occurrence matrices is further described as follows. The three dimensional co-occurrence matrices are usually very sparse. Typically, they have an occupancy of 10%. Thus, while computing the distance based on these matrices, it may make little sense to execute the three level deep loop (see the algorithm). Instead, for the two matrices to be compared, one can maintain a list of the entry locations that have been filled. For ease, let us assume the list is L1 for the matrix A1, and so on. While comparing A1 and A2, one visits every element in L1. The co-occurrence matrix location stored in this element is read out. The minimum value of the entries at this location in A1 and A2 is computed and added to the value of the variable pop (again, refer to the algorithm). Thus, instead of visiting 32*32*4 (≈4000) locations in the co-occurrence matrix, one typically visits only about 400 locations.
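The sparse intersection may be sketched in a few lines; the representation of the list L1 as (i, j, k) tuples is an assumption.

```python
def sparse_pop_intersection(filled_locations, A1, A2):
    """pop(A1 ∩ A2) visiting only the entry locations filled in A1.

    Locations empty in A1 contribute min(0, .) = 0 and can be skipped,
    so roughly 400 locations are visited instead of ~4000.
    """
    total = 0
    for (i, j, k) in filled_locations:   # the list L1 maintained for A1
        total += min(A1[i][j][k], A2[i][j][k])
    return total
```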
Avoiding redundant co-occurrence matrix computation by smart browsing is now further described. As the match ridge flow data is compared with a set of enroll image ridge flow data, it will become apparent to those skilled in the art that certain redundant computations are being made on the match ridge flow data. This is usually true for co-occurrence matrix and histogram computation in ridge flow subregions. Assume that one is interested in verifying the hypothesis that the anchor point located at (xe, ye) in the enroll image (say EIm1) “aligns” with the match image anchor point (xm, ym). The parameters Δx=xm−xe and Δy=ym−ye uniquely define the subregion in the match image that would be used for the extraction of the co-occurrence matrix and histogram. For yet another enroll image (EIm2), if anchor point alignment leads to the same Δx and Δy values, then computing the same set of data yet again would be a waste of time. Thus, the concept of smart browsing is introduced, wherein one can browse through a list indexed by (Δx, Δy). Each location of the list stores all the enroll images that are to be compared with the match image. The list is created on the first pass (over the template library). In the second pass, one can visit every location of the list, and for the corresponding (Δx, Δy) value, one computes the histogram and the co-occurrence matrix of the match ridge flow subregion. Next, all the enroll ridge flow subregions are compared with the match ridge flow subregion, without having to compute the relevant data for the match image over and over again. This may save a significant amount of computation. However, the assumption in this case is that the enroll image templates are all available in memory together.
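The two-pass smart browsing scheme may be sketched as follows; the callables `match_features` and `compare` are stand-ins for the subregion histogram/co-occurrence extraction and the distance computation described above.

```python
from collections import defaultdict

def smart_browse(hypotheses, match_features, compare):
    """Two-pass smart browsing over a list indexed by (dx, dy).

    hypotheses: iterable of (enroll_id, dx, dy) anchor-alignment hypotheses.
    match_features(dx, dy): data of the match subregion for that offset.
    compare(enroll_id, features): indexing distance for one hypothesis.
    """
    # First pass: group enroll images by their (dx, dy) subregion offset.
    by_offset = defaultdict(list)
    for enroll_id, dx, dy in hypotheses:
        by_offset[(dx, dy)].append(enroll_id)
    # Second pass: compute the match-side data once per distinct offset.
    results = {}
    for (dx, dy), enroll_ids in by_offset.items():
        feats = match_features(dx, dy)   # computed once, reused below
        for enroll_id in enroll_ids:
            results[enroll_id] = compare(enroll_id, feats)
    return results
```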
The discussion now turns to template information related to indexing. For indexing, one can store the anchor point information in the enroll image template. There are at most four anchor points, each of which requires 2 bytes to store the (x,y) location, and 2 bytes to store the Harris Cornerness Strength parameter. In total, the anchor point information requires 16 bytes per enroll image node. For faster computations in low-end processors, it may be important to store the integral good block images as well. For 96×96 images, the ridge flow information is stored in a 24×24 array. The integral good block array therefore needs to be of size 24×24, and the maximum value (which is at location (24,24)) of the array can be 576. Thus, it uses 2 bytes per array location, and hence a total of 1152 bytes per enroll node. This can be reduced significantly by reducing the resolution to 12×12 for the integral good block image. For this choice, the total requirement for the array is 144 bytes. Thus, for every enroll image node, the template size requirement for indexing is 160 bytes. Since a template typically contains 5 nodes, the additional template size requirement for indexing is 800 bytes per finger.
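The storage accounting above can be checked with a few lines (the 1-byte-per-entry assumption for the 12×12 array follows from its maximum count of 144 fitting in one byte):

```python
# Per-node indexing storage, following the figures in the text.
anchors = 4
bytes_per_anchor = 2 + 2          # 2 bytes for (x, y), 2 for cornerness strength
anchor_bytes = anchors * bytes_per_anchor            # 16 bytes

full_igb_bytes = 24 * 24 * 2      # 24x24 array, 2 bytes each (max value 576)
reduced_igb_bytes = 12 * 12 * 1   # 12x12 array; counts fit in 1 byte

per_node = anchor_bytes + reduced_igb_bytes          # 160 bytes per node
per_finger = 5 * per_node                            # 5 nodes per template
```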
Interfacing with the matcher is now further described. For templates (or fingers) with multiple enroll nodes, which is usually the case with composite templates, the co-occurrence matrix based distance computation process is repeated for every node, and the minimum indexing distance value is recorded as the indexing distance between the match image and the template. The node id, the overlap information, and the rotation information can be passed to the matcher, which can use them to its advantage. The process is repeated for all templates in the database. The template list is ordered on the basis of this distance. The top 10% of the templates (i.e., those closest to the match image) are passed to the matching stage in the order generated by the indexing stage. Indeed, many modifications and other embodiments of the invention will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is understood that the invention is not to be limited to the specific embodiments disclosed, and that other modifications and embodiments are intended to be included within the scope of the appended claims.
This application is based upon provisional patent application 60/807,576, filed Jul. 17, 2006, the entire contents of which are incorporated herein by reference.