Claims
- 1. A method for fingerprint recognition, the method comprising the steps of:
acquiring an enrolled fingerprint having a plurality of ridge curves and valleys; determining an orientation field of the enrolled fingerprint; extracting the minutiae from the enrolled fingerprint; creating an enrolled fingerprint template of the enrolled fingerprint; and storing the enrolled fingerprint template in a database.
- 2. The method of claim 1 further including:
acquiring an unknown fingerprint; determining an orientation field of the unknown fingerprint; extracting the minutiae from the unknown fingerprint; creating an unknown fingerprint template; comparing the unknown fingerprint template to the enrolled fingerprint template; determining the number of the extracted minutiae in the unknown fingerprint template that match the extracted minutiae of the enrolled fingerprint template; and if the number of extracted minutiae that match exceeds a predetermined threshold, providing indicia that the unknown fingerprint and the enrolled fingerprint are a match, otherwise providing indicia that the unknown fingerprint and the enrolled fingerprint are not a match.
- 3. The method of claim 2 further including the steps of:
determining an enrolled block size such that the enrolled fingerprint ridge curves can be approximated by parallel straight lines; and blocking the enrolled fingerprint using the enrolled block size forming a blocked enrolled fingerprint.
- 4. The method of claim 3 wherein the step of determining the enrolled block size includes determining the enrolled block size according to the formula block_size=r*16/500, where r is the resolution of the enrolled fingerprint in dots-per-unit-length.
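A minimal sketch of the claim 4 computation (claim 6 applies the same formula to the unknown print): the block size scales a 16-pixel reference block by the sensor resolution relative to 500 dpi. Rounding to a whole pixel count is an assumption; the claim gives only the ratio.

```python
def block_size(resolution_dpi: float) -> int:
    # Claim 4: block_size = r * 16 / 500, i.e. 16 pixels at a
    # 500 dpi reference resolution. Integer rounding is assumed.
    return max(1, round(resolution_dpi * 16 / 500))

assert block_size(500) == 16   # reference resolution
assert block_size(1000) == 32  # higher resolution, larger blocks
```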
- 5. The method of claim 2 further including the steps of:
determining an unknown block size such that the unknown fingerprint ridge curves can be approximated by parallel straight lines; and blocking the unknown fingerprint using the unknown block size forming a blocked unknown fingerprint.
- 6. The method of claim 5 wherein the step of determining the unknown block size includes determining the unknown block size according to the formula block_size=r*16/500, where r is the resolution of the unknown fingerprint in dots-per-unit-length.
- 7. The method of claim 2 further including the step of separating enrolled foreground blocks from enrolled background blocks of the blocked enrolled fingerprint thereby forming an enhanced enrolled image.
- 8. The method of claim 7 wherein the step of separating the enrolled foreground blocks from the enrolled background blocks of the blocked enrolled fingerprint includes the steps of:
calculating for each enrolled block in the blocked enrolled fingerprint the mean and variance of the pixel gray level within the block; selecting as an enrolled foreground block each block having a variance that is less than a predetermined variance threshold and a mean that is greater than a predetermined mean threshold; determining an enrolled convex hull defined by the centers of each enrolled block selected to be in the foreground; testing each enrolled block not selected as an enrolled foreground block whether the center of the enrolled block is within the defined enrolled convex hull; and in the event that the center of the enrolled block being tested is within the enrolled convex hull, selecting the enrolled block being tested as an enrolled foreground block.
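A sketch of the segmentation in claims 8 and 10, assuming square non-overlapping blocks and using scipy's Delaunay triangulation as the point-in-convex-hull test; the variance and mean thresholds are the claims' predetermined values and are left as parameters.

```python
import numpy as np
from scipy.spatial import Delaunay

def segment_foreground(image, block, var_thresh, mean_thresh):
    """Foreground/background separation sketch after claims 8/10."""
    h, w = image.shape
    rows, cols = h // block, w // block
    fg = np.zeros((rows, cols), dtype=bool)
    centers = np.empty((rows, cols, 2))
    for i in range(rows):
        for j in range(cols):
            blk = image[i*block:(i+1)*block, j*block:(j+1)*block]
            centers[i, j] = ((i + 0.5) * block, (j + 0.5) * block)
            # claims 8/10: low variance and high mean select a seed block
            if blk.var() < var_thresh and blk.mean() > mean_thresh:
                fg[i, j] = True
    seeds = centers[fg]
    if len(seeds) >= 3:                      # a hull needs 3+ points
        hull = Delaunay(seeds)               # point-in-hull test via triangulation
        inside = hull.find_simplex(centers.reshape(-1, 2)) >= 0
        fg |= inside.reshape(rows, cols)     # add blocks whose centers fall inside
    return fg
```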
- 9. The method of claim 7 further including the step of separating the unknown foreground blocks from the unknown background blocks of the blocked unknown fingerprint forming an enhanced unknown image.
- 10. The method of claim 9 wherein the step of separating the unknown foreground blocks from the unknown background blocks of the blocked unknown fingerprint includes the steps of:
calculating for each unknown block the mean and variance of the pixel gray level within the block; selecting as an unknown foreground block each block having a variance that is less than a predetermined variance threshold and a mean that is greater than a predetermined mean threshold; determining an unknown convex hull defined by the centers of each block selected to be in the foreground; testing each unknown block not selected as an unknown foreground block whether the center of the unknown block is within the defined unknown convex hull; and in the event that the center of the unknown block being tested is within the unknown convex hull, selecting the unknown block being tested as an unknown foreground block.
- 11. The method of claim 7 further including filtering each of the enrolled foreground blocks.
- 12. The method of claim 11 wherein the step of filtering each of the enrolled foreground blocks includes filtering each of the enrolled foreground blocks with a low pass filter.
- 13. The method of claim 12 wherein the step of filtering each of the foreground blocks with a low pass filter includes filtering using a low pass Gaussian filter.
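Claims 12 and 13 (mirrored by claims 14 to 16 for the unknown print) require only that each foreground block pass through a low pass Gaussian filter; a one-call sketch, where the kernel width sigma is an assumed parameter the claims leave open:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(16, 16)).astype(float)  # stand-in foreground block
smoothed = gaussian_filter(block, sigma=1.0)               # sigma=1.0 is an assumption
```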
- 14. The method of claim 9 further including filtering each of the unknown foreground blocks.
- 15. The method of claim 14 wherein the step of filtering each of the unknown foreground blocks includes filtering each of the unknown foreground blocks with a low pass filter.
- 16. The method of claim 15 wherein the step of filtering each of the foreground blocks with a low pass filter includes filtering using a low pass Gaussian filter.
- 17. The method of claim 7 further including the step of determining for each of the enrolled foreground blocks in the enhanced enrolled image the corresponding orientation angle and amplitude forming an enrolled orientation image.
- 18. The method of claim 17 wherein the step of determining the orientation angle and amplitude for each of the enrolled foreground blocks in the enhanced enrolled image includes finding the horizontal partial derivative and the vertical partial derivative for each of the enrolled foreground blocks.
- 19. The method of claim 18 wherein the step of finding the horizontal partial derivative and the vertical partial derivative for each selected foreground block includes using a Sobel differential operator.
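Claim 19 fixes only the Sobel operator for the partial derivatives; the squared-gradient averaging used below to turn those derivatives into a per-block angle and magnitude is the standard estimator and is an assumption here, since the claims do not spell out the combination step.

```python
import numpy as np
from scipy.ndimage import sobel

def block_orientation(block):
    """Orientation angle and magnitude for one foreground block (claims 17-19)."""
    b = block.astype(float)
    gx = sobel(b, axis=1)   # horizontal partial derivative (claim 19: Sobel)
    gy = sobel(b, axis=0)   # vertical partial derivative
    gxx, gyy, gxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    angle = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)  # averaged gradient direction
    magnitude = np.hypot(gxx - gyy, 2.0 * gxy)      # orientation strength
    return angle, magnitude
```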
- 20. The method of claim 18, in the event that the orientation magnitude is less than a predetermined magnitude constant, further includes the steps of:
selecting a plurality of directions equally spaced about a unit circle; calculating the average gray level and the standard deviation of the gray level curve projected along each selected direction of the selected foreground block; and selecting the orientation angle to be the one of the selected directions having the smallest standard deviation of the gray level curve.
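A sketch of the claim 20 fallback (claim 24 repeats it for the unknown print). Rotating the block so each candidate direction lies along the rows and scoring the per-row gray-level spread is one way to realize the claimed projection; both that choice and the 16 candidate directions are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def projection_orientation(block, n_directions=16):
    """Fallback orientation estimate for low-magnitude blocks (claims 20/24).
    Gray levels vary least along the ridge direction, so the candidate
    direction with the smallest per-row standard deviation wins."""
    angles = np.linspace(0.0, 180.0, n_directions, endpoint=False)
    scores = []
    for a in angles:
        r = rotate(block.astype(float), angle=a, reshape=False, order=1)
        scores.append(r.std(axis=1).mean())   # spread along each projected line
    return np.deg2rad(angles[int(np.argmin(scores))])
```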
- 21. The method of claim 9 further including the step of determining for each of the unknown foreground blocks in the enhanced unknown image the corresponding orientation angle and amplitude forming an unknown orientation image.
- 22. The method of claim 21 wherein the step of determining the orientation angle and amplitude for each of the unknown foreground blocks in the enhanced unknown image includes finding the horizontal partial derivative and the vertical partial derivative for each of the unknown foreground blocks.
- 23. The method of claim 22 wherein the step of finding the horizontal partial derivative and the vertical partial derivative for each selected foreground block includes using a Sobel differential operator.
- 24. The method of claim 22, in the event that the orientation magnitude is less than a predetermined magnitude constant, further includes the steps of:
selecting a plurality of directions equally spaced about a unit circle; calculating the average gray level and standard deviation gray level curve projected along each selected direction of the selected foreground block; and selecting the orientation angle to be the one of the selected directions having the smallest standard deviation gray level curve.
- 25. The method of claim 17 further including the steps of:
creating a directional filter for filtering one of the enrolled foreground blocks in the enrolled orientation image as a function of the orientation angle and magnitude of the respective enrolled block, wherein the directional filter increases the contrast between ridges and valleys in the enrolled orientation image along the same orientation direction as the respective enrolled foreground block; and applying the directional filter to the respective enrolled foreground blocks to be filtered forming an enrolled ridge-enhanced image.
- 26. The method of claim 25 wherein the step of creating the directional filter includes creating a filter mask having predetermined coefficients that are a function of the corresponding foreground block to be filtered.
- 27. The method of claim 26 wherein the step of creating the filter mask includes the steps of:
creating a square filter mask having a length equal to the period of the signal or the period of the signal plus one, whichever is an odd number; and determining the coefficients of the filter mask.
- 28. The method of claim 27 wherein the step of determining the coefficients of the filter mask includes the steps of:
setting the center coefficient of the center row to a value a0; setting the first and last coefficients of the center row to a value of a0/4; calculating the coefficients of the center row between the center coefficient and the first and last coefficients according to a cosine function and the difference between a0 and a0/4; determining a number of middle rows on each side of the center row needed to adequately enhance the contrast between ridges and valleys in the fingerprint image, wherein the number of middle rows is an even number; determining the coefficients of the middle rows according to a cosine taper function between the center row coefficient, ci, and ci/1.41; and determining the values, bi, of the top and bottom rows of the filter mask as bi=(-∑j=1,n ai,j)*(1/2), where bi is the ith coefficient of the first and last rows of the mask, ai,j is the value of the ith coefficient of the jth middle row, where there are n middle rows, and n is an odd number.
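The mask construction in claims 27 and 28 (mirrored in claims 31 and 32) leaves several details open. Below is a sketch under stated assumptions: a raised-cosine taper for the center row, each middle row derived from its inner neighbor by the 1/1.41 scaling (a simplification of the claimed per-row cosine taper), and the two boundary rows each holding the correction bi=(-∑ ai,j)*(1/2).

```python
import numpy as np

def directional_mask(period: int, a0: float = 8.0) -> np.ndarray:
    """Sketch of the claim 27/28 filter mask under stated assumptions."""
    size = period if period % 2 == 1 else period + 1   # odd side length (claim 27)
    half = size // 2
    x = np.arange(-half, half + 1)
    # raised-cosine taper: a0 at the center down to a0/4 at the ends
    center = a0 / 4 + (a0 - a0 / 4) * 0.5 * (1 + np.cos(np.pi * x / half))
    rows = [center]
    for _ in range((size - 3) // 2):       # even middle-row count, split per side
        rows = [rows[0] / 1.41] + rows + [rows[-1] / 1.41]
    interior = np.array(rows)
    b = -interior.sum(axis=0) / 2.0        # claim 28: bi = (-sum ai,j) * (1/2)
    return np.vstack([b, interior, b])     # square, size x size
```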
- 29. The method of claim 21 further including the steps of:
creating a directional filter for filtering one of the unknown foreground blocks in the unknown orientation image as a function of the orientation angle and amplitude of the respective unknown block, wherein the directional filter increases the contrast between ridges and valleys in the unknown orientation image along the same orientation direction as the respective unknown foreground block; and applying the directional filter to the respective unknown foreground blocks to be filtered forming an unknown ridge-enhanced image.
- 30. The method of claim 29 wherein the step of creating the directional filter includes creating a filter mask having predetermined coefficients that are a function of the corresponding foreground block to be filtered.
- 31. The method of claim 30 wherein the step of creating the filter mask includes the steps of:
creating a square filter mask having a length equal to the period of the signal or the period of the signal plus one, whichever is an odd number; and determining the coefficients of the filter mask.
- 32. The method of claim 31 wherein the step of determining the coefficients of the filter mask includes the steps of:
setting the center coefficient of the center row to a value a0; setting the first and last coefficients of the center row to a value of a0/4; calculating the coefficients of the center row between the center coefficient and the first and last coefficients according to a cosine function and the difference between a0 and a0/4; determining a number of middle rows on each side of the center row needed to adequately enhance the contrast between ridges and valleys in the fingerprint image, wherein the number of middle rows is an even number; determining the coefficients of the middle rows according to a cosine taper function between the center row coefficient, ci, and ci/1.41; and determining the values, bi, of the top and bottom rows of the filter mask as bi=(-∑j=1,n ai,j)*(1/2), where bi is the ith coefficient of the first and last rows of the mask, ai,j is the value of the ith coefficient of the jth middle row, where there are n middle rows, and n is an odd number.
- 33. The method of claim 25 further including the steps of:
determining a binarization threshold; applying the binarization threshold to each pixel in the enrolled ridge-enhanced image forming an enrolled binary image, wherein if a pixel value in the enrolled ridge-enhanced image is less than the binarization threshold, the pixel value is set to zero, and if a pixel value in the enrolled ridge-enhanced image is greater than or equal to the binarization threshold, the pixel value is set to one.
- 34. The method of claim 33 wherein the step of determining the binarization threshold includes setting the binarization threshold to one-half the maximum intensity value of the respective pixel.
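A direct reading of claims 33 and 34 (claims 35 and 36 mirror them); taking the maximum over the whole ridge-enhanced image is an assumption, since the claims' "respective pixel" wording leaves the scope of the maximum open.

```python
import numpy as np

def binarize(enhanced):
    """Binarization per claims 33-34: pixels at or above half the
    maximum intensity become one, the rest zero."""
    threshold = enhanced.max() / 2.0        # claim 34: half the maximum intensity
    return (enhanced >= threshold).astype(np.uint8)
```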
- 35. The method of claim 29 further including the steps of:
determining a binarization threshold; applying the binarization threshold to each pixel in the unknown ridge-enhanced image forming an unknown binary image, wherein if a pixel value in the unknown ridge-enhanced image is less than the binarization threshold, the pixel value is set to zero, and if a pixel value in the unknown ridge-enhanced image is greater than or equal to the binarization threshold, the pixel value is set to one.
- 36. The method of claim 35 wherein the step of determining the binarization threshold includes setting the binarization threshold to one-half the maximum intensity value of the respective pixel.
- 37. The method of claim 33 further including the step of reducing the width of a ridge curve contained within the enrolled binary image to a single pixel width forming a thinned enrolled binary image.
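Claims 37 and 38 call for single-pixel-wide ridges but name no thinning algorithm; scikit-image's skeletonize stands in here as one common choice.

```python
import numpy as np
from skimage.morphology import skeletonize

binary = np.zeros((32, 32), dtype=bool)
binary[10:14, 4:28] = True            # stand-in 4-pixel-wide "ridge"
thinned = skeletonize(binary)         # reduced to a 1-pixel-wide curve
```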
- 38. The method of claim 35 further including the step of reducing the width of a ridge curve contained within the unknown binary image to a single pixel width forming a thinned unknown binary image.
- 39. The method of claim 37 further including the step of approximating each ridge curve in the thinned enrolled binary image by a piecewise linear approximation forming a piecewise linear reduced enrolled binary image.
- 40. The method of claim 39 wherein the step of approximating each ridge curve in the thinned enrolled binary image by a piecewise linear approximation includes:
finding the starting and ending points of a ridge curve in the thinned enrolled binary image; forming a line segment between the starting and ending points of the respective ridge curve; measuring the maximum distance between the line segment and the respective ridge curve; and if the maximum distance between the line segment and the respective ridge curve is greater than a predetermined error threshold, form a first line segment between the starting point of the respective ridge curve and the point of the respective ridge curve having the maximum distance from the line segment, and form a second line segment between the point of the respective ridge curve having the maximum distance from the line segment and the ending point of the respective ridge curve.
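The split step in claim 40 (and claim 42) is the core of the Ramer-Douglas-Peucker algorithm; applied recursively it yields the claimed piecewise linear approximation. A sketch, with the predetermined error threshold as the epsilon parameter:

```python
import numpy as np

def rdp(points, epsilon):
    """Piecewise linear approximation of a ridge curve (claims 39-42).
    points is an (n, 2) array of ridge pixels ordered from the
    starting point to the ending point."""
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    if norm == 0:                          # degenerate: closed curve
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:                                  # perpendicular distance to the chord
        dists = np.abs(dx * (points[:, 1] - start[1])
                       - dy * (points[:, 0] - start[0])) / norm
    k = int(np.argmax(dists))
    if dists[k] <= epsilon:                # within the error threshold: keep segment
        return np.array([start, end])
    # split at the farthest point and recurse on both halves
    return np.vstack([rdp(points[: k + 1], epsilon)[:-1], rdp(points[k:], epsilon)])
```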
- 41. The method of claim 38 further including the step of approximating each ridge curve in the thinned unknown binary image by a piecewise linear approximation forming a piecewise linear reduced unknown binary image.
- 42. The method of claim 41 wherein the step of approximating each ridge curve in the thinned unknown binary image by a piecewise linear approximation includes:
finding the starting and ending points of a ridge curve in the thinned unknown binary image; forming a line segment between the starting and ending points of the respective ridge curve; measuring the maximum distance between the line segment and the respective ridge curve; and if the maximum distance between the line segment and the respective ridge curve is greater than a predetermined error threshold, form a first line segment between the starting point of the respective ridge curve and the point of the respective ridge curve having the maximum distance from the line segment, and form a second line segment between the point of the respective ridge curve having the maximum distance from the line segment and the ending point of the respective ridge curve.
- 43. The method of claim 39 further including the step of extracting the minutiae from the piecewise linear reduced enrolled binary image to provide enrolled minutiae.
- 44. The method of claim 43 wherein the step of extracting the minutiae includes:
calculating a connection number corresponding to each ridge pixel contained within the piecewise linear reduced enrolled binary image; and determining the type of pixel as a function of the corresponding connection number.
- 45. The method of claim 44 wherein the step of calculating the connection number includes calculating the connection number according to
- 46. The method of claim 44 wherein the step of determining the type of pixel includes the steps of:
if the connection number equals 0 the pixel is an isolated point; if the connection number equals 1 the pixel is an end point; if the connection number equals 2 the pixel is a continuing point; if the connection number equals 3 the pixel is a branching point; and if the connection number equals 4 the pixel is a crossing point.
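The connection-number formula referenced in claims 45 and 48 did not survive extraction; the Rutovitz crossing number over a pixel's 8-neighborhood is the standard definition that yields exactly the 0 to 4 classification of claims 46 and 49, and is assumed here.

```python
# Offsets of the 8 neighbors in order around the pixel, closing the loop.
NEIGHBORS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
             (1, 0), (1, -1), (0, -1), (-1, -1)]

TYPES = {0: "isolated point", 1: "end point", 2: "continuing point",
         3: "branching point", 4: "crossing point"}

def connection_number(skel, r, c):
    """Rutovitz crossing number CN = (1/2) * sum |P(i+1) - P(i)| over
    the 8 neighbors of skeleton pixel (r, c); assumed stand-in for the
    claims' elided formula."""
    p = [int(skel[r + dr, c + dc]) for dr, dc in NEIGHBORS]
    return sum(abs(p[(i + 1) % 8] - p[i]) for i in range(8)) // 2

# e.g. TYPES[connection_number(thinned, r, c)] labels ridge pixel (r, c).
```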
- 47. The method of claim 41 further including the step of extracting the minutiae from the piecewise linear reduced unknown binary image to provide unknown minutiae.
- 48. The method of claim 47 wherein the step of extracting the minutiae includes:
calculating a connection number corresponding to each ridge pixel contained within the piecewise linear reduced unknown binary image; and determining the type of pixel as a function of the corresponding connection number.
- 48. The method of claim 47 wherein the step of calculating the connection number includes calculating the connection number according to
- 49. The method of claim 48 wherein the step of determining the type of pixel includes the steps of:
if the connection number equals 0 the pixel is an isolated point; if the connection number equals 1 the pixel is an end point; if the connection number equals 2 the pixel is a continuing point; if the connection number equals 3 the pixel is a branching point; and if the connection number equals 4 the pixel is a crossing point.
- 50. The method of claim 43 further including the step of removing false minutiae from the enrolled minutiae to form reduced enrolled minutiae.
- 51. The method of claim 47 further including the step of removing false minutiae from the unknown minutiae to form reduced unknown minutiae.
- 52. The method of claim 50 further including the step of creating an enrolled minutiae template using the reduced enrolled minutiae.
- 53. The method of claim 52 wherein the step of creating an enrolled minutiae template includes creating a connected graph of the reduced enrolled minutiae.
- 54. The method of claim 53 wherein the step of creating the connected graph includes the steps of:
for each of the reduced enrolled minutiae, forming an enrolled segment between the respective reduced enrolled minutiae and each of the other reduced enrolled minutiae that is within a predetermined distance.
- 55. The method of claim 54 further including the steps of:
determining the intersection point between each enrolled segment and each ridge curve intersected by the respective enrolled segment; and determining the intersection angle between each enrolled segment and the tangential direction of the intersected ridge curve.
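A sketch of the graph construction in claims 53 to 55 (claims 57 to 59 are the unknown-side mirror); the predetermined distance is left as a parameter, and the intersection points and angles of claim 55 are omitted for brevity.

```python
import numpy as np
from itertools import combinations

def minutiae_graph(minutiae, max_dist):
    """Connected graph from claims 53-54/57-58: a segment joins every
    pair of minutiae closer than max_dist. minutiae is an (n, 2)
    array of coordinates; max_dist is the claims' predetermined
    distance, left tunable."""
    pts = np.asarray(minutiae, dtype=float)
    return [(i, j) for i, j in combinations(range(len(pts)), 2)
            if np.hypot(*(pts[i] - pts[j])) < max_dist]
```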
- 56. The method of claim 51 further including the step of creating an unknown minutiae template using the reduced unknown minutiae.
- 57. The method of claim 56 wherein the step of creating an unknown minutiae template includes creating a connected graph of the reduced unknown minutiae.
- 58. The method of claim 57 wherein the step of creating the connected graph includes the steps of:
for each of the reduced unknown minutiae, forming an unknown segment between the respective reduced unknown minutiae and each of the other reduced unknown minutiae that is within a predetermined distance.
- 59. The method of claim 58 further including the steps of:
determining the intersection point between each unknown segment and each ridge curve intersected by the respective unknown segment; and determining the intersection angle between each unknown segment and the tangential direction of the intersected ridge curve.
- 60. The method of claim 2 wherein the step of comparing the unknown fingerprint template to the enrolled fingerprint template includes the steps of:
a) finding a matching pair of nodes in the enrolled fingerprint template and the unknown fingerprint template; b) determining a template transformation to translate and rotate the unknown fingerprint template to align the unknown and enrolled fingerprint templates; c) using the template transformation, transforming an unknown minutiae in the neighborhood of the matching node in the unknown template to the enrolled fingerprint template; d) computing the difference between the transformed unknown minutiae and an enrolled minutiae; e) if the difference between the transformed unknown minutiae and an enrolled minutiae is less than a predetermined threshold, count the transformed unknown minutiae and the enrolled minutiae as matched; and f) in the event that there is more than one unknown minutiae in the neighborhood of the matched node in the unknown minutiae template and there is more than one enrolled minutiae in the neighborhood of the matched node in the enrolled minutiae template, repeat the step of computing the difference and comparing the difference to the threshold for each of the unknown minutiae.
- 61. The method of claim 60, further including the steps of:
in the event that more than one matching node pair is found, repeating steps a-f for each matching node pair; and selecting the matching node pair having the greatest number of matched unknown and enrolled minutiae.
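A sketch of the claim 60/61 comparison loop, assuming minutiae are (x, y, angle) triples and the candidate node pairs of step a are given; the angle tolerance is an assumed parameter alongside the claimed positional threshold.

```python
import numpy as np

def match_score(enrolled, unknown, node_pairs, tol, ang_tol=0.3):
    """Steps b-f of claim 60 plus the claim 61 loop. enrolled and
    unknown are (n, 3) arrays of (x, y, angle) minutiae; node_pairs
    lists candidate matched nodes as (enrolled_row, unknown_row)
    index pairs; ang_tol (radians) is an assumed direction tolerance."""
    best = 0
    for ei, ui in node_pairs:
        e, u = enrolled[ei], unknown[ui]
        theta = e[2] - u[2]                       # step b: align node directions
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        xy = (unknown[:, :2] - u[:2]) @ rot.T + e[:2]   # step c: rotate + translate
        ang = unknown[:, 2] + theta
        matched = 0
        for p, a in zip(xy, ang):                 # steps d-f: compare minutiae
            d = np.hypot(*(enrolled[:, :2] - p).T)
            da = np.abs(enrolled[:, 2] - a)
            if np.any((d < tol) & (da < ang_tol)):
                matched += 1
        best = max(best, matched)                 # claim 61: keep the best pair
    return best
```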
- 62. The method of claim 35 further including the step of detecting a living finger.
- 63. The method of claim 62 wherein the step of detecting a living finger includes detecting the characteristic of a sweat pore contained within the binary image.
- 64. The method of claim 63 wherein the step of detecting the characteristic of a sweat pore includes:
forming a chain code of the boundaries in the binary image; finding all clockwise closed chains; measuring the closed chains; and if the size of a closed chain exceeds a predetermined sweat pore threshold the closed chain is identified as a sweat pore in a living finger.
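A sketch of the claim 64 pore check, with stated substitutions: scikit-image's contour tracing stands in for the chain code, contour length stands in for the chain size, and the clockwise-orientation test is omitted.

```python
import numpy as np
from skimage.measure import find_contours

def detect_pores(binary, pore_threshold):
    """Liveness check sketch after claims 62-64."""
    pores = []
    for contour in find_contours(binary.astype(float), 0.5):
        closed = np.allclose(contour[0], contour[-1])   # closed chain?
        if closed and len(contour) > pore_threshold:    # claim 64 size test
            pores.append(contour)
    return pores      # a non-empty list suggests a living finger
```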
- 65. The method of claim 2 further including, in the event that the unknown fingerprint and the enrolled fingerprint are a match, providing access to a secured entity.
- 66. The method of claim 65 wherein the secured entity is a computer.
- 67. The method of claim 65 wherein the secured entity is a computer network.
- 68. The method of claim 65 wherein the secured entity is data contained in a smartcard.
- 69. The method of claim 65 wherein the secured entity is a cryptographic key.
- 70. The method of claim 17 further including the steps of:
dividing each of the selected foreground blocks into a plurality of sub-blocks; creating a core mask; convolving each of the sub-blocks of the selected foreground blocks with the core mask; normalizing the results of the convolution of each of the sub-blocks of the selected foreground blocks with the core mask; estimating the curvature in each sub-block as proportional to the convolution of the respective sub-block; determining Poincare indices of sub-blocks having a curvature that is greater than a predetermined curvature threshold; grouping the sub-blocks having a curvature that is greater than the predetermined curvature threshold according to the corresponding Poincare index; identifying the sub-blocks having a curvature that is greater than the predetermined curvature threshold as cores and deltas according to the corresponding Poincare index; if the estimate of the curvature of a sub-block exceeds the predetermined curvature threshold, surrounding the respective sub-block with a closed curve and calculating the direction integration of the closed curve; and if the calculated direction integration is substantially zero, reducing the diameter of the closed curve, recalculating the direction integration, and continuing to reduce the diameter of the closed curve until the value of the direction integration is non-zero.
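Claim 70 classifies high-curvature sub-blocks by their Poincare indices; a sketch of the index computation on the block-orientation field, where the +1/2 core and -1/2 delta reading is the standard convention and is assumed, since the claim does not state the decision rule.

```python
import numpy as np

def poincare_index(theta, i, j):
    """Poincare index of the orientation field at block (i, j).
    theta is the per-block orientation angle array; the index sums the
    wrapped angle differences around the 8-neighborhood. An index near
    +1/2 indicates a core, near -1/2 a delta (assumed convention)."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [theta[i + dr, j + dc] for dr, dc in ring]
    total = 0.0
    for a, b in zip(angles, angles[1:] + angles[:1]):
        d = b - a
        while d > np.pi / 2:   # orientations live in [0, pi), so wrap by pi
            d -= np.pi
        while d < -np.pi / 2:
            d += np.pi
        total += d
    return total / (2.0 * np.pi)
```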
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Serial No. 60/293,487 filed May 25, 2001 and U.S. Provisional Patent Application Serial No. 60/338,949 filed Oct. 22, 2001.
Provisional Applications (2)

| Number | Date | Country |
| --- | --- | --- |
| 60293487 | May 2001 | US |
| 60338949 | Oct 2001 | US |