The present invention relates to an image matching device, an image matching method, and a program, and in particular, relates to an image matching device, an image matching method, and a program for executing matching of such biological pattern images as a fingerprint image and a palmprint image.
Fingerprints, which are formed of striped ridges, are permanent throughout life and unique to each individual, and have long been used for criminal investigations. In particular, matching which uses latent fingerprints left at crime scenes is an effective investigation means. In recent years, many police agencies have introduced fingerprint matching systems using computers. For example, Patent Literature 1 (Japanese Patent Application Publication JP2010-225102A) discloses a striped pattern image examination device for supporting the judgment of identicalness and difference between fingerprint images.
In conventional fingerprint matching, feature-point matching which uses ridge endings or ridge bifurcations of fingerprint ridges is widely used, as mentioned in “4.3 Minutiae-Based method” of Non Patent Literature 1 (Handbook of Fingerprint Recognition, Springer, 2003). A ridge ending or a ridge bifurcation of fingerprint ridges is called a feature point of a fingerprint, or a minutia.
In the case of matching between high-quality fingerprint images such as exemplar fingerprint images, high matching accuracy can be guaranteed, since an adequate number of feature points can be extracted from both fingerprint images even with conventional technologies.
When one of the fingerprint images to be matched is a latent fingerprint image with poor image quality, however, the area of the latent fingerprint image from which feature points can be extracted is small, so that an adequate number of feature points cannot be extracted from the latent fingerprint image. As a result, it is difficult to achieve high matching accuracy with conventional technologies.
To solve the above problem, various techniques have been proposed.
For example, a manual input method is known. In this method, a latent fingerprint examiner manually inputs the feature points of a low-quality latent fingerprint image instead of relying on automatic extraction of feature points. However, a heavy burden is placed on the latent fingerprint examiner, since manually inputting feature points requires many man-hours.
In recent years, some methods of automatically or semi-automatically removing noise from a latent fingerprint image have been proposed.
Methods of automatically removing noise are proposed in Patent Literatures 2 and 3, for example. However, the effect is limited, since not all latent fingerprint images have noise that can be removed by automatic processing.
In Patent Literatures 4 and 5, methods of semi-automatically removing noise are proposed. With the semi-automatic removal methods, however, the manpower of latent fingerprint examiners is still required; although the man-hours are reduced to some extent, the disadvantage is not yet fully solved.
As explained above, it is difficult to extract an adequate number of feature points from a low-quality latent fingerprint image, so that high matching accuracy cannot be expected when a latent fingerprint image is used. Therefore, an object of the present invention is to provide an image matching device, an image matching method, a program, and a storage medium for achieving high matching accuracy in fingerprint matching or palmprint matching using a low-quality image.
According to a first aspect of the present invention, an image matching device includes: a data storage section; a feature point matching section; and a non-linear image converting section. The data storage section stores: first feature point data which indicates a first set of feature points on a first ridge pattern included in a first image of a first biological pattern; and second feature point data which indicates a second set of feature points on a second ridge pattern included in a second image of a second biological pattern. The feature point matching section calculates a first matching score between the first biological pattern and the second biological pattern based on the first feature point data and the second feature point data, and generates a corresponding feature point list which indicates a corresponding feature point set, by extracting a set of feature points which correspond between the first set of feature points and the second set of feature points as the corresponding feature point set. The non-linear image converting section performs a non-linear first image conversion which makes the first image approximate to the second image, based on the corresponding feature point list. The feature point matching section calculates a second matching score between the first biological pattern and the second biological pattern based on the first image after the non-linear first image conversion and the second image.
According to a second aspect of the present invention, an image matching method includes: a step of calculating a first matching score between a first biological pattern and a second biological pattern based on: first feature point data which indicates a first set of feature points of a first ridge pattern included in a first image of the first biological pattern; and second feature point data which indicates a second set of feature points of a second ridge pattern included in a second image of the second biological pattern; a step of storing the first matching score; a step of extracting a set of feature points which correspond between the first set of feature points and the second set of feature points as a corresponding feature point set and generating a corresponding feature point list which indicates the corresponding feature point set; a step of performing a non-linear first image conversion which makes the first image approximate to the second image based on the corresponding feature point list; a step of calculating a second matching score between the first biological pattern and the second biological pattern based on the first image after the non-linear first image conversion and the second image; and a step of storing the second matching score.
According to a third aspect of the present invention, a storage medium records a computer program for making a computer execute the above image matching method.
According to the present invention, an image matching device, an image matching method, a program, and a storage medium are provided which make it possible to achieve high matching accuracy in fingerprint matching or palmprint matching using a low-quality image.
An image matching device, an image matching method, a program, and a recording medium according to some exemplary embodiments of the present invention will be described below with reference to the accompanying drawings.
The image matching device 10 reads a computer program recorded in a tangible recording medium 15 such as an optical disk and a magnetic disk, and executes the image matching method of the present exemplary embodiment in accordance with the computer program.
With reference to
A CPU (Central Processing Unit) of the image matching device 10 runs a computer program to control hardware of the image matching device 10, thereby achieving each element (each function) of the image matching device 10.
In the step S1, the fingerprint image inputting unit 11 inputs the image data of the fingerprint image 101 and the image data of the fingerprint image 102 to the data processing unit 12. The image data of the fingerprint image 101 and the image data of the fingerprint image 102 are gray-scale image data. In the following explanation, the image data of the fingerprint image 101 and the image data of the fingerprint image 102 may be referred to as the image data 101 and the image data 102, respectively. The data storing section 22 stores the image data 101 and 102.
In a criminal investigation, in many cases, the matching is performed between a latent fingerprint and an exemplar fingerprint. Therefore, in the present exemplary embodiment, a case where one of the two fingerprint images for matching is a latent fingerprint image and the other is an exemplar fingerprint image is explained. However, matching between latent fingerprints and matching between exemplar fingerprints are also possible. Such fingerprint images are digitized at a resolution of 500 dpi in accordance with ANSI/NIST-ITL-1-2000 Data Format for the Interchange of Fingerprint, Facial, & Tattoo (SMT) Information, standardized by the National Institute of Standards and Technology of the United States. The standardization document can be downloaded from the following URL (Uniform Resource Locator) as of September 2010.
According to the above standard, each pixel which forms a fingerprint image has one of 256 gray levels from 0 to 255. In the brightness reference according to the above standard, a greater gray level indicates greater brightness (brighter).
In the description below, however, a greater gray level indicates greater darkness (darker). Therefore, the gray level of a pixel making up a ridge portion, where the image is dense (dark), is close to the maximum value 255, and the gray level of a pixel showing the paper surface color or a valley portion, where the image is light, is close to the minimum value 0. Here, a valley means the belt-shaped portion between two adjacent ridges.
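The two brightness conventions differ only by an inversion of the gray scale. As a minimal sketch (assuming, for illustration, that the image is held as an 8-bit array), converting from the standard convention to the convention used in this description is a single subtraction:

```python
import numpy as np

# Sample pixels in the standard convention (0 = dark, 255 = bright).
img_standard = np.array([[0, 128, 255]], dtype=np.uint8)

# Convention used in this description: 255 = dark (ridge), 0 = light (valley/paper).
img_described = 255 - img_standard
```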
In the step S2, the feature extracting section 23 extracts feature points of the fingerprints (fingerprint ridges) from a latent fingerprint image in
A process of extracting feature points will be described using the case where the feature points are extracted from a latent fingerprint image. First, the feature extracting section 23 extracts the directional distribution of the ridge pattern from the latent fingerprint image and performs image processing for enhancing ridges on the latent fingerprint image, based on the directional distribution of the ridge pattern. The feature extracting section 23 binarizes the latent fingerprint image after the ridge enhancement to generate a binary image. Next, the feature extracting section 23 extracts a skeleton from the binary image and determines, based on the binary image, a ridge settlement region from which feature points can be extracted. Finally, the feature extracting section 23 extracts feature points from the skeleton in the ridge settlement region. Accordingly, the feature extracting section 23 can generate and output latent fingerprint ridge direction data which shows the directional distribution of the ridge pattern, ridge settlement region data which shows the ridge settlement region, and skeleton data which shows the skeleton, in addition to the latent fingerprint feature-point data which shows the feature points. The data storing section 22 stores the above data.
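The final step, extracting feature points from the skeleton, can be illustrated with the well-known crossing-number test on a thinned (0/1) skeleton: a skeleton pixel whose eight surrounding neighbours change between 0 and 1 exactly twice along the ring is a ridge ending, and one with six changes is a bifurcation. The sketch below is one possible illustration (the embodiment does not prescribe this particular algorithm, and the function names are hypothetical):

```python
import numpy as np

def crossing_number(skel, y, x):
    """Crossing number of a skeleton pixel: 1 = ridge ending, 3 = bifurcation."""
    # 8 neighbours listed in clockwise order around (y, x)
    ys = [y - 1, y - 1, y - 1, y,     y + 1, y + 1, y + 1, y]
    xs = [x - 1, x,     x + 1, x + 1, x + 1, x,     x - 1, x - 1]
    p = [int(skel[ys[i], xs[i]]) for i in range(8)]
    # Half the number of 0/1 transitions along the closed neighbour ring
    return sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2

def extract_minutiae(skel):
    """Scan a 0/1 skeleton image and return (y, x, type) minutiae."""
    pts = []
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if skel[y, x] == 1:
                cn = crossing_number(skel, y, x)
                if cn == 1:
                    pts.append((y, x, "ending"))
                elif cn == 3:
                    pts.append((y, x, "bifurcation"))
    return pts
```

For example, a short horizontal skeleton segment yields a ridge ending at each of its two end pixels.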
The feature extracting section 23 extracts the directional distribution of a ridge pattern and feature points from an exemplar fingerprint image with the above process, and generates and outputs exemplar fingerprint ridge direction data which shows the extracted directional distribution and exemplar fingerprint feature-point data which shows the extracted feature points. The data storing section 22 stores the above data.
In the step S3, the feature-point matching section 24 matches the latent fingerprint to the exemplar fingerprint based on the latent fingerprint feature-point data and the exemplar fingerprint feature-point data. The feature-point matching section 24 calculates a matching score, which shows the matching result, by matching the latent fingerprint feature-point data to the exemplar fingerprint feature-point data. The feature-point matching section 24 calculates the matching score by using, for example, a method disclosed in “4.3 Minutiae-Based method” of Non Patent Literature 1 (Handbook of Fingerprint Recognition, Springer, 2003).
A process of calculating a matching score will be described with reference to
In the step S4, the feature-point matching section 24 judges whether or not the matching in the step S3 has succeeded. The feature-point matching section 24 judges that the matching was successful when the number of feature points for which correspondence relations are detected is equal to or greater than a predetermined number (e.g. 4); otherwise, the feature-point matching section 24 judges that the matching was unsuccessful. The processing proceeds to the step S5 when the matching is judged to be successful, and to the step S11 when the matching is judged to be unsuccessful.
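This success criterion reduces to a simple count threshold. As a sketch (the threshold of 4 is the example value given above; the function name is hypothetical):

```python
def matching_succeeded(corresponding_pairs, min_pairs=4):
    """Matching is judged successful when at least `min_pairs` feature-point
    correspondence relations were detected between the two images."""
    return len(corresponding_pairs) >= min_pairs
```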
The matching result outputting unit 13 outputs 0 as the matching score in the step S11 and the image matching device 10 ends the image matching method according to the present exemplary embodiment.
In the step S5, the non-linear image converting section 25 performs non-linear image conversion for making the latent fingerprint image approximate to the exemplar fingerprint image, based on the corresponding feature point list obtained in the step S3, and generates latent fingerprint image data after non-linear image conversion, which shows the latent fingerprint image after the non-linear image conversion. The data storing section 22 stores the latent fingerprint image data after non-linear image conversion. In the non-linear image conversion, the image distortion of the latent fingerprint image is corrected so that the latent fingerprint image can be superimposed onto the exemplar fingerprint image.
For example, the non-linear image converting section 25 performs the non-linear image conversion with a method disclosed in Patent Literature 1 (Japanese Patent Application Publication JP2010-225102A). The non-linear image conversion will be described below. The non-linear image converting section 25 calculates a feature-point moving amount as a moving amount (a coordinate conversion amount) for making the coordinates of the feature point 51a coincide with the coordinates of the feature point 52a. Then, as in an interpolation method, the non-linear image converting section 25 calculates a pixel moving amount as a moving amount of each neighborhood pixel, based on the feature-point moving amount and the distance between the feature point 51a and the pixel which neighbors the feature point 51a in the latent fingerprint image, and performs the non-linear image conversion based on the feature-point moving amount and the pixel moving amount. As a result of the non-linear image conversion, the feature point 51a moves so that its coordinates coincide with the coordinates of the feature point 52a, and the neighborhood pixels also move to appropriate positions. As described above, each feature point of the latent fingerprint image for which a correspondence relation is detected is moved to the coordinates of the corresponding feature point in the exemplar fingerprint image. A pixel of the latent fingerprint image other than such a feature point is moved based on the moving amounts of the neighboring feature points for which correspondence relations are detected. The coordinate conversion between the latent fingerprint image and the latent fingerprint image after non-linear image conversion is not a linear coordinate conversion (the coordinate conversion cannot be expressed by a linear expression).
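The interpolation described above can be sketched as follows. This is a minimal illustration, not the method of Patent Literature 1 itself: matched feature points are assumed to be given as (y, x) pairs, and inverse-distance weighting is used as one plausible way of spreading the feature-point moving amounts to neighboring pixels.

```python
import numpy as np

def displacement_field(shape, latent_pts, exemplar_pts, eps=1e-9):
    """Dense displacement field: each matched latent feature point is moved
    exactly onto its exemplar counterpart; every other pixel receives an
    inverse-distance-weighted blend of the feature-point moving amounts,
    which yields a non-linear coordinate conversion."""
    h, w = shape
    latent_pts = np.asarray(latent_pts, dtype=float)            # (N, 2) as (y, x)
    moves = np.asarray(exemplar_pts, dtype=float) - latent_pts  # per-point moving amount
    field = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            d = np.hypot(latent_pts[:, 0] - y, latent_pts[:, 1] - x)
            if d.min() < eps:                   # pixel is a matched feature point
                field[y, x] = moves[d.argmin()]
            else:
                wgt = 1.0 / d**2                # inverse-distance weights
                field[y, x] = (wgt[:, None] * moves).sum(axis=0) / wgt.sum()
    return field
```

A matched feature point thus receives exactly its own moving amount, while in-between pixels move by a smoothly varying, position-dependent amount.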
In the step S6, the noise removing and ridge enhancing section 26 performs noise removing and ridge enhancing which uses the exemplar fingerprint ridge direction data, on the latent fingerprint image after non-linear image conversion, and generates feedback processing latent fingerprint image data which shows a feedback processing latent fingerprint image, namely the latent fingerprint image after non-linear image conversion to which the noise removing and ridge enhancing have been applied. The data storing section 22 stores the feedback processing latent fingerprint image data. In the noise removing and ridge enhancing, a noise pattern in the latent fingerprint image after non-linear image conversion whose direction does not coincide with the exemplar fingerprint ridge direction data is removed, and a ridge of the latent fingerprint in the latent fingerprint image after non-linear image conversion whose direction coincides with the exemplar fingerprint ridge direction data is enhanced.
The step S6 will be described in detail with reference to
In the step S61, the noise removing and ridge enhancing section 26 performs direction utilizing image enhancing processing for enhancing change in gray level along the direction of the exemplar fingerprint ridge direction data, on the latent fingerprint image after non-linear image conversion, for the purpose of removing ridges of the latent fingerprint from the latent fingerprint image after non-linear image conversion. Here, the exemplar fingerprint ridge direction data relates the coordinates of each pixel contained in the exemplar fingerprint image to the direction of the ridge pattern of the exemplar fingerprint image at those coordinates. For each pixel in the latent fingerprint image after non-linear image conversion, the noise removing and ridge enhancing section 26 determines, in the direction utilizing image enhancing processing, a reference region as a local region which includes the pixel (hereinafter referred to as the “focused pixel”), based on the exemplar fingerprint ridge direction data. The noise removing and ridge enhancing section 26 detects the direction related to the coordinates of the focused pixel from the exemplar fingerprint ridge direction data and determines the reference region based on the detected direction. The reference region is determined to be a belt-shaped region along the detected direction. The noise removing and ridge enhancing section 26 calculates the gray level after the direction utilizing image enhancing processing at the focused pixel, based on a gray level histogram of the reference region. The direction utilizing image enhancing processing is based on, for example, the Adaptive Histogram Equalization or the Adaptive Contrast Stretch.
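One way to realize this processing at a single pixel is sketched below using the adaptive contrast stretch variant. The belt length, the rounding, and the function name are illustrative assumptions: the new gray level of the focused pixel is its old value stretched against the minimum and maximum gray levels found in its belt-shaped reference region along the given direction.

```python
import numpy as np

def enhance_pixel(img, y, x, angle, half_len=8):
    """One step of direction utilizing image enhancing processing (adaptive
    contrast stretch variant): stretch the focused pixel's gray level against
    the min/max of a belt-shaped reference region along `angle` (radians).
    Gray changes along the belt are amplified; structures running along a
    flat belt are left unchanged."""
    dy, dx = np.sin(angle), np.cos(angle)
    belt = []
    for k in range(-half_len, half_len + 1):    # walk the belt to both sides
        yy, xx = int(round(y + k * dy)), int(round(x + k * dx))
        if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]:
            belt.append(float(img[yy, xx]))
    lo, hi = min(belt), max(belt)
    if hi == lo:                                # flat reference region
        return img[y, x]
    return int(round(255 * (img[y, x] - lo) / (hi - lo)))
```

Applied with the exemplar ridge direction, a belt running along a latent ridge is nearly flat in gray level, so the ridge flattens out while cross-cutting noise is enhanced, which is exactly the behavior the step S61 relies on.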
With reference to
With reference to
As a result of the direction utilizing image enhancing processing which uses a reference region determined to be along ridges of the latent fingerprint, ridges of the latent fingerprint disappear and a noise pattern is enhanced.
For example, the reference region is determined as follows. The noise removing and ridge enhancing section 26 extracts the group of pixels which are passed through when proceeding from the focused pixel, along the direction related to the coordinates of the focused pixel (the direction detected from the exemplar fingerprint ridge direction data), to a first side and to a second side opposite to the first side, by a predetermined number of pixels each. The reference region is made up of this group of pixels.
In the step S62, the noise removing and ridge enhancing section 26 automatically extracts the directional distribution of the latent fingerprint image after direction utilizing image enhancing processing, that is, the latent fingerprint image after non-linear image conversion to which the direction utilizing image enhancing processing of the step S61 has been applied, and generates noise direction data which shows the extracted directional distribution. The data storing section 22 stores the noise direction data. The noise direction data shows the directional distribution of the noise pattern included in the latent fingerprint image after direction utilizing image enhancing processing. The noise direction data relates the coordinates of each pixel contained in the latent fingerprint image after direction utilizing image enhancing processing to the direction of the noise pattern at those coordinates.
In the step S63, the noise removing and ridge enhancing section 26 corrects the noise direction data based on the exemplar fingerprint ridge direction data and generates noise direction data after correction. The data storing section 22 stores the noise direction data after correction.
Here, the exemplar fingerprint ridge direction data relates coordinates to a direction in the exemplar fingerprint image. The noise direction data relates coordinates to a direction in the latent fingerprint image after direction utilizing image enhancing processing. When the difference between the direction of the exemplar fingerprint ridge direction data and the direction of the noise direction data at identical coordinates is within a predetermined range (e.g. within π/16 radian), the noise removing and ridge enhancing section 26 replaces the direction of the noise direction data with the direction perpendicular to the direction of the exemplar fingerprint ridge direction data, to generate the noise direction data after correction.
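The correction rule can be written compactly. The sketch below assumes, for illustration, that directions are expressed in radians and that ridge orientations are treated modulo π (an orientation and its opposite are the same direction):

```python
import numpy as np

def correct_noise_direction(noise_dir, ridge_dir, tol=np.pi / 16):
    """Replace the noise direction with the direction perpendicular to the
    exemplar ridge direction when the two are within `tol` of each other,
    so that the subsequent enhancement cannot erase latent ridges."""
    # Signed orientation difference folded into [-pi/2, pi/2)
    diff = (noise_dir - ridge_dir + np.pi / 2) % np.pi - np.pi / 2
    if abs(diff) <= tol:
        return (ridge_dir + np.pi / 2) % np.pi  # perpendicular direction
    return noise_dir
```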
Here, the meaning of the correction will be described. As disclosed in Japanese Patent Application Publication JP2009-223562A, by performing the direction utilizing image enhancing processing based on the noise direction data on a latent fingerprint image after non-linear image conversion, it becomes possible to remove a noise pattern and enhance ridges of the latent fingerprint. However, when the direction of the noise direction data and the direction of the exemplar fingerprint ridge direction data are close to each other, ridges of the latent fingerprint are at least partially removed from the latent fingerprint image after non-linear image conversion as a result of the direction utilizing image enhancing processing. By correcting the noise direction data in the step S63, it becomes possible to decrease the possibility that ridges of the latent fingerprint are removed. Note that it is also possible not to perform the step S63.
Next, in the step S64, the noise removing and ridge enhancing section 26 performs direction utilizing image enhancing processing for enhancing change in gray level along the direction of the noise direction data after correction, on the latent fingerprint image after non-linear image conversion, for the purpose of removing a noise pattern from the latent fingerprint image after non-linear image conversion. As a result, the noise removing and ridge enhancing section 26 generates the feedback processing latent fingerprint image data which shows the feedback processing latent fingerprint image. The data storing section 22 stores the feedback processing latent fingerprint image data.
The processing of the step S64 is the same as that of the aforementioned step S61, except that the data used is different. In the direction utilizing image enhancing processing of the step S64, the noise removing and ridge enhancing section 26 determines, with respect to each pixel of the latent fingerprint image after non-linear image conversion, a reference region as a local region which includes that pixel (hereinafter referred to as the “focused pixel”), based on the noise direction data after correction. The noise removing and ridge enhancing section 26 detects the direction related to the coordinates of the focused pixel from the noise direction data after correction and determines the reference region based on the detected direction. The reference region is determined to be a belt-shaped region along the detected direction. The noise removing and ridge enhancing section 26 calculates the gray level of the focused pixel after the direction utilizing image enhancing processing of the step S64, based on the gray level histogram of the reference region.
As a result of the direction utilizing image enhancing processing of the step S64, a noise pattern in the latent fingerprint image after non-linear image conversion whose direction does not coincide with the exemplar fingerprint ridge direction data is removed, and ridges of the latent fingerprint in the latent fingerprint image after non-linear image conversion whose direction coincides with the exemplar fingerprint ridge direction data are enhanced. The reason is as disclosed in Japanese Patent Application Publication JP2009-223562A.
In the step S7, the feature extracting section 23 extracts feature points of the fingerprint (ridge pattern), directional distribution of the ridge pattern, a ridge settlement region, and a skeleton from the feedback processing latent fingerprint image in
Due to the effective noise removing and ridge enhancing achieved by the feedback processing of the steps S5 and S6, the number of extracted feature points increases and the ridge settlement region is extended.
In the step S8, the feature-point matching section 24 matches the latent fingerprint to the exemplar fingerprint based on the latest feedback processing latent fingerprint feature-point data stored in the data storing section 22 and the exemplar fingerprint feature-point data. The feature-point matching section 24 calculates a matching score which shows the matching result, and generates and outputs a corresponding feature point list. The process of calculating the matching score and the process of generating the corresponding feature point list are the same as those of the step S3. The data storing section 22 stores the matching score and the corresponding feature point list.
In the step S9, the feature-point matching section 24 compares the matching score obtained in the last step S8 with the greatest matching score stored in the data storing section 22.
In the step S9, when the matching score obtained in the last step S8 is not greater than the greatest matching score, the processing proceeds to the step S10.
The matching result outputting unit 13 outputs the greatest matching score in the step S10, and the image matching device 10 ends the image matching method according to the present exemplary embodiment.
When, in the step S9, the matching score obtained in the last step S8 is greater than the greatest matching score, the data storing section 22 replaces the value of the greatest matching score with the matching score obtained in the last step S8. After that, the steps S5 to S9 are performed again. In the next step S5, non-linear image conversion for making the feedback processing latent fingerprint image obtained in the last step S6 approximate to the exemplar fingerprint image is performed based on the corresponding feature point list obtained in the last step S8. Based on the result of the next step S5, the next steps S6 to S9 are performed.
The steps S5 to S9 may be repeated as long as the value of the greatest matching score is updated. However, the maximum number of repetitions may be limited to a predetermined number of times (e.g. two).
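The overall control flow of the repeated steps S5 to S9 can be sketched as follows, with `refine_once` standing in (as a hypothetical callback) for one round of non-linear conversion, noise removing and ridge enhancing, feature extraction, and re-matching:

```python
def iterative_matching(initial_score, refine_once, max_rounds=2):
    """Feedback loop of the steps S5-S9: repeat warp/enhance/re-match rounds
    while the greatest matching score keeps improving, up to `max_rounds`
    additional rounds, and return the greatest score seen."""
    best = initial_score
    for _ in range(max_rounds):
        score = refine_once()
        if score <= best:   # no improvement: stop and report the best score
            break
        best = score
    return best
```

For example, with successive round scores 50, 70, 60 and an initial score of 40, the loop stops after the third round and reports 70.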
In the present exemplary embodiment, it is possible to input the exemplar fingerprint feature-point data and the exemplar fingerprint ridge direction data to the data processing unit 12 instead of inputting the image data 102 which shows the exemplar fingerprint image. Since feature-point data and ridge direction data for a great number of exemplar fingerprint images are registered in databases of fingerprint matching systems for criminal investigation, the image matching processing according to the present exemplary embodiment can be sped up by using such data.
In the above description, the case where the latent fingerprint image is converted to the feedback processing latent fingerprint image and a matching score is calculated based on the feedback processing latent fingerprint image and the exemplar fingerprint image has been described. However, it is also possible to exchange the roles of the latent fingerprint image and the exemplar fingerprint image, that is, to convert the exemplar fingerprint image to a feedback processing exemplar fingerprint image and calculate a matching score based on the feedback processing exemplar fingerprint image and the latent fingerprint image.
Furthermore, the feature-point matching section 24 may calculate a combined matching score from the matching score obtained based on the feedback processing latent fingerprint image and the exemplar fingerprint image, and the matching score obtained based on the feedback processing exemplar fingerprint image and the latent fingerprint image. The combined matching score is, for example, the average value of the two matching scores. In this case, the combined matching score is used in the step S9 instead of the matching score. Consequently, the matching accuracy is further improved.
An image matching method according to the second exemplary embodiment of the present invention will be described with reference to
In the step S5, the non-linear image converting section 25 performs non-linear image conversion for making the exemplar fingerprint image approximate to the latent fingerprint image, based on the corresponding feature point list obtained in the step S3, and generates exemplar fingerprint image data after non-linear image conversion, which shows the exemplar fingerprint image after the non-linear image conversion. The feature extracting section 23 extracts feature points of the fingerprint (ridge pattern) and the directional distribution of the ridge pattern from the exemplar fingerprint image after non-linear image conversion, and generates exemplar fingerprint feature-point data after non-linear image conversion which shows the feature points and exemplar fingerprint ridge direction data after non-linear image conversion which shows the directional distribution of the ridge pattern. The data storing section 22 stores the above data.
In the step S6, the noise removing and ridge enhancing section 26 performs noise removing and ridge enhancing, which uses the exemplar fingerprint ridge direction data after non-linear image conversion, on the latent fingerprint image, and generates feedback processing latent fingerprint image data which shows a feedback processing latent fingerprint image, namely the latent fingerprint image after the noise removing and ridge enhancing. The data storing section 22 stores the feedback processing latent fingerprint image data.
In the step S7, the feature extracting section 23 extracts a feature point of a fingerprint (ridge pattern), directional distribution of the ridge pattern, a ridge settlement region, and a skeleton from the feedback processing latent fingerprint image, and generates and outputs feedback processing latent fingerprint feature-point data which shows the feature point, feedback processing latent fingerprint ridge direction data which shows the directional distribution, ridge settlement region data which shows the ridge settlement region, and skeleton data which shows the skeleton. The data storing section 22 stores the above data.
In the step S8, the feature-point matching section 24 matches the latent fingerprint to the exemplar fingerprint based on the latest feedback processing latent fingerprint feature-point data and the exemplar fingerprint feature-point data after non-linear image conversion stored in the data storing section 22. The feature-point matching section 24 calculates a matching score which shows the matching result, and generates and outputs a corresponding feature point list. The data storing section 22 stores the matching score and the corresponding feature point list.
Detailed processes of the steps S5 to S8 according to the present exemplary embodiment can be understood clearly from the description of the steps S5 to S8 according to the first exemplary embodiment.
High matching accuracy is also achieved in the present exemplary embodiment.
In the above description, the case where the latent fingerprint image and the exemplar fingerprint image are converted to the feedback processing latent fingerprint image and the exemplar fingerprint image after non-linear image conversion, respectively, and a matching score is calculated based on the feedback processing latent fingerprint image and the exemplar fingerprint image after non-linear image conversion has been described. However, it is also possible to exchange the roles of the latent fingerprint image and the exemplar fingerprint image, that is, to convert the latent fingerprint image and the exemplar fingerprint image to a latent fingerprint image after non-linear image conversion and a feedback processing exemplar fingerprint image, respectively, and calculate a matching score based on the latent fingerprint image after non-linear image conversion and the feedback processing exemplar fingerprint image.
Furthermore, the feature-point matching section 24 may calculate a combined matching score based on the matching score obtained from the feedback processing latent fingerprint image and the exemplar fingerprint image after non-linear image conversion, and the matching score obtained from the latent fingerprint image after non-linear image conversion and the feedback processing exemplar fingerprint image. The combined matching score is, for example, the average value of the two matching scores. In this case, the combined matching score is used in the step S9 instead of the matching score. Consequently, matching accuracy is further improved.
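The averaging example given above can be written down directly; the function name is illustrative, and the specification names the average only as one possible combination rule.

```python
def combined_matching_score(score_fb_latent, score_fb_exemplar):
    """Combine the two directional matching scores
    (feedback-processed latent vs. converted exemplar, and
    converted latent vs. feedback-processed exemplar).
    The average is the example given in the specification."""
    return (score_fb_latent + score_fb_exemplar) / 2.0
```

Other symmetric combinations (e.g. a weighted average or the maximum) would fit the same slot in the step S9, though the text cites only the average.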
In the above, the case where a latent fingerprint image is matched to an exemplar fingerprint image is described. However, as in the case of the first exemplary embodiment, it is also possible in the present exemplary embodiment to perform matching between latent fingerprints, or between exemplar fingerprints.
An image matching method according to the third exemplary embodiment of the present invention will be described with reference to
In the step S7, the feature-point extracting section 23 extracts a feature point of a fingerprint (ridge pattern), directional distribution of the ridge pattern, a ridge settlement region, and a skeleton from a latent fingerprint image after non-linear image conversion, and generates and outputs latent fingerprint feature-point data after non-linear image conversion which shows the feature point, latent fingerprint ridge direction data after non-linear image conversion which shows the directional distribution, ridge settlement region data which shows the ridge settlement region, and skeleton data which shows the skeleton. The data storing section 22 stores the above data.
In the step S8, the feature-point matching section 24 matches a latent fingerprint to an exemplar fingerprint based on the latest latent fingerprint feature-point data after non-linear image conversion stored in the data storing section 22 and the exemplar fingerprint feature-point data. The feature-point matching section 24 calculates a matching score which shows the matching result, and generates and outputs a corresponding feature point list. The data storing section 22 stores the matching score and the corresponding feature point list.
Detailed processes of the steps S7 and S8 according to the present exemplary embodiment can be understood clearly from the description of the steps S7 and S8 according to the first exemplary embodiment.
In a latent fingerprint image with distortion, the intervals between ridges may differ significantly from the actual intervals between ridges. For this reason, extraction accuracy is lowered when feature points are extracted from such a latent fingerprint image. In the present exemplary embodiment, this lowering of feature-point extraction accuracy can be avoided because the intervals between ridges are normalized by the non-linear image conversion that makes the latent fingerprint image approximate the exemplar fingerprint image. Therefore, high matching accuracy is achieved in fingerprint matching which uses feature points.
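To illustrate the normalization idea in its simplest form, the sketch below rescales latent feature-point coordinates so that an estimated average ridge interval matches the exemplar's. This uniform scaling is only a linear stand-in for the specification's non-linear image conversion, and the function name and period parameters are assumptions.

```python
def normalize_ridge_interval(points, latent_period, exemplar_period):
    """Scale latent feature-point coordinates so that the average
    ridge interval (period, in pixels) matches the exemplar's.
    A linear stand-in for the patent's non-linear conversion."""
    s = exemplar_period / latent_period
    return [(x * s, y * s) for x, y in points]
```

After such normalization, distance-based feature-point comparison is no longer biased by compressed or stretched ridge spacing in the distorted latent image.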
In the present exemplary embodiment, exemplar fingerprint feature-point data and exemplar fingerprint ridge direction data may be inputted to the data processing unit 12 instead of inputting image data 102 which shows an exemplar fingerprint image, as in the case of the first exemplary embodiment.
Although the case where the target of matching is a fingerprint image has been described above, the target of matching may be other biological pattern images like a palmprint image.
In the above, the present invention has been explained with reference to some exemplary embodiments. However, the present invention is not limited to the above exemplary embodiments. Various modifications which can be understood by those skilled in the art can be applied to the configurations and details of the present invention within the scope of the present invention.
This application claims priority based on Japanese Patent Application JP2010-250205 filed on Nov. 8, 2010, the disclosure of which is incorporated herein by reference.
Number | Date | Country | Kind
--- | --- | --- | ---
2010-250205 | Nov 2010 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
--- | --- | --- | --- | ---
PCT/JP2011/075322 | 11/2/2011 | WO | 00 | 4/30/2013