This nonprovisional application is based on Japanese Patent Application No. 2006-154820 filed with the Japan Patent Office on Jun. 2, 2006, the entire contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to an information processing apparatus and, more specifically, to an information processing apparatus having a function of comparing images.
2. Description of the Background Art
Conventionally, apparatuses have been introduced that receive as an input biometric data uniquely identifying an individual, such as fingerprint image information, and execute a personal authentication process based on the input biometric data. In the process of personal authentication, the input biometric data must have high quality. According to Japanese Patent Laying-Open No. 2001-167053, if the quality is low, an authentication process using data serving as an alternative to the fingerprint image, such as a password, is executed. Specifically, if personal authentication using fingerprint data (hereinafter referred to as fingerprint authentication) is not satisfactory, authentication using a password as an alternative to the fingerprint is executed in addition to the fingerprint authentication. Based on the results of these authentications, a prescribed process that requires proper security (such as a log-in to a computer system) is executed.
According to the laid-open patent application mentioned above, when fingerprint authentication is unsatisfactory, an additional authentication process is performed using data serving as an alternative to the fingerprint. This requires additional hardware resources for the authentication based on the alternative data. Further, the additional authentication process hinders any increase in the speed of the authentication process. Further, it is inconvenient for the user, as input of the alternative data is required in addition to the fingerprint.
An object of the present invention is to provide an information processing apparatus that, when an application processing unit requiring security (crime prevention, safety, etc.) at the time of activation is to be operated based on processing of images identifying individuals, controls permission/inhibition of activation of the application processing unit without sacrificing user convenience while maintaining the required level of security.
In order to attain the object, according to an aspect, the present invention provides an information processing apparatus performing a process based on a result of comparison of an image for identifying an individual, including: a feature value detecting unit for detecting and outputting, in correspondence with each of partial images of the image as an input, a feature value in accordance with a pattern represented by the partial image; a non-eligibility detecting unit for detecting a partial image to be excluded from an object of a comparing process in the input image, based on the feature value output by the feature value detecting unit; a comparing unit for performing the comparing process using the input image with the partial image detected by the non-eligibility detecting unit excluded; and a ratio calculating unit for calculating ratio of the partial image detected to be excluded from the object by the non-eligibility detecting unit, relative to the input image as a whole; wherein permission or inhibition of a designated application process is controlled by a result of the comparing process by the comparing unit and by the ratio calculated by the ratio calculating unit.
Preferably, a security level required for activating the application process is allocated in advance to the designated application process; and permission or inhibition of the designated application process is controlled by a result of the comparing process by the comparing unit and by a result of comparison between the ratio calculated by the ratio calculating unit and the allocated security level.
Preferably, in the comparing process, the input image with the partial image detected by the non-eligibility detecting unit excluded is compared with a reference image prepared in advance; and when a result of the comparing process indicates a mismatch between the input image and the reference image, permission or inhibition of the designated application process is controlled by the ratio calculated by the ratio calculating unit.
Preferably, the non-eligibility detecting unit detects a combination of the partial images having a prescribed feature value output by the feature value detecting unit.
Preferably, the image represents a fingerprint pattern; and the feature value output by the feature value detecting unit is classified into a value indicating that the pattern of the partial image runs along a vertical direction of the fingerprint, a value indicating that it runs along a horizontal direction of the fingerprint, and a value indicating otherwise.
Preferably, the image represents a fingerprint pattern; and the feature value output by the feature value detecting unit is classified into a value indicating that the pattern of the partial image runs along a right oblique direction of the fingerprint, a value indicating that it runs along a left oblique direction of the fingerprint, and a value indicating otherwise.
Preferably, the prescribed feature value represents the value indicating otherwise.
Preferably, the combination consists of a plurality of the partial images having the value indicating otherwise, positioned adjacent to each other in a prescribed direction in the input image.
Preferably, the comparing unit includes a position searching unit for searching, in each of a plurality of partial areas of a reference image prepared in advance to be an object of comparison, a position of an area attaining maximum matching score with the partial image, in the partial areas excluding the area of the partial image detected by the non-eligibility detecting unit in the input image, a similarity score calculating unit for calculating a similarity score between the input image and the reference image, based on information of the partial area of which positional relation amount corresponds to a prescribed amount, the positional relation amount representing positional relation between a reference position for measuring, for each of the plurality of partial areas, a position of the partial area in the reference image and a position of maximum matching score corresponding to the partial area searched by the position searching unit, and for outputting the calculated score as an image similarity score; and a determining unit for determining whether the input image and the reference image match with each other, based on the applied image similarity score.
Preferably, the similarity score calculating unit calculates, among the plurality of partial areas, the number of the partial areas of which direction and distance from the reference position of the corresponding maximum matching score position searched by the position searching unit correspond to the prescribed amount, and outputs the result of calculation as the image similarity score.
Preferably, the positional relation amount indicates direction and distance of the maximum matching score position to the reference position.
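The movement-vector counting described in the preceding paragraphs can be sketched as follows. This is a minimal illustration, not the patented implementation: the pair list and function name are assumptions, and exact agreement of vectors stands in for the "prescribed amount". Partial areas from the same finger tend to share one common displacement, so the score is the size of the largest group of areas whose movement vectors (direction and distance from the reference position) coincide.

```python
from collections import Counter

def movement_vector_similarity(pairs):
    """pairs: list of ((rx, ry), (mx, my)) tuples, holding each partial
    area's reference position and the maximum-matching-score position
    searched for it in the other image."""
    # movement vector = maximum-matching position minus reference position
    vectors = [(mx - rx, my - ry) for (rx, ry), (mx, my) in pairs]
    counts = Counter(vectors)
    # the similarity score is the number of partial areas sharing the
    # most common movement vector
    return max(counts.values()) if counts else 0
```

A genuine match yields many areas with the same vector; impostor images scatter the vectors, keeping the score low.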
Preferably, the apparatus further includes an image input unit for inputting an image; wherein the image input unit has a reading surface on which a finger is placed, for reading a fingerprint image of the finger placed thereon.
In order to attain the object, according to another aspect, the present invention provides a method of information processing, for performing a process based on a result of comparison of an image for identifying an individual, using a computer, including the steps of: detecting, in correspondence with each of partial images of the image as an input, a feature value in accordance with a pattern represented by the partial image; detecting a partial image to be excluded from an object of a comparing process in the input image, based on the output feature value; performing the comparing process using the input image with the detected partial image excluded; and calculating a ratio of the partial image detected to be excluded from the object, relative to the input image as a whole; wherein permission or inhibition of a designated application process is controlled by a result of the comparing process in the step of performing the comparing process and by the ratio calculated in the calculating step.
According to a further aspect, the present invention provides an information processing program for causing a computer to execute the information processing method described above.
According to a still further aspect, the present invention provides a computer readable recording medium recording an information processing program for causing a computer to execute the information processing method described above.
According to the present invention, activation of the designated application process is permitted or inhibited depending on the result of the comparing process by the comparing unit and on the ratio calculated by the ratio calculating unit. Specifically, whether activation of the application process should be permitted or inhibited is controlled based on the result of the comparing process and on the ratio, relative to the entire image, of the partial image area excluded from the object of comparison, that is, information representing the accuracy of the comparison result. Therefore, even when the ratio is high and it is difficult to guarantee the accuracy of the comparison result, activation can be permitted or inhibited in consideration of the ratio (the accuracy of the comparison result) without requiring the user to input different personal information such as a password or requiring repeated image input and comparing processes.
Further, even when a good image is not available because of dirt or the like on the input image, whether activation of the application process should be permitted or inhibited is controlled in accordance with the result of comparison between the security level required to permit activation of the application process and the ratio occupied by the partial images not eligible for comparison due to the dirt or the like. Therefore, whether activation of the application process should be permitted or inhibited can be controlled in consideration of the security level suitable for the designated application processing unit provided in the information processing apparatus.
The foregoing and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
In the following, embodiments of the present invention will be described with reference to the figures. Though it is assumed here that the object image represents fingerprint patterns, it is not limiting and the image may have any pattern unique to an individual, such as a retina pattern or a vein pattern.
The computer may be provided with a magnetic tape apparatus accessing a cassette-type magnetic tape that is detachably mounted thereon.
Referring to
Image input unit 101 includes a fingerprint sensor 100. Image input unit 101 outputs image data of the fingerprint read by fingerprint sensor 100. Fingerprint sensor 100 may be any of optical, pressure, and static-capacitance type sensors. Control signals and data signals between each of these units are transferred through bus 103.
Referring to
Reference image memory 1021 stores image data of a plurality of partial areas of template fingerprint images that correspond to image data to be compared with the fingerprint image data stored in sample image memory 1023. Calculation memory 1022 stores data of various calculation results. Sample image memory 1023 stores fingerprint image data output from image input unit 101. Reference image feature value memory 1024 and sample image feature value memory 1025 store data of calculation results from a partial image feature value calculating unit 1045, which will be described later.
Security rank table 1026 stores, in correspondence to a list 1029 of names of various application programs representing application processes executed in the computer shown in
As shown in the figure, as the security level represented by data 1027 becomes higher, the upper limit of the ratio indicated by the corresponding upper limit data 1028 becomes smaller, and as the security level becomes lower, the upper limit value becomes larger. Therefore, the required security level can be known from the ratio represented by upper limit data 1028.
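The relation above can be sketched as a simple lookup table. The application names and upper-limit values below are hypothetical placeholders, not the contents of security rank table 1026; the point is only that a higher security level maps to a smaller tolerated ratio of non-eligible partial images.

```python
# Illustrative stand-in for security rank table 1026 (all entries are
# invented for this sketch).
SECURITY_RANK_TABLE = {
    "payment_app":  0.10,  # high security level: small tolerated ratio
    "mail_client":  0.30,
    "image_viewer": 0.50,  # low security level: larger tolerated ratio
}

def upper_limit_for(app_name):
    """Return the upper-limit ratio for a registered application,
    or None if the application is not in the list."""
    return SECURITY_RANK_TABLE.get(app_name)
```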
The application programs and the security levels allotted thereto shown in
In application list 1029, names of programs that require certain security levels at the time of execution by the computer shown in
Processing unit 11 includes an image correcting unit 104, a partial image feature value calculating unit (hereinafter referred to as a feature value calculating unit) 1045, a unit for determining image element not eligible for comparison (hereinafter referred to as an element determining unit) 1047, a unit for calculating ratio of image elements not eligible for comparison (hereinafter referred to as a ratio calculating unit) 1048, a unit for permitting execution of an application (hereinafter referred to as an execution permitting unit) 1049, a maximum matching score position searching unit 105, a movement-vector-based similarity score calculating unit (hereinafter referred to as a similarity score calculating unit) 106, a comparison/determination unit 107, and a control unit 108 that corresponds to CPU 622. Control unit 108 controls operations of other units. The function of each unit in processing unit 11 is realized when the corresponding program is executed. These programs are stored in advance in memory 624 or fixed disk 626, and when read and executed by CPU 622, corresponding functions are realized.
Image correcting unit 104 makes density correction of fingerprint image data.
Feature value calculating unit 1045 receives as an input given fingerprint image data, and for each of a plurality of partial area images set in the image represented by the input image data, calculates a value corresponding to a pattern of the partial image. When the fingerprint image data of interest is read from reference image memory 1021, control unit 108 stores the calculated value as the partial image feature value in reference image feature value memory 1024, and if the fingerprint image data is read from sample image memory 1023, it stores the calculated value as the partial image feature value in sample image feature value memory 1025.
Element determining unit 1047 determines (detects), from the fingerprint image to be compared, image elements to be excluded from the object of comparison. Specifically, by searching sample image feature value memory 1025, feature value of each partial image of the fingerprint image is read, and based on combinations of read feature values, a partial image to be excluded from the object of comparison (hereinafter referred to as a non-eligible element) is determined.
Ratio calculating unit 1048 calculates the ratio of partial image or images determined to be non-eligible elements relative to the entire fingerprint image to be compared. In other words, the ratio of the number of partial images occupied by the elements determined to be non-eligible by element determining unit 1047 relative to the total number of partial images set in the fingerprint image is calculated.
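The computation performed by ratio calculating unit 1048 reduces to a single division, sketched below under the assumption that the non-eligibility decision for each partial image is available as a boolean flag (the function name and input representation are illustrative, not from the source).

```python
def non_eligible_ratio(non_eligible_flags):
    """non_eligible_flags: one boolean per partial image set in the
    fingerprint image, True meaning the partial image was judged
    non-eligible for comparison."""
    if not non_eligible_flags:
        return 0.0
    # number of non-eligible partial images over the total number
    return sum(non_eligible_flags) / len(non_eligible_flags)
```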
Execution permitting unit 1049 searches application list 1029 based on an identifier of an application (application of which activation is desired) designated beforehand by a user through input unit 700, and determines whether the identifier of the application is registered in application list 1029 or not, based on the search result. If it is determined that the identifier is registered, whether activation (execution) of the designated application program is to be permitted or inhibited (activation not permitted) is determined based on the ratio calculated by ratio calculating unit 1048.
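A hedged sketch of the decision made by execution permitting unit 1049 follows. The exact decision rule is an assumption made for illustration: here, a registered application is activated only when the fingerprint comparison succeeded and the non-eligible ratio stays within the application's upper limit, while an unregistered application is treated as needing no authentication.

```python
def permit_activation(matched, ratio, upper_limit):
    """matched: boolean result of the comparing process.
    ratio: ratio of non-eligible partial images (0.0 to 1.0).
    upper_limit: tolerated ratio for the designated application, or
    None if the application is not registered in the list."""
    if upper_limit is None:
        # not in the application list: no authentication required
        # (an assumption of this sketch)
        return True
    # permit only on a match whose accuracy is acceptable
    return matched and ratio <= upper_limit
```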
Here, to “activate an application program” means that CPU 622 starts an operation of reading instructions of a program stored in advance in a memory and executing the read instructions. Further, “activation of an application program is not permitted” means that the application program is locked by software. Thus, activation of the application program is inhibited.
Maximum matching score position searching unit 105 receives as an input the determination result output from element determining unit 1047 and, based on the input determination result, limits (determines) the partial image or images to be the object of comparison, from the plurality of partial images set in the fingerprint image. In accordance with the feature values of the plurality of partial images of the fingerprint image of interest calculated by feature value calculating unit 1045, the scope of search is reduced (limited). Template matching is executed in the reduced scope. Specifically, each of a plurality of partial areas of one of the two fingerprint images to be compared is used as a template, a position in the other fingerprint image that attains the highest matching score with the template is searched for, and data representing the searched maximum matching score position is output. The output data of the maximum matching score position is stored in calculation memory 1022.
Similarity score calculating unit 106 reads the data of maximum matching score position from calculation memory 1022, and based on the read data, calculates a similarity score based on a movement vector, which will be described later. The calculated data of similarity score is stored in calculation memory 1022.
Comparison/determination unit 107 reads the data of similarity score calculated by similarity score calculating unit 106 from calculation memory 1022, and based on the similarity score represented by the read data, determines whether the two fingerprint images to be compared match (come from the same fingerprint) or do not match (come from different fingerprints).
The process for comparing two fingerprint images and controlling whether execution of an application is to be permitted or not based on the result of comparison performed in processing apparatus 1 having authentication function shown in
Further, it is assumed that at the time of inputting the fingerprint image, a finger of a user is placed beforehand in contact with fingerprint reading surface 201 of fingerprint sensor 100 (in a manner allowing reading of the fingerprint), as shown in
Further, the user registers (stores or enrolls) a reference image “A” of his/her fingerprint with reference memory 1021 in advance. Specifically, the user inputs a reference image enroll instruction by an operation of input unit 700, then CPU 622 (control unit 108) transmits a signal instructing start of an image input to image input unit 101, and waits until an image input end signal is received. Image input unit 101 reads (detects) the fingerprint of the finger placed on fingerprint reading surface 201 of fingerprint sensor 100, receives as an input the read fingerprint image as image “A”, and stores the input data of image “A” in a prescribed address of reference image memory 1021 through data bus 103. After the data of image “A” is stored in reference image memory 1021, image input unit 101 transmits the image input end signal to control unit 108. Thus, enrollment of an image as the reference image is completed. The enrolled image “A” is used as one of the images compared in the comparing process for user authentication.
At the time of enrolling the reference image, it is assumed that fingerprint reading surface 201 of fingerprint sensor 100 was not stained at all, and that the fingerprint could be read on the entire area of the fingerprint reading surface. Accordingly, it is assumed that the fingerprint represented by image “A” is free of any stain or scratch, and the fingerprint is clear.
After completion of enrollment of reference image “A”, when the user instructs start of execution of the desired program and inputs the name of the program as an identifier of the desired program through operations of input unit 700, CPU 622 (control unit 108) starts the process of
At the start of the process shown in
Image input unit 101 reads (detects) the fingerprint of the finger placed on fingerprint reading surface 201 of fingerprint sensor 100, receives as an input image “B” the read fingerprint image, and stores the data of the input image “B” at a prescribed address of memory 102 through bus 103 (step T1). In the present embodiment, after the data of image “B” is stored in memory 102, image input unit 101 transmits an image input end signal to control unit 108.
Receiving the image input end signal, control unit 108 transmits an image correction start signal to image correcting unit 104, and thereafter waits until receiving an image correction end signal. Generally, the input image has uneven image quality, as tones of pixels and overall density distribution vary because of variations in characteristics of image input unit 101 and fingerprint sensor 100, dryness of finger skin (amount of sebum) or pressure with which fingers are pressed on the reading surface.
Receiving the instruction signal to start image correction, image correcting unit 104 corrects the image quality of the input image to suppress variations in image quality derived from the different conditions under which the image is input (step T2). Specifically, images “A” and “B” stored in reference memory 1021 and sample image memory 1023 of memory 102 are read, and on each of the read image data, for the overall image corresponding to the image data or for each of the small areas into which the image is divided, histogram planarization, as described in Computer GAZOU SHORI NYUMON (Introduction to computer image processing), SOKEN SHUPPAN, p. 98, or image thresholding (binarization), as described in the same reference, pp. 66-69, is performed. The processed image data are then stored in reference image memory 1021 and sample image memory 1023. Therefore, at this time point, reference image “A” and sample image “B”, both before and after correction, are stored in reference image memory 1021 and sample image memory 1023, respectively.
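The two corrections cited above can be sketched in pure Python for a flat list of 8-bit gray levels. These are textbook stand-ins for illustration (histogram equalization and fixed thresholding), not the patented correction itself, and the threshold value is an arbitrary placeholder.

```python
def equalize_histogram(pixels, levels=256):
    """Histogram planarization: remap gray levels so that the
    cumulative distribution of the output becomes roughly linear."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the gray levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

def binarize(pixels, threshold=128):
    """Thresholding: 1 for gray levels at or above the threshold,
    0 below it (the 1/0 convention is a choice of this sketch)."""
    return [1 if p >= threshold else 0 for p in pixels]
```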
Here, every time a sample image “B” is input, the image correcting process is repeated on reference image “A” to generate a corrected reference image. The following approach, however, is also available. Specifically, when reference image “A” is input and stored in reference image memory 1021, reference image “A” may be corrected by image correcting unit 104 and the data of the corrected reference image may also be stored in reference image memory 1021. In that case, the operation of repeating the image correcting process on reference image “A” every time a sample image “B” is input can be omitted.
After the end of image correcting process on images “A” and “B”, image correcting unit 104 transmits the image correction end signal to control unit 108.
Thereafter, for the images “A” and “B” that have been image-corrected by image correcting unit 104, feature values of partial images are calculated by feature value calculating unit 1045 (step T2a).
(Calculation of Partial Image Feature Value)
Next, the procedure of calculating the feature value of a partial image at step T2a will be described.
First, an example will be described in which three different feature values are used.
In the calculation of the partial image feature value in accordance with Embodiment 1, a value corresponding to the pattern of the partial image on which the calculation is performed is output as the partial image feature value. Specifically, the maximum number of consecutive black pixels in the horizontal direction “maxhlen” (a value indicating the degree to which the pattern tends to extend in the horizontal direction, such as a horizontal stripe) and the maximum number of consecutive black pixels in the vertical direction “maxvlen” (a value indicating the degree to which the pattern tends to extend in the vertical direction, such as a vertical stripe) are detected, and the two values are compared. As a result of the comparison, if the number along the horizontal direction is determined to be relatively larger, a value “H” representing “horizontal” (horizontal stripe) is output. If the number along the vertical direction is determined to be relatively larger, a value “V” representing “vertical” (vertical stripe) is output. Otherwise, “X” is output.
Referring to
Even when the determined value would be “H” or “V”, “X” is output if the corresponding maximum number of consecutive black pixels, “maxhlen” or “maxvlen”, is not equal to or larger than the lower limit value “hlen0” or “vlen0” set in advance for the respective direction. These conditions can be given by the following expressions. If maxhlen>maxvlen and maxhlen≧hlen0, then “H” is output. If maxvlen>maxhlen and maxvlen≧vlen0, then “V” is output. Otherwise, “X” is output.
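The three-way classification expressed by the conditions above can be sketched directly. The default lower limits here are arbitrary placeholders; the patented apparatus sets hlen0 and vlen0 in advance.

```python
def classify_partial_image(maxhlen, maxvlen, hlen0=2, vlen0=2):
    """Classify a partial image from its maximum runs of consecutive
    black pixels in the horizontal (maxhlen) and vertical (maxvlen)
    directions."""
    if maxhlen > maxvlen and maxhlen >= hlen0:
        return "H"  # pattern tends to run along the horizontal direction
    if maxvlen > maxhlen and maxvlen >= vlen0:
        return "V"  # pattern tends to run along the vertical direction
    return "X"      # neither tendency is dominant
```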
Control unit 108 transmits a partial image feature value calculation start signal to feature value calculating unit 1045, and thereafter waits until receiving a partial image feature value calculation end signal. Feature value calculating unit 1045 reads the data of partial image “Ri” on which calculation is performed from reference memory 1021 or from sample image memory 1023, and temporarily stores the same in calculation memory 1022 (step S1). Feature value calculating unit 1045 reads the stored data of partial image “Ri”, and calculates the maximum number of consecutive black pixels in the horizontal direction “maxhlen” and the maximum number of consecutive black pixels in the vertical direction “maxvlen” (step S2). The process for calculating the maximum number of consecutive black pixels in the horizontal direction “maxhlen” and the maximum number of consecutive black pixels in the vertical direction “maxvlen” will be described with reference to
Thereafter, the value of pixel counter “j” for the vertical direction is compared with a variable “n” representing the maximum number of pixels in the vertical direction (step SH002). If j≧n, step SH016 is executed, and otherwise, step SH003 is executed. In Embodiment 1, the number “n” is set (stored) in advance as n=16 and, at the start of processing, j=0. Therefore, the flow proceeds to step SH003.
At step SH003, a pixel counter “i” for the horizontal direction, previous pixel value “c”, the present number of consecutive pixels “len”, and the maximum number of consecutive black pixels “max” in the present row are initialized. Namely, i=0, c=0, len=0 and max=0 (step SH003). Thereafter, pixel counter “i” for the horizontal direction is compared with the maximum number of pixels “m” in the horizontal direction (step SH004). If i≧m, step SH011 is executed, and otherwise, step SH005 is executed. In Embodiment 1, the number m=16 and, at the start of processing, “i”=0. Therefore, the flow proceeds to step SH005.
At step SH005, the previous pixel value “c” is compared with the pixel value “pixel (i, j)” at the coordinates (i, j) on which the comparison is currently performed. If c=pixel (i, j), step SH006 is executed, and otherwise, step SH007 is executed. In Embodiment 1, “c” has been initialized to “0” (white pixel) and pixel (0, 0) is “0” (white pixel) as can be seen from
At step SH006, the calculation len=len+1 is performed. In Embodiment 1, “len” has been initialized to len=0, and therefore, the addition of 1 provides len=1. Thereafter, the flow proceeds to step SH010.
At step SH010, the calculation i=i+1 is performed, that is, the value “i” of the horizontal pixel counter is incremented by 1. Here, “i” has been initialized to i=0, and therefore, the addition of 1 provides i=1. Then, the flow returns to step SH004. Thereafter, with reference to
At step SH011, if the condition c=1 and max<len is satisfied, step SH012 is executed. Otherwise, the flow proceeds to step SH013. At this time, the values are c=0, len=15 and max=0. Therefore, the flow proceeds to step SH013.
At step SH013, the maximum number of consecutive black pixels “maxhlen” in the horizontal direction of previous rows is compared with the maximum number of consecutive black pixels “max” of the present row. If maxhlen<max, step SH014 is executed. Otherwise, step SH015 is executed. At this time, the values are maxhlen=0 and max=0, and therefore, the flow proceeds to step SH015.
At step SH015, the calculation j=j+1 is performed, that is, the value of pixel counter “j” for the vertical direction is incremented by 1. Since j=0 at this time, the result of the calculation is j=1, and the flow returns to SH002.
Thereafter, steps SH002 to SH015 are repeated for j=1 to 15. When j attains 16 after step SH015 is performed, the value of pixel counter “j” for the vertical direction is compared with the maximum number of pixels “n” in the vertical direction. As a result of the comparison, if j≧n, step SH016 is executed next. Otherwise, step SH003 is executed. At this time, the values are j=16 and n=16, and therefore, the flow proceeds to step SH016.
At step SH016, “maxhlen” is output. As can be seen from the foregoing description and
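The scan traced through steps SH001 to SH016 above amounts to a row-by-row run-length search, sketched compactly below. The loop structure mirrors the flowchart in spirit, not step for step, and the column-wise maximum is obtained by applying the same scan to the transposed image.

```python
def max_consecutive_black_horizontal(image):
    """image: list of rows, each a sequence of 0 (white) / 1 (black).
    Returns the longest horizontal run of black pixels (maxhlen)."""
    maxhlen = 0
    for row in image:
        run = 0
        for pixel in row:
            # extend the current run on black, reset it on white
            run = run + 1 if pixel == 1 else 0
            maxhlen = max(maxhlen, run)
    return maxhlen

def max_consecutive_black_vertical(image):
    """Longest vertical run (maxvlen): same scan on the transpose."""
    return max_consecutive_black_horizontal(list(zip(*image)))
```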
Next, a flowchart of the process (step S2) for calculating the maximum number of consecutive black pixels “maxvlen” in the vertical direction, in the process (step T2a) for calculating the partial image feature value in accordance with Embodiment 1 of the present invention shown in
The subsequent processes with reference to “maxhlen” and “maxvlen” that are output through the above-described procedures will be described in detail, returning to step S3 of
At step S3, “maxhlen”, “maxvlen” and a prescribed lower limit “hlen0” of the maximum number of consecutive black pixels are compared with each other. If it is determined that the conditions maxhlen>maxvlen and maxhlen≧hlen0 are satisfied (Y at step S3), step S7 is executed. If it is determined that the conditions are not satisfied (N at step S3), step S4 is executed. Here, assuming that maxhlen=14, maxvlen=4 and the lower limit value hlen0 is 2, the conditions are satisfied. Thus, the flow proceeds to step S7. At step S7, “H” is stored in the feature value storing area of the partial image “Ri” for the original image of reference image feature value memory 1024 or sample image feature value memory 1025, and a partial image feature value calculation end signal is transmitted to control unit 108.
Assuming that the lower limit value hlen0 is 15, it is determined that the conditions of step S3 are not satisfied, and therefore, the process proceeds to step S4. At step S4, whether the conditions of maxvlen>maxhlen and maxvlen≧vlen0 are satisfied or not is determined. If it is determined that the conditions are satisfied (Y at step S4), the process of step S5 is executed next, and if the conditions are not satisfied, the process of step S6 is executed next.
Here, assuming that maxhlen=14, maxvlen=4 and vlen0=5, the conditions of step S4 are not satisfied, and therefore, the flow proceeds to step S6. At step S6, “X” is stored in the feature value storing area of the partial image “Ri” for the original image of reference image feature value memory 1024 or sample image feature value memory 1025, and the partial image feature value calculation end signal is transmitted to control unit 108.
Assuming that the output values of step S2 are maxhlen=4 and maxvlen=10, and that hlen0=2 and vlen0=2, the conditions of step S3 are not satisfied, but the conditions of step S4 are satisfied. Therefore, the process of step S5 is executed. At step S5, “V” is stored in the feature value storing area of the partial image “Ri” for the original image of reference image feature value memory 1024 or sample image feature value memory 1025, and the partial image feature value calculation end signal is transmitted to control unit 108.
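The branch logic of steps S3 to S7 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the function name is hypothetical, while maxhlen, maxvlen, hlen0 and vlen0 follow the text.

```python
def classify_by_run_length(maxhlen, maxvlen, hlen0, vlen0):
    """Classify a partial image from its maximum runs of consecutive
    black pixels (steps S3 to S7).  Returns "H", "V" or "X"."""
    if maxhlen > maxvlen and maxhlen >= hlen0:   # step S3 -> step S7
        return "H"
    if maxvlen > maxhlen and maxvlen >= vlen0:   # step S4 -> step S5
        return "V"
    return "X"                                   # step S6
```

With the values of the first example above (maxhlen=14, maxvlen=4, hlen0=2), the function returns “H”, matching the outcome of step S7.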
As described above, feature value calculating unit 1045 in accordance with Embodiment 1 extracts (specifies) each of pixel strings in the horizontal and vertical directions of the partial image “Ri” of the image on which the calculation is performed (see
Another example of the three types of partial image feature values will be described. Outline of the partial image feature value calculation for this purpose will be described with reference to
Here, based on the partial image “Ri” as the object of calculation shown in
Here, the “amount of increase of the number of black pixels when the partial image as the object of calculation is displaced to the left/right by one pixel” shown in
Here, the “amount of increase of the number of black pixels when the partial image as the object of calculation is displaced upward/downward by one pixel” shown in
In these operations, when a black pixel is superposed on a black pixel, the pixel comes to be a black pixel, when a black pixel and a white pixel are superposed, the pixel comes to be a black pixel, and when a white pixel is superposed on a white pixel, the pixel comes to be a white pixel.
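The left/right and upward/downward displacement-and-superposition described above can be sketched as follows, assuming a binary partial image stored as a list of rows with 1 representing a black pixel; the function names are hypothetical and pixels displaced in from outside the image are treated as white.

```python
def superpose_shifts(img, dx, dy):
    """OR the image with copies displaced by (+dx, +dy) and (-dx, -dy);
    out-of-range pixels are treated as white (0)."""
    n, m = len(img), len(img[0])
    def px(x, y):
        return img[y][x] if 0 <= x < m and 0 <= y < n else 0
    return [[px(i, j) | px(i - dx, j - dy) | px(i + dx, j + dy)
             for i in range(m)] for j in range(n)]

def black_pixel_increase(img, dx, dy):
    """Difference in total black pixels between the superposed image and
    the original: "hcnt" for (dx=1, dy=0), "vcnt" for (dx=0, dy=1)."""
    sup = superpose_shifts(img, dx, dy)
    return sum(map(sum, sup)) - sum(map(sum, img))
```

A vertical stripe grows when displaced left/right but not when displaced up/down, which is the asymmetry the subsequent classification exploits.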
Next, details of the process for calculating the partial image feature value will be described with reference to the flowchart of
First, control unit 108 transmits a partial image feature value calculation start signal to feature value calculating unit 1045, and thereafter waits until receiving a partial image feature value calculation end signal.
Feature value calculating unit 1045 reads partial image “Ri” (see
The process for detecting increase “hcnt” and increase “vcnt” will be described with reference to
Referring to
At step SHT03, the value of counter “i” for the pixels in the horizontal direction is initialized, namely i=0. Thereafter, the value of counter “i” for the horizontal direction is compared with the maximum number of pixels “m” in the horizontal direction (step SHT04). If i≧m, step SHT05 is executed next, and otherwise, step SHT06 is executed. Here, m=16 and i=0 at the start of processing, and therefore, the flow proceeds to SHT06.
At step SHT06, partial image “Ri” is read and it is determined whether pixel value “pixel (i, j)” at coordinates (i, j) in the partial image that is the object of comparison at present is 1 (black pixel) or not, whether pixel value “pixel (i−1, j)” at coordinates (i−1, j) that is one pixel to the left of coordinates (i, j) is 1 or not, or whether pixel value “pixel (i+1, j)” at coordinates (i+1, j) that is one pixel to the right of coordinates (i, j) is 1 or not. If pixel (i, j)=1, or pixel (i−1, j)=1 or pixel (i+1, j)=1, then step SHT08 is executed, and otherwise, step SHT07 is executed.
Here, it is assumed that pixel values in the scope of one pixel above, one pixel below, one pixel to the left and one pixel to the right of partial image “Ri”, that is, the range of Ri (−1 to m+1, −1), Ri (−1, −1 to n+1), Ri (m+1, −1 to n+1) and Ri (−1 to m+1, n+1) are all “0” (white pixel), as shown in
At step SHT07, “0” is stored as pixel value work (i, j) at coordinate (i, j) of image “WHi” (see
At step SHT09, the value of counter “i” for pixels in the horizontal direction is incremented by 1, that is, i=i+1. Here, the value has been initialized as i=0, and by the addition of 1, the value attains to i=1. Then, the flow returns to step SHT04. As the pixels in the 0-th row, that is, pixel (i, 0) are all white pixels as shown in
At step SHT05, the value of counter “j” for pixels in the vertical direction is incremented by 1, that is, j=j+1. At present, j=0, and therefore, the increment generates j=1, and the flow returns to step SHT02. Here, it is the start of a new row, and therefore, as in the 0-th row, the flow proceeds through steps SHT03 and SHT04. Thereafter, steps SHT04 to SHT09 are repeated until the pixel of the first row and 14-th column, that is, i=14, j=1, having the pixel value of pixel (i+1, j)=1, is reached. After the process of step SHT09, the value i attains to i=14. As m=16 and i=14, the flow proceeds to step SHT06.
At step SHT06, pixel (i+1, j)=1, namely, pixel (14+1, 1)=1, and therefore, the flow proceeds to step SHT08.
At SHT08, 1 is stored, in calculation memory 1022, as pixel value work (i, j) at coordinates (i, j) of image “WHi” (see
The flow proceeds to step SHT09, where i attains to i=16, and the flow proceeds to step SHT04. In that case, m=16 and i=16, and therefore, the flow proceeds to step SHT05, where j attains to j=2. Then, the flow proceeds to step SHT02. Thereafter, the processes of steps SHT02 to SHT09 are repeated for j=2 to 15. When value “j” attains to j=16 after step SHT05, the flow proceeds to step SHT02 where the value of counter “j” is compared with the maximum pixel number “n” in the vertical direction. If j≧n, step SHT10 is executed next, and otherwise, step SHT03 is executed. At present, j=16 and n=16, and therefore, the flow proceeds to step SHT10. At this time, in calculation memory 1022, based on partial image “Ri” shown in
At step SHT10, difference “cnt” between each pixel value work (i, j) of image “WHi” obtained by superposing images displaced by 1 pixel to the left and right and stored in calculation memory 1022 and each pixel value pixel (i, j) of partial image “Ri” that is compared and collated at present is calculated. The process for calculating difference “cnt” between “work” and “pixel” will be described with reference to
Here, n=16, and at the start of processing, j=0. Therefore, the flow proceeds to step SC003. In step SC003, the value of pixel counter “i” for the horizontal direction is initialized, namely i=0. Thereafter, the value of counter “i” for the horizontal direction is compared with the maximum number of pixels “m” in the horizontal direction (step SC004), and if i≧m, step SC005 is executed next, and otherwise, step SC006 is executed. Here, m=16, and i=0 at the start of processing, and therefore, the flow proceeds to SC006.
At step SC006, it is determined whether or not pixel value pixel (i, j) at coordinates (i, j) of partial image “Ri”, which is the object of comparison at present, is 0 (white pixel) and pixel value work (i, j) of image “WHi” obtained by superposing images displaced by 1 pixel is 1 (black pixel). If pixel (i, j)=0 and work (i, j)=1, step SC007 is executed next, and otherwise, step SC008 is executed next. Here, pixel (0, 0)=0 and work (0, 0)=0, as shown in
At step SC008, the value of counter “i” for the horizontal direction is incremented by 1, that is, i=i+1. Here, the value has been initialized to i=0, and the addition of 1 provides i=1. Then, the flow returns to step SC004. As the subsequent pixels of the 0-th row, namely pixel (i, 0) and work (i, 0) are all white pixels and the value is 0 as shown in
At step SC005, the value of counter “j” for the vertical direction is incremented by 1, that is, j=j+1. At present, j=0, and therefore, the value j attains to j=1, and the flow returns to step SC002. Here, it is the start of a new row, and therefore, as in the 0-th row, the flow proceeds to steps SC003 and SC004. Thereafter, steps SC004 to SC008 are repeated until the pixel of the first row and 14-th column, that is, i=14, j=1, is reached, and after the process of step SC008, the value “i” attains to i=14. Here, m=16 and i=14, and the flow proceeds to SC006.
At step SC006, the pixel values are determined as pixel (i, j)=0 and work (i, j)=1, that is, it is determined that pixel (14, 1)=0 and work (14, 1)=1, so that the flow proceeds to step SC007.
At step SC007, the value of difference counter “cnt” is incremented by 1, that is, cnt=cnt+1. Here, the value has been initialized to cnt=0 and the addition of 1 generates cnt=1. Next, the flow proceeds to step SC008, where “i” attains to i=16, and then the flow proceeds to step SC004. Since m=16 and i=16, the flow proceeds to step SC005, where j attains to j=2, and the flow proceeds to step SC002.
Thereafter, the process of steps SC002 to SC009 is repeated for j=2 to 15, and when the value j attains to j=16 after the process of step SC005, the flow proceeds to step SC002, in which the value of counter “j” for the vertical direction is compared with the maximum number of pixels “n” in the vertical direction. If j≧n, the flow returns to the flowchart of
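The counting of steps SC001 to SC009 amounts to counting the coordinates that are white in partial image “Ri” but black in the superposed image “WHi”. A sketch under the same list-of-rows representation (the function name is hypothetical):

```python
def count_difference(pixel, work):
    """Count coordinates (i, j) where the original partial image has a
    white pixel (0) but the superposed image has a black pixel (1)."""
    cnt = 0
    for j in range(len(pixel)):          # rows, counter "j"
        for i in range(len(pixel[0])):   # columns, counter "i"
            if pixel[j][i] == 0 and work[j][i] == 1:
                cnt += 1                 # corresponds to step SC007
    return cnt
```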
At step SHT11, the value of difference “cnt” calculated in accordance with the flowchart of
It can readily be seen that steps SVT01 to SVT12 in
As the amount of increase “vcnt” when the image is displaced upward/downward, the difference 96 between image “WVi” in
The processes thereafter performed on the output values “hcnt” and “vcnt” will be described, returning to step ST3 and the following steps of
At step ST3, the increases “hcnt” and “vcnt” are compared against each other and against the lower limit “vcnt0” of the increase in number of black pixels for upward/downward displacement. If the conditions vcnt>2×hcnt and vcnt≧vcnt0 are satisfied, step ST7 is executed next, and if the conditions are not satisfied, step ST4 is executed. At present, vcnt=96 and hcnt=21, and assuming that vcnt0=4, the flow proceeds to step ST7. At step ST7, “H” is output to the feature value storage area of partial image “Ri” of the original image in reference image feature value memory 1024 or in sample image feature value memory 1025, and the partial image feature value calculation end signal is transmitted to control unit 108.
If the output values of step ST2 are “vcnt”=30, “hcnt”=20 and “vcnt0”=4, the conditions of step ST3 are not satisfied, and then the flow proceeds to step ST4. At step ST4, when it is determined that the conditions hcnt>2×vcnt and hcnt≧hcnt0 are satisfied, step ST5 is executed next, and if the conditions are not satisfied, step ST6 is executed.
Here, the flow proceeds to step ST6, in which “X” is output to the feature value storage area of partial image “Ri” of the reference image feature value memory 1024 or sample image feature value memory 1025, and the partial image feature value calculation end signal is transmitted to control unit 108.
When the output values of step ST2 are “vcnt”=30, “hcnt”=70 and “vcnt0”=4, then in step ST3, it is determined that conditions vcnt>2×hcnt and vcnt≧vcnt0 are not satisfied. Then, step ST4 is executed. At step ST4, whether the conditions that hcnt>2×vcnt and hcnt≧hcnt0 are satisfied is determined. If the conditions are satisfied, step ST5 is executed next, and if the conditions are not satisfied, step ST6 is executed next.
Here, it is determined that the conditions are satisfied, the flow proceeds to step ST5, and “V” is output to the feature value storage area of the partial image “Ri” of the reference image feature value memory 1024 or sample image feature value memory 1025, and the partial image feature value calculation end signal is transmitted to control unit 108.
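The decision of steps ST3 to ST7 can be sketched as follows (illustrative only; the function name is hypothetical). Note that a large increase under upward/downward displacement indicates a horizontal pattern, and vice versa.

```python
def classify_by_increase(hcnt, vcnt, hcnt0, vcnt0):
    """Steps ST3 to ST7: classify a partial image from the black-pixel
    increases under horizontal (hcnt) and vertical (vcnt) displacement."""
    if vcnt > 2 * hcnt and vcnt >= vcnt0:   # step ST3 -> step ST7
        return "H"
    if hcnt > 2 * vcnt and hcnt >= hcnt0:   # step ST4 -> step ST5
        return "V"
    return "X"                              # step ST6
```

The three worked examples above (hcnt=21/vcnt=96, hcnt=20/vcnt=30, hcnt=70/vcnt=30, with lower limits of 4) yield “H”, “X” and “V”, respectively.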
Regarding the partial image feature value calculation, assume that the reference image or the sample image has noise. By way of example, assume that the fingerprint image serving as reference image “A” or sample image “B” is partially missing, for example because of a furrow of the finger, and as a result the partial image “Ri” has a vertical crease at the center as shown in
As described above, feature value calculating unit 1045 generates image “WHi” by displacing partial image “Ri” leftward and rightward by a prescribed number of pixels and superposing the resulting images, and image “WVi” by displacing partial image “Ri” upward and downward by a prescribed number of pixels and superposing the resulting images. It then determines the increase of black pixels “hcnt” as the difference in number of black pixels between partial image “Ri” and image “WHi”, and the increase of black pixels “vcnt” as the difference in number of black pixels between partial image “Ri” and image “WVi”. Based on these increases, it is determined whether the pattern of partial image “Ri” has a tendency to extend in the horizontal direction (tendency to be a horizontal stripe), has a tendency to extend in the vertical direction (tendency to be a vertical stripe), or has no such tendency, and the value representing the result of the determination (one of “H”, “V” and “X”) is output. The output value is the feature value of the partial image “Ri”.
The three types of partial image feature values are not limited to those described above, and the following three different types may be used. An outline of partial image feature value calculation for that purpose will be described with reference to
The “amount of increase of the number of black pixels when the image is displaced in the right oblique direction by one pixel and superposed” means the following. With the coordinates of each pixel in the original image (16×16 pixels) being (i, j), an image is generated by displacing the original image so that coordinates (i, j) of each pixel change to (i+1, j−1), and another image is generated by displacing the original image so that coordinates (i, j) of each pixel change to (i−1, j+1). The two generated images are superposed on the original image such that pixels of the same coordinates (i, j) match each other. The amount of increase is the difference between the total number of black pixels in the resulting image (16×16 pixels) and the total number of black pixels in the original image.
The “amount of increase of the number of black pixels when the image is displaced in the left oblique direction by one pixel and superposed” is defined in the same manner, except that the two displaced images are generated by changing coordinates (i, j) of each pixel to (i−1, j−1) and to (i+1, j+1), respectively, before superposing them on the original image such that pixels of the same coordinates (i, j) match each other.
In these operations, when a black pixel is superposed on a black pixel, the pixel comes to be a black pixel, when a black pixel and a white pixel are superposed, the pixel comes to be a black pixel, and when a white pixel is superposed on a white pixel, the pixel comes to be a white pixel.
It should be noted here that, even if the above-described determination is “R” or “L”, “X” is output when the above-described increase in number of black pixels is not equal to or larger than the lower limit “lcnt0” or “rcnt0” set in advance for both directions. These conditions may be mathematically represented in the following way. If the conditions (1) lcnt>2×rcnt and (2) lcnt≧lcnt0 are satisfied, “R” is output. If the conditions (3) rcnt>2×lcnt and (4) rcnt≧rcnt0 are satisfied, “L” is output. Otherwise, “X” is output.
Here, the value “R” representing “right oblique” is output if the amount of increase “lcnt” is larger than twice the amount of increase “rcnt”. The threshold “twice” may be changed to a different value. The same applies to the left oblique direction. Further, if it is known in advance that the number of black pixels in partial image “Ri” is in a certain range (for example, 30% to 70% of the total number of pixels) and that the image is appropriate for the comparing process, the above-described conditions (2) and (4) may be omitted.
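The oblique superposition and the conditions (1) to (4) together can be sketched as follows, again assuming a binary image stored as a list of rows with 1 representing a black pixel (the function name is hypothetical, and the default lower limits of 4 follow the worked examples later in the text).

```python
def oblique_feature(img, rcnt0=4, lcnt0=4):
    """Compute increases "rcnt" and "lcnt" by superposing diagonally
    displaced copies, then classify the pattern as "R", "L" or "X"."""
    n, m = len(img), len(img[0])
    def px(x, y):
        return img[y][x] if 0 <= x < m and 0 <= y < n else 0
    def increase(dx, dy):
        # OR the image with copies displaced by (+dx,+dy) and (-dx,-dy)
        sup = sum(px(i, j) | px(i - dx, j - dy) | px(i + dx, j + dy)
                  for j in range(n) for i in range(m))
        return sup - sum(map(sum, img))
    rcnt = increase(1, -1)   # right oblique: (i+1, j-1) and (i-1, j+1)
    lcnt = increase(1, 1)    # left oblique: (i+1, j+1) and (i-1, j-1)
    if lcnt > 2 * rcnt and lcnt >= lcnt0:   # conditions (1) and (2)
        return "R"
    if rcnt > 2 * lcnt and rcnt >= rcnt0:   # conditions (3) and (4)
        return "L"
    return "X"
```

A stripe aligned with one oblique direction barely grows when displaced along that direction but grows markedly when displaced along the other, which is why a large “lcnt” indicates a right-oblique pattern and a large “rcnt” a left-oblique one.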
Control unit 108 transmits to feature value calculating unit 1045 the partial image feature value calculation start signal and thereafter waits until receiving the partial image feature value calculation end signal.
Feature value calculating unit 1045 reads partial image “Ri” on which the calculation is to be performed (see
The process for finding the amounts of increase “rcnt” and “lcnt” is described with reference to
Referring to
At step SR03, the value of counter “i” for pixels in the horizontal direction is initialized, namely i=0. Then, the value of counter “i” for pixels in the horizontal direction and the maximum number of pixels in the horizontal direction “m” are compared with each other (step SR04). If the result of comparison is i≧m, step SR05 is subsequently performed. Otherwise, step SR06 is subsequently performed. Here, m=16 and i=0 at the start of the process, so that the flow proceeds to step SR06.
At step SR06, the partial image “Ri” is read, and it is determined whether the pixel value, pixel (i, j), at coordinates (i, j) on which the comparison is made at present is 1 (black pixel), or the pixel value, pixel (i+1, j+1), at the upper right adjacent coordinates (i+1, j+1) relative to coordinates (i, j) is 1, or the pixel value, pixel (i+1, j−1), at the lower right adjacent coordinates (i+1, j−1) relative to coordinates (i, j) is 1. If pixel (i, j)=1, or pixel (i+1, j+1)=1 or pixel (i+1, j−1)=1, step SR08 is subsequently performed. Otherwise, step SR07 is subsequently performed.
It is supposed here that, as shown in
At step SR07, 0 is stored as the pixel value, work (i, j), at coordinates (i, j) (see
At step SR09, the value of counter “i” is incremented by one, namely i=i+1. Here, the value has been initialized to i=0. Therefore, the addition of 1 provides i=1. Then, the flow returns to step SR04.
At step SR05, the value of counter “j” for pixels in the vertical direction is incremented by one, namely j=j+1. At this time, j=0 and this operation provides j=1. Then, the flow returns to SR02. Here, since a new row is processed, the flow proceeds through steps SR03 and SR04 as it does for the 0-th row. After this, steps SR04 to SR09 are repeated until the pixel of the first row and fifth column where pixel (i, j)=1, that is, i=5 and j=1, is reached. After step SR09, i=5 is attained. Since m=16 and i=5, the flow proceeds to step SR06.
At step SR06, pixel (i, j)=1, namely pixel (5, 1)=1, and the flow proceeds to step SR08.
At step SR08, 1 is stored as pixel value work (i, j) at coordinates (i, j) in image “WRi” (see
After this, the flow proceeds to step SR09, where i=16 is reached. Then, the flow proceeds to step SR04, and since m=16 and i=16, the flow further proceeds to step SR05, where j=2 is attained. Then the flow proceeds to SR02. After this, steps SR02 to SR09 are similarly repeated for j=2 to 15. After step SR09 and when j=16 is attained, the value of pixel counter “j” for the vertical direction is compared with the maximum number “n” of pixels in the vertical direction. If the result of comparison indicates j≧n, the process of step SR10 is executed. Otherwise, step SR03 is executed next. Here, j=16 and n=16, so that the flow proceeds to step SR10. At this time, in calculation memory 1022, image “WRi” as shown in
At step SR10, difference “cnt” is calculated between pixel value work (i, j) of image “WRi” generated by superposing the image displaced in the right oblique direction by one pixel and stored in calculation memory 1022 and pixel value pixel (i, j) of partial image “Ri”, which is currently compared and collated. The process for calculating difference “cnt” between “work” and “pixel” is now described with reference to
Here, n=16 and at the start of the process, j=0. Then, the flow proceeds to step SN003. At step SN003, the value of counter “i” for pixels in the horizontal direction is initialized, namely i=0. Then, the value of counter “i” for the horizontal direction and the maximum number “m” of pixels in the horizontal direction are compared with each other (step SN004). If the result of comparison indicates i≧m, step SN005 is subsequently performed. Otherwise, step SN006 is subsequently performed. Here, m=16 and, at the start of the process, i=0, so that the flow proceeds to step SN006.
At step SN006, it is determined whether or not pixel value pixel (i, j) of partial image “Ri” at coordinates (i, j) on which the comparison is currently made is 0 (white pixel) and pixel value work (i, j) of image “WRi” generated by superposing the image displaced by one pixel is 1 (black pixel). When the determination provides the results, pixel (i, j)=0 and work (i, j)=1, step SN007 is subsequently performed. Otherwise, step SN008 is subsequently performed. Here, with reference to
At step SN008, i=i+1, namely the value of counter “i” for the horizontal direction is incremented by one. Here, the initialization provides i=0 and thus the addition of 1 provides i=1. Then, the flow returns to step SN004. After this, steps SN004 to SN008 are repeated until i=15 is reached. After step SN008 and when i=16 is attained, the flow proceeds to SN004. As m=16 and i=16, the flow proceeds to step SN005.
At step SN005, j=j+1, namely the value of counter “j” for pixels in the vertical direction is incremented by one. At this time, j=0, the addition of 1 provides j=1 and thus the flow returns to SN002. Since a new row is now processed, the flow proceeds through SN003 and SN004. After this, steps SN004 to SN008 are repeated until the pixel in the first row and the 11th column, namely i=10 and j=1, where the pixel values are pixel (i, j)=0 and work (i, j)=1, is reached. After step SN008, i=10 is attained. Here, since m=16 and i=10, the flow proceeds to step SN006.
At step SN006, since the pixel values are pixel (i, j)=0 and work (i, j)=1, namely pixel (10, 1)=0 and work (10, 1)=1, the flow proceeds to step SN007.
At step SN007, cnt=cnt+1, namely the value of difference counter “cnt” is incremented by one. Here, since the initialization provides cnt=0, the addition of 1 provides cnt=1. The flow continues to step SN008, where i=16 is reached, and the flow proceeds to step SN004. As m=16 and i=16, the flow proceeds to step SN005 where j=2 is attained, and then the flow proceeds to step SN002.
After this, steps SN002 to SN008 are repeated for j=2 to 15. After step SN008 and when j=16 is attained, the flow proceeds to step SN002 at which the value of counter “j” for the pixels in the vertical direction is compared with the maximum number “n” of pixels in the vertical direction. If the result of comparison indicates j≧n, the flow returns to the flowchart of
At step SR11, rcnt=cnt, namely difference “cnt” calculated through the flowchart in
The process through steps SL0 to SL12 of
As increase “lcnt” when the image is displaced in the left oblique direction, the difference lcnt=115 between image “WLi” in
The process performed on the outputs “rcnt” and “lcnt” is described now, referring back to step SM3 and the following steps in
At step SM3, comparisons are made between “rcnt” and “lcnt” and the lower limit “lcnt0” of the increase in number of black pixels regarding the left oblique direction. When the conditions lcnt>2×rcnt and lcnt≧lcnt0 are satisfied, step SM7 is subsequently performed. Otherwise, step SM4 is subsequently performed. At this time, lcnt=115 and rcnt=45, and assuming that lcnt0=4, the flow subsequently proceeds to step SM7. At step SM7, “R” is output to the feature value storage area for partial image “Ri” in reference image feature value memory 1024 or sample image feature value memory 1025, and the partial image feature value calculation end signal is transmitted to control unit 108.
If the output values at step SM2 are lcnt=30 and rcnt=20 and it is assumed that lcnt0=4 and rcnt0=4, the conditions of step SM3 are not satisfied, and the flow proceeds to step SM4. At step SM4, if the conditions rcnt>2×lcnt and rcnt≧rcnt0 are satisfied, step SM5 is executed, and otherwise, step SM6 is executed.
Here, the flow proceeds to step SM6, at which “X” is output to the feature value storage area for partial image “Ri” for the reference image feature value memory 1024 or sample image feature value memory 1025. Then, the partial image feature value calculation end signal is transmitted to control unit 108.
Further, if the output values at step SM2 are lcnt=30 and rcnt=70 and it is assumed that lcnt0=4 and rcnt0=4, the conditions lcnt>2×rcnt and lcnt≧lcnt0 are not satisfied at step SM3, and therefore, the flow proceeds to step SM4. If the conditions rcnt>2×lcnt and rcnt≧rcnt0 are satisfied at step SM4, step SM5 is executed next, and otherwise, step SM6 is executed next.
Here, the flow proceeds to step SM5, at which “L” is output to the feature value storage area for partial image “Ri” for the reference image feature value memory 1024 or sample image feature value memory 1025. Then, the partial image feature value calculation end signal is transmitted to control unit 108.
Regarding the feature value calculation described above, even if the reference image “A” or the sample image “B” has noise, for example, even if the fingerprint image is partially missing because of a furrow of the finger and consequently partial image “Ri” has a vertical crease at the center as shown in
As discussed above, feature value calculating unit 1045 generates image “WRi” by superposing on partial image “Ri” the images displaced by a prescribed number of pixels in the right oblique direction, and image “WLi” by superposing the images displaced by a prescribed number of pixels in the left oblique direction. It detects increase “rcnt” in number of black pixels as the difference between image “WRi” and partial image “Ri”, and increase “lcnt” in number of black pixels as the difference between image “WLi” and partial image “Ri”. Based on these increases, it determines whether the pattern of partial image “Ri” has a tendency to be arranged in the right oblique direction (for example, right oblique stripe), a tendency to be arranged in the left oblique direction (for example, left oblique stripe), or neither, and outputs the value (one of “R”, “L” and “X”) according to the determination.
Feature value calculating unit 1045 may output all the feature values described above. In that case, feature value calculating unit 1045 finds the respective amounts of increase “hcnt”, “vcnt”, “rcnt” and “lcnt” of black pixels in accordance with the procedures described above. Based on these amounts of increase, it determines whether the pattern of the partial image “Ri” tends to be arranged in the horizontal (lateral) direction (for example, horizontal stripe), in the vertical (longitudinal) direction (for example, vertical stripe), in the right oblique direction (for example, right oblique stripe), in the left oblique direction (for example, left oblique stripe), or in none of these, and outputs a value corresponding to the result of determination (one of “H”, “V”, “R”, “L” and “X”). The output value represents the feature value of partial image “Ri”.
Here, values “H” and “V” are used in addition to “R”, “L” and “X” as the feature value of the partial image “Ri”. Therefore, the classification of feature values of the partial image of the object of comparison can be made finer. Even a partial image that would have been classified to “X” according to the classification using three types of feature values could be classified to a value other than “X” if five types of feature values are used for classification. Accordingly, a partial image “Ri” that should be classified to “X” can more exactly be detected.
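A five-value classification combining the two tests could be sketched as follows. The ordering of the tests and the use of a single shared lower limit t0 are assumptions made for illustration, not taken from the text.

```python
def classify_five(hcnt, vcnt, rcnt, lcnt, t0=4):
    """Illustrative five-value classification: apply the
    horizontal/vertical test first, then the oblique test."""
    if vcnt > 2 * hcnt and vcnt >= t0:
        return "H"
    if hcnt > 2 * vcnt and hcnt >= t0:
        return "V"
    if lcnt > 2 * rcnt and lcnt >= t0:
        return "R"
    if rcnt > 2 * lcnt and rcnt >= t0:
        return "L"
    return "X"
```

A partial image with roughly balanced “hcnt” and “vcnt” but a dominant “lcnt”, which the three-value scheme would label “X”, is labeled “R” here, illustrating the finer classification.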
Here, in view of the fact that there is a notable tendency, for most of fingerprints to be identified, to have the vertical or horizontal pattern, the process shown in
The object of search by maximum matching score position searching unit 105 may be limited in accordance with the feature values calculated in the above-described manner.
First, referring to
Maximum matching score position searching unit 105 searches image “A” of
As can be seen from this image (A)-S1, the first detected partial image feature value is “V”. Therefore, among partial images of image “B”, the partial images having the partial image feature value “V” are to be searched for. The image (B)-S1-1 of
Thereafter, the process is performed on partial image “g14” having feature value “V” subsequently to partial image “g11”, that is, “V1” (image (B)-S1-2 of
Thereafter, for partial images “g29”, “g30”, “g35”, “g38”, “g42”, “g43”, “g46”, “g47”, “g49”, “g50”, “g55”, “g56”, “g58” to “g62” and “g63” (image (A)-S20 of
The number of partial images for which the search is conducted in images “A” and “B” by maximum matching score position searching unit 105 is given by the expression: (the number of partial images in image “A” that have partial image feature value “V”×the number of partial images in image “B” that have partial image feature value “V”+the number of partial images in image “A” that have partial image feature value “H”×the number of partial images in image “B” that have partial image feature value “H”). The number of partial images searched by the procedure in the example shown in
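The reduction described in the expression above can be written out as follows (illustrative; the function name is hypothetical, and the per-partial-image feature values are given as plain lists).

```python
def searched_pairs(feat_a, feat_b):
    """Number of template searches when only partial images with the
    same feature value are compared ("V" with "V", "H" with "H").
    feat_a and feat_b list the feature value of each partial image."""
    pairs = 0
    for v in ("V", "H"):
        pairs += feat_a.count(v) * feat_b.count(v)
    return pairs
```

Compared with an unrestricted search of len(feat_a) × len(feat_b) pairs, only the matching-feature products remain.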
Since the partial image feature value in accordance with the present embodiment depends also on the pattern of the image, an example having a pattern different from that of
For image “A” shown in
Although the partial images having the same feature value are searched for according to the description above, the present invention is not limited to this. When the reference image feature value is “H”, the partial areas that have sample image feature values “H” and “X” may be searched for and, when the reference image feature value is “V”, the areas that have sample image feature values “V” and “X” may be searched for, so as to improve accuracy in the comparing process.
Feature value “X” means that the correlated partial image has a pattern that cannot be specified as vertical stripe or horizontal stripe. In order to increase the speed of the comparing process, partial areas having feature value “X” may be excluded from the scope of search by maximum matching score position searching unit 105.
In order to improve accuracy, not only the values “H” and “V” but also values “L” and “R” may be applied.
The image that has been corrected by image correcting unit 104 and of which feature values of partial images have been calculated by feature value calculating unit 1045 is next subjected to a calculation process for determining an image element that is not eligible for comparison (step T2b). The process is as shown in the flowchart of
Here, it is assumed that each partial image in the image as the object of comparison comes to have one of the feature values “H”, “V”, “L” and “R” (four values) through the process by element determining unit 1047. Specifically, if there is a stained area on fingerprint reading surface 201 of fingerprint sensor 101, or an area from which an image cannot be input because the fingerprint is absent (the finger is not placed) thereon, a partial image corresponding to such an area basically has the feature value “X”. Using this characteristic, element determining unit 1047 detects (determines), in the input image, the stained partial area or the partial area at which the fingerprint image is not available as an image element not eligible for comparison, and allocates a feature value “E” to such a detected area. Here, allocation of feature value “E” to a partial area of the image (partial image) means that the corresponding partial area (partial image) is excluded from the scope of search performed by maximum matching score position searching unit 105 for image comparison by comparison/determination unit 107, and that it is excluded from the object of similarity score calculation by similarity score calculating unit 106.
Sample image “B” of
Element determining unit 1047 reads the feature value of each of the partial images corresponding to sample image “B” of
Next, element determining unit 1047 searches the feature values of respective partial images of
Specifically, feature values of partial images of sample image “B” shown in
An example of this rewriting will be described. Referring to
Here, a partial area consisting of at least two partial images having the feature value “X” continuous in at least one of longitudinal, lateral and oblique directions of sample image “B” is determined as an image element not eligible for comparison. The reference for determination is not limited to this. By way of example, the partial image having the feature value “X” itself may be determined to be the element not eligible for comparison, or other combination may be used.
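The determination rule described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the grid-of-strings representation and the function name are assumptions, and the rule applied is the one stated in the text (two or more "X" partial images continuous in the longitudinal, lateral or oblique direction are rewritten to "E").

```python
# Hypothetical sketch of element determining unit 1047: partial images whose
# feature value is "X" and that form a run of two or more in the vertical,
# horizontal, or diagonal direction are rewritten to "E" (not eligible).
def mark_ineligible(grid):
    """grid: list of rows of feature values ("H","V","L","R","X");
    returns a new grid with qualifying "X" runs replaced by "E"."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    # Direction vectors: right, down, down-right, down-left
    for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):
        for y in range(rows):
            for x in range(cols):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < rows and 0 <= x2 < cols:
                    if grid[y][x] == "X" and grid[y2][x2] == "X":
                        out[y][x] = out[y2][x2] = "E"
    return out
```

An isolated "X" (no adjacent "X" in any of the four directions) is left unchanged, matching the text's remark that the reference for determination could alternatively treat every "X" as non-eligible.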
Next, the search for the maximum matching score position and the process of similarity score calculation based on the result of search (step T3 of
At the end of determination by element determining unit 1047, control unit 108 transmits a template matching start signal to maximum matching score position searching unit 105, and waits until a template matching end signal is received.
Receiving the template matching start signal, maximum matching score position searching unit 105 starts the template matching process represented by steps S001 to S007. At step S001, the value of a counter variable “i” is initialized to 1. At step S002, an image of a partial area defined as partial image “Ri” of reference image “A” is set as a template to be used for template matching.
At step S0025, maximum matching score position searching unit 105 searches in reference image feature value memory 1024 and reads a feature value “CRi” of partial image “Ri” as the template.
At step S003, the position in image "B" having the highest matching score with the template set at step S002, that is, the position whose data in image "B" best matches the template, is searched for. In this search, the following calculation is performed only on the partial images of image "B" that have a feature value other than "E".
Let Ri(x, y) represent the pixel density at coordinates (x, y), with the upper left corner of the rectangular partial image "Ri" used as the template taken as the reference; let B(s, t) represent the pixel density at coordinates (s, t), with the upper left corner of image "B" taken as the reference; let "w" and "h" represent the width and height of partial image "Ri"; and let "V0" represent the maximum possible density of each pixel in images "A" and "B". Then, matching score Ci(s, t) at coordinates (s, t) of image "B" is calculated, for instance, based on the difference in density of each pixel, in accordance with the following equation (Equation 1), in which the sums are taken over x=1 to w and y=1 to h:

Ci(s, t)=ΣyΣx(V0−|Ri(x, y)−B(s+x−1, t+y−1)|) (Equation 1)
The coordinates (s, t) are successively updated in image "B", and after every update, matching score Ci(s, t) at the updated coordinates is calculated. The position in image "B" that corresponds to the largest value among the calculated matching scores Ci(s, t) is determined to be the best match with partial image "Ri", and the image of the partial area at that position in image "B" is regarded as partial area "Mi". The matching score Ci(s, t) corresponding to that position is set as the maximum matching score "Cimax".
At step S004, the maximum matching score “Cimax” is stored at a prescribed address of memory 102. At step S005, movement vector “Vi” is calculated in accordance with Equation (2) below, and the calculated movement vector is stored in a prescribed address of memory 102.
Here, as described above, based on a partial image "Ri" corresponding to a position "P" in image "A", image "B" is scanned (searched) and, if a partial area "Mi" at the position having the highest matching score with partial image "Ri" is detected as a result, the directional vector from position "P" to the position of partial area "Mi" is referred to as movement vector "Vi". The manner in which a finger is placed on fingerprint reading surface 201 of fingerprint sensor 101 is not uniform, and movement vector "Vi" represents how far the other image, for example image "B", appears to have moved when one of the images, for example image "A", is used as a reference. Since movement vector "Vi" has both direction and distance, it represents the positional relation between partial image "Ri" of image "A" and partial area "Mi" of image "B" in a quantified manner.
Vi=(Vix, Viy)=(Mix−Rix, Miy−Riy) (Equation 2)
In Equation 2, variables “Rix” and “Riy” represent values of x and y coordinates of the reference position of partial image “Ri”, which correspond, for example, to the coordinates at the upper left corner of partial image “Ri” in image “A”. Further, variables “Mix” and “Miy” represent values of x and y coordinates of the position corresponding to the maximum matching score “Cimax” calculated by the search in partial area “Mi”. By way of example, these values correspond to the coordinates at the upper left corner of partial area “Mi” at the matching position in image “B”.
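The per-partial-image search of steps S002 to S005 can be sketched as follows, under assumed data structures (images as 2-D lists of pixel densities). The matching score implements a density-difference score consistent with the description of Equation 1, and the movement vector follows Equation 2; the function name and argument layout are illustrative assumptions.

```python
# Sketch: for one partial image Ri, scan sample image B for the position
# of maximum matching score, then derive movement vector Vi (Equation 2).
def match_partial(Ri, B, ri_pos, V0=255):
    """Ri, B: 2-D lists of pixel densities; ri_pos: (x, y) of Ri's upper
    left corner in image A; V0: maximum possible pixel density."""
    h, w = len(Ri), len(Ri[0])
    H, W = len(B), len(B[0])
    best_score, best_pos = -1, (0, 0)
    for t in range(H - h + 1):          # candidate upper-left y in B
        for s in range(W - w + 1):      # candidate upper-left x in B
            # Density-difference score: grows as pixel densities agree
            score = sum(V0 - abs(Ri[y][x] - B[t + y][s + x])
                        for y in range(h) for x in range(w))
            if score > best_score:
                best_score, best_pos = score, (s, t)
    # Equation 2: movement vector from Ri's position in A to Mi's in B
    Vi = (best_pos[0] - ri_pos[0], best_pos[1] - ri_pos[1])
    return best_score, Vi               # Cimax and Vi
```

In the actual apparatus, the scan additionally skips partial areas of image "B" carrying the feature value "E", as described at step S003.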
At step S006, the value of counter variable “i” is compared with the value of variable “n”, and based on the result of comparison, whether the value of counter variable “i” is smaller than the value of “n” or not is determined. If it is determined that the value of variable “i” is smaller than the value of variable “n”, the process proceeds to step S007, and otherwise, the process proceeds to step S008.
At step S007, 1 is added to the value of variable “i”. Thereafter, as long as the value of variable “i” is determined to represent a value smaller than the variable “n”, steps S002 to S007 are repeated. Specifically, for every partial area “Ri” of image “A”, template matching is performed only on that partial area of image “B” which has the feature value “CM” same as the corresponding feature value “CRi” read by searching reference image feature value memory 1024 for the partial area “Ri”, and the maximum matching score “Cimax” of each partial image “Ri” and movement vector “Vi” are calculated.
After the successively calculated maximum matching score "Cimax" and movement vector "Vi" of all the partial images "Ri" are stored at prescribed addresses of memory 102, maximum matching score position searching unit 105 transmits a template matching end signal to control unit 108 and ends the process.
Thereafter, control unit 108 transmits a similarity score calculation start signal to similarity score calculating unit 106, and waits until a similarity score calculation end signal is received. Similarity score calculating unit 106 performs processes shown from step S008 to S020 of
At step S008, the value of similarity score P(A, B) is initialized to 0. Here, similarity score P(A, B) refers to a variable storing the degree of similarity between images “A” and “B”. At step S009, the value of an index “i” of movement vector “Vi” used as a reference is initialized to 1. At step S010, the value of similarity score “Pi” related to the movement vector “Vi” as a reference is initialized to 0. At step S011, an index “j” of a movement vector “Vj” is initialized to 1. At step S012, vector difference “dVij” between reference movement vector “Vi” and movement vector “Vj” is calculated in accordance with Equation 3 below.
dVij=|Vi−Vj|=sqrt((Vix−Vjx)^2+(Viy−Vjy)^2) (Equation 3)
where variables “Vix” and “Viy” represent the x-directional and y-directional components of movement vector “Vi”, variables “Vjx” and “Vjy” represent the x-directional and y-directional components of movement vector “Vj”, sqrt(X) represents the square root of X, and X^2 represents the square of X.
At step S013, the vector difference “dVij” between movement vectors “Vi” and “Vj” is compared with a threshold value represented by a constant ε, and based on the result of comparison, whether it is possible to regard the movement vectors “Vi” and “Vj” as substantially the same movement vector or not is determined. If the result of determination shows that the value of vector difference “dVij” is smaller than the threshold value (vector difference) indicated by constant ε, it is determined that movement vectors “Vi” and “Vj” are substantially the same, and the process proceeds to step S014. If the result shows that the difference is not smaller than constant ε, the two vectors are not determined to be substantially the same, and the process proceeds to step S015. At step S014, the value of similarity score “Pi” is increased in accordance with Equations 4 to 6 below.
Pi=Pi+α (Equation 4)
α=1 (Equation 5)
α=Cjmax (Equation 6)
The variable α in Equation 4 is the increment applied to similarity score “Pi”. When it is set as α=1 according to Equation 5, similarity score “Pi” comes to represent the number of partial areas that have the same movement vector as the movement vector “Vi” used as the reference. When it is set as α=Cjmax according to Equation 6, similarity score “Pi” comes to represent the total sum of the maximum matching scores obtained at the time of template matching for the partial areas having the same movement vector as the movement vector “Vi” used as the reference. It is also possible to make the value of variable α smaller in accordance with the magnitude of vector difference “dVij”.
At step S015, whether the value of index “j” is smaller than the value of variable “n” or not is determined. If the value of index “j” is smaller than the total number of partial areas represented by variable “n”, the process proceeds to step S016, and if it is not smaller, the process proceeds to step S017. At step S016, the value of index “j” is incremented by 1. By the process of steps S010 to S016, similarity score “Pi” is calculated using the information of the partial areas determined to have the same movement vector as the movement vector “Vi” used as the reference. At step S017, similarity score “Pi” with movement vector “Vi” used as the reference is compared with the value of variable P(A, B). If the value of similarity score “Pi” is larger than the largest similarity score obtained up to that time point (the value of variable P(A, B)), the process proceeds to step S018; if it is not larger, the process proceeds to step S019.
At step S018, the value of similarity score “Pi” when movement vector “Vi” is used as the reference is set as variable P(A, B). Through steps S017 and S018, if the similarity score “Pi” with movement vector “Vi” used as the reference is larger than the maximum value (the value of variable P(A, B)) of the similarity scores calculated up to that time point with other movement vectors used as references, the movement vector “Vi” is regarded as the most relevant reference among the indexes “i” examined up to that time point.
At step S019, the value of index “i” of movement vector “Vi” used as the reference is compared with the number of partial areas (value of variable “n”). If the value of index “i” is smaller than the number of partial areas, the process proceeds to step S020. At step S020, the index “i” is incremented by 1.
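The loop of steps S008 to S020 can be sketched as follows. This is an illustrative sketch under assumed inputs (a list of movement vectors, one per partial area); with α=1 as in Equation 5, the score counts partial areas sharing the reference movement vector. Names and the fixed threshold are assumptions.

```python
# Sketch of similarity score calculation (steps S008-S020): take each
# Vi in turn as the reference, accumulate alpha for every Vj within
# threshold eps of it (Equation 3), and keep the largest accumulated
# score as P(A, B).
import math

def similarity_score(vectors, eps=2.0, alpha=1.0):
    best = 0.0                              # P(A, B), step S008
    for Vi in vectors:                      # reference movement vector, index i
        Pi = 0.0                            # step S010
        for Vj in vectors:                  # index j
            dVij = math.hypot(Vi[0] - Vj[0], Vi[1] - Vj[1])  # Equation 3
            if dVij < eps:                  # substantially the same vector
                Pi += alpha                 # Equation 4 with Equation 5
        best = max(best, Pi)                # steps S017-S018
    return best
```

Using α=Cjmax (Equation 6) instead would simply mean adding the j-th partial area's maximum matching score rather than 1 inside the inner loop.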
From step S008 to step S020, the similarity score between images “A” and “B” is calculated as the value of variable P(A, B). Similarity score calculating unit 106 stores the value of variable P(A, B) calculated in the above-described manner at a prescribed address of memory 102, transmits the similarity score calculation end signal to control unit 108 and ends processing.
Thereafter, control unit 108 transmits a comparison/determination start signal to comparison/determination unit 107, and waits until a comparison/determination end signal is received. Receiving the start signal, comparison/determination unit 107 compares the similarity score represented by the value of variable P(A, B) stored in memory 102 with a predetermined comparison threshold value T. If variable P(A, B)≧T as a result of comparison, it is determined that image “A” and image “B” are taken from the same fingerprint, and a value representing a “match”, for example “1”, is written to a prescribed address of calculation memory 1022. Otherwise, it is determined that the images come from different fingerprints, and a value representing a “mismatch”, for example “0”, is written to a prescribed address of calculation memory 1022. Then, the comparison/determination end signal is transmitted to control unit 108, and the process ends.
Receiving the comparison/determination end signal, control unit 108 reads the result of comparison from calculation memory 1022, and determines if the read result of comparison indicates a “match” or not (step T3a). If the result of determination indicates a “mismatch”, the process proceeds to step T4, and a message of “comparison mismatch” is output. If the result of determination represents a “match,” control unit 108 transmits an instruction signal to ratio calculating unit 1048 to start ratio calculation, and waits until a ratio calculation end signal is received.
Receiving the ratio calculation start instruction signal, ratio calculating unit 1048 calculates the ratio occupied by non-eligible elements in image “B” (step T3b). Ratio calculating unit 1048 searches in sample image feature value memory 1025, counts the total number of partial images of sample image “B”, sets the count value as a variable “N”, counts the number of partial images indicating the feature value other than “X” or “E”, and sets the count value as a variable “NNE”. Then, the ratio “PE” of image elements that are not eligible for comparison with respect to sample image “B” is calculated in accordance with the equation PE=1−(NNE/N). The calculated value “PE” is stored in calculation memory 1022, and the calculation end signal is transmitted to control unit 108.
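The calculation of step T3b reduces to the equation PE=1−(NNE/N) given above. A minimal sketch, assuming the feature values of the sample image's partial images are available as a flat list (the representation and function name are assumptions):

```python
# Sketch of ratio calculating unit 1048 (step T3b): the ratio of image
# elements not eligible for comparison in sample image "B",
# PE = 1 - (NNE / N), where NNE counts partial images whose feature
# value is neither "X" nor "E".
def ratio_not_eligible(feature_values):
    N = len(feature_values)                        # total partial images
    NNE = sum(1 for v in feature_values if v not in ("X", "E"))
    return 1.0 - (NNE / N)                         # PE
```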
The ratio “PE” calculated in this manner can be regarded as indicating the reliability of the result of the comparing process. Even if the comparison result is a match, the reliability of the result is not high if the ratio “PE” is large, because a large number of partial images were not used for comparison and the comparing process was therefore done only on partial images of a very limited area. On the contrary, if the value “PE” is small, the reliability of the comparison result is believed to be high, because the number of partial images not used for comparison is small and the comparing process is done on a large number of partial images.
Receiving the calculation end signal, control unit 108 transmits an instruction signal to start determination as to whether execution of an application is to be permitted or not, and waits until a permission determination end signal is received.
Receiving the instruction signal to start determination of permission from control unit 108, execution permitting unit 1049 performs a process for determining whether execution of the application is to be permitted or not (step T3c).
The process of step T3c for determining whether execution of the application is to be permitted or not will be described with reference to the flowchart of
After the start of the process, first, the ratio represented by variable “PE” is read from calculation memory 1022 (step F02). Then, security rank table 1026 is looked up based on the identification information of the desired application input in advance through input unit 700, and the upper limit value indicated by data 1028 corresponding to application list 1029 with which the identification information of the application is registered is read (step F03).
Execution permitting unit 1049 compares the value indicated by the read variable “PE” with the upper limit value indicated by upper limit data 1028 (step F04). By this comparison, whether the result of the comparing process satisfies the degree of reliability (security level) required for activating the desired application is detected. If it is determined that the condition of “upper limit value>value of variable ‘PE’” is satisfied (YES at step F04), it is determined that use (execution/activation) of the desired application program is permitted, and the result of determination is stored in calculation memory 1022 (step F05). If it is determined that the condition is not satisfied (NO at step F04), it is determined that use (execution/activation) of the desired application program is not permitted (inhibited), and the result of determination is stored in calculation memory 1022 (step F06). After the result of determination is stored in calculation memory 1022, the permission determination end signal is transmitted.
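The decision of steps F02 to F06 can be sketched as a lookup followed by a single comparison. The table contents below are illustrative assumptions only (the source does not give concrete applications or limits); the point shown is that a lower upper-limit value demands a more complete fingerprint image before activation is permitted.

```python
# Sketch of execution permitting unit 1049: look up the desired
# application's upper-limit value in a security rank table and permit
# execution only when the condition "upper limit > PE" holds (step F04).
SECURITY_RANK_TABLE = {          # hypothetical contents of table 1026
    "game": 0.5,                 # low security: a large PE is tolerated
    "net_banking": 0.1,          # high security: most of the image must be eligible
}

def permit_execution(app_id, pe):
    upper_limit = SECURITY_RANK_TABLE[app_id]   # step F03
    return upper_limit > pe      # YES -> permit (F05), NO -> inhibit (F06)
```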
Receiving the permission determination end signal from execution permitting unit 1049, control unit 108 reads the result of processing by execution permitting unit 1049 from calculation memory 1022, and outputs the read result through display 610 or printer 690 (step T4).
Receiving the permission determination end signal, CPU 622 reads the result of determination indicating whether use of the desired application is permitted or inhibited, stored in calculation memory 1022, and if it is determined that the read determination result indicates “permission”, reads the program of the desired application by searching in memory 624 based on the identification information of the desired application input through input unit 700, and starts execution of the read program. If it is determined that the read determination result indicates “non-permission” (inhibition), execution of the program indicated by the identification information of the desired application is not started. In that case, if any other program is being executed, CPU 622 continues execution of said program, and if no other program is being executed and the operation is in a standby state, CPU 622 operates to maintain the standby state.
It is possible for the user to know whether start of execution of the application of which use (execution) is desired is permitted or inhibited, by confirming the result output at step T4. Therefore, if the start of execution of the desired application has been instructed but the execution of the application does not start, that is, execution of another program continues in the computer of
Though the application as the application processing unit is provided as software (a program) here, it may be implemented as hardware formed of circuitry or the like. In that case, activation means application, to the circuitry, of a voltage (current) signal of a prescribed level for driving. Further, inhibition of activation means, for example, cutting off the supply voltage to the circuitry, or not supplying any voltage (current) signal for driving.
In the present embodiment, some or all of image correcting unit 104, partial image feature value calculating unit 1045, image element determining unit 1047, ratio calculating unit 1048, execution permitting unit 1049, maximum matching score position searching unit 105, similarity score calculating unit 106, comparison/determination unit 107 and control unit 108 may be implemented using a ROM such as memory 624 storing the process procedures as a program and an operating unit such as CPU 622 for executing the program.
Specific examples of the process in accordance with the embodiment and effects attained thereby will be described.
Here, it is assumed that data shown in
Assume that the sample image “B” of
For the image of
Here, security rank table 1026 of
As described above, in the present embodiment, in security rank table 1026, for each application program, data 1028 representing the upper limit of the ratio of image elements not eligible for comparison occupying the sample (input) image as the object of comparison is stored in advance in accordance with the required level of security for the program. Therefore, when execution of an application program requiring low level of security is desired, the upper limit indicated by corresponding data 1028 is low, possibility of repeating the comparing process shown in
The process functions for image comparison are realized by a program. According to Embodiment 2, the program is stored in a computer readable recording medium.
As for the recording medium, in Embodiment 2, the program medium may be a memory necessary for the processing by the computer, such as memory 624, or, alternatively, it may be a recording medium detachably mounted on an external storage device of the computer and the program recorded thereon may be read through the external storage device. Examples of such an external storage device are a magnetic tape device (not shown), an FD drive 630 and a CD-ROM drive 640, and examples of such a recording medium are a magnetic tape (not shown), an FD 632 and a CD-ROM 642. In any case, the program recorded on each recording medium may be accessed and executed by CPU 622, or the program may be once read from the recording medium and loaded to a prescribed storage area shown in
Here, the recording medium mentioned above is detachable from the computer body. Alternatively, a medium fixedly carrying the program may be used as the recording medium. Specific examples include tapes such as magnetic tapes and cassette tapes; discs including magnetic discs such as FD 632 and fixed disk 626 and optical discs such as CD-ROM 642/MO (Magneto-Optical Disc)/MD (Mini Disc)/DVD (Digital Versatile Disc); cards such as an IC card (including a memory card)/optical card; and semiconductor memories such as a mask ROM, EPROM (Erasable and Programmable ROM), EEPROM (Electrically Erasable and Programmable ROM) and a flash ROM. The computer shown in
The contents stored in the recording medium are not limited to a program, and may include data.
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
2006-154820(P) | Jun 2006 | JP | national |