METHOD AND DEVICE FOR CALCULATING LINE DISTANCE

Information

  • Patent Application
  • Publication Number
    20180018497
  • Date Filed
    January 05, 2016
  • Date Published
    January 18, 2018
Abstract
The present disclosure discloses a method and device for calculating a line distance. The method includes: obtaining an original image and performing grayscale processing to generate a grayscale image; generating a radical image and a tangential image according to the grayscale image; performing filtering on the grayscale image according to the tangential image to generate a smoothened image and converting the smoothened image into a binary image; dividing the binary image into blocks and determining a radical direction of a center point of each block according to the radical image; traversing pixels in the radical direction of the center point of each block to calculate a number of times pixel values of two adjacent pixels within the each block change between a first pixel value and a second pixel value, and to calculate a coordinate and sub-pixel value of a boundary point corresponding to a changing-point; and generating the line distance according to the number of times of the change, and the coordinate and the sub-pixel value of the boundary point corresponding to the changing-point. The method seeks the boundary points of the fingerprint ridge lines and the fingerprint valley lines and calculates the line distance according to the coordinates and the sub-pixel values of the boundary points, improving accuracy and strengthening the ability to resist noise.
Description

This application claims priority to and benefits of Chinese Patent Application No. 201510080690.0, filed with the State Intellectual Property Office of P.R. China on Feb. 13, 2015, the entire content of which is incorporated herein by reference.


FIELD

The present disclosure relates to the field of image processing technology and, more particularly, to a method and a device for calculating a line distance.


BACKGROUND

With the development of society, people place higher demands on the accuracy, security, and practicality of identity authentication. Traditional identity authentication methods, such as passwords, keys, and identity cards, are easy to forget, leak, lose, or counterfeit, causing inconvenience and security issues in daily life. Identification based on biometric technology can overcome many defects of traditional identity authentication and, thus, has become a hotspot in current security technology research. Among the variety of identity authentication methods based on biological characteristics, fingerprint recognition is the earliest and most widely used. Due to its high stability, uniqueness, ease of collection, and high security, the fingerprint is an ideal biological characteristic for identity authentication, and the market share of fingerprint recognition is increasing year by year. Because the fingerprint image belongs to personal privacy, a fingerprint recognition system generally does not directly store the fingerprint image, but extracts feature information of the fingerprint from the fingerprint image via an algorithm and then performs fingerprint matching and recognition to complete the identity authentication. Therefore, a fingerprint recognition algorithm with high reliability is key to ensuring correct identification of the fingerprint.


Further, the line distance is defined as the distance between a given ridge line and an adjacent valley line. In general, the distance between the center of the ridge line and the center of the valley line is calculated as the line distance. The larger the line distance is, the sparser the ridges are; the smaller the line distance is, the denser the ridges are. The value of the line distance depends on the structure of the fingerprint itself and the image acquisition resolution. Related technology for calculating the line distance can be divided into two categories. The first category estimates the fingerprint line distance based on the entire image. Ideally, the line distance of a fingerprint image is considered to follow a normal distribution. However, in an actual fingerprint database, the line distance within a same fingerprint image can vary by a factor of two and, thus, the line distance cannot be calculated based on the entire image. The second category estimates the local line distance based on image regions, which requires accurately finding a peak point of a spectrum. This may be difficult to achieve in an algorithm, and the line distance obtained may be inaccurate.


For example, in the second category of algorithms in the related art, in a direction perpendicular to the lines in the fingerprint image, the pixel grayscale value exhibits the characteristic of a discrete sinusoidal waveform. As shown in FIG. 1, the distance between two adjacent ridge lines can be expressed as the distance between two adjacent peaks of the sinusoidal waveform. Because the fingerprint image actually collected by the sensor can contain noise, and the noise mainly comes from the sensor itself and from actual conditions, such as the finger having water, oil, or peeling skin, the peak situations in the sinusoidal waveform become more complex. For example, the sinusoidal waveform may not have a single peak and, in fact, the peak point cannot be found accurately. For fingerprint images collected from a same finger pressed with the same force, the line distances obtained via this method at a same position of the finger are quite different. For the fingerprint grayscale image itself, the distribution of the ridge lines and the valley lines along the direction perpendicular to the lines is not an ideal sine wave and does not have a prominent peak value. Therefore, the calculation method for the line distance based on the grayscale can only adapt to a clear and uniform fingerprint image.


SUMMARY

Embodiments of the present disclosure seek to solve at least one of the problems existing in the related art to at least some extent. Therefore, a first objective of the present disclosure is to provide a method for calculating a line distance. The method seeks the boundary points of the fingerprint ridge lines and the fingerprint valley lines and calculates the line distance according to the coordinates and the sub-pixel values of the boundary points. The accuracy is improved, the ability to resist noise is strong, the overall density characteristics of the fingerprint can be reflected more accurately, and the scope of its applications is wider.


The second objective of the present disclosure is to provide a device for calculating the line distance.


In order to achieve the above objectives, the method for calculating the line distance according to some embodiments of the first aspect of the present disclosure includes following steps: obtaining an original image and performing grayscale processing to generate a grayscale image; generating a radical image and a tangential image according to the grayscale image; performing filtering on the grayscale image according to the tangential image to generate a smoothened image and converting the smoothened image into a binary image; dividing the binary image into blocks and determining a radical direction of a center point of each block according to the radical image; traversing pixels in the radical direction of the center point of each block to calculate a number of times pixel values of two adjacent pixels within the each block change between a first pixel value and a second pixel value, and to calculate a coordinate and sub-pixel value of a boundary point corresponding to a changing-point, wherein the first pixel value is a pixel value of a pixel of a ridge line, and the second pixel value is a pixel value of a pixel of a valley line; and generating the line distance according to the number of times the pixel values of two adjacent pixels within the each block change between the first pixel value and the second pixel value, and according to the coordinate and the sub-pixel value of the boundary point corresponding to the changing-point.


The method for calculating the line distance according to the embodiments of the present disclosure seeks the boundary points of the fingerprint ridge lines and the fingerprint valley lines and calculates the line distance according to the coordinates and the sub-pixel values of the boundary points. The accuracy of the line distance may be improved, the true features of the fingerprint can be obtained more closely, and the overall density characteristics of the fingerprint can be reflected more accurately. Moreover, this method's ability to resist noise is strong, and the scope of its applications is wider.


In order to achieve the above objectives, the device for calculating line distance according to some embodiments of the second aspect of the present disclosure includes: a gray processing module configured to obtain an original image and to perform grayscale processing to generate a grayscale image; a generating module configured to generate a radical image and a tangential image according to the grayscale image; a smoothing module configured to perform filtering on the grayscale image according to the tangential image to generate a smoothened image and to convert the smoothened image into a binary image; a block processing module configured to divide the binary image into blocks and to determine a radical direction of a center point of each block according to the radical image; a sub-pixel calculating module configured to traverse pixels in the radical direction of the center point of each block to calculate a number of times pixel values of two adjacent pixels within the each block change between a first pixel value and a second pixel value, and to calculate a coordinate and sub-pixel value of a boundary point corresponding to a changing-point, wherein the first pixel value is a pixel value of a pixel of a ridge line, and the second pixel value is a pixel value of a pixel of a valley line; and a line distance generating module configured to generate the line distance according to the number of times the pixel values of two adjacent pixels within the each block change between the first pixel value and the second pixel value, and according to the coordinate and the sub-pixel value of the boundary point corresponding to the changing-point.


The device for calculating the line distance according to the embodiments of the present disclosure seeks the boundary points of the fingerprint ridge lines and the fingerprint valley lines and calculates the line distance according to the coordinates and the sub-pixel values of the boundary points. The accuracy of the line distance may be improved, the true features of the fingerprint can be obtained more closely, and the overall density characteristics of the fingerprint can be reflected more accurately. Moreover, the device's ability to resist noise is strong, and the scope of its applications is wider.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of the sine distribution characteristics of ridge lines in a partial area in the related art;



FIG. 2 is a flowchart diagram of a method for calculating a line distance according to an embodiment of the present disclosure;



FIG. 3 is a schematic diagram of a boundary position where pixel value changes from 0 to 1 and the number of times of changing according to an embodiment of the present disclosure;



FIG. 4 is a flowchart diagram of calculating a sub-pixel value of a boundary point according to an embodiment of the present disclosure;



FIGS. 5A-5C are schematic diagrams of calculating sub-pixel values along a horizontal direction according to an embodiment of the present disclosure;



FIGS. 6A-6C are schematic diagrams of calculating sub-pixel values along a non-vertical or non-horizontal direction according to an embodiment of the present disclosure;



FIG. 7A is a schematic diagram of a grayscale image according to an embodiment of the present disclosure;



FIG. 7B is a schematic diagram of a tangential image according to an embodiment of the present disclosure;



FIG. 7C is a schematic diagram of a smoothened image according to an embodiment of the present disclosure;



FIG. 7D is a schematic diagram of a binary image according to an embodiment of the present disclosure;



FIG. 7E is a schematic diagram of results after Gabor filtering of the final line distance data of each point according to an embodiment of the present disclosure; and



FIG. 8 is a schematic diagram of a device for calculating the line distance according to an embodiment of the present disclosure.





DETAILED DESCRIPTION

Exemplary embodiments will be described in detail herein, and examples thereof are illustrated in accompanying drawings. The same or similar elements and the elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein with reference to drawings are explanatory, illustrative, and used to generally understand the present disclosure. The embodiments shall not be construed to limit the present disclosure.


In order to solve the problems of existing line distance algorithms, such as inaccurately calculated line distance and a narrow application range, the present disclosure provides an improved method and device for calculating the line distance. In the following, the method and the device for calculating the line distance according to the present disclosure are described in detail with reference to the drawings.



FIG. 2 is a flowchart of the method for calculating the line distance according to an embodiment of the present disclosure. As shown in FIG. 2, the method for calculating the line distance according to the embodiments of the present disclosure includes the following steps.


S1, obtaining an original image and performing grayscale processing to generate a grayscale image.


Specifically, the grayscale processing is performed on the original image to generate the grayscale image A(i, j).


S2, generating a radical image and a tangential image according to the grayscale image.


Specifically, for the grayscale image A(i, j), the radical image O1(i, j) and the tangential image O2(i, j) can be obtained via a gradient method.
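
The disclosure does not spell out the gradient method itself. The sketch below shows one common way to obtain a per-pixel radical (gradient-dominant) angle and a perpendicular tangential angle from local averages of gradient products; the function name `orientation_fields`, the window size `win`, and the use of NumPy/SciPy are illustrative assumptions, not part of the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def orientation_fields(gray, win=17):
    """Estimate radical (normal) and tangential (ridge) angle images, in radians, each in [0, pi)."""
    gray = gray.astype(np.float64)
    gy, gx = np.gradient(gray)                # gradients along rows (y) and columns (x)

    # Local averages of gradient products over a win x win neighbourhood.
    gxx = uniform_filter(gx * gx, size=win)
    gyy = uniform_filter(gy * gy, size=win)
    gxy = uniform_filter(gx * gy, size=win)

    # Dominant gradient direction of each pixel, taken here as the radical direction.
    radical = np.mod(0.5 * np.arctan2(2.0 * gxy, gxx - gyy), np.pi)

    # The tangential (ridge) direction is perpendicular to the gradient direction.
    tangential = np.mod(radical + np.pi / 2.0, np.pi)
    return radical, tangential
```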


S3, performing filtering on the grayscale image according to the tangential image to generate a smoothened image and converting the smoothened image into a binary image.


In an embodiment of the present disclosure, performing filtering on the grayscale image according to the tangential image to generate the smoothened image particularly includes: performing 1*7 mean or average filtering on the grayscale image according to the tangential image to generate the smoothened image.


Specifically, 1*7 mean filtering is performed on the grayscale image A(i, j) using the tangential image O2(i, j) to remove burrs, and the smoothened image B(i, j) is obtained. The smoothened image B(i, j) is then converted into the binary image D(i, j) using differential binarization processing.
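
As a rough illustration of step S3, the sketch below averages seven samples taken along each pixel's tangential direction and then binarizes against a local mean. The patent does not define its differential binarization, so the local-mean threshold here is a stand-in assumption, as are the sampling convention and the names `smooth_and_binarize` and `block`.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_and_binarize(gray, tangential, block=17):
    """1*7 mean filtering along the tangential direction, then a simple local-mean binarization."""
    gray = gray.astype(np.float64)
    h, w = gray.shape
    smooth = np.zeros((h, w), dtype=np.float64)

    cos_t, sin_t = np.cos(tangential), np.sin(tangential)
    taps = np.arange(-3, 4)                     # seven samples centred on each pixel
    for k in taps:
        # k-th neighbour along the local tangential direction (nearest-neighbour rounding).
        ii = np.clip(np.round(np.arange(h)[:, None] + k * sin_t).astype(int), 0, h - 1)
        jj = np.clip(np.round(np.arange(w)[None, :] + k * cos_t).astype(int), 0, w - 1)
        smooth += gray[ii, jj]
    smooth /= len(taps)

    # Ridges are dark, valleys are bright: ridge pixels -> 0, valley pixels -> 1.
    local_mean = uniform_filter(smooth, size=block)
    binary = (smooth >= local_mean).astype(np.uint8)
    return smooth, binary
```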


S4, dividing the binary image into blocks and determining the radical direction of the center point of each block according to the radical image.


Specifically, the binary image D(i, j) is divided into blocks having a size of N*N (for example, N is 33), in which the blocks slide point by point and, therefore, adjacent blocks overlap.


Further, the radical direction of the center point of each block is read from the radical image O1(i, j).
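
A minimal sketch of step S4, assuming the binary image and the radical angle image from the previous steps are 2-D arrays: each pixel is treated as a block center, the N*N neighborhood is cut out (so neighboring blocks overlap), and the radical angle of the center is read from the radical image. The helper name and the boundary clipping are illustrative assumptions.

```python
def block_view(binary, radical, i, j, N=33):
    """Return the N*N block centred at (i, j) and the radical angle of its centre point."""
    half = N // 2
    h, w = binary.shape
    top, left = max(i - half, 0), max(j - half, 0)
    bottom, right = min(i + half + 1, h), min(j + half + 1, w)
    block = binary[top:bottom, left:right]      # blocks slide point by point, hence they overlap
    theta = radical[i, j]                       # radical direction of the block centre
    return block, theta
```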


S5, in each block, traversing pixels in the radical direction of the center point of the block to calculate a total number of times the pixel values of two adjacent pixels within the block change between a first pixel value and a second pixel value, and to calculate the coordinates and sub-pixel value of a boundary point corresponding to each changing-point. The first pixel value is the pixel value of a pixel of a ridge line, and the second pixel value is the pixel value of a pixel of a valley line. That is, a pixel with the first pixel value is a pixel of a ridge line, and a pixel with the second pixel value is a pixel of a valley line.


In an embodiment of the present disclosure, the first pixel value is 0 and the second pixel value is 1. These pixel values are used as an example in the following description.


Specifically, for each block of the binary image D(i, j), the pixels in the radical direction of the center point of the block are traversed within the block, the total numbers of times the pixel values of two adjacent pixels change from 0 to 1 and from 1 to 0 are calculated, and the coordinates of the pixels at the changing-points (i.e., the coordinates of the boundary points) are also calculated. As shown in FIG. 3, the striped areas represent ridge lines, where the pixel value is 0; the blank areas are valley lines, where the pixel value is 1; and the locations pointed to by the arrows are the locations where 0 changes to 1, i.e., the positions of the boundary points. As shown in FIG. 3, the total number of times 0 changes to 1 is 1. When the pixel values of two adjacent pixels change from 0 to 1, the coordinate of the boundary point can be recorded as the coordinate of the pixel with the pixel value of 0 of the two adjacent pixels, or as the coordinate of the pixel with the pixel value of 1 of the two adjacent pixels.
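
The sketch below illustrates the traversal and counting of step S5 under the assumption that the radical direction is stepped with sin θ/cos θ increments and that the boundary coordinate is recorded at the pixel after each change; the patent allows recording either of the two adjacent pixels, so that choice, like the function name, is an assumption.

```python
import numpy as np

def count_transitions(binary, i, j, theta, N=33):
    """Walk through the block centred at (i, j) along the radical direction theta,
    count 1->0 and 0->1 changes, and record the boundary-point coordinates."""
    half = N // 2
    h, w = binary.shape
    falls, rises = [], []                        # 1->0 (num1) and 0->1 (num2) boundary points
    di, dj = np.sin(theta), np.cos(theta)        # unit step along the radical direction

    prev = None
    for t in range(-half, half + 1):
        r = int(round(i + t * di))
        c = int(round(j + t * dj))
        if not (0 <= r < h and 0 <= c < w):
            continue
        cur = int(binary[r, c])
        if prev is not None and cur != prev:
            if prev == 1 and cur == 0:           # valley -> ridge boundary
                falls.append((r, c))
            else:                                # ridge -> valley boundary
                rises.append((r, c))
        prev = cur
    return falls, rises, len(falls), len(rises)  # num1 = len(falls), num2 = len(rises)
```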


In an embodiment of the present disclosure, as shown in FIG. 4, the sub-pixel value of the boundary point corresponding to the changing-point is generated via the following steps.


S51, obtaining the pixel values of two pixels adjacent to the boundary point along a pre-set direction.


S52, generating the sub-pixel value of the boundary point according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point. Specifically, while traversing, the sub-pixel value of the boundary point is calculated at the same time, and the calculation can be divided into two situations, one is to calculate the sub-pixel value of the boundary point along a tilted direction, and the other is to calculate along a vertical or horizontal direction. The two situations will be described respectively in the following descriptions.


In an embodiment of the present disclosure, when the radical angle corresponding to the radical direction of the center point of the block is equal to 0 degrees, the pre-set direction is a vertical direction; and when the radical angle corresponding to the radical direction of the center point of the block is equal to 90 degrees, the pre-set direction is a horizontal direction. When the radical angle corresponding to the radical direction of the center point of the block is equal to 0 degrees or 90 degrees, if both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, the sub-pixel value of the boundary point is 0; if only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, the sub-pixel value of the boundary point is 0.5; if both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, the sub-pixel value of the boundary point is 1.


Specifically, FIGS. 5A to 5C are schematic diagrams of calculating the sub-pixel value along the horizontal direction (e.g., the Y-axis direction shown in FIG. 5A). In the present disclosure, the upper left corner of the fingerprint image (e.g., the grayscale image shown in FIG. 7A) is used as the origin, and the vertical and horizontal boundaries of the fingerprint image are respectively used as the x-axis and the y-axis to establish the coordinate system. The radical angle corresponding to the radical direction of the center point of the block is the angle between the radical direction of the center point of the block and the x-axis. In the figures, the block filled with vertical stripes represents the boundary point, the block filled with white represents a point on a valley line, and the block filled with black represents a point on a ridge line. When calculating the sub-pixel value of the boundary point corresponding to the changing-point, the pixel values of the two pixels on both sides of the boundary point are used for determination. If both of the pixel values of the two pixels on both sides of the boundary point are equal to the pixel value of the boundary point, then δ (the sub-pixel value of the boundary point) takes a value of 0; if one of the pixel values of the two pixels on both sides of the boundary point is equal to the pixel value of the boundary point, δ takes a value of 1/2; if both of the pixel values of the two pixels on both sides of the boundary point are different from the pixel value of the boundary point, δ takes a value of 1. For example, the two pixels adjacent to the boundary point are the blocks both filled with white in FIG. 5A, the blocks respectively filled with black and white in FIG. 5B, and the blocks both filled with black in FIG. 5C.


In another embodiment of the present disclosure, when the radical angle corresponding to the radical direction of the center point of the block is not equal to 0 degrees or 90 degrees, the pre-set direction is a direction other than the vertical direction and the horizontal direction (i.e., a tilted direction). If both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, the sub-pixel value of the boundary point is 0; if only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, the sub-pixel value of the boundary point is 0.25; if both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, the sub-pixel value of the boundary point is 0.5.


Specifically, FIGS. 6A to 6C are schematic diagrams of calculating the sub-pixel value along a non-vertical and non-horizontal direction (i.e., an oblique or tilted direction, as shown in FIG. 6A). As shown in the figures, the block filled with vertical stripes represents the boundary point, the block filled with white represents a point on a valley line, and the block filled with black represents a point on a ridge line. When calculating the sub-pixel value of the boundary point, the pixel values of the two pixels on both sides of the boundary point are used for determination. If both of the pixel values of the two pixels on both sides of the boundary point are equal to the pixel value of the boundary point, δ takes a value of 0; if one of the pixel values of the two pixels on both sides of the boundary point is equal to the pixel value of the boundary point, δ takes a value of 1/4; if both of the pixel values of the two pixels on both sides of the boundary point are different from the pixel value of the boundary point, δ takes a value of 1/2. For example, the two pixels adjacent to the boundary point are the blocks both filled with white in FIG. 6A, the blocks respectively filled with black and white in FIG. 6B, and the blocks both filled with vertical stripes in FIG. 6C.
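
The two sub-pixel rules of FIGS. 5A-5C and 6A-6C reduce to a single comparison of the boundary pixel with its two neighbors along the pre-set direction. A minimal sketch, with the function name and argument layout assumed for illustration:

```python
def sub_pixel_value(boundary_val, neighbour_a, neighbour_b, tilted):
    """Return delta for a boundary point.

    tilted=False: pre-set direction is vertical/horizontal -> delta in {0, 0.5, 1}.
    tilted=True:  pre-set direction is oblique             -> delta in {0, 0.25, 0.5}.
    """
    same = (neighbour_a == boundary_val) + (neighbour_b == boundary_val)
    if same == 2:                         # both neighbours equal the boundary value
        return 0.0
    if same == 1:                         # exactly one neighbour equals the boundary value
        return 0.25 if tilted else 0.5
    return 0.5 if tilted else 1.0         # neither neighbour equals the boundary value
```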


S6, generating the line distance according to the total number of times the pixel values of two adjacent pixels within the block change between the first pixel value and the second pixel value, and according to the coordinates and the sub-pixel value of the boundary point corresponding to the changing-point, where the line distance is the line distance of the center point of each block.


In an embodiment of the present disclosure, the line distance of the center point of a block is generated via the following formulas:










$$
D_1(i,j)=
\begin{cases}
\dfrac{X_{num1}-X_1-\sum_{i=1}^{num1}\delta_{X_i}}{(num1-1)\times\sin\theta}, & \dfrac{\pi}{4}\le\theta\le\dfrac{3\pi}{4}\\[2ex]
\dfrac{X_{num1}-X_1-\sum_{i=1}^{num1}\delta_{X_i}}{(num1-1)\times\cos\theta}, & \text{else}
\end{cases}
\tag{1}
$$

$$
D_2(i,j)=
\begin{cases}
\dfrac{Y_{num2}-Y_1-\sum_{i=1}^{num2}\delta_{Y_i}}{(num2-1)\times\sin\theta}, & \dfrac{\pi}{4}\le\theta\le\dfrac{3\pi}{4}\\[2ex]
\dfrac{Y_{num2}-Y_1-\sum_{i=1}^{num2}\delta_{Y_i}}{(num2-1)\times\cos\theta}, & \text{else}
\end{cases}
\tag{2}
$$

$$
D(i,j)=\frac{D_1(i,j)+D_2(i,j)}{2}
\tag{3}
$$

where num1 and num2 are the numbers of times the pixel values change between the first pixel value and the second pixel value: num1 is the number of times the pixel values of two adjacent pixels within the block change from the second pixel value to the first pixel value, and num2 is the number of times the pixel values of two adjacent pixels within the block change from the first pixel value to the second pixel value. X1 and Xnum1 are respectively the horizontal coordinate of the boundary point corresponding to the point where the pixel values of two adjacent pixels change from the second pixel value to the first pixel value along the radical direction of the center point of the block for the first time, and the horizontal coordinate of the boundary point corresponding to the point where such a change occurs for the num1th time. Y1 and Ynum2 are respectively the horizontal coordinate of the boundary point corresponding to the point where the pixel values of two adjacent pixels change from the first pixel value to the second pixel value along the radical direction of the center point of the block for the first time, and the horizontal coordinate of the boundary point corresponding to the point where such a change occurs for the num2th time. θ is the radical angle of the center point of the block, and θ is in a range of 0 to π. δXi is the sub-pixel value of the boundary point corresponding to the point where the pixel values of two adjacent pixels change from the second pixel value to the first pixel value along the radical direction of the center point of the block for the ith time, and δYi is the sub-pixel value of the boundary point corresponding to the point where the pixel values of two adjacent pixels change from the first pixel value to the second pixel value along the radical direction of the center point of the block for the ith time. D1(i, j) and D2(i, j) are the distances respectively calculated from the changes from the second pixel value to the first pixel value and from the first pixel value to the second pixel value, and D(i, j) is the line distance of the center point of the block.


In this way, the line distance of the center point of each block can be calculated.
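
A minimal sketch of formulas (1)-(3) as reconstructed above, i.e., reading the sin θ/cos θ factor as part of the denominator; that grouping, the absolute value used to keep the result positive, and the function name are assumptions. The sketch returns None when D1 or D2 cannot be formed, which is the situation handled by the reverse-direction traversal described below.

```python
import numpy as np

def line_distance(theta, x_coords, x_deltas, y_coords, y_deltas):
    """x_coords/x_deltas: coordinates and sub-pixel values of the num1 boundary points
    where the value changes from the second to the first pixel value;
    y_coords/y_deltas: the same for the num2 points changing the other way.
    theta: radical angle of the block centre, in [0, pi)."""
    def partial_distance(coords, deltas):
        num = len(coords)
        if num < 2:                              # need at least two boundary points
            return None
        span = coords[-1] - coords[0] - sum(deltas)
        # Case split from formulas (1) and (2).
        trig = np.sin(theta) if np.pi / 4 <= theta <= 3 * np.pi / 4 else np.cos(theta)
        if abs(trig) < 1e-6:
            return None
        return abs(span / ((num - 1) * trig))    # abs() keeps the sketch's distance positive

    d1 = partial_distance(x_coords, x_deltas)    # formula (1)
    d2 = partial_distance(y_coords, y_deltas)    # formula (2)
    if d1 is None or d2 is None:
        return None                              # patent: traverse the reverse radical direction
    return 0.5 * (d1 + d2)                       # formula (3)
```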


In an embodiment of the present disclosure, the method for calculating the line distance also includes: obtaining the number of the boundary points of each block according to the number of times the pixel values of two adjacent pixels within the block change between the first pixel value and the second pixel value; and, for any block having a number of boundary points less than a predetermined number, further traversing the pixels within the block in a reverse direction of the radical direction of the center point of the block.


Specifically, it should be noted that the conditions that D1(i, j) and D2(i, j) need to satisfy are: (1) for each of D1 and D2, each block must have 2 or more value-changing boundary points where the pixel value changes from 0 to 1 or from 1 to 0. That is, to complete a calculation of the line distance, two ridge lines and one valley line are needed, or two valley lines and one ridge line are needed. If the number of ridge lines and valley lines is not enough, then D1 and D2 do not exist; (2) if any one of D1 and D2 does not exist, the other one of D1 and D2 needs to find at least two changing-points in the reverse direction of the radical direction.


The method for calculating the line distance according to the embodiments of the present disclosure seeks the boundary points of the fingerprint ridge lines and the fingerprint valley lines and calculates the line distance according to the coordinates and the sub-pixel values of the boundary points. The accuracy of the line distance may be improved, the true features of the fingerprint can be obtained more closely, and the overall density characteristics of the fingerprint can be reflected more accurately. Moreover, the method's ability to resist noise is strong, and the scope of its applications is wider.


In an embodiment of the present disclosure, after S6, the method for calculating the line distance also includes: performing 5*5 local region mean filtering on the line distance.


Specifically, the 5*5 local region mean filtering is performed on the calculated line distance to smoothen the calculated line distance to obtain a final line distance of each point.
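
A one-line sketch of the 5*5 local-region mean filtering, assuming the per-point line distances are stored in a 2-D array and using SciPy's uniform_filter as the mean filter:

```python
from scipy.ndimage import uniform_filter

def smooth_line_distance(dist_map):
    """5*5 local-region mean filtering of the per-point line distances (after step S6)."""
    return uniform_filter(dist_map.astype(float), size=5)
```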


In addition, in order to make the calculation result of each step of the method for calculating the line distance of the embodiment of the present disclosure more intuitive, effect diagrams of the steps of the method are given. FIG. 7A is a schematic diagram of a grayscale image, FIG. 7B is a schematic diagram of a tangential image, FIG. 7C is a schematic diagram of a smoothened image, FIG. 7D is a schematic diagram of a binary image, and FIG. 7E is a schematic diagram of the results after Gabor filtering on the final line distance data of each point.


The method for calculating the line distance according to embodiments of the present disclosure avoids the relatively complex task of obtaining the extreme values of the sinusoidal curve of the fingerprint. For example, a location that theoretically has one maximum point may in fact have two or more extreme points of an uncertain number, so that the line distance of the fingerprint cannot be calculated accurately. According to the disclosed method for calculating the line distance, the boundary point of a ridge line and a valley line of the fingerprint is determined; it is certain that there is only one such point, and there is no uncertain number of boundary points. Thus, there is high redundancy for images with noise, the requirements for the images are not stringent, and the scope of applications is expanded. The method has high engineering application value and can provide reliable parameters for subsequent image filtering, segmentation, ridge tracking, and matching.


In accordance with the line distance calculation method provided by the above embodiments, an embodiment of the present disclosure also provides a calculating device for line distance. Because the line distance calculation device provided by the embodiment of the present disclosure corresponds to the method for calculating the line distance provided by the above embodiments, the aforementioned line distance calculation method is also adapted to the device for calculating the line distance provided in the present embodiment, and the calculation method is not described in detail in the present embodiment. FIG. 8 is a schematic diagram of the device for calculating line distance according to an embodiment of the present disclosure. As shown in FIG. 8, the calculating device according to an embodiment of the present disclosure includes: grayscale processing module 100, generating module 200, smoothing module 300, block processing module 400, sub-pixel calculating module 500, and line distance generating module 600.


The grayscale processing module 100 is configured to obtain an original image and to perform grayscale processing to generate a grayscale image.


The generating module 200 is configured to generate a radical image and a tangential image according to the grayscale image.


The smoothing module 300 is configured to perform filtering on the grayscale image according to the tangential image to generate a smoothened image and to convert the smoothened image into a binary image.


In an embodiment of the present disclosure, the smoothing module 300 is configured to perform 1*7 mean filtering on the grayscale image according to the tangential image to generate the smoothened image and to convert the smoothened image into the binary image.


The block processing module 400 is configured to divide the binary image into blocks and to determine a radical direction of a center point of each block according to the radical image.


The sub-pixel calculating module 500 is configured to traverse pixels in the radical direction of the center point of each block within each block to calculate a total number of times the pixel values of two adjacent pixels within the block change between a first pixel value and a second pixel value, and to calculate the coordinates and sub-pixel value of a boundary point corresponding to a changing-point. The first pixel value is the pixel value of a pixel where a ridge line is located, and the second pixel value is the pixel value of a pixel where a valley line is located.


In an embodiment of the present disclosure, the first pixel value is 0 and the second pixel value is 1.


In an embodiment of the present disclosure, the sub-pixel calculating module 500 is configured to obtain the number of the boundary points of each block according to the number of times the pixel values of two adjacent pixels within the each block changing between the first pixel value and the second pixel value and, for a block with the number of the boundary points less than a pre-set number, to traverse the pixels within the block in a reverse direction of the radical direction of the center point of the block.


In an embodiment of the present disclosure, the sub-pixel calculating module 500 generates the sub-pixel value of the boundary point as follows. Specifically, the pixel values of two pixels adjacent to the boundary point along a pre-set direction are obtained, and the sub-pixel value of the boundary point is calculated according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point.


In an embodiment of the present disclosure, when a radical angle corresponding to the radical direction of the center point of the block is equal to 0 degrees, the sub-pixel calculating module 500 determines the pre-set direction as a vertical direction; when the radical angle corresponding to the radical direction of the center point of the block is equal to 90 degrees, the sub-pixel calculating module 500 determines the pre-set direction as a horizontal direction. Further, when both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, the sub-pixel calculating module 500 is configured to generate 0 as the sub-pixel value of the boundary point; when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, the sub-pixel calculating module 500 is configured to generate 0.5 as the sub-pixel value of the boundary point; and when both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, the sub-pixel calculating module 500 is configured to generate 1 as the sub-pixel value of the boundary point.


In another embodiment of the present disclosure, when the radical angle corresponding to the radical direction of the center point of the block is not equal to 0 degrees or 90 degrees, the sub-pixel calculating module 500 determines the pre-set direction as a direction other than the vertical direction and the horizontal direction. When both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, the sub-pixel calculating module 500 is further configured to generate 0 as the sub-pixel value of the boundary point; when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, the sub-pixel calculating module 500 is configured to generate 0.25 as the sub-pixel value of the boundary point; and when both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, the sub-pixel calculating module 500 is configured to generate 0.5 as the sub-pixel value of the boundary point.


The line distance generating module 600 is configured to generate the line distance according to the number of times the pixel values of two adjacent pixels within the block change between the first pixel value and the second pixel value, and according to the coordinates and the sub-pixel value of the boundary point corresponding to the changing-point.


In another embodiment of the present disclosure, the line distance generating module 600 generates the line distance via formulas (1), (2), and (3).


In an embodiment of the present disclosure, the line distance generating module 600 is also configured to perform 5*5 local region mean filtering on the line distance.


The device for calculating the line distance according to an embodiment of the present disclosure seeks the boundary points of the fingerprint ridge lines and the fingerprint valley lines and calculates the line distance according to the coordinates and the sub-pixel values of the boundary points. The accuracy of the line distance may be improved, the true features of the fingerprint can be obtained more closely, and the overall density characteristics of the fingerprint can be reflected more accurately. Moreover, the device's ability to resist noise is strong, and the scope of its applications is wider.


Reference throughout this specification to “an embodiment,” “some embodiments,” “one embodiment”, “another example,” “an example,” “a specific example,” or “some examples,” means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the phrases such as “in some embodiments,” “in one embodiment”, “in an embodiment”, “in another example,” “in an example,” “in a specific example,” or “in some examples,” in various places throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.


It should be understood that each part of the present disclosure may be realized by the hardware, software, firmware or their combination. In the above embodiments, a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system. For example, if it is realized by the hardware, likewise in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), etc.


Those skilled in the art shall understand that all or parts of the steps in the above exemplifying method of the present disclosure may be achieved by commanding the related hardware with programs. The programs may be stored in a computer readable storage medium, and the programs comprise one or a combination of the steps in the method embodiments of the present disclosure when run on a computer.


In addition, each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module. The integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.


The storage medium mentioned above may be a read-only memory, a magnetic disk, a CD, etc. It should be noted that, although the present disclosure has been described with reference to the embodiments, it will be appreciated by those skilled in the art that the disclosure includes other examples that occur to those skilled in the art to execute the disclosure. Therefore, the present disclosure is not limited to the embodiments.

Claims
  • 1. A method for calculating a line distance of a fingerprint, comprising: obtaining an original image and performing grayscale processing to generate a grayscale image;generating a radical image and a tangential image according to the grayscale image;performing filtering on the grayscale image according to the tangential image to generate a smoothened image and converting the smoothened image into a binary image;dividing the binary image into blocks and determining a radical direction of a center point of each block according to the radical image,traversing pixels in the radical direction of the center point of each block to calculate a number of times pixel values of two adjacent pixels within the each block change between a first pixel value and a second pixel value, and to calculate a coordinate and sub-pixel value of a boundary point corresponding to a changing-point, wherein the first pixel value is a pixel value of a pixel of a ridge line, and the second pixel value is a pixel value of a pixel of a valley line; andgenerating the line distance of the fingerprint according to the number of times the pixel values of two adjacent pixels within the each block change between the first pixel value and the second pixel value, and according to the coordinate and the sub-pixel value of the boundary point corresponding to the changing-point.
  • 2. The method according to claim 1, further comprising: obtaining a total number of the boundary points of the each block according to the number of times the pixel values of two adjacent pixels within the each block changing between the first pixel value and the second pixel value; andwhen the total number of the boundary points of the block is less than a pre-set number, traversing pixels in a reverse direction of the radical direction of the center point within the block.
  • 3. The method according to claim 1, wherein the sub-pixel value of the boundary point corresponding to the changing-point is calculated by: obtaining the pixel values of two pixels adjacent to the boundary point along a pre-set direction; andgenerating the sub-pixel value of the boundary point according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point.
  • 4. The method according to claim 3, wherein generating the sub-pixel value of the boundary point according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point further includes: when a radical angle corresponding to the radical direction of the center point of the block is equal to 0 degree, determining the pre-set direction as a vertical direction; andwhen both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, generating 0 as the sub-pixel value of the boundary point;when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, generating 0.5 as the sub-pixel value of the boundary point; orwhen both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, generating 1 as the sub-pixel value of the boundary point.
  • 5. The method according to claim 3, wherein generating the sub-pixel value of the boundary point according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point further includes: when the radical angle corresponding to the radical direction of the center point of the block is equal to 90 degrees, determining the pre-set direction as a horizontal direction; andwhen both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, generating 0 as the sub-pixel value of the boundary point;when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, generating 0.5 as the sub-pixel value of the boundary point; orwhen both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, generating 1 as the sub-pixel value of the boundary point.
  • 6. The method according to claim 3, wherein generating the sub-pixel value of the boundary point according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point further includes: when the radical angle corresponding to the radical direction of the center point of the block is not equal to 0 degree or 90 degrees, determining the pre-set direction as a direction other than the vertical direction and the horizontal direction; andwhen both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, generating 0 as the sub-pixel value of the boundary point;when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, generating 0.25 as the sub-pixel value of the boundary point; orwhen both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, generating 0.5 as the sub-pixel value of the boundary point.
  • 7. The method according to claim 1, wherein the line distance of the center point of one block is generated using:
  • 8. The method according to claim 1, after generating the line distance according to the number of times the pixel values of two adjacent pixels within each block change between the first pixel value and the second pixel value, and according to the coordinate and the sub-pixel value of the boundary point corresponding to the changing-point, further comprising: performing 5*5 local region mean filtering on the line distance.
  • 9. A device for calculating a line distance of a fingerprint, comprising: a gray processing module configured to obtain an original image and to perform grayscale processing to generate a grayscale image;a generating module configured to generate a radical image and a tangential image according to the grayscale image;a smoothing module configured to perform filtering on the grayscale image according to the tangential image to generate a smoothened image and to convert the smoothened image into a binary image;a block processing module configured to divide the binary image into blocks and to determine a radical direction of a center point of each block according to the radical image;a sub-pixel calculating module configured to traverse pixels in the radical direction of the center point of each block to calculate a number of times pixel values of two adjacent pixels within the each block change between a first pixel value and a second pixel value, and to calculate a coordinate and sub-pixel value of a boundary point corresponding to a changing point, wherein the first pixel value is a pixel value of a pixel of a ridge line, and the second pixel value is a pixel value of a pixel of a valley line; anda line distance generating module configured to generate the line distance according to the number of times the pixel values of two adjacent pixels within the each block change between the first pixel value and the second pixel value, and according to the coordinate and the sub-pixel value of the boundary point corresponding to the changing-point.
  • 10. The device according to claim 9, wherein the sub-pixel calculating module is also configured to obtain a total number of the boundary points of the each block according to the number of times the pixel values of two adjacent pixels within the each block changing between the first pixel value and the second pixel value and, when the total number of the boundary points of the block is less than a pre-set number, to traverse pixels in a reverse direction of the radical direction of the center point within the block.
  • 11. The device according to claim 9, wherein the sub-pixel calculating module is also configured to obtain the pixel values of two pixels adjacent to the boundary point along a pre-set direction; and to generate the sub-pixel value of the boundary point according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point.
  • 12. The device according to claim 11, wherein: when a radical angle corresponding to the radical direction of the center point of the block is equal to 0 degree, the sub-pixel calculating module is configured to determine the pre-set direction as a vertical direction; andwhen both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, the sub-pixel calculating module generates 0 as the sub-pixel value of the boundary point;when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, the sub-pixel calculating module generates 0.5 as the sub-pixel value of the boundary point;when both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, the sub-pixel calculating module generates 1 as the sub-pixel value of the boundary point.
  • 13. The device according to claim 11, wherein: when the radical angle corresponding to the radical direction of the center point of the block is equal to 90 degrees, the sub-pixel calculating module is configured to determine the pre-set direction as a horizontal direction; andwhen both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, the sub-pixel calculating module further generates 0 as the sub-pixel value of the boundary point;when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, the sub-pixel calculating module generates 0.5 as the sub-pixel value of the boundary point;when both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, the sub-pixel calculating module generates 1 as the sub-pixel value of the boundary point.
  • 14. The device according to claim 11, wherein: when the radical angle corresponding to the radical direction of the center point of the block is not equal to 0 degree or 90 degrees, the sub-pixel calculating module is configured to determine the pre-set direction as a direction other than the vertical direction and the horizontal direction; andwhen both of the pixel values of the two pixels adjacent to the boundary point are the same as the pixel value of the boundary point, the sub-pixel calculating module generates 0 as the sub-pixel value of the boundary point;when only one of the pixel values of the two pixels adjacent to the boundary point is the same as the pixel value of the boundary point, the sub-pixel calculating module generates 0.25 as the sub-pixel value of the boundary point;when both of the pixel values of the two pixels adjacent to the boundary point are different from the pixel value of the boundary point, the sub-pixel calculating module generates 0.5 as the sub-pixel value of the boundary point.
  • 15. The device according to claim 9, wherein the line distance generating module is also configured to generate the line distance of the center point of one block using:
  • 16. The device according to claim 9, wherein the line distance generating module is configured to perform 5*5 local region mean filtering on the line distance.
  • 17. (canceled)
  • 18. (canceled)
  • 19. A non-transitory computer-readable medium having computer program for, when being executed by a processor, performing a method for calculating a line distance of a fingerprint, the method comprising: obtaining an original image and performing grayscale processing to generate a grayscale image;generating a radical image and a tangential image according to the grayscale image;performing filtering on the grayscale image according to the tangential image to generate a smoothened image and converting the smoothened image into a binary image;dividing the binary image into blocks and determining a radical direction of a center point of each block according to the radical image;traversing pixels in the radical direction of the center point of each block to calculate a number of times pixel values of two adjacent pixels within the each block change between a first pixel value and a second pixel value, and to calculate a coordinate and sub-pixel value of a boundary point corresponding to a changing-point, wherein the first pixel value is a pixel value of a pixel of a ridge line, and the second pixel value is a pixel value of a pixel of a valley line; andgenerating the line distance according to the number of times the pixel values of two adjacent pixels within the each block change between the first pixel value and the second pixel value, and according to the coordinate and the sub-pixel value of the boundary point corresponding to the changing-point.
  • 20. The non-transitory computer-readable medium according to claim 19, the method further comprising: obtaining a total number of the boundary points of the each block according to the number of times the pixel values of two adjacent pixels within the each block changing between the first pixel value and the second pixel value; andwhen the total number of the boundary points of the block is less than a pre-set number, traversing pixels in a reverse direction of the radical direction of the center point within the block.
  • 21. The non-transitory computer-readable medium according to claim 19, wherein the sub-pixel value of the boundary point corresponding to the changing-point is calculated by: obtaining the pixel values of two pixels adjacent to the boundary point along a pre-set direction; andgenerating the sub-pixel value of the boundary point according to the pixel values of the two pixels adjacent to the boundary point and the pixel value of the boundary point.
Priority Claims (1)
Number Date Country Kind
201510080690.0 Feb 2015 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2016/070192 1/5/2016 WO 00