The present invention relates to a registration apparatus, a checking apparatus, a data structure, and a storage medium and is suited for, for example, application to biometrics authentication.
Authentication technologies employing a living body as an authentication target are coming into wide use. And, incorporating biometric authentication into a portable communication device, such as a cellular phone, enables authentication processing to be easily performed on a communication party anywhere through the portable communication device, so it is important to incorporate a biometric authentication apparatus into a portable communication device.
Traditionally, one example of biometric authentication apparatuses is a vein authentication apparatus employing a vein of a finger as a target. The vein authentication apparatus generates pattern information indicating a vein characteristic in a venous image obtained as a result of image pickup inside a finger, and registers it into storage means or checks it against pattern information registered in the storage means.
Meanwhile, when pattern information is registered and when it is checked against registered pattern information, if a misalignment of a venous section with respect to an image pickup unit occurs, an event in which a registrant is falsely determined to be an unregistered person or an event in which an unregistered person is falsely determined to be a registrant occurs, so there is a problem of low authentication accuracy. One proposed solution to this problem utilizes a waveform state of a luminance histogram as pattern information to prevent incorrect determination resulting from a misalignment at image pickup (refer to Patent Document 1, for example).
However, with this technique, if a misalignment occurs at authentication in a state where even part of the venous section captured at registration is outside the image pickup range, the obtained waveform state of the luminance histogram is significantly different from that at registration, so a problem of low authentication accuracy remains.
The present invention is made in consideration of the above respects and is directed to providing a registration apparatus, a checking apparatus, a data structure, and a storage medium that are capable of achieving an improved authentication accuracy.
To solve the above problems, the present invention is a registration apparatus that includes an image acquisition unit configured to acquire a venous image for a vein of a living body, an extraction unit configured to extract a parameter resistant to affine transformation from part of the venous image, and a registration unit configured to register the parameter extracted by the extraction unit in storage means.
And, the present invention is an authentication apparatus that includes an extraction unit configured to extract a parameter resistant to affine transformation from part of a venous image for a vein of a living body, the venous image being input as an authentication target, a reading unit configured to read registration information registered in storage means, and a determination unit configured to determine whether a person who input the venous image is a registrant in accordance with a degree of checking between the parameter and the registration information.
In addition, the present invention is a data structure for use in processing, performed by a computer, of determining whether a person is a registrant. The data structure includes a parameter extracted from part of a venous image for a vein of the registrant, the parameter being resistant to affine transformation. In the processing, the parameter is compared with a corresponding section in a venous image input as an authentication target.
In addition, the present invention is a storage medium that stores data for use in processing, performed by a computer, of determining whether a person is a registrant. The data includes a parameter extracted from part of a venous image for a vein of the registrant, the parameter being resistant to affine transformation. In the processing, the parameter is compared with a corresponding section in a venous image input as an authentication target.
As described above, with the present invention, because a target for extracting a parameter resistant to affine transformation is part of a venous image, the permissible amount of movement of a vein within a venous image can be maintained without an increase in an image pickup range. In this manner, a registration apparatus, a checking apparatus, a data structure, and a storage medium that are capable of achieving an improved authentication accuracy can be accomplished.
An embodiment to which the present invention is applied is described below with reference to the drawings.
The control unit 10 is configured as a computer containing a central processing unit (CPU) controlling the whole of the authentication apparatus 1, a read-only memory (ROM) in which various programs and setting information are stored, and a random-access memory (RAM) serving as a work memory of the CPU.
In response to a user operation, a command COM1 to execute a mode for registering a vein of a user being a registration target (hereinafter, the mode is referred to as a vein registration mode, and the registration target user is referred to as a registrant) or a command COM2 to execute a mode for determining whether a person is a registrant (hereinafter referred to as an authentication mode) is input from the operating unit 11 into the control unit 10.
The control unit 10 determines a mode to be executed on the basis of the command COM1 or COM2, appropriately controls the image pickup unit 12, the memory 13, the interface 14, and the notification unit 15 on the basis of a program corresponding to a result of the determination, and executes the vein registration mode or the authentication mode.
The image pickup unit 12 emits, onto a surface on which a finger is to be placed (hereinafter referred to as a finger placement surface), light that has a wavelength contained in a wavelength range (700 [nm] to 900 [nm]) having a characteristic of being specifically absorbed in both deoxygenated hemoglobin and oxygenated hemoglobin (hereinafter, this light is referred to as near-infrared light).
And, the image pickup unit 12 acquires an image in which a vein within a living-body portion placed on a finger placement surface is projected (hereinafter referred to as a venous image) as data on the image (hereinafter referred to as venous image data) and outputs the venous image data to the control unit 10.
The memory 13 can be a flash memory, for example, and stores or reads data specified by the control unit 10.
The interface 14 exchanges various kinds of data with an external device connected through a predetermined transmission line.
The notification unit 15 includes a display unit 15a and an audio output unit 15b. The display unit 15a displays text and graphics based on display data supplied from the control unit 10. On the other hand, the audio output unit 15b outputs, from a speaker, audio based on audio data supplied from the control unit 10.
Next, the vein registration mode is described. When determining the vein registration mode as the mode to be executed, the control unit 10 notifies that a finger should be placed on the finger placement surface through the notification unit 15, and after that, as illustrated in
In this case, the driving unit 21 acquires venous image data by driving the image pickup unit 12. That is, the driving unit 21 emits near-infrared light on the finger placement surface by driving a light source of the image pickup unit 12. And, the driving unit 21 adjusts a lens position of an optical lens of the image pickup unit 12 such that it is focused on a subject. In addition, the driving unit 21 adjusts an f-number of a stop of the image pickup unit 12 on the basis of a predetermined exposure value (EV) and adjusts a shutter speed (exposure time) with respect to an image pickup element.
The image processing unit 22 extracts, as a feature of a venous image, a parameter resistant to affine transformation from venous image data supplied from the image pickup unit 12 as a result of image pickup in the image pickup unit 12. The parameter resistant to affine transformation is a parameter that remains invariant even if the position changes, as long as the luminance state in the image is invariant. Hereinafter, this parameter is referred to as a feature as appropriate.
The registration unit 23 generates data for identifying a registrant (hereinafter referred to as registration data) on the basis of a feature extracted by the image processing unit 22 and registers it by storing it into the memory 13.
In such a way, the control unit 10 can execute the vein registration mode.
Next, the authentication mode is described. When determining the authentication mode as the mode to be executed, the control unit 10 notifies that a finger should be placed on the finger placement surface through the notification unit 15, and after that, as illustrated in
In this case, the driving unit 21 drives the image pickup unit 12, and the image processing unit 22 extracts a feature on the basis of venous image data supplied from the image pickup unit 12.
The reading unit 31 acquires registration data by reading it from the memory 13 and supplies it to the authentication unit 32.
The authentication unit 32 checks the feature of the registration data supplied from the reading unit 31 against the feature extracted by the image processing unit 22 and determines whether a provider of the venous image data can be authenticated as a registrant in accordance with the degree of the checking.
When determining that the provider cannot be authenticated as a registrant, the authentication unit 32 visually and aurally notifies that the provider cannot be authenticated through the display unit 15a and the audio output unit 15b.
On the other hand, when determining that the provider can be authenticated as a registrant, the authentication unit 32 sends data for starting execution of predetermined processing to a device connected to the interface 14. The device performs, as the predetermined processing, processing to be executed for successful authentication, for example, closing a door in a fixed period of time or clearing an operational mode of a restriction target.
In such a way, the control unit 10 can execute the authentication mode.
Next, the image processing unit 22 is specifically described. The image processing unit 22 has a configuration including a sharpening unit 41, a reference-point detecting unit 42, and a feature extracting unit 43, as illustrated in
The sharpening unit 41 performs sharpening processing called LoG filtering on venous image data obtained from the image pickup unit 12 (
Here, images before and after sharpening processing are illustrated in
The reference-point detecting unit 42 detects a diverging point of a vein contained in a venous image on the basis of venous image data supplied from the sharpening unit 41.
In the present embodiment, the reference-point detecting unit 42 binarizes a venous image and extracts the center of a vein width or the peak of luminance in the binarized image, and thereby generates a pattern in which the vein line width corresponds to one pixel, as pre-stage processing for detecting a diverging point. Accordingly, the reference-point detecting unit 42 can more uniformly detect the position of an intersection of veins, as compared with the case where the pre-stage processing is omitted.
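As an illustration only, and not the actual implementation of the sharpening unit 41 and the reference-point detecting unit 42, the following Python sketch shows this kind of processing with commonly available tools (SciPy for the LoG filtering, scikit-image for the thinning); the sigma and threshold values and the three-neighbour criterion are assumptions.

import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.morphology import skeletonize

def detect_diverging_points(vein_image, sigma=2.0, threshold=0.0):
    # Sharpening corresponding to the LoG filtering in the sharpening unit 41;
    # sigma loosely plays the role of the kernel size.
    sharpened = -gaussian_laplace(vein_image.astype(float), sigma=sigma)
    # Binarize, then thin the pattern so that the vein line width corresponds to one pixel.
    binary = sharpened > threshold  # sign/threshold depend on whether veins appear dark or bright
    skeleton = skeletonize(binary)
    # A skeleton pixel with three or more skeleton neighbours is treated as a diverging point.
    points = []
    h, w = skeleton.shape
    for yy in range(1, h - 1):
        for xx in range(1, w - 1):
            if skeleton[yy, xx] and skeleton[yy - 1:yy + 2, xx - 1:xx + 2].sum() - 1 >= 3:
                points.append((xx, yy))
    return skeleton, points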
The feature extracting unit 43 acquires venous image data after sharpening processing from the sharpening unit 41 and calculates a feature of the venous image. The feature extracting unit 43 uses a moment invariant as the feature.
A moment invariant is briefly described. An image moment of order (p+q) of an image f(x, y) at the coordinates (x, y) represents a variance value of the pixel values about the origin of the image, as defined by the following expression:
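In the standard raw-moment form assumed here, this is

mpq = Σ(x = 0 to w−1) Σ(y = 0 to h−1) x^p y^q f(x, y)   (1)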
Accordingly, as a large pixel value is farther dispersed from the origin, the image moment has a larger value.
In this Expression (1), “p” represents a weight with respect to the x-axis direction, and “q” represents a weight with respect to the y-axis direction. Accordingly, as the value of “p” increases, the weight with respect to the variance toward the x-axis direction increases, whereas as the value of “q” increases, the weight with respect to the variance toward the y-axis direction increases. Incidentally, in Expression (1), “w” represents the number of pixels in the x-axis direction, and “h” represents the number of pixels in the y-axis direction.
For this image moment, m10/m00 and m01/m00 represent the barycentric coordinates G(xG, yG). On the basis of these barycentric coordinates G(xG, yG), the moment around the barycenter, i.e., the barycentric moment, can be defined by the following expression:
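Assuming the standard definition, this is

μpq = Σ(x = 0 to w−1) Σ(y = 0 to h−1) (x − xG)^p (y − yG)^q f(x, y)   (2)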
This is a moment in which the distance from the barycenter, not the distance from the origin, is considered. That is, this value represents the variance of the pixel values within an image about the barycenter, with respect to each of the x-axis direction and the y-axis direction.
In addition, this barycentric moment is normalized by the following expressions:
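Assuming the normalization commonly used for Hu moment invariants, this is

ηpq = μpq / μ00^γ, where γ = (p + q)/2 + 1 (for p + q ≥ 2)   (3)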
and thus the normalized barycentric moment η is calculated. Due to this normalization, the spread of the variance does not affect the value of the moment, so it is invariant with respect to translation movement or rotation movement of an object within an image, or to the image size.
Moment invariants are ones in which these normalized barycentric moments are combined and can be classified into seven types of the first (I1) to seventh (I7) orders, as defined by the following expressions:
I1 = η20 + η02
I2 = (η20 − η02)² + 4η11²
I3 = (η30 − 3η12)² + (3η21 − η03)²
I4 = (η30 + η12)² + (η21 + η03)²
I5 = (η30 − 3η12)(η30 + η12){(η30 + η12)² − 3(η21 + η03)²} + (3η21 − η03)(η21 + η03){3(η30 + η12)² − (η21 + η03)²}
I6 = (η20 − η02){(η30 + η12)² − (η21 + η03)²} + 4η11(η30 + η12)(η21 + η03)
I7 = (3η21 − η03)(η30 + η12){(η30 + η12)² − 3(η21 + η03)²} − (η30 − 3η12)(η21 + η03){3(η30 + η12)² − (η21 + η03)²}   (4)
For example, as is clear from Expressions (4), the first-order moment invariant I1 is one in which variance toward the x-axis direction and variance toward the y-axis direction are added together. Incidentally, the moment invariants are provided by Hu (M-K Hu, Visual pattern recognition by moment invariants, IRE Trans. on Information Theory, IT-8, pp. 179-187, 1962).
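As an illustration only, and not the processing actually implemented in the feature extracting unit 43, the following Python sketch computes the normalized barycentric moments and the seven moment invariants directly from Expressions (1) to (4) for an image patch; the names n20, n02, and so on correspond to η20, η02, and so on. Libraries such as OpenCV provide an equivalent computation (cv2.HuMoments).

import numpy as np

def hu_invariants(patch):
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    f = patch.astype(float)
    def m(p, q):          # raw moment mpq (Expression (1))
        return np.sum((x ** p) * (y ** q) * f)
    m00 = m(0, 0)
    xg, yg = m(1, 0) / m00, m(0, 1) / m00       # barycentric coordinates G(xG, yG)
    def mu(p, q):         # barycentric moment (Expression (2))
        return np.sum(((x - xg) ** p) * ((y - yg) ** q) * f)
    def eta(p, q):        # normalized barycentric moment (Expression (3)); mu00 = m00
        return mu(p, q) / (m00 ** ((p + q) / 2 + 1))
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    i1 = n20 + n02
    i2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    i3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    i4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    i5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    i6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    i7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return [i1, i2, i3, i4, i5, i6, i7]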
Here, when the sample images illustrated in
The results are illustrated in
Note that, as is also clear from
Consequently, for the feature extracting unit 43, a target for extracting a moment invariant is not the whole of a venous image, but is limited to part of the venous image. Due to this, the feature extracting unit 43 can permit movement of an image pickup target (on a finger placement surface, translation movement of a finger toward its surface direction or rotation movement of a finger about the longitudinal axis of the finger), irrespective of whether the image pickup range is large or small, as long as a range for extracting a moment invariant is within a venous image. As a result, the feature of an image pickup target (vein) can be properly caught, as compared with the case where a target for extracting a moment invariant is the whole of a venous image.
In the present embodiment, as a target for extracting a moment invariant, as illustrated in
Here, in
As is also clear from this
In this way, the feature extracting unit 43 can accurately calculate, in a vein, a moment invariant of a diverging section that greatly varies from person to person and is an important element for identification, irrespective of movement within a venous image.
Meanwhile, when the feature extracting unit 43 sets an extraction range whose center lies in a diverging point, the extraction range contains the edge of the diverging section. This results from the fact that, as illustrated in
That is, when an extraction range AR containing a diverging point PX on the outline is set (FIG. 11(A)), each diverging point on the venous image has traits in the position from the center and a luminance state at that position, whereas when the extraction range AR containing the diverging point PX is not set (FIG. 11(B)), the trait is weak and thus there is no significance as a target for identification.
It is to be noted that, when this extraction range is uniformly set, it is preferable that the radius of the extraction range be equal to or larger than a range being a unit of sharpening in the reference-point detecting unit 42 at the previous stage (kernel size). This is because, in that case, the possibility of containing the diverging point PX on the outline is high.
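A minimal sketch of such an extraction range, assuming the hu_invariants function sketched above (or any equivalent), is as follows; the masking approach and the treatment of pixels outside the range are illustrative, not the unit's actual processing.

import numpy as np

def invariants_around_point(vein_image, point, radius):
    # Extraction range AR: a circle whose center lies in the diverging point PX.
    # Choosing radius equal to or larger than the sharpening kernel size makes it
    # likely that the outline (edge) of the diverging section falls inside the range.
    x0, y0 = point
    h, w = vein_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    patch = np.where(mask, vein_image, 0)   # pixels outside the extraction range are ignored
    return hu_invariants(patch)             # hu_invariants as sketched above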
In addition, in the case of the present embodiment, as illustrated in
Next, the registration unit 23 is specifically described. This registration unit 23 acquires a moment invariant from the feature extracting unit 43 and determines whether the number of moment invariants calculated in the feature extracting unit 43 is equal to or larger than a prescribed number.
Here, when the number of moment invariants is smaller than the prescribed number, the registration unit 23 determines that it is insufficient as the feature of a registrant and notifies that an image should be picked up again through the notification unit 15.
On the other hand, when the number of moment invariants is equal to or larger than the prescribed number, the registration unit 23 acquires position data on a diverging point of a vein in venous image data from the reference-point detecting unit 42 and generates registration data containing the position of the diverging point and a moment invariant calculated with reference to that position. The data structure of this registration data is illustrated in
In the header area HAr, the number of diverging points and the number of applied orders among the seven orders of Hu moment invariants are stored. These items specify the details of the processing in the image processing unit 22, so the content stored in the header area HAr has meaning as an identifier for confirming that the content stored in the data area DAr is a result of the processing in the image processing unit 22.
On the other hand, in the data area DAr, the position (xy coordinates) of a diverging point serving as a reference for extracting a moment invariant and the moment invariant Ih of the h-th order (h is 2, 3, . . . , or 7) are stored in association with each other.
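As an illustration of this data structure only (the field names below are hypothetical, not those of the actual registration data), a minimal sketch is:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RegistrationData:
    # Header area HAr: identifies how the data area was produced.
    num_diverging_points: int
    num_applied_orders: int        # how many of the seven Hu orders are stored
    # Data area DAr: diverging-point position (xy coordinates) paired with the
    # moment invariants calculated with reference to that position.
    entries: List[Tuple[Tuple[int, int], List[float]]] = field(default_factory=list)

def build_registration_data(points, invariants_per_point, num_orders=7):
    data = RegistrationData(len(points), num_orders)
    for pt, inv in zip(points, invariants_per_point):
        data.entries.append((pt, inv[:num_orders]))
    return data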
In such a way, the registration unit 23 registers, in the memory 13, registration data containing a result of the processing in the image processing unit 22 and the content that confirms that the result of the processing has been performed through the image processing unit 22.
Next, the authentication unit 32 is specifically described. The authentication unit 32 has a configuration that includes a parameter checking unit 51 and a geometric relationship verification unit 52.
The parameter checking unit 51 checks a moment invariant stored in the data area DAr (
Specifically, the parameter checking unit 51 calculates the deviation dI between each registration invariant and its corresponding authentication invariant of the same order using the following expression:
where IT (T=1, 2, . . . , m (m is an integer)) is an i-th order (i=1, 2, . . . , 7) registration invariant in the data area DAr (
And, when obtaining the deviations dI as a result of the checking, the parameter checking unit 51 determines whether all of these deviations dI are smaller than a predetermined threshold. Here, when at least one of the deviations dI is equal to or larger than the threshold, this means that the venous image at registration and that at authentication are different, that is, the provider of the venous image at authentication is not a registrant. In this case, the parameter checking unit 51 notifies that the person cannot be authenticated as a registrant through the notification unit 15.
On the other hand, when all of the deviations dI are smaller than the predetermined threshold, this means that the possibility that the person who input the venous image is a registrant is high. In this case, the parameter checking unit 51 notifies the geometric relationship verification unit 52 that it should start processing.
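As a sketch only, assuming the deviation dI is the absolute difference between invariants of the same order (the actual expression used by the parameter checking unit 51 may differ), this checking can be outlined as:

def deviations(reg_inv, auth_inv):
    # Deviation dI per order: taken here as the absolute difference between the
    # registration invariant and the authentication invariant of the same order.
    return [abs(r - a) for r, a in zip(reg_inv, auth_inv)]

def invariants_match(reg_inv, auth_inv, threshold):
    # If even one deviation is equal to or larger than the threshold, the two sets
    # of invariants are treated as coming from different venous images.
    return all(d < threshold for d in deviations(reg_inv, auth_inv))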
Here, a result of processing in the above-described parameter checking unit 51 using venous images illustrated in
And, the same numbers are assigned to corresponding diverging points, and among the diverging points, a diverging point being an authentication target has an apostrophe (') added to its number. Incidentally, in this
When being notified that it should start processing by the parameter checking unit 51, the geometric relationship verification unit 52 verifies whether the positional relationship between the diverging points being reference points for extracting the registration invariants and the diverging points being reference points for extracting the authentication invariants is correlated.
Specifically, the geometric relationship verification unit 52 searches for a combination of a registration invariant and an authentication invariant at which the sum of squares of deviations is a minimum and determines whether the number of the combinations is equal to or larger than a prescribed number.
Here, when the number of the combinations is smaller than the prescribed number, this means that, because the number of correspondences between diverging points being reference points for extracting a registration invariant and diverging points being reference points for extracting an authentication invariant is small, there is no need to verify the positional relationship between the diverging points, and reliability is insufficient to authenticate the person as a registrant. In this case, the geometric relationship verification unit 52 notifies that the person cannot be authenticated as a registrant through the notification unit 15.
On the other hand, when the number of the combinations is equal to or larger than the prescribed number, the geometric relationship verification unit 52 acquires the position of a diverging point being a reference point for extracting a registration invariant and that for an authentication invariant in a combination that satisfies the condition that the minimum value of the sum of squares of deviations is smaller than a predetermined threshold. That is, the geometric relationship verification unit 52 acquires the position of the diverging point corresponding to the registration invariant from the data area DAr (
Then, the geometric relationship verification unit 52 calculates the distances between the diverging points on the registration side of the combinations and the distances between the diverging points on the authentication side of the combinations, and calculates the differences between these distances for each combination. In addition, the geometric relationship verification unit 52 divides the sum total of the differences between the distances calculated for each combination by the number of the combinations and sets the result of the division as an evaluated value for use in determining whether there is a geometric correlation.
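The calculation of the evaluated value can be outlined as below. This is a sketch under two assumptions: each registration entry is paired with the authentication entry minimizing the sum of squares of deviations, and the average is taken over the distance differences (the unit described above divides by the number of combinations, which may differ slightly).

import numpy as np
from itertools import combinations

def evaluated_value(reg_entries, auth_entries):
    # reg_entries / auth_entries: lists of ((x, y), invariants), as in the registration data.
    # Pair each registration entry with the authentication entry whose invariants
    # minimize the sum of squares of deviations.
    pairs = []
    for reg_pt, reg_inv in reg_entries:
        costs = [sum((r - a) ** 2 for r, a in zip(reg_inv, auth_inv))
                 for _, auth_inv in auth_entries]
        best = int(np.argmin(costs))
        pairs.append((reg_pt, auth_entries[best][0]))
    # Compare the distances between diverging points on the registration side with the
    # distances between the corresponding diverging points on the authentication side.
    diffs = []
    for (p1, q1), (p2, q2) in combinations(pairs, 2):
        d_reg = np.hypot(p1[0] - p2[0], p1[1] - p2[1])
        d_auth = np.hypot(q1[0] - q2[0], q1[1] - q2[1])
        diffs.append(abs(d_reg - d_auth))
    # A small evaluated value suggests the two sets of diverging points are in the
    # same geometric relationship.
    return sum(diffs) / len(diffs) if diffs else float("inf")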
Here, a result of processing in the above-described geometric relationship verification unit 52 using the venous image illustrated in
And, when calculating an evaluated value, the geometric relationship verification unit 52 determines whether the evaluated value is smaller than a predetermined threshold. Here, when the evaluated value is equal to or larger than the predetermined threshold, this means that, although the registration invariants and the authentication invariants are not different, the diverging points being reference points for extracting these moment invariants are not in the same relative positional relationship; the shapes of the veins in the venous images from which the registration invariants and the authentication invariants are extracted are therefore different, and the image acquired in the authentication mode is something other than an authorized venous image. In this case, the geometric relationship verification unit 52 notifies that the person cannot be authenticated as a registrant through the notification unit 15.
In contrast to this, when the evaluated value is smaller than the predetermined threshold, this means that a person who input the venous image in the authentication mode is a registrant. In this case, the geometric relationship verification unit 52 considers that the person can be authenticated as a registrant, generates data for starting execution of predetermined processing relating to successful authentication, and sends it to a device connected to the interface 14 (
Next, a procedure of authentication processing in the authentication unit 32 is described using the flowchart of
In this step SP2, the authentication unit 32 determines whether all of the deviations calculated in step SP1 are smaller than a predetermined threshold. When all of the deviations are smaller than the predetermined threshold, flow proceeds to step SP3.
In this step SP3, the authentication unit 32 searches for a combination at which the sum of squares of deviations is a minimum and determines whether the number of the combinations is equal to or larger than a prescribed number. When the number of the combinations is equal to or larger than the prescribed number, flow proceeds to step SP4.
In this step SP4, the authentication unit 32 identifies the positions of diverging points being reference points for extracting a registration invariant and an authentication invariant in a combination that satisfies the condition that the minimum value of the sum of squares of deviations is smaller than a predetermined threshold. In the subsequent step SP5, the authentication unit 32 calculates an evaluated value based on a result of the identification.
That is, the authentication unit 32 calculates the distances between the diverging points for the registration invariant and the distances between the diverging points for the authentication invariant identified in step SP4, and after that, calculates the differences between the distances for each combination searched for in step SP3. Then, the authentication unit 32 sets, as an evaluated value, a value obtained by dividing the sum total of the differences between the distances calculated for each combination by the number of the combinations searched for in step SP3.
When calculating the evaluated value in step SP5, flow proceeds to step SP6, where the authentication unit 32 determines whether the evaluated value is smaller than a predetermined threshold.
Here, when the evaluated value is smaller than the predetermined threshold, the authentication unit 32 determines that the authentication is successful. In this case, flow proceeds to step SP7, where the authentication unit 32 performs predetermined processing associated with the successful authentication. After that, this authentication processing procedure RT is completed.
On the other hand, when the evaluated value is equal to or larger than the predetermined threshold, when in step SP2 at least one of the deviations between the registration invariants and the authentication invariants is equal to or larger than the predetermined threshold, or when in step SP3 the number of combinations of a registration invariant and an authentication invariant at which the sum of squares of deviations is a minimum is smaller than the prescribed number, the authentication unit 32 determines that the authentication fails. In this case, flow proceeds to step SP8, where the authentication unit 32 performs predetermined processing associated with the authentication failure. After that, this authentication processing procedure RT is completed.
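Using the helper functions sketched earlier (invariants_match and evaluated_value, with hypothetical thresholds), the flow of steps SP1 to SP8 can be summarized as follows; this is an outline under those assumptions, not the authentication unit's actual implementation.

def authenticate(reg_entries, auth_entries, dev_threshold, min_pairs, eval_threshold):
    # Steps SP1-SP2: deviations between registration and authentication invariants
    # must all stay below the threshold for a candidate pairing to count.
    matches = [(reg, auth) for reg in reg_entries for auth in auth_entries
               if invariants_match(reg[1], auth[1], dev_threshold)]
    # Step SP3: a prescribed number of minimum-deviation combinations is required.
    if len(matches) < min_pairs:
        return False                       # step SP8: authentication fails
    # Steps SP4-SP6: the geometric relationship of the diverging points must also agree.
    return evaluated_value(reg_entries, auth_entries) < eval_threshold   # step SP7 on success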
In such a way, the authentication unit 32 can determine whether a person is a registrant by comparing values of features (moment invariants, positional information) without image matching processing.
In the above configuration, this authentication apparatus 1 extracts a feature having a parameter resistant to affine transformation from only a diverging section of a vein shown in a venous image (for example,
Accordingly, this authentication apparatus 1 can better prevent a decrease in authentication accuracy while maintaining the permissible amount of movement of a vein within a venous image, as compared with the case where a feature is extracted from the whole of a venous image. This is particularly useful in the case where it is incorporated in a device whose miniaturization is highly desired, such as a cellular phone.
In the case of the present embodiment, the use of a moment invariant as a feature enables extraction of an equivalent feature even if a finger is translated in its surface direction on the finger placement surface and/or the finger is rotated about the longitudinal axis of the finger.
And, in the case of the present embodiment, an extraction range is set so as to have, as its center, among diverging sections of a vein shown in a venous image, a diverging section contained in a central area CAR (
Accordingly, this authentication apparatus 1 can further maintain the permissible amount of movement of a vein in a venous image, as compared with the case where a target for extracting a feature is not limited to the central area CAR. And, it is particularly useful in the case where the structure of the image pickup unit 12 is not the one in which a light source and an image pickup element are arranged so as to sandwich a finger placed on the finger placement surface (for example, FIG. 15 in Japanese Unexamined Patent Application Publication No. 2005-353014) but the one in which a light source and an image pickup element are arranged on the same side with respect to the finger placement surface on which the finger is placed (for example, FIG. 2 in the same publication).
This is because, in the case of the structure in which the light source and the image pickup element are arranged on the same side with respect to the finger placement surface, visible light entering from the end of that finger placement surface may be mixed, as noise, into the near-infrared light emitted from the light source toward the vein, and the end of the venous image, which is difficult to capture clearly because of this, can be removed from the target for extracting a feature.
With the above configuration, the extraction of a feature having a parameter resistant to affine transformation from only a diverging section of a vein shown in a venous image enables the permissible amount of movement of the vein in the venous image to be maintained without an increase in the image pickup range. In this manner, the authentication apparatus 1 capable of achieving an improved authentication accuracy can be accomplished.
In the above-described embodiment, the case where a LoG filter is used in sharpening processing in the sharpening unit 41 is described. However, the present invention is not limited to this case. For example, various filters, such as a morphology filter, a Laplacian filter, a Gaussian filter, or a Sobel filter, can also be used. And, more than one filter may also be used.
And, in the above-described embodiment, the case where a diverging point is used as a detection target in the reference-point detecting unit 42 is described. However, the present invention is not limited to this case. Either an end point or an inflection point, or a combination of both may also be used. Alternatively, for example, among points included in a vein, a point, such as a point that intersects a circle whose center lies in the center of the venous image, other than all or part of an end point, a diverging point, and an inflection point may also be used.
In addition, although the case where the reference-point detecting unit 42 detects a detection target fixed at a diverging point is described, the present invention is not limited to this case. The detection target may also be switched at a predetermined timing.
For example, every time the vein registration mode is executed, the detection target is switched among a diverging point, an inflection point, and a point that intersects a circle whose center lies in the central position of the image, in this order. Alternatively, for example, every time the vein registration mode is executed, the radius of a circle whose center is the central position is switched and a point that intersects the circle is detected. In this case, describing an identifier for identifying how it is detected (that is, a detection technique) in the header area HAr (
With this, even in the event that registered information on a vein is stolen in another system, because the information does not reveal how a reference point for a moment invariant is detected, this authentication apparatus 1 can prevent masquerading as a registrant using the stolen information, so authentication accuracy can be further improved.
In addition, in the above-described embodiment, the case where Hu moment invariants are used as an extraction target in the feature extracting unit 43 is described. However, the present invention is not limited to this case. For example, Zernike moments or entropy may also be used. And, it is also possible to use a feature that is invariant only with respect to translation movement, like a local luminance histogram in a venous image, rather than a feature that is invariant with respect to both translation movement of a finger in its surface direction on the finger placement surface and rotation movement of the finger about the longitudinal axis of the finger, like a moment invariant. In short, various features can be used as long as they are resistant to affine transformation.
In addition, in the above-described embodiment, the case where the authentication apparatus 1 having the image pickup function, checking function, and registration function is used is described. However, the present invention is not limited to this case. A mode in which each function or part of the functions is assigned to a single device depending on the application may also be used.
The present invention is applicable in the field of biometrics authentication.
Priority application: 2007-182631, filed Jul. 2007, JP (national).
PCT filing: PCT/JP2008/062875, filed Jul. 10, 2008, WO; 371(c) date: Dec. 30, 2009.