The present disclosure relates to a processing apparatus, a system, a biometric authentication system, a processing method, and a computer readable medium for improving accuracy of authentication.
As a technique for taking a tomographic image of a part of an object to be measured near the surface thereof, there is Optical Coherence Tomography (OCT) technology. In this OCT technology, a tomographic image of a part of an object to be measured near the surface thereof is taken by using interference between scattered light that is emitted from the inside of the object to be measured when a light beam is applied to the object to be measured (hereinafter referred to as “back-scattered light”) and reference light. In recent years, this OCT technology has been increasingly applied to medical diagnoses and inspections of industrial products.
The OCT technology has been practically used for tomographic imaging apparatuses for fundi of eyes in ophthalmic diagnoses, and has been studied in order to apply it as a noninvasive tomographic imaging apparatus for various parts of living bodies. In the present disclosure, attention is focused on a technique for dermal fingerprint reading using the OCT technology.
As a technique for using a fingerprint as biometric information, a biometric authentication technique using 2D (two-dimensional) image data of an epidermal fingerprint has been widely used. On the other hand, tomographic data of a finger acquired by using the OCT technology is luminance data at a 3D (three-dimensional) place. That is, in order to use data acquired by the OCT technology for the conventional fingerprint authentication based on 2D images, it is necessary to extract a 2D image containing features of the fingerprint from 3D tomographic data.
As related art, Non Patent Literatures 1 and 2 acquire a dermal fingerprint image by averaging tomographic luminance images over a predetermined range in the depth direction in tomographic data of a finger. However, the range of depths in which a dermal fingerprint is visually recognizable is determined by assumption, and a fixed value is used for the predetermined range.
In Patent Literature 1, a luminance change in the depth direction is obtained for each pixel in a tomographic image. Then, a depth at which the luminance is the second highest is selected as a depth at which a dermal fingerprint is visually recognizable, and an image at this depth having the luminance is used as a dermal fingerprint image.
In Non Patent Literature 3, Orientation Certainty Level (OCL) indicating unidirectionality of a fingerprint pattern in a sub-region is calculated for epidermal and dermal fingerprint images. Then, an image for each sub-region is determined through fusion of the epidermal and dermal fingerprint images on the basis of the OCL value.
In the above-mentioned Non Patent Literatures 1 and 2, since the averaging process is performed on tomographic luminance images over a fixed range of depths, differences in epidermis thickness among individuals are not taken into consideration. For example, when an epidermis has been worn down or thickened due to the person's occupation, the averaging may be performed over a range of depths deviated from the range in which a dermal fingerprint is clearly visible, making it difficult to obtain a clear dermal fingerprint image. In addition, since the interface between the epidermis and the dermis, at which a dermal fingerprint is clearly visible, is likely to be distorted in the depth direction, a fingerprint image extracted at a uniform depth may be locally blurred.
In the aforementioned Patent Literature 1, since the depth at which a dermal fingerprint is clearly visible is determined for each pixel in a tomographic image, the measurement is likely to be affected by noise from the OCT measuring apparatus itself, so there is a high possibility that the depth is determined incorrectly. Further, since the depth determination is performed for each pixel in a tomographic image, it takes time to extract a dermal fingerprint image.
The aforementioned Non Patent Literature 3 describes a technique of obtaining a fingerprint image from two images, namely epidermal and dermal fingerprint images, which differs from the technique described in the present disclosure of obtaining an optimum fingerprint image from a plurality of tomographic images that are successive in the depth direction. Further, for tomographic images that are successive in the depth direction, the OCL calculated after division into regions is generally susceptible to noise, leading to a high possibility of erroneously selecting a depth that is not optimal.
An object of the present disclosure is to provide a processing apparatus, a system, a biometric authentication system, a processing method, and a computer readable medium for solving any one of the above-described problems.
A processing apparatus according to the present disclosure includes:
means for calculating, from three-dimensional luminance data indicating an authentication target, depth dependence of striped pattern sharpness in a plurality of regions on a plane perpendicular to a depth direction of the authentication target;
means for calculating a depth at which the striped pattern sharpness is the greatest in the depth dependence of striped pattern sharpness;
rough adjustment means for correcting the calculated depth on the basis of depths of other regions positioned respectively around the plurality of regions;
fine adjustment means for selecting a depth closest to the corrected depth and at which the striped pattern sharpness is at an extreme; and
means for extracting an image with a luminance on the basis of the selected depth.
A processing method according to the present disclosure includes:
a step of calculating, from three-dimensional luminance data indicating an authentication target, depth dependence of striped pattern sharpness in a plurality of regions on a plane perpendicular to a depth direction of the authentication target;
a step of calculating a depth at which the striped pattern sharpness is the greatest in the depth dependence of striped pattern sharpness;
a step of correcting the calculated depth on the basis of depths of other regions positioned respectively around the plurality of regions;
a step of selecting a depth closest to the corrected depth and at which the striped pattern sharpness is at an extreme; and
a step of extracting an image with a luminance on the basis of the selected depth.
A non-transitory computer readable medium storing a program according to the present disclosure causes a computer to perform:
a step of calculating, from three-dimensional luminance data indicating an authentication target, depth dependence of striped pattern sharpness in a plurality of regions on a plane perpendicular to a depth direction of the authentication target;
a step of calculating a depth at which the striped pattern sharpness is the greatest in the depth dependence of striped pattern sharpness;
a step of correcting the calculated depth on the basis of depths of other regions positioned respectively around the plurality of regions;
a step of selecting a depth closest to the corrected depth and at which the striped pattern sharpness is at an extreme; and
a step of extracting an image with a luminance on the basis of the selected depth.
According to the present disclosure, it is possible to provide a processing apparatus, a system, a biometric authentication system, a processing method, and a computer readable medium capable of obtaining a 2D image from 3D tomographic images, extracting an image for accurate authentication, and extracting an image at a high speed.
Example embodiments according to the present invention will be described hereinafter with reference to the drawings. As shown in
The measuring apparatus 12 captures 3D (three-dimensional) tomographic luminance data indicating luminance of an authentication target to be authenticated in a 3D space by using the OCT technology or the like. The authentication target is not particularly limited and may be various types of objects. A specific example thereof is a part of a living body. A more specific example thereof is a finger of a hand. The smoothing apparatus 13 smooths curvatures of the authentication target in the depth direction in the 3D tomographic luminance data acquired by the measuring apparatus 12. Whether the measuring apparatus 12 acquires the authentication target, e.g., a fingerprint, in a non-contact manner or by pressing the authentication target against a glass surface or the like, some roundness of the authentication target remains. Therefore, the smoothing apparatus 13 smooths curvatures of the authentication target in the depth direction before the process for extracting an authentication image is performed, and generates the 3D luminance data. The authentication apparatus 14 performs authentication by using the extracted authentication image. The authentication apparatus 14 performs biometric authentication by using, for example, a fingerprint image. Specifically, the authentication apparatus 14 identifies an individual by comparing a tomographic image with image data associated with individual information and finding a match. The system 10 shown in
In the following descriptions of example embodiments, the authentication target is a finger of a hand. In addition, a distance from the surface of the epidermis of a finger toward the inside of the skin is referred to as a depth, and a plane perpendicular to the depth direction is referred to as an XY-plane. Further, a luminance image on the XY-plane is referred to as a tomographic image.
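The curvature smoothing performed by the smoothing apparatus 13 can be sketched as aligning each depth column of the 3D tomographic luminance data so that the finger surface sits at a constant depth. The threshold-based surface detection and the function name below are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def flatten_surface(volume, threshold):
    """Align each depth column so the detected surface sits at depth 0.

    volume: 3D luminance array shaped (depth, height, width).
    The surface is taken as the first depth index whose luminance
    exceeds `threshold` (a simplification of real surface detection).
    """
    depth, h, w = volume.shape
    flattened = np.zeros_like(volume)
    for y in range(h):
        for x in range(w):
            column = volume[:, y, x]
            above = np.nonzero(column > threshold)[0]
            surface = above[0] if above.size else 0
            # Shift the column upward so the surface is at index 0.
            flattened[:depth - surface, y, x] = column[surface:]
    return flattened
```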
An epidermal fingerprint and a dermal fingerprint on a finger appear most clearly at the interface between the air and the epidermis and at the interface between the epidermis and the dermis, respectively. Given this, in the present application, a depth at which the striped pattern sharpness of the tomographic image is high is selected as the depth at which the respective fingerprints are extracted. Further, in consideration of the possibility that the aforementioned air-epidermis and epidermis-dermis interfaces are distorted in the depth direction, a method is employed in which the 3D luminance data is divided into predetermined regions on the XY-plane, and a depth at which the striped pattern sharpness is high is selected for each region.
The striped pattern sharpness is a feature amount, such as the OCL (Orientation Certainty Level) used in Non Patent Literature 3, indicating that an image contains a plurality of stripes of the same shape consisting of light and dark portions. Examples of the striped pattern sharpness include OCL, RVU (Ridge Valley Uniformity), FDA (Frequency Domain Analysis), and LCS (Local Clarity Score). OCL is disclosed in Non Patent Literature 4. RVU indicates uniformity of the widths of light and dark stripes in a sub-region. FDA, disclosed in Non Patent Literature 5, indicates the mono-frequency characteristic of a striped pattern in a sub-region. LCS, disclosed in Non Patent Literature 6, indicates uniformity of the luminance of the light and dark portions of stripes in a sub-region. Other examples of the striped pattern sharpness include OFL (Orientation Flow), indicating continuity of the stripe direction with surrounding sub-regions. The striped pattern sharpness may also be defined as a combination of these evaluation indicators.
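As an illustration of one such indicator, OCL can be computed from the eigenvalues of the gradient covariance matrix of a sub-region; this is a common formulation, and the exact definition in the cited literature may differ in detail:

```python
import numpy as np

def ocl(block):
    """Orientation Certainty Level of a 2D image block.

    Returns a value in [0, 1]; values near 1 indicate a strongly
    unidirectional (striped) pattern. Computed from the eigenvalues of
    the gradient covariance matrix.
    """
    gy, gx = np.gradient(block.astype(float))
    a = np.sum(gx * gx)
    b = np.sum(gy * gy)
    c = np.sum(gx * gy)
    # Eigenvalues of the covariance matrix [[a, c], [c, b]].
    root = np.sqrt((a - b) ** 2 + 4 * c * c)
    lam_max = (a + b + root) / 2
    lam_min = (a + b - root) / 2
    if lam_max + lam_min == 0:
        return 0.0  # featureless (constant) block
    return (lam_max - lam_min) / (lam_max + lam_min)
```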
The depth dependences 110a and 110b of striped pattern sharpness shown in
The depth image 120 includes a pixel 120a indicating the depth 112a in the graph 111a shown in
The depth 112b in the region corresponding to the pixel 120b differs greatly from the depths of the pixels around that region. When a feature amount is calculated after division into a plurality of regions, it is difficult to sufficiently suppress the influence of noise from the measurement device and the like. For example, in the region corresponding to the pixel 120b, as in the graph 111b, the depth at which the striped pattern sharpness is the greatest is the depth 112b. However, the depth 113 is close to the depth 112a and is considered to be the correct depth at which the respective fingerprints are to be extracted. Therefore, selecting the depth at which the striped pattern sharpness is the greatest, i.e., the depth 112b, results in an error. In this regard, in the present application, attention is focused on the tendency of distortion or displacement of interfaces in the skin structure to be continuous in the depth direction, and processing is performed for correcting a depth that deviates from the depths of the surrounding regions so that a depth equal or close to the surrounding depths is selected. Examples of means for correcting the depth of a region deviated from the depths of the surrounding regions include image processing such as a median filter and a bilateral filter, and spatial-frequency filters such as a low-pass filter and a Wiener filter.
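As a minimal sketch of this correction, a median filter applied to the depth image (one depth value per region) pulls a deviated depth such as the depth 112b back toward the surrounding depths. The NumPy-only implementation below is illustrative; a library filter such as SciPy's median_filter would serve equally:

```python
import numpy as np

def median_correct(depth_image, size=3):
    """Rough adjustment: replace each region's depth with the median of
    its neighbourhood, pulling noise-induced outliers back toward the
    depths of the surrounding regions (edges use replication padding)."""
    pad = size // 2
    padded = np.pad(depth_image, pad, mode="edge")
    out = np.empty_like(depth_image, dtype=float)
    h, w = depth_image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out
```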
The depth image 130 shown in
The depth image 140 shown in
The authenticating image extraction apparatus 11 acquires 3D luminance data (Step S101). The authenticating image extraction apparatus 11 divides the 3D luminance data into a plurality of regions on the XY-plane (Step S102). Note that the plurality of regions may have various shapes and are not limited to a grid shape.
The authenticating image extraction apparatus 11 calculates the depth dependence of the striped pattern sharpness in each region (Step S103). Note that, as described above, the striped pattern sharpness means a feature amount indicating that there are a plurality of stripes of the same shape consisting of light and dark portions in an image, exemplified by OCL.
The authenticating image extraction apparatus 11 selects the depth at which the striped pattern sharpness is the greatest in each region (Step S104). The authenticating image extraction apparatus 11 then corrects any selected depth that deviates from the depths of the surrounding regions (Step S105). Note that, for the depth image, examples of a method for correcting such a deviated depth include processing such as a median filter.
The authenticating image extraction apparatus 11 selects, for each region, the depth at which the striped pattern sharpness is at an extreme and which is closest to the depth corrected in Step S105 (Step S106). The authenticating image extraction apparatus 11 converts the region-by-region depth information to the same resolution as the fingerprint image, thereby smoothing the depth information (Step S107). The authenticating image extraction apparatus 11 then performs processes for adjusting the image for biometric authentication, such as binarization and a thinning process (Step S108).
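The fine adjustment of Step S106 can be sketched as choosing, among the local maxima of a region's sharpness-versus-depth curve, the depth index nearest the roughly adjusted depth; the strict local-maximum test below is a simplifying assumption:

```python
import numpy as np

def nearest_extremum_depth(sharpness, target):
    """Fine adjustment: among the local maxima of a 1D
    sharpness-versus-depth curve, return the depth index closest to
    `target` (the roughly adjusted depth)."""
    s = np.asarray(sharpness, dtype=float)
    # Interior points strictly greater than both neighbours.
    maxima = np.nonzero((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    if maxima.size == 0:
        return int(target)  # no extremum: keep the corrected depth
    return int(maxima[np.argmin(np.abs(maxima - target))])
```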
As described above, the authentication image extraction system according to the first example embodiment divides the 3D luminance data of a finger into regions on the XY-plane, and optimizes the extraction depth through use of the striped pattern sharpness. Further, the authentication image extraction system is capable of extracting a clear fingerprint image at a high speed, by means of rough adjustment of the depth through correction processing of the deviated depth, and fine adjustment of the depth through selection of the extreme value of the striped pattern sharpness. As a result, compared to the techniques disclosed in Non Patent Literatures 1 and 2, it is possible to extract an image in an adaptive manner against differences of thicknesses of epidermises among individuals, and to respond to distortion in the depth direction of interfaces in the skin structure.
Further, since a depth is determined based on an image having a plurality of pixels, unlike the per-pixel depth determination disclosed in Patent Literature 1, the tolerance to noise is high. Further, since the amount of data to be processed is reduced to the number of regions, the processing can be performed at high speed.
By additionally performing the processing for correcting the deviated depth, a clear fingerprint image can be stably extracted even when the depth optimization is performed in regions susceptible to noise.
In a second example embodiment, processing for stabilizing extraction of a clear fingerprint image through repetition of the correction processing of the deviated depth in the first example embodiment is described.
A depth image 200 shown in
The depth image 210 shown in
After Step S104, the authenticating image extraction apparatus 11 retains the depth image output in Step S104 for later use in Step S203 (Step S201). The depth image retained in Step S201 is then subjected to the processing of Step S105 and Step S106 as in the first example embodiment.
After Step S106, the authenticating image extraction apparatus 11 calculates a difference between the depth image retained in Step S201 and the depth image after Step S106 (Step S202). Any method for calculating a difference between two depth images can be employed.
In a case in which the difference value calculated in Step S202 is smaller than a threshold value (Step S203: Yes), the authenticating image extraction apparatus 11 terminates the processing for correcting the deviated depth. In a case in which the difference value calculated in Step S202 is no less than the threshold value (Step S203: No), the authenticating image extraction apparatus 11 returns to Step S201 and repeats the processing for correcting the deviated depth. After Step S203, the authenticating image extraction apparatus 11 performs Step S107 and Step S108 as in the first example embodiment.
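Steps S201 to S203 can be sketched as the following loop; the mean absolute difference used as the difference value is one possible choice, since the disclosure leaves the difference calculation method open:

```python
import numpy as np

def iterate_depth_correction(depth_img, rough, fine, threshold=1.0, max_iter=10):
    """Repeat rough adjustment (outlier correction) and fine adjustment
    (extremum re-selection) until successive depth images differ by less
    than `threshold` (Steps S201-S203)."""
    current = depth_img
    for _ in range(max_iter):
        retained = current                      # Step S201: retain
        current = fine(rough(retained))         # Steps S105-S106
        # Step S202/S203: difference against the retained depth image.
        if np.mean(np.abs(current - retained)) < threshold:
            break
    return current
```

`rough` and `fine` are passed in as callables so the sketch stays independent of any particular filter or extremum search.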
As described above, in addition to the first example embodiment, the authentication image extraction system according to the second example embodiment repeats the rough adjustment of the depth through correction processing of the deviated depth, and the fine adjustment of the depth through selection of the depth at which the striped pattern sharpness is at an extreme. As a result, stable extraction of a clear fingerprint image is enabled even when there are many regions with deviated depths.
In a third example embodiment, processing for extracting an epidermal fingerprint and a dermal fingerprint by limiting the searching range for a target depth in the first and second example embodiments is described. In general, in the 3D luminance data of a finger, the striped pattern sharpness reaches maxima at the depths of the interface between the air and the epidermis and of the interface between the epidermis and the dermis, corresponding to the epidermal fingerprint and the dermal fingerprint, respectively. The first and second example embodiments converge on a single maximum value and therefore cannot acquire two fingerprint images. Given this, in the present example embodiment, a method for extracting the two fingerprint images by limiting the respective searching ranges is described.
The depth dependence 310 of the striped pattern sharpness shown in
In the graph 311, the striped pattern sharpness reaches maxima at the depths 312 and 313, corresponding to the average depths of the interface between the air and the epidermis and of the interface between the epidermis and the dermis, respectively. For example, by limiting the searching range to depths beyond the depth 314, which is the midpoint between the depths 312 and 313, extraction of a dermal fingerprint according to the first and second example embodiments is made possible.
By thus calculating and comparing the average striped pattern sharpness of each tomographic image, it is possible to estimate approximate depths corresponding to the epidermal and dermal fingerprint images and to extract the respective fingerprint images. Although a technique of calculating the striped pattern sharpness of a tomographic image by using the average value of OCL has been described, the maximum value of luminance may also be used instead of the striped pattern sharpness.
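The limitation of the searching range can be sketched as follows: the two largest local maxima of the per-depth average sharpness are taken as the two interfaces, and the depth axis is split at their midpoint (the depth 314). The peak-picking details are simplifying assumptions:

```python
import numpy as np

def split_search_ranges(avg_sharpness):
    """Split the depth axis into epidermal and dermal searching ranges.

    The two largest local maxima of the per-depth average sharpness are
    taken as the air-epidermis and epidermis-dermis interfaces, and the
    axis is split at their midpoint. Returns (epidermal, dermal) ranges
    as half-open (start, stop) index pairs.
    """
    s = np.asarray(avg_sharpness, dtype=float)
    # Interior local maxima of the average-sharpness curve.
    peaks = np.nonzero((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    top2 = sorted(sorted(peaks, key=lambda i: s[i], reverse=True)[:2])
    mid = int((top2[0] + top2[1]) // 2)
    return (0, mid), (mid, len(s))
```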
Next, the authenticating image extraction apparatus 11 extracts 3D luminance data within the range of the searched depth determined in Step S301 (Step S302). The authenticating image extraction apparatus 11 according to the third example embodiment performs Step S102 to Step S108 as in the first example embodiment.
As described above, the authentication image extraction system according to the third example embodiment makes it possible to independently acquire two types of fingerprints, which are an epidermal fingerprint and a dermal fingerprint, through limitation of the range of searched depth.
In a fourth example embodiment, processing for adaptively changing the range of the regions to be divided on the XY-plane in the first to third example embodiments depending on the finger to be recognized is described. For example, OCL is an amount indicating that the striped pattern within a region is unidirectional; however, when the range of the region is extended excessively, the fingerprint within the region is no longer a unidirectional striped pattern. Conversely, when the range of the region is narrowed excessively, the striped pattern disappears. Since the interval of the striped pattern varies from person to person, it is desirable that the region is not fixed but is changed adaptively depending on the finger to be recognized. Given this, in the fourth example embodiment, the range of the regions to be divided on the XY-plane is determined after estimating the spatial frequency of the fingerprint, and then the fingerprint extraction processing according to the first to third example embodiments is performed.
A frequency image 410 is formed through Fourier transform of the tomographic image 400. In the frequency image 410, a ring 412 can be observed around a pixel 411 at the center of the image, the ring corresponding to spatial frequency of the fingerprint.
A frequency characteristic 420 is a graph of the average of the pixel values at an equal distance from the pixel 411 at the center of the frequency image 410, plotted against the distance from the pixel 411.
In the graph 421, the average pixel value is maximum at the spatial frequency 422, which corresponds to the radius from the pixel 411 at the center of the frequency image 410 to the ring 412. The spatial frequency 422 of the fingerprint can thus be identified. By setting the range of a region so as to include a plurality of stripes on the basis of the spatial frequency 422, adaptive operation for fingers with different stripe intervals is made possible.
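The estimation of the fingerprint spatial frequency from the ring 412 can be sketched as taking the radius at which the radial average of the centred amplitude spectrum peaks; the discretization choices below are assumptions for illustration:

```python
import numpy as np

def dominant_spatial_frequency(image):
    """Estimate the ridge frequency as the radius of the strongest ring
    in the centred 2D amplitude spectrum (DC component excluded)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx).astype(int)
    # Average amplitude at each integer radius (radial profile).
    profile = np.bincount(r.ravel(), weights=spec.ravel()) / np.bincount(r.ravel())
    profile[0] = 0.0  # suppress the DC component at the center pixel
    return int(np.argmax(profile))
```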
The authenticating image extraction apparatus 11 determines a range of division of regions on the XY-plane on the basis of the spatial frequency determined in Step S401, and divides the 3D luminance data into regions on the XY-plane (Step S402). The authenticating image extraction apparatus 11 according to the fourth example embodiment performs Step S103 to Step S108 as in the first example embodiment.
As described above, the authentication image extraction system according to the fourth example embodiment performs the processing of obtaining spatial frequency of a fingerprint of a finger to be recognized, and then configuring a range of a region to be divided on the XY-plane in an adaptive manner. As a result, it is made possible to stably extract a clear fingerprint image for fingers with different fingerprint frequencies.
Note that although the present invention has been described as a hardware configuration in the above first to fourth example embodiments, the present invention is not limited thereto. In the present invention, the processes in each of the components can also be implemented by having a CPU (Central Processing Unit) execute a computer program.
For example, the authenticating image extraction apparatus 11 according to any of the above-described example embodiments can have the below-shown hardware configuration.
An apparatus 500 shown in
The above-described program may be stored by using various types of non-transitory computer readable media and supplied to a computer (including information notification apparatuses). Non-transitory computer readable media include various types of tangible storage media. Examples of non-transitory computer readable media include magnetic recording media (e.g., a flexible disk, a magnetic tape, and a hard disk drive), magneto-optical recording media (e.g., a magneto-optical disk), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (e.g., a mask ROM, a PROM, an EPROM, a flash ROM, and a RAM). Further, the program may be supplied to a computer by various types of transitory computer readable media. Examples of transitory computer readable media include electrical signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g., electric wires and optical fibers) or a wireless communication line.
Further, as described above as the procedure for processing in the authenticating image extraction apparatus 11 in the above-described various example embodiments, the present invention may also be applied as a processing method.
A part or all of the above-described example embodiments may be stated as in the supplementary notes presented below, but are not limited thereto.
(Supplementary note 1)
A processing apparatus comprising:
means for calculating, from three-dimensional luminance data indicating an authentication target, depth dependence of striped pattern sharpness in a plurality of regions on a plane perpendicular to a depth direction of the authentication target;
means for calculating a depth at which the striped pattern sharpness is the greatest in the depth dependence of striped pattern sharpness;
rough adjustment means for correcting the calculated depth on the basis of depths of other regions positioned respectively around the plurality of regions;
fine adjustment means for selecting a depth closest to the corrected depth and at which the striped pattern sharpness is at an extreme; and
means for extracting an image with a luminance on the basis of the selected depth.
(Supplementary note 2)
The processing apparatus described in Supplementary note 1, further comprising:
means for calculating for each region a difference amount between roughly adjusted depth information indicating the corrected depth and finely adjusted depth information indicating the selected depth,
wherein, in a case in which the difference amount is no less than a threshold value,
the rough adjustment means re-corrects the corrected depth on the basis of depths of other regions positioned respectively around the plurality of regions and
the fine adjustment means re-selects a depth closest to the re-corrected depth and at which the striped pattern sharpness is at an extreme;
means for calculating for each region a difference amount between roughly re-adjusted depth information indicating the re-corrected depth and finely re-adjusted depth information indicating the re-selected depth; and
means for extracting the image with the luminance on the basis of the re-selected depth in a case in which the difference amount is less than the threshold value.
(Supplementary note 3)
The processing apparatus described in Supplementary note 1 or 2, further comprising means for restricting calculation of the depth dependence of the striped pattern sharpness to a specified depth.
(Supplementary note 4)
The processing apparatus described in any one of Supplementary notes 1 to 3, further comprising means for determining spatial frequency of the striped pattern from the three-dimensional luminance data and then calculating a range of the regions.
(Supplementary note 5)
The processing apparatus described in any one of Supplementary notes 1 to 4, wherein the striped pattern sharpness indicates unidirectionality of the striped pattern in the regions.
(Supplementary note 6)
The processing apparatus described in any one of Supplementary notes 1 to 4, wherein the striped pattern sharpness indicates unity of spatial frequency in the regions.
(Supplementary note 7)
The processing apparatus described in any one of Supplementary notes 1 to 4, wherein the striped pattern sharpness indicates uniformity of luminance in each of light and dark portions in the regions.
(Supplementary note 8)
The processing apparatus described in any one of Supplementary notes 1 to 4, wherein the striped pattern sharpness indicates uniformity with respect to widths of light and dark stripes in the regions.
(Supplementary note 9)
The processing apparatus described in any one of Supplementary notes 1 to 4, wherein the striped pattern sharpness is a combination of the striped pattern sharpnesses described in any of Supplementary notes 5 to 8.
(Supplementary note 10)
The processing apparatus described in any one of Supplementary notes 1 to 9, wherein the rough adjustment means employs a median filter.
(Supplementary note 11)
The processing apparatus described in any one of Supplementary notes 1 to 9, wherein the rough adjustment means employs a bilateral filter.
(Supplementary note 12)
The processing apparatus described in any one of Supplementary notes 1 to 9, wherein the rough adjustment means employs a filter for spatial frequency.
(Supplementary note 13)
A system comprising:
an apparatus configured to acquire three-dimensional luminance data indicating a recognition target; and
the processing apparatus described in any one of Supplementary notes 1 to 12,
wherein the system is configured to acquire a tomographic image having a striped pattern inside the recognition target.
(Supplementary note 14)
A biometric authentication system comprising:
an apparatus configured to acquire three-dimensional luminance data indicating a living body as a recognition target;
the processing apparatus described in any one of Supplementary notes 1 to 12; and
a processing apparatus configured to compare a tomographic image having a striped pattern inside the recognition target with image data associated with individual information,
wherein the biometric authentication system is configured to identify an individual through comparison between the tomographic image and the image data.
(Supplementary note 15)
A processing method comprising:
a step of calculating, from three-dimensional luminance data indicating an authentication target, depth dependence of striped pattern sharpness in a plurality of regions on a plane perpendicular to a depth direction of the authentication target;
a step of calculating a depth at which the striped pattern sharpness is the greatest in the depth dependence of striped pattern sharpness;
a step of correcting the calculated depth on the basis of depths of other regions positioned respectively around the plurality of regions;
a step of selecting a depth closest to the corrected depth and at which the striped pattern sharpness is at an extreme; and
a step of extracting an image with a luminance on the basis of the selected depth.
(Supplementary note 16)
A non-transitory computer readable medium storing a program causing a computer to perform:
a step of calculating, from three-dimensional luminance data indicating an authentication target, depth dependence of striped pattern sharpness in a plurality of regions on a plane perpendicular to a depth direction of the authentication target;
a step of calculating a depth at which the striped pattern sharpness is the greatest in the depth dependence of striped pattern sharpness;
a step of correcting the calculated depth on the basis of depths of other regions positioned respectively around the plurality of regions;
a step of selecting a depth closest to the corrected depth and at which the striped pattern sharpness is at an extreme; and
a step of extracting an image with a luminance on the basis of the selected depth.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2019/030364 | 8/1/2019 | WO |