This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2014-266302 filed Dec. 26, 2014.
The present invention relates to an authentication device, an authentication method, and a non-transitory computer readable medium.
According to an aspect of the invention, there is provided an authentication device including a face image extracting unit that extracts a face image of an operator, a footwear image extracting unit that extracts a footwear image, the footwear image being an image of footwear of the operator, a face authentication unit that performs face authentication based on the face image and a registered face image that is registered in advance, and a footwear authentication unit that performs footwear authentication based on the footwear image and a registered footwear image that is registered in advance, in which the operator is authenticated based on a result of the face authentication performed by the face authentication unit, and a result of the footwear authentication performed by the footwear authentication unit.
An exemplary embodiment of the present invention will be described in detail based on the following figures, wherein:
Hereinafter, an exemplary embodiment of the invention will be described in detail with reference to the attached figures.
An authentication device according to the exemplary embodiment is applicable to authentication performed in various scenes by using an image of a user. The description below is directed to a case in which the authentication device according to the exemplary embodiment is incorporated in a specific apparatus (to be referred to as target apparatus hereinafter), and used to authenticate a person (user) who has the authority for using the target apparatus.
<System Configuration>
An authentication device 100 illustrated in
The first image acquiring unit 110 is an image capturing unit for acquiring an image of a specific range. The first image acquiring unit 110 is positioned so as to capture, within its image capture range, a person who is present near the target apparatus 10, particularly a person who is approaching the target apparatus 10 to use the target apparatus 10. For example, the first image acquiring unit 110 is provided on the side of the housing of the target apparatus 10 where the user stands when operating the target apparatus.
The second image acquiring unit 120 is an image capturing unit for acquiring an image of a specific part of the body of an operator who operates the target apparatus 10. The exemplary embodiment uses a face as an example of a specific part of a person's body. Accordingly, the second image acquiring unit 120 is positioned so as to capture the face of a person attempting to operate the target apparatus 10. For example, the second image acquiring unit 120 is located near an operating unit of the target apparatus 10. This makes it easier to capture the face of the operator attempting to look at the operating unit to operate the target apparatus 10.
The first image acquiring unit 110 is provided on the front side (the side where the user stands when operating the target apparatus) of the target apparatus 10 illustrated in
The second image acquiring unit 120 is provided near the operation panel 11 of the target apparatus 10 illustrated in
The registration information storing unit 130 is a database (DB) in which information used for authentication (authentication information) is registered and stored. In the exemplary embodiment, face images and feature information are used as information for use in authentication. The registration information storing unit 130 stores a face image and feature information in association with each other for each person (registered operator) who is registered as an authorized operator of the target apparatus 10. Feature information refers to information about a predetermined outward appearance feature extracted from an image of a person. Information that may be acquired from the whole body image of a person is set as the feature information. Specifically, for example, the body height, the color or shape of shoes (footwear), the color of clothing, or a characteristic gesture may be set as a feature.
The authentication device 100 according to the exemplary embodiment first narrows down registration information used for authentication, which is stored in the registration information storing unit 130, based on an image of a person who is present near the target apparatus 10 (a person who is a potential operator) which is acquired by the first image acquiring unit 110. Then, the authentication device 100 performs authentication by matching a face image of the operator acquired by the second image acquiring unit 120 against the registration information that has been narrowed down.
The feature information detector 140 detects feature information from the image acquired by the first image acquiring unit 110. As mentioned above, feature information refers to information about a predetermined feature. Feature information detected from an image by the feature information detector 140 is not limited to one kind of information. Multiple different kinds of feature information may be acquired. A specific method of extracting feature information will be described later.
The narrowing-down processing unit 150 narrows down the registration information to target registration information against which authentication is to be performed, based on the feature information detected by the feature information detector 140. That is, the narrowing-down processing unit 150 searches the registration information storing unit 130 based on the detected feature information, and acquires registration information whose feature information corresponds to the detected feature information. Specifically, for example, if the feature information is the body height of the operator, the narrowing-down processing unit 150 selects, as target registration information, any registration information having feature information for which the difference from the body height of the person existing near the target apparatus 10 detected by the feature information detector 140 falls within a predetermined range. Further, if the feature information is the color of the shoes, the narrowing-down processing unit 150 selects, as target registration information, any registration information having feature information that matches the color of the shoes of the person existing near the target apparatus 10 detected by the feature information detector 140.
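The narrowing-down step described above can be sketched as follows. This is an illustrative sketch only, not the actual implementation: the function and field names, the record layout, and the height tolerance value are assumptions (the source only says the body-height difference must fall within "a predetermined range").

```python
HEIGHT_TOLERANCE_CM = 5  # assumed tolerance for "a predetermined range"

def narrow_down(registrations, detected_height_cm=None, detected_shoe_color=None):
    """Return the subset of registration records whose stored feature
    information corresponds to the detected feature information."""
    targets = []
    for rec in registrations:
        if detected_height_cm is not None:
            # body height: keep records within the tolerance of the detected value
            if abs(rec["height_cm"] - detected_height_cm) > HEIGHT_TOLERANCE_CM:
                continue
        if detected_shoe_color is not None:
            # shoe color: a record may hold several registered colors
            if detected_shoe_color not in rec["shoe_colors"]:
                continue
        targets.append(rec)
    return targets
```

Multiple kinds of feature information, when available, simply act as additional filters in the same loop.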
The face image detector 160 detects a face image of the operator from the image acquired by the second image acquiring unit 120. The method for detecting a face image by image analysis is not particularly limited, and existing techniques may be employed. The face image detector 160 is an example of a specific image detector that detects an image of a specific body part of the operator from the image acquired by the second image acquiring unit 120.
The authentication processing unit 170 performs an authentication process by using the face image detected by the face image detector 160. Specifically, the authentication processing unit 170 matches the face image detected by the face image detector 160 (to be referred to as detected face image hereinafter), against each face image in the registration information stored in the registration information storing unit 130 (to be referred to as registered face image hereinafter). If the similarity between a given registered face image and the detected face image is higher than or equal to a predetermined reference value, the authentication processing unit 170 determines that the registered face image and the detected face image correspond to each other. As a result, the operator from whom the detected face image has been acquired is authenticated to be a registered operator identified by the registered face image. In the exemplary embodiment, the method of performing face image matching is not particularly limited, and existing techniques that compare feature points on faces to determine the similarity between face images may be used.
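The matching decision can be summarized in a short sketch. The similarity function itself is left abstract here, since the source defers to existing face-comparison techniques; the threshold value and the names are assumptions.

```python
SIMILARITY_THRESHOLD = 0.8  # assumed value for "a predetermined reference value"

def authenticate(detected_face, registrations, similarity):
    """Return the first registration whose registered face image corresponds
    to the detected face image, or None if authentication fails."""
    for rec in registrations:
        if similarity(rec["face_image"], detected_face) >= SIMILARITY_THRESHOLD:
            return rec  # operator authenticated as this registered operator
    return None
```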
In the exemplary embodiment, the authentication processing unit 170 first matches the detected face image against each face image in the registration information narrowed down by the narrowing-down processing unit 150. As mentioned above, at this point, the narrowing-down processing unit 150 has narrowed down the target registration information based on an outward appearance feature of the person. Accordingly, if the operator from whom the detected face image has been acquired is a registered operator, it is highly likely that this operator corresponds to a registered operator identified by the registration information narrowed down by the narrowing-down processing unit 150. When matching is performed against each registered face image narrowed down by the narrowing-down processing unit 150 in this way, even if the amount of registration information increases, authentication is performed against limited target registration information, thereby minimizing an increase in processing time.
In the exemplary embodiment, if matching of a detected face image against each registered face image narrowed down by the narrowing-down processing unit 150 does not result in detection of any registered face image corresponding to the detected face image, the authentication processing unit 170 performs matching of the face image against all the other pieces of registration information. That is, matching is performed against each piece of registration information which has been excluded from the target registration information by the narrowing down in the narrowing-down processing unit 150. As a result, the authentication process is performed against all the pieces of registration information as targets, thus preventing omissions in the authentication process.
When the matching of the detected face image against each registered face image that is excluded from the target registration information by the narrowing down in the narrowing-down processing unit 150 results in detection of a registered face image corresponding to the detected face image, this means that the feature information detector 140 has detected, from an image including a registered operator, feature information different from the feature information included in the registration information of that registered operator. Accordingly, the authentication processing unit 170 updates the feature information of the registration information including the registered face image corresponding to the detected face image, with the feature information detected by the feature information detector 140.
In this regard, feature information may be updated by one of the following methods: changing previously registered feature information to new feature information; and adding new feature information to previously registered feature information. Which of the two methods is used for updating feature information may be set in accordance with the intended use of the authentication device 100 or the kind of feature information. As an example, the following describes a case where this setting is determined in accordance with the kind of feature information. For example, a case is considered in which the body height of the operator is used as feature information. Because the body height of the operator does not change very much within a short period of time, detection of different feature information is likely to indicate that the feature information included in the corresponding registration information is incorrect. Accordingly, in this case, the feature information (the value of the operator's body height) in the registration information is changed to the newly acquired feature information. In contrast, if the color of the operator's shoes is used as feature information, it is possible that totally different pieces of feature information are detected for the same operator for reasons such as the operator buying new shoes or switching between multiple pairs of shoes. Accordingly, in this case, newly acquired feature information (shoe color) is added to the registration information. In the case of updating registration information by adding feature information, the values (for example, shoe colors) that may be added may be limited to, for example, three.
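The two update policies described above can be sketched as follows. The record layout, function names, and the oldest-first eviction once the limit is reached are assumptions; the source only fixes the two policies (replace vs. add) and the example limit of three.

```python
MAX_SHOE_COLORS = 3  # the example limit given in the text

def update_height(record, new_height_cm):
    # a differing body height likely means the stored value was wrong:
    # replace the previously registered value outright
    record["height_cm"] = new_height_cm

def add_shoe_color(record, new_color):
    # the same operator may legitimately own several pairs of shoes:
    # accumulate colors, evicting the oldest once the limit is reached
    colors = record.setdefault("shoe_colors", [])
    if new_color in colors:
        return
    if len(colors) >= MAX_SHOE_COLORS:
        colors.pop(0)
    colors.append(new_color)
```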
The registration processing unit 180 newly registers authentication information of the operator. Specifically, the registration processing unit 180 associates a face image detected by the face image detector 160 (detected face image) with feature information detected by the feature information detector 140 for the operator whose face image has been acquired, and stores the resulting information into the registration information storing unit 130 as authentication information. At this time, the registration processing unit 180 may receive an input of information used for identifying the operator such as an ID number or password, and store this information into the registration information storing unit 130 as authentication information together with the feature information and the face image.
In other words, the exemplary embodiment performs authentication of an operator based on a face image of the operator and an image of a feature of the operator. That is, if the body height of the operator is used as a feature, the feature information detector 140 functions as a whole body image extracting unit that extracts an image of the whole body of the operator. The authentication processing unit 170 performs authentication of the operator based on this whole body image and a face image extracted by the face image detector 160. Likewise, if the shape or color of the shoes of an operator is used as a feature, the feature information detector 140 functions as a footwear image extracting unit that extracts an image of footwear of the operator. The authentication processing unit 170 performs authentication of the operator based on this footwear image and a face image extracted by the face image detector 160.
<Operation of Authentication Device>
As illustrated in
Next, the feature information detector 140 performs a feature information extraction process (S303). That is, the feature information detector 140 extracts feature information by analyzing the image acquired by the first image acquiring unit 110. Details of the feature information extraction process will be described later. Once feature information is extracted, next, the narrowing-down processing unit 150 narrows down registration information stored in the registration information storing unit 130 to target registration information (S304). At this point, it is unknown whether the person who has approached the target apparatus 10 is attempting to perform face authentication or registration of authentication information. Accordingly, in the exemplary embodiment, narrowing down of registration information is performed in advance at this point to deal with the case where face authentication is performed later.
Next, the authentication device 100 determines whether to perform face authentication (S305). For example, it is determined to perform face authentication if the operator has performed an operation for logging in to the target apparatus 10. If face authentication is not to be performed (NO in S305), the authentication device 100 determines whether to perform a registration process (S306). For example, it is determined to perform new registration of authentication information if the operator has performed an operation for registering authentication information with the target apparatus 10. The specific condition for determining whether to perform face authentication, or whether to perform registration of authentication information may be set on a case by case basis based on factors such as the type, configuration, specifications, and intended use of the target apparatus 10, and the condition in which the authentication device 100 is installed. For example, the condition may be such that face authentication is performed when no operation takes place, and registration of authentication information is performed only when an operation for registering authentication information is performed.
If the authentication device 100 determines to perform face authentication (YES in S305), the authentication processing unit 170 performs an authentication process by using a detected face image detected by the face image detector 160, and each registered face image included in the registration information narrowed down by the narrowing-down processing unit 150 (S307). If the authentication device 100 determines to perform registration of authentication information (NO in S305, YES in S306), the registration processing unit 180 performs the registration process by using a detected face image detected by the face image detector 160, and feature information detected by the feature information detector 140 (S308).
If the authentication device 100 determines to perform neither face authentication nor registration of authentication information (NO in S305, NO in S306), the processing in the authentication device 100 ends with neither an authentication process nor registration process being performed. How the target apparatus 10 operates in this case is set based on factors such as the type, configuration, specifications, and intended use of the target apparatus 10. For example, the target apparatus 10 may not accept any operation from the operator, or may offer only those pre-set functions which are provided to unregistered operators.
As illustrated in
By using the detected face image extracted by the face image detector 160, the authentication processing unit 170 matches the detected face image against each registered face image in the registration information narrowed down by the narrowing-down processing unit 150 (S402). If it is determined as a result of the matching that the detected face image corresponds to a registered face image (OK) (YES in S403), the authentication processing unit 170 notifies the target apparatus 10 that authentication has succeeded, and also informs the operator to that effect (S407).
If it is determined as a result of the matching that the detected face image does not correspond to any registered face image (Error) (NO in S403), the authentication processing unit 170 matches the detected face image against each piece of registration information which has been excluded by the narrowing down in the narrowing-down processing unit 150 (that is, registration information against which matching has not been performed yet) (S404). If it is determined as a result of this matching that the detected face image corresponds to a registered face image (OK) (YES in S405), the authentication processing unit 170 updates the feature information of the corresponding registration information with the feature information detected by the feature information detector 140 (see S303 in
If, upon performing matching against the registration information against which matching has not been performed yet, it is determined that the detected face image does not correspond to any registered face image (Error) (NO in S405), the authentication processing unit 170 notifies the target apparatus 10 that authentication has failed, and also informs the operator to that effect (S408). If authentication has failed, the authentication processing unit 170 may output a message asking the operator whether to perform registration of authentication information, thus prompting the operator to make a decision.
As illustrated in
Upon acquiring the operator information, the registration processing unit 180 associates the acquired operator information, the detected face image extracted by the face image detector 160, and the feature information detected by the feature information detector 140 (see S303 in
<Feature Information Extraction Process>
Next, the feature information extraction process illustrated as S303 in
Feature information used in the exemplary embodiment to narrow down registration information is information about a predetermined outward appearance feature of a person, extracted from an image of that person. Therefore, the specific details of the feature information extraction process vary depending on how the feature and feature information are set. In other words, the specific details of the feature information extraction process are set in accordance with the kind of feature and feature information selected. Hereinafter, specific examples of feature information extraction processes will be described using example features.
First, the feature information detector 140 determines an image portion that contains motion, from an image acquired by the first image acquiring unit 110 (S601). At this time, an image portion containing motion may be determined by, for example, periodically analyzing images continuously acquired by the first image acquiring unit 110. Specifically, for example, an image acquired at a given point in time is compared with the image acquired immediately before this image, and the portion of the image which differs from the previous image is determined as an image portion containing motion. For example, an image portion containing motion is extracted as a rectangular area within the image acquired by the first image acquiring unit 110.
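The frame-differencing step in S601 can be sketched with plain NumPy as follows. This is a minimal illustration, not the implementation in the source: the per-pixel threshold, grayscale input, and the bounding-rectangle representation of the motion area are all assumptions.

```python
import numpy as np

DIFF_THRESHOLD = 30  # assumed per-pixel intensity difference threshold

def motion_bounding_box(prev_frame, cur_frame):
    """Compare two consecutive grayscale frames and return the bounding
    rectangle (top, bottom, left, right) of the pixels that changed,
    i.e. the image portion containing motion, or None if nothing moved."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > DIFF_THRESHOLD
    ys, xs = np.nonzero(diff)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```

The rectangle returned here corresponds to the area 112 extracted from the first acquired image 111.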
A first acquired image 111 illustrated in
Among the subjects in the first acquired image 111 illustrated in
Returning to
Next, the feature information detector 140 identifies a face image from the image included in the area 112, and determines the position of the face image (S603). Identification of a face image may be performed by using existing techniques. Unlike the detection of a face image by the face image detector 160, this face image identification is performed not for the purpose of authentication but only to determine the position of a face. Accordingly, this identification may be performed with just enough accuracy for recognizing the presence of a face.
Next, based on the determined position of the face, the feature information detector 140 calculates a length in the image corresponding to the body height of the person 111d (S604). A length in the image corresponding to the body height of the person 111d refers to the length from the bottom end of the portion of the area 112 where motion is detected (the portion that has a difference from the immediately previous first acquired image 111), to the top end of the position of the face determined in S603.
Among various human actions, actions made with the arm may place the hand higher than the head, as illustrated in
Next, the feature information detector 140 calculates the actual distance from the first image acquiring unit 110 to the person 111d, based on the position of the bottom end of the area 112 (the bottom end of the portion where motion is detected) in the first acquired image 111 (S605). In the exemplary embodiment, the method of calculating the distance from the first image acquiring unit 110 to the person 111d is not particularly limited, and various existing methods may be used. For example, the distance to a fixed stationary subject (for example, the pillar 111c in
Next, the feature information detector 140 calculates the body height of the person 111d (S606), and holds the value of the calculated body height as feature information for use in various subsequent processes (such as narrowing-down, authentication, and registration processes) (S607). Since the length “h” in the first acquired image 111 corresponding to the body height of the person 111d has been already determined in S604, and the actual distance from the first image acquiring unit 110 to the person 111d is found in S605, the body height of the person 111d is calculated from these values.
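The final calculation in S604 through S606 can be sketched with a simple pinhole-camera model. The source does not fix a calibration model, so the similar-triangles formula, the parameter names, and the focal-length calibration are assumptions.

```python
def body_height_cm(pixel_height, distance_cm, focal_length_px):
    """Estimate the real body height from the on-image length "h" and the
    estimated distance to the person, via similar triangles:
    real_height = pixel_height * distance / focal_length."""
    return pixel_height * distance_cm / focal_length_px
```

For example, with an assumed focal length of 1000 px, a person whose image spans 500 px at a distance of 340 cm would be estimated at 170 cm.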
First, the feature information detector 140 determines an image portion that contains motion, from an image acquired by the first image acquiring unit 110 (S901). Then, the feature information detector 140 acquires the size of the area 112 determined as the image portion containing motion (S902). The operation up to this point is the same as S601 and S602 illustrated in
Next, the feature information detector 140 calculates the position of the feet of the person 111d (S903), identifies the color of the shoes, and holds the identified color of the shoes as feature information for use in various subsequent processes (such as narrowing-down, authentication, and registration processes) (S904). As described above with reference to S604 in
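The shoe-color step in S903 and S904 can be sketched as follows. The fraction of the motion area treated as the feet region and the use of a mean color as the identified shoe color are assumptions; the source only says the feet position is derived from the motion area.

```python
import numpy as np

def dominant_shoe_color(frame, box, feet_fraction=0.1):
    """Take the lowest `feet_fraction` of the motion bounding box as the
    feet region and return its mean RGB as a rough shoe-color feature."""
    top, bottom, left, right = box
    feet_top = bottom - max(1, int((bottom - top) * feet_fraction))
    region = frame[feet_top:bottom + 1, left:right + 1]
    return tuple(region.reshape(-1, region.shape[-1]).mean(axis=0))
```

A real implementation would likely quantize this mean into a small color palette so that it can be matched against registered shoe colors.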
First, the feature information detector 140 determines an image portion that contains motion, from an image acquired by the first image acquiring unit 110 (S1001). Then, the feature information detector 140 acquires the size of the area 112 determined as the image portion containing motion (S1002). Then, the feature information detector 140 identifies a face image from the image included in the area 112, and determines the position of the face image (S1003). The operation up to this point is the same as S601 to S603 illustrated in
Next, the feature information detector 140 recognizes an action of the person 111d (S1004), and holds information about the recognized action as feature information for use in various subsequent processes (such as narrowing-down, authentication, and registration processes) (S1005). Among various actions that may be performed by the person 111d, the exemplary embodiment focuses on motions of the arm or hand which has a wide range of motion. Specifically, for example, the feature information detector 140 recognizes the kind of arm or hand motion, and the position of the hand relative to the position of the face. Recognition of an arm or hand motion by image analysis may be performed by using existing techniques.
In the example illustrated in
In the example illustrated in
In the example illustrated in
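The relative-position part of the action recognition in S1004 can be reduced to a small sketch. The coordinate convention (y grows downward, as in image coordinates) and the classification labels are assumptions; the recognition of the arm or hand motion itself is left to existing techniques, as the source states.

```python
def hand_relative_to_face(hand_y, face_y):
    """Classify the vertical position of the hand relative to the face,
    using image coordinates where smaller y means higher in the frame."""
    if hand_y < face_y:
        return "above_head"
    if hand_y == face_y:
        return "beside_face"
    return "below_face"
```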
While three kinds of actions that may be extracted as feature information have been described above, the above-mentioned actions are illustrative only, and not intended to limit the actions that may be used as feature information. Further, the specific kinds of feature information illustrated in the feature information extraction processes described above with reference to
<Modification of Authentication Process>
In the exemplary embodiment, a face image of an operator is used to authenticate the operator. This face authentication may be combined with authentication using operator information such as an ID number or password. In this case, in addition to performing face authentication, the authentication device 100 requests the operator to input operator information, and performs authentication using the input operator information. Enhanced security is achieved by performing authentication using multiple different measures in this way. As for the measure to perform authentication using operator information, measures implemented in existing authentication systems or the like may be used.
Although the exemplary embodiment places no particular limitation on the kind of face that may serve as a registered face image for use in face authentication, a special condition may be added to such a registered face image. Specifically, for example, a face image with a special facial expression, or an image including an image of a non-face body part such as the hand together with the face is used as a registered face image. Enhanced security is achieved by using a face image with such a special added condition as registration information.
In accordance with the message, the operator adjusts the position or orientation of his/her face while looking at the image 121 displayed on the screen 11a. Then, the operator touches the button object 11b to have an image of the face captured by the second image acquiring unit 120. In S501 of the registration process described above with reference to
In the example illustrated in
In the example illustrated in
<Modification of Authentication Device>
In the exemplary embodiment mentioned above, the authentication device 100 is incorporated in the target apparatus 10. However, the manner of installation of the authentication device 100 is not limited to this. The authentication device 100 may be installed in various manners depending on factors such as the type, configuration, specifications, and intended use of the target apparatus 10. For example, feature information may be extracted by capturing an image of the operator approaching the target apparatus 10 by the first image acquiring unit 110, in the authentication device 100 provided separately from the target apparatus 10. Further, to acquire a face image by the second image acquiring unit 120, the operator may be instructed to turn his/her face toward the second image acquiring unit 120 provided at a position different from the operating unit of the target apparatus 10. Further, while the first image acquiring unit 110 and the second image acquiring unit 120 are provided separately from each other in the exemplary embodiment, the functions of both the first image acquiring unit 110 and the second image acquiring unit 120 may be implemented by the same single image acquiring unit (camera).
Further, while authentication based on a face image is performed in the exemplary embodiment, the exemplary embodiment is also applicable to authentication based on another specific body part that may be used for authentication. For example, the authentication device 100 according to the exemplary embodiment may be directly applied to authentication using a palm print by simply replacing the face image with a palm print image.
<Hardware Configuration of Authentication Device>
As illustrated in
If the authentication device 100 according to the exemplary embodiment is incorporated in the target apparatus 10 as in the configuration example illustrated in
The foregoing description of the exemplary embodiment of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiment was chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
2014-266302 | Dec 2014 | JP | national