The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention, and together with the general description given above and the detailed description of the embodiments given below, serve to explain the principles of the invention.
Embodiments of the invention will be described hereinafter with reference to the accompanying drawings.
An overview of the invention will be briefly explained. In the invention, for example, as shown in
The first embodiment of the invention will be described below.
The respective building components will be described in detail below.
The face authentication display unit 114 is arranged near the gate device 3, as shown in
The face authentication display unit 114 displays, as shown in, e.g.,
The face search display unit 115 displays the order of face search results of a plurality of (N) persons, and comprises, e.g., a liquid crystal display, CRT display, or the like. As the display contents, for example, face images of the face search results of top 10 persons are displayed, as shown in
The camera 101 used to acquire a recognition image captures an image which includes at least the face of the walker M, and comprises a television camera using an image sensing element such as a CCD sensor or the like. The camera 101 is set at a position between the point A and gate device 3 on the side portion of the path 1, as shown in
By setting the camera 101 in this way, when the walker M looks at the face authentication display unit 114, an image including a full face can be acquired. The acquired image is sent to the change detection unit 102 as digital density image data having 512 pixels in the horizontal direction and 512 pixels in the vertical direction.
The change detection unit 102 detects a change region from the image obtained by the camera 101. In this detection processing, a change region is detected from the difference from a background image, as in the method described in reference (Nakai, “Detection method for moving object using post-confirmation”, IPSJ Transaction, 94-CV90, pp. 1-8, 1994). With this method, an image captured when no change occurs is held as the background image, and any region of the current input image that differs from it is detected as the change region.
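As a rough illustration of this background differencing, the following is a minimal sketch; the threshold value and the grayscale array representation are assumptions for illustration, not details taken from the specification or the reference:

```python
import numpy as np

def detect_change_region(background: np.ndarray, frame: np.ndarray,
                         threshold: int = 30) -> np.ndarray:
    """Return a boolean mask of pixels that differ from the background."""
    # Work in a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```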
The head detection unit 103 detects a head region including a face from the change region obtained by the change detection unit 102. The head region is detected by, e.g., processing shown in
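The head detection processing itself is shown in a figure not reproduced in this text. As one plausible stand-in (a heuristic, not the specification's method), a head candidate can be taken as the top square of the change region's bounding box:

```python
import numpy as np

def detect_head_region(change_mask: np.ndarray):
    """Return (x, y, w, h) of a head candidate taken as the top square of
    the change region's bounding box; None if the mask is empty."""
    ys, xs = np.nonzero(change_mask)
    if xs.size == 0:
        return None
    x0, x1, y0 = xs.min(), xs.max(), ys.min()
    w = int(x1 - x0 + 1)
    # Heuristic: assume the head occupies a roughly square area at the top.
    return (int(x0), int(y0), w, w)
```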
The head trace unit 104 associates a head region previously detected by the head detection unit 103 with that detected from the currently input image. This association is implemented by associating, e.g., a head region detected from the currently input image (time t) with a head region detected from the immediately preceding input image (time t−1) which has a size and position close to those of the former head region. In
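This association can be sketched as a greedy nearest-neighbor matching on position and size; the distance and size-ratio limits below are illustrative assumptions:

```python
import math

def associate_heads(prev_heads, curr_heads, max_dist=50.0, max_area_ratio=1.5):
    """Match each head region (x, y, w, h) at time t to the closest unused
    region at time t-1 whose size is similar. Returns {curr_idx: prev_idx}."""
    matches, used = {}, set()
    for i, (x, y, w, h) in enumerate(curr_heads):
        best, best_d = None, max_dist
        for j, (px, py, pw, ph) in enumerate(prev_heads):
            if j in used:
                continue
            d = math.hypot(x - px, y - py)
            ratio = max(w * h, pw * ph) / max(1, min(w * h, pw * ph))
            if d < best_d and ratio <= max_area_ratio:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches
```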
The face candidate region detection unit 105 detects a candidate region where the face exists from the head region obtained by the head detection unit 103 or head trace unit 104. This unit 105 detects a face candidate region using face-like features, in order to discard head regions that were detected by the processes of the change detection unit 102 to the head trace unit 104 but do not contain any face. Practical detection processing of a face candidate region uses a method described in reference (Mita, Kaneko, & Hori, “Proposal of spatial difference probability template suitable for authentication of image including slight difference”, Lecture Papers of 9th Symposium on Sensing via Image Information, SSII03, 2003). In this method, a detection dictionary pattern is generated from face learning patterns in advance, and patterns with a high similarity to the dictionary pattern are retrieved from an input image.
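The referenced spatial difference probability template method is not reproduced here; as a rough stand-in that conveys the retrieve-by-similarity idea, the sketch below scans a single dictionary pattern over the image with normalized correlation (stride and threshold are illustrative):

```python
import numpy as np

def scan_face_candidates(image: np.ndarray, template: np.ndarray,
                         stride: int = 4, min_similarity: float = 0.8):
    """Slide a dictionary pattern over the image; keep windows whose
    normalized correlation exceeds min_similarity. Returns (x, y, score)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    hits = []
    for y in range(0, image.shape[0] - th + 1, stride):
        for x in range(0, image.shape[1] - tw + 1, stride):
            win = image[y:y + th, x:x + tw].astype(np.float64)
            w = (win - win.mean()) / (win.std() + 1e-8)
            score = float((w * t).mean())
            if score >= min_similarity:
                hits.append((x, y, score))
    return hits
```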
The face region detection unit 106 extracts a face pattern from the face candidate region detected by the face candidate region detection unit 105. A face region can be detected with high precision using, e.g., a method described in reference (Fukui & Yamaguchi, “Face feature point extraction by combination of shape extraction and pattern recognition”, Journal of IEICE (D-II), vol. J80-D-II, No. 8, pp. 2170-2177, 1997) or the like.
The face feature extraction unit 107 normalizes the face region to a given size and shape based on the positions of the detected facial components, and uses its density information as a feature amount. The density values of a region of m pixels × n pixels are used intact, giving an (m×n)-dimensional feature vector. A correlation matrix (or covariance matrix) of the feature vectors is calculated from these data, and its orthonormal eigenvectors are obtained by K-L expansion, thus yielding a subspace. The subspace is expressed by the set of k eigenvectors selected in descending order of eigenvalue. In this embodiment, a correlation matrix Cd is calculated from the feature vectors and diagonalized to obtain an eigenvector matrix Φd:
Cd = Φd Λd Φd^T, where Λd is the diagonal matrix of eigenvalues of Cd.
This subspace is utilized as face feature information used to identify a person. This information can be registered in advance as a dictionary. As will be described later, the subspace itself may be utilized as face feature information used to perform identification. Therefore, the calculation result of the subspace is output to the face recognition dictionary unit 108 and face recognition unit 109.
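The computation above is a standard K-L (principal component) expansion and can be sketched as follows; the function name and array layout are illustrative, not part of the specification:

```python
import numpy as np

def compute_face_subspace(X: np.ndarray, k: int) -> np.ndarray:
    """K-L expansion of (m*n)-dimensional feature vectors.
    X has shape (num_samples, m*n); returns an (m*n, k) orthonormal basis."""
    Cd = X.T @ X / len(X)                  # correlation matrix Cd
    eigvals, eigvecs = np.linalg.eigh(Cd)  # Cd = Phi_d Lambda_d Phi_d^T
    top = np.argsort(eigvals)[::-1][:k]    # k largest eigenvalues
    return eigvecs[:, top]
```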
The face recognition dictionary unit 108 holds the face feature information obtained from the face feature extraction unit 107 as face information, and allows calculation of similarity with the walker M.
The face recognition unit 109 calculates similarity between face feature information of the walker M extracted from the image acquired by the camera 101 by the face feature extraction unit 107 and face feature information (registered face information) stored in the face recognition dictionary unit 108. When the calculated similarity is equal to or higher than a determination threshold which is set in advance, the unit 109 determines that the walker M is a pre-registered person; otherwise, the unit 109 determines that the walker M is not a pre-registered person. This face recognition processing can be implemented by using a mutual subspace method described in reference (Yamaguchi, Fukui, & Maeda, “Face recognition system using moving image”, IEICE Transactions PRMU97-50, pp. 17-23, 1997-06).
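In the mutual subspace method, similarity is measured from the canonical angles between the input subspace and a registered subspace. A minimal sketch, assuming each subspace is supplied as a matrix with orthonormal columns (the function names and the example threshold are illustrative):

```python
import numpy as np

def mutual_subspace_similarity(U: np.ndarray, V: np.ndarray) -> float:
    """Similarity of two subspaces with orthonormal column bases: the squared
    largest singular value of U^T V, i.e. the squared cosine of the smallest
    canonical angle between them."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return float(s[0] ** 2)

def is_registered(walker_basis, dictionary_basis, determination_threshold=0.9):
    """Decide per the unit 109 rule: similarity at or above the preset
    determination threshold means a pre-registered person."""
    return mutual_subspace_similarity(walker_basis, dictionary_basis) >= determination_threshold
```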
The camera 110 used to acquire a monitoring image acquires an image from which the flow line of the walker M in the image capture target zone is to be extracted, and comprises, e.g., a television camera using an image sensing element such as a CCD sensor or the like. The camera 110 is set at a position to look down from the ceiling so that its field angle can cover the image capture target zone of the path 1, as shown in
The flow line extraction unit 111 extracts the flow line of the walker M from the image acquired by the camera 110. The flow line is extracted by, e.g., processing shown in
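The flow line extraction processing is shown in a figure not reproduced in this text. As a simple stand-in, a flow line can be built from the per-frame centroids of the walker's change region:

```python
import numpy as np

def extract_flow_line(change_masks):
    """Build a flow line as the per-frame centroid (x, y) of the walker's
    change region, one point per input mask; empty masks are skipped."""
    flow_line = []
    for mask in change_masks:
        ys, xs = np.nonzero(mask)
        if xs.size:
            flow_line.append((float(xs.mean()), float(ys.mean())))
    return flow_line
```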
The reference flow line storage unit 112 stores a reference flow line which is set in advance, and allows calculation of the distance to the flow line of the walker M.
The flow line comparison unit 113 calculates the distance between the reference flow line stored in the reference flow line storage unit 112 and the flow line of the walker M obtained by the flow line extraction unit 111. The distance can be calculated, for example, as follows.
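The specification's own distance formula is not reproduced in this text; the sketch below assumes the mean Euclidean distance between corresponding sampling points, an assumption consistent with the point-wise averaging over sampling points X1 to Xn used by the second embodiment:

```python
import numpy as np

def flow_line_distance(flow_line, reference_line) -> float:
    """Mean Euclidean distance between corresponding sampling points of the
    walker's flow line and the reference flow line."""
    f = np.asarray(flow_line, dtype=float)
    r = np.asarray(reference_line, dtype=float)
    n = min(len(f), len(r))  # compare over the shared sampling points
    return float(np.linalg.norm(f[:n] - r[:n], axis=1).mean())
```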
The gate control unit 116 sends a control signal that instructs opening or closing to the gate device 3 shown in
The display authentication control unit 117 controls the overall apparatus, and the flow of processing is shown in the flowchart of
If the walker M exists in the image capture target zone, face region detection processing is executed (step S1), and the detection result is displayed on the face authentication display unit 114 (step S2). It is then checked if the face region detection completion condition is met (step S3). If the face region detection completion condition is not met, the flow returns to step S1 to repeat the same processing.
Note that the face region detection processing indicates a series of processes from the camera 101 to the face candidate region detection unit 105, and
Also, the face detection completion condition includes a case wherein a required number of face candidate regions are acquired. In addition, the face detection completion condition includes a case wherein the image sizes of the detected face candidate regions become equal to or larger than a predetermined value, a case wherein head trace processing is complete, and the like.
If the face detection completion condition is met, the flow line extraction result of the walker M is acquired from the flow line extraction unit 111 (step S4). Next, the acquired flow line extraction result is compared with the reference flow line which is registered in advance in the reference flow line storage unit 112 (step S5), and if the calculated distance is less than the threshold Th1, face recognition processing is executed (step S6). Note that the face recognition processing indicates a series of processes from the face region detection unit 106 to the face recognition unit 109, and
On the other hand, if the distance between the flow line of the walker M and the reference flow line is equal to or larger than the threshold Th1, normal face recognition processing is skipped. More specifically, if the distance is equal to or larger than the threshold Th1 and is less than a threshold Th2 (Th2>Th1) (step S7), the determination threshold in the face recognition unit 109 is changed to be higher (to set a higher security level) (step S8), and face recognition processing is executed (step S6). If the distance is equal to or larger than the threshold Th2, the walker M is excluded from the object to be recognized (step S9), and the flow returns to step S1.
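The branch of steps S6 to S9 reduces to a small decision function; the function and parameter names below are illustrative:

```python
def decide_recognition_mode(distance, th1, th2, normal_thr, strict_thr):
    """Mirror steps S6-S9: a distance below Th1 runs normal recognition;
    between Th1 and Th2 recognition runs with a raised determination
    threshold (higher security level); at or above Th2 the walker is
    excluded from the object to be recognized."""
    if distance < th1:
        return ("recognize", normal_thr)
    if distance < th2:
        return ("recognize", strict_thr)
    return ("exclude", None)
```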
It is checked based on the result of the face recognition processing if authentication has succeeded (step S10). If authentication has succeeded (if it is determined that the walker M is a pre-registered person), a message indicating authentication OK is displayed on the face authentication display unit 114 (step S11), and a passage permission signal to the gate device 3 is set ON for a predetermined period of time (step S12). The flow then returns to step S1. As a result, the walker M can pass through the gate device 3.
As a result of checking in step S10, if authentication has failed (if it is determined that the walker M is not a pre-registered person), a message indicating authentication NG is displayed on the face authentication display unit 114 (step S13), and the flow returns to step S1.
As described above, according to the first embodiment, it is checked based on the flow line extraction result of the walker M whether the walker M is likely to be an object to be captured, and the face recognition processing is executed based on this checking result, thus greatly improving the face authentication performance.
The second embodiment will be described below.
The reference flow line learning unit 118 updates the reference flow line using the flow line extraction result at the time of successful authentication and the reference flow line stored in the reference flow line storage unit 112. For this purpose, the flow line comparison unit 113 outputs the flow line comparison result and flow line extraction result to the display authentication control unit 117, which outputs the flow line extraction result at the time of successful authentication to the reference flow line learning unit 118. As the update method of the reference flow line, the average values of the input flow line LXk and the reference flow line IXk at sampling points X1 to Xn are adopted as the new reference flow line, and the new reference flow line is registered (stored) in the reference flow line storage unit 112.
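This update rule can be written directly as a point-wise average; the sketch below assumes both flow lines have already been resampled at the same sampling points X1 to Xn:

```python
import numpy as np

def update_reference_flow_line(input_line, reference_line):
    """New reference = point-wise average of the input flow line LXk and the
    current reference IXk at sampling points X1..Xn (second embodiment)."""
    L = np.asarray(input_line, dtype=float)
    I = np.asarray(reference_line, dtype=float)
    return (L + I) / 2.0
```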
As described above, according to the second embodiment, the reference flow line is updated using the flow line extraction result at the time of previous successful authentication, and comparison with the reference flow line suited to face authentication is made, thus allowing determination of an object to be captured with high reliability.
The third embodiment will be described below.
Of the above-mentioned building components, the camera 301 used to acquire a recognition image, change detection unit 302, head detection unit 303, head trace unit 304, face candidate region detection unit 305, face region detection unit 306, face feature extraction unit 307, face recognition dictionary unit 308, face recognition unit 309, face authentication display unit 311, face search display unit 312, and gate control unit 313 are the same as the camera 101 used to acquire a recognition image, change detection unit 102, head detection unit 103, head trace unit 104, face candidate region detection unit 105, face region detection unit 106, face feature extraction unit 107, face recognition dictionary unit 108, face recognition unit 109, face authentication display unit 114, face search display unit 115, and gate control unit 116 in the aforementioned first embodiment (
The walking velocity measuring unit 310 measures the walking velocity (moving velocity) of the walker M. For example, the unit 310 pre-stores the relationship between the walking velocity of the walker M and the number of acquired images in the image capture target zone, and measures the approximate walking velocity of the walker M based on the number of recognition images acquired by the camera 301 used to acquire a recognition image.
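For illustration only, the sketch below recovers an approximate velocity from the image count by assuming a known zone length and frame rate; the specification instead pre-stores the velocity-to-count relationship, and both constants here are assumptions:

```python
def estimate_walking_velocity(num_images, zone_length_m=2.0, frame_rate_hz=15.0):
    """Approximate velocity (m/s) from the number of recognition images
    captured while the walker crosses the image capture target zone."""
    if num_images <= 0:
        raise ValueError("at least one captured image is required")
    time_in_zone_s = num_images / frame_rate_hz
    return zone_length_m / time_in_zone_s
```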
The display authentication control unit 314 controls the overall apparatus, and the flowchart of
If the walker M exists in the image capture target zone, the same face region detection processing as described above is executed (step S21), and the detection result is displayed on the face authentication display unit 311 (step S22). It is then checked if the face region detection completion condition is met (step S23). If the face region detection completion condition is not met, the flow returns to step S21 to repeat the same processing.
If the face detection completion condition is met, the walking velocity measurement result of the walker M is acquired from the walking velocity measuring unit 310 (step S24). The acquired walking velocity is compared with a threshold Th1 (step S25), and if the acquired walking velocity is less than the threshold Th1, the same face recognition processing as described above is executed (step S26).
On the other hand, if the acquired walking velocity is equal to or higher than the threshold Th1, normal face recognition processing is skipped. More specifically, if the walking velocity is equal to or higher than the threshold Th1 and is less than a threshold Th2 (Th2>Th1) (step S27), the determination threshold in the face recognition unit 309 is changed to be higher (to set a higher security level) (step S28), and face recognition processing is executed (step S26). If the walking velocity is equal to or higher than the threshold Th2, the walker M is excluded from the object to be recognized (step S29), and the flow returns to step S21.
It is checked based on the result of the face recognition processing if authentication has succeeded (step S30). If authentication has succeeded (if it is determined that the walker M is a pre-registered person), a message indicating authentication OK is displayed on the face authentication display unit 311 (step S31), and a passage permission signal to the gate device 3 is set ON for a predetermined period of time (step S32). The flow then returns to step S21. As a result, the walker M can pass through the gate device 3.
As a result of checking in step S30, if authentication has failed (if it is determined that the walker M is not a pre-registered person), a message indicating authentication NG is displayed on the face authentication display unit 311 (step S33), and the flow returns to step S21.
As described above, according to the third embodiment, it is checked based on the walking velocity measurement result of the walker M if the walker M is likely to be an object to be captured, and the face recognition processing is executed based on this checking result, thus greatly improving the face authentication performance.
The fourth embodiment will be described below.
Of these building components, the camera 401 used to acquire a recognition image, change detection unit 402, head detection unit 403, head trace unit 404, face candidate region detection unit 405, face region detection unit 406, face feature extraction unit 407, face recognition dictionary unit 408, face recognition unit 409, camera 410 used to acquire a monitoring image, flow line extraction unit 411, face authentication display unit 413, face search display unit 414, and gate control unit 415 are the same as the camera 101 used to acquire a recognition image, change detection unit 102, head detection unit 103, head trace unit 104, face candidate region detection unit 105, face region detection unit 106, face feature extraction unit 107, face recognition dictionary unit 108, face recognition unit 109, camera 110 used to acquire a monitoring image, flow line extraction unit 111, face authentication display unit 114, face search display unit 115, and gate control unit 116, and a description thereof will be omitted. Only differences from the first embodiment will be explained below.
In the fourth embodiment, a zone near the gate device 3 through which a person must always pass when passing through the gate device 3 is called a passer confirmation zone. More specifically, a hatched rectangular zone 5 shown in
The camera 410 used to acquire a monitoring image acquires an image from which the flow line in the passer confirmation zone is to be extracted, and comprises, for example, a television camera using an image sensing element such as a CCD sensor or the like. The camera 410 is set at a position to look down from the ceiling so that its field angle can cover the position of the walker M upon completion of authentication and the passer confirmation zone 5, as shown in
The interchange determination unit 412 determines interchange of walkers M using a flow line associated with the face recognition result. For this purpose, the unit 412 acquires the face recognition result and flow line extraction result from the display authentication control unit 416, and outputs its determination result to the display authentication control unit 416. Association between the recognition result and flow line is done based on the coordinate value (
The interchange determination processing is executed according to, e.g., the flowchart shown in
As a result of confirmation in step S42, if a plurality of persons do not exist, it is similarly confirmed if recognition has succeeded (step S46). If recognition has succeeded, no interchange is determined (step S47); otherwise, if it has not succeeded, interchange of a person is determined (step S45). This determination result is output to the display authentication control unit 416. As a practical example,
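The decision logic of steps S42 to S47 can be sketched as follows. Treating the multi-person branch as requiring a successful recognition for every person traced into the passer confirmation zone is an assumption, since the flowchart itself is not reproduced in this text:

```python
def determine_interchange(persons_in_zone, recognized_persons):
    """Interchange is determined when any person whose flow line reaches the
    passer confirmation zone lacks a successful recognition; otherwise no
    interchange is determined (steps S44/S47 vs. S45)."""
    return any(p not in recognized_persons for p in persons_in_zone)
```

For example, `determine_interchange({"p1", "p2"}, {"p1"})` would flag interchange, since person "p2" entered the zone without a successful recognition.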
The display authentication control unit 416 controls the overall apparatus, and the flowchart of
If the walker M exists in the image capture target zone, the same face region detection processing as described above is executed (step S51), and the detection result is displayed on the face authentication display unit 413 (step S52). It is then checked if the face region detection completion condition is met (step S53). If the face region detection completion condition is not met, the flow returns to step S51 to repeat the same processing.
If the face detection completion condition is met, the same face recognition processing as described above is executed, and the result of that face recognition processing is acquired (step S54). Next, the flow line extraction result of the walker M is acquired from the flow line extraction unit 411 (step S55). The acquired face recognition processing result and flow line extraction result are sent to the interchange determination unit 412 (step S56), and the determination result is acquired from the interchange determination unit 412 (step S57).
It is checked whether the determination result acquired from the interchange determination unit 412 indicates interchange (step S58). If the determination result indicates no interchange, a message indicating authentication OK is displayed on the face authentication display unit 413 (step S59), and a passage permission signal to the gate device 3 is set ON for a predetermined period of time (step S60). The flow then returns to step S51. As a result, the walker M can pass through the gate device 3.
As a result of checking in step S58, if interchange is determined, a message indicating authentication NG is displayed on the face authentication display unit 413 (step S61), and the flow returns to step S51.
As described above, according to the fourth embodiment, interchange is determined using the face recognition result of the walker M and the associated flow line of the walker M, and passage control is made based on the determination result, thus preventing interchange, and greatly improving security.
The fifth embodiment will be described below.
Of these building components, the camera 501 used to acquire a recognition image, face feature extraction unit 505, face recognition dictionary unit 506, and gate control unit 508 are the same as the camera 101 used to acquire a recognition image, face feature extraction unit 107, face recognition dictionary unit 108, and gate control unit 116 of the aforementioned first embodiment (
The first face region detection unit 502 detects a face region candidate of the walker M from images captured by the camera 501, and can be implemented by configuring it using, e.g., the change detection unit 102, head detection unit 103, head trace unit 104, and face candidate region detection unit 105 described in the first embodiment. Hence, a description thereof will be omitted. The detected face region is sent to the second face region detection unit 503.
The second face region detection unit 503 detects a face region to be authenticated from the face region candidate detected by the first face region detection unit 502, and can be implemented by configuring it using the face region detection unit 106. Hence, a description thereof will be omitted. The detected face region is sent to the face region image accumulation unit 504.
The face region image accumulation unit 504 accumulates a plurality of images of face regions detected by the second face region detection unit 503, and accumulates images of face regions until an accumulation completion condition is met. Note that the accumulation completion condition includes a case wherein a required number of face candidate regions are acquired. In addition, the accumulation completion condition includes a case wherein the image sizes of the detected face candidate regions become equal to or larger than a predetermined value, and the like.
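A minimal sketch of the completion test; the required count and minimum face size are illustrative values, not figures from the specification:

```python
def accumulation_complete(face_images, required_count=10, min_size_px=64):
    """Accumulation completion condition: enough face region images have
    been collected, or a detected face has reached a predetermined size.
    face_images is a list of 2-D arrays (height, width)."""
    if len(face_images) >= required_count:
        return True
    return any(min(img.shape[:2]) >= min_size_px for img in face_images)
```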
The face recognition unit 507 compares the face feature information extracted by the face feature extraction unit 505 with the registered face information stored in advance in the face recognition dictionary unit 506, thus determining whether the walker M is a pre-registered person.
The flow of the overall processing will be described below based on the flowchart shown in
An image including the face of the walker M is acquired by the camera 501 (step S71), and is sent to the first face region detection unit 502. The first face region detection unit 502 detects a candidate of the face region of the walker M from the image acquired by the camera 501 (step S72), and sends it to the second face region detection unit 503.
The second face region detection unit 503 detects a face region to be authenticated from the face region candidate detected by the first face region detection unit 502 (step S73), and sends it to the face region image accumulation unit 504. The face region image accumulation unit 504 accumulates the image of the face region detected by the second face region detection unit 503 until the accumulation completion condition is met (steps S74 and S75).
After the images of the detected face regions are accumulated until the accumulation completion condition is met, the face feature extraction unit 505 extracts feature information from each of a plurality of face region images accumulated in the face region image accumulation unit 504 (step S76), and sends the extracted feature information to the face recognition unit 507.
The face recognition unit 507 determines whether the walker M of interest is a pre-registered person by comparing the extracted feature information with the registered face information stored in advance in the face recognition dictionary unit 506 (step S77), and sends the determination result to the gate control unit 508. The gate control unit 508 determines according to the determination result of the face recognition unit 507 whether personal authentication is OK or NG, and controls the gate device 3 based on that result (step S78).
As described above, according to the fifth embodiment, since the face recognition processing is done using a plurality of face region images by utilizing the first face region detection unit and second face region detection unit, a pattern variation due to a change in face direction caused by walking is absorbed, thus attaining quick face authentication of the walker with high precision.
Note that the first to fifth embodiments described above can be combined as needed. As a result, the operations and effects of respective combined embodiments can be obtained. For example, when the first embodiment is combined with the fifth embodiment, the processes in steps S71 to S78 (except for gate control) in
The effects of the invention will be summarized below.
(1) According to the invention, there can be provided a face authentication apparatus, face authentication method, and entrance and exit management apparatus, which determine using the flow line of a walker whether or not that walker is likely to be an object to be captured, and change the determination threshold in the face recognition processing according to this determination result, thus greatly improving the face authentication performance.
(2) According to the invention, there can be provided a face authentication apparatus, face authentication method, and entrance and exit management apparatus, which determine based on the walking velocity of a walker whether or not that walker is likely to be an object to be captured, and change the determination threshold in the face recognition processing according to this determination result, thus greatly improving the face authentication performance.
(3) According to the invention, there can be provided a face authentication apparatus, face authentication method, and entrance and exit management apparatus, which determine interchange of walkers using the flow line associated with the face recognition result of the walker so as to prevent interchange, thus assuring higher security.
(4) According to the invention, there can be provided a face authentication apparatus, face authentication method, and entrance and exit management apparatus, which perform face recognition processing using a plurality of face region images by utilizing the first face region detection unit and second face region detection unit so as to absorb a pattern variation due to a change in face direction caused by walking, thus attaining quick face authentication of the walker with high precision.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
Foreign application priority data: No. 2006-165507, June 2006, Japan (national).