This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2015-153702 filed on Aug. 3, 2015 and Japanese Patent Application No. 2015-196260 filed on Oct. 1, 2015.
The present invention relates to an authentication apparatus and a processing apparatus.
An aspect of the present invention provides an authentication apparatus including: an imaging unit that images a person around the authentication apparatus; an authentication unit that authenticates an individual by using a face image of a person imaged by the imaging unit; and an instruction unit that gives an instruction for starting authentication, in which the authentication unit acquires a face image before an instruction is given by the instruction unit, and performs authentication after the instruction is given by the instruction unit.
Exemplary embodiment(s) of the present invention will be described in detail based on the following figures, wherein
Hereinafter, with reference to the accompanying drawings, Exemplary Embodiment 1 of the present invention will be described in detail.
The image forming apparatus 10 includes a scanner 11, a printer 12, and a user interface (UI) 13. Among the elements, the scanner 11 is a device that reads an image formed on an original, and the printer 12 is a device that forms an image on a recording material. The user interface 13 is a device that receives an operation (instruction) from the user and displays various types of information to the user when the user uses the image forming apparatus 10.
The scanner 11 of the present embodiment is disposed over the printer 12. The user interface 13 is attached to the scanner 11. Here, the user interface 13 is disposed on the front side of the image forming apparatus 10 (scanner 11), on which the user stands when using the image forming apparatus 10. The user interface 13 is disposed so as to be directed upward so that the user standing on the front side of the image forming apparatus 10 can operate the user interface 13 while looking down at it from above.
The image forming apparatus 10 also includes a pyroelectric sensor 14, a first camera 15, and a second camera 16. Among the elements, the pyroelectric sensor 14 and the first camera 15 are respectively attached to the front side and the left side in the printer 12 so as to be directed forward. The first camera 15 is disposed over the pyroelectric sensor 14. The second camera 16 is attached so as to be directed upward on the left side in the user interface 13.
Here, the pyroelectric sensor 14 has a function of detecting movement of a moving object (a person or the like), including the user, on the front side of the image forming apparatus 10. The first camera 15 is a so-called video camera, and has a function of capturing an image of the front side of the image forming apparatus 10. The second camera 16 is also a so-called video camera, and has a function of capturing an image of the upper side of the image forming apparatus 10. Here, a fish-eye lens is provided in each of the first camera 15 and the second camera 16. Consequently, the first camera 15 and the second camera 16 capture images at a wider angle than in a case where a general lens is used.
The image forming apparatus 10 further includes a projector 17. In this example, the projector 17 is disposed on the right side of the main body of the image forming apparatus 10 when viewed from the front side. The projector 17 projects various screens onto a screen (not illustrated) provided on the back side of the image forming apparatus 10. Here, the screen is not limited to a so-called projection screen, and a wall or the like may be used. An installation position of the projector 17 with respect to the main body of the image forming apparatus 10 may be changed. In this example, the main body of the image forming apparatus 10 and the projector 17 are provided separately from each other, but the main body of the image forming apparatus 10 and the projector 17 may be integrally provided by using a method or the like of attaching the projector 17 to a rear surface side of the scanner 11.
The user interface 13 includes a touch panel 130, a first operation button group 131, a second operation button group 132, and a USB memory attachment portion 133. Here, the first operation button group 131 is disposed on the right side of the touch panel 130. The second operation button group 132, the USB memory attachment portion 133, and the second camera 16 are disposed on the left side of the touch panel 130.
Here, the touch panel 130 has a function of displaying information using an image to the user, and receiving an input operation from the user. The first operation button group 131 and the second operation button group 132 have a function of receiving an input operation from the user. The USB memory attachment portion 133 allows the user to attach a USB memory thereto.
The second camera 16 provided in the user interface 13 is disposed at a position where an image of the face of the user using the image forming apparatus 10 can be captured. The image (including the image of the face of the user) captured by the second camera 16 is displayed on the touch panel 130. Here, in the image forming apparatus 10 of the present embodiment, as will be described later, authentication for permitting use of the image forming apparatus 10 is performed by using a face image obtained by the first camera 15 capturing a face of a person approaching the image forming apparatus 10. For this reason, a person (user) who intends to use the image forming apparatus 10 is required to register a face image thereof in advance. The second camera 16 in the present embodiment is used to capture the face of the person when such a face image is registered.
In the present embodiment, an image captured by the first camera 15 can be displayed on the touch panel 130. In the following description, an image captured by the first camera 15 will be referred to as a first camera image, and an image captured by the second camera 16 will be referred to as a second camera image.
Here, as illustrated in
In this example, the pyroelectric sensor 14 (refer to
In this example, by using a result of analyzing the first camera image captured by the first camera 15 (refer to
Among the regions, the person detection region R1 is formed on the front side of the image forming apparatus 10, and exhibits a fan shape whose central angle is set to be lower than 180 degrees when viewed from the top in the height direction. The person detection region R1 is set so as to include the entire detection region F (rather than only a part thereof) in this example. A central angle of the person detection region R1 may be set to an angle other than 180 degrees. However, the first camera 15 has at least the entire person detection region R1 as an imaging region.
Next, the person operation region R2 is set on the front side of the image forming apparatus 10, and exhibits a rectangular shape when viewed from the top in the height direction. In this example, a length of the rectangular region in a width direction is the same as a length of the image forming apparatus 10 in the width direction. The entire person operation region R2 is located inside the person detection region R1. The person operation region R2 is disposed on a side closer to the image forming apparatus 10 in the person detection region R1.
The entry detection region R3 is formed on the front side of the image forming apparatus 10, and exhibits a fan shape whose central angle is set to 180 degrees when viewed from the top in the height direction. The entire entry detection region R3 is located inside the person detection region R1. The entry detection region R3 is disposed on a side closer to the image forming apparatus 10 in the person detection region R1. The entire person operation region R2 described above is located inside the entry detection region R3. The person operation region R2 is disposed on a side closer to the image forming apparatus 10 in the entry detection region R3.
The approach detection region R4 is formed on the front side of the image forming apparatus 10, and exhibits a fan shape whose central angle is set to 180 degrees when viewed from the top in the height direction. The entire approach detection region R4 is located inside the entry detection region R3. The approach detection region R4 is disposed on a side closer to the image forming apparatus 10 in the entry detection region R3. The entire person operation region R2 described above is located inside the approach detection region R4. The person operation region R2 is disposed on a side closer to the image forming apparatus 10 in the approach detection region R4.
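As a rough, non-limiting illustration of how the nested regions described above might be evaluated, the following sketch classifies an estimated person position into one of the regions. The radii, the region width, the coordinate convention (millimeters measured in front of the apparatus), and the function name are hypothetical assumptions for illustration, and the fan-shaped central angles are ignored for simplicity.

```python
import math

# Hypothetical region dimensions (mm); the embodiment does not specify numeric values.
R1_RADIUS = 3000   # person detection region R1 (outermost fan shape)
R3_RADIUS = 2000   # entry detection region R3 (fan, 180 degrees)
R4_RADIUS = 1200   # approach detection region R4 (fan, 180 degrees)
R2_DEPTH = 700     # person operation region R2 (rectangle in front of the apparatus)
R2_WIDTH = 1100    # same width as the image forming apparatus 10 in this example


def classify_position(x_mm, y_mm):
    """Return the innermost region containing a point in front of the apparatus.

    x_mm: lateral offset from the apparatus center; y_mm: distance from its front face.
    Points behind the apparatus (y_mm <= 0) belong to no region. Central angles of
    the fan-shaped regions are ignored in this simplified sketch.
    """
    if y_mm <= 0:
        return None
    distance = math.hypot(x_mm, y_mm)
    if abs(x_mm) <= R2_WIDTH / 2 and y_mm <= R2_DEPTH:
        return "R2"  # person operation region (innermost)
    if distance <= R4_RADIUS:
        return "R4"  # approach detection region
    if distance <= R3_RADIUS:
        return "R3"  # entry detection region
    if distance <= R1_RADIUS:
        return "R1"  # person detection region (outermost)
    return None


if __name__ == "__main__":
    print(classify_position(200, 500))   # -> "R2"
    print(classify_position(0, 2500))    # -> "R3"
```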
In the image forming apparatus 10 of the present embodiment, as will be described later, authentication for permitting use of the image forming apparatus 10 is performed by using a face image obtained by the first camera 15 imaging the face of the person H approaching the image forming apparatus 10. In the image forming apparatus 10, as will be described later, the toes of the person H present in the person detection region R1 are detected by using the first camera image captured by the first camera 15, and it is determined whether or not the person H approaches the image forming apparatus 10.
Here, a height of the image forming apparatus 10 is typically set to about 1000 mm to 1300 mm for convenience of use, and thus a height of the first camera 15 is about 700 mm to 900 mm from the installation surface. As described above, the toes of the person H are required to be imaged by using the first camera 15, and thus the height of the first camera 15 is restricted to a low position to some extent. For this reason, the height (position P) of the first camera 15 from the installation surface is lower than the height of a face of a general adult (person H) as illustrated in
Therefore, in this example, a limit of a distance in which a face image of the person H can be analyzed by analyzing the first camera image captured by the first camera 15 is defined as a face detection limit L. The face detection limit L is determined on the basis of a distance in which the face of the person H having a general height can be imaged by the first camera 15. In this example, the face detection limit L is located outside the person operation region R2 and inside the approach detection region R4.
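As a hedged illustration only, the face detection limit L can be thought of as the distance inside which the face of a person of average height rises above the upward field of view of the low-mounted first camera 15. The sketch below computes such a distance under a simplified pinhole model; all numeric values and the function name are hypothetical and are not taken from the embodiment.

```python
import math


def face_detection_limit(camera_height_mm, face_height_mm, half_fov_deg):
    """Distance inside which the face leaves the upward field of view of a
    low-mounted camera (a simplified model; the embodiment defines the face
    detection limit L for a person of average height, not by this formula)."""
    vertical_offset = face_height_mm - camera_height_mm
    return vertical_offset / math.tan(math.radians(half_fov_deg))


# Hypothetical numbers: camera at 800 mm, face at 1600 mm, upward half field of
# view of 60 degrees -> a limit of roughly 460 mm from the camera.
print(round(face_detection_limit(800, 1600, 60)))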
In a case where there is a person H who intends to use the image forming apparatus 10 of the present embodiment, the person H first enters the detection region F. The person H having entered the detection region F successively enters the person detection region R1, and further enters the person operation region R2 from the entry detection region R3 through the approach detection region R4. In this example, the person H who is moving through the person detection region R1 passes through the face detection limit L while entering the person operation region R2 from the approach detection region R4. The person H having entered the person operation region R2 performs an operation using the user interface 13 while staying in the person operation region R2. Each of the person detection region R1, the person operation region R2, the entry detection region R3, and the approach detection region R4 is not necessarily required to be set as illustrated in
The control unit 101 includes, for example, a central processing unit (CPU) and a memory, and controls each unit of the image forming apparatus 10. The CPU executes a program stored in the memory or the storage unit 105. The memory includes, for example, a read only memory (ROM) and a random access memory (RAM). The ROM stores a program or data in advance. The RAM temporarily stores the program or data, and is used as a work area when the CPU executes the program.
The communication unit 102 is a communication interface connected to a communication line (not illustrated). The communication unit 102 performs communication with a client apparatus or other image forming apparatuses (none of which are illustrated) via the communication line.
The operation unit 103 inputs information corresponding to a user's operation to the control unit 101. In this example, the operation unit 103 is realized by the touch panel 130, the first operation button group 131, and the second operation button group 132 provided in the user interface 13.
The display unit 104 displays various information to the user. In this example, the display unit 104 is realized by the touch panel 130 provided in the user interface 13.
The storage unit 105 is, for example, a hard disk, and stores various programs or data used by the control unit 101.
The image reading unit 106 reads an image of an original so as to generate image data. In this example, the image reading unit 106 is realized by the scanner 11.
The image forming unit 107 forms an image corresponding to the image data on a sheet-like recording material such as paper. In this case, the image forming unit 107 is realized by the printer 12. The image forming unit 107 may form an image according to an electrophotographic method, and may form an image according to other methods.
The detection unit 108 performs detection of a moving object including the person H. In this example, the detection unit 108 is realized by the pyroelectric sensor 14.
The imaging unit 109 images an imaging target including the person H. In this example, the imaging unit 109 is realized by the first camera 15 and the second camera 16.
The person detection unit 110 analyzes the first camera image captured by the first camera 15 so as to detect the person H present in the person detection region R1, the person operation region R2, the entry detection region R3, and the approach detection region R4.
The face detection unit 111 analyzes the first camera image captured by the first camera 15 so as to detect a face image of the person H present inside the person detection region R1 and outside the face detection limit L.
The face registration/authentication unit 112 performs registration using a face image of a user in advance in relation to the person H (the user) who can use the image forming apparatus 10. Here, in the registration, a face image of the user is captured by using the second camera 16, and a feature amount is extracted from the captured face image. A user's ID (registration ID), various information (referred to as registered person information) set by the user, and the feature amount (referred to as face information) extracted from the face image of the user are correlated with each other and are stored in the storage unit 105. In the following description, a table in which the registration ID, the registered person information, and the face information are correlated with each other will be referred to as a registration table, and a user (person H) registered in the registration table will be referred to as a registered person.
The face registration/authentication unit 112 performs authentication using a face image of a user when the user is to use the image forming apparatus 10. Here, in the authentication, a face image of the person H (user) is captured by using the first camera 15, and a feature amount is also extracted from the captured face image. It is examined whether or not the feature amount obtained through the present imaging matches a feature amount registered in advance, and in a case where there is the matching feature amount (in a case of a registered person who is registered as the user), the image forming apparatus 10 is permitted to be used. In a case where there is no matching feature amount (in a case of an unregistered person who is not registered as the user), the image forming apparatus 10 is prohibited from being used.
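The following is a minimal sketch of the registration and matching idea described above, assuming a hypothetical registration table keyed by registration ID and a simple Euclidean distance between feature amounts with an illustrative threshold; the actual feature extraction and matching method of the embodiment is not specified here.

```python
import math

# Hypothetical registration table: registration ID -> (registered person
# information, face information, i.e., a feature amount represented as a vector).
registration_table = {}


def register(registration_id, registered_person_info, face_feature):
    registration_table[registration_id] = (registered_person_info, list(face_feature))


def authenticate(face_feature, threshold=0.6):
    """Return the registration ID whose stored feature amount is closest to the
    feature amount extracted from the captured face image, or None if nothing
    is close enough (authentication failure). Metric and threshold are illustrative."""
    best_id, best_dist = None, float("inf")
    for reg_id, (_, stored) in registration_table.items():
        dist = math.dist(face_feature, stored)
        if dist < best_dist:
            best_id, best_dist = reg_id, dist
    return best_id if best_dist <= threshold else None


register("R001", {"user name": "Fujitaro"}, [0.1, 0.9, 0.3])
register("R002", {"user name": "Fuji Hanako"}, [0.8, 0.2, 0.5])
print(authenticate([0.12, 0.88, 0.31]))  # -> "R001" (use permitted)
print(authenticate([0.0, 0.0, 0.0]))     # -> None (use prohibited)
```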
The instruction unit 113 outputs an instruction for starting an authentication process using the face image captured by the first camera 15 to the face registration/authentication unit 112.
The selection unit 114 selects one face image among a plurality of face images in a case where the plurality of face images are acquired by using the first camera 15 in relation to the same person H.
The notification unit 115 notifies the person H present in, for example, the person detection region R1, of information which is desired to be provided as necessary. The notification unit 115 is realized by the projector 17.
In the present embodiment, the imaging unit 109 (more specifically, the first camera 15) is an example of an imaging unit, the face registration/authentication unit 112 is an example of an authentication unit, and the storage unit 105 is an example of a holding unit. The face detection unit 111 and the face registration/authentication unit 112 are an example of a specifying unit, and the face registration/authentication unit 112 is an example of a processing unit. A region (a region closer to the image forming apparatus 10) located further inward than the face detection limit L in the person detection region R1 is an example of a set region, and the person detection region R1 is an example of a first region. The entry detection region R3 is an example of a second region, and a region located further outward than the face detection limit L in the person detection region R1 is an example of a third region.
Here, the image forming apparatus 10 of the present embodiment operates in one of two modes that differ in power consumption: a "normal mode" and a "sleep mode". In a case where the image forming apparatus 10 operates in the normal mode, power required to perform various processes is supplied to each unit of the image forming apparatus 10. On the other hand, in a case where the image forming apparatus 10 operates in the sleep mode, the supply of power to at least some units of the image forming apparatus 10 is stopped, and a power consumption amount of the image forming apparatus 10 becomes smaller than in the normal mode. However, even in a case where the image forming apparatus 10 operates in the sleep mode, power is supplied to the control unit 101, the pyroelectric sensor 14, and the first camera 15, and these elements can operate even in the sleep mode.
In this example, in an initial state, the image forming apparatus 10 is set to the sleep mode (step S1). Even in the sleep mode, the pyroelectric sensor 14 is activated so as to perform an operation. On the other hand, at this time, the first camera 15 is assumed not to be activated. When the image forming apparatus 10 operates in the sleep mode, the control unit 101 monitors a detection result of an amount of infrared rays in the pyroelectric sensor 14 so as to determine whether or not a person H is present in the detection region F (step S2). In a case where a negative determination (NO) is performed in step S2, the flow returns to step S2, and this process is repeatedly performed.
On the other hand, in a case where an affirmative determination (YES) is performed in step S2, that is, the person H is detected in the detection region F, the control unit 101 starts the supply of power to the first camera 15 and also activates the first camera 15 so as to start to image the person detection region R1 (step S3). If imaging is started by the first camera 15, the person detection unit 110 analyzes a first camera image acquired from the first camera 15 and starts a process of detecting motion of the person H (step S4).
In the process of detecting motion of the person H started in step S4, the person detection unit 110 estimates a distance from the image forming apparatus 10 to the person H, and calculates a motion vector indicating motion of the person H. The process of detecting motion of the person H may be performed according to a well-known method; for example, the person detection unit 110 estimates a distance from the image forming apparatus 10 to the person H on the basis of a size of a body part detected from a captured image. The person detection unit 110 processes the captured image obtained by the first camera 15 frame by frame, and compares captured images corresponding to a plurality of frames with each other in time-series order. At this time, the person detection unit 110 detects the toes as the body part of the person H, and analyzes motion of the detected part so as to calculate a motion vector. The person detection unit 110 corrects the first camera image (a distorted image obtained using a fish-eye lens) acquired from the first camera 15 to a planar image (develops the first camera image in a plan view) and then detects motion of the person H.
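A minimal sketch of the two estimates mentioned above follows: distance from the apparent size of the detected body part, and a motion vector from two consecutive frames. The calibration constants, function names, and the assumption that toe positions are given in plan-view image coordinates are hypothetical.

```python
def estimate_distance_mm(toe_height_px, reference_height_px=40.0, reference_distance_mm=2000.0):
    """Estimate the distance to a person from the apparent size of a detected
    body part, assuming apparent size is inversely proportional to distance.
    The reference values are hypothetical calibration constants."""
    return reference_distance_mm * reference_height_px / toe_height_px


def motion_vector(prev_position, curr_position):
    """Motion vector of the detected toes between two consecutive frames,
    in coordinates of the first camera image developed in a plan view."""
    return (curr_position[0] - prev_position[0],
            curr_position[1] - prev_position[1])


def is_approaching(prev_distance_mm, curr_distance_mm, margin_mm=50.0):
    """True if the estimated distance has decreased by more than a margin."""
    return prev_distance_mm - curr_distance_mm > margin_mm


# Example: toes appear larger (closer) in the newer frame -> approach detected.
print(is_approaching(estimate_distance_mm(40.0), estimate_distance_mm(50.0)))  # -> True
```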
Next, the person detection unit 110 determines whether or not the approach of the person H present in the person detection region R1 to the image forming apparatus 10 has been detected (step S5). For example, in a case where it is determined that the person H is present in the person detection region R1 and moves toward the image forming apparatus 10, the person detection unit 110 performs an affirmative determination (YES) in step S5. In a case where a negative determination (NO) is performed in step S5, the flow returns to step S5, and this process is repeatedly performed.
In contrast, in a case where an affirmative determination (YES) is performed in step S5, the control unit 101 causes a mode of the image forming apparatus 10 to transition from the sleep mode to the normal mode (step S6). At this time, the control unit 101 instructs power corresponding to the normal mode to be supplied to each unit of the image forming apparatus 10 so as to activate each unit of the image forming apparatus 10. In addition, the control unit 101 starts the supply of power to the second camera 16 so as to activate the second camera 16.
In the present embodiment, instant transition from the sleep mode to the normal mode does not occur when the presence of the person H in the person detection region R1 is detected, but transition from the sleep mode to the normal mode occurs when the approach of the person H present in the person detection region R1 to the image forming apparatus 10 is detected. As a result of such control being performed, for example, in a case where the person H just passes through the person detection region R1, an opportunity for the image forming apparatus 10 to transition from the sleep mode to the normal mode is reduced.
If the transition from the sleep mode to the normal mode occurs in step S6, the face detection unit 111 analyzes the first camera image acquired from the first camera 15 and starts a process of detecting the face of the person H present in the person detection region R1 (step S7).
Next, the person detection unit 110 analyzes the first camera image acquired from the first camera 15 so as to determine whether or not the person H is present (stays) in the person operation region R2 (step S8). At this time, the person detection unit 110 analyzes the first camera image from the first camera 15 so as to detect a body part of the person H, and detects the presence of the person H in the person operation region R2 on the basis of a position and a size of the detected part. For example, the person detection unit 110 estimates a distance from the image forming apparatus 10 to the person H on the basis of the size of the detected body part, and specifies a direction in which the person H is present on the basis of the position of the detected body part.
In a case where an affirmative determination (YES) is performed in step S8, the flow returns to step S8, and the process of detecting the face of the person H started in step S7 is continued. Therefore, while the normal mode is maintained, the person detection unit 110 repeatedly performs the process of detecting the presence of the person H in the person operation region R2 until the person H is no longer detected in the person operation region R2.
On the other hand, in a case where a negative determination (NO) is performed in step S8, that is, the person H is not present in the person operation region R2 (the person H has exited from the person operation region R2), the control unit 101 starts clocking using a timer (step S9). In other words, the control unit 101 measures an elapsed time from the time when the person H is not present in the person operation region R2 with the timer.
Next, the person detection unit 110 determines whether or not the person H is present in the person operation region R2 (step S10). In step S10, the person detection unit 110 determines again whether or not the person H is present in the person operation region R2 after the person H is not present in the person operation region R2.
In a case where a negative determination (NO) is performed in step S10, the control unit 101 determines whether or not the time measured by the timer has exceeded a set period (step S11). The set period is, for example, one minute, but may be set to a time period other than one minute. In a case where a negative determination (NO) is performed in step S11, the control unit 101 returns to step S10 and continues the process. In steps S10 and S11, it is determined whether or not the state in which the person H is not present in the person operation region R2 lasts for the set period.
In contrast, in a case where an affirmative determination (YES) is performed in step S11, the control unit 101 causes a mode of the image forming apparatus 10 to transition from the normal mode to the sleep mode (step S12). At this time, the control unit 101 instructs power corresponding to the sleep mode to be supplied to each unit of the image forming apparatus 10, and stops an operation of each unit of the image forming apparatus 10 which is stopped during the sleep mode. Thereafter, the control unit 101 stops an operation of the first camera 15 if the pyroelectric sensor 14 does not detect the presence of the person H in the detection region F.
Here, a case is assumed in which the presence of the person H is detected again in the person operation region R2 before the set period elapses from the time when the person H is not present in the person operation region R2 after the timer starts clocking in step S9. In this case, the control unit 101 performs an affirmative determination (YES) in step S10 and also stops clocking of the timer so as to reset the timer (step S13). The control unit 101 returns to step S8 and continues the process. In other words, the process performed in a case where the person H is present in the person operation region R2 is performed again. Herein, a case where the same person H returns to the person operation region R2 is exemplified, but also in a case where another person H moves into the person operation region R2, the person detection unit 110 performs an affirmative determination (YES) in step S10.
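The timer-driven return to the sleep mode (steps S8 to S13) can be summarized as the following sketch, in which the analysis of the first camera image is replaced by a caller-supplied callable and the set period is one minute as in the example above; everything else is an assumption for illustration.

```python
import time

SET_PERIOD_S = 60.0  # the "set period" in the example above (one minute)


def monitor_operation_region(person_in_r2, now=time.monotonic):
    """Return "sleep" once no person has stayed in the person operation region
    R2 for the set period. person_in_r2 is a callable standing in for the
    analysis of the first camera image (steps S8, S10)."""
    absent_since = None
    while True:
        if person_in_r2():
            absent_since = None            # step S13: reset the timer
        elif absent_since is None:
            absent_since = now()           # step S9: start clocking
        elif now() - absent_since > SET_PERIOD_S:
            return "sleep"                 # step S12: transition to the sleep mode
        time.sleep(0.1)                    # poll again with the next frame
```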
Here, in the related art, in a case where authentication is performed by using a face image of a user, a person H (user) who intends to use the image forming apparatus 10 gives an instruction for capturing the face image and requests authentication of himself/herself. For example, the person H stands in the person operation region R2, and causes a face image to be captured in a state in which the user's face is directed toward the second camera 16 provided in the user interface 13. In contrast, in the image forming apparatus 10 of the present embodiment, a face image of the person H present in the person detection region R1 is captured by the first camera 15 in advance, and an authentication process is performed by using the captured face image of the person H in a state in which a specific condition is satisfied.
If the image forming apparatus 10 is set to the normal mode, as shown in step S7 of
On the other hand, in a case where an affirmative determination (YES) is performed in step S40, the face registration/authentication unit 112 performs a face authentication process of determining whether or not authentication is successful by using a result of the face detection and face image acquisition process in step S20, that is, the face image of the person H obtained from the first camera image which is acquired from the first camera 15 (step S60), and completes the process.
In
Each of the face detection and face image acquisition process in the above step S20 and the face authentication process in the above step S60 will be described in more detail.
First, with reference to
Herein, first, the person detection unit 110 and the face detection unit 111 acquire a first camera image captured by the first camera 15 (step S21). Next, the person detection unit 110 analyzes the first camera image acquired in step S21 so as to determine whether or not a person H is present in the person detection region R1 (step S22). In a case where a negative determination (NO) is performed in step S22, the flow returns to step S21, and the process is continued.
On the other hand, in a case where an affirmative determination (YES) is performed in step S22, the person detection unit 110 determines whether or not the person H whose presence has been detected in step S22 is in a state in which the presence has already been detected and is a tracked person (step S23). In a case where an affirmative determination (YES) is performed in step S23, the flow proceeds to step S25 to be described later.
In contrast, in a case where a negative determination (NO) is performed in step S23, the person detection unit 110 acquires a tracking ID for the person H whose presence has been detected in step S22 and stores the tracking ID in the storage unit 105, and starts tracking of the person H (step S24). The face detection unit 111 analyzes the first camera image acquired in step S21 so as to search for a face of the tracked person (step S25).
Next, the face detection unit 111 determines whether or not the face of the tracked person has been detected from the first camera image (step S26). In a case where a negative determination (NO) is performed in step S26, the flow proceeds to step S30 to be described later.
On the other hand, in a case where an affirmative determination (YES) is performed in step S26, the face detection unit 111 registers face information extracted from the face image of the tracked person in the storage unit 105 in correlation with the tracking ID of the tracked person (step S27). In the following description, a table in which the tracking ID is correlated with the face information will be referred to as a tracking table. The face detection unit 111 determines whether or not plural pieces (in this example, two pieces) of face information of the same tracked person are registered in the tracking table (step S28). In a case where a negative determination (NO) is performed in step S28, the flow proceeds to step S30 to be described later.
In contrast, in a case where an affirmative determination (YES) is performed in step S28, the selection unit 114 selects one of the two face information pieces registered in the tracking table in the storage unit 105, and deletes the other face information piece, which is not selected, from the storage unit 105 (step S29).
The person detection unit 110 acquires the first camera image captured by the first camera 15 (step S30). Next, the person detection unit 110 analyzes the first camera image acquired in step S30 so as to determine whether or not the tracked person is present in the person detection region R1 (step S31). In a case where an affirmative determination (YES) is performed in step S31, the flow returns to step S21, and the process is continued.
On the other hand, in a case where a negative determination (NO) is performed in step S31, the person detection unit 110 deletes the tracking ID and the face information of the tracked person (person H) whose presence is not detected in step S31 from the tracking table (step S32), returns to step S21, and continues the process.
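The maintenance of the tracking table in the face detection and face image acquisition process (steps S21 to S32) might be summarized as in the following sketch. The object attributes, callables, and tracking ID format are hypothetical stand-ins for the person detection unit 110, the face detection unit 111, and the selection unit 114.

```python
import itertools

tracking_table = {}                 # tracking ID -> face information (feature amount)
_tracking_ids = (f"C{n:03d}" for n in itertools.count(1))


def update_tracking(detected_persons, search_face, select_one):
    """One pass of the face detection and face image acquisition process.

    detected_persons: persons found in the person detection region R1, each
    carrying a 'tracking_id' attribute (None if not yet tracked).
    search_face(person): face information or None (steps S25-S26).
    select_one(a, b): keeps one of two face information pieces (step S29).
    """
    for person in detected_persons:
        if person.tracking_id is None:                       # step S24: start tracking
            person.tracking_id = next(_tracking_ids)
            tracking_table[person.tracking_id] = None
        face_info = search_face(person)                       # step S25
        if face_info is None:                                 # step S26: NO
            continue
        stored = tracking_table.get(person.tracking_id)
        if stored is None:                                    # step S27: register
            tracking_table[person.tracking_id] = face_info
        else:                                                 # steps S28-S29: keep one
            tracking_table[person.tracking_id] = select_one(stored, face_info)


def drop_lost(tracked_ids_still_present):
    """Steps S31-S32: delete tracked persons no longer present in the region R1."""
    for tid in list(tracking_table):
        if tid not in tracked_ids_still_present:
            del tracking_table[tid]
```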
Next, with reference to
Herein, first, the selection unit 114 selects a person H (target person) who is a target on which the instruction for the face authentication process is given in step S40 illustrated in
In contrast, in a case where an affirmative determination (YES) is performed in step S61, the face registration/authentication unit 112 determines whether or not face information of the same tracked person as the target person is registered in the storage unit 105 (step S62). In a case where a negative determination (NO) is performed in step S62, the flow proceeds to step S71 to be described later.
On the other hand, in a case where an affirmative determination (YES) is performed in step S62, the face registration/authentication unit 112 makes a request for face authentication by using face information of the target person whose registration in the tracking table is confirmed in step S62 (step S63). Next, the face registration/authentication unit 112 collates the face information of the target person with face information pieces of all registered persons registered in the registration table (step S64). The face registration/authentication unit 112 determines whether or not authentication has been successful (step S65). Here, in step S65, an affirmative determination (YES) is performed if the face information of the target person matches any one of the face information pieces of all the registered persons, and a negative determination (NO) is performed if the face information of the target person does not match any one of the face information pieces of all the registered persons.
In a case where an affirmative determination (YES) is performed in step S65, the notification unit 115 notifies the target person or the like that the authentication has been successful by using the projector 17 (step S66). The display unit 104 displays a UI screen (a screen after authentication is performed) for the target person which is set for the authenticated target person (step S67), and proceeds to step S74 to be described later.
On the other hand, in a case where a negative determination (NO) is performed in step S65, the person detection unit 110 determines whether or not a target person is present in the approach detection region R4 (step S68). In a case where a negative determination (NO) is performed in step S68, the flow returns to step S61, and the process is continued.
In contrast, in a case where an affirmative determination (YES) is performed in step S68, the notification unit 115 notifies the target person or the like that authentication has failed by using the projector 17 (step S69). The display unit 104 displays a UI screen (a screen before authentication is performed) corresponding to an authentication failure which is set for authentication failure (step S70), and proceeds to step S74 to be described later.
On the other hand, in a case where a negative determination (NO) is performed in step S61 and in a case where a negative determination (NO) is performed in step S62, the person detection unit 110 determines whether or not a target person is present in the approach detection region R4 (step S71). In a case where a negative determination (NO) is performed in step S71, the flow returns to step S61, and the process is continued.
In contrast, in a case where an affirmative determination (YES) is performed in step S71, the notification unit 115 notifies the target person or the like that a face image of the target person has not been acquired by using the projector 17 (step S72). The display unit 104 displays a UI screen (a screen before authentication is performed) corresponding to manual input authentication which is set for an authentication process using manual inputting (step S73), and proceeds to step S74 to be described later.
The face registration/authentication unit 112 deletes tracking IDs and face information pieces of all tracked persons registered in the tracking table (step S74), and completes the process.
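The decision flow of the face authentication process (steps S61 to S74) roughly corresponds to the following sketch, in which the collation of step S64 and the presence check for the approach detection region R4 are supplied by the caller, and the returned strings stand in for the UI screens and notifications; none of the names are from the embodiment.

```python
def face_authentication(target_id, tracking_table, collate, in_r4):
    """Simplified decision flow for steps S61 to S74 (an illustrative sketch).

    target_id: tracking ID of the selected target person, or None.
    collate(face_info): registration ID of a matching registered person, or
    None (steps S63-S64).
    in_r4(): True once the target person is inside the approach detection region R4.
    Returns the name of the UI screen to be displayed on the touch panel.
    In practice the tracking table is updated concurrently by the face
    detection and face image acquisition process.
    """
    while True:
        face_info = tracking_table.get(target_id) if target_id else None
        if face_info is None:                           # steps S61/S62: NO
            if in_r4():                                 # step S71
                return "manual-input authentication screen"    # steps S72-S73
            continue                                    # wait and check again
        registered_id = collate(face_info)              # steps S63-S64
        if registered_id is not None:                   # step S65: YES
            return f"screen for registered person {registered_id}"  # steps S66-S67
        if in_r4():                                     # step S68
            return "authentication-failure screen"      # steps S69-S70
```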
Next, the present embodiment will be described in more detail by using specific examples.
First, a description will be made of the registration table illustrated in
In the registration table illustrated in
In the registration table illustrated in
Of the two persons, the registered person information is registered as follows in relation to the user having the registration ID “R001”. First, “Fujitaro” is registered as the user name, and “simple copying”, “automatic scanning”, “simple box preservation”, “simple box operation”, “facsimile”, and “private printing (collective output)” are registered as application names. An application function and button design corresponding to each application name are also registered. Face information regarding the user having the registration ID “R001” is also registered.
The registered person information is registered as follows in relation to the user having the registration ID “R002”. First, “Fuji Hanako” is registered as the user name, and “simple copying”, “automatic scanning”, “simple box preservation”, “private printing (simple confirmation)”, “three sheets in normal printing”, “saved copying”, “start printing first shot”, and “highly clean scanning” are registered as application names. An application function and button design corresponding to each application name are also registered. Face information regarding the user having the registration ID “R002” is also registered.
Next, the tracking table illustrated in
In the tracking table illustrated in
Three persons H (tracking IDs “C001” to “C003”) are registered as tracked persons in the tracking table illustrated in
A description will be made of the instruction for starting the face authentication process, shown in step S40 of
In the present embodiment, in a case where it is detected, on the basis of an analysis result of the first camera image captured by the first camera 15, that a specific (single) person H among one or more persons H present in the person detection region R1 performs an action satisfying a specific condition, the instruction unit 113 outputs an instruction for starting the authentication process in step S60.
In
Here, in the first example, after the specific person H (the second person H2 in this example) enters the entry detection region R3 from the person detection region R1 and is thus selected as a tracked person, the tracked person is not changed from the specific person H to another person H even if another person H (the first person H1 in this example) enters the entry detection region R3 from the person detection region R1 in a state in which the specific person H continues to stay in the entry detection region R3.
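A minimal sketch of this target-selection rule follows, assuming the identifiers of tracked persons currently inside the entry detection region R3 are supplied in order of entry; the class and method names are hypothetical.

```python
class EntryTargetSelector:
    """First example: the first tracked person to enter the entry detection
    region R3 becomes the target of the face authentication process, and the
    selection is not changed while that person keeps staying in the region."""

    def __init__(self):
        self.target_id = None

    def on_region_update(self, ids_in_r3):
        """ids_in_r3: tracking IDs currently in R3, assumed ordered by entry time."""
        if self.target_id is not None and self.target_id in ids_in_r3:
            return self.target_id            # keep the current target person
        self.target_id = next(iter(ids_in_r3), None)   # earliest entrant, if any
        return self.target_id


selector = EntryTargetSelector()
print(selector.on_region_update(["C002"]))           # -> "C002" (second person enters first)
print(selector.on_region_update(["C002", "C001"]))   # -> "C002" (target not changed)
```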
In a case where authentication has been successful in the above-described way, the second person H2 as the target person comes close to the image forming apparatus 10. In a case where authentication has failed or a face image cannot be acquired, the second person H2 as the tracked person finds that authentication has not been successful before passing the face detection limit L, inside which it is hard to acquire a face image using the first camera 15.
Herein, a case where information that “a face image cannot be acquired” is presented in step S72 has been described, but presented information is not limited thereto. For example, in step S72, a notification that the person H is requested not to come close to an apparatus (the image forming apparatus 10), a notification that the person H is requested not to come close to an apparatus (the image forming apparatus 10) since face authentication of the person H is not completed, a notification that the person H is requested to stop, a notification that the person H is requested to stop since face authentication of the person H is not completed, a notification for informing that a facial part of the person H is deviated from an imaging region of the first camera 15, and the like may be performed.
In the above-described manner, in a state in which the second person H2 as the target person having undergone the face authentication process enters the person operation region R2 and stands in front of the user interface 13, a UI screen corresponding to the second person H2 is already displayed on the touch panel 130.
Here, a description will be made of the UI screen displayed on the touch panel 130 in steps S67, S70 and S73.
First, in a case where a target person is “Fujitaro” as a registered person who is registered in the registration table (refer to
Next, in a case where a target person is “Fuji Hanako” as a registered person who is registered in the registration table (refer to
Next, in a case where a target person is an unregistered person (for example, “Fujijirou”) who is not registered in the registration table (refer to
Finally, in a case where a target person is a registered person (who is herein “Fujitaro” but may be “Fuji Hanako”) who is registered in the registration table (refer to
As mentioned above, in the present embodiment, the content of the screens after authentication is performed (when authentication is successful), illustrated in
Here, a brief description will be made of cases where a face image of a tracked person can be detected and cannot be detected.
The face registration/authentication unit 112 of the present embodiment detects feature points at a plurality of facial parts (for example, 14 or more parts) such as the eyes, the nose, and the mouth in the face registration and face authentication, and extracts a feature amount of the face after correcting a size, a direction, and the like of the face in a three-dimensional manner. For this reason, in a case where the person H wears a mask or sunglasses so as to cover a part of the face, even if an image including the face of the person H is included in the first camera image, detection of feature points of the face and extraction of a feature amount cannot be performed from the first camera image. Also in a case where the person H faces straight sideways or backward with respect to the first camera 15, detection of feature points of the face and extraction of a feature amount cannot be performed from the first camera image. In such cases, a negative determination (NO) is performed in step S26 illustrated in
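The gate described above, in which face information is produced only when enough facial feature points are found, might look like the following sketch; the landmark detector and feature extractor are hypothetical callables, and only the count check mirrors the embodiment.

```python
REQUIRED_FACIAL_PARTS = 14   # the embodiment detects feature points at 14 or more parts


def extract_face_information(face_region, detect_landmarks, compute_feature):
    """Return a face feature amount, or None when the face is partly covered
    (mask, sunglasses) or turned sideways/backward so that too few feature
    points are found. detect_landmarks and compute_feature are hypothetical
    stand-ins for the actual detectors used by the embodiment."""
    landmarks = detect_landmarks(face_region)      # e.g., eyes, nose, mouth corners
    if landmarks is None or len(landmarks) < REQUIRED_FACIAL_PARTS:
        return None                                # leads to a NO determination in step S26
    return compute_feature(face_region, landmarks)  # after 3D size/pose correction
```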
Next, a brief description will be made of a method of selecting one face information piece in a case where a plurality of face information pieces are acquired in relation to the same tracked person.
As is clear from
In addition, for example, in a case where face information of the person H is acquired from a first camera image obtained by imaging a face of a person H obliquely facing the first camera 15 and is registered in the tracking table, and then face information of the person H is acquired from a first camera image obtained by imaging the face of the person H facing the front of the first camera 15, the latter face information may be selected and the former face information may be deleted in step S29.
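As one possible, purely illustrative policy for step S29, the selection unit 114 could keep the face information piece whose face is closer to frontal, as in the sketch below; the "frontalness" score and the dictionary layout are assumptions, not part of the embodiment.

```python
def select_one(face_a, face_b):
    """Step S29: keep one of two face information pieces for the same tracked
    person. Here the piece whose face is closer to frontal is kept; the
    'frontalness' value is a hypothetical score in [0, 1]."""
    return face_a if face_a["frontalness"] >= face_b["frontalness"] else face_b


# Example: a frontal capture replaces an earlier oblique capture.
oblique = {"frontalness": 0.4, "feature": [0.2, 0.7]}
frontal = {"frontalness": 0.9, "feature": [0.1, 0.8]}
print(select_one(oblique, frontal)["frontalness"])   # -> 0.9
```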
In the above-described first example, a description has been made of a case where the second person H2 enters the entry detection region R3 earlier than the first person H1, and thus the second person H2 becomes a target person. However, in a case where the first person H1 enters the entry detection region R3 earlier than the second person H2, the first person H1 becomes a target person.
In the above-described way, unless the first person H1 or the second person H2 who is being tracked in the person detection region R1 enters the entry detection region R3, a target person is not generated, and, as a result, the face authentication process in step S60 is not started.
Here, in the third example, after the first staying time period T1 of the specific person H (in this example, the first person H1) reaches the predefined time period T0, and thus the specific person H is selected as a target person, the target person is not changed from the specific person to another person even if the second staying time period T2 of another person (in this example, the second person H2) reaches the predefined time period T0 in a state in which the specific person H continuously stays in the person detection region R1.
In a case where authentication has been successful in the above-described way, the first person H1 as the target person comes close to the image forming apparatus 10. In a case where authentication has failed or a face image cannot be acquired, the first person H1 as the tracked person finds that authentication has not been successful before passing the face detection limit L, inside which it is hard to acquire a face image using the first camera 15.
In the above-described way, in a state in which the first person H1 who is the target person having undergone the face authentication process enters the person operation region R2 and stands in front of the user interface 13, the UI screen corresponding to the first person H1 is already displayed on the touch panel 130.
In the above-described third example, a description has been made of a case where the first staying time period T1 of the first person H1 reaches the predefined time period T0 earlier than the second staying time period T2 of the second person H2, and thus the first person H1 becomes a target person. However, in a case where the second staying time period T2 of the second person H2 reaches the predefined time period T0 earlier than the first staying time period T1 of the first person H1, the second person H2 becomes a target person.
In the above-described third example, a description has been made of a case where both of the first person H1 and the second person H2 enter the person detection region R1 and then continue to stay in the person detection region R1. However, for example, in a case where the first person H1 moves to the outside of the person detection region R1 before the first staying time period T1 of the first person H1 reaches the predefined time period T0, and the second person H2 moves to the outside of the person detection region R1 before the second staying time period T2 of the second person H2 reaches the predefined time period T0, in the same manner as in the second example, a target person is not generated, and the face authentication process in step S60 is not started.
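A hedged sketch of the staying-time rule of the third example follows: each tracked person's entry time into the person detection region R1 is recorded, leaving the region resets it, and the first person whose staying time reaches the predefined time period T0 becomes the target person. The numeric value of T0 and all names are hypothetical.

```python
import time

PREDEFINED_TIME_T0_S = 3.0   # hypothetical value of the predefined time period T0


class StayingTimeSelector:
    """Third example: the person H whose staying time in the person detection
    region R1 first reaches T0 becomes the target person; the selection is not
    changed while that person keeps staying in R1 (a simplified sketch)."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.entered_at = {}      # tracking ID -> time of entry into R1
        self.target_id = None

    def on_region_update(self, ids_in_r1):
        t = self.now()
        for tid in ids_in_r1:
            self.entered_at.setdefault(tid, t)
        for tid in list(self.entered_at):       # leaving R1 resets the staying time
            if tid not in ids_in_r1:
                del self.entered_at[tid]
                if tid == self.target_id:
                    self.target_id = None
        if self.target_id is None:
            for tid, since in self.entered_at.items():
                if t - since >= PREDEFINED_TIME_T0_S:
                    self.target_id = tid        # first person whose staying time reaches T0
                    break
        return self.target_id
```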
Here, in the fourth example, after the specific person H (in this example, the second person H2) approaches the image forming apparatus 10 and is thus selected as a target person, the target person is not changed from the specific person to another person even if another person (in this example, the first person H1) approaches the image forming apparatus 10 in a state in which the specific person H continuously approaches the image forming apparatus 10.
In a case where authentication has been successful in the above-described way, the second person H2 as the target person comes close to the image forming apparatus 10. In a case where authentication has failed or a face image cannot be acquired, the second person H2 as the tracked person finds that authentication has not been successful before passing the face detection limit L, inside which it is hard to acquire a face image using the first camera 15.
In the state illustrated in
In the above-described way, in a state in which the second person H2 who is the target person having undergone the face authentication process enters the person operation region R2 and stands in front of the user interface 13, the UI screen corresponding to the second person H2 is already displayed on the touch panel 130.
In the above-described fourth example, a description has been made of a case where the second person H2 present in the person detection region R1 approaches the image forming apparatus 10, and the first person H1 present in the same person detection region R1 becomes distant from the image forming apparatus 10, so that the second person H2 becomes a target person. However, in a case where the first person H1 present in the person detection region R1 approaches the image forming apparatus 10, and the second person H2 present in the same person detection region R1 becomes distant from the image forming apparatus 10, the first person H1 becomes a target person.
In the above-described fourth example, a description has been made of a case where the second person H2 present in the person detection region R1 approaches the image forming apparatus 10, and the first person H1 present in the same person detection region R1 becomes distant from the image forming apparatus 10. However, in a case where both of the first person H1 and the second person H2 become distant from the image forming apparatus 10, in the same manner as in the above-described second example, a target person is not generated, and the face authentication process in step S60 is not started. On the other hand, in a case where both of the first person H1 and the second person H2 approach the image forming apparatus 10, a person H who approaches the image forming apparatus 10 faster becomes a target person.
[Others] Here, in the above-described first to fourth examples, a description has been made of a case where two persons H (the first person H1 and the second person H2) are present around the image forming apparatus 10; however, there may be a case where a single person H is present around the image forming apparatus 10, and a case where three or more persons H are present around the image forming apparatus 10.
In the present embodiment, in a case where face information of a target person (tracked person) has not been registered in the face authentication process in step S62 illustrated in
In the present embodiment, in controlling of a mode of the image forming apparatus 10 illustrated in
In the present embodiment, a case where the projector 17 displaying an image is used as the notification unit 115 has been described as an example, but the present invention is not limited thereto. Methods may be used in which sound is output from, for example, a sound source, or light is emitted from, for example, a light source (lamp). Here, in the present embodiment, when authentication using the acquired face image has been successful (step S66), when authentication using the acquired face image has failed (step S69), and when authentication cannot be performed since a face image cannot be acquired (step S72), a notification is performed, but the present invention is not limited thereto. For example, (1) before a face image is detected from a first camera image, (2) before authentication using a face image is performed after the face image is detected from the first camera image, and (3) after an authentication process is performed, a notification may be performed.
Next, Exemplary Embodiment 2 of the present invention will be described in detail. Hereinafter, a description of the same constituent elements as in Embodiment 1 will be omitted as appropriate.
In the present embodiment, the instruction unit 113 outputs an instruction for starting an authentication process using the face image captured by the first camera 15 to the face registration/authentication unit 112. The instruction unit 113 outputs an instruction for displaying an authentication result of performing the authentication process on the touch panel 130 as a UI screen, to the display unit 104.
In the present embodiment, a UI screen corresponding to an authentication result is not displayed on the touch panel 130 right after an authentication process is performed, but the UI screen corresponding to the authentication result is displayed on the touch panel 130 in a case where a predetermined condition is satisfied after the authentication process is performed.
If the image forming apparatus 10 is set to the normal mode, as shown in step S7 of
On the other hand, in a case where an affirmative determination (YES) is performed in step S40, the face registration/authentication unit 112 performs a face authentication process of determining whether or not authentication is successful by using a result of the face detection and face image acquisition process in step S20, that is, the face image of the person H obtained from the first camera image which is acquired from the first camera 15 (step S60B).
In
After the face authentication process in step S60B is completed, the control unit 101 determines whether or not there is an instruction for starting to display a UI screen corresponding to an authentication result which is a result of the face authentication process on the touch panel 130 from the instruction unit 113 (step S80).
In a case where an affirmative determination (YES) is performed in step S80, the display unit 104 displays the UI screen corresponding to the authentication result, prepared in the face authentication process in step S60B on the touch panel 130 (step S100). The content of the UI screen which is prepared in the face authentication process in step S60B and is displayed in step S100 will be described later. The face registration/authentication unit 112 deletes tracking IDs and face information pieces of all tracked persons registered in the tracking table (step S120), and completes the process. The tracking table (a tracking ID and face information of a tracked person) will be described later.
In contrast, in a case where a negative determination (NO) is performed in step S80, the person detection unit 110 analyzes the first camera image acquired from the first camera 15 so as to determine whether or not the person H (referred to as a target person) who is a target of the face authentication process in step S60B is present in the person detection region R1 (step S140). In a case where an affirmative determination (YES) is performed in step S140, the flow returns to step S80, and the process is continued.
On the other hand, in a case where a negative determination (NO) is performed in step S140, the face registration/authentication unit 112 determines whether or not authentication of the target person has been successful (the face is authenticated) in the face authentication process in step S60B (step S160). In a case where a negative determination (NO) is performed in step S160, the flow proceeds to step S200 to be described later.
In contrast, in a case where an affirmative determination (YES) is performed in step S160, the face registration/authentication unit 112 cancels the face authentication performed in the face authentication process in step S60B (step S180), and proceeds to the next step S200.
The control unit 101 discards the UI screen corresponding to the authentication result, prepared in the face authentication process in step S60B (step S200). Here, the content of the UI screen discarded in step S200 is the same as that described in the above step S100.
Thereafter, the person detection unit 110 deletes the tracking ID and the face information of the person H (tracked person) whose presence is not detected in step S140 from the tracking table (step S220), returns to step S20, and continues the process.
Each of the face detection and face image acquisition process in the above step S20 and the face authentication process in the above step S60B will be described in more detail.
As described above,
Next, with reference to
Herein, first, the selection unit 114 selects a person H (target person) who is a target on which the instruction for the face authentication process is given in step S40 illustrated in
In contrast, in a case where an affirmative determination (YES) is performed in step S61, the face registration/authentication unit 112 determines whether or not face information of the same tracked person as the target person is registered in the storage unit 105 (step S62). In a case where a negative determination (NO) is performed in step S62, the flow proceeds to step S71 to be described later.
On the other hand, in a case where an affirmative determination (YES) is performed in step S62, the face registration/authentication unit 112 makes a request for face authentication by using face information of the target person whose registration in the tracking table is confirmed in step S62 (step S63). Next, the face registration/authentication unit 112 collates the face information of the target person with face information pieces of all registered persons registered in the registration table (step S64). The face registration/authentication unit 112 determines whether or not authentication has been successful (step S65). Here, in step S65, an affirmative determination (YES) is performed if the face information of the target person matches any one of the face information pieces of all the registered persons, and a negative determination (NO) is performed if the face information of the target person does not match any one of the face information pieces of all the registered persons.
In a case where an affirmative determination (YES) is performed in step S65, the notification unit 115 notifies the target person or the like that the authentication has been successful by using the projector 17 (step S66). The display unit 104 prepares a UI screen (a screen after authentication is performed) for the target person which is set for the authenticated target person (step S67B), and finishes the process.
On the other hand, in a case where a negative determination (NO) is performed in step S65, the person detection unit 110 determines whether or not a target person is present in the approach detection region R4 (step S68). In a case where a negative determination (NO) is performed in step S68, the flow returns to step S61, and the process is continued.
In contrast, in a case where an affirmative determination (YES) is performed in step S68, the notification unit 115 notifies the target person or the like that authentication has failed by using the projector 17 (step S69). The display unit 104 prepares a UI screen (a screen before authentication is performed) corresponding to an authentication failure which is set for authentication failure (step S70B), and finishes the process.
On the other hand, in a case where a negative determination (NO) is performed in step S61 and in a case where a negative determination (NO) is performed in step S62, the person detection unit 110 determines whether or not a target person is present in the approach detection region R4 (step S71). In a case where a negative determination (NO) is performed in step S71, the flow returns to step S61, and the process is continued.
In contrast, in a case where an affirmative determination (YES) is performed in step S71, the notification unit 115 notifies the target person or the like that a face image of the target person has not been acquired by using the projector 17 (step S72). The display unit 104 prepares a UI screen (a screen before authentication is performed) corresponding to manual input authentication which is set for an authentication process using manual inputting (step S73B), and finishes the process.
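The collation performed in steps S62 to S73B may be pictured, as a minimal sketch only, with the tracking table and the registration table reduced to dictionaries and face matching reduced to equality of a feature string; the function and the returned action names are hypothetical and do not reflect the actual feature comparison.

```python
# Sketch of the collation in the face authentication process
# (steps S62 to S73B); all names and the matching rule are hypothetical.

def face_authentication(target_id, tracking_table, registration_table):
    face = tracking_table.get(target_id)                  # step S62
    if face is None:                                      # NO in step S62
        return "notify_no_face_image_and_prepare_manual_input_screen"  # S72, S73B
    for name, registered_face in registration_table.items():           # steps S63, S64
        if face == registered_face:                       # YES in step S65
            return f"notify_success_and_prepare_screen_for:{name}"     # S66, S67B
    return "notify_failure_and_prepare_failure_screen"    # NO in S65 -> S69, S70B

tracking = {1: "feature_A"}
registered = {"Fujitaro": "feature_A", "Fuji Hanako": "feature_B"}
print(face_authentication(1, tracking, registered))   # success for Fujitaro
print(face_authentication(2, tracking, registered))   # no face image acquired
```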
Then, the authentication procedure illustrated in
In the present embodiment, in a case where it is detected that a specific (single) person H performs an action satisfying a specific condition among one or more persons H present in the person detection region R1 on the basis of an analysis result of the first camera image captured by the first camera 15, in step S40, the instruction unit 113 outputs an instruction for starting the authentication process in step S60B. In the present embodiment, in a case where it is detected that the specific person H performs an action satisfying a predefined condition after the face authentication process in step S60B is completed, in step S80, the instruction unit 113 outputs an instruction for starting to display the UI screen in step S100.
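The two-stage structure of these instructions, one condition that starts the face authentication process and a second condition that starts the display of the UI screen, may be sketched as follows; the condition functions are placeholders that can be replaced by any of the conditions used in the examples described below, and the names are hypothetical.

```python
# Sketch of the two-stage instruction: the "specific condition" triggers
# step S40 (start of the face authentication process in step S60B), and
# the "predefined condition" triggers step S80 (start of the UI display
# in step S100). The conditions themselves are pluggable placeholders.

def instructions_for(person, specific_condition, predefined_condition):
    events = []
    if specific_condition(person):      # checked in step S40
        events.append("start_face_authentication")   # step S60B
    if predefined_condition(person):    # checked in step S80
        events.append("start_ui_display")             # step S100
    return events

# Hypothetical example: conditions based on the region the person is in.
person = {"region": "R3"}
print(instructions_for(person,
                       lambda p: p["region"] in ("R3", "R4"),
                       lambda p: p["region"] == "R4"))
# ['start_face_authentication']
```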
Hereinafter, three examples (a first example to a third example) in which the "specific condition" and the "predefined condition" differ will be described in order. In each of the three examples, a description will be made of a pattern (referred to as a first pattern) in which a UI screen prepared so as to correspond to a specific person H who is a target of the face authentication process in step S60B is displayed on the touch panel 130, and a pattern (referred to as a second pattern) in which the UI screen is not displayed.
Here, in
First, a description will be made of the "first example", in which any one of the persons H present in the person detection region R1 entering the entry detection region R3 from the person detection region R1 is used as the instruction for starting the authentication process in step S40, and that person H further entering the approach detection region R4 from the entry detection region R3 is used as the instruction for starting the display process in step S80.
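As a minimal sketch only, the two region transitions used as instructions in this first example may be modelled with the regions reduced to nested distance thresholds from the apparatus; the threshold values and function names are assumptions made purely for illustration.

```python
# Sketch of the first example: moving from the person detection region R1
# into the entry detection region R3 is the instruction of step S40, and
# moving from R3 into the approach detection region R4 is the instruction
# of step S80. The radii are hypothetical.

R1_RADIUS, R3_RADIUS, R4_RADIUS = 5.0, 2.5, 1.0   # metres, assumed values

def region_of(distance):
    if distance <= R4_RADIUS:
        return "R4"
    if distance <= R3_RADIUS:
        return "R3"
    if distance <= R1_RADIUS:
        return "R1"
    return "outside"

def instruction_for(previous_distance, current_distance):
    prev, cur = region_of(previous_distance), region_of(current_distance)
    if prev == "R1" and cur == "R3":
        return "start_authentication"   # instruction in step S40
    if prev == "R3" and cur == "R4":
        return "start_display"          # instruction in step S80
    return None

print(instruction_for(3.0, 2.0))   # start_authentication
print(instruction_for(2.0, 0.8))   # start_display
```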
(First Pattern)
In the first example, the respective processes in steps S61 to S65 are completed before the tracked person (herein, the first person H1) having entered the entry detection region R3 passes through the face detection limit L. In the first example, the notification in step S66, S69, or S72 is performed before the tracked person (herein, the first person H1) having entered the entry detection region R3 passes through the face detection limit L. Along therewith, the projector 17 displays a message M on the screen 18. Here, in a case where an affirmative determination (YES) is performed in steps S61 and S62 and then an affirmative determination (YES) is performed in step S65, the projector 17 displays a text image, for example, "authentication has been successful" as the message M in step S66. In a case where an affirmative determination (YES) is performed in steps S61 and S62 and then a negative determination (NO) is performed in step S65, the projector 17 displays a text image, for example, "authentication has failed" or "you are not registered as a user" as the message M in step S69. In a case where a negative determination (NO) is performed in step S61 or S62, the projector 17 displays a text image, for example, "a face image cannot be acquired" as the message M in step S72.
In a case where authentication has been successful in the above-described way, the specific person H (herein, the first person H1) as the target person comes near to the image forming apparatus 10. In a case where authentication has failed or a face image cannot be acquired, the specific person H (herein, the first person H1) as the tracked person finds that authentication has not been successful before passing through the face detection limit L, beyond which it is hard to acquire a face image by using the first camera 15.
Herein, a case where the information that "a face image cannot be acquired" is presented in step S72 has been described, but the presented information is not limited thereto. For example, in step S72, a notification requesting the person H not to come near to the apparatus (the image forming apparatus 10), a notification requesting the person H not to come near to the apparatus (the image forming apparatus 10) because face authentication of the person H is not completed, a notification requesting the person H to stop, a notification requesting the person H to stop because face authentication of the person H is not completed, a notification informing that a facial part of the person H is deviated from the imaging region of the first camera 15, and the like may be performed.
In the first example, the respective processes in steps S67, S70 and S73 are completed before the target person (herein, the first person H1) having entered the entry detection region R3 enters the approach detection region R4. The content of UI screens respectively prepared in steps S67, S70 and S73 will be described later.
Here, in the first example, display of a UI screen in step S100 may be performed before the target person (herein, the first person H1) having entered the approach detection region R4 enters the person operation region R2. In the above-described way, in a state in which the target person (herein, the first person H1) enters the person operation region R2 and stands in front of the user interface 13, a UI screen corresponding to an authentication result of the target person is already displayed on the touch panel 130.
Then, here, a description will be made of UI screens which are prepared in steps S67, S70 and S73 and are displayed on the touch panel 130 in step S100.
First, in a case where a target person is “Fujitaro” as a registered person who is registered in the registration table (refer to
Next, in a case where a target person is “Fuji Hanako” as a registered person who is registered in the registration table (refer to
Next, in a case where a target person is an unregistered person (for example, “Fujijirou”) who is not registered in the registration table (refer to
Finally, in a case where a target person is a registered person (who is herein “Fujitaro” but may be “Fuji Hanako”) who is registered in the registration table (refer to
As mentioned above, in the present embodiment, the content of the screens after authentication is performed (when authentication is successful), illustrated in
Here, a brief description will be made of cases where a face image of a tracked person can be detected and cannot be detected.
The face registration/authentication unit 112 of the present embodiment detects feature points at a plurality of facial parts (for example, 14 or more parts) such as the eyes, the nose, and the mouth in the face registration and the face authentication, and extracts a feature amount of the face after correcting a size, a direction, and the like of the face in a three-dimensional manner. For this reason, in a case where the person H wears a mask or sunglasses so as to cover a part of the face, even if the face of the person H is included in the first camera image, detection of the feature points of the face and extraction of the feature amount cannot be performed from the first camera image. Also in a case where the person H faces straight sideways or backward with respect to the first camera 15, detection of the feature points of the face and extraction of the feature amount cannot be performed from the first camera image. In such cases, a negative determination (NO) is performed in step S26 illustrated in
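A minimal sketch of the determination described here, assuming that the detector returns a list of detected facial feature points and that 14 points is the usable threshold mentioned above, is given below; the detector output format is hypothetical.

```python
# Sketch of the determination of whether a usable face image has been
# obtained (the determination in step S26). The detector output format
# is hypothetical; only the count of detected feature points is used.

MIN_FEATURE_POINTS = 14   # the "14 or more parts" mentioned above

def face_usable(feature_points):
    """feature_points: list of (part_name, x, y) tuples from a detector."""
    return len(feature_points) >= MIN_FEATURE_POINTS

frontal_face = [("part", i, i) for i in range(16)]   # uncovered, frontal view
masked_face = [("part", i, i) for i in range(6)]     # mask covers nose and mouth
print(face_usable(frontal_face))   # True  -> affirmative determination in step S26
print(face_usable(masked_face))    # False -> negative determination in step S26
```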
Next, a brief description will be made of a method of selecting one face information piece in a case where a plurality of face information pieces are acquired in relation to the same tracked person.
As is clear from
In addition, for example, in a case where face information of the person H is acquired from a first camera image obtained by imaging a face of a person H obliquely facing the first camera 15 and is registered in the tracking table, and then face information of the person H is acquired from a first camera image obtained by imaging the face of the person H facing the front of the first camera 15, the latter face information may be selected and the former face information may be deleted in step S29.
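The replacement described here may be sketched as keeping, for each tracked person, the face information piece judged to be closest to a frontal view; the frontality score and the entry layout are hypothetical simplifications of the processing in step S29.

```python
# Sketch of selecting one face information piece per tracked person
# (step S29): a newly acquired piece replaces the stored one only if it
# was taken closer to a frontal view. The frontality score is hypothetical.

def update_face_info(tracking_entry, new_face, new_frontality):
    stored = tracking_entry.get("face")
    if stored is None or new_frontality > stored["frontality"]:
        tracking_entry["face"] = {"data": new_face, "frontality": new_frontality}

entry = {}
update_face_info(entry, "oblique view of the face", 0.4)
update_face_info(entry, "frontal view of the face", 0.9)
print(entry["face"]["data"])   # frontal view of the face (the oblique one was replaced)
```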
(Second Pattern)
Here, in a case where authentication is successful in the face authentication process in step S60B (YES in step S160), the face authentication is canceled in step S180. In both the case where authentication is successful (YES in step S160) and the case where authentication fails (NO in step S160) in the face authentication process in step S60B, the UI screens prepared in steps S67, S70 and S73 are discarded in step S200. In step S220, the tracking ID and the face information regarding the target person (herein, the first person H1) are deleted from the tracking table. However, information regarding the person H (herein, the second person H2) other than the target person is not deleted from the tracking table; the flow returns to step S20, and tracking and search for a face are then continued.
As mentioned above, in the first example, unless the first person H1 or the second person H2 who is being tracked in the person detection region R1 enters the entry detection region R3, a target person is not generated, and, as a result, the face authentication process in step S60B is not started. In the first example, unless a specific person H as a target person further enters the approach detection region R4, the UI screen as an authentication result of the target person (the specific person H) in step S100 is not displayed on the touch panel 130.
Here, in the first example, a description has been made of a case where both of the first person H1 and the second person H2 enter the person detection region R1, then the first person H1 enters the entry detection region R3 earlier than the second person H2, and thus the first person H1 becomes a target person. However, in a case where the second person H2 enters the entry detection region R3 earlier than the first person H1, the second person H2 becomes a target person. In a case where both of the first person H1 and the second person H2 enter the person detection region R1, and then both of the first person H1 and the second person H2 move to the outside of the person detection region R1 without entering the entry detection region R3, a target person is not generated, and thus the face authentication process in step S60B is not started.
Here, in the first example, after the specific person H (the first person H1 in this example) enters the entry detection region R3 from the person detection region R1 and is thus selected as a tracked person, the tracked person is not changed from the specific person H (the first person H1) to another person H (the second person H2) even if another person H (the second person H2 in this example) enters the entry detection region R3 from the person detection region R1 in a state in which the specific person H continues to stay in the entry detection region R3.
Next, a description will be made of the "second example", in which the staying time period of any one of the persons H present in the person detection region R1, measured from entry into the person detection region R1, reaching a first predefined time period which is set in advance is used as the instruction for starting the authentication process in step S40, and the staying time period reaching a second predefined time period (the second predefined time period > the first predefined time period) is used as the instruction for starting the display process in step S80.
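As a sketch under assumed values only, the two staying-time conditions of this second example may be written as follows; the concrete lengths of Ta and Tb are hypothetical.

```python
# Sketch of the second example: the staying time period T, measured from
# entry into the person detection region R1, gives the instruction of
# step S40 when it reaches Ta and the instruction of step S80 when it
# reaches Tb (Tb > Ta). The values of Ta and Tb are assumed.

TA_SECONDS = 2.0   # first predefined time period Ta
TB_SECONDS = 5.0   # second predefined time period Tb

def instruction_for_staying_time(staying_time_seconds):
    if staying_time_seconds >= TB_SECONDS:
        return "start_display"          # instruction in step S80
    if staying_time_seconds >= TA_SECONDS:
        return "start_authentication"   # instruction in step S40
    return None

print(instruction_for_staying_time(1.0))   # None
print(instruction_for_staying_time(3.0))   # start_authentication
print(instruction_for_staying_time(6.0))   # start_display
```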
(First Pattern)
Also in the second example, the respective processes in steps S61 to S65 are completed before the target person (herein, the first person H1) having entered the entry detection region R3 passes through the face detection limit L. Also in the second example, the notification in step S66, S69 or S72 is performed before the target person (herein, the first person H1) having entered the entry detection region R3 passes through the face detection limit L. Along therewith, the projector 17 displays the message M on the screen 18. The content of the message M is the same as described in the first pattern in the first example illustrated in
In the second example, the respective processes in steps S67, S70 and S73 are completed before the staying time period T of the target person (herein, the first person H1), which has reached the first predefined time period Ta, reaches the second predefined time period Tb (Tb > Ta). The content of the UI screens prepared in steps S67, S70 and S73 is the same as described with reference to
Here, in the second example, the display of the UI screen in step S100 may be performed before the target person (herein, the first person H1) whose staying time period T has reached the second predefined time period Tb enters the person operation region R2. In the above-described way, in a state in which the target person (herein, the first person H1) enters the person operation region R2 and stands in front of the user interface 13, a UI screen corresponding to an authentication result of the target person is already displayed on the touch panel 130.
(Second Pattern)
Here, in a case where authentication is successful in the face authentication process in step S60B (YES in step S160), the face authentication is canceled in step S180. In both the case where authentication is successful (YES in step S160) and the case where authentication fails (NO in step S160) in the face authentication process in step S60B, the UI screens prepared in steps S67, S70 and S73 are discarded in step S200. In step S220, the tracking ID and the face information regarding the target person (herein, the first person H1) are deleted from the tracking table. However, information regarding the person H (herein, the second person H2) other than the target person is not deleted from the tracking table; the flow returns to step S20, and tracking and search for a face are then continued.
As mentioned above, in the second example, unless the staying time period T of the first person H1 or the second person H2 who is being tracked in the person detection region R1 reaches the first predefined time period Ta, a target person is not generated, and, as a result, the face authentication process in step S60B is not started. In the second example, unless a staying time period T of a specific person H as a target person further reaches the second predefined time period Tb, the UI screen as an authentication result of the target person (the specific person H) in step S100 is not displayed on the touch panel 130.
Here, in the second example, a description has been made of a case where both of the first person H1 and the second person H2 enter the person detection region R1, then the first staying time period T1 of the first person H1 reaches the first predefined time period Ta earlier than the second staying time period T2 of the second person H2, and thus the first person H1 becomes a target person. However, in a case where the second staying time period T2 of the second person H2 reaches the first predefined time period Ta earlier than the first staying time period T1 of the first person H1, the second person H2 becomes a target person. In a case where both of the first person H1 and the second person H2 enter the person detection region R1, and then both of the first person H1 and the second person H2 move to the outside of the person detection region R1 before the staying time periods T thereof reach the first predefined time period Ta, a target person is not generated, and thus the face authentication process in step S60B is not started.
Here, in the second example, after the staying time period T (herein, the first staying time period T1) of the specific person H (the first person H1 in this example) reaches the first predefined time period Ta, and thus the specific person H is selected as a tracked person, the tracked person is not changed from the specific person H (the first person H1) to another person H (the second person H2) even if a staying time period (herein, the second staying time period T2) of another person H (the second person H2 in this example) reaches the first predefined time period Ta in a state in which the specific person H continues to stay in the person detection region R1.
Finally, a description will be made of the "third example", in which any one of the persons H entering the person detection region R1 and then approaching the image forming apparatus 10 is used as the instruction for starting the face authentication process in step S40, and that person H further approaching the image forming apparatus 10 is used as the instruction for starting the display process in step S80.
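A minimal sketch of this third example, assuming that the movement of a tracked person is available as a sequence of distances from the apparatus, is given below; the distance representation and the function name are assumptions for illustration.

```python
# Sketch of the third example: the first movement of a tracked person in
# a direction of coming close to the image forming apparatus 10 gives the
# instruction of step S40, and a further approach gives the instruction
# of step S80. Positions are reduced to distances, which is an assumption.

def approach_instructions(distances):
    """distances: successive distances (in metres) of one tracked person."""
    events = []
    approaching = False
    for prev, cur in zip(distances, distances[1:]):
        if cur < prev and not approaching:
            events.append("start_authentication")   # instruction in step S40
            approaching = True
        elif cur < prev and approaching:
            events.append("start_display")           # instruction in step S80
            break
    return events

print(approach_instructions([4.0, 3.2, 2.1]))   # ['start_authentication', 'start_display']
print(approach_instructions([4.0, 4.0, 4.5]))   # [] -> no target person is generated
```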
(First Pattern)
Also in the third example, the respective processes in steps S61 to S65 are completed before the target person (herein, the first person H1) having entered the entry detection region R3 passes through the face detection limit L. Also in the third example, the notification in step S66, S69 or S72 is performed before the target person (herein, the first person H1) having entered the entry detection region R3 passes through the face detection limit L. Along therewith, the projector 17 displays the message M on the screen 18. The content of the message M is the same as described in the first pattern in the first example illustrated in
In the third example, the respective processes in steps S67, S70 and S73 are completed before the target person (herein, the first person H1) having entered the entry detection region R3 enters the approach detection region R4. The content of UI screens prepared in steps S67, S70 and S73 is the same as described with reference to
Here, in the third example, the display of the UI screen in step S100 may be performed before the target person (herein, the first person H1) who approaches the image forming apparatus 10 in the person detection region R1 enters the person operation region R2. In the above-described way, in a state in which the target person (herein, the first person H1) enters the person operation region R2 and stands in front of the user interface 13, a UI screen corresponding to an authentication result of the target person is already displayed on the touch panel 130.
(Second Pattern)
Here, in a case where authentication is successful in the face authentication process in step S60B (YES in step S160), the face authentication is canceled in step S180. In both the case where authentication is successful (YES in step S160) and the case where authentication fails (NO in step S160) in the face authentication process in step S60B, the UI screens prepared in steps S67, S70 and S73 are discarded in step S200. In step S220, the tracking ID and the face information regarding the target person (herein, the first person H1) are deleted from the tracking table. However, information regarding the person H (herein, the second person H2) other than the target person is not deleted from the tracking table; the flow returns to step S20, and tracking and search for a face are then continued.
As mentioned above, in the third example, unless the first person H1 or the second person H2 who is being tracked in the person detection region R1 moves in a direction of coming close to the image forming apparatus 10, a target person is not generated, and, as a result, the face authentication process in step S60B is not started. In the third example, unless a specific person H as a target person further moves in a direction of approaching the image forming apparatus 10, the UI screen as an authentication result of the target person (the specific person H) in step S100 is not displayed on the touch panel 130.
Here, in the third example, a description has been made of a case where both of the first person H1 and the second person H2 enter the person detection region R1, then the first person H1 moves in a direction of coming close to the image forming apparatus 10 earlier than the second person H2, and thus the first person H1 becomes a target person. However, in a case where the second person H2 moves in a direction of coming close to the image forming apparatus 10 earlier than the first person H1, the second person H2 becomes a target person. In a case where both of the first person H1 and the second person H2 enter the person detection region R1, and then both of the first person H1 and the second person H2 move to the outside of the person detection region R1 without moving in a direction of coming close to the image forming apparatus 10, a target person is not generated, and thus the face authentication process in step S60B is not started.
Here, in the third example, after the specific person H (the first person H1 in this example) moves in a direction of coming close to the image forming apparatus 10 in the person detection region R1, and is thus selected as a tracked person, the tracked person is not changed from the specific person H (the first person H1) to another person H (the second person H2) even if another person H (the second person H2 in this example) moves in a direction of coming close to the image forming apparatus 10 in a state in which the specific person H continues to move in a direction of coming close to the image forming apparatus 10 in the person detection region R1.
[Others]
Here, in the above-described first to third examples, a description has been made of a case where two persons H (the first person H1 and the second person H2) are present around the image forming apparatus 10; however, there may be a case where a single person H is present around the image forming apparatus 10, and a case where three or more persons H are present around the image forming apparatus 10.
Although not described in the first to third examples, in a case where a tracked person who is given a tracking ID in step S24 as a result of entering the person detection region R1 from the outside of the person detection region R1 but does not become a target person (for example, the second person H2) in step S60B moves to the outside of the person detection region R1 from the inside of the person detection region R1, the tracking ID and face information regarding the tracked person (herein, the second person H2) are deleted from the tracking table in step S32.
In the present embodiment, in a case where face information of a target person (tracked person) has not been registered in the face authentication process in step S62 illustrated in
In the present embodiment, in controlling of a mode of the image forming apparatus 10 illustrated in
In the present embodiment, a case where the projector 17 displaying an image is used as the notification unit 115 has been described as an example, but the present invention is not limited thereto. Methods may be used in which sound is output from, for example, a sound source, or light is emitted from, for example, a light source (lamp). Here, in the present embodiment, when authentication using the acquired face image has been successful (step S66), when authentication using the acquired face image has failed (step S69), and when authentication cannot be performed since a face image cannot be acquired (step S72), a notification is performed, but the present invention is not limited thereto. For example, (1) before a face image is detected from a first camera image, (2) before authentication using a face image is performed after the face image is detected from the first camera image, and (3) after an authentication process is performed, a notification may be performed.
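The alternative notification methods mentioned here may be sketched as interchangeable notifier objects behind a common interface; the class names and the message routing are hypothetical and only illustrate that the projector, a sound source, or a lamp can be substituted for one another.

```python
# Sketch of interchangeable notification methods (projector, sound, lamp).
# All class names are hypothetical.

class ProjectorNotifier:
    def notify(self, message):
        print(f"[projector 17] displays: {message}")

class SoundNotifier:
    def notify(self, message):
        print(f"[sound source] announces: {message}")

class LampNotifier:
    def notify(self, message):
        print(f"[lamp] blinks to indicate: {message}")

def notify_result(notifier, face_acquired, success):
    if not face_acquired:
        notifier.notify("a face image cannot be acquired")     # step S72
    elif success:
        notifier.notify("authentication has been successful")  # step S66
    else:
        notifier.notify("authentication has failed")           # step S69

notify_result(ProjectorNotifier(), face_acquired=True, success=True)
notify_result(SoundNotifier(), face_acquired=True, success=False)
notify_result(LampNotifier(), face_acquired=False, success=False)
```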
The embodiment(s) discussed above may disclose the following matters.
[1] It is a processing apparatus including:
an imaging unit that images the vicinity of the processing apparatus;
a display unit that displays a screen correlated with an image of a person captured by the imaging unit; and
an instruction unit that instructs the display unit to start display,
in which the imaging unit starts imaging before an instruction is given by the instruction unit, and
the display unit starts to display a screen correlated with the image of the person captured by the imaging unit after the instruction is given by the instruction unit.
[2] It may be the processing apparatus according to [1], in which the imaging unit captures an image of a person present in a first region, and
the instruction unit instructs the display unit to start display in a case where a person is present in a second region which is located inside the first region and is narrower than the first region.
[3] It may be the processing apparatus according to [1], in which the imaging unit captures an image of a person present in a first region, and
the instruction unit instructs the display unit to start display in a case where a person present in the first region stays in the first region for a set period of time or more which is set in advance.
[4] It may be the processing apparatus according to [1], in which the imaging unit captures an image of a person present in a first region, and
the instruction unit instructs the display unit to start display in a case where a person present in the first region approaches the processing apparatus.
The foregoing description of the exemplary embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---
2015-153702 | Aug 2015 | JP | national |
2015-196260 | Oct 2015 | JP | national |