This disclosure relates to the technical fields of an information processing system, an information processing method, and a recording medium.
A known system of this type detects the position of a subject and starts to capture an iris image. For example, Patent Literature 1 discloses a technique of detecting, by using a ranging sensor, that a subject has entered a predetermined distance range, and of starting imaging in a case where the subject has entered the predetermined distance range.
As another related art, Patent Literature 2 discloses that a three-dimensional image is acquired on the basis of images captured from a plurality of different viewpoints. Patent Literature 3 discloses that an image is captured when a corneal apex of a subject eye coincides with an intersection of the optical axes of an illumination optical system and an imaging optical system. Patent Literature 4 discloses that a target located at an intersection of the optical axes of a camera A and a camera B is displayed on a screen.
This disclosure aims to improve the techniques disclosed in the Citation List.
An information processing system according to an example aspect of this disclosure includes: an acquisition unit that acquires a first image and a second image that are captured such that optical axes intersect at a predetermined point; a determination unit that determines whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and a control unit that performs a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
An information processing method according to an example aspect of this disclosure is an information processing method executed by at least one computer, the information processing method including: acquiring a first image and a second image that are captured such that optical axes intersect at a predetermined point; determining whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and performing a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
A recording medium according to an example aspect of this disclosure is a recording medium on which a computer program that allows at least one computer to execute an information processing method is recorded, the information processing method including: acquiring a first image and a second image that are captured such that optical axes intersect at a predetermined point; determining whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and performing a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
Hereinafter, an information processing system, an information processing method, and a recording medium according to example embodiments will be described with reference to the drawings.
An information processing system according to a first example embodiment will be described with reference to the drawings.
First, a hardware configuration of the information processing system according to the first example embodiment will be described with reference to the drawings.
As illustrated in the drawings, the information processing system 10 includes a processor 11, a RAM (Random Access Memory) 12, a ROM (Read Only Memory) 13, a storage apparatus 14, an input apparatus 15, an output apparatus 16, and a camera 18.
The processor 11, the RAM 12, the ROM 13, the storage apparatus 14, the input apparatus 15, the output apparatus 16, and the camera 18 are connected through a data bus 17.
The processor 11 reads a computer program. For example, the processor 11 is configured to read a computer program stored in at least one of the RAM 12, the ROM 13, and the storage apparatus 14. Alternatively, the processor 11 may read a computer program stored in a computer-readable recording medium, by using a not-illustrated recording medium reading apparatus. The processor 11 may acquire (i.e., may read) a computer program from a not-illustrated apparatus disposed outside the information processing system 10, through a network interface. The processor 11 controls the RAM 12, the storage apparatus 14, the input apparatus 15, and the output apparatus 16 by executing the read computer program. Especially in the present example embodiment, when the processor 11 executes the read computer program, a functional block for performing processing for capturing an image of a target is realized or implemented in the processor 11. That is, the processor 11 may function as a controller for executing each control of the information processing system 10.
The processor 11 may be configured as, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a FPGA (Field-Programmable Gate Array), a DSP (Digital Signal Processor), or an ASIC (Application Specific Integrated Circuit). The processor 11 may be one of these, or a plurality of them may be used in parallel.
The RAM 12 temporarily stores the computer program to be executed by the processor 11. The RAM 12 also temporarily stores data used by the processor 11 while the computer program is executed. The RAM 12 may be, for example, a DRAM (Dynamic Random Access Memory) or an SRAM (Static Random Access Memory). Furthermore, another type of volatile memory may be used instead of the RAM 12.
The ROM 13 stores the computer program to be executed by the processor 11. The ROM 13 may also store fixed data. The ROM 13 may be, for example, a P-ROM (Programmable Read Only Memory) or an EPROM (Erasable Programmable Read Only Memory). Furthermore, another type of non-volatile memory may be used instead of the ROM 13.
The storage apparatus 14 stores data that the information processing system 10 holds for a long term. The storage apparatus 14 may operate as a temporary storage apparatus of the processor 11. The storage apparatus 14 may include, for example, at least one of a hard disk apparatus, a magneto-optical disk apparatus, an SSD (Solid State Drive), and a disk array apparatus.
The input apparatus 15 is an apparatus that receives an input instruction from a user of the information processing system 10. The input apparatus 15 may include, for example, at least one of a keyboard, a mouse, and a touch panel. The input apparatus 15 may be configured as a portable terminal such as a smartphone or a tablet. The input apparatus 15 may be an apparatus that allows audio input, such as a microphone.
The output apparatus 16 is an apparatus that outputs information about the information processing system 10 to the outside. For example, the output apparatus 16 may be a display apparatus (e.g., a display) that is configured to display the information about the information processing system 10. The output apparatus 16 may also be an apparatus that outputs the information in a form other than an image; for example, it may be a speaker that audio-outputs the information about the information processing system 10. The output apparatus 16 may be configured as a portable terminal such as a smartphone or a tablet.
The camera 18 is a camera disposed at a point where an image of a target can be captured. The target here is not limited to a human being, and may include an animal such as a dog, a cat, a snake, or a bird, a robot, and the like. The camera 18 may capture an image of the whole of the target, or may image a part of the target. The camera 18 may be a camera that captures a still image, or a camera that captures a video. The camera 18 may be configured as a visible light camera or as a near-infrared camera. A plurality of cameras 18 may be provided. The plurality of cameras 18 may be of the same type or of different types. For example, the camera 18 may be equipped with a function that automatically turns off its power when it is not capturing an image. In this case, for example, a camera including components with a short life, such as a liquid lens or a motor, may be preferentially turned off. A specific configuration of the camera 18 will be described in detail in other example embodiments later.
Although an example of the hardware configuration of the information processing system 10 has been described above, the hardware configuration is not limited to this example.
Next, a functional configuration of the information processing system 10 according to the first example embodiment will be described with reference to the drawings.
The information processing system 10 according to the first example embodiment is configured as an imaging system that captures an image of a target. More specifically, the information processing system 10 is configured to capture an iris image of the target. The use of the iris image captured by the information processing system 10 is not particularly limited, but it may be used for biometric authentication, for example. For example, the information processing system 10 may be configured as a part of a system that images a walking target and performs biometric authentication (a so-called walk-through authentication system).
As illustrated in the drawings, the information processing system 10 according to the first example embodiment includes, as functional blocks realized in the processor 11 described above, an acquisition unit 110, a determination unit 120, and a control unit 130.
The acquisition unit 110 is configured to acquire a first image and a second image. The first image and the second image are images captured such that optical axes of optical systems that capture the respective images intersect each other at a predetermined point. The first image and the second image may be captured by the camera 18 (see the hardware configuration described above).
The determination unit 120 is configured to perform a determination processing using the first image and the second image acquired by the acquisition unit 110. Specifically, the determination unit 120 is configured to determine whether or not a difference between a position of the target in the first image and a position of the target in the second image is in a predetermined range. The “difference” here may be, for example, a distance (i.e., a Euclidean distance) between first coordinates indicating the position of the target in the first image and second coordinates indicating the position of the target in the second image, or may be a degree of overlap between a first detection area indicating the position of the target in the first image and a second detection area indicating the position of the target in the second image (e.g., (first detection area ∩ second detection area) / (first detection area ∪ second detection area)). Therefore, the determination unit 120 may determine, for example, whether or not the distance between the first coordinates and the second coordinates is in a predetermined range. Alternatively, the determination unit 120 may determine whether or not the degree of overlap between the first detection area in the first image and the second detection area in the second image is in a predetermined range. The “predetermined range” here is a threshold set to determine that there is no large difference between the position of the target in the first image and the position of the target in the second image (in other words, that the difference is within a threshold set in advance), and an appropriate value may be set in advance. The predetermined range may be set by a human, or its value may be optimized by iterative learning. The determination unit 120 may perform a processing of normalizing the first image and the second image (i.e., of matching their scales) when the difference is calculated. For example, the determination unit 120 may acquire a scale ratio of the first image and the second image in advance, and may perform the normalization processing on the basis of the scale ratio. Alternatively, the determination unit 120 may perform the processing of normalizing the first image and the second image by using intrinsic parameters (e.g., focal length, pixel size information, etc.) of the camera 18 that captures the first image and the second image. A sketch of this difference calculation is shown below.
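For illustration only, the difference calculation and threshold determination described above may be sketched as follows in Python (the function names, coordinate conventions, and threshold value are hypothetical assumptions, not part of this disclosure):

```python
import math

def euclidean_difference(first_xy, second_xy):
    """Distance between the target position in the first image and in the second image."""
    return math.hypot(first_xy[0] - second_xy[0], first_xy[1] - second_xy[1])

def overlap_difference(box_a, box_b):
    """Degree of overlap (intersection over union) between two detection areas
    given as (x1, y1, x2, y2) bounding boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def in_predetermined_range(first_xy, second_xy, scale_ratio=1.0, max_distance=10.0):
    """Normalize the second position by a known scale ratio, then threshold the distance."""
    normalized = (second_xy[0] * scale_ratio, second_xy[1] * scale_ratio)
    return euclidean_difference(first_xy, normalized) <= max_distance
```

Either the distance-based or the overlap-based criterion could serve as the determination, with the threshold tuned for the installation.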
The control unit 130 is configured to control the capture of the iris image of the target, on the basis of a result of the determination by the determination unit 120. Specifically, the control unit 130 may control the camera 18 to capture the iris image of the target in a case where the difference between the position of the target in the first image and the position of the target in the second image is determined to be in the predetermined range. The camera 18 (i.e., the camera 18 that captures the iris image) controlled here may be different from the camera 18 that captures the first image and the second image.
Next, with reference to the drawings, a situation in which the first image and the second image are captured in the information processing system 10 according to the first example embodiment will be described.
In the illustrated example, the first image and the second image are captured by optical systems whose optical axes intersect at the predetermined point; in the following, this predetermined point is also referred to as a “trigger point”, since reaching it triggers the capture of the iris image.
As illustrated in the drawings, in a case where the target has reached the predetermined point at which the optical axes intersect, the position of the target in the first image and the position of the target in the second image substantially coincide; that is, the difference between the two positions is in the predetermined range.
On the other hand, as illustrated in the drawings, in a case where the target has not reached the predetermined point (e.g., the target is in front of or behind the point), the position of the target in the first image and the position of the target in the second image deviate from each other; that is, the difference between the two positions is not in the predetermined range.
Next, a flow of operation of the information processing system 10 according to the first example embodiment will be described with reference to the drawings.
As illustrated in the drawings, when the operation of the information processing system 10 according to the first example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the determination unit 120 detects the position of the target from each of the first image and the second image acquired by the acquisition unit 110 (step S102). The position of the target may be detected as the position of a part set in advance. For example, the determination unit 120 may detect, as the position of the target, the position of the eyes, a nose, a mouth, an entire face, a hand, a foot, or the like. In a case where the positions of the eyes are used, for example, the determination unit 120 may detect the right eye position in each of the first image and the second image, or the left eye position in each of the first image and the second image. Alternatively, the determination unit 120 may detect the center coordinates of the right eye (or of the left eye) in each of the first image and the second image, or may detect, in each image, the midpoint between the center coordinates of the right eye and those of the left eye. The above is an example of detecting the eyes, but even in a case where another part is detected, the position of the target may be similarly detected on the basis of various points. In a case where the position of the nose is used, for example, the determination unit 120 may detect the nose position in each of the first image and the second image, or the center coordinates of the nose in each image (e.g., the midpoint between an upper end and a lower end of the nose, or between a right end and a left end of the nose). In a case where the position of the face is used, the determination unit 120 may detect the position of the entire face in each of the first image and the second image, or the center coordinates of the entire face in each image (e.g., the midpoint between an upper end and a lower end of the entire face, or between a right end and a left end of the entire face). Alternatively, the determination unit 120 may detect the positions of a plurality of the parts described above and set the center of those positions as the position of the target; for example, it may detect the right eye position, the left eye position, and the nose position, and set their center as the position of the target. In a case where the target is not detected from at least one of the first image and the second image, the subsequent processing may be omitted, and the processing may be restarted from the step S101. A sketch of this position detection is shown below.
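As a non-limiting illustration of the position detection described above (the landmark names and data layout are assumptions, not the disclosed method), the position of the target may be reduced to one point as follows:

```python
def target_position(landmarks, parts=("right_eye", "left_eye", "nose")):
    """Return the center of the selected part coordinates as the target position.

    landmarks: dict mapping part name -> (x, y) pixel coordinates, e.g.
    {"right_eye": (310, 220), "left_eye": (360, 218), "nose": (335, 250)}.
    """
    points = [landmarks[p] for p in parts if p in landmarks]
    if not points:
        return None  # target part not detected; caller restarts from step S101
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return (cx, cy)
```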
Subsequently, the determination unit 120 determines whether or not the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (step S103). The result of the determination by the determination unit 120 is outputted to the control unit 130. When the difference between the position of the target in the first image and the position of the target in the second image is not in the predetermined range (the step S103: NO), the processing is restarted from the step S101.
On the other hand, when the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (the step S103: YES), the control unit 130 performs the control of capturing the iris image of the target (step S104). Only one iris image may be captured, or a plurality of them may be captured. The iris image may be captured as each frame of a video.
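Combining the steps S101 to S104, the overall trigger flow may be sketched as the following loop (a hedged illustration; capture_pair, detect_position, in_range, and capture_iris_image are hypothetical stand-ins for the units described above):

```python
def trigger_loop(capture_pair, detect_position, in_range, capture_iris_image):
    """Repeat steps S101-S103 until the trigger condition holds, then perform step S104."""
    while True:
        first_image, second_image = capture_pair()   # step S101
        p1 = detect_position(first_image)            # step S102
        p2 = detect_position(second_image)
        if p1 is None or p2 is None:
            continue  # target not detected; restart from step S101
        if in_range(p1, p2):                         # step S103
            return capture_iris_image()              # step S104
```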
Next, a technical effect obtained by the information processing system 10 according to the first example embodiment will be described.
As described above, in the information processing system 10 according to the first example embodiment, the control of capturing the iris image is performed in a case where the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range. In this way, it is possible to detect that the target has reached the predetermined point from the captured images themselves, and to start capturing the iris image at an appropriate timing.
The information processing system 10 according to a second example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a flow of operation of the information processing system 10 according to the second example embodiment will be described.
As illustrated in the drawings, when the operation of the information processing system 10 according to the second example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the determination unit 120 detects the position of the target from each of the first image and the second image acquired by the acquisition unit 110 (step S102). Then, the determination unit 120 determines whether or not the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (step S103). The result of the determination by the determination unit 120 is outputted to the control unit 130. When the difference between the position of the target in the first image and the position of the target in the second image is not in the predetermined range (the step S103: NO), the processing is restarted from the step S101.
On the other hand, when the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (the step S103: YES), the control unit 130 identifies the eye position of the target, by using the first image and the second image (step S201). Although there is no specific limitation on a method of identifying the eye position of the target, an area including the face of the target (a face area) may be detected, and the eye position may be estimated from the detected face area, for example.
Subsequently, the control unit 130 performs a control of capturing the iris image of the target, on the basis of the identified eye position (step S202). For example, the control unit 130 may perform a control of moving the camera 18 to a position where the identified eye position can be captured, and then starting the imaging. Alternatively, the control unit 130 may perform a control of selecting a camera 18 capable of imaging the identified eye position from among a plurality of cameras 18 at different levels, and then starting the imaging. Alternatively, the control unit 130 may perform a control of changing an imaging range such that the identified eye position can be captured, by rotating a mirror provided for the camera 18, and then starting the imaging. For example, by disposing the mirror in the imaging range of the camera 18 (i.e., such that the camera 18 images the target through the mirror) and performing rotary drive on the mirror in accordance with the eye position of the target, it is possible to change the imaging range of the camera 18 and acquire the iris image. Moving the camera 18, selecting the camera 18, and driving the mirror described above may also be performed on the basis of the position of a part other than the eyes; for example, the camera 18 may be moved or selected, or the mirror may be rotary-driven, on the basis of the nose position or the face position of the target. For example, it is possible to acquire a face image on the basis of the face position, or of the nose position located near the center of the face. A sketch of the camera selection is shown below.
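As a minimal sketch of the camera selection mentioned above (the camera layout and data structure are assumptions, not the disclosed configuration):

```python
def select_camera_by_eye_height(cameras, eye_y):
    """Pick the camera whose vertical coverage contains the identified eye position.

    cameras: list of (camera_id, y_min, y_max) describing each camera's imaging
    range in a shared coordinate system (a hypothetical layout of cameras at
    different levels).
    """
    for camera_id, y_min, y_max in cameras:
        if y_min <= eye_y <= y_max:
            return camera_id
    # fall back to the camera whose range center is closest to the eye position
    return min(cameras, key=lambda c: abs((c[1] + c[2]) / 2 - eye_y))[0]
```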
Next, a technical effect obtained by the information processing system 10 according to the second example embodiment will be described.
As described above, in the information processing system 10 according to the second example embodiment, the eye position of the target is identified from the first image and the second image, and the iris image is captured on the basis of the identified eye position. In this way, it is possible to capture the iris image appropriately in accordance with the eye position, which differs from one target to another.
The information processing system 10 according to a third example embodiment will be described with reference to the drawings.
First, with reference to the drawings, the position of the target used for the determination in the information processing system 10 according to the third example embodiment will be described.
As illustrated in the drawings, in the third example embodiment, the position of the target in the first image and the position of the target in the second image are a position of a right eye and a position of a left eye of the target, respectively. That is, the determination unit 120 determines whether or not a difference between the right eye position detected from the first image and the left eye position detected from the second image is in the predetermined range.
Next, a technical effect obtained by the information processing system 10 according to the third example embodiment will be described.
As described above, in the information processing system 10 according to the third example embodiment, the determination is performed by using the position of the right eye and the position of the left eye of the target. In this way, the trigger determination can be performed on the basis of the positions of the eyes, which are the very parts to be imaged in the iris image, so that the iris image can be captured at a more appropriate timing.
The information processing system 10 according to a fourth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a positional relation between the predetermined point and a focus position in the information processing system 10 according to the fourth example embodiment will be described.
As illustrated in the drawings, in the fourth example embodiment, the predetermined point is set at a rear side of a focus position when viewed from the camera 18 that captures the iris image. Therefore, the target who approaches the camera 18 first reaches the predetermined point, and thereafter passes through the focus position.
Next, with reference to the drawings, a flow of operation of the information processing system 10 according to the fourth example embodiment will be described.
As illustrated in the drawings, when the operation of the information processing system 10 according to the fourth example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the determination unit 120 detects the position of the target from each of the first image and the second image acquired by the acquisition unit 110 (step S102). Then, the determination unit 120 determines whether or not the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (step S103). The result of the determination by the determination unit 120 is outputted to the control unit 130. When the difference between the position of the target in the first image and the position of the target in the second image is not in the predetermined range (the step S103: NO), the processing is restarted from the step S101.
On the other hand, when the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (the step S103: YES), the control unit 130 continuously captures the iris images (step S401). Thereafter, the control unit 130 determines whether or not to end the continuous imaging (step S402). The control unit 130 may determine to end the continuous imaging, for example, when the target passes through the focus position (see the positional relation described above).
When the control unit 130 does not determine to end the imaging (the step S402: NO), the continuous imaging of the iris image is continued. On the other hand, when the control unit 130 determines to end the imaging (the step S402: YES), the continuous imaging of the iris image is ended, and a series of operation steps are ended.
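A hedged sketch of the continuous imaging of the steps S401 and S402, assuming hypothetical camera and tracker interfaces, with a frame and time budget as additional end conditions that are not specified by this disclosure:

```python
import time

def continuous_capture(camera, target_tracker, max_frames=20, timeout_s=2.0):
    """Step S401/S402 sketch: keep capturing iris frames until the target
    passes through the focus position or a frame/time budget is exhausted."""
    frames = []
    start = time.monotonic()
    while len(frames) < max_frames and time.monotonic() - start < timeout_s:
        frames.append(camera.capture())             # step S401: capture one iris frame
        if target_tracker.passed_focus_position():  # step S402: end condition
            break
    return frames
```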
Next, a technical effect obtained by the information processing system 10 according to the fourth example embodiment will be described.
As described above, in the information processing system 10 according to the fourth example embodiment, the iris images of the target who approaches the camera are continuously captured after the difference becomes in the predetermined range. In this way, the iris images can be captured while the target passes through the focus position, which increases the possibility of acquiring an in-focus iris image.
The information processing system 10 according to a fifth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a functional configuration of the information processing system 10 according to the fifth example embodiment will be described.
As illustrated in the drawings, the information processing system 10 according to the fifth example embodiment includes, in addition to the components described above, a selection unit 140 and an authentication unit 150 as functional blocks realized in the processor 11.
The selection unit 140 is configured to select at least one high-quality iris image from a plurality of iris images that are continuously captured after the trigger determination. The quality of the iris image may be determined, for example, by calculating a quality score (i.e., a score indicating the quality of the image). For example, the selection unit 140 may select one image with the highest quality score from the plurality of iris images, may select a predetermined number of images in descending order of the quality score, or may select an image(s) whose quality score exceeds a predetermined value. A sketch of this selection is shown below.
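As an illustrative sketch of the selection described above (the quality metric is left abstract, since the disclosure does not fix a specific score):

```python
def select_best_iris_images(images, quality_score, top_k=1, min_score=None):
    """Rank continuously captured iris images by a quality score and keep the best.

    quality_score: callable returning a higher-is-better score for an image
    (e.g., a sharpness measure of the iris region; the metric is an assumption).
    """
    scored = sorted(images, key=quality_score, reverse=True)
    if min_score is not None:
        scored = [img for img in scored if quality_score(img) >= min_score]
    return scored[:top_k]
```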
The authentication unit 150 is configured to perform a biometric authentication processing (iris authentication), on the basis of the iris image of the target captured as a result of the control by the control unit 130. Since the iris authentication can adopt existing techniques as appropriate, a detailed description thereof is omitted here. The authentication unit 150 may be configured to perform various types of processing in accordance with a result of the iris authentication. For example, the authentication unit 150 may be configured to perform a processing of unlocking a gate in a case where the iris authentication is successful.
Next, with reference to the drawings, a flow of operation of the information processing system 10 according to the fifth example embodiment will be described.
As illustrated in the drawings, when the operation of the information processing system 10 according to the fifth example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the determination unit 120 detects the position of the target from each of the first image and the second image acquired by the acquisition unit 110 (step S102). Then, the determination unit 120 determines whether or not the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (step S103). The result of the determination by the determination unit 120 is outputted to the control unit 130. When the difference between the position of the target in the first image and the position of the target in the second image is not in the predetermined range (the step S103: NO), the processing is restarted from the step S101.
On the other hand, when the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (the step S103: YES), the control unit 130 continuously captures the iris images (step S401). Thereafter, the control unit 130 determines whether or not to end the continuous imaging (step S402). When the control unit 130 does not determine to end the imaging (the step S402: NO), the continuous imaging of the iris image is continued.
On the other hand, when the control unit 130 determines to end the imaging (the step S402: YES), the continuous imaging of the iris image is ended, and the selection unit 140 selects at least one high-quality image from the plurality of iris images captured so far (step S501). Then, by using the iris image selected by the selection unit 140, the authentication unit 150 performs the iris authentication (step S502).
Next, a technical effect obtained by the information processing system 10 according to the fifth example embodiment will be described.
As described above, in the information processing system 10 according to the fifth example embodiment, at least one high-quality image is selected from the continuously captured iris images, and the iris authentication is performed by using the selected image. In this way, the accuracy of the iris authentication can be improved as compared with a case where an image of lower quality is used.
The information processing system 10 according to a sixth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a camera configuration of the information processing system 10 according to the sixth example embodiment will be described.
As illustrated in the drawings, in the sixth example embodiment, the first image is captured by a first camera 181, and the second image is captured by a second camera 182 that is disposed at a different angle from that of the first camera 181. The iris image is captured by a third camera 183 that is different from the first camera 181 and the second camera 182.
The first camera 181 and the second camera 182 described above may be calibrated in advance. In this case, it is possible to acquire the position (the depth information) of the target with high accuracy by using stereo vision. Since the first camera 181 and the second camera 182 are arranged such that the optical axes intersect, it is possible to acquire the depth information with high accuracy by limiting a stereo search range to the center of the angle of view and by using corresponding point information (e.g., information about eye coordinates, face part coordinates, etc.) of the target determined to be near the trigger point. A sketch of such triangulation is shown below.
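For illustration, depth may be recovered from the calibrated pair by linear triangulation of the corresponding points mentioned above (a standard DLT sketch; the 3x4 projection matrices are assumed to come from the advance calibration, not from this disclosure):

```python
import numpy as np

def triangulate_point(P1, P2, xy1, xy2):
    """Linear (DLT) triangulation of one corresponding point from two
    calibrated views. P1 and P2 are 3x4 projection matrices; xy1 and xy2 are
    the corresponding pixel coordinates (e.g., eye coordinates) in the first
    and second images. Returns the 3D point, whose depth component gives the
    distance information."""
    x1, y1 = xy1
    x2, y2 = xy2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates
```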
Next, a technical effect obtained by the information processing system 10 according to the sixth example embodiment will be described.
As described above, in the information processing system 10 according to the sixth example embodiment, the first image and the second image are captured by two cameras arranged at different angles, and the iris image is captured by another camera. In this way, the trigger determination and the capture of the iris image can each be performed by a camera suitable for the respective purpose.
The information processing system 10 according to a seventh example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a camera arrangement of the information processing system 10 according to the seventh example embodiment will be described.
As illustrated in the drawings, in the seventh example embodiment, the third camera 183 that captures the iris image is disposed in front of the target, and the first camera 181 and the second camera 182 are respectively arranged on a left and a right of the third camera 183.
Next, a technical effect obtained by the information processing system 10 according to the seventh example embodiment will be described.
As described above, in the information processing system 10 according to the seventh example embodiment, the first camera 181 and the second camera 182 are arranged on the left and the right of the third camera 183 disposed in front of the target. In this way, the iris image can be captured from the front of the target at the timing when the target reaches the predetermined point.
The information processing system 10 according to the eighth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a camera arrangement of the information processing system 10 according to the eighth example embodiment will be described.
As illustrated in the drawings, in the eighth example embodiment, each of the first camera 181, the second camera 182, and the third camera 183 is disposed at a position where the target is imaged diagonally, at an angle deviated from a front of the target.
Next, a technical effect obtained by the information processing system 10 according to the eighth example embodiment will be described.
As described above, in the information processing system 10 according to the eighth example embodiment, each camera images the target diagonally from an angle deviated from the front. In this way, the cameras can be installed flexibly even in an environment where a camera cannot be disposed directly in front of the target.
The information processing system 10 according to a ninth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a configuration of a camera and a mirror in the information processing system 10 according to the ninth example embodiment will be described.
As illustrated in the drawings, in the ninth example embodiment, the first image and the second image are captured by a fourth camera 184 through a mirror 200.
The mirror 200 has a first surface 201 and a second surface 202 at different angles. The first surface 201 is a surface provided to capture the first image, and the second surface 202 is a surface provided to capture the second image. Therefore, the first surface 201 and the second surface 202 are set at angles such that the optical axes of the fourth camera 184 reflected by the respective surfaces intersect at the trigger point. Furthermore, the entire mirror 200 is disposed at an angle of 45 degrees with respect to a horizontal direction, for example. By setting the angle of the mirror 200 in this way, the fourth camera 184 is allowed to image the front (i.e., the front side in the drawing) through the mirror 200.
The first surface 201 and the second surface 202 of the mirror 200 may not be clearly partitioned as in the illustrated example, as long as the first image and the second image can be captured such that the optical axes intersect at the trigger point.
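As a small geometric sketch related to the surface angles described above (the vectors below are assumptions used to check a layout, not disclosed values), the optical axis reflected by each surface can be computed with the standard reflection formula, and the two reflected axes can then be checked for intersection at the intended trigger point:

```python
import numpy as np

def reflect_direction(ray, normal):
    """Reflect a camera optical-axis direction off a mirror surface.

    ray: unit vector of the incoming optical axis; normal: unit normal of the
    mirror surface. Returns the reflected direction r' = r - 2(r.n)n."""
    ray = np.asarray(ray, dtype=float)
    normal = np.asarray(normal, dtype=float)
    return ray - 2.0 * np.dot(ray, normal) * normal
```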
Next, with reference to the drawings, the capture of the first image and the second image using the mirror 200 will be described.
As illustrated in the drawings, the fourth camera 184 captures the first image through the first surface 201 and captures the second image through the second surface 202. When the target reaches the trigger point, the position of the target in the first image and the position of the target in the second image substantially coincide, so that the trigger determination can be performed in the same manner as in the example embodiments described above.
Next, a technical effect obtained by the information processing system 10 according to the ninth example embodiment will be described.
As described above, in the information processing system 10 according to the ninth example embodiment, the first image and the second image are captured by the single fourth camera 184 through the mirror 200. In this way, the number of cameras can be reduced as compared with a configuration in which the two images are captured by separate cameras.
In the above example, the mirror 200 includes two surfaces, but the mirror may include three or more surfaces. For example, the mirror 200 may include a third surface in addition to the first surface 201 and the second surface 202. Such a configuration may be realized, for example, by bending one mirror at two points, or by three mirrors. The three surfaces may be provided, for example, as a surface for imaging the target from the right side, a surface for imaging the target from the front, and a surface for imaging the target from the left side. In this instance, in addition to the first image and the second image, a third image captured through the third surface may be used to determine that the target reaches the trigger point. In a case where the first image, the second image, and the third image are used, the determination unit 120 may determine whether or not a difference between the target position in the first image, the target position in the second image, and the target position in the third image is in a predetermined range. Then, when the difference between them is in the predetermined range, the iris image of the target may be captured. In addition, of the three surfaces, the surface for imaging the target from the front may be used to perform face authentication. Alternatively, the face images reflected from the surface for imaging the target from the right side, the surface for imaging the target from the front, and the surface for imaging the target from the left side may all be used to perform the face authentication. In this case, a weight (i.e., a degree of impact on the face authentication) of the face image acquired from the surface for imaging the target from the front may be set larger than that of the face images acquired from the other surfaces (i.e., the surface for imaging the target from the right side and the surface for imaging the target from the left side). This is because a reliable face image (e.g., a face image facing the front) is more likely to be acquired in a case where the target is imaged from the front, as compared with a case where the target is imaged from the right side or the left side. A sketch of this weighting is shown below.
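A minimal sketch of the weighted face-authentication fusion described above, assuming per-surface matching scores are already available (the weight values are assumptions, not disclosed values):

```python
def fused_face_score(front_score, right_score, left_score,
                     w_front=0.5, w_side=0.25):
    """Combine per-surface face-matching scores, giving the front view a
    larger degree of impact on the face authentication, as described above."""
    return w_front * front_score + w_side * right_score + w_side * left_score
```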
The information processing system 10 according to a tenth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a camera configuration of the information processing system 10 according to the tenth example embodiment will be described.
As illustrated in the drawings, in the tenth example embodiment, the first image is captured directly by a fifth camera 185, and the second image is captured by the fifth camera 185 through a mirror.
(First Image and Second Image)
Next, with reference to the drawings, the first image and the second image captured by the fifth camera 185 will be described.
As illustrated in the drawings, the first image is an image in which the fifth camera 185 directly captures the target, and the second image is an image in which the fifth camera 185 captures the target reflected in the mirror. Since the second image is captured through the mirror, the target in the second image may appear left-right reversed; in that case, the position of the target may be compared after this reversal is taken into account.
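If, as assumed above, the mirrored second image is left-right reversed, the detected position may be mapped back before the difference is computed; a hypothetical sketch (this normalization is an assumption, not the disclosed method):

```python
def normalize_mirror_position(xy, image_width):
    """Map a position detected in the left-right reversed (mirrored) second
    image back into the direct image's orientation before computing the
    difference between the two target positions."""
    x, y = xy
    return (image_width - 1 - x, y)
```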
Next, a technical effect obtained by the information processing system 10 according to the tenth example embodiment will be described.
As described above, in the information processing system 10 according to the tenth example embodiment, the first image and the second image are captured by the single fifth camera 185, one directly and one through the mirror. In this way, both images can be acquired with one camera and one mirror, which simplifies the configuration.
The information processing system 10 according to an eleventh example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a functional configuration of the information processing system 10 according to the eleventh example embodiment will be described.
As illustrated in the drawings, the information processing system 10 according to the eleventh example embodiment includes, in addition to the components described above, an identification unit 160 as a functional block realized in the processor 11.
The identification unit 160 is configured to identify the target on which the trigger determination is performed (in other words, the target whose iris image is to be captured). As a result of the identification by the identification unit 160, the determination unit 120 performs the trigger determination by using the difference between the position in the first image and the position in the second image of the target identified by the identification unit 160. The identification method of the identification unit 160 is not particularly limited, and any method may be used as long as it can guarantee that the target in the first image and the target in the second image are the same person. The identification unit 160 may identify the target by using face authentication, for example. In this instance, the face authentication may be performed by using the first image and the second image, or by using a separately captured image. A sketch of such a same-person check is shown below.
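As a hedged sketch of the same-person check mentioned above, assuming face feature vectors (embeddings) extracted by some external face-matching model and a similarity threshold, neither of which is specified by this disclosure:

```python
import numpy as np

def same_target(embedding_first, embedding_second, threshold=0.6):
    """Check that the target in the first image and the target in the second
    image are the same person by comparing face feature vectors."""
    a = np.asarray(embedding_first, dtype=float)
    b = np.asarray(embedding_second, dtype=float)
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cosine >= threshold
```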
Next, with reference to the drawings, a flow of operation of the information processing system 10 according to the eleventh example embodiment will be described.
As illustrated in the drawings, when the operation of the information processing system 10 according to the eleventh example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the identification unit 160 identifies the target for the trigger determination (step S1101). Thereafter, the determination unit 120 detects the position of the target identified by the identification unit 160 from each of the first image and the second image acquired by the acquisition unit 110 (step S1102). The step S1101 and the step S1102 may be performed in reverse order; that is, the step S1101 may be performed after the step S1102. In this case, the identification unit 160 may identify the target when the determination unit 120 determines that the difference between the positions of the target is in the predetermined range.
Subsequently, the determination unit 120 determines whether or not the difference between the position of the identified target in the first image and the position of the identified target in the second image is in the predetermined range (step S1103). The result of the determination by the determination unit 120 is outputted to the control unit 130. When the difference between the position of the identified target in the first image and the position of the identified target in the second image is not in the predetermined range (the step S1103: NO), the processing is restarted from the step S101.
On the other hand, when the difference between the position of the identified target in the first image and the position of the identified target in the second image is in the predetermined range (the step S1103: YES), the control unit 130 performs the control of capturing the iris image of the target (step S104).
In the information processing system 10 according to the eleventh example embodiment, only when the target who is included in the first image and the target who is included in the second image are the same person (i.e., the identified target), the trigger determination for the target is performed. Therefore, for example, in a case where the first image includes a first target and the second image includes a second target, a difference between a first target position in the first image and a second target position in the second image is not used for the trigger determination.
Next, a technical effect obtained by the information processing system 10 according to the eleventh example embodiment will be described.
As described above, in the information processing system 10 according to the eleventh example embodiment, the trigger determination is performed only for the identified target. In this way, even in a situation where a plurality of persons are imaged, it is possible to prevent the trigger determination from being erroneously performed by using the positions of different persons.
The information processing system 10 according to a twelfth example embodiment will be described with reference to the drawings.
First, a flow of operation by the information processing system 10 according to the twelfth example embodiment will be described with reference to the drawings.
As illustrated in the drawings, when the operation of the information processing system 10 according to the twelfth example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the determination unit 120 determines whether or not the first image and the second image include a plurality of targets (step S1201). When the first image and the second image include the plurality of targets (the step S1201: YES), the determination unit 120 selects one target from the plurality of targets (step S1202). The determination unit 120 may select, for example, a target who is the closest to the camera (e.g., the largest target in the image). Note that the following processing is performed on the target selected in the step S1202. When the first image and the second image do not include the plurality of targets (step S1201: NO), the step S1202 may be omitted.
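A minimal sketch of the selection in the step S1202, under the assumption stated above that the largest target in the image is the closest to the camera (the data structure is hypothetical):

```python
def select_closest_target(detections):
    """Among multiple detected targets, pick the one that appears largest in
    the image (i.e., presumably closest to the camera).

    detections: list of (target_id, (x1, y1, x2, y2)) bounding boxes.
    """
    def box_area(det):
        _, (x1, y1, x2, y2) = det
        return (x2 - x1) * (y2 - y1)
    return max(detections, key=box_area)[0]
```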
Subsequently, the determination unit 120 detects the position of the target from each of the first image and the second image acquired by the acquisition unit 110 (step S102). Then, the determination unit 120 determines whether or not the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (step S103). The result of the determination by the determination unit 120 is outputted to the control unit 130.
When the difference between the position of the target in the first image and the position of the target in the second image is not in the predetermined range (the step S103: NO), the processing is restarted from the step S101. On the other hand, when the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (the step S103: YES), the control unit 130 performs the control of capturing the iris image of the target (step S104).
Next, a technical effect obtained by the information processing system 10 according to the twelfth example embodiment will be described.
As described above, in the information processing system 10 according to the twelfth example embodiment, in a case where the first image and the second image include a plurality of targets, one target is selected and the iris image of the selected target is captured. In this way, an appropriate target (e.g., the target closest to the camera) can be imaged even in a situation where a plurality of targets are present.
The information processing system 10 according to a thirteenth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a flow of operation of the information processing system 10 according to the thirteenth example embodiment will be described.
As illustrated in the drawings, when the operation of the information processing system 10 according to the thirteenth example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the determination unit 120 detects the position of the target from each of the first image and the second image acquired by the acquisition unit 110 (step S102). Then, the determination unit 120 determines whether or not the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (step S103). When the difference between the position of the target in the first image and the position of the target in the second image is not in the predetermined range (the step S103: NO), the processing is restarted from the step S101.
On the other hand, when the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (the step S103: YES), the determination unit 120 further determines whether or not there is a companion who passes through simultaneously with the target (step S1301). Examples of the companion include a person walking alongside the target, a baby held by the target, and the like. The determination unit 120 may use the first image and the second image to determine the presence of the companion, or may use another image to determine the presence of the companion.
When the target does not have a companion (the step S1301: NO), the control unit 130 performs the control of capturing the iris image of the target (step S104). On the other hand, when the target has the companion (the step S1301: YES), the control unit 130 controls different cameras to capture the iris image of the target and an iris image of the companion in parallel (step S1302). Note that, even when there are a plurality of companions, it is possible to capture the iris images in parallel up to the number of the cameras. In a case where the number of the cameras is insufficient, only a selected companion may be imaged, as in the twelfth example embodiment described above. A sketch of the camera assignment is shown below.
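A minimal sketch of the parallel capture in the step S1302, assuming a hypothetical list of available iris cameras:

```python
def assign_cameras(targets, cameras):
    """Assign a distinct iris camera to the target and each companion so their
    iris images can be captured in parallel; any targets beyond the number of
    available cameras are left unassigned here (e.g., for later selection)."""
    assignments = dict(zip(targets, cameras))
    unassigned = targets[len(cameras):]
    return assignments, unassigned
```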
Next, a technical effect obtained by the information processing system 10 according to the thirteenth example embodiment will be described.
As described above, in the information processing system 10 according to the thirteenth example embodiment, in a case where the target has a companion, the iris images of the target and the companion are captured in parallel by different cameras. In this way, a plurality of persons who pass through simultaneously can be imaged without making them wait.
The information processing system 10 according to a fourteenth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a flow of operation of the information processing system 10 according to the fourteenth example embodiment will be described.
As illustrated in the drawings, when the operation of the information processing system 10 according to the fourteenth example embodiment starts, the acquisition unit 110 first acquires the first image and the second image (step S101).
Subsequently, the determination unit 120 detects the position of the target in the first image (step S1401). Then, the determination unit 120 determines a search area in the second image, from the position of the target in the first image (step S1402). The search area is determined on the second image as an area for searching for the presence of the target. A method of determining the search area will be specifically described later.
Subsequently, the determination unit 120 searches for the target by using the search area in the second image (step S1403). When the presence of the target is detected in the search area (step S1404: YES), the determination unit 120 detects the position of the target found in the search area as the position of the target in the second image (step S1405). When the presence of the target cannot be detected in the search area (step S1404: NO), the processing may be restarted from the step S101.
When the position of the target is detected in the first image and the second image, the determination unit 120 determines whether or not the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (step S103). The result of the determination by the determination unit 120 is outputted to the control unit 130.
When the difference between the position of the target in the first image and the position of the target in the second image is not in the predetermined range (the step S103: NO), the processing is restarted from the step S101. On the other hand, when the difference between the position of the target in the first image and the position of the target in the second image is in the predetermined range (the step S103: YES), the control unit 130 performs the control of capturing the iris image of the target (step S104).
Next, with reference to the drawings, the method of determining the search area will be described.
As illustrated in the drawings, the determination unit 120 detects a first area that includes the target in the first image, and determines, as the search area, a second area in the second image corresponding to the first area. Since the first image and the second image are captured such that the optical axes intersect at the predetermined point, the target near the predetermined point appears at substantially corresponding coordinates in the two images; the second area may thus be determined, for example, as an area obtained by expanding the coordinates of the first area by a predetermined margin. A sketch of this determination is shown below.
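A hedged sketch of the search-area determination described above, assuming axis-aligned bounding boxes and a hypothetical expansion margin:

```python
def search_area_from_first_image(first_box, margin=0.25):
    """Map the first area detected in the first image to a search area (second
    area) in the second image. Because the optical axes intersect at the
    predetermined point, a target near that point appears at roughly
    corresponding coordinates, so the same box expanded by a relative margin
    is used here (the margin value is an assumption)."""
    x1, y1, x2, y2 = first_box
    mx = (x2 - x1) * margin
    my = (y2 - y1) * margin
    return (x1 - mx, y1 - my, x2 + mx, y2 + my)
```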
Next, a technical effect obtained by the information processing system 10 according to the fourteenth example embodiment will be described.
As described above, in the information processing system 10 according to the fourteenth example embodiment, the position of the target in the second image is searched for by using the search area determined from the position of the target in the first image. In this way, the search range in the second image can be narrowed, so that the position of the target can be detected with a reduced processing load.
A processing method in which a program for operating the configuration of each example embodiment so as to realize the functions of each example embodiment is recorded on a recording medium, and in which the program recorded on the recording medium is read as a code and executed on a computer, is also included in the scope of each example embodiment. That is, a computer-readable recording medium is also included in the scope of each example embodiment. In addition, not only the recording medium on which the above-described program is recorded, but also the program itself, is included in each example embodiment.
The recording medium to use may be, for example, a floppy disk (registered trademark), a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a magnetic tape, a nonvolatile memory card, or a ROM. Furthermore, not only the program that is recorded on the recording medium and executes processing alone, but also the program that operates on an OS and executes processing in cooperation with the functions of an expansion board or other software, is included in the scope of each example embodiment. In addition, the program itself may be stored in a server, and a part or all of the program may be downloaded from the server to a user terminal.
The example embodiments described above may be further described as, but not limited to, the following Supplementary Notes below.
An information processing system according to Supplementary Note 1 is an information processing system including: an acquisition unit that acquires a first image and a second image that are captured such that optical axes intersect at a predetermined point; a determination unit that determines whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and a control unit that performs a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
An information processing system according to Supplementary Note 2 is the information processing system according to Supplementary Note 1, wherein the control unit identifies an eye position of the target from the first image and the second image, and performs the control of capturing the iris image on the basis of the identified eye position.
An information processing system according to Supplementary Note 3 is the information processing system according to Supplementary Note 1 or 2, wherein the position of the target in the first image and the position of the target in the second image are a position of a right eye and a position of a left eye of the target.
An information processing system according to Supplementary Note 4 is the information processing system according to any one of Supplementary Notes 1 to 3, wherein the predetermined point is set at a rear side of a focus position when viewed from a camera that captures the iris image, and the control unit performs a control of continuously capturing iris images of the target who approaches the camera that captures the iris image, after the difference becomes in the predetermined range.
An information processing system according to Supplementary Note 5 is the information processing system according to Supplementary Note 4, further including: a selection unit that selects at least one high-quality iris image from the continuously captured iris images; and an authentication unit that performs iris authentication by using the selected iris image.
An information processing system according to Supplementary Note 6 is the information processing system according to any one of Supplementary Notes 1 to 5, wherein the acquisition unit acquires the first image captured by a first camera and the second image captured by a second camera that is disposed at a different angle from that of the first camera, and the control unit performs a control of imaging the iris image by using a third camera that is different from the first camera and the second camera.
An information processing system according to Supplementary Note 7 is the information processing system according to Supplementary Note 6, wherein the third camera is disposed in front of the target, and the first camera and the second camera are respectively arranged on a left and a right of the third camera.
An information processing system according to Supplementary Note 8 is the information processing system according to Supplementary Note 6, wherein each of the first camera, the second camera, and the third camera is disposed at a position where the target is imaged diagonally at an angle deviated from a front of the target.
An information processing system according to Supplementary Note 9 is the information processing system according to any one of Supplementary Notes 1 to 5, wherein the acquisition unit acquires the first image captured by a fourth camera through a first surface of a mirror, and the second image captured by the fourth camera through a second surface of the mirror.
An information processing system according to Supplementary Note 10 is the information processing system according to any one of Supplementary Notes 1 to 5, wherein the acquisition unit acquires the first image captured directly by a fifth camera, and the second image captured by the fifth camera through a mirror.
An information processing system according to Supplementary Note 11 is the information processing system according to any one of Supplementary Notes 1 to 10, further including an identification unit that identifies the target, wherein the determination unit determines whether or not a difference between a position of the identified target in the first image and a position of the identified target in the second image is in the predetermined range.
An information processing system according to Supplementary Note 12 is the information processing system according to any one of Supplementary Notes 1 to 11, wherein the control unit selects one of a plurality of targets in a case where the first image and the second image include the plurality of targets, and performs a control of capturing the iris image of the selected target.
An information processing system according to Supplementary Note 13 is the information processing system according to any one of Supplementary Notes 1 to 11, wherein the control unit performs a control of capturing the iris images of a plurality of targets by using different cameras for the plurality of respective targets, in a case where the first image and the second image include the plurality of targets.
An information processing system according to Supplementary Note 14 is the information processing system according to any one of Supplementary Notes 1 to 13, wherein the determination unit detects a first area that includes the target in the first image, and then searches for whether or not the target is in a second area corresponding to the first area, in the second image.
An information processing method according to Supplementary Note 15 is an information processing method executed by at least one computer, the information processing method including: acquiring a first image and a second image that are captured such that optical axes intersect at a predetermined point; determining whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and performing a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
A recording medium according to Supplementary Note 16 is a recording medium on which a computer program that allows at least one computer to execute an information processing method is recorded, the information processing method including: acquiring a first image and a second image that are captured such that optical axes intersect at a predetermined point; determining whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and performing a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
An information processing apparatus according to Supplementary Note 17 is an information processing apparatus including: an acquisition unit that acquires a first image and a second image that are captured such that optical axes intersect at a predetermined point; a determination unit that determines whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and a control unit that performs a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
A computer program according to Supplementary Note 18 is a computer program that allows at least one computer to execute an information processing method, the information processing method including: acquiring a first image and a second image that are captured such that optical axes intersect at a predetermined point; determining whether or not a difference between a position of a target in the first image and a position of the target in the second image is in a predetermined range; and performing a control of capturing an iris image of the target in a case where the difference is in the predetermined range.
This disclosure is allowed to be changed, if desired, without departing from the essence or spirit of this disclosure which can be read from the claims and the entire specification. An information processing system, an information processing method, and a recording medium with such changes are also intended to be within the technical scope of this disclosure.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2022/008942 | 3/2/2022 | WO |