This disclosure relates to an imaging system, an imaging method, and a computer program that image a subject.
A known system of this type captures an image that is used for iris authentication. For example, Patent Literature 1 discloses a technique/technology of detecting a face and eyes of a target person to identify a region of interest of an iris. Patent Literature 2 discloses a technique/technology of generating a low-resolution image from a high-resolution image to perform pupil detection from the low-resolution image.
As another related art, Patent Literature 3 discloses a technique/technology of synthesizing a plurality of images to generate a composite image of a wide angle of view.
An iris camera for capturing an image for iris authentication is generally set to have a large number of pixels and a narrow angle of view. For this reason, due to restrictions on communication speed and the range of the angle of view, it is hard for the iris camera to capture a wide-angle image from which the eye position of a subject can be detected. None of the cited documents described above mentions such a problem, and there is room for improvement.
In view of the above problems, it is an example object of this disclosure to provide an imaging system, an imaging method, and a computer program that are configured to properly capture an image of the periphery of the eyes of the subject.
An imaging system according to an example aspect of this disclosure includes: a first control unit that controls an imaging unit to capture a first image of a subject at a first pixel density; a detection unit that detects an eye position of the subject from the first image; a setting unit that sets a peripheral area around eyes of the subject on the basis of the eye position; and a second control unit that controls the imaging unit to capture a second image of the peripheral area at a second pixel density that is higher than the first pixel density.
An imaging method according to an example aspect of this disclosure includes: controlling an imaging unit to capture a first image of a subject at a first pixel density; detecting an eye position of the subject from the first image; setting a peripheral area around eyes of the subject on the basis of the eye position; and controlling the imaging unit to capture a second image of the peripheral area at a second pixel density that is higher than the first pixel density.
A computer program according to an example aspect of this disclosure operates a computer: to control an imaging unit to capture a first image of a subject at a first pixel density; to detect an eye position of the subject from the first image; to set a peripheral area around eyes of the subject on the basis of the eye position; and to control the imaging unit to capture a second image of the peripheral area at a second pixel density that is higher than the first pixel density.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Hereinafter, an imaging system, an imaging method, and a computer program according to example embodiments will be described with reference to the drawings.
An imaging system according to a first example embodiment will be described with reference to the drawings.
First, with reference to the drawings, a hardware configuration of the imaging system 10 according to the first example embodiment will be described.
As illustrated in the drawings, the imaging system 10 includes a processor 11, a RAM (Random Access Memory) 12, a ROM (Read Only Memory) 13, a storage apparatus 14, an input apparatus 15, and an output apparatus 16.
The processor 11 reads a computer program. For example, the processor 11 is configured to read a computer program stored in at least one of the RAM 12, the ROM 13, and the storage apparatus 14. Alternatively, the processor 11 may read a computer program stored in a computer-readable recording medium by using a not-illustrated recording medium reading apparatus. The processor 11 may obtain (i.e., read) a computer program from a not-illustrated apparatus that is located outside the imaging system 10 through a network interface. The processor 11 controls the RAM 12, the storage apparatus 14, the input apparatus 15, and the output apparatus 16 by executing the read computer program. Especially in the first example embodiment, when the processor 11 executes the read computer program, a functional block for imaging a subject is realized or implemented in the processor 11. As the processor 11, any one of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (field-programmable gate array), a DSP (digital signal processor), and an ASIC (application specific integrated circuit) may be used. Furthermore, a plurality of those may be used in parallel.
The RAM 12 temporarily stores the computer program to be executed by the processor 11. The RAM 12 temporarily stores the data that is temporarily used by the processor 11 when the processor 11 executes the computer program. The RAM 12 may be, for example, a D-RAM (Dynamic RAM).
The ROM 13 stores the computer program to be executed by the processor 11. The ROM 13 may otherwise store fixed data. The ROM 13 may be, for example, a P-ROM (Programmable ROM).
The storage apparatus 14 stores the data that is stored for a long term by the imaging system 10. The storage apparatus 14 may operate as a temporary storage apparatus of the processor 11. The storage apparatus 14 may include, for example, at least one of a hard disk apparatus, a magneto-optical disk apparatus, an SSD (Solid State Drive), and a disk array apparatus.
The input apparatus 15 is an apparatus that receives an input instruction from a user of the imaging system 10. The input apparatus 15 may include, for example, at least one of a keyboard, a mouse, and a touch panel.
The output apparatus 16 is an apparatus that outputs information about the imaging system 10 to the outside. For example, the output apparatus 16 may be a display apparatus (e.g., a display) that is configured to display the information about the imaging system 10.
Next, with reference to the drawings, a functional configuration of the imaging system 10 according to the first example embodiment will be described.
As illustrated in the drawings, the imaging system 10 according to the first example embodiment includes, as processing blocks for imaging the subject, a first control unit 110, an eye position detection unit 120, a ROI setting unit 130, and a second control unit 140, and controls an iris camera 20. Each of these units may be realized or implemented, for example, in the processor 11 described above.
The first control unit 110 is configured to capture a first image of the subject by controlling the iris camera 20. The first image is an image used to detect an eye position of the subject, and is captured at a first pixel density that is relatively low. The first image is captured, for example, such that the subject entirely fits in an imaging range.
The eye position detection unit 120 detects the eye position of the subject (i.e., where the eyes are) by using the first image captured by the control of the first control unit 110. Since the existing techniques/technologies can be properly applied to a method of detecting the eye position of the subject from the image, a more specific description of the method will be omitted. Information about the eye position of the subject detected by the eye position detection unit 120 is configured to be outputted to the ROI setting unit 130.
The ROI setting unit 130 is configured to set a ROI (Region Of Interest) for imaging an iris of the subject on the basis of the eye position of the subject detected by the eye position detection unit 120. The ROI is set as an area through which the eyes of the subject likely pass at a focal point of the iris camera 20. Since the existing techniques/technologies can be properly applied to a method of setting the ROI from the eye position, a more specific description of the method will be omitted. Information about the ROI set by the ROI setting unit 130 is configured to be outputted to the second control unit 140.
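As one non-limiting illustration of this step, the ROI may be set as a fixed-size rectangle centered on the detected eye position, mapped from the coordinates of the low-density first image to full-sensor coordinates. The following Python sketch assumes this simple scheme; the function name, the default ROI size, and the scale parameter are all hypothetical and not prescribed by this disclosure.

```python
def set_roi(eye_x, eye_y, scale, sensor_w, sensor_h,
            roi_w=1280, roi_h=480):
    """Map an eye position detected in the first image to a fixed-size
    ROI in full-sensor coordinates (all parameters are illustrative)."""
    # Scale first-image coordinates up to sensor coordinates.
    cx, cy = int(eye_x * scale), int(eye_y * scale)
    # Center the ROI on the eyes, clamped to the sensor boundaries.
    left = min(max(cx - roi_w // 2, 0), sensor_w - roi_w)
    top = min(max(cy - roi_h // 2, 0), sensor_h - roi_h)
    return left, top, roi_w, roi_h
```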
The second control unit 140 is configured to capture a second image of the subject by controlling the iris camera 20. The second image is an image of the area (i.e., the ROI) set by the ROI setting unit 130, and is captured at a second pixel density that is higher than the first pixel density (i.e., the pixel density at which the first image is captured). Consequently, the second image is an image obtained by imaging the area around the eyes of the subject at high resolution.
Next, with reference to the drawings, a flow of operation of the imaging system 10 according to the first example embodiment will be described.
As illustrated in the drawings, when the operation of the imaging system 10 according to the first example embodiment starts, first, the first control unit 110 controls the iris camera 20 to capture the first image of the subject (step S101). The first image is captured at the first pixel density that is relatively low.
Then, the eye position detection unit 120 detects the eye position of the subject from the first image (step S102). Then, the ROI setting unit 130 sets the ROI on the basis of the detected eye position (step S103).
Then, the second control unit 140 controls the iris camera 20 to capture the second image at the set ROI (step S104). The second image is captured at the second pixel density that is higher than the first pixel density.
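The four steps S101 to S104 may be pictured as a single capture routine. The following Python sketch is a minimal illustration assuming a hypothetical camera object with a capture() method that accepts a thinning factor and an optional ROI; none of these names come from this disclosure, and the eye detection and ROI setting are passed in as callables.

```python
def image_subject(camera, detect_eyes, set_roi):
    # S101: capture the low-density first image of the whole subject.
    first_image = camera.capture(thinning=4)  # first pixel density (low)
    # S102: detect the eye position from the first image.
    eye_x, eye_y = detect_eyes(first_image)
    # S103: set the ROI around the eyes (scale matches the thinning factor).
    roi = set_roi(eye_x, eye_y, scale=4,
                  sensor_w=camera.width, sensor_h=camera.height)
    # S104: capture the high-density second image of the ROI only.
    return camera.capture(thinning=1, roi=roi)  # second pixel density (high)
```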
Next, with reference to the drawings, a technical effect obtained by the imaging system 10 according to the first example embodiment will be described.
As illustrated in the drawings, in the imaging system 10 according to the first example embodiment, the first image is captured at the relatively low first pixel density, the ROI is set from the detected eye position, and the second image of the ROI is then captured at the relatively high second pixel density. In this way, the eye position of the subject can be detected from the first image, whose data volume is small, and the area around the eyes of the subject can then be imaged at high resolution.
A dedicated camera (i.e., a low-resolution camera) may be separately installed to capture the first image; in that case, however, the increased cost and complexity of the system may be problematic. According to the imaging system of the first example embodiment, however, the iris camera 20 captures both the first image (i.e., an image for detecting the eye position to set the ROI) and the second image (i.e., a high-definition iris image). Therefore, it is possible to properly capture the iris image of the subject without incurring the above-described increase in cost and complexity. Furthermore, if there are multiple types of cameras, the user may need to face each camera in turn, which makes the user aware of the presence of the cameras and may be burdensome. According to the imaging system 10 in the first example embodiment, the eye position and the iris area can be specified from a low-quality image by using only the iris camera having a narrow angle of view, which also eliminates the need for the user to be aware of the cameras.
Hereinafter, modified examples of the first example embodiment will be described. The following modified examples may also be combined with each other.
The first control unit 110 may capture the first image, for example, at a time when the subject arrives at a predetermined trigger point. The timing at which the subject arrives at the trigger point may be detected, for example, by various sensors or the like installed around the trigger point.
The second control unit 140 may capture the second image, for example, at a time when the subject arrives at the focal point of the iris camera 20 set in advance. The second control unit 140 may predict the timing at which the subject arrives at the focal point, and may capture a plurality of second images continuously around that timing.
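A continuous capture around the predicted arrival timing might look like the following sketch, again assuming the hypothetical camera API above; the frame count, interval, and the use of a monotonic clock for the predicted arrival time are illustrative assumptions.

```python
import time

def burst_capture(camera, roi, arrival_time, n_frames=5, interval=0.03):
    """Capture several second images around the predicted time at which
    the subject reaches the focal point (arrival_time is assumed to be
    a time.monotonic() timestamp)."""
    # Wait until the predicted arrival, then capture a short burst.
    time.sleep(max(0.0, arrival_time - time.monotonic()))
    frames = []
    for _ in range(n_frames):
        frames.append(camera.capture(thinning=1, roi=roi))
        time.sleep(interval)
    return frames
```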
The second image captured by the control of the second control unit 140 may be inputted to a not-illustrated biometric authentication unit and may be used for iris authentication of the subject. The biometric authentication unit may be provided as a part of the imaging system 10, or may be provided outside the imaging system 10 (e.g., an external server, a cloud, etc.). Since the existing techniques/technologies can be properly applied to the authentication using the iris image (i.e., the second image), a more specific description here will be omitted.
The imaging system 10 according to a second example embodiment will be described with reference to the drawings.
A hardware configuration of the imaging system 10 according to the second example embodiment may be the same as the hardware configuration of the first example embodiment described above.
Next, with reference to the drawings, a functional configuration of the imaging system 10 according to the second example embodiment will be described.
As illustrated in the drawings, the imaging system 10 according to the second example embodiment includes a plurality of iris cameras 20: specifically, a first iris camera 21, a second iris camera 22, and a third iris camera 23. In the second example embodiment, the first control unit 110 controls each of the plurality of iris cameras 20 to capture the first image.
Next, with reference to the drawings, a flow of operation of the imaging system 10 according to the second example embodiment will be described.
As illustrated in the drawings, when the operation of the imaging system 10 according to the second example embodiment starts, first, the first control unit 110 controls each of the first iris camera 21, the second iris camera 22, and the third iris camera 23 to capture the first image (the step S101).
Then, the eye position detection unit 120 detects the eye position of the subject from a plurality of first images (the step S102). Then, the ROI setting unit 130 sets the ROI on the basis of the detected eye position (the step S103).
Then, the second control unit 140 controls the iris cameras 20 to capture the second image at the set ROI (the step S104). The second image may be captured by one of the first iris camera 21, the second iris camera 22, and the third iris camera 23. That is, it is not necessary for all the iris cameras 20 to separately capture the second image. The iris camera 20 that captures the second image may be determined, for example, in accordance with the ROI set by the ROI setting unit 130. Specifically, it is sufficient that the second image is captured by the iris camera 20 whose imaging range includes the ROI.
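Selecting the iris camera 20 whose imaging range includes the ROI may be as simple as a containment test, as in the following sketch; representing each camera's imaging range as a rectangle in shared scene coordinates is an assumption made for illustration.

```python
def select_camera(cameras, roi):
    """Return the first camera whose imaging range fully contains the ROI.

    `cameras` is a list of (camera, (left, top, width, height)) pairs
    giving each camera's imaging range in shared scene coordinates.
    """
    rl, rt, rw, rh = roi
    for camera, (cl, ct, cw, ch) in cameras:
        # The ROI must lie entirely inside the camera's imaging range.
        if cl <= rl and ct <= rt and rl + rw <= cl + cw and rt + rh <= ct + ch:
            return camera
    return None  # no single camera covers the ROI
```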
Next, with reference to the drawings, a technical effect obtained by the imaging system 10 according to the second example embodiment will be described.
As illustrated in the drawings, in the imaging system 10 according to the second example embodiment, the first image is captured by each of the plurality of iris cameras 20. In this way, even though each individual iris camera 20 has a narrow angle of view, a wide range can be imaged as a whole, so that the eye position of the subject can be properly detected.
The plurality of first images do not necessarily have to be captured by using the plurality of iris cameras 20; the plurality of first images may be captured by a single iris camera 20. Specifically, for example, the first image may be captured from a plurality of angles by properly moving the position of one camera. Even in this case, it is possible to obtain the technical effect described above by synthesizing the plurality of first images to generate a wide-angle image.
The imaging system 10 according to a third example embodiment will be described with reference to the drawings.
A hardware configuration of the imaging system 10 according to the third example embodiment may be the same as the hardware configuration of the first example embodiment described above.
Next, with reference to the drawings, a functional configuration of the imaging system 10 according to the third example embodiment will be described.
As illustrated in the drawings, the imaging system 10 according to the third example embodiment includes an image synthesis unit 210 in addition to the configuration of the second example embodiment.
The image synthesis unit 210 is configured to synthesize the respective first images captured by the first iris camera 21, the second iris camera 22, and the third iris camera 23. The first iris camera 21, the second iris camera 22, and the third iris camera 23 are arranged such that their imaging ranges do not greatly overlap each other. Therefore, when the respective first images captured by the iris cameras 20 are synthesized, a single wide-angle image can be generated. The wide-angle image generated by the image synthesis unit 210 is configured to be outputted to the eye position detection unit 120. The image synthesis unit 210 may be realized or implemented, for example, in the processor 11 described above.
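Because the imaging ranges barely overlap, the synthesis can, in the simplest case, be a concatenation with a known overlap trimmed off, as in the sketch below; a practical system would instead register and blend the images, and the vertical arrangement and fixed overlap are assumptions made for illustration.

```python
import numpy as np

def synthesize_first_images(images, overlap_px=0):
    """Stack first images from vertically arranged cameras into a single
    wide-angle image, trimming a known overlap from each lower image.
    All images are assumed to be NumPy arrays of equal width."""
    trimmed = [images[0]] + [img[overlap_px:] for img in images[1:]]
    return np.concatenate(trimmed, axis=0)
```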
Next, with reference to the drawings, a flow of operation of the imaging system 10 according to the third example embodiment will be described.
As illustrated in the drawings, when the operation of the imaging system 10 according to the third example embodiment starts, first, the first control unit 110 controls each of the first iris camera 21, the second iris camera 22, and the third iris camera 23 to capture the first image (the step S101).
Then, the image synthesis unit 210 synthesizes the plurality of first images captured by the first iris camera 21, the second iris camera 22, and the third iris camera 23 (step S202). Subsequently, the eye position detection unit 120 detects the eye position of the subject from the wide-angle image obtained by synthesizing the plurality of first images (the step S102). Then, the ROI setting unit 130 sets the ROI on the basis of the detected eye position (the step S103).
Then, the second control unit 140 controls the iris cameras 20 to capture the second image at the set ROI (the step S104).
Next, a technical effect obtained by the imaging system 10 according to the third example embodiment will be described.
As described above, in the imaging system 10 according to the third example embodiment, the plurality of first images are synthesized to generate a single wide-angle image, and the eye position of the subject is detected from the wide-angle image. In this way, the eye position of the subject can be properly detected from a wide imaging range, even though each individual iris camera 20 has a narrow angle of view.
The imaging system 10 according to a fourth example embodiment will be described with reference to the drawings.
A hardware configuration of the imaging system 10 according to the fourth example embodiment may be the same as the hardware configuration of the first example embodiment described above.
Next, with reference to the drawings, a functional configuration of the imaging system 10 according to the fourth example embodiment will be described.
As illustrated in the drawings, the imaging system 10 according to the fourth example embodiment includes an eye area determination unit 220 in addition to the configuration of the second example embodiment.
The eye area determination unit 220 is configured to determine whether or not an eye area is included in the first image captured by each of the first iris camera 21, the second iris camera 22, and the third iris camera 23. In other words, the eye area determination unit 220 is configured to determine which image of the plurality of first images captured by the first iris camera 21, the second iris camera 22, and the third iris camera 23 includes the eye area. A determination result of the eye area determination unit 220 (i.e., information about the first image including the eye area) is configured to be outputted to the eye position detection unit 120. Incidentally, the eye area determination unit 220 may be realized or implemented, for example, in the processor 11 described above.
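Any off-the-shelf eye detector can serve as this determination; as one illustration only (not the method prescribed by this disclosure), the Haar-cascade eye detector bundled with OpenCV can flag whether a first image contains an eye-like region:

```python
import cv2

# Haar-cascade eye detector shipped with OpenCV (an illustrative choice).
_eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def contains_eye_area(first_image):
    """Return True if at least one eye-like region is found.
    `first_image` is assumed to be a BGR image as read by OpenCV."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    eyes = _eye_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5)
    return len(eyes) > 0
```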
Next, with reference to the drawings, a flow of operation of the imaging system 10 according to the fourth example embodiment will be described.
As illustrated in the drawings, when the operation of the imaging system 10 according to the fourth example embodiment starts, first, the first control unit 110 controls each of the first iris camera 21, the second iris camera 22, and the third iris camera 23 to capture the first image (the step S101).
Then, the eye area determination unit 220 determines, for each of the plurality of first images captured by the first iris camera 21, the second iris camera 22, and the third iris camera 23, whether or not an eye area is included (step S203). Subsequently, the eye position detection unit 120 detects the eye position of the subject from the first image in which the eye area is included (the step S102). Then, the ROI setting unit 130 sets the ROI on the basis of the detected eye position (the step S103).
Then, the second control unit 140 controls the iris camera 20 to capture the second image at the set ROI (the step S104).
Next, a technical effect obtained by the imaging system 10 according to the fourth example embodiment will be described.
As described above, in the imaging system 10 according to the fourth example embodiment, it is determined which of the plurality of first images includes the eye area, and the eye position is detected from the first image that includes the eye area. In this way, it is not necessary to perform the eye position detection on a first image that does not include the eye area, so that a processing load can be reduced.
The imaging system 10 according to a fifth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, operation of the imaging system 10 according to the fifth example embodiment will be described.
As illustrated in the drawings, in the imaging system 10 according to the fifth example embodiment, the plurality of iris cameras 20 are arranged at positions different from each other in a height direction, and each of the iris cameras 20 captures the first image of a subject 500. The respective iris cameras 20 therefore capture the first images in imaging ranges that differ from each other in the height direction.
The eye position of the subject 500 may be detected from each of the plurality of first images captured as described above. For example, all of the plurality of first images may be used to detect the eye position, or the first image including the eye area may be determined from among the plurality of first images and only the first image including the eye area may be used to detect the eye position.
It is preferable to set the plurality of iris cameras 20 such that an overlapping part of their imaging ranges is sufficiently large. In this way, even for subjects 500 of different standing heights, at least one iris camera 20 can capture the face of the subject 500 without interruption.
Next, a technical effect obtained by the imaging system 10 according to the fifth example embodiment will be described.
As described above, in the imaging system 10 according to the fifth example embodiment, the plurality of iris cameras 20 arranged at different heights capture the first images. In this way, the first image including the face of the subject 500 can be properly captured regardless of the standing height of the subject 500, and the eye position can be properly detected.
The imaging system 10 according to a sixth example embodiment will be described with reference to the drawings.
First, with reference to the drawings, operation of the imaging system 10 according to the sixth example embodiment will be described.
As illustrated in the drawings, in the imaging system 10 according to the sixth example embodiment, the first control unit 110 performs a process of thinning out the pixels of the iris camera 20 when the first image is captured. Since the number of pixels to be read is reduced by the thinning, the first pixel density becomes lower than the second pixel density, and the data volume of the first image is reduced accordingly.
The amount of pixel reduction by thinning may be changed depending on the location in the imaging area. In other words, the amount of pixel reduction by thinning may not be uniform throughout the imaging area. For example, the amount of pixel reduction by thinning may be reduced for an area that likely includes the eye area, and may be increased for an area that less likely includes the eye area, as in the sketch following this paragraph.
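Non-uniform thinning of this kind can be expressed as keeping every row inside a band expected to contain the eyes and only every few rows elsewhere. The following NumPy sketch thins rows only (columns would be handled analogously); the band location and step size are illustrative assumptions.

```python
import numpy as np

def thin_rows(frame, keep_band, coarse_step=4):
    """Thin out pixel rows non-uniformly: keep every row inside
    `keep_band` = (top, bottom), and every `coarse_step`-th row elsewhere."""
    top, bottom = keep_band
    rows = np.r_[np.arange(0, top, coarse_step),       # sparse above the band
                 np.arange(top, bottom),               # dense inside the band
                 np.arange(bottom, frame.shape[0], coarse_step)]
    return frame[rows]
```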
Next, a technical effect obtained by the imaging system 10 according to the sixth example embodiment will be described.
As described above, in the imaging system 10 according to the sixth example embodiment, the pixels are thinned out when the first image is captured. In this way, the data volume of the first image can be reduced, so that the first image can be read out and processed quickly even under the restrictions of communication speed described above.
The imaging system 10 according to a seventh example embodiment will be described with reference to the drawings.
First, with reference to the drawings, operation of the imaging system 10 according to the seventh example embodiment will be described.
As illustrated in the drawings, in the imaging system 10 according to the seventh example embodiment, the first control unit 110 limits the imaging area of the iris camera 20 to be small when the first image is captured; specifically, the pixels of an upper end part and a lower end part of the imaging area are not read. Since the eyes of the subject less likely appear in the upper end part and the lower end part of the imaging area, the eye position can be detected from the first image even when the imaging area is limited in this manner.
Furthermore, in addition to or in place of the upper end part and the lower end part described above, the pixels of at least one of a right end part and a left end part of the imaging area may not be read. For example, when the subject passes through the center of a passage (e.g., when an arrow is painted on a floor and the subject is guided to the center of the passage), the right end part and the left end part of the imaging area are less likely to include the eyes of the subject. Therefore, by not reading the pixels of at least one of the right end part and the left end part of the imaging area, it is possible to efficiently reduce the data volume of the first image.
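The limited readout can be summarized as computing a reduced sensor window before capture, as in the sketch below; the trim fractions are hypothetical values that would in practice be chosen from, for example, the expected range of subject heights and the passage layout.

```python
def readout_window(sensor_h, sensor_w, trim_top=0.25, trim_bottom=0.25,
                   trim_left=0.0, trim_right=0.0):
    """Compute the sensor rows/columns to read for the first image,
    skipping end parts unlikely to contain the eyes (fractions are
    illustrative assumptions)."""
    top = int(sensor_h * trim_top)
    bottom = sensor_h - int(sensor_h * trim_bottom)
    left = int(sensor_w * trim_left)
    right = sensor_w - int(sensor_w * trim_right)
    return top, bottom, left, right  # read only pixels inside this window
```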
Next, a technical effect obtained by the imaging system 10 according to the seventh example embodiment will be described.
As described above, in the imaging system 10 according to the seventh example embodiment, the imaging area at the time of capturing the first image is limited to be small. In this way, the data volume of the first image can be reduced, so that the first image can be captured and processed quickly.
The example embodiments described above may be further described as, but not limited to, the following Supplementary Notes.
An imaging system described in Supplementary Note 1 is an imaging system including: a first control unit that controls an imaging unit to capture a first image of a subject at a first pixel density; a detection unit that detects an eye position of the subject from the first image; a setting unit that sets a peripheral area around eyes of the subject on the basis of the eye position; and a second control unit that controls the imaging unit to capture a second image of the peripheral area at a second pixel density that is higher than the first pixel density.
An imaging system described in Supplementary Note 2 is the imaging system described in Supplementary Note 1, wherein the first control unit performs a process to thin out pixels of the imaging unit, so that the first pixel density becomes lower than the second pixel density.
An imaging system described in Supplementary Note 3 is the imaging system described in Supplementary Note 1 or 2, wherein the first control unit reduces a data volume of the first image by limiting an imaging area of the imaging unit to be small.
An imaging system described in Supplementary Note 4 is the imaging system described in any one of Supplementary Notes 1 to 3, wherein the imaging unit includes a plurality of cameras, and the first control unit controls the imaging unit to capture the first image with each of the plurality of cameras.
An imaging system described in Supplementary Note 5 is the imaging system described in any one of Supplementary Notes 1 to 4, wherein the detection unit detects the eye position of the subject from a composite image obtained by synthesizing a plurality of first images.
An imaging system described in Supplementary Note 6 is the imaging system described in any one of Supplementary Notes 1 to 5, wherein the first control unit controls the imaging unit to capture the first image when the subject arrives at a predetermined trigger point.
An imaging system described in Supplementary Note 7 is the imaging system described in any one of Supplementary Notes 1 to 6, wherein the second control unit controls the imaging unit to capture the second image when the subject arrives at a focal point set in advance.
An imaging system described in Supplementary Note 8 is the imaging system described in any one of Supplementary Notes 1 to 7, further including an authentication unit that performs iris authentication of the subject by using the second image.
An imaging method described in Supplementary Note 9 is an imaging method including: controlling an imaging unit to capture a first image of a subject at a first pixel density; detecting an eye position of the subject from the first image; setting a peripheral area around eyes of the subject on the basis of the eye position; and controlling the imaging unit to capture a second image of the peripheral area at a second pixel density that is higher than the first pixel density.
A computer program described in Supplementary Note 10 is a computer program that operates a computer: to control an imaging unit to capture a first image of a subject at a first pixel density; to detect an eye position of the subject from the first image; to set a peripheral area around eyes of the subject on the basis of the eye position; and to control the imaging unit to capture a second image of the peripheral area at a second pixel density that is higher than the first pixel density.
This disclosure is not limited to the examples described above and is allowed to be changed, if desired, without departing from the essence or spirit of the invention which can be read from the claims and the entire specification. An imaging system, an imaging method, and a computer program with such modifications are also intended to be within the technical scope of this disclosure.
10 Imaging system
20 Iris camera
21 First iris camera
22 Second iris camera
23 Third iris camera
110 First control unit
120 Eye position detection unit
130 ROI setting unit
140 Second control unit
210 Image synthesis unit
220 Eye area determination unit
500 Subject
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2020/018151 | 4/28/2020 | WO |