This application claims priority to Japanese Patent Application No. 2018-040311 filed on Mar. 7, 2018, the entire disclosure of which is incorporated herein by reference.
The present invention relates to an imaging apparatus such as an on-vehicle driver monitor, and more particularly, to a technique for detecting an obstacle interfering with capturing of a subject image.
An on-vehicle driver monitor analyzes an image of a driver's face captured by a camera, and monitors whether the driver is falling asleep or engaging in distracted driving based on the opening degree of the eyelids or the gaze direction. The camera for the driver monitor is typically installed on the dashboard in front of the driver's seat, along with the display panel and instruments.
However, the camera is a small component and can be blocked by an object on the dashboard hanging over the camera (e.g., a towel), which may go unnoticed by the driver. The camera may also be blocked by an object suspended above the driver's seat (e.g., an insect) or by a sticker attached to the camera by a third person. A blocked camera cannot capture an image of the driver's face and thus fails to correctly monitor the state of the driver.
Patent Literatures 1 and 2 each describe an imaging apparatus that deals with an obstacle between the camera and the subject. The technique in Patent Literature 1 defines, in an imaging area, a first area for capturing the subject and a second area including the first area. When the second area includes an obstacle hiding the subject, the image capturing operation is stopped to prevent the obstacle from appearing in a captured image. When an obstacle between the camera and the face obstructs the detection of facial features in a captured image, the technique in Patent Literature 2 notifies the user of the undetectable features, the cause of the unsuccessful detection, and countermeasures to be taken.
Obstacles may prevent the camera from capturing images in various manners. The field of view (imaging area) of the camera may be obstructed entirely or partially. An obstacle entirely blocking the field of view always prevents the camera from capturing a face image, whereas an obstacle partially blocking the field of view may or may not prevent the camera from capturing a face image.
For example, a camera that captures the face in a central area of its field of view cannot capture the overall face when the central area is entirely or partially blocked by an obstacle. However, the camera can still capture the overall face when an obstacle merely blocks a peripheral area around the central area. In this case, the obstacle detected between the camera and the face does not interfere with capturing of the face. Processing performed in response to such obstacle detection (e.g., outputting an alarm) thus places an additional burden on the apparatus and provides incorrect information to the user.
Patent Literature 1: Japanese Unexamined Patent Application Publication No. 2013-205675
Patent Literature 2: Japanese Unexamined Patent Application Publication No. 2009-296355
One or more aspects of the present invention are directed to an imaging apparatus that accurately detects an obstacle interfering with image capturing as distinguishable from an obstacle not interfering with image capturing.
An imaging apparatus according to one aspect of the present invention includes an imaging unit that captures an image of a subject, an image processor that processes the image captured by the imaging unit, and an obstacle detector that detects an obstacle between the imaging unit and the subject based on the captured image processed by the image processor. The image processor divides the image captured by the imaging unit into a plurality of sections, and divides the captured image into a plurality of blocks each including a predetermined number of sections. The obstacle detector checks an obstructed state of each section in each of the blocks, and the obstacle detector detects the obstacle when the obstructed state of each section in at least one block interferes with image capturing of the subject.
In this aspect of the present invention, the obstructed state of each block is checked to detect any obstacle interfering with image capturing between the imaging unit and the subject. Such checking detects no obstacle when an obstacle between the imaging unit and the subject does not interfere with image capturing. This enables an obstacle interfering with image capturing to be accurately detected as distinguishable from an obstacle not interfering with image capturing.
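Purely for illustration, the relationship among the imaging unit, the image processor, and the obstacle detector in this aspect may be sketched as follows. This is a minimal Python sketch, not part of the invention; the class and method names are assumptions, and the 4-by-4 section grid and block grouping follow the example used in the embodiment described below.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple
import numpy as np

@dataclass
class ProcessedImage:
    sections: Dict[int, np.ndarray]       # section number -> luminance values
    blocks: Dict[str, Tuple[int, ...]]    # block name -> section numbers

class ImageProcessor:
    """Divides a captured image into sections and groups them into blocks."""
    GRID = 4                              # assumed 4 x 4 section grid
    BLOCKS = {"A": (1, 2, 5, 6), "B": (9, 10, 13, 14),
              "C": (3, 4, 7, 8), "D": (11, 12, 15, 16)}

    def process(self, image: np.ndarray) -> ProcessedImage:
        h, w = image.shape
        sh, sw = h // self.GRID, w // self.GRID
        sections = {}
        for n in range(1, self.GRID * self.GRID + 1):
            row, col = divmod(n - 1, self.GRID)
            sections[n] = image[row * sh:(row + 1) * sh, col * sw:(col + 1) * sw]
        return ProcessedImage(sections, self.BLOCKS)

class ObstacleDetector:
    """Detects an obstacle when every section of at least one block is obstructed."""
    def __init__(self, section_is_obstructed: Callable[[np.ndarray], bool]):
        self.section_is_obstructed = section_is_obstructed

    def detect(self, processed: ProcessedImage) -> bool:
        return any(
            all(self.section_is_obstructed(processed.sections[n]) for n in secs)
            for secs in processed.blocks.values()
        )
```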
In the above aspect of the present invention, the obstacle detector may detect the obstacle when all the sections in at least one block are obstructed.
In the above aspect of the present invention, each of the blocks may include a part of a specific area containing a specific part of the subject in the captured image.
In this case, the obstacle detector may detect the obstacle when at least one section in the specific area is obstructed.
The obstacle detector may detect no obstacle when all the sections in the specific area are unobstructed.
In the above aspect of the present invention, the specific part may be a face of the subject, and the specific area may be a central area of the captured image.
In the above aspect of the present invention, the obstacle detector may detect the obstacle when all the sections in at least one block are obstructed, and may detect no obstacle when a predetermined section in each of the blocks is unobstructed.
In the above aspect of the present invention, the obstacle detector may compare luminance of a plurality of pixels included in one section with a threshold pixel by pixel, and the obstacle detector may determine that a section including at least a predetermined number of pixels with a result of comparison satisfying a predetermined condition is an obstructed section.
In the above aspect of the present invention, the image processor may define an area excluding side areas of the captured image as a valid area, and the image processor may divide the captured image within the valid area into a plurality of sections.
In the above aspect of the present invention, the obstacle detector may output a notification signal for removing the obstacle when detecting the obstacle.
In the above aspect of the present invention, the imaging unit may be installed in a vehicle to capture a face image of an occupant of the vehicle, and the obstacle detector may detect an obstacle between the imaging unit and the face of the occupant.
The imaging apparatus according to the above aspects of the present invention accurately detects an obstacle interfering with image capturing as distinguishable from an obstacle not interfering with image capturing.
Embodiments of the present invention will be described with reference to the drawings. The same or corresponding components are given the same reference numerals in the figures. In the example below, the present invention is applied to an on-vehicle driver monitor.
The configuration of the driver monitor will now be described with reference to the drawings.
The imaging unit 1 is a camera, and includes an imaging device 11 and a light-emitting device 12. The imaging device 11 is, for example, a complementary metal-oxide-semiconductor (CMOS) image sensor, and captures an image of the face of a driver 53, who is the subject seated in a seat 52. The light-emitting device 12 is, for example, a light-emitting diode (LED) that emits near-infrared light, and illuminates the face of the driver 53 with the near-infrared light.
The image processor 2 processes an image captured by the imaging unit 1. The processing will be described in detail later. The driver state determiner 3 determines the state of the driver 53 (e.g., falling-asleep or being distracted) based on the image processed by the image processor 2. The obstacle detector 4 detects an obstacle between the imaging unit 1 and the driver 53 based on the image processed by the image processor 2 with a method described later.
The signal output unit 5 outputs a signal based on the determination results from the driver state determiner 3 and a signal based on the detection results from the obstacle detector 4. The output signals are transmitted to an electronic control unit (ECU) (not shown) installed in the vehicle 50 through a Controller Area Network (CAN).
Although the functions of the image processor 2, the driver state determiner 3, and the obstacle detector 4 in
A method used by the obstacle detector 4 for detecting the obstacle Z will now be described.
The captured image P in this example includes 640 by 480 pixels. The captured image P is first divided into 16 sections Y. More specifically, the area excluding the side areas (the solid filled parts in the drawing) of the captured image P is defined as a valid area, and the valid area is divided into the 16 sections Y. The side areas are excluded because an obstacle captured within these areas does not interfere with capturing of a face image. A single section Y includes multiple pixels m.
The captured image P is then divided into four blocks A, B, C, and D, each of which includes four of the 16 sections Y. For convenience, the 16 sections Y are individually given numbers #1 to #16 as shown in the drawings.
Block A includes four sections #1, #2, #5, and #6. Block B includes four sections #9, #10, #13, and #14. Block C includes four sections #3, #4, #7, and #8. Block D includes four sections #11, #12, #15, and #16.
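Merely as an illustration of the division described above, the valid area, the sections #1 to #16, and the blocks A to D may be expressed as the following Python sketch. The width of the excluded side areas and all identifiers are assumptions, since the embodiment does not specify them.

```python
# Illustrative only: dividing a 640-by-480 captured image into a valid area,
# 16 sections, and 4 blocks A to D. SIDE_MARGIN is an assumed example width
# of each excluded side area.
IMAGE_W, IMAGE_H = 640, 480
SIDE_MARGIN = 80                 # assumed width (pixels) of each side area
GRID = 4                         # 4 x 4 = 16 sections within the valid area

VALID_X0, VALID_X1 = SIDE_MARGIN, IMAGE_W - SIDE_MARGIN
SECTION_W = (VALID_X1 - VALID_X0) // GRID    # 120 pixels in this example
SECTION_H = IMAGE_H // GRID                  # 120 pixels in this example

def section_rect(n: int):
    """Return (x0, y0, x1, y1) of section #n (n = 1..16, numbered row by row)."""
    row, col = divmod(n - 1, GRID)
    x0 = VALID_X0 + col * SECTION_W
    y0 = row * SECTION_H
    return (x0, y0, x0 + SECTION_W, y0 + SECTION_H)

# Blocks A to D group the sections as in the embodiment.
BLOCKS = {
    "A": (1, 2, 5, 6),       # upper left of the valid area
    "B": (9, 10, 13, 14),    # lower left
    "C": (3, 4, 7, 8),       # upper right
    "D": (11, 12, 15, 16),   # lower right
}
```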
In the captured image P, a central area K is defined as an area containing the face of the driver 53, and each of the blocks A to D includes a part of the central area K.
To detect an obstacle, the obstructed state of each of the four sections included in one block is checked first. More specifically, the obstacle detector 4 compares the luminance of every pixel m included in each section with a threshold pixel by pixel, and extracts each pixel with a comparison result satisfying a predetermined condition, or more specifically, each pixel with a luminance value higher than the threshold. An obstacle Z close to the imaging unit 1 is illuminated intensely with the near-infrared light from the light-emitting device 12 and thus appears as a high-luminance area in a captured image Q. The obstacle detector 4 determines that a section including at least a predetermined number of the extracted pixels is an obstructed section.
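A check of this kind for a single section may be sketched as follows; the threshold and the required proportion of bright pixels are assumed example values, since the embodiment leaves them as design parameters.

```python
import numpy as np

LUMINANCE_THRESHOLD = 200    # assumed example threshold (8-bit luminance scale)
MIN_BRIGHT_RATIO = 0.8       # assumed proportion of above-threshold pixels

def section_is_obstructed(section_pixels: np.ndarray) -> bool:
    """Return True when at least a predetermined number of pixels in the
    section have a luminance value higher than the threshold."""
    bright = np.count_nonzero(section_pixels > LUMINANCE_THRESHOLD)
    return bright >= MIN_BRIGHT_RATIO * section_pixels.size
```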
Then, the obstacle detector 4 determines whether the obstructed states of the sections in each block A to D interfere with image capturing of the subject. In the present embodiment, the obstacle detector 4 determines whether all the four sections in each block are obstructed.
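The block-level determination of the present embodiment, in which a block is treated as interfering with image capturing only when all four of its sections are obstructed, may then be sketched as follows (illustrative names only).

```python
def block_is_fully_obstructed(section_obstructed: dict, block_sections: tuple) -> bool:
    """section_obstructed maps section numbers (1..16) to True when obstructed;
    block_sections lists the section numbers of one block, e.g., (1, 2, 5, 6)."""
    return all(section_obstructed[n] for n in block_sections)

# Example: block A is fully obstructed only when sections #1, #2, #5, and #6
# are all obstructed.
```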
In some of the examples shown in the drawings, the obstacle Z obstructs all the sections included in at least one of the blocks A to D. In any of these examples, the obstacle detector 4 determines that the obstructed state interferes with capturing of a face image, and thus detects the obstacle Z.
In other examples shown in the drawings, the obstacle Z obstructs some sections, but no block has all of its sections obstructed. In any of these examples, the obstacle Z does not interfere with capturing of a face image, and thus the obstacle detector 4 detects no obstacle.
In step S1, the imaging unit 1 captures an image of the driver 53, and the image processor 2 obtains the captured image P. In step S2, the image processor 2 obtains luminance information about each pixel m in the captured image P. In step S3, the image processor 2 divides the captured image P into the sections #1 to #16 and the blocks A to D described above.
In step S4, the obstacle detector 4 checks the obstructed state of each section (#1 to #16) based on the luminance information obtained in step S2 and the above threshold. In step S5, the obstacle detector 4 checks the obstructed state of each block (A to D) based on the check results of each section.
In step S6, the obstacle detector 4 determines whether all the sections included in each block are obstructed. When all the sections are obstructed (Yes in step S6), the processing advances to step S7 to set an obstacle flag. When one or more sections are unobstructed (No in step S6), the processing advances to step S8 without performing the processing in step S7.
In step S8, the obstacle detector 4 determines whether the obstructed state of every block has been checked. When any block has not been checked (No in step S8), the processing returns to step S5 to check the obstructed state of the next block. When the obstructed state of every block has been checked (Yes in step S8), the processing advances to step S9.
In step S9, the obstacle detector 4 determines whether an obstacle flag is set. When an obstacle flag has been set in step S7 (Yes in step S9), the processing advances to step S10 to detect an obstacle. In subsequent step S11, the obstacle detector 4 outputs a notification signal for removing the obstacle. The notification signal is transmitted from the signal output unit 5 to the ECU, which then, for example, outputs an alarm prompting the driver 53 to remove the obstacle Z.
When no obstacle flag is set (No in step S9), the processing advances to step S12 to detect no obstacle, and then ends without performing step S11.
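Solely for illustration, the flow of steps S1 to S12 may be summarized in the following Python sketch. The threshold and the bright-pixel criterion are assumptions, steps S1 to S3 are assumed to have produced the luminance array of the valid area, and the notification signal of step S11 is represented by the return value rather than by an actual CAN message.

```python
import numpy as np

GRID = 4
LUMINANCE_THRESHOLD = 200      # assumed example value
MIN_BRIGHT_RATIO = 0.8         # assumed example value
BLOCKS = {"A": (1, 2, 5, 6), "B": (9, 10, 13, 14),
          "C": (3, 4, 7, 8), "D": (11, 12, 15, 16)}

def detect_obstacle(valid_area: np.ndarray) -> bool:
    """valid_area: 2-D luminance array of the valid area of one captured image.
    Returns True when an obstacle interfering with image capturing is detected."""
    h, w = valid_area.shape
    sh, sw = h // GRID, w // GRID

    # Step S4: check the obstructed state of each section #1 to #16.
    section_obstructed = {}
    for n in range(1, GRID * GRID + 1):
        row, col = divmod(n - 1, GRID)
        section = valid_area[row * sh:(row + 1) * sh, col * sw:(col + 1) * sw]
        bright = np.count_nonzero(section > LUMINANCE_THRESHOLD)
        section_obstructed[n] = bright >= MIN_BRIGHT_RATIO * section.size

    # Steps S5 to S8: check every block and set an obstacle flag when all the
    # sections of at least one block are obstructed (steps S6 and S7).
    obstacle_flag = any(
        all(section_obstructed[n] for n in sections)
        for sections in BLOCKS.values()
    )

    # Steps S9 to S12: when the flag is set, the caller outputs the
    # notification signal of step S11; otherwise no obstacle is detected.
    return obstacle_flag
```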
In the present embodiment, as described above, the image P captured by the imaging unit 1 is divided into the sections #1 to #16, and the captured image P is also divided into the blocks A to D each including a predetermined number of sections (four in this example). The obstructed state of each section included in each of the blocks A to D is then checked.
The obstructed state of each block A to D is checked to detect an obstacle when any obstacle Z interfering with image capturing is between the imaging unit 1 and the face of the driver 53. Such checking detects no obstacle when an obstacle Z between the imaging unit 1 and the face of the driver 53 does not interfere with capturing of a face image. Thus, an obstacle Z interfering with image capturing can be accurately detected as distinguishable from an obstacle Z not interfering with image capturing.
In the present embodiment, the blocks A to D each include a part of the central area K. Thus, when all the sections in at least one block are obstructed, a part (or the entirety) of the central area K is also obstructed, allowing easy and reliable detection of an obstacle interfering with image capturing.
In the present embodiment, as described with reference to
In the present embodiment, when the presence of an obstacle is detected, that is, when an obstacle Z interfering with image capturing is between the imaging unit 1 and the face, a notification signal for removing the obstacle Z is output. In response to the signal, the driver 53 can quickly find and remove the obstacle Z.
In contrast, the flowchart of
The present invention is not limited to the above embodiment and may be implemented in the various other forms described below.
In the above embodiment,
In the above embodiment, the captured image P is divided into 16 sections, but the number of sections is not limited to 16 and may be determined as appropriate.
In the above embodiment, the captured image P is divided into four blocks A to D, but the number of blocks and the number of sections included in each block are not limited to those in the above example and may be changed as appropriate.
In the above embodiment, the central area K is defined as a square area, but the shape and size of the central area K are not limited to this example and may be determined as appropriate.
In the above embodiment, the specific area containing the specific part of the subject is the central area K at the center of the captured image P. However, the specific area may be shifted from the center of the captured image P to any predetermined position depending on the subject.
Although the imaging apparatus according to the above embodiment of the present invention is the driver monitor 100 installed in a vehicle, the present invention may also be applied to imaging apparatuses used for applications other than vehicles.