This application claims priority to Japanese Patent Application No. 2018-033132 filed on Feb. 27, 2018, the entire disclosure of which is incorporated herein by reference.
The present invention relates to an occupant monitoring apparatus for monitoring an occupant with a camera installed in a vehicle, and particularly to a technique for measuring the spatial position of a predetermined site of the occupant.
To perform predetermined vehicle control in accordance with a driver's face position, the spatial position of the face is detected in the vehicle. For example, the distance from a reference position (e.g., a camera position) to the face of the driver can differ between when the driver is awake and looking straight ahead and when the driver is falling asleep and has his or her head down. This distance can be detected as the driver's face position for determining whether the driver is awake or falling asleep. A vehicle incorporating a head-up display (HUD) system may detect the face position (in particular, eye position) of the driver for optimally displaying an image at the eye position in front of the driver's seat.
A driver monitor is known for detecting the face of a driver. The driver monitor monitors the driver's condition based on an image of the driver's face captured by a camera, and performs predetermined control, such as generating an alert, if the driver is falling asleep or engaging in distracted driving. The face image obtained by the driver monitor provides information about the face orientation or gaze direction, but contains no information about the spatial position of the face (the distance from a reference position).
The spatial position of the face may be measured by, for example, two cameras (or a stereo camera), a camera in combination with patterned light illuminating a subject, or an ultrasonic sensor. The stereo camera includes multiple cameras and increases the cost. The method using the patterned light involves a single camera, but uses a dedicated optical system. The ultrasonic sensor increases the number of components and the cost, and can further yield a distance to an indefinite point on the subject, which is likely to deviate from the detection result of the driver monitor.
Patent Literature 1 describes a driver monitoring system including a camera installed on a steering wheel of a vehicle, in which an image of a driver captured by the camera is corrected into an erect image based on the steering angle. Patent Literature 2 describes a face orientation detection apparatus for detecting the face orientation of a driver using two cameras installed on the instrument panel of a vehicle. However, neither Patent Literature 1 nor 2 describes a technique for measuring the face position with the camera(s) or addresses the above issue.
Patent Literature 1: Japanese Unexamined Patent Application Publication No. 2007-72774
Patent Literature 2: Japanese Unexamined Patent Application Publication No. 2007-257333
One or more aspects of the present invention are directed to an occupant monitoring apparatus that measures the spatial position of a predetermined site of an occupant with a single camera.
The occupant monitoring apparatus according to one aspect of the present invention includes a camera that captures an image of an occupant of a vehicle, an image processor that processes the image of the occupant captured by the camera, and a position calculator that calculates a spatial position of a predetermined site of the occupant based on the image processed by the image processor. The camera is installed on a steering wheel of the vehicle away from a rotational shaft to be rotatable together with the steering wheel. The image processor processes two images captured by the camera at two different positions as the camera is rotated together with the steering wheel. The position calculator calculates the spatial position of the predetermined site of the occupant based on the two images processed by the image processor.
The occupant monitoring apparatus according to the above aspect includes the camera for capturing an image of the occupant installed on the steering wheel away from the rotational shaft. The camera, rotatable together with the steering wheel, can provide two images captured at two different positions. The two captured images are processed by the image processor to be used for calculating the spatial position of the predetermined site of the occupant. The occupant monitoring apparatus thus eliminates the use of multiple cameras or a dedicated optical system, and is simple and inexpensive.
In the apparatus according to the above aspect, the image processor may include a face detector that detects a face of the occupant from the images captured by the camera, and the position calculator may calculate a distance from the camera to a specific part of the face as a spatial position of the face.
In the apparatus according to the above aspect, the two images are, for example, a first captured image captured by the camera rotated by a first rotational angle to a first position and a second captured image captured by the camera rotated by a second rotational angle to a second position. In this case, the image processor generates a first rotated image by rotating the first captured image by a predetermined angle, and a second rotated image by rotating the second captured image by a predetermined angle. The position calculator calculates the spatial position of the predetermined site based on a baseline length that is a linear distance between the first position and the second position, a parallax obtained from the first rotated image and the second rotated image, and a focal length of the camera.
More specifically, the spatial position of the predetermined site may be calculated, for example, in the manner described below. The image processor generates the first rotated image by rotating the first captured image in a first direction by an angle |θ2−θ1|/2, and generates the second rotated image by rotating the second captured image in a second direction opposite to the first direction by an angle |θ2−θ1|/2. The position calculator calculates the baseline length as B=2·L·sin(|θ2−θ1|/2), and calculates the spatial position of the predetermined site as D=B·(f/δ). In the above expressions and formulas, L is a distance from the rotational shaft of the steering wheel to the camera, θ1 is the first rotational angle, θ2 is the second rotational angle, B is the baseline length, δ is the parallax, f is the focal length, and D is a distance from the camera to the predetermined site to define the spatial position of the predetermined site.
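The two formulas above can be checked with a short numerical sketch. The values below (camera 0.15 m from the rotational shaft, a 30° turn of the steering wheel, a focal length of 800 pixels, and a parallax of 60 pixels) are illustrative assumptions, not values from this application.

```python
import math

def baseline_length(L_m, theta1_deg, theta2_deg):
    """Baseline length B = 2 * L * sin(|theta2 - theta1| / 2)."""
    half = math.radians(abs(theta2_deg - theta1_deg)) / 2.0
    return 2.0 * L_m * math.sin(half)

def distance_from_parallax(B_m, f_px, delta_px):
    """Distance D = B * (f / delta) of the pinhole stereo model."""
    return B_m * f_px / delta_px

# Illustrative values only: L = 0.15 m, wheel turned from 0 to 30 degrees,
# f = 800 px, parallax = 60 px.
B = baseline_length(0.15, 0.0, 30.0)        # about 0.0776 m
D = distance_from_parallax(B, 800.0, 60.0)  # about 1.035 m
```

The sketch simply chains formula (1) into formula (2): a wider swing of the steering wheel lengthens the baseline B, which in turn makes the same parallax correspond to a larger distance D.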
The apparatus according to the above aspect may further include a rotational angle detector that detects a rotational angle of the camera. The rotational angle detector may detect the first rotational angle and the second rotational angle based on the first captured image and the second captured image obtained from the camera.
In some embodiments, the rotational angle detector may detect the first rotational angle and the second rotational angle based on output from a posture sensor that detects a posture of the camera.
In some embodiments, the rotational angle detector may detect the first rotational angle and the second rotational angle based on output from a steering angle sensor that detects a steering angle of the steering wheel.
In the apparatus according to the above aspect, the position calculator may calculate the spatial position of the predetermined site based on the two images when the camera is rotated by at least a predetermined angle within a predetermined period between the two different positions.
The occupant monitoring apparatus according to the above aspect of the present invention detects the spatial position of a predetermined site of an occupant with a single camera.
An occupant monitoring apparatus according to a first embodiment of the present invention will now be described with reference to the drawings. The structure of the occupant monitoring apparatus will be described first with reference to
As shown in
As shown in
The image processor 2 includes an image memory 21, a face detector 22, a first image rotator 23, a second image rotator 24, and a rotational angle detector 25. The image memory 21 temporarily stores images captured by the camera 1. The face detector 22 detects the face of the driver from the image captured by the camera 1, and extracts feature points in the face (e.g., eyes). Methods for face detection and feature point extraction are known, and will not be described in detail.
The first image rotator 23 and the second image rotator 24 read images G1 and G2 (described later) captured by the camera 1 from the image memory 21, and rotate the captured images G1 and G2. The rotational angle detector 25 detects rotational angles θ1 and θ2 (described later) of the camera 1 based on the images captured by the camera 1 obtained from the image memory 21. The rotational angles θ1 and θ2 detected by the rotational angle detector 25 are provided to the first image rotator 23 and the second image rotator 24, and the first and second image rotators 23 and 24 then rotate the captured images G1 and G2 by predetermined angles based on the rotational angles θ1 and θ2. This rotation of the images will be described in detail later.
The position calculator 3 calculates the distance D from the camera 1 to the face 41 shown in
The driver state determiner 4 detects, for example, eyelid movements and a gaze direction based on the facial information obtained from the face detector 22, and determines the state of the driver 40 in accordance with the detection result. For example, when the eyelids are detected as being closed for longer than a predetermined duration, the driver 40 is determined to be falling asleep. When the gaze is detected as being aside, the driver 40 is determined to be engaging in distracted driving. The output of the driver state determiner 4 is provided to the ECU through the CAN.
The control unit 5, which includes a central processing unit (CPU), centrally controls the operation of the occupant monitoring apparatus 100. The control unit 5 is thus communicably connected to each unit included in the occupant monitoring apparatus 100 using signal lines (not shown). The control unit 5 also communicates with the ECU through the CAN.
The storage unit 6, which includes a semiconductor memory, stores, for example, programs for implementing the control unit 5 and associated control parameters. The storage unit 6 also includes a storage area for temporarily storing various data items.
The face detector 22, the first image rotator 23, the second image rotator 24, the rotational angle detector 25, the position calculator 3, and the driver state determiner 4 are each implemented by software, although they are shown as blocks in
The principle for measuring the spatial position of the face with the occupant monitoring apparatus 100 will now be described.
As shown in
The apparatus according to one or more embodiments of the present invention uses two images captured by the camera 1 at two different positions to calculate the distance D shown in
The procedure for measuring a distance based on motion stereo performed by the apparatus according to one or more embodiments of the present invention will now be described. As described above, the camera 1 first captures two images at two different positions. The two images include the image G1 shown in
The two captured images G1 and G2 are then each rotated by a corresponding predetermined angle. More specifically, as shown in
The rotated image H1 is the captured image G1 rotated to the mid-angle between the rotational angles of the images G1 and G2. The rotated image H2 is likewise the captured image G2 rotated to that mid-angle. The rotated images H1 and H2 thus have the same inclination on a screen. As described above, the captured images G1 and G2 are rotated in opposite directions by the angle |θ2−θ1|/2 to generate the two images H1 and H2 with the same posture, equivalent to a pair of images captured by a typical stereo camera.
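The counter-rotation described above can be sketched with the coordinates of a single feature point rather than full images (a hypothetical pure-NumPy sketch, not the embodiment's implementation): the second image appears rotated by θ2−θ1 relative to the first, so rotating the two by equal half-angles in opposite directions brings them to the same orientation.

```python
import numpy as np

def rot2d(deg):
    """2-D rotation matrix for an angle in degrees."""
    r = np.deg2rad(deg)
    return np.array([[np.cos(r), -np.sin(r)],
                     [np.sin(r),  np.cos(r)]])

theta1, theta2 = 0.0, 30.0
half = abs(theta2 - theta1) / 2.0

# A feature point as it appears in each captured image; the frame of G2
# is rotated by (theta2 - theta1) relative to that of G1.
p_g1 = np.array([10.0, 5.0])
p_g2 = rot2d(theta2 - theta1) @ p_g1

# Counter-rotate by the half-angle in opposite directions.
p_h1 = rot2d(+half) @ p_g1
p_h2 = rot2d(-half) @ p_g2
# p_h1 and p_h2 now coincide: H1 and H2 share the same inclination.
```

The same cancellation holds for every pixel, which is why the rotated images H1 and H2 can be treated as a stereo pair.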
In the present embodiment, the captured images G1 and G2 are directly rotated to generate the rotated images H1 and H2. In some embodiments, as shown in
The obtained rotated images H1 and H2 will now be used to determine the distance based on stereo vision. For the distance determination, a baseline length, which is the linear distance between two positions of the camera, will first be obtained. The baseline length will be described with reference to
In
B=2·L·sin(|θ2−θ1|/2) (1)
The distance L is known, and thus the baseline length B is obtained by detecting the angles θ1 and θ2. The angles θ1 and θ2 are detected from the captured images G1 and G2 in
After obtaining the baseline length B, the distance from the camera 1 to a subject is determined in accordance with typical distance measurement based on stereo vision. The distance determination will be described in detail with reference to
Images of a subject Y captured by the cameras 1a and 1b are formed on the imaging surfaces of the image sensors 11a and 11b. The images of the subject Y include images of a specific part of the subject Y formed at a position P1 on the imaging surface of the first camera 1a and at a position P2 on the imaging surface of the second camera 1b. The position P2 is shifted by a parallax δ from a position P1′, which corresponds to the position P1 for the first camera 1a. Geometrically, f/δ=D/B, where f indicates the focal length of each of the cameras 1a and 1b, and D indicates the distance from the camera 1a or 1b to the subject Y. The distance D is thus calculated with the formula below.
D=B·f/δ (2)
In the formula (2), the baseline length B is calculated with the formula (1). The focal length f is known. Thus, the distance D can be calculated by obtaining the parallax δ. The parallax δ may be obtained through known stereo matching. For example, the image captured by the second camera 1b is searched for an area having the same luminance distribution as a specific area in the image captured by the first camera 1a, and the difference between those two areas is obtained as the parallax.
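The area search described above can be sketched as a one-dimensional sum-of-absolute-differences (SAD) scan along an image row. This is a generic block-matching sketch with synthetic data, not the apparatus's actual matching procedure.

```python
import numpy as np

def find_parallax(left_row, right_row, x, win, max_shift):
    """Return the shift that best aligns left_row[x:x+win] with an
    area of right_row, by minimizing the sum of absolute differences."""
    patch = left_row[x:x + win]
    best_shift, best_cost = 0, np.inf
    for s in range(max_shift + 1):
        cand = right_row[x - s:x - s + win]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# Synthetic 1-D rows: the second view shows the subject shifted by 7 px.
left = np.zeros(100)
left[40:50] = np.arange(10.0)      # a distinctive luminance pattern
right = np.roll(left, -7)          # same pattern, 7 px to the left
delta = find_parallax(left, right, 40, 10, 20)   # -> 7
```

In practice the parallax δ found this way is substituted directly into formula (2) together with the baseline length B and the focal length f.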
The apparatus according to one or more embodiments of the present invention detects the parallax δ between the rotated images H1 and H2 in
In step S1, the camera 1 captures images. The images captured by the camera 1 are stored into the image memory 21. In step S2, the rotational angle detector 25 detects the rotational angle of the camera 1 that is rotated together with the steering wheel 51 from the images G1 and G2 (
In step S6, the control unit 5 determines whether distance measurement based on motion stereo is possible using the data stored in step S5. Measuring the distance to a subject based on motion stereo uses images captured by the camera 1 at two positions that are apart from each other by at least a predetermined distance. Additionally, motion stereo uses two images capturing a subject with no movement. Thus, two images captured at a long time interval, which may capture a moving subject, can cause inaccurate distance measurement. In step S6, the control unit 5 thus determines that distance measurement based on motion stereo is possible when the camera 1 is rotated by at least a predetermined angle (e.g., at least an angle of 10°) within a predetermined period (e.g., five seconds) between two different positions. When the camera 1 is not rotated by at least the predetermined angle within the predetermined period, the control unit 5 determines that distance measurement based on motion stereo is not possible.
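The feasibility check in step S6 can be sketched as follows. The thresholds (10° within five seconds) come from the example in the description; the sample format and function name are assumptions for illustration.

```python
MIN_ANGLE_DEG = 10.0   # example threshold from the description
MAX_INTERVAL_S = 5.0   # example period from the description

def motion_stereo_possible(samples):
    """samples: list of (timestamp_s, camera_angle_deg), newest last.
    Distance measurement based on motion stereo is judged possible when
    some earlier sample, captured within MAX_INTERVAL_S of the newest,
    differs from it by at least MIN_ANGLE_DEG."""
    if len(samples) < 2:
        return False
    t_now, a_now = samples[-1]
    return any(t_now - t <= MAX_INTERVAL_S and abs(a_now - a) >= MIN_ANGLE_DEG
               for t, a in samples[:-1])

ok = motion_stereo_possible([(0.0, 0.0), (2.0, 30.0)])   # 30 deg in 2 s
bad = motion_stereo_possible([(0.0, 0.0), (2.0, 3.0)])   # only 3 deg
```

The angle condition guarantees a usable baseline length, while the time condition limits how much the subject can have moved between the two captures.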
When distance measurement is determined possible in step S6 (Yes in step S6), the processing advances to step S7. In step S7, the image rotators 23 and 24 rotate the latest image and the image preceding the latest image by N seconds (N≤5) by the angle |θ2−θ1|/2, where |θ2−θ1|≥10°. For example, the captured image G1 in
In step S8, the position calculator 3 calculates the baseline length B with the formula (1) based on the rotational angles θ1 and θ2 obtained from the storage unit 6. In step S9, the position calculator 3 calculates the parallax δ based on the rotated images H1 and H2 (
When the distance measurement based on motion stereo is determined impossible in step S6 (No in step S6), the processing advances to step S12. In step S12, the distance D to the face is corrected based on the change in the size of the face in the captured images. More specifically, the distance in the image (the number of pixels) between any two feature points in the face is stored together with the distance D calculated in step S10 when the distance measurement based on motion stereo is possible (Yes in step S6). The two feature points are, for example, the centers of the two eyes. In step S12, the distance previously calculated in step S10 is corrected in accordance with the amount of change in the distance between the two feature points from the previous image to the current image. More specifically, when m is the distance (the number of pixels) between the feature points and Dx is the calculated distance to the face in the previous step S10, and n is the distance (the number of pixels) between the feature points in the current step S12, the current distance Dy to the face is calculated as Dy=Dx·(m/n), which is the corrected value for the distance to the face. For example, when m is 100 pixels, Dx is 40 cm, and n is 95 pixels, the corrected value for the distance is Dy=40 (cm)×100/95=42.1 (cm). As the face moves away from the camera 1 to reduce the size of the face in the image, the distance between the feature points on the image is reduced (n<m). This increases the calculated value for the distance from the camera 1 to the face (Dy>Dx).
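The correction in step S12 reduces to a single scaling step, shown below with the numbers from the paragraph above (100 px and 40 cm stored previously, 95 px measured now).

```python
def corrected_distance(Dx_cm, m_px, n_px):
    """Dy = Dx * (m / n): scale the last stereo-measured distance by the
    change in the pixel distance between two facial feature points
    (e.g., the centers of the two eyes)."""
    return Dx_cm * (m_px / n_px)

# Values from the description: 100 px at 40 cm previously, 95 px now.
Dy = corrected_distance(40.0, 100, 95)   # about 42.1 cm
```

As the face recedes, n shrinks relative to m and Dy grows, matching the behavior described above; the correction only bridges intervals in which motion stereo is unavailable.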
The occupant monitoring apparatus according to the above embodiment includes the camera 1 installed on the steering wheel 51 away from the rotational shaft 52. The camera 1 rotatable together with the steering wheel 51 can thus provide two images G1 and G2 captured at two different positions. The apparatus then rotates the captured images G1 and G2 to generate the rotated images H1 and H2, and uses the parallax δ obtained from the rotated images H1 and H2 to calculate the distance D from the camera 1 to a specific part of the face 41 (the eyes in the above example). The occupant monitoring apparatus according to the above embodiment measures the spatial position of the face with a simple structure without multiple cameras or a dedicated optical system.
In the occupant monitoring apparatus 100 in
The occupant monitoring apparatus 200 in
In the occupant monitoring apparatus 100 in
In the occupant monitoring apparatus 300 in
The occupant monitoring apparatuses 300 and 400 in
As in the apparatus in
In addition to the above embodiments, the present invention may be variously embodied in the manner described below.
In the above embodiments, the camera 1 is installed on the steering wheel 51 at the position shown in
In the above embodiments, the captured image G1 is rotated clockwise by the angle |θ2−θ1|/2, and the captured image G2 is rotated counterclockwise by the angle |θ2−θ1|/2 (
In the above embodiments, the distance D from the camera 1 to the face 41 is calculated based on the eyes as the specific part of the face 41. In some embodiments, the specific part may be other than the eyes, and may be the nose, mouth, ears, or eyebrows. The specific part is not limited to a feature point in the face, such as the eyes, nose, mouth, ears, or eyebrows, and may be any other point. The site to be the subject of the distance measurement according to one or more embodiments of the present invention is not limited to the face, and may be other parts such as the head and the neck.
In the above embodiments, the distance D from the camera 1 to the face 41 is defined as the spatial position of the face 41. In some embodiments, the spatial position may be defined by coordinates, rather than by the distance.
In the above embodiments, the occupant monitoring apparatuses 100 to 400 each include the driver state determiner 4. In some embodiments, the driver state determiner 4 may be external to the occupant monitoring apparatuses 100 to 400.