The present invention relates to an in-vehicle camera control device to control an in-vehicle camera.
A monitoring device for monitoring outside a vehicle is known. The monitoring device controls an in-vehicle camera, which is provided in the vehicle, for capturing a scene outside the vehicle through a vehicle window. The monitoring device for monitoring outside the vehicle disclosed in Patent Literature 1 has a failsafe function. The monitoring device determines whether a captured image has an image blur by using a difference in luminance distribution characteristics between a case in which the captured image is a normal image and a case in which the captured image has the image blur. When the captured image has the image blur, the failsafe function is activated to temporarily stop monitoring.
Above-mentioned Patent Literature 1 discloses that the monitoring device for monitoring outside the vehicle can improve image capturing conditions by activating wipers or a defroster when the image blur is caused by deterioration of the image capturing conditions due to dirt on a vehicle window or windshield fog etc. After a normal image is obtained again, the monitoring device returns to a normal monitoring state from a fail state.
However, when the monitoring device fails to improve the image capturing conditions by activating the wipers or the defroster, there exists a problem that the monitoring device cannot return to the normal monitoring state from the fail state.
The present invention is made in order to solve the above-mentioned problem, and it is therefore an object of the present invention to provide an in-vehicle camera control device with which it is possible to obtain a normal image even when wipers etc. fail to improve the image capturing conditions.
An in-vehicle camera control device in accordance with the present invention controls an in-vehicle camera that is movably disposed in a vehicle for capturing a scene outside the vehicle through a vehicle window. The in-vehicle camera control device includes an evaluator for evaluating image capturing conditions with reference to an image captured by the in-vehicle camera, an image capture suitability determiner for determining whether to continue image-capturing in accordance with an evaluation result generated by the evaluator, and a drive controller for outputting a drive signal to a drive unit of the in-vehicle camera to move the in-vehicle camera when the image capture suitability determiner determines that the in-vehicle camera is unsuitable for the image-capturing.
According to the present invention, the in-vehicle camera is moved when the image capturing conditions are determined to be unsuitable for the image-capturing. Thus, it is possible to move the in-vehicle camera to a position where a normal image can be captured when wipers etc. fail to improve the image capturing conditions.
Hereinafter, some embodiments will be explained in detail with reference to the drawings.
As shown in
The in-vehicle camera control device 1 is constituted by, for example, an Electronic Control Unit (ECU), and functions as each of an evaluator 11, an image capture suitability determiner 12, a driving assistance information generator 13, and a drive controller 14 by executing a program stored in an internal memory.
The in-vehicle camera 2 includes a camera body 21 and a drive unit 22. The camera body 21 has an imager such as a Charge Coupled Device (CCD). The drive unit 22 is constituted by, for example, a motor.
When dirt etc. is attached to a window glass, it obstructs a field of view of the in-vehicle camera 2 and is captured in an image. As a result, there exists a case in which the in-vehicle camera 2 fails to capture the scene outside the vehicle. Thus, the evaluator 11 evaluates “image capturing conditions”, that is, the evaluator 11 evaluates whether dirt is captured in the image. The evaluator 11 obtains a captured image from the camera body 21, and evaluates the image capturing conditions based on the luminance values of the captured image. Then, the evaluator 11 determines a movement direction of the camera body 21 based on the evaluation result. The evaluator 11 outputs, to the image capture suitability determiner 12, the evaluation result of the image capturing conditions and the camera movement information indicating the movement direction of the camera body 21.
The image capture suitability determiner 12 determines, based on the evaluation results generated by the evaluator 11, that image-capturing can be continued if dirt is not captured in the image, and that the image-capturing cannot be continued if dirt is captured in the image. The image capture suitability determiner 12 outputs, to the driving assistance information generator 13, the determination results of whether or not continuous image capturing is suitable. When the image capture suitability determiner 12 determines that the continuous image capturing is unsuitable, the image capture suitability determiner 12 also outputs, to the drive controller 14, the camera movement information, which is determined by the evaluator 11. The camera movement information is used for moving the in-vehicle camera 2 to a position where the image capturing is suitable.
When the image capture suitability determiner 12 determines that the image capturing is suitable, the driving assistance information generator 13 determines that normal driving assistance is executable in accordance with the captured image. In this case, the driving assistance information generator 13 generates information correlated with driving assistance based on the captured image obtained from the camera body 21, and outputs the information to devices including a head-up-display (HUD), a throttle control device, and a brake control device or the like. On the other hand, when the image capture suitability determiner 12 determines that the image capturing is unsuitable, the driving assistance information generator 13 determines that the normal driving assistance is inexecutable. In this case, the driving assistance information generator 13 neither generates nor outputs the driving assistance information (a fail state).
For example, when an obstacle or a person is detected based on the captured image, the driving assistance information generator 13 generates the driving assistance information to display an alert message on the HUD, or generates the driving assistance information for highlighting the obstacle or the like on the captured image. Further, for example, the driving assistance information generator 13 detects a vehicle ahead based on the captured image representing the scene ahead of the user's own vehicle, and generates the driving assistance information to control a throttle and a brake for the purpose of adaptive cruise control (i.e. auto cruise). Note that the captured image can be used not only for the driving assistance such as the auto cruise, but also for automatic driving control.
The drive controller 14 generates a drive signal to drive the drive unit 22 in accordance with the camera movement information transmitted from the image capture suitability determiner 12, and outputs the drive signal to the drive unit 22. The drive signal includes information such as the movement direction of the camera body 21 and a movement distance of the camera body 21.
The drive unit 22 moves the camera body 21 based on the drive signal received from the drive controller 14.
The camera body 21 captures the scene outside the vehicle through the window, and outputs the captured image to the in-vehicle camera control device 1.
As shown in
In the example shown in
Further, a plurality of the in-vehicle cameras 2 may be disposed so as to face one window. Alternatively, a plurality of the in-vehicle cameras 2 may be disposed so as to face a plurality of the windows, respectively.
In the above-mentioned example, the in-vehicle camera 2 is configured to be movable in the left-right direction, namely, horizontal direction. However, this embodiment is not limited to such a configuration. For example, the in-vehicle camera 2 may be configured to be movable in the up-down direction, namely, the vertical direction.
In
The evaluator 11 performs a binarization process on the captured image 101, and assigns the luminance value of zero (black) or one (white) to each pixel of the captured image 101 based on a luminance threshold value.
In the daytime, the evaluator 11 determines the luminance threshold value t such that the ratio of the number of white pixels in the captured image after the binarization to the number of black pixels in the captured image after the binarization becomes 5:5, as shown in
In the nighttime, since it is dark around the vehicle as compared to in the daytime, the luminance values in the captured image decrease in the nighttime. Thus, the evaluator 11 determines the luminance threshold value t such that the ratio of the number of white pixels in the captured image after the binarization to the number of black pixels in the captured image after the binarization becomes 3:7, as shown in
Since it is dim in the early morning, the luminance values in the captured image are lower than in the daytime but higher than in the nighttime. Thus, the evaluator 11, in the early morning, determines the luminance threshold value t such that the ratio of the number of white pixels in the captured image after the binarization to the number of black pixels in the captured image after the binarization becomes 4:6, as shown in
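The time-of-day dependent threshold selection described above can be sketched in Python. Choosing t so that a target fraction of pixels binarizes to white is equivalent to taking a quantile of the luminance values; the gradient test image and the function names below are illustrative, not part of the embodiment:

```python
def threshold_for_white_ratio(pixels, white_ratio):
    """Pick a luminance threshold t so that roughly `white_ratio`
    of the pixels end up white (1) after binarization.
    Equivalent to taking the (1 - white_ratio) quantile."""
    ordered = sorted(pixels)
    k = int(len(ordered) * (1.0 - white_ratio))
    return ordered[min(k, len(ordered) - 1)]

def binarize(pixels, t):
    # 1 (white) for luminance >= t, 0 (black) below it.
    return [1 if p >= t else 0 for p in pixels]

# Synthetic frame: a flat list of luminance values 0..255, repeated.
pixels = list(range(256)) * 100

# Target white:black ratios from the embodiment: 5:5, 4:6, 3:7.
for period, ratio in [("daytime", 0.5), ("early morning", 0.4), ("nighttime", 0.3)]:
    t = threshold_for_white_ratio(pixels, ratio)
    binary = binarize(pixels, t)
    white_fraction = sum(binary) / len(binary)
    print(f"{period}: t={t}, white fraction={white_fraction:.2f}")
```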
Note that the example shown in
For example, when the luminance value histogram has two peaks (i.e. two classes), the evaluator 11 may calculate the luminance threshold value t such that the luminance threshold value t lies at the valley between the two peaks and such that the between-class variance is maximized by way of discriminant analysis techniques.
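The discriminant-analysis thresholding just described is commonly known as Otsu's method. A minimal pure-Python sketch follows; the two-cluster sample data is illustrative:

```python
def otsu_threshold(pixels):
    """Find the threshold that maximizes the between-class variance
    (Otsu's discriminant analysis) over a 256-bin luminance histogram."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    weight_bg, sum_bg = 0, 0.0
    for t in range(256):
        weight_bg += hist[t]           # pixels at or below t (dark class)
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg  # pixels above t (bright class)
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (total_sum - sum_bg) / weight_fg
        between_var = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

# Two well-separated luminance clusters: the threshold falls at the
# edge of the valley between them.
pixels = [40] * 500 + [50] * 500 + [200] * 500 + [210] * 500
print(otsu_threshold(pixels))  # prints 50
```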
Additionally, the evaluator 11 may smooth noise of the captured image by using a smoothing filter before the binarization.
Next, the evaluator 11 calculates the number of pixels that belong to the left frame L and whose luminance value is zero (black), and calculates the number of pixels that belong to the left frame L and whose luminance value is one (white). Similarly, the evaluator 11 calculates the number of pixels that belong to the right frame R and whose luminance value is zero (black), and calculates the number of pixels that belong to the right frame R and whose luminance value is one (white).
The evaluator 11 outputs, to the image capture suitability determiner 12, the number of pixels that belong to the left frame L and whose luminance value is zero (black), the number of pixels that belong to the left frame L and whose luminance value is one (white), the number of pixels that belong to the right frame R and whose luminance value is zero (black), and the number of pixels that belong to the right frame R and whose luminance value is one (white) as the evaluation results of the image capturing conditions.
Moreover, the evaluator 11 compares the number of pixels that belong to the left frame L and whose luminance value is one (white) with the number of pixels that belong to the right frame R and whose luminance value is one (white). The frame including the larger number of white pixels is brighter than the other frame and is considered to have less area that is obstructed by the dirt 100. Thus, the evaluator 11 determines a direction toward the frame having the larger number of white pixels as the movement direction of the in-vehicle camera 2.
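The counting and direction-selection logic of the evaluator 11 described above can be sketched as follows; the 4x4 sample image and the function names are illustrative:

```python
def frame_statistics(binary, width):
    """Split a binarized image (flat list, row-major) into the left
    frame L and the right frame R along the vertical center line, and
    count black (0) and white (1) pixels in each frame."""
    half = width // 2
    counts = {"L": {"black": 0, "white": 0}, "R": {"black": 0, "white": 0}}
    for i, v in enumerate(binary):
        frame = "L" if (i % width) < half else "R"
        counts[frame]["white" if v else "black"] += 1
    return counts

def movement_direction(counts):
    # The brighter frame (more white pixels) is assumed to be less
    # obstructed by dirt, so the camera moves toward it.
    if counts["L"]["white"] > counts["R"]["white"]:
        return "left"
    if counts["R"]["white"] > counts["L"]["white"]:
        return "right"
    return "none"

# 4x4 toy image: left half mostly dark (dirt), right half bright.
binary = [0, 0, 1, 1,
          0, 0, 1, 1,
          0, 1, 1, 1,
          0, 0, 1, 1]
counts = frame_statistics(binary, width=4)
print(counts, movement_direction(counts))  # camera should move right
```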
In the case shown in
The image capture suitability determiner 12 determines whether to continue the image-capturing based on the evaluation results generated by the evaluator 11. For example, when the ratio of the number of pixels whose luminance value is zero (black) in the left frame L is larger than a predetermined threshold value (e.g. 90 percent) or the ratio of the number of pixels whose luminance value is zero (black) in the right frame R is larger than a predetermined threshold value, the image capture suitability determiner 12 determines that the dirt 100 obstructs the field of view of the in-vehicle camera 2, and that the image-capturing is unsuitable (i.e. driving assistance is not executable). On the contrary, when both the ratio of the black pixels in the left frame L and the ratio of the black pixels in the right frame R are equal to or less than a predetermined threshold value, the image capture suitability determiner 12 determines that the image-capturing is suitable (i.e. driving assistance is executable).
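The suitability determination above can be sketched as follows; the 90 percent limit comes from the example in the text, while the function name and data layout are assumptions:

```python
def image_capture_suitable(counts, black_ratio_limit=0.9):
    """Return True when neither frame is dominated by black pixels.
    A frame whose black-pixel ratio exceeds the limit (e.g. 90%) is
    treated as obstructed by dirt, making image-capturing unsuitable."""
    for frame in ("L", "R"):
        total = counts[frame]["black"] + counts[frame]["white"]
        if total and counts[frame]["black"] / total > black_ratio_limit:
            return False
    return True

# Left frame 95% black -> unsuitable; both frames mostly white -> suitable.
obstructed = {"L": {"black": 95, "white": 5}, "R": {"black": 10, "white": 90}}
clear = {"L": {"black": 20, "white": 80}, "R": {"black": 10, "white": 90}}
print(image_capture_suitable(obstructed))  # False
print(image_capture_suitable(clear))       # True
```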
In the case shown in
When the image capture suitability determiner 12 determines that the image-capturing is unsuitable, the drive controller 14 outputs the drive signal to drive the drive unit 22 of the in-vehicle camera 2 based on the movement direction of the in-vehicle camera determined by the evaluator 11.
In the case shown in
In the example shown in
In Step ST1, the evaluator 11 obtains vehicle information from the vehicle. The vehicle information includes ON/OFF signal of a vehicle ignition switch (IGN) and vehicle speed information.
In Step ST2, the evaluator 11 obtains the captured image from the in-vehicle camera 2.
In Step ST3, the evaluator 11 performs the binarization process on the captured image obtained from the in-vehicle camera.
In Step ST4, the evaluator 11 divides the captured image after the binarization into two frames, using the vertical line passing through the center of the image as the boundary, to set the left frame L and the right frame R.
In Step ST5, the evaluator 11 compares the luminance values of the left frame L with those of the right frame R, and determines the movement direction of the in-vehicle camera 2.
In Step ST6, the image capture suitability determiner 12 determines whether to continue the image-capturing (i.e. whether the driving assistance is executable) based on the luminance values of the left frame L and the luminance values of the right frame R.
When the image capture suitability determiner 12 determines that the image-capturing is unsuitable (Step ST6 “NO”), the driving assistance information generator 13 interrupts the driving assistance. In the subsequent step, that is, in Step ST7, the drive controller 14 generates the drive signal based on the movement direction determined by the evaluator 11 in Step ST5, and outputs the drive signal to the drive unit 22. Then, the process returns to Step ST2. The drive unit 22 moves the camera body 21 in accordance with the drive signal.
Note that the moving distance of the in-vehicle camera 2 may be a predetermined distance, for example, a step of 4 cm in a single movement. By repeating Steps ST2 to ST7 until the image capturing becomes suitable, the in-vehicle camera 2 repeatedly moves by the predetermined distance to reach a position in which the dirt on the window is not captured.
The time interval between two processes to be repeated may be constant or may be varied in accordance with vehicle speed. For example, the drive controller 14 may use the vehicle speed information included in the vehicle information obtained by the evaluator 11, and decrease the time interval when the vehicle speed increases, and increase the time interval when the vehicle speed decreases. When the vehicle speed is high, driving assistance is required more urgently. Thus, it is preferable that the in-vehicle camera 2 immediately moves to the position in which the field of view of the in-vehicle camera 2 is not obstructed when camera sensing becomes inexecutable. On the other hand, when the vehicle speed is low, driving assistance is less urgent. Therefore, the time interval for the vehicle traveling at a low speed may be longer than the time interval for the vehicle traveling at a high speed.
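The speed-dependent repetition interval described above can be sketched as follows. The 120 km/h reference speed and the 0.2 s to 2.0 s interval bounds are assumptions for illustration, not values from the embodiment:

```python
def retry_interval_s(speed_kmh, min_interval=0.2, max_interval=2.0):
    """Map vehicle speed to the interval between successive
    move-and-recheck cycles: the faster the vehicle, the more urgent
    driving assistance is, so the shorter the interval."""
    top_speed = 120.0  # illustrative reference speed
    fraction = min(max(speed_kmh, 0.0), top_speed) / top_speed
    # Interpolate linearly from max_interval (stopped) to min_interval (fast).
    return max_interval - fraction * (max_interval - min_interval)

for speed in (0, 30, 60, 120):
    print(f"{speed} km/h -> {retry_interval_s(speed):.2f} s")
```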
When the image capture suitability determiner 12 determines that the image-capturing is suitable (Step ST6 “YES”), the in-vehicle camera 2 does not move (Step ST8), and in subsequent Step ST9, the driving assistance information generator 13 continues the driving assistance.
In Step ST10, the evaluator 11 determines whether the vehicle ignition switch (IGN) is OFF based on the vehicle signal obtained in Step ST1.
When the evaluator 11 determines that the IGN is OFF (Step ST10 “YES”), a series of processes is terminated. On the other hand, when the evaluator 11 determines that the IGN is ON (Step ST10 “NO”), the process returns to Step ST1.
According to such a process, when the IGN becomes OFF, the in-vehicle camera control device 1 has already moved the in-vehicle camera 2 to the position in which the image-capturing is suitable (i.e. the driving assistance is executable). Then, the process is terminated. Thus, when the IGN becomes ON again, the in-vehicle camera control device 1 can immediately start the driving assistance.
As described above, according to Embodiment 1, the in-vehicle camera control device 1 includes the evaluator 11 to evaluate the image capturing conditions with reference to the image captured by the in-vehicle camera 2, the image capture suitability determiner 12 to determine whether to continue the image-capturing based on the evaluation results generated by the evaluator 11, and the drive controller 14 that outputs the drive signal to the drive unit 22 of the in-vehicle camera 2 to move the in-vehicle camera 2 when the image capture suitability determiner 12 determines that the in-vehicle camera 2 is unsuitable for the image-capturing. Thereby, it is possible to move the in-vehicle camera 2 to the position where a normal image can be captured when the windshield wipers etc. fail to improve bad image capturing conditions. Thus, when the driving assistance is performed based on the camera sensing, it is possible to immediately return from the fail state caused by bad image capturing conditions to the state in which the driving assistance is executable.
According to Embodiment 1, the image capture suitability determiner 12 determines whether to continue the image-capturing by comparing the evaluation value for the left frame L, which is the ratio of the number of pixels whose luminance value is zero (black) to the total number of pixels in the left frame L, with the predetermined threshold value, or by comparing the evaluation value for the right frame R, which is the ratio of the number of pixels whose luminance value is zero (black) to the total number of pixels in the right frame R, with the predetermined threshold value. Thereby, it is possible to accurately determine whether the dirt on the window obstructs the field of view of the in-vehicle camera 2.
Note that the in-vehicle camera control device 1 and the in-vehicle camera 2 are communicably connected via wired connection in Embodiment 1. Alternatively, the in-vehicle camera control device 1 and the in-vehicle camera 2 may be communicably connected via wireless connection.
Further, though the in-vehicle camera control device 1 controls a single in-vehicle camera 2 in Embodiment 1, it may control two or more in-vehicle cameras. In this case, the evaluator 11 evaluates the respective image capturing conditions of corresponding in-vehicle cameras 2 with reference to images captured by the corresponding in-vehicle cameras 2. The image capture suitability determiner 12 determines whether to continue each image-capturing performed by the corresponding in-vehicle camera 2 based on the evaluation result for the corresponding in-vehicle camera 2 generated by the evaluator 11. When the image capture suitability determiner 12 determines that an in-vehicle camera 2 is unsuitable for the image-capturing, the drive controller 14 outputs a drive signal to the corresponding drive unit 22. Thereby, it is possible to move each of the plurality of in-vehicle cameras 2 to a position where the normal image can be captured.
In the above-mentioned Embodiment 1, timing of moving the in-vehicle camera 2 is not restricted. In Embodiment 2, the timing of moving the in-vehicle camera 2 is restricted.
Further, in the above-mentioned Embodiment 1, the angle of the in-vehicle camera 2 is kept unchanged when the in-vehicle camera 2 moves. In Embodiment 2, the angle of the in-vehicle camera 2 is also changed when the in-vehicle camera 2 moves.
The in-vehicle camera control device 1 and the in-vehicle camera 2 according to Embodiment 2 have the same configuration as the in-vehicle camera control device 1 shown in
In Embodiment 2, a motor 27 is added to change the angle of the camera body 21. The drive unit 22 drives the motor 27 in accordance with a drive signal transmitted from the drive controller 14 of the in-vehicle camera control device 1. Thereby, the in-vehicle camera 2 rotates (i.e. swings) as shown in
In Step ST6, the image capture suitability determiner 12 determines whether to continue the image-capturing (i.e. whether the driving assistance is executable) based on the luminance values of the left frame L and the luminance values of the right frame R.
When the image capture suitability determiner 12 determines that the image-capturing is unsuitable (Step ST6 “NO”), the drive controller 14 generates the drive signal based on the movement direction determined by the evaluator 11 and outputs the drive signal to the drive unit 22 (Step ST7). After the camera body 21 moves, the drive controller 14 generates the drive signal to change the angle of the camera body 21 such that the camera body 21, after moving, faces the scene that it captured before moving, and outputs the drive signal to the drive unit 22 (Step ST11). Then, the process returns to Step ST2. The drive unit 22 drives the motor 27 in accordance with the drive signal, and rotates the camera body 21.
When the image capture suitability determiner 12 determines that the image-capturing is suitable (Step ST6 “YES”), the drive controller 14 determines whether the vehicle is stopped, and determines whether the in-vehicle camera 2 is at the reference position A (Step ST12).
The drive controller 14 determines that the vehicle is stopped when the vehicle speed is 0 km/h based on the vehicle speed information which is obtained by the evaluator 11 in Step ST1. In other cases, the drive controller 14 determines that the vehicle is not stopped.
In addition, the drive controller 14 has coordinate values of the reference position A in a database, and compares the coordinate values of the reference position A with coordinate values of the in-vehicle camera 2 after the drive controller 14 moves the in-vehicle camera 2. Then, the drive controller 14 determines whether the in-vehicle camera 2 is at the reference position based on the comparison result.
Further, a switch is disposed at the reference position A. The switch is pushed when the in-vehicle camera 2 is at the reference position A, and remains projected when the in-vehicle camera 2 is away from the reference position A. Thus, it is also possible to determine whether the in-vehicle camera 2 is at the reference position A by hardware means.
When the drive controller 14 determines that the vehicle is stopped and the in-vehicle camera 2 is at a position different from the reference position A (Step ST12 “YES”), the drive controller 14 generates the drive signal to return the in-vehicle camera 2 to the reference position A. Then, the drive controller 14 outputs the drive signal to the drive unit 22 (Step ST13), and the process returns to Step ST1. The drive unit 22 drives the motor 23 in accordance with the drive signal to move the camera body 21 to the reference position A.
When the drive controller 14 determines that the vehicle is not stopped or that the in-vehicle camera 2 is at the reference position A (Step ST12 “NO”), the process proceeds to Steps ST8 to ST10.
When the in-vehicle camera 2 returns to the reference position A and dirt etc. is not captured in the image at that position, the in-vehicle camera 2 is subsequently kept at the reference position A. Thus, it is possible to continue capturing the scenes with the in-vehicle camera 2 at the most appropriate position.
As described above, according to Embodiment 2, the drive controller 14 moves the in-vehicle camera 2 to a predetermined reference position when the vehicle is stopped. Thereby, it is possible to move the in-vehicle camera 2 to an appropriate position. Moreover, the timing in which the in-vehicle camera 2 returns to the reference position is restricted to the time when the vehicle is stopped. Thus, the movement of the in-vehicle camera does not obstruct driving maneuvers by the driver, even when the in-vehicle camera 2 returns to the reference position from a position far from the reference position.
Further, according to Embodiment 2, the image capture suitability determiner 12 determines whether to continue the image-capturing by comparing the evaluation value of the image capturing conditions, which is calculated with reference to the captured image when the in-vehicle camera is at the reference position, with a predetermined threshold value. The image capture suitability determiner 12 determines that the image-capturing at the reference position is suitable when the evaluation value is less than the threshold value. Thereby, the appropriate position for the image-capturing can be kept, unless dirt etc. exists at the reference position. Thus, it is possible to provide the driving assistance system with the most appropriate image for the driving assistance.
Moreover, according to Embodiment 2, the drive controller 14 outputs, to the drive unit 22, the drive signal to change the angle of the in-vehicle camera 2 such that the in-vehicle camera 2, after moving, faces the scene that it captured before moving. Thereby, it is possible to reduce the influence of the movement of the in-vehicle camera 2.
In Embodiment 3, the in-vehicle camera 2 is in contact with a vehicle window while the in-vehicle camera 2 is moving.
The in-vehicle camera 2 according to Embodiment 3 can move anywhere on the vehicle window. However, when the in-vehicle camera 2 is disposed on the windshield 3, it is preferable that the movement is restricted such that the in-vehicle camera 2 cannot move to a position where it obstructs the field of view of the driver. For example, as shown in
On the other hand, when the driving assistance information generator 13 executes automatic driving control in place of the driving assistance, the driver is free from the driving maneuvers. Thus, the in-vehicle camera 2 is allowed to obstruct the field of view of the driver. In this case, it is not necessary to restrict the movable area of the in-vehicle camera 2. Additionally, two or more in-vehicle cameras can be disposed on the windshield 3.
In Embodiment 4, stereo cameras are used.
In the state shown in
In
However, when the in-vehicle camera 2 is moved to the right, it is impossible to maintain the distance between the two in-vehicle cameras 2, 5 at or above the distance D. Therefore, the drive controller 14 modifies the movement direction of the in-vehicle camera 2 to the leftward direction as shown in
Alternatively, the drive controller 14 may also move the in-vehicle camera 5, which is determined to be suitable for the image-capturing, in addition to the in-vehicle camera 2, which is determined to be unsuitable for the image-capturing, to maintain the distance between the two in-vehicle cameras 2, 5 at a distance equal to or larger than the distance D.
When the image capture suitability determiner 12 determines that at least one of the in-vehicle camera 2 and the in-vehicle camera 5 is unsuitable for the image-capturing (Step ST6 “NO”), the drive controller 14 makes the following determination. That is, the drive controller 14 determines whether a relative positional relationship between two in-vehicle cameras is to be maintained (namely, the distance between two in-vehicle cameras is to be equal to or larger than the distance D shown in
When the relative positional relationship is determined to be maintained (Step ST21 “YES”), the drive controller 14 generates the drive signal based on the movement direction determined by the evaluator 11, and outputs the drive signal to at least one of the drive units 22, 52 (Step ST23). Then, the process returns to Step ST2. When at least one of the drive units 22, 52 receives the drive signal, it moves the corresponding camera body 21, 51 in accordance with the drive signal.
When the relative positional relationship is determined not to be maintained (Step ST21 “NO”), the drive controller 14 modifies the movement direction, which is determined by the evaluator 11, to a modified movement direction in which the relative positional relationship is to be maintained such that the two in-vehicle cameras keep the function of the stereo cameras (Step ST22). Then, the drive controller 14 generates the drive signal, and outputs the drive signal to at least one of the drive units 22, 52 (Step ST23). As the way of modifying the movement direction, there is the way explained above with reference to
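The direction-modification logic of Steps ST21 and ST22 can be sketched as follows, under the simplifying assumption of one-dimensional horizontal camera positions; the function name, coordinates, and step size are hypothetical:

```python
def plan_stereo_move(pos_a, pos_b, desired_dir_a, step, min_baseline):
    """Plan a horizontal move for stereo camera A that keeps the
    baseline |pos_a - pos_b| at or above `min_baseline` (distance D).

    If moving A in the evaluator's desired direction would shrink the
    baseline below the minimum, the direction is reversed so the two
    cameras keep functioning as a stereo pair."""
    delta = step if desired_dir_a == "right" else -step
    if abs((pos_a + delta) - pos_b) >= min_baseline:
        return pos_a + delta   # desired move keeps the baseline
    return pos_a - delta       # reverse the movement direction instead

# Camera A at x=10, camera B at x=40, minimum baseline 25.
# Moving A right by 10 would give a baseline of 20 (< 25),
# so the planner moves A left instead.
print(plan_stereo_move(pos_a=10, pos_b=40, desired_dir_a="right",
                       step=10, min_baseline=25))  # prints 0
```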
As mentioned above, according to Embodiment 4, the drive controller 14 outputs the drive signal to at least one of the drive units 22, 52, which move the stereo cameras constituted by the two in-vehicle cameras 2, 5. Thereby, it is possible to move at least one of the in-vehicle cameras 2, 5 to the position where the normal image can be captured when the windshield wipers etc. fail to improve the image capturing conditions in a system having the stereo cameras.
Further, according to Embodiment 4, the drive controller 14 moves the in-vehicle cameras 2, 5 under the condition that the relative positional relationship between the two in-vehicle cameras 2, 5 is maintained as shown in
Alternatively, when the image capture suitability determiner 12 determines that either one of the two in-vehicle cameras 2, 5 is unsuitable for the image-capturing, the drive controller may move one of two in-vehicle cameras 2, 5, which is determined to be unsuitable, in a direction away from the other of two in-vehicle cameras 2, 5 as shown in
In this disclosure, it is to be understood that a freely-selected combination of two or more of the above-mentioned embodiments can be made, various changes can be made in a freely-selected component in any one of the above-mentioned embodiments, and any component in any one of the above-mentioned embodiments can be omitted within the scope of the invention.
An in-vehicle camera control device according to the present invention can move an in-vehicle camera to a position where a normal image can be captured even when dirt on a windshield or the like obstructs a field of view of the in-vehicle camera and wipers etc. fail to remove the dirt etc. Thus, the in-vehicle camera control device is suitable for a driving assistance system which performs driving assistance or automatic driving control based on camera sensing.
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/JP2014/076974 | 10/8/2014 | WO | 00 |