The present invention relates to an image processing apparatus that detects an object from images taken by an imaging element mounted on a moving body.
Research and development have been devoted to light distribution control technology that automatically switches headlights between high and low beam by analyzing light spots in images taken with an onboard camera at night to determine the presence or absence of oncoming and preceding vehicles.
Patent Document 1 describes techniques for detecting the taillights of a preceding vehicle and the headlights of an oncoming vehicle so as to switch the host vehicle's headlights from high to low beam, thereby avoiding dazzling the drivers of the preceding and oncoming vehicles. The driver need not watch for preceding and oncoming vehicles in order to switch between high and low beam and can therefore concentrate on driving.
However, the driver may get a sense of awkwardness if the timing of the switching control is inappropriate. The principal causes of such inappropriate control are the camera overlooking the taillights of the preceding vehicle or the headlights of the oncoming vehicle, and false detection of other light sources. Detection irregularities thus include overlooking dim taillights and misdetecting ambient light sources, such as road delineators, traffic signals, and street lamps, as vehicle lights. These irregularities lead to malfunction, and the technical challenge is how to minimize them.
Patent Document 1 depicts means for detecting two light sources as a light source pair. Since the headlights of an oncoming vehicle and the taillights of a preceding vehicle are each a pair of light sources positioned side by side horizontally, a pairing of the detected light sources is attempted, and whether they belong to another vehicle is determined by whether the pairing succeeds. The distance between the paired light sources is used to calculate an approximate distance to them.
As in the example above, whether detected light sources belong to another vehicle is conventionally determined by whether they can be paired. These light sources, however, can be paired and erroneously recognized as belonging to another vehicle when, for example, two sets of traffic signals, one on the left and the other on the right, are installed on a heavily trafficked road. Traffic signals are generally mounted high above the road, so they appear high on the screen and can be distinguished from the light sources of other vehicles as long as the vehicle is close to them. When the traffic signals are 200 to 300 meters or more away, however, they appear near the vanishing point on the screen, making it difficult to distinguish them by height information.
An object of the present invention is to extract, given information from a stereo camera, only the headlights of the oncoming vehicle and the taillights of the preceding vehicle from among various light spots at night so as to offer the driver a safer field of view.
In order to attain the above object, there is provided a structure including: first distance information calculation means which calculates information on a first distance to a detection object candidate from two images obtained by a first imaging element and a second imaging element; second distance information calculation means which calculates information on a second distance to the detection object candidate from the image obtained by the first imaging element; and object detection means which compares the first distance information with the second distance information to detect an object from the detection object candidate on the basis of the result of the comparison.
On the basis of the information obtained through the stereo camera, only the headlights of the oncoming vehicle or the taillights of the preceding vehicle are extracted from various light spots at night to offer the driver a safer field of view.
A camera 101 serving as the imaging device is installed in a headlight unit 105 to capture the field of view ahead of the vehicle, and a headlight 104 is also installed in the headlight unit 105 to illuminate the front of the vehicle. Images of the front of the vehicle are obtained with the camera 101 and input to an image signal processing unit 102. Given the images, the image signal processing unit 102 calculates the number and positions of the headlights of any oncoming vehicle in front of the vehicle and of the taillights of any preceding vehicle. Information from the calculation is sent to a headlight control unit 103. If there are neither headlights of an oncoming vehicle nor taillights of a preceding vehicle, the image signal processing unit 102 serving as the image processing apparatus sends information indicating their absence to the headlight control unit 103. On receiving the information, the headlight control unit 103 determines whether the headlight 104 is to be switched to high or low beam and controls the headlight 104 accordingly.
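The decision logic in the headlight control unit 103 reduces to a simple rule. Below is a minimal sketch in Python; the class and function names are hypothetical, as the document does not specify an interface:

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    oncoming_headlights: int    # count of detected oncoming-vehicle headlights
    preceding_taillights: int   # count of detected preceding-vehicle taillights

def decide_beam(result: DetectionResult) -> str:
    """Switch to high beam only when no other vehicle's lights are present."""
    if result.oncoming_headlights == 0 and result.preceding_taillights == 0:
        return "high"
    return "low"

print(decide_beam(DetectionResult(0, 0)))  # -> high
print(decide_beam(DetectionResult(1, 0)))  # -> low
```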
With regard to the position of the respective units, the camera 101 and headlight 104 should preferably be located as close to each other as possible. This contributes to simplifying calibration such as optical axis adjustment.
The camera 101 may be positioned on the back of the rearview mirror, for example, to capture the field of view ahead of the vehicle when the headlight unit 105 does not have enough space inside to accommodate the camera 101.
Further, an optical axis 301 of the camera 101 serving as the imaging device is held parallel to an optical axis 302 of the headlight 104, as shown in the drawings.
A method for detecting the headlights of the oncoming vehicle and the taillights of the preceding vehicle by use of the camera 101 will now be explained.
The camera 101 is a stereo camera having a right-hand camera 101b (first imaging device) and a left-hand camera 101a (second imaging device).
The cameras have CMOS's 201a, 201b (Complementary Metal Oxide Semiconductor sensors) serving as imaging elements, in which photodiodes for converting light to electric charge are arrayed in a grid pattern. The surface of the pixels is furnished with color filters of red (R), green (G), and blue (B) in a Bayer array, as shown in the drawings.
In this structure, red light alone is incident on pixels 401, green light alone on pixels 402, and blue light alone on pixels 403. Raw images obtained by the CMOS's 201a, 201b with the Bayer array are transferred to demosaicing DSP's 202a, 202b that are demosaicing processors installed in the cameras.
The demosaicing DSP's 202a, 202b serving as the demosaicing processors perform a demosaicing process and then convert RGB images to a Y image and a UV image that are transmitted to an image input interface 205 of the image signal processing unit 102 serving as the image processing apparatus.
Below is an explanation of how a color reproduction (demosaicing) process is performed by an ordinary color CMOS having the Bayer array.
Each pixel can measure the intensity of only one of three colors: red (R), green (G), and blue (B). Any other color at that pixel is estimated by referencing the surrounding colors. For example, R, G, and B for a pixel G22 in the middle of the array are estimated from the neighboring pixels with the mathematical expression (1).
Likewise, R, G, and B for a pixel R22 in the middle of the array are estimated from the neighboring pixels with the mathematical expression (2).
The colors for the other pixels can be obtained in the same manner. This makes it possible to calculate the three primary colors R, G, and B for all pixels, whereby an RGB image is obtained. Furthermore, luminance Y and color difference signals U and V are obtained for all pixels with the use of the mathematical expression (3) below, whereby a Y image and a UV image are generated.
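The exact expressions (1) through (3) are not reproduced in this text. The following sketch assumes standard bilinear interpolation for the Bayer demosaicing and the common BT.601 coefficients for the YUV conversion, which match the process described above; the sample pixel values are illustrative:

```python
import numpy as np

# Hypothetical 4x4 Bayer tile (RGGB layout assumed).
raw = np.array([[110, 120, 112, 118],
                [121,  90, 119,  95],
                [108, 122, 114, 125],
                [124,  92, 120,  96]], dtype=float)

def demosaic_at_red(raw, y, x):
    """Estimate R, G, B at a red pixel by averaging its neighbors
    (bilinear interpolation, in the spirit of expression (2))."""
    r = raw[y, x]
    g = (raw[y-1, x] + raw[y+1, x] + raw[y, x-1] + raw[y, x+1]) / 4
    b = (raw[y-1, x-1] + raw[y-1, x+1] + raw[y+1, x-1] + raw[y+1, x+1]) / 4
    return r, g, b

def to_yuv(r, g, b):
    """BT.601 conversion, a common choice for expression (3)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # blue color difference
    v = 0.877 * (r - y)   # red color difference
    return y, u, v

r, g, b = demosaic_at_red(raw, 2, 2)  # an R22-style red-pixel position
print(to_yuv(r, g, b))
```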
In the Y image, each pixel is represented by eight-bit data ranging from 0 to 255; the closer the value is to 255, the brighter the pixel.
Image signals are transmitted continuously, and the head of each of the signals includes a synchronization signal. This allows the image input interface 205 to input only the images at a necessary timing.
The images input through the image input interface 205 are written to a memory 206 serving as a storage unit. The stored images are processed and analyzed by an image processing unit 204; the processing will be discussed later in detail. The series of processes is carried out in accordance with a program 207 stored in a flash ROM. A CPU 203 performs control and carries out the calculations necessary for the image input interface 205 to input images and for the image processing unit 204 to process them.
The CMOS's 201a, 201b serving as the imaging elements each incorporate an exposure control unit for performing exposure control and a register for setting the exposure time. The CMOS's 201a, 201b obtain images using the exposure time set in the registers. The content of the registers can be updated by the CPU 203 serving as a processor, and an updated exposure time is reflected in image acquisition from the subsequent frame or field onward. The exposure time can thus be controlled electronically to limit the amount of light reaching the CMOS's 201a, 201b. Whereas exposure time control may be implemented through such an electronic shutter method, a mechanical shutter on/off method may be adopted just as effectively. As another alternative, the exposure value may be varied by adjusting a diaphragm. Where exposure is controlled every other line of the image, as in interlaced scanning, the exposure value may be varied between odd-numbered and even-numbered lines.
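A toy model of this register protocol, assuming (as stated above) that a value written by the CPU 203 takes effect only from the next frame onward; the class and method names are illustrative, not from the document:

```python
class ExposureController:
    """Models a sensor exposure register: writes are latched and
    become active one frame after they are made."""

    def __init__(self, exposure_ms: float):
        self.register = exposure_ms   # value the CPU may update at any time
        self._active = exposure_ms    # value the sensor actually uses

    def set_exposure(self, exposure_ms: float):
        self.register = exposure_ms   # latched, not yet active

    def capture_frame(self) -> float:
        exposure_used = self._active  # current frame still uses the old value
        self._active = self.register  # new value applies from the next frame
        return exposure_used

ctrl = ExposureController(10.0)
ctrl.set_exposure(2.0)
print(ctrl.capture_frame())  # 10.0 -- update not yet reflected
print(ctrl.capture_frame())  # 2.0  -- reflected from the following frame
```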
In detecting headlights and taillights, it is necessary to detect the positions of light spots in the images. In the case of headlights, only the positions of high luminance need to be detected. Thus the Y image obtained with expression (3) is binarized against a predetermined luminance threshold value MinY: positions whose luminance is equal to or higher than MinY are set to 1, and those whose luminance is lower are set to 0. This creates a binary image such as the one shown in the drawings.
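A minimal sketch of this binarization, assuming a NumPy Y image; the threshold value here is a placeholder:

```python
import numpy as np

MIN_Y = 30  # luminance threshold MinY (placeholder value)

def binarize(y_image: np.ndarray, min_y: int = MIN_Y) -> np.ndarray:
    """1 where luminance >= MinY, 0 elsewhere, as described above."""
    return (y_image >= min_y).astype(np.uint8)

y = np.array([[5, 40, 200],
              [10, 29, 30]], dtype=np.uint8)
print(binarize(y))
# [[0 1 1]
#  [0 0 1]]
```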
Values ρ and θ are calculated with the use of the expression (4) above. If the luminance threshold value MinY is set to 30, a chroma threshold value MinRho to 30, a chroma threshold value MaxRho to 181, a hue threshold value MinTheta to 80, and a hue threshold value MaxTheta to 120, then it is possible to detect red light spots whose color falls within the red region 601 shown in the drawings.
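Expression (4) is not reproduced in this text, but ρ and θ are presumably the standard chroma magnitude and hue angle of the (U, V) plane. A sketch under that assumption, using the threshold values quoted above:

```python
import numpy as np

# Thresholds quoted in the text; the polar form of expression (4)
# is assumed to be rho = sqrt(U^2 + V^2), theta = atan2(V, U).
MIN_Y, MIN_RHO, MAX_RHO = 30, 30, 181
MIN_THETA, MAX_THETA = 80, 120  # degrees

def red_spot_mask(y, u, v):
    """Binary mask of pixels whose color falls in the red region 601."""
    rho = np.hypot(u, v)                  # chroma magnitude
    theta = np.degrees(np.arctan2(v, u))  # hue angle
    return ((y >= MIN_Y)
            & (rho >= MIN_RHO) & (rho <= MAX_RHO)
            & (theta >= MIN_THETA) & (theta <= MAX_THETA)).astype(np.uint8)

# A taillight-like pixel: bright, with a strong red color difference V.
y = np.array([[120.0]]); u = np.array([[-20.0]]); v = np.array([[80.0]])
print(red_spot_mask(y, u, v))  # [[1]]
```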
This binary image is subsequently labeled so that light spot regions can be extracted. Labeling is an image process that attaches the same label to connected pixels. The resulting label image is as shown in the drawings.
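A short illustration of labeling using SciPy's connected-component routine; the document does not name a specific implementation:

```python
import numpy as np
from scipy import ndimage

binary = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 1],
                   [0, 0, 0, 1]], dtype=np.uint8)

# Connected pixels receive the same label; each label is one light spot.
labels, count = ndimage.label(binary)
print(count)   # 2 light-spot regions
print(labels)
# [[1 1 0 0]
#  [1 0 0 2]
#  [0 0 0 2]]
```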
In step S1, images are first acquired through image acquisition means: one image is obtained from the left-hand camera of the stereo pair (the second imaging element, CMOS 201a) and another from the right-hand camera (the first imaging element, CMOS 201b).
In step S2, light spot pair detection means performs image processing for detecting paired light spots in the images. First, the images are subjected to the above-mentioned YUV conversion, and red light spots are extracted from the UV image and labeled. Then, the positions and sizes of the labeled light spots are analyzed for pairing, as sketched below. Two light spots are paired on the condition that their heights (y coordinates) are approximately the same, that they are about the same size, and that they are not too far apart. When paired light spots have been detected, step S3 is reached and verification is performed as many times as the number of detected pairs.
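A minimal sketch of the pairing conditions; the tolerance values are hypothetical, since the document states the conditions only qualitatively:

```python
from dataclasses import dataclass

@dataclass
class Spot:
    x: float     # horizontal image position (pixels)
    y: float     # vertical image position (pixels)
    size: float  # region area (pixels); fields are illustrative

# Hypothetical tolerances for "same height", "similar size", "not too far apart".
MAX_DY, MAX_SIZE_RATIO, MAX_DX = 5.0, 1.5, 200.0

def pair_spots(spots):
    """Pair light spots under the conditions described in step S2."""
    pairs = []
    for i in range(len(spots)):
        for j in range(i + 1, len(spots)):
            a, b = spots[i], spots[j]
            same_height = abs(a.y - b.y) <= MAX_DY
            similar_size = max(a.size, b.size) <= MAX_SIZE_RATIO * min(a.size, b.size)
            close_enough = 0 < abs(a.x - b.x) <= MAX_DX
            if same_height and similar_size and close_enough:
                pairs.append((a, b))
    return pairs

spots = [Spot(100, 50, 12), Spot(160, 51, 10), Spot(300, 20, 11)]
print(len(pair_spots(spots)))  # 1 -- only the two spots at similar height pair up
```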
In step S4, second distance information calculation means calculates the distance to the light spots through a monocular method. With reference to the drawings, the distance Z1 is calculated with expression (5) from the separation of the paired light spots in the image and the actual width W between the lights.
Whereas the width W of the taillights 1001 of the preceding vehicle is an unknown that cannot be measured directly, it may be assumed to be a typical vehicle width of, say, 1.7 meters.
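Under the pinhole camera model, this monocular estimate takes the standard form Z1 = f·W/(w·p), where w is the pixel separation of the pair and p the pixel pitch; expression (5) itself is not reproduced in this text. A sketch with hypothetical lens parameters:

```python
# Monocular distance (step S4) under the pinhole model.
FOCAL_LENGTH_M = 8e-3          # f: hypothetical 8 mm lens
PIXEL_PITCH_M = 4e-6           # p: hypothetical 4 um pixels
ASSUMED_LAMP_SPACING_M = 1.7   # W: assumed taillight spacing from the text

def monocular_distance(pair_width_px: float) -> float:
    """Z1 = f * W / (w * p), with W fixed at the assumed vehicle width."""
    return FOCAL_LENGTH_M * ASSUMED_LAMP_SPACING_M / (pair_width_px * PIXEL_PITCH_M)

print(round(monocular_distance(85.0), 1))  # -> 40.0 (meters)
```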
In step S5, first distance information calculation means calculates the distance to the light spots through a stereo method. With reference to the drawings, the distance Z2 is obtained by triangulation from the disparity between the positions of the same light spot in the left and right images.
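Stereo triangulation gives the standard form Z2 = f·B/(d·p) for baseline B and disparity d pixels. A sketch with hypothetical parameters matching the monocular example above:

```python
# Stereo distance (step S5) by triangulation.
FOCAL_LENGTH_M = 8e-3   # f: same hypothetical lens as before
PIXEL_PITCH_M = 4e-6    # p
BASELINE_M = 0.35       # B: hypothetical spacing between cameras 101a and 101b

def stereo_distance(disparity_px: float) -> float:
    """Z2 = f * B / (d * p); independent of any assumed lamp width."""
    return FOCAL_LENGTH_M * BASELINE_M / (disparity_px * PIXEL_PITCH_M)

print(round(stereo_distance(17.5), 1))  # -> 40.0 (meters)
```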
In step S6, object detection means compares in magnitude the distance Z1 serving as the second distance information with the distance Z2 serving as the first distance information. Specifically, it is determined whether the distance Z1 is approximately equal to the distance Z2. Consider the example of a preceding vehicle 801 and a pair of distant traffic signals 802 shown in the drawings.
That is, although the distance Z1 is roughly the same for the preceding vehicle 801 and for the traffic signals 802, the distance Z2 is longer to the traffic signals 802 than to the preceding vehicle 801. Since the width W in expression (5) is set to the taillight spacing of a vehicle such as the preceding vehicle 801, the relation Z1≈Z2 holds in the case of the preceding vehicle 801, whereas Z1 and Z2 diverge in the case of the traffic signals 802.
As a result, the light spots are determined to be the taillights in step S7 when the distances are approximately the same; the light spots are determined to be noise light sources in step S8 when the distances are different.
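Putting steps S6 through S8 together, a sketch of the comparison; the tolerance is a hypothetical choice, since the text requires only that the distances be approximately the same:

```python
# Steps S6-S8: compare the monocular estimate Z1 with the stereo estimate Z2.
REL_TOLERANCE = 0.2  # hypothetical relative tolerance for "approximately equal"

def classify(z1_m: float, z2_m: float) -> str:
    if abs(z1_m - z2_m) <= REL_TOLERANCE * z2_m:
        return "taillights"       # step S7: Z1 ~= Z2
    return "noise light source"   # step S8: e.g. a paired set of traffic signals

print(classify(40.0, 42.0))   # taillights: pair width agrees with the 1.7 m model
print(classify(40.0, 250.0))  # noise: stereo says the pair is much farther away
```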
According to the present invention, as described above, the object detection means is configured to compare the first distance information calculated by the stereo method with the second distance information calculated by the monocular method and detect the object (the headlights of the oncoming vehicle or the taillights of the preceding vehicle) from among the detection object candidates (paired light spots) on the basis of the result of the comparison. In this configuration, the information obtained from the stereo camera is used as the basis for extracting only the headlights of the oncoming vehicle or the taillights of the preceding vehicle from among various light spots at night. This boosts the reliability of light distribution control and offers the driver a safer field of view.
Although this embodiment has been explained in terms of utilizing the difference between the distances measured by the monocular method and the stereo method, a similar implementation can be achieved by combining a monocular camera with a radar.
Number | Date | Country | Kind
---|---|---|---
2011-253421 | Nov 2011 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP2012/077389 | 10/24/2012 | WO | 00 | 4/23/2014