This application claims the benefit of priority of Japanese Patent Application No. 2010-167988, filed on Jul. 27, 2010, the disclosure of which is incorporated by reference herein.
Embodiments described herein relate to a vehicle detection apparatus that detects other vehicles based on an image captured of a region in front of a vehicle installed with the vehicle detection apparatus.
Illumination with a high beam light distribution is desirable for the headlights of a vehicle, such as a car, for the purpose of increasing visibility in a region in front of the vehicle. However, a high beam light distribution can sometimes dazzle the driver of a vehicle ahead or the driver of an oncoming vehicle present in the region in front of the vehicle itself. To address this issue, technology is proposed in Japanese Patent Document JP-A-2008-37240 for securing visibility in the region in front of a vehicle while preventing dazzling of the driver of a vehicle ahead or of an oncoming vehicle. In the proposed technology, a determination is made as to whether or not an illumination prohibited object, such as a vehicle ahead or an oncoming vehicle, is present in the region in front of a vehicle. Illumination with a high beam light distribution is then prohibited if an illumination prohibited object is present in the region. Japanese Patent Document JP-A-2010-957 also discloses securing visibility in the region in front of a vehicle while preventing dazzling of the driver of a vehicle ahead or of an oncoming vehicle. This is achieved by a camera capturing an image of the region in front of the vehicle and detecting, in the image obtained, the vehicle position of any other vehicles present in the forward region. A low beam light distribution is then directed towards the detected vehicle position, and a high beam light distribution is directed towards positions where no vehicles are detected.
When detecting whether or not other vehicles are present in the region in front of the vehicle for controlling the light distribution of the headlights in the foregoing patent documents, a method is employed, as in JP-A-2010-957, whereby a camera captures an image of the region in front of the vehicle, and the image obtained is subjected to image analysis to detect any other vehicles present. For this to be performed, it is necessary to discriminate whether points of light seen in the captured image come from a vehicle lamp, such as the lights of a vehicle ahead or of an oncoming vehicle, or from a stationary light, such as the light of a building or road marker lighting. Therefore, for example, points of light are detected in the image, and, by detecting attributes of each point of light, such as its size, shape, color, distribution and movement path, a determination is made as to whether the point of light is light from a headlight or taillight of another vehicle, or light from a stationary light. However, such a method requires that this determination be performed for all of the points of light in the captured image, which results in a large number of data points for processing and an extremely heavy determination-processing load. This makes it difficult to detect a vehicle ahead or an oncoming vehicle quickly, and consequently also makes it difficult to control the light distribution from the headlights of the vehicle itself in real time. Furthermore, a problem arises in that falsely detecting even a single attribute makes it difficult to discriminate between a vehicle illumination device and a stationary light, thus lowering the detection accuracy for other vehicles.
Embodiments described herein are directed towards a vehicle detection apparatus with higher detection accuracy for quickly detecting other vehicles based on captured images.
A vehicle detection apparatus according to an exemplary embodiment of the invention comprises:
In some implementations, the region sectioning module can be configured to employ the most distant point on the vehicle lane-line as a dividing position. For the oncoming vehicle lane region, the other-vehicle detection module can be configured to detect by prioritizing points of white light. For the own vehicle lane region, the other-vehicle detection module can be configured to detect by prioritizing points of red light. For the vehicle lane exterior region, the other-vehicle detection module can be configured to reduce a detection sensitivity for points of light.
In some implementations, the vehicle detection apparatus detects vehicle lane-lines in a captured image, and based on the detected vehicle lane-line, divides the captured image into an own vehicle lane region, an oncoming vehicle lane region, and a vehicle lane exterior region (road shoulder region). The likelihood of detecting a vehicle ahead, an oncoming vehicle and a stationary light can accordingly be raised in each of the regions. By setting detection conditions such that prioritized detection is performed for objects in each of the regions with high detection likelihoods, detection of each of the respective detection objects can be accomplished quickly. It also is possible to reduce false detections. Other aspects, features and advantages will be apparent from the following detailed description, the accompanying drawings and the claims.
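As a non-limiting illustration, the sectioning of a captured image into the three region types can be sketched as follows. The function name, the use of horizontal pixel positions, and the assumption that the oncoming lane lies to the right of the own lane (left-hand traffic, as in the Japanese priority application) are illustrative assumptions, not taken from the specification.

```python
def classify_region(x, left_line_x, right_line_x, far_right_line_x):
    """Assign a horizontal image position x to one of the three region types.

    left_line_x:      position of the lane-line to the left of the own lane
    right_line_x:     position of the lane-line to the right of the own lane
    far_right_line_x: position of the next lane-line to the right, bounding
                      the oncoming lane (left-hand traffic is assumed)
    """
    if left_line_x <= x < right_line_x:
        return "own"            # own vehicle lane region
    if right_line_x <= x < far_right_line_x:
        return "oncoming"       # oncoming vehicle lane region
    return "exterior"           # vehicle lane exterior (road shoulder) region
```

Any position outside the two bounding lane-lines falls into a vehicle lane exterior (road shoulder) region.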
The following explanation describes an exemplary embodiment, with reference to the drawings.
The headlights HL can be switched between a high beam light distribution and a low beam light distribution under control of the light distribution controller 2. To perform such light distribution switching, a shade (light blocking plate) for setting the low beam light distribution is provided in the headlights HL. The headlights HL can be configured as lamps that provide a high beam light distribution by driving the shade. Alternatively, each headlight may be configured as a composite formed from multiple lamp units having light distributions that differ from each other, with the light distribution switched by selective illumination of these lamp units.
A digital camera equipped with a standard image capture element can be employed as the imaging camera CAM. The illustrated example includes a digital camera having a CCD image capture element or a MOS image capture element for outputting an image signal corresponding to the captured image.
The vehicle detection apparatus 1 of the illustrated example includes: a vehicle lane-line detection section 11 for detecting vehicle lane-lines marked with white or yellow lines on the road in the image captured by the imaging camera CAM; a region sectioning section 12 for sectioning the captured image into plural regions based on the detected vehicle lane-lines; and an other-vehicle detection section 13 for detecting points of light in the image and detecting attributes of the points of light so as to detect other vehicle(s) separately for each sectioned region. The other-vehicle detection section 13 serves as a device for acquiring road data and, in this exemplary embodiment, is connected to a car navigation device NAV and to a vehicle speed sensor Sv for ascertaining the travelling state of the vehicle itself. The other-vehicle detection section 13 refers to data from the car navigation device NAV and the vehicle speed sensor Sv to detect the attributes of detected points of light, determines based on the detected attributes whether the points of light are from another vehicle or from a stationary light, and then proceeds to detect whether any other vehicle is a vehicle ahead or an oncoming vehicle.
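The flow through the three sections can be sketched as below. The class and method names and the callable interfaces are illustrative assumptions for this sketch, not part of the apparatus as specified.

```python
class VehicleDetectionPipeline:
    """Sketch of the flow through sections 11 (lane-line detection),
    12 (region sectioning) and 13 (other-vehicle detection)."""

    def __init__(self, detect_lane_lines, section_regions, detect_other_vehicles):
        # Each stage is supplied as a callable so the sketch stays generic.
        self.detect_lane_lines = detect_lane_lines
        self.section_regions = section_regions
        self.detect_other_vehicles = detect_other_vehicles

    def process(self, image):
        lane_lines = self.detect_lane_lines(image)           # section 11
        regions = self.section_regions(image, lane_lines)    # section 12
        return self.detect_other_vehicles(image, regions)    # section 13
```

For example, stub callables can be supplied for each stage to exercise the flow in isolation before real detectors are substituted.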
An example of an initial image captured in a time series is illustrated in
From among the detected vehicle lane-lines, the vehicle lane-line detection section 11 detects a vehicle lane-line at a position on the right hand side of the facing direction of the vehicle itself as being a first right side vehicle lane-line R1. Similarly, it detects a vehicle lane-line at a position on the left hand side of the facing direction as being a first left side vehicle lane-line L1. When one or more further vehicle lane-lines (in this example two vehicle lane-lines) are detected on the right hand side of the first right side vehicle lane-line R1, they are detected as right side vehicle lane-lines allocated sequential numbers, in this example the second right side vehicle lane-line R2 and the third right side vehicle lane-line R3. Left side vehicle lane-lines are similarly detected and allocated sequential numbers, in this example the single second left side vehicle lane-line L2.
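The sequential numbering of lane-lines outward from the facing direction can be sketched as follows. Representing each detected lane-line by a single horizontal coordinate at a reference row of the image, and the function name itself, are illustrative assumptions.

```python
def label_lane_lines(x_positions, center_x):
    """Number detected lane-lines outward from the vehicle's facing direction.

    x_positions: horizontal positions (e.g. pixels) of detected lane-lines
    center_x:    horizontal position corresponding to the facing direction
    Returns a dict mapping labels such as 'R1', 'R2', 'L1' to positions.
    """
    rights = sorted(x for x in x_positions if x >= center_x)
    lefts = sorted((x for x in x_positions if x < center_x), reverse=True)
    labels = {}
    for i, x in enumerate(rights, start=1):   # R1 is nearest on the right
        labels[f"R{i}"] = x
    for i, x in enumerate(lefts, start=1):    # L1 is nearest on the left
        labels[f"L{i}"] = x
    return labels
```

With five detected lines at positions 100, 300, 500, 700 and 900 and the facing direction at 400, this yields L2, L1, R1, R2 and R3 from left to right, matching the example in the text.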
The region sectioning section 12 divides the image into multiple vehicle lane regions based on the vehicle lane-lines detected by the vehicle lane-line detection section 11. Based on road data, such as that obtained from the car navigation device NAV mounted to the vehicle itself and various types of traffic data, sectioning is performed into an own vehicle lane region, an oncoming vehicle lane region, and regions outside of the vehicle lanes, referred to as vehicle lane exterior regions (road shoulder regions). As shown in
The other-vehicle detection section 13 scans the captured image and detects points of light in the image, detects the attributes of the detected points of light, and detects whether they are emitted from another vehicle or from a stationary light. In the case of another vehicle, it is determined whether the vehicle is a vehicle ahead or an oncoming vehicle. The other-vehicle detection section 13 employs a specific detection algorithm as the detection conditions for detecting the attributes of points of light. Such a detection algorithm can operate according to the following rules, applying them to each of the sectioned regions separately.
(a) Priority is given to detecting points of white light when detection is in the oncoming vehicle lane region.
(b) Priority is given to detecting points of red light when detection is in the own vehicle lane region.
(c) Priority is given to lowering the detection sensitivity for points of light when detection is in the vehicle lane exterior regions.
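The region-dependent rules (a) to (c) can be sketched as follows. The region and color labels, the light-point representation, and the threshold values are illustrative assumptions, not taken from the specification.

```python
BASE_THRESHOLD = 100        # detection level required to report a light point
EXTERIOR_THRESHOLD = 160    # raised threshold implementing rule (c)

def classify_light_point(region, color, level):
    """Apply the region-dependent detection rules to one point of light.

    region: 'oncoming', 'own', or 'exterior'
    color:  'white' or 'red'
    level:  detected brightness of the point of light
    Returns a label, or None when the point is not treated as a vehicle.
    """
    if region == "oncoming":
        # (a) prioritize white light points: headlights of oncoming vehicles
        if color == "white" and level >= BASE_THRESHOLD:
            return "oncoming_vehicle"
        return None
    if region == "own":
        # (b) prioritize red light points: taillights of a vehicle ahead
        if color == "red" and level >= BASE_THRESHOLD:
            return "vehicle_ahead"
        return None
    if region == "exterior":
        # (c) lowered sensitivity: only strong points pass the raised threshold
        if level >= EXTERIOR_THRESHOLD and color in ("white", "red"):
            return "stationary_vehicle"
        return None
    return None
```

A point of white light in the own vehicle lane region, for instance, is not processed further, which is the source of the processing-load reduction described below.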
The following explanation describes the detection operation for other vehicles by the vehicle detection apparatus 1, with reference to the flow chart of
(a) Detection in the Oncoming Vehicle Lane Region
Referring to
(b) Detection in the Own Vehicle Lane Region
Referring to
(c) Detection in the Vehicle Lane Exterior Regions
When the region is determined to be a vehicle lane exterior region Ae (S15), the other-vehicle detection section 13 lowers the detection sensitivity for points of light in the vehicle lane exterior regions Ae sectioned by the region sectioning section 12 (S41). The threshold value for detecting a point of light is, for example, raised in the vehicle lane exterior regions Ae in the images of
If a point of light with a detection level higher than the threshold value is detected in a vehicle lane exterior region Ae, then this is interpreted as being a stationary vehicle ahead or a stationary oncoming vehicle. In such cases, the same detection processing is applied as in whichever of the own vehicle lane region or the oncoming vehicle lane region is on the side closest to the relevant vehicle lane exterior region. Namely, when a point of red light is detected in the vehicle lane exterior region on the own vehicle lane region side, it can be determined to be a stationary vehicle ahead (S42). Similarly, when a point of white light is detected in the vehicle lane exterior region on the oncoming vehicle lane region side, it can be determined to be from a stationary oncoming vehicle (S43).
By detecting vehicle lane-lines in the captured image and, based on the detected vehicle lane-lines, dividing the captured image into the own vehicle lane region, the oncoming vehicle lane region and the vehicle lane exterior regions, the probability of detecting a vehicle ahead, an oncoming vehicle, and a stationary light can be raised in each of the respective regions. This enables the detection accuracy for vehicles ahead and oncoming vehicles to be raised while also enabling speedy detection. Namely, vehicles ahead can be detected by giving priority to detecting points of red light in the own vehicle lane region, and the detection accuracy for a vehicle ahead can also be raised by excluding stationary lights through detecting the behavior of the points of red light as well. Hence processing to detect the attributes of points of light in the own vehicle lane region other than points of red light becomes unnecessary, enabling speedy detection of vehicles ahead and preventing false detection.
Oncoming vehicles also can be detected by giving priority to detecting points of white light in the oncoming vehicle lane region, and the detection accuracy for oncoming vehicles can be raised by excluding stationary lights through detecting the behavior of the points of white light as well. Hence processing to detect the attributes of points of light in the oncoming vehicle lane region other than points of white light becomes unnecessary, enabling speedy detection of oncoming vehicles and preventing false detection.
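The behavior-based exclusion of stationary lights mentioned above can be sketched as follows: a point whose track across successive frames matches the track a stationary object would follow, given the vehicle's own speed from the vehicle speed sensor Sv, is treated as a stationary light. The track representation, the tolerance value, and the function name are illustrative assumptions.

```python
def is_stationary_light(observed_track, expected_stationary_track, tol=5.0):
    """Compare a light point's observed track against the track a stationary
    light would follow given the vehicle's own motion.

    Both tracks are lists of (x, y) image positions over successive frames;
    the expected track would be predicted from the vehicle speed sensor and
    camera geometry. Returns True when every frame's deviation is within
    tol pixels, i.e. the point behaves like a stationary light.
    """
    return all(
        abs(ox - ex) <= tol and abs(oy - ey) <= tol
        for (ox, oy), (ex, ey) in zip(observed_track, expected_stationary_track)
    )
```

A taillight of a vehicle ahead travelling at a similar speed stays nearly fixed in the image, so its track deviates from the predicted stationary track and the point is retained as a vehicle candidate.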
Furthermore, by lowering the detection sensitivity for points of light in the vehicle lane exterior regions, namely the road shoulder regions, fewer points of light originating from stationary lights in the vehicle lane exterior regions are detected, or none are detected at all. Accordingly, processing to detect attributes of such points of light when detecting other vehicles becomes unnecessary, and these points of light are not falsely detected as other vehicles. This contributes to the speed and accuracy of detecting other vehicles.
In the foregoing example, an imaging camera for visible light is employed. However, other vehicles may instead be detected in images captured by a far infrared camera in place of the visible light camera. A far infrared camera captures an image of the heat generated by objects, and it is possible to capture an image of the far infrared component of light reflected from vehicle lane-line (white paint) markings on a road. It is thus possible, as shown schematically in
The temperature range over which a far infrared camera captures points of light as images can be set. Hence, for example, setting the range of temperatures for image capture to 50° C. to 70° C. prevents pedestrians and the like from being captured as points of light, thereby reducing the number of points of light captured. This both raises the detection accuracy for other vehicles and contributes to speedy detection.
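The temperature-band filtering described above can be sketched as follows; the data representation (pixel positions paired with apparent temperatures) and the function name are illustrative assumptions, while the 50° C. to 70° C. band is the example given in the text.

```python
def filter_by_temperature(points, t_min=50.0, t_max=70.0):
    """Keep only pixels whose apparent temperature falls in the image-capture
    band (50-70 deg C in the example in the text), so that cooler sources
    such as pedestrians are not captured as points of light.

    points: iterable of ((x, y), temperature) pairs
    Returns the (x, y) positions within the band.
    """
    return [(x, y) for (x, y), t in points if t_min <= t <= t_max]
```

A pedestrian at roughly body temperature and a very hot source such as an exhaust component both fall outside the band and are discarded, leaving lamp-like sources.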
An explanation has been given in the foregoing description of controlling a pattern of light distribution of the headlights HL according to other-vehicle detection with the vehicle detection apparatus 1. However, applications can also be made to a headlight control device for controlling the light distribution direction or illumination intensity. Alternatively, configurations can be made such that the vehicle detection apparatus 1 of embodiments described herein not only detects other vehicles in the region in front of the vehicle, but also detects the presence of vehicles in other peripheral regions. Applications are therefore possible in which the speed and direction of the vehicle itself are controlled according to such detection.
Embodiments described herein are applicable to any vehicle detection apparatus that captures an image of a region in front of the vehicle itself, detects points of light in the captured image, and detects other vehicles based on the detected points of light. Other implementations are within the scope of the claims.
Number | Date | Country |
---|---|---
06-276524 | Sep 1994 | JP |
2008-037240 | Feb 2008 | JP |
2010-000957 | Jan 2010 | JP |
WO03093857 | Nov 2003 | WO |
Number | Date | Country
---|---|---
20120027255 A1 | Feb 2012 | US