The invention relates to a method for detecting a traffic lane by means of a vehicle camera.
Camera-based driver assistance systems that detect the course of one's own roadway lane (called the ego-lane herein) on the basis of road markings, such as Lane Departure Warning (LDW) systems, are now established on the market. In certain areas of application, the use of such systems is already mandatory.
Moreover, assistance systems for lane keeping support can intervene in the vehicle steering. For example, EP 1414692 B1 shows a method for operating a driver assistance system of a vehicle with servo-assisted steering. A CCD camera monitors the surroundings of the vehicle, and in particular the course of the traffic lane is estimated from the camera data. The surroundings data are compared with motion data of the vehicle, and on the basis of this comparison, support of the driver's steering operation can be initiated.
Typically, modern driver assistance systems detect the course of the markings of the ego-lane and of the adjacent traffic lanes and estimate therefrom the position of one's own vehicle or subject vehicle (called the ego-vehicle herein) relative to the lane markings.
DE 10 2005 033 641 A1 describes a display device with traffic lane detection that presents the lane markings of the detected traffic lane on a display in such a manner that the quality of the traffic lane detection is visible. For example, a brightness contrast or color contrast can be defined that is required for good detection, or at least for poor detection, thereby grading the quality of traffic lane detection into good, poor, and impossible. If a road marking is interrupted on a certain stretch of road and thus cannot be detected, so that it can only be extrapolated from the adjacent road markings for that gap, a lane marking representation differing from that of a consistently and reliably detected road marking is selected. It is also possible to include pavement boundary markings (e.g., delineator posts) in the traffic lane detection.
In contrast to freeways and developed national highways, many roads have no road markings or are only partially provided with them. On many country roads, for example, only the middle of the pavement is marked, as a separating line between the two traffic lanes. In most cases there are no lateral boundary markings, and drivers orient themselves by delineator posts, by the surroundings, or by the lateral boundary of the pavement itself. Modern driver assistance systems for lane detection notice the absence of markings and either switch themselves off completely, switch over to modes in which only the existing markings are taken into consideration, or take the quality of the traffic lane detection into consideration when displaying the traffic lane, as described above.
Future systems will have to be capable of detecting the ego-lane irrespective of the presence of markings since the demands on driver assistance systems are constantly increasing, especially with regard to autonomous safety functions, such as Emergency Brake Assist and Emergency Steer Assist.
It is thus an object of embodiments of the invention to specify a method for detecting the traffic lane that is capable of reliably detecting and outputting the traffic lane in spite of incomplete or missing road markings.
This object can be achieved by embodiments of the invention as defined in the claims. The dependent claims describe advantageous refinements of the invention, wherein combinations and refinements of individual features are also possible.
An embodiment of the invention is based on the idea that a roadway edge or shoulder (called the verge herein) can generally be detected by means of a camera, regardless of the presence of lane markings, and that the traffic lane can be estimated from the position of the verge, or the position of the verge can at least be taken into consideration when detecting the traffic lane.
The detection of verges is a challenging task since the surfaces of pavements and of verges vary widely, so that there is no universal pavement-verge pattern.
An embodiment of an inventive traffic lane detection method comprises the following steps: image acquisition, corridor detection, and structure detection.
Corridor detection serves, first of all, to determine the free space in front of the vehicle from the available image information, and thereby to constrain the subsequent structure detection, which determines the actual verge from the image information.
In the next step (structure detection), pattern recognition methods are preferably used that determine the course of the traffic lane boundary while taking the determined corridor into consideration. The determined course of the verge usually corresponds to the traffic lane boundary.
The advantage of an inventive method is that taking the verge into consideration enhances the availability and reliability of traffic lane detection.
According to a preferred embodiment, a 3D reconstruction of the surroundings of the vehicle can be created by means of a stereo camera, and the driving corridor can be determined from said reconstruction. Assuming that the pavement is relatively even, a threshold value can be applied to the 3D information, so that regions exceeding a particular height are excluded from the driving corridor.
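This thresholding step can be sketched as follows. The sketch assumes 3D points given in vehicle coordinates with z as the height above the (assumed flat) road plane; the grid layout, threshold value, and all names are illustrative, not taken from the source.

```python
def free_space_cells(points_3d, height_threshold=0.15, cell_size=0.5):
    """Mark ground-plane grid cells as free or occupied from 3D points.

    points_3d: iterable of (x, y, z) in vehicle coordinates, z = height
    above the assumed flat road plane (illustrative convention).
    Cells containing any point above the threshold are excluded from
    the driving corridor.
    """
    occupied = set()
    seen = set()
    for x, y, z in points_3d:
        cell = (int(x // cell_size), int(y // cell_size))
        seen.add(cell)
        if z > height_threshold:
            occupied.add(cell)
    # Cells that were observed and contain no tall structure remain
    # as candidate driving-corridor cells.
    return seen - occupied
```

In a real system the threshold would be chosen relative to the estimated road plane rather than an absolute zero, to tolerate pitch and road unevenness.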
Alternatively, spatial information about the surroundings of the vehicle can be obtained from a sequence of images acquired by a mono camera. There are, for example, structure-from-motion methods, in which the spatial structure of the acquired scenes can be inferred from temporal changes in the image information (optical flow).
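A minimal structure-from-motion sketch for the mono-camera case: under pure forward translation, a static point's image radius r, measured from the focus of expansion, satisfies r = f*R/Z, so its depth can be recovered from how much it expands between two frames. The function name and interface are illustrative assumptions.

```python
def depth_from_expansion(r_prev, r_curr, forward_motion):
    """Depth of a static point from its radial image motion.

    Assumes pure forward translation by `forward_motion` metres between
    two frames, with image radii r_prev and r_curr measured from the
    focus of expansion. From r = f*R/Z it follows that
        Z_curr = forward_motion * r_prev / (r_curr - r_prev).
    """
    if r_curr <= r_prev:
        raise ValueError("a static point must expand under forward motion")
    return forward_motion * r_prev / (r_curr - r_prev)
```

Real structure-from-motion methods estimate dense optical flow and handle camera rotation as well; this shows only the underlying geometric relation.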
For driving-corridor detection and structure detection, results of previous detections are advantageously taken into consideration, wherein said results are suitably filtered and weighted (e.g., by a Kalman filter).
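The filtering and weighting of previous detections can be illustrated with a scalar Kalman filter tracking, for example, the lateral offset of the verge. This is a generic sketch, not the specific filter of the invention; all parameter values are illustrative.

```python
def kalman_update(estimate, variance, measurement, meas_variance,
                  process_variance=0.01):
    """One predict/update cycle of a scalar Kalman filter, as a sketch
    of how previous verge detections can be filtered and weighted."""
    # Predict: the verge offset is assumed roughly constant; process
    # noise accounts for actual changes in the road course.
    variance = variance + process_variance
    # Update: blend prediction and new detection by their uncertainties.
    gain = variance / (variance + meas_variance)
    estimate = estimate + gain * (measurement - estimate)
    variance = (1.0 - gain) * variance
    return estimate, variance
```

A noisy new detection (large `meas_variance`) then shifts the estimate only slightly, while a confident one dominates.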
Corridor detection is preferably supported by information about the dynamics of the ego-vehicle, i.e., data of at least one vehicle dynamics sensor that are present on the vehicle bus, such as speed, yaw rate, steering angle, etc.
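How vehicle-dynamics data can support corridor detection may be sketched by predicting the ego path from speed and yaw rate with a constant-turn-rate model and centring the corridor search on that path. The model choice and all names are illustrative assumptions.

```python
import math

def predict_path(speed, yaw_rate, dt=0.1, steps=10):
    """Predict the ego path from vehicle-dynamics data (speed in m/s,
    yaw rate in rad/s) with a constant-turn-rate model. The corridor
    search area can then be centred on this path."""
    x = y = heading = 0.0
    path = [(x, y)]
    for _ in range(steps):
        heading += yaw_rate * dt          # integrate yaw rate
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        path.append((x, y))
    return path
```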
In a preferred embodiment, data of further surroundings sensors are also taken into consideration for corridor detection, e.g., by merging the camera data with the 3D information from radar sensors or laser sensors.
In an advantageous realization of the invention, the detected driving corridor can restrict the search area for an image analysis in structure detection. The restriction to relevant image sections results in a better utilization of resources and also reduces the probability of detection errors.
Preferably, structure detection is only performed in an area of the at least one image in which at least the detected driving corridor is situated. For this purpose, the determined 3D corridor can be projected into the available image information and the range for structure detection can thus be restricted. For example, region segmentation can then be performed in the determined corridor (while advantageously taking color information into consideration) in order to determine the boundary of the ego-lane. Region segmentation serves to distinguish at least between the pavement and the region next to the pavement (e.g., meadow, curbstone, sand).
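A minimal sketch of region segmentation restricted to the corridor, on a single image row of grey values (a real system would also use colour information, as noted above; the reference value, tolerance, and labels are illustrative):

```python
def segment_row(row_pixels, corridor_mask, pavement_ref, tolerance=30):
    """Classify the pixels of one image row as 'pavement' or 'off-road',
    restricted to the detected corridor.

    row_pixels:    grey values of the row
    corridor_mask: True where the projected driving corridor lies
    pavement_ref:  expected grey value of the travel surface (assumed)
    """
    labels = []
    for value, in_corridor in zip(row_pixels, corridor_mask):
        if not in_corridor:
            labels.append(None)                # outside the search area
        elif abs(value - pavement_ref) <= tolerance:
            labels.append("pavement")
        else:
            labels.append("off-road")          # e.g. meadow, curbstone, sand
    return labels
```

The boundary of the ego-lane then follows as the transition between the "pavement" region and the adjacent region.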
It is also possible to employ edge-based methods that determine the lateral end of the pavement from intensity differences in the image information.
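An edge-based determination of the lateral end of the pavement can be sketched, in its simplest form, as locating the largest intensity difference between neighbouring pixels in an image row (illustrative; real systems use smoothed gradients over many rows):

```python
def strongest_edge(row_pixels):
    """Locate a candidate pavement edge in one image row as the position
    of the largest absolute intensity difference between neighbouring
    pixels. Returns (index, strength): the edge lies between pixel
    index and index + 1."""
    diffs = [abs(b - a) for a, b in zip(row_pixels, row_pixels[1:])]
    i = max(range(len(diffs)), key=diffs.__getitem__)
    return i, diffs[i]
```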
In an advantageous realization of the invention, structure detection is exclusively or additionally based on a correlation analysis, in which the image or an image detail is compared with learned patterns of typical verges: suitable templates of the pavement are compared with the available image information, and the presence of the verge is inferred from the matches and deviations.
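A correlation analysis of this kind can be sketched with one-dimensional intensity profiles and Pearson correlation as the match score; the template names and profiles are purely illustrative stand-ins for learned verge patterns.

```python
def correlation(a, b):
    """Pearson correlation of two equal-length intensity profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_template_match(profile, templates):
    """Compare an image profile with learned verge templates (a dict of
    name -> profile) and return the best match and its score."""
    scores = {name: correlation(profile, t) for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```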
Preferably, information about the determined verge or about the estimated traffic lane can be output in a further method step. For this purpose, the information about the detected verges can be converted, in the output step, into a form that is suitable for subsequent processing in the ego-vehicle's own system or in other control devices.
The course of the pavement is advantageously represented as parameter values of a model (e.g., a clothoid). It is also possible to use open polylines (traverses).
Ideally, the course can be indicated in 3D coordinates (in real space).
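The clothoid representation mentioned above can be sketched with the common third-order approximation, in which curvature c0 and curvature rate c1 are the model parameters; the function and parameter names are illustrative.

```python
def clothoid_offset(x, y0, heading, c0, c1):
    """Lateral offset of the verge at longitudinal distance x, using the
    common third-order clothoid approximation
        y(x) = y0 + heading*x + c0*x**2/2 + c1*x**3/6,
    where y0 is the lateral offset at x = 0, heading the relative
    heading angle, c0 the curvature and c1 the curvature rate."""
    return y0 + heading * x + c0 * x ** 2 / 2 + c1 * x ** 3 / 6
```

A downstream controller can thus reconstruct the verge course from just the four parameter values instead of a full point list.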
The method described herein can be advantageously employed together with existing marking-detecting methods in order to obtain a more comprehensive description of traffic lanes.
For this purpose, e.g., the available results can be suitably merged and processed, or the two methods can advantageously influence each other. For example, a detected center-line marking can be used to adjust the search area for the verge to the course of said marking.
In combination, the results can also be used for mutual plausibility checking.
Generally, this is also possible in combination with other sensors, e.g., a plausibility check on the basis of the traffic lane course estimated by a radar system.
In the following, an exemplary embodiment of an inventive method will be described.
A vehicle is equipped with a mono camera that is arranged inside the windshield, covers the part of the surroundings in front of the vehicle, and continually acquires images. From a sequence of images, spatial structures are reconstructed from the image contents by means of the optical flow. The free space in front of the vehicle is determined from these spatial structures in order to determine the corridor for the vehicle, wherein the height of the structures relative to the surface on which the vehicle is located is analyzed. The position of a potential free space into which the vehicle can move is inferred from the height profile. The vehicle corridor corresponds to all potential free spaces in front of the vehicle.
The position of the vehicle corridor in the image is now used to specifically detect structures there, e.g., by means of region segmentation. During region segmentation, the image is segmented, in the region of the vehicle corridor, into the travel surface, adjacent areas (e.g., meadow), elevations (e.g., crash barrier, post, curbstone), etc. For example, brightness, color, size and/or edge criteria can be used for the segmentation.
As a result thereof, the verge is determined as the border between the travel surface region and an adjacent region.
The course of the detected verge can now be output. On the one hand, the verge region in the image can be forwarded to a lane marking detection unit, so that lane markings in the vicinity of the verge can be detected more easily and faster. On the other hand, the course of the verge can be output to a controller of a steering assist unit, in real-world coordinates or as parameter values of a clothoid.
Number | Date | Country | Kind |
---|---|---|---|
10 2011 109 569 | Aug 2011 | DE | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/DE2012/100202 | 7/6/2012 | WO | 00 | 1/23/2014 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2013/020550 | 2/14/2013 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6269308 | Kodaka et al. | Jul 2001 | B1 |
7091838 | Shimakage | Aug 2006 | B2 |
7289059 | Maass | Oct 2007 | B2 |
7382236 | Maass et al. | Jun 2008 | B2 |
7411486 | Gern et al. | Aug 2008 | B2 |
7782179 | Machii et al. | Aug 2010 | B2 |
8004428 | Koenig | Aug 2011 | B2 |
8184859 | Tanji | May 2012 | B2 |
8301344 | Simon et al. | Oct 2012 | B2 |
8330592 | Von Zeppelin et al. | Dec 2012 | B2 |
8354944 | Riegel et al. | Jan 2013 | B2 |
8457359 | Strauss et al. | Jun 2013 | B2 |
8466806 | Schofield | Jun 2013 | B2 |
8543254 | Schut et al. | Sep 2013 | B1 |
8615357 | Simon | Dec 2013 | B2 |
8842884 | Klein et al. | Sep 2014 | B2 |
20010016798 | Kodaka et al. | Aug 2001 | A1 |
20040143381 | Regensburger et al. | Jul 2004 | A1 |
20050004731 | Bohm et al. | Jan 2005 | A1 |
20060151223 | Knoll | Jul 2006 | A1 |
20060161331 | Kumon et al. | Jul 2006 | A1 |
20080027607 | Ertl et al. | Jan 2008 | A1 |
20080204212 | Jordan et al. | Aug 2008 | A1 |
20100098297 | Zhang | Apr 2010 | A1 |
20100157058 | Feiden | Jun 2010 | A1 |
20110199200 | Lueke et al. | Aug 2011 | A1 |
20110313665 | Lueke et al. | Dec 2011 | A1 |
20120277957 | Inoue et al. | Nov 2012 | A1 |
20140088862 | Simon | Mar 2014 | A1 |
20140324325 | Schlensag et al. | Oct 2014 | A1 |
20150149076 | Strauss et al. | May 2015 | A1 |
Number | Date | Country |
---|---|---|
103 55 807 | Jul 2004 | DE |
102004011699 | Sep 2004 | DE |
103 36 638 | Feb 2005 | DE |
102004030752 | Jan 2006 | DE |
102004057296 | Jun 2006 | DE |
102005033641 | Jan 2007 | DE |
102005039167 | Feb 2007 | DE |
102005063199 | Mar 2007 | DE |
102005046672 | Apr 2007 | DE |
102006020631 | Nov 2007 | DE |
102007016868 | Oct 2008 | DE |
102008020007 | Oct 2008 | DE |
102009009211 | Sep 2009 | DE |
102009016562 | Nov 2009 | DE |
102009039450 | May 2010 | DE |
102009028644 | Feb 2011 | DE |
0 640 903 | Mar 1995 | EP |
1 089 231 | Apr 2001 | EP |
1 346 877 | Sep 2003 | EP |
1 383 100 | Jan 2004 | EP |
1 398 684 | Mar 2004 | EP |
1 414 692 | May 2004 | EP |
2 189 349 | May 2010 | EP |
WO 2004047449 | Jun 2004 | WO |
WO 2004094186 | Nov 2004 | WO |
Entry |
---|
PCT Examiner Jeroen Bakker, International Search Report of the International Searching Authority for International Application PCT/DE2012/100202, mailed Feb. 20, 2013, 4 pages, European Patent Office, HV Rijswijk, Netherlands. |
PCT Examiner Agnès Wittmann-Regis, PCT International Preliminary Report on Patentability including English Translation of PCT Written Opinion of the International Searching Authority for International Application PCT/DE2012/100202, issued Feb. 11, 2014, 8 pages, International Bureau of WIPO, Geneva, Switzerland. |
German Examiner Ottmar Kotzbauer, German Search Report for German Application No. 10 2011 109 569.5, dated May 2, 2012, 5 pages, Muenchen, Germany, with English translation, 5 pages. |
Claudio Caraffi et al., “Off-Road Path and Obstacle Detection Using Decision Networks and Stereo Vision”, IEEE Transactions on Intelligent Transportation Systems, IEEE, Piscataway, NJ, USA, vol. 8, No. 4, Dec. 1, 2007, pp. 607 to 618, XP011513050, ISSN: 1524-9050. |
Pangyu Jeong et al., “Efficient and Robust Classification Method Using Combined Feature Vector for Lane Detection”, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Service Center, Piscataway, NJ, USA, vol. 15, No. 4, Apr. 1, 2005, pp. 528 to 537, XP011129359, ISSN: 1051-8215. |
Office Action in European Patent Application No. 12 743 662.4, mailed Nov. 11, 2015, 8 pages, with partial English translation, 2 pages. |
Number | Date | Country | |
---|---|---|---|
20140152432 A1 | Jun 2014 | US |