The invention relates to a method for recognizing traffic signs that include at least one main sign and one assigned additional sign, and to a corresponding device.
Modern driver assistance systems are increasingly equipped with an electronic traffic sign recognition system, for example to warn the driver when the vehicle exceeds the allowed speed limit. A camera records the area in front of the vehicle and provides the corresponding image data to an image evaluation device, which analyzes and classifies the data using an algorithm in order to identify traffic signs in it. Such a method is known, for example, from DE 198 52 631 A1. The information from traffic signs identified in this way can then be fed into a driver assistance feature, e.g. as an indication of the currently allowed speed limit on the instrument cluster of the vehicle.
Traffic signs are often equipped with additional signs that qualify the meaning of the main sign or restrict it to specific situations.
WO 2008/145545 A1 discloses a method for identifying traffic-relevant information in which the data from a camera sensor and map data of a navigation system are interpreted. Additional signs can be taken into account if their content is included in the map data of the navigation system.
DE 10 2008 057 675 A1 describes a method in which, after identifying a traffic sign, the system looks for an additional sign assigned to the traffic sign at predefined positions (relative to the identified traffic sign). The additional sign is classified by comparison with stored depictions of additional signs, e.g. using a radial basis function.
The disadvantage of this method is that time-limiting additional signs exist in a practically unlimited number of variants, which would either all have to be stored, or the driver would only receive the generic information that the "traffic sign applies at limited times" because the content of the additional sign cannot be recognized.
Since such traffic sign recognition systems treat additional signs purely as pictograms, additional signs are viewed and classified as a whole, that is, as an image pattern. This means that all "time information" falls into one and the same class that cannot be further differentiated without obtaining additional knowledge, e.g. from a navigation system.
It is an object of an embodiment of this invention to provide a method that allows fast and reliable recognition of additional signs assigned to traffic signs and overcomes the above-mentioned disadvantages of the prior art.
This object can be achieved by the features of methods according to embodiments of the invention set forth herein.
A method for recognizing traffic signs according to an embodiment of the invention comprises the following steps:
A camera records at least one image of the vehicle environment.
The presence and class of one or several main traffic signs are recognized from the data of the image recorded by the camera.
The presence of one or several additional sign(s) assigned to the main sign(s) is recognized.
The additional sign(s) is/are classified to the extent possible using a pattern recognition method. The system can, for example, recognize that the additional sign contains a distance (in meters or kilometers), but the pattern recognition method cannot determine the value of that distance.
If classification was not possible, or only partially possible, using the pattern recognition method, a text recognition method reads and interprets the text of the additional sign to determine its information content. Reading in this context means that the method identifies individual letters or numbers; interpreting means that the method captures the meaning of this text. Both steps together could also simply be called reading or interpreting.
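The steps above can be sketched as a two-stage pipeline: pattern recognition first, with text recognition as a fallback when classification was absent or incomplete. The following is a minimal illustrative sketch; `classify_pattern` and `read_text` are stubs standing in for a real classifier and OCR engine, and all names are assumptions, not part of the claimed method.

```python
def classify_pattern(sign_image):
    """Stub pattern classifier: returns (class_label, complete), where
    `complete` is False when only the sign category could be determined,
    e.g. a distance pictogram is recognized but not its value."""
    if "km" in sign_image or " m" in sign_image:
        return ("distance", False)
    if "truck" in sign_image:
        return ("vehicle-type", True)
    return (None, False)

def read_text(sign_image):
    """Stub text recognizer: reads and interprets the sign's text."""
    return sign_image.strip()

def recognize_additional_sign(sign_image):
    """Classify with pattern recognition first; fall back to text
    recognition when classification was absent or incomplete."""
    label, complete = classify_pattern(sign_image)
    if label is None or not complete:
        return {"class": label, "text": read_text(sign_image)}
    return {"class": label, "text": None}

# Text recognition fills in the distance value the classifier missed;
# a pure pictogram sign needs no text recognition at all.
print(recognize_additional_sign("in 2 km"))
print(recognize_additional_sign("truck"))
```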
An embodiment of the invention is based on the rationale that additional signs with (partially) varying or variable content, i.e. content that is not the same for all signs of this class, are subjected to automatic machine interpretation of this variable content, because such content is typically textual. Varying or variable texts include, in particular, information on time, distance, and weight, which may differ between additional signs.
In a variant of the method described, a decision is made after the first three steps (see above) as to whether the present additional sign includes varying text. This decision can, for example, be made based on an advance classification in which additional knowledge can be taken into account. Additional knowledge can mean, for example, knowledge that particular main signs are frequently combined with particular additional signs. For example, the "Construction" sign on freeways is often associated with distance information on the assigned additional sign, e.g. "in 2 km". If a decision is made that the additional sign includes varying text, the text of the additional sign is read and interpreted using a text recognition method. Furthermore, a decision is made as to whether the additional sign includes other information. This can in particular be other information alongside varying text, or information on a sign that does not contain varying text. If other information is present, this other information is interpreted using a pattern recognition method. If no other information is present, no pattern recognition is performed, since the additional sign contains only varying text, which is read by the text recognition feature.
The order of these two steps, in each of which a decision is made, is not predetermined. For example, while performing pattern recognition, such a system could detect that varying text is present and this varying text could then be passed on to text recognition.
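The advance classification using additional knowledge could be sketched as a lookup from main-sign class to the additional-sign type it is frequently combined with. The mapping and all names below are illustrative assumptions, not an exhaustive rule set from the method itself.

```python
# Hypothetical prior knowledge: which main signs frequently carry
# additional signs with varying text, and of what type.
LIKELY_VARYING_TEXT = {
    "construction": "distance",      # e.g. "in 2 km" on freeways
    "speed-limit": "time-window",    # e.g. noise protection, 8 p.m. - 6 a.m.
}

def expects_varying_text(main_sign_class):
    """Decide whether the additional sign assigned to this main sign
    is likely to carry varying text and should be routed to text
    recognition first."""
    return main_sign_class in LIKELY_VARYING_TEXT

print(expects_varying_text("construction"))
```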
Both methods according to the invention have the advantage that traffic signs that differ in text content from other, e.g. previously learned, traffic signs can be recognized reliably and completely. Recognition of the text content is also independent of the various meanings of traffic signs in different countries and even works for different fonts and font sizes on additional signs. Recognition is therefore highly flexible.
In an advantageous embodiment of the invention, a relevance assessment step is added in which the recognized text is compared with situational information such as the time of day, the day of the week, the distance the vehicle has traveled since passing the recognized traffic sign and additional sign, the weight of the vehicle, whether the road is wet, the outside temperature, the lane the vehicle is in, activation of the turn signal, and/or a recommended turn from an active route planning feature. In this way, the relevance of the main sign to which the additional sign is assigned can be determined for the current driving situation.
The main sign is preferably classified as not relevant based on the relevance assessment if the situational information does not fall within the scope of the text recognized on the assigned additional sign. If a speed limit for reasons of noise protection applies only from 8 p.m. to 6 a.m., it can be ignored at the current time of 2:35 p.m. because it is not currently relevant.
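A time-window relevance check of this kind could look as follows. This is a minimal sketch under the assumption that the recognized text has already been interpreted into a daily validity window; note that windows such as 8 p.m. to 6 a.m. cross midnight and need a separate branch.

```python
from datetime import time

def time_window_relevant(now, start, end):
    """Check whether `now` falls within a daily validity window.
    Windows that cross midnight (e.g. 20:00-06:00) are supported."""
    if start <= end:
        return start <= now <= end
    # window wraps around midnight
    return now >= start or now <= end

# Noise-protection speed limit valid 8 p.m. - 6 a.m.:
# not relevant at 2:35 p.m., relevant at 10 p.m.
start, end = time(20, 0), time(6, 0)
print(time_window_relevant(time(14, 35), start, end))
print(time_window_relevant(time(22, 0), start, end))
```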
In a preferred embodiment of the invention, at least one main sign is classified as relevant if the situational information falls within the scope of the text recognized on the assigned additional sign.
According to a preferred embodiment, the system checks whether the relevance assessment will remain constant under an expected change in the situational information. If the respective speed limit sign was passed at 7:59 p.m., a change in situational information that will result in a changed relevance assessment can be expected for 8 p.m. In this case, it is preferred to repeat the relevance assessment when the situational information changes.
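Scheduling that repetition amounts to finding the next window boundary after the current time. The sketch below is an assumption about one possible implementation; the function name and signature are illustrative.

```python
from datetime import datetime, time

def next_relevance_change(now, start, end):
    """Return the next daily boundary (start or end of the validity
    window) after `now`, i.e. the moment the relevance assessment is
    expected to flip and should therefore be repeated."""
    boundaries = [start, end]
    upcoming = [t for t in boundaries if t > now.time()]
    # if no boundary remains today, the earliest one tomorrow is next
    return min(upcoming) if upcoming else min(boundaries)

# Sign passed at 7:59 p.m., window 8 p.m. - 6 a.m.: repeat at 8 p.m.
print(next_relevance_change(datetime(2012, 7, 10, 19, 59),
                            time(20, 0), time(6, 0)))
```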
The situational information may preferably include a current vehicle speed, which can be derived in a known way from a series of images taken by the same camera. The distance the vehicle has traveled since passing the recognized traffic sign with additional sign can be determined on the basis of the vehicle speed data. This allows an evaluation of the relevance of traffic signs which apply after or within a specific distance. Recognition of a stop sign with the additional sign “in 200 m” could also advantageously be used in that the traffic sign recognition feature actively “searches” for a stop sign in this range.
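The distance-based part of the assessment can be sketched by integrating speed samples over time. This is a simplified illustration under the assumption of evenly spaced samples; the names are hypothetical.

```python
def distance_since_sign(speed_samples_mps, dt_s):
    """Integrate vehicle speed samples (m/s, one every dt_s seconds) to
    estimate the distance traveled since the sign was passed. The speed
    itself may be derived from a series of camera images, as noted above."""
    return sum(v * dt_s for v in speed_samples_mps)

def still_within_validity(distance_m, validity_range_m):
    """A main sign whose additional sign limits it to a range, e.g.
    'for 800 m', stays relevant while the traveled distance is smaller."""
    return distance_m < validity_range_m

# 10 speed samples of 20 m/s taken once per second -> 200 m traveled,
# still inside an 800 m validity range
d = distance_since_sign([20.0] * 10, 1.0)
print(d, still_within_validity(d, 800.0))
```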
In a preferred embodiment, a consistency check of the text found in the two additional signs is performed when similar signs are detected on both sides of the road. If the main signs were classified as identical, it is unlikely that the content of the additional signs differs. A decision as to which recognized content is correct can be made depending on the reliability of the recognition (pattern and/or text) or by merging the two recognized messages and subjecting them to a plausibility check.
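One simple way to realize such a consistency check is sketched below: matching texts are accepted directly, and on a mismatch the reading with the higher recognition confidence wins. This is an illustrative stand-in for the plausibility check; the tuple format and function name are assumptions.

```python
def merge_additional_signs(left, right):
    """Consistency check for similar additional signs detected on both
    sides of the road. Each argument is a (recognized_text, confidence)
    pair; on disagreement the more reliable reading is kept."""
    (text_l, conf_l), (text_r, conf_r) = left, right
    if text_l == text_r:
        return text_l
    return text_l if conf_l >= conf_r else text_r

# Agreement is accepted directly; disagreement falls back to confidence.
print(merge_additional_signs(("in 800 m", 0.9), ("in 800 m", 0.8)))
print(merge_additional_signs(("in 800 m", 0.6), ("in 300 m", 0.9)))
```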
It is preferred that further information from a digital map, a positioning device and/or other vehicle sensors, such as a rain sensor, driving dynamics sensor, radar or LIDAR, is taken into account.
Another subject matter of the invention is a device for recognizing traffic signs that comprises a camera for recording at least one image of a vehicle environment, an image evaluation unit, and a unit for relevance assessment. The image evaluation unit can recognize the presence and class of one or several main signs and the presence of one or several additional signs assigned to the recognized main sign(s) from the recorded image data. The image evaluation unit comprises a pattern recognition unit and a text recognition unit for this purpose. The text recognition unit can in particular be used to read and interpret variable text information on the additional sign. The unit for relevance assessment compares the recognized text with situational information to determine the relevance of the main sign for a current driving situation.
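The composition of these units could be sketched as follows. All class and parameter names are hypothetical, and the stub callables only illustrate how the units interact; they are not the device's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ImageEvaluationUnit:
    """Holds the pattern recognition and text recognition units."""
    recognize_pattern: Callable[[str], str]
    read_text: Callable[[str], str]

@dataclass
class RelevanceUnit:
    """Compares recognized text with situational information."""
    assess: Callable[[str, dict], bool]

@dataclass
class TrafficSignRecognitionDevice:
    evaluation: ImageEvaluationUnit
    relevance: RelevanceUnit

    def process(self, image, situation):
        """Read the additional sign's text and decide whether the
        assigned main sign is relevant for the current situation."""
        text = self.evaluation.read_text(image)
        return self.relevance.assess(text, situation)

# Wire the device with trivial stubs: a sign "in 800 m" stays relevant
# while the traveled distance is below 800 m.
device = TrafficSignRecognitionDevice(
    evaluation=ImageEvaluationUnit(
        recognize_pattern=lambda img: "distance",
        read_text=lambda img: img,
    ),
    relevance=RelevanceUnit(
        assess=lambda text, sit: (sit["distance_m"] < 800
                                  if text == "in 800 m" else True),
    ),
)
print(device.process("in 800 m", {"distance_m": 500}))
```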
Other advantages of the invention follow from the description and the figures. Embodiments are simplified in the figures and explained in more detail in the description below.
The additional signs in
If, for example, the left additional sign from
This information can now be subjected to a relevance assessment and compared to a piece of situational information, namely the distance the vehicle has traveled since passing the recognized traffic sign and additional sign. The relevance assessment is repeated because it is clear that this distance will change as long as the vehicle keeps moving. As long as this distance is smaller than 800 meters, the main sign is relevant and can be displayed to the driver of the vehicle. As soon as a distance of 800 m or more has been traveled, the relevance assessment will reveal that the main sign is no longer relevant, and it will no longer be displayed, for example.
Number | Date | Country | Kind |
---|---|---|---|
10 2011 109 387 | Aug 2011 | DE | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/DE2012/100208 | 7/10/2012 | WO | 00 | 12/13/2013 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2013/017125 | 2/7/2013 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4566124 | Yamamoto et al. | Jan 1986 | A |
4901362 | Terzian | Feb 1990 | A |
5594806 | Colbert | Jan 1997 | A |
6208758 | Ono et al. | Mar 2001 | B1 |
6690842 | Silver et al. | Feb 2004 | B1 |
6801638 | Janssen et al. | Oct 2004 | B1 |
6842177 | Garg et al. | Jan 2005 | B2 |
6888892 | Chung et al. | May 2005 | B2 |
7034848 | Sobol | Apr 2006 | B2 |
7058206 | Janssen et al. | Jun 2006 | B1 |
7209141 | Garg et al. | Apr 2007 | B2 |
7346222 | Lee et al. | Mar 2008 | B2 |
7466841 | Bahlmann et al. | Dec 2008 | B2 |
8064643 | Stein et al. | Nov 2011 | B2 |
8170340 | Klefenz | May 2012 | B2 |
8174570 | Yoneyama et al. | May 2012 | B2 |
8233670 | Moed et al. | Jul 2012 | B2 |
8340420 | Smith et al. | Dec 2012 | B2 |
8346706 | Groitzsch et al. | Jan 2013 | B2 |
8370755 | Buecker et al. | Feb 2013 | B2 |
8379014 | Wiedemann et al. | Feb 2013 | B2 |
8391612 | Natroshvili et al. | Mar 2013 | B2 |
8396295 | Gao et al. | Mar 2013 | B2 |
8452524 | Groitzsch et al. | May 2013 | B2 |
8731244 | Wu | May 2014 | B2 |
8872918 | Kroekel | Oct 2014 | B2 |
8953842 | Zobel | Feb 2015 | B2 |
8995723 | Stein et al. | Mar 2015 | B2 |
9160993 | Lish et al. | Oct 2015 | B1 |
20030059088 | Culp et al. | Mar 2003 | A1 |
20030202683 | Ma et al. | Oct 2003 | A1 |
20050086051 | Brulle-Drews | Apr 2005 | A1 |
20050111698 | Kawai | May 2005 | A1 |
20050232469 | Schofield et al. | Oct 2005 | A1 |
20060034484 | Bahlmann et al. | Feb 2006 | A1 |
20060098877 | Barnes et al. | May 2006 | A1 |
20070081739 | Wilbrink et al. | Apr 2007 | A1 |
20080107345 | Melikian | May 2008 | A1 |
20080137908 | Stein et al. | Jun 2008 | A1 |
20080199050 | Koitabashi | Aug 2008 | A1 |
20090074249 | Moed et al. | Mar 2009 | A1 |
20090143974 | Adachi et al. | Jun 2009 | A1 |
20090312888 | Sickert et al. | Dec 2009 | A1 |
20100198488 | Groitzsch et al. | Aug 2010 | A1 |
20100328316 | Stroila et al. | Dec 2010 | A1 |
20110157659 | Zenju | Jun 2011 | A1 |
20120128210 | Zobel | May 2012 | A1 |
20130011016 | Haas et al. | Jan 2013 | A1 |
20130058534 | Zobel | Mar 2013 | A1 |
20140119605 | Zobel | May 2014 | A1 |
Number | Date | Country |
---|---|---|
198 52 631 | May 2000 | DE |
199 38 256 | Feb 2001 | DE |
102005017541 | Oct 2006 | DE |
102005062154 | Jul 2007 | DE |
102006053289 | May 2008 | DE |
102008057675 | Jul 2009 | DE |
102012212091 | Jan 2013 | DE |
102011109387 | Feb 2013 | DE |
0 149 457 | Jul 1985 | EP |
1 503 354 | Feb 2005 | EP |
1 508 889 | Feb 2005 | EP |
2 026 313 | Feb 2009 | EP |
2 048 597 | Apr 2009 | EP |
2 103 984 | Sep 2009 | EP |
2006-031618 | Feb 2006 | JP |
2007-263629 | Oct 2007 | JP |
2008-176357 | Jul 2008 | JP |
2010-282278 | Dec 2010 | JP |
2011-135513 | Jul 2011 | JP |
WO 9117518 | Nov 1991 | WO |
WO 2008135604 | Nov 2008 | WO |
WO 2008145545 | Dec 2008 | WO |
WO 2009135460 | Nov 2009 | WO |
WO 2013017125 | Feb 2013 | WO |
WO 2015048954 | Apr 2015 | WO |
Entry |
---|
PCT Examiner Sebastian Streich, International Search Report of the International Searching Authority for International Application PCT/DE2012/100208, mailed Nov. 20, 2012, 3 pages, European Patent Office, HV Rijswijk, Netherlands. |
PCT Examiner Agnès Wittmann-Regis, PCT International Preliminary Report on Patentability including English Translation of PCT Written Opinion of the International Searching Authority for International Application PCT/DE2012/100208, issued Feb. 4, 2014, 7 pages, International Bureau of WIPO, Geneva, Switzerland. |
German Examiner Clemens Hauber, German Search Report for German Application No. 10 2011 109 387.0, dated Sep. 9, 2011, 5 pages, Muenchen, Germany, with English translation, 5 pages. |
English translation of DE 10 2008 057 675 A1. |
Wang, Yongping et al., National University of Defense Technology, ChangSha, China, “A Method of Fast and Robust for Traffic Sign Recognition”, Fifth International Conference on Image and Graphics, Sep. 20, 2009, IEEE, NJ, USA, XP031652742, pp. 891 to 895. |
Priese, Lutz et al., University of Koblenz-Landau, Koblenz, Germany, “Ideogram Identification in a Realtime Traffic Sign Recognition System”, Intelligent Vehicles '95 Symposium, Sep. 25, 1995, Michigan, USA; IEEE, NY, USA, XP010194135, pp. 310 to 314. |
Hoessler, Helene et al., “Classifier Training Based on Synthetically Generated Samples”, 5th International Conference on Computer Vision Systems, Mar. 21, 2007, Applied Computer Science Group, Bielefeld University, Germany, XP002510914, pp. 1 to 10. |
X. W. Gao et al., “Recognition of traffic signs based on their colour and shape features extracted using human vision models”, Journal of Visual Communication and Image Representation, vol. 17, Issue 4, Aug. 2006, pp. 675 to 685. |
Taeg Sang Cho et al., “Blur Kernel Estimation Using the Radon Transform”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 241 to 248, 2011. |
Michal Irani et al., “Improving Resolution by Image Registration”, CCGIP: Graphical Models and Image Processing, vol. 53, No. 3, 1991, pp. 231 to 239. |
Céline Mancas-Thillou et al., “An Introduction to Super-Resolution Text”, In: Chaudhuri, Bidyut B.: “Digital Document Processing, Major Directions and Recent Advances”, 2007, London, GB, XP002732930, pp. 305 to 327. |
Hak An Jung et al., “Traffic Sign Detection by Dominant Color Transform and Symbol Recognition by Circular Shift of Distributions on Concentric Circles”, Proceedings of ITC-CSCC, Jan. 1, 1997, Okinawa, Japan, XP055154657, pp. 287 to 290. |
Johannes Brauers et al., “Direct PSF Estimation Using a Random Noise Target”, Institute of Imaging and Computer Vision, RWTH Aachen University, vol. 7537, Jan. 17, 2010, Aachen, Germany, XP055154644, pp. B-1 to B-10. |
Xing Yu Qi et al., “Motion Deblurring for Optical Character Recognition”, Eighth International Conference on Document Analysis and Recognition, IEEE, School of Computing, National University of Singapore, Aug. 31, 2005, Singapore, XP010877977, pp. 389 to 393. |
English translation of Japanese Office Action in Japanese Patent Application No. 2014-523204, mailed Mar. 16, 2016, 3 pages. |
Number | Date | Country | |
---|---|---|---|
20140119605 A1 | May 2014 | US |