The present invention relates generally to a vehicle vision system for a vehicle and, more particularly, to a vehicle vision system that utilizes one or more cameras at a vehicle.
Use of imaging sensors in vehicle imaging systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 5,949,331; 5,670,935 and/or 5,550,677, which are hereby incorporated herein by reference in their entireties.
The present invention provides a driver assistance system or vision system or imaging system for a vehicle that utilizes one or more cameras (preferably one or more CMOS cameras) to capture image data representative of images exterior of the vehicle, and determines traffic signs present along the road being traveled by the vehicle and in the field of view of the camera. The system is operable to determine a speed limit indicated on one or more traffic signs and to determine whether a detected sign provides the speed limit for the particular lane in which the vehicle is traveling. The system, responsive to image processing of image data captured by the camera, determines valid signs and determines and ignores invalid signs.
These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
A vehicle vision system and/or driver assist system and/or object detection system and/or alert system operates to capture images exterior of the vehicle and may process the captured image data to display images and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The vision system includes an image processor or image processing system that is operable to receive image data from one or more cameras and provide an output to a display device for displaying images representative of the captured image data. Optionally, the vision system may provide a display, such as a rearview display or a top down or bird's eye or surround view display or the like.
Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes an imaging system or vision system 12 that includes at least one exterior viewing imaging sensor or camera, such as a rearward viewing imaging sensor or camera 14a (and the system may optionally include multiple exterior viewing imaging sensors or cameras, such as a forward viewing camera 14b at the front (or at the windshield) of the vehicle, and a sideward/rearward viewing camera 14c, 14d at respective sides of the vehicle), which captures images exterior of the vehicle, with the camera having a lens for focusing images at or onto an imaging array or imaging plane or imager of the camera.
Existing traffic sign recognition (TSR) systems may fail to associate a sign's validity with the appropriate lane in traffic situations where adjacent lanes, divided by lane dividers, have different speed limits indicated by speed limit signs that are visible from both adjacent lanes. Some advanced TSR systems address this problem by taking the navigation system's map data into account. These systems run a plausibility check to determine whether a traffic sign at a certain vehicle position is plausible for the lane actually being used or must be cleared or corrected. The shortcoming of these systems is that the navigation map and its lane-level plausibility data must be accurate at all times, and vehicle owners tend not to keep the navigation system's maps updated, for convenience and cost reasons. Other advanced TSR systems address the problem by adopting a speed limit only when speed limit signs are detected at both the left and the right sides of the vehicle. These systems fail when the ego or subject or equipped vehicle passes a speed limit entry where one (or both) of the speed limit signs is not visible (not viewed or captured by the vision system camera), due to, for example, a blockage. The blockage may comprise, for example, a traffic participant or other object, or may be caused by snow or the like covering or partially covering the traffic sign.
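To make the map-based plausibility check concrete, the following is a minimal Python sketch, with hypothetical data structures (DetectedSign, MapLaneInfo) that are not part of this disclosure; it accepts a detected limit only when the navigation map deems that limit plausible for the occupied lane, which also illustrates why stale map data undermines such systems.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedSign:
    speed_limit_kmh: int  # value read from the sign face

@dataclass
class MapLaneInfo:
    lane_id: int
    expected_limit_kmh: Optional[int]  # None when the map has no lane data

def plausibility_check(sign: DetectedSign, lane: MapLaneInfo) -> bool:
    """Accept a detected limit only if the map deems it plausible for the lane.

    As noted above, this check is only as good as the map: stale map data
    wrongly clears valid signs or confirms invalid ones.
    """
    if lane.expected_limit_kmh is None:
        return True  # no map data available, so the detection cannot be refuted
    return sign.speed_limit_kmh == lane.expected_limit_kmh
```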
With reference to the illustrated traffic situation, at lanes 3 and 4 a speed limit of 80 km/h begins, indicated by two 80 km/h speed limit traffic signs 7, one at the left of lane 4 and one at the right of lane 3. A conventional TSR system fails to ignore the 80 km/h speed limit (dedicated to lanes 3 and 4) when the system is made to adopt speed limits upon detecting just one sign at the left or the right of the vehicle.
Also, existing traffic sign recognition (TSR) systems often fail to ignore speed limits dedicated to exit lanes (such as lane 5 in the illustrated traffic situation).
With reference to the exemplary traffic situation shown in the drawings, the system of the present invention operates to determine which detected speed limit signs are valid for the lane being traveled by the subject vehicle, as described below.
In situations where two captured (sensed) traffic signs showing identical indications (for example, identical speed limits) enclose a lane or multiple lanes by being positioned at the left and right sides of the road, the system may handle this as one speed limit dedicated to that lane or those lanes. In situations where both traffic signs leave the camera's field of view at the right and the subject vehicle has not passed that lane but has followed another lane (to the left), that speed limit may be ignored and the previously determined speed limit may be reestablished as valid. Optionally, the tracking of the lane may be responsive to the vehicle navigation system's data or processing and/or the vehicle lane detection system's data or processing and scene classification data, where the data may be used in fusion with (or as an alternative to) the image data captured by the forward viewing camera 15 for determining which lane the subject vehicle is following and deciding which indicated traffic signs are actually valid. Optionally, the detection of lane dividers may be taken into account as an indication that a different speed limit may be indicated at the adjacent lane or lanes.
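The paired-sign handling and lane-tracking behavior described above can be sketched as follows; this is an illustrative Python fragment with assumed inputs (a per-sign field-of-view exit flag and a lane-keeping flag), not the production logic of the system.

```python
from dataclasses import dataclass

@dataclass
class TrackedSign:
    limit_kmh: int
    exited_fov_right: bool  # True if the sign left the imager at its right edge

def resolve_speed_limit(left: TrackedSign, right: TrackedSign,
                        vehicle_kept_left: bool,
                        previous_limit_kmh: int) -> int:
    """Return the speed limit valid for the subject vehicle's lane."""
    if left.limit_kmh == right.limit_kmh:
        # Identical signs at both road sides enclose the lane(s): one limit.
        if left.exited_fov_right and right.exited_fov_right and vehicle_kept_left:
            # Both signs drifted out of the image to the right while the
            # vehicle followed a lane to the left: the enclosed lanes branched
            # off, so ignore that limit and re-establish the previous one.
            return previous_limit_kmh
        return left.limit_kmh
    # A non-identical pair is outside the case described above; keep the prior limit.
    return previous_limit_kmh
```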
Optionally, the TSR system may also detect and classify (recognize) motorway exit marker signs, such as shown in the drawings.
Optionally, an artificial intelligence (AI) algorithm may be trained to fuse the visual cues and the dedicated plausibility logic, and optionally may fuse cues from additional sensors and from remote street and traffic data systems. The remote street and traffic data systems may be connected via any kind of vehicle-to-infrastructure (V2X) communication system, such as via an LTE connection or the like.
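As one hedged illustration of such fusion, the sketch below concatenates the visual cues, the plausibility-logic outputs, and any V2X-sourced cues into a single feature vector for a trained classifier; the cue groupings and names are assumptions for illustration only, not a prescribed architecture.

```python
from typing import List, Sequence

def fuse_cues(visual: Sequence[float],
              plausibility: Sequence[float],
              v2x: Sequence[float]) -> List[float]:
    """Concatenate the cue groups into one feature vector (simple late fusion)."""
    return list(visual) + list(plausibility) + list(v2x)

# A trained model (for example, a small neural network) would consume the
# fused vector and score the validity of a detected sign for the followed lane:
#   validity_score = model.predict([fuse_cues(visual, plaus, v2x)])
```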
With reference to the illustrated traffic situation, if the vehicle is traveling along lane 4 and the left lane gets a speed limit of 80 km/h, the forward viewing camera will capture the sign image and the system will recognize that 80 km/h is valid for the left lane only, because the traffic sign is at the left side, has an indicator for the right side, and leaves the imager at the left side. It does not matter whether a sign for 80 km/h at the right side of lane 4 may be covered, since clearly the 80 km/h sign at the left side of the left lane 4 (with an indicator pointing to the right) is valid for traffic traveling along lane 4.
When the subject vehicle passes a street sign on lane 4 with a valid speed limit of 100 km/h, the forward viewing camera captures, and the system recognizes, at least one of the 100 km/h signs, either with an indicator for the left side that leaves the imager at its left side or with an indicator for the right side that leaves the imager at its right side. Each sign is valid on its own or in combination with the other.
The subject vehicle may pass the fork or exit lane 5 or branching off road at the right side, and the system recognizes the 60 km/h speed limit sign at the fork. In situations where only the left sign 6 is captured, it has an indicator for the left side and leaves the imager at its right side, and thus the system will ignore this sign. In situations where both signs 6 are captured, both signs are ignored because they leave the imager at the right side and one of them has an indicator for the left side. The situation where only the right sign 6 is captured is avoided by the limitation of the view angle of the forward viewing camera when the vehicle is at location 31.
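The three situations above reduce to a compact rule on a sign's arrow indicator and the image edge at which the sign leaves the imager. A minimal Python sketch of that rule follows, with an assumed Side enumeration; per the situations described, the only ignore case is an indicator pointing to the left on a sign that exits the imager at its right side.

```python
from enum import Enum
from typing import Optional

class Side(Enum):
    LEFT = "left"
    RIGHT = "right"

def sign_is_valid(indicator: Optional[Side], exits_imager: Side) -> bool:
    """Validity heuristic distilled from the three situations above."""
    # A left-pointing indicator on a sign that drifts out of the image at the
    # right edge marks a limit for a right-hand fork or exit lane: ignore it.
    if indicator == Side.LEFT and exits_imager == Side.RIGHT:
        return False
    return True

# Walking through the situations above:
assert sign_is_valid(Side.RIGHT, Side.LEFT)      # 80 km/h left-mounted sign: valid
assert sign_is_valid(Side.LEFT, Side.LEFT)       # 100 km/h left sign: valid
assert sign_is_valid(Side.RIGHT, Side.RIGHT)     # 100 km/h right sign: valid
assert not sign_is_valid(Side.LEFT, Side.RIGHT)  # 60 km/h exit sign: ignored
```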
The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in International Publication Nos. WO 2013/081984 and/or WO 2013/081985, which are hereby incorporated herein by reference in their entireties.
The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an image processing chip selected from the EYEQ family of image processing chips available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.
The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ladar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels. The imaging array may capture color image data, such as via spectral filtering at the array, such as via an RGB (red, green and blue) filter or via a red/red complement filter or such as via an RCC (red, clear, clear) filter or the like. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.
For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 9,233,641; 9,146,898; 9,174,574; 9,090,234; 9,077,098; 8,818,042; 8,886,401; 9,077,962; 9,068,390; 9,140,789; 9,092,986; 9,205,776; 8,917,169; 8,694,224; 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, and/or U.S. Publication Nos. US-2014-0340510; US-2014-0313339; US-2014-0347486; US-2014-0320658; US-2014-0336876; US-2014-0307095; US-2014-0327774; US-2014-0327772; US-2014-0320636; US-2014-0293057; US-2014-0309884; US-2014-0226012; US-2014-0293042; US-2014-0218535; US-2014-0218535; US-2014-0247354; US-2014-0247355; US-2014-0247352; US-2014-0232869; US-2014-0211009; US-2014-0160276; US-2014-0168437; US-2014-0168415; US-2014-0160291; US-2014-0152825; US-2014-0139676; US-2014-0138140; US-2014-0104426; US-2014-0098229; US-2014-0085472; US-2014-0067206; US-2014-0049646; US-2014-0052340; US-2014-0025240; US-2014-0028852; US-2014-005907; US-2013-0314503; US-2013-0298866; US-2013-0222593; US-2013-0300869; US-2013-0278769; US-2013-0258077; US-2013-0258077; US-2013-0242099; US-2013-0215271; US-2013-0141578 and/or US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in International Publication Nos. WO 2010/144900; WO 2013/043661 and/or WO 2013/081985, and/or U.S. Pat. No. 9,126,525, which are hereby incorporated herein by reference in their entireties.
The system may also communicate with other systems, such as via a vehicle-to-vehicle communication system or a vehicle-to-infrastructure communication system or the like. Such car2car or vehicle to vehicle (V2V) and vehicle-to-infrastructure (car2X or V2X or V2I or a 4G or 5G broadband cellular network) technology provides for communication between vehicles and/or infrastructure based on information provided by one or more vehicles and/or information provided by a remote server or the like. Such vehicle communication systems may utilize aspects of the systems described in U.S. Pat. Nos. 6,690,268; 6,693,517 and/or 7,580,795, and/or U.S. Publication Nos. US-2014-0375476; US-2014-0218529; US-2013-0222592; US-2012-0218412; US-2012-0062743; US-2015-0251599; US-2015-0158499; US-2015-0124096; US-2015-0352953; US-2016-0036917 and/or US-2016-0210853, which are hereby incorporated herein by reference in their entireties.
Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device, such as by utilizing aspects of the video display systems described in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187; 6,690,268; 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,501; 6,222,460; 6,513,252 and/or 6,642,851, and/or U.S. Publication Nos. US-2014-0022390; US-2012-0162427; US-2006-0050018 and/or US-2006-0061008, which are all hereby incorporated herein by reference in their entireties.
Changes and modifications in the specifically described embodiments can be carried out without departing from the principles of the invention, which is intended to be limited only by the scope of the appended claims, as interpreted according to the principles of patent law including the doctrine of equivalents.
The present application claims the filing benefits of U.S. provisional application Ser. No. 62/455,112, filed Feb. 6, 2017, which is hereby incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5539397 | Asanuma et al. | Jul 1996 | A |
5541590 | Nishio | Jul 1996 | A |
5550677 | Schofield et al. | Aug 1996 | A |
5555555 | Sato et al. | Sep 1996 | A |
5670935 | Schofield et al. | Sep 1997 | A |
5737226 | Olson et al. | Apr 1998 | A |
5760962 | Schofield et al. | Jun 1998 | A |
5765118 | Fukatani | Jun 1998 | A |
5781437 | Wiemer et al. | Jul 1998 | A |
5786772 | Schofield et al. | Jul 1998 | A |
5793420 | Schmidt | Aug 1998 | A |
5796094 | Schofield et al. | Aug 1998 | A |
5877897 | Schofield et al. | Mar 1999 | A |
5878370 | Olson | Mar 1999 | A |
5883739 | Ashihara et al. | Mar 1999 | A |
5884212 | Lion | Mar 1999 | A |
5890021 | Onoda | Mar 1999 | A |
5896085 | Mori et al. | Apr 1999 | A |
5899956 | Chan | May 1999 | A |
5915800 | Hiwatashi et al. | Jun 1999 | A |
5923027 | Stam et al. | Jul 1999 | A |
5924212 | Domanski | Jul 1999 | A |
5929786 | Schofield et al. | Jul 1999 | A |
5949331 | Schofield et al. | Sep 1999 | A |
5959555 | Furuta | Sep 1999 | A |
6201642 | Bos | Mar 2001 | B1 |
6223114 | Boros et al. | Apr 2001 | B1 |
6266082 | Yonezawa et al. | Jul 2001 | B1 |
6266442 | Laumeyer et al. | Jul 2001 | B1 |
6285393 | Shimoura et al. | Sep 2001 | B1 |
6302545 | Schofield et al. | Oct 2001 | B1 |
6370329 | Teuchert | Apr 2002 | B1 |
6392315 | Jones et al. | May 2002 | B1 |
6396397 | Bos et al. | May 2002 | B1 |
6477464 | McCarthy et al. | Nov 2002 | B2 |
6498620 | Schofield et al. | Dec 2002 | B2 |
6523964 | Schofield et al. | Feb 2003 | B2 |
6553130 | Lemelson et al. | Apr 2003 | B1 |
6611202 | Schofield et al. | Aug 2003 | B2 |
6636258 | Strumolo | Oct 2003 | B2 |
6690268 | Schofield et al. | Feb 2004 | B2 |
6704621 | Stein et al. | Mar 2004 | B1 |
6711474 | Treyz et al. | Mar 2004 | B1 |
6735506 | Breed et al. | May 2004 | B2 |
6744353 | Sjonell | Jun 2004 | B2 |
6795221 | Urey | Sep 2004 | B1 |
6802617 | Schofield et al. | Oct 2004 | B2 |
6806452 | Bos et al. | Oct 2004 | B2 |
6822563 | Bos et al. | Nov 2004 | B2 |
6891563 | Schofield et al. | May 2005 | B2 |
6946978 | Schofield | Sep 2005 | B2 |
7005974 | McMahon et al. | Feb 2006 | B2 |
7038577 | Pawlicki et al. | May 2006 | B2 |
7058206 | Janssen et al. | Jun 2006 | B1 |
7062300 | Kim | Jun 2006 | B1 |
7065432 | Moisel et al. | Jun 2006 | B2 |
7075427 | Pace | Jul 2006 | B1 |
7079017 | Lang et al. | Jul 2006 | B2 |
7136753 | Samukawa et al. | Nov 2006 | B2 |
7145519 | Takahashi et al. | Dec 2006 | B2 |
7202776 | Breed | Apr 2007 | B2 |
7230640 | Regensburger et al. | Jun 2007 | B2 |
7248283 | Takagi et al. | Jul 2007 | B2 |
7295229 | Kumata et al. | Nov 2007 | B2 |
7301466 | Asai | Nov 2007 | B2 |
7490007 | Taylor et al. | Feb 2009 | B2 |
7526103 | Schofield et al. | Apr 2009 | B2 |
7592928 | Chinomi et al. | Sep 2009 | B2 |
7681960 | Wanke et al. | Mar 2010 | B2 |
7720580 | Higgins-Luthman | May 2010 | B2 |
7724962 | Zhu et al. | May 2010 | B2 |
7855755 | Weller et al. | Dec 2010 | B2 |
7859565 | Schofield et al. | Dec 2010 | B2 |
7881496 | Camilleri et al. | Feb 2011 | B2 |
7952490 | Fechner et al. | May 2011 | B2 |
7972045 | Schofield | Jul 2011 | B2 |
8013780 | Lynam | Sep 2011 | B2 |
8027029 | Lu et al. | Sep 2011 | B2 |
8376595 | Higgins-Luthman | Feb 2013 | B2 |
8849495 | Chundrlik, Jr. et al. | Sep 2014 | B2 |
9187028 | Higgins-Luthman | Nov 2015 | B2 |
9195914 | Fairfield | Nov 2015 | B2 |
9280560 | Dube et al. | Mar 2016 | B1 |
9428192 | Schofield et al. | Aug 2016 | B2 |
9460355 | Stenneth | Oct 2016 | B2 |
9489586 | Chung | Nov 2016 | B2 |
9508014 | Lu et al. | Nov 2016 | B2 |
9626865 | Yokochi | Apr 2017 | B2 |
9697430 | Kristensen | Jul 2017 | B2 |
10046764 | Masuda | Aug 2018 | B2 |
10089870 | Ro | Oct 2018 | B2 |
10127466 | Stenneth | Nov 2018 | B2 |
10377309 | Lee | Aug 2019 | B2 |
10423843 | Biemer et al. | Sep 2019 | B2 |
10475338 | Noel | Nov 2019 | B1 |
20020015153 | Downs | Feb 2002 | A1 |
20020113873 | Williams | Aug 2002 | A1 |
20030108252 | Carrig | Jun 2003 | A1 |
20030137586 | Lewellen | Jul 2003 | A1 |
20030202683 | Ma et al. | Oct 2003 | A1 |
20030222982 | Hamdan et al. | Dec 2003 | A1 |
20040010352 | Stromme | Jan 2004 | A1 |
20040114381 | Salmeen et al. | Jun 2004 | A1 |
20060103727 | Tseng | May 2006 | A1 |
20060164221 | Jensen | Jul 2006 | A1 |
20060250501 | Wildmann et al. | Nov 2006 | A1 |
20060290479 | Akatsuka et al. | Dec 2006 | A1 |
20070104476 | Yasutomi et al. | May 2007 | A1 |
20080231710 | Asari et al. | Sep 2008 | A1 |
20090093938 | Isaji et al. | Apr 2009 | A1 |
20090113509 | Tseng et al. | Apr 2009 | A1 |
20090144311 | Stratis et al. | Jun 2009 | A1 |
20090177347 | Breuer et al. | Jul 2009 | A1 |
20090243824 | Peterson et al. | Oct 2009 | A1 |
20090244361 | Gebauer et al. | Oct 2009 | A1 |
20090265069 | Desbrunes | Oct 2009 | A1 |
20100067805 | Klefenz | Mar 2010 | A1 |
20100228437 | Hanzawa et al. | Sep 2010 | A1 |
20100283855 | Becker | Nov 2010 | A1 |
20120044066 | Mauderer et al. | Feb 2012 | A1 |
20120218412 | Dellantoni et al. | Aug 2012 | A1 |
20120262340 | Hassan et al. | Oct 2012 | A1 |
20120310968 | Tseng | Dec 2012 | A1 |
20130116859 | Ihlenburg et al. | May 2013 | A1 |
20130124052 | Hahne | May 2013 | A1 |
20130129150 | Saito | May 2013 | A1 |
20130131918 | Hahne | May 2013 | A1 |
20130191003 | Hahne et al. | Jul 2013 | A1 |
20130278769 | Nix et al. | Oct 2013 | A1 |
20140003709 | Ranganathan | Jan 2014 | A1 |
20140067206 | Pflug | Mar 2014 | A1 |
20140156157 | Johnson et al. | Jun 2014 | A1 |
20140227780 | Salomonsson et al. | Aug 2014 | A1 |
20140236477 | Chen et al. | Aug 2014 | A1 |
20140313339 | Diessner | Oct 2014 | A1 |
20140327772 | Sahba | Nov 2014 | A1 |
20140340510 | Ihlenburg et al. | Nov 2014 | A1 |
20140379233 | Chundrlik, Jr. et al. | Dec 2014 | A1 |
20150124096 | Koravadi | May 2015 | A1 |
20150248771 | Kim | Sep 2015 | A1 |
20150302747 | Ro | Oct 2015 | A1 |
20160034769 | Singh | Feb 2016 | A1 |
20160092755 | Fairfield | Mar 2016 | A1 |
20160104049 | Stenneth | Apr 2016 | A1 |
20160117562 | Chung | Apr 2016 | A1 |
20160210853 | Koravadi | Jul 2016 | A1 |
20160362050 | Lee | Dec 2016 | A1 |
20160379068 | Stenneth | Dec 2016 | A1 |
20170017849 | Kristensen | Jan 2017 | A1 |
20170148320 | Ro | May 2017 | A1 |
20180120857 | Kappauf | May 2018 | A1 |
20180225530 | Kunze et al. | Aug 2018 | A1 |
20180239972 | Biemer et al. | Aug 2018 | A1 |
Prior Publication Data
Number | Date | Country |
---|---|---|
20180225530 A1 | Aug 2018 | US |
Related U.S. Application Data
Number | Date | Country |
---|---|---|
62455112 | Feb 2017 | US |