The present invention relates generally to a vehicle vision system for a vehicle and, more particularly, to a vehicle vision system that utilizes one or more cameras at a vehicle.
Use of imaging sensors in vehicle imaging systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 5,949,331; 5,670,935 and/or 5,550,677, which are hereby incorporated herein by reference in their entireties. Image processing of captured image data may be used to detect objects, such as traffic lights, forward of the vehicle and in the field of view of one or more of the imaging sensors.
Known vehicle traffic light indication or monitoring systems may display an augmented image of a traffic light, shaped as a traffic light head having the typical three lights (red, yellow and green), on a head unit display or head-up display or a cluster display. An example of such a display is shown in the drawings.
The present invention provides a driver assistance system or traffic light alert system for a vehicle that utilizes one or more cameras (preferably one or more CMOS cameras) to capture image data representative of images exterior of the vehicle, and provides an enhanced display of a traffic light alert that improves the driver's cognitive awareness of the status of a traffic light ahead of the vehicle. The traffic light alert system displays an iconistic representation of a traffic signal and a background region surrounding or at least partially around the displayed iconistic representation. Responsive to a determination of the status of a traffic light ahead of the vehicle, the system adjusts the display to highlight or illuminate the appropriate one of the lights (upper, middle, lower) of the iconistic traffic light representation and to display the background in the corresponding color. For example, if the system determines that the detected traffic light's upper red light is activated, the upper light of the iconistic traffic light representation is illuminated and the background is displayed as red.
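By way of a non-limiting illustration only (not part of the claimed subject matter), the status-to-display mapping described above may be sketched as follows; every name and color value below is a hypothetical placeholder, not an actual implementation of the system:

```python
# Minimal sketch of the status-to-display mapping described above.
# All names and color values are hypothetical placeholders.
from enum import Enum

class TrafficLightStatus(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"
    UNKNOWN = "unknown"

# Which of the three icon lights (upper, middle, lower) to highlight,
# and the background color to render around the icon.
_DISPLAY_MAP = {
    TrafficLightStatus.RED:    ("upper",  (255, 0, 0)),
    TrafficLightStatus.YELLOW: ("middle", (255, 191, 0)),
    TrafficLightStatus.GREEN:  ("lower",  (0, 200, 0)),
}

def update_icon_display(status: TrafficLightStatus) -> dict:
    """Return which icon light to highlight and the background color to render."""
    if status not in _DISPLAY_MAP:
        # No reliable determination: show the icon unlit with a neutral background.
        return {"highlight": None, "background": (64, 64, 64)}
    light, background = _DISPLAY_MAP[status]
    return {"highlight": light, "background": background}
```

For example, `update_icon_display(TrafficLightStatus.RED)` would highlight the upper icon light and set a red background, matching the red-light example above.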
These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
A vehicle traffic light alert system and/or driver assist system and/or object detection system and/or alert system operates to capture images exterior of the vehicle and may process the captured image data to display images and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The traffic light alert system includes an image processor or image processing system that is operable to receive image data from one or more cameras and provide an output to a display device for displaying images representative of the captured image data. The traffic light alert system provides a display that represents the traffic light. The system may provide a rearview display or a top down or bird's eye or surround view display or the like.
Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes a traffic light alert system that includes a forward viewing camera module 12 that is disposed at and views through the windshield 14 of the vehicle and captures image data of the scene exterior and forward of the vehicle.
The camera system or camera module of the present invention may utilize aspects of the systems and/or modules described in International Publication Nos. WO 2013/123161 and/or WO 2013/019795, and/or U.S. Pat. Nos. 8,256,821; 7,480,149; 7,289,037; 7,004,593; 6,824,281; 6,690,268; 6,445,287; 6,428,172; 6,420,975; 6,326,613; 6,278,377; 6,243,003; 6,250,148; 6,172,613 and/or 6,087,953, and/or U.S. Publication Nos. US-2015-0327398; US-2014-0226012 and/or US-2009-0295181, which are all hereby incorporated herein by reference in their entireties. Optionally, the system may include a plurality of exterior facing imaging sensors or cameras, such as a rearward facing imaging sensor or camera, a forwardly facing camera at the front of the vehicle, and sidewardly/rearwardly facing cameras at respective sides of the vehicle, which capture image data representative of the scene exterior of the vehicle.
The control includes an image processor that processes captured image data to detect and identify objects forward of the vehicle and in the field of view of the camera. Responsive to such image processing, the control may detect and identify traffic signals and may determine the signal status, such as which of the lights, typically three (red, yellow, green), sometimes just two (red, green) or just one (red only), is activated as the vehicle approaches the detected traffic light. Responsive to such determinations, the system is operable to display an iconistic representation of a traffic light, with the appropriate light highlighted depending on which light of the traffic light is activated.
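A greatly simplified sketch of such a status determination is shown below, classifying the activated lamp by coarse color counting over the image region of a detected traffic light head. The function name and all thresholds are illustrative assumptions; production image processing is far more robust:

```python
# Simplified, illustrative sketch of classifying which lamp of a detected
# traffic light is lit; thresholds are arbitrary assumptions.
import numpy as np

def classify_lamp(roi: np.ndarray) -> str:
    """roi: HxWx3 uint8 RGB crop of the detected traffic light head."""
    r = roi[..., 0].astype(int)
    g = roi[..., 1].astype(int)
    b = roi[..., 2].astype(int)
    bright = (r + g + b) > 300                     # consider only bright (lit) pixels
    red    = bright & (r > g + 60) & (r > b + 60)
    green  = bright & (g > r + 40) & (g > b + 40)
    yellow = bright & (r > b + 60) & (g > b + 60) & (abs(r - g) < 60)
    counts = {"red": int(red.sum()), "yellow": int(yellow.sum()), "green": int(green.sum())}
    best = max(counts, key=counts.get)
    return best if counts[best] > 25 else "unknown"  # require a minimum lit-blob size
```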
To improve the driver's comprehension of the visual traffic light display, the present invention provides an enhanced display 16 that more clearly indicates the status of the traffic light to the driver of the vehicle. An example of such an enhanced display is shown in the drawings.
The system may adjust the display and/or iconistic traffic light representation depending on various driving conditions or situations encountered by the vehicle. For example, depending on the region where the vehicle is driven, the iconistic traffic light representation may comprise three icon lights vertically oriented (such as shown in the drawings) or otherwise arranged to match the orientation of traffic light heads typical of that region, as illustrated in the sketch below.
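As a hypothetical illustration of such region-dependent adjustment, the icon layout may be selected from a simple table keyed by region or traffic light type; all keys and entries below are assumptions for illustration only:

```python
# Hypothetical region/type-dependent icon layouts; the same status logic
# drives whichever layout is selected. Entries are illustrative only.
ICON_LAYOUTS = {
    "three_vertical":   {"orientation": "vertical",   "lights": ["red", "yellow", "green"]},
    "three_horizontal": {"orientation": "horizontal", "lights": ["red", "yellow", "green"]},
    "two_light":        {"orientation": "vertical",   "lights": ["red", "green"]},
    "single_red":       {"orientation": "vertical",   "lights": ["red"]},
}

def select_layout(region_key: str) -> dict:
    # Fall back to the common three-light vertical head if the region is unknown.
    return ICON_LAYOUTS.get(region_key, ICON_LAYOUTS["three_vertical"])
```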
The system may detect and identify the traffic light via image processing (via the processing system or processor) of image data captured by a forward facing camera of the vehicle. Optionally, the status of the traffic light may be communicated to the vehicle control via a car-to-infrastructure or car2x communication system, where a wireless communication indicative of the traffic light status is received by a receiver of the vehicle and used to adjust the display accordingly.
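A sketch of the communication path is shown below: a received infrastructure message carrying a traffic light status drives the same display state as camera-based detection. The message layout is an assumption for illustration only (deployed systems use standardized message sets such as SPaT):

```python
# Sketch of deriving display state from a received (hypothetical) car2x
# traffic light status message; the payload format is assumed for illustration.
import json

# Illustrative background colors keyed by status (same scheme as the icon display).
_BG = {"red": (255, 0, 0), "yellow": (255, 191, 0), "green": (0, 200, 0)}

def on_v2x_message(payload: bytes) -> dict:
    """Parse a hypothetical traffic light status message and derive display state."""
    msg = json.loads(payload.decode("utf-8"))
    # Example payload: {"signal_id": 17, "status": "red", "lane": 2, "seconds_to_change": 8}
    status = msg.get("status", "unknown")
    return {
        "highlight": status if status in _BG else None,
        "background": _BG.get(status, (64, 64, 64)),
        "countdown": msg.get("seconds_to_change"),  # optional time-to-change, if sent
    }
```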
The system thus may communicate with other systems, such as via a vehicle-to-vehicle communication system or a vehicle-to-infrastructure communication system or the like. Such car2car or vehicle to vehicle (V2V) and vehicle-to-infrastructure (car2X or V2X or V2I or 4G or 5G) technology provides for communication between vehicles and/or infrastructure based on information provided by one or more vehicles and/or information provided by a remote server or the like. Such vehicle communication systems may utilize aspects of the systems described in U.S. Pat. Nos. 6,690,268; 6,693,517 and/or 7,580,795, and/or U.S. Publication Nos. US-2014-0375476; US-2014-0218529; US-2013-0222592; US-2012-0218412; US-2012-0062743; US-2015-0251599; US-2015-0158499; US-2015-0124096; US-2015-0352953; US-2016-0036917 and/or US-2016-0210853, which are hereby incorporated herein by reference in their entireties.
In U.S. provisional application Ser. No. 62/266,734, filed Dec. 14, 2015, which is hereby incorporated herein by reference in its entirety, optical data transmission between a street light and a vehicle (V2X) using a time-modulated code, a pattern-modulated code or a combination of time- and pattern-modulated codes, optionally using visible and/or infrared wavelengths or spectral bands, was suggested for providing monodirectional or bidirectional optical data transmission. In U.S. provisional application Ser. No. 62/330,558, filed May 2, 2016, which is hereby incorporated herein by reference in its entirety, vehicle positioning by visible light communication was described.
Optionally, the vehicle system and the traffic light may be equipped with visible light communication (VLC) by machine vision means, such as by utilizing aspects of the systems and/or modules described in the above incorporated U.S. provisional applications, Ser. No. 62/330,558 and/or Ser. No. 62/266,734, with the traffic lights having LEDs for encoding data into the emitted light, and with processing means, such as a processor and optionally transceivers, for encoding the traffic light's status (green, red, red-blinking, red-yellow or yellow, or yellow-blinking) and for encoding the lane (assuming a map with each traffic light's lane number is known), area or GPS position to which the traffic light signal corresponds. Optionally, a time or countdown until the light status changes to another defined light status may be transmitted or communicated as well. For example, in a case where the vehicle's light receiving device is a 30 Hz forward vision RGB camera and the traffic light encodes primitively, just one bit at a time by being on or off (using a simple single-bit coding scheme such as Miller, Manchester, DBP or NRZ), the light data channel may transmit up to 15 bits per second. When the encoding is done in a data-compressed manner (such as a lossless compression of the Lempel-Ziv (LZ) kind, which is based on finite state machines, or a kind using prediction by partial matching) and/or the traffic light encodes more than one bit at a time, such as by using phase shift coding (phase modulation) and/or amplitude coding (amplitude modulation) and/or frequency (color) coding (frequency modulation) or combinations of all, such as quadrature amplitude modulation, more data may be transmitted at a time. The data may be sent repeatedly without a handshake. The vehicle system may synchronize to the VLC data stream and continue to read it until the traffic light is out of sight of the receiving device. In case the line of sight is interrupted, the vehicle system may interpolate (by any logic or by using parity bits) or correct missing data as far as possible. For example, the vehicle system may continue the countdown until a traffic light changes from green to yellow by using its own clock, even when the traffic light's VLC is interrupted for any reason.
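A minimal sketch of the primitive single-bit case is shown below, decoding a Manchester-coded on/off stream sampled once per camera frame (30 Hz, two samples per bit, hence the 15 bits per second discussed above), together with the clock-based countdown continuation on link interruption. Synchronization, parity handling and the camera interface are omitted, and the edge convention used is only one of several in use:

```python
# Sketch of Manchester decoding at one sample per camera frame, plus the
# countdown-hold behavior described above. Function names are hypothetical.
from typing import List, Optional

def manchester_decode(samples: List[int]) -> List[int]:
    """samples: per-frame light states (0 = off, 1 = on), two samples per bit."""
    bits = []
    for i in range(0, len(samples) - 1, 2):
        pair = (samples[i], samples[i + 1])
        if pair == (0, 1):
            bits.append(1)  # low-to-high transition -> logical 1 (one convention)
        elif pair == (1, 0):
            bits.append(0)  # high-to-low transition -> logical 0
        # (0, 0) or (1, 1) is invalid mid-bit: lost sync or occlusion; a real
        # decoder would re-synchronize or flag the bit for parity-based repair.
    return bits

def hold_countdown(last_known: Optional[float], dt: float) -> Optional[float]:
    """Continue a received time-to-change countdown from the vehicle's own clock
    while the optical link is interrupted, as described above."""
    if last_known is None:
        return None
    return max(0.0, last_known - dt)
```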
The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in International Publication Nos. WO 2013/081984 and/or WO 2013/081985, which are hereby incorporated herein by reference in their entireties.
The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an image processing chip selected from the EyeQ family of image processing chips available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.
The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ladar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels. The imaging array may capture color image data, such as via spectral filtering at the array, such as via an RGB (red, green and blue) filter or via a red/red complement filter or such as via an RCC (red, clear, clear) filter or the like. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.
For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 8,694,224; 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in International Publication Nos. WO/2010/144900; WO 2013/043661 and/or WO 2013/081985, and/or U.S. Publication No. US-2012-0062743, which are hereby incorporated herein by reference in their entireties.
The imaging device and control and image processor and any associated illumination source, if applicable, may comprise any suitable components, and may utilize aspects of the cameras (such as various imaging sensors or imaging array sensors or cameras or the like, such as a CMOS imaging array sensor, a CCD sensor or other sensors or the like) and vision systems described in U.S. Pat. Nos. 5,760,962; 5,715,093; 6,922,292; 6,757,109; 6,717,610; 6,590,719; 6,201,642; 5,796,094; 6,559,435; 6,831,261; 6,822,563; 6,946,978; 7,720,580; 8,542,451; 7,965,336; 7,480,149; 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094; 6,396,397; 6,806,452; 6,690,268; 7,005,974; 7,937,667; 7,123,168; 7,004,606; 6,946,978; 7,038,577; 6,353,392; 6,320,176; 6,313,454 and/or 6,824,281, and/or International Publication Nos. WO 2009/036176; WO 2009/046268; WO 2010/099416; WO 2011/028686 and/or WO 2013/016409, and/or U.S. Pat. Publication Nos. US 2010-0020170 and/or US-2009-0244361, which are all hereby incorporated herein by reference in their entireties.
Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device, such as by utilizing aspects of the video display systems described in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187; 6,690,268; 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,677,851; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,508; 6,222,460; 6,513,252 and/or 6,642,851, and/or U.S. Publication Nos. US-2012-0162427; US-2006-0050018 and/or US-2006-0061008, which are all hereby incorporated herein by reference in their entireties.
Optionally, the vision system (utilizing the forward facing camera and a rearward facing camera and other cameras disposed at the vehicle with exterior fields of view) may be part of or may provide a display of a top-down view or bird's-eye view system of the vehicle or a surround view at the vehicle, such as by utilizing aspects of the vision systems described in International Publication Nos. WO 2010/099416; WO 2011/028686; WO 2012/075250; WO 2013/019795; WO 2012/075250; WO 2012/145822; WO 2013/081985; WO 2013/086249 and/or WO 2013/109869, and/or U.S. Publication No. US-2012-0162427, which are hereby incorporated herein by reference in their entireties.
Changes and modifications in the specifically described embodiments can be carried out without departing from the principles of the invention, which is intended to be limited only by the scope of the appended claims, as interpreted according to the principles of patent law including the doctrine of equivalents.
The present application claims the filing benefits of U.S. provisional application Ser. No. 62/249,468 filed Nov. 2, 2015, which is hereby incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5550677 | Schofield et al. | Aug 1996 | A |
5635920 | Pogue | Jun 1997 | A |
5670935 | Schofield et al. | Sep 1997 | A |
5949331 | Schofield et al. | Sep 1999 | A |
6087953 | DeLine et al. | Jul 2000 | A |
6172613 | DeLine et al. | Jan 2001 | B1 |
6243003 | DeLine et al. | Jun 2001 | B1 |
6250148 | Lynam | Jun 2001 | B1 |
6278377 | DeLine et al. | Aug 2001 | B1 |
6326613 | Heslin et al. | Dec 2001 | B1 |
6420975 | DeLine et al. | Jul 2002 | B1 |
6428172 | Hutzel et al. | Aug 2002 | B1 |
6445287 | Schofield et al. | Sep 2002 | B1 |
6690268 | Schofield et al. | Feb 2004 | B2 |
6693517 | McCarthy et al. | Feb 2004 | B2 |
6824281 | Schofield et al. | Nov 2004 | B2 |
7004593 | Weller et al. | Feb 2006 | B2 |
7289037 | Uken et al. | Oct 2007 | B2 |
7480149 | DeWard et al. | Jan 2009 | B2 |
7580795 | McCarthy et al. | Aug 2009 | B2 |
8256821 | Lawlor et al. | Sep 2012 | B2 |
9041806 | Baur et al. | May 2015 | B2 |
9264672 | Lynam | Feb 2016 | B2 |
9596387 | Achenbach et al. | Mar 2017 | B2 |
20090295181 | Lawlor et al. | Dec 2009 | A1 |
20110093178 | Yamada | Apr 2011 | A1 |
20120062743 | Lynam et al. | Mar 2012 | A1 |
20120218412 | Dellantoni et al. | Aug 2012 | A1 |
20130002451 | Chen | Jan 2013 | A1 |
20130063281 | Malaska | Mar 2013 | A1 |
20130222592 | Gieseke | Aug 2013 | A1 |
20140218529 | Mahmoud | Aug 2014 | A1 |
20140226012 | Achenbach | Aug 2014 | A1 |
20140375476 | Johnson et al. | Dec 2014 | A1 |
20150015713 | Wang et al. | Jan 2015 | A1 |
20150124096 | Koravadi | May 2015 | A1 |
20150158499 | Koravadi | Jun 2015 | A1 |
20150251599 | Koravadi | Sep 2015 | A1 |
20150262483 | Sugawara | Sep 2015 | A1 |
20150327398 | Achenbach et al. | Nov 2015 | A1 |
20150352953 | Koravadi | Dec 2015 | A1 |
20150379872 | Al-Qaneei | Dec 2015 | A1 |
20160036917 | Koravadi et al. | Feb 2016 | A1 |
20160148508 | Morimoto | May 2016 | A1 |
20160210853 | Koravadi | Jul 2016 | A1 |
20170124868 | Bhat | May 2017 | A1 |
Number | Date | Country |
---|---|---|
102014003781 | Sep 2015 | DE |
Number | Date | Country
---|---|---
20170124870 A1 | May 2017 | US |
Number | Date | Country
---|---|---
62249468 | Nov 2015 | US |