The present invention relates generally to a vehicle vision system for a vehicle and, more particularly, to a vehicle vision system that utilizes one or more cameras at a vehicle.
Use of imaging sensors in vehicle imaging systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 5,949,331; 5,670,935 and/or 5,550,677, which are hereby incorporated herein by reference in their entireties.
A driving assistance system or vision system or imaging system for a vehicle utilizes one or more cameras (preferably one or more CMOS cameras) to capture image data representative of images exterior of the vehicle, and provides an electronic control unit (ECU) with electronic circuitry and associated software. The electronic circuitry of the ECU includes an image processor for processing image data captured by the camera. The system includes a video display disposed in the vehicle and viewable by a driver of the vehicle. The video display is operable to display video images derived from image data captured by the camera and includes a plurality of display portions. Each display portion of the plurality of display portions displays a respective portion of the video images. The vehicular vision system, responsive to processing by the image processor of image data captured by the camera, displays video images at the plurality of display portions of the video display. Graphic overlay data stored in memory represents a plurality of graphic overlay portions, and each graphic overlay portion of the plurality of graphic overlay portions is associated with a different respective display portion of the plurality of display portions. Each graphic overlay portion is associated with a respective driving condition of a plurality of driving conditions. 
Responsive to an occurrence of one of the plurality of driving conditions, the vehicular vision system (i) retrieves the graphic overlay data from memory, (ii) displays each respective graphic overlay portion of the plurality of graphic overlay portions using the associated respective display portion of the plurality of display portions and (iii) adjusts a transparency of at least one of the displayed graphic overlay portions such that one of the plurality of displayed graphic overlay portions associated with the one of the plurality of driving conditions is viewable by the driver of the vehicle when viewing the video display and others of the displayed graphic overlay portions not associated with the one of the plurality of driving conditions are not viewable by the driver of the vehicle when viewing the video display.
These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
A vehicle vision system and/or driver or driving assist system and/or object detection system and/or alert system operates to capture images exterior of the vehicle and may process the captured image data to display images and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The vision system includes an image processor or image processing system that is operable to receive image data from one or more cameras and provide an output to a display device for displaying images representative of the captured image data. Optionally, the vision system may provide a display, such as a rearview display or a top down or bird's eye or surround view display or the like.
Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes an imaging system or vision system 12 that includes at least one exterior viewing imaging sensor or camera, such as a rearward viewing imaging sensor or camera 14a (and the system may optionally include multiple exterior viewing imaging sensors or cameras, such as a forward viewing camera 14b at the front (or at the windshield) of the vehicle, and a sideward/rearward viewing camera 14c, 14d at respective sides of the vehicle), which captures images exterior of the vehicle, with the camera having a lens for focusing images at or onto an imaging array or imaging plane or imager of the camera.
Many driver assist systems or vehicular vision systems or vehicular display systems may, based on various configurations, user preferences, environmental conditions, and/or vehicle inputs, overlay graphics on images displayed on a display 16 for the driver (or other occupant) to view. These overlays can be complex static or dynamic graphics that are drawn or superimposed over, for example, image data captured by a camera or other imaging sensor. For example, a rear backup system may display images captured by a rear backup camera while the vehicle is reversing, and the rear backup system may overlay graphics that indicate a current or predicted path of the vehicle based on the current steering angle or graphics that highlight one or more obstacles to avoid. Other examples include, but are not limited to, overlaying geographic information, traffic or traffic lane information, and weather condition information. These graphics can be made up of several different elements or portions.
Referring now to
While
Typically, each combination of potential graphic overlays 26 is stored in external memory. For example, when the display 16 is divided into nine portions 24 and each portion 24 is associated with one potential graphic overlay 26, the system may store each of the 511 different possible combinations (i.e., the 2^9 - 1 non-empty combinations of the nine overlays) in memory (e.g., flash memory, an electrically erasable programmable read-only memory (EEPROM), etc.) disposed at the vehicle. When a specific combination of graphic overlays 26 is to be displayed on the display 16, the ECU 18 retrieves the appropriate combination from the memory and then displays the retrieved combination on the display 16. However, due to the potentially large number of combinations, a large (and therefore more expensive) memory is often required.
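The growth of stored combinations described above can be sketched as follows (an illustrative calculation only, not part of the described system): with n display portions whose overlays can each be shown or hidden, the number of distinct non-empty combinations that would otherwise need to be pre-rendered and stored grows as 2^n - 1.

```python
def combination_count(num_portions: int) -> int:
    """Number of non-empty show/hide overlay combinations across the portions.

    Each portion's overlay is either shown or hidden (2 states), giving
    2**num_portions total combinations; the all-hidden case is excluded.
    """
    return 2 ** num_portions - 1


# Nine portions, as in the example above, yield 511 stored combinations.
print(combination_count(9))
```

The exponential growth is what motivates storing a single combined overlay plus per-portion transparency metadata instead.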
Implementations herein include a vehicular vision system or display system that controls a transparency of each graphic overlay 26 to greatly reduce the number of combinations that must be stored in memory. That is, instead of storing each combination of graphic overlays 26 separately (e.g., 511 different combinations in this example), the system may instead store a single combination with all of the graphic overlays 26 (e.g., all nine graphic overlays 26 in this example). The system may then programmatically control the transparency (e.g., using alpha compositing or alpha blending) of each graphic overlay 26 of each portion 24 to ensure that only the desired graphic overlays 26 are visible based on the presence or selection of a particular driving condition. For example, when the driving condition includes a navigation condition or navigation mode (e.g., the user has requested assistance in navigating to a particular destination), the graphic overlays 26 may (as part of a single overlay) include indications or instructions for navigating the vehicle, such as a "turn left" indication or instruction and a "turn right" indication or instruction. As the vehicle approaches an intersection where the driver should turn the vehicle to the left, the system may ensure that the graphic overlay 26 that indicates the vehicle should turn right is transparent (i.e., not viewable) and the graphic overlay 26 that indicates the vehicle should turn left is non-transparent (i.e., viewable). The driving condition may represent any number of scenarios or conditions or situations, such as user requests (e.g., navigation modes, communication modes, etc.), weather conditions (e.g., low temperature indications, slippery condition indications, such as an indication that "roads may be icy" when the temperature is at or below around 37 degrees F., etc.), a gear of the vehicle (e.g., forward gear, reverse gear), operational status of various systems or sensors, etc.
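The alpha compositing referred to above can be illustrated with a minimal per-pixel sketch (the function name and pixel representation are assumptions for illustration, not taken from the specification): each overlay pixel is blended over the underlying video pixel, so an alpha of 0.0 renders the overlay fully transparent (hidden, video shows through) and an alpha of 1.0 renders it fully opaque (viewable).

```python
def alpha_blend(overlay_rgb, video_rgb, alpha):
    """Blend one overlay pixel over one video pixel, channel by channel.

    alpha = 0.0 -> overlay invisible (underlying video unchanged)
    alpha = 1.0 -> overlay fully opaque (video hidden behind overlay)
    """
    return tuple(round(alpha * o + (1.0 - alpha) * v)
                 for o, v in zip(overlay_rgb, video_rgb))


# A red overlay pixel over a black video pixel:
print(alpha_blend((255, 0, 0), (0, 0, 0), 0.0))  # hidden
print(alpha_blend((255, 0, 0), (0, 0, 0), 1.0))  # fully viewable
```

In practice this blend is performed by display or GPU hardware across the whole overlay, but the per-pixel arithmetic is the same "source-over" rule sketched here.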
Optionally, for example, the graphic overlay may comprise a plurality of icons or indicators that are sequentially or dynamically shown at the displayed images by making one (or two or more) of the icons or indicators visible and the rest transparent and then making a next one (or two or more) of the icons visible and the rest (including the initial one or two visible icons or indicators) transparent responsive to a change in a driving condition. For example, the graphic overlay may include a plurality of pairs of curved lines that are used to overlay rear backup camera captured images during a reversing maneuver of the vehicle. Thus, one pair of lines may be visible at a time, with the particular pair that is visible changing responsive to a change in steering angle of the vehicle. Thus, a single graphic overlay includes all of the rear backup indicator lines and the system retrieves that single graphic overlay (such as responsive to the vehicle shifting to a reverse gear or the driver otherwise selecting a reverse propulsion of the vehicle) and then adjusts transparency of the plurality of graphic overlay portions (such as responsive to a change in steering angle of the vehicle during the reversing maneuver) to indicate to the driver the projected path of reverse travel of the vehicle during the reversing maneuver.
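One possible selection rule for the backup-line example above can be sketched as follows (the pair count and steering-angle range are hypothetical values chosen for illustration): the current steering angle is quantized to pick which stored pair of lines is opaque, while every other pair stays fully transparent.

```python
def line_pair_alphas(steering_angle_deg, num_pairs=5, max_angle_deg=40.0):
    """Return an alpha value per stored line pair; exactly one pair is opaque.

    The steering angle is clamped to [-max_angle_deg, max_angle_deg] and
    mapped linearly onto the pair indices 0..num_pairs-1.
    """
    clamped = max(-max_angle_deg, min(max_angle_deg, steering_angle_deg))
    idx = round((clamped + max_angle_deg) / (2 * max_angle_deg) * (num_pairs - 1))
    return [1.0 if i == idx else 0.0 for i in range(num_pairs)]


# Wheel centered: the middle pair of lines is visible, all others hidden.
print(line_pair_alphas(0.0))
```

As the driver turns the wheel during the reversing maneuver, re-evaluating this function and re-applying the alphas swaps which pair of the single stored overlay is viewable, with no additional overlay images retrieved from memory.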
The transparency setting controls how visible the graphic overlay 26 is to viewers of the display. For example, a graphic overlay 26 that is set to be transparent or mostly transparent will not be visible (or will be less visible) and the underlying video image (i.e., the portion of the video image the graphic overlay 26 is displayed over) is visible (or at least mostly visible). When a graphic overlay 26 is set to be opaque or mostly opaque (i.e., less transparent), the graphic overlay 26 will be visible to viewers of the display and the underlying video image will not be visible, or will at least be less visible. The system may adjust the transparencies of the graphic overlays 26 by any amount. For example, the system, when "hiding" or masking or otherwise making a graphic overlay 26 not viewable to occupants of the vehicle, may adjust the transparency of one or more graphic overlays 26 to be greater than 70% transparent (i.e., less than 30% opaque), or greater than 80% transparent, or greater than 90% transparent, etc., and the system, when ensuring one or more graphic overlays 26 are viewable or visible to occupants of the vehicle, may adjust or set the transparency of the graphic overlays 26 to be less than 50% transparent (i.e., greater than 50% opaque), or less than 30%, or less than 10%, such that the driver or occupant of the vehicle can readily view and discern the one or more at least substantially opaque graphic overlays while not viewing and discerning the other, at least substantially transparent, overlays.
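The percentage bands described above can be captured in a small helper (the specific 90% and 10% settings are just one choice within the ranges given; the function name is hypothetical):

```python
# Example settings within the ranges described above:
HIDE_TRANSPARENCY = 90.0   # greater than 70% transparent -> effectively not viewable
SHOW_TRANSPARENCY = 10.0   # less than 50% transparent -> readily viewable


def transparency_for(viewable: bool) -> float:
    """Transparency percentage to apply to an overlay portion."""
    return SHOW_TRANSPARENCY if viewable else HIDE_TRANSPARENCY
```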
For the combination of portions 24 including a visible graphic overlay 26 exemplified in
Thus, instead of storing each combination separately, the system may store a single combination (or more when a portion 24 is associated with more than one graphic overlay 26) and then adjust the single combination as needed by adjusting a transparency or visibility of each graphic overlay 26 of each portion 24. For example, the system stores a first single combination for a first graphic overlay 26 that may extend across any number of the portions 24 and a second single combination for a second graphic overlay 26 that extends across any number of the portions 24. The portions 24 used by the first graphic overlay 26 may be the same as or different from those used by the second graphic overlay 26. Thus, the system may essentially "enable" or "disable" the respective graphic overlay(s) 26 of each portion 24 without the need to store every combination separately. The system may store metadata (such as a table) indicating the parameter values (e.g., the transparency parameter values) for each potential combination, which requires far less storage than storing the actual combinations that include the graphic overlays 26. For example, the system could store a lookup table or the like and reference the lookup table to determine the transparency parameter values for a specific combination.
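The metadata table idea above can be sketched as a lookup keyed by driving condition (the condition names and portion keys below are hypothetical): rather than storing a pre-rendered image per combination, the system stores only per-portion transparency values and looks them up at display time.

```python
# Hypothetical lookup table: driving condition -> per-portion transparency (%).
# 0 = fully opaque (viewable); 100 = fully transparent (hidden).
TRANSPARENCY_LUT = {
    "navigate_left":  {"turn_left": 0,   "turn_right": 100},
    "navigate_right": {"turn_left": 100, "turn_right": 0},
}


def transparencies_for(condition):
    """Look up the per-portion transparency values for a driving condition."""
    return TRANSPARENCY_LUT[condition]


# Approaching a left turn: only the "turn left" overlay portion is viewable.
print(transparencies_for("navigate_left"))
```

Each table entry is a handful of numbers, so even hundreds of conditions cost far less memory than hundreds of pre-rendered overlay images.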
The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an image processing chip selected from the EYEQ family of image processing chips available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.
The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels. The imaging array may capture color image data, such as via spectral filtering at the array, such as via an RGB (red, green and blue) filter or via a red/red complement filter or such as via an RCC (red, clear, clear) filter or the like. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.
For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 10,099,614; 10,071,687; 9,233,641; 9,146,898; 9,174,574; 9,090,234; 9,077,098; 8,818,042; 8,886,401; 9,077,962; 9,068,390; 9,140,789; 9,092,986; 9,205,776; 8,917,169; 8,694,224; 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, and/or U.S. Publication Nos. US-2014-0340510; US-2014-0313339; US-2014-0347486; US-2014-0320658; US-2014-0336876; US-2014-0307095; US-2014-0327774; US-2014-0327772; US-2014-0320636; US-2014-0293057; US-2014-0309884; US-2014-0226012; US-2014-0293042; US-2014-0218535; US-2014-0218535; US-2014-0247354; US-2014-0247355; US-2014-0247352; US-2014-0232869; US-2014-0211009; US-2014-0160276; US-2014-0168437; US-2014-0168415; US-2014-0160291; US-2014-0152825; US-2014-0139676; US-2014-0138140; US-2014-0104426; US-2014-0098229; US-2014-0085472; US-2014-0067206; US-2014-0049646; US-2014-0052340; US-2014-0025240; US-2014-0028852; US-2014-005907; US-2013-0314503; US-2013-0298866; US-2013-0222593; US-2013-0300869; US-2013-0278769; US-2013-0258077; US-2013-0258077; US-2013-0242099; US-2013-0215271; US-2013-0141578 and/or US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in U.S. Pat. Nos. 10,071,687; 9,900,490; 9,126,525 and/or 9,036,026, which are hereby incorporated herein by reference in their entireties.
The vision system includes a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device, such as by utilizing aspects of the video display systems described in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187; 6,690,268; 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,501; 6,222,460; 6,513,252 and/or 6,642,851, and/or U.S. Publication Nos. US-2014-0022390; US-2012-0162427; US-2006-0050018 and/or US-2006-0061008, which are all hereby incorporated herein by reference in their entireties.
Changes and modifications in the specifically described embodiments can be carried out without departing from the principles of the invention, which is intended to be limited only by the scope of the appended claims, as interpreted according to the principles of patent law including the doctrine of equivalents.
The present application claims the filing benefits of U.S. provisional application Ser. No. 63/203,457, filed Jul. 23, 2021, which is hereby incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4546551 | Franks | Oct 1985 | A |
4953305 | Van Lente et al. | Sep 1990 | A |
5499334 | Staab | Mar 1996 | A |
5530240 | Larson et al. | Jun 1996 | A |
5550677 | Schofield et al. | Aug 1996 | A |
5576687 | Blank et al. | Nov 1996 | A |
5632092 | Blank et al. | May 1997 | A |
5668663 | Varaprasad et al. | Sep 1997 | A |
5670935 | Schofield et al. | Sep 1997 | A |
5699044 | Van Lente et al. | Dec 1997 | A |
5708410 | Blank et al. | Jan 1998 | A |
5724187 | Varaprasad et al. | Mar 1998 | A |
5737226 | Olson et al. | Apr 1998 | A |
5802727 | Blank et al. | Sep 1998 | A |
5878370 | Olson | Mar 1999 | A |
5949331 | Schofield et al. | Sep 1999 | A |
6087953 | DeLine et al. | Jul 2000 | A |
6173501 | Blank et al. | Jan 2001 | B1 |
6222447 | Schofield et al. | Apr 2001 | B1 |
6222460 | DeLine et al. | Apr 2001 | B1 |
6329925 | Skiver et al. | Dec 2001 | B1 |
6513252 | Schierbeek et al. | Feb 2003 | B1 |
6611202 | Schofield et al. | Aug 2003 | B2 |
6642851 | DeLine et al. | Nov 2003 | B2 |
6690268 | Schofield et al. | Feb 2004 | B2 |
7004593 | Weller et al. | Feb 2006 | B2 |
7038577 | Pawlicki et al. | May 2006 | B2 |
7184190 | McCabe et al. | Feb 2007 | B2 |
7195381 | Lynam et al. | Mar 2007 | B2 |
7249860 | Kulas et al. | Jul 2007 | B2 |
7255451 | McCabe et al. | Aug 2007 | B2 |
7274501 | McCabe et al. | Sep 2007 | B2 |
7289037 | Uken et al. | Oct 2007 | B2 |
7308341 | Schofield et al. | Dec 2007 | B2 |
7329013 | Blank et al. | Feb 2008 | B2 |
7338177 | Lynam | Mar 2008 | B2 |
7370983 | DeWind et al. | May 2008 | B2 |
7446650 | Schofield et al. | Nov 2008 | B2 |
7581859 | Lynam | Sep 2009 | B2 |
7626749 | Baur et al. | Dec 2009 | B2 |
7720580 | Higgins-Luthman | May 2010 | B2 |
7855755 | Weller et al. | Dec 2010 | B2 |
7859565 | Schofield et al. | Dec 2010 | B2 |
8451107 | Lu et al. | May 2013 | B2 |
8643724 | Schofield | Feb 2014 | B2 |
9762880 | Pflug | Sep 2017 | B2 |
10019841 | Gibson et al. | Jul 2018 | B2 |
10179543 | Rathi et al. | Jan 2019 | B2 |
10488215 | Yu | Nov 2019 | B1 |
20060050018 | Hutzel et al. | Mar 2006 | A1 |
20060061008 | Karner et al. | Mar 2006 | A1 |
20120162427 | Lynam | Jun 2012 | A1 |
20130042180 | Sai | Feb 2013 | A1 |
20130321629 | Zhang | Dec 2013 | A1 |
20130328922 | Belanger | Dec 2013 | A1 |
20140022390 | Blank et al. | Jan 2014 | A1 |
20190204827 | Bhalla | Jul 2019 | A1 |
20200070725 | Ding | Mar 2020 | A1 |
20200400456 | Sen | Dec 2020 | A1 |
Number | Date | Country | |
---|---|---|---|
20230028593 A1 | Jan 2023 | US |
Number | Date | Country | |
---|---|---|---|
63203457 | Jul 2021 | US |