The present invention relates generally to a vehicle vision system for a vehicle and, more particularly, to a vehicle vision system that utilizes one or more cameras at a vehicle.
Use of imaging sensors in vehicle imaging systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 5,949,331; 5,670,935 and/or 5,550,677, which are hereby incorporated herein by reference in their entireties.
A vehicular control system includes a camera disposed at a vehicle equipped with the vehicular control system. The camera views exterior of the vehicle and is operable to capture image data. The camera includes a CMOS imaging array with at least one million photosensors arranged in rows and columns. The system includes an electronic control unit (ECU) with electronic circuitry and associated software. Image data captured by the camera is transferred to the ECU. The electronic circuitry of the ECU includes an image processor that is operable to process image data captured by the camera and transferred to the ECU. The vehicular control system, responsive to processing at the ECU of image data captured by the camera, determines a condition exterior of the vehicle. The vehicular control system, while the vehicle is operating in a power operational mode, switches from the power operational mode to an economy operational mode based at least in part on the determined condition. The power operational mode provides greater performance of the vehicle compared to the economy operational mode, and the economy operational mode provides greater fuel efficiency for the vehicle compared to the power operational mode.
These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
A vehicle vision system and/or vehicle control system operates to capture images exterior of the vehicle and may process the captured image data to display images and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The vision system includes an image processor or image processing system that is operable to receive image data from one or more cameras and provide an output to a display device for displaying images representative of the captured image data. Optionally, the vision system may provide a display, such as a rearview display or a top down or bird's eye or surround view display or the like.
Referring now to the drawings and the illustrative embodiments depicted therein, a vision system 10 for a vehicle 12 includes at least one exterior viewing imaging sensor or camera, such as a forward viewing imaging sensor or camera, which may be disposed at and behind the windshield 14 of the vehicle and viewing forward through the windshield so as to capture image data representative of the scene occurring at least forward of the vehicle.
Increasing fuel efficiency is an important goal for automotive manufacturers and drivers alike. Many vehicles offer different operational modes that alter control and handling of the vehicle. For example, many vehicles offer an economy operational mode (e.g., an “eco mode”) that puts restrictions in place (e.g., speed restrictions, acceleration restrictions, engine restrictions, climate control restrictions, etc.) to reduce fuel consumption. Other operational modes offer other benefits. For example, a power operational mode (e.g., a “sport mode”) may provide maximum power to the engine, allowing the driver to handle the vehicle with maximum speed and acceleration capabilities. Implementations herein provide intelligent switching between an economy operational mode and a power operational mode (or any other operational modes) based on current, recent, and/or predicted driving patterns and road environment. For example, when the driver is traveling through a mixture of high traffic and low traffic areas, a control system automatically switches between the economy mode (e.g., in the low traffic areas) and the power mode (e.g., in the high traffic areas) based on the driving pattern and road environment. The control system includes a software algorithm executed by a respective ECU, which controls switching between the different operational modes.
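For illustration only, the selectable operational modes discussed herein may be represented in software as an enumeration. A minimal sketch follows; the particular set of modes and their names are assumptions for the sketch and vary by vehicle.

```python
from enum import Enum, auto

class OperationalMode(Enum):
    """Example set of selectable operational modes (varies by vehicle)."""
    ECONOMY = auto()           # restricts speed/acceleration/climate to reduce fuel use
    POWER = auto()             # maximum engine power and acceleration (sport mode)
    NORMAL = auto()            # default balanced behavior
    COMFORT = auto()           # softer suspension and throttle mapping
    FOUR_WHEEL_DRIVE = auto()  # continuously engaged 4WD (inclement weather mode)
```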
Conventional vehicles often provide two or more available operational modes. For example, vehicles commonly include both the power operational mode (e.g., a sport mode or the like) and the economy operational mode. The economy operational mode may represent a fuel efficiency mode, while the power operational mode represents a mode for high power demand from the driver. Other modes may also be offered, such as an inclement weather mode (where, for example, four-wheel drive is continuously engaged). In the power operational mode, the expectation is not fuel efficiency but rather maximum or immediate power. When the driver is traveling along a route with a combination or mixture of high traffic and low traffic areas, the optimal operational mode may vary along the drive (i.e., along some portions of the drive, the economy operational mode would be ideal because the extra power is not needed, while along other portions, the power operational mode would be ideal because the extra power is needed or helpful). In scenarios such as these, implementations herein include a control system that provides automatic switching of the modes based on a determined current optimal or ideal operational mode. The control system includes a software algorithm that executes on an ECU or other computing device at the vehicle (e.g., a battery monitoring system (BMS) ECU of an electric vehicle, an engine management ECU of a gasoline/diesel vehicle, etc.). The system may automatically switch between any available operational driving modes as the vehicle determines or predicts the optimal operational mode in real time based on current conditions around the vehicle (e.g., location, traffic, weather, time of day, type of road, etc.).
Based on different input scenarios, the ECU or other controller determines and provides commands for automatic switching between, for example, the economy and power operational modes (and optionally other modes, such as a four-wheel drive mode, a comfort mode, a “normal” mode, etc.). Depending on the mode, the performance (e.g., acceleration and/or velocity), the handling (e.g., suspension), and/or the climate controls of the vehicle may be adjusted. The ECU executes the switching algorithm to determine the current operational mode that would provide the most benefit to the occupants of the vehicle based on the current context of the vehicle and the occupants. For example, when the system determines that power is unlikely to be needed (e.g., the system determines, based on sensor data or traffic data or map data, that there is little traffic near the vehicle), the system may automatically select the economy operational mode to increase fuel efficiency. On the other hand, when the system determines that traffic is heavier than a threshold level, the system may automatically switch to the power operational mode to ensure the vehicle/driver has maximum power and acceleration to maneuver through traffic. The system does not require any additional hardware. The system may be incorporated into any type of vehicle (e.g., an electric vehicle, a gasoline/diesel vehicle, a hybrid vehicle, etc.) to improve fuel efficiency without any human intervention. Electric vehicles, for example, are a modern trend in the auto industry, and the control system could extend their range.
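A minimal sketch of the threshold-based switching logic described above follows. The normalized traffic density metric and the threshold value are assumptions for the sketch; the disclosure does not specify a particular metric or threshold.

```python
# Hypothetical normalized traffic density threshold (0..1); the actual
# traffic metric and threshold are implementation-specific assumptions.
TRAFFIC_THRESHOLD = 0.6

def select_mode(traffic_density: float) -> str:
    """Select an operational mode from a normalized traffic density.

    Traffic heavier than the threshold -> "power" (maximum power and
    acceleration to maneuver through traffic); otherwise -> "economy"
    (extra power unlikely to be needed, so increase fuel efficiency).
    """
    return "power" if traffic_density > TRAFFIC_THRESHOLD else "economy"

assert select_mode(0.8) == "power"
assert select_mode(0.2) == "economy"
```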
The control system may determine the appropriate or ideal or optimal mode based on determining one or more conditions (e.g., driving conditions, such as traffic conditions, road conditions, weather conditions, etc.) of the vehicle or environment or a condition of one or more occupants using a variety of different inputs. For example, the system may receive a status of a cruise control system of the vehicle (e.g., enabled or disabled), past and recent driving patterns of the current driver (e.g., constant speed for a threshold period of time, acceleration patterns, braking patterns, etc.), navigational data (e.g., current location, current route information, past route information, etc.), environmental conditions (e.g., temperature, presence of precipitation, ambient light levels, etc.), and/or a current operational mode status (e.g., economy mode enabled or disabled). The system may determine current conditions of the occupants of the vehicle, such as a temperature of the occupants. The system may receive data from any number of sensors of the vehicle, such as environmental conditions captured by the camera (e.g., weather conditions, road conditions, traffic conditions, etc.) or other sensors (e.g., radar sensors, lidar, ultrasonic sensors, etc.).
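The inputs enumerated above might be gathered into a single snapshot structure for the switching algorithm to evaluate. The sketch below is illustrative; the field names and types are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ConditionSnapshot:
    """Illustrative bundle of the inputs described above (names are assumptions)."""
    cruise_control_enabled: bool      # status of the cruise control system
    constant_speed_duration_s: float  # recent driving pattern: time at constant speed
    hard_brake_events: int            # recent driving pattern: braking behavior
    latitude: float                   # navigational data: current location
    longitude: float                  # navigational data: current location
    ambient_temp_c: float             # environmental condition: temperature
    precipitation: bool               # environmental condition: precipitation present
    ambient_light_lux: float          # environmental condition: ambient light level
    traffic_density: float            # from camera/radar/lidar or traffic data (0..1)
    economy_mode_enabled: bool        # current operational mode status
```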
Optionally, the system may determine traffic and other road conditions (current or future) from a navigation system of the vehicle (e.g., a GPS sensor and a map or traffic database or the like). The system may determine common or preferred driving habits of the driver or other occupant of the vehicle based on a profile associated with the driver or occupant. Based on these inputs, the control system automatically selects an available mode (e.g., an economy mode or a power operational mode) for the vehicle. In some examples, the system attempts to increase fuel efficiency to whatever extent possible without inconveniencing/endangering the occupants of the vehicle. That is, the system switches to the economy mode whenever it is both safe and convenient (e.g., in a low traffic area and/or in good weather conditions) to maximize fuel efficiency. The control system generates an output on the vehicle bus instructing the appropriate system(s) to switch modes accordingly.
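For a CAN-based vehicle network, the output on the vehicle bus might resemble the following sketch using the python-can library. The arbitration ID and payload encoding are hypothetical; real values would come from the vehicle manufacturer's CAN database.

```python
import can  # python-can library

# Hypothetical arbitration ID and payload encoding for a mode-switch command;
# actual values would be defined by the manufacturer's CAN database (DBC).
MODE_SWITCH_ARBITRATION_ID = 0x3F0
MODE_PAYLOAD = {"economy": 0x01, "power": 0x02}

def send_mode_command(bus: can.BusABC, mode: str) -> None:
    """Publish a mode-switch command frame on the vehicle CAN bus."""
    msg = can.Message(
        arbitration_id=MODE_SWITCH_ARBITRATION_ID,
        data=[MODE_PAYLOAD[mode]],
        is_extended_id=False,
    )
    bus.send(msg)

# Example usage (SocketCAN interface on Linux):
# with can.Bus(interface="socketcan", channel="can0") as bus:
#     send_mode_command(bus, "economy")
```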
The system may continuously or periodically monitor the conditions (e.g., traffic conditions, weather conditions, etc.) and adjust the mode based on changes in conditions. The driver or other occupant of the vehicle may override the system by manually selecting a mode (e.g., via a user input such as a button, switch, voice command, a touch screen, etc.). The system may remain in the manually selected mode until the driver disables the mode, until the current drive ends, or until a predefined amount of time has elapsed.
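One way the periodic monitoring and manual-override behavior described above could be structured is sketched below. The polling interval, override timeout, and helper callables are assumptions for illustration.

```python
import time

POLL_INTERVAL_S = 1.0       # hypothetical monitoring period
OVERRIDE_TIMEOUT_S = 600.0  # hypothetical predefined time a manual selection is honored

def monitor_loop(read_conditions, select_mode, apply_mode, get_manual_selection):
    """Periodically re-evaluate conditions; honor a manual override temporarily."""
    override_mode = None
    override_start = 0.0
    while True:
        # A manual selection (button, switch, voice, touch screen) overrides the system.
        manual = get_manual_selection()
        if manual is not None:
            override_mode, override_start = manual, time.monotonic()
        # Drop the override after the predefined amount of time has elapsed.
        if override_mode and time.monotonic() - override_start > OVERRIDE_TIMEOUT_S:
            override_mode = None
        mode = override_mode or select_mode(read_conditions())
        apply_mode(mode)
        time.sleep(POLL_INTERVAL_S)
```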
Referring now to the drawings, the control system includes a mode selector 30 that receives data from various inputs of the vehicle, such as image data captured by the camera, navigational data, and a user input 34.
The mode selector 30, based on the received data, generates a mode selection 40 that selects a driving mode from the available driving modes of the vehicle. The vehicle may include any number of driving modes, such as a sport mode, a power mode, a normal mode, an economy mode, a comfort mode, a four-wheel drive mode, etc. For example, the mode selector 30 switches the vehicle from a power operational mode (e.g., a sport mode or power mode) to an economy operational mode (e.g., an eco mode). The power operational mode may provide greater performance of the vehicle (e.g., velocity, acceleration, etc.) compared to the economy operational mode. The economy operational mode may provide greater fuel efficiency for the vehicle compared to the power operational mode. The mode selector 30 may include a lookup table or other algorithm to generate the mode selection 40. In some examples, the mode selector 30 includes a model such as a machine learning algorithm (e.g., a neural network or the like) that predicts the mode selection 40 based on training data. The training data may be derived from the current vehicle and occupants (e.g., by monitoring the received data while the driver drives the vehicle). Additionally or alternatively, the model of the mode selector 30 may be pre-trained on a large training set that includes data from other vehicles and/or drivers.
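A sketch of the lookup-table variant of the mode selector 30 follows, keyed on coarse traffic and weather categories. The categories and table entries are assumptions for the sketch, not disclosed values.

```python
# Illustrative lookup table for mode selector 30: (traffic, weather) -> mode.
MODE_TABLE = {
    ("light", "clear"): "economy",
    ("light", "inclement"): "four_wheel_drive",
    ("heavy", "clear"): "power",
    ("heavy", "inclement"): "four_wheel_drive",
}

def mode_selection(traffic: str, weather: str, default: str = "normal") -> str:
    """Generate the mode selection 40 from the lookup table (default if no match)."""
    return MODE_TABLE.get((traffic, weather), default)

assert mode_selection("heavy", "clear") == "power"
assert mode_selection("light", "clear") == "economy"
```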
In some examples, the user input 34 includes user configuration settings. The user may configure thresholds or other parameters for the mode selector 30 to use when generating the mode selection 40. For example, the user may provide user input 34 indicating that the mode selector 30 is to switch to a particular mode only when the vehicle is traveling at certain speeds, at certain times, or in certain locations.
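Such user-configured constraints might gate the selector's output as sketched below; the parameter names and default values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class UserConfig:
    """Illustrative user-configured constraints on automatic mode switching."""
    min_speed_kph: float = 30.0          # only auto-switch above this speed
    allowed_hours: range = range(6, 22)  # only auto-switch during these hours

def may_auto_switch(speed_kph: float, hour_of_day: int, cfg: UserConfig) -> bool:
    """Return True when the user settings permit an automatic mode switch."""
    return speed_kph >= cfg.min_speed_kph and hour_of_day in cfg.allowed_hours

cfg = UserConfig()
assert may_auto_switch(50.0, 9, cfg)       # permitted: fast enough, daytime
assert not may_auto_switch(20.0, 9, cfg)   # blocked: below configured speed
```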
The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in U.S. Pat. Nos. 10,099,614 and/or 10,071,687, which are hereby incorporated herein by reference in their entireties.
The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an image processing chip selected from the EYEQ family of image processing chips available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.
The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. The imaging array may comprise a CMOS imaging array having at least 300,000 photosensor elements or pixels, preferably at least 500,000 photosensor elements or pixels and more preferably at least one million photosensor elements or pixels or at least two million photosensor elements or pixels or at least three million photosensor elements or pixels or at least five million photosensor elements or pixels arranged in rows and columns. The imaging array may capture color image data, such as via spectral filtering at the array, such as via an RGB (red, green and blue) filter or via a red/red complement filter or such as via an RCC (red, clear, clear) filter or the like. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data.
For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 9,233,641; 9,146,898; 9,174,574; 9,090,234; 9,077,098; 8,818,042; 8,886,401; 9,077,962; 9,068,390; 9,140,789; 9,092,986; 9,205,776; 8,917,169; 8,694,224; 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, and/or U.S. Publication Nos. US-2014-0340510; US-2014-0313339; US-2014-0347486; US-2014-0320658; US-2014-0336876; US-2014-0307095; US-2014-0327774; US-2014-0327772; US-2014-0320636; US-2014-0293057; US-2014-0309884; US-2014-0226012; US-2014-0293042; US-2014-0218535; US-2014-0247354; US-2014-0247355; US-2014-0247352; US-2014-0232869; US-2014-0211009; US-2014-0160276; US-2014-0168437; US-2014-0168415; US-2014-0160291; US-2014-0152825; US-2014-0139676; US-2014-0138140; US-2014-0104426; US-2014-0098229; US-2014-0085472; US-2014-0067206; US-2014-0049646; US-2014-0052340; US-2014-0025240; US-2014-0028852; US-2014-005907; US-2013-0314503; US-2013-0298866; US-2013-0222593; US-2013-0300869; US-2013-0278769; US-2013-0258077; US-2013-0242099; US-2013-0215271; US-2013-0141578 and/or US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in U.S. Pat. Nos. 10,071,687; 9,900,490; 9,126,525 and/or 9,036,026, which are hereby incorporated herein by reference in their entireties.
The imaging device and control and image processor and any associated illumination source, if applicable, may comprise any suitable components, and may utilize aspects of the cameras (such as various imaging sensors or imaging array sensors or cameras or the like, such as a CMOS imaging array sensor, a CCD sensor or other sensors or the like) and vision systems described in U.S. Pat. Nos. 5,760,962; 5,715,093; 6,922,292; 6,757,109; 6,717,610; 6,590,719; 6,201,642; 5,796,094; 6,559,435; 6,831,261; 6,822,563; 6,946,978; 7,720,580; 8,542,451; 7,965,336; 7,480,149; 5,877,897; 6,498,620; 5,670,935; 6,396,397; 6,806,452; 6,690,268; 7,005,974; 7,937,667; 7,123,168; 7,004,606; 7,038,577; 6,353,392; 6,320,176; 6,313,454 and/or 6,824,281, and/or International Publication Nos. WO 2009/036176; WO 2009/046268; WO 2010/099416; WO 2011/028686 and/or WO 2013/016409, and/or U.S. Publication Nos. US-2010-0020170 and/or US-2009-0244361, which are all hereby incorporated herein by reference in their entireties.
Changes and modifications in the specifically described embodiments can be carried out without departing from the principles of the invention, which is intended to be limited only by the scope of the appended claims, as interpreted according to the principles of patent law including the doctrine of equivalents.
The present application claims the filing benefits of U.S. provisional application Ser. No. 63/499,541, filed May 2, 2023, which is hereby incorporated herein by reference in its entirety.