This disclosure relates to a method and system for learning driver behavior using artificial intelligence and, based on the learned data (algorithm), coaching the driver to improve vehicle efficiency or adjusting a propulsion system to improve vehicle efficiency.
A vehicle propulsion system includes a source of mechanical power, i.e., engine or electric motor, and a mechanism that transfers this power to generate tractive force, i.e., wheels and axles. The propulsion system drives the vehicle in a forward/rearward direction.
Recent advancements in sensor technology and processing capacity have led to improved safety for vehicles and to the capability of controlling vehicle propulsion systems. Referring to
Therefore, it is desirable to provide a system that is anticipatory rather than reactive in adjusting the propulsion system. In other words, it is desirable to have a system that considers multiple inputs and determines that an adjustment to the propulsion system is necessary based on the received inputs.
One aspect of the disclosure provides a method for providing a suggested driving adjustment in real time to a driver of a vehicle. The method includes receiving, at data processing hardware, one or more direct driver inputs from a vehicle control system in communication with the data processing hardware. The method also includes receiving, at the data processing hardware, sensor data from a vehicle sensor system. The method includes determining, at the data processing hardware, a proposed driver behavior based on the direct driver inputs and the sensor data. The method also includes determining, at the data processing hardware, an ideal driver behavior based on the direct driver inputs and the sensor data. The method also includes determining, at the data processing hardware, a behavior difference between the proposed driver behavior and the ideal driver behavior. In addition, the method includes determining, at the data processing hardware, the suggested driving adjustment based on the behavior difference. The method also includes sending, from the data processing hardware, instructions to notify the driver of the suggested driving adjustment to improve vehicle efficiency and/or performance.
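The sequence of operations described above can be pictured with a minimal sketch. The function names, the representation of a behavior as a single scalar demand value, and the message text are illustrative assumptions for this example, not part of the disclosure:

```python
# Hypothetical sketch of the claimed method; behaviors are modeled as scalar
# demand values and the prediction/ideal models are passed in as callables.
def suggest_driving_adjustment(direct_driver_inputs, sensor_data,
                               predict_behavior, ideal_behavior):
    """Return a suggested adjustment from the gap between predicted and ideal behavior."""
    proposed = predict_behavior(direct_driver_inputs, sensor_data)  # learned driver model
    ideal = ideal_behavior(direct_driver_inputs, sensor_data)       # efficiency-optimal model
    difference = proposed - ideal                                   # the behavior difference
    if difference > 0:
        return f"Reduce demand by {difference:.1f} to improve efficiency"
    return None  # already at or below the ideal behavior
```

In an actual system the two callables would be the learned driver model and the ideal behavior model; here any pair of functions taking (inputs, sensor data) can be substituted.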
Implementations of the disclosure may include one or more of the following optional features. In some implementations, the sensor data includes vehicle sensor data and environment sensor data. The vehicle sensor data may include at least one of battery sensor data, traction drive motor sensor data, and driveline component sensor data. The environment sensor data may include at least one of vehicle speed data, road speed limit data, route profile data, traffic light crossings data and their respective location data, weather conditions data, and dynamic traffic data.
In some examples, the instructions include visual instructions to a user interface in communication with the data processing hardware. The visual instructions cause the user interface to display a message that includes the suggested driving adjustment. Additionally or alternatively, the instructions may include feedback instructions to the vehicle control system. The feedback instructions cause the vehicle control system to provide haptic feedback. The vehicle control system includes at least one of a steering wheel, a brake pedal, an acceleration pedal, and a gear lever. Additionally or alternatively, the instructions include audible instructions to a voice system in communication with the data processing hardware that cause the voice system to output an audible message or a chime.
In some implementations, during a learning phase, the method includes receiving learning direct driver inputs from the vehicle control system and receiving learning sensor data from the vehicle sensor system. In addition, the method includes associating one or more driver actions with the learning direct driver inputs and the learning sensor data. The one or more driver actions are indicative of an action taken by the driver to control the vehicle in response to the learning direct driver inputs and the learning sensor data. Additionally, during the learning phase, the method includes storing the one or more driver actions as one or more stored driver behaviors in memory hardware, where each driver action of the one or more driver actions is associated with the learning direct driver inputs and the learning sensor data.
In some examples, determining the predicted driver behavior includes retrieving, from the memory hardware in communication with the data processing hardware, the predicted driver behavior from the one or more stored driver behaviors. Each one of the stored driver behaviors from the one or more stored driver behaviors is associated with learning direct driver inputs and learning sensor data similar to the received one or more direct driver inputs and the received sensor data, respectively.
Another aspect of the disclosure provides a system for providing a suggested driving adjustment in real time to a driver of a vehicle. The system includes: data processing hardware; and memory hardware in communication with the data processing hardware. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations that include the method described above.
The details of one or more implementations of the disclosure are set forth in the accompanying drawings and the description below. Other aspects, features, and advantages will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
A vehicle, such as, but not limited to, a car, a crossover, a truck, a van, a sport utility vehicle (SUV), or a recreational vehicle (RV), may be used for personal driving or commercial driving (deliveries, taxis, etc.). Therefore, a propulsion system associated with each vehicle performs differently based on the vehicle, the use of the vehicle, and sensor data associated with the vehicle and the vehicle environment.
Referring to
The vehicle 100, 100a also includes a sensor system 120 to provide reliable and robust sensor data 122. The sensor system 120 includes different types of sensors. The sensor system 120 may include vehicle sensors 120a that provide vehicle sensor data 122a associated with the vehicle 100, 100a, for example, sensors that are associated with a battery, a traction drive motor, an engine, a brake system, and a driveline component. The sensor system 120 may also include environment sensors 120b that provide environment sensor data 122b that may be used separately or with one another to create a perception of the environment of the vehicle 100, 100a. In addition, the environment sensor data 122b may include, but is not limited to, average vehicle speed affected by surrounding vehicles, road speed limits, route profile (grade, elevation, curvature, three-dimensional profile data, etc.), traffic light crossings and locations, weather conditions, and dynamic traffic data. The sensor data 122, i.e., the vehicle sensor data 122a and the environment sensor data 122b, may be used together or separately to aid the driver 30 and/or the vehicle 100, 100a (autonomous driving) in making intelligent decisions when maneuvering the vehicle 100, 100a. The sensor system 120 may include one or more cameras, an inertial measurement unit (IMU) configured to measure a linear acceleration (using one or more accelerometers) of the vehicle 100, 100a and a rotational rate (using one or more gyroscopes) of the vehicle 100, 100a, radar, sonar, LIDAR (Light Detection and Ranging, which may include optical remote sensing that measures properties of scattered light to find range and/or other information of a distant target), LADAR (Laser Detection and Ranging), and ultrasonic sensors. The sensor system 120 may also include other sensors.
The vehicle 100 may include a user interface 130. The user interface 130 may include a display 132, a knob, and a button, which are used as input mechanisms. The user interface 130 may also include a haptic device 134 to notify and alert the driver 30 or to provide guidance. The haptic device 134 may include, but is not limited to, a haptic accelerator, a haptic brake pedal, or a haptic steering wheel that may vibrate based on a triggered condition (e.g., energy inefficient driving or aggressive driving). In some examples, the display 132 may show the knob and the button, while in other examples, the knob and the button are a mechanical knob-and-button combination. In some examples, the user interface 130 receives one or more driver commands from the driver 30 via one or more input mechanisms or the touch screen display 132 and/or displays one or more notifications to the driver 30. In some examples, the driver 30 may select an energy economic mode of driving versus a sport driving mode. The driver may also adjust a level of driving guidance (e.g., provided by a controller 200).
The vehicle 100, 100a also includes a propulsion system 140 that includes a source of mechanical power, i.e., an engine or electric motor(s), and a mechanism that transfers this power to tractive force, i.e., a transmission, wheels, and axles. The propulsion system 140 drives the vehicle 100, 100a in a forward/rearward direction. The propulsion system 140 varies based on the vehicle type; for example, the propulsion system 140 may include, but is not limited to, combustion propulsion, fuel cell propulsion, diesel propulsion, electric propulsion, hybrid propulsion (e.g., combustion engine and electric), or any other kind of propulsion system.
The vehicle also includes a controller 200 in communication with the vehicle control system 110, the sensor system 120, and the user interface 130. The controller 200 includes a computing device (processor or processing hardware) 202 (e.g., a central processing unit having one or more computing processors) in communication with non-transitory memory 204 (e.g., a hard disk, flash memory, random-access memory, memory hardware) capable of storing instructions executable on the computing processor(s) 202. In some examples, the hardware processor 202 is configured to execute artificial intelligence (AI) algorithms. As such, the processor 202 receives multiple inputs and takes actions that maximize its chance of achieving a specific defined goal; in other words, the processor 202 is configured to mimic cognitive functions that humans associate with other human minds, such as learning and problem solving. The processor 202 is capable of processing large amounts of data, including, but not limited to, vehicle control system data 111, sensor data 122, and other data. The artificial intelligence algorithms may execute one of several learning methods, including, but not limited to, deep learning using neural networks, machine learning algorithms such as K-means clustering or regression learning (e.g., a driving behavior index), or reinforcement learning algorithms using a performance reward goal (e.g., the reward may be energy efficiency or vehicle performance).
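As one hedged illustration of the K-means clustering mentioned above, a minimal one-dimensional k-means could group accelerator-pedal change rates into conservative and aggressive clusters. The feature choice and data values here are invented for the example and are not part of the disclosure:

```python
# Tiny 1-D k-means over pedal change rates (illustrative stand-in for the
# clustering the disclosure mentions; initialization assumes k=2).
def kmeans_1d(samples, k=2, iters=20):
    centers = [min(samples), max(samples)]  # simple spread-out initialization for k=2
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in samples:
            # assign each sample to its nearest center
            idx = min(range(k), key=lambda i: abs(s - centers[i]))
            clusters[idx].append(s)
        # recompute each center as the mean of its cluster (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

pedal_rates = [0.05, 0.07, 0.06, 0.9, 1.1, 0.95]  # hypothetical pedal change rates
conservative_center, aggressive_center = kmeans_1d(pedal_rates)
```

A production system would more likely cluster multi-dimensional feature vectors (pedal rates, accelerations, following distance) with a library implementation; this sketch only shows the grouping idea.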
The controller 200, i.e., the processor 202, executes an efficiency system 210 that receives data from one or more systems, i.e., the vehicle control system 110 and the sensor system 120, and analyzes the received data to provide an anticipatory action. In some examples, the anticipatory action includes, but is not limited to, an indication to the driver 30 (e.g., by way of the display 132 and/or vibration of the haptic device 134 (e.g., vehicle 100a,
The efficiency system 210 includes an ideal driver behavior algorithm 212 that is either learned or stored in the hardware memory 204. The ideal driver behavior algorithm 212 determines an ideal driving action, based on sensor data 122 and/or vehicle control system data 111, that maximizes the energy efficiency of the vehicle 100. Therefore, while the driver 30 is driving the vehicle 100, the ideal driver behavior algorithm 212 determines an ideal driving behavior/action given the current received sensor data 122 and/or vehicle control system data 111.
In some examples, the efficiency system 210 includes a driving behavior learning algorithm 214 that receives vehicle control data 111 (also referred to as direct driver input) from the vehicle control system 110, vehicle sensor data 122a (also referred to as vehicle sensed observable or indirect driver input) from the vehicle sensors 120a, and environment sensor data 122b (also referred to as vehicle environment observable) from the environment sensors 120b. The driving behavior learning algorithm 214 learns the driving behavior of the driver 30 based on the received data 111, 122 over time. The driving behavior learning algorithm 214 correlates the driver driving actions in relation to propulsion efficiency of the propulsion system 140 and energy consumption of the vehicle 100. In addition, the driving behavior learning algorithm 214 stores the one or more driver driving actions as one or more stored driver behaviors 206 in memory 204, where each driver action is associated with specific direct driver inputs 111 and sensor data 122. In some examples, the driving behavior learning algorithm 214 may correlate driving behavior in relation to one or more other parameters to be optimized (i.e., a cost function). In some examples, the other parameters may include, but are not limited to, fuel consumption, available driving range, driving travel time, or any other vehicle parameter. The driving behavior learning algorithm 214 may identify the driver behavior as a class (e.g., aggressive, conservative, etc.) or associate it with a behavior value or index within a range of values correlated with behaviors. The identified driver behavior 206 may also change over time and/or with the vehicle operating environment or scenario. The driving behavior learning algorithm 214 determines the driver behavior 206 continuously at a regular triggered interval (e.g., every 100 milliseconds or 1 second).
In some examples, the driving behavior learning algorithm 214 also correlates the driver behavior 206 to the vehicle environment (e.g., from the environment sensor data from the environment sensors). In some implementations, the driving behavior learning algorithm 214 includes pre-learned training data (e.g., supervised learning), which helps the driving behavior learning algorithm 214 identify an aggressive driver behavior 206 or a conservative driver behavior 206. In other implementations, the driving behavior learning algorithm 214 determines the training data (e.g., unsupervised learning) and, based on learning, identifies aggressive behaviors and conservative behaviors or a driving behavior value index in between multiple behavior classes. As such, the driving behavior learning algorithm 214 may predict a driver action (e.g., a predicted driver behavior 215, such as a wheel torque demand or a desired vehicle acceleration) given a set of data 111, 122 and the learned/saved driver behavior 206. In some examples, the driving behavior learning algorithm 214 monitors the driver behavior 206 of the driver for a period of time before the driving behavior learning algorithm 214 is able to determine a predicted driver behavior 215. In some implementations, during a learning phase, the driving behavior learning algorithm 214 receives direct driver inputs 111 (i.e., learning direct driver inputs) and sensor data 122 (i.e., learning sensor data). In addition, the driving behavior learning algorithm 214 may associate one or more driver actions with the learning direct driver inputs 111 and the learning sensor data 122. The one or more driver actions are indicative of an action taken by the driver 30 to control the vehicle 100 in response to the direct driver inputs 111 and the sensor data 122.
Also during the learning phase, the driving behavior learning algorithm 214 may store the one or more driver actions as one or more stored predicted driver behaviors 206 in the memory hardware 204. Each driver action of the one or more driver actions is associated with the learning direct driver inputs 111 and the learning sensor data 122. In other words, the driving behavior learning algorithm 214 accumulates data that includes the direct driver inputs 111 and the sensor data 122 for a threshold period of time before determining a predicted driver behavior 215 based on the received data 111, 122. Therefore, during an implementation phase following the learning phase, the driving behavior learning algorithm 214 determines the predicted driver behavior 215 by retrieving, from the memory hardware 204, a stored learned driver behavior 206 that is associated with direct driver inputs 111 and sensor data 122 similar to the received one or more direct driver inputs and the received sensor data, respectively.
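The store-then-retrieve pattern of the learning and implementation phases can be sketched as a nearest-neighbor lookup. The feature encoding, distance metric, and action labels here are assumptions made for the example, not details from the disclosure:

```python
# Hedged sketch: during learning, store (inputs, action) pairs; during the
# implementation phase, retrieve the action whose stored inputs are closest
# to the currently received inputs.
stored_behaviors = []  # plays the role of the memory hardware

def learn(driver_inputs, sensor_data, driver_action):
    """Learning phase: associate a driver action with the inputs that produced it."""
    stored_behaviors.append((driver_inputs + sensor_data, driver_action))

def predict(driver_inputs, sensor_data):
    """Implementation phase: return the stored action with the most similar inputs."""
    query = driver_inputs + sensor_data
    def dist(entry):
        features, _ = entry
        return sum((a - b) ** 2 for a, b in zip(features, query))  # squared distance
    _, action = min(stored_behaviors, key=dist)
    return action
```

A real system would likely use a learned model rather than raw storage, but the lookup conveys what "similar to the received inputs" means operationally.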
In some implementations, the efficiency system 210, i.e., the driving behavior learning algorithm 214, learns the driving behavior of the driver 30 and the behavior's correlations to vehicle propulsion efficiency. For example, the efficiency system 210 determines a base driver classification based on driver inputs 111 and some vehicle sensor inputs 122 (i.e., inputs of the accelerator pedal 114a and brake pedal 114b and longitudinal vehicle acceleration/deceleration) and behavior correlations to propulsion efficiency. Alternatively, the driving behavior learning algorithm 214 may include vehicle environment inputs 122b (e.g., road or driving route profile data) and driving use scenario influences with correlations to propulsion efficiency. In some examples, the driving behavior learning algorithm 214 may use other driver behavior learning approaches, which may include, but are not limited to, dynamic data such as traffic flow or surrounding vehicle data, vehicle following distances, weather conditions, traffic light data, etc., and their driving behavior influence and correlation to propulsion efficiency. In some examples, the driving behavior learning algorithm 214 uses supervised learning approaches in which explicit training data sets of driving behavior inputs (i.e., accelerator and brake pedal positions and rates of change and corresponding propulsion efficiency) may be used for learning. This is done offline and then flashed into memory 204 of the controller 200. Many training data sets with additional inputs beyond the driver inputs (pedal, steering, etc.) and vehicle sensor data (acceleration/deceleration, etc.) may be included for supervised learning. A neural network can be used to handle the multiple inputs and dimensions for learning. Unsupervised learning approaches may also be implemented in real time for driver behavior learning, driver coaching, and even propulsion control adjustment.
For example, a reinforcement learning algorithm may be executed on the processor 202 with a defined reward function, such as maximizing energy efficiency or any other vehicle performance target or optimization cost function. This approach does not need explicit training data sets; rather, the driving behavior and its correlation to propulsion efficiency may be learned by iterative feedback based on achievement of the reward function. In this way, propulsion control adjustment 222 (as will be later discussed) and driver coaching for improved energy efficiency may be executed while driving. If the driver behavior maximizes the reward (i.e., energy efficiency), that style of driving will be further encouraged via coaching. If the driving behavior minimizes the reward function, that style of driving will be discouraged. Similarly, the propulsion control adjustments will be adapted to achieve the desired reward (i.e., energy efficiency).
For each set of received data 111, 122, the ideal driver behavior algorithm 212 determines an ideal behavior 213 while the driving behavior learning algorithm 214 provides the predicted driver behavior 215 (of the driver 30) for the same set of received data 111, 122. A comparator 218 compares the ideal behavior 213 with the predicted (or learned) driver behavior 215 and determines a behavior difference 219. The behavior difference 219 may be considered a driver deviation from the ideal driver behavior 213.
In some implementations, the efficiency system 210 includes a driver co-pilot coach 216 that receives the behavior difference 219 and provides a suggestion or coaching action to the driver 30 to improve the vehicle efficiency and reduce energy consumption. The vehicle efficiency may be fuel efficiency, electrical energy efficiency, or other vehicle efficiencies. In some examples, the driver co-pilot coach 216 may instruct the user interface 130 to display a message on the display 132 that includes the suggestion or coaching action. For example, the message may state: "To improve vehicle efficiency, reduce your speed," "Consider increasing the distance between you and the vehicle in front of you to increase your safety," or "Consider moving to the left lane to maintain vehicle speed and efficiency." The coaching action may be a vehicle speed target recommendation for achieving energy efficiency. The coaching action, in some examples, may be a suggestion to increase the vehicle speed to achieve higher vehicle efficiency. Additionally or alternatively, the driver co-pilot coach 216 may instruct the vehicle control system 110 to provide haptic feedback by way of the steering wheel 112, the pedals 114, and/or the gear lever 116. In some examples, the haptic feedback informs the driver 30 of an optimal or ideal driver behavior pedal position. For example, the driver co-pilot coach 216 may instruct the driver to initiate braking or tip out of the accelerator pedal by vibrating a haptic accelerator or brake pedal in the user interface 130. The driver co-pilot coach 216 may, additionally or alternatively, instruct a voice system (not shown) to provide an audible message or chime to the driver 30. Therefore, the driver co-pilot coach 216 coaches and trains the driver 30 to improve his or her driving by providing suggested anticipatory driving feedback while the driver 30 is driving.
The driver co-pilot coach 216 continuously guides the driver 30 based on the behavior difference 219 to ultimately achieve the ideal driver behavior 213 for energy efficiency or other performance driving criteria. In some examples, the driver co-pilot coach 216 dynamically coaches the driver 30, e.g., via the user interface 130 or the vehicle control system 110, to achieve an efficiency per unique "learned" behavior 215 and efficiency (guidance adjusted for driving scenario).
The efficiency system 210 (i.e., the driving behavior learning algorithm 214) learns the driving behavior and patterns of a specific driver 30 and correlates the learned driving behaviors and patterns to the vehicle operating environment and external influence factors (i.e., based on the sensor data 122 from the sensor system 120). The efficiency system 210 (i.e., the driver co-pilot coach 216) then dynamically coaches and provides suggestions to the driver 30 to adjust the way the driver 30 drives, thus achieving efficiency per learning. Based on the above, the efficiency system 210 associates one or more driver behaviors with the operation of the propulsion system 140, which leads to maximized efficiency or performance. In some examples, the driver co-pilot coach 216 dynamically coaches the driver 30 to achieve efficiency per learned recommendation, e.g., acceleration and deceleration profiling recommendations or an optimal vehicle speed (Vspeed_optimal). Therefore, the driver co-pilot coach 216 allows the driver 30 to improve his or her driving skills for maximizing vehicle energy efficiency by learning while driving.
In some implementations, the instructions 217 include visual instructions 217a to a user interface 130 in communication with the data processing hardware 202. The visual instructions cause the user interface 130 to display a message that includes the suggested driving adjustment 216a. Additionally or alternatively, the instructions 217 may include feedback instructions 217b to the vehicle control system 110 in communication with the data processing hardware 202. The feedback instructions 217b cause the vehicle control system 110 to provide haptic feedback to the driver 30. The vehicle control system 110 may include at least one of a steering wheel 112, a brake pedal 114b, an acceleration pedal 114a, and a gear lever 116.
In some examples, the instructions 217, 217a, 217b include audible instructions to a voice system (not shown) in communication with the data processing hardware. The audible instructions cause the voice system to output an audible message or a chime.
Referring to
As shown in
With reference to
In some implementations, as shown in
Referring to
Referring to
Referring to
The path following behaviors 152 may include a braking behavior 152a, a speed behavior 152b, and a steering behavior 152c. Other behaviors 152 may also be included. Each behavior 152a-152c causes the vehicle 100d to take an action, such as driving forward, turning at a specific angle, braking, speeding up, or slowing down, among others. The vehicle controller 200 may maneuver the vehicle 100 in any direction across the road surface by controlling the drive system 160, more specifically by issuing commands 154 to the drive system 160.
Referring back to
Additionally, at block 1410, the method 1400 includes determining, at the data processing hardware 202, a propulsion adjustment 222 based on an ideal driver behavior 213 and the sensor data 122, 122a, 122b. At block 1412, the method 1400 includes transmitting, from the data processing hardware 202 to the propulsion system 140 in communication with the data processing hardware 202, propulsion instructions 224 to modify the one or more parameters of the propulsion system 140 based on the propulsion adjustment 222 along the path to improve vehicle efficiency and/or performance.
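The two blocks above, determining a propulsion adjustment from the ideal driver behavior and then transmitting instructions, can be sketched as follows. The parameter name (`accel_limit`), the 20% convergence gain, and the dictionary-based instruction payload are hypothetical choices for this example:

```python
# Hedged sketch of blocks 1410-1412: derive a propulsion adjustment from the
# ideal behavior, then hand it to a transport callable standing in for the
# communication link to the propulsion system.
def propulsion_adjustment(ideal_accel, current_accel_limit):
    """Return a new acceleration limit nudged toward the ideal behavior."""
    # move 20% of the way toward the ideal value each update (invented gain)
    return current_accel_limit + 0.2 * (ideal_accel - current_accel_limit)

def send_propulsion_instructions(adjustment, apply_fn):
    """Package the adjustment as an instruction and deliver it."""
    apply_fn({"accel_limit": adjustment})  # stand-in for transmitting to the propulsion system
```

The gradual nudge (rather than an immediate jump to the ideal value) reflects the idea of adjusting parameters along the path instead of abruptly overriding the driver.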
In some implementations, during a learning phase, the method 1400 includes receiving learning direct driver inputs 111 from a vehicle control system 110 in communication with the data processing hardware 202 and receiving learning sensor data 122, 122a, 122b from the vehicle sensor system 120. The vehicle control system 110 includes at least one of a steering wheel 112, a brake pedal 114b, an acceleration pedal 114a, and a gear lever 116. During the learning phase, the method 1400 also includes associating one or more ideal driver actions with the learning direct driver inputs 111 and the learning sensor data 122, 122a, 122b. The one or more ideal driver actions are indicative of an action taken by an ideal driver to control the vehicle in response to the learning direct driver inputs 111 and the learning sensor data 122, 122a, 122b, resulting in an improved efficiency and/or performance of the vehicle. During the learning phase, the method also includes storing the one or more ideal driver actions associated with the learning direct driver inputs 111 and the learning sensor data 122, 122a, 122b as one or more stored ideal driver behaviors 206 in memory hardware 204.
The method 1400 may also include determining the ideal driver behavior 213 by retrieving, from the memory hardware 204 in communication with the data processing hardware 202, the ideal driver behavior 213 from the one or more stored ideal driver behaviors 206. The stored ideal driver behavior 213 is associated with learning direct driver inputs 111 and learning sensor data 122, 122a, 122b that are similar to the received one or more direct driver inputs 111 and the received sensor data 122, 122a, 122b, respectively.
Table 1 below includes driving behavior learning levels 1-5 that may be implemented by the ideal driver behavior algorithm 212 and/or the driving behavior learning algorithm 214 during the learning phase as previously described. For example, the ideal driver behavior algorithm 212 and/or the driving behavior learning algorithm 214 implement the learning phase by executing each one of the described levels, i.e., levels 1 through 5.
At level 1 DB1, the behavior learning algorithm 212, 214 learns the driving behavior of an ideal driver or the driver 30 without considering external factors, using only direct driver inputs 111 and limited sensor data 122. At level 1 DB1 learning, the objective of the driving behavior learning algorithm 214 is to learn the base driving behavior of the driver 30, while the learning objective of the ideal driver behavior 213 is the base driving behavior and the correlation of the driving behaviors to energy efficiency, which may be learned using accelerator and brake pedal inputs together with longitudinal vehicle accelerations and decelerations and lateral accelerations. As previously mentioned, in some examples, the behavior learning algorithm 214 may associate a classification or a factor with the type of driving behavior (e.g., aggressive, sport, economic, etc.). In some examples, if the behavior learning algorithm 212, 214 receives sensor data 122 indicative of high longitudinal and lateral accelerations with high rates of accelerator pedal changes, then the efficiency system 210 considers the driver 30 to be in the aggressive category. Similarly, if the behavior learning algorithm 212, 214 receives sensor data 122 indicative of low longitudinal and lateral accelerations and/or slow pedal rates, then the efficiency system 210 determines that the driver 30 is a conservative or economic driver 30. In some examples, the ideal driver behavior algorithm 212 correlates vehicle energy efficiency to each style of driving. For example, when the efficiency system 210 receives sensor data 122 indicative of high longitudinal acceleration and rapid accelerator pedal increases due to driver behavior, then the efficiency system 210 correlates such driver actions with energy inefficient driving.
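A hedged sketch of this level-1 classification might threshold the same quantities the paragraph names. The specific limits (3.0 m/s² for accelerations, 0.5 for pedal change rate) are invented for illustration and are not stated in the disclosure:

```python
# Classify one driving sample from direct inputs and limited sensor data only,
# mirroring the level-1 aggressive/conservative split; thresholds are assumptions.
def classify_behavior(long_accel, lat_accel, pedal_rate,
                      accel_limit=3.0, pedal_limit=0.5):
    """Label a sample 'aggressive' when both accelerations and pedal rate are high."""
    if (abs(long_accel) > accel_limit or abs(lat_accel) > accel_limit) \
            and pedal_rate > pedal_limit:
        return "aggressive"
    return "conservative"
```

Requiring both high accelerations and high pedal rates follows the paragraph's wording ("high longitudinal and lateral accelerations with high rates of accelerator pedal changes"); a learned classifier would replace these fixed thresholds.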
The driver co-pilot coach 216 may use the behaviors learned by the behavior learning algorithms 212, 214 to coach the driver 30 to drive in a smoother way (i.e., reduce longitudinal and lateral accelerations), which would improve vehicle efficiency. In some examples, one or more parameters of the propulsion system 140 are adjusted to filter out rapid changes in propulsion demand, causing an increase in vehicle energy efficiency. During level 1 DB1, the behavior learning algorithms 212, 214 consider neither the external effects of the driving scenario nor the dynamic conditions around the vehicle 100.
At level 2 DB2, the behavior learning algorithm 212, 214 considers the driving behavior of the driver 30 and a vehicle use (i.e., segmented use or cycle use). The behavior learning algorithm 212, 214 determines the driving behavior changes and/or unique driving patterns based on vehicle usage or specific driving scenarios. For example, if the vehicle 100 is used as a taxi or for utility with mainly low speed driving (e.g., <60 kilometers/hour) but with frequent vehicle stops and launches, the driver 30 may be coached (by the driver co-pilot coach 216) to reduce vehicle accelerations to minimize energy losses, or may be advised to increase vehicle following distances or even operate at reduced, more constant vehicle speeds to maximize energy efficiency. In this case, the propulsion system 140 may be adjusted based on this vehicle usage scenario. For example, increased electric driving in the case of a hybrid vehicle may be implemented to minimize frequent engine stops/starts. The driving behavior learning algorithm 214 may also be implemented by segments of driving. For example, the behavior learning algorithm 212, 214 may learn behaviors only for vehicle launches from a stopped condition. Similarly, the behavior learning algorithm 212, 214 may learn an additional driving behavior for the segment of driving during vehicle deceleration or braking. This segmented learning may be used to advise the driver to decelerate longer to maximize energy efficiency or even brake faster to increase efficiency of regenerative braking and energy recovery in the case of an electric vehicle or hybrid application.
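The segmented learning idea can be sketched by keeping per-segment statistics and coaching against a per-segment target. The segment names, the m/s² targets, and the message format are all assumptions for this example:

```python
# Hedged sketch of segmented coaching: each driving segment (launch, braking, ...)
# is compared against its own ideal acceleration target; values are invented.
IDEAL_ACCEL = {"launch": 1.5, "braking": -2.0}  # hypothetical m/s^2 targets per segment

def coach_segment(segment, observed_accels):
    """Return a coaching tip if the segment's mean acceleration exceeds its target."""
    mean = sum(observed_accels) / len(observed_accels)
    ideal = IDEAL_ACCEL[segment]
    if abs(mean) > abs(ideal):
        return f"{segment}: ease off, target {ideal} m/s^2"
    return None  # segment already within the ideal envelope
```

Keeping separate targets per segment is what lets the coach give opposite advice for launches (decelerate the demand) and for braking in a regenerative vehicle (potentially brake harder), as the paragraph describes.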
At level 3 DB3, the behavior learning algorithms 212, 214 may also consider the roads and the road environment, for example, from information provided by a navigation system of the vehicle 100. At level 3 DB3, the driving behavior learning algorithm 214 and the ideal driver behavior algorithm 212 may consider the effects of road grades, curvature, intersections, and surfaces on driving behavior. In some examples, the navigation system also provides a driving path probability (DPP); time of day may also be used for learning. In this level of learning, the static vehicle driving environment may be correlated with the driving behavior. For example, if the road and driving route include frequent changes in road curvature or grade, frequent acceleration and braking by the driver is expected. The driver co-pilot coach 216 may coach the driver to minimize rapid acceleration and braking in order to maintain a reduced, near-constant speed while driving on a curved road. Similarly, the propulsion system 140 may be adjusted to maximize the energy recovery potential of regenerative braking if segments of downhill driving are on the route. In addition, the transmission shift schedule may be adjusted to minimize gear shifting and maintain a constant gear while driving through frequent changes in road curvature or grade. The driver may also be coached to maintain more constant driving speeds to maximize energy efficiency.
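The route-based propulsion adjustments described above (favoring regenerative braking on downhill segments, holding a gear through winding roads) can be sketched as a simple rule over upcoming route data. The function name, hint strings, and thresholds below are illustrative assumptions.

```python
def route_adjustments(route_grades, curvature_change_count):
    """Return propulsion-system hints for the upcoming route.
    route_grades: list of signed grades (fraction, negative = downhill).
    curvature_change_count: number of curvature changes on the segment.
    Thresholds (3% grade, 5 changes) are hypothetical tuning values."""
    hints = []
    if any(g < -0.03 for g in route_grades):
        hints.append("maximize_regen")   # downhill ahead: favor energy recovery
    if curvature_change_count > 5:
        hints.append("hold_gear")        # winding road: suppress gear hunting
    return hints
```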
At level 4 DB4, the behavior learning algorithms 212, 214 also consider the vehicle sensor system 120, which provides sensor data 122 including vehicle sensor data 122a and environmental sensor data 122b associated with a field of view of the driver 30 and the surroundings of the vehicle 100. For example, the sensor system 120 may include front and/or rear short-range radars and/or cameras that are used to sense a number of surrounding vehicles and a distance to each of the surrounding vehicles. As such, the ideal driver behavior algorithm 212 and the driving behavior learning algorithm 214 may determine and learn the driver behavior based on the number of surrounding vehicles and the distances to the immediate surrounding vehicles. The objective at this level of learning is to further correlate energy efficiency with the driving behavior in the immediate vehicle environment. For example, if the driver follows the immediate vehicles in front too closely, frequent and unnecessary changes in the vehicle speed and in the accelerator and brake pedal inputs will be sensed, which ultimately leads to inefficient driving since the driving behavior is determined to be aggressive. The energy losses would potentially increase if this occurs at a higher vehicle speed (i.e., freeway driving) versus city driving. The driving behavior learning algorithm 214 and the ideal driver behavior algorithm 212 may learn the driver's behavior relating to vehicle following distances and speeds for optimized energy efficiency. Using this additional learning and information, the driver co-pilot coach 216 coaches the driver 30 to increase vehicle following distances, which minimizes unnecessary accelerating and braking in addition to maintaining a near-constant speed. This energy-efficient style of driving would also be imitated and applied during piloted or autonomous driving to maximize energy efficiency.
This level of driving behavior learning for immediate vehicle environment within the driver's field of view may be combined with the previous level of learning including information about the road and driving route. For example, if the driving route includes frequent changes in road curvature and grade, the driver may be advised to further increase vehicle following distances to minimize unnecessary changes in vehicle speed in order to increase energy efficiency.
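The following-distance coaching described above amounts to comparing the driver's actual time headway against an ideal headway and converting the shortfall into a suggested distance adjustment. The function below is a minimal sketch under that assumption; the 2-second ideal headway is an illustrative value, not taken from the disclosure.

```python
def following_distance_advice(distance_m, speed_mps, ideal_headway_s=2.0):
    """Return the additional following distance (meters) the driver
    should add to reach the ideal time headway; 0.0 if already safe.
    ideal_headway_s is a hypothetical coaching target."""
    headway_s = distance_m / max(speed_mps, 0.1)  # avoid divide-by-zero at rest
    shortfall_s = max(ideal_headway_s - headway_s, 0.0)
    return shortfall_s * speed_mps
```

For example, following 20 m behind a vehicle at 20 m/s gives a 1-second headway, so the coach would suggest opening the gap by roughly 20 m; at a 60 m gap the headway already exceeds the target and no adjustment is suggested.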
At level 5 DB5, the behavior learning algorithms 212, 214 also consider dynamic vehicle environment information, such as information from a telematics system of the vehicle 100. The telematics system may provide information that includes, but is not limited to, traffic information, weather information, light intersection information, and traffic light timing information. The behavior learning algorithms 212, 214 may include other learning levels. In some examples, the vehicle 100 may use the telematics information to increase or decrease the speed of the vehicle 100 during autonomous driving, using traffic light timing to reduce excessive vehicle stopping and launching and thereby conserve energy.
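The traffic-light-timing adjustment described above can be sketched as a "green wave" speed advisory: choose a speed that arrives at the intersection when the light is green, avoiding a full stop. The function name and speed bounds below are illustrative assumptions.

```python
def green_wave_speed(distance_m, time_to_green_s, v_min=5.0, v_max=20.0):
    """Suggest a speed (m/s) that reaches the intersection as the light
    turns green, clamped to plausible bounds. Bounds are hypothetical."""
    if time_to_green_s <= 0:
        return v_max  # light is already green: proceed at cruise speed
    ideal = distance_m / time_to_green_s
    return min(max(ideal, v_min), v_max)
```

With 100 m to the intersection and 10 s until green, the advisory speed is 10 m/s, letting the vehicle roll through without stopping and launching.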
Table 2 below shows the driver behavior classification and the measurable characteristics of the learned driver model. In other words, the table shows the measurable inputs that are considered by both the ideal driver behavior algorithm 212 and the behavior learning algorithm 214 when learning the driver behavior. Several factors influence the behavior learning algorithm 214, such as, but not limited to, a three-dimensional map (slope/curve, crossings, etc.), traffic flow (traffic level and density, front and/or rear), road surface (weather), time of day, driver preview (distance and vision), and the number of surrounding objects (in the field of view of the driver 30). As shown, some of the direct driver inputs 111 may include, but are not limited to, acceleration and brake pedal input velocities, steering input/angle deviations, and the time gap between accelerator/brake pedal applications and their frequency. The sensor system 120 may receive sensor information that includes, but is not limited to, longitudinal vehicle acceleration and deceleration, average deviation from the speed limit, and vehicle following distance (for example, at different vehicle speeds). Additionally, the sensor system 120 may also receive sensor information associated with the driver's focus. This sensor information may include, but is not limited to, steering input/angle deviations, the time gap between accelerator/brake pedal applications, driver eye monitoring (eyes on the road), average deviation from the speed limit, and vehicle following distance (for example, at different vehicle speeds).
Several factors may influence the measurable driver characteristics. These factors may include, but are not limited to, the three-dimensional map of the road (i.e., the slope/curvature, crossings, etc.), the traffic flow such as the level or density of the traffic, the road surface, the time of day, the driver preview distance/vision, and the number of surrounding objects, for example, in the driver's field of view.
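The measurable characteristics above would typically be aggregated into a feature vector consumed by the learning algorithms. The sketch below shows one possible aggregation; the function name, dictionary keys, and choice of statistics are illustrative assumptions, not the disclosed model.

```python
import statistics

def driver_features(pedal_velocities, speed_limit_devs, following_dists):
    """Aggregate measurable driver characteristics into a feature
    vector for behavior learning (illustrative feature set)."""
    return {
        "mean_pedal_velocity": statistics.mean(pedal_velocities),
        "avg_speed_limit_deviation": statistics.mean(speed_limit_devs),
        "min_following_distance": min(following_dists),
    }
```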
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Moreover, subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The terms “data processing apparatus”, “computing device” and “computing processor” encompass all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multi-tasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
This U.S. patent application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application 62/703,254, filed on Jul. 25, 2018, U.S. Provisional Application 62/703,262, filed on Jul. 25, 2018, and U.S. Provisional Application 62/721,926, filed on Aug. 23, 2018. The disclosures of these prior applications are considered part of the disclosure of this application and are hereby incorporated by reference in their entireties.