This disclosure relates generally to an autonomous driver assistance system of a vehicle, and specifically relates to adaptive longitudinal cruise control and lane change assistance features of an autonomous driver assistance system.
With Autonomous Driver Assistance Systems (ADAS) and Autonomous Vehicles becoming more popular, many companies have started to incorporate ADAS into their products (e.g., vehicles). Highway ADAS (HWA) is one of the most dominant features in ADAS, as it is deployed in a structured environment with less uncertainty and randomness.
For HWA, there are two key components: (1) Adaptive Longitudinal Cruise Control (ALCC) and (2) Lane Change Assistance (LCA). ALCC can be decomposed into two scenarios: velocity keeping and car following. LCA can also be decomposed into two scenarios: lane changing and on-ramp-off-ramp (OROR). Embodiments of the present disclosure provide the following improvements over existing systems. The embodiments increase the planning safety level by improving planning precision and the behavior planner's decision rationale. The embodiments can include an adaptive dynamic function to minimize tracking errors. The embodiments can also include a feasibility-oriented sanity check that can guarantee the planned trajectory is always within a user-defined planning horizon, thereby preventing planning failure. Finally, instead of performing massive sampling, the embodiments can narrow the sampling space of each dimension into a small range through reasonable numerical deduction, which can in turn lower the total computational cost of the ADAS.
In the following description of preferred embodiments, reference is made to the accompanying drawings, which form a part hereof and in which are shown, by way of illustration, specific embodiments that can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the embodiments of this disclosure.
In the embodiments of the disclosure, a motion planner of an HWA is designed to generate a trajectory from time T1 to T2 under initial-state and end-state constraints, as well as with ride-comfort considerations. Jerk, the rate of change of acceleration, usually serves as a key indicator of the comfort of a planned trajectory. To build the motion planner, the key philosophy of the methodology is formed into a convex optimization problem:
Quintic Polynomial Introduction
While numerically solving an optimization problem may consume excessive computational resources, embodiments of the disclosed HWA utilize a closed-form solution provided by a quintic (fifth order) polynomial. Given the planning time horizon, the initial state, and the end state, the quintic polynomial parameters can be computed by solving a 6×6 linear system. By setting each planning time horizon to start from 0, the matrix size can be further reduced to 3×3.
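As an illustrative sketch (not the actual implementation), the reduced 3×3 solve can be written as follows. With the horizon starting at t = 0, the first three coefficients of s(t) = c0 + c1·t + … + c5·t^5 follow directly from the initial position, velocity, and acceleration, leaving a 3×3 system for c3, c4, c5; the function names and the Cramer's-rule solve are assumptions for this sketch:

```python
def quintic_coeffs(T, s0, v0, a0, sT, vT, aT):
    # With the planning horizon starting at t = 0, the first three
    # coefficients follow directly from the initial state.
    c0, c1, c2 = s0, v0, a0 / 2.0
    # The remaining boundary conditions give a 3x3 linear system for c3..c5.
    M = [[T**3,   T**4,    T**5],
         [3*T**2, 4*T**3,  5*T**4],
         [6*T,    12*T**2, 20*T**3]]
    b = [sT - (c0 + c1*T + c2*T**2),
         vT - (c1 + 2*c2*T),
         aT - 2*c2]

    def det3(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    # Solve by Cramer's rule; the system is small and well conditioned
    # for T > 0.
    D = det3(M)
    tail = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = b[r]
        tail.append(det3(Mc) / D)
    return [c0, c1, c2] + tail

def eval_poly(coeffs, t):
    return sum(c * t**k for k, c in enumerate(coeffs))
```

A planner would evaluate the resulting polynomial (and its derivatives) at discrete times to produce the way points of the candidate trajectory.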
The object processor 102 is designed to deal with complicated scenarios in which multiple observed objects exist.
In one embodiment, the object processor 102 can identify each lane by a lane ID and a lateral displacement (d) from the ego vehicle 206's current lane center. The object processor 102 can also receive information about the observed surrounding objects from a sensor fusion module (not shown in
In addition, the object processor 102 can identify a gap point 209 (i.e., a boundary of a gap) by the longitudinal position (s) of the gap point and the longitudinal speed (s_d) of the gap point 209 in the Frenet frame. The object processor 102 can then identify a gap 210 between two observed objects 207, 208 by the left and right boundaries (GapPoint_1, GapPoint_2) 211, 212 of the gap 210 and the longitudinal speed (s_d) of the gap 210 in the Frenet frame.
The object processor 102 can further include a customized buffer range for the gap boundaries, a list of objects from other modules such as the sensor fusion module, and a grouped list of objects (grouped by, for example, lane ID/vector of observed objects) and gaps (grouped by, for example, lane ID/vector of sorted gaps). The sensor fusion module can fuse the data from a number of sensors (e.g., camera, lidar, radar) of the vehicle and provide data about the nearby objects including, for example, their identities, sizes, distances from the ego vehicle, and velocities. According to one embodiment, the object processor 102 can first sort the objects 204, 205, 206, 207, 208 into different groups based on their respective lane IDs. Then, the object processor 102 can sort each group of objects based on their respective longitudinal distances. The object processor 102 can also predict each object's longitudinal and lateral states for a given time. Based on the grouped list of objects, the object processor 102 can calculate the gaps (e.g., gap 210) in each lane and form the grouped list of gaps.
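The grouping and gap-finding steps above can be sketched as follows. This is an illustrative sketch only: the field names, the `Obj`/`Gap` types, and the choice of the mean boundary speed as the gap speed are assumptions, not the actual sensor-fusion interface:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Obj:
    obj_id: int
    lane_id: int
    s: float    # longitudinal position in the Frenet frame
    s_d: float  # longitudinal speed

@dataclass
class Gap:
    rear_s: float   # rear boundary (GapPoint_1), buffer applied
    front_s: float  # front boundary (GapPoint_2), buffer applied
    s_d: float      # gap speed (here: mean of the bounding objects)

def group_and_find_gaps(objects, buffer_m=5.0):
    # Group observed objects by lane ID.
    by_lane = defaultdict(list)
    for o in objects:
        by_lane[o.lane_id].append(o)
    gaps = {}
    for lane, objs in by_lane.items():
        # Sort each group by longitudinal distance.
        objs.sort(key=lambda o: o.s)
        lane_gaps = []
        # A gap lies between consecutive objects, shrunk by the
        # customized buffer range on both boundaries.
        for rear, front in zip(objs, objs[1:]):
            rear_s = rear.s + buffer_m
            front_s = front.s - buffer_m
            if front_s > rear_s:
                lane_gaps.append(Gap(rear_s, front_s,
                                     0.5 * (rear.s_d + front.s_d)))
        gaps[lane] = lane_gaps
    return by_lane, gaps
```

Gaps narrower than twice the buffer range are discarded, which mirrors the idea that a gap must leave usable room for the ego vehicle.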
Referring back to
The object processor 102 and the vehicle state estimator are both in data communication with the mode manager 106. An exemplary operation of the mode manager 106, according to one embodiment of the disclosure, is illustrated in the flow chart of
Both the object processor 102 and the mode manager 106 can be in communication with the behavior planner module 108. According to an embodiment of the disclosure, after receiving the data from the object processor 102 and the mode manager 106, the behavior planner 108 can output the terminal states of the planned trajectory that can be later used to compute the parameters of the quintic/quartic polynomial.
When the mode manager 106 selects the velocity-keeping mode, the anchor point generation module 112 sets up anchor points associated with each lane for sampling and sets the terminal lateral velocity and acceleration to zero. Since velocity keeping (tracking) is not subject to position constraints, the target speed is set based on the user's input.
When the mode manager 106 selects the car-following mode, the anchor point generation module 112 sets up anchor points associated with each lane for sampling and sets the terminal lateral velocity and acceleration to zero. In this mode, the following behavior is subject to position constraints. Hence, the terminal states are set as a target position and a target velocity, where the target position is set as a buffer distance from the followed object (e.g., the front vehicle) and the target velocity is set as the followed object's velocity.
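The terminal-state selection for these two modes can be sketched as below. The function name, the dictionary layout, and the default buffer distance are illustrative assumptions:

```python
def terminal_state(mode, user_target_v=None,
                   front_s=None, front_v=None, buffer_dist=10.0):
    """Terminal (end-state) constraints for the longitudinal planner.

    In both modes the terminal lateral velocity and acceleration are
    zero; only the longitudinal constraints differ.
    """
    if mode == "velocity_keeping":
        # No position constraint; the target speed comes from user input.
        return {"v": user_target_v, "lat_v": 0.0, "lat_a": 0.0}
    if mode == "car_following":
        # Position-constrained: hold a buffer distance behind the
        # followed object and match its speed.
        return {"s": front_s - buffer_dist, "v": front_v,
                "lat_v": 0.0, "lat_a": 0.0}
    raise ValueError(f"unknown mode: {mode}")
```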
When the mode manager 106 selects the merging mode (i.e., the lane-changing mode), the behavior planner 108 splits the lateral trajectory into two segments: during time [0, tmerge], the ego vehicle continues driving in its original lane, where tmerge is the starting time of the merging (lane-changing) action. During time [tmerge, Tsample], the vehicle attempts to merge into the target lane. If tmerge≤0, the ego vehicle can directly initiate merging into the target lane. With the gap information received from the object processor 102, the behavior planner 108 can iterate over each gap and uniformly sample various points in the gap as the target position. The behavior planner 108 can set the gap's speed as the ego vehicle's final speed. Once the target position and final speed are set, the behavior planner 108 can use the car-following logic described above to complete the longitudinal movement of the ego vehicle.
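The gap-sampling step can be sketched as follows; the function name and the default sample count are illustrative assumptions. Each candidate pairs a uniformly sampled target position inside the gap with the gap's speed as the final speed:

```python
def sample_merge_targets(gap_rear_s, gap_front_s, gap_s_d, n_samples=5):
    """Uniformly sample (target_position, final_speed) candidates
    inside a gap; the gap's speed becomes the ego vehicle's final
    speed (illustrative sketch)."""
    if n_samples == 1:
        return [((gap_rear_s + gap_front_s) / 2.0, gap_s_d)]
    step = (gap_front_s - gap_rear_s) / (n_samples - 1)
    return [(gap_rear_s + i * step, gap_s_d) for i in range(n_samples)]
```

Each candidate is then handed to the car-following logic to plan the longitudinal movement into the gap.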
Referring again to
α_init = ρ·α_max·(1 − e^(−(v
In the velocity keeping mode, no lateral sanity check is required.
In the car-following mode, the sanity check layer 110 calculates the distance from the current position to the target position (Δs) and the difference between the current velocity and the target velocity (Δv). It should be noted that using only Δs as the tracking indicator may not be sufficient for a decent tracking convergence time. In one embodiment, the sanity check layer 110 can dynamically buffer the position difference as represented in the formulas below.
Δs′ = (1 + P_s)·Δs
s_virtual = s_curr + Δs′
This essentially reflects an elastic “tracking force” during the car-following process, which can reduce the time it takes the ego vehicle to reach its ideal tracking location.
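The dynamic buffering above can be sketched in a few lines; the function name and the gain value P_s are illustrative assumptions:

```python
def virtual_target(s_curr, s_target, p_gain=0.2):
    """Dynamically buffered tracking target (sketch).

    Implements Δs' = (1 + P_s)·Δs and s_virtual = s_curr + Δs', where
    p_gain stands in for the tuning constant P_s. Overshooting the raw
    target acts like an elastic "tracking force" that pulls the ego
    vehicle toward its ideal tracking location faster.
    """
    delta_s = s_target - s_curr            # Δs
    delta_s_buffered = (1.0 + p_gain) * delta_s  # Δs'
    return s_curr + delta_s_buffered       # s_virtual
```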
In the car-following mode, no lateral check is required.
In the “merging” mode, there is no sanity check required in the lateral direction during time [0, tmerge]. During time [tmerge, Tsample] (when the vehicle is merging into the target lane), the same sanity check described above in the car-following mode can be applied. In the longitudinal direction, again, the same sanity check performed during the car-following mode can be used after the terminal states are determined by the behavior planner 108.
After the sanity checks are completed, the anchor point generation module 112 can narrow down the sampling space as follows. In the velocity-keeping mode, the anchor point generation module 112 determines the sampling anchor point Tsample in time horizon using the formula:
T_sample = |v_target − v_curr| / α_max
Then, the anchor point generation module 112 adjusts T_sample to ensure that [T_sample − Δt, T_sample + Δt] is within [T_min, T_max].
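The velocity-keeping anchor computation and clamping can be sketched as follows; the default limits T_min, T_max, and Δt are assumed tuning values, not values from the disclosure:

```python
def velocity_keeping_horizon(v_target, v_curr, a_max,
                             t_min=2.0, t_max=8.0, dt=0.5):
    """Sampling anchor in the time horizon for velocity keeping.

    T_sample = |v_target - v_curr| / a_max, then adjusted so the whole
    sampling window [T_sample - dt, T_sample + dt] stays inside
    [t_min, t_max] (illustrative sketch).
    """
    t_sample = abs(v_target - v_curr) / a_max
    return min(max(t_sample, t_min + dt), t_max - dt)
```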
In the car-following mode, a constant-acceleration model is adopted, where the ego vehicle is assumed to drive with a constant acceleration during the tracking process. The anchor point generation module 112 can calculate the estimated tracking time using the formula:
T_sample = 2·Δs′ / (v_curr + v_target)
Δs′ can be adjusted to ensure that T_sample is within [T_min, T_max]. Finally, T_sample is adjusted to ensure that [T_sample − Δt, T_sample + Δt] is within [T_min, T_max].
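The car-following estimate can be sketched as below. Under a constant-acceleration model, covering the buffered distance Δs′ while the speed changes linearly between the current and target velocities takes 2·Δs′ divided by their sum; the denominator interpretation and the default limits are assumptions for this sketch:

```python
def car_following_horizon(delta_s_buffered, v_curr, v_target,
                          t_min=2.0, t_max=8.0, dt=0.5):
    """Estimated tracking time for car following (sketch).

    Constant-acceleration model: T_sample = 2 * Δs' / (v_curr + v_target),
    i.e. distance over the average of the two speeds, then clamped so
    [T_sample - dt, T_sample + dt] stays inside [t_min, t_max].
    """
    t_sample = 2.0 * delta_s_buffered / (v_curr + v_target)
    return min(max(t_sample, t_min + dt), t_max - dt)
```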
In the merging mode, after the sanity check is performed by the sanity check layer 110, the same procedure as in the car-following mode is followed for each terminal state with respect to longitudinal travel. With respect to lateral travel, between [0, tmerge], the anchor point is always locked on the original lane with zero target velocity and acceleration. Between [tmerge, Tsample], the same procedure from the car-following mode can be followed.
After anchor point generation, trajectory generation and trajectory evaluation are performed. First, the trajectory generation module 114 can generate the trajectory of the ego vehicle. In the velocity-keeping mode, without position constraints, the trajectory generation module 114 can use a quartic (fourth order) polynomial to generate the trajectory based on the given initial state and target state. In the car-following mode, with position constraints, the trajectory generation module 114 can use a quintic (fifth order) polynomial to generate the trajectory based on the given initial state and target state. In the merging mode, the trajectory can be generated in the same way as in the car-following mode, with respect to both the lateral and longitudinal components of the trajectory.
The trajectory evaluation module 116 can provide trajectory validation. In one embodiment, trajectory validation can be performed in the following order: (a) speed validation, (b) acceleration validation, (c) curvature validation, and (d) collision check validation. In particular, speed validation verifies whether the trajectory under check violates an upper speed limit. Acceleration validation can ensure that the trajectory under check does not contain way points with aggressive acceleration commands, for passenger comfort. Curvature validation checks the curvature along the planned trajectory to make sure that it is smooth enough, without drastic turns. Collision check validation ensures that the planned trajectory is collision free with regard to the surrounding objects. The above processing order can bring significant benefits for runtime performance because it can rule out unnecessary collision checks on invalid trajectories.
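The ordered, short-circuiting validation described above can be sketched generically; the function and check names are illustrative assumptions:

```python
def validate_trajectory(traj, checks):
    """Run validation checks in order of increasing cost and stop at
    the first failure, so an expensive collision check is skipped for
    any trajectory that already violates a cheaper limit (sketch).

    `checks` is an ordered list of (name, predicate) pairs, e.g.
    speed, acceleration, curvature, then collision.
    """
    for name, check in checks:
        if not check(traj):
            return False, name  # report which validation failed
    return True, None
```

For example, with `checks = [("speed", ...), ("acceleration", ...), ("curvature", ...), ("collision", ...)]`, a trajectory that exceeds the speed limit never reaches the collision check.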
Human-like cost function design can be incorporated in trajectory evaluation. That is, a scheme for selecting the best trajectory from the sampled trajectories can be employed, one that best satisfies occupants' comfort and safe-driving standards. For example, the selected trajectory must first be collision free, and then avoid drastic acceleration, frequent velocity changes, etc. Cost function components can include, for example, lateral acceleration, longitudinal acceleration, lateral jerk, longitudinal jerk, max lateral acceleration, max longitudinal acceleration, target longitudinal velocity error, target lateral velocity error, longitudinal position error, lateral position error, and time cost.
In one embodiment, to evaluate each component with less bias and reduce the burden of tuning weights, each component can be normalized to constrain its value between 0 and 1.
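A normalized weighted-sum cost can be sketched as follows. The per-component normalization bounds and the clamping to [0, 1] are assumptions for this sketch; the disclosure only states that each component is normalized to that range:

```python
def trajectory_cost(components, weights, norm_bounds):
    """Weighted sum of cost components, each normalized to [0, 1].

    `components` maps a component name (e.g. "lon_jerk") to its raw
    value; `norm_bounds` maps the same name to an assumed upper bound
    used for normalization. Normalizing before weighting keeps every
    component on the same scale, reducing bias and tuning burden.
    """
    total = 0.0
    for name, value in components.items():
        normalized = min(abs(value) / norm_bounds[name], 1.0)
        total += weights.get(name, 1.0) * normalized
    return total
```

The trajectory with the lowest total cost among the collision-free candidates would then be selected.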
The output of the trajectory evaluation module 116 can be fed into the controller 118, which controls the behavior (e.g., actual trajectory) of the ego vehicle.
Vehicle control system 400 can include one or more cameras 416 capable of capturing image data (e.g., video data) of the vehicle's surroundings. In one embodiment, the one or more cameras 416 can be front facing and capable of detecting objects such as other vehicles in front of the vehicle. Additionally or alternatively, vehicle control system 400 can also include one or more distance sensors 417 (e.g., radar, ultrasonic, and LIDAR) capable of detecting various characteristics of the vehicle's surroundings. Additionally, vehicle control system 400 can include a speed sensor 419 for determining the speed of the vehicle. The camera(s) 416, distance sensor(s) 417, and speed sensor 419 can be part of the ADAS or HWA system of the ego vehicle.
Additionally, vehicle control system 400 can include one or more user interfaces (UIs) 418 configured to receive input from the driver to control the movement of the vehicle. In one embodiment, the UIs 418 can include an accelerator pedal, a brake pedal, and a steering wheel that allow a user (driver) to control the speed, direction, acceleration, and deceleration of the ego vehicle.
Vehicle control system 400 includes an on-board computer 410 that is operatively coupled to the cameras 416, distance sensors 417, speed sensor 419, and UIs 418. The on-board computer 410 is capable of receiving the image data from the cameras and/or outputs from the sensors 417, 419. The on-board computer 410 can also receive outputs from the UIs 418.
In accordance with one embodiment of the disclosure, the on-board computer 410 can be configured to operate the HWA 100 in response to the data/outputs from the camera(s) 416, sensor(s) 417, speed sensor 419, and UIs 418. Additionally, the on-board computer 410 is also capable of setting the vehicle in different operation modes. The different operation modes can include a normal driving mode, in which the vehicle is largely operated manually by the driver, and one or more different levels of autonomous driving modes, in which the vehicle can provide various driving assistances to the driver, including some of the features described in the embodiments of this disclosure.
In some examples, the on-board computer 410 may include, among other modules (not illustrated in
I/O interface 402 may be configured for two-way communication between on-board computer 410 and various components of vehicle control system 400, such as camera(s) 416, distance sensor(s) 417, UIs 418, speed sensor 419, as well as a controller 420. I/O interface 402 may send and receive the data between each of the devices via communication cables, wireless networks, or other communication mediums.
Processing unit 404 may be configured to receive signals and process the signals to determine a plurality of conditions of the operation of the vehicle, for example, through controller 420. For example, processing unit 404 can receive image/video data from camera(s) 416 and/or sensor data from distance sensor(s) 417. The processing unit 404 can determine based on the image/video and sensor data whether there is another object (e.g., vehicle) ahead by analyzing the image/video and sensor data. In some embodiments, the processing unit 404 can determine a distance to other objects. Processing unit 404 can also receive user input (e.g., merge command signal) from UIs 418. Additionally, processing unit 404 can also receive the speed of the vehicle from the speed sensor 419.
Processing unit 404 may also be configured to generate and transmit command signals, via I/O interface 402 to controller 420 in order to actuate the various actuator systems 430 of the vehicle control system 400 as described below. The controller 420 can be the controller 118 of
Storage unit 406 and/or memory module 408 may be configured to store one or more computer programs that may be executed by on-board computer 410 to perform the functions of the system. For example, storage unit 406 and/or memory module 408 may be configured to store instructions that, when executed, enable the ALCC and LCA functions described herein.
Vehicle control system 400 may also include a controller 420 connected to the on-board computer 410 and capable of controlling one or more aspects of vehicle operation, such as performing ALCC and LCA operations using instructions from the onboard computer 410.
In some examples, the controller 420 is connected to one or more actuator systems 430 in the vehicle. The one or more actuator systems 430 can include, but are not limited to, a motor (or engine) 431, battery system 433, steering 435, and brakes 436. The on-board computer 410 can control, via controller 420, one or more of these actuator systems 430 during vehicle operation; for example, to control the speed and direction of the vehicle when the HWA system is engaged, using the motor 431, battery system 433, steering 435, brakes 436, and other actuator systems (not illustrated in
A person skilled in the art can further understand that the various exemplary logic blocks, modules, circuits, and algorithm steps described with reference to the disclosure herein may be implemented as specialized electronic hardware, computer software, or a combination of electronic hardware and computer software. For example, the modules may be implemented by one or more processors executing software instructions stored in a computer-readable storage medium, causing the one or more processors to become special-purpose processors that perform the specialized functions of the modules/units.
The flowcharts and block diagrams in the accompanying drawings show system architectures, functions, and operations of possible implementations of the system and method according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent one module, one program segment, or a part of code, where the module, the program segment, or the part of code includes one or more executable instructions used for implementing specified logic functions. It should also be noted that, in some alternative implementations, functions marked in the blocks may occur in a sequence different from the sequence marked in the drawing. For example, two consecutive blocks can actually be executed substantially in parallel, and sometimes they can be executed in reverse order, depending on the functions involved. Each block in the block diagram and/or flowchart, and a combination of blocks in the block diagram and/or flowchart, may be implemented by a dedicated hardware-based system for executing corresponding functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
As will be understood by those skilled in the art, embodiments of the present disclosure may be embodied as a method, a system, or a computer program product. Accordingly, embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware for allowing specialized components to perform the functions described above. Furthermore, embodiments of the present disclosure may take the form of a computer program product embodied in one or more tangible and/or non-transitory computer-readable storage media containing computer-readable program code. Common forms of non-transitory computer-readable storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same.
Although embodiments of this disclosure have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of embodiments of this disclosure as defined by the appended claims.