TRAJECTORY PLANNING SYSTEM FOR AN AUTONOMOUS VEHICLE WITH A REAL-TIME FUNCTION APPROXIMATOR

Information

  • Patent Application
  • Publication Number
    20240132103
  • Date Filed
    October 05, 2022
  • Date Published
    April 25, 2024
Abstract
A trajectory planning system for an autonomous vehicle includes one or more controllers in electronic communication with one or more external vehicle networks that collect data with respect to one or more moving obstacles located in an environment surrounding the autonomous vehicle. The one or more controllers approximate a set of real-time ego states of the autonomous vehicle by a function approximator, where the function approximator has been trained during a supervised learning process with a set of offline ego states as a ground-truth dataset. The one or more controllers compute a plurality of relative state trajectories for the autonomous vehicle, where the plurality of relative state trajectories avoid intersecting the set of real-time ego states of the autonomous vehicle. The one or more controllers select a trajectory from the plurality of relative state trajectories for the autonomous vehicle, where the autonomous vehicle follows the trajectory while performing a maneuver.
Description
INTRODUCTION

The present disclosure relates to a trajectory planning system that determines a trajectory of an autonomous vehicle based on a real-time approximation of a set of ego states of the autonomous vehicle.


Semi-autonomous and autonomous vehicles are becoming more ubiquitous on the road. An autonomous vehicle may execute various planning tasks such as mission planning, behavior planning, and local planning. In general, a mission planner determines a trajectory or route from an ego vehicle's start position to an end position. A behavior planner focuses on handling moving obstacles and static objects while following any stipulated road rules as the vehicle progresses along the prescribed route determined by the mission planner.


Behavior planners do not currently utilize dynamics-based predictive information to make decisions regarding a vehicle's trajectory. As a result, in some instances a vehicle may not be able to complete a maneuver in time to avoid a moving obstacle during extreme or unexpected events. Some examples of extreme or unexpected events include inclement weather, an actuator becoming non-operational, or suddenly erratic drivers. Traditional approaches may use a kinematic model combined with worst-case accelerations to constrain the headway between vehicles. However, the kinematic model does not consider linear and non-linear tire dynamics, higher-slip driving conditions, or coupled motion between the lateral and longitudinal states of the vehicle.


Thus, while current autonomous vehicles achieve their intended purpose, there is a need in the art for an improved trajectory planning system that ensures that evasive maneuvers always exist and are executable within an operating domain of interest by leveraging higher fidelity dynamics models.


SUMMARY

According to several aspects, a trajectory planning system for an autonomous vehicle is disclosed. The trajectory planning system includes one or more controllers in electronic communication with one or more external vehicle networks that collect data with respect to one or more moving obstacles located in an environment surrounding the autonomous vehicle. The one or more controllers execute instructions to determine a discrete-time relative vehicle state based on an autonomous vehicle dynamics model. The one or more controllers determine, based on the discrete-time relative vehicle state, a position avoidance set representing relative lateral positions and longitudinal positions that the autonomous vehicle avoids while bypassing the one or more moving obstacles when performing a maneuver. The one or more controllers determine a set of offline ego states for which the autonomous vehicle is unable to execute maneuvers without entering the position avoidance set. The one or more controllers approximate, in real-time, a set of real-time ego states of the autonomous vehicle by a function approximator, where the function approximator has been trained during a supervised learning process with the set of offline ego states as a ground-truth dataset. The one or more controllers compute a plurality of relative state trajectories for the autonomous vehicle, where the plurality of relative state trajectories avoid intersecting the set of real-time ego states of the autonomous vehicle. The one or more controllers select a trajectory from the plurality of relative state trajectories for the autonomous vehicle, where the autonomous vehicle follows the trajectory while performing the maneuver.


In another aspect, the function approximator approximates the set of real-time ego states in real-time based on a current position and a current velocity of the autonomous vehicle and the one or more moving obstacles, a speed limit of a roadway that the autonomous vehicle is presently driving along, environmental variables, and road conditions.


In yet another aspect, the one or more controllers select the trajectory by assigning a score to each relative state trajectory for the autonomous vehicle based on one or more properties and selecting the relative state trajectory having the highest score as the trajectory.


In an aspect, the one or more properties include one or more of the following: ride comfort, fuel consumption, timing, and duration.
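As a purely illustrative sketch of this scoring-and-selection step, the following assumes normalized property scores and a weighted-sum score model; the candidate names, weights, and values are hypothetical and not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One relative state trajectory with normalized property scores (0..1)."""
    name: str
    comfort: float   # ride comfort (higher is smoother)
    fuel: float      # fuel consumption (higher is more efficient)
    timing: float    # timing (higher is better timed)
    duration: float  # duration (higher is shorter maneuver)

# Hypothetical weights for combining the properties into one score.
WEIGHTS = {"comfort": 0.4, "fuel": 0.2, "timing": 0.2, "duration": 0.2}

def score(c: Candidate) -> float:
    return (WEIGHTS["comfort"] * c.comfort + WEIGHTS["fuel"] * c.fuel
            + WEIGHTS["timing"] * c.timing + WEIGHTS["duration"] * c.duration)

def select_trajectory(candidates):
    # The controller follows the candidate with the highest score.
    return max(candidates, key=score)

candidates = [
    Candidate("conservative", comfort=0.9, fuel=0.8, timing=0.5, duration=0.4),
    Candidate("moderate", comfort=0.8, fuel=0.7, timing=0.7, duration=0.7),
    Candidate("aggressive", comfort=0.3, fuel=0.5, timing=0.9, duration=0.9),
]
best = select_trajectory(candidates)
```

With these illustrative numbers, the "moderate" candidate scores highest and would be selected as the trajectory.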


In another aspect, the trajectory planning system includes a plurality of sensors in electronic communication with the one or more controllers, where the one or more controllers receive a plurality of dynamic variables as input from the plurality of sensors.


In yet another aspect, the one or more controllers determine the autonomous vehicle dynamics model for the autonomous vehicle based on the plurality of dynamic variables and vehicle chassis configuration information.


In an aspect, the position avoidance set is determined by:


𝒜 = {(es, ed) ∈ R2: es,l ≤ es ≤ es,u, ed,l ≤ ed ≤ ed,u}


where 𝒜 is the position avoidance set, es is a discrete-time relative longitudinal state of the autonomous vehicle with respect to an obstacle, ed is a discrete-time relative lateral state of the autonomous vehicle with respect to the obstacle, es,l is a lower limit of the discrete-time relative longitudinal state es, es,u is an upper limit of the discrete-time relative longitudinal state es, ed,l is a lower limit of the discrete-time relative lateral state ed, and ed,u is an upper limit of the discrete-time relative lateral state ed of the position avoidance set 𝒜.


In another aspect, the one or more controllers determine the plurality of relative state trajectories for the autonomous vehicle based on an initial state of the autonomous vehicle, a final state of the autonomous vehicle, and one or more levels of driving aggression.


In yet another aspect, the one or more levels of driving aggression include a conservative level, a moderate level, and an aggressive level.
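One way to picture the levels of driving aggression is as candidate maneuver profiles that trade maneuver duration against peak lateral acceleration. The sketch below is a hypothetical illustration; the cosine lateral profile, the durations, and the lane-change distance are assumptions, not the disclosed trajectory generator:

```python
import math

# Hypothetical maneuver durations (seconds) per aggression level.
LEVELS = {"conservative": 5.0, "moderate": 3.5, "aggressive": 2.0}

def lateral_profile(d0, df, T, dt=0.1):
    """Smooth lateral displacement from d0 to df over duration T,
    sampled every dt seconds: d(t) = d0 + Δ/2 * (1 - cos(pi*t/T))."""
    n = round(T / dt) + 1
    return [d0 + (df - d0) * 0.5 * (1 - math.cos(math.pi * i / (n - 1)))
            for i in range(n)]

def peak_lateral_accel(d0, df, T):
    # For the cosine profile above, peak |d''(t)| = Δ/2 * (pi/T)^2.
    return abs(df - d0) / 2 * (math.pi / T) ** 2

# Candidate lane-change profiles for a hypothetical 3.5 m lateral offset.
plans = {level: lateral_profile(0.0, 3.5, T) for level, T in LEVELS.items()}
```

Shorter durations (more aggressive levels) demand a larger peak lateral acceleration, which is the trade-off the levels of aggression capture.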


In an aspect, the one or more controllers determine the set of ego states during an offline process based on one of simulated data and experimental data.


In another aspect, the one or more moving obstacles include another vehicle located along a roadway that the autonomous vehicle is driving along.


In yet another aspect, the set of ego states represent vehicle states where the autonomous vehicle is unable to execute maneuvers during a time horizon to avoid the one or more moving obstacles.


In an aspect, the autonomous vehicle dynamics model includes one or more of the following: a linear tire model and a non-linear tire model.


In another aspect, an autonomous vehicle including a trajectory planning system is disclosed. The autonomous vehicle includes a plurality of sensors that determine a plurality of dynamic variables, one or more external vehicle networks that collect data with respect to one or more moving obstacles located in an environment surrounding the autonomous vehicle, and one or more controllers in electronic communication with the one or more external vehicle networks and the plurality of sensors. The one or more controllers execute instructions to determine an autonomous vehicle dynamics model for the autonomous vehicle based on the plurality of dynamic variables and vehicle chassis configuration information. The one or more controllers determine a discrete-time relative vehicle state based on the autonomous vehicle dynamics model. The one or more controllers determine, based on the discrete-time relative vehicle state, a position avoidance set representing relative lateral positions and longitudinal positions that the autonomous vehicle avoids while bypassing the one or more moving obstacles when performing a maneuver. The one or more controllers determine a set of offline ego states for which the autonomous vehicle is unable to execute maneuvers without entering the position avoidance set. The one or more controllers approximate, in real-time, a set of real-time ego states of the autonomous vehicle by a function approximator, where the function approximator has been trained during a supervised learning process with the set of offline ego states as a ground-truth dataset. The one or more controllers compute a plurality of relative state trajectories for the autonomous vehicle, where the plurality of relative state trajectories avoid intersecting the set of real-time ego states of the autonomous vehicle. The one or more controllers select a trajectory from the plurality of relative state trajectories for the autonomous vehicle, where the autonomous vehicle follows the trajectory while performing the maneuver.


In another aspect, the function approximator approximates the set of real-time ego states in real-time based on a current position and a current velocity of the autonomous vehicle and the one or more moving obstacles, a speed limit of a roadway that the autonomous vehicle is presently driving along, environmental variables, and road conditions.


In yet another aspect, the one or more controllers select the trajectory by assigning a score to each relative state trajectory for the autonomous vehicle based on one or more properties and selecting the relative state trajectory having the highest score as the trajectory.


In an aspect, the one or more controllers determine the plurality of relative state trajectories for the autonomous vehicle based on an initial state of the autonomous vehicle, a final state of the autonomous vehicle, and one or more levels of driving aggression.


In another aspect, the autonomous vehicle dynamics model includes one or more of the following: a linear tire model and a non-linear tire model.


In yet another aspect, a method for selecting a trajectory for an autonomous vehicle is disclosed. The method includes determining, by one or more controllers, a discrete-time relative vehicle state based on an autonomous vehicle dynamics model, where the one or more controllers are in electronic communication with one or more external vehicle networks that collect data with respect to one or more moving obstacles located in an environment surrounding the autonomous vehicle. The method includes determining, based on the discrete-time relative vehicle state, a position avoidance set representing relative lateral positions and longitudinal positions that the autonomous vehicle avoids while bypassing the one or more moving obstacles when performing a maneuver. The method also includes determining a set of offline ego states for which the autonomous vehicle is unable to execute maneuvers without entering the position avoidance set. The method further includes approximating, in real-time, a set of real-time ego states of the autonomous vehicle by a function approximator, where the function approximator has been trained during a supervised learning process with the set of offline ego states as a ground-truth dataset. The method further includes computing a plurality of relative state trajectories for the autonomous vehicle, where the plurality of relative state trajectories avoid intersecting the set of real-time ego states of the autonomous vehicle. Finally, the method includes selecting a trajectory from the plurality of relative state trajectories for the autonomous vehicle, where the autonomous vehicle follows the trajectory while performing the maneuver.


In another aspect, the method includes approximating the set of real-time ego states in real-time based on a current position and a current velocity of the autonomous vehicle and the one or more moving obstacles, a speed limit of a roadway that the autonomous vehicle is presently driving along, environmental variables, and road conditions.


Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.



FIG. 1 is a schematic diagram of an autonomous vehicle including the disclosed trajectory planning system, where the trajectory planning system includes one or more controllers in electronic communication with a plurality of sensors, according to an exemplary embodiment;



FIG. 2 is an illustration of the autonomous vehicle and another vehicle driving along a roadway, according to an exemplary embodiment;



FIG. 3 is a block diagram of the one or more controllers shown in FIG. 1, according to an exemplary embodiment; and



FIG. 4 is a process flow diagram illustrating a method for selecting a trajectory for the autonomous vehicle shown in FIG. 1, according to an exemplary embodiment.





DETAILED DESCRIPTION

The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses.


Referring to FIG. 1, an exemplary trajectory planning system 10 for an autonomous vehicle 12 is illustrated. The trajectory planning system 10 ensures that maneuvers to avoid one or more moving obstacles always exist for the autonomous vehicle 12. It is to be appreciated that the autonomous vehicle 12 may be any type of vehicle such as, but not limited to, a sedan, truck, sport utility vehicle, van, or motor home. The autonomous vehicle 12 may be a fully autonomous vehicle including an automated driving system (ADS) for performing all driving tasks or a semi-autonomous vehicle including an advanced driver assistance system (ADAS) for assisting a driver with steering, braking, and/or accelerating.


The trajectory planning system 10 includes one or more controllers 20 in electronic communication with a plurality of sensors 22 configured to monitor data indicative of a dynamic state of the autonomous vehicle 12 and data indicating obstacles located in an environment 52 (FIG. 2) surrounding the autonomous vehicle 12. In the non-limiting embodiment as shown in FIG. 1, the plurality of sensors 22 include one or more wheel speed sensors 30 for measuring an angular wheel speed of one or more wheels 40 of the autonomous vehicle 12, one or more cameras 32, an inertial measurement unit (IMU) 34, a global positioning system (GPS) 36, and LiDAR 38; however, it is to be appreciated that additional sensors may be used as well. The one or more controllers 20 are also in electronic communication with a plurality of vehicle systems 24. In one non-limiting embodiment, the vehicle systems 24 include a brake system 42, a steering system 44, a powertrain system 46, and a suspension system 48; however, it is to be appreciated that other vehicle systems may be included as well. The one or more controllers 20 are also in electronic communication with one or more external vehicle networks 26. The one or more external vehicle networks 26 may include, but are not limited to, cellular networks, dedicated short-range communications (DSRC) networks, and vehicle-to-everything (V2X) networks.



FIG. 2 is an illustration of the autonomous vehicle 12 traveling along a roadway 50 in the environment 52 surrounding the autonomous vehicle 12, where the environment 52 includes one or more moving obstacles 54 disposed along the roadway 50. In the example as shown in FIG. 2, the moving obstacle 54 is another vehicle located along the roadway 50 that the autonomous vehicle 12 is driving along; however, it is to be appreciated that FIG. 2 is merely exemplary in nature. Indeed, the moving obstacle 54 may be any moving object that the autonomous vehicle 12 avoids contacting such as, for example, bicycles, pedestrians, and animals. The one or more controllers 20 (FIG. 1) may receive information regarding the moving obstacle 54 via the one or more external vehicle networks 26. As seen in FIG. 2, a region of interest 60 exists between the moving obstacle 54 and the autonomous vehicle 12. The region of interest 60 represents an area along the roadway 50 where the autonomous vehicle 12 is unable to execute a maneuver during a time horizon while avoiding contact with the moving obstacle 54. In one non-limiting embodiment, the maneuver is an evasive maneuver executed for purposes of avoiding a traffic incident with the moving obstacle 54. When the autonomous vehicle 12 is located outside of the region of interest 60, the autonomous vehicle 12 is able to execute a maneuver to avoid contacting the moving obstacle 54. If the autonomous vehicle 12 is within the region of interest 60, however, it can no longer be guaranteed that a maneuver exists that avoids contact with the moving obstacle 54. This is because once the autonomous vehicle 12 enters the region of interest 60, the ability to avoid obstacles is no longer determined solely by the planning and control algorithms executed by the autonomous vehicle 12, but is instead also influenced by the actions of the moving obstacle 54.


Referring to both FIGS. 1 and 2, the trajectory planning system 10 selects a trajectory 62 that the autonomous vehicle 12 follows, where the trajectory 62 does not intersect the region of interest 60 during the time horizon. As mentioned above, in some embodiments the trajectory 62 may be executed to purposely evade or avoid the moving obstacle 54. The autonomous vehicle 12 follows the trajectory 62 when performing a maneuver while avoiding the one or more moving obstacles 54. It is to be appreciated that in some embodiments, the trajectory 62 may slightly encroach upon the region of interest 60 along its boundaries 66. During the time horizon, the moving obstacle 54 moves as well, where the region of movement created by the moving obstacle 54 is represented by a moving obstacle region 64. When the autonomous vehicle 12 avoids entering the region of interest 60, it follows that the autonomous vehicle 12 also avoids entering the moving obstacle region 64.



FIG. 3 is an illustration of the one or more controllers 20 shown in FIG. 1. The one or more controllers 20 include an offline module 70, a machine learning module 72, a state monitoring system module 74, and a real-time module 76 that selects the trajectory 62 of the autonomous vehicle 12 in real-time as the autonomous vehicle 12 is driving. As explained below, the machine learning module 72 executes one or more machine learning algorithms that approximate a plurality of real-time occupancy sets 𝒪k+1 corresponding to the one or more moving obstacles 54 in the environment 52 and a set of real-time ego states ℬN−k corresponding to the autonomous vehicle 12 that are provided to the real-time module 76, where k represents a time index or time step, and a horizon of interest ranges from a final time N to a current or initial time 0, or k = N−1, . . . , 0. The set of real-time ego states ℬN−k represents vehicle states where the autonomous vehicle 12 is unable to execute maneuvers during a time horizon to avoid the one or more moving obstacles 54. The plurality of real-time occupancy sets 𝒪k+1 bound the one or more moving obstacles 54 located in the environment 52 over the horizon of interest forward in time starting from an initial time 0 to a maximum horizon time Nf.


As explained below, the offline module 70 first determines ground-truth datasets 78, 80 corresponding to a plurality of offline occupancy sets 𝒪k+1 and a set of offline ego states ℬN−k, respectively, during an offline process. It is to be appreciated that the ground-truth datasets 78 represent ideal or expected values of the plurality of offline occupancy sets 𝒪k+1 that are determined for a range of positions and velocities of the one or more moving obstacles 54 as well as varying environmental variables such as, for example, road geometry, inclination, the coefficient of friction based on road conditions, vehicle mass, and vehicle moment of inertia. Some examples of the road conditions include, but are not limited to, road geometry, curvature, the coefficient of friction, and inclination. Similarly, the ground-truth datasets 80 represent expected values of the set of offline ego states ℬN−k that are determined for a range of positions and velocities of the autonomous vehicle 12 and the one or more moving obstacles 54, a speed limit of the roadway 50 (FIG. 2), and by varying the environmental variables and the road conditions.


The machine learning module 72 receives the ground-truth datasets 78, 80 from the offline module 70 as input. The machine learning module 72 trains a function approximator 82 to compute the plurality of offline occupancy sets 𝒪k+1 based on the corresponding ground-truth dataset 78 during a supervised learning process that is conducted offline. Similarly, the machine learning module 72 trains a function approximator 84 to compute the set of offline ego states ℬN−k based on the corresponding ground-truth dataset 80 during a supervised learning process conducted offline. Once the function approximators 82, 84 are trained during the supervised learning process, the machine learning module 72 approximates the plurality of real-time occupancy sets 𝒪k+1 and the set of real-time ego states ℬN−k in real-time by the respective function approximators 82, 84.


The offline module 70 considers a control input vector ukh corresponding to the autonomous vehicle 12 and a control input vector uko corresponding to the one or more moving obstacles 54 located in the environment 52. The control input vectors ukh, uko are constrained based on data saved in the one or more controllers 20 or by inference based on information such as vehicle model, vehicle make, and vehicle type, where h denotes the host vehicle (i.e., the autonomous vehicle 12), and o denotes the moving obstacle 54. The control input vector ukh represents a vector form of the input variables that are used to determine an autonomous vehicle dynamics model, which is described below. Specifically, the input variables for determining the autonomous vehicle dynamics model include a longitudinal acceleration along and a steering wheel angle δ of the autonomous vehicle 12. The input variables, which are in vector form, belong to or are constrained by an admissible input set 𝒰h that captures an upper limit and a lower limit for the longitudinal acceleration along and the steering wheel angle δ. Similarly, the control input vector uko corresponding to the one or more moving obstacles 54 located in the environment 52 represents the vector form of the input variables to the dynamics model of the moving obstacle 54 (i.e., a longitudinal acceleration along and a steering wheel angle δ corresponding to the moving obstacle 54). The input vector of the moving obstacle 54 belongs to or is constrained by an admissible input set 𝒰o that captures an upper limit and a lower limit for the longitudinal acceleration along and the steering wheel angle δ of the moving obstacle 54.
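A minimal sketch of the admissible input set constraint described above: the control inputs (the longitudinal acceleration along and the steering wheel angle δ) are each bounded between a lower limit and an upper limit. The numeric bounds below are illustrative assumptions, not values from the disclosure:

```python
def in_admissible_set(a_long, delta,
                      a_bounds=(-8.0, 3.0),       # m/s^2, hypothetical limits
                      delta_bounds=(-0.6, 0.6)):  # rad, hypothetical limits
    """True if the control input vector (a_long, delta) lies in the set."""
    return (a_bounds[0] <= a_long <= a_bounds[1]
            and delta_bounds[0] <= delta <= delta_bounds[1])

def project_to_admissible_set(a_long, delta,
                              a_bounds=(-8.0, 3.0),
                              delta_bounds=(-0.6, 0.6)):
    """Clamp a control input vector onto the admissible input set."""
    a = min(max(a_long, a_bounds[0]), a_bounds[1])
    d = min(max(delta, delta_bounds[0]), delta_bounds[1])
    return a, d
```

The same box-constraint structure serves for both 𝒰h (host) and 𝒰o (obstacle), with different bound values per vehicle.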


Referring to FIGS. 1-3, the offline module 70 receives a plurality of dynamic variables 90 as input from the one or more sensors 22, where the dynamic variables 90 each represent an operating parameter indicative of a dynamic state of the autonomous vehicle 12 and driving environment conditions. Some examples of driving environment conditions include, but are not limited to, type of road, road surface, and weather conditions. It is to be appreciated that the dynamic variables 90 are determined based on experimental data or, alternatively, simulated data. The offline module 70 also receives vehicle chassis configuration information 92 as input, where the vehicle chassis configuration information 92 indicates information such as, but not limited to, number of wheels, number of driven wheels, and number of steered wheels. The offline module 70 determines an autonomous vehicle dynamics model for the autonomous vehicle 12 based on the dynamic variables 90, the vehicle chassis configuration information 92 for the autonomous vehicle 12, and the control input vector ukh for the autonomous vehicle 12.


The offline module 70 also receives dynamic variables 98 and vehicle chassis configuration information 100 regarding one or more vehicles located in the environment 52 surrounding the autonomous vehicle 12 from the one or more external vehicle networks 26. The offline module 70 determines the obstacle dynamics model based on the dynamic variables 98, the vehicle chassis configuration information 100, and the control input vector uko corresponding to the one or more moving obstacles 54 located in the environment 52. It is to be appreciated that the autonomous vehicle dynamics model and the obstacle dynamics model are both expressed in terms of the Frenet coordinate system. Both the autonomous vehicle dynamics model and the obstacle dynamics model are based on the dynamic modeling approach, and not the kinematic modeling approach. It is to be appreciated that the autonomous vehicle dynamics model and the obstacle dynamics model include linear and/or non-linear tire models as well as coupling between lateral states and longitudinal states of the relevant vehicle.


The offline module 70 determines a discrete-time relative vehicle state e based on the autonomous vehicle dynamics model for the autonomous vehicle 12 and the obstacle dynamics model for the one or more moving obstacles 54 located in the environment 52, where the discrete-time relative vehicle state e represents a function that predicts a relative state of the autonomous vehicle 12 at the next time step with respect to the moving obstacle 54. It is to be appreciated that the lateral displacements and the longitudinal displacements of the discrete-time relative vehicle state e are expressed in the Frenet coordinate system.


The offline module 70 determines the discrete-time relative vehicle state e by first determining a difference in position and velocity between the autonomous vehicle dynamics model corresponding to the autonomous vehicle 12 and the obstacle dynamics model of the one or more moving obstacles 54 located in the environment 52 to determine a relative nonlinear dynamic model. As mentioned above, the autonomous vehicle dynamics model for the autonomous vehicle 12 and the obstacle dynamics model for the one or more moving obstacles 54 located in the environment 52 both include a linear tire model, a non-linear tire model, or both. Accordingly, the non-linear tire model of the relative nonlinear dynamic model is linearized about an operating condition of interest. One example of an operating condition of interest is constant speed along a centerline of a lane; however, it is to be appreciated that other operating conditions may be used as well. Linearizing the non-linear tire model results in continuous-time plant and input matrices associated with the relative nonlinear dynamic model. The linearized relative nonlinear dynamics model is then discretized from a continuous-time model into a discrete-time model. In an embodiment, the linearized relative nonlinear dynamics model is discretized by either a zero-order hold or a first-order hold, which yields the discrete-time relative vehicle state e; however, other discretization approaches may be used as well.
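The linearize-then-discretize step can be sketched as follows, assuming the linearization has already produced continuous-time plant and input matrices A and B. A zero-order hold is applied via the standard augmented-matrix exponential; the double-integrator matrices in the example are an illustrative stand-in for the actual relative dynamics, not the disclosed model:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for small M*dt)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def zoh_discretize(A, B, dt):
    """Zero-order-hold discretization: exp([[A, B], [0, 0]] * dt) holds the
    discrete-time plant matrix Ad in its top-left block and the discrete-time
    input matrix Bd in its top-right block."""
    n, m = A.shape[0], B.shape[1]
    aug = np.zeros((n + m, n + m))
    aug[:n, :n] = A * dt
    aug[:n, n:] = B * dt
    M = expm_series(aug)
    return M[:n, :n], M[:n, n:]

# Example: double-integrator relative longitudinal dynamics, 0.1 s sample time.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, 0.1)
```

For this example, Ad = [[1, 0.1], [0, 1]] and Bd = [[0.005], [0.1]], the familiar discrete double integrator.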


The offline module 70 then determines a position avoidance set 𝒜 and an occupancy set 𝒪, which are both expressed in Frenet frame coordinates based on the discrete-time relative vehicle state e. The position avoidance set 𝒜 represents relative lateral positions and longitudinal positions that the autonomous vehicle 12 avoids in order to bypass or avoid contact with the moving obstacle 54 while performing a maneuver. In an embodiment, the position avoidance set 𝒜 is expressed by Equation 1, which is:


𝒜 = {(es, ed) ∈ R2: es,l ≤ es ≤ es,u, ed,l ≤ ed ≤ ed,u}  Equation 1


where es is a discrete-time relative longitudinal state of the autonomous vehicle 12 with respect to an obstacle, ed is a discrete-time relative lateral state of the autonomous vehicle 12 with respect to the obstacle, es,l is a lower limit of the discrete-time relative longitudinal state es, es,u is an upper limit of the discrete-time relative longitudinal state es, ed,l is a lower limit of the discrete-time relative lateral state ed, and ed,u is an upper limit of the discrete-time relative lateral state ed of the avoidance set 𝒜. It is to be appreciated that the limits merely characterize the avoidance set 𝒜, and not the lateral or longitudinal states of the autonomous vehicle 12 while driving.
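Equation 1 defines the position avoidance set as an axis-aligned box in the relative Frenet states (es, ed), so membership reduces to four bound checks. A sketch with illustrative bounds (the numeric limits are assumptions, not disclosed values):

```python
def in_avoidance_set(es, ed,
                     es_l=-10.0, es_u=10.0,   # relative longitudinal bounds (m)
                     ed_l=-2.0, ed_u=2.0):    # relative lateral bounds (m)
    """True if the relative state (es, ed) lies inside the avoidance set of
    Equation 1, i.e. es_l <= es <= es_u and ed_l <= ed <= ed_u."""
    return es_l <= es <= es_u and ed_l <= ed <= ed_u
```

A planner using this set would reject any candidate trajectory whose predicted relative states make this test return True.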


The occupancy set 𝒪 is centered at the moving obstacle 54 and represents lateral positions and longitudinal positions that bound the one or more moving obstacles 54 located in the environment 52 (FIG. 2) at the current or an initial time. In an embodiment, the occupancy set 𝒪 is expressed in Equation 2 as:


𝒪 = {(s, d) ∈ R2: sl ≤ s ≤ su, dl ≤ d ≤ du}  Equation 2


where s is a longitudinal position of the one or more vehicles, d is the lateral position of the one or more vehicles, sl is the lower limit of the longitudinal position, su is the upper limit of the longitudinal position, dl is the lower limit of the lateral position, and du is the upper limit of the lateral position of the occupancy set 𝒪. Referring to both FIGS. 2 and 3, a union ∪k=0N 𝒪k of the occupancy sets along the horizon of interest is represented by the moving obstacle region 64.
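Because each occupancy set in Equation 2 is an axis-aligned box in (s, d), the moving obstacle region (the union of the per-step boxes over the horizon) can be sketched as a membership test over a list of boxes; the box values below are illustrative assumptions:

```python
def in_box(s, d, box):
    """True if the position (s, d) lies inside one occupancy box
    (sl, su, dl, du), per Equation 2."""
    sl, su, dl, du = box
    return sl <= s <= su and dl <= d <= du

def in_moving_obstacle_region(s, d, boxes):
    """(s, d) lies in the moving obstacle region if it is inside the
    occupancy set for any time step k along the horizon."""
    return any(in_box(s, d, b) for b in boxes)

# Illustrative horizon: obstacle advancing ~1.5 m per step in s, holding a
# constant lane position d; each box is 5 m long and 2 m wide.
boxes = [(k * 1.5, k * 1.5 + 5.0, -1.0, 1.0) for k in range(5)]
```

A trajectory that keeps every planned position outside this union stays clear of the moving obstacle region 64.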


The offline module 70 then determines the ground-truth datasets 78, which represent expected values of the plurality of offline occupancy sets custom-characterk+1 over the horizon of interest from the initial time 0 to the maximum horizon time Nf for a range of positions and velocities of the one or more moving obstacles 54 and by varying the environmental variables such as road geometry, inclination, and the coefficient of friction based on the road conditions. The occupancy set at the initial time custom-character0 is equal to the occupancy set custom-character. For each time step, an individual occupancy set custom-characterk+1 that is part of the ground-truth dataset 78 is determined based on Equation 3, which is:






custom-character
k+1=projP({xk+1custom-character|∃ukocustom-charactero, xk+1=Akxk+Bkh+Bkouko})   Equation 3


where P represents an admissible position subspace (i.e., the allowable driving area or road geometry), custom-character represents admissible states of the moving obstacle 54, custom-charactero is the admissible input sets of the one or more vehicles located in the environment 52, xk+1 represents a state vector of the moving obstacle 54, Ak is a continuous-time plant matrix of the linearized relative dynamics model, Bkh is a continuous-time input matrix of the autonomous vehicle 12 with respect to the time step k, and Bko is a continuous-time input matrix of the one or more moving obstacles 54 in the environment 52 with respect to the time step k.
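A minimal sketch of the forward step in Equation 3 follows: the successor states of an occupancy set under extreme admissible obstacle inputs are computed and projected onto position. The double-integrator dynamics, time step, and input bounds are illustrative assumptions, not the disclosed model:

```python
# Illustrative sketch of Equation 3: propagate an occupancy set one step
# forward by enumerating extreme admissible obstacle inputs, then project
# the reachable states onto the position subspace P. The dynamics and
# input bounds here are hypothetical (a 1-D double integrator).
import itertools

DT = 0.1
A = [[1.0, DT], [0.0, 1.0]]     # discrete plant matrix (position, velocity)
B = [0.5 * DT * DT, DT]         # input matrix for an acceleration input
U_OBSTACLE = (-3.0, 3.0)        # admissible obstacle inputs (m/s^2), assumed

def step(x, u):
    s, v = x
    return (A[0][0] * s + A[0][1] * v + B[0] * u,
            A[1][0] * s + A[1][1] * v + B[1] * u)

def propagate_occupancy(states):
    """One forward step: all successors under extreme inputs, then the
    position projection as an interval (a corner-based approximation)."""
    succ = [step(x, u) for x, u in itertools.product(states, U_OBSTACLE)]
    positions = [s for s, _ in succ]
    return succ, (min(positions), max(positions))

# Corner states of an initial occupancy interval [0, 5] m at 10 m/s.
corners = [(0.0, 10.0), (5.0, 10.0)]
succ, proj = propagate_occupancy(corners)
print(proj)  # widened position interval after one step
```

The projection step mirrors projP in Equation 3: only the position coordinates of the reachable states are retained.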


The machine learning module 72 receives the ground-truth datasets 78, which represent the expected values of the plurality of offline occupancy sets custom-characterk+1 computed forwards in time over the horizon of interest from the initial time 0 to the maximum horizon time Nf for a range of positions and velocities of the one or more moving obstacles 54 as well as varying environmental variables from the offline module 70. The machine learning module 72 trains the function approximator 82 to compute the plurality of offline occupancy sets custom-characterk+1 based on the corresponding ground-truth dataset 78 during the supervised learning process, which is conducted offline. Specifically, the machine learning module 72 is trained by first computing the offline occupancy sets custom-characterk+1, which each correspond to a specific position and velocity of the one or more moving obstacles 54 for a specific set of environment variables. The machine learning module 72 compares the offline occupancy sets custom-characterk+1 with corresponding ground-truth data values and adjusts computation of the offline occupancy sets custom-characterk+1 until a difference between successive computations of the offline occupancy sets custom-characterk+1 is less than an occupancy convergence criterion. The occupancy convergence criterion represents a threshold difference between a successive computation of a particular offline occupancy set custom-characterk+1 and a respective value that is part of the ground-truth dataset 78. Once the function approximator 82 is trained during the supervised learning process, the machine learning module 72 approximates the plurality of real-time occupancy sets custom-characterk+1 in real-time by the respective function approximator 82 based on a current position and a current velocity of the one or more moving obstacles 54 and current environmental variables.
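The supervised learning loop described above may be sketched as follows. For tractability this sketch uses a toy linear approximator and synthetic ground truth (an occupancy limit that grows with obstacle speed); the real function approximator, features, and data are not specified here and are assumptions:

```python
# Illustrative sketch of the supervised training loop: fit a (toy) linear
# function approximator to ground-truth occupancy-box limits until
# successive computations change by less than a convergence criterion.
# The approximator form, learning rate, and data are all hypothetical.

# Toy ground truth: occupancy upper limit s_u grows with obstacle speed v.
data = [(v, 5.0 + 0.8 * v) for v in (5.0, 10.0, 15.0, 20.0)]

w, b = 0.0, 0.0          # linear approximator: s_u ≈ w*v + b
lr = 1e-3
CONVERGENCE = 1e-6       # stand-in for the occupancy convergence criterion

prev_preds = [0.0] * len(data)
for epoch in range(100_000):
    for v, target in data:              # gradient step toward ground truth
        err = (w * v + b) - target
        w -= lr * err * v
        b -= lr * err
    preds = [w * v + b for v, _ in data]
    if max(abs(p - q) for p, q in zip(preds, prev_preds)) < CONVERGENCE:
        break                           # successive computations converged
    prev_preds = preds

print(round(w, 2), round(b, 2))  # recovered slope and intercept
```

Once trained, the fitted parameters can be evaluated in real time for a current obstacle speed, which is the point of replacing the offline set computation with an approximator.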


The offline module 70 also determines the ground-truth datasets 80, which represent expected values of the set of offline ego states custom-characterN−k over the horizon of interest from the final time N to a current or initial time 0, or k=N−1, . . . , 0 for a set of given parameters. The set of given parameters include variables such as various positions and speeds of the autonomous vehicle 12 and the one or more moving obstacles 54, speed limits, environmental variables, and the road conditions. The set of offline ego states custom-characterN−k represent states where the autonomous vehicle 12 is unable to execute the maneuver without entering the position avoidance set custom-character over the horizon of interest for the set of given parameters. At the final time N, the individual set of ego states custom-characterN is equal to the position avoidance set custom-character. For each time step, an individual offline ego state custom-characterN−k that is part of the ground-truth dataset 80 is determined based on Equation 4, which is:






custom-characterN−k={ek+1∈ε|∀ukhcustom-characterh, ∃ukocustom-charactero, Akek+Bkhukh+Bkoukocustom-characterN−k+1}  Equation 4


where ek+1 is the discrete-time relative vehicle state at time step k+1, ε is a set of admissible relative position and velocity states between the autonomous vehicle 12 and the moving obstacle 54, custom-characterh, custom-charactero are the admissible input sets of the autonomous vehicle 12 or the one or more moving obstacles 54 located in the environment 52, respectively, that represent combinations of the longitudinal acceleration and the steering wheel angle δ, and ek is the discrete-time relative vehicle state at the time step k.
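The backward recursion in Equation 4 may be sketched on a coarse one-dimensional grid: a relative state is "unsafe" at step N−k if every ego input can be countered by some obstacle input that drives the next relative state back into the unsafe set. The grid, dynamics, and input sets below are hypothetical simplifications:

```python
# Illustrative sketch of Equation 4 on a 1-D grid of relative positions:
# a state is kept in the unsafe set if, for ALL ego inputs, SOME obstacle
# input maps the state back into the previous unsafe set. The dynamics
# (relative displacement proportional to the input difference), grid, and
# input sets are hypothetical; the obstacle here has more input authority.
DT = 1.0
U_EGO = (-2.0, 0.0, 2.0)        # admissible ego accelerations (assumed)
U_OBS = (-4.0, 0.0, 4.0)        # admissible obstacle accelerations (assumed)
GRID = [float(e) for e in range(-20, 21)]        # relative positions (m)
AVOID = {e for e in GRID if -5.0 <= e <= 5.0}    # terminal set E_N

def nearest(e):
    return min(GRID, key=lambda g: abs(g - e))

def step_back(unsafe):
    """One backward step: keep states where no ego input escapes."""
    new_unsafe = set()
    for e in GRID:
        trapped = all(
            any(nearest(e + 0.5 * DT * DT * (uo - uh)) in unsafe
                for uo in U_OBS)
            for uh in U_EGO
        )
        if trapped:
            new_unsafe.add(e)
    return new_unsafe

unsafe = AVOID
for _ in range(3):              # E_{N-1}, E_{N-2}, E_{N-3}
    unsafe = step_back(unsafe)
print(sorted(unsafe))           # states from which the maneuver fails
```

Because the obstacle's input set dominates the ego's in this toy setup, the unsafe set grows by one grid cell per backward step, illustrating why states near the avoidance set become unreachable-safely earlier in the horizon.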


The machine learning module 72 receives the ground-truth datasets 80 representing the set of offline ego states custom-characterN−k computed backwards over the horizon of interest from the final time N to a current or initial time 0, or k=N−1, . . . , 0, for which the autonomous vehicle 12 is unable to execute the maneuver without entering the position avoidance set custom-character over the horizon of interest for the set of given parameters from the offline module 70. The machine learning module 72 trains the function approximator 84 to compute the plurality of offline ego states custom-characterN−k based on the corresponding ground-truth dataset 80 during the supervised learning process, which is conducted offline. Specifically, the machine learning module 72 is trained by first computing the offline ego states custom-characterN−k, which each correspond to a specific position and velocity of the autonomous vehicle 12 and the one or more moving obstacles 54 for a specific set of environment variables. The machine learning module 72 compares the offline ego states custom-characterN−k with corresponding ground-truth data values 80 and adjusts computation of the ego states custom-characterN−k until a difference between successive computations of the ego states custom-characterN−k is less than an ego convergence criterion. The ego convergence criterion represents a threshold difference between a successive computation of a particular offline ego state custom-characterN−k and a respective value that is part of the ground-truth dataset 80. Once the function approximator 84 is trained during the supervised learning process, the machine learning module 72 approximates the set of real-time ego states custom-characterN−k in real-time by the respective function approximator 84 based on a current position and a current velocity of the autonomous vehicle 12 and the one or more moving obstacles 54, the speed limit of the roadway 50 (FIG. 2), the environmental variables, and the road conditions.


The state monitoring system module 74 estimates the vehicle state 120, where the vehicle state 120 indicates a current position and velocity of the autonomous vehicle 12. The state monitoring system module 74 also estimates an obstacle state 122, where the obstacle state 122 indicates a current position and velocity of the one or more moving obstacles 54 located within the environment 52. It is to be appreciated that the vehicle state 120 and the obstacle state 122 are both expressed in the Frenet coordinate system. The vehicle state 120 and the obstacle state 122 are both estimated based on any estimation technique such as, for example, linear and non-linear Kalman filters or object detection and tracking systems.
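As one example of the estimation techniques mentioned (which the disclosure leaves open), a linear Kalman filter tracking position and velocity from noisy position measurements may be sketched as follows. The noise levels, time step, and constant-velocity model are illustrative assumptions:

```python
# Illustrative sketch of state estimation: a 1-D linear Kalman filter
# recovering position and velocity from noisy position measurements.
# The model, noise covariances, and measurement stream are hypothetical.
import numpy as np

DT = 0.1
F = np.array([[1.0, DT], [0.0, 1.0]])   # constant-velocity plant model
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-3 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.25]])                  # measurement noise covariance (assumed)

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
P = np.eye(2)
true_v = 10.0                            # true obstacle speed (hypothetical)
for k in range(200):
    true_s = true_v * DT * (k + 1)
    z = np.array([true_s + rng.normal(0.0, 0.5)])
    x, P = kf_step(x, P, z)
print(x.round(1))  # estimated [position, velocity] near the truth
```

Non-linear variants (extended or unscented filters) or perception-based tracking would replace this linear sketch in practice, as the passage above notes.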


The respective function approximator 82 of the machine learning module 72 approximates the plurality of real-time occupancy sets custom-characterk+1 in real-time based on a current position and a current velocity of the one or more moving obstacles 54 and the environmental variables. Similarly, the function approximator 84 of the machine learning module 72 approximates the set of real-time ego states custom-characterN−k in real-time based on a current position and a current velocity of the autonomous vehicle 12 and the one or more moving obstacles 54, the speed limit of the roadway 50 (FIG. 2), the environmental variables, and the road conditions.


The real-time module 76 determines a set of trajectory constraints based on the plurality of real-time occupancy sets custom-characterk+1, the set of real-time ego states custom-characterN−k, the speed limit of the roadway 50 that the autonomous vehicle 12 is presently traveling along (seen in FIG. 2), the environmental variables, and the road conditions. The real-time module 76 then applies the trajectory constraints pairwise for the autonomous vehicle 12 and each of the one or more moving obstacles 54 located within the environment 52 to determine the plurality of real-time occupancy sets custom-characterk+1 and the set of real-time ego states custom-characterN−k based on a position and a velocity of the autonomous vehicle 12 and the one or more moving obstacles 54 located in the environment.


The real-time module 76 computes the plurality of relative state trajectories ph(t) for the autonomous vehicle 12 to determine a maneuver that avoids the region of interest 60 (FIG. 2). In an embodiment, the maneuver is an evasive maneuver that is executed for purposes of avoiding a traffic incident with the moving obstacle 54, and therefore avoids the moving obstacle region 64 (FIG. 2). In the event the real-time module 76 determines a maneuver to avoid the region of interest 60, the plurality of relative state trajectories ph(t) for the autonomous vehicle 12 are computed based on an initial state of the autonomous vehicle 12, a final state of the autonomous vehicle 12, and one or more levels of driving aggression, where the plurality of relative state trajectories ph(t) avoid intersecting the set of real-time ego states custom-characterN−k. As mentioned above, in embodiments the relative state trajectories ph(t) may slightly encroach the region of interest 60 along its boundaries 66 (FIG. 2). In another embodiment, if the real-time module 76 determines an evasive maneuver is required, then the plurality of relative state trajectories ph(t) for the autonomous vehicle 12 are computed based on the initial state of the autonomous vehicle 12, the final state of the autonomous vehicle 12, and the one or more levels of driving aggression, where the plurality of relative state trajectories ph(t) avoid intersecting the plurality of real-time occupancy sets custom-characterk+1.


In an embodiment, the one or more levels of driving aggression include three levels: a conservative level, a moderate level, and an aggressive level. It is to be appreciated, however, that fewer or more levels of aggression may be used as well. The relative state trajectories ph(t) are calculated based on a multi-objective optimization between cumulative jerk and driving aggression based on the initial state of the autonomous vehicle 12 and the final state of the autonomous vehicle 12. The real-time module 76 also computes a trajectory po(t) of the moving obstacle 54 based on the same conditions for linearizing the nonlinear relative dynamics model, where the trajectory po(t) represents the operating point about which the motion of the moving obstacle 54 is linearized.
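The trade-off between cumulative jerk and aggression can be sketched with quintic polynomial candidates, where the level of aggression sets the maneuver duration (shorter duration, higher jerk cost). The quintic parameterization, durations, and lane-change boundary conditions below are illustrative assumptions, not the disclosed optimization:

```python
# Illustrative sketch: one quintic lateral candidate per aggression level,
# each matching the same rest-to-rest lane-change boundary conditions but
# with a duration T set by the aggression level. Durations and the 3.5 m
# lateral offset are hypothetical.
import numpy as np

def quintic(s0, v0, a0, sf, vf, af, T):
    """Coefficients of s(t)=c0+c1 t+...+c5 t^5 matching both endpoints."""
    A = np.array([
        [T**3,   T**4,    T**5],
        [3*T**2, 4*T**3,  5*T**4],
        [6*T,    12*T**2, 20*T**3],
    ])
    b = np.array([
        sf - (s0 + v0 * T + 0.5 * a0 * T**2),
        vf - (v0 + a0 * T),
        af - a0,
    ])
    c3, c4, c5 = np.linalg.solve(A, b)
    return np.array([s0, v0, 0.5 * a0, c3, c4, c5])

def cumulative_jerk(c, T, n=200):
    """Integral of squared jerk along the trajectory (trapezoidal rule)."""
    t = np.linspace(0.0, T, n)
    jerk = 6 * c[3] + 24 * c[4] * t + 60 * c[5] * t**2
    dt = T / (n - 1)
    return float(np.sum(0.5 * (jerk[:-1]**2 + jerk[1:]**2)) * dt)

costs = {}
for label, T in [("conservative", 8.0), ("moderate", 5.0), ("aggressive", 3.0)]:
    c = quintic(0.0, 0.0, 0.0, 3.5, 0.0, 0.0, T)   # 3.5 m lateral offset
    costs[label] = cumulative_jerk(c, T)
    print(label, round(costs[label], 3))
```

For a rest-to-rest quintic the squared-jerk cost scales roughly as 1/T^5, so the aggressive (shortest-duration) candidate incurs the highest cumulative jerk, which is the trade-off the multi-objective optimization balances.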


The real-time module 76 then selects the trajectory 62 by assigning a score to each relative state trajectory ph(t) for the autonomous vehicle 12 based on one or more properties, and then selecting the trajectory 62 based on the score of each relative state trajectory ph(t). Specifically, the real-time module 76 selects the relative state trajectory ph(t) having the highest score as the trajectory 62. The one or more properties represent characteristics such as, but not limited to, ride comfort, fuel consumption, timing, and duration. In one embodiment, the real-time module 76 may discard any relative state trajectories ph(t) for the autonomous vehicle 12 that fall within the set of real-time ego states custom-characterN−k. The real-time module 76 may then select the relative state trajectory ph(t) based on the score assigned to each relative state trajectory ph(t). Referring to both FIGS. 2 and 3, the autonomous vehicle 12 follows the trajectory 62 while performing a maneuver while avoiding the one or more moving obstacles 54.
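The discard-then-score selection step may be sketched as follows; the candidate names, property values, and weights are hypothetical placeholders:

```python
# Illustrative sketch of trajectory selection: discard candidates that
# intersect the unsafe ego states, score the remainder on weighted
# properties, and pick the highest score. All values are hypothetical.
candidates = [
    {"name": "t1", "intersects_unsafe": False,
     "comfort": 0.90, "fuel": 0.7, "duration": 0.6},
    {"name": "t2", "intersects_unsafe": True,   # falls in unsafe set
     "comfort": 0.95, "fuel": 0.9, "duration": 0.9},
    {"name": "t3", "intersects_unsafe": False,
     "comfort": 0.60, "fuel": 0.8, "duration": 0.9},
]
WEIGHTS = {"comfort": 0.5, "fuel": 0.2, "duration": 0.3}  # assumed weights

def score(c):
    """Weighted sum over the scored properties."""
    return sum(WEIGHTS[p] * c[p] for p in WEIGHTS)

# Discard any candidate intersecting the real-time ego states, then pick
# the highest-scoring survivor.
feasible = [c for c in candidates if not c["intersects_unsafe"]]
best = max(feasible, key=score)
print(best["name"], round(score(best), 3))
```

Note that t2 scores highest on raw properties but is discarded for safety before scoring ever matters, reflecting the ordering of the steps in the passage above.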



FIG. 4 is a process flow diagram illustrating a method 200 for determining the trajectory 62 of the autonomous vehicle 12 by the trajectory planning system 10. Referring generally to FIGS. 1-4, the method 200 may begin at block 202. In block 202, the offline module 70 of the one or more controllers 20 determines the autonomous vehicle dynamics model for the autonomous vehicle 12 based on the dynamic variables 90 and the vehicle chassis configuration information 92 for the autonomous vehicle 12 (FIG. 2). The method 200 may then proceed to block 204.


In block 204, the offline module 70 of the one or more controllers 20 determines the discrete-time relative vehicle state e based on the autonomous vehicle dynamics model for the autonomous vehicle 12. The method 200 may then proceed to block 206.


In block 206, the offline module 70 of the one or more controllers 20 determines, based on the discrete-time relative vehicle state e, the position avoidance set custom-character representing relative lateral positions and longitudinal positions that the autonomous vehicle 12 avoids in order to bypass or avoid contact with the moving obstacle 54 while performing a maneuver. The method 200 may then proceed to block 208.


In block 208, the offline module 70 of the one or more controllers 20 determines the ground-truth datasets 80 corresponding to the set of offline ego states custom-characterN−k for which the autonomous vehicle 12 is unable to execute the maneuver without entering the position avoidance set custom-character over the horizon of interest for the set of given parameters. The method 200 may then proceed to block 210.


In block 210, the machine learning module 72 of the one or more controllers 20 trains the function approximator 84 to compute the plurality of offline ego states custom-characterN−k based on the corresponding ground-truth occupancy set 80 during the supervised learning process. It is to be appreciated that blocks 202-210 are performed during an offline process. The method 200 may then proceed to block 212.


In block 212, the real-time module 76 of the one or more controllers 20 approximates, in real-time, the set of real-time ego states custom-characterN−k of the autonomous vehicle 12 by the function approximator 84, where the function approximator 84 has been trained during the supervised learning process with the set of offline ego states custom-characterN−k representing a ground-truth dataset. The method 200 may then proceed to block 214.


In block 214, the real-time module 76 of the one or more controllers 20 computes the relative state trajectories ph(t) for the autonomous vehicle 12, where the relative state trajectories ph(t) avoid intersecting the set of real-time ego states custom-characterN−k. The method 200 may then proceed to block 216.


In block 216, the real-time module 76 of the one or more controllers 20 selects a trajectory 62 from the plurality of relative state trajectories ph(t) for the autonomous vehicle 12, where the autonomous vehicle 12 follows the trajectory 62 while performing a maneuver. Specifically, the real-time module 76 determines the trajectory 62 in blocks 216A-216D. In block 216A, the real-time module 76 assigns a score to each relative state trajectory ph(t) for the autonomous vehicle 12 based on one or more properties. In decision block 216B, if a particular relative state trajectory ph(t) has the highest score of all the relative state trajectories ph(t), then in block 216C the real-time module 76 selects the particular relative state trajectory ph(t) as the trajectory 62. Otherwise, the real-time module 76 does not select the particular relative state trajectory ph(t) as the trajectory 62.


Referring generally to the figures, the disclosed trajectory planning system provides various technical effects and benefits. Specifically, the disclosure provides a methodology and architecture that ensures maneuvers always exist and are executable by the autonomous vehicle within an operating domain of interest. It is to be appreciated that the trajectory planning system may be enhanced by applying external vehicle networks (such as V2X) to communicate with other moving obstacles located within the environment surrounding the autonomous vehicle. In embodiments, information regarding downstream road conditions may be provided to the trajectory planning system from other vehicles that are connected to the external network. Current approaches may rely upon the kinematic model to determine the headway between vehicles; however, the kinematic model does not consider linear and non-linear tire dynamics or coupling between the motion of the lateral and longitudinal states of the vehicle. In contrast, the disclosed trajectory planning system captures linear as well as non-linear tire dynamics, as well as coupled lateral and longitudinal motion of the vehicle. Finally, it is to be appreciated that the disclosed trajectory planning system also has the ability to re-plan at high frequency cycles, while also meeting driving constraints of interest (i.e., the avoidance of various regions disposed along the roadway), because of the function approximators that have been trained during an offline process.


The controllers may refer to, or be part of, an electronic circuit, a combinational logic circuit, a field programmable gate array (FPGA), a processor (shared, dedicated, or group) that executes code, or a combination of some or all of the above, such as in a system-on-chip. Additionally, the controllers may be microprocessor-based, such as a computer having at least one processor, memory (RAM and/or ROM), and associated input and output buses. The processor may operate under the control of an operating system that resides in memory. The operating system may manage computer resources so that computer program code embodied as one or more computer software applications, such as an application residing in memory, may have instructions executed by the processor. In an alternative embodiment, the processor may execute the application directly, in which case the operating system may be omitted.


The description of the present disclosure is merely exemplary in nature and variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure.

Claims
  • 1. A trajectory planning system for an autonomous vehicle, the trajectory planning system comprising: one or more controllers in electronic communication with one or more external vehicle networks that collect data with respect to one or more moving obstacles located in an environment surrounding the autonomous vehicle, the one or more controllers executing instructions to: determine a discrete-time relative vehicle state based on an autonomous vehicle dynamics model;determine, based on the discrete-time relative vehicle state, a position avoidance set representing relative lateral positions and longitudinal positions that the autonomous vehicle avoids while bypassing the one or more moving obstacles when performing a maneuver;determine a set of offline ego states for which the autonomous vehicle is unable to execute maneuvers without entering the position avoidance set;approximate, in real-time, a set of real-time ego states of the autonomous vehicle by a function approximator, wherein the function approximator has been trained during a supervised learning process with the set of offline ego states as a ground-truth dataset;compute a plurality of relative state trajectories for the autonomous vehicle, wherein the plurality of relative state trajectories avoid intersecting the set of real-time ego states of autonomous vehicle; andselect a trajectory from the plurality of relative state trajectories for the autonomous vehicle, wherein the autonomous vehicle follows the trajectory while performing the maneuver.
  • 2. The trajectory planning system of claim 1, wherein the function approximator approximates the set of real-time ego states in real-time based on a current position and a current velocity of the autonomous vehicle and the one or more moving obstacles, a speed limit of a roadway that the autonomous vehicle is presently driving along, environmental variables, and road conditions.
  • 3. The trajectory planning system of claim 1, wherein the one or more controllers select the trajectory by: assigning a score to each relative state trajectory for the autonomous vehicle based on one or more properties; andselecting the relative state trajectory having the highest score as the trajectory.
  • 4. The trajectory planning system of claim 3, wherein the one or more properties include one or more of the following: ride comfort, fuel consumption, timing, and duration.
  • 5. The trajectory planning system of claim 1, further comprising a plurality of sensors in electronic communication with the one or more controllers, wherein the one or more controllers receive a plurality of dynamic variables as input from the plurality of sensors.
  • 6. The trajectory planning system of claim 5, wherein the one or more controllers determine the autonomous vehicle dynamics model for the autonomous vehicle based on the plurality of dynamic variables and vehicle chassis configuration information.
  • 7. The trajectory planning system of claim 1, wherein the position avoidance set is determined by: ={(es, ed)∈R2:es,l≤es≤es,u, ed,l≤ed≤ed,u}wherein is the position avoidance set, es is a discrete-time relative longitudinal state of the autonomous vehicle with respect to an obstacle, ed is a discrete-time relative lateral state of the autonomous vehicle with respect to the obstacle, es,l is a lower limit of the discrete-time relative longitudinal state es, es,u is an upper limit of the discrete-time relative longitudinal state es, ed,l is a lower limit of the discrete-time relative lateral state ed, and ed,u is an upper limit of the discrete-time relative lateral state ed, of the position avoidance set .
  • 8. The trajectory planning system of claim 1, wherein the one or more controllers determine the plurality of relative state trajectories for the autonomous vehicle based on an initial state of the autonomous vehicle, a final state of the autonomous vehicle, and one or more levels of driving aggression.
  • 9. The trajectory planning system of claim 8, wherein the one or more levels of driving aggression include a conservative level, a moderate level, and an aggressive level of aggression.
  • 10. The trajectory planning system of claim 1, wherein the one or more controllers determine the set of ego states during an offline process based on one of simulated data and experimental data.
  • 11. The trajectory planning system of claim 1, wherein the one or more moving obstacles include another vehicle located along a roadway that the autonomous vehicle is driving along.
  • 12. The trajectory planning system of claim 1, wherein the set of ego states represent vehicle states where the autonomous vehicle is unable to execute maneuvers during a time horizon to avoid the one or more moving obstacles.
  • 13. The trajectory planning system of claim 1, wherein the autonomous vehicle dynamics model includes one or more of the following: a linear tire model and non-linear tire models.
  • 14. An autonomous vehicle including a trajectory planning system, the autonomous vehicle comprising: a plurality of sensors that determine a plurality of dynamic variables;one or more external vehicle networks that collect data with respect to one or more moving obstacles located in an environment surrounding the autonomous vehicle; andone or more controllers in electronic communication with the one or more external vehicle networks and the plurality of sensors, the one or more controllers executing instructions to: determine an autonomous vehicle dynamics model for the autonomous vehicle based on the plurality of dynamic variables and vehicle chassis configuration information;determine a discrete-time relative vehicle state based on an autonomous vehicle dynamics model;determine, based on the discrete-time relative vehicle state, a position avoidance set representing relative lateral positions and longitudinal positions that the autonomous vehicle avoids while bypassing the one or more moving obstacles when performing a maneuver;determine a set of offline ego states for which the autonomous vehicle is unable to execute maneuvers without entering the position avoidance set;approximate, in real-time, a set of real-time ego states of the autonomous vehicle by a function approximator, wherein the function approximator has been trained during a supervised learning process with the set of offline ego states as a ground-truth dataset;compute a plurality of relative state trajectories for the autonomous vehicle, wherein the plurality of relative state trajectories avoid intersecting the set of real-time ego states of autonomous vehicle; andselect a trajectory from the plurality of relative state trajectories for the autonomous vehicle, wherein the autonomous vehicle follows the trajectory while performing the maneuver.
  • 15. The autonomous vehicle of claim 14, wherein the function approximator approximates the set of real-time ego states in real-time based on a current position and a current velocity of the autonomous vehicle and the one or more moving obstacles, a speed limit of a roadway that the autonomous vehicle is presently driving along, environmental variables, and road conditions.
  • 16. The autonomous vehicle of claim 14, wherein the one or more controllers select the trajectory by: assigning a score to each relative state trajectory for the autonomous vehicle based on one or more properties; andselecting the relative state trajectory having the highest score as the trajectory.
  • 17. The autonomous vehicle of claim 14, wherein the one or more controllers determine the plurality of relative state trajectories for the autonomous vehicle based on an initial state of the autonomous vehicle, a final state of the autonomous vehicle, and one or more levels of driving aggression.
  • 18. The autonomous vehicle of claim 14, wherein the autonomous vehicle dynamics model includes one or more of the following: a linear tire model and non-linear tire models.
  • 19. A method for selecting a trajectory for an autonomous vehicle, the method comprising: determining, by one or more controllers, a discrete-time relative vehicle state based on an autonomous vehicle dynamics model, wherein the one or more controllers are in electronic communication with one or more external vehicle networks that collect data with respect to one or more moving obstacles located in an environment surrounding the autonomous vehicle;determining, based on the discrete-time relative vehicle state, a position avoidance set representing relative lateral positions and longitudinal positions that the autonomous vehicle avoids while bypassing the one or more moving obstacles when performing a maneuver;determining a set of offline ego states for which the autonomous vehicle is unable to execute maneuvers without entering the position avoidance set;approximating, in real-time, a set of real-time ego states of the autonomous vehicle by a function approximator, wherein the function approximator has been trained during a supervised learning process with the set of offline ego states as a ground-truth dataset;computing a plurality of relative state trajectories for the autonomous vehicle, wherein the plurality of relative state trajectories avoid intersecting the set of real-time ego states of autonomous vehicle; andselecting a trajectory from the plurality of relative state trajectories for the autonomous vehicle, wherein the autonomous vehicle follows the trajectory while performing the maneuver.
  • 20. The method of claim 19, further comprising: approximating the set of real-time ego states in real-time based on a current position and a current velocity of the autonomous vehicle and the one or more moving obstacles, a speed limit of a roadway that the autonomous vehicle is presently driving along, environmental variables, and road conditions.