The application generally relates to autonomous driving technologies, and in particular, to collision avoidance of an autonomous driving vehicle.
Autonomous driving is a challenging task that integrates many technologies.
An autonomous driving system generally has several components that enable autonomous driving without human participation, including a localization module to determine where the vehicle is, a perception module to determine the surrounding environment of the vehicle, and a control module to determine how to drive based on the available information. Typically, an autonomous vehicle generates a planned trajectory based on the output from the localization module and the perception module. The control module then outputs the vehicle's throttle, brake and steering in order to follow the planned trajectory.
Compared with conventional driving on streets and highways, in some low-speed driving scenarios which may need very large road wheel angles (for example, driving in a parking lot), collision avoidance is extremely difficult. Even though the planned trajectory generated based on the initial vehicle state and surrounding environment of the autonomous driving vehicle is collision-free, the vehicle may still collide with an obstacle due to large deviations between the actual trajectory and the planned trajectory. Therefore, there is a continued need for further improving a collision avoidance scheme.
In a first aspect, the present disclosure provides a collision avoidance method for an autonomous driving vehicle (ADV). In one embodiment, the method comprises: A) obtaining a current vehicle state of the ADV and a planned trajectory of the ADV over a predetermined drive period, wherein the planned trajectory is generated according to an initial vehicle state of the ADV and an environment of the ADV when the ADV is at the initial vehicle state, and wherein the initial vehicle state is a vehicle state of the ADV when the ADV is at a start point of the predetermined drive period; B) generating a simulated trajectory over at least a portion of the predetermined drive period based on the current vehicle state and the planned trajectory; C) determining whether the simulated trajectory overlaps with an obstacle; and D) decelerating the ADV in response to a determination result that the simulated trajectory overlaps with an obstacle.
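Steps A) through D) can be sketched at a high level as follows; all function names, data shapes and the disc-based overlap test below are illustrative assumptions for exposition, not part of the claimed method:

```python
# Minimal sketch of steps A-D. A pose is (x, y, theta) in meters/radians;
# an obstacle is an (x, y) point. Both are simplifying assumptions.

def overlaps(pose, obstacle, safety_radius=1.0):
    """Treat the vehicle pose and the obstacle as points with a safety radius."""
    dx, dy = pose[0] - obstacle[0], pose[1] - obstacle[1]
    return (dx * dx + dy * dy) ** 0.5 < safety_radius

def collision_avoidance_cycle(simulated_trajectory, obstacles):
    """C) check the simulated trajectory against each obstacle;
    D) the caller decelerates when this returns True."""
    return any(overlaps(p, o) for p in simulated_trajectory for o in obstacles)

# Usage: a straight simulated path passing near an obstacle at (2.0, 0.2)
path = [(x * 0.5, 0.0, 0.0) for x in range(10)]
print(collision_avoidance_cycle(path, [(2.0, 0.2)]))  # True -> decelerate
```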
In another aspect, the present disclosure provides a device for controlling an autonomous driving vehicle.
The foregoing has outlined, rather broadly, features of the present application. Additional features of the present application will be described, hereinafter, which form the subject of the claims of the present application. It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed herein may be readily utilized as a basis for modifying or designing other structures or processes for carrying out the objectives of the present application. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the present application as set forth in the appended claims.
The aforementioned features and other features of the present application will be further described in the following paragraphs by referring to the accompanying drawings and the appended claims. It will be understood that, these accompanying drawings merely illustrate certain embodiments in accordance with the present application and should not be considered as limitation to the scope of the present application. Unless otherwise specified, the accompanying drawings need not be proportional, and similar reference characters generally denote similar elements.
Before the present disclosure is described in greater detail, it is to be understood that this disclosure is not limited to particular embodiments described, and as such may, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting, since the scope of the present disclosure will be limited only by the appended claims.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Although any methods and materials similar or equivalent to those described herein can also be used in the practice or testing of the present disclosure, the preferred methods and materials are now described.
All publications and patents cited in this specification are herein incorporated by reference as if each individual publication or patent were specifically and individually indicated to be incorporated by reference, and are incorporated herein by reference to disclose and describe the methods and/or materials in connection with which the publications are cited. The citation of any publication is for its disclosure prior to the filing date and should not be construed as an admission that the present disclosure is not entitled to antedate such publication by virtue of prior disclosure. Further, the dates of publication provided could be different from the actual publication dates, which may need to be independently confirmed.
As will be apparent to those of skill in the art upon reading this disclosure, each of the individual embodiments described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present disclosure. Any recited method can be carried out in the order of events recited or in any other order that is logically possible.
The present disclosure relates to methods and systems for autonomous collision avoidance. For the sake of brevity, conventional techniques and components related to the autonomous driving technology and other functional aspects of the system (and the individual operating components of the system) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the invention.
Autonomous Driving System
Autonomous driving vehicles (ADVs, also known as driverless cars, self-driving cars or robot cars) are capable of sensing their environment and navigating without human input. An autonomous driving system is a complex system integrating many technologies that coordinate to fulfill the challenging task of controlling a vehicle without human intervention.
Referring to
The information in the HD map is used by many other modules of the autonomous driving system. In the first place, a localization module depends on the HD map to determine the exact location of the ADV. The HD map also helps a perception module to sense the environment around the ADV when the surrounding area is out of the range of the sensors or blocked by an obstacle. The HD map also helps a planning module to find suitable driving space and to identify multiple driving routes. The HD map allows the planning module to accurately plan a path and choose the best maneuver.
A localization module of the autonomous driving system helps an ADV to know exactly where it is, which is a challenging task because no single sensor or instrument currently available, such as GPS or IMU, can provide location information accurate enough for autonomous driving. Current localization technology uses information gathered by the sensors installed in the ADV to identify landmarks in the surrounding environment and to determine the location of the ADV relative to those landmarks. The localization module then compares the landmarks identified by the sensors to the corresponding landmarks in the HD map, thereby determining the exact location of the ADV in the map. Typically, to achieve the high localization precision required by autonomous driving, a localization module combines information collected by multiple sensors using different localization techniques, such as GNSS RTK (Global Navigation Satellite System Real-Time Kinematics) used by GPS, inertial navigation used by the IMU, LiDAR localization and visual localization.
A perception module of the autonomous driving system is configured to sense the surroundings of the ADV using sensors such as cameras, radar and LiDAR, and to identify the objects around the ADV. The sensor data generated by the sensors are interpreted by the perception module to perform different perception tasks, such as classification, detection, tracking and segmentation. Machine learning technologies, such as convolutional neural networks, have been used to interpret the sensor data. Technologies such as the Kalman filter have been used to fuse the sensor data generated by different sensors for the purposes of accurate perception and interpretation.
Many of the objects around the ADV are also moving. Therefore, a prediction module of the autonomous driving system is configured to predict the behavior of these moving objects in order for the ADV to plan its path. Typically, the prediction module predicts the behavior of a moving object by generating a trajectory of the object. The collection of the trajectories of all the objects around the ADV forms a prediction of a timestep. For each timestep, the prediction module recalculates the prediction for every moving object around the ADV. These predictions inform the ADV to determine its path.
A planning module of the autonomous driving system incorporates the data from the HD map module, the localization module and the prediction module to generate a trajectory for the vehicle. The first step of planning is route navigation that generates a navigable path. Once a high-level route is built, the planning module zooms into trajectory planning, which makes subtle decisions to avoid obstacles and creates a smooth ride for the passengers, i.e., to generate a collision-free and comfortable trajectory to execute.
The trajectory generated by the planning module is then executed by a control module, which generates a series of control inputs including steering, throttling and/or braking. Several conditions need to be considered by the control module when generating control inputs. First, the controller needs to be accurate so that the vehicle avoids deviation from the target trajectory, which is important for safety. Second, the control strategy should be feasible for the car. Third, comfortable driving is important for the passenger, so the actuation should be continuous and avoid sudden steering, acceleration or braking. In sum, the goal of the control module is to use viable control inputs to minimize deviation from the target trajectory and maximize passenger comfort.
Collision Avoidance Scheme
Although the planned trajectory is expected to be collision-free, due to various reasons, the ADV 201 may deviate from the planned trajectory 202 during its movement. For example, imprecise execution of large road wheel angles may occur in the low-speed driving scenario 200. As time goes on, the deviation of the ADV 201 from the planned trajectory 202 may become significant, as shown in
Referring to
Referring back to
According to one embodiment of the present application, after obtaining the current vehicle state of the ADV and the planned trajectory of the ADV, it is determined whether the ADV deviates from the planned trajectory. In response to determining that the ADV deviates from the planned trajectory, a simulated trajectory of the ADV over at least a portion of the predetermined drive period will be generated to determine whether there is an obstacle that will collide with the ADV, as illustrated in detail below. In response to determining that the ADV does not deviate from the planned trajectory, the planned trajectory will be taken as the simulated trajectory of the ADV to determine whether there is an obstacle that will collide with the ADV, as illustrated in detail below.
According to another embodiment of the present application, after obtaining the current vehicle state of the ADV and the planned trajectory of the ADV, the deviation of the ADV from the planned trajectory is determined. If the deviation is larger than a predefined threshold, the ADV is determined as deviating from the planned trajectory, and a simulated trajectory over at least a portion of the predetermined drive period will be generated to determine whether there is an obstacle that will collide with the ADV, as illustrated in detail below. If the deviation is not larger than the predefined threshold, the ADV is determined as not deviating from the planned trajectory, and the planned trajectory will be taken as the simulated trajectory of the ADV to determine whether there is an obstacle that will collide with the ADV, as illustrated in detail below.
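The deviation test of these embodiments can be sketched as follows; the threshold value and the use of straight-line distance to the nearest waypoint are illustrative assumptions:

```python
# Sketch of the deviation test: the ADV is treated as deviating when its
# distance to the nearest planned waypoint exceeds a threshold.
import math

def deviates(current_xy, planned_waypoints, threshold_m=0.3):
    """Return True if the ADV position is more than threshold_m away
    from the nearest waypoint of the planned trajectory."""
    nearest = min(planned_waypoints, key=lambda w: math.dist(current_xy, w))
    return math.dist(current_xy, nearest) > threshold_m

# Usage: a straight planned trajectory along the x-axis
waypoints = [(float(i), 0.0) for i in range(5)]
print(deviates((2.1, 0.5), waypoints))  # ~0.51 m offset > 0.3 m -> True
print(deviates((2.0, 0.1), waypoints))  # 0.1 m offset -> False
```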
In step 302, a simulated trajectory over at least a portion of the predetermined drive period is generated based on the current vehicle state and the planned trajectory.
The at least a portion of the predetermined drive period may be divided into a plurality of time steps. Referring back to
In step 401, a closest waypoint to a reference vehicle state at a time step Tk−1 prior to the time step Tk is selected on the planned trajectory.
Referring back to
In step 402, the reference vehicle state at the time step Tk−1 is compared with the closest waypoint to determine a road wheel angle.
Referring back to
δ(t)=(ψ(t)−ψss(t))+arctan(k*e(t)/(ksoft+v(t)))+kd,yaw*(rmeas−rtraj)+kd,steer*(δmeas(i)−δmeas(i+1)) (1)
where: ψ(t) is the yaw angle/heading; ψss(t) is the steady-state yaw; e(t) is the cross-track error; v(t) is the ego car speed; k, ksoft, kd,yaw and kd,steer are tuning parameters; rmeas and rtraj are the measured yaw rate and the trajectory yaw rate, respectively; δmeas(i) and δmeas(i+1) are discrete-time measurements of the road wheel angle; and i is the index of the measurement one control period earlier. Further, the steady-state yaw ψss(t) is defined based on the following equation (2):
ψss(t)=m*v(t)*rtraj/(Cy*(1+a/b)) (2)
where: m is the ego car mass; Cy is the lateral cornering stiffness; and a and b are the distances from the center of mass to the front axle and the rear axle, respectively.
The Stanley Lateral Controller is described in detail in “Autonomous Automobile Trajectory Tracking for Off-Road Driving: Controller Design, Experimental Validation and Racing” (Gabriel M. Hoffmann, Claire J. Tomlin, Michael Montemerlo and Sebastian Thrun, American Control Conference, 2007), which is incorporated into the present disclosure by reference.
It can be understood that the road wheel angle can be calculated according to other equations in addition to the Stanley Lateral Controller and the present disclosure is not limited to a particular one.
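As one illustration, the Stanley lateral control law described above can be sketched in code as follows; the default gain values and the simplified treatment (no saturation of the road wheel angle) are assumptions made here for brevity:

```python
# Hedged sketch of the Stanley lateral control law: heading term,
# cross-track term, yaw-rate damping term and steering damping term.
import math

def stanley_road_wheel_angle(psi, psi_ss, e, v, r_meas, r_traj,
                             delta_prev, delta_curr,
                             k=1.0, k_soft=1.0, kd_yaw=0.0, kd_steer=0.0):
    """Compute the road wheel angle from heading error, cross-track
    error and the damping terms; gains are illustrative defaults."""
    return ((psi - psi_ss)
            + math.atan2(k * e, k_soft + v)      # cross-track correction
            + kd_yaw * (r_meas - r_traj)         # yaw-rate damping
            + kd_steer * (delta_prev - delta_curr))  # steering damping

# Usage: small heading error, 0.5 m cross-track error, 2 m/s speed
delta = stanley_road_wheel_angle(psi=0.05, psi_ss=0.0, e=0.5, v=2.0,
                                 r_meas=0.0, r_traj=0.0,
                                 delta_prev=0.0, delta_curr=0.0)
```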
In step 403, the simulated vehicle state at the time step Tk is generated based on the reference vehicle state at the time step Tk−1 and the road wheel angle.
Referring back to
According to one embodiment, in a simple kinematic model, the simulated vehicle state (x1, y1; θ1) at the time step 2051 can be generated based on the following equations (3)-(6):
x1=x0+v0*cos θ0*T (3)
y1=y0+v0*sin θ0*T (4)
θ1=θ0+ω0*T (5)
where: v0 is the vehicle speed at the time step 2050; T is the sampling time between the time steps 2050 and 2051; and ω0 is the angular velocity at the time step 2050 and is defined based on the following equation (6):
ω0=v0*tan δ0/L (6)
where: L is the wheelbase of the vehicle, i.e., the length from the front wheel center to the rear wheel center; and δ0 is the road wheel angle determined at the time step 2050.
It can be understood that the simulated vehicle state (x1,y1; θ1) at the time step 2051 can be generated according to other equations in addition to the equations (3)-(6) illustrated above and the present disclosure is not limited to any particular equations.
It can be understood that to generate a simulated trajectory, the steps 401 to 403 of the process 400 depicted in
It can be understood that the simulated vehicle state generated for the time step Tk can further be used as the reference vehicle state when generating a simulated vehicle state corresponding to the next time step Tk+1. For example, referring back to
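Steps 401 to 403, iterated over the time steps with each simulated vehicle state fed back as the next reference state, can be sketched as follows using the kinematic model of equations (3)-(6); the wheelbase, sampling time and fixed road wheel angle below are illustrative assumptions (the embodiment above recomputes the road wheel angle at each time step):

```python
# Sketch of the simulated-trajectory rollout with the kinematic model.
import math

def kinematic_step(x, y, theta, v, delta, L=2.8, T=0.1):
    """One update of equations (3)-(6); L and T are assumed values."""
    omega = v * math.tan(delta) / L          # equation (6)
    return (x + v * math.cos(theta) * T,     # equation (3)
            y + v * math.sin(theta) * T,     # equation (4)
            theta + omega * T)               # equation (5)

def simulate(state, v, delta, steps=20):
    """Roll the model forward; each simulated state becomes the
    reference state for the next time step."""
    trajectory = [state]
    for _ in range(steps):
        state = kinematic_step(*state, v, delta)
        trajectory.append(state)
    return trajectory

# Usage: start at the origin heading along x, 2 m/s, 0.3 rad wheel angle
traj = simulate((0.0, 0.0, 0.0), v=2.0, delta=0.3, steps=20)
```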
Referring back to
In some embodiments, during driving, the ADV scans the surrounding environment to detect the existence of obstacles through, for example, cameras, radars and LiDARs mounted on the ADV. In some embodiments, the obstacle is detected in real time.
Referring back to
Referring back to
Referring back to
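The overlap determination between the simulated trajectory and an obstacle can be sketched as follows, under the simplifying assumption that the vehicle footprint at each simulated state and the obstacle are both approximated by discs (a production system would typically intersect the vehicle bounding box with the obstacle contour); all radii are illustrative:

```python
# Disc-approximation sketch of the trajectory/obstacle overlap check.
import math

def trajectory_overlaps_obstacle(trajectory, obstacle_xy,
                                 vehicle_radius=1.2, obstacle_radius=0.5):
    """Return True if any simulated vehicle state comes closer to the
    obstacle than the sum of the two approximation radii."""
    return any(math.dist((x, y), obstacle_xy) < vehicle_radius + obstacle_radius
               for x, y, _theta in trajectory)

# Usage: a straight simulated trajectory passing 1.0 m from an obstacle
traj = [(i * 0.5, 0.0, 0.0) for i in range(10)]
print(trajectory_overlaps_obstacle(traj, (3.0, 1.0)))  # 1.0 m < 1.7 m -> True
```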
After the ADV is decelerated, the ADV may take different subsequent actions for different kinds of obstacles overlapping with the simulated trajectory.
In some embodiments, the ADV determines whether the obstacle overlapping with the simulated trajectory is a static obstacle based on data from the HD map module, localization module and perception module. For example, if the data from the HD map module and the localization module over a period indicates that the obstacle is a fixed object in the environment (e.g., a road barrier, a tree, a fence etc.), the obstacle is determined as a static obstacle. For another example, if the obstacle is not a fixed object on the road but the perception module does not detect the speed of the obstacle over a predefined time period, the obstacle may also be determined as a static obstacle (e.g., a parked vehicle). There are a plurality of ways to determine whether an obstacle is a static obstacle and the present disclosure is not limited to a particular one.
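The speed-based static test described above can be sketched as follows; the speed threshold and the length of the observation window are illustrative assumptions:

```python
# Sketch of the static-obstacle test: an obstacle whose perceived speed
# stays below a small threshold over a window is classified as static.

def is_static_obstacle(speed_history, speed_eps=0.1, min_samples=10):
    """speed_history: perceived speeds (m/s), most recent last."""
    if len(speed_history) < min_samples:
        return False  # not enough observations yet to decide
    return all(s < speed_eps for s in speed_history[-min_samples:])

# Usage
print(is_static_obstacle([0.0] * 12))             # parked vehicle -> True
print(is_static_obstacle([0.0] * 5 + [1.5] * 7))  # moving object -> False
```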
If the ADV determines that the obstacle overlapping with simulated trajectory is a static obstacle, the process 500 goes to Block 502; otherwise, the process 500 goes to Block 504.
In Block 502, in response to determining that the obstacle is a static obstacle, the ADV is stopped. In Block 503, after the ADV is stopped, a second planned trajectory is generated according to a second initial vehicle state of the ADV and an environment of the ADV where the ADV is stopped.
If it is determined that the obstacle overlapping with the simulated trajectory is a static obstacle, it means that the obstacle will not move away from the simulated trajectory of the ADV. In that case, it is desired to stop the ADV and generate a second planned trajectory based on the second vehicle state of the ADV and the environment of the ADV where the ADV is stopped for the ADV to execute to avoid colliding with the obstacle. The second vehicle state of the ADV refers to the vehicle state of the ADV when the ADV is stopped.
Referring back to
In Block 504, in response to determining that the obstacle is not a static obstacle such as a human being, whether the obstacle moves away from the simulated trajectory is determined immediately or later within a short period such as 100 ms or 1 second. If it is determined that the obstacle moves away from the simulated trajectory, the process 500 goes to Block 505; otherwise, the process 500 goes to Block 506.
In Block 505, the ADV is accelerated back to the original speed; and in Block 506, the ADV is driven at a speed lower than the original speed.
In response to determining that the obstacle overlapping with the simulated trajectory is not a static obstacle, the ADV may then determine whether the obstacle moves away from the simulated trajectory. If the obstacle moves away from the simulated trajectory, it means that the obstacle no longer impedes the ADV so the ADV can be accelerated back to the original speed.
As shown in
If the obstacle is not a static obstacle but does not move away from the simulated trajectory, i.e., still overlaps with the simulated trajectory, the ADV may be driven at a lower speed to allow more time for the non-static obstacle to move away from the simulated trajectory. However, due to safety requirements, the ADV may be stopped to plan a new trajectory if the ADV gets too close to the obstacle.
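The decision logic of Blocks 502 to 506 can be sketched as follows; the action names, stop distance and overall dispatch shape are illustrative assumptions:

```python
# Sketch of the post-deceleration decision logic.

def next_action(obstacle_static, obstacle_cleared, distance_m,
                stop_distance_m=2.0):
    """Choose the ADV's subsequent action after decelerating."""
    if obstacle_static:
        return "stop_and_replan"          # Blocks 502-503: stop, new trajectory
    if obstacle_cleared:
        return "accelerate_to_original"   # Block 505: obstacle moved away
    if distance_m <= stop_distance_m:
        return "stop_and_replan"          # safety fallback when too close
    return "drive_slower"                 # Block 506: wait at reduced speed

# Usage
print(next_action(False, True, 10.0))   # accelerate_to_original
print(next_action(False, False, 1.5))   # stop_and_replan
```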
As shown in
It should be noted that the device and methods disclosed in the embodiments of the present disclosure can be implemented in other ways. The aforementioned device and method embodiments are merely illustrative. For example, the flow charts and block diagrams in the figures show the architecture and functional operation of a plurality of devices, methods and computer program products disclosed in embodiments of the present disclosure. In this regard, each block of the flow charts or the block diagrams may represent a module, a program segment, or a portion of the program code. The module, the program segment, or the portion of the program code includes one or more executable instructions for implementing a predetermined logical function. It should also be noted that in some alternative embodiments, the functions described in the blocks may occur in an order different from that described in the figures. For example, two consecutive blocks may actually be executed substantially concurrently, or sometimes in reverse order, depending on the functionality. It should also be noted that each block of the block diagrams and/or flow charts, and combinations of blocks in the block diagrams and/or flow charts, can be implemented by dedicated hardware-based systems that execute the predetermined functions or operations, or by a combination of dedicated hardware and computer instructions.
If the functions are implemented in the form of software modules and sold or used as a standalone product, the functions can be stored in a computer readable storage medium. Based on this understanding, the technical nature of the present disclosure, the part contributing to the prior art, or part of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to instruct a computer device (which may be a personal computer, a server, or network equipment) to perform all or part of the steps of various embodiments of the present disclosure. The aforementioned storage media include: a USB flash disk, a removable hard disk, a read only memory (ROM), a random access memory (RAM), a floppy disk or a CD-ROM, which can store a variety of program codes.
Various embodiments have been described herein with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow.
Those skilled in the art may understand and implement other variations to the disclosed embodiments from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. In applications according to the present application, one element may perform the functions of several technical features recited in the claims. Any reference signs in the claims should not be construed as limiting the scope. The scope and spirit of the present application are defined by the appended claims.
Number | Date | Country
---|---|---
62916807 | Oct 2019 | US