This application is related to the subject matter of: U.S. patent application Ser. No. ______ filed ______, 2021 and entitled SYSTEM AND METHOD IN THE PREDICTION OF TARGET VEHICLE BEHAVIOR BASED ON IMAGE FRAME AND NORMALIZATION (Attorney Docket CNOO01-00048); U.S. patent application Ser. No. ______ filed ______, 2021 and entitled SYSTEM AND METHOD IN DATA-DRIVEN VEHICLE DYNAMIC MODELING FOR PATH-PLANNING AND CONTROL (Attorney Docket CNOO01-00049); U.S. patent application Ser. No. ______ filed ______, 2021 and entitled SYSTEM AND METHOD IN VEHICLE PATH PREDICTION BASED ON FULL NONLINEAR KINEMATICS (Attorney Docket CNOO01-00051); U.S. patent application Ser. No. ______ filed ______, 2021 and entitled SYSTEM AND METHOD IN LANE DEPARTURE WARNING WITH FULL NONLINEAR KINEMATICS AND CURVATURE (Attorney Docket CNOO01-00052); U.S. patent application Ser. No. ______ filed ______, 2021 and entitled SYSTEM AND METHOD FOR LANE DEPARTURE WARNING WITH EGO MOTION AND VISION (Attorney Docket CNOO01-00066). The content of the above-identified patent documents is incorporated herein by reference.
This disclosure relates generally to vehicle driver assist or autonomous driving systems. More specifically, this disclosure relates to lateral control for path tracking.
Advanced driving assist system (ADAS) features, which use automated technology to assist the vehicle operator in driving and parking, form a foundation for autonomous driving (AD). Determination of vehicle position information and/or detection of nearby objects enables features such as: collision detection and avoidance for adaptive cruise control (ACC) and emergency braking; blind spot detection for collision warning and/or evasive steering; lane detection for lane keeping and/or centering, lane changing, or lane departure warning; and path planning and control. Other ADAS and AD features may also be implemented using the same sensor set(s).
Electric vehicles (EVs) are often capable of higher driving and handling performance relative to conventional vehicles. EV designs can include low centers of gravity, independent steering, and immediate, quick, and smooth acceleration. As a result, ADAS and AD features for EVs can involve different considerations than those for conventional vehicles.
Lateral control of a vehicle for path tracking, within the vehicle's ADAS or AD features, is improved in ways suited to EVs having higher driving and handling performance, covering driving regimes from static to highly dynamic maneuvering. To maintain vehicle travel within an occupied traffic lane during (for example) lane keeping, vehicle yaw rate is determined using a kinematics control and employed by a dynamics control to determine the steering angle needed. A steering actuator is activated, if necessary, based on the determined steering angle. A braking actuator may also be activated based on the determined steering angle, depending on vehicle speed.
In one embodiment, an apparatus includes at least one camera configured to capture an image of a traffic lane in front of a vehicle. The apparatus also includes a path tracking controller configured to detect lane boundaries and a path curvature for the traffic lane from the image, determine a lateral offset of the vehicle from a reference path for the traffic lane and a heading offset for the vehicle from the path curvature, determine a yaw rate maintaining the vehicle within the traffic lane using a kinematics control, determine a steering angle maintaining the vehicle within the traffic lane using a dynamics control and the yaw rate determined by the kinematics control, and activate a steering control based on the determined steering angle.
The yaw rate ωd maintaining the vehicle within the traffic lane may be determined from:

ωd=η/(v cos(θ))+κv cos(θ)/(1−κr)

where v is vehicle forward velocity, θ is a relative course angle based on the heading offset for the vehicle, κ is the path curvature, r is the lateral offset of the vehicle, and η is feedback from the dynamics control. The determined steering angle may be derived from {dot over (χ)}=Aχ+B1u+B2ωd+B3 sin(ϕ), where {dot over (χ)} is the first derivative of χ=[e1 ė1 e2 ė2], e1=r is the lateral offset of the vehicle, ė1 is the first derivative of e1, e2 is a relative yaw angle of the vehicle, ė2 is the first derivative of e2, and A, B1, B2 and B3 are parameters determined by one of pole placement or linear-quadratic regulation. The kinematics control may receive, as inputs, the lateral offset r of the vehicle from the reference path and the heading offset Δψ of vehicle heading from the path curvature. The dynamics control may receive, as inputs, the lateral offset r, the first derivative {dot over (r)} of the lateral offset r, the heading offset Δψ, and the first derivative Δ{dot over (ψ)} of the heading offset Δψ. The dynamics control may output a steering angle δf to the steering control, and the steering control may be configured to generate a steering command.
The path tracking controller may be configured to generate a braking command based on a speed of the vehicle and the determined steering angle. The path tracking controller may be configured to generate the braking command when the speed of the vehicle exceeds a predetermined safety limit for the determined steering angle.
In another embodiment, a vehicle includes the apparatus and a motor configured to drive wheels of the vehicle. The vehicle also includes a chassis supporting axles on which the wheels are mounted. The steering control is configured to generate a steering command controlling the wheels based on the determined steering angle. The vehicle further includes a brake actuator configured to actuate brakes for one or more of the wheels, where the brake actuator is configured to receive a braking control signal based on an output of the path tracking controller. The vehicle may be an electric vehicle.
In still another embodiment, a method includes capturing an image of a traffic lane in front of a vehicle using at least one camera mounted on the vehicle. The method also includes detecting lane boundaries and a path curvature for the traffic lane from the image and determining a lateral offset of the vehicle from a reference path for the traffic lane and a heading offset for the vehicle from the path curvature. The method further includes determining a yaw rate maintaining the vehicle within the traffic lane using a kinematics control, determining a steering angle maintaining the vehicle within the traffic lane using a dynamics control and the yaw rate determined by the kinematics control, and activating a steering control based on the determined steering angle.
The yaw rate ωd maintaining the vehicle within the traffic lane may be determined from:

ωd=η/(v cos(θ))+κv cos(θ)/(1−κr)
where v is vehicle forward velocity, θ is a relative course angle based on the heading offset for the vehicle, κ is the path curvature, r is the lateral offset of the vehicle, and η is feedback from the dynamics control. The determined steering angle may be derived from {dot over (χ)}=Aχ+B1u+B2ωd+B3 sin(ϕ), where {dot over (χ)} is the first derivative of χ=[e1 ė1 e2 ė2], e1=r is the lateral offset of the vehicle, ė1 is the first derivative of e1, e2 is a relative yaw angle of the vehicle, ė2 is the first derivative of e2, and A, B1, B2 and B3 are parameters determined by one of pole placement or linear-quadratic regulation. The kinematics control may receive, as inputs, the lateral offset r of the vehicle from the reference path and the heading offset Δψ of vehicle heading from the path curvature. The dynamics control may receive, as inputs, the lateral offset r, the first derivative {dot over (r)} of the lateral offset r, the heading offset Δψ, and the first derivative Δ{dot over (ψ)} of the heading offset Δψ. The dynamics control may output a steering angle δf to the steering control, and the steering control may generate a steering command.
The method may include generating a braking command based on a speed of the vehicle and the determined steering angle. The braking command may be generated when the speed of the vehicle exceeds a predetermined safety limit for the determined steering angle. The method may include driving wheels of the vehicle with a motor, generating a steering command controlling the wheels based on the determined steering angle, and generating a braking control signal based on the determined steering angle and a vehicle speed. The method may be performed within or by an electric vehicle.
Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
For a more complete understanding of this disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:
There are many lateral control features that are useful for ADAS and AD, including without limitation: lane centering, lane keeping, lane changing, and evasive steering assistance. Lateral controllers have often been designed based on kinematics or dynamics separately. As a result, kinematics-based controllers have not performed well in highly dynamic maneuvers, such as where lateral acceleration is greater than about 5 meters per second squared (m/s2). Kinematics-based controller designs typically employ linear/nonlinear kinematics-based control, pure pursuit, or the Stanley method. Meanwhile, dynamics-based controllers may suffer performance issues in low-speed maneuvers, where dynamic effects are not dominant. Dynamics-based controller designs often use a bicycle-model-based controller. This disclosure describes approaches for combining the above two governing principles in a single lateral acceleration controller to cover the entire driving regime, which may be particularly useful for performance handling vehicles.
The vehicle 100 of
Passengers may enter and exit the cabin 101 through at least one door 102 forming part of the cabin 101. A transparent windshield 103 and other transparent panels mounted within and forming part of the cabin 101 allow at least one passenger (referred to as the “operator,” even when the vehicle 100 is operating in an AD mode) to see outside the cabin 101. Rear view mirrors 104 mounted to sides of the cabin 101 enable the operator to see objects to the sides and rear of the cabin 101 and may include warning indicators (e.g., selectively illuminated warning lights) for ADAS features such as blind spot warning (indicating that another vehicle is in the operator's blind spot) and/or lane departure warning.
Wheels 105 mounted on axles that are supported by the chassis and driven by the motor(s) (all not visible in
In the present disclosure, the vehicle 100 includes a vision system including at least a front camera 106, side cameras 107 (mounted on the bottoms of the rear view mirrors 104 in the example depicted), and a rear camera. The front camera 106 and side cameras 107 provide images to the vehicle control system for use as part of ADAS and AD features as described below, and the images may optionally be displayed to the operator.
Although
Each ECU typically includes a printed circuit board (PCB) with a processor or microcontroller integrated circuit coupled to various input sensors, switches, relays, and other output devices. The CAN design permits the ECUs to communicate with each other without the need for a centralized host. Instead, communication takes place on a peer-to-peer basis. The CAN design therefore permits data from sensors and other ECUs to circulate around the vehicle ECUs, with each ECU transmitting sensor and programming information on the CAN bus while simultaneously listening to the CAN bus to pull out data needed to complete tasks being performed by that ECU. There is no central hub or routing system, just a continuous flow of information available to all the ECUs.
By way of example, power doors on a vehicle may be operated by an ECU called the body control module (not shown in
Notably, vehicle control systems are migrating to higher-speed networks with an Ethernet-like bus for which each ECU is assigned an Internet protocol (IP) address. Among other things, this may allow both centralized vehicle ECUs and remote computers to pass around huge amounts of information and participate in the Internet of Things (IoT).
In the example shown in
For the present disclosure, the vehicle control system 200 includes an image processing module (IPM) CAN 211 to which the front camera ECU 216, side camera ECU(s) 217, and rear camera ECU 218 are connected. The front camera ECU 216 receives image data from the front camera 106 on the vehicle 100, while the side camera ECU(s) 217 receives image data from each of the side cameras 107 and the rear camera ECU 218 receives image data from the rear camera. In some embodiments, a separate ECU may be used for each camera, such that two side camera ECUs may be employed. The IPM CAN 211 and the front camera ECU 216, side camera ECU(s) 217, and rear camera ECU 218 process image data for use in vision-based ADAS features, such as providing a rear back-up camera display and/or stitching together the images to create a “bird's eye” view of the vehicle's surroundings.
Although
To support various ADAS functions such as lane keeping during high performance operation, the chassis CAN 205 for the vehicle 100 can control the steering angle and/or braking across a wide range of dynamic accelerations. In the present disclosure, a combination of kinematics-based and dynamics-based lateral acceleration control is implemented.
To support ADAS and AD features, the system 300 includes the functions of camera perception 301, ego vehicle behavior prediction 302 (where “ego” refers to the vehicle implementing the ADAS and/or AD feature(s)), decision and motion planning 303, and motion control 304. Camera perception 301 detects a traffic lane ahead, while vehicle behavior prediction 302 determines the speed at which and a lane position in which the ego vehicle will likely enter an upcoming curve in that traffic lane. Decision and motion planning 303 and motion control 304 respectively determine and, if necessary, implement reactive responses to the ego vehicle's entry into the traffic lane curve, such as steering and/or emergency braking.
Decision and motion planning 303 implements at least kinematics-based vehicle modeling 305 based on the projected path determined by the behavior prediction 302 and full non-linear kinematics, as well as dynamics-based vehicle modeling 306 based on the projected path and time-varying linear dynamics. The outputs of the kinematics-based vehicle modeling 305 and the dynamics-based vehicle modeling 306 are used by the motion control 304 in a path-tracking control 307, which may include one or both of lane centering and/or changing control and emergency motion control.
The kinematics-based vehicle modeling 305 and the dynamics-based vehicle modeling 306 are integrated such that the two control regimes form a cascade connection. The kinematics-based vehicle modeling 305 provides the desired yaw rate (ωd) to the dynamics-based vehicle modeling 306, which outputs final control of steering wheel angle (δf) and/or braking to the vehicle's physical control systems. The inputs to both the kinematics-based vehicle modeling 305 and the dynamics-based vehicle modeling 306 from the vehicle vision system include lateral offset (r) from the lane boundary, the derivative ({dot over (r)}) of the lateral offset, a vehicle heading offset (Δψ), and the derivative (Δ{dot over (ψ)}) of the vehicle heading offset.
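The cascade connection described above can be sketched in code. The following Python sketch (class, method, and gain names are hypothetical, not from the disclosure) shows the data flow: the kinematics stage produces the desired yaw rate ωd, which the dynamics stage consumes together with the vision signals to produce a steering angle δf:

```python
import math

class CascadeLateralController:
    """Illustrative sketch of the kinematics->dynamics cascade.

    The kinematics stage turns vision signals (lateral offset r, heading
    offset, curvature) into a desired yaw rate omega_d; the dynamics stage
    turns omega_d plus the same signals into a steering angle delta_f.
    """

    def __init__(self, v, k_ff=0.0):
        self.v = v        # forward speed [m/s], assumed known
        self.k_ff = k_ff  # illustrative feedforward gain on omega_d

    def kinematics_step(self, r, theta, kappa, eta):
        # Feedback-linearizing yaw-rate law: feedback term from the virtual
        # input eta plus a curvature-dependent feedforward term.
        return (eta / (self.v * math.cos(theta))
                + kappa * self.v * math.cos(theta) / (1.0 - kappa * r))

    def dynamics_step(self, chi, omega_d, gains):
        # Full-state feedback on chi = [e1, e1_dot, e2, e2_dot], with the
        # desired yaw rate entering as a feedforward reference.
        return -sum(k * x for k, x in zip(gains, chi)) + self.k_ff * omega_d

    def step(self, r, r_dot, d_psi, d_psi_dot, kappa, eta, gains):
        omega_d = self.kinematics_step(r, d_psi, kappa, eta)
        delta_f = self.dynamics_step([r, r_dot, d_psi, d_psi_dot],
                                     omega_d, gains)
        return omega_d, delta_f
```

When the vehicle is centered and aligned on a straight lane (all offsets and curvature zero), both stages output zero, as expected of the cascade.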
The kinematics-based vehicle modeling 305 of the lateral acceleration controller 400 in
In some embodiments, Frenet coordinates may be used for the kinematics-based vehicle modeling 305.
The vehicle 100 travels exactly along the path if the reference point lies on the path and the velocity vector is tangent to the path. Therefore, the relative course angle θ=β+ψ−ψp is introduced, where β is the side-slip angle of the ego vehicle, ψ is the orientation of the ego vehicle's velocity vector relative to the x-axis of the ego vehicle coordinate frame, and ψp is the orientation of the path at the projection point P. The vehicle kinematics may be described by:

{dot over (s)}=v cos(θ)/(1−κr)
{dot over (r)}=v sin(θ)
{dot over (θ)}={dot over (β)}+{dot over (ψ)}−κ{dot over (s)}  (1)
where κ is the curvature of the reference path at the projection point P. The curvature κ is defined as the derivative of the orientation with respect to the travelled distance along the path and may be interpreted as the reciprocal of the local curve radius.
The first differential equation (for {dot over (s)}) in equation set (1) describes how fast the vehicle moves along the reference path, which is determined by taking the fraction of the velocity tangent to the path (i.e., v cos(θ)) and scaling it in proportion to the vehicle's offset from the path (the rule of three). The first differential equation plays an important role in deriving the dynamics but is usually ignored in lateral motion control. This indicates one benefit of using Frenet coordinates in lateral control, namely that the number of relevant differential equations can be reduced.
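The Frenet-frame kinematics can be evaluated numerically. The following Python function is a sketch using the standard Frenet formulation consistent with the quantities defined above (an illustrative assumption, not the disclosure's exact equations):

```python
import math

def frenet_kinematics(v, theta, kappa, r, psi_dot, beta_dot=0.0):
    """Right-hand side of the Frenet-frame vehicle kinematics (sketch).

    v: forward velocity; theta: relative course angle; kappa: path
    curvature at the projection point P; r: lateral offset from the path;
    psi_dot: vehicle yaw rate; beta_dot: side-slip rate (often neglected).
    """
    s_dot = v * math.cos(theta) / (1.0 - kappa * r)  # progress along path
    r_dot = v * math.sin(theta)                      # lateral offset rate
    theta_dot = beta_dot + psi_dot - kappa * s_dot   # course-angle rate
    return s_dot, r_dot, theta_dot
```

On a straight path (κ = 0) with the velocity tangent to the path (θ = 0) and zero yaw rate, this reduces to {dot over (s)}=v and {dot over (r)}=0, matching the discussion above.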
Describing vehicle motion in Frenet coordinates greatly simplifies the motion control task, as only the lateral offset r from the s-axis may need to be regulated. To account for the nonlinear kinematics, feedback linearization can be used. For feedback linearization in general, one starts by defining the controlled output z and differentiating the output with respect to time until the control input u appears in the equation.
For moderate curvature changes as can be expected on highways, the path dynamics are much slower than the vehicle dynamics. To simplify the design, the vehicle dynamics can be neglected in this step. Instead, the side-slip angle may be treated as a known, time-varying parameter similar to the longitudinal velocity.
By defining the lateral offset as the controlled output, the first three derivatives read:

z=r
ż=v sin(θ)
{umlaut over (z)}={dot over (v)} sin(θ)+v cos(θ)(ω−κv cos(θ)/(1−κr))  (2)

where ω is the ego vehicle yaw rate and the last term in the last differential equation above assumes that {dot over (β)}=0. Equating the last equation in equation set (2) with the virtual input η to the feedback linearization 404 and solving for the yaw rate yields the feedback linearizing control:

ωd=(η−{dot over (v)} sin(θ))/(v cos(θ))+κv cos(θ)/(1−κr)  (3)
As annotated above, the first term can be considered the feedback term based on offsets from the path, and the second term is the feedforward part and is dependent on the path's curvature.
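The feedback-linearizing yaw-rate law can be written as a small Python function. This is an illustrative sketch: the split into a feedback term (driven by the virtual input η) and a curvature feedforward term follows the description above, and it assumes v > 0, |θ| < π/2, and κr ≠ 1:

```python
import math

def desired_yaw_rate(eta, v, theta, kappa, r, v_dot=0.0):
    """Feedback-linearizing yaw-rate law (sketch).

    The first term is the feedback part driven by the virtual input eta;
    the second term is the curvature-dependent feedforward part.
    """
    feedback = (eta - v_dot * math.sin(theta)) / (v * math.cos(theta))
    feedforward = kappa * v * math.cos(theta) / (1.0 - kappa * r)
    return feedback + feedforward
```

For example, tracking a curve of radius 100 m (κ = 0.01) at 20 m/s with zero offsets and zero virtual input yields the pure feedforward yaw rate 0.2 rad/s.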
For path feedback control, the feedback linearization yields the double integrator {umlaut over (z)}=η, with the integrator states:

z=r
ż={dot over (r)}=v sin(θ)  (4)
Since a double integrator is a second-order system, the control performance may be defined in terms of settling time ts and overshoot yp. With η=−k0z−k1ż, the closed-loop differential equation reads:
{umlaut over (z)}+k1ż+k0z=0  (5)
By comparing the system to the common representation of second-order systems in terms of natural frequency ω0 and damping ratio ζ:
{umlaut over (z)}+2ζω0ż+ω0²z=0  (6)
the gains can be expressed as:
k0=ω0²
k1=2ζω0  (7)
To compute the gains, the natural frequency and damping ratio can be related to the performance criteria. In the underdamped case, i.e., ζ<1, the following equations hold true:

yp=e^(−πζ/√(1−ζ²))
ts=−ln(ϵ√(1−ζ²))/(ζω0)  (8)

where the parameter ϵ specifies the width of the settling window. For example, for ϵ=0.02, the settling time is defined as the time after which the lateral error does not deviate more than 2% from the desired value. Hence, given the desired settling time and overshoot:

ζ=−ln(yp)/√(π²+ln²(yp))
ω0=−ln(ϵ√(1−ζ²))/(ζts)  (9)
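The gain computation for the underdamped case can be sketched in a few lines of Python, using the standard second-order relations between overshoot, settling time, damping ratio, and natural frequency (a sketch of the recipe described above, not the disclosure's exact implementation):

```python
import math

def gains_from_specs(t_s, y_p, eps=0.02):
    """Compute (k0, k1) for eta = -k0*z - k1*z_dot from a desired settling
    time t_s and relative overshoot y_p (underdamped case)."""
    ln_yp = math.log(y_p)
    # Damping ratio from the overshoot specification
    zeta = -ln_yp / math.sqrt(math.pi ** 2 + ln_yp ** 2)
    # Natural frequency from the settling-time specification
    w0 = -math.log(eps * math.sqrt(1.0 - zeta ** 2)) / (zeta * t_s)
    return w0 ** 2, 2.0 * zeta * w0   # k0 = w0^2, k1 = 2*zeta*w0
```

For instance, a 2-second settling time with 5% overshoot yields a damping ratio of about 0.69, comfortably underdamped.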
In the critically damped case, i.e., ζ=1, the following relation holds for the natural frequency:

ϵ=(1+ω0ts)e^(−ω0ts)  (10)

Numerically solving the above equation for ϵ=0.02 gives:

ω0≈5.83/ts  (11)
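Assuming the standard critically damped settling relation ϵ = (1 + ω0ts)e^(−ω0ts), the product x = ω0ts can be found by bisection, since (1 + x)e^(−x) decreases monotonically for x > 0. This short sketch reproduces the numeric constant:

```python
import math

def critically_damped_constant(eps=0.02, lo=0.0, hi=20.0, tol=1e-10):
    """Solve (1 + x) * exp(-x) = eps for x = omega_0 * t_s by bisection."""
    f = lambda x: (1.0 + x) * math.exp(-x) - eps
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:   # f is decreasing, so the root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For ϵ = 0.02 this gives x ≈ 5.83, i.e., ω0 ≈ 5.83/ts.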
In the kinematics-based control, the desired yaw rate (ωd) can be calculated from the feedback linearization with nonlinear kinematic, and the steering wheel angle command (swacmd) can be calculated with proportional integral (PI) control:
swacmd=−Kp(ω−ωd)−Ki∫(ω−ωd)dt.
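A discrete-time sketch of such a PI yaw-rate-to-steering law follows; the sign convention (negative feedback on the error ω − ωd), the sample time, and the gains are illustrative:

```python
class YawRatePI:
    """Discrete-time PI control of steering wheel angle from yaw-rate
    error (illustrative sketch)."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def command(self, omega, omega_d):
        err = omega - omega_d           # yaw-rate tracking error
        self.integral += err * self.dt  # running integral of the error
        return -self.kp * err - self.ki * self.integral
```

A positive yaw-rate error (actual exceeding desired) produces a negative steering correction, driving the error toward zero.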
However, in the present disclosure, the final steering wheel angle command will not be like the above when the kinematics are combined with the following dynamics.
{dot over (χ)}=Aχ+B1u+B2{dot over (ψ)}des+B3 sin(ϕ),
where e1=r is the lateral deviation from the reference path (still the lane centerline), e2=Δψ is the relative yaw angle, χ=[e1 ė1 e2 ė2], u=δf, g is gravitational acceleration, ϕ is the bank angle, and A, B1, B2, and B3 are system matrices in which Cf and Cr are, respectively, the front and rear tire cornering stiffnesses; lf and lr are, respectively, the front and rear wheelbase lengths (the distances from the center of gravity to the front and rear axles); m and Iz are, respectively, the translational mass and rotational (yaw) inertia; and vx is the vehicle's forward velocity.
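The matrices A, B1, B2, and B3 are not reproduced in the text above. One common textbook form of this lane-keeping error model (the dynamic bicycle model widely used in the path-tracking literature) can be sketched as follows; it is shown as an illustrative assumption, and the disclosure's exact entries may differ:

```python
import numpy as np

def error_dynamics_matrices(Cf, Cr, lf, lr, m, Iz, vx, g=9.81):
    """Bicycle-model error dynamics for chi = [e1, e1', e2, e2'] (sketch
    of a common textbook formulation, not necessarily the disclosure's)."""
    A = np.array([
        [0.0, 1.0, 0.0, 0.0],
        [0.0, -2.0 * (Cf + Cr) / (m * vx), 2.0 * (Cf + Cr) / m,
         2.0 * (Cr * lr - Cf * lf) / (m * vx)],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, 2.0 * (Cr * lr - Cf * lf) / (Iz * vx),
         2.0 * (Cf * lf - Cr * lr) / Iz,
         -2.0 * (Cf * lf ** 2 + Cr * lr ** 2) / (Iz * vx)],
    ])
    B1 = np.array([[0.0], [2.0 * Cf / m], [0.0], [2.0 * Cf * lf / Iz]])
    B2 = np.array([[0.0],
                   [-2.0 * (Cf * lf - Cr * lr) / (m * vx) - vx],
                   [0.0],
                   [-2.0 * (Cf * lf ** 2 + Cr * lr ** 2) / (Iz * vx)]])
    B3 = np.array([[0.0], [g], [0.0], [0.0]])  # road-bank (sin phi) input
    return A, B1, B2, B3
```

The numeric parameters (cornering stiffness, masses, geometry) would come from vehicle identification; none are given in the disclosure.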
The model-based yaw rate controller of the present disclosure can use the system model represented at a path coordinate and replace {dot over (ψ)}des with ωd so that the yaw rate controller uses four states of lateral/heading offset and their derivatives. With the term substitutions indicated by the annotations above, the final form of the controller can be expressed as:
{dot over (χ)}=Aχ+B1u+B2ωd+B3 sin(ϕ)  (12)
where {dot over (ψ)}des can be simply expressed as vx/R (vehicle speed divided by road curve radius) without any consideration of the kinematics of the vehicle relative to the road, whereas ωd is the output of the kinematics-based controller. The substitution thus takes advantage of the full nonlinear kinematics and of the vision signals of lateral/heading offset and road curvature.
The final control effort for the steering wheel angle can be found by any linear or nonlinear controller design with the cascade connection of kinematics and dynamics. For example, either pole placement or linear-quadratic regulation (LQR) may be employed.
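As one concrete option, an LQR gain can be computed with a plain backward Riccati recursion on a discretized model. The sketch below uses a discretized double integrator as a stand-in for the full 4-state error model; the sample time, weights, and model are assumptions for illustration:

```python
import numpy as np

def dlqr_gain(Ad, Bd, Q, R, iters=3000):
    """Iterate the discrete Riccati equation to (near) convergence and
    return the state-feedback gain K for u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
        P = Q + Ad.T @ P @ (Ad - Bd @ K)
    return K

# Illustrative discretized double-integrator lateral-error model
dt = 0.02
Ad = np.array([[1.0, dt], [0.0, 1.0]])
Bd = np.array([[0.5 * dt ** 2], [dt]])
Q = np.diag([1.0, 0.1])   # state weights: offset, offset rate
R = np.array([[0.01]])    # control-effort weight
K = dlqr_gain(Ad, Bd, Q, R)
```

The resulting closed-loop matrix Ad − BdK has all eigenvalue magnitudes below one, i.e., the regulated error dynamics are stable.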
Representative signals of environment are in the left column of each of
In
The example of process 800 illustrated in
A kinematics control (e.g., kinematics-based vehicle modeling 305) uses the vehicle lateral offset r and the vehicle heading offset Δψ to determine the yaw rate needed to maintain vehicle travel along the reference path (step 804). Using the yaw rate determined by the kinematics control, a dynamics control (e.g., dynamics-based vehicle modeling 306) determines a steering angle necessary to maintain vehicle travel along the reference path (step 805). A steering control (e.g., steering ECU 207) receives the steering angle determined by the dynamics control and generates a steering command (step 806), which is used to provide steering assistance for lane keeping and/or lane centering.
Optionally, the process may include a check of whether the vehicle speed (determined, for example, from the input signal for the speedometer within the dashboard ECU 209) exceeds a safety limit corresponding to the steering angle determined by the dynamics control (step 807). If not, another iteration of the process is started. If so, the process activates a brake control (step 808) or illuminates a warning indicator until the vehicle speed is sufficiently reduced, and another iteration of the process is started.
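One plausible form of the step-807 check bounds the lateral acceleration implied by the commanded steering angle; the limit formula (a kinematic approximation) and the threshold value are assumptions for illustration, not taken from the disclosure:

```python
import math

def speed_exceeds_safety_limit(v, delta_f, wheelbase, a_lat_max=5.0):
    """Return True if speed v at steering angle delta_f would imply more
    lateral acceleration than a_lat_max, using the kinematic
    approximation a_lat ~= v**2 * tan(delta_f) / wheelbase."""
    if abs(delta_f) < 1e-6:
        return False  # essentially straight ahead: no kinematic limit
    v_max = math.sqrt(a_lat_max * wheelbase / abs(math.tan(delta_f)))
    return v > v_max
```

For a 2.8 m wheelbase, a 0.1 rad steering angle at 40 m/s would trip the check, while modest speeds at small angles would not.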
It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
The description in this patent document should not be read as implying that any particular element, step, or function is an essential or critical element that must be included in the claim scope. Also, none of the claims is intended to invoke 35 U.S.C. § 112(f) with respect to any of the appended claims or claim elements unless the exact words “means for” or “step for” are explicitly used in the particular claim, followed by a participle phrase identifying a function. Use of terms such as (but not limited to) “mechanism,” “module,” “device,” “unit,” “component,” “element,” “member,” “apparatus,” “machine,” “system,” “processor,” “processing device,” or “controller” within a claim is understood and intended to refer to structures known to those skilled in the relevant art, as further modified or enhanced by the features of the claims themselves, and is not intended to invoke 35 U.S.C. § 112(f).
While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.