This invention relates to minimization of modeling error and control error for an aircraft system.
If an aircraft or spacecraft encounters a failure (such as a jammed control surface or loss of a surface), most controllers cannot adapt to the failure and a crash may occur. In most cases, however, the vehicle has enough redundant actuation mechanisms to be salvaged. Several airplane crashes have occurred in which the pilot was unable to control the damaged airplane, because the pilot could not learn to fly the altered aircraft configuration in the very short time available. The flight computer, however, may have the necessary information, as well as the bandwidth, to learn the new dynamics and control the vehicle within a reasonable time interval.
The flight computer needs an intelligent controller that flies the vehicle with the baseline controller during nominal conditions, and adapts the design when the vehicle suffers damage. Thus, given the information about the vehicle from all the available sensors, the control system needs to determine whether the vehicle is in its nominal state or is damaged. One approach is to utilize smart algorithms that attempt to identify the vehicle characteristics and to change the control system, if necessary. This approach is known as Indirect Adaptive Control. For systems such as airplanes, there is usually very little time available to make changes to the control system, and this indirect approach is often insufficient to achieve the desired safety metrics. Another approach, known as Direct Adaptive Control (“DAC”), looks directly at the errors and updates the control law accordingly. This is typically much faster and meets the timing requirements for airplane system implementations.
The current state-of-the-art implementation is the Intelligent Flight Control architecture, which uses a DAC approach. This has been implemented by us at the NASA Ames Research Center, and has been flight tested on the F-15 research aircraft at the Dryden Flight Research Center. The update law uses the tracking error to change the control law. This approach is based on work at the Georgia Tech Aerospace Engineering Department, under R. T. Rysdyk and A. J. Calise, “Fault Tolerant Flight Control Via Adaptive . . . Augmentation,” AIAA 98-4483.
When operating in the real world, an airplane will always have tracking errors associated with its states. For example, when a pilot provides an aggressive stick command, there is always a large transient tracking error that eventually disappears as the controller continues to perform. Adaptation should typically occur only when the aircraft experiences damage or a change in its flight configuration that the original control design cannot deal with. Usually, much effort goes into the design of the nominal baseline controller, which should be changed only if necessary.
What is needed is a DAC implementation that looks not just at the tracking error itself, but at its characteristics, or evolution over time, to determine whether the controller needs to be adapted or left alone. The time evolution of the tracking error provides clues for investigating whether the system is in good health or has undergone damage or faults. This crucial piece of available information remains unutilized in existing DAC approaches.
This invention presents a novel, stable, discrete-time adaptive law, designed and implemented for flight control, that targets damage and modeling errors in a direct adaptive control (DAC) framework. The approach is based on the observation that, where modeling errors are not present, the original control design has been tuned to achieve the desired performance. The adaptive control should, therefore, work toward recovering this performance only when the design has modeling uncertainties or errors, or when the vehicle suffers damage or a substantial flight configuration change. In this work, the baseline controller uses dynamic inversion with proportional-integral augmentation. Dynamic inversion is carried out using the assumed system model. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to a dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. Contrary to typical Lyapunov-based adaptive approaches, which guarantee only stability, the current approach investigates conditions for both stability and performance. A high-fidelity F-15 aircraft model is used to illustrate the overall approach.
Operationally, the aircraft plant dynamics are modeled using the original plant description without changes, and the parameters representing the plant components are monitored. Under normal conditions, the controller responds to an excursion in the tracking error e(k), which is the difference between the desired and the actual aircraft behavior, and drives this tracking error toward a zero value along an asymptotic curve that is characteristic of the controller. If the tracking error does not conform to, or lie close to, this asymptotic curve, a resulting error (the difference between the desired error behavior and the actual error behavior) is observed. This difference, called the performance error E(k), represents a difference between normal aircraft parameters and damaged aircraft parameters, and its components are monitored.
Assume that the system senses that (1) at least one component of aircraft tracking error e(k) is experiencing an excursion and (2) the return of this component value toward a reference value (e.g., a constant, such as 0) is not proceeding according to the expected controller characteristics (which gives rise to a non-zero magnitude |E(k)| above an expected threshold magnitude). Only when both conditions (1) and (2) are satisfied will the system reactivate the neural network (NN), change the plant dynamics according to the NN, and change the modeling of aircraft plant operation. Where condition (1) is satisfied but the return of the vector component e(k) toward the reference value proceeds according to the controller characteristics (E(k)=0), or within a selected neighborhood of this asymptote, so that condition (2) is unsatisfied, the system will not change modeling of the plant operation. In this latter instance, the NN will continue to model operation of the aircraft plant according to the original model. In a prior art approach, as long as condition (1) is satisfied, modeling of the aircraft plant dynamics is changed, irrespective of whether the components of the vector E(k) are following the controller characteristics.
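The two-condition adaptation gate described above can be sketched as follows. This is a minimal illustration, not part of the invention: the function name and both threshold values are illustrative assumptions.

```python
# Hypothetical sketch of the two-condition adaptation gate described above.
# The threshold names and values (E_TOL, EXCURSION_TOL) are illustrative.

E_TOL = 0.05          # threshold on performance-error magnitude |E(k)|
EXCURSION_TOL = 0.01  # threshold for calling e(k) an "excursion"

def adaptation_required(e_k, E_k):
    """Return True only when BOTH conditions hold:
    (1) some component of the tracking error e(k) shows an excursion, and
    (2) |E(k)| exceeds the expected threshold, i.e. the tracking error is
        not decaying along the controller's nominal asymptote."""
    excursion = any(abs(c) > EXCURSION_TOL for c in e_k)        # condition (1)
    off_asymptote = sum(c * c for c in E_k) ** 0.5 > E_TOL      # condition (2)
    return excursion and off_asymptote
```

In this sketch the prior-art behavior would correspond to gating on condition (1) alone; the invention adapts only when both conditions hold.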
An adaptive controller, according to the invention, updates the nominal baseline control approach only if a modeling error is present, damage occurs, or a substantial change in flight configuration occurs, which cannot be corrected in a conventional manner by the controller.
Control Architecture.
The control system is given a command, ycom(k+1) (e.g., a pitch rate command from the pilot's stick). Given the knowledge of how fast or slow the aircraft plant can handle such a command, the command is typically passed through a second order reference model, with appropriate damping and natural frequency, to obtain the corresponding achievable reference command yref(k+1). It is important to note that the value of this reference signal for time index (k+1) is not necessarily computed at time index (k+1); rather, it is the desired reference value of the output at time index (k+1), computed at time index (k). The controller is designed to achieve a prescribed second order error dynamics with respect to this reference command. Let this error dynamics, in a discrete form, be given in scalar form as:
e(k+1) + KPe e(k) + K1e e1(k) = 0, (1A)

e(k) = y(k) − yref(k), (1B)

where e1(k) represents the integrated error up to time index k, and KPe and K1e are gains, chosen appropriately to obtain the desired transient response characteristics. Equation (1), with the definition of the error e(k), is used to compute the control input that achieves the desired error dynamics, as follows. Equation (1) can be re-expressed as
y(k+1) = yref(k+1) + KPe{yref(k) − y(k)} − K1e e1(k) (2)
The plant output y(k+1) must satisfy Eq. (2) to achieve the prescribed second order error dynamics. The right hand side of Eq. (2) can thus be labeled as ydes(k+1), the desired plant output. Thus,
ydes(k+1) = yref(k+1) + KPe{yref(k) − y(k)} − K1e e1(k) (3)
Again, note that this value of the desired output at time index (k+1) is computed at time index (k). Let the plant dynamics be given as:

y(k+1) = f(x(k)) + g(x(k)) u(k) (4)

We can thus invert the dynamics represented by Eq. (4) to compute the control input u(k) that achieves the desired error dynamics, Eq. (1), as:

u(k) = {ydes(k+1) − f(x(k))}/g(x(k)) (5)

where f and g are functions characterizing the plant and x(k) is the plant state.
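The inversion step of Eqs. (1)-(5) can be sketched in scalar form as follows. This is a minimal illustration under assumed values: the plant functions f and g and the gains below are illustrative placeholders, not the aircraft model of the invention.

```python
# Minimal scalar sketch of the dynamic-inversion step of Eqs. (1)-(5),
# assuming an affine plant y(k+1) = f(x(k)) + g(x(k))*u(k).
# The gains and plant functions below are illustrative placeholders.

K_Pe, K_Ie = -0.5, 0.02   # error-dynamics gains (illustrative values)

def f(x):                  # assumed plant drift term
    return 0.9 * x

def g(x):                  # assumed (nonzero) control effectiveness
    return 2.0

def control_input(x_k, y_k, y_ref_k, y_ref_kp1, e1_k):
    # Eq. (3): desired output that enforces the prescribed error dynamics
    y_des = y_ref_kp1 + K_Pe * (y_ref_k - y_k) - K_Ie * e1_k
    # Eq. (5): invert the plant model to realize y_des
    return (y_des - f(x_k)) / g(x_k)
```

With exact knowledge of f and g, applying this input reproduces the prescribed error dynamics exactly; the adaptive augmentation discussed next handles the case where only estimates are available.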
This control input, with exact knowledge of the plant (f and g), achieves the desired second order error dynamics. With modeling uncertainties and other errors, we do not know f and g exactly, but only their estimates, f^ and g^, given by the model. The adaptive augmentation is now designed to offset these modeling errors, so that the same error dynamics, and hence the desired performance, is recovered. With the adaptive augmentation, the desired output becomes:
ydes(k+1) = yref(k+1) + KPe{yref(k) − y(k)} − K1e e1(k) − yad(k) (6)
The control input is given as:

u(k) = {ydes(k+1) − f^(x(k))}/g^(x(k)) (7)
To analyze the effect of this control input, we look at the modeling error, which is defined as the difference ε(k+1) between the actual plant output and that predicted by the model:

ε(k+1) = y(k+1) − {f^(x(k)) + g^(x(k)) u(k)} (8)
Substituting the expression for the control input, given by Eq. (7), in Eq. (8) gives:
ε(k+1) − yad(k) = y(k+1) − [yref(k+1) + KPe{yref(k) − y(k)} − K1e e1(k)] (9)
In terms of the definition of the tracking error, Eq. (9) can be written as:
e(k+1) + KPe{y(k) − yref(k)} + K1e e1(k) = ε(k+1) − yad(k) (10)
Equation (10) represents a key equation of this approach. The left hand side of Eq. (10) is the desired second order error dynamics. The right hand side of Eq. (10) is the difference between the modeling error and adaptive augmentation signal input. Equation (10) indicates that, if the adaptive augmentation signal can learn the modeling error and cancel this error, the error dynamics of this control loop will be restored to its desired nature. In other words, we will recapture the performance desired from this control loop. We, therefore, define the left hand side of Eq. (10) as the performance error, E(k), which is more realistically expressed as a vector of performance error components.
E(k+1) = e(k+1) + KPe e(k) + K1e e1(k) (11)
We can now form a Lyapunov function of the performance error as:
L(k) = γ|E(k)|², (12)
An update law can now be devised for the adaptive augmentation input, yad, that imposes monotonically decreasing behavior on this Lyapunov function.
Parameterization and Update Laws for the Adaptive Augmentation.
In this section, we investigate two questions. The first relates to the parameterization of the modeling error, and the second relates to the choices for designing stable update laws.
1. Parameterization for a Linear System:
Consider a linear system of the form:
x(k+1)=Ax(k)+Bu(k), (13)
where x and u are vectors of the plant state variables and the control inputs, and A and B are system matrices. In a manner similar to that illustrated by Eqs. (3)-(5), the control input is computed as:
u(k) = B^−1{xref(k+1) + KPe e(k) + K1e e1(k) − xad(k) − A^x(k)} (14)
where A^ and B^ are estimates of the system A and B matrices. If the system matrices (A, B) are known, adaptive augmentation is not needed, and the control input is computed as:
u(k) = B−1{xref(k+1) + KPe e(k) + K1e e1(k) − Ax(k)} (15)
If these control inputs are to provide the same desired error dynamics, they must be equated, which gives the form of the idealized value of the augmentation signal xad(k).
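The linear-case computation of Eq. (14) can be sketched as follows. This is a hedged illustration: the gain values are placeholders, and with exact matrices (A^ = A, B^ = B) and xad(k) = 0 the same function reduces to Eq. (15).

```python
# Sketch of the linear-case control input of Eq. (14), using estimated
# system matrices A_hat and B_hat; all numerical values are illustrative.
import numpy as np

K_Pe, K_Ie = -0.5, 0.02   # illustrative error-dynamics gains

def control_input(A_hat, B_hat, x_k, x_ref_kp1, e_k, e1_k, x_ad_k):
    # Eq. (14): u(k) = B^-1 { xref(k+1) + KPe e(k) + K1e e1(k)
    #                          - xad(k) - A x(k) }
    rhs = x_ref_kp1 + K_Pe * e_k + K_Ie * e1_k - x_ad_k - A_hat @ x_k
    return np.linalg.solve(B_hat, rhs)   # avoids forming B_hat**-1 explicitly
```

Solving the linear system rather than inverting B_hat is the standard numerically preferred form of the same computation.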
2. Parameterization for a Non-Linear System Affine in Control:
Consider a non-linear system that is affine in control, and whose dynamics can be written as linear in parameters.
x(k+1) = Wf βf(k) + B u(k) (18)
where Wf is the linear dynamic weight matrix, and the vector βf corresponds to the linear and/or nonlinear functions of the system state. The control input is computed in a similar manner as:
u(k) = B^−1{xref(k+1) + KPe e(k) + K1e e1(k) − xad(k) − Wf^ βf(k)} (19)
where Wf^ and B^ are the corresponding estimates of the system matrices. By carrying out an analysis similar to the linear system case, the ideal augmentation signal can be computed to be:
Equations (17) and (20) imply that the ideal augmentation signal can be written as:
xad(k) = (W*ad)tr βf(k) (21)
with the ideal weights, W*ad, and the basis functions, β, as given in Eqs. (16) and (19). These are the same basis functions used in Rysdyk and Calise, ibid. Thus, we can parameterize a neural network in this form, and compute the ideal weights iteratively using an appropriate update algorithm.
3. Update Laws for the Adaptive Augmentation.
Having looked at the question of parameterization, we now construct a stable update law for the parameters Wad. Parameterizing the adaptive augmentation signal in the form given by Eq. (21), and using the definition of the performance error as given in Eq. (11), Eq. (10) can be rewritten in vector form as:
E(k+1) = ε(k+1) − xad(k), (22)

Compared to Eq. (10), this is written for an error vector, E, corresponding to the general case of multiple control loops. Written in this form, the equation indicates that one estimates the vector modeling error, ε(k+1) (for all loops), using the adaptive augmentation signal xad(k). The vector E(k+1) is the corresponding error in the estimate. This error dynamics for the performance error E(k) corresponds to a system-identification-like problem, which opens up a host of approaches for performing the identification on-line. In this work, we consider a normalized gradient update approach.
4. Normalized Gradient Update.
Let Ei(k) denote the ith element of the vector performance error E(k). Let W*ad,i represent the ith column vector of the weight matrix W*ad, which contains the ideal weights that minimize the performance error vector components Ei(k) to Δ* = {δ*1, . . . , δ*I}.
Similarly, let Wad,i represent the ith column vector of the current estimate of the ideal weight matrix. The update law for each of these column vectors of the weight matrix is given as:
Wad,i(k) = Wad,i(k−1) + {γ Ei(k) β(k−1)}/{1 + βtr(k−1) β(k−1)} (23)
The parameter γ (Eq. (12)) corresponds to the learning rate, which lies in the range

0 < γ ≤ 2 (24)
Reference [11] proves that, with this weight update law, the performance error Ei(k) is monotonically decreasing for all i. Further, it is known that, if the system experiences sufficiently persistent excitation, the weights Wad,i approach the ideal weights W*ad,i.
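The normalized gradient update of Eq. (23) can be sketched as follows, together with a toy check of the monotone decrease of the performance error. The regressor, modeling error, and learning rate values are illustrative assumptions.

```python
# Sketch of the normalized-gradient update of Eq. (23) for one column
# Wad_i of the weight matrix; gamma is the learning rate of Eq. (24).
import numpy as np

def ngd_update(W_i, E_i, beta_prev, gamma=1.0):
    # Eq. (23): Wad,i(k) = Wad,i(k-1)
    #           + gamma * Ei(k) * beta(k-1) / (1 + beta(k-1)^tr beta(k-1))
    return W_i + gamma * E_i * beta_prev / (1.0 + beta_prev @ beta_prev)

# Toy check: constant scalar modeling error eps and fixed regressor beta;
# the performance error |E| should decrease monotonically per Eq. (22).
beta = np.array([1.0, 0.5])      # illustrative basis-function values
eps, W = 0.8, np.zeros(2)        # illustrative modeling error, zero weights
errs = []
for _ in range(50):
    E = eps - W @ beta           # scalar instance of Eq. (22)
    errs.append(abs(E))
    W = ngd_update(W, E, beta)
```

Each step multiplies the performance error by the factor (1 − γ‖β‖²/(1 + ‖β‖²)), whose magnitude is below one for 0 < γ ≤ 2, which is the range stated in Eq. (24).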
5. What Happens to Tracking Error?
The final part of this analysis investigates the behavior of the tracking error e(k). This work provides an update only when a modeling error is present, as opposed to mere presence of tracking error. However, the tracking error is what is ultimately important. It is, therefore, appropriate to analyze the asymptotic behavior of the tracking error, given the behavior of the performance error. For simplicity, in this analysis we consider the case where the desired error dynamics is first order, so that the performance error becomes:

Ei(k) = ei(k+1) − KPe ei(k) (25)

Let |Ei(k)| < δ for all times after time index k, where δ is some small positive scalar. This implies

|ei(k+1) − KPe ei(k)| < δ, (26)
From the triangle inequality,

|ei(k+1) − KPe ei(k)| ≧ |ei(k+1)| − |KPe| |ei(k)|. (27)
Equations (26) and (27) imply:
|ei(k+n)| < |KPe|ⁿ |ei(k)| + δ{1 + |KPe| + . . . + |KPe|ⁿ⁻¹} (28-n)
Because |KPe| < 1 for stable error dynamics, as k→∞, |ei(k)| is bounded above as:

|ei(k)| < δ/{1 − |KPe|} (29)
Thus, if the performance error is bounded, Eq. (29) establishes bounds on the tracking errors. A similar analysis can be carried out for second order error dynamics. In summary, as long as the desired error dynamics (first or second order) is stable, the tracking error is bounded above, given that the performance error is bounded.
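The bound of Eq. (29) can be checked numerically with the first-order recursion implied by Eq. (25); the gain, threshold, and initial error below are illustrative values, not part of the invention.

```python
# Numerical check of the bound in Eq. (29): run the first-order error
# recursion e(k+1) = K_Pe*e(k) + E(k) with a performance error held just
# inside the threshold, |E(k)| = 0.9*delta, and compare the steady state
# against delta / (1 - |K_Pe|). All numbers are illustrative.
K_Pe, delta = 0.5, 0.1
bound = delta / (1.0 - abs(K_Pe))     # right-hand side of Eq. (29)
e = 5.0                               # large initial tracking error
for _ in range(200):
    e = K_Pe * e + 0.9 * delta        # near-worst-case performance error
# after the |K_Pe|**n transient of Eq. (28-n) decays, |e| sits below bound
```

Here the steady-state error settles at 0.9δ/(1 − |KPe|), strictly inside the bound, illustrating that a bounded performance error yields a bounded tracking error.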
6. Application to Aircraft Control.
The modeling-error-driven, performance-seeking adaptive control design was implemented for aircraft roll, pitch, and yaw rate control. The NASA Intelligent Flight Controller (IFC) design has been tested, and is currently undergoing various modifications in preparation for flight testing on the research F-15 aircraft. The IFC design implements the adaptive control design outlined by Rysdyk and Calise, ibid. For implementing the performance-seeking adaptive augmentation, the requirement was that it fit within the existing architecture. The main issue in the implementation is that the baseline controller in the IFC architecture uses continuous-time aircraft dynamic inversion, whereas the proposed design is formulated in discrete time; the equations outlined in the preceding sections assume a discrete-time model inversion. We realized, however, that after reducing the problem to the core error dynamics, the two problems become identical. The error equation for the continuous-time implementation, for a desired second-order error dynamics for a scalar error e, is given as:
(de/dt) + KPe e + K1e ∫e(t′)dt′ = ε − Uad. (30)
The error is defined in the same manner as in the discrete case (e.g., q − qref). The modeling error, ε, however, corresponds to the difference between the acceleration predicted by the model and the acceleration actually observed. Similarly, Uad represents the augmentation acceleration command given by the adaptive block. If the left hand side of Eq. (30) is discretized while maintaining the continuous-time constants, the resulting scalar discrete-time equation is given as:
Defining the left hand side of Eq. (31) as the modified performance error, E^(k), one obtains
E^(k) = ε − Uad(k). (32)
This modified performance error equation is identical to the discrete-time version given by Eq. (22). The adaptive augmentation acceleration signal Uad(k) can be parameterized in a similar manner, and the same update laws remain valid for the parameters of this augmentation signal for reducing E^(k). A zero value of this modified performance error restores the second order error dynamics (LHS of Eq. (31)) to zero, and thereby regains the desired performance from the control loops. Formulated in this manner, this adaptive approach fits within the existing IFC framework, and is considered an alternate approach for flight testing. In the following discussion, we present some results of this implementation on the high fidelity model of the modified F-15 aircraft used at the NASA Dryden Flight Research Center. The adaptive control architecture is kept the same as in the original IFC design. This design has three loops, one each for pitch, yaw, and roll, and adaptive augmentation is provided to each loop. Kaneshige and Burken, “Enhancements to a Neural Adaptive Flight Control System for a Modified F-15 Aircraft,” AIAA-2008-6986, give details of the implementation approach, such as the choice of basis functions; the only difference here is that the update law is given by Eq. (23). In this study, we examine two cases. In the first case, the right stabilator is locked at 4 degrees at t=10 sec into the flight experiment. In the second case, the canard multiplier is set at −1, again at t=10 sec into the flight experiment. The behavior of the aircraft and the update algorithm is examined for the given longitudinal and lateral pilot stick inputs.
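Since Eq. (31) is not reproduced above, the discretization step can only be sketched under an assumption: the following uses a forward-Euler reading of the left hand side of Eq. (30) (a finite difference for de/dt and a running sum for the integral), with illustrative gains and sample time.

```python
# Hedged sketch: one plausible forward-Euler discretization of the
# left-hand side of Eq. (30), keeping the continuous-time gains. The
# result plays the role of the modified performance error E^(k);
# dt and the gains are illustrative, and Eq. (31) itself may differ.
K_Pe, K_Ie, dt = 2.0, 1.0, 0.02

def modified_performance_error(e_k, e_km1, e_sum):
    """Discretized LHS of Eq. (30): de/dt + K_Pe*e + K_Ie*Integral(e),
    where e_sum is the running sum of error samples up to index k."""
    de_dt = (e_k - e_km1) / dt            # finite difference of de/dt
    return de_dt + K_Pe * e_k + K_Ie * dt * e_sum
```

Under this reading, the modified performance error vanishes exactly when the sampled error history satisfies the discretized second-order error dynamics, mirroring the role of E(k+1) in Eq. (22).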
This invention was made, in part, by one or more employees of the U.S. government. The U.S. government has the right to make, use and/or sell the invention described herein without payment of compensation, including but not limited to payment of royalties.