Embodiments described herein include dynamic system control using deep machine learning. In one embodiment, a nonlinear dynamic control system is defined by a set of equations that include a state vector z and one or more control inputs u. Via a machine learning method, a sub-optimal controller is derived that stabilizes the nonlinear dynamic control system at an equilibrium point. The sub-optimal controller is retrained to be used as a stabilizing controller for the nonlinear dynamic control system under general operating conditions.
In another embodiment, a nonlinear dynamic control system is defined by a set of equations that include a state vector z and one or more control inputs u. Via a machine learning method, a sub-optimal controller is derived that stabilizes the nonlinear dynamic control system at an equilibrium point. The sub-optimal controller is retrained to be used as a stabilizing controller for the nonlinear dynamic control system under general operating conditions. Retraining the sub-optimal controller involves a model predictive control (MPC) approach in which a parameterized, state-dependent control map is learned by solving a sequence of finite horizon optimal control problems.
In another embodiment, a controlled apparatus has a dynamic, closed-loop, stabilizing controller used to control a state of the controlled apparatus. A computer has a memory coupled to a processor. The memory includes instructions that cause the processor to: define a dynamic control system by a set of equations that include a state vector z of the controlled apparatus and one or more control inputs u of the controlled apparatus; via a machine learning method, derive a sub-optimal controller that stabilizes the controlled apparatus at an equilibrium point; retrain the sub-optimal controller to be used as the stabilizing controller for the controlled apparatus under general operating conditions; and facilitate transferring the stabilizing controller to the controlled apparatus. These and other features and aspects of various embodiments may be understood in view of the following detailed discussion and accompanying drawings.
The discussion below makes reference to the following figures, wherein the same reference number may be used to identify the similar/same component in multiple figures. The drawings are not necessarily to scale.
The present disclosure relates to machine learning in control systems. Machine learning platforms, e.g., deep neural networks, have become popular due to their successes in natural language processing and image processing. Increasingly, solutions based on deep learning algorithms have also made their way into many other applications. This disclosure demonstrates how deep learning platforms can be used for control systems. A deep learning platform may be trained to learn control policies, which can be challenging for some types of problems. The approaches described herein are different from reinforcement learning algorithms sometimes applied to control problems, although some challenges are common to both. To illustrate the approach, an example is presented that solves the stabilization of an inverted pendulum system at its unstable equilibrium point.
Embodiments described below illustrate how deep learning (DL) platforms can be used to learn control policies. The challenges and proposed approaches are described in some detail, as well as practical applications. The proliferation of DL algorithms is enabled by their ability to deal with large amounts of data and by their success stories in natural language processing and image processing, with direct applications in autonomous vehicle control. Deep learning is based on large scale, first-order, gradient-based optimization algorithms. Many DL platforms also feature automatic differentiation, which leads to the ability to accurately evaluate loss function gradients. More generally, there is a growing interest in “differentiable programming,” a programming paradigm where programming constructs are enhanced with differential operators. In such a paradigm, the approach is to construct a causality graph describing the control flow and data structures of the program, and to apply the differential operators by backtracking on the causality graph. The graph can be static (TensorFlow) or dynamic (PyTorch, Autograd).
In this disclosure, automatic differentiation (AD) is used to learn control policies, and Autograd is an example tool used for executing AD. The working example for illustrating the design of control policies is the stabilization of the inverted pendulum, a nonlinear system with an unstable equilibrium point. Gradient based optimization algorithms are used to learn parameterized, state dependent control maps (e.g., neural networks) that stabilize the pendulum at the unstable equilibrium point. The optimization algorithms train the controller parameters based on a state and control dependent loss function. Control inputs can also be learned explicitly by treating them as optimization variables. It can be shown that this process is non-trivial and that the choice of initial values of the controller parameters is key to the success of the learning process. Algorithms are proposed for deriving stabilizing, sub-optimal controllers that are used as initial conditions for the DL algorithms. The initial control policy provided by these sub-optimal controllers is then improved by minimizing a loss function. DL platforms have already been successfully used for learning stabilizing controllers using reinforcement learning. As described below, many of the challenges found in reinforcement learning algorithms can also be found in these new approaches.
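For illustration, a minimal sketch of this workflow is given below. It uses the Autograd package to differentiate a rollout loss with respect to the parameters of a linear state-feedback controller; the discrete double-integrator dynamics and the quadratic loss are illustrative assumptions, not the pendulum model used later.

```python
# Minimal sketch: differentiate a rollout loss with respect to the parameters
# of a linear state-feedback controller using Autograd. The discrete
# double-integrator dynamics and quadratic loss are illustrative placeholders.
import autograd.numpy as np
from autograd import grad

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # placeholder discrete dynamics
b = np.array([0.0, 0.1])

def rollout_loss(beta, z0, steps=50):
    """Quadratic loss accumulated along the closed-loop trajectory u = beta^T z."""
    z, loss = z0, 0.0
    for _ in range(steps):
        u = np.dot(beta, z)              # linear state-feedback controller
        loss = loss + 0.5 * (np.dot(z, z) + u ** 2)
        z = np.dot(A, z) + b * u         # propagate the closed loop one step
    return loss

loss_grad = grad(rollout_loss)           # gradient with respect to beta
beta0 = np.array([-1.0, -1.5])           # initial controller parameters
z0 = np.array([1.0, 0.0])
print(rollout_loss(beta0, z0), loss_grad(beta0, z0))
```

The returned gradient is exactly what a first order optimizer would consume to update the controller parameters.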
Notations: Vector or matrix norms are denoted by ∥⋅∥; indices refer to a particular norm when needed. A (closed) neighborhood of size ε around a point z0 is defined as Bε(z0)={z : ∥z−z0∥≤ε}. The Jacobian of a vector valued function ƒ(x) is denoted by ∇ƒ(x); the Jacobian becomes the gradient for a scalar valued function. For a function ƒ(x,y), ∇xƒ(x,y) denotes its partial derivative with respect to x, evaluated at (x,y). The spectral radius of a matrix A is denoted by ρ(A).
In the rest of this disclosure, Section II describes the problem setup and the working example. Section III shows the connection between the closed loop system and the stability of the learning algorithm. Section IV describes three approaches for solving optimal control problems, and how ideas and tools from machine learning can be instantiated in control problems. Section V provides additional examples.
Assume that a dynamical system is described by a possibly nonlinear ordinary differential equation (ODE) given by Equation (1) below, where z denotes the state vector, u represents the input signal, and w is a set of system parameters.
ż=ƒ(z,u;w) (1)
Without loss of generality, it may be assumed that z=0 is a, possibly unstable, equilibrium point. One objective is to learn control inputs that stabilize the state at the equilibrium point. State-dependent, parametric control schemes are considered (e.g., u=g(z;β), where β are the controller parameters) as well as non-parametric control schemes (e.g., control inputs are optimization variables). In addition, some state-dependent loss function L(z0,u)=∫0^T l(z,u)dt can be optimized for some time horizon T and some initial condition z0. Control theory offers many options to approach this problem, including methods based on linearization, Lyapunov functions or model predictive control approaches. In this disclosure, DL models and platform features are used to learn control policies. Principles such as transfer learning can be used to ensure the stability of the learning process and reduce the time to compute an optimal solution. More formally, the goal is to solve an optimization problem of the form minu L(z0,u).
Additional conditions can be imposed on u, such as its form (e.g., u=g(z;β)) or magnitude limitations (e.g., u belongs to some bounded set). In control theory, typical loss functions are quadratic in the state and the control input: l(z,u)=(1/2)(zTQz+uTRu) for some positive semi-definite matrices Q and R. If ƒ(z,u;w) is linear in the state and input, the linear quadratic regulator (LQR) optimal control problem is recovered.
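For reference, when the dynamics are linear (or linearized), the quadratic problem can be solved directly with standard tools. The sketch below uses SciPy's continuous-time algebraic Riccati solver; the matrices A and B are placeholders standing in for the linearization of ƒ around the equilibrium point.

```python
# Sketch of the LQR baseline: for linear(ized) dynamics zdot = Az + Bu and
# loss 0.5*(z'Qz + u'Ru), the optimal controller is u = -Kz. The matrices
# A and B are placeholders for the linearization of f at the equilibrium.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # assumed (unstable) linearization
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # state penalty (positive semi-definite)
R = np.array([[1.0]])                    # input penalty (positive definite)

P = solve_continuous_are(A, B, Q, R)     # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)          # optimal gain, u = -K z
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```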
For a working example, an inverted pendulum model is used, a diagram of which is shown in
where x and v are the cart's position and velocity, respectively, while θ and ω are the pole's angle and angular velocity, respectively. The symbol F denotes the force acting on the cart and plays the role of the input signal. The state vector is given by zT=[x;v;θ;ω]. The remaining symbols represent parameters of the system and their values are listed in Table I, where M is the mass of the cart, m is the mass of the pendulum, g is the acceleration of gravity, J is the moment of inertia of the pendulum, and b is the coefficient of friction affecting the cart.
Note that in its original form, the dynamics of the inverted pendulum system are represented as a differential algebraic equation (DAE). We explicitly solved for the accelerations to generate the ODE form. It is not always possible to transform DAEs into ODEs, and in such cases DL platforms (e.g., PyTorch or TensorFlow) cannot be used directly since they do not support DAE solvers. Instead, a platform is used that supports DAE solvers enhanced with sensitivity analysis capabilities, such as DAETools, a Python package.
Deep learning platforms require training data and first order gradient-based algorithms to learn model parameters. When learning control policies, one starts with a set of initial conditions of the system ODE, and the training data is generated online through model simulation, e.g., by solving an ODE for each iteration of the learning algorithm. For brevity, a continuous time version of the learning algorithm is used that can be readily discretized to obtain the familiar gradient descent algorithm. To simplify the application of the chain rule, when considering a state-dependent parameterized map, it is assumed that g(z;β) has already been substituted into the loss function l and the vector field ƒ, so that they depend only on z and β. The continuous dynamics of the gradient descent algorithm are given by Equation (6a) below, and the partial derivative of the loss function can be explicitly written as in Equation (6b):

β̇=−α∇βL(β), (6a)

∇βL(β)=∫0^T(∇zl(z;β)zβ+∇βl(z;β))dt, (6b)

where α is the learning rate.
For notational brevity, let zβ=∇βz denote the sensitivity of the state vector with respect to the controller parameters. The sensitivity zβ has its own dynamics, given by:

żβ=∇zƒ(z;β)zβ+∇βƒ(z;β), zβ(0)=0. (7)
A first observation is that Equation (7) is identical to the linearized system dynamics around the equilibrium point, up to the initial conditions. More importantly, such linearization enables the stability analysis of the closed loop system around the equilibrium point. In particular, as is well established in control theory, the spectral properties of the Jacobian ∇zƒ(0;β) determine the behavior of the system near the equilibrium point. We summarize this result in the following proposition.
Proposition 3.1: Assume that the real parts of the eigenvalues of the matrix ∇zƒ(0;β) are negative. Then there exists a scalar ε>0 such that limt→∞∥z(t)∥=0 for all z0∈Bε(0).
The proposition above provides the means to test, at least locally, the stabilizing properties of control maps. The main message here is that the stability of the sensitivity vector zβ is directly related to the (local) stability of the closed loop system. More importantly, an unstable sensitivity vector will induce instability in the gradient of the loss function. As a result, unless the gradient descent is able to update the parameters β fast enough such that the closed loop system becomes stable, the gradients of the loss function will grow unbounded leading to the instability of the parameter learning algorithm itself.
Since gradient based algorithms are first order algorithms, they typically have a slow convergence rate. Hence, unless the initial controller parameters induce a stable closed loop system, the learning algorithm will very likely fail. An experiment was performed to assess the stability properties of randomly chosen initial controller parameters. A linear map was considered for the controller since it has only four parameters. We drew 10^5 controller parameter vectors uniformly from the interval [−10, 10]. Out of the 10^5 parameter choices, only 18 controllers were stabilizing near the equilibrium point. This suggests an extremely high probability (0.99982 in this case) of starting with an unstable controller.
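The experiment can be reproduced in outline as follows; the matrices A and B are placeholders for the pendulum linearization at the upright equilibrium, so the exact fraction of stabilizing draws will differ from the numbers reported above.

```python
# Sketch of the random-gain experiment: draw linear gains uniformly from
# [-10, 10] and count how many make the closed loop A + B*K Hurwitz.
# A and B are placeholders for the actual cart-pole linearization.
import numpy as np

rng = np.random.default_rng(0)

def count_stabilizing(A, B, n_trials=10_000, low=-10.0, high=10.0):
    n, m = A.shape[0], B.shape[1]
    stable = 0
    for _ in range(n_trials):
        K = rng.uniform(low, high, size=(m, n))   # random linear gain
        eigs = np.linalg.eigvals(A + B @ K)       # closed-loop matrix
        if np.all(eigs.real < 0):                 # Hurwitz test
            stable += 1
    return stable

A = np.array([[0.0, 1.0, 0.0, 0.0],               # placeholder linearization
              [0.0, 0.0, -1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 20.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [-1.0]])
print(count_stabilizing(A, B))
```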
The situation becomes even worse when considering more complex controller maps, e.g., neural networks, whose number of parameters is considerably larger. Therefore, a strategy is needed that will avoid the failure of the learning algorithm. In what follows, two such strategies are proposed that have a transfer learning interpretation. The first strategy is based on first learning an initial controller that is guaranteed to stabilize the closed loop, at least near the equilibrium point.
The second strategy is based on a sequence of learning problems which start with a short enough time horizon to avoid gradient blow-up, and then keep increasing the time horizon until the model converges to a stable controller.
Regarding the stability of the loss function gradients, it is useful to formally show why stability of the closed loop system matters for learning a controller. Consider the linear dynamics z(t+1)=Az(t)+bu(t), with A∈R^(n×n) and b∈R^n, and a linear controller parameterization u(t)=βTz(t). One goal is to find a stabilizing controller that minimizes a quadratic loss function of the form L(β)=Σt=0^N l(z(t),u(t)).
We use a first order, gradient based optimization algorithm given by βk+1=βk−ak∇L(βk), where ak is the iteration step-size. The next result shows that a stable initial controller ensures bounded loss function gradients.
Proposition 3.2: Let u(t)=β0Tz(t) be a stabilizing controller, where β0 is the initial value of the vector of optimization variables β. Then there exists a sequence {ak}k≥0 and a large enough N such that ∥∇L(βk)∥ is bounded for all k.
Proof: The gradient of the loss function can be explicitly written as
In particular, recalling that zβ(0)=0, this leads to
The loss function gradient can now be expressed as
Assuming that u(t)=βkTz(t) is a stabilizing controller, ∇L(βk) is bounded since, for large enough N, the exponential decay of (A+bβkT)^t dominates the linear increase in t. There should then exist a step size ak such that ρ(A+bβk+1T)<1, so that the argument can be repeated at the next iteration. Let ck=∇L(βk) be the gradient vector at iteration k. Then the stability of zβ ensures that each ck remains bounded.
Remark 3.1: The theoretical bound on the step-size αk that ensures closed loop stability is
This bound should be compared with the bound that ensures the convergence of the gradient descent algorithm, and the smaller of the two bounds should be chosen.
Remark 3.2: The previous result was proven for a linear system and quadratic cost function. Since, near the equilibrium point, the behavior of a nonlinear system can be approximated by a linear system, the result can be applied to nonlinear systems, provided the initial condition of the state vector is close enough to the equilibrium point.
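The phenomenon formalized in Proposition 3.2 can be checked numerically along the following lines; the two-dimensional discrete linear system, the particular gains, and the horizons are illustrative assumptions.

```python
# Illustration of Proposition 3.2 on an assumed 2-D discrete linear system:
# compare the loss-gradient norm over increasing horizons N for a stabilizing
# and a non-stabilizing initial gain beta.
import autograd.numpy as np
from autograd import grad

A = np.array([[1.05, 0.1], [0.0, 1.05]])   # open-loop unstable system
b = np.array([0.0, 0.1])
z0 = np.array([1.0, 0.0])

def loss(beta, N):
    z, total = z0, 0.0
    for _ in range(N):
        u = np.dot(beta, z)                # u(t) = beta^T z(t)
        total = total + 0.5 * (np.dot(z, z) + u ** 2)
        z = np.dot(A, z) + b * u
    return total

g = grad(loss)
beta_stable = np.array([-30.25, -11.0])    # places closed-loop eigenvalues at 0.5
beta_unstable = np.array([0.0, 0.0])       # leaves the open loop unstable
for N in (20, 50, 100):
    print(N, np.linalg.norm(g(beta_stable, N)), np.linalg.norm(g(beta_unstable, N)))
```

With the stabilizing gain, the gradient norm settles as N grows; with the zero gain, it grows without bound, mirroring the argument in the proof.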
In this section, three approaches are presented for learning a stabilizing controller. All approaches use concepts related to transfer learning methods used in deep learning. Such methods have proved to be effective tools in image classification applications for reducing training time and training data requirements. The main idea behind transfer learning is to use a “favorable” initial condition for the training algorithm, taken from a previously trained model, and to re-train the entire model (or only a part of it).
In the first approach, a parameterized map is learned for a stabilizing controller that minimizes a quadratic loss function. The initial values of the map parameters are chosen by separately training a controller that stabilizes the linear approximation of the nonlinear dynamics around the equilibrium point. In the second approach, a stabilizing controller is again learned for the linearized dynamics, but this time the non-parameterized control inputs are used as initial conditions to solve a finite horizon optimal control problem that explicitly generates optimal control inputs. In the third approach, a parameterized, state dependent control map is learned by solving a sequence of optimal control problems whose time horizons are sequentially increased.
For all three approaches, Autograd and Python were used to automatically compute gradients used for learning the control policies. The optimization algorithms already available in Python packages (e.g., SciPy) were used to ensure compatibility with Autograd. While these computational tools are generally not appropriate for real time applications, they do provide insights into the behaviors and challenges of the approaches.
A. Learning a State-Dependent Control Parameterization with a Stabilizing Initial Controller
Learning a stabilizing initial controller is based on manipulating the spectral properties of the Jacobian of ƒ(z;β) at the equilibrium point so as to ensure the existence of an attractor around the equilibrium point. Recall that one of the approaches for finding a stabilizing controller is to locally linearize in terms of the state z and control input u in order to obtain a linear ODE ż=Az+Bu. Under a controllability assumption that ensures the existence of a stabilizing linear controller, a simple pole placement approach will find a linear control u=Kz. Here, however, no linearity constraint is imposed on the controller, since the objective is to cover a richer class of control maps.
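One possible realization of this pole placement step is sketched below using SciPy; the matrices A and B are placeholders for the Jacobians of ƒ at the equilibrium, and the chosen pole locations are illustrative.

```python
# Sketch of the pole-placement step for the linearized system zdot = Az + Bu.
# A and B are placeholders for the Jacobians of f at the equilibrium point.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [2.0, 0.0]])    # assumed unstable linearization
B = np.array([[0.0], [1.0]])
desired_poles = [-1.0, -2.0]              # any poles with negative real parts

res = place_poles(A, B, desired_poles)
K = res.gain_matrix                       # u = -K z stabilizes the linear model
print("gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```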
Proposition 3.1 connected the local stability properties of the closed loop system to the spectral properties of the Jacobian A(β)=∇zƒ(0;β).
A feature of DL platforms is the ability to automatically differentiate loss functions. By using this feature, a function can be generated that can be repeatedly evaluated at any state and control parameter pair (z,β). More importantly, it can be applied to high dimensional vector fields ƒ(z;β). It is more convenient to look at the discrete time version of A(β). For a small enough time step h, the discrete version of A(β) is given by the matrix exponential Ad(β)=e^(hA(β)), which can be approximated with a truncated Taylor series expansion.
Since matrix multiplication operations are efficiently executed on DL platforms, enough terms of the Taylor series expansion can be kept to ensure a good approximation of the matrix exponential. The stability of the closed loop system can now be expressed in terms of the spectral radius ρ(Ad(β)), which is the largest of the absolute values of the eigenvalues of Ad(β). The closed loop system is stable if and only if ρ(Ad(β))<1. A useful inequality for this purpose is ρ(Ad(β))^k≤∥Ad(β)^k∥ for all k≥1 and for all consistent norms. Hence it is sufficient to impose that ∥Ad(β)^k∥ converges to zero at an exponential rate to ensure closed loop stability. To learn the controller parameters β, we solve the optimization problem below:
minβ max{0, ∥Ad(β)^k∥−λ^k}, (8)
for some large enough k≥1 and some positive real scalar λ<1. Note that if one starts with a large k and initial values of β such that ρ(Ad(β))>1, computing the matrix powers will quickly lead to a numerical overflow. To avoid this situation, a sequence of optimization problems of the type in Equation (8) is solved, starting with a small k and continually increasing k until no further improvement is observed. Note that it is not necessary to use the “max” operator in the cost function; its primary purpose is to stop the optimization when a feasible solution is reached. The process for learning stabilizing initial controller parameters is summarized in Algorithm 1.
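A compact sketch of Algorithm 1 is given below. The closed loop Jacobian is obtained with Autograd, the matrix exponential is truncated, and the hinge-type loss of Equation (8) is minimized while k is increased. The vector field is a placeholder standing in for the pendulum dynamics, plain gradient descent replaces the Adam optimizer used in the experiments, and the network size and hyperparameters are illustrative.

```python
# Sketch of Algorithm 1: learn controller parameters beta so that the
# discretized closed-loop Jacobian A_d(beta) contracts at rate lambda.
# Placeholder dynamics and plain gradient descent are used for illustration.
import autograd.numpy as np
import numpy as onp
from autograd import grad, jacobian

h, lam, n, hidden = 0.01, 0.999, 2, 8

def controller(z, beta):
    W1, W2 = beta
    return np.dot(W2, np.tanh(np.dot(W1, z)))[0]      # u = psi(z; beta), no bias

def f_closed(z, beta):
    u = controller(z, beta)
    return np.array([z[1], np.sin(z[0]) + u])         # placeholder vector field

def Ad(beta, terms=5):
    Ac = jacobian(f_closed, 0)(np.zeros(n), beta)     # Jacobian at the equilibrium
    M, term = np.eye(n), np.eye(n)
    for i in range(1, terms + 1):                     # truncated matrix exponential
        term = np.dot(term, h * Ac) / i
        M = M + term
    return M

def loss(beta, k):
    Ak = np.eye(n)
    for _ in range(k):                                # A_d(beta)^k
        Ak = np.dot(Ak, Ad(beta))
    frob = np.sqrt(np.sum(Ak ** 2))                   # Frobenius norm of A_d^k
    return np.maximum(0.0, frob - lam ** k)           # cost from Equation (8)

loss_grad = grad(loss)
beta = [0.1 * onp.random.rand(hidden, n), 0.1 * onp.random.rand(1, hidden)]
for k in range(1, 12):                                # sequentially increase k
    for _ in range(500):
        g = loss_grad(beta, k)
        beta = [W - 1e-3 * dW for W, dW in zip(beta, g)]
```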
Algorithm 1 was applied to the inverted pendulum example, where the controller was mapped as a neural network with one hidden layer of size 20. The bias vector was set to zero. Hence there were a total of 104 parameters. The initial values of the entries of β were drawn uniformly from the interval [0, 0.1], and the time step size was chosen as h=0.01. The discrete matrix Ad(β) was computed using 5 terms in the Taylor series expansion.
Equation (8) was solved for k∈{1, 2, . . . , 11} using what is hereinafter referred to as the Adam algorithm, described in Diederik P. Kingma and Jimmy Ba, “Adam: A Method for Stochastic Optimization,” 2014 (published as a conference paper at the 3rd International Conference on Learning Representations, San Diego, 2015). In this case, for the solution of Equation (8), each problem was allotted 2000 iterations and a step size of 0.001. The imposed decay rate was chosen as λ=0.999. The results produced by Algorithm 1 are shown in
For the controller parameters learned by Algorithm 1, the eigenvalues of the closed loop Jacobian ∇zƒ(0;β) are complex, with negative real parts. The closed loop dynamics are written as ż=ƒ(z;β), where ƒ(z;β)=ƒ(z,ψ(z;β)). The controller map learned using Algorithm 1 was used as the initial value for learning the control map parameters that minimize a quadratic loss function of the form L(β)=∫0^T(1/2)(q z(t)Tz(t)+r u(t)²)dt,
where u(t)=ψ(z;β) has the same structure as the controller learned using Algorithm 1. We executed 100 iterations of the Adam gradient-based algorithm, minimizing the quadratic loss function for q=1 and r=2, with a step size α=0.001. The results are shown in
B. Learning Finite Horizon Optimal Control Inputs
Model predictive control (MPC) is a modern control approach that generates control inputs that optimize present behavior while also accounting for the future. It is an effective approach for fighting model uncertainties and external disturbances, but it has a high computational cost: at each time instant, a finite horizon optimal control problem must be solved. The feasibility of the MPC approach to feedback control depends on the size of the problem (e.g., the number of optimization variables), the number of constraints, the computational power of the platform implementing the MPC method, and the system time constants. One of the factors that affects how fast the algorithm converges to a (local) minimum is the initial value given to the optimization variables. Automatic differentiation can again be used to compute the gradient and, if needed, the Hessian matrix of the loss function.
We tested the effect of a “good” initial condition on the optimization algorithm solving one step of the MPC approach. We used the same loss function as in the previous section. However, here the control policy is no longer explicitly parameterized as a function of the state; the control inputs themselves are the optimization variables. Due to its ability to accommodate constraints, we use the Sequential Least Squares Quadratic Programming (SLSQP) optimization algorithm. This time we constrained the control input (e.g., ∥u∥²≤9), and we used Autograd to compute the loss function gradients provided to the SLSQP algorithm. When training DL models, bound constraints are typically imposed through clipping; here, they are considered explicitly.
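One finite horizon problem of this type can be set up roughly as follows; the Euler-discretized dynamics are a placeholder for the pendulum model, the input bound is passed to SLSQP as explicit bounds on each input sample, and Autograd supplies the loss gradient.

```python
# Sketch of one finite-horizon problem from the MPC approach: the control
# inputs over the horizon are the optimization variables, SLSQP enforces the
# input bounds, and Autograd supplies the loss gradient. The Euler-discretized
# dynamics are a placeholder for the pendulum model.
import autograd.numpy as np
from autograd import grad
from scipy.optimize import minimize

h, N = 0.05, 60                                  # step size and horizon length
z0 = np.array([0.5, 0.0])

def step(z, u):
    return z + h * np.array([z[1], np.sin(z[0]) + u])

def loss(u_seq):
    z, total = z0, 0.0
    for t in range(N):                           # roll the dynamics forward
        u = u_seq[t]
        total = total + 0.5 * h * (np.dot(z, z) + u ** 2)
        z = step(z, u)
    return total

res = minimize(loss, x0=np.zeros(N), jac=grad(loss), method="SLSQP",
               bounds=[(-3.0, 3.0)] * N)         # |u| <= 3, i.e., u**2 <= 9
print(res.fun, res.nit)
```

The initial guess x0 plays the role of the initial control inputs compared in the three scenarios discussed below.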
We tested the optimization algorithm under three scenarios: (a) initial control inputs generated using Algorithm 1, (b) random initial conditions (u∈[−2, 2]), and (c) zero initial conditions. The optimization algorithm statistics (optimal value, number of iterations, and number of function evaluations) for one finite horizon problem are shown in Table II.
It is clear that using a stabilizing initial control decreases the convergence time. In MPC, a sequence of such problems is solved, and it may be tempting to use the solution of the previous step as the initial condition for the next MPC step. If the time horizon is not long enough, however, we typically cannot guarantee that the resulting control input stabilizes the feedback loop. As a consequence, the learning algorithm itself can become unstable, leading to learning failure. To reinforce this idea, the experiment was repeated over a 6 sec time horizon, with results shown in the referenced figures.
To gain better control over the parameters of the optimization algorithm, the learning process was repeated using the Adam algorithm. We empirically noticed that long time horizons tend to induce gradient explosions. This phenomenon can be explained as follows: during the search process, control inputs are generated that do not stabilize the system; as a result, for longer time horizons the instability is amplified, and consequently the loss function gradients become unstable as well. We can control the gradient explosion to some extent by decreasing the learning rate, but we pay a price in terms of convergence speed.
C. Iterative Learning Based Approach
We emphasized in the previous section that long time horizons and unstable initial control inputs can lead to learning failure. Here we explore a “small steps” approach, in which we solve a sequence of optimal control problems, using small time horizons in the beginning and gradually increasing them. This approach does not require first computing a stabilizing control policy. We use a parameterized, state dependent control map u=ψ(z;β); the objective is to learn the parameters of the map such that a quadratic loss function is minimized. We consider a set of initial states, and our objective is to “encourage” the state vector to be close to zero and remain there after some time Tss. As such, the loss function is now given by L(β;T)=Σi=1^N Li(β;z0[i]), where Li(β;z0[i])=Σtj≥Tss∥z(tj)∥² and the tj are the sample times within the horizon.
To avoid the blow-up of gradients during the learning process, we use an iterative approach in which we solve a sequence of optimization problems. Each such optimization problem is solved for a time horizon chosen from the set T={Tl} with Tl+1>Tl. For time horizons Tl<Tss, the loss function is defined as Li(β;z0[i])=∥z(Tl)∥², which basically encourages the state vector at the end of the horizon to get closer to zero. In addition to the loss function, we add a regularization function that ensures stabilization happens at the zero equilibrium point. In particular, the regularization function takes the form ∥ƒ(0;β)∥². The sequential learning method is summarized in Algorithm 2.
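A sketch of the sequential training loop of Algorithm 2 is given below; the horizon schedule, the placeholder dynamics, the set of initial states, and the use of plain gradient descent in place of Adam are illustrative assumptions.

```python
# Sketch of Algorithm 2: train a state-dependent controller psi(z; beta) over
# a schedule of increasing horizons T_l, with the regularizer ||f(0; beta)||^2
# that keeps the origin an equilibrium. Placeholder dynamics are used.
import autograd.numpy as np
import numpy as onp
from autograd import grad

h, T_ss = 0.05, 3.0
horizons = [0.5, 1.0, 2.0, 4.0, 6.0]                  # assumed schedule T_1 < T_2 < ...
z0_set = [np.array([0.3, 0.0]), np.array([-0.4, 0.1])]

def psi(z, beta):
    W1, W2 = beta
    return np.dot(W2, np.tanh(np.dot(W1, z)))[0]

def f_closed(z, beta):
    return np.array([z[1], np.sin(z[0]) + psi(z, beta)])  # placeholder dynamics

def loss(beta, T):
    total = 0.0
    for z0 in z0_set:
        z, t = z0, 0.0
        while t < T:                                   # explicit Euler rollout
            z = z + h * f_closed(z, beta)
            if T >= T_ss and t >= T_ss:
                total = total + np.dot(z, z)           # penalize ||z|| after T_ss
            t += h
        if T < T_ss:
            total = total + np.dot(z, z)               # short horizons: terminal state only
    reg = np.sum(f_closed(np.zeros(2), beta) ** 2)     # ||f(0; beta)||^2 (zero for this
    return total + reg                                 # bias-free net, kept to mirror Alg. 2)

loss_grad = grad(loss)
beta = [0.1 * onp.random.rand(8, 2), 0.1 * onp.random.rand(1, 8)]
for T in horizons:                                     # gradually increase the horizon
    for _ in range(300):
        g = loss_grad(beta, T)
        beta = [W - 1e-3 * dW for W, dW in zip(beta, g)]
```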
The graphs in the accompanying figures show the results obtained using Algorithm 2.
In this section, the concepts described above are used to derive control policies for unmanned aerial vehicles (e.g., drones). Specifically, the control policy enables a quadrotor (also referred to as a quadcopter) to recover when affected by motor failures. We train the control algorithm using deep learning methods, learning a map between the system state and the control input. We focus on an unstable system, which requires careful initialization of the learning algorithms, and we derive initialization procedures that ensure stable learning. We also derive iterative algorithms that learn optimal control policies while incrementally increasing the time horizons.
Currently, large scale optimization is typically used to solve a constrained optimization problem in which the cost is quadratic and the constraints are linear, using convex optimization methods that do not require gradient computations. Here, we use deep learning platforms featuring automatic differentiation to accurately compute the gradients of a loss function encoding the control objectives and of the constraint functions. The method applies to non-convex objective functions and nonlinear systems.
The control algorithm is designed to prevent drone crashes when at least one motor is lost. The drone dynamics are unstable, and deep learning training algorithms cannot be used as-is for such systems. We overcome the stability challenges of the learning algorithm when dealing with unstable systems in this particular scenario by designing initialization strategies and iterative schemes for learning optimal control policies.
One objective is to safely recover (e.g., land) a drone affected by a catastrophic failure such as loss of thrust due to motor failure. We assume the drone has four motors (a quadrotor). The same approach would work for drones with more than four motors, although with different equations describing the controller dynamics. We formulate the problem as a trajectory tracking problem without full actuation capability. The problem setup is illustrated in the referenced diagram.
The dynamics of the drone are described by a system of ordinary differential equations Ẋ=F(X, U), where U=[U1, U2, U3, U4] is the vector of inputs for the drone control, defined as functions of the rotor angular velocity dependent lift forces:

U1=b(Ω1²+Ω2²+Ω3²+Ω4²)

U2=b(−Ω2²+Ω4²)

U3=b(Ω1²−Ω3²)

U4=d(−Ω1²+Ω2²−Ω3²+Ω4²)
The function F(X, U) is defined as
In the case of a motor failure, the actuation capability is reduced. Assuming motor 1 failed, the input vector becomes:
U1=b(Ω2²+Ω3²+Ω4²)

U2=b(−Ω2²+Ω4²)

U3=b(−Ω3²)

U4=d(Ω2²−Ω3²+Ω4²)
To solve the safe landing problem, we solve an optimization problem of the form

minΩ2,Ω3,Ω4 L(X,U)

subject to: z(tƒ)=0, ż(tƒ)=0,

which ensures that the drone decreases its speed as it approaches the landing position. Additional constraint functions on the (x, y) position can be added as needed. We can solve this problem using a tool such as PyTorch that has the capability to use automatic differentiation, while also using an ODE solver to solve for the system dynamics. The resulting setup is illustrated in the referenced diagram.
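The kind of computation described above can be set up in PyTorch roughly as follows. The model is reduced to the vertical axis with an assumed thrust map, the terminal constraints z(tƒ)=0 and ż(tƒ)=0 are enforced through quadratic penalties rather than hard constraints, and all constants and names are illustrative.

```python
# Sketch of the safe-landing optimization in PyTorch: the remaining rotor
# speeds over the horizon are the decision variables, an Euler loop integrates
# a simplified vertical-axis model, and the terminal conditions are enforced
# through quadratic penalties. All constants are illustrative assumptions.
import torch

g, m, b = 9.81, 1.0, 1e-5        # gravity, mass, assumed lift coefficient
h, N = 0.02, 250                 # step size and number of steps (t_f = N*h)

omega = torch.full((N, 3), 500.0, requires_grad=True)   # rotors 2, 3, 4
opt = torch.optim.Adam([omega], lr=5.0)

for _ in range(2000):
    opt.zero_grad()
    z, zdot = torch.tensor(10.0), torch.tensor(0.0)     # start 10 m up, at rest
    speed_cost = 0.0
    for k in range(N):
        thrust = b * torch.sum(omega[k] ** 2)            # U1 with motor 1 failed
        zddot = thrust / m - g
        zdot = zdot + h * zddot                          # Euler integration
        z = z + h * zdot
        speed_cost = speed_cost + h * zdot ** 2          # keep the descent gentle
    penalty = 1e3 * (z ** 2 + zdot ** 2)                 # z(t_f)=0, zdot(t_f)=0
    loss = speed_cost + penalty
    loss.backward()
    opt.step()
    with torch.no_grad():
        omega.clamp_(0.0, 1000.0)                        # physical rotor limits
```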
The flowchart in the referenced figure summarizes a method of learning a stabilizing controller according to an example embodiment.
In the block diagram of the referenced figure, a computer 1501 includes conventional computing hardware running software 1514.
The software 1514 also includes application-specific programs such as machine learning libraries/programs 1522, controller modeling libraries/programs 1524, and automatic differentiation libraries/programs 1526. These software components are used to define a dynamic control system by a set of equations. The set of equations includes a state vector z of a controlled apparatus 1530 and one or more control inputs u of the controlled apparatus 1530. Via a machine learning method, the computer 1501 is used to derive a sub-optimal controller that stabilizes the controlled apparatus 1530 at an equilibrium point. The computer 1501 then retrains the sub-optimal controller to be used as a stabilizing controller 1534 for the controlled apparatus under general operating conditions, e.g., in response to arbitrary input conditions and arbitrary disturbances encountered while the controlled apparatus 1530 is in operation.
The controlled apparatus 1530 includes its own computing hardware, as represented by CPU/RAM 1532. Other specific hardware may also be used in the apparatus 1530, such as digital-to-analog converters, analog-to-digital converters, digital signal processors, etc. The controller 1534 of the controlled apparatus may include a software component that can be changed, e.g., by transferring the stabilizing controller developed at the computer 1501 to the controlled apparatus 1530 via data interfaces 1520, 1521. These interfaces 1520, 1521 may use a direct wire or wireless connection, or may use some other media, such as a memory card, to effect data transfer between the devices 1501, 1530.
The controller 1534 is part of a closed loop control system that may include a physical plant 1536 and sensors 1538. In the case where the controlled apparatus is a quadrotor vehicle, the physical plant 1536 will include the motors and motor control electronics, and the sensors 1538 may include accelerometers, a compass, tilt angle detectors, a geolocation sensor, etc. It will be understood that these generic control components 1534, 1536, 1538 are found in many different types of control systems, and a stabilizing controller found through deep machine learning may be used in many different types of these systems.
We demonstrated that deep learning platforms can be used for training stabilizing controllers. We showed that for unstable systems, most often the training algorithm will fail since the system instability induces instability in the gradient descent algorithm. We demonstrated that starting with a stable initial controller, we are guaranteed the existence of gradient descent step sizes that ensure bounded gradients. We borrowed ideas from transfer learning in DL to develop three strategies to overcome learning instability. In the first approach, we learned an optimal nonlinear controller by using a sub-optimal stabilizing controller as the initial value. The sub-optimal control policy was learned by enforcing spectral properties of the system Jacobian around the equilibrium point. In the second approach, we used an MPC strategy and learned non-parameterized control inputs that minimize a quadratic cost, starting once again from a sub-optimal stabilizing controller. In the third approach, we applied a “small steps” strategy, where we solved a sequence of quadratic optimal control problems with increasing time horizons, preventing gradient explosion. For all three approaches, we used automatic differentiation for computing and evaluating Jacobians and loss function gradients.
Automatic differentiation is a useful tool for learning control policies. It can be directly applied on system models to generate linear approximations and compute local stabilizing controllers. First order gradient algorithms (e.g., Adam) applied to large scale problems, together with automatic differentiation that computes loss function gradients, enable control policy design for nonlinear systems (although only local solutions can be guaranteed without additional assumptions, such as convexity, on the loss function). Our experiments showed that offline design of control policies based on parameterized, state-dependent maps definitely benefits from the computational tools offered by DL platforms. This is due, in part, to the integration of ordinary differential equation (ODE)/differential-algebraic equation (DAE) solvers.
Further development may include assessing the feasibility of using DL platforms for real-time control design (e.g., MPC) in order to investigate the stability of the learning algorithms and the practical challenges of transferring learning models to physical platforms that implement the control policies.
Unless otherwise indicated, all numbers expressing feature sizes, amounts, and physical properties used in the specification and claims are to be understood as being modified in all instances by the term “about.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the foregoing specification and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by those skilled in the art utilizing the teachings disclosed herein. The use of numerical ranges by endpoints includes all numbers within that range (e.g. 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5) and any range within that range.
The various embodiments described above may be implemented using circuitry, firmware, and/or software modules that interact to provide particular results. One of skill in the arts can readily implement such described functionality, either at a modular level or as a whole, using knowledge generally known in the art. For example, the flowcharts and control diagrams illustrated herein may be used to create computer-readable instructions/code for execution by a processor. Such instructions may be stored on a non-transitory computer-readable medium and transferred to the processor for execution as is known in the art. The structures and procedures shown above are only a representative example of embodiments that can be used to provide the functions described hereinabove.
The foregoing description of the example embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Any or all features of the disclosed embodiments can be applied individually or in any combination are not meant to be limiting, but purely illustrative. It is intended that the scope of the invention be limited not with this detailed description, but rather determined by the claims appended hereto.