The invention relates generally to solving a multi-stage stochastic convex quadratic program (QP) for a convex set with linear equalities, and more particularly to solving the multi-stage stochastic convex QP by an Alternating Direction Method of Multipliers (ADMM).
The Alternating Direction Method of Multipliers (ADMM), which is a variant of an augmented Lagrangian scheme, uses partial updates of the dual variables to solve a quadratic program. The ADMM alternately solves for a first variable while holding a second variable fixed, and then solves for the second variable while holding the first variable fixed, see Boyd et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1-122, 2011. Rather than solving the augmented Lagrangian subproblem to convergence before every multiplier update, which can be complex, the ADMM performs iterations of alternating update steps on subsets of the variables.
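For illustration, a minimal sketch of such alternating updates for a generic two-block problem min f(y) + g(w) subject to y = w, in the scaled-multiplier form of Boyd et al.; prox_f and project_g are hypothetical callbacks, not part of any cited library, and the penalty parameter is assumed to be folded into prox_f.

```python
import numpy as np

def admm(prox_f, project_g, n, iters=100):
    """Generic two-block ADMM sketch for min f(y) + g(w) s.t. y = w.

    prox_f(v): hypothetical callback returning argmin_y f(y) + (beta/2)*||y - v||^2
    project_g(v): hypothetical callback projecting v onto the set where g is finite
    """
    y = np.zeros(n)
    w = np.zeros(n)
    lam = np.zeros(n)           # scaled multiplier
    for _ in range(iters):
        y = prox_f(w - lam)     # update y with w and lam held fixed
        w = project_g(y + lam)  # update w with y and lam held fixed
        lam = lam + (y - w)     # single-step (partial) multiplier update
    return y, w, lam
```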
The ADMM is frequently used for solving structured convex quadratic programs (QP) in applications, such as for example, compressive sensing, image processing, machine learning, distributed optimization, regularized estimation, semi-definite programming, and real-time control and signal processing applications, to name but a few.
Of particular interest to the invention is the application of the ADMM to model predictive control (MPC) problems for systems with uncertain parameters. MPC is a method for controlling constrained dynamical systems, such as robots, automotive vehicles, spacecraft, HVAC systems, and processing machines, by repeatedly solving a finite-horizon optimal control problem formulated from a mathematical model of the system dynamics, constraints, and a user-specified cost function. Uncertainty arises due to the presence of disturbances that affect the system dynamics or due to mismatch between the model of the system dynamics and the true system dynamics. When probability information can be associated with the uncertain values, a stochastic system is obtained.
For linear models of the systems with a quadratic cost function and subject to linear constraints, the MPC finite-horizon optimal control problem can be formulated as a parametric QP. At every control cycle, a specific QP is generated from the parametric quadratic program and the current values of the system state, and possibly the current reference. Then, the QP is solved, and the first part of the optimal control input sequence is applied as the control input. At the following control cycle, a new optimization problem is solved from the updated system state. MPC has been increasingly applied to systems with fast dynamics where the MPC is implemented in low computational power embedded processors.
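A schematic sketch of this receding-horizon loop; build_qp, solve_qp, read_state, and apply_input are hypothetical interfaces standing in for the parametric QP construction, a QP solver, and the plant interface.

```python
def mpc_control_loop(build_qp, solve_qp, read_state, apply_input,
                     num_cycles, reference=None):
    """Receding-horizon MPC sketch: at every control cycle a QP is generated
    from the parametric QP and the current state, solved, and only the first
    control input of the optimal sequence is applied."""
    for _ in range(num_cycles):
        x = read_state()                 # current system state
        qp = build_qp(x, reference)      # instantiate the parametric QP
        u_sequence = solve_qp(qp)        # optimal input sequence over the horizon
        apply_input(u_sequence[0])       # apply only the first input, then repeat
```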
U.S. Pat. No. 8,600,525 discloses an active set algorithm that can be used for solving MPC generated QP problems. U.S. Pat. No. 7,328,074 discloses a method of providing an active-set algorithm wherein an initial guess for an optimization problem is provided from the solution of a previous optimization. U.S. Publication 20060282177 discloses an interior point algorithm for solving quadratic programs that arise in the context of model predictive control of gas turbine engines.
However, the computational effort per iteration of those methods can be prohibitive for application to large scale problems. The complexity of the operations that are performed, such as the solution of large scale linear systems, makes them infeasible for the type of computing hardware commonly used in real-time control and signal processing applications.
Methods such as gradient methods and accelerated gradient methods cannot easily handle linear inequality constraints. Low complexity fast optimization methods have been developed for MPC. U.S. Publication 20120281929 discloses a method for solving quadratic programs with non-negative constraints and a method for optimizing such a method for application to MPC.
A fast gradient algorithm is described for an application to MPC by Richter et al., “Computational Complexity Certification for Real-Time MPC With Input Constraints Based on the Fast Gradient Method,” IEEE Trans. Automat. Contr. 57(6): 1391-1403, 2012.
A Lagrangian method for MPC is described by Kögel et al., “Fast predictive control of linear, time-invariant systems using an algorithm based on the fast gradient method and augmented Lagrange multipliers,” CCA 2011: 780-785, 2011.
However, those methods are limited in application by the types of constraints that can be handled, e.g., Richter et al. can only handle input constraints; or they need to perform division operations, e.g., U.S. Publication 20120281929, which are time consuming on the computing hardware used for MPC; or they require complex iterations, e.g., Kögel et al. This is mainly due to the need to solve the Lagrangian-augmented problem, which is complex due to the presence of constraints, before updating the Lagrange multipliers at every update.
Thus, there is a need to provide a method that can solve large scale problems with small computational complexity per iteration, rapid convergence when problems are feasible, and quick detection of infeasibility.
The invention relates to finding a solution of stochastic quadratic programs (StQPs) wherein the constraints are linear and the objectives are quadratic and depend on uncertain parameters.
StQPs arise in the fields of financial planning and inventory management, among others. Of particular interest is the application of quadratic stochastic programming to model predictive control of machines and manufacturing processes and plants.
The embodiments of the invention provide a method for solving a stochastic quadratic program (StQP) for a convex set with a set of general linear equalities and inequalities by an alternating direction method of multipliers (ADMM). The invention relates to the solution of Stochastic Quadratic Programs (StQPs) wherein the constraints are linear, the objective is quadratic, and the constraints and objective depend on uncertain parameters.
The method of the invention determines an optimal solution, or alternatively certifies that no solution to the problem exists. The embodiments also provide a method for optimizing a step size β for the ADMM, which achieves convergence with the least number of iterations. The embodiments also provide a method for accelerating the convergence of the ADMM using a conjugate gradient (CG) method.
Of special interest is the application of the ADMM to StQPs that are solved for Stochastic Model Predictive Control (StMPC), where the inequalities represent constraints on states and controls of a dynamic system, and the equalities represent the equations of the system dynamics that couple the variables of the optimization problem. The stochasticity in MPC arises from uncertain parameters in the system dynamics, due to disturbances or plant-model mismatch, when probability information on the uncertain parameters is available. When no probability information is available, a uniform probability on the uncertain values may be assumed so that an MPC problem with uncertain parameters can still be solved by StMPC.
In StMPC, the uncertain parameters are typically sampled to obtain a finite set of realizations. The optimization is performed over the finite set of realizations to obtain a control action that in expectation is optimal over all the realizations of the uncertain parameters. A StMPC problem formulation is an instance of a StQP.
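A minimal sketch of drawing such a finite set of scenario realizations over the prediction horizon; sample_w is a hypothetical sampler for the, possibly history-dependent, distribution of the uncertain parameter.

```python
import numpy as np

def sample_scenarios(sample_w, horizon_N, num_scenarios_Nr, rng=None):
    """Draw num_scenarios_Nr sequences of uncertain parameters w over the horizon.

    sample_w(history, rng): hypothetical callback returning one realization of w,
    possibly depending on the past realizations, as in
    p(w(t)) = f_p(w(t-1), ..., w(t-t_w)).
    """
    rng = rng or np.random.default_rng()
    scenarios = []
    for _ in range(num_scenarios_Nr):
        history = []
        for _ in range(horizon_N):
            history.append(sample_w(history, rng))
        scenarios.append(history)
    return scenarios
```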
Prior art methods are known for minimizing the QP while satisfying the linear equalities and the convex set constraints. For example, active set methods and interior point methods are the most common iterative methods. The embodiments use such methods to solve the StQP.
The embodiments of the invention overcome the difficulties of prior art methods by performing only simple operations in the iterations, in contrast with U.S. Pat. Nos. 8,600,525 and 7,328,074, and U.S. Publication 20060282177. The embodiments do not involve divisions as in U.S. Publication 20120281929, and perform low complexity iterations by alternately updating subsets of Lagrange multipliers by single iterations, rather than updating the entire set of multipliers, thus operating only on simple constraints.
The embodiments provide the optimal selection of the step size β of the ADMM iterations, which minimizes the number of iterations performed by the method, and hence achieves the maximal rate of execution for a StMPC controller implemented on a given computing hardware.
In one embodiment, the StMPC problem is decomposed into two blocks. The first block corresponds to an equality constrained QP, and the second block corresponds to a projection onto the StMPC inequalities and non-anticipativity constraints. The StMPC problem can also be decomposed into a set of time step problems, and the method then iterates between the time step problems until convergence.
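A minimal sketch of one iteration of this two-block decomposition, with hypothetical callbacks standing in for the per-scenario equality-constrained QP and the joint projection; the sign and scaling conventions here are a generic choice and may differ from the specific update equations given later in this description.

```python
def stmpc_admm_iteration(solve_eqqp, project_Ysd, y, w, lam):
    """One iteration of the two-block decomposition (hypothetical interfaces;
    the per-scenario problem data is hidden inside the callbacks).

    solve_eqqp(r, v): equality-constrained QP of scenario r (dynamics equalities)
        with a proximal term centered at v
    project_Ysd(v_list): joint projection onto the inequality and
        non-anticipativity constraints, acting on all scenarios at once
    """
    num_scenarios = len(y)
    # Block 1: per-scenario equality-constrained QP (decouples by scenario)
    for r in range(num_scenarios):
        y[r] = solve_eqqp(r, w[r] + lam[r])
    # Block 2: projection onto inequalities and non-anticipativity coupling
    w = project_Ysd([y[r] - lam[r] for r in range(num_scenarios)])
    # Scaled multiplier update (a single pass, no inner iterations)
    for r in range(num_scenarios):
        lam[r] = lam[r] + (w[r] - y[r])
    return y, w, lam
```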
Furthermore, the embodiments allow for an early and effective detection of infeasibility, meaning that the method, on termination, certifies that no solution exists for the QP problem. The embodiments also describe a solution that minimizes violations of the constraints.
The term “machine” is used generally because it is well understood that MPC has been used for decades in chemical and oil refineries. Generally, models used in MPC are intended to represent the behavior of complex dynamical systems. The additional computation of MPC is generally not needed to provide adequate control of simple systems, which are often controlled well by proportional-integral-derivative (PID) controllers.
Common dynamic characteristics that are difficult for PID controllers include large time delays, constraints, multiple control inputs, and high-order dynamics. MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. For example, in a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control elements (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are treated as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.
Stochastic model predictive control (StMPC) is a process for controlling systems subject to stochastic uncertainty. The system dynamics can be modeled as
x(t+1) = A_m(w(t)) x(t) + B_m(w(t)) u(t) + G_m(w(t)),
p(w(t)) = f_p(w(t−1), …, w(t−t_w)),   (1)
where x ∈ ℝ^{n_x} is the system state, u ∈ ℝ^{n_u} is the control input, w is the vector of uncertain parameters, and p(w(t)) is the probability of the uncertain parameter values, which may depend on the previous t_w realizations.
The states and inputs of the system whose behavior is described in Equation (1) can be subject to constraints
x(t+k) ∈ X_{k|t},   u(t+k) ∈ U_{k|t},   (2)
where X_{k|t} and U_{k|t} are the admissible sets for the state and input, respectively. StMPC selects the control input by solving the optimal control problem formulated from Equations (1) and (2) as
where a_{k|t} denotes the predicted value of a generic vector a at k steps ahead of t, T is the transpose operator, Q and P are positive semidefinite matrices, R is a positive definite matrix, the positive integer N is the MPC prediction horizon, and u_t = (u_{0|t}, …, u_{N−1|t}) is the control policy.
In the problem described by Equation (3), x is a random vector due to the effect of w. Hence, the problem is difficult to solve in such form.
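For reference, a typical finite-horizon stochastic objective of the kind referred to by Equation (3), written with the quantities defined above; the exact form used in Equation (3) may differ:

```latex
\min_{u_t}\; \mathbb{E}\Big[\, x_{N|t}^T P\, x_{N|t}
  + \sum_{k=0}^{N-1}\big( x_{k|t}^T Q\, x_{k|t} + u_{k|t}^T R\, u_{k|t}\big) \Big]
\quad \text{s.t. Equations (1), (2)},\; x_{0|t} = x(t).
```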
Scenario-enumeration stochastic MPC control operates as follows.
The scenarios are sequences of possible realizations of w, i.e.,
s_{k|t}(j) = [w_{0|t}^j … w_{N−1|t}^j],
and to each scenario a probability π(s_{k|t}(j)) can be associated via p and f_p. The scenarios can be generated by sampling the uncertain parameters.
The stochastic MPC based on scenario enumeration is described next.
In scenario-enumeration MPC with N_r scenarios, which form a subset of the possible scenarios on the MPC prediction horizon, the scenario-enumeration stochastic MPC problem is formulated as
where x_t^r = (x_{1|t}^r … x_{N|t}^r), u_t^r = (u_{0|t}^r … u_{N−1|t}^r), and R_{k|t} is the set of all scenarios that are equal from the beginning until step k−1, that is,
r, r′ ∈ R_{k|t} ⇒ s_{k′|t}(r) = s_{k′|t}(r′) for all k′ = 0, …, k−1.
Due to the scenario enumeration, all vectors in Equation (4) are now deterministic, so that Equation (4) can be formulated as
where for each scenario r = 1, …, N_r, k_max(r) is the largest index k for which r ∈ R_{k|t}.
The matrix A_r is defined to collect the equality constraints of the system dynamics for scenario r.
In one embodiment of the invention, the quadratic program (5) resulting from the scenario-enumeration stochastic MPC problem can be reformulated as,
where a convex set Y_SD is defined as
Consider reformulating Equation (6) as,
where y = (y_1, …, y_{N_r}) ∈ ℝ^n with n = N_r(n_x + n_u + n_u(k_max(r)+1)). The variables w ∈ ℝ^n are required to be in the convex set Y_SD. The advantage of Equation (8) is that the inequalities and the scenario-coupling non-anticipativity constraints are placed on the separate variables w, which are coupled with the others by y = w. The variables y are called the linear subspace constrained variables, and the variables w are called the convex set constrained variables.
The ADMM procedure dualizes the constraints y_r = w_r into the objective function using scaled multipliers β_r λ_r, where β_r > 0. Additionally, the objective is augmented with a penalty on the squared norm of the violation of the dualized equality constraints. Thus, we obtain
where
In Equation (9), w and y are coupled only by the objective function. The steps 110, 120, 130 and 140 in the ADMM procedure respectively are:
The update step 140 of λ in Equation (13) scales linearly in the number of scenarios. We show in the following that Equations (11) and (12) also decouple by scenarios.
Observe that the objective functions and constraints in Equation (11) are decoupled by scenarios. Hence, the update 111 can be rewritten as,
y_r^{l+1} = M_r(w_r^l + λ_r^l − q_r/β_r) + N_r b_r,
where
M_r := Z_r (Z_r^T (Q_r/β_r + I_n) Z_r)^{−1} Z_r^T,
N_r := (I_n − M_r Q_r/β_r) R_r (A_r R_r)^{−1},   (14)
where R_r and Z_r denote orthonormal bases for the range space of A_r^T and the null space of A_r, respectively. Thus, the update for y decouples by scenario and scales linearly with the number of scenarios.
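A dense numerical sketch of forming M_r and N_r of Equation (14) and applying the resulting y-update with numpy/scipy; it assumes A_r has full row rank so that A_r R_r is invertible, and that all data are given as dense arrays.

```python
import numpy as np
from scipy.linalg import null_space, orth

def y_update_matrices(A_r, Q_r, beta_r):
    """Form M_r and N_r of Equation (14) for one scenario (dense sketch).

    Z_r: orthonormal basis of the null space of A_r
    R_r: orthonormal basis of the range space of A_r^T
    Assumes A_r has full row rank so that (A_r R_r) is invertible.
    """
    n = A_r.shape[1]
    Z_r = null_space(A_r)                 # null space of A_r
    R_r = orth(A_r.T)                     # range space of A_r^T
    H = Z_r.T @ (Q_r / beta_r + np.eye(n)) @ Z_r
    M_r = Z_r @ np.linalg.solve(H, Z_r.T)                        # Z_r H^{-1} Z_r^T
    N_r = (np.eye(n) - M_r @ Q_r / beta_r) @ R_r @ np.linalg.inv(A_r @ R_r)
    return M_r, N_r

def y_update(M_r, N_r, w_r, lam_r, q_r, beta_r, b_r):
    """y_r^{l+1} = M_r (w_r^l + lam_r^l - q_r / beta_r) + N_r b_r."""
    return M_r @ (w_r + lam_r - q_r / beta_r) + N_r @ b_r
```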
In Equation (12), the objective function is componentwise separable in w. Using w_r = (w_{r,
w_r ∈ Y_r for all r = 1, …, N_r, and
Note that
where we have used λ_r = (λ_{r,
A check is made 133 to call the CG process every n_admm iterations, where n_admm is predetermined. If yes, then the CG process 700 is called 135; the ADMM-CG procedure is described in greater detail later in this invention. Otherwise, the method proceeds to step 150.
Step 150 checks if an optimality termination condition holds. Given a predetermined tolerance ε, a termination condition 151 for an optimal feasible solution 160 is
r_opt = max(‖w^{l+1} − w^l‖, ‖λ^{l+1} − λ^l‖) < ε.   (17)
If the termination condition for the feasible solution is satisfied, then the optimal feasible solution is output 160. The termination condition 151 in Equation (17) checks for satisfaction, to a tolerance ε greater than zero, of a maximum of a norm of the change in the set constrained variable w from the current value to the value at the previous iteration, and a norm of the change in the Lagrange multiplier λ from the current value to the value at the previous iteration.
Otherwise, for ε > 0, a termination condition 171 for certifying that the problem is infeasible is checked in step 170
where ∘ denotes the element-wise multiplication of two vectors. If the termination condition for the infeasible solution is satisfied, then a certification that the problem is infeasible can be signaled 180.
The termination condition 171 in Equation (18) is checked for the satisfaction of four conditions.
The first condition is the satisfaction, to a tolerance ε greater than zero, of the norm of the change in the set constrained variable vector w from the current value to the value at the previous iteration.
The second condition is the satisfaction, to a tolerance ε greater than zero, of the norm of the change in the linear subspace constrained variable vector y from the current value to the value at the previous iteration.
The third condition checks for a deviation from 0, to a tolerance of not more than ε, of the angle between the Lagrange multiplier vector λ and the vector resulting from the difference of the linear subspace constrained variable vector and the set constrained variable vector, i.e., (y−w), at the current value.
The fourth condition requires that the difference of the linear subspace constrained variable vector and the set constrained variable vector at the current value have, element-wise, the same sign as the Lagrange multiplier vector. Otherwise, update 140 l = l + 1, and perform the next iteration.
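A sketch of the optimality test of Equation (17) and of the four infeasibility conditions as they are described in words above; Equation (18) itself is not written out here, so the angle test and the sign test are an interpretation of that description.

```python
import numpy as np

def check_optimality(w_new, w_old, lam_new, lam_old, eps):
    """Equation (17): r_opt = max(||w^{l+1}-w^l||, ||lam^{l+1}-lam^l||) < eps."""
    r_opt = max(np.linalg.norm(w_new - w_old), np.linalg.norm(lam_new - lam_old))
    return r_opt < eps

def check_infeasibility(w_new, w_old, y_new, y_old, lam_new, eps):
    """Sketch of the four conditions described in the text: small change in w,
    small change in y, the angle between lam and (y - w) close to zero, and
    element-wise sign agreement between (y - w) and lam."""
    d = y_new - w_new
    cond1 = np.linalg.norm(w_new - w_old) < eps
    cond2 = np.linalg.norm(y_new - y_old) < eps
    denom = np.linalg.norm(lam_new) * np.linalg.norm(d)
    cond3 = denom > 0 and abs(1.0 - np.dot(lam_new, d) / denom) < eps
    cond4 = np.all(d * lam_new >= 0)   # element-wise product has no negative entries
    return cond1 and cond2 and cond3 and cond4
```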
The general method can be implemented in a processor or other hardware as described above, connected to memory and input/output interfaces by buses as known in the art.
The choice of the step size β_r, which ensures that the least number of iterations is required for termination of the ADMM method, is
β_r^opt = √(λ_min(Z_r^T Q_r Z_r) λ_max(Z_r^T Q_r Z_r)),   (19)
where λ_min(·) and λ_max(·) are the minimal and maximal eigenvalues of the matrix arguments. In other words, the optimal step size is determined as the square root of the product of the minimum and maximum eigenvalues of the Hessian matrix of the scenario problem pre- and post-multiplied by an orthonormal basis for the null space of the linear equality constraint matrix.
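A sketch of computing the optimal step size of Equation (19) for one scenario; it assumes Q_r is symmetric and that A_r and Q_r are given as dense arrays.

```python
import numpy as np
from scipy.linalg import null_space

def optimal_beta(A_r, Q_r):
    """beta_r_opt = sqrt(lambda_min(Z_r^T Q_r Z_r) * lambda_max(Z_r^T Q_r Z_r)),
    with Z_r an orthonormal basis of the null space of A_r (Equation (19))."""
    Z_r = null_space(A_r)
    reduced_hessian = Z_r.T @ Q_r @ Z_r
    eigvals = np.linalg.eigvalsh(reduced_hessian)   # symmetric eigenvalues, ascending
    return np.sqrt(eigvals[0] * eigvals[-1])
```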
In another embodiment of the invention, a single value of β is chosen for all scenarios, β_r = β. In this case, the optimal value of the parameter is prescribed as
β^opt = √(λ_min(Z^T Q Z) λ_max(Z^T Q Z)),   (20)
where Z is an orthonormal basis for the vectors satisfying the constraints
A_r y_r = 0 for all r = 1, …, N_r,
and,
Hence,
yr=(yr,
where
(y^+, w^+, λ^+) = ADMM(y, w, λ)
denotes one iteration of the ADMM, that is, the application of Equations (11)-(13) with (y^k, w^k, λ^k) = (y, w, λ) and the parameter β_r = β. For convenience, let v^l = y^l − λ^{l−1}.
After every n_admm iterations of the ADMM, defined by the steps in Equations (11)-(13), as checked in step 133, the ADMM-CG process 700 is called 135.
Given an ADMM iterate (y^l, w^l, λ^l) 705, we define the index sets 710 as
where ε_act is a tolerance for identifying the inequality constraints in the convex set that are expected to hold as equalities at the solution of the StQP, E_r is a matrix that extracts the components of w_r that are in I_r ∪ Ī_r, and E_r^0 is a matrix that extracts the components of w_r that correspond to
where
The full solution (y^cg, λ^cg) can be obtained as
The solution (y^cg, λ^cg) is the optimal solution of Equation (6). Given a solution from the CG method, the solution is checked 720 to determine if all multipliers are of the correct sign. The violated indices are found as
I_r^viol = {i ∈ I_r | λ_{r,i} < 0},   Ī_r^viol = {i ∈ Ī_r | λ_{r,i} > 0}.
If the set I_r^viol ∪ Ī_r^viol ≠ ∅, then the active index sets I_r, Ī_r are updated 722 as
I_r = I_r \ I_r^viol,   Ī_r = Ī_r \ Ī_r^viol.
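A sketch of this wrong-sign multiplier check and active-set update, in the spirit of Algorithm 3 of the Appendix; the index sets are represented as Python sets of component indices.

```python
def prune_active_sets(I_r, I_bar_r, lam_r):
    """Remove indices whose multipliers have the wrong sign:
    I_r^viol = {i in I_r : lam_r[i] < 0},  I_bar_r^viol = {i in I_bar_r : lam_r[i] > 0}.
    Returns the updated sets and whether any violation was found."""
    I_viol = {i for i in I_r if lam_r[i] < 0}
    I_bar_viol = {i for i in I_bar_r if lam_r[i] > 0}
    if I_viol or I_bar_viol:
        I_r = I_r - I_viol                  # I_r = I_r \ I_r^viol
        I_bar_r = I_bar_r - I_bar_viol      # I_bar_r = I_bar_r \ I_bar_r^viol
    return I_r, I_bar_r, bool(I_viol or I_bar_viol)
```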
If no such multiplier indices are found, then the obtained CG solution is checked 725 to see if it makes progress toward solving Equation (6). If sufficient progress is not made, then the CG solution is discarded 730, the procedure stops 750, and control returns to the ADMM, restarting from the previous ADMM iteration value.
Otherwise, a check 735 determines whether the termination condition is satisfied; if yes, the CG solution is used 745 to restart the ADMM and the procedure goes to step 750; otherwise, the CG solution is selected 740 as a starting guess and the CG iterations continue.
To do this, two iterations of the ADMM procedure are performed as follows,
w^cg = Y(y^cg − λ^cg/β),
(y^{cg,1}, w^{cg,1}, λ^{cg,1}) = ADMM(y^cg, w^cg, λ^cg/β),
(y^{cg,2}, w^{cg,2}, λ^{cg,2}) = ADMM(y^{cg,1}, w^{cg,1}, λ^{cg,1}),
v^{cg,1} = y^{cg,1} − λ^cg,   v^{cg,2} = y^{cg,2} − λ^{cg,1}.
The condition checked for sufficient progress is
‖v^{cg,2} − v^{cg,1}‖ ≤ (1 − η)‖v^l − v^{l−1}‖,
where η ≪ 1 is a constant. If sufficient progress is made towards an optimal solution, then the ADMM iterates are updated as
(y^l, w^l, λ^l) = (y^{cg,2}, w^{cg,2}, λ^{cg,2}),
v^l = v^{cg,2},   v^{l−1} = v^{cg,1},   λ = β λ^{cg,2}.
Namely, the iterates with superscripts cg,1 and cg,2 are used in 151 and 171. If the termination conditions are satisfied, then the ADMM-CG process 700 is terminated. If not, more CG iterations are performed using the computed CG iterates as the initial solution 740. The procedure is designed so that after the correct set of active indices is found, the ADMM-CG process is used to compute the solution to Equation (6) and ADMM iterations are not used.
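A sketch of the sufficient-progress test, following one reading of the update equations above; admm_iteration and project_Y are hypothetical callbacks for one ADMM iteration (Equations (11)-(13)) and the projection onto the convex set, and the value of eta is illustrative.

```python
import numpy as np

def cg_makes_progress(admm_iteration, project_Y, y_cg, lam_cg, beta,
                      v_l, v_l_minus_1, eta=1e-3):
    """Take two trial ADMM iterations from the CG point and test
    ||v^{cg,2} - v^{cg,1}|| <= (1 - eta) * ||v^l - v^{l-1}||."""
    w_cg = project_Y(y_cg - lam_cg / beta)
    y1, w1, lam1 = admm_iteration(y_cg, w_cg, lam_cg / beta)   # first trial iteration
    y2, w2, lam2 = admm_iteration(y1, w1, lam1)                # second trial iteration
    v1 = y1 - lam_cg
    v2 = y2 - lam1
    ok = np.linalg.norm(v2 - v1) <= (1.0 - eta) * np.linalg.norm(v_l - v_l_minus_1)
    return (y2, w2, lam2), (v1, v2), ok
```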
In the Appendix, pseudocode for the CG-based ADMM procedure is provided as Algorithm 1. Pseudocode for the CG process is provided in Algorithm 2, and pseudocode for finding indices of multipliers with the incorrect sign is provided as Algorithm 3.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
This is a Continuation-in-Part application of U.S. patent application Ser. No. 14/185,024, “Method for Solving Quadratic Programs for Convex Sets with Linear Equalities by an Alternating Direction Method of Multipliers with Optimized Step Sizes,” filed by Raghunathan et al. on Feb. 20, 2014.
Number | Name | Date | Kind |
---|---|---|---|
7328074 | Das et al. | Feb 2008 | B2 |
8600525 | Mustafa et al. | Dec 2013 | B1 |
20060282177 | Fuller et al. | Dec 2006 | A1 |
20120281929 | Brand et al. | Nov 2012 | A1 |
20130018517 | Kalagnanam | Jan 2013 | A1 |
20140089759 | Draper | Mar 2014 | A1 |
Entry |
---|
Mulvey, John M. et al, “A New Scenario Decomposition Method for Large-Scale Stochastic Optimization”, May-Jun. 1995, Operations Research vol. 43, No. 3, Informs. |
Yang, Sen et al., “An Efficient ADMM Algorithm for Multidimensional Anisotropic Total Variation Regulation Problems”, Aug. 11-14, 2013, KDD'13, ACM. |
Boyd, Stephen et al., “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers”, 2011, Foundations and Trends in Machine Learning, vol. 3, No. 1. |
Muske, Kenneth R. “Lagrangian Quadratic Programming Approach for Linear Model Predictive Control”, May 8-10, 2002, Proceedings of the American Control Conference, AACC. |
Liang, Ximing et al., “Applying Infeasible Interior Point Method to SQP for Constrained Nonlinear Programming”, 2008, International Conference on Computer Science and Software Engineering, IEEE. |
S. Boyd, N. Parikh, E. Chu, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, No. 1, pp. 1-122, 2011. |
C. Garcia, D. Prett, and M. Morari, "Model predictive control: Theory and practice," Automatica (Oxford), vol. 25, No. 3, pp. 335-348, 1989. |
Kögel et al., “Fast predictive control of linear, time-invariant systems using an algorithm based on the fast gradient method and augmented Lagrange multipliers,” CCA 2011: 780-785, 2011. |
O'Donoghue et al., "A splitting method for optimal control," IEEE Transactions on Control Systems Technology, 21(6):2432-2442, Nov. 2013. |
Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM J. Control Opt., vol. 14, pp. 877-898, 1976. |
Richter et al., “Computational Complexity Certification for Real-Time MPC With Input Constraints Based on the Fast Gradient Method,” IEEE Trans. Automat. Contr. 57(6): 1391-1403, 2012. |