Method and System for Devising an Optimum Control Policy

Information

  • Patent Application
  • Publication Number
    20190258228
  • Date Filed
    April 03, 2018
  • Date Published
    August 22, 2019
Abstract
A method for devising an optimum control policy of a controller for controlling a system includes optimizing at least one parameter that characterizes the control policy. A Gaussian process model is used to model expected dynamics of the system. The optimization optimizes a cost function which depends on the control policy and the Gaussian process model with respect to the at least one parameter. The optimization is carried out by evaluating at least one gradient of the cost function with respect to the at least one parameter. For an evaluation of the cost function a temporal evolution of a state of the system is computed using the control policy and the Gaussian process model. The cost function depends on an evaluation of an expectation value of a cost function under a probability density of an augmented state at time steps.
Description

This application claims priority under 35 U.S.C. § 119 to patent application no. DE 10 2018 202 431.6, filed on Feb. 16, 2018 in Germany, the disclosure of which is incorporated herein by reference in its entirety.


BACKGROUND

PID control architectures are widely used in industrial applications. Despite their low number of open parameters, tuning multiple, coupled PID controllers can become tedious in practice.


The publication “PILCO: A Model-Based and Data-Efficient Approach to Policy Search”, Marc Peter Deisenroth, Carl Edward Rasmussen, 2011, which can be accessed at http://www.icml-2011.org/papers/323_icmlpaper.pdf discloses a model-based policy search method.


SUMMARY

The method with the features disclosed herein has the advantage that it renders PID tuning possible as the solution of a finite-horizon optimal control problem, without requiring further a priori knowledge.


Proportional, Integral and Derivative (PID) control structures are still a widely used control tool in industrial applications, in particular in the process industry, but also in automotive applications and in low-level control in robotics. The large share of PID controlled applications is mainly due to the past record of success, the wide availability, and the simplicity in use of this technique. Even in multivariable systems, PID controllers can be employed.


Exploring the mathematics behind the disclosure, it is possible to consider discrete time dynamic systems of the form






x_{t+1} = f(x_t, u_t) + \epsilon_t  (1)


with continuously valued state x_t ∈ ℝ^D as well as continuously valued input u_t ∈ ℝ^F. The system dynamics f is not known a priori. One may assume a fully measurable state, which is corrupted by zero-mean independent and identically distributed (i.i.d.) Gaussian noise, i.e. ε_t ∼ 𝒩(0, Σ).


One specific reinforcement learning formulation aims at minimizing the expected cost-to-go given by






J = \sum_{t=0}^{T} \mathbb{E}[c(x_t, u_t; t)], \qquad x_0 \sim \mathcal{N}(\mu_0, \Sigma_0)  (2)


where an immediate, possibly time-dependent cost c(x_t, u_t; t) penalizes undesired system behavior. Policy search methods optimize the expected cost-to-go J by selecting the best out of a range of policies u_t = π(x_t; θ) parametrized by θ. A model f̂ of the system dynamics f is utilized to predict the system behavior and to optimize the policy.
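For illustration only (not part of the claimed method), the expectation in equation (2) has a simple closed form for a quadratic immediate cost c(x) = (x − x_des)ᵀQ(x − x_des) under a Gaussian state distribution, namely 𝔼[c(x)] = (μ − x_des)ᵀQ(μ − x_des) + tr(QΣ). The following Python sketch sums such expected costs over a predicted trajectory of Gaussian states; the weight matrix Q and the example numbers are arbitrary placeholders:

    import numpy as np

    def expected_quadratic_cost(mu, Sigma, x_des, Q):
        """E[(x - x_des)^T Q (x - x_des)] for x ~ N(mu, Sigma)."""
        d = mu - x_des
        return float(d @ Q @ d + np.trace(Q @ Sigma))

    def expected_cost_to_go(means, covs, x_des, Q):
        """Sum of expected immediate costs over a predicted Gaussian state trajectory."""
        return sum(expected_quadratic_cost(mu, S, x_des, Q) for mu, S in zip(means, covs))

    # toy usage: two predicted Gaussian states in R^2
    means = [np.array([0.5, 0.0]), np.array([0.2, 0.1])]
    covs = [0.01 * np.eye(2), 0.02 * np.eye(2)]
    J = expected_cost_to_go(means, covs, x_des=np.zeros(2), Q=np.eye(2))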


In a first aspect, the disclosure therefore relates to a method for devising an optimum control policy π of a controller, especially a PID controller, for controlling a (physical) system, said method comprising optimizing at least one parameter θ that characterizes said control policy π, wherein a Gaussian process model f̂ is used to model expected dynamics of the system, if the system is acted upon by said PID controller, wherein said optimization optimizes a cost function J which depends on said control policy π and said Gaussian process model f̂ with respect to said at least one parameter θ, wherein said optimization is carried out by evaluating at least one gradient of said cost function J with respect to said at least one parameter θ, wherein for an evaluation of said cost function J a temporal evolution of a state x_t of the system is computed using said control policy π and said Gaussian process model, and wherein said cost function J depends on an evaluation of an expectation value of a cost function c under a probability density of an augmented state z_t at predefinable time steps t.


The control output of a scalar PID controller is given by






u_t = K_p e_t + K_i \int_{0}^{t} e_\tau \, d\tau + K_d \dot{e}_t  (3)






e_t = x_{des,t} - x_t  (4)


The current desired state x_{des,t} can be either a constant set-point or a time-variable goal trajectory. A PID controller is agnostic to the system dynamics and depends only on the system's error. Each controller is parametrized by its proportional, integral and derivative gain, θ_PID = (K_p, K_i, K_d). Of course, some of these gains may be fixed to zero, yielding e.g. a PD controller in the case K_i = 0.
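For illustration, a common discretization of the scalar control law (3)-(4) replaces the integral by a running sum scaled by the sampling time and the derivative by a finite difference. The following Python sketch is not part of the disclosure; gains and sampling time are placeholder values:

    class ScalarPID:
        """Discrete-time scalar PID: u_t = Kp*e_t + Ki*dt*sum(e) + Kd*(e_t - e_prev)/dt."""

        def __init__(self, Kp, Ki, Kd, dt):
            self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
            self.e_prev = 0.0   # error at the previous time step
            self.e_sum = 0.0    # accumulated error

        def __call__(self, x_des, x):
            e = x_des - x                              # error, cf. equation (4)
            self.e_sum += e                            # running sum approximating the integral
            u = (self.Kp * e
                 + self.Ki * self.dt * self.e_sum
                 + self.Kd * (e - self.e_prev) / self.dt)
            self.e_prev = e
            return u

    # usage with arbitrary gains and a 20 ms sampling time
    pid = ScalarPID(Kp=1.2, Ki=0.5, Kd=0.05, dt=0.02)
    u0 = pid(x_des=0.0, x=1.0)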


A general PID control structure C(s) for MIMO (multi-input multi-output) processes can be described in transfer function notation by an F×D transfer function matrix










C(s) = \begin{bmatrix} c_{11}(s) & \cdots & c_{1D}(s) \\ \vdots & \ddots & \vdots \\ c_{F1}(s) & \cdots & c_{FD}(s) \end{bmatrix}  (5)







where s denotes the complex Laplace variable and c_{ij}(s) are of PID type. The multivariate error is given by e_t = x_{des,t} − x_t ∈ ℝ^D such that the multivariate input becomes u(s) = C(s)e(s).



FIG. 1 shows a humanoid robot 1 balancing an inverted pendulum 2. Using the disclosure, coupled PID and PD controllers were successfully trained to stabilize the pole in the central, upright position without requiring a priori system knowledge.


We present a sequence of state augmentations such that any multivariable PID controller as given by equation (5) can be represented as a parametrized static state feedback law. A visualization of the state augmentation integrated into the one-step-ahead prediction is shown in FIG. 2, in comparison with the standard PILCO setting. All lines linking the blocks z̃_t, z_{t+1} are absent in the standard PILCO setting.


Given a Gaussian distributed initial state x0 the resulting predicted states will remain Gaussian for the presented augmentations.


To obtain the required error states for each controller given by equation (3), it is possible to define an augmented system state z_t that may also keep track of the error at the previous time step and the accumulated error,






z_t := \left(x_t,\; e_{t-1},\; \Delta T \sum_{\tau=0}^{t-1} e_\tau\right)  (6)


where ΔT is the system's sampling time.
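As an illustrative sketch (the dimensions and numbers below are arbitrary and not prescribed by the disclosure), the augmented state of equation (6) can be assembled from the raw state and the stored error history:

    import numpy as np

    def augment_state(x_t, e_prev, e_sum_prev, dt):
        """z_t = (x_t, e_{t-1}, dt * sum_{tau < t} e_tau), cf. equation (6)."""
        return np.concatenate([x_t, e_prev, dt * e_sum_prev])

    x_t = np.array([0.1, -0.3])          # raw system state
    e_prev = np.array([0.05, 0.02])      # error at the previous time step
    e_sum_prev = np.array([0.4, -0.1])   # accumulated error up to t-1
    z_t = augment_state(x_t, e_prev, e_sum_prev, dt=0.02)   # shape (6,)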


For simplicity, vectors are denoted as tuples (v1, . . . , vn) where vi may be vectors themselves. The following augmentations can be made to obtain the necessary policy inputs:


The augmented state z_t and/or the desired state x_{des,t} (set-point or target trajectory) may be selected as independent Gaussian random variables, i.e.










\begin{bmatrix} z_t \\ x_{des,t} \end{bmatrix} \sim \mathcal{N}\!\left( \begin{bmatrix} \mu_z \\ \mu_{des,t} \end{bmatrix}, \begin{bmatrix} \Sigma_z & 0 \\ 0 & \Sigma_{des,t} \end{bmatrix} \right)  (7)







Drawing the desired state xdes,t from a Gaussian distribution yields improved generalization to unseen targets.


The current error is a linear function of z_t and x_{des,t}. The current error derivative and integrated error may conveniently be approximated by










\dot{e}_t \approx \frac{e_t - e_{t-1}}{\Delta T}  (8)

\int_{0}^{t} e_\tau \, d\tau \approx \Delta T \sum_{\tau=0}^{t-1} e_\tau + \Delta T\, e_t  (9)







Both approximations are linear transformations of the augmented state. The resulting augmented state distribution remains Gaussian, as it is a linear transformation of a Gaussian random variable.
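The underlying rule is that an affine map y = Az + b of a Gaussian z ∼ 𝒩(μ, Σ) is again Gaussian with mean Aμ + b and covariance AΣAᵀ. A minimal Python sketch (illustrative only, with an arbitrary example) is:

    import numpy as np

    def linear_gaussian_transform(A, b, mu, Sigma):
        """If z ~ N(mu, Sigma) and y = A z + b, then y ~ N(A mu + b, A Sigma A^T)."""
        return A @ mu + b, A @ Sigma @ A.T

    # example: the error derivative (e_t - e_{t-1}) / dt as a linear map of (e_t, e_{t-1})
    dt = 0.02
    A = np.array([[1.0 / dt, -1.0 / dt]])
    mu = np.array([0.10, 0.08])
    Sigma = 1e-4 * np.eye(2)
    mu_d, Sigma_d = linear_gaussian_transform(A, np.zeros(1), mu, Sigma)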


This aspect of the disclosure can readily be extended to incorporate a low-pass filtered error derivative. In this case, additional historic error states would be added to the state zt to provide the input for a low-pass Finite Impulse Response (FIR) filter. This reduces measurement noise in the derivative error.
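As a sketch of this option (the filter length and coefficients below are assumptions chosen for illustration, not prescribed by the disclosure), the filtered derivative can be computed by applying FIR weights to the finite differences of a short error history:

    import numpy as np

    def fir_filtered_derivative(error_history, dt, fir_coeffs=None):
        """Low-pass filtered error derivative from recent errors [e_{t-k+1}, ..., e_t].

        fir_coeffs are the FIR weights applied to the finite differences;
        a simple moving average is used if none are given.
        """
        diffs = np.diff(np.asarray(error_history, dtype=float)) / dt
        if fir_coeffs is None:
            fir_coeffs = np.ones(len(diffs)) / len(diffs)   # moving-average low-pass
        return float(np.dot(fir_coeffs, diffs))

    # usage with a hypothetical history of four scalar errors
    e_dot = fir_filtered_derivative([0.30, 0.28, 0.27, 0.25], dt=0.02)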


A fully augmented state z̃_t is then conveniently given by











\tilde{z}_t := \left(z_t,\; x_{des,t},\; e_t,\; \frac{e_t - e_{t-1}}{\Delta T},\; \Delta T \sum_{\tau=0}^{t} e_\tau\right)  (10)







Based on the fully augmented state z̃_t, the PID control policy for multivariate controllers can be expressed as a static state feedback policy













u_t = A_{PID}\left(\tilde{z}_t^{(3)},\; \tilde{z}_t^{(5)},\; \tilde{z}_t^{(4)}\right) = A_{PID}\left(e_t,\; \Delta T \sum_{\tau=0}^{t} e_\tau,\; \frac{e_t - e_{t-1}}{\Delta T}\right).  (11)







The specific structure of the multivariate PID control law is defined by the parameters in A_PID. For example, PID structures as shown in FIG. 3 may be represented by











A_{a)} = \begin{bmatrix} K_{p,1} & 0 & K_{i,1} & 0 & K_{d,1} & 0 \\ 0 & K_{p,2} & 0 & K_{i,2} & 0 & K_{d,2} \end{bmatrix},  (12)

A_{b)} = \begin{bmatrix} K_{p,1} & K_{p,2} & K_{i,1} & K_{i,2} & K_{d,1} & K_{d,2} \end{bmatrix}.  (13)
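For illustration (gains and error values below are arbitrary placeholders), the static feedback law of equation (11) with the structure A_a) of equation (12), i.e. two decoupled PID loops, can be evaluated as a single matrix-vector product:

    import numpy as np

    def pid_feedback(A_pid, e_t, e_sum, e_prev, dt):
        """u_t = A_PID @ (e_t, dt*sum(e), (e_t - e_prev)/dt), cf. equation (11)."""
        features = np.concatenate([e_t, dt * e_sum, (e_t - e_prev) / dt])
        return A_pid @ features

    # structure A_a) of equation (12): two decoupled PID loops
    Kp1, Ki1, Kd1 = 1.2, 0.5, 0.05
    Kp2, Ki2, Kd2 = 0.8, 0.3, 0.02
    A_a = np.array([[Kp1, 0.0, Ki1, 0.0, Kd1, 0.0],
                    [0.0, Kp2, 0.0, Ki2, 0.0, Kd2]])

    e_t = np.array([0.10, -0.20])
    e_prev = np.array([0.12, -0.25])
    e_sum = np.array([0.80, -1.10])      # accumulated error up to t
    u_t = pid_feedback(A_a, e_t, e_sum, e_prev, dt=0.02)   # shape (2,)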







Given the Gaussian distributed augmented state and control input as derived above, the next augmented state may be computed using the GP dynamics model f̂. It is possible to approximate the predictive distribution p(x_{t+1}) by a Gaussian distribution using exact moment matching. From the dynamics model output x_{t+1} and the current error stored in the fully augmented state z̃_t, the next state may be obtained as






z_{t+1} = \left(x_{t+1},\; \tilde{z}_t^{(3)},\; \tilde{z}_t^{(5)}\right) = \left(x_{t+1},\; e_t,\; \Delta T \sum_{\tau=0}^{t} e_\tau\right).  (14)


Iterating (6) to (14), a long-term prediction can be computed over a prediction horizon H as illustrated in FIG. 2. For the initial augmented state, one may conveniently define






z_0 := (x_0,\; x_{des,0} - x_0,\; 0).  (15)
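The long-term prediction of equations (6) to (15) can be sketched as follows. For readability, the sketch propagates only the means (a deterministic stand-in for the dynamics model); in the actual method, full Gaussian distributions are propagated by moment matching, and the stand-in model and policy below are arbitrary placeholders:

    import numpy as np

    def rollout(f_hat, policy, x0, x_des, dt, H):
        """Mean-only long-term prediction over horizon H, cf. equations (6)-(15).

        f_hat(x, u) -> next state (stand-in for the GP prediction).
        policy(e, int_e, de) -> control input.
        """
        x = x0
        e_prev = x_des - x0                 # z_0 stores x_des,0 - x_0, cf. equation (15)
        e_sum = np.zeros_like(x0)           # accumulated error starts at 0
        states = [x]
        for _ in range(H):
            e = x_des - x
            e_sum = e_sum + e
            u = policy(e, dt * e_sum, (e - e_prev) / dt)
            x = f_hat(x, u)                 # one-step-ahead prediction
            e_prev = e
            states.append(x)
        return states

    # toy usage with a hypothetical linear "model" and a pure P policy
    f_hat = lambda x, u: 0.9 * x + 0.1 * u
    policy = lambda e, int_e, de: 2.0 * e
    traj = rollout(f_hat, policy, x0=np.array([1.0]), x_des=np.zeros(1), dt=0.02, H=20)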


Given the presented augmentation and propagation steps, the expected cost gradient can be computed analytically such that the policy π can be efficiently optimized using gradient-based methods.


The expected cost derivative may be obtained as










\frac{dJ}{d\theta} = \sum_{t=1}^{H} \frac{d}{d\theta} \mathbb{E}_{z_t}[c(z_t)] = \sum_{t=1}^{H} \frac{d\varepsilon_t}{dp(z_t)} \frac{dp(z_t)}{d\theta}.  (16)







Here, we denoted ε_t = 𝔼_{z_t}[c(z_t)] and we write dp(z_t) to denote the sufficient-statistics derivatives dμ_t and dΣ_t of a Gaussian random variable p(z_t) = 𝒩(μ_t, Σ_t). The gradient of the immediate loss with respect to the augmented state distribution, dε_t/dp(z_t), is readily available for cost functions like quadratic or saturated exponential terms and Gaussian input distributions.
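As a concrete instance (illustration only), for the quadratic cost ε = 𝔼[(z − z_des)ᵀQ(z − z_des)] = (μ − z_des)ᵀQ(μ − z_des) + tr(QΣ) with symmetric Q, the required derivatives with respect to the sufficient statistics are dε/dμ = 2Q(μ − z_des) and dε/dΣ = Q:

    import numpy as np

    def expected_cost_and_grads(mu, Sigma, z_des, Q):
        """Expected quadratic cost under N(mu, Sigma) and its derivatives w.r.t. mu and Sigma.
        Q is assumed symmetric."""
        d = mu - z_des
        eps = float(d @ Q @ d + np.trace(Q @ Sigma))
        d_eps_d_mu = 2.0 * Q @ d      # derivative w.r.t. the mean
        d_eps_d_Sigma = Q.copy()      # derivative of tr(Q Sigma) w.r.t. the covariance
        return eps, d_eps_d_mu, d_eps_d_Sigma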


The gradient for each predicted augmented state in the long-term rollout may be obtained by applying the chain rule to (14) resulting in











\frac{dp(z_{t+1})}{d\theta} = \frac{\partial p(z_{t+1})}{\partial p(\tilde{z}_t)} \frac{dp(\tilde{z}_t)}{d\theta} + \frac{\partial p(z_{t+1})}{\partial p(x_{t+1})} \frac{dp(x_{t+1})}{d\theta}.  (17)







The derivatives ∂p(z_{t+1})/∂p(z̃_t) and ∂p(z_{t+1})/∂p(x_{t+1}) may be computed for the linear transformation in equation (14) according to the general rules for linear transformations on Gaussian random variables.


The gradient of the dynamics model output x_{t+1} is given by











\frac{dp(x_{t+1})}{d\theta} = \frac{\partial p(x_{t+1})}{\partial p(\tilde{z}_t)} \frac{dp(\tilde{z}_t)}{d\theta} + \frac{\partial p(x_{t+1})}{\partial p(u_t)} \frac{dp(u_t)}{d\theta}.  (18)







Applying the chain rule for the policy output p(u_t) yields











dp


(

u
t

)



d





θ


=






p


(

u
t

)






p


(


z
~

t

)







dp


(


z
~

t

)



d





θ



+





p


(

u
t

)





θ


.






(
19
)







The derivatives ∂p(u_t)/∂p(z̃_t) and ∂p(u_t)/∂θ are introduced by the linear control law given by equation (11) and can be computed according to the general rules for linear transformations on Gaussian random variables. The gradient of the fully augmented state z̃_t is given by











\frac{dp(\tilde{z}_t)}{d\theta} = \frac{\partial p(\tilde{z}_t)}{\partial p(z_t)} \frac{dp(z_t)}{d\theta},  (20)







where the partial derivative ∂p(z̃_t)/∂p(z_t) may be computed for the linear transformation given by equation (10). Starting from an initial augmented state z_0 where









\frac{dp(z_0)}{d\theta} = 0,

it is possible to obtain gradients dp(z_t)/dθ for all augmented states z_t with respect to the policy parameters θ by iteratively applying equations (17) to (20) for all time steps t.
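The following scalar toy example (illustration only, not the full Gaussian propagation of the method) shows how such gradients are accumulated forward in time for a deterministic system x_{t+1} = a x_t + b u_t under a pure proportional policy u_t = θ(x_des − x_t); the loop structure mirrors the iterative application of equations (16) to (20):

    def cost_and_gradient(theta, a=0.9, b=0.1, x0=1.0, x_des=0.0, H=30):
        """Forward accumulation of dJ/dtheta through a deterministic scalar rollout."""
        x, g = x0, 0.0                    # g = dx_t/dtheta, starting from dp(z_0)/dtheta = 0
        J, dJ = 0.0, 0.0
        for _ in range(H):
            du = (x_des - x) - theta * g  # chain rule through the policy, cf. (19)
            u = theta * (x_des - x)
            x = a * x + b * u             # one-step prediction
            g = a * g + b * du            # propagate the state gradient, cf. (17)-(18)
            J += (x - x_des) ** 2
            dJ += 2.0 * (x - x_des) * g   # accumulate the cost gradient, cf. (16)
        return J, dJ

    # sanity check against a finite difference
    J, dJ = cost_and_gradient(theta=1.0)
    J_eps, _ = cost_and_gradient(theta=1.0 + 1e-6)
    assert abs((J_eps - J) / 1e-6 - dJ) < 1e-3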


The disclosure is also directed to a computer program product. The computer program product comprises computer-readable instructions stored on a non-transitory machine-readable medium that are executable by a computer having a processor for causing the processor to perform the operations listed herein.





BRIEF DESCRIPTION OF THE DRAWINGS

The objects, features and advantages of the disclosure will be apparent from the following detailed descriptions of the various aspects of the disclosure in conjunction with reference to the following drawings, where:



FIG. 1 is an illustration of a humanoid robot trained with a system according to the disclosure;



FIG. 2 is a schematic illustration of the mathematics behind the method according to an aspect of the disclosure;



FIG. 3 is a block diagram depicting control structures of a controller to which the disclosure may be applied;



FIG. 4 is a block diagram depicting components of a system according to an aspect of the disclosure;



FIG. 5 is a block diagram depicting components of a system according to another aspect of the disclosure;



FIG. 6 is a flowchart diagram depicting the method according to one aspect of the disclosure.





DETAILED DESCRIPTION


FIG. 4 shows a block diagram depicting components of a system according to an aspect of the disclosure. Shown is a control system 40, which receives sensor signals S from a sensor 30 via an input unit 50. The sensor senses a state of a physical system 10, e.g. a robot (like e.g. the humanoid robot 1 shown in FIG. 1, or an at least partially self-driving car), or more generally an actuator (like e.g. a throttle valve), in an environment 20. The input unit 50 transforms these sensor signals S into a signal representing said state x. For example, the input unit 50 may copy the sensor signal S into a predefined signal format. If the sensor signal S is in a suitable format, the input unit 50 may be omitted altogether.


This signal representing state x is then passed on to a controller 60, which may, for example, be given by a PID controller. The controller is parameterized by parameters θ, which the controller 60 may receive from a parameter storage P. The controller 60 computes a signal representing an input signal u, e.g. via equation (11). This signal is then passed on to an output unit 80, which transforms the signal representing the input signal u into an actuation signal A, which is passed on to the physical system 10, and causes said physical system 10 to act. Again, if the input signal u is in a suitable format, the output unit may be omitted altogether.


The controller 60 may be controlled by software which may be stored on a machine-readable storage medium 45 and executed by a processor 46. For example, said software may be configured to compute the input signal u using the control law given by equation (11).



FIG. 5 shows a block diagram depicting a training system 140, which may be configured to train the control system 40. The training system 140 may comprise an input unit 150 for receiving signals representing an input signal u and a state signal x, which are then passed on to a block 190 which receives present parameters θ from parameter storage P and computes new parameters θ′. These new parameters θ′ are then passed on to parameter storage P to replace present parameters θ. The block 190 may be operated by software which may be stored on a machine-readable storage medium 210 and executed by a processor 200. For example, block 190 may be configured to execute the steps of the method shown in FIG. 6.



FIG. 6 is a flowchart diagram depicting a method for devising optimum parameters θ for an optimum control policy π of controller 60.


First (1000), a random policy is devised, e.g. by randomly assigning values for parameters θ and storing them in parameter storage P. The controller 60 then controls physical system 10 by executing its control policy π corresponding to these random parameters θ. The corresponding state signals x are recorded and passed on to block 190.


Next (1010), a GP dynamics model f̂ is trained using the recorded signals x and u to model the temporal evolution of the system state x, x_{t+1} = f̂(x_t, u_t).
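As a sketch of this training step (this uses scikit-learn as a stand-in library and is not part of the disclosure; in the actual method the GP predictions are propagated with full uncertainty via moment matching), one GP per state dimension can be fitted on the recorded transitions:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

    def train_gp_dynamics(X_states, U_inputs):
        """Fit one GP per state dimension on (x_t, u_t) -> x_{t+1} - x_t."""
        Z = np.hstack([X_states[:-1], U_inputs[:-1]])        # training inputs (x_t, u_t)
        dX = X_states[1:] - X_states[:-1]                    # predict state differences
        kernel = ConstantKernel() * RBF() + WhiteKernel()    # smooth trend plus noise
        return [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Z, dX[:, d])
                for d in range(X_states.shape[1])]

    def predict_next_state(gps, x, u):
        """Mean and per-dimension standard deviation of x_{t+1}."""
        z = np.concatenate([x, u]).reshape(1, -1)
        means, stds = zip(*(gp.predict(z, return_std=True) for gp in gps))
        return x + np.array(means).ravel(), np.array(stds).ravel()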


Then (1020), a roll-out of the augmented system state z_t over a horizon H is computed based on the GP dynamics model f̂, the present parameters θ and the corresponding control policy π(θ), and the gradient of the cost function J w.r.t. the parameters θ is computed, e.g. by equations (17)-(20).


Based on these gradients, new parameters θ′ are computed (1030). These new parameters θ′ replace present parameters θ in parameter storage P.


Next, it is checked whether the parameters θ have converged sufficiently (1040). If it is decided that they have not, the method iterates back to step 1020. Otherwise, the present parameters θ are selected as optimum parameters θ* that minimize the cost function J (1050).


Controller 60 is then executed with a control policy π corresponding to these optimum parameters θ* to control the physical system 10. The input signal u and the state signal x are recorded (1060).


The GP dynamics model f̂ is then updated (1070) using the recorded signals x and u.


Next, it is checked whether the GP dynamics model f̂ has sufficiently converged (1080). This convergence can be checked e.g. by checking the convergence of the log likelihood of the measured data x, u, which is maximized by adjusting the hyperparameters of the GP, e.g. with a gradient-based method. If it is deemed not to have sufficiently converged, the method branches back to step 1020. Otherwise, the present optimum parameters θ* are selected as parameters θ that will be used to parametrize the control policy π of controller 60. This concludes the method.
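A compact skeleton of the overall loop of FIG. 6 is sketched below; the callables and the fixed iteration counts (which stand in for the convergence checks 1040 and 1080) are placeholders chosen for illustration, not part of the disclosure:

    import numpy as np

    def tune_controller(system_rollout, train_model, expected_cost_and_grad, theta0,
                        n_model_updates=5, n_opt_steps=100, step_size=1e-2):
        """Skeleton of steps 1000-1080 of FIG. 6.

        system_rollout(theta) -> recorded (x, u) data from the real system.
        train_model(data) -> GP dynamics model fitted on all recorded data.
        expected_cost_and_grad(model, theta) -> (J, dJ/dtheta) from the analytic rollout.
        """
        theta = np.asarray(theta0, dtype=float)
        data = [system_rollout(theta)]                 # step 1000: run an initial (random) policy
        for _ in range(n_model_updates):
            model = train_model(data)                  # steps 1010 / 1070: (re)train the GP model
            for _ in range(n_opt_steps):               # steps 1020-1040: optimize the policy
                J, dJ = expected_cost_and_grad(model, theta)
                theta = theta - step_size * dJ         # step 1030: gradient step on theta
            data.append(system_rollout(theta))         # steps 1050-1060: apply policy, record data
        return theta                                   # optimized parameters theta*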


Parts of this disclosure have been published as “Model-Based Policy Search for Automatic Tuning of Multivariate PID Controllers”, arXiv:1703.02899v1, 2017, Andreas Doerr, Duy Nguyen-Tuong, Alonso Marco, Stefan Schaal, Sebastian Trimpe, which is incorporated herein by reference in its entirety.

Claims
  • 1. A method for devising an optimum control policy of a controller for controlling a system, said method comprising: optimizing at least one parameter that characterizes said control policy;using a Gaussian process model to model expected dynamics of the system, wherein said optimization optimizes a cost function which depends on said control policy and said Gaussian process model with respect to said at least one parameter; andcarrying out said optimization by evaluating at least one gradient of said cost function with respect to said at least one parameter,wherein for an evaluation of said cost function a temporal evolution of a state of the system is computed using said control policy and said Gaussian process model, andwherein said cost function depends on an evaluation of an expectation value of a cost function under a probability density of an augmented state at time steps.
  • 2. The method according to claim 1, wherein said augmented state at a given time step comprises the state at said given time step.
  • 3. The method according to claim 1, wherein said augmented state at a given time step comprises an error between the state and a desired state at a previous time step.
  • 4. The method according to claim 1, wherein said augmented state at a given time step comprises an accumulated error of a previous time step.
  • 5. The method according to claim 3, wherein the augmented state and/or the desired state are Gaussian random variables.
  • 6. The method according to claim 1, wherein the controller is a multivariate controller.
  • 7. The method according to claim 1, wherein: a first step of optimizing said at least one parameter by said optimization of said cost function with respect to said at least one parameter,a second step of controlling said system by said controller using said control policy parametrized by said optimized at least one parameter, anda third step of updating said Gaussian process model based on a recorded reaction of said system during said second step are carried out iteratively.
  • 8. The method according to claim 1, wherein the system comprises an actuator and/or a robot.
  • 9. The method according to claim 1, wherein said system is controlled by said controller, the control policy of which has been devised by the method.
  • 10. The method according to claim 1, wherein a training system for devising an optimum control policy of a controller is configured to carry out the method.
  • 11. The method according to claim 1, wherein a control system for controlling a system is configured to carry out the method.
  • 12. The method according to claim 1, wherein a computer program contains instructions which cause a processor to carry out the method if the computer program is executed by said processor.
  • 13. The method according to claim 12, wherein a machine-readable storage medium is configured to store the computer program.
Priority Claims (1)
Number Date Country Kind
10 2018 202 431.6 Feb 2018 DE national