Stateless discrete predictive controller

Information

  • Patent Grant
  • 11971690
  • Patent Number
    11,971,690
  • Date Filed
    Monday, September 30, 2019
  • Date Issued
    Tuesday, April 30, 2024
Abstract
A model predictive controller for performing stateless prediction. Using closed form algebraic expressions for the step test in a dynamic matrix eliminates the requirement for an individual calculation on each element. With both the dynamic matrix and the vector of predicted errors written in terms of discrete algebraic equations, the control law is written as a function of the current state of the system. The control law can then be reduced to its minimal form, which leaves the next control action as a function of the system parameters, the past errors, and the past control actions. Since the system parameters are constant, the controller can then be reduced to a single discrete equation. This greatly reduces the computations required in each control loop iteration.
Description
FIELD OF THE INVENTION

The present invention relates generally to model predictive controllers.


BACKGROUND OF THE INVENTION

As the computational power available in the control loop increases, the use of model predictive controllers is becoming more cost effective. This has allowed them to be used in industrial facilities, but only on certain systems. However, systems with fast, non-linear dynamics remain out of reach due to the computational requirements of every control cycle. It would therefore be desirable to have an improved model predictive controller which reduces the computational time required for each control cycle.





BRIEF DESCRIPTION OF THE DRAWINGS

For the purpose of illustrating the invention, the drawings show aspects of one or more embodiments of the invention. However, it should be understood that the present invention is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein:



FIG. 1 is a block diagram of a method used to derive a control law for a controller according to an embodiment of the invention;

FIG. 2 is a block diagram for a Dynamic Matrix Control (DMC) controller; DMC Controller Block Diagram;

FIG. 3 is a graph of a simulation output of a first order system; First Order Plant Output;

FIG. 4 is a graph of a controller output in a first order system; First Order Plant Input;

FIG. 5 is a graph of computational time for a first order system; First Order Controller Computation Time;

FIG. 6 is a graph of simulation output of a second order system; Second Order Plant Output;

FIG. 7 is a graph of controller output in a second order system; Second Order Plant Input;

FIG. 8 is a graph of computational time for a second order system; Second Order Controller Computation Time;

FIG. 9 is a graph of simulation output of a first order plus dead time system; First Order Plus Dead Time Plant Output;

FIG. 10 is a graph of controller output in a first order plus dead time system; First Order Plus Dead Time Plant Input;

FIG. 11 is a graph of computational time for a first order plus dead time system; First Order Plus Dead Time Controller Computation Time;

FIG. 12 is a graph of simulation output of a general linear system; General Linear Plant Output;

FIG. 13 is a graph of controller output in a general linear system; General Linear Plant Input;

FIG. 14 is a graph of computational time for a general linear system; General Linear Controller Computation Time;

FIG. 15 is a graph of simulation outputs of a multiple-input and multiple-output (MIMO) system; MIMO Plant Outputs;

FIG. 16 is a graph of controller outputs in a MIMO system; MIMO Plant Inputs;

FIG. 17 is a graph of computational time for a MIMO system; MIMO Controller Computation Time;

FIG. 18 is a graph of simulation output of a non-linear robot; Nonlinear Robot Output;

FIG. 19 is a graph of controller output in a non-linear robot; Nonlinear Robot Input;

FIG. 20 is a graph of computational time for a non-linear robot; Nonlinear Controller Computation Time;

FIG. 21 is a table of computational times for a first order system; First Order Controller Computation Times;

FIG. 22 is a table of computational times for a second order system; Second Order Controller Computation Times;

FIG. 23 is a table of computational times for a first order plus dead time system; First Order Plus Dead Time Controller Computation Times;

FIG. 24 is a table of computational times for a general linear system; General Linear Controller Computation Times;

FIG. 25 is a table of computational times for a MIMO system; MIMO Plant Controller Computation Times;

FIG. 26 is a table of computational times for a non-linear robot; Non Linear Controller Computation Times;

FIG. 27 is a block diagram of a structure of a control feedback loop; Control Feedback Loop;

FIGS. 28(a) to (c) are graphs of simulation results for a second order under damped system; Robustness Simulation N=150;

FIGS. 29(a) to (c) are graphs of simulation results for a second order under damped system; Robustness Simulation N=100;

FIGS. 30(a) to (c) are graphs of simulation results for a second order under damped system; Robustness Simulation N=50;

FIG. 31 is a graph of experimental results for the test robot; Vertical Robot Test 1 Angle;

FIG. 32 is a graph of control action corresponding to experiments on the test robot; Vertical Robot Test 1 Voltage;

FIG. 33 is a graph of experimental results for the test robot; Vertical Robot Test 2 Angle;

FIG. 34 is a graph of control action corresponding to experiments on the test robot; Vertical Robot Test 2 Voltage;

FIG. 35 is a photograph of the industrial robot used for testing; Kuka Robot;

FIG. 36 is a graph of experimental results for the Kuka; Kuka Position Test 1;

FIG. 37 is a graph of control action corresponding to experiments on the Kuka; Kuka Servo Inputs Test 1;

FIG. 38 is a graph of experimental results for the Kuka; Kuka Position Test 2; and

FIG. 39 is a graph of control action corresponding to experiments on the Kuka; Kuka Servo Inputs Test 2.





DETAILED DESCRIPTION

The present invention, in one embodiment, relates to a model predictive controller which is stateless, allowing a stateless prediction to be performed without compromising on computation time.


1. Approach


In the DMC control law, the dynamics of the system are captured using a step test and then placed into the dynamic matrix (A). In the conventional controller the dynamic matrix is populated with numerical values which means that in every control loop iteration calculations using all these elements are required to determine the new control action. Using closed form algebraic expressions for the step test in the dynamic matrix would eliminate the requirement for individual calculation on each element. FIG. 1 contains a flow diagram that outlines the procedure, according to one embodiment of the present invention, that is used to remove the redundant calculations.


With reference to FIG. 1, the procedure begins with a model of the system in step 1. This model can be either a linear or a non-linear model, and its form is not of significant importance because it will be transformed in the next block, step 2. The second block, step 2, takes the model and transfers it into the discrete time domain, in a form which allows the state of the system at any time step in the future to be determined algebraically.


This discrete time model can then be used to algebraically describe each element in a unit step test of the system. With each of these individual equations the dynamic matrix can be populated with solely algebraic expressions.


Also, from the discrete time model, each element of the predicted response of the system can be generated from a different algebraic equation. Combining these equations with a discrete form of the set-point, the vector of predicted errors can be expressed.


With both the dynamic matrix and the vector of predicted errors written in terms of discrete algebraic equations, the control law can be written as a function of the current state of the system. The control law can then be reduced to its minimal form, which would leave the next control action to be a function of the system parameters, the past errors, and the past control actions. Since the system parameters are constant this controller can then be reduced into a single discrete equation. This would greatly reduce the computations required in each control loop iteration.


2. First Order Controller Theory











J|_t = \sum_{j=1}^{N} \left( \hat{r}_j|_t - \hat{y}_j|_t \right)^2 + \sum_{i=1}^{n_u} \hat{\Lambda}_i \left( \Delta\hat{u}_i|_t \right)^2   (1)

where:
    J|_t : Cost function evaluated at time t
    \hat{r}_j|_t : Set point for the jth time step, evaluated at time t
    \hat{\Lambda}_i : Weight element for the ith control action change







In a model predictive controller the future response of the system is used in calculations to reduce the future errors of the system. To perform this reduction a cost function for the error is required; this cost function can be seen in Eq. 1. This cost function uses the sum of squared errors for simplicity in optimization and a weighted control action sum for move suppression.
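As a numerical illustration, the Eq. 1 cost can be evaluated directly. The horizon lengths, predictions and weights below are arbitrary example values chosen for this sketch, not values from the patent.

```python
import numpy as np

def dmc_cost(r_hat, y_hat, du_hat, weights):
    """Evaluate the Eq. 1 cost: sum of squared predicted errors
    plus a weighted sum of squared control moves (move suppression)."""
    error_term = np.sum((r_hat - y_hat) ** 2)
    move_term = np.sum(weights * du_hat ** 2)
    return error_term + move_term

# Example with arbitrary numbers: N = 3 prediction steps, n_u = 2 moves.
r_hat = np.array([1.0, 1.0, 1.0])   # set-point trajectory
y_hat = np.array([0.2, 0.6, 0.9])   # predicted outputs
du_hat = np.array([0.5, 0.1])       # candidate control moves
weights = np.array([2.0, 2.0])      # move-suppression weights

J = dmc_cost(r_hat, y_hat, du_hat, weights)
```

Minimizing this cost over the candidate moves is what the control law in the next paragraph solves in closed form.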


In the DMC least squares control law, a unit step test is used to create a matrix that captures the dynamics of the system. The optimization can then be reduced to the following equation, which is the DMC unconstrained control law.

\Delta\hat{u} = (A^T A + \hat{\Lambda} I)^{-1} A^T \hat{e}   (2)
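The unconstrained control law of Eq. 2 can be sketched with a linear solve; `dmc_control_moves` is an illustrative helper, and the step-test column and tuning values are arbitrary, not taken from the patent.

```python
import numpy as np

def dmc_control_moves(A, e_hat, lam):
    """Unconstrained DMC control law (Eq. 2):
    du = (A^T A + lam*I)^(-1) A^T e,
    where lam is the move-suppression weight on the identity."""
    n_u = A.shape[1]
    H = A.T @ A + lam * np.eye(n_u)
    return np.linalg.solve(H, A.T @ e_hat)

# Tiny example: first-order step-test column with kp = 1, N = 4, n_u = 1.
alpha = 0.5
c = 1.0 - alpha ** np.arange(1, 5)   # step-test values c_i = 1 - alpha^i
A = c.reshape(-1, 1)                 # dynamic matrix for a single move
e_hat = np.ones(4)                   # unit predicted error over the horizon
du = dmc_control_moves(A, e_hat, lam=0.1)
```

Solving the regularized normal equations with `np.linalg.solve` avoids forming the explicit inverse shown in Eq. 2.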


The general transfer function for a first order system is as follows.











G(s) = \frac{k_p}{\tau s + 1}   (3)

where:
    k_p : System Gain
    \tau : System Time Constant







Using the zero-order hold discrete model, Eq. 3 can be rewritten into the discrete time domain. This equation (Eq. 4) is formulated assuming the control action (u|_t) will be constant for k time steps ahead.












\hat{y}|_{(t + k\Delta t)} = \alpha^k y_o|_t + k_p (1 - \alpha^k)\, u|_t   (4)

where:
    \hat{y}|_t : Predicted plant output at time t
    u|_t : System input at time t
    \alpha : Constant (e^{-\Delta t/\tau})
    k : An integer depicting a future time
    \Delta t : Time step







Using Eq. 4 the form of a step test can be derived. The initial conditions are all zero and the control action will be a constant unity so the equation can be significantly simplified.

\hat{c}_i = k_p (1 - \alpha^i)   (5)

where:
    \hat{c}_i : ith value of the step test


This step test data is then used to populate the dynamic matrix. Since the gain is in every term it can be factored out of the matrix.










A = k_p \begin{bmatrix} (1-\alpha^1) & 0 & \cdots & 0 \\ (1-\alpha^2) & (1-\alpha^1) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ (1-\alpha^N) & (1-\alpha^{N-1}) & \cdots & (1-\alpha^{N-n_u+1}) \end{bmatrix}   (6)

where:
    N : Prediction horizon length
    n_u : Control horizon length
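The closed-form population of the dynamic matrix in Eqs. 5 and 6 can be sketched as follows; `first_order_dynamic_matrix` is an illustrative helper and the numeric parameters are arbitrary example values.

```python
import numpy as np

def first_order_dynamic_matrix(kp, tau, dt, N, n_u):
    """Populate the DMC dynamic matrix for a first-order plant from the
    closed-form step test c_i = kp*(1 - alpha**i) of Eq. 5, instead of
    storing a recorded step response element by element."""
    alpha = np.exp(-dt / tau)
    A = np.zeros((N, n_u))
    for j in range(n_u):          # each column is the step test,
        for i in range(j, N):     # shifted down by one row per move
            A[i, j] = kp * (1.0 - alpha ** (i - j + 1))
    return A

A = first_order_dynamic_matrix(kp=2.0, tau=1.0, dt=0.1, N=5, n_u=2)
# Lower-triangular band structure: column 2 is column 1 shifted down a row.
```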







Using this dynamic matrix and its transpose, the next part of the DMC control law can be computed. To reduce large changes in the control actions, a move suppression factor is used in Dynamic Matrix Control. This factor is applied to the diagonal of the A^T A matrix.


where: λ Move suppression factor










(A^T A + \hat{\Lambda} I) = k_p^2 \begin{bmatrix} \lambda + \sum_{i=1}^{N}(1-\alpha^i)^2 & \sum_{i=1}^{N-1}(1-\alpha^i)(1-\alpha^{i+1}) & \cdots & \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-1}) \\ \sum_{i=1}^{N-1}(1-\alpha^i)(1-\alpha^{i+1}) & \lambda + \sum_{i=1}^{N}(1-\alpha^i)^2 & \cdots & \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-2}) \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-1}) & \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-2}) & \cdots & \lambda + \sum_{i=1}^{N}(1-\alpha^i)^2 \end{bmatrix}   (7)







In the control law the next step is to invert the matrix. With the plant gain factored out of the matrix the following inversion identity can be used to keep the gain separate from the control law. Generally,











(\beta H)^{-1} = \frac{1}{\beta} H^{-1}   (8)







Using Eq. 8 in Eq. 7 gives:












(A^T A + \hat{\Lambda} I)^{-1} = \frac{1}{k_p^2} M^{-1}   (9)

with

M = \begin{bmatrix} \lambda + \sum_{i=1}^{N}(1-\alpha^i)^2 & \sum_{i=1}^{N-1}(1-\alpha^i)(1-\alpha^{i+1}) & \cdots & \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-1}) \\ \sum_{i=1}^{N-1}(1-\alpha^i)(1-\alpha^{i+1}) & \lambda + \sum_{i=1}^{N-1}(1-\alpha^i)^2 & \cdots & \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-2}) \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-1}) & \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-2}) & \cdots & \lambda + \sum_{i=1}^{N-n_u+1}(1-\alpha^i)^2 \end{bmatrix}   (10)







The second half of the control law is A^T \hat{e}.

\hat{e} = \hat{r} - \hat{y}   (11)


For this application of the controller, the setpoint vector (\hat{r}) is going to be represented by a constant value.










\hat{e} = r \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} - \hat{y}   (12)







The prediction vector (\hat{y}) will start from the current value or state and transition to the steady state value in the form of a first order plant with the same time constant as the system.

\hat{y}_i|_t = (1 - \alpha^i)(y_{ss}|_t - y_o|_t) + y_o|_t   (13)

where:
    \hat{y}_i|_t : Prediction of the plant state i future time steps ahead, conducted at time t
    y_{ss}|_t : Predicted steady state value at time t
    y_o|_t : Current or measured plant value at time t


In order to find the predicted steady state value, the dynamics of the first order system have to be accounted for. At time t the system will have a new steady state value of y_{ss}|_t. Using this steady state value, the difference between it and the current state can be stated as dy_{ss}|_t. For this steady state difference dy_{ss}|_t, the control action change has to be accounted for because it is applied at the current time step.

\hat{y}_i|_t = y_o|_t + (1 - \alpha^i)(dy_{ss}|_t + k_p \Delta u|_t)   (14)


After one time step the plant should be at the following value. (i=1)

y_o|_{(t+\Delta t)} = y_o|_t + (1 - \alpha)(dy_{ss}|_t + k_p \Delta u|_t)   (15)


The change between two consecutive timestep values can now be described by the difference between the value at t and t+Δt in terms of the predictions and also by the change in the steady state values.


Therefore, \Delta y_1 is the difference between the current position and the predicted current position after one time step.

\Delta y_1 = y_o|_{(t+\Delta t)} - y_o|_t   (16)
\Delta y_1 = y_o|_t + (1 - \alpha)(dy_{ss}|_t + k_p \Delta u|_t) - y_o|_t   (17)
\Delta y_1 = dy_{ss}|_t - \alpha\, dy_{ss}|_t + k_p \Delta u|_t - \alpha k_p \Delta u|_t   (18)


\Delta y_2 is the difference between the current difference in steady state and the difference in steady state after one time step, while accounting for the control action change.

\Delta y_2 = dy_{ss}|_t - (dy_{ss}|_{(t+\Delta t)} - k_p \Delta u|_t)   (19)
\Delta y_2 = dy_{ss}|_t + k_p \Delta u|_t - dy_{ss}|_{(t+\Delta t)}   (20)


Equating Eq. 18 and Eq. 20 allows the following formula for the steady state difference to be derived.

dy_{ss}|_{(t+\Delta t)} = \alpha\, dy_{ss}|_t + \alpha k_p \Delta u|_t   (21)
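The Eq. 21 propagation of the steady-state difference can be sketched as a one-line recursion; `update_dyss` is an illustrative helper and the parameter values are arbitrary.

```python
import math

def update_dyss(dyss, du, kp, tau, dt):
    """Propagate the steady-state difference one time step (Eq. 21):
    dyss(t+dt) = alpha*dyss(t) + alpha*kp*du(t), alpha = exp(-dt/tau)."""
    alpha = math.exp(-dt / tau)
    return alpha * (dyss + kp * du)

# With no control move, the difference decays geometrically toward zero.
d = 1.0
for _ in range(3):
    d = update_dyss(d, du=0.0, kp=2.0, tau=1.0, dt=0.1)
```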


Advancing the time by one timestep and rearranging the equation allows the formula to be written in the instantaneous time domain.












\alpha \Delta t \left( \frac{dy_{ss}|_t - dy_{ss}|_{(t-\Delta t)}}{\Delta t} \right) + (1 - \alpha)\, dy_{ss}|_t = \alpha k_p \Delta u|_{(t-\Delta t)}   (22)

\alpha \Delta t\, \dot{dy}_{ss}|_t + (1 - \alpha)\, dy_{ss}|_t = \alpha k_p \Delta u|_{(t-\Delta t)}   (23)







Going back to Eq. 12, the prediction vector \hat{y} can be written in terms of dy_{ss}, and the setpoint r can be written in terms of the current error e_o and the current position y_o.

\hat{e}_i = (e_o + y_o) - ((1 - \alpha^i)\, dy_{ss} + y_o)   (24)
\hat{e}_i = e_o - (1 - \alpha^i)\, dy_{ss}   (25)


This allows the error vector to be broken up into two vectors.










\hat{e} = e_o \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} - dy_{ss} \begin{bmatrix} 1-\alpha \\ 1-\alpha^2 \\ \vdots \\ 1-\alpha^N \end{bmatrix}   (26)







Using this new equation for the error vector and the dynamic matrix control law, an equation can be formulated for the change in the control action.










\Delta\hat{u} = \frac{1}{k_p^2} M^{-1} A^T \hat{e}   (27)

\Delta\hat{u} = \frac{e_o}{k_p^2} M^{-1} A^T \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} - \frac{dy_{ss}}{k_p^2} M^{-1} A^T \begin{bmatrix} 1-\alpha \\ 1-\alpha^2 \\ \vdots \\ 1-\alpha^N \end{bmatrix}   (28)







Using the \hat{w} and \hat{z} vectors, Eq. 28 can be rewritten.











\Delta\hat{u} = \frac{e_o}{k_p} \hat{w} - \frac{dy_{ss}}{k_p} \hat{z}   (29)

where:
    \hat{z} = M^{-1} \begin{bmatrix} \sum_{i=1}^{N}(1-\alpha^i)^2 \\ \sum_{i=1}^{N-1}(1-\alpha^i)(1-\alpha^{i+1}) \\ \vdots \\ \sum_{i=1}^{N-n_u+1}(1-\alpha^i)(1-\alpha^{i+n_u-1}) \end{bmatrix}
    \hat{w} = M^{-1} \begin{bmatrix} \sum_{i=1}^{N}(1-\alpha^i) \\ \sum_{i=1}^{N-1}(1-\alpha^i) \\ \vdots \\ \sum_{i=1}^{N-n_u+1}(1-\alpha^i) \end{bmatrix}







Only the first element of the control action change vector will be used so the equation can be reduced.











\Delta u = \frac{\hat{w}_1}{k_p} e - \frac{\hat{z}_1}{k_p}\, dy_{ss}   (30)

where:
    \Delta u = \Delta\hat{u}_1







Taking the Laplace transform of Eq.30, the controller can be placed in a closed loop block diagram.










\Delta U(s) = \frac{\hat{w}_1}{k_p} E(s) - \frac{\hat{z}_1}{k_p}\, dY_{ss}(s)   (31)







To find dY_{ss}(s), the Laplace transform is taken of Eq. 23.











\alpha \Delta t\, s\, dY_{ss}(s) + (1 - \alpha)\, dY_{ss}(s) = \alpha k_p \Delta U(s)   (32)

dY_{ss}(s) = \frac{\alpha k_p}{\alpha \Delta t\, s + (1 - \alpha)} \Delta U(s)   (33)

u|_t = u|_{(t-\Delta t)} + \Delta u|_t   (34)

\Delta t \left( \frac{u|_t - u|_{(t-\Delta t)}}{\Delta t} \right) = \Delta u|_t   (35)

\frac{U(s)}{\Delta U(s)} = \frac{1}{\Delta t\, s}   (36)







Using the Laplace transform equations Eq. 31, Eq. 33, and Eq. 36, a block diagram for the DMC controller can be drawn, FIG. 2. Using the block diagram reduction law for feedback loops, this block diagram can be reduced to a single closed loop block.











\frac{U(s)}{E(s)} = \frac{\hat{w}_1 \left( \alpha \Delta t\, s + (1 - \alpha) \right)}{k_p \Delta t\, s \left( \alpha \Delta t\, s + (1 - \alpha) + \hat{z}_1 \alpha \right)}   (37)







Converting Eq. 37 from continuous to discrete creates the control law for the stateless discrete predictive controller.











u|_t = b_0\, e|_t + b_1\, e|_{(t-\Delta t)} - a_1\, u|_{(t-\Delta t)} - a_2\, u|_{(t-2\Delta t)}   (38)

where:
    a_1 = -\frac{1 + \alpha + \hat{z}_1 \alpha}{1 + \hat{z}_1 \alpha}
    a_2 = \frac{\alpha}{1 + \hat{z}_1 \alpha}
    b_0 = \frac{\hat{w}_1}{k_p (1 + \hat{z}_1 \alpha)}
    b_1 = -\frac{\hat{w}_1 \alpha}{k_p (1 + \hat{z}_1 \alpha)}
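The whole first-order pipeline above can be exercised numerically. The sketch below assumes a control horizon of n_u = 1, so that M, \hat{w} and \hat{z} collapse to scalars; all parameter values are arbitrary examples, and the helper name is illustrative.

```python
import math

def stateless_first_order_controller(kp, tau, dt, N, lam):
    """Precompute the Eq. 38 difference-equation coefficients for a
    first-order plant, assuming a control horizon n_u = 1 so that the
    matrix M and the vectors w, z reduce to scalars."""
    alpha = math.exp(-dt / tau)
    s1 = sum(1.0 - alpha ** i for i in range(1, N + 1))
    s2 = sum((1.0 - alpha ** i) ** 2 for i in range(1, N + 1))
    M = lam + s2                       # scalar (A^T A + lam I) / kp^2
    w1, z1 = s1 / M, s2 / M
    den = 1.0 + z1 * alpha
    a1 = -(1.0 + alpha + z1 * alpha) / den
    a2 = alpha / den
    b0 = w1 / (kp * den)
    b1 = -w1 * alpha / (kp * den)
    return alpha, a1, a2, b0, b1

# Closed-loop check on a simulated first-order plant (arbitrary numbers).
kp, tau, dt = 2.0, 1.0, 0.1
alpha, a1, a2, b0, b1 = stateless_first_order_controller(kp, tau, dt, N=10, lam=1.0)
y, r = 0.0, 1.0
u_prev = u_prev2 = e_prev = 0.0
for _ in range(100):
    e = r - y
    u = b0 * e + b1 * e_prev - a1 * u_prev - a2 * u_prev2   # Eq. 38
    y = alpha * y + kp * (1.0 - alpha) * u                  # discrete plant, Eq. 4
    u_prev2, u_prev, e_prev = u_prev, u, e
```

Note that 1 + a_1 + a_2 = 0 by construction, which gives the control law integral action: the output settles at the set-point with zero steady-state error.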








3. Second Order Controller Theory


Using the zero-order hold discrete model, an equation for a second order system can be generated. In generating a step response the initial conditions are all zero and the control action will be constant at one (1), so the equation can be simplified to the equation seen in Eq. 39.












\hat{c}_i = k_p \left( 1 - e^{-\zeta\omega_n \Delta t\, i} \cos(\omega_d \Delta t\, i) - \frac{\zeta\omega_n}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i) \right)   (39)

where:
    \hat{c}_i : Step test value at the ith time step
    k_p : System gain
    \zeta : Damping ratio
    \omega_n : Natural frequency
    \omega_d : Damped natural frequency
    \Delta t : Time step
    i : Indexing variable







To assist in the derivation the gain is separated from the remainder of the response using the following convention.

\hat{c}_i = k_p \hat{c}'_i   (40)
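The closed-form second-order step test of Eq. 39, with the gain factored out per Eq. 40, can be sketched as follows; the helper name and all numeric parameters are illustrative choices, not values from the patent.

```python
import math

def second_order_step_test(kp, zeta, wn, dt, N):
    """Closed-form unit step-test values for an underdamped second-order
    plant (Eq. 39), returning both c_i and the gain-free c'_i (Eq. 40)."""
    wd = wn * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    c_prime = []
    for i in range(1, N + 1):
        t = dt * i
        decay = math.exp(-zeta * wn * t)
        c_prime.append(1.0 - decay * math.cos(wd * t)
                       - (zeta * wn / wd) * decay * math.sin(wd * t))
    return [kp * cp for cp in c_prime], c_prime

c, c_prime = second_order_step_test(kp=2.0, zeta=0.5, wn=3.0, dt=0.05, N=200)
# The step response settles at the plant gain kp as the transients decay.
```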


This step test data is then used to populate the dynamic matrix.










A = k_p \begin{bmatrix} \hat{c}'_1 & 0 & \cdots & 0 \\ \hat{c}'_2 & \hat{c}'_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \hat{c}'_N & \hat{c}'_{N-1} & \cdots & \hat{c}'_{N-n_u+1} \end{bmatrix}   (41)

where:
    N : Prediction horizon length
    n_u : Control horizon length







Using this dynamic matrix and its transpose the next matrix that is used in the optimization can be computed. This also includes the move suppression factor.













(A^T A + \hat{\Lambda} I) = k_p^2 \begin{bmatrix} \lambda + \sum_{i=1}^{N}(\hat{c}'_i)^2 & \sum_{i=1}^{N-1} \hat{c}'_i\, \hat{c}'_{i+1} & \cdots & \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-1} \\ \sum_{i=1}^{N-1} \hat{c}'_i\, \hat{c}'_{i+1} & \lambda + \sum_{i=1}^{N-1}(\hat{c}'_i)^2 & \cdots & \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-2} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-1} & \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-2} & \cdots & \lambda + \sum_{i=1}^{N-n_u+1}(\hat{c}'_i)^2 \end{bmatrix}   (42)

where:
    \lambda : Move suppression factor









The next step is to invert the matrix, with the plant gain factored out of the matrix. The following inversion identity can be used to keep the gain separate from the control law.











(kA)^{-1} = \frac{1}{k} A^{-1}   (43)







It follows:











(A^T A + \hat{\Lambda} I)^{-1} = \frac{1}{k_p^2} M^{-1}   (44)














M = \begin{bmatrix} \lambda + \sum_{i=1}^{N}(\hat{c}'_i)^2 & \sum_{i=1}^{N-1} \hat{c}'_i\, \hat{c}'_{i+1} & \cdots & \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-1} \\ \sum_{i=1}^{N-1} \hat{c}'_i\, \hat{c}'_{i+1} & \lambda + \sum_{i=1}^{N-1}(\hat{c}'_i)^2 & \cdots & \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-2} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-1} & \sum_{i=1}^{N-n_u+1} \hat{c}'_i\, \hat{c}'_{i+n_u-2} & \cdots & \lambda + \sum_{i=1}^{N-1}(\hat{c}'_i)^2 \end{bmatrix}   (45)







The second half of the optimization formula is A^T \hat{e}, where \hat{e} is found through Eq. 46.

\hat{e} = \hat{r} - \hat{y}   (46)


For this application of the controller, the setpoint vector (\hat{r}) is going to be represented by a scalar value.

\hat{e} = r - \hat{y}   (47)


The prediction vector can then be formulated using the time-domain solution of the system.

















\hat{y}_i|_t = u|_{(t-\Delta t)}\, k_p \left( 1 - e^{-\zeta\omega_n \Delta t\, i} \cos(\omega_d \Delta t\, i) - \frac{\zeta\omega_n}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i) \right) + y_o|_t \left( e^{-\zeta\omega_n \Delta t\, i} \cos(\omega_d \Delta t\, i) + \frac{\zeta\omega_n}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i) \right) + \left( \frac{y_o|_t - y_o|_{(t-\Delta t)}}{\Delta t} \right) \frac{1}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i)   (48)







Adding the control action dynamics and current position dynamics together allows the equation to be simplified.


















\hat{y}_i|_t = \left( u|_{(t-\Delta t)}\, k_p - y_o|_t \right) \left( 1 - e^{-\zeta\omega_n \Delta t\, i} \cos(\omega_d \Delta t\, i) - \frac{\zeta\omega_n}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i) \right) + y_o|_t + \left( \frac{y_o|_t - y_o|_{(t-\Delta t)}}{\Delta t} \right) \frac{1}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i)   (49)







Replacing the prediction vector in Eq. 47 with Eq. 49, the error vector can be written in terms of the current error value (e_o = r - y_o).


Rearranging Eq. 50 allows the error vector to be written as three separate vectors.













\hat{e}_i = e_o|_t - \left( u|_{(t-\Delta t)}\, k_p - y_o|_t \right) \left( 1 - e^{-\zeta\omega_n \Delta t\, i} \cos(\omega_d \Delta t\, i) - \frac{\zeta\omega_n}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i) \right) - \left( \frac{y_o|_t - y_o|_{(t-\Delta t)}}{\Delta t} \right) \frac{1}{\omega_d} e^{-\zeta\omega_n \Delta t\, i} \sin(\omega_d \Delta t\, i)   (50)

\hat{e} = e_o|_t \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} + \left( y_o|_t - u|_{(t-\Delta t)}\, k_p \right) \begin{bmatrix} 1 - e^{-\zeta\omega_n \Delta t} \cos(\omega_d \Delta t) - \frac{\zeta\omega_n}{\omega_d} e^{-\zeta\omega_n \Delta t} \sin(\omega_d \Delta t) \\ 1 - e^{-2\zeta\omega_n \Delta t} \cos(2\omega_d \Delta t) - \frac{\zeta\omega_n}{\omega_d} e^{-2\zeta\omega_n \Delta t} \sin(2\omega_d \Delta t) \\ \vdots \\ 1 - e^{-N\zeta\omega_n \Delta t} \cos(N\omega_d \Delta t) - \frac{\zeta\omega_n}{\omega_d} e^{-N\zeta\omega_n \Delta t} \sin(N\omega_d \Delta t) \end{bmatrix} - \left( \frac{y_o|_t - y_o|_{(t-\Delta t)}}{\Delta t} \right) \frac{1}{\omega_d} \begin{bmatrix} e^{-\zeta\omega_n \Delta t} \sin(\omega_d \Delta t) \\ e^{-2\zeta\omega_n \Delta t} \sin(2\omega_d \Delta t) \\ \vdots \\ e^{-N\zeta\omega_n \Delta t} \sin(N\omega_d \Delta t) \end{bmatrix}   (51)







Using Eq. 51 as the error vector, the equations in Eq. 52 and Eq. 53 for the future control actions can be written.










\Delta\hat{u} = \frac{1}{k_p^2} M^{-1} A^T \hat{e}   (52)

\Delta\hat{u} = \frac{e_o|_t}{k_p^2} M^{-1} A^T \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} + \frac{\left( y_o|_t - u|_{(t-\Delta t)}\, k_p \right)}{k_p^2} M^{-1} A^T \begin{bmatrix} 1 - e^{-\zeta\omega_n \Delta t} \cos(\omega_d \Delta t) - \frac{\zeta\omega_n}{\omega_d} e^{-\zeta\omega_n \Delta t} \sin(\omega_d \Delta t) \\ 1 - e^{-2\zeta\omega_n \Delta t} \cos(2\omega_d \Delta t) - \frac{\zeta\omega_n}{\omega_d} e^{-2\zeta\omega_n \Delta t} \sin(2\omega_d \Delta t) \\ \vdots \\ 1 - e^{-N\zeta\omega_n \Delta t} \cos(N\omega_d \Delta t) - \frac{\zeta\omega_n}{\omega_d} e^{-N\zeta\omega_n \Delta t} \sin(N\omega_d \Delta t) \end{bmatrix} - \left( \frac{y_o|_t - y_o|_{(t-\Delta t)}}{\Delta t} \right) \frac{1}{k_p^2\, \omega_d} M^{-1} A^T \begin{bmatrix} e^{-\zeta\omega_n \Delta t} \sin(\omega_d \Delta t) \\ e^{-2\zeta\omega_n \Delta t} \sin(2\omega_d \Delta t) \\ \vdots \\ e^{-N\zeta\omega_n \Delta t} \sin(N\omega_d \Delta t) \end{bmatrix}   (53)







Using the \hat{d}_1, \hat{d}_2 and \hat{d}_3 vectors, Eq. 53 can be rewritten for simplicity.










\Delta\hat{u} = \frac{e_o|_t}{k_p} \hat{d}_1 + \frac{\left( y_o|_t - u|_{(t-\Delta t)}\, k_p \right)}{k_p} \hat{d}_2 - \left( \frac{y_o|_t - y_o|_{(t-\Delta t)}}{\Delta t} \right) \frac{1}{k_p\, \omega_d} \hat{d}_3   (54)







Since only the first element of the control action change vector is implemented on the plant, Eq. 54 can be reduced to the following.












\Delta u|_t = \frac{e_o|_t}{k_p} d_1 + \frac{\left( y_o|_t - u|_{(t-\Delta t)}\, k_p \right)}{k_p} d_2 - \left( \frac{y_o|_t - y_o|_{(t-\Delta t)}}{\Delta t} \right) \frac{1}{k_p\, \omega_d} d_3   (55)

where:
    \Delta u|_t = \Delta\hat{u}|_t(1)
    d_1 = \hat{d}_1(1)
    d_2 = \hat{d}_2(1)
    d_3 = \hat{d}_3(1)







Rearranging Eq. 55 and replacing the control action change (\Delta u|_t), the following equation can be derived.












\left( u|_t - u|_{(t-\Delta t)} \right) + d_2\, u|_{(t-\Delta t)} = \frac{d_1}{k_p} e|_t + \frac{d_2}{k_p} y|_t - \frac{d_3}{k_p\, \omega_d} \dot{y}|_t   (56)

\Delta t (1 - d_2)\, \dot{u}|_t + d_2\, u|_t = \frac{d_1}{k_p} e|_t + \frac{d_2}{k_p} y|_t - \frac{d_3}{k_p\, \omega_d} \dot{y}|_t   (57)







Taking the Laplace transform of Eq. 57 gives the following.











\left( \Delta t (1 - d_2) s + d_2 \right) U(s) = \frac{d_1}{k_p} E(s) - \left( \frac{d_3}{k_p\, \omega_d} s - \frac{d_2}{k_p} \right) Y(s)   (58)







Using the general equation for a second order system (Eq. 59), Eq. 58 can be rearranged into a transfer function between the error and the control action.
















\frac{Y(s)}{U(s)} = \frac{k_p\, \omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}   (59)

\left( \Delta t (1 - d_2) s + d_2 \right) U(s) = \frac{d_1}{k_p} E(s) - \left( \frac{d_3}{k_p\, \omega_d} s - \frac{d_2}{k_p} \right) \left( \frac{k_p\, \omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \right) U(s)   (60)

\left( \Delta t (1 - d_2) s + d_2 + \frac{\frac{\omega_n^2}{\omega_d} d_3\, s - \omega_n^2 d_2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \right) U(s) = \frac{d_1}{k_p} E(s)   (61)













\frac{U(s)}{E(s)} = \frac{\frac{d_1}{k_p} \left( s^2 + 2\zeta\omega_n s + \omega_n^2 \right)}{\Delta t (1 - d_2) s^3 + \left( d_2 + 2\zeta\omega_n \Delta t (1 - d_2) \right) s^2 + \left( \omega_n^2 \Delta t (1 - d_2) + 2\zeta\omega_n d_2 + \frac{\omega_n^2}{\omega_d} d_3 \right) s}   (62)







Converting the transfer function in Eq. 62 into discrete time will yield the control law for a second order Dynamic Matrix Controller.










u|_t = b_0\, e|_t + b_1\, e|_{(t-\Delta t)} + b_2\, e|_{(t-2\Delta t)} - a_1\, u|_{(t-\Delta t)} - a_2\, u|_{(t-2\Delta t)} - a_3\, u|_{(t-3\Delta t)}   (63)

where:
    a_1 = -\frac{3 - d_2 + \left( 4\zeta - 2\zeta d_2 + \frac{\omega_n}{\omega_d} d_3 \right) \omega_n \Delta t + (1 - d_2)\, \omega_n^2 \Delta t^2}{1 + \left( 2\zeta + \frac{\omega_n}{\omega_d} d_3 \right) \omega_n \Delta t + (1 - d_2)\, \omega_n^2 \Delta t^2}
    a_2 = \frac{3 - 2 d_2 + (1 - d_2)\, 2\zeta\omega_n \Delta t}{1 + \left( 2\zeta + \frac{\omega_n}{\omega_d} d_3 \right) \omega_n \Delta t + (1 - d_2)\, \omega_n^2 \Delta t^2}
    a_3 = -\frac{1 - d_2}{1 + \left( 2\zeta + \frac{\omega_n}{\omega_d} d_3 \right) \omega_n \Delta t + (1 - d_2)\, \omega_n^2 \Delta t^2}
    b_0 = \frac{d_1}{k_p} \cdot \frac{1 + 2\zeta\omega_n \Delta t + \omega_n^2 \Delta t^2}{1 + \left( 2\zeta + \frac{\omega_n}{\omega_d} d_3 \right) \omega_n \Delta t + (1 - d_2)\, \omega_n^2 \Delta t^2}
    b_1 = -\frac{d_1}{k_p} \cdot \frac{2 + 2\zeta\omega_n \Delta t}{1 + \left( 2\zeta + \frac{\omega_n}{\omega_d} d_3 \right) \omega_n \Delta t + (1 - d_2)\, \omega_n^2 \Delta t^2}
    b_2 = \frac{d_1}{k_p} \cdot \frac{1}{1 + \left( 2\zeta + \frac{\omega_n}{\omega_d} d_3 \right) \omega_n \Delta t + (1 - d_2)\, \omega_n^2 \Delta t^2}






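The six coefficients of the discrete control law are constants that can be computed once, before the controller runs. The sketch below (function and parameter values are illustrative, not taken from the specification; d_1, d_2, d_3 are treated as precomputed reduced-controller gains) evaluates them directly. A useful consistency check falls out of the algebra: because the denominator of Eq. 62 has no constant term, the recursion retains an integrator, which is equivalent to a_1 + a_2 + a_3 = −1.

```python
import numpy as np

def second_order_dmc_coeffs(kp, wn, zeta, wd, d1, d2, d3, dt):
    """Evaluate the constant coefficients of the discrete second-order
    control law (Eq. 63). d1, d2, d3 are the reduced controller gains
    from the earlier derivation and are inputs here."""
    # Common denominator D shared by all six coefficients.
    D = 1 + (2 * zeta + wn * wd * d3) * wn * dt + (1 - d2) * wn ** 2 * dt ** 2
    a1 = -(3 - d2 + (4 * zeta - 2 * zeta * d2 + wn * wd * d3) * wn * dt
           + (1 - d2) * wn ** 2 * dt ** 2) / D
    a2 = (3 - 2 * d2 + (1 - d2) * 2 * zeta * wn * dt) / D
    a3 = -(1 - d2) / D
    b0 = d1 * kp * (1 + 2 * zeta * wn * dt + wn ** 2 * dt ** 2) / D
    b1 = -d1 * kp * (2 + 2 * zeta * wn * dt) / D
    b2 = d1 * kp / D
    return (a1, a2, a3), (b0, b1, b2)

# hypothetical plant and tuning values, for illustration only
(a1, a2, a3), (b0, b1, b2) = second_order_dmc_coeffs(
    kp=1.2, wn=2.0, zeta=0.7, wd=1.4, d1=0.5, d2=0.1, d3=0.05, dt=0.05)
```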


4. First Order Plus Dead Time Controller Theory


This section presents the control theory derivation for a first order plus dead time system. The general transfer function for a first order system with dead time can be seen in Eq. 64.











G(s) = k_p e^(−θs) / (τs + 1)

where: k_p = system gain; θ = system dead time; τ = system time constant   (64)







Using the zero-order hold discrete model theory, Eq. 64 can be rewritten into the discrete time domain. This equation (Eq. 65) is formulated assuming the control action (u|(t−θ)) will be constant for k time steps ahead.

















ŷ(t + kΔt) = α^k y_o|_t + k_p(1 − α^k)u(t − θ)   (65)

where: ŷ|_t = plant output at time t; u|_t = system input at time t; α = constant (e^(−Δt/τ)); k = an integer depicting a future time; Δt = time step












Using Eq. 65 the general form of a step test can be derived. The initial conditions are all zero and the control action will be a constant unity so the equation can be significantly simplified.











ĉ_i = { 0,                   i < θ/Δt
      { k_p(1 − α^(i−n_k)),  i ≥ θ/Δt   (66)

where: ĉ_i = ith value of the step test (with n_k = θ/Δt, rounded)






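The step-test values of Eq. 66 can be evaluated directly. The following is a minimal Python sketch (function name and the parameter values are illustrative, not from the specification), using α = e^(−Δt/τ) from Eq. 65: the first n_k samples are zero for the dead time, after which the first-order rise toward the gain k_p begins.

```python
import numpy as np

def fopdt_step_test(kp, tau, theta, dt, n):
    """Step-test values c_i of Eq. 66 for a first-order-plus-dead-time
    model: zero during the dead time, then kp*(1 - alpha**(i - nk))."""
    alpha = np.exp(-dt / tau)
    nk = int(round(theta / dt))          # dead time steps
    i = np.arange(1, n + 1)
    return np.where(i < nk, 0.0, kp * (1.0 - alpha ** (i - nk)))

# hypothetical plant: gain 2, time constant 5, dead time 3, sample time 1
c = fopdt_step_test(kp=2.0, tau=5.0, theta=3.0, dt=1.0, n=10)
```
The values rise monotonically toward the steady-state gain, which is why k_p can be factored out of the dynamic matrix in the next step.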







This step test data is then used to populate the dynamic matrix. Since the gain is in every term it can be factored out of the matrix.










A = k_p [ 0                   0                     …   0
          (1 − α)             0                     …   0
          (1 − α^2)           (1 − α)               …   0
          ⋮                   ⋮                         ⋮
          (1 − α^(N−n_k))     (1 − α^(N−n_k−1))     …   (1 − α^(N−n_u−n_k+1)) ]

where: N = prediction horizon length; n_u = control horizon length; n_k = dead time steps (n_k = θ/Δt, rounded)   (67)



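Each column of the matrix in Eq. 67 is the same step response shifted down by one additional row, with n_k leading zero rows from the dead time. A sketch of that population step (the function name is illustrative; the gain k_p is kept factored out, as in the text):

```python
import numpy as np

def dead_time_dynamic_matrix(alpha, N, nu, nk):
    """Populate the Eq. 67 dynamic matrix (gain kp factored out).
    Column u is the unit step response shifted down by nk + u rows."""
    A = np.zeros((N, nu))
    steps = 1.0 - alpha ** np.arange(1, N + 1)   # (1-alpha^1), ..., (1-alpha^N)
    for u in range(nu):
        lead = nk + u                            # zero rows: dead time + shift
        if N > lead:
            A[lead:, u] = steps[: N - lead]
    return A

A = dead_time_dynamic_matrix(alpha=0.5, N=5, nu=2, nk=1)
```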




Using this dynamic matrix and its transpose the next part of the DMC control law can be computed. To reduce large changes in the control actions a move suppression factor is used in Dynamic Matrix Control. This factor is multiplied by the diagonal of the AᵀA matrix.











(AᵀA − Λ̂I) = k_p^2 [ λ Σ_{i=1}^{N−n_k} (1 − α^i)^2                      Σ_{i=1}^{N−n_k−1} (1 − α^i)(1 − α^(i+1))         …   Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−1))
                     Σ_{i=1}^{N−n_k−1} (1 − α^i)(1 − α^(i+1))            λ Σ_{i=1}^{N−n_k−1} (1 − α^i)^2                  …   Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−2))
                     ⋮                                                   ⋮                                                    ⋮
                     Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−1))    Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−2))  …   λ Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)^2 ]

where: λ = move suppression factor   (68)







In the control law the next step is to invert the matrix. With the plant gain factored out of the matrix, the following inversion identity can be used to keep the gain separate from the control law.











(kA)^(−1) = (1/k)A^(−1)   (69)







Therefore,











(AᵀA − Λ̂I)^(−1) = (1/k_p^2)M^(−1)   (70)






M = [ λ Σ_{i=1}^{N−n_k} (1 − α^i)^2                      Σ_{i=1}^{N−n_k−1} (1 − α^i)(1 − α^(i+1))         …   Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−1))
      Σ_{i=1}^{N−n_k−1} (1 − α^i)(1 − α^(i+1))            λ Σ_{i=1}^{N−n_k−1} (1 − α^i)^2                  …   Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−2))
      ⋮                                                   ⋮                                                    ⋮
      Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−1))    Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−2))  …   λ Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)^2 ]   (71)


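The M matrix of Eq. 71 is simply AᵀA with the move suppression factor λ applied multiplicatively to the diagonal, following the convention stated above Eq. 68. A small sketch (names and the example matrix are illustrative):

```python
import numpy as np

def suppression_matrix(A, lam):
    """Form M = A^T A with the move suppression factor lam multiplying
    the diagonal entries (the document's convention, Eq. 71)."""
    M = A.T @ A
    M[np.diag_indices_from(M)] *= lam
    return M

# a tiny step-response matrix for alpha = 0.5, N = 3, nu = 2, nk = 0
A = np.array([[0.5, 0.0], [0.75, 0.5], [0.875, 0.75]])
M = suppression_matrix(A, lam=2.0)
```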





The second half of the control law is Aᵀê.











Aᵀê = k_p [ 0 … 0   (1 − α)   (1 − α^2)   …   (1 − α^(N−n_k))
            0 … 0   0         (1 − α)     …   (1 − α^(N−n_k−1))
            ⋮                                 ⋮
            0 … 0   0         0           …   (1 − α^(N−n_u−n_k+1)) ] [ ê_1
                                                                        ê_2
                                                                        ⋮
                                                                        ê_N ]   (72)













Aᵀê = k_p [ Σ_{i=1}^{N−n_k} (1 − α^i) ê_(i+n_k)
            Σ_{i=1}^{N−n_k−1} (1 − α^i) ê_(i+n_k+1)
            ⋮
            Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i) ê_(i+n_k+n_u−1) ]   (73)







From Eq. 73 it can be seen that the first nk elements of the error vector are not used in the calculation of the next control action. This means the prediction vector is also only significant after nk steps. Using this fact the prediction vector formulation will be identical to the formulation of the prediction vector for the first order system without dead time. From Eq. 26 the formulation of the error vector can be determined.











ê = e|_(+n_k) [ N.R.
                ⋮
                1
                1
                ⋮
                1 ] − dy_ss|_(+n_k) [ N.R.
                                      ⋮
                                      1 − α
                                      1 − α^2
                                      ⋮
                                      1 − α^(N−n_k) ]   (74)

where: N.R. = not relevant; (+n_k) = indexing n_k steps in the future







Using this new equation for the error vector and the dynamic matrix control law, an equation can be formulated for the change in the control action.










Δû = (1/k_p^2) M^(−1) Aᵀ ê   (75)

Δû = (e|_(+n_k)/k_p^2) M^(−1) Aᵀ [ N.R.
                                   ⋮
                                   1
                                   1
                                   ⋮
                                   1 ] − (dy_ss|_(+n_k)/k_p^2) M^(−1) Aᵀ [ N.R.
                                                                           ⋮
                                                                           1 − α
                                                                           1 − α^2
                                                                           ⋮
                                                                           1 − α^(N−n_k) ]   (76)







Using the ŵ and ẑ vectors the equation can be rewritten.











Δû = (e|_(+n_k)/k_p) ŵ − (dy_ss|_(+n_k)/k_p) ẑ

where:

ẑ = M^(−1) [ Σ_{i=1}^{N−n_k} (1 − α^i)^2
             Σ_{i=1}^{N−n_k−1} (1 − α^i)(1 − α^(i+1))
             ⋮
             Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i)(1 − α^(i+n_u−1)) ]

ŵ = M^(−1) [ Σ_{i=1}^{N−n_k} (1 − α^i)
             Σ_{i=1}^{N−n_k−1} (1 − α^i)
             ⋮
             Σ_{i=1}^{N−n_k−n_u+1} (1 − α^i) ]   (77)







Only the first element of the control action change vector will be used so the equation can be reduced.











Δu = (ŵ_1/k_p) e|_(+n_k) − (ẑ_1/k_p) dy_ss|_(+n_k)

where: Δu = Δû_1   (78)

u|_t = b_0 e|_(t+n_k Δt) + b_1 e|_(t+(n_k−1)Δt) − a_1 u|_(t−Δt) − a_2 u|_(t−2Δt)

where:

a_1 = −(1 + α)/(1 + ẑ_1 α)
a_2 = α/(1 + ẑ_1 α)
b_0 = ŵ_1/(k_p(1 + ẑ_1 α))
b_1 = −ŵ_1 α/(k_p(1 + ẑ_1 α))   (79)



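The scalar coefficients of Eq. 79 only require the first elements of the ŵ and ẑ vectors of Eq. 77, so the whole offline computation collapses to a few lines. A sketch (function name and tuning values are illustrative; it relies on two identities that hold by construction, a_2/(−a_1) = α/(1+α) and b_1/b_0 = −α, which make convenient sanity checks):

```python
import numpy as np

def fopdt_dmc_scalars(kp, tau, theta, dt, N, nu, lam):
    """Offline computation of the Eq. 79 coefficients for a
    first-order-plus-dead-time plant (Eqs. 71, 77, 79)."""
    alpha = np.exp(-dt / tau)
    nk = int(round(theta / dt))
    n = N - nk
    s = 1.0 - alpha ** np.arange(1, n + 1)        # step response, gain factored out
    A = np.zeros((n, nu))
    for u in range(nu):                           # shifted-column dynamic matrix
        A[u:, u] = s[: n - u]
    M = A.T @ A
    M[np.diag_indices_from(M)] *= lam             # lambda times the diagonal (Eq. 71)
    Minv = np.linalg.inv(M)
    w1 = (Minv @ np.array([s[: n - u].sum() for u in range(nu)]))[0]       # w-hat_1
    z1 = (Minv @ np.array([(s[: n - u] * s[u:]).sum() for u in range(nu)]))[0]  # z-hat_1
    den = 1.0 + z1 * alpha
    return alpha, -(1 + alpha) / den, alpha / den, w1 / (kp * den), -w1 * alpha / (kp * den)

alpha, a1, a2, b0, b1 = fopdt_dmc_scalars(kp=1.5, tau=5.0, theta=2.0,
                                          dt=1.0, N=15, nu=3, lam=1.0)
```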




This control law is the same as the one for the first order plant (Eq. 30) except that the e|(+n_k) and dy_ss|(+n_k) terms are delayed by n_k time steps. Using the same theory as for the general first order formulation, the control law for the first order plus dead time system can be written.


In Eq. 79 there are some future errors (e|(t+n_kΔt) and e|(t+(n_k−1)Δt)) which are required to calculate the next control action. Using the zero-order hold discrete equation for a first order system, Eq. 4, the future error can be written in terms of the current process or plant value, the setpoint, and the last n_k control actions.











ê(t + n_k Δt) = r − α^(n_k) y_o|_t − k_p(1 − α^(n_k)) u(t − n_k Δt) − Σ_{i=1}^{n_k−1} k_p(1 − α^i)(u(t − iΔt) − u(t − (i+1)Δt))   (80)



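Eq. 80 propagates the current measurement forward through the stored control actions. A sketch (names are illustrative; it assumes u_hist[0] = u(t−Δt), u_hist[1] = u(t−2Δt), and so on, with at least n_k entries). A quick check: at steady state, with y = k_p·u and a constant input history, the predicted error equals the current error r − y.

```python
import numpy as np

def future_error(r, y_now, u_hist, kp, alpha, nk):
    """Predicted error nk steps ahead (Eq. 80) from the current output
    and the last nk control actions."""
    e = r - alpha ** nk * y_now - kp * (1 - alpha ** nk) * u_hist[nk - 1]
    for i in range(1, nk):
        e -= kp * (1 - alpha ** i) * (u_hist[i - 1] - u_hist[i])
    return e

# steady-state check: y = kp * u, constant input history
e = future_error(r=1.0, y_now=0.5, u_hist=[0.25, 0.25, 0.25],
                 kp=2.0, alpha=0.8, nk=3)
```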





5. General Linear System Controller Theory


With reference to FIG. 1, in this section the general linear control theory is derived. It is similar to the methodology used in the first order derivation but it allows any linear system to be used. According to one embodiment of the present invention, in step 4, transform a model into a weighted sequence, the model provided in step 2 can be used to algebraically describe each element in a unit step test of the system being modelled. With each of these individual equations, the dynamic matrix can be populated with solely algebraic expressions. The form that will be used for the model provided in step 2 is as follows.











Y(s)/U(s) = (b_0 + b_1 s + … + b_(m_b) s^(m_b)) / (1 + a_1 s + … + a_(n_a) s^(n_a))   (81)







This formulation assumes the model is in its reduced form such that m_b is less than n_a. From this state the model can be reduced to a series of first order equations, where the gain (k_i) and the time constant (τ_i) can be either real or complex values.











Y(s)/U(s) = Σ_{i=1}^{n_a} k_i/(τ_i s + 1)   (82)







In step 3, generate equation for step test, using the zero-order hold discrete model for a first order system (Eq. 4) the general linear zero-order hold model can be created. The initial conditions are all zero and the control action will be constant at one, so the equation can be significantly simplified.












ĉ_j = Σ_{i=1}^{n_a} k_i(1 − α_i^j)

where: α_i = constant (e^(−Δt/τ_i))   (83)




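Eq. 83 sums the first-order modes directly; since k_i and τ_i may be complex conjugate pairs, the imaginary parts cancel in the sum. A sketch (function name and values are illustrative):

```python
import numpy as np

def modal_step_test(k, tau, dt, N):
    """Eq. 83 step-test values c_j = sum_i k_i (1 - alpha_i^j),
    alpha_i = exp(-dt/tau_i); k_i, tau_i may be complex pairs."""
    k = np.asarray(k, dtype=complex)
    alpha = np.exp(-dt / np.asarray(tau, dtype=complex))
    j = np.arange(1, N + 1)[:, None]
    return (k * (1.0 - alpha ** j)).sum(axis=1).real

c = modal_step_test(k=[2.0], tau=[5.0], dt=1.0, N=8)
```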



In step 6, populate dynamic matrix, this step test data is then used to populate the dynamic matrix.










A = [ ĉ_1   0         …   0
      ĉ_2   ĉ_1       …   0
      ⋮     ⋮             ⋮
      ĉ_N   ĉ_(N−1)   …   ĉ_(N−n_u+1) ]

where: N = prediction horizon length; n_u = control horizon length   (84)







Using this dynamic matrix and its transpose the next part of the DMC control law can be computed. To reduce large changes in the control actions a move suppression factor is used in Dynamic Matrix Control. This factor is multiplied by the diagonals in the AᵀA matrix.










(AᵀA − Λ̂I) = [ λ Σ_{j=1}^{N} ĉ_j^2                      Σ_{j=1}^{N−1} ĉ_j ĉ_(j+1)           …   Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−1)
               Σ_{j=1}^{N−1} ĉ_j ĉ_(j+1)                 λ Σ_{j=1}^{N−1} ĉ_j^2               …   Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−2)
               ⋮                                         ⋮                                       ⋮
               Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−1)         Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−2)   …   λ Σ_{j=1}^{N−n_u+1} ĉ_j^2 ]   (85)







where: λ = move suppression factor


Therefore,
















(AᵀA − Λ̂I)^(−1) = M^(−1)   (86)

M = [ λ Σ_{j=1}^{N} ĉ_j^2                      Σ_{j=1}^{N−1} ĉ_j ĉ_(j+1)           …   Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−1)
      Σ_{j=1}^{N−1} ĉ_j ĉ_(j+1)                 λ Σ_{j=1}^{N−1} ĉ_j^2               …   Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−2)
      ⋮                                         ⋮                                       ⋮
      Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−1)         Σ_{j=1}^{N−n_u+1} ĉ_j ĉ_(j+n_u−2)   …   λ Σ_{j=1}^{N−n_u+1} ĉ_j^2 ]   (87)












ê = r̂ − ŷ   (88)







For this application of the controller the set point vector (r̂) is going to be represented by a constant value.

ê = r − ŷ   (89)


In step 4, predicted response vector, also from the discrete time model, each element of the predicted response of the system could be generated from different algebraic equations. The prediction vector can then be formulated using the time derivation of the system












ŷ_j|_t = Σ_{i=1}^{n_a} (y_oi|_t α_i^j + u|_(t−Δt) k_i(1 − α_i^j))   (90)







Adding the control action dynamics and current measured dynamics together allows the equation to be simplified.












ŷ_j|_t = Σ_{i=1}^{n_a} (u|_(t−Δt) k_i − y_oi|_t)(1 − α_i^j) + Σ_{i=1}^{n_a} y_oi|_t   (91)







In step 7, with input from step 5, predicted error vector, replacing the prediction vector in Eq. 89 with Eq. 91 the error vector can be written in terms of the current error value (e_o = r − y_o).










ê_j = e_o − Σ_{i=1}^{n_a} (u|_(t−Δt) k_i − y_oi|_t)(1 − α_i^j)   (92)







Rearranging Eq. 92 allows the error vector to be written as two separate vectors.










ê = e_o [ 1
          1
          ⋮
          1 ] + Σ_{i=1}^{n_a} ((y_oi|_t − u|_(t−Δt) k_i) [ (1 − α_i)
                                                           (1 − α_i^2)
                                                           ⋮
                                                           (1 − α_i^N) ])   (93)







In step 8, combine in control law, using this new equation for the error vector and the dynamic matrix control law, an equation can be formulated for the change in the control action.















Δû = M^(−1) Aᵀ ê   (94)

Δû = e_o M^(−1) Aᵀ [ 1
                     1
                     ⋮
                     1 ] + Σ_{i=1}^{n_a} ((y_oi|_t − u|_(t−Δt) k_i) M^(−1) Aᵀ [ (1 − α_i)
                                                                                (1 − α_i^2)
                                                                                ⋮
                                                                                (1 − α_i^N) ])   (95)







In step 9, reduce control law, the control law can then be reduced to its minimum form, which would leave the next control action to be a function of the system parameters, the past errors, and the past control actions. Using the d̂_1 and d̂_2i vectors the equation can be rewritten.











Δû = e_o d̂_1 + Σ_{i=1}^{n_a} (y_oi|_t − u|_(t−Δt) k_i) d̂_2i

where:

d̂_1 = M^(−1) Aᵀ [ 1
                  1
                  ⋮
                  1 ]

d̂_2i = M^(−1) Aᵀ [ (1 − α_i)
                   (1 − α_i^2)
                   ⋮
                   (1 − α_i^N) ]   (96)







Since only the first element of the control action change vector is implemented on the plant the Eq. 96 can be reduced to the following.











Δu = e_o d_1 − u|_(t−Δt) d_3 + Σ_{i=1}^{n_a} y_oi|_t d_2i

where: Δu = Δû(1); d_1 = d̂_1(1); d_2i = d̂_2i(1); d_3 = Σ_{i=1}^{n_a} k_i d_2i   (97)




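The reduced gains d_1, d_2i and d_3 of Eqs. 96 and 97 are all constants computed from the model offline. A sketch combining Eqs. 83, 84, 87 and 96 (function name and values are illustrative; the n_u = 1 case has the simple closed form d_1 = Σĉ_j / Σĉ_j², which the test uses as a cross-check):

```python
import numpy as np

def reduced_gains(k, tau, dt, N, nu, lam):
    """Offline gains d1, d2i, d3 of Eqs. 96-97 for a general linear
    model expressed as first-order modes (Eq. 82)."""
    k = np.asarray(k, float)
    alpha = np.exp(-dt / np.asarray(tau, float))
    j = np.arange(1, N + 1)[:, None]
    c = (k * (1 - alpha ** j)).sum(axis=1)        # step test, Eq. 83
    A = np.zeros((N, nu))
    for u in range(nu):
        A[u:, u] = c[: N - u]                     # dynamic matrix, Eq. 84
    M = A.T @ A
    M[np.diag_indices_from(M)] *= lam             # Eq. 87 convention
    T = np.linalg.solve(M, A.T)                   # M^-1 A^T
    d1 = (T @ np.ones(N))[0]
    d2 = np.array([(T @ (1 - alpha[i] ** j.ravel()))[0] for i in range(len(k))])
    d3 = float((k * d2).sum())
    return d1, d2, d3

d1, d2, d3 = reduced_gains(k=[1.5], tau=[4.0], dt=1.0, N=20, nu=3, lam=1.0)
```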



Rearranging Eq. 97 and replacing the control action change (Δu) the following equation can be derived.












(u|_t − u|_(t−Δt)) + d_3 u|_(t−Δt) = d_1 e|_t + Σ_{i=1}^{n_a} d_2i y_i|_t   (98)

Δt(1 − d_3) u̇|_t + d_3 u|_t = d_1 e|_t + Σ_{i=1}^{n_a} d_2i y_i|_t   (99)







Taking the Laplace transform of Eq. 99:











(Δt(1 − d_3)s + d_3)U(s) = d_1 E(s) + Σ_{i=1}^{n_a} d_2i Y_i(s)   (100)







Using the general equation for a first order system (Eq. 101) the equation Eq. 100 can be rearranged into a transfer function between the error and the control action.












Y_i(s)/U(s) = k_i/(τ_i s + 1)   (101)

(Δt(1 − d_3)s + d_3)U(s) = d_1 E(s) + Σ_{i=1}^{n_a} d_2i (k_i/(τ_i s + 1)) U(s)   (102)

(Δt(1 − d_3)s + d_3 − Σ_{i=1}^{n_a} k_i d_2i/(τ_i s + 1)) U(s) = d_1 E(s)   (103)

U(s)/E(s) = d_1 Π_{i=1}^{n_a}(τ_i s + 1) / ((Δt(1 − d_3)s + d_3) Π_{i=1}^{n_a}(τ_i s + 1) − Σ_{j=1}^{n_a} ((k_j d_2j) Π_{i=1}^{j−1}(τ_i s + 1) Π_{i=j+1}^{n_a}(τ_i s + 1)))   (104)







In step 10, coefficients for input-output discrete control law, converting the transfer function seen in Eq. 104 into discrete time will yield the control law for a general linear system. Since the model parameters are constant, this single discrete equation would have constant coefficients which are all calculated before the controller is running, such that the amount of computational time required in each control cycle is reduced.

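Once the coefficients of that discrete equation are precomputed, the entire online workload of the controller reduces to one weighted sum of stored errors and control actions. A sketch of that single-cycle update (the function name and the example coefficients are illustrative):

```python
def online_update(b, a, e_hist, u_hist):
    """u|t = sum_i b_i * e|(t - i*dt) - sum_i a_i * u|(t - i*dt):
    the only computation left in each control cycle once the
    coefficients are precomputed offline."""
    return (sum(bi * ei for bi, ei in zip(b, e_hist))
            - sum(ai * ui for ai, ui in zip(a, u_hist)))

u_t = online_update(b=(1.0, 2.0), a=(0.5,), e_hist=(1.0, 1.0), u_hist=(2.0,))
```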

6. General Linear MIMO System Controller Theory


In this section the theory for the general linear multiple input and multiple output system will be derived and presented. This will start with the form of a general linear MIMO system.












Y_w(s)/U_v(s) = (b_(w,v,0) + b_(w,v,1) s + … + b_(w,v,m_b(w,v)) s^(m_b(w,v))) / (1 + a_(w,v,1) s + … + a_(w,v,n_a(w,v)) s^(n_a(w,v)))   (105)







In this equation, w is an index that represents the different outputs of the MIMO system and v is an index of the inputs of the system. This formulation assumes the model is in its reduced form such that m_b(w,v) is less than n_a(w,v). From this state the model can be reduced to a series of first order equations where the gain (k_(w,v,i)) and the time constant (τ_(w,v,i)) can be either real or complex values.












Y_w(s)/U_v(s) = Σ_{i=1}^{n_a(w,v)} k_(w,v,i)/(τ_(w,v,i) s + 1)   (106)







Using the zero-order hold discrete model a discrete equation for the step response can be generated using Eq. 106. The initial conditions are all zero and the control action will be constant at one, so the equation can be significantly simplified.











ĉ_((w,v),j) = Σ_{i=1}^{n_a(w,v)} k_(w,v,i)(1 − α_(w,v,i)^j)   (107)







This step test data is then used to populate a dynamic matrix.









A = [ G_(1,1)    G_(1,2)    …   G_(1,N_v)
      G_(2,1)    G_(2,2)    …   G_(2,N_v)
      ⋮          ⋮              ⋮
      G_(N_w,1)  G_(N_w,2)  …   G_(N_w,N_v) ]   (108)

G_(w,v) = [ ĉ_((w,v),1)   0               …   0
            ĉ_((w,v),2)   ĉ_((w,v),1)     …   0
            ⋮             ⋮                   ⋮
            ĉ_((w,v),N)   ĉ_((w,v),N−1)   …   ĉ_((w,v),N−n_u+1) ]

where: N = prediction horizon length; n_u = control horizon length; N_w = number of outputs; N_v = number of inputs   (109)



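Eqs. 108 and 109 assemble one Toeplitz-style block per input/output pair into a single large matrix. A sketch of that assembly (names are illustrative; step_tests[w][v] is assumed to hold the length-N step response of output w to input v):

```python
import numpy as np

def block_dynamic_matrix(step_tests, N, nu):
    """Assemble the MIMO dynamic matrix of Eqs. 108-109 from per-pair
    step tests (a hypothetical nested-list layout)."""
    def G(c):
        g = np.zeros((N, nu))
        for u in range(nu):
            g[u:, u] = c[: N - u]     # each column shifts the response down
        return g
    return np.block([[G(np.asarray(c, float)) for c in row] for row in step_tests])

# two outputs, two inputs, length-3 step tests (illustrative numbers)
tests = [[[1.0, 2.0, 3.0], [0.5, 1.0, 1.5]],
         [[0.1, 0.2, 0.3], [4.0, 5.0, 6.0]]]
A = block_dynamic_matrix(tests, N=3, nu=2)
```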




Using this dynamic matrix and its transpose the next part of the control law can be computed. To reduce large changes in the control actions a move suppression factor is used in Dynamic Matrix Control. This factor is multiplied by the diagonals in the AᵀA matrix.










(AᵀA − Λ̂I) = [ H_(1,1)    H_(1,2)    …   H_(1,N_v)
               H_(2,1)    H_(2,2)    …   H_(2,N_v)
               ⋮          ⋮              ⋮
               H_(N_v,1)  H_(N_v,2)  …   H_(N_v,N_v) ] = M   (110)







H_(v_1,v_2) is given by Eq. 111 if v_1 is not equal to v_2; if they are equal then H_(v_1,v_2) is given by Eq. 112, where λ is the move suppression factor.










H_(v_1,v_2) = [ Σ_{w=1}^{N_w} Σ_{j=1}^{N} ĉ_((w,v_1),j) ĉ_((w,v_2),j)                 Σ_{w=1}^{N_w} Σ_{j=1}^{N−1} ĉ_((w,v_1),j) ĉ_((w,v_2),(j+1))       …   Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),j) ĉ_((w,v_2),(j+n_u−1))
                Σ_{w=1}^{N_w} Σ_{j=1}^{N−1} ĉ_((w,v_1),(j+1)) ĉ_((w,v_2),j)           Σ_{w=1}^{N_w} Σ_{j=1}^{N−1} ĉ_((w,v_1),j) ĉ_((w,v_2),j)           …   Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),j) ĉ_((w,v_2),(j+n_u−2))
                ⋮                                                                      ⋮                                                                     ⋮
                Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),(j+n_u−1)) ĉ_((w,v_2),j)   Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),(j+n_u−2)) ĉ_((w,v_2),j)   …   Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),j) ĉ_((w,v_2),j) ]   (111)















H_(v_1,v_2) = [ λ Σ_{w=1}^{N_w} Σ_{j=1}^{N} ĉ_((w,v_1),j)^2                           Σ_{w=1}^{N_w} Σ_{j=1}^{N−1} ĉ_((w,v_1),j) ĉ_((w,v_2),(j+1))       …   Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),j) ĉ_((w,v_2),(j+n_u−1))
                Σ_{w=1}^{N_w} Σ_{j=1}^{N−1} ĉ_((w,v_1),(j+1)) ĉ_((w,v_2),j)           λ Σ_{w=1}^{N_w} Σ_{j=1}^{N−1} ĉ_((w,v_1),j)^2                     …   Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),j) ĉ_((w,v_2),(j+n_u−2))
                ⋮                                                                      ⋮                                                                     ⋮
                Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),(j+n_u−1)) ĉ_((w,v_2),j)   Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),(j+n_u−2)) ĉ_((w,v_2),j)   …   λ Σ_{w=1}^{N_w} Σ_{j=1}^{N−n_u+1} ĉ_((w,v_1),j)^2 ]   (112)







The second half of the control law is Aᵀê.










ê = [ ê_1
      ê_2
      ⋮
      ê_(N_w) ] = [ r̂_1 − ŷ_1
                    r̂_2 − ŷ_2
                    ⋮
                    r̂_(N_w) − ŷ_(N_w) ]   (113)







With the set point vector (r̂_w) being a vector of a constant value, it can be represented as a single value for each of the different outputs.

ê_w = r_w − ŷ_w   (114)


The prediction vector for each of the outputs can then be formulated using the time derivation of the system











ŷ_(w),j|_t = Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} (y_o(w,v,i)|_t α_(w,v,i)^j + u_v|_(t−Δt) k_(w,v,i)(1 − α_(w,v,i)^j))   (115)







Adding the control action dynamics and current measured dynamics together allows the equation to be simplified.











ŷ_(w),j|_t = Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} (u_v|_(t−Δt) k_(w,v,i) − y_o(w,v,i)|_t)(1 − α_(w,v,i)^j) + Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} y_o(w,v,i)|_t   (116)







Replacing the prediction vector in Eq. 114 with Eq. 116 the error vector can be written in terms of the current error value.











(y_o(w) = Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} y_o(w,v,i)) and (e_o(w) = r_w − y_o(w)):

ê_w = e_o(w)|_t − Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} (u_v|_(t−Δt) k_(w,v,i) − y_o(w,v,i)|_t)(1 − α_(w,v,i)^j)   (117)







Rearranging Eq. 117 allows the error vector to be written as two separate vectors.











ê_w = e_o(w)|_t [ 1
                  1
                  ⋮
                  1 ] + Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} ((y_o(w,v,i)|_t − u_v|_(t−Δt) k_(w,v,i)) [ (1 − α_(w,v,i))
                                                                                                    (1 − α_(w,v,i)^2)
                                                                                                    ⋮
                                                                                                    (1 − α_(w,v,i)^N) ])   (118)







Using the error equation (Eq. 118) the entire control law can be written in known terms.















Δû = M^(−1) Aᵀ ê   (119)

Δû = Σ_{w=1}^{N_w} Δû_w = Σ_{w=1}^{N_w} M^(−1) A_wᵀ ê_w   (120)

A_w = [ G_(w,1)   G_(w,2)   …   G_(w,N_v) ]   (121)

Δû = Σ_{w=1}^{N_w} e_o(w)|_t M^(−1) A_wᵀ [ 1
                                           1
                                           ⋮
                                           1 ] + Σ_{w=1}^{N_w} Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} ((y_o(w,v,i)|_t − u_v|_(t−Δt) k_(w,v,i)) M^(−1) A_wᵀ [ (1 − α_(w,v,i))
                                                                                                                                                        (1 − α_(w,v,i)^2)
                                                                                                                                                        ⋮
                                                                                                                                                        (1 − α_(w,v,i)^N) ])   (122)







Using the d̂_w and q̂_((w,v),i) vectors the equation can be rewritten.











Δû = Σ_{w=1}^{N_w} e_o(w)|_t d̂_w + Σ_{w=1}^{N_w} Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} (y_o(w,v,i)|_t − u_v|_(t−Δt) k_(w,v,i)) q̂_((w,v),i)

where:

d̂_w = M^(−1) A_wᵀ [ 1
                    1
                    ⋮
                    1 ]

q̂_((w,v),i) = M^(−1) A_wᵀ [ (1 − α_(w,v,i))
                            (1 − α_(w,v,i)^2)
                            ⋮
                            (1 − α_(w,v,i)^N) ]   (123)







Since only the first element of the control action change vector is implemented on the plant the Eq. 123 can be reduced to the following.











u_(v_p)|_t = Σ_{w=1}^{N_w} d_(w,v_p) e_o(w)|_t + Σ_{v=1}^{N_v} kq_(v,v_p) u_v|_(t−Δt) + Σ_{w=1}^{N_w} Σ_{v=1}^{N_v} Σ_{i=1}^{n_a(w,v)} y_o(w,v,i)|_t q_(w,v,i,v_p)

where:

u_(v_p) = Δû(1 + (v_p − 1)n_u) + u_(v_p)|_(t−Δt)
d_(w,v_p) = d̂_w(1 + (v_p − 1)n_u)
q_(w,v,i,v_p) = q̂_((w,v),i)(1 + (v_p − 1)n_u)
kq_(v,v_p) = { 1 − Σ_{w=1}^{N_w} Σ_{i=1}^{n_a(w,v)} k_(w,v,i) q_(w,v,i,v_p),  v = v_p
             { −Σ_{w=1}^{N_w} Σ_{i=1}^{n_a(w,v)} k_(w,v,i) q_(w,v,i,v_p),  otherwise   (124)







As the y_o(w,v,i)|_t are the current outputs of the submodels, there has to be a mapping from them to the actual outputs. Using the known relationship in Eq. 115, the mapping can be formulated so the q_(w,v,i,v_p) variables can be formulated into parameters that will be multiplied by y_o(w)|_(t−nΔt) and u_(v_p)|_(t−nΔt).


7. Linear Alternative Form Controller Theory


In some systems there are dynamics that can be induced on the output that are not related to the input of the system, so they need to be modeled separately. An example of where this type of model would be used is in the linearization of a non-linear system at a series of operating positions. Eq. 125 shows the transfer function form of the model while Eq. 126 shows how it can be broken down into a series of first order systems.










Y(s) = ((b_0 + b_1 s + … + b_(m_b) s^(m_b)) / (1 + a_1 s + … + a_(n_a) s^(n_a))) U(s) + ((c_0 + c_1 s + … + c_(m_c) s^(m_c)) / (1 + a_1 s + … + a_(n_a) s^(n_a)))   (125)

Y(s) = Σ_{i=1}^{n_1} (k_i/(τ_i s + 1)) U(s) + Σ_{i=1}^{n_2} k_di/(τ_i s + 1)   (126)







With the model being slightly different, to capture the step test dynamics the difference between a step input and no step input has to be used. This can be seen in Eq. 127. The initial conditions are all zero and the control action will be constant at one, so the equation can be significantly simplified into Eq. 128.











ĉ = (sim_f(u + dv) − sim_f(u))/dv,   where: dv = 1   (127)

ĉ_j = Σ_{i=1}^{n_1} k_i(1 − α_i^j)   (128)




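The equivalence of Eqs. 127 and 128 can be checked numerically: simulate one first-order mode of Eq. 126 twice, with and without the input step, and difference the two runs; the input-independent drift gain k_d cancels, leaving the closed-form step test. A sketch (names and values are illustrative):

```python
import numpy as np

def sim_first_order(u_seq, k, alpha, kd=0.0):
    """Zero-order-hold simulation of one mode of Eq. 126, including the
    input-independent drift gain kd (a sketch)."""
    y, out = 0.0, []
    for u in u_seq:
        y = alpha * y + (1 - alpha) * (k * u + kd)
        out.append(y)
    return np.array(out)

# Eq. 127: step-test dynamics as the difference of two simulations, dv = 1
k, tau, dt, N = 1.8, 4.0, 1.0, 12
alpha = np.exp(-dt / tau)
base = np.zeros(N)
c_hat = (sim_first_order(base + 1.0, k, alpha, kd=0.3)
         - sim_first_order(base, k, alpha, kd=0.3))
# Eq. 128 closed form: the kd term has cancelled in the difference
c_closed = k * (1 - alpha ** np.arange(1, N + 1))
```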



Using these step dynamics, similar to the other systems, the inverse of M can be constructed.











(



A
T


A

-
Λ

)


-
1


=

M

-
1






(
129
)






$$M = \begin{bmatrix}
\lambda + \sum_{j=1}^{N} \hat{c}_j^{\,2} & \sum_{j=1}^{N-1} \hat{c}_j \hat{c}_{j+1} & \cdots & \sum_{j=1}^{N-n_u+1} \hat{c}_j \hat{c}_{j+n_u-1} \\
\sum_{j=1}^{N-1} \hat{c}_j \hat{c}_{j+1} & \lambda + \sum_{j=1}^{N-1} \hat{c}_j^{\,2} & \cdots & \sum_{j=1}^{N-n_u+2} \hat{c}_j \hat{c}_{j+n_u-2} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_{j=1}^{N-n_u+1} \hat{c}_j \hat{c}_{j+n_u-1} & \sum_{j=1}^{N-n_u+2} \hat{c}_j \hat{c}_{j+n_u-2} & \cdots & \lambda + \sum_{j=1}^{N-n_u+1} \hat{c}_j^{\,2}
\end{bmatrix} \tag{130}$$
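The matrix in Eq. 130 is AᵀA + λI written out element-wise, where A is the N x n_u lower-triangular (Toeplitz) dynamic matrix of step coefficients. A minimal sketch, assuming that standard DMC structure (the function names are illustrative):

```python
def dynamic_matrix(c, n_u):
    """N x n_u lower-triangular Toeplitz matrix of step coefficients."""
    return [[c[i - j] if i >= j else 0.0 for j in range(n_u)]
            for i in range(len(c))]

def m_matrix(c, n_u, lam):
    """M = A^T A + lambda*I; its entries are the lagged sums of Eq. 130."""
    A = dynamic_matrix(c, n_u)
    N = len(c)
    M = [[sum(A[i][p] * A[i][q] for i in range(N)) for q in range(n_u)]
         for p in range(n_u)]
    for p in range(n_u):
        M[p][p] += lam          # move-suppression term on the diagonal
    return M
```

For example, with c = [1, 2, 3] and n_u = 2, the off-diagonal entry is 1*2 + 2*3 = 8, which matches the lagged-product sum Σ ĉ_j ĉ_{j+1} in Eq. 130.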







The second half of the control law is A^T ê.

$$\hat{e} = \hat{r} - \hat{y} \tag{131}$$

With the set point vector (r̂) being a vector of a constant value, it can be represented as a single value.

$$\hat{e} = r - \hat{y} \tag{132}$$


The prediction vector can then be formulated from the time-domain derivation of the system.












$$\hat{y}_j|_t = \sum_{i=1}^{n_a} \left[ y_{oi}|_t\, \alpha_i^{\,j} + u|_{(t-\Delta t)}\, k_i \left( 1 - \alpha_i^{\,j} \right) + k_{di} \left( 1 - \alpha_i^{\,j} \right) \right] \tag{133}$$







Adding the control action dynamics and current position dynamics together allows the equation to be simplified.












$$\hat{y}_j|_t = \sum_{i=1}^{n_a} \left( u|_{(t-\Delta t)}\, k_i - y_{oi}|_t + k_{di} \right) \left( 1 - \alpha_i^{\,j} \right) + \sum_{i=1}^{n_a} y_{oi}|_t \tag{134}$$







Replacing the prediction vector in Eq. 132 with Eq. 134 the error vector can be written in terms of the current error value (r−yo=eo).










$$\hat{e} = e_o|_t - \sum_{i=1}^{n_a} \left( u|_{(t-\Delta t)}\, k_i - y_{oi}|_t + k_{di} \right) \left( 1 - \alpha_i^{\,j} \right) \tag{135}$$







Rearranging Eq. 135 allows the error vector to be written as two separate vectors.










$$\hat{e} = e_o|_t \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} + \sum_{i=1}^{n_a} \left( \left( y_{oi}|_t - u|_{(t-\Delta t)}\, k_i - k_{di} \right) \begin{bmatrix} \left( 1 - \alpha_i \right) \\ \left( 1 - \alpha_i^{\,2} \right) \\ \vdots \\ \left( 1 - \alpha_i^{\,N} \right) \end{bmatrix} \right) \tag{136}$$
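Eq. 136 splits the predicted error into a constant part and one weighted (1 − α_i^j) column per sub-model. A minimal sketch of that decomposition (the function name and argument layout are illustrative):

```python
def predicted_error_vector(e_o, u_prev, y_o, k, k_d, alphas, N):
    """Eq. 136: e_hat = e_o * [1,...,1] plus, per sub-model i,
    (y_oi - u_prev*k_i - k_di) * [1 - alpha_i**j] for j = 1..N."""
    e_hat = [e_o] * N
    for y_oi, k_i, k_di, a_i in zip(y_o, k, k_d, alphas):
        w = y_oi - u_prev * k_i - k_di   # per-sub-model weight
        for j in range(1, N + 1):
            e_hat[j - 1] += w * (1.0 - a_i ** j)
    return e_hat
```

When the weight is zero (system at a consistent steady state), the predicted error reduces to the constant current error, as the decomposition implies.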







Using this new equation for the error vector and the dynamic matrix control law, an equation can be formulated for the change in the control action.















$$\Delta \hat{u} = M^{-1} A^T \hat{e} \tag{137}$$







$$\Delta \hat{u} = e_o|_t\, M^{-1} A^T \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} + \sum_{i=1}^{n_a} \left( \left( y_{oi}|_t - u|_{(t-\Delta t)}\, k_i - k_{di} \right) M^{-1} A^T \begin{bmatrix} \left( 1 - \alpha_i \right) \\ \left( 1 - \alpha_i^{\,2} \right) \\ \vdots \\ \left( 1 - \alpha_i^{\,N} \right) \end{bmatrix} \right) \tag{138}$$







Using the ĉ1 and ĉ2i vectors, Eq. 138 can be rewritten.











$$\Delta \hat{u} = e_o|_t\, \hat{c}_1 + \sum_{i=1}^{n_a} \left( y_{oi}|_t - u|_{(t-\Delta t)}\, k_i - k_{di} \right) \hat{c}_{2i},$$
where:
$$\hat{c}_1 = M^{-1} A^T \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}, \qquad \hat{c}_{2i} = M^{-1} A^T \begin{bmatrix} \left( 1 - \alpha_i \right) \\ \left( 1 - \alpha_i^{\,2} \right) \\ \vdots \\ \left( 1 - \alpha_i^{\,N} \right) \end{bmatrix} \tag{139}$$







Since only the first element of the control action change vector is implemented on the plant, Eq. 139 can be reduced to the following.












$$\Delta u|_t = e_o|_t\, c_1 - u|_{(t-\Delta t)}\, c_3 - c_4 + \sum_{i=1}^{n_a} y_{oi}|_t\, c_{2i},$$
where:
$$\Delta u = \Delta \hat{u}(1), \quad c_1 = \hat{c}_1(1), \quad c_{2i} = \hat{c}_{2i}(1), \quad c_3 = \sum_{i=1}^{n_a} k_i c_{2i}, \quad c_4 = \sum_{i=1}^{n_a} k_{di} c_{2i} \tag{140}$$
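The reduction to the scalar parameters of Eqs. 139 and 140 takes the first element of M⁻¹Aᵀ applied to each of the two vectors. A minimal sketch, using a small Gaussian-elimination solver in place of an explicit matrix inverse (all names are illustrative):

```python
def solve(M, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    aug = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for k in range(col, n + 1):
                aug[r][k] -= f * aug[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k] for k in range(r + 1, n))) / aug[r][r]
    return x

def control_parameters(A, lam, alphas):
    """First elements of c1_hat = M^-1 A^T [1..1] and
    c2i_hat = M^-1 A^T [1-alpha_i^j] (Eqs. 139-140), with M = A^T A + lam*I."""
    N, n_u = len(A), len(A[0])
    M = [[sum(A[i][p] * A[i][q] for i in range(N)) + (lam if p == q else 0.0)
          for q in range(n_u)] for p in range(n_u)]
    At_ones = [sum(A[i][p] for i in range(N)) for p in range(n_u)]
    c1 = solve(M, At_ones)[0]
    c2 = []
    for a in alphas:
        v = [1.0 - a ** (j + 1) for j in range(N)]
        Atv = [sum(A[i][p] * v[i] for i in range(N)) for p in range(n_u)]
        c2.append(solve(M, Atv)[0])
    return c1, c2
```

Since these scalars depend only on the model and tuning constants, they can be computed once offline, which is what removes the per-cycle matrix algebra.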







Rearranging Eq. 140 and replacing the control action change (Δu), the following equation can be derived.












$$\left( u|_t - u|_{(t-\Delta t)} \right) + c_3\, u|_{(t-\Delta t)} = c_1\, e_o|_t - c_4 + \sum_{i=1}^{n_a} c_{2i}\, y_i|_t \tag{141}$$







Taking the Laplace transform of Eq. 141 gives the following.











$$\left( \Delta t \left( 1 - c_3 \right) s + c_3 \right) U(s) = c_1 E(s) + \sum_{i=1}^{n_a} c_{2i}\, Y_i(s) - \frac{1}{s} c_4 \tag{142}$$







Using the general equation for the system (Eq. 126), Eq. 142 can be rearranged into a transfer function between the error and the control action.











$$\left( \Delta t \left( 1 - c_3 \right) s + c_3 \right) U(s) = c_1 E(s) + \sum_{i=1}^{n_a} c_{2i} \left( \frac{k_i U(s) + k_{di}}{\tau_i s + 1} \right) - \frac{1}{s} c_4 \tag{143}$$








$$\left( \Delta t \left( 1 - c_3 \right) s + c_3 - \sum_{i=1}^{n_a} \frac{k_i c_{2i}}{\tau_i s + 1} \right) U(s) = c_1 E(s) + \sum_{i=1}^{n_a} \frac{c_{2i} k_{di}}{\tau_i s + 1} - \frac{1}{s} c_4 \tag{144}$$












$$U(s) = \frac{ \left( \prod_{i=1}^{n_a} \left( \tau_i s + 1 \right) \right) \left( c_1 s\, E(s) - c_4 \right) + s \sum_{i=1}^{n_a} c_{2i} k_{di} \prod_{j \neq i} \left( \tau_j s + 1 \right) }{ \left( \Delta t \left( 1 - c_3 \right) s^2 + c_3 s \right) \left( \prod_{i=1}^{n_a} \left( \tau_i s + 1 \right) \right) - s \sum_{j=1}^{n_a} \left( k_j c_{2j} \right) \left( \prod_{i=1}^{j-1} \left( \tau_i s + 1 \right) \right) \left( \prod_{i=j+1}^{n_a} \left( \tau_i s + 1 \right) \right) } \tag{145}$$







Converting the transfer function seen in Eq. 145 into discrete time will yield the control law.
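One common way to perform such a conversion is the backward-difference substitution s → (1 − z⁻¹)/Δt; the patent does not state which discretization it uses, so the following is only an assumed illustration of that step, mapping a polynomial in s (coefficient list, low order first) to a polynomial in z⁻¹ whose coefficients form the difference equation.

```python
def poly_mul(p, q):
    """Multiply two polynomials in z^-1 (coefficient lists, low order first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def backward_difference(coeffs_s, dt):
    """Map sum_k a_k * s**k to a polynomial in z^-1 via s -> (1 - z^-1)/dt."""
    base = [1.0 / dt, -1.0 / dt]        # the polynomial (1 - z^-1)/dt
    acc, term = [0.0], [1.0]            # running sum and current power of base
    for a_k in coeffs_s:
        prod = [a_k * c for c in term]
        if len(prod) > len(acc):
            acc += [0.0] * (len(prod) - len(acc))
        for i, c in enumerate(prod):
            acc[i] += c
        term = poly_mul(term, base)
    return acc
```

Applied to the numerator and denominator of Eq. 145 separately, this yields the coefficients of a single discrete difference equation in past errors and past control actions, which is the reduced form the abstract describes.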


8. Non-Linear Controller Theory


In order to maintain a fast computational time in the control of non-linear systems, a linear optimization still has to be utilized. For other comparable controllers this means that the system has to be linearized at the current operating point at each time step. The benefit of this reduction process is that it allows the linearization at each point to be condensed into a select number of control parameters, so the linearization can be performed in advance and stored in a multi-dimensional array. The following sections, section 8.1 and section 8.2, provide the different methodologies for performing this linearization.


8.1 Equation Model


The first method in creating this multi-dimensional array is to have an equation that represents the non-linear system. Using this single non-linear equation an analytical linearization can be performed for any operating point of the system. This generic linearization would allow the non-linear model to be expressed as a set of linear equations. Using this set of linear equations the general linear system control theory (Section 5) can be used to determine the control parameters, thus creating the multi-dimensional array of control constants.


Consider the non-linear equation found in Eq. 146 where f (y) is a non-linear equation with respect to y. The linearized form of this equation can be seen in Eq. 147.












$$a_{n_a} \frac{d^{n_a} y}{dt^{n_a}} + \cdots + a_1 \frac{dy}{dt} + y + f(y) = b_{m_b} \frac{d^{m_b} u}{dt^{m_b}} + \cdots + b_1 \frac{du}{dt} + u \tag{146}$$

$$a_{n_a} \frac{d^{n_a} y}{dt^{n_a}} + \cdots + a_1 \frac{dy}{dt} + y + \left. \frac{df(\bar{y})}{dy} \right|_{\bar{y}} \left( y - \bar{y} \right) + f(\bar{y}) = b_{m_b} \frac{d^{m_b} u}{dt^{m_b}} + \cdots + b_1 \frac{du}{dt} + u \tag{147}$$







With this linearized version of the equations, the previously derived linear systems theory can be utilized to calculate the control parameters for an array of operating points (ȳ). These constants can then be stored in an array, allowing them to be accessed during operation.
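When the derivative df/dy in Eq. 147 is inconvenient to derive by hand, it can be approximated numerically at each operating point. The sketch below is an assumption, not the patent's procedure: it tabulates f(ȳ) and a central-difference estimate of df/dy over a grid of operating points, as a stand-in for the analytical linearization.

```python
import math

def linearize_table(f, y_points, h=1e-6):
    """Tabulate (f(ybar), df/dy at ybar) for each operating point ybar,
    using a central difference, as in Eq. 147's (y - ybar) expansion."""
    table = {}
    for ybar in y_points:
        dfdy = (f(ybar + h) - f(ybar - h)) / (2.0 * h)
        table[ybar] = (f(ybar), dfdy)
    return table

# Example non-linearity: f(y) = sin(y), as in the vertical-manipulator model.
tbl = linearize_table(math.sin, [0.0, math.pi / 2])
```

Each table entry then feeds the linear control theory of Section 5 to produce one set of control constants per operating point.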


8.2 Complex Model


Some systems cannot be placed into analytical models similar to the one presented in Eq. 146, which requires an alternative approach. In the first approach the system was analytically linearized to generate the array of control parameters; when that is not possible, a set of simulations can be used instead to capture the dynamics throughout the range of motion.


Using the equation for the change in control action from the second order system (Eq. 53) as a basis, the non-linear control theory can be derived. Separating this equation into its dynamics, there is a step test (in the M−1 and AT matrices and the second error array) and an impulse test (in the third error array).


There is one item that is not captured in Eq. 53 that has to be included in the equation for the non-linear systems. As it is common for non-linear systems to have varying instantaneous (time varying) gains, the assumption that the steady state output equals the instantaneous gain multiplied by the control action (yss=kp·uss) is no longer valid. This means the relation has to be adjusted to include an additional term, yss=kp·uss+kd. This allows the instantaneous gain (kp=Δyss/Δuss) to give a better depiction of the current system dynamics.


The following is an explanation of how to acquire the required current dynamics of a system model based in simulation. As this is a simulation, initial conditions are required for each of the operating points at which the controller's parameters are to be calculated. To ensure the simulation is accurate, the initial conditions should correlate to the steady state values that enable the system to be stationary at this particular state.


With these steady state values as initial conditions, a small change in the control action (du) can be simulated. Normalizing the change in the response with respect to the small control action change gives the normalized step test that contains the current system dynamics. The instantaneous gain can also be determined, as it is the change in steady state value due to the control action change divided by the control action change (kp=Δyss/du). From this the adjustment term can be calculated: kd=yss−kp·uss.
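This steady-state procedure can be sketched for a single first-order sub-model dy/dt = (k·u − y)/τ. The model, step size, and iteration counts below are illustrative assumptions; the key lines are the gain kp = Δy_ss/du and the offset kd = y_ss − kp·u_ss described above.

```python
def simulate_to_ss(u, y0, k, tau, dt, steps=20000):
    """Forward-Euler march of dy/dt = (k*u - y)/tau to (approximate) steady state."""
    y = y0
    for _ in range(steps):
        y += dt * (k * u - y) / tau
    return y

# Illustrative first-order model (k and tau are assumed values).
k_true, tau, dt = 2.0, 0.725, 0.001
u_ss = 1.0
y_ss = simulate_to_ss(u_ss, 0.0, k_true, tau, dt)      # settle at the operating point
du = 0.1
y_ss2 = simulate_to_ss(u_ss + du, y_ss, k_true, tau, dt)  # perturbed steady state
kp = (y_ss2 - y_ss) / du      # instantaneous gain  kp = dy_ss / du
kd = y_ss - kp * u_ss         # offset term so that  y_ss = kp*u_ss + kd
```

For a linear model the recovered kd is zero; for a non-linear plant it absorbs the operating-point offset, as described above.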


The impulse can be captured by running the simulation with all the initial conditions set to the steady state values except for the velocity, which is set to one.











$$u|_t = d_1 e_o|_t + \left( 1 - d_2 k_p \right) u|_{(t-\Delta t)} + \left( d_2 - d_3 \right) y_o|_t + d_3\, y_o|_{(t-\Delta t)} - d_2 k_d,$$
where:
$$d_1 = \frac{1}{k_p^{\,2}} M^{-1} A^T \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}, \qquad d_2 = \frac{1}{k_p^{\,2}} M^{-1} A^T \begin{bmatrix} \hat{c}_1 \\ \hat{c}_2 \\ \vdots \\ \hat{c}_N \end{bmatrix}, \qquad d_3 = \frac{1}{k_p^{\,2}\, \Delta t} M^{-1} A^T \begin{bmatrix} \hat{g}_1 \\ \hat{g}_2 \\ \vdots \\ \hat{g}_N \end{bmatrix},$$
$$\hat{c}_i = i\text{th value of the normalized step test}, \qquad \hat{g}_i = i\text{th value of the impulse test} \tag{148}$$
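Once the d parameters have been computed offline for an operating range, the per-cycle update of Eq. 148 reduces to a handful of multiply-adds. A minimal sketch, treating the parameters as the pre-computed scalars used in Eq. 148 (the tuple packing is illustrative):

```python
def sdpc_step(e_o, u_prev, y_o, y_prev, params):
    """One SDPC control cycle per Eq. 148.  `params` holds the pre-computed
    constants (d1, d2, d3, kp, kd) for the current operating range."""
    d1, d2, d3, kp, kd = params
    return (d1 * e_o
            + (1.0 - d2 * kp) * u_prev
            + (d2 - d3) * y_o
            + d3 * y_prev
            - d2 * kd)
```

Because only past errors, past control actions, and constants appear, no prediction vector has to be stored between cycles, which is what makes the controller stateless.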








9. Operation


9.1 Industrial Robot


An industrial robotic manipulator can perform real time offset compensation with the Stateless Discrete Predictive Controller (SDPC). In the setup of a real time offset adjustment, the current position of the robot can be measured through the encoders built into the manipulator. Reading these gives the current position of the robot, and comparing this to the desired position of the robot gives an error, which is the value taken by the controller each control cycle.


With this error value the control structure calculates the optimal corrective measure for the servo motors to reduce this error as quickly as possible. This corrective measure is sent to the servo drives, and the process starts again in the next control cycle.


In order for the control structure to calculate the optimal corrective measure to send to the servos, it has to have a model of the system. This model can be generated through any standard modeling technique, but it has to accurately capture the dynamics of the robot position in response to changes in the servo inputs.


Taking this model and breaking it into discrete linear operating ranges, the initial non-linear model can be composed of a series of linear models that each represent the dynamics in a specific range.


With this set of linear models the control parameters for each of these models can be calculated. These parameters are the d1, d2i, and d3 found in the equations below. These parameters are then stored so they can be accessed by the real time control system each control cycle.


The part of the controller that is implemented on the physical robot is a discretized version of the U(s)/E(s) equation. In each control cycle the controller checks which of the operating ranges it is in and pulls the control parameters associated with that range. With those parameters the controller can then determine the next optimal position to send to the servos (U(s)).


9.2 Other Implementations


There are a number of different applications to which this control structure can be applied. The key components of this structure are in the calculation of the control parameters from the model before the controller is operating. These parameters have been formulated so they inherently contain a full stateless prediction of the system at the current system dynamics. Using these parameters with the controller in the control cycle provides the benefit of a model predictive controller with a computational time that is significantly reduced. To transfer this control structure to a different system, the steps that would change are in the setting up of the system in terms of what is physically controlled.


10. Equation













$$\frac{U(s)}{E(s)} = \frac{ d_1 \left( \prod_{i=1}^{n_a} \left( \tau_i s + 1 \right) \right) }{ \left( \Delta t \left( 1 - d_3 \right) s + d_3 \right) \left( \prod_{i=1}^{n_a} \left( \tau_i s + 1 \right) \right) - \sum_{j=1}^{n_a} \left( k_j d_{2j} \right) \left( \prod_{i=1}^{j-1} \left( \tau_i s + 1 \right) \right) \left( \prod_{i=j+1}^{n_a} \left( \tau_i s + 1 \right) \right) }$$

System Model:

$$\frac{Y(s)}{U(s)} = \sum_{i=1}^{n_a} \frac{k_i}{\tau_i s + 1}$$

System Constants:
n_a : Order of the system
τ_i : ith order time constant
k_i : ith order gain

System Variables:
U(s) : Input to the system
E(s) : Error between the system output and the desired output
Y(s) : Output of the system

Variables:
s : Laplace frequency variable
Δt : Controller sample time

$$d_1 = \hat{d}_1(1), \qquad d_{2i} = \hat{d}_{2i}(1), \qquad d_3 = \sum_{i=1}^{n_a} k_i d_{2i}$$

$$\hat{d}_1 = M^{-1} A^T \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}, \qquad \hat{d}_{2i} = M^{-1} A^T \begin{bmatrix} \left( 1 - \alpha_i \right) \\ \left( 1 - \alpha_i^{\,2} \right) \\ \vdots \\ \left( 1 - \alpha_i^{\,N} \right) \end{bmatrix}$$

$$A = \begin{bmatrix} \hat{c}_1 & 0 & \cdots & 0 \\ \hat{c}_2 & \hat{c}_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \hat{c}_N & \hat{c}_{N-1} & \cdots & \hat{c}_{N-n_u+1} \end{bmatrix}$$

$$M = \begin{bmatrix}
\lambda + \sum_{j=1}^{N} \hat{c}_j^{\,2} & \sum_{j=1}^{N-1} \hat{c}_j \hat{c}_{j+1} & \cdots & \sum_{j=1}^{N-n_u+1} \hat{c}_j \hat{c}_{j+n_u-1} \\
\sum_{j=1}^{N-1} \hat{c}_j \hat{c}_{j+1} & \lambda + \sum_{j=1}^{N-1} \hat{c}_j^{\,2} & \cdots & \sum_{j=1}^{N-n_u+2} \hat{c}_j \hat{c}_{j+n_u-2} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_{j=1}^{N-n_u+1} \hat{c}_j \hat{c}_{j+n_u-1} & \sum_{j=1}^{N-n_u+2} \hat{c}_j \hat{c}_{j+n_u-2} & \cdots & \lambda + \sum_{j=1}^{N-n_u+1} \hat{c}_j^{\,2}
\end{bmatrix}$$

$$\hat{c}_j = \sum_{i=1}^{n_a} k_i \left( 1 - \alpha_i^{\,j} \right), \qquad \alpha_i = e^{-\Delta t / \tau_i}$$

Tuning Parameters:
N : Prediction horizon length
n_u : Control horizon length
λ : Move suppression factor










11. Simulation System Tests


During the development of the controller various systems were used to evaluate the performance of the control schemes. For these simulations the dynamic matrix control (DMC) approach was used as a comparison because it has the same objective function structure, as well as being the standard algorithm of predictive control schemes. Having both algorithms controlling each system also allows the computational time required for each control cycle to be accurately compared.


11.1 First Order System


As with the derivation of the controller theory, the evaluation commences with a first order system. Eq. 149 contains the Laplace-domain equation for the system used, with the constants in Table 1. This first order system model has been derived from a DC motor with no load.










$$G(s) = \frac{k_p}{\tau s + 1} \tag{149}$$














TABLE 1
First Order System Constants

Parameter    Value
kp           2.0
τ            0.725










The control parameters for these simulations are as follows in Table 2 and are the same for both controllers.









TABLE 2
First Order Controller Constants

Description           Symbol    Value
Prediction Horizon    N         500
Control Horizon       nu        5
Move Suppression      λ         1.01











FIG. 3 contains the velocity output of the simulated DC motor for both the conventional DMC and the newly formulated stateless discrete predictive controller (SDPC). From this graph it can be observed that the performance of the two controllers is identical. This is also evident in the plot of the voltage supplied to the motor in FIG. 4.


The advantage of the discrete version of the MPC controller is the reduction in computation time required to determine the control output each control cycle. To evaluate this, both controllers were implemented using the C programming language to run the simulations with an accurate measurement of each control cycle duration. FIG. 5 contains the plot of the computational time for the controllers seen in FIG. 3. It is apparent that the stateless discrete predictive controller requires significantly less computational time than the conventional DMC controller.
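The original per-cycle timing was done in C; the same measurement pattern can be sketched in Python with a monotonic high-resolution clock. The stand-in control law below is hypothetical and only demonstrates the instrumentation.

```python
import time

def timed_cycles(controller, n_cycles):
    """Run a control-law callable once per cycle and record each cycle's
    wall-clock duration, as in the FIG. 5 comparison."""
    durations = []
    u = 0.0
    for _ in range(n_cycles):
        t0 = time.perf_counter()
        u = controller(u)
        durations.append(time.perf_counter() - t0)
    return durations

# Hypothetical stand-in control law: one multiply-add per cycle.
d = timed_cycles(lambda u: 0.9 * u + 0.1, 100)
```

Plotting per-cycle durations rather than an average exposes worst-case cycles, which is what matters for a fixed-rate control loop.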


11.2 Second Order System


The next system used to evaluate the performance of the scheme is a second order system with relatively low damping. The general equation for a second order system can be seen in Eq. 150, with the parameters used for this simulation in Table 3. The model has been derived from a DC motor with an external load.










$$G(s) = \frac{k_p\, \omega_n^2}{s^2 + 2 \zeta \omega_n s + \omega_n^2} \tag{150}$$














TABLE 3
Second Order System Constants

Parameter    Value
kp           2.0
ζ            0.3
ωn           2.0










Table 4 contains the control parameters used for the simulation.









TABLE 4
Second Order Controller Constants

Description           Symbol    Value
Prediction Horizon    N         600
Control Horizon       nu        10
Move Suppression      λ         1.01










Similar to the first order system responses, the two controllers' simulations have an almost identical motor output. This can be seen in FIG. 6 and FIG. 7. There is one noticeable difference between the two simulations: around eight seconds into the simulation there is a minor bump in the response of the conventional DMC. This is caused by the prediction horizon N and is investigated further in section 12.


Again the controller code was written using the C programming language with an accurate record of the controller execution time for comparison. FIG. 8 shows the comparison, where the conventional DMC takes significantly longer than the stateless discrete predictive controller.


11.3 First Order Plus Dead Time System


This section presents the results from simulations executed with a first order plus dead time system model. The formulation of the model can be seen in Eq. 151 with the parameters in Table 5. This model was taken from a no-load DC motor with the dead time exaggerated so that the controller's ability to handle the reaction delay could be evaluated.










$$G(s) = \frac{k_p\, e^{-\theta s}}{\tau s + 1} \tag{151}$$














TABLE 5
First Order Plus Dead Time System Constants

Parameter    Value
kp           2.0
θ            0.5
τ            0.25










Table 6 contains the control parameters used for the simulation.









TABLE 6
First Order Plus Dead Time Controller Constants

Description           Symbol    Value
Prediction Horizon    N         1000
Control Horizon       nu        2
Move Suppression      λ         1.1










As expected, the response of the system with the two controllers is similar, as shown in FIG. 9 and FIG. 10.



FIG. 11 has a graph of the computational time for each control cycle during the simulation. It is apparent that the stateless discrete predictive controller computes its control action in a reduced amount of time, but at less of a reduction than the previous system types. This difference is due to a characteristic of First Order Plus Dead Time (FOPDT) systems: the inherent delayed reaction in the output from a change to the input. These delayed reactions have to be accounted for in the control algorithm, so they have to be stored until they come into effect, which requires more computational time. The number of additional stored variables is the number of time steps within the dead time, which will always be significantly out-numbered by the variables the conventional DMC stores for the prediction horizon N and control horizon nu. Therefore a reduction in the computational time from the conventional DMC to the stateless discrete predictive controller is expected irrespective of the dead time.
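The stored delayed reactions can be implemented as a fixed-length FIFO: each cycle the newest control move is pushed in and the move that has now traversed the dead time is popped out. A sketch (the helper names are illustrative):

```python
from collections import deque

def make_delay(n_steps, fill=0.0):
    """FIFO holding the control moves that have been sent but have not yet
    reached the plant (dead time = n_steps * dt)."""
    return deque([fill] * n_steps, maxlen=n_steps)

def push_action(buf, u):
    """Store the newest action; return the one now taking effect."""
    effective = buf[0]
    buf.append(u)   # maxlen drops the oldest element automatically
    return effective
```

The buffer holds only dead-time/Δt values, far fewer than the N-length prediction vector the conventional DMC carries, matching the observation above.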


11.4 General Linear System


For the evaluation of the general linear controller an underdamped third order system with one zero was selected. The equation of the model is in Eq. 152 with the parameters shown in Table 7.











$$\frac{Y(s)}{U(s)} = \frac{b_0 + b_1 s}{1 + a_1 s + a_2 s^2 + a_3 s^3} \tag{152}$$














TABLE 7
General Linear System Constants

Parameter    Value
a1           1.3
a2           1.3
a3           0.8
b0           1.0
b1           0.2










For this simulation the selected control parameters are shown below in Table 8.









TABLE 8
General Linear Controller Constants

Description           Symbol    Value
Prediction Horizon    N         3000
Control Horizon       nu        5
Move Suppression      λ         1.01










The response of the two controllers can be seen in FIG. 12 and FIG. 13. Due to the highly underdamped nature of the system the step test of the controller is expected to have overshoot but still settle at the set point.


Another effect of the underdamped nature of the system is its requirement for a longer prediction horizon to achieve stable control. This corresponds to more calculations required for the conventional DMC which translates to a more significant difference between the computational times of the two controllers. This can be observed in FIG. 14.


11.5 MIMO System


This section includes the evaluation of the multiple input multiple output (MIMO) stateless discrete predictive controller theory. The system chosen for testing has the following interactions: second order dynamics from input one to output one and from input two to output two, while the interactions or cross dynamics are both first order. The equation for these dynamics can be seen in Eq. 153, with the parameters in Table 9. In this example there are four dynamic equations: Y1(s)/U1(s), Y1(s)/U2(s), Y2(s)/U1(s), and Y2(s)/U2(s). The four equations are required to capture the dynamics between each separate input and each output.





















$$\frac{Y_w(s)}{U_v(s)} = \frac{ b_{w,v,0} + b_{w,v,1}\, s + \cdots + b_{w,v,m_b(w,v)}\, s^{m_b(w,v)} }{ 1 + a_{w,v,1}\, s + \cdots + a_{w,v,n_a(w,v)}\, s^{n_a(w,v)} } \tag{153}$$














TABLE 9
MIMO System Constants

Parameter    Value
a1,1,1       1.0
a1,1,2       1.75
b1,1,0       1.5
a1,2,1       1.3
b1,2,0       0.5
a2,1,1       2.0
b2,1,0       1.0
a2,2,1       1.0
a2,2,2       1.5
b2,2,0       2.5










The control parameters used for the simulation can be seen in Table 10.









TABLE 10
MIMO Controller Constants

Description           Symbol    Value
Prediction Horizon    N         500
Control Horizon       nu        10
Move Suppression      λ         1.01










The following figures, FIG. 15 and FIG. 16, contain the response of both inputs for the system. In these graphs, input and output one are represented by a solid line while input and output two are represented by a dashed line, for step changes in setpoint.


With multiple dynamics between each of the inputs and each of the outputs, the computational time of the conventional DMC is significantly higher than that of the stateless discrete predictive controller. This can be observed in the graph of the computational time of each controller for each control cycle in FIG. 17.


11.6 Nonlinear Robot System


In order to verify the functionality of the nonlinear control structure a model of a vertical robotic manipulator was used. This model is in the state space form and can be seen in Eq. 154 with the parameters in Table 11.

$$\ddot{\theta} = b_0 u + a_0 \dot{\theta} + a_1 \sin(\theta) \tag{154}$$









TABLE 11
Nonlinear Robot Constants

Parameter    Value
a0           2.63
a1           −55.30
b0           16.07










Table 12 contains the control parameters for the model predictive controllers.









TABLE 12
Nonlinear Robot Controller Constants

Description              Symbol    Value
Prediction Horizon       N         500
Control Horizon          nu        10
Move Suppression         λ         1.01
Control Action Change    du        0.1










For the simulation a set-point was selected to have the robot move through positions of highly variable dynamics, to evaluate the controller's ability to compensate for the non-linearities. FIG. 18 and FIG. 19 show the output angle of the robot and the voltage supplied to the motor throughout the simulation. With the nonlinear system the dynamics are constantly changing, which requires the controller to linearize the current dynamics at each cycle. This is a computationally expensive task, which correlates to a high computational time for the conventional DMC controller. This issue is also seen by the stateless discrete predictive controller, but to a substantially lesser degree, as in FIG. 20.


11.7 Computational Times


As a further investigation into the computational time reduction, a series of simulations were conducted. These simulations were designed to determine to what extent the stateless discrete predictive controller is computationally faster than the conventional dynamic matrix controller. These tests varied the control and prediction horizons for the six simulations that were discussed in section 11.1 to section 11.6. The results can be seen in FIG. 21 to FIG. 26. Throughout all of the test cases the stateless discrete predictive controller has a substantially lower computation time than the conventional DMC. Another conclusion that can be drawn from this data is that while the sizes of the control and prediction horizons have a drastic effect on the computational time of the conventional DMC, the effect on the stateless discrete predictive controller is minimal at most. This is a significant feature, on account of the prediction and control horizon lengths being limited in conventional DMC tuning due to computational time.


12. Robustness and Stability


12.1 Stability


Through the derivation of the controller, a unique controller transfer function is created for each different system. Using the first order system as a template and example, the process of evaluating the stability can be outlined. Equation 155 contains the transfer function for a first order system and Eq. 156 is the general transfer function of the stateless discrete predictive controller.











$$\frac{C(s)}{U(s)} = \frac{k_p}{\tau s + 1} \tag{155}$$

$$\frac{U(s)}{E(s)} = \frac{ d_1 \left( \prod_{i=1}^{n_a} \left( \tau_i s + 1 \right) \right) }{ \left( \Delta t \left( 1 - d_3 \right) s + d_3 \right) \left( \prod_{i=1}^{n_a} \left( \tau_i s + 1 \right) \right) - \sum_{j=1}^{n_a} \left( k_j d_{2j} \right) \left( \prod_{i=1}^{j-1} \left( \tau_i s + 1 \right) \right) \left( \prod_{i=j+1}^{n_a} \left( \tau_i s + 1 \right) \right) } \tag{156}$$







Reducing the general controller transfer function for a first order system gives Eq. 157.











$$\frac{U(s)}{E(s)} = \frac{ d_1 \left( \tau s + 1 \right) }{ \left( \Delta t \left( 1 - d_3 \right) s + d_3 \right) \left( \tau s + 1 \right) - k_p d_{21} } \tag{157}$$







Using the block diagram structure in FIG. 27, the two transfer functions can be combined to find the closed loop transfer function of the entire control structure.











$$\frac{C(s)}{E(s)} = \frac{ k_p d_1 }{ \left( \Delta t \left( 1 - d_3 \right) s + d_3 \right) \left( \tau s + 1 \right) - k_p d_{21} } \tag{158}$$







Using the generic equation for a closed loop transfer function, the closed loop equation for this system can be derived, shown in Eq. 159.











$$\frac{C(s)}{R(s)} = \frac{ k_p d_1 }{ \Delta t\, \tau \left( 1 - d_3 \right) s^2 + \left( \Delta t + \left( \tau - \Delta t \right) d_3 \right) s + \left( d_3 + k_p d_1 - k_p d_{21} \right) } \tag{159}$$







From the derivation of the general linear theory, d3 = kp d21, hence the closed loop transfer function (CLTF) equation can be simplified.











$$\frac{C(s)}{R(s)} = \frac{ k_p d_1 }{ \Delta t\, \tau \left( 1 - k_p d_{21} \right) s^2 + \left( \Delta t + \left( \tau - \Delta t \right) k_p d_{21} \right) s + k_p d_1 } \tag{160}$$







Solving for the poles of Eq. 160 will determine the stability of the system. If any of the poles are on the right hand side of the jω axis the system is unstable. Using the quadratic formula, the poles are stable if Eq. 161 is true.











$$\frac{-b + \sqrt{b^2 - 4ac}}{2a} < 0 \tag{161}$$







Using the characteristic equation of the CLTF the following conditions would determine the stability of the system.

Stable if:
$$\Delta t + \left( \tau - \Delta t \right) k_p d_{21} > 0, \qquad \left( \Delta t\, \tau \left( 1 - k_p d_{21} \right) \right) \left( k_p d_1 \right) \geq 0 \tag{162}$$


For the stability of a system, the sampling time has to be smaller than the time constant of the system. This means τ is greater than Δt, as that is a requirement when selecting a sampling rate. From the derivation of d21 it can be noted that it is a positive value divided by kp. The same holds for d1 in Case 2, Eq. 164.









$$\text{Case 1:} \quad \tau > \Delta t, \qquad d_{21} = \frac{(+)}{k_p} \tag{163}$$







Through a closer inspection of the derivation of d21, it can be concluded that it has a maximum value of 1/kp, due to the α (α = e^(−Δt/τ)) term being limited to between zero and one.









$$\text{Case 2:} \quad d_1 = \frac{(+)}{k_p}, \qquad d_{21} = \frac{(+)}{k_p}, \qquad d_{21} \leq \frac{1}{k_p} \tag{164}$$







Following this procedure, the stability of the stateless discrete predictive controller for any linear system can be theoretically determined. As the controller for nonlinear systems interprets the system as a series of linear models, the stability analysis can be performed for each of the independent linearized models. This is possible because the controller is stateless and only incorporates the current model dynamics.
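The stability test of Eqs. 160 to 162 can be checked numerically. For a quadratic denominator a·s² + b·s + c with a > 0, both poles lie in the left half-plane exactly when b > 0 and c > 0 (Routh-Hurwitz for second order). A sketch, with the coefficient expressions taken from Eq. 160 and a > 0 assumed as the normalization:

```python
def cltf_stable(kp, tau, dt, d1, d21):
    """Stability of the Eq. 160 closed loop.  Denominator a*s^2 + b*s + c:
    a = dt*tau*(1 - kp*d21), b = dt + (tau - dt)*kp*d21, c = kp*d1.
    For a second-order polynomial with a > 0, Hurwitz stability holds
    exactly when all coefficients are positive."""
    a = dt * tau * (1.0 - kp * d21)
    b = dt + (tau - dt) * kp * d21
    c = kp * d1
    return a > 0 and b > 0 and c > 0
```

Note that violating the d21 ≤ 1/kp bound of Eq. 164 makes the s² coefficient negative, which is exactly the unstable case this check rejects. The parameter values used below are illustrative.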


12.2 Robustness


In the design of the stateless discrete predictive controller, one of the advantages of the proposed design is the controller's ability to perform stateless predictions each control cycle. In the conventional DMC the prediction vector is stored and then used again in the next control cycle; errors in predictions are therefore carried from one control cycle to the next. In the formulation of the discrete version, the prediction is embedded into the control parameters, so it is theoretically equivalent to performing the prediction from scratch each control cycle. To demonstrate this phenomenon, various simulations were executed on an underdamped second order system. Equation 165 contains the model equation with the parameters in Table 13. In this table there are two sets of parameters, one for the simulated plant and the other for the system that the controller was designed for. The difference is used to demonstrate model mismatch in a physical plant.










    G(s) = kp·ωn² / (s² + 2ζ·ωn·s + ωn²)    (165)














TABLE 13
Second Order System Constants

Model       Parameter   Value
Plant       kp          2.0
Plant       ζ           0.7
Plant       ωn          2.0
Controller  kp          1.8
Controller  ζ           0.77
Controller  ωn          2.0










The following Table 14 contains the control parameters.









TABLE 14
Second Order Controller Constants

Description         Symbol  Value
Prediction Horizon  N       Varying
Control Horizon     nu      Varying
Move Suppression    λ       1.01










For these simulations the same model is used with three different prediction and control horizons, N = 150, 100, 50 and nu = 2, 5, 10. As a reference, these prediction horizons are 45%, 30%, and 15% of the length required for the model to reach steady state. These nine tests can be seen in FIG. 28, FIG. 29, and FIG. 30. What can be observed is that as the prediction horizon shortens and the control horizon lengthens for the conventional DMC, the system becomes more unstable. This is caused by the prediction horizon not reaching steady state, so incorrect values are appended to the prediction as the algorithm iterates through control cycles. The stateless discrete predictive controller has the same errors in prediction, but since the prediction is performed from measured values each cycle, the overall effect is significantly reduced.
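The simulation setup can be sketched as follows (a minimal illustration, not the authors' code): the step response of Eq. 165 is integrated with semi-implicit Euler for both parameter sets of Table 13, which makes the steady-state model mismatch between plant and controller model visible.

```python
def step_response(kp: float, zeta: float, wn: float,
                  t_end: float = 5.0, dt: float = 0.001) -> list:
    """Unit step response of G(s) = kp*wn^2 / (s^2 + 2*zeta*wn*s + wn^2),
    integrated with semi-implicit Euler."""
    y, ydot = 0.0, 0.0
    out = []
    for _ in range(int(t_end / dt)):
        # y'' = kp*wn^2*u - 2*zeta*wn*y' - wn^2*y, with unit step input u = 1
        yddot = kp * wn ** 2 * 1.0 - 2 * zeta * wn * ydot - wn ** 2 * y
        ydot += dt * yddot
        y += dt * ydot
        out.append(y)
    return out

# Plant vs. controller-design model (Table 13): the gap demonstrates mismatch.
plant = step_response(kp=2.0, zeta=0.7, wn=2.0)
model = step_response(kp=1.8, zeta=0.77, wn=2.0)
print(f"steady-state: plant ≈ {plant[-1]:.2f}, model ≈ {model[-1]:.2f}")
```

At steady state the responses converge to the respective gains (2.0 versus 1.8), which is the mismatch the robustness tests exercise.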


13. Experimental System Tests


13.1 Vertical Robot


Using a lab experimental robotic manipulator, the stateless discrete predictive controller is tested against the controllers used in the previous work. For the first experiment the controller is given an initial position of 80 degrees from the equilibrium (straight down) with a setpoint of −80 degrees. This swing of 160 degrees is utilized so the manipulator passes through highly variable dynamics, making it a thorough test of the controller's nonlinear capabilities. FIG. 31 contains the manipulator response, with the motor voltage in FIG. 32.


Comparing the responses of these controllers, the stateless discrete predictive controller is an improvement over the previously tested controllers. It achieves a reduction in settling time compared to the Proportional-Integral-Derivative model predictive controller (PID MPC) and the conventional nonlinear Dynamic Matrix Controller (DMC) (nMPC). Evaluating the voltages provided to the motor reveals more about the different control schemes. As was observed in the background work, the simplified MPC (SnMPC) had some stability issues, which appear as rapid oscillations in the voltage. In comparison, the stateless discrete predictive controller has a much smoother voltage output.


In the next experiment the same system was used with an initial position of −80 degrees from equilibrium. The setpoint was selected as zero (equilibrium) because it is the point of highest sensitivity for the robot. Utilizing this feature, the controller's ability to achieve a controlled stop can be observed. FIG. 33 and FIG. 34 contain the system response and the voltage, respectively.


One key feature that can be seen from the response graph is that the stateless discrete predictive controller settles faster than the other controllers. The SDPC is marginally slower than the nMPC for the first 0.2 seconds, but it rapidly catches up and settles faster than the other controllers. From the voltage graph it is apparent the simplified MPC is again oscillating and approaching instability. The PID MPC and the conventional nonlinear MPC are better in terms of oscillations, but the SDPC is significantly smoother.


13.2 Kuka Robot


Another experiment was performed using an industrial Kuka serial manipulator, which can be seen in FIG. 35. These manipulators have been designed to track predefined profiles. However, they are increasingly being utilized for more complex applications where the tasks they perform can vary from cycle to cycle. As a result, Kuka developed the Robot Sensor Interface (RSI), software that allows offsets to be sent to the robot in real time at a sampling time of four milliseconds.


The current configuration is such that the offsets are directly added to the inputs of the servomotors for the joints. The addition of a controller would allow the robot to respond to this offset faster, giving the robot a faster response time. The performance of the different schemes is evaluated based upon settling time, with the secondary objective being stability during motion. The experimental setup is conducted with the robot initially in the candle position, and a step offset is given in the X-Z plane. This planar work envelope was selected because of the coupled nature introduced by the parallel robot joints two and three. During testing of the system and determination of the system model it was observed that there is notable dead time in the system (approximately 0.024 seconds). Due to the four millisecond sampling rate, the benchmark controllers are limited, which led to the following being selected: conventional MIMO DMC, PID, and no controller with acceleration limits. Within the RSI functionality of the KUKA there is no controller; it takes the displacement offset and drives the motors to achieve the new position. In the experiments that were conducted, if these values had been sent directly to the servos they would have faulted due to a high torque limit. Generally this would be avoided through displacement limits, but this would be impractical, as a reasonably safe limit of 0.2 millimeters per time step would mean it would take at least 2 seconds for an offset of 40 millimeters to be applied.
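The impact of the dead time can be expressed in control cycles; a trivial check, assuming only the 4 ms sampling time and 0.024 s dead time quoted above:

```python
# Dead time expressed in RSI control cycles (values from the text).
sampling_time = 0.004  # seconds per RSI cycle
dead_time = 0.024      # seconds, approximate measured dead time
dead_samples = round(dead_time / sampling_time)
print(f"dead time spans about {dead_samples} control cycles")
```

A dead time several samples long is what makes fast benchmark controllers difficult to tune at this sampling rate.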



FIG. 36 shows an experiment where a 40 millimeter offset in the X-axis was sent to the system, while FIG. 37 shows the actual offset sent to the servos. From these figures it can be noted that the stateless discrete predictive controller has a reduced settling time when compared to all of the others. It should also be noted that the Z-axis had some movement due to the coupling effect of the two axes, which is expected as the controller is minimizing the combined error across all the axes.



FIG. 38 shows an experiment where a positive 40 millimeter offset in the X direction and a negative 40 millimeter offset in the Z direction were sent to the system. FIG. 39 contains the actual positions sent to the servos during this experiment. Again in this experiment the discrete controller achieved a reduced settling time when compared to the other options.


As this is a complex system with varying dynamics throughout the workspace, it is expected that traditional control schemes like PID would have diminished performance. Looking at the conventional DMC controller, its performance degrades as it moves through the varying dynamics: there was overshoot in the X axis in the first experiment but none during the second experiment. Being stateless, the discrete model predictive controller is able to account for these varying dynamics with less impact on performance.


In one embodiment, the invention relates to a computer-implemented method for controlling a dynamical system to execute a task, including: providing a model of the dynamical system; computing control parameters, based on an optimization of future control actions in order to reduce predicted errors using the model of the dynamical system prior to executing a control loop iteration, including computing future error values based on differences between predicted outputs of the model of the dynamical system and a set point where the current state of the dynamical system is undefined, and algebraically optimizing the future errors; storing the computed control parameters in a memory device; executing at least one control loop iteration including obtaining from a sensor at least one input representing a current state of the dynamical system, computing an error value based on the difference between the current state of the dynamical system and a desired state of the dynamical system, accessing at least one of the stored control parameters, and generating, based on the accessed control parameters and the error value, at least one control output; and transmitting the at least one control output to the dynamical system for executing one or more control actions to adjust the current state of the dynamical system during execution of the task to reduce the error value. The dynamical system can be a dynamically rapid system, where the ratio of the time constant of the dynamically rapid system to the control loop cycle time is less than about 100, or where the time constant is under about one second. The dynamically rapid system can be selected from the group consisting of robots, injection molding machines, computer numerical control (CNC) machines and servo drives. The stored control parameters can each be associated with an operating range of the dynamical system.
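A minimal sketch of one such control-loop iteration follows. As the abstract states, the next control action reduces to a function of the system parameters, the past errors, and the past control actions; the split of the precomputed parameters into error and control-action coefficients, and the numeric values, are illustrative assumptions:

```python
from collections import deque

def control_output(params, past_errors, past_actions):
    """One stateless control-loop iteration: the next control action is a
    fixed linear combination of recent errors and recent control actions,
    using coefficients precomputed offline from the system model."""
    e_coeffs, u_coeffs = params
    u = sum(c * e for c, e in zip(e_coeffs, past_errors))
    u += sum(c * v for c, v in zip(u_coeffs, past_actions))
    return u

# Hypothetical precomputed parameters (the offline optimization would
# produce these from the model; the values here are illustrative).
params = ([0.8, -0.3], [0.5])
errors = deque([1.0, 0.6], maxlen=2)   # most recent error first
actions = deque([0.2], maxlen=1)       # most recent control action
u = control_output(params, errors, actions)
print(f"next control action: {u:.3f}")
```

Because the coefficients are fixed for a given linear model, each iteration costs only a handful of multiply-adds, which is the computational saving the method targets.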

Claims
  • 1. A computer-implemented method for controlling a dynamical system to execute a task, comprising: providing a model of the dynamical system; computing control parameters, based on an optimization of future control actions in order to reduce predicted errors using the model of the dynamical system prior to executing any control loop iterations for the dynamical system, comprising: computing future error values based on differences between predicted outputs of the model of the dynamical system and a set point where the current state of the dynamical system is undefined; and algebraically optimizing the future errors; storing the computed control parameters in a memory device; executing at least one control loop iteration comprising: obtaining from a sensor at least one input representing a current state of the dynamical system; computing an error value based on the difference between the current state of the dynamical system to a desired state of the dynamical system; accessing at least one of the stored control parameters, and generating, based on the accessed control parameters and the error value, at least one control output; and, transmitting the at least one control output to the dynamical system for executing one or more control actions to adjust the current state of the dynamical system during execution of the task to reduce the error value.
  • 2. The computer-implemented method of claim 1, wherein the step of computing the control parameters further comprises computing the variables: Variables:
  • 3. The computer-implemented method of claim 2, wherein the step of generating at least one control output further comprises executing the following equation:
  • 4. The computer-implemented method of claim 2, wherein the dynamical system is a dynamically rapid system, where the time constant of the dynamically rapid system to the control loop cycle time ratio is less than about 100 or where the time constant is under about one second.
  • 5. The computer-implemented method of claim 4, wherein the dynamically rapid system is selected from the group consisting of robots, injection molding machines, CNC machines and servo drives.
  • 6. The computer-implemented method of claim 1, wherein the stored control parameters are each associated with an operating range of the dynamical system.
PCT Information
Filing Document Filing Date Country Kind
PCT/CA2019/000137 9/30/2019 WO
Publishing Document Publishing Date Country Kind
WO2020/061675 4/2/2020 WO A
US Referenced Citations (5)
Number Name Date Kind
10060373 Wang et al. Aug 2018 B2
20060045801 Boyden Mar 2006 A1
20070078529 Thiele Apr 2007 A1
20090265021 Dubay Oct 2009 A1
20170359418 Sustaeta Dec 2017 A1
Foreign Referenced Citations (7)
Number Date Country
2718911 Sep 2009 CA
2874269 Jun 2016 CA
1940780 Apr 2007 CN
3014569 Jun 2015 FR
2430764 Apr 2007 GB
2010088693 Aug 2010 WO
Non-Patent Literature Citations (6)
Entry
Matthew Ellis, Helen Durand, and Panagiotis D. Christofides, a tutorial review of economic model predictive control methods, Journal of Process Control 24 (2014), 1156-1178.
Ramdane Hedjar, Redouane Toumi, Patrick Boucher, and Didier Dumur, Finite horizon nonlinear predictive control by the taylor approximation: application to robot tracking trajectory, International Journal of Applied Mathematics and Computer Science 15 (2005), No. 4, 527-540.
G. C. Kember, R. Dubay, and S. E. Mansour, On simplified predictive control as a generalization of least-squares dynamic matrix control, ISA transactions 44 (2005), No. 3, 345-352.
Michael Short, A Simplified Approach to Multivariable Model Predictive Control, International Journal of Engineering and Technology Innovation, vol. 5, No. 1, 2015, pp. 19-32.
Jacob Mark Wilson, Meaghan Charest, and Rickey Dubay, Non-linear model predictive controller, Proceedings of the Canadian Society for Mechanical Engineers International Congress 2014, 2014.
Zhiyun Zou, Dehong Yu, Zhen Hu, Luping Yu, Wenqiang Feng, and Ning Guo, Design and simulation of nonlinear Hammerstein systems dynamic matrix control algorithm, Intelligent Control and Automation, 2006. WCICA 2006. The Sixth World Congress on, vol. 1, 2006, pp. 1981-1985.
Related Publications (1)
Number Date Country
20220019181 A1 Jan 2022 US
Provisional Applications (1)
Number Date Country
62738547 Sep 2018 US