Information transfer in stochastic optimal control theory with information theoretic criteria and application

Information

  • Patent Grant
  • 11016460
  • Patent Number
    11,016,460
  • Date Filed
    Monday, April 27, 2020
    4 years ago
  • Date Issued
    Tuesday, May 25, 2021
    3 years ago
Abstract
The current disclosure provides a method for transmitting encoded information signals through a control system and to a decoder. The encoded information signals are transmitted along with control signals as an encoded message. The information signals are encoded based at least in part on a control-coding capacity of the control system.
Description
TECHNICAL FIELD

This patent application generally relates to coding of information signals in controllers of general control systems, and using the control system to communicate the encoded information to processes connected to control systems, such as field devices, outputs, other controllers or processors. The invention further relates to: (i) controller design in control systems with a dual role, (ii) achieving stability and optimal performance with respect to control objectives, (iii) encoding information, (iv) transferring the information through the control system, and (v) decoding or estimating the information at outputs of control systems, with arbitrary small error probability. The invention facilitates communication from one processor of a control system to another processor.


BACKGROUND

Current telecommunication and information technology systems are designed based on Shannon's operational definitions of coding-capacity, the maximum rate of communicating information over noisy channels, and coding-compression, the minimum rate of compressing information, which give the fundamental performance limitations for reliable communication. They utilize encoders and decoders to combat communication noise and to remove redundancy in data.


Current dynamical control systems are designed by utilizing feedback controllers, actuators and sensors, to ensure stability, robustness, and optimal control objectives and performance.


Industries related to modern control and communication systems have experienced tremendous growth due to their increasing applications in information technology which affects everyday lives of people. The next generation of engineering systems integrates control, communication, protocols, etc. to develop complex engineering systems, which can be implemented in energy systems, transportation systems, medical systems, surveillance networks, financial instruments etc. Many of these applications consist of multiple control sub-systems and communication sub-systems integrated together to achieve control and communication objectives.


In the field of communication most systems are designed by utilizing encoders and decoders, to combat communication noise and to remove redundancy in data. Current telecommunication systems, whether point-to-point, network, mobile, etc., are designed, based on Shannon's operational definitions, which give the maximum rate of communicating information over noisy channels, and the minimum rate of compressing data generated by sources, called coding-capacity of communication channels, and coding-compression of information processes, respectively.


In the field of modern control theory and applications, controllers are designed to control the control system outputs, to optimize performance, and to achieve robustness with respect to uncertainties and noise. Over the years, the criterion of optimality of controllers has been the average of a real-valued sample path pay-off functional of the control, state and output processes, which is optimized to achieve optimal control performance. It is well-known that controllers are designed to control the control systems but not to encode information and communicate this information, through the control system, to other processes.


The general separation of control system design and communication system design has divided the community of developers into independent groups developing controllers for control systems, and communication systems for noisy channels. Shannon's operational definitions of achievable coding-capacity and coding-compression, which utilize encoders and decoders to combat communication noise and to remove redundancy in data, have not been applied beyond communication systems, such as to the ability of a control system to transmit information.


To date, no controller in a control system is designed to communicate information, from one controller, through the control system, to another controller, or to any of its outputs, or from elements of a control system to elements of another control system, using encoders and decoders as done in problems of information transmission over noisy channels.


To date, Shannon's operational definitions of coding-capacity and coding-compression are not used beyond communication systems, such as in dynamical control systems. It is well-known that encoders are designed to encode and transmit information over noisy communication channels but not to control channel output processes.


SUMMARY

In an embodiment, a method of designing a controller-encoder pair in control systems, general dynamical systems, or decision systems comprises: the controller controls the control system; the encoder encodes information signals and transmits the information over the control system to any element attached to it; and the encoded signals are reconstructed using decoders, with arbitrarily small error probability.


The method described above further includes determining the expression of Control-Coding Capacity (CC-Capacity) of the control system, for stable or unstable systems, wherein the expression represents the maximum rate in bits/second of simultaneously controlling the control system and encoding information, and decoding the information at any processors attached to the control system, by solving the expression of CC-Capacity.


The method above further includes simultaneous


(i) optimal performance with respect to control objectives, and


(ii) optimal coding of information signals with respect to communication objectives.


The method above further includes the specification of the minimum power necessary to meet the optimal performance with respect to control objectives, and the specification of the excess power which is converted into transmitting information through control systems with positive data rates.


In another embodiment, the controller-encoder comprises a deterministic part and a random part, wherein the deterministic part controls the control system, while the random part encodes information.


The method further includes determining how existing controllers, which are not designed to encode information, can be modified to encode information and transmit it over control systems, to be decoded at any processor attached to the control system, such as in controller-to-controller communication, etc.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram of an example of a controller-encoder of a control system, which simultaneously controls the control system and encodes information signals, transmits the information signals over the control system, and decodes the information signals at the output of a decoder and/or estimator;



FIG. 2 is a flow chart depicting a method of an example method for transmitting encoded information signals and control signals in a control system.





DETAILED DESCRIPTION

The techniques of the present disclosure facilitate a method of controller design in control systems or decision systems, which is based on utilizing controllers, encoders and decoders, wherein the encoders and decoders are designed using techniques from communication system designs. Further, these techniques may be universal to any dynamical system with control inputs and outputs. The maximum rate for simultaneous control of the dynamical system and transmission of information over the dynamical system is called the Control-Coding Capacity (CC-Capacity).


The design of the controller-encoder for control systems aims at simultaneous control of the control system and communication of information from one processor to another processor. More specifically, the CC-Capacity of dynamical control systems identifies the interaction of control and information, and controller-encoder-decoder triplets are designed to achieve simultaneously optimal performance with respect to control objectives, and optimal transmission of information, via the dynamical system, from one controller to another controller or output, etc.


Applications of the CC-Capacity to control systems, and in general, to any dynamical system with inputs and outputs, include biological, financial, quantum, etc., because any such system has CC-Capacity.


To this end, a methodology determines how to design control systems which, in addition to effectively controlling system outputs (e.g., stabilizing them), also encode information and communicate the information from one controller to any other controller, via the control system which acts as a communication channel, and the information is decoded or estimated at other processors attached to the control system, with arbitrary precision.


Dynamical Systems or Control Systems with Communication Capabilities



FIG. 1 is a block diagram of an example dynamical system or control system 100 including at least some components that may be configured according to and/or utilize the methods discussed herein. In this example system, an information signal 101 is received (e.g., including any suitable information). A randomized strategy, also called controller-encoder 102, is designed to control the control system 106 and to encode the information signals 101. The control process or encoded message 105, including a control signal and an encoded information signal, is transmitted as a signal over the dynamical system, also called control system 106. A decoder or estimator 108 decodes the process output 107, which is the output of the encoded message 105 after going through the control system 106, to produce an estimate of the information signal 101. The process output 107 is also returned to the controller-encoder 102 as feedback. In some embodiments the process output may be delayed before being returned to the controller/encoder.


The information signal 101 includes analog or digital data. For example, the information signal may include suitable computer-generated messages, including digital data, such as digital data representing pictures, text, audio, or videos, or signals generated by other control systems, etc. Generally, the information signal 101 may include any number of devices or components of devices generating messages to be encoded by the randomized strategy, also called controller-encoder 102, and transmitted over the control system 106. The information signal 101 may include information sources and/or tracking signals as well as any other pertinent data that may be encoded and transmitted through the control system.


The randomized strategy called controller-encoder 102 encodes messages received from the information signals 101 {X0, X1, . . . , Xn} and uses feedback from the control system outputs {Y0, . . . , Yn} to generate the control process {A0, . . . , An}. The controller/encoder 102 may comprise a control module 103 and a coding module 104. The control module 103 may be used for typical control system activities, such as generating control signals for elements of the control system 106. The control module may also receive feedback 107 and other information related to the control system which may be implemented in controlling the control system. The coding module 104, which is not typical of previously known controller devices, may be implemented in encoding information signals 101 and transmitting the control process 105 as an encoded signal along with control signals to the control system 106. The controller/encoder 102 may implement the control module 103 and coding module 104 in any suitable combination to produce the control process 105. In some embodiments, the control process 105 can be represented as a randomized control signal or a randomized control action.
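As a purely illustrative sketch (not part of the disclosed method), the split between a control module 103 and a coding module 104 can be mimicked in a few lines of Python; the class name, gains, and the sign-based embedding of a message bit are hypothetical assumptions used only to show the structure of a randomized control action A_i = U_i + Z_i.

    import numpy as np

    class ControllerEncoder:
        """Toy controller-encoder: a deterministic control part (control module 103)
        plus a random coding part (coding module 104) whose sum is the control action."""
        def __init__(self, gamma: float, kz: float):
            self.gamma = gamma      # feedback control gain (control module)
            self.kz = kz            # variance of the random encoding term (coding module)

        def act(self, y_prev: float, message_bit: int, rng: np.random.Generator) -> float:
            u = self.gamma * y_prev                       # control signal
            z = np.sqrt(self.kz) * rng.standard_normal()  # random encoding term
            z = abs(z) if message_bit == 1 else -abs(z)   # embed one message bit in the sign
            return u + z                                  # control process A_i = U_i + Z_i

    rng = np.random.default_rng(0)
    ce = ControllerEncoder(gamma=-0.5, kz=1.0)
    a0 = ce.act(y_prev=0.2, message_bit=1, rng=rng)       # one randomized control action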


The coding module 104 may implement any of the techniques described below to encode the information signals 101. For example, the encoding module may be designed based on the capacity of the control system, known as the control-coding capacity, of one or more control systems 106 and then encode the signals based on the determined capacity. Further, the coding module 104 may also implement any known or future developed schemes for encoding the information signal 101.


The dynamical system also called control system 106 may include any number of systems, described by nonlinear recursive models driven by a control process and a noise process, such as the dynamics of a ship, submarine, missile, plane, or car, and any model of a dynamical system which is controlled by a controller, such as a process control system. The output of the control system 107 is available to the decoder or estimator 108 and comprises encoded information signals and control signal feedback. The decoder 108 may be any device suitable for receiving the encoded information signals from the process output 107, such as a sensor, field device, another controller, etc. In the example of FIG. 1 the control system or dynamical system 106 is described by a sequence of conditional distributions {P_{Y_i|Y^{i−1}, A^i}: i=0, . . . , n}, called conditional distributions or control system distributions.



FIG. 2 is a flow chart of an example method for transmitting encoded signals through a control system. The method starts at block 210 when an information signal is received by a controller of a control system. The information signal can be of any form discussed above with regard to FIG. 1. At block 220, a controller may determine one or more control signals that are to be sent to the control system. At block 230, the received information signals may be encoded. The information signals may be encoded by the controller at a coding module, for example. The information signals may also be encoded based on any of the techniques discussed herein. Next, at block 240 the controller may transmit the encoded information signals along with the control signals through the control system.
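A minimal end-to-end sketch of the FIG. 2 flow for a scalar linear plant is given below; the plant model, gains, and decoder statistic are assumptions chosen only to illustrate blocks 210-260, and the decoder is assumed to have access to the same feedback as the controller.

    import numpy as np

    def transmit_one_bit(bit: int, n_steps: int = 50, seed: int = 1) -> int:
        """Toy version of the FIG. 2 flow: encode one bit into the control process,
        drive a scalar plant y_i = c*y_{i-1} + a_i + v_i, and decode from the outputs.
        All gains and the plant model are illustrative assumptions."""
        rng = np.random.default_rng(seed)
        c, gamma, kz, kv = 0.9, -0.9, 1.0, 0.1
        y, score = 0.0, 0.0
        for _ in range(n_steps):
            u = gamma * y                                   # block 220: control signal
            z = np.sqrt(kz) * abs(rng.standard_normal())    # block 230: encoded signal
            z = z if bit == 1 else -z
            a = u + z                                       # block 240: encoded message
            y_next = c * y + a + np.sqrt(kv) * rng.standard_normal()  # control system output
            score += y_next - (c * y + u)                   # block 250: decoder statistic (z + v)
            y = y_next                                      # block 260: feedback to controller
        return 1 if score > 0 else 0

    assert transmit_one_bit(1) == 1 and transmit_one_bit(0) == 0   # both bits recovered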


Previous systems would only transmit control signals through control systems. These previous systems lacked the capability of sending information signals in such a manner. For example, if information needed to be transmitted through a control system or from one controller to another controller in the control system, this was not enabled by the previous systems. Instead, previous systems would require additional communication channels for communicating information signals; they do not use the control systems themselves as communication channels for transmitting information signals. In other words, by transmitting encoded messages which include both control signals and encoded information signals through the control system, the current system enables simultaneous control and communication of data. Further, the current system utilizes the control system as a communication channel and also provides more optimized communications through currently available control systems. The implementation of such techniques was not previously considered (using communication channel optimization techniques in a control system).


The encoded message transmitted in block 240 may be utilized at two different junctures because the encoded message essentially contains two parts, an encoded information signal and one or more control signals. At block 250 the encoded information signal portion may be decoded. The encoded portion may be decoded by a decoder as described above with respect to FIG. 1. In an embodiment, the encoded signal may be decoded using a particular preset encoder scheme. In another embodiment, the encoded signal may indicate the encoder scheme to be used for decoding the message.


At block 260, the control signal portion may be received by the control system. The control signal portion may be used to control one or more elements of the control system. In an embodiment, once the control signal is received, feedback may be generated by the device or devices that have received and utilized the control signal. For example, feedback may be generated and transmitted as part of process output, such as process output 107 discussed above with respect to FIG. 1. The control signals may be received by a controller, a field device, a sensor, or any other suitable device in a control system.


Although blocks 250 and 260 are discussed in sequential order, these steps may take place simultaneously or in any suitable order. In an embodiment, the present techniques allow the control system to be used as a communication channel for transmitting an encoded information signal from a controller to another device which may decode the encoded information signal.


All elements of dynamical systems or control systems 106 with communication capabilities shown in FIG. 1 are analogous to the source, encoder, channel, and decoder elements of transmitting information through noisy communication channels, as explained below.

    • (i) A^n ≜ {A_0, . . . , A_n} is the control process of the control system, and A^n = a^n ∈ ×_{i=0}^n 𝔸_i are the control actions. This is analogous to the channel inputs in problems of information transmission over noisy communication channels.
    • (ii) Y^n ≜ {Y_{−1}, Y_0, . . . , Y_n} is the output process of the control system taking values in ×_{i=−1}^n 𝕐_i, and Y_{−1} is the initial state. This is analogous to the channel outputs in problems of information transmission over noisy communication channels.
    • (iii) {P_{Y_i|Y^{i−1}, A^i}: i=0, . . . , n} is the control system conditional distribution. This is analogous to the channel distribution of the communication channel in problems of information transmission over noisy communication channels.
    • (iv) {P_{A_i|A^{i−1}, Y^{i−1}}: i=0, . . . , n} is the randomized control strategy or conditional distribution of the control process. This is analogous to the channel input conditional distribution in problems of information transmission over noisy communication channels.
    • (v) X^n ≜ {X_i: i=0, . . . , n} is the information source or tracking signal taking values in ×_{i=0}^n 𝕏_i. This is analogous to the information source, with X^n = x^n the specific message, in problems of information transmission over noisy communication channels.


In view of (i)-(v) there is a one-to-one analogy between elements of control systems and elements of noisy communication channels in problems of information transmission.


A. Consider a general control system with control process A^n ≜ {A_i: i=0, . . . , n}, output process Y^n ≜ {Y_i: i= . . . , −1, 0, 1, . . . , n}, control system conditional distribution

P_{Y_i|Y^{i−1}, A^i} ≡ Q_i(dy_i|y^{i−1}, a_i), i=0, . . . , n


a set of randomized control strategies or conditional distributions of the control process

𝒫_{[0,n]} ≜ { P_{A_i|A^{i−1}, Y^{i−1}} ≡ P_i(da_i|a^{i−1}, y^{i−1}) : i=0, 1, . . . , n }


and a set of randomized control strategies satisfying power constraints defined by the set








𝒫_{[0,n]}(κ) ≜ { P_i(da_i|a^{i−1}, y^{i−1}), i=0, . . . , n : (1/(n+1)) E( ℓ_{0,n}(A^n, Y^n) ) ≤ κ } ⊆ 𝒫_{[0,n]}







where κ ∈ [0, ∞) is the total power with respect to the cost function ℓ_{0,n}(a^n, y^n) ∈ [0, ∞).


B. The pay-off of the model of A. is the so-called directed information from A^n to Y_0^n conditioned on Y_{−1}, denoted by I(A^n → Y^n) and defined by







I(A^n → Y^n) ≜ Σ_{i=0}^n E{ log( dP_{Y_i|Y^{i−1}, A^i}(·|Y^{i−1}, A^i) / dP_{Y_i|Y^{i−1}}(·|Y^{i−1}) (Y_i) ) }







where {P_{Y_i|Y^{i−1}}: i=0, . . . , n} is the distribution generated from {Q_i(dy_i|y^{i−1}, a_i), P_i(da_i|a^{i−1}, y^{i−1}): i=0, 1, . . . , n}.
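As an illustrative numerical check of this definition (assuming a scalar Gaussian control system and a linear randomized strategy, neither of which is prescribed by the disclosure at this point), the per-step directed information term can be estimated by Monte Carlo and compared with its closed form:

    import numpy as np

    def mc_directed_information(n=20000, c=0.9, gamma=-0.9, kz=1.0, kv=0.5, seed=0):
        """Monte Carlo estimate of E{log dP(y_i|y_{i-1},a_i)/dP(y_i|y_{i-1})}
        for the scalar Gaussian system y_i = c*y_{i-1} + a_i + v_i driven by the
        randomized strategy a_i = gamma*y_{i-1} + z_i, z_i ~ N(0, kz).
        All parameter values are assumptions made for this sketch."""
        rng = np.random.default_rng(seed)
        y, terms = 0.0, []
        for _ in range(n):
            z = np.sqrt(kz) * rng.standard_normal()
            a = gamma * y + z
            y_next = c * y + a + np.sqrt(kv) * rng.standard_normal()
            # channel kernel Q_i(.|y_{i-1}, a_i) versus the induced kernel P(.|y_{i-1})
            log_q = -0.5 * ((y_next - c * y - a) ** 2 / kv + np.log(2 * np.pi * kv))
            log_p = -0.5 * ((y_next - (c + gamma) * y) ** 2 / (kz + kv)
                            + np.log(2 * np.pi * (kz + kv)))
            terms.append(log_q - log_p)
            y = y_next
        return np.mean(terms)

    closed_form = 0.5 * np.log((1.0 + 0.5) / 0.5)     # 0.5*log((kz+kv)/kv) nats per step
    print(mc_directed_information(), closed_form)      # the two values agree closely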


The Finite Time Horizon (FTH) information CC-Capacity of the control system is obtained by determining an optimal randomized control strategy {P*_i(·|·): i=0, . . . , n} ∈ 𝒫_{[0,n]}(κ) which maximizes the directed information pay-off defined by








J_{A^n→Y^n}(P^*, κ) ≜ sup_{𝒫_{[0,n]}(κ)} I(A^n → Y^n).






The Control-Coding Capacity (CC-Capacity), defined as the supremum of all transmission rates over the model of A., is the per unit time limiting version







C(κ) ≜ lim_{n→∞} (1/(n+1)) J_{A^n→Y^n}(P^*, κ)








C. The method of B. further states that CC-Capacity is analogous to the definition of coding-capacity or capacity in problems of information transmission over noisy communication channels, as follows.


(1) The operational definition of controller-encoder-decoder strategies is a generalization of Shannon's operational definition of coding-capacity in problems of information transmission over noisy channels.


(2) In digital communication theory, the source coding problem can be lossless or lossy (based on quantization), while the channel coding problem is defined with respect to the quantized or compressed representation X(n) of the information process {Xi: i=0, . . . , }.


Operational Control-Coding Capacity of Control Systems or Dynamical Systems


Consider a filtered probability space (Ω, 𝓕, {𝓕_i: i=0, 1, . . . , n}, ℙ) on which the following processes and RVs are defined.

X^{(n)}: Ω → 𝓜^{(n)} ≜ {1, . . . , M^{(n)}}, A_i: Ω → 𝔸_i, Y_i: Ω → 𝕐_i, i=0, . . . , n,
Y_{−1}: Ω → 𝕐_{−1}, X̂^{(n)}: Ω → 𝓜^{(n)}.


A controller-encoder-decoder for a dynamical system, such as, a control system, with power constraint, over the time horizon {0, 1, . . . , n} is denoted by






G-SCM: ( 𝓜^{(n)}, {𝔸_i}_{i=0}^n, {𝕐_i}_{i=−1}^n, { ℙ(X^{(n)} = x^{(n)}) = 1/M^{(n)} : x^{(n)} ∈ 𝓜^{(n)} }, {g_i(x^{(n)}, a^{i−1}, y^{i−1})}_{i=0}^n, {Q_i(dy_i|y^{i−1}, a_i)}_{i=0}^n, ℓ_{0,n}(a^n, y^n), d_n(y^n) )





and consists of the following elements.


(a) A set of uniformly distributed messages X^{(n)} taking values in 𝓜^{(n)} ≜ {1, . . . , M^{(n)}}, known to both the encoder and decoder (the controller does not need to know these).


(b) A set of controller-encoder strategies mapping messages and feedback control information into control actions defined by

ε^S_{[0,n]} ≜ { g_i: 𝓜^{(n)} × 𝔸^{i−1} × 𝕐^{i−1} → 𝔸_i, i=0, . . . , n : a_0 = g_0(x^{(n)}, y_{−1}), a_1 = g_1(x^{(n)}, y_{−1}, a_0, y_0), . . . , a_n = g_n(x^{(n)}, a^{n−1}, y^{n−1}), x^{(n)} ∈ 𝓜^{(n)} }


The set of admissible controller-encoder strategies subject to power constraint κ is defined by










ε^S_{[0,n]}(κ) ≜ { g_i(x^{(n)}, a^{i−1}, y^{i−1}), i=0, . . . , n : (1/(n+1)) E^g( ℓ_{0,n}(A^n, Y^n) ) ≤ κ } ⊆ ε^S_{[0,n]}







where κ ∈ [0, ∞) is the total cost or power of the controller-encoder. For any message x^{(n)} ∈ 𝓜^{(n)} and feedback information,

u_{x^{(n)}} = ( g_0(x^{(n)}, y_{−1}), g_1(x^{(n)}, y_{−1}, a_0, y_0), . . . , g_n(x^{(n)}, a^{n−1}, y^{n−1}) ) ∈ ε^S_{[0,n]}(κ)


is the controller-encoder strategy. The control-coding book of the controller-encoder strategy for the message set 𝓜^{(n)} is C^{(n)} = (u_1, u_2, . . . , u_{M^{(n)}}).


(c) A decoder measurable mapping d_n: 𝕐^n → 𝓜^{(n)}, X̂^{(n)} ≜ d_n(Y^n), such that the average probability of decoding error is given by the expression below (the superscript on the expectation, i.e., P^g, indicates the dependence of the distribution on the encoding strategies).








P_{error}^{(n)} ≜ P^g{ d_n(Y^n) ≠ X^{(n)} } = (1/M^{(n)}) Σ_{x^{(n)} ∈ 𝓜^{(n)}} P{ d_n(Y^n) ≠ x^{(n)} | X^{(n)} = x^{(n)} } ≜ ϵ_n, ϵ_n ∈ [0, 1)






(d) Conditional independence holds.

PYi|Yi-1,Ai,Xk=Qi(dyi|yi-1,ai),i=0,1, . . . ,n,∀k∈{0,1, . . . }.


(e) The initial data Y_{−1} ∈ 𝕐_{−1} and the distribution μ(dy_{−1}) may be known to the controller-encoder and decoder.


(f) The encoder-controller-decoder for the G-SCM is denoted by (n+1, 𝓜^{(n)}, ϵ_n, κ).


The control-coding rate is defined by







R^{(n)} ≜ (1/(n+1)) log M^{(n)}.






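For a quick numerical illustration of this definition (the horizon length and message-set size below are assumed values):

    import math
    n, M_n = 99, 2 ** 50                # horizon n+1 = 100 steps, 2^50 messages (assumed)
    R_n = math.log(M_n) / (n + 1)       # control-coding rate in nats per time step
    print(R_n)                          # ~0.347 nats/step, i.e. 0.5 bits/step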
(g) Another class of controller-encoder strategies, which are nonanticipative with respect to the information process {X0, X1, . . . } is defined by










ε_{[0,n]}(κ) ≜ { {e_i(·, ·, ·): i=0, 1, . . . , n} ∈ ε_{[0,n]} : (1/(n+1)) E^e( c_{0,n}(A^n, Y^n) ) ≤ κ }






where

ε_{[0,n]} ≜ { e_i: 𝕏^i × 𝔸^{i−1} × 𝕐^{i−1} → 𝔸_i, i=0, . . . , n : a_0 = e_0(x_0, y_{−1}), a_1 = e_1(x_0, x_1, y_{−1}, a_0, y_0), . . . , a_n = e_n(x^n, a^{n−1}, y^{n−1}) }


Note the following.


(3) By the above definition, the message X^{(n)} is taken to be uniformly distributed over 𝓜^{(n)}, hence its entropy is H(X^{(n)}) = log M^{(n)}. Moreover, unlike most treatments of capacity of communication channels, which often do not impose cost constraints (except for Gaussian channels, in which only a power constraint on {A_i: i=0, . . . , n} is imposed), the control system is not required to be stable, and there is a cost constraint on both {(A_i, Y_i): i=0, . . . , n}.


(4) If the entropy H(X(n)) is below the capacity of the control system, calculated over the time horizon {0, 1, . . . , n}, then for large enough n, we expect to achieve the control objective of meeting the average power constraint, and the coding objective to reconstruct a randomly chosen message X(n)=x(n) at the output of the decoder with small probability of error. Next, we give the precise definition of an achievable control-coding rate and capacity of the control system.


Control-Coding Capacity of Dynamical Systems or Control Systems


(h) A control-coding rate R>0 is said to be an achievable rate (under power constraint κ), if there exists a sequence of controller-encoder-decoder strategies {(n+1, 𝓜^{(n)}, ϵ_n, κ): n=0, 1, . . . } such that the random processes (A^n, Y^n) ≡ (A^{g,n}, Y^{g,n}) depend on the message X^{(n)}, and satisfy








lim_{n→∞} ϵ_n = 0


and


liminf_{n→∞} (1/(n+1)) log M^{(n)} ≥ R.






(i) The operational control-coding capacity of G-SCM under power constraint κ is the supremum of all achievable control-coding rates, i.e., it is defined by

C(κ) ≜ sup{ R : R is achievable }.


(j) Under appropriate conditions, the CC-Capacity is given by C(κ) defined by ( ).


We note the following.


(5) In information theory the operational capacity is often called the coding-capacity of noisy communication channels. Since the objective is to control the control system, in addition to encoding information, the operational capacity is called the control-coding capacity.


(6) The definition of CC-Capacity states that if a control-coding rate R is achievable under the power constraint and the control system is operated for sufficiently large n, then we can control the output process {Y_i: i=0, 1, . . . , n} and reconstruct M^{(n)} = ⌈e^{(n+1)R}⌉ messages at the control system output, using the decoder or estimator, with arbitrarily small probability of error.


(7) As pointed out below, and in view of the general cost constraint, the cost of control is κ0,n (0) while the cost of communication is κ0,n(C)−κ0,n(0).


D. The method of B. further states the optimal randomized strategy {P*i(⋅|⋅): i=0, . . . , n}∈custom character[0,n](κ) is analogous to the optimal channel input conditional distribution in problems of information transmission over noisy communication channels with feedback encoding.


E. An example of the control system is a General-Recursive Control Model (G-RCM) defined as follows.






G-RCM: Y_i = h_i(Y_{i−1}, A_i, V_i), i=1, . . . , n,
Y_0 = h_0(Y_{−1}, A_0, V_0), Y_{−1} = y_{−1},
(1/(n+1)) Σ_{i=0}^n E{ γ_i(T^i A^n, T^i Y^n) } ≤ κ









where


Assumption A.(i). {V_i: i=0, . . . , n} is the noise process independent of Y_{−1}; h_i: 𝕐_{i−1} × 𝔸_i × 𝕍_i → 𝕐_i, and h_i(·,·,·), γ_i(·,·) are specific functions, such as linear or nonlinear and autoregressive models, etc., with Multiple Inputs and Multiple Outputs; and γ_i(T^i a^n, T^i y^n), where T^i a^n ⊆ {a_0, a_1, . . . , a_i} and T^i y^n ⊆ {y_0, y_1, . . . , y_i} for i=0, . . . , n, is any quadratic or nonlinear function;


Assumption A.(ii). The noise process {Vi: i=0, . . . , n} satisfies

PVi|Vi-1,Ai,Y-1(dvi|vi-1,ai,y−1)=PVi(dvi),i=0, . . . ,n.


Hence, a G-RCM induces a sequence of control system conditional distributions, and these are not necessarily Gaussian.
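A minimal simulation sketch of such a G-RCM is shown below; the particular nonlinear map h_i, the feedback law, and the Laplace (non-Gaussian) noise are assumptions used only to illustrate that the recursion induces non-Gaussian conditional output distributions.

    import numpy as np

    def grcm_step(y_prev, a, v):
        """One step of an illustrative nonlinear recursive control model
        y_i = h(y_{i-1}, a_i, v_i); the specific h is an assumption."""
        return 0.8 * np.tanh(y_prev) + a + v

    rng = np.random.default_rng(0)
    n = 1000
    y = np.zeros(n + 1)
    for i in range(1, n + 1):
        a = -0.5 * y[i - 1]                  # a simple feedback control action
        v = rng.laplace(scale=0.3)           # i.i.d. non-Gaussian noise, per Assumption A.(ii)
        y[i] = grcm_step(y[i - 1], a, v)
    # the induced conditional law of Y_i given (y_{i-1}, a_i) is the law of h(y, a, V_i)
    print(y[:5])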


F. The analogy between feedback capacity of noisy communication channels and stochastic optimal control problems, with directed information pay-off, introduced in A.-D., and depicted in FIG. 1, states that randomized strategies called randomized control strategies can be used to control the control system and to encode information signals or messages, and communicate the information signals over the control system, which acts as a communication channel, to any of the other processes attached to the control system, and reconstruct the information signals by using a decoder or estimator.


Hierarchical Decomposition: Cost of Control and Communication

For any finite n, C_{0,n}(κ) ≜ J_{A^n→Y^n}(P^*, κ) is a concave non-decreasing function of κ ∈ [0, ∞), and the inverse function of C_{0,n}(κ), denoted by κ_{0,n}(C), is a convex non-decreasing function of C ∈ [0, ∞). This implies
















κ_{0,n}(C) ≜ inf_{ { P_i(da_i|a^{i−1}, y^{i−1}), i=0, . . . , n : (1/(n+1)) I(A^n → Y^n) ≥ C } } E{ ℓ_{0,n}(A^n, Y^n) } ≥ inf_{𝒫} E{ ℓ_{0,n}(A^n, Y^n) } ≜ κ_{0,n}(0).









Clearly, κ0,n (0)≡κmin is the optimal pay-off or cost of the control system without communication. Then κ0,n(C)−κ0,n(0) is the additional cost incurred if the control system operates at an information rate of at least C.


The cost of communication is given by








κ(C) − κ(0) ≜ lim_{n→∞} (1/(n+1)) κ_{0,n}(C) − lim_{n→∞} (1/(n+1)) κ_{0,n}(0).








In general, κ(C)−κ(0) > 0. In view of the above connections, κ_{0,n}(C) decomposes into two sub-problems, the optimal control sub-problem and the optimal communication sub-problem, which imposes a natural hierarchical decomposition on any optimization problem C_{0,n}(κ).
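To make the decomposition concrete, the following sketch assumes an AWGN-like concave form for C(κ) (an assumption, not the general expression) and numerically reads off the total power κ(C) and the cost of communication κ(C) − κ(0) for a target rate:

    import numpy as np

    def capacity_of_power(kappa, kappa_min=2.0, kv=0.5):
        """Illustrative concave C(kappa): the minimum power kappa_min is spent on
        control; the excess kappa - kappa_min is converted into information rate
        (an AWGN-like form assumed only for this sketch)."""
        return 0.5 * np.log(1.0 + np.maximum(kappa - kappa_min, 0.0) / kv)

    def power_of_capacity(C, kappa_min=2.0, kv=0.5):
        """Convex inverse kappa(C): total power needed to control AND signal at rate C."""
        return kappa_min + kv * (np.exp(2.0 * C) - 1.0)

    C_target = 0.5                                                   # nats per time step
    kappa_needed = power_of_capacity(C_target)                       # kappa(C)
    cost_of_communication = kappa_needed - power_of_capacity(0.0)    # kappa(C) - kappa(0)
    print(kappa_needed, cost_of_communication)                       # ~2.859 and ~0.859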


In addition, we show that classical stochastic optimal control problems, defined by













J_{0,n}(P^*) = inf E{ ℓ_{0,n}(A^n, Y^n) }.








are degenerate optimization problems of the FTH information CC-Capacity ( ).


There are several hidden aspects of the FTH information CC-Capacity JAn→Yn(P*, κ), which include the following.

    • i) Randomized Versus Deterministic Strategies. Randomized strategies 𝒫_{[0,n]} incur a higher value for the pay-off I(A^n → Y^n) than deterministic strategies 𝒫^D_{[0,n]}, that is, than when {P_i(da_i|a^{i−1}, y^{i−1}): i=0, . . . , n} ∈ 𝒫_{[0,n]} are delta measures concentrated at

      a_j^g = g_j(y^{g,−1}, y_0^g, . . . , y_{j−1}^g, a_0^g, a_1^g, . . . , a_{j−1}^g)
      • for j=0, . . . , n. Indeed, if randomized control strategies 𝒫_{[0,n]} are replaced by deterministic strategies then

        I(A^n → Y^n) = 0, ∀ {P_i(·|·): i=0, . . . , n} ∈ 𝒫^D_{[0,n]}.
      • This implies that for any directed information pay-off, and hence for the FTH information CC-Capacity, randomized control strategies are responsible for encoding information signals into the control process {A_i: i=0, . . . , n}, which consists of a deterministic control strategy part and a random encoder strategy part. This is fundamentally different from classical stochastic optimal control problems J_{0,n}(P^*), in which the performance over randomized and deterministic control strategies is the same.
    • ii) Duality Relations. The inverse of the function C_{0,n}(κ) ≜ J_{A^n→Y^n}(P^*, κ), denoted by κ_{0,n}(C), is given by the following optimization problem.


Dual Extremum Problem 1.
















κ_{0,n}(C) = inf { E( ℓ_{0,n}(A^n, Y^n) ) } ≥ κ_{0,n}(0) ≡ J_{0,n}(P^*).









The inequality states that it costs more to simultaneously control and transmit information than to control only. The additional cost to communicate is κ0,n(C)−κ0,n(0); this is quantified in the application examples.


Moreover, if the randomized control strategies 𝒫_{[0,n]} are restricted to deterministic strategies 𝒫^D_{[0,n]}, then necessarily C=0. The resulting optimization problem reduces to the following classical stochastic optimal control problem (without an information theoretic constraint).


Degenerate Dual Extremum Problem 2.













κ_{0,n}(C) = inf E{ ℓ_{0,n}(A^n, Y^n) }, if 𝒫_{[0,n]} = 𝒫^D_{[0,n]}, = κ_{0,n}(0).








The last equality follows from the fact that randomized control strategies do not incur better performance.


The application example in [0037] illustrates the above statements.


Characterization of Control-Coding Capacity of Dynamical Systems


G. Consider an example control system with control system distribution, and cost function defined by

Q_i(dy_i|y^{i−1}, a_{i−L}^i), γ_i(a_{i−N}^i, y^{i−1}), i=0, . . . , n  (.1)


where a_{i−L}^i ≜ (a_{i−L}, a_{i−L+1}, . . . , a_i), y^{i−1} = (y_{−1}, y_0, . . . , y_{i−1}), {L, N} are finite non-negative integers, with an average constraint defined by








𝒫^N_{[0,n]}(κ) = { P_i(da_i|a^{i−1}, y^{i−1}), i=0, . . . , n : (1/(n+1)) E( Σ_{i=0}^n γ_i(A_{i−N}^i, Y^{i−1}) ) ≤ κ }.





Then the characterization of FTH information CC-Capacity is given by

















J^I_{A^n→Y^n}(π^{I,*}, κ) = sup Σ_{i=0}^n E{ log( dQ_i(·|Y^{i−1}, A_{i−L}^i) / dΠ_i^{π^I}(·|Y^{i−1}) (Y_i) ) }, I = max{L, N}  (.2)







where the distributions are given by













Π_i^{π^I}(dy_i|y^{i−1}) = ∫ Q_i(dy_i|y^{i−1}, a_{i−L}^i) π_i^I(da_i|a_{i−I}^{i−1}, y^{i−1}) P^{π^I}(da_{i−I}^{i−1}|y^{i−1}),  (.3)

P_μ^{π^I}(da^i, dy^i) = μ(dy_{−1}) ⊗_{j=0}^i ( Q_j(dy_j|y^{j−1}, a_{j−L}^j) π_j^I(da_j|a_{j−I}^{j−1}, y^{j−1}) ), i=0, . . . , n.

𝒫̄^I_{[0,n]}(κ) ≜ { π_i^I(da_i|a_{i−I}^{i−1}, y^{i−1}), i=0, . . . , n : (1/(n+1)) E( Σ_{i=0}^n γ_i(A_{i−N}^i, Y^{i−1}) ) ≤ κ }  (.4)







H. Consider an example control system with control system distribution, and cost function defined by

Q_i(dy_i|y_{i−M}^{i−1}, a_{i−L}^i), γ_i(a_{i−N}^i, y_{i−K}^{i−1}), i=0, . . . , n  (.5)


where {M, K} are finite non-negative integers and the convention is













y_{i−M}^{i−1}|_{M=0} = {∅}







for any i. Then the characterization of FTH information CC-Capacity is given by
















J^{I,M}_{A^n→Y^n}(π^{I,*}, κ) = sup Σ_{i=0}^n E{ log( dQ_i(·|Y_{i−M}^{i−1}, A_{i−L}^i) / dΠ_i^{π^I}(·|Y^{i−1}) (Y_i) ) }  (.6)







where

𝒫^{I,K}_{[0,n]}(κ) ≜ { π_i^I(da_i|a_{i−I}^{i−1}, y^{i−1}), i=0, . . . , n : (1/(n+1)) E( Σ_{i=0}^n γ_i(A_{i−N}^i, Y_{i−K}^{i−1}) ) ≤ κ },

where I ≜ max{L, N}, and












Π_i^{π^I}(dy_i|y^{i−1}) = ∫ Q_i(dy_i|y_{i−M}^{i−1}, a_{i−L}^i) π_i^I(da_i|a_{i−I}^{i−1}, y^{i−1}) P^{π^I}(da_{i−I}^{i−1}|y^{i−1}),  (.7)

P_μ^{π^I}(da^i, dy^i) = μ(dy_{−1}) ⊗_{j=0}^i ( Q_j(dy_j|y_{j−M}^{j−1}, a_{j−L}^j) π_j^I(da_j|a_{j−I}^{j−1}, y^{j−1}) ), i=0, . . . , n.  (.8)



















I. Consider an example control system with control system distribution and cost function defined by

Q_i(dy_i|y_{i−M}^{i−1}, a_i), γ_i(a_i, y_{i−K}^{i−1}), i=0, . . . , n.  (.9)


Then the characterization of FTH information CC-Capacity is given by

















J^J_{A^n→Y^n}(π^{0,J}, κ) = sup Σ_{i=0}^n E{ log( dQ_i(·|Y_{i−M}^{i−1}, A_i) / dΠ_i^{π^{0,J}}(·|Y_{i−J}^{i−1}) (Y_i) ) }, J ≜ max{M, K}  (.10)







where













𝒫°^J_{[0,n]}(κ) ≜ { π^{0,J}(da_i|y_{i−J}^{i−1}), i=0, . . . , n : (1/(n+1)) E( Σ_{i=0}^n γ_i(A_i, Y_{i−K}^{i−1}) ) ≤ κ },

Π_i^{π^{0,J}}(dy_i|y_{i−J}^{i−1}) = ∫ Q_i(dy_i|y_{i−M}^{i−1}, a_i) π_i^{0,J}(da_i|y_{i−J}^{i−1}),

P_μ^{π^{0,J}}(da^i, dy^i) = μ(dy_{−1}) ⊗_{j=0}^i ( Q_j(dy_j|y_{j−M}^{j−1}, a_j) π_j^{0,J}(da_j|y_{j−J}^{j−1}) ).  (.11)







The above characterization means the joint process {(A_i, Y_i): i=0, . . . , n} and the output process {Y_i: i=0, . . . , n} are J-order Markov processes.


J. The characterizations of FTH information CC-Capacity are general and hence, they hold for arbitrary control models, noise distributions, and cost functions.


CC-Capacity of Gaussian-G-RCM-1 and Randomized Strategies


K. Consider an example G-RCM of E. called Gaussian-G-RCM-1 with quadratic cost function defined as follows.

Y_i = C_{i−1} Y_{i−1} + D_{i,i} A_i + D_{i,i−1} A_{i−1} + V_i,
Y_{−1} = y_{−1}, A_{−1} = a_{−1}, i=0, . . . , n,
P_{V_i|V^{i−1}, A^i, Y_{−1}} = P_{V_i}, V_i ~ N(0, K_{V_i}), K_{V_i} > 0,
(Y_{−1}, A_{−1}) ~ N(0, K_{Y_{−1},A_{−1}}), K_{Y_{−1},A_{−1}} > 0,
γ_i(a_i, y_{i−1}) ≜ ⟨a_i, R_i a_i⟩ + ⟨y_{i−1}, Q_{i,i−1} y_{i−1}⟩,
(D_{i,i}, D_{i,i−1}) ∈ ℝ^{p×q} × ℝ^{p×q},
R_i ∈ 𝕊_{++}^{q×q}, Q_{i,i−1} ∈ 𝕊_{+}^{p×p}, i=0, . . . , n,


where 𝕊_{+}^{q×q} denotes the set of positive semidefinite q×q matrices, 𝕊_{++}^{q×q} their restriction to positive definite matrices, and ⟨·,·⟩ denotes inner product. The Gaussian-G-RCM-1 is a Multiple Input Multiple Output (MIMO) control system with memory on past inputs and outputs, is an Infinite Impulse Response (IIR) model, and the cost function is quadratic.
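A scalar numerical instance of the Gaussian-G-RCM-1 under a linear randomized strategy can be simulated directly; all numerical values below are assumptions used only to illustrate the model and the average quadratic cost:

    import numpy as np

    # Scalar instance (assumed values): Y_i = c*Y_{i-1} + d0*A_i + d1*A_{i-1} + V_i,
    # quadratic cost <a, R a> + <y, Q y>, randomized strategy A_i = gamma*Y_{i-1} + lam*A_{i-1} + Z_i.
    c, d0, d1, kv = 0.7, 1.0, 0.3, 0.2
    R, Q = 1.0, 1.0
    gamma, lam, kz = -0.5, -0.2, 0.4

    rng = np.random.default_rng(0)
    n = 5000
    y, a, cost = 0.0, 0.0, 0.0
    for _ in range(n):
        z = np.sqrt(kz) * rng.standard_normal()      # random (encoding) part of the strategy
        a_new = gamma * y + lam * a + z              # randomized control action
        y_new = c * y + d0 * a_new + d1 * a + np.sqrt(kv) * rng.standard_normal()
        cost += R * a_new ** 2 + Q * y ** 2          # running quadratic cost
        y, a = y_new, a_new
    print("average power/cost per step:", cost / n)  # empirical left-hand side of the constraint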


Next, we prepare to compute


(i) the optimal randomized strategy, and


(ii) the FTH CC-Capacity.


From G. the optimal randomized strategy is of the form {π_i^L(da_i|a_{i−L}^{i−1}, y^{i−1}) ≡ π_i^1(da_i|a_{i−1}, y^{i−1}): i=0, . . . , n}, i.e., L=1. The directed information pay-off is expressed as follows.













I(A^n → Y^n) = Σ_{i=0}^n { H(Y_i|Y^{i−1}) − H(Y_i|A_{i−1}^i, Y^{i−1}) },

H(Y_i|Y^{i−1}, A_{i−1}^i) = H(V_i) = (1/2) log( (2πe)^p |K_{V_i}| ).









Let {(Aig, Yig, Zig): i=0, . . . , n} denote a jointly Gaussian process. By the maximum entropy property of Gaussian distributions it follows that










Σ_{i=0}^n H(Y_i|Y^{i−1}) = H(Y^n) ≤ H(Y^{g,n})







and the upper bound is achieved if {(A_i, Y_i, Z_i) = (A_i^g, Y_i^g, Z_i^g): i=0, . . . , n} and the average constraint is satisfied. Hence, the upper bound is achieved if the optimal strategies are linear, given as follows.


Randomized Strategy











A_i^g = U_i^g + Λ_{i,i−1} A_{i−1}^g + Z_i^g, U_i^g ≜ Γ_{i−1} Y^{g,i−1},
= g_i^1(Y^{g,i−1}) + Λ_{i,i−1} A_{i−1}^g + Z_i^g,
g_i^1(y^{i−1}) = Γ_{i−1} y^{i−1}, i=0, . . . , n,






    • Zig is independent of (Ag,i-1, Yg,i-1), Zg,i is independent of Vi, i=0, . . . , n,

    • {Zig˜N(0, KZi): i=0, 1, . . . , n} is an independent Gaussian process





for some deterministic matrices {(Γi-1, Λi,i-1): i=0, . . . , n} of appropriate dimensions. Next, we prepare to compute the directed information pay-off. To this end, we need to compute the conditional entropy H(Yig|Yg,i-1), i=0, . . . , n, which means we need to determine the conditional density of Yig given Yg,i-1 for i=0, . . . , n, using the stochastic control system and strategy. Since the conditional density is characterized by the conditional mean and covariance, we define the quantities

Ŷ_{i|i−1} ≜ E^{e^1}{ Y_i^g | Y^{g,i−1} }, Â_{i|i} ≜ E^{e^1}{ A_i^g | Y^{g,i} },
K_{Y_i|Y^{i−1}} ≜ E^{e^1}{ (Y_i^g − Ŷ_{i|i−1})(Y_i^g − Ŷ_{i|i−1})^T | Y^{g,i−1} },
P_{i|i} = E^{e^1}{ (A_i^g − Â_{i|i})(A_i^g − Â_{i|i})^T }, i=0, . . . , n.


From the above, and using the independence properties of the noise process,

Â_{i|i} = Λ_{i,i−1} Â_{i−1|i−1} + U_i^g + Δ_{i|i−1}(Y_i^g − Ŷ_{i|i−1}),
Ŷ_{i|i−1} = C_{i−1} Y^{g,i−1} + D_{i,i} U_i^g + Λ̄_{i,i−1} Â_{i−1|i−1},
K_{Y_i|Y^{i−1}} = Λ̄_{i,i−1} P_{i−1|i−1} Λ̄_{i,i−1}^T + D_{i,i} K_{Z_i} D_{i,i}^T + K_{V_i}, i=0, . . . , n


where (Â−1|−1, P−1|−1) are initial data and

Λ̄_{i,i−1} ≜ D_{i,i} Λ_{i,i−1} + D_{i,i−1}, i=0, . . . , n,
P_{i|i} = Λ_{i,i−1} P_{i−1|i−1} Λ_{i,i−1}^T + K_{Z_i} − (K_{Z_i} D_{i,i}^T + Λ_{i,i−1} P_{i−1|i−1} Λ̄_{i,i−1}^T) · Φ_{i|i−1} (K_{Z_i} D_{i,i}^T + Λ_{i,i−1} P_{i−1|i−1} Λ̄_{i,i−1}^T)^T,
Φ_{i|i−1} ≜ [ D_{i,i} K_{Z_i} D_{i,i}^T + K_{V_i} + Λ̄_{i,i−1} P_{i−1|i−1} Λ̄_{i,i−1}^T ]^{−1},
Δ_{i|i−1} ≜ (K_{Z_i} D_{i,i}^T + Λ_{i,i−1} P_{i−1|i−1} Λ̄_{i,i−1}^T) Φ_{i|i−1}


The innovations process denoted by {ve1: i=0, . . . , n} is an orthogonal process, independent of {gi1(⋅): i=0, . . . , n}, and satisfies the following identities.








v_i^{e^1} ≜ Y_i^g − Ŷ_{i|i−1} = Λ̄_{i,i−1}( A_{i−1}^g − Â_{i−1|i−1} ) + D_{i,i} Z_i^g + V_i = v_i^{e^1}|_{g_i^1 = 0} ≜ v_i^0, v_i^0 ~ N(0, K_{Y_i|Y^{i−1}}), i=0, . . . , n




where {vi0: i=0, . . . , n} indicates that the innovations process is independent of the strategy {gi1(⋅): i=0, . . . , n}. From the above equations, since the conditional covariance KYi|Yi-1 is independent of Yg,i-1 for i=0, . . . , n, the conditional distribution is Pe1(Yig≤yi|Yg,i-1)˜N(Ŷi|i-1, KYi|Yi-1), i=0, . . . , n. Applying the above two observations we obtain












I(A^{g,n} → Y^{g,n}) = (1/2) Σ_{i=0}^n log( |K_{Y_i|Y^{i−1}}| / |K_{V_i}| ) ≡ Σ_{i=0}^n { H(v_i^0) − H(V_i) }








Next, we give the closed form expressions of the optimal randomized control strategies, and the FTH CC-Capacity.


CC-Capacity and Randomized Strategy


Consider the Gaussian-RCM-1. The following hold.


(a) FTH Information CC-Capacity. The joint process {(Ai, Yi)=(Aig, Yig), i=0 . . . , n}, is jointly Gaussian and satisfies the following equations.











A_i^g = e_i^1(Y^{g,i−1}, A_{i−1}^g) + Z_i^g, i=0, . . . , n,
= U_i^g + Λ_{i,i−1} A_{i−1}^g + Z_i^g,
U_i^g = g_i^1(Y^{g,i−1}) = Γ_{i−1} Y^{g,i−1},
Y_i^g = C_{i−1} Y^{g,i−1} + Λ̄_{i,i−1} A_{i−1}^g + D_{i,i} U_i^g + D_{i,i} Z_i^g + V_i,






    • i) Zig is independent of (Ag,i-1, Yg,i-1), i=0, . . . , n,

    • ii) Zg,i is independent of Vi, i=0, . . . , n,

    • iii) {Zig˜N(0, KZi): i=0, 1, . . . , n} is an independent Gaussian process,











E^{e^1}{ γ_i(A_i^g, Y_{i−1}^g) } = E^{e^1}{ ⟨U_i^g, R_i U_i^g⟩ + 2⟨Λ_{i,i−1} Â_{i−1|i−1}, R_i U_i^g⟩ + ⟨Λ_{i,i−1} Â_{i−1|i−1}, R_i Λ_{i,i−1} Â_{i−1|i−1}⟩ + tr(K_{Z_i} R_i) + tr(Λ_{i,i−1}^T R_i Λ_{i,i−1} P_{i−1|i−1}) + ⟨Y_{i−1}^g, Q_i Y_{i−1}^g⟩ }.






The FTH CC-Capacity is given by













J^1_{A^n→Y^n}(e^{1,*}, κ) = sup (1/2) Σ_{i=0}^n log( |K_{Y_i|Y^{i−1}}| / |K_{V_i}| )

and the average constraint set is defined by









𝒫̄^1_{[0,n]}(κ) ≜ { e_i^1(·) ≜ ( g_i^1(·,·), Λ_{i,i−1}, K_{Z_i} ), i=0, . . . , n : (1/(n+1)) Σ_{i=0}^n E^{e^1}( γ_i(A_i^g, Y_{i−1}^g) ) ≤ κ }.

(b) Decentralized Separation of Randomized Strategy into Controller and Encoder: The optimal strategy denoted by {e1,*(⋅)≡(gi1,*(⋅), Λi,i-1*, KZi*): i=0, . . . , n} is the solution of the dual optimization problem








κ_{0,n}(C) ≜ inf_{ (g_i^1(·), Λ_{i,i−1}, K_{Z_i}), i=0, . . . , n : (1/2) Σ_{i=0}^n log( |K_{Y_i|Y^{i−1}}| / |K_{V_i}| ) ≥ (n+1) C } E^{e^1}{ Σ_{i=0}^n γ_i(A_i^g, Y_{i−1}^g) }.

Moreover, the following decentralized separation holds.


(i) The optimal strategy {gi1.*(⋅): i=0, . . . , n} is the solution of the optimization problem







inf_{ g_i^1(·), i=0, . . . , n } E^{e^1}{ Σ_{i=0}^n γ_i(A_i^g, Y_{i−1}^g) }

for a fixed {Λi,i-1, KZi: i=0, . . . , n}.


(ii) The optimal strategy {Λi,i-1*, KZi*: i=0, . . . , n} is the solution of κ0,n(C) for {gi1(⋅)=gi1,*(⋅): i=0, . . . , n} determined from (i).


(c) Optimal Strategies of Controller and Encoder. Suppose in (a), Yig is replaced by

Yig=Ci,i-1Yi-1g+Λi,i-1Ai-1g+Di,iUig+Di,iZig+Vi,i=0, . . . ,n.


Any candidate of the control strategy {gi1(Yg,i-1): i=0, . . . , n} is of the form












g_i^1(Y^{g,i−1}) ≜ Γ^1_{i,i−1} Y_{i−1}^g + Γ^2_{i,i−1} Â_{i−1|i−1} ≡ Γ̄_{i,i−1} Ȳ_{i−1}^g, Ȳ_{i−1}^g = [ Y_{i−1}^g ; Â_{i−1|i−1} ], i=0, . . . , n.

Define the augmented system









Ȳ_i^g = F̄_{i,i−1} Ȳ_{i−1}^g + B̄_{i,i−1} U_i^g + Ḡ_{i,i−1} v_i^{e^1},

F̄_{i,i−1} ≜ [ C_{i,i−1}  Λ̄_{i,i−1} ; 0  Λ_{i,i−1} ], B̄_{i,i−1} ≜ [ D_{i,i} ; I ], Ḡ_{i,i−1} ≜ [ I ; Δ_{i|i−1} ], i=0, . . . , n

and average cost











E^{e^1}{ Σ_{i=0}^n γ_i(A_i^g, Y_{i−1}^g) } ≡ E^{e^1}{ Σ_{i=0}^n γ̄_i(U_i^g, Ȳ_{i−1}^g) }

≜ E^{e^1}{ Σ_{i=0}^n ( [ Ȳ_{i−1}^g ; U_i^g ]^T [ M̄_{i,i−1}  L̄_{i,i−1} ; L̄_{i,i−1}^T  N̄_{i,i−1} ] [ Ȳ_{i−1}^g ; U_i^g ] + tr(K_{Z_i} R_i) + tr(Λ_{i,i−1}^T R_i Λ_{i,i−1} P_{i−1|i−1}) ) },

M̄_{i,i−1} ≜ [ Q_{i,i−1}  0 ; 0  Λ_{i,i−1}^T R_i Λ_{i,i−1} ], L̄_{i,i−1} ≜ [ 0 ; Λ_{i,i−1}^T R_i ], N̄_{i,i−1} ≜ R_i.

Then the following hold.


(1) For a fixed {Λi,i-1, KZi: i=0, . . . , n} the optimal strategy {Uig,*=gi1,*(Yg,i-1): i=0, . . . , n} is the solution of the partially observable classical stochastic optimal control problem













J_{0,n}( g^{1,*}(·), Λ, K_Z ) = inf E^{e^1}{ Σ_{i=0}^n γ̄_i(U_i^g, Ȳ_{i−1}^g) }

where {Ȳ_i^g: i=0, . . . , n} satisfy the above recursion. Moreover, the optimal strategy {U_i^{g,*} = g_i^{1,*}(Ȳ^{g,i−1}): i=0, . . . , n} is given by the following equations.














g_i^{1,*}(ȳ_{i−1}) = Γ̄_{i,i−1} ȳ_{i−1}, i=0, . . . , n−1,
= −( N̄_{i,i−1} + B̄_{i,i−1}^T Σ(i+1) B̄_{i,i−1} )^{−1} B̄_{i,i−1}^T Σ(i+1) F̄_{i,i−1} ȳ_{i−1},
g_n^{1,*}(ȳ_{n−1}) = 0,

where the symmetric positive semidefinite matrix {Σ(i): i=0, . . . , n} satisfies, for i=0, . . . , n−1, the matrix difference Riccati equation

Σ(i) = F̄_{i,i−1}^T Σ(i+1) F̄_{i,i−1} − ( F̄_{i,i−1}^T Σ(i+1) B̄_{i,i−1} + L̄_{i,i−1} ) · ( N̄_{i,i−1} + B̄_{i,i−1}^T Σ(i+1) B̄_{i,i−1} )^{−1} ( B̄_{i,i−1}^T Σ(i+1) F̄_{i,i−1} + L̄_{i,i−1}^T ) + M̄_{i,i−1}, Σ(n) = M̄_{n,n−1}

and the optimal pay-off is given by








J_{0,n}( g^{1,*}(·), Λ, K_Z ) = Σ_{j=0}^n { tr( Φ_{j|j−1} Σ(j) ) + tr( K_{Z_j} R_j ) + tr( Λ_{j,j−1}^T R_j Λ_{j,j−1} P_{j−1|j−1} ) + tr( Δ_{j|j−1} K_{Y_j|Y^{j−1}} Δ_{j|j−1}^T Σ(j) ) } + ⟨ Ȳ_{−1|−1}, Σ(0) Ȳ_{−1|−1} ⟩.
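A small sketch of the backward matrix difference Riccati iteration used in (1) is given below; the matrices F, B, M, L, N stand in for F̄, B̄, M̄, L̄, N̄ with illustrative numerical values (assumptions), and the function name is hypothetical:

    import numpy as np

    def riccati_backward(F, B, M, L, N, n):
        """Backward iteration of the matrix difference Riccati equation in (1):
        Sigma(i) = F' Sigma(i+1) F - (F' Sigma(i+1) B + L)(N + B' Sigma(i+1) B)^{-1}
                   (B' Sigma(i+1) F + L') + M,   Sigma(n) = M."""
        Sigma = [None] * (n + 1)
        Sigma[n] = M.copy()
        gains = [None] * n
        for i in range(n - 1, -1, -1):
            S = Sigma[i + 1]
            K = np.linalg.solve(N + B.T @ S @ B, B.T @ S @ F + L.T)   # feedback gain
            Sigma[i] = F.T @ S @ F - (F.T @ S @ B + L) @ K + M
            gains[i] = -K
        return Sigma, gains

    # small 2-state, 1-input example with assumed matrices
    F = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])
    M = np.eye(2)
    L = np.zeros((2, 1))
    N = np.array([[0.5]])
    Sigma, gains = riccati_backward(F, B, M, L, N, n=50)
    print(gains[0])     # near-stationary gain after enough backward steps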

(2) The optimal strategies {(Λi,i-1*, KZi*): i=0, . . . , n} are the solutions of the optimization problem













κ_{0,n}(C) = inf_{ (Λ_{i,i−1}, K_{Z_i}), i=0, . . . , n : (1/2) Σ_{i=0}^n log( |K_{Y_i|Y^{i−1}}| / |K_{V_i}| ) ≥ (n+1) C } { J_{0,n}( g^{1,*}(·), Λ, K_Z ) }.

The above is a decentralized separation principle, and (1) and (2) are Person-by-Person Optimality statements of {g_i^1(·): i=0, . . . , n} and {Λ_{i,i−1}, K_{Z_i}: i=0, . . . , n}.


CC-Capacity of Gaussian-G-RCM-2 and Randomized Strategies


L. Consider an example G-RCM of H. called Gaussian-G-RCM-2 with quadratic cost function defined as follows.

Y_i = C_{i,i−1} Y_{i−1} + D_i A_i + V_i, Y_{−1} = y_{−1},
P_{V_i|V^{i−1}, A^i, Y_{−1}} = P_{V_i}, V_i ~ N(0, K_{V_i}), K_{V_i} > 0,
γ_i(a_i, y_{i−1}) ≜ ⟨a_i, R_i a_i⟩ + ⟨y_{i−1}, Q_{i,i−1} y_{i−1}⟩.


By I. the characterization of FTH-DI information CC-Capacity is














J_{A^n→Y^n}(π^*, κ) = sup_{𝒫°_{[0,n]}(κ)} Σ_{i=0}^n ∫ log( dQ_i(·|y_{i−1}, a_i) / dΠ_i^*(·|y_{i−1}) (y_i) ) P^*(dy_i, dy_{i−1}, da_i)

≡ sup Σ_{i=0}^n I(A_i; Y_i | Y_{i−1}),

Π_i^*(dy_i|y_{i−1}) = ∫_{𝔸_i} Q_i(dy_i|y_{i−1}, a_i) π_i(da_i|y_{i−1}),

𝒫°_{[0,n]}(κ) ≜ { π_i(da_i|y_{i−1}), i=0, . . . , n : (1/(n+1)) Σ_{i=0}^n E^π{ ⟨A_i, R_i A_i⟩ + ⟨Y_{i−1}, Q_{i,i−1} Y_{i−1}⟩ } ≤ κ }.

The optimal randomized strategy of the above characterization of FTH information CC-Capacity is now computed, using several steps.


(a) Gaussian Properties of Characterization of FTH Information CC-Capacity. Using dynamic programming or the maximum entropy property of processes with fixed second moments, the optimal strategies are Gaussian denoted by {πig(dai|yi-1): i=0, . . . , n}∈P̊[0,n](κ), and the joint process is jointly Gaussian denoted by {(Ai, Yi)≡(Aig, Yig): i=0, . . . , n}.


(b) Realization of Optimal Strategies. Since {(Aig, Yig): i=0, . . . , n} is jointly Gaussian, strategies from the set P̊[0,n](κ), can be realized by linear and Gaussian randomized strategies defined by the set








𝒫̄_{[0,n]}(κ) = { A_i^g = e_i(Y_{i−1}^g, Z_i^g), e_i(Y_{i−1}^g, Z_i^g) = g_i(Y_{i−1}^g) + Z_i^g = Γ_{i,i−1} Y_{i−1}^g + Z_i^g, Z_i^g ⊥ Y^{g,i−1}, {Z_i^g: i=0, . . . , n} an independent process, Z_i^g ~ N(0, K_{Z_i}), K_{Z_i} ∈ 𝕊_{+}^{q×q}, i=0, . . . , n : (1/(n+1)) E^e{ Σ_{i=0}^n [ ⟨A_i^g, R_i A_i^g⟩ + ⟨Y_{i−1}^g, Q_{i,i−1} Y_{i−1}^g⟩ ] } ≤ κ }

where ⋅⊥⋅ means the processes are independent.


Moreover, the characterization of the FTH information CC-Capacity is













J_{A^n→Y^n}(π^{g,*}, κ) = sup { (1/2) Σ_{i=0}^n ln( |D_i K_{Z_i} D_i^T + K_{V_i}| / |K_{V_i}| ) }.

(c) Dual Role of Randomized Control Strategies. Since the optimal control strategies admit the decomposition

A_i^g = Γ_{i,i−1} Y_{i−1}^g + Z_i^g ≡ g_i(Y_{i−1}^g) + Z_i^g, i=0, . . . , n


then we have the following: (i) the feedback control law or strategy {g_i ≡ Γ_{i,i−1}: i=0, . . . , n} is responsible for controlling the output process {Y_i^g: i=0, . . . , n}, and (ii) the orthogonal innovations process {Z_i^g: i=0, . . . , n} is responsible for communicating new information to the output process, both chosen to maximize J_{A^n→Y^n}(π^{g,*}, κ).
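The dual role can be seen in a short simulation sketch (scalar plant and gains are assumptions): the deterministic gain Γ stabilizes an open-loop-unstable output, while the innovations Z_i carry information that a decoder can extract from the output innovations.

    import numpy as np

    # A_i = Gamma*Y_{i-1} + Z_i on the assumed scalar plant y_i = c*y_{i-1} + d*a_i + v_i.
    c, d, kv = 1.2, 1.0, 0.2            # open-loop unstable plant (|c| > 1)
    Gamma, kz = -1.0, 0.5
    rng = np.random.default_rng(0)
    y, ys, zs, innov = 0.0, [], [], []
    for _ in range(20000):
        z = np.sqrt(kz) * rng.standard_normal()
        a = Gamma * y + z
        y_new = c * y + d * a + np.sqrt(kv) * rng.standard_normal()
        innov.append(y_new - (c + d * Gamma) * y)   # what a decoder observes: d*z + v
        ys.append(y_new); zs.append(z)
        y = y_new
    print("output variance (control role):", np.var(ys))
    rho = np.corrcoef(zs, innov)[0, 1]
    print("per-step rate estimate (coding role), nats:", -0.5 * np.log(1 - rho ** 2))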

    • (d) Decentralized Separation Principle of Optimal Randomized Strategies. Let {(A_i^{g,*}, Y_i^{g,*}, Z_i^{g,*}): i=0, . . . , n} denote the optimal joint process corresponding to (c) above. The cost-to-go C_i: 𝕐^{i−1} → ℝ (corresponding to J_{A^n→Y^n}(π^{g,*}, κ)), from time "i" to terminal time "n", satisfies the dynamic programming recursions









C_n(y_{n−1}) = sup { (1/2) log( |D_n K_{Z_n} D_n^T + K_{V_n}| / |K_{V_n}| ) − s tr( R_n K_{Z_n} ) − s[ ⟨u_n, R_n u_n⟩ + ⟨y_{n−1}, Q_{n,n−1} y_{n−1}⟩ ] + s(n+1)κ },

C_i(y_{i−1}) = sup { (1/2) log( |D_i K_{Z_i} D_i^T + K_{V_i}| / |K_{V_i}| ) − s tr( R_i K_{Z_i} ) − s[ ⟨u_i, R_i u_i⟩ + ⟨y_{i−1}, Q_{i,i−1} y_{i−1}⟩ ] + E{ C_{i+1}(Y_i^{g,*}) | Y_{i−1}^{g,*} = y_{i−1} } }, i=0, . . . , n−1

where s≥0 is the Lagrange multiplier associated with the average constraint. The solution of the dynamic programming equations is given by the following equations.

C_i(y_{i−1}) = −s⟨y_{i−1}, P(i) y_{i−1}⟩ + r(i), i=0, . . . , n


where {r(i): i=0, . . . , n−1} satisfies the recursions







r(i) = r(i+1) + sup_{K_{Z_i} ∈ 𝕊_{+}^{q×q}} { (1/2) log( |D_i K_{Z_i} D_i^T + K_{V_i}| / |K_{V_i}| ) − s tr( P(i+1)[ D_i K_{Z_i} D_i^T + K_{V_i} ] ) − s tr( R_i K_{Z_i} ) },

r(n) = sup_{K_{Z_n} ∈ 𝕊_{+}^{q×q}} { (1/2) log( |D_n K_{Z_n} D_n^T + K_{V_n}| / |K_{V_n}| ) − s tr( R_n K_{Z_n} ) + s(n+1)κ }

and {P(i): i=0, . . . , n} is a solution of the Riccati difference matrix equation

P(i) = C_{i,i−1}^T P(i+1) C_{i,i−1} + Q_{i,i−1} − C_{i,i−1}^T P(i+1) D_i ( D_i^T P(i+1) D_i + R_i )^{−1} ( C_{i,i−1}^T P(i+1) D_i )^T, i=0, . . . , n−1,
P(n) = Q_{n,n−1}.


The optimal randomized control strategy is given by

Aig,*=gi*(Yi-1g,*)+Zig,*,i=0, . . . ,n


where its random part {Zig,*: i=0, . . . , n} is the solution to the above recursions, and its deterministic part is given by














g_i^*(y_{i−1}) = −( D_i^T P(i+1) D_i + R_i )^{−1} D_i^T P(i+1) C_{i,i−1} y_{i−1} ≡ Γ^*_{i,i−1} y_{i−1}, i=0, . . . , n−1,

g_n^*(y_{n−1}) = 0.

The corresponding covariance K_{Y_i} ≜ E{ Y_i^g (Y_i^g)^T }, i=0, 1, . . . , n, is given by the following equation.

K_{Y_i} = ( C_{i,i−1} + D_i Γ^*_{i,i−1} ) K_{Y_{i−1}} ( C_{i,i−1} + D_i Γ^*_{i,i−1} )^T + D_i K_{Z_i} D_i^T + K_{V_i}, i=0, . . . , n, K_{Y_{−1}} = given.


The Lagrange multiplier s ≥ 0 is found from the problem

inf_{s≥0} { −s⟨y_{−1}, P(0) y_{−1}⟩ + r(0) }


The characterization of the FTH-DI extremum problem is given by

J_{A^n→Y^n}(π^{g,*}, κ) = −s ∫_{𝕐_{−1}} ⟨y_{−1}, P(0) y_{−1}⟩ P_{Y_{−1}}(dy_{−1}) + r(0).


where s is the value found above.
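
For illustration only, the following minimal Python sketch (scalar case; the horizon and the numerical parameter values are assumptions, not part of the disclosure) iterates the Riccati difference equation for P(i) backwards from P(n)=Qn,n-1 and evaluates the deterministic gains Γ*i,i-1 of the optimal strategy.

# Minimal scalar sketch (assumed values) of the backward Riccati recursion P(i)
# and the deterministic gains Gamma*_{i,i-1} given above.
n = 10                                      # horizon (assumption)
C, D, R, Q, M = 1.2, 1.0, 1.0, 0.5, 0.5     # Ci,i-1=C, Di=D, Ri=R, Qi,i-1=Q, Qn,n-1=M (assumptions)

P = [0.0] * (n + 1)
P[n] = M                                    # terminal condition P(n) = Qn,n-1
for i in range(n - 1, -1, -1):
    # P(i) = C'P(i+1)C + Q - C'P(i+1)D (D'P(i+1)D + R)^{-1} (C'P(i+1)D)'
    P[i] = C * P[i + 1] * C + Q - (C * P[i + 1] * D) ** 2 / (D * P[i + 1] * D + R)

# Gamma*_{i,i-1} = -(D'P(i+1)D + R)^{-1} D'P(i+1)C, with g_n* = 0
gains = [-(D * P[i + 1] * C) / (D * P[i + 1] * D + R) for i in range(n)] + [0.0]
print(P[0], gains[0])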


(e) Water-filling Solution of Encoder Part of Randomized Strategy. The above solution illustrates the decentralized separation between the computation of the optimal deterministic part {g*i(yi-1): i=0, . . . , n} and the optimal random part {K*Zi: i=0, . . . , n} (covariance of the innovations process) of the randomized control strategy; the latter is found from a sequential water-filling problem via the recursions r(i) that depend on the solution of the matrix Riccati difference equation. Clearly, optimal randomized control strategies have a dual role: to control the controlled process, precisely as in stochastic optimal control theory of Gaussian control systems with quadratic pay-off, and to transmit new information via the random part {Zig: i=0, . . . , n}. This means information data are mapped into the optimal innovations process with covariance {K*Zi: i=0, . . . , n}, and then transmitted over the stochastic system, which acts as a communication channel.
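
As a sketch of the sequential water-filling computation of the random part, the scalar Python fragment below (all values are assumptions chosen only for illustration) maximizes the per-stage term of the r(i) recursion over KZi≥0 for a fixed Lagrange multiplier s and a fixed P(i+1); the closed-form maximizer follows from differentiating the concave per-stage term and projecting onto KZi≥0.

import math

s, P_next = 0.1, 0.8            # Lagrange multiplier s and P(i+1) (assumptions)
D, R, K_V = 1.0, 1.0, 0.5       # Di, Ri, K_{Vi} (assumptions)

def stage_term(k):
    # per-stage term of r(i): 0.5*log((D^2 k + K_V)/K_V) - s*P(i+1)*(D^2 k + K_V) - s*R*k
    return 0.5 * math.log((D**2 * k + K_V) / K_V) - s * P_next * (D**2 * k + K_V) - s * R * k

# water-filling level: stationary point of the concave per-stage term, projected onto k >= 0
K_Z_star = max(0.0, 1.0 / (2.0 * s * (P_next + R / D**2)) - K_V / D**2)
print(K_Z_star, stage_term(K_Z_star))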


(f) Connection to Stochastic Optimal Control Problem. From the above solution, we can recover, as a degenerate case, the optimal strategies of Gaussian control problems with quadratic pay-off as follows.


The dual of ( ) is given by











κ0,n(C) ≜ inf_{ (Γi,i-1, KZi), i=0, . . . ,n : (1/(2(n+1))) Σ_{i=0}^{n} ln( |DiKZiDiT+KVi| / |KVi| ) ≥ C } Σ_{i=0}^{n} Eg( ⟨Aig, RiAig⟩ + ⟨Yi-1g, Qi,i-1Yi-1g⟩ )

= inf_{ Aig=gi(Yg,i-1), i=0, . . . ,n } Eg{ Σ_{i=0}^{n} ( ⟨Aig, RiAig⟩ + ⟨Yi-1g, Qi,i-1Yi-1g⟩ ) }, if KZi=0, i=0, . . . ,n.









The second identity holds if randomized control strategies are restricted to deterministic strategies. Hence,

JAn→Yn(πg,*,κ)=0 if K*Zi=0, i=0, . . . ,n.


and the degenerate optimization problem is the stochastic optimal control of Gaussian control systems with quadratic pay-off.


CC-Capacity: Per Unit Time of FTH Information CC-Capacity


M. Consider the Gaussian-G-RCM-2 of L. Here, we compute the capacity of the control system, i.e., the per unit time limit of the FTH information CC-Capacity. Suppose the control system is time-invariant, with {Ci,i-1=C, Di=D, KVi=KV, Ri=R, i=0, . . . , n, Qi,i-1=Q, i=0, . . . , n−1, Qn,n-1=M}, and the following conditions hold.

    • i) the pair (C, D) is stabilizable,
    • ii) the pair (G, C) is detectable, where Q=GTG, G∈ℝp×p.


Then the CC-Capacity of the control system is the per unit time limit of the characterization of the FTH information CC-Capacity, given by










C(κ) = JA∞→Y∞(πg,*,κ) ≜ limn→∞ (1/(n+1)) JAn→Yn(πg,*,κ) = J*(s*) = infs≥0 J*(s)

















where








J*(s) = sup_{KZ∈S+q×q} { (1/2) log( |DKZDT+KV| / |KV| ) + sκ − s tr(RKZ) − s tr( P[ DKZDT+KV ] ) },

g∞,*(y) = −( DTPD+R )−1DTPCy ≡ Γ∞,*y,

P = CTPC + Q − CTPD( DTPD+R )−1( CTPD )T,

spec( C + DΓ∞,* ) ⊂ Do







where spec(⋅) denotes the set of eigenvalues and Do ≜ {c∈ℂ: |c|<1} is the open unit disc of the set of complex numbers ℂ. The Lagrange multiplier s*(κ) can be found from the average constraint

tr(RKZ)+tr(P[DKZDT+KV])≤κ.


Thus, the predictable part of the optimal randomized control strategy g∞,*(y) ensures existence of a unique invariant distribution PYg∞,*(dy) of the optimal output process {Y*i: i=0, . . . , n} corresponding to (g∞,*(⋅), K*Z), i.e., stability of the closed loop system, and hence C(κ) is operational, i.e., it is the CC-capacity of the control system.
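
For illustration, a minimal scalar Python sketch of the stationary quantities of M. is given below (all parameter values are assumptions); it approximates P by iterating the Riccati fixed point, forms Γ∞,*, and checks the scalar version of the stability condition spec(C+DΓ∞,*)⊂Do.

# Scalar sketch (assumed values) of the stationary quantities of M.
C, D, R, Q, K_V = 1.3, 1.0, 1.0, 0.5, 0.4

P = Q
for _ in range(1000):            # fixed-point iteration of P = C'PC + Q - C'PD(D'PD+R)^{-1}(C'PD)'
    P = C * P * C + Q - (C * P * D) ** 2 / (D * P * D + R)

Gamma = -(D * P * C) / (D * P * D + R)       # Gamma^{infinity,*}
stable = abs(C + D * Gamma) < 1.0            # eigenvalue of the closed loop inside the unit disc
print(P, Gamma, stable)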


N. Consider an example Gaussian-G-RCM-2 of L., with parameters p=q=1, R=1, Q=0, and (C, D) arbitrary. For these choices of parameters we have the following.










(Γ∞,*, KZ*) =
 (0, κ), κ ∈ [0, ∞), if |C| < 1,
 ( −(C2−1)/(CD), (D2κ + KV(1−C2))/(C2D2) ), κ ∈ [κmin, ∞), κmin = (C2−1)KV/D2, if |C| > 1,
 ( −(C2−2)/(CD), 0 ), κ ∈ [0, κmin], if |C| > 1













and










C(κ) =
 (1/2) ln( (D2κ + KV)/KV ), if |C| < 1, i.e., KZ* = κ,
 (1/2) ln( (D2KZ* + KV)/KV ), if |C| > 1, κ ∈ [κmin, ∞),
 0, if |C| > 1, κ ∈ [0, κmin].    (.12)







Clearly, if |C|<1, i.e., the system is stable, the deterministic part of the strategy is zero, i.e., Γ∞,*=0. The capacity formula C(κ) illustrates that there are multiple regimes, depending on whether the control system is stable, |C|<1, or unstable, |C|>1. Moreover, for unstable control systems, |C|>1, the optimal pay-off is zero unless the power level κ exceeds the critical level κmin.
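
The piecewise formula can be evaluated directly; the short Python sketch below (parameter values are assumptions used only for illustration) mirrors the expression above, including the critical power level κmin in the unstable regime.

import math

D, K_V = 1.0, 1.0                            # assumed scalar parameters

def capacity(C, kappa):
    # piecewise capacity C(kappa) of N. for the scalar Gaussian-G-RCM-2
    if abs(C) < 1:                           # stable: Gamma^{infinity,*} = 0 and K_Z* = kappa
        return 0.5 * math.log((D**2 * kappa + K_V) / K_V)
    kappa_min = (C**2 - 1) * K_V / D**2      # critical power level for |C| > 1
    if kappa <= kappa_min:
        return 0.0
    K_Z = (D**2 * kappa + K_V * (1 - C**2)) / (C**2 * D**2)
    return 0.5 * math.log((D**2 * K_Z + K_V) / K_V)

print(capacity(0.5, 2.0), capacity(1.5, 2.0))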


Encoder Design for CC-Capacity of Gaussian-G-RCM-2


O. Consider an example Gaussian-G-RCM-2 of L.


We illustrate via an example of an information process or tracking process, as shown in FIG. 1, how to design the encoder.


Consider a Gaussian information process {Xi: i=0, 1, . . . , n} taking values in X ≜ ℝq (which is to be encoded by the optimal randomized strategy) and described by a Gaussian Linear State Space Model (G-LSSM), as follows.

Xi+1=AiXi+GiWi,X0=x,i=0, . . . ,n−1


where {Wi˜N(0, KWi): i=0, . . . , n−1} are ℝk-valued zero-mean Gaussian processes, independent of the noises driving the Gaussian-G-RCM-2 of the control system, i.e., {(Ai, Vi): i=0, 1, . . . , n}, and the initial state X0 is either fixed, i.e., X0=x, or its distribution is fixed and Gaussian, i.e., PX0(dx)˜N(0, KX0), and independent of {Wi: i=0, . . . , n}. Next, we illustrate how to transform the optimal randomized strategy of L. into a controller-encoder and a mean-square error decoder, based on a variation of the Kalman filter. Consider the optimal randomized strategy of L., specifically, {(Γ*i,i-1, K*Zi): i=0, 1, . . . , n−1}, with corresponding optimal distribution {πig,*(dai|yi-1): i=0, . . . , n} and joint process {(Aig,*, Yig,*): i=0, . . . , n}. Define the filter estimate and conditional covariance, for i=0, . . . , n, by

X̂i|i-1 ≜ E{Xi|Yg,*,i-1},
Σi|i-1 ≜ E{(Xi−X̂i|i-1)(Xi−X̂i|i-1)T|Yg,*,i-1}.


(a) Controller-Encoder Strategy. The following controller-encoder strategy achieves the characterization of FTH information CC-Capacity ( ). (For any square matrix D with real entries, D1/2 denotes its square root.)

Aig,*=ē*i(Xi,Yg,*,i-1)=Γ*i,i-1Yi-1g,*+Δ*i{Xi−X̂i|i-1},
Δ*i=K*Zi1/2Σi|i-1−1/2, Δ*i>0,
Yig,*=(Ci,i-1+DiΓ*i,i-1)Yi-1g,*+DiΔ*i{Xi−X̂i|i-1}+Vi, i=0,1, . . . ,n.


Moreover, from the properties of Kalman filter, the following hold.


(b) Filter Estimates. The innovations process defined by {v*i ≜ Yig,*−E{Yig,*|Yg,*,i-1}: i=0, . . . , n} satisfies

v*i=Yig,*−(Ci,i-1+DiΓ*i,i-1)Yi-1g,*=DiΔ*i{Xi−X̂i|i-1}+Vi, i=0, . . . ,n,
E{v*i|Yg,*,i-1}=E{v*i}=0, i=0, . . . ,n,
E{v*i(v*i)T|Yg,*,i-1}=DiK*ZiDiT+KVi=E{v*i(v*i)T}


and the sequence of RVs, {v*i: i=0, . . . , n} is uncorrelated.


The optimal filter estimates and conditional covariances satisfy the following recursions.



















X̂i+1|i = AiX̂i|i-1 + Ψi|i-1{ Yig,* − (Ci,i-1+DiΓ*i,i-1)Yi-1g,* } = AiX̂i|i-1 + Ψi|i-1v*i, i=0, . . . ,n, X̂0|-1 = given,

Σi+1|i = AiΣi|i-1AiT − AiΣi|i-1(DiΔ*i)T[ DiK*ZiDiT+KVi ]−1(DiΔ*i)Σi|i-1AiT + GiKWiGiT, Σ0|-1 = given














where the filter gains {Ψi|i-1: i=0, . . . , n} and the output process {Yig,*: i=0, . . . , n} are defined by

Ψi|i-1 ≜ AiΣi|i-1(DiΔ*i)T[DiK*ZiDiT+KVi]−1
Yig,* ≜ (Ci,i-1+DiΓ*i,i-1)Yi-1g,*+v*i, i=0, . . . ,n.


Moreover, the σ-algebra generated by {Ykg,*: k=0, 1, . . . , i}, denoted by F0,iYg,* ≜ σ{Y0g,*, Y1g,*, . . . , Yig,*}, satisfies F0,iYg,* = F0,iv* ≜ σ{v*0, v*1, . . . , v*i}, i=0, . . . , n.
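
For illustration, the scalar Python simulation below exercises the controller-encoder of (a) together with the filter recursions of (b) for the constant-message case Ai=1, Gi=0 of Q.(a); the time-invariant parameters and the pair Γ=0, KZ=κ (the stable-case choice of N.) are assumptions made only for this sketch.

import math, random

random.seed(0)
n = 50
C, D, K_V = 0.5, 1.0, 1.0        # control system Y_i = C*Y_{i-1} + D*A_i + V_i (assumptions)
A, G, K_W = 1.0, 0.0, 0.0        # message model X_{i+1} = A*X_i + G*W_i (constant RV message)
Gamma, K_Z = 0.0, 2.0            # Gamma = 0, K_Z = kappa for the stable case (assumed kappa)

X = random.gauss(0.0, 1.0)       # message X ~ N(0, 1)
x_hat, Sigma, Y_prev = 0.0, 1.0, 0.0

for i in range(n):
    Delta = math.sqrt(K_Z / Sigma)                          # Delta_i* = K_Z^{1/2} Sigma^{-1/2}
    A_i = Gamma * Y_prev + Delta * (X - x_hat)              # controller-encoder e_i*(X, Y^{i-1})
    Y = C * Y_prev + D * A_i + random.gauss(0.0, math.sqrt(K_V))
    nu = Y - (C + D * Gamma) * Y_prev                       # innovation v_i*
    S = D**2 * K_Z + K_V                                    # innovations variance
    Psi = A * Sigma * (D * Delta) / S                       # filter gain Psi_{i|i-1}
    x_hat = A * x_hat + Psi * nu                            # decoder estimate of the message
    Sigma = A * Sigma * A - (A * Sigma * D * Delta) ** 2 / S + G * K_W * G
    Y_prev = Y

print(X, x_hat, Sigma)           # Sigma decays geometrically, so x_hat approaches X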


(c) Realization of Optimal Randomized Strategy of L. The controller-encoder strategy {ē*i(⋅,⋅): i=0, 1, . . . , n} realizes the optimal randomized strategy, that is

Pē*(Aig,*∈dai|yi-1)=πig,*(dai|yi-1)~N(Γ*i,i-1Yi-1g,*,K*Zi), i=0, . . . ,n.


(d) FTH Information CC-Capacity Achieving Controller-Encoder Strategy. The strategy {ē*i(⋅,⋅): i=0, . . . , n} achieves the FTH information CC-Capacity, that is, the following identities hold.











limn→∞ (1/(n+1)) I(Xn→Yg,*,n) = limn→∞ (1/(n+1)) Σ_{i=0}^{n} I(Xi; Yig,* | Yg,*,i-1)

= limn→∞ (1/(n+1)) Σ_{i=0}^{n} I(Aig,*; Yig,* | Yg,*,i-1)

= limn→∞ (1/(n+1)) Σ_{i=0}^{n} { H(v*i) − H(Vi) }

= JA∞→Y∞(πg,*,κ) = C(κ).










CC-Capacity Achieving Controller-Encoder


P. Consider an example Gaussian-G-RCM-2 of M. The method of O. can be repeated by replacing {(Γ*i,i-1, K*Zi): i=0, 1, . . . , n−1} in (a)-(d) of O. by (Γ∞,*, K*Z) obtained in M.


Q. Consider the example of N.


(a) Gaussian RV Message X˜N(0, σX2). Then the message is Xi+1=Xi, X0=X, i=0, . . . , n−1. The Rate Distortion Function (RDF) of the Gaussian RV X subject to Mean-Square Error distortion is given by







R(Δ) = inf_{ X̂: E|X−X̂|2≤Δ } I(X; X̂) = (1/2) log( max( 1, σX2/Δ ) )








Let X̂i|i ≜ E{X|Yg,*,i}, i=0, . . . , n be the decoder of message X. Calculating the time-invariant scalar version (i.e., p=q=1) of the optimal controller-encoder of O., after (n+1) uses of the control system, the Mean-Square Error (MSE) of the decoder decays geometrically, according to the expression

E|X−X̂n|n|2=σX2e−2C0,n(κ)


where for large enough n, we have C0,n(κ)=(n+1)C(κ), and C(κ) is the capacity of the control system given in N. for the cases |C|<1 (stable) and |C|>1 (unstable). Letting Δ=Σn|n we obtain

R(Δ)=C0,n(κ).


This implies the controller-encoder-decoder strategy meets the RDF with equality, and no other controller-encoder-decoder scheme, no matter how complex, can achieve a smaller MSE. Moreover, limn→∞E|X−X̂n|n|2=0. We can also compute the error probability for any finite n.
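
A short numeric check of the geometric MSE decay and of the RDF matching claimed above (the scalar values and the stable-case capacity formula are assumptions used only for illustration):

import math

sigma_X2, D, K_V, kappa = 1.0, 1.0, 1.0, 2.0            # assumed scalar values, stable case |C| < 1
C_kappa = 0.5 * math.log((D**2 * kappa + K_V) / K_V)    # per-use capacity C(kappa)

for n in (10, 50, 100):
    mse = sigma_X2 * math.exp(-2 * (n + 1) * C_kappa)   # E|X - X_hat_{n|n}|^2
    rdf = 0.5 * math.log(max(1.0, sigma_X2 / mse))      # R(Delta) at Delta = mse
    print(n, mse, rdf, (n + 1) * C_kappa)               # R(Delta) equals C_{0,n}(kappa)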


(b) Equiprobable messages X∈{0, 1, . . . , Mn}, n=0, 1, . . . . Similar to (a), we can apply the Schalkwijk-Kailath coding scheme to show that, for Mn ≜ exp{(n+1)R} such that







R < (1/(n+1)) C0,n(κ),





the probability of Maximum Likelihood (ML) decoding error decreases doubly exponentially in (n+1).


(c) The analysis of the multi-dimensional versions is also feasible using the method described.


R. In the method of L.-Q.


(i) the optimal controller-encoder strategy controls the controlled process {Yi: i=0, . . . , n} and encodes the information process {Xi: i=0, . . . , n}, and


(ii) the information process is reconstructed at the output of the control system, using a minimum mean-square error decoder, via the Kalman filter.


(iii)







C(κ) = limn→∞ (1/(n+1)) JAn→Yn(πg,*,κ) = limn→∞ (1/(n+1)) I(Xn→Yg,*,n)










is a variant of per unit time infinite horizon stochastic optimal control theory; it reveals that optimal randomized strategies are capable of information transfer, over the control system, which acts as a feedback channel, precisely as in Shannon's operational definition of Capacity of Noisy Channels.


S. The methods described in this patent for simultaneous control and encoding of information apply to any dynamical system model with inputs and outputs, such as biological control models, financial portfolio optimization and hedging models, quantum dynamical control systems, etc.

Claims
  • 1. A computer-implemented method for simultaneously controlling a control system and transmitting information signals in the control system, the method comprising: receiving, at a controller of a control system, an information signal to be propagated through the control system, where the information signal includes data to be transmitted through the control system; determining, at the controller of the control system, a control signal to control one or more operations in the control system; encoding, at the controller of the control system, the received information signal to produce an encoded information signal; determining, at the controller of the control system, a two-part signal comprising: 1) the control signal, and 2) the encoded information signal, where the two-part signal is based on a transmission rate below a control-coding capacity of the control system, where the control-coding capacity is a maximum rate of transmitting signals through the control system; transmitting, by the controller, the two-part signal using the control system as a communication channel.
  • 2. The computer-implemented method of claim 1, wherein encoding the information signal, further includes: encoding parameter values, or improper or failure regimes of actuator devices.
  • 3. The computer-implemented method of claim 1, wherein encoding the information signal message further includes: removing redundancy in data.
  • 4. The computer-implemented method of claim 1, further comprising: receiving, at one or more devices of the control system, a signal that conveys information about the information signal; and extracting, at the one or more devices, the information signal.
  • 5. The computer-implemented method of claim 4, further comprising: determining, at one or more devices of the control system, feedback for the controller; and transmitting, by the one or more devices of the control system, a process output signal comprising the encoded information signal.
  • 6. The computer-implemented method of claim 5, further comprising: receiving, at a decoder, the process output signal; and decoding, at the decoder, the encoded information based on an encoder scheme.
  • 7. The computer-implemented method of claim 6, wherein the two-part message indicates the encoder scheme.
  • 8. The computer-implemented method of claim 5, further comprising: receiving the process output signal at one or more of: 1) a controller, or 2) a decoder device.
  • 9. The computer-implemented method of claim 1, wherein the control system is a dynamic system, and further includes a communication channel with memory.
  • 10. The computer-implemented method of claim 1, wherein encoding the received information signal comprises: determining the control-coding capacity of the control system based on a system model of the control system; and implementing a coding scheme to encode the information signals based on the determined control-coding capacity.
  • 11. The computer-implemented method of claim 10, wherein the system model of the control system comprises one or more of: power constraints, directed information pay-off, noise distributions, cost functions, feedback models, or an information rate of the control system.
  • 12. The computer-implemented method of claim 10, wherein determining the control-coding capacity includes solving the optimization of a representation of the control-coding capacity based on the system model, wherein the representation of the control-coding capacity includes one or more conditional control input distributions and one or more cost functions.
  • 13. The computer-implemented method of claim 10, wherein transmitting the two-part signal further comprises: transmitting the two-part signal based on the control-coding capacity to achieve optimal performance of one or more control objectives.
  • 14. The computer-implemented method of claim 1, wherein the control system is a dynamic system including one or more inputs, one or more outputs, and one or more controllers.
  • 15. A system for simultaneously controlling a dynamic system and transmitting information signals in the dynamic system, the system comprising: a controller for generating and transmitting control signals to control one or more operations of a dynamic system; an encoder communicatively coupled to the controller, one or more non-transitory storage media configured to store processor-executable instructions; and one or more processors operatively connected to the controller and the one or more non-transitory storage media and configured to execute the processor-executable instructions to cause the system to: receive, at the controller, an information signal to be propagated through the dynamic system, where the information signal includes data to be transmitted through the dynamic system; determining, at the controller, a control signal to control one or more operations in the dynamic system; encoding, at the encoder, the received information signal to produce an encoded information signal; determining, at the controller, a two-part signal comprising: 1) the control signal, and 2) the encoded information signal, where the two-part signal is based on a transmission rate below a control-coding capacity of the dynamic system, where the control-coding capacity is a maximum rate of transmitting signals through the dynamic system; transmitting, by the controller, the two-part signal using the dynamic system as a communication channel.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 15/461,462, which was entitled “The Information Transfer in Stochastic Optimal Control Theory with Information Theoretic Criterial and Application” and filed on Mar. 16, 2017 which claims priority to and the benefit of the filing date of U.S. Provisional Patent Application Ser. No. 62/309,315, which was entitled “The Information Transfer in Stochastic Optimal Control Theory with Information Theoretic Criterial and Application” and filed on Mar. 16, 2016. The entire disclosure of this application is hereby expressly incorporated by reference herein for all uses and purposes.

US Referenced Citations (9)
Number Name Date Kind
6192238 Piirainen Feb 2001 B1
7676007 Choi et al. Mar 2010 B1
20050010400 Murashima Jan 2005 A1
20110125702 Gorur Narayana Srinivasa et al. May 2011 A1
20110128922 Chen Jun 2011 A1
20130007264 Effros et al. Jan 2013 A1
20130287068 Ashikhmin et al. Oct 2013 A1
20150098533 Rusek Apr 2015 A1
20150312601 Novotny Oct 2015 A1
Foreign Referenced Citations (2)
Number Date Country
WO-2008024967 Feb 2008 WO
WO-2009084896 Jul 2009 WO
Non-Patent Literature Citations (46)
Entry
C.E. Shannon, “A mathematical theory of communication”, Bell Syst. Tech. J., vol. 27, Jul. 1948 (Reprinted).
C.E. Shannon, “Coding theorems for a discrete source with a fidelity criterion,” IRE Nat. Conv. Rec., pt. 4, vol. 27, pp. 325-350, Jul. 1950.
T.M. Cover and J.A. Thomas, Elements of Information Theory, 2nd ed. John Wiley & Sons, Inc., Hoboken, New Jersey, 2006.
P.R. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification, and Adaptive Control. Prentice Hall, 1986.
C.D. Charalambous, C.K. Kourtellaris, and I. Tzortzis, “Information transfer in stochastic optimal control with randomized strategies and directed information criterion,” in 55th IEEE Conference on Decision and Control (CDC), Dec. 12-14, 2016.
J.L. Massey, “Causality, feedback and directed information”, in International Symposium on Information Theory and its Applications (ISITA '90), Nov. 27-30, 1990 (Pre-Print).
H. Permuter, T. Weissman, and A Goldsmith, “Finite state channels with time-invariant deterministic feedback,” IEEE Trans. Inf. Theory, vol. 55, No. 2, pp. 644-662, Feb. 2009.
C.D. Charalambous and P.A. Stavrou, “Directed information on abstract spaces: Properties and variational equalities”, IEEE Transactions on Information Theory, vol. 62, No. 11, pp. 6019-6052, 2016.
G. Kramer, “Capacity results for the discrete memoryless network”, IEEE Transactions on Information Theory, vol. 49, No. 1, pp. 4-21, Jan. 2003.
R. S. Liptser and A. N. Shiryaev, Statistics of Random Processes: I. General Theory, 2nd ed. Springer-Verlag, Berlin, New York 2001.
J.P.M. Schalkwijk and T. Kailath, “A coding scheme for additive noise channels with feedback-I: no bandwidth constraints,” IEEE Transactions on Information Theory, vol. 12, pp. 172-182, Apr. 1966.
Blahut, “Computation of channel capacity and rate-distortion functions”, IEEE Transactions on Information Theory, vol. 18, No. 4, pp. 460-473, Jul. 1972.
Chen et al., “The Capacity of Finite-State Markov Channels with Feedback”, IEEE Transactions on Information Theory, vol. 51, No. 3, pp. 780-798, Mar. 2005.
Cover et al., “Gaussian Feedback Capacity,” IEEE Transactions on Information Theory, vol. 35, No. 1, pp. 37-43, Jan. 1989.
Dobrushin, “General Formulation of Shannon's Main Theorem of Information Theory,” Usp. Math. Nauk., vol. 14, pp. 3-104, 1959, translated in Am. Math. Soc. Trans., 33:323-438.
Dobrushin, “Information Transmission in a Channel with Feedback,” Theory of Probability and its Applications, vol. 3, No. 2, pp. 367-383, 1958.
Elishco et al., “Capacity and Coding of the Ising Channel with Feedback,” IEEE Transactions on Information Theory, vol. 60, No. 9, pp. 3138-5149, Jun. 2014.
Gamal et al., Network Information Theory, Cambridge University Press, Dec. 2011.
Kim, “A Coding Theorem for a Class of Stationary Channels with Feedback,” IEEE Transactions on Information Theory, vol. 54, No. 4, pp. 1488-1499, Apr. 2008.
Kim, “Feedback Capacity of Stationary Gaussian Channels,” IEEE Transactions on Information Theory, vol. 56, No. 1, pp. 57-85, 2010.
Kramer, “Topics in Multi-User Information Theory,” Foundations and Trends in Communications and Information Theory, vol. 4, Nos. 4-5, pp. 265-444, 2007.
Permuter et al., “Capacity of a Post Channel With and Without Feedback,” IEEE Transactions on Information Theory, vol. 60, No. 9, pp. 5138-5149, 2014.
Permuter et al., “Capacity of the Trapdoor Channel with Feedback,” IEEE Transactions on Information Theory, vol. 56, No. 1, pp. 57-85, Apr. 2010.
Tatikonda et al., “The Capacity of Channels with Feedback,” IEEE Transactions on Information Theory, vol. 55, No. 1, pp. 323-349, Jan. 2009.
Verdu et al., “A General Formula for Channel Capacity,” IEEE Transactions on Information Theory, vol. 40, No. 4, pp. 1147-1157, Jul. 1994.
Yang et al., “Feedback Capacity of Finite-State Machine Channels,” Information Theory, IEEE Transactions on, vol. 51, No. 3, pp. 799-810, Mar. 2005.
Yang et al., “On the Feedback Capacity of Power-Constrained Gaussian Noise Channels with Memory,” IEEE Transactions on Information Theory, vol. 53, No. 3, pp. 929-954, Mar. 2007.
European Search Report in Corresponding EP Application No. 15165990.1, dated Sep. 24, 2015.
Non-Final Office Action in U.S. Appl. No. 14/700,495 dated Oct. 2, 2015.
Notice of Allowance in U.S. Appl. No. 14/700,495 dated Feb. 12, 2016.
C.D. Charalambous et al., “Stochastic Optimal Control with Randomized Strategies and Directed Information Criterion,” 8 pgs.
C.D. Charalambous et al., “Stochastic Optimal Control with Randomized Strategies and Directed Information Criterion.”
C.D. Charalambous et al., “The Value of Information & Information Transfer in Stochastic Optimal Control Theory,” 8 pgs.
R. E. Blahut, “Principles and Practice of Information Theory,” ser. in Electrical and Computer Engineering. Reading, MA: Addison-Wesley Publishing Company, 1987.
R. G. Gallager, “Information Theory and Reliable Communication,” John Wiley & Sons, Inc., Hoboken, New Jersey 2006.
I. I. Gihman and A. V. Skorohod, “Controlled Stochastic Processes,” Springer-Verlag, 1979.
O. Hernandez-Lerma and J. Lasserre, “Discrete-Time Markov Control Processes: Basic Optimality Criteria,” ser. Applications of Mathematics Stochastic Modelling and Applied Probability, Springer-Verlag, 1996.
G. Kramer, “Directed Information for Channels with Feedback,” Ph.D. dissertation, Swiss Federal Institute of Technology (ETH), Dec. 1998.
N. U. Ahmed and C. D. Charalambous, “Stochastic Minimum Principle for Partially Observed Systems Subject to Continuous and Jump Diffusion Processes and Driven by Relaxed Controls,” SIAM Journal on Control Optimization, vol. 51, No. 4, pp. 3235-3257, 2013.
H. Marko, “The Bidirectional Communication Theory—A Generalization of Information Theory,” IEEE Transactions on Communications, vol. 21, No. 12, pp. 1345-1351, Dec. 1973.
C. Kourtellaris and C. D. Charalambous, “Information Structures of Capacity Achieving Distributions for Feedback Channels With Memory and Transmission Cost: Stochastic Optimal Control & Variational Equalities—Part I,” IEEE Transactions on Information Theory, 2015, submitted, Nov. 2015.
P. E. Caines, “Linear Stochastic Systems,” ser. Wiley Series in Probability and Statistics, John Wiley & Sons, Inc., New York 1988.
M. Pinsker, “Information and Information Stability of Random Variables and Processes”, Holden-Day Inc., San Francisco, 1964.
S. Ihara, “Information Theory for Continuous Systems”, World Scientific 1993.
Non-Final Office Action for U.S. Appl. No. 15/461,462 dated Dec. 20, 2018.
Final Office Action for U.S. Appl. No. 15/461,462 dated Aug. 12, 2019.
Notice of Allowance for U.S. Appl. No. 15/461,462 dated Dec. 18, 2019.
Related Publications (1)
Number Date Country
20200257258 A1 Aug 2020 US
Provisional Applications (1)
Number Date Country
62309315 Mar 2016 US
Continuations (1)
Number Date Country
Parent 15461462 Mar 2017 US
Child 16859411 US