This patent application generally relates to coding of information signals in controllers of general control systems, and to using the control system to communicate the encoded information to processes connected to control systems, such as field devices, outputs, other controllers or processors. The invention further relates to: (i) controller design in control systems with a dual role, (ii) achieving stability and optimal performance with respect to control objectives, (iii) encoding information, (iv) transferring the information through the control system, and (v) decoding or estimating the information at outputs of control systems, with arbitrarily small error probability. The invention facilitates communication from one processor of a control system to another processor.
Current telecommunication and information technology systems are designed based on Shannon's operational definitions of coding-capacity, the maximum rate of communicating information over noisy channels, and coding-compression, the minimum rate of compressing information, which give the fundamental performance limitations for reliable communication. They utilize encoders and decoders to combat communication noise and to remove redundancy in data.
Current dynamical control systems are designed by utilizing feedback controllers, actuators and sensors, to ensure stability, robustness, and optimal control objectives and performance.
Industries related to modern control and communication systems have experienced tremendous growth due to their increasing applications in information technology, which affect the everyday lives of people. The next generation of engineering systems integrates control, communication, protocols, etc., to develop complex engineering systems, which can be implemented in energy systems, transportation systems, medical systems, surveillance networks, financial instruments, etc. Many of these applications consist of multiple control sub-systems and communication sub-systems integrated together to achieve control and communication objectives.
In the field of communication most systems are designed by utilizing encoders and decoders, to combat communication noise and to remove redundancy in data. Current telecommunication systems, whether point-to-point, network, mobile, etc., are designed, based on Shannon's operational definitions, which give the maximum rate of communicating information over noisy channels, and the minimum rate of compressing data generated by sources, called coding-capacity of communication channels, and coding-compression of information processes, respectively.
In the field of modern control theory and applications, controllers are designed to control the control system outputs, to optimize performance, and to achieve robustness with respect to uncertainties and noise. Over the years, the criterion of optimality of controllers has been the average of a real-valued sample path pay-off functional of the control, state and output processes, which is optimized to achieve optimal control performance. It is well-known that controllers are designed to control the control systems but not to encode information and communicate this information, through the control system, to other processes.
The general separation of control system design and communication system design has divided the community of developers into independent groups developing controllers for control systems, and communication systems for noisy channels. Shannon's operational definitions of achievable coding-capacity and coding-compression, which utilize encoders and decoders to combat communication noise and to remove redundancy in data, have not been applied beyond communication systems, for example, to the ability of a control system to transmit information.
To this date, no controller in a control system is designed to communicate information, from one controller, through the control system, to another controller, or to any of its outputs, or from elements of a control system to elements of another control system, using encoders and decoders as done in problems of information transmission over noisy channels.
To this date, Shannon's operational definitions of coding-capacity and coding-compression are not used beyond communication systems, such as in dynamical control systems. It is well-known that encoders are designed to encode and transmit information over noisy communication channels but not to control channel output processes.
In an embodiment, a method of designing a controller-encoder pair in control systems, general dynamical systems, or decision systems comprises: the controller controls the control system, the encoder encodes information signals and transmits the information over the control system to any element attached to it, and the encoded signals are reconstructed using decoders, with arbitrarily small error probability.
The method described above further includes determining the expression of Control-Coding Capacity (CC-Capacity) of the control system, for stable or unstable systems, wherein the expression represents the maximum rate, in bits/second, of simultaneously controlling the control system and encoding information, and of decoding the information at any processor attached to the control system, obtained by solving the expression of CC-Capacity.
The method described above further includes simultaneous
(i) optimal performance with respect to control objectives, and
(ii) optimal coding of information signals with respect to communication objectives.
The method described above further includes the specification of the minimum power necessary to meet the optimal performance with respect to control objectives, and the specification of the excess power which is converted into transmitting information through control systems with positive data rates.
In another embodiment, the controller-encoder comprises a deterministic part and a random part, wherein the deterministic part controls the control system, while the random part encodes information.
The method further includes determining how existing controllers, which are not designed to encode information, can be modified to encode information and transmit it over control systems, so that the information can be decoded at any processor attached to the control system, such as in control-to-control communication, etc.
The techniques of the present disclosure facilitate a method of controller design in control systems or decision systems, which is based on utilizing controllers, encoders and decoders, wherein the encoders and decoders are designed using techniques from communication system designs. Further, these techniques may be universal to any dynamical system with control inputs and outputs. The maximum rate for simultaneous control of the dynamical system and transmission of information over the dynamical system is called the Control-Coding Capacity (CC-Capacity).
The design of the controller-encoder for control systems aims at simultaneous control of the control system and communication of information from one processor to another processor. More specifically, the CC-Capacity of dynamical control systems identifies the interaction of control and information, and controller-encoders-decoders are designed to achieve simultaneously optimal performance with respect to control objectives, and optimal transmission of information, via the dynamical system, from one controller to another controller or output, etc.
Applications of the CC-Capacity to control systems, and in general, to any dynamical system with inputs and outputs, include biological, financial, quantum, etc., because any such system has CC-Capacity.
To this end, a methodology determines how to design control systems which, in addition to effectively controlling system outputs (e.g., stabilizing them), also encode information and communicate the information from one controller to any other controller, via the control system which acts as a communication channel, so that the information is decoded or estimated at other processors attached to the control system, with arbitrary precision.
Dynamical Systems or Control Systems with Communication Capabilities
The information signal 101 includes analog or digital data. For example, the information signal may include suitable computer-generated messages, including digital data, such as digital data representing pictures, text, audio, videos, or signals generated by other control systems, etc. Generally, the information signal 101 may include any number of devices or components of devices generating messages to be encoded by the randomized strategy, also called controller-encoder 102, and transmitted over the control system 106. The information signal 101 may include information sources and/or tracking signals as well as any other pertinent data that may be encoded and transmitted through the control system.
The randomized strategy called controller-encoder 102 encodes messages received from the information signals 101 {X0, X1, . . . , Xn} and uses feedback from the control system outputs {Y0, . . . , Yn} to generate the control process {A0, . . . , An}. The controller/encoder 102 may comprise a control module 103 and a coding module 104. The control module 103 may be used for typical control system activities, such as generating control signals for elements of the control system 106. The control module may also receive feedback 107 and other information related to the control system which may be implemented in controlling the control system. The coding module 104, which is not typical of previously known controller devices, may be implemented in encoding information signals 101 and transmitting the control process 105 as an encoded signal along with control signals to the control system 106. The controller/encoder 102 may implement the control module 103 and coding module 104 in any suitable combination to produce the control process 105. In some embodiments, the control process 105 can be represented as a randomized control signal or a randomized control action.
The coding module 104 may implement any of the techniques described below to encode the information signals 101. For example, the encoding module may be designed based on the capacity of the control system, known as the control-coding capacity, of one or more control systems 106 and then encode the signals based on the determined capacity. Further, the coding module 104 may also implement any known or future developed schemes for encoding the information signal 101.
The dynamical system also called control system 106 may include any number of systems, described by nonlinear recursive models driven by a control process and the noise process, such as, the dynamics of a ship, submarine, missile, plane, car, and any model of a dynamical system which is controlled by a controller, such as a process control system. The output of the control system 107 is available to the decoder or estimator 108 and comprises encoded information signals and control signal feedback. The decoder 108 may be any device suitable for receiving the encoded information signals from the process output 107 such as a sensor, field device, another controller, etc. In the example of
Previous systems would only transmit control signals through control systems. These previous systems lacked the capability of sending information signals in such a manner. For example, if information needed to be transmitted through a control system, or from one controller to another controller in the control system, this was not enabled by the previous systems. Instead, previous systems would require additional communication channels for communicating information signals, and they did not use the control systems as communication channels for transmitting information signals. In other words, by transmitting encoded messages which include both control signals and encoded information signals through the control system, the current system enables simultaneous control and data transmission. Further, the current system utilizes the control system as a communication channel and also provides more optimized communications through currently available control systems. The implementation of such techniques (using communication channel optimization techniques in a control system) was not previously considered.
The encoded message transmitted in block 240 may be utilized at two different junctures because the encoded message essentially contains two parts, an encoded information signal and one or more control signals. At block 250 the encoded information signal portion may be decoded. The encoded portion may be decoded by a decoder as described above with respect to
At block 260, the control signal portion may be received by the control system. The control signal portion may be used to control one or more elements of the control system. In an embodiment, once the control signal is received, feedback may be generated by the device or devices that have received and utilized the control signal. For example, feedback may be generated and transmitted as part of process output, such as process output 107 discussed above with respect to
Although blocks 250 and 260 are discussed in sequential order, these steps may take place simultaneously or in any suitable order. In an embodiment, the present techniques allow the control system to be used as a communication channel for transmitting an encoded information signal from a controller to another device which may decode the encoded information signal.
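The loop described above can be summarized with a minimal sketch (all function names and parameter values below are hypothetical, chosen by the editor only to illustrate the flow of blocks 240-260; they are not part of the disclosed system): the controller-encoder 102 combines a deterministic control part with a random, message-bearing part, the control system 106 propagates the resulting control process 105, and a decoder attached to the process output 107 can estimate the encoded information.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical scalar example: Y_i = C*Y_{i-1} + D*A_i + V_i acts as the channel.
    C, D, K_V = 0.7, 1.0, 1.0
    Gamma, K_Z = -0.3, 0.5      # deterministic control gain and message-bearing noise power

    def controller_encoder(y_prev, z_i):
        # Control module 103 (deterministic feedback) plus coding module 104 (random part).
        return Gamma * y_prev + z_i

    def control_system(y_prev, a_i):
        # The control system 106 driven by the control process (block 260).
        return C * y_prev + D * a_i + rng.normal(0.0, np.sqrt(K_V))

    y = 0.0
    for i in range(10):
        z = rng.normal(0.0, np.sqrt(K_Z))   # encoded information signal (block 240)
        a = controller_encoder(y, z)        # control process 105: control plus information
        y = control_system(y, a)            # process output 107, available to the decoder
        # A decoder/estimator 108 attached to the output would recover z (block 250).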
All elements of dynamical systems or control systems 106 with communication capabilities shown in
In view of (i)-(v) there is a one-to-one analogy between elements of control systems and elements of noisy communication channels in problems of information transmission.
A. Consider a general control system with control process A^n ≜ {A_i: i=0, ..., n}, output process Y^n ≜ {Y_i: i=..., −1, 0, 1, ..., n}, control system conditional distribution
P_{Y_i|Y^{i−1},A^i}(dy_i|y^{i−1}, a^i), i=0, ..., n,
a set of randomized control strategies or conditional distributions of the control process
𝒫_{[0,n]} ≜ {P_{A_i|A^{i−1},Y^{i−1}}(da_i|a^{i−1}, y^{i−1}): i=0, ..., n},
and a set of randomized control strategies satisfying power constraints defined by the set
𝒫_{[0,n]}(κ) ≜ {P_{A_i|A^{i−1},Y^{i−1}}(da_i|a^{i−1}, y^{i−1}), i=0, ..., n: (1/(n+1)) E(l_{0,n}(A^n, Y^{n−1})) ≤ κ},
where κ ∈ [0, ∞) is the total power with respect to the cost function l_{0,n}(a^n, y^{n−1}) ∈ [0, ∞).
B. The pay-off of the model of A. is the so-called directed information from A^n to Y_0^n conditioned on Y^{−1}, denoted by I(A^n → Y^n) and defined by
I(A^n → Y^n) ≜ Σ_{i=0}^{n} I(A^i; Y_i | Y^{i−1}) = Σ_{i=0}^{n} E{ log ( dP_{Y_i|Y^{i−1},A^i}(·|Y^{i−1}, A^i) / dP_{Y_i|Y^{i−1}}(·|Y^{i−1}) (Y_i) ) },
where {P_{Y_i|Y^{i−1}}(dy_i|y^{i−1}): i=0, ..., n} are the conditional distributions of the output process, induced by the control system conditional distribution and the randomized control strategy.
The Finite Time Horizon (FTH) information CC-Capacity of the control system is to determine an optimal randomized control strategy {P*_i(·|·): i=0, ..., n} ∈ 𝒫_{[0,n]}(κ) which maximizes the directed information pay-off defined by
C_{0,n}(κ) ≜ sup_{𝒫_{[0,n]}(κ)} I(A^n → Y^n).
The Control-Coding Capacity (CC-Capacity), defined as the supremum of all transmission rates over the model of A., is the per unit time limiting version
C(κ) ≜ lim_{n→∞} (1/(n+1)) C_{0,n}(κ).
C. The method of B. further states that CC-Capacity is analogous to the definition of coding-capacity or capacity in problems of information transmission over noisy communication channels, as follows.
(1) The operational definition of controller-encoder-decoder strategies is a generalization of Shannon's operational definition of coding-capacity in problems of information transmission over noisy channels.
(2) In digital communication theory, the source coding problem can be lossless or lossy (based on quantization), while the channel coding problem is defined with respect to the quantized or compressed representation X(n) of the information process {Xi: i=0, . . . , }.
Operational Control-Coding Capacity of Control Systems or Dynamical Systems
Consider a filtered probability space (Ω, ℱ, {ℱ_i: i=0, 1, ..., N}, ℙ) on which the following processes and RVs are defined:
X^{(n)}: Ω → ℳ^{(n)} ≜ {1, ..., M^{(n)}}, A_i: Ω → 𝔸_i, Y_i: Ω → 𝕐_i, i=0, ..., n,
Y_{−1}: Ω → 𝕐_{−1}, X̂^{(n)}: Ω → ℳ^{(n)}.
A controller-encoder-decoder for a dynamical system, such as, a control system, with power constraint, over the time horizon {0, 1, ..., n} is denoted by (n+1, ℳ^{(n)}, ϵ_n, κ)
and consists of the following elements.
(a) A set of uniformly distributed messages X^{(n)} taking values in ℳ^{(n)} ≜ {1, ..., M^{(n)}}, known to both the encoder and decoder (the controller does not need to know these).
(b) A set of controller-encoder strategies mapping messages and feedback control information into control actions defined by
ℰ_{[0,n]}^{S} ≜ {g_i: ℳ^{(n)} × 𝔸^{i−1} × 𝕐^{i−1} → 𝔸_i, i=0, ..., n: a_0 = g_0(x^{(n)}, y^{−1}), a_1 = g_1(x^{(n)}, y^{−1}, a_0, y_0), ..., a_n = g_n(x^{(n)}, a^{n−1}, y^{n−1}), x^{(n)} ∈ ℳ^{(n)}}.
The set of admissible controller-encoder strategies subject to power constraint κ is defined by
ℰ_{[0,n]}^{S}(κ) ≜ {g_i ∈ ℰ_{[0,n]}^{S}, i=0, ..., n: (1/(n+1)) E^{g}(l_{0,n}(A^n, Y^{n−1})) ≤ κ},
where κ ∈ [0, ∞) is the total cost or power of the controller-encoder. For any message x^{(n)} ∈ ℳ^{(n)} and feedback information,
u^{x^{(n)}} = (g_0(x^{(n)}, y^{−1}), g_1(x^{(n)}, y^{−1}, a_0, y_0), ..., g_n(x^{(n)}, a^{n−1}, y^{n−1})) ∈ ℰ_{[0,n]}^{S}(κ)
is the controller-encoder strategy. The control-coding book of the controller-encoder strategy for the message set ℳ^{(n)} is C^{(n)} = (u^1, u^2, ..., u^{M^{(n)}}).
(c) A decoder, i.e., a measurable mapping d_n: 𝕐^n → ℳ^{(n)}, X̂^{(n)} ≜ d_n(Y^n), such that the average probability of decoding error is given by
ϵ_n ≜ (1/M^{(n)}) Σ_{x^{(n)} ∈ ℳ^{(n)}} ℙ^{g}{ d_n(Y^n) ≠ x^{(n)} | X^{(n)} = x^{(n)} }.
(The superscript on the expectation and probability, i.e., ℙ^{g}, indicates the dependence of the distribution on the encoding strategies.)
(d) Conditional independence holds:
P_{Y_i|Y^{i−1},A^i,X^{(n)}}(dy_i|y^{i−1}, a^i, x^{(n)}) = P_{Y_i|Y^{i−1},A^i}(dy_i|y^{i−1}, a^i), i=0, ..., n,
i.e., given the past actions and outputs, the control system conditional distribution does not depend on the message.
(e) The initial data Y_{−1} ∈ 𝕐_{−1} and the distribution μ(dy_{−1}) may be known to the controller-encoder and decoder.
(f) The encoder-controller-decoder for the G-SCM is denoted by (n+1, ℳ^{(n)}, ϵ_n, κ).
The control-coding rate is defined by (1/(n+1)) log M^{(n)}.
(g) Another class of controller-encoder strategies, which are nonanticipative with respect to the information process {X_0, X_1, ...}, is defined by ℰ_{[0,n]}(κ), the subset of ℰ_{[0,n]} satisfying the power constraint κ as in (b), where
ℰ_{[0,n]} ≜ {e_i: 𝕏^i × 𝔸^{i−1} × 𝕐^{i−1} → 𝔸_i, i=0, ..., n: a_0 = e_0(x_0, y^{−1}), a_1 = e_1(x_0, x_1, y^{−1}, a_0, y_0), ..., a_n = e_n(x^n, a^{n−1}, y^{n−1})}.
Note the following.
(3) By the above definition, the message X^{(n)} is taken to be uniformly distributed over ℳ^{(n)}, hence its entropy is H(X^{(n)}) = log M^{(n)}. Moreover, unlike most treatments of capacity of communication channels, which often do not impose cost constraints (except for Gaussian channels, in which only a power constraint on {A_i: i=0, ..., n} is imposed), the control system is not required to be stable, and there is a cost constraint on the joint process {(A_i, Y_i): i=0, ..., n}.
(4) If the entropy H(X(n)) is below the capacity of the control system, calculated over the time horizon {0, 1, . . . , n}, then for large enough n, we expect to achieve the control objective of meeting the average power constraint, and the coding objective to reconstruct a randomly chosen message X(n)=x(n) at the output of the decoder with small probability of error. Next, we give the precise definition of an achievable control-coding rate and capacity of the control system.
Control-Coding Capacity of Dynamical Systems or Control Systems
(h) A control-coding rate R>0 is said to be an achievable rate (under power constraint κ), if there exists a sequence of controller-encoder-decoder strategies {(n+1, ℳ^{(n)}, ϵ_n, κ): n=0, 1, ...} such that the random processes (A^n, Y^n) ≡ (A^{g,n}, Y^{g,n}) depend on the message X^{(n)}, and satisfy
lim_{n→∞} ϵ_n = 0 and liminf_{n→∞} (1/(n+1)) log M^{(n)} ≥ R.
(i) The operational control-coding capacity of G-SCM under power constraint κ is the supremum of all achievable control-coding rates, i.e., it is defined by
C(κ) ≜ sup{R: R is achievable}.
(j) Under appropriate conditions, the CC-Capacity is given by C(κ) defined by ( ).
We note the following.
(5) In information theory the operational capacity is often called the coding-capacity of noisy communication channels. Since the objective is to control the control system, in addition to encoding information, the operational capacity is called the control-coding capacity.
(6) The definition of CC-Capacity states that if a control-coding rate R is achievable under the power constraint and the control system is operated for sufficiently large n, then we can control the output process {Y_i: i=0, 1, ..., n} and reconstruct M^{(n)} = ⌈e^{(n+1)R}⌉ messages at the control system output, using the decoder or estimator, with arbitrarily small probability of error.
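As a worked illustration of the message count in (6) (the rate and horizon below are assumed purely for the example): taking R = 0.1 nats per use of the control system and a horizon of n+1 = 100 uses,
M^{(n)} = ⌈e^{(n+1)R}⌉ = ⌈e^{10}⌉ = 22027,
i.e., about log_2(22027) ≈ 14.4 bits can be reliably reconstructed at the output over those 100 uses, provided R is below the CC-Capacity.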
(7) As pointed out below, and in view of the general cost constraint, the cost of control is κ0,n (0) while the cost of communication is κ0,n(C)−κ0,n(0).
D. The method of B. further states the optimal randomized strategy {P*_i(·|·): i=0, ..., n} ∈ 𝒫_{[0,n]}(κ) is analogous to the optimal channel input conditional distribution in problems of information transmission over noisy communication channels with feedback encoding.
E. An example of the control system is a General-Recursive Control Model (G-RCM) defined by the nonlinear recursion
Y_i = h_i(Y^{i−1}, A^i, V_i), Y^{−1} = y^{−1}, i=0, ..., n,
where
Assumption A.(i). {V_i: i=0, ..., n} is the noise process, independent of Y^{−1}; h_i: 𝕐^{i−1} × 𝔸^i × 𝕍_i → 𝕐_i, and h_i(·,·,·), γ_i(·,·) are specific functions, such as linear or nonlinear and autoregressive models, etc., with Multiple Inputs and Multiple Outputs; and (T^i a^n, T^i y^n) ↦ γ_i(T^i a^n, T^i y^n), where T^i a^n ⊆ {a_0, a_1, ..., a_i} and T^i y^n ⊆ {y_0, y_1, ..., y_i} for i=0, ..., n, is any quadratic or nonlinear function;
Assumption A.(ii). The noise process {Vi: i=0, . . . , n} satisfies
P_{V_i|V^{i−1},Y^{−1}}(dv_i|v^{i−1}, y^{−1}) = P_{V_i}(dv_i), i=0, ..., n.
Hence, a G-RCM induces a sequence of control system conditional distributions, and these are not necessarily Gaussian.
F. The analogy between feedback capacity of noisy communication channels and stochastic optimal control problems, with directed information pay-off, introduced in A.-D., and depicted in
Hierarchical Decomposition: Cost of Control and Communication
For any finite n, consider the FTH information CC-Capacity C_{0,n}(κ) ≜ J_{A^n→Y^n}(κ), and let κ_{0,n}(C) denote the minimum power κ required for the FTH information CC-Capacity to be at least C.
Clearly, κ0,n (0)≡κmin is the optimal pay-off or cost of the control system without communication. Then κ0,n(C)−κ0,n(0) is the additional cost incurred if the control system operates at an information rate of at least C.
The cost of communication is given by
In general, κ(C) − κ(0) > 0. In view of the above connections, C_{0,n}(κ) decomposes into two sub-problems, the optimal control sub-problem and the optimal communication sub-problem, which imposes a natural hierarchical decomposition on the optimization problem C_{0,n}(κ).
In addition, we show that classical stochastic optimal control problems, defined by
are degenerate optimization problems of the FTH information CC-Capacity ( ).
There are several hidden aspects of the FTH information CC-Capacity J_{A^n→Y^n}(κ), described by the following dual extremum problems.
Dual Extremum Problem 1.
The inequality states that it costs more to simultaneously control and transmit information than to control only. The additional cost to communicate is κ0,n(C)−κ0,n(0); this is quantified in the application examples.
Moreover, if the randomized control strategies 𝒫_{[0,n]} are restricted to deterministic strategies 𝒫_{[0,n]}^{D}, then I(A^n → Y^n) = 0 and necessarily C = 0. The resulting optimization problem reduces to the following classical stochastic optimal control problem (without an information theoretic constraint).
Degenerate Dual Extremum Problem 2.
The last equality follows from the fact that randomized control strategies do not incur better performance.
The application example in [0037] illustrates the above statements.
Characterization of Control-Coding Capacity of Dynamical Systems
G. Consider an example control system with control system distribution, and cost function defined by
Q_i(dy_i|y^{i−1}, a_{i−L}^{i}), γ_i(a_{i−N}^{i}, y^{i−1}), i=0, ..., n, (.1)
where a_{i−L}^{i} ≜ (a_{i−L}, a_{i−L+1}, ..., a_i), y^{i−1} = (y_{−1}, y_0, ..., y_{i−1}), and {L, N} are finite non-negative integers, with an average constraint defined by
Then the characterization of FTH information CC-Capacity is given by
where the distributions are given by
H. Consider an example control system with control system distribution, and cost function defined by
Q_i(dy_i|y_{i−M}^{i−1}, a_{i−L}^{i}), γ_i(a_{i−N}^{i}, y_{i−K}^{i−1}), i=0, ..., n, (.5)
where {M, K} are finite non-negative integers and the convention is
for any i. Then the characterization of FTH information CC-Capacity is given by
where
𝒫_{[0,n]}^{I,K}(κ) ≜ {π_i^{I}(da_i|a_{i−I}^{i−1}, y_{i−K}^{i−1}), i=0, ..., n: (1/(n+1)) E(Σ_{i=0}^{n} γ_i(A_{i−N}^{i}, Y_{i−K}^{i−1})) ≤ κ},
where I ≜ max{L, N}, and
I. Consider an example control system with control system distribution and cost function defined by
Q_i(dy_i|y_{i−M}^{i−1}, a_i), γ_i(a_i, y_{i−K}^{i−1}), i=0, ..., n. (.9)
Then the characterization of FTH information CC-Capacity is given by
where
The above characterization means the joint process {(A_i, Y_i): i=0, ..., n} and the output process {Y_i: i=0, ..., n} are J-order Markov processes.
J. The characterizations of FTH information CC-Capacity are general and hence, they hold for arbitrary control models, noise distributions, and cost functions.
CC-Capacity of Gaussian-G-RCM-1 and Randomized Strategies
K. Consider an example G-RCM of E. called Gaussian-G-RCM-1 with quadratic cost function defined as follows.
Y_i = C_{i−1} Y_{i−1} + D_{i,i} A_i + D_{i,i−1} A_{i−1} + V_i,
Y_{−1} = y_{−1}, A_{−1} = a_{−1}, i=0, ..., n,
{V_i: i=0, ..., n} an independent Gaussian noise process, P_{V_i}(dv_i) ~ N(0, K_{V_i}), i=0, ..., n,
(Y_{−1}, A_{−1}) ~ N(0, K_{Y_{−1},A_{−1}}), K_{Y_{−1},A_{−1}} > 0,
γ_i(a_i, y_{i−1}) ≜ ⟨a_i, R_i a_i⟩ + ⟨y_{i−1}, Q_{i,i−1} y_{i−1}⟩,
(D_{i,i}, D_{i,i−1}) ∈ ℝ^{p×q} × ℝ^{p×q},
R_i ∈ S_{++}^{q×q}, Q_{i,i−1} ∈ S_{+}^{p×p}, i=0, ..., n,
where S_{+}^{q×q} denotes the set of positive semidefinite q×q matrices, S_{++}^{q×q} their restriction to positive definite matrices, and ⟨·,·⟩ denotes the inner product. The Gaussian-RCM-1 is a Multiple Input Multiple Output (MIMO) control system with memory on past inputs and outputs, it is an Infinite Impulse Response (IIR) model, and the cost function is quadratic.
Next, we prepare to compute
(i) the optimal randomized strategy, and
(ii) the FTH CC-Capacity.
From G. the optimal randomized strategy is of the form {π_i^{L}(da_i|a_{i−L}^{i−1}, y^{i−1}) ≡ π_i^{1}(da_i|a_{i−1}, y^{i−1}): i=0, ..., n}, i.e., L=1. The directed information pay-off is expressed as follows.
Let {(Aig, Yig, Zig): i=0, . . . , n} denote a jointly Gaussian process. By the maximum entropy property of Gaussian distributions it follows that
and the upper bound is achieved if {(A_i, Y_i, Z_i) = (A_i^{g}, Y_i^{g}, Z_i^{g}): i=0, ..., n} and the average constraint is satisfied. Hence, the upper bound is achieved if the optimal strategies are linear, given as follows.
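The maximum entropy property invoked here is the standard bound, restated for completeness in the notation of this section: for an ℝ^p-valued random vector Y with covariance K_Y,
h(Y) ≤ (1/2) log((2πe)^p det K_Y),
with equality if and only if Y is Gaussian. Applying this bound to each conditional entropy H(Y_i | Y^{i−1}), with the second moments held fixed, shows that the directed information pay-off is maximized by a jointly Gaussian process, i.e., by linear strategies driven by Gaussian noise.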
Randomized Strategy
for some deterministic matrices {(Γ_{i−1}, Λ_{i,i−1}): i=0, ..., n} of appropriate dimensions. Next, we prepare to compute the directed information pay-off. To this end, we need to compute the conditional entropy H(Y_i^{g}|Y^{g,i−1}), i=0, ..., n, which means we need to determine the conditional density of Y_i^{g} given Y^{g,i−1} for i=0, ..., n, using the stochastic control system and strategy. Since the conditional density is characterized by the conditional mean and covariance, we define the quantities
Ŷ_{i|i−1} ≜ E^{e}{Y_i^{g} | Y^{g,i−1}}, Â_{i|i} ≜ E^{e}{A_i^{g} | Y^{g,i}},
K_{Y_i|Y^{i−1}} ≜ E^{e}{(Y_i^{g} − Ŷ_{i|i−1})(Y_i^{g} − Ŷ_{i|i−1})^{T} | Y^{g,i−1}},
P_{i|i} ≜ E^{e}{(A_i^{g} − Â_{i|i})(A_i^{g} − Â_{i|i})^{T}}.
From the above, and using the independence properties of the noise process, it follows that
Âi|i=Λi,i-1Âi-1|i-1+1Uig+Δi|i-1(Yig−Ŷi|i-1),
Ŷi|i-1=Ci-1Yg,i-1+Di,iYig+
KY
where (Â−1|−1, P−1|−1) are initial data and
Pi|i=Λi,i-1Pi-1|i-1Λi,i-1T+KZ
Φi|i-1[Di,iKZ
Δi|i-1(KZ
The innovations process denoted by {ve
where {vi0: i=0, . . . , n} indicates that the innovations process is independent of the strategy {gi1(⋅): i=0, . . . , n}. From the above equations, since the conditional covariance KY
Next, we give the closed form expressions of the optimal randomized control strategies, and the FTH CC-Capacity.
CC-Capacity and Randomized Strategy
Consider the Gaussian-RCM-1. The following hold.
(a) FTH Information CC-Capacity. The joint process {(Ai, Yi)=(Aig, Yig), i=0 . . . , n}, is jointly Gaussian and satisfies the following equations.
The FTH CC-Capacity is given by
and the average constraint set is defined by
(b) Decentralized Separation of Randomized Strategy into Controller and Encoder: The optimal strategy denoted by {e1,*(⋅)≡(gi1,*(⋅), Λi,i-1*, KZ
Moreover, the following decentralized separation holds.
(i) The optimal strategy {gi1.*(⋅): i=0, . . . , n} is the solution of the optimization problem
for a fixed {Λi,i-1, KZ
(ii) The optimal strategy {Λi,i-1*, KZ
(c) Optimal Strategies of Controller and Encoder. Suppose in (a), Yig is replaced by
Yig=Ci,i-1Yi-1g+
Any candidate of the control strategy {gi1(Yg,i-1): i=0, . . . , n} is of the form
Define the augmented system
and average cost
Then the following hold.
(1) For a fixed {Λi,i-1, KZ
where {
where the symmetric positive semidefinite matrix {Σ(i): i=0, ..., n} satisfies the matrix difference Riccati equation, for i=0, ..., n−1,
Σ(i)=
and the optimal pay-off is given by
(2) The optimal strategies {(Λi,i-1*, KZ
The above is a decentralized separation principle, and (1) and (2) are Person-by-Person Optimality statements of {gi1(⋅): i=0, . . . ,} and {Λi,i-1, KZ
CC-Capacity of Gaussian-G-RCM-2 and Randomized Strategies
L. Consider an example G-RCM of H. called Gaussian-G-RCM-2 with quadratic cost function defined as follows.
Y_i = C_{i,i−1} Y_{i−1} + D_i A_i + V_i, Y_{−1} = y_{−1},
{V_i: i=0, ..., n} an independent Gaussian noise process, P_{V_i}(dv_i) ~ N(0, K_{V_i}), i=0, ..., n,
γ_i(a_i, y_{i−1}) ≜ ⟨a_i, R_i a_i⟩ + ⟨y_{i−1}, Q_{i,i−1} y_{i−1}⟩.
By I. the characterization of FTH-DI information CC-Capacity is
The optimal randomized strategy of above characterization of FTH information CC-Capacity is now computed, using several steps.
(a) Gaussian Properties of Characterization of FTH Information CC-Capacity. Using dynamic programming or the maximum entropy property of processes with fixed second moments, the optimal strategies are Gaussian denoted by {πig(dai|yi-1): i=0, . . . , n}∈P̊[0,n](κ), and the joint process is jointly Gaussian denoted by {(Ai, Yi)≡(Aig, Yig): i=0, . . . , n}.
(b) Realization of Optimal Strategies. Since {(Aig, Yig): i=0, . . . , n} is jointly Gaussian, strategies from the set P̊[0,n](κ), can be realized by linear and Gaussian randomized strategies defined by the set
where ⋅⊥⋅ means the processes are independent.
Moreover, the characterization of the FTH information CC-Capacity is
(c) Dual Role of Randomized Control Strategies. Since the optimal control strategies admit the decomposition
A_i^{g} = Γ_{i,i−1} Y_{i−1}^{g} + Z_i^{g} ≡ g_i(Y_{i−1}^{g}) + Z_i^{g}, i=0, ..., n,
then we have the following: (i) the feedback control law or strategy {g_i ≡ Γ_{i,i−1}: i=0, ..., n} is responsible for controlling the output process {Y_i^{g}: i=0, ..., n}, and (ii) the orthogonal innovations process {Z_i^{g}: i=0, ..., n} is responsible for communicating new information to the output process, both chosen to maximize J_{A^n→Y^n}(κ).
where s≥0 is the Lagrange multiplier associated with the average constraint. The solution of the dynamic programming equations is given by the following equations.
C_i(y_{i−1}) = −s ⟨y_{i−1}, P(i) y_{i−1}⟩ + r(i), i=0, ..., n,
where {r(i): i=0, . . . , n−1} satisfies the recursions
and {P(i): i=0, . . . , n} is a solution of the Riccati difference matrix equation
P(i) = C_{i,i−1}^{T} P(i+1) C_{i,i−1} + Q_{i,i−1} − C_{i,i−1}^{T} P(i+1) D_i (D_i^{T} P(i+1) D_i + R_i)^{−1} (C_{i,i−1}^{T} P(i+1) D_i)^{T}, i=0, ..., n−1,
P(n) = Q_{n,n−1}.
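The Riccati recursion above transcribes directly into code. The following sketch is the editor's (the function name and the time-invariant placeholder matrices are assumed purely for illustration and are not taken from the disclosure):

    import numpy as np

    def riccati_backward(C_list, D_list, Q_list, R_list, Q_terminal):
        # Backward recursion for P(i), i = n-1, ..., 0, with terminal condition P(n) = Q_{n,n-1}.
        n = len(C_list)
        P = [None] * (n + 1)
        P[n] = Q_terminal
        for i in range(n - 1, -1, -1):
            Ci, Di, Qi, Ri = C_list[i], D_list[i], Q_list[i], R_list[i]
            CPD = Ci.T @ P[i + 1] @ Di
            P[i] = (Ci.T @ P[i + 1] @ Ci + Qi
                    - CPD @ np.linalg.inv(Di.T @ P[i + 1] @ Di + Ri) @ CPD.T)
        return P

    # Time-invariant placeholder data (illustrative only).
    C = np.array([[1.1, 0.0], [0.2, 0.9]])   # open-loop unstable example
    D = np.eye(2)
    Q = 0.5 * np.eye(2)
    R = np.eye(2)
    n = 50
    P = riccati_backward([C] * n, [D] * n, [Q] * n, [R] * n, Q_terminal=Q)
    print(P[0])   # approaches the stationary solution of the Riccati equation for large n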
The optimal randomized control strategy is given by
A_i^{g,*} = g_i^{*}(Y_{i−1}^{g,*}) + Z_i^{g,*}, i=0, ..., n,
where its random part {Zig,*: i=0, . . . , n} is the solution to the above recursions, and its deterministic part is given by
The corresponding covariance KY
KY
The Lagrange multiplier s ≥ 0 is found from the problem
inf_{s≥0} {−s ⟨y_{−1}, P(0) y_{−1}⟩ + r(0)}.
The characterization of the FTH-DI extremum problem is given by
JA
where s is the value found above.
(e) Water-filling Solution of Encoder Part of Randomized Strategy. The above solution illustrates the decentralized separation between the computation of the optimal deterministic part {g*i(yi-1): i=0, . . . , n} and the optimal random part {K*Z
(f) Connection to Stochastic Optimal Control Problem. From the above solution, we can recover, as a degenerate case, the optimal strategies of Gaussian control problems with quadratic pay-off as follows.
The dual of ( ) is given by
The second identity holds if randomized control strategies are restricted to deterministic strategies. Hence,
JA
and the degenerate optimization problem is the stochastic optimal control of Gaussian control systems with quadratic pay-off.
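To make the dual role in (c) concrete, the following is a minimal scalar simulation sketch of the Gaussian-G-RCM-2 (all parameter values are assumed by the editor for illustration): the deterministic gain Γ stabilizes an otherwise unstable output, the innovations variance K_Z carries the information, and for a stationary stable closed loop the per-use information rate equals ½ log(1 + D²K_Z/K_V), which is estimated here from the residual variance of the output.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed scalar parameters (illustrative only): open loop |C| > 1, i.e., unstable.
    C, D, K_V = 1.5, 1.0, 1.0
    Gamma = -1.2        # deterministic part of A_i: closed-loop pole C + D*Gamma = 0.3
    K_Z   = 0.8         # random (innovations) part of A_i, used to encode information

    n = 100_000
    Y = np.zeros(n + 1)
    A = np.zeros(n)
    for i in range(n):
        Z = rng.normal(0.0, np.sqrt(K_Z))
        A[i] = Gamma * Y[i] + Z                              # dual-role action: control + coding
        Y[i + 1] = C * Y[i] + D * A[i] + rng.normal(0.0, np.sqrt(K_V))

    # Residual variance of Y_i given Y_{i-1} (best linear predictor) vs. the noise variance K_V.
    slope = np.cov(Y[1:], Y[:-1])[0, 1] / np.var(Y[:-1])
    resid_var = np.var(Y[1:] - slope * Y[:-1])

    rate_analytic = 0.5 * np.log(1.0 + D**2 * K_Z / K_V)    # nats per use
    rate_empirical = 0.5 * np.log(resid_var / K_V)
    avg_power = np.mean(A**2)                                # cost with R = 1, Q = 0

    print(f"analytic rate  {rate_analytic:.3f} nats/use")
    print(f"empirical rate {rate_empirical:.3f} nats/use")
    print(f"average power  {avg_power:.3f}")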
CC-Capacity: Per Unit Time of FTH Information CC-Capacity
M. Consider the Gaussian-G-RCM-2 of L. Here, we compute the capacity of the control system, i.e., given by the per unit time limit of the FTH information CC-Capacity. Suppose the control system is time-invariant, with {Ci,i-1=C, Di=D, KV
Then the CC-Capacity of the control system is per unit time limit of the characterization of FTH information CC-Capacity, given by
where spec(·) denotes the set of eigenvalues and 𝔻_0 ≜ {c ∈ ℂ: |c| < 1} is the open unit disc of the set of complex numbers ℂ. The Lagrange multiplier s*(κ) can be found from the average constraint
tr(R K_Z) + tr(P [D K_Z D^{T} + K_V]) ≤ κ.
Thus, the predictable part of the optimal randomized control strategy g^{∞,*}(y) ensures existence of a unique invariant distribution P_{Y}^{g^{∞,*}}(dy) of the optimal output process {Y_i^{*}: i=0, ..., n} corresponding to (g^{∞,*}(·), K_Z^{*}), i.e., stability of the closed loop system, and hence C(κ) is operational, i.e., it is the CC-Capacity of the control system.
N. Consider an example Gaussian-G-RCM-2 of L., with parameters p=q=1, R=1, Q=0, and (C, D) arbitrary. For these choices of parameters we have the following.
Clearly, if |C| < 1, i.e., the system is stable, the deterministic part of the strategy is zero, i.e., Γ^{∞,*} = 0. The capacity formula C(κ) illustrates that there are multiple regimes, depending on whether the control system is stable, that is, |C| < 1, or unstable, |C| > 1. Moreover, for unstable control systems, |C| > 1, the optimal pay-off is zero, unless the power level κ exceeds the critical level κ_min.
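A numerical sketch of these regimes, written by the editor for the scalar example of N. (the noise variance and the power budgets below are assumed): for each stabilizing deterministic gain Γ, the largest innovations variance K_Z meeting the power budget κ is computed in closed form and the resulting rate ½ log(1 + D²K_Z/K_V) is recorded; for |C| < 1 the best gain is Γ = 0, whereas for |C| > 1 the rate stays at zero until κ exceeds the minimum stabilization power κ_min.

    import numpy as np

    def scalar_rate(C, D, K_V, kappa, n_grid=4001):
        # Best rate 0.5*log(1 + D^2*K_Z/K_V) over stabilizing gains Gamma, using for each
        # Gamma the largest K_Z meeting the power budget E[A^2] <= kappa (cost R = 1, Q = 0).
        best = 0.0
        for Gamma in np.linspace(-(abs(C) + 1.0) / abs(D), (abs(C) + 1.0) / abs(D), n_grid):
            a = C + D * Gamma                       # closed-loop pole
            if abs(a) >= 1.0:
                continue                            # need a stationary (stable) closed loop
            s = 1.0 - a**2
            # Power: Gamma^2*(D^2*K_Z + K_V)/s + K_Z <= kappa  =>  largest feasible K_Z
            K_Z = (kappa - Gamma**2 * K_V / s) / (1.0 + Gamma**2 * D**2 / s)
            if K_Z < 0.0:
                continue                            # stabilizing alone already exceeds the budget
            best = max(best, 0.5 * np.log(1.0 + D**2 * K_Z / K_V))
        return best

    for C in (0.5, 1.5):                            # stable vs. unstable open loop
        for kappa in (0.5, 2.0, 10.0):
            print(f"C={C}, kappa={kappa}: rate ~ {scalar_rate(C, 1.0, 1.0, kappa):.3f} nats/use")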
Encoder Design for CC-Capacity of Gaussian-G-RCM-2
O. Consider an example Gaussian-G-RCM-2 of L.
We illustrate via an example of an information process or tracking process, as shown in
Consider a Gaussian information process {Xi: i=0, 1, . . . , n} taking values in q (which is to be encoded by the optimal randomized strategy) and described by a Gaussian Linear State Space Model (G-LSSM), as follows.
X_{i+1} = A_i X_i + G_i W_i, X_0 = x, i=0, ..., n−1,
where {Wi˜(0, KW
X̂_{i|i−1} ≜ E{X_i | Y^{g,*,i−1}},
Σ_{i|i−1} ≜ E{(X_i − X̂_{i|i−1})(X_i − X̂_{i|i−1})^{T} | Y^{g,*,i−1}}.
(a) Controller-Encoder Strategy. The following controller-encoder strategy achieves the characterization of FTH information CC-Capacity ( ). (For any square matrix D with real entries, D^{1/2} is its square root.)
A_i^{g,*} = ē_i^{*}(X_i, Y^{g,*,i−1}) = Γ_{i,i−1}^{*} Y_{i−1}^{g,*} + Δ_i^{*}{X_i − X̂_{i|i−1}},
Δ_i^{*} = (K_{Z_i}^{*})^{1/2} Σ_{i|i−1}^{−1/2}, i=0, ..., n,
Y_i^{g,*} = (C_{i,i−1} + D_i Γ_{i,i−1}^{*}) Y_{i−1}^{g,*} + D_i Δ_i^{*}{X_i − X̂_{i|i−1}} + V_i, i=0, 1, ..., n.
Moreover, from the properties of Kalman filter, the following hold.
(b) Filter Estimates. The innovations process defined by {ν_i^{*} ≜ Y_i^{g,*} − E{Y_i^{g,*} | Y^{g,*,i−1}}: i=0, ..., n} satisfies
ν_i^{*} = Y_i^{g,*} − (C_{i,i−1} + D_i Γ_{i,i−1}^{*}) Y_{i−1}^{g,*} = D_i Δ_i^{*}{X_i − X̂_{i|i−1}} + V_i, i=0, ..., n,
E{ν_i^{*} | Y^{g,*,i−1}} = E{ν_i^{*}} = 0, i=0, ..., n,
E{ν_i^{*} (ν_i^{*})^{T} | Y^{g,*,i−1}} = D_i K_{Z_i}^{*} D_i^{T} + K_{V_i}, i=0, ..., n,
and the sequence of RVs {ν_i^{*}: i=0, ..., n} is uncorrelated.
The optimal filter estimates and conditional covariances satisfy the following recursions.
where the filter gains {Ψi|i-1: i=0, . . . , n} and the output process {Yig,*: i=0, . . . , n} are defined by
Ψ_{i|i−1} ≜ A_i Σ_{i|i−1} (D_i Δ_i^{*})^{T} [D_i K_{Z_i}^{*} D_i^{T} + K_{V_i}]^{−1},
Y_i^{g,*} = (C_{i,i−1} + D_i Γ_{i,i−1}^{*}) Y_{i−1}^{g,*} + ν_i^{*}, i=0, ..., n.
Moreover, the σ-algebra generated by {Y_k^{g,*}: k=0, 1, ..., i}, denoted by ℱ_{0,i}^{Y^{g,*}} ≜ σ{Y_0^{g,*}, Y_1^{g,*}, ..., Y_i^{g,*}}, satisfies ℱ_{0,i}^{Y^{g,*}} = ℱ_{0,i}^{ν^{*}} ≜ σ{ν_0^{*}, ν_1^{*}, ..., ν_i^{*}}, i=0, ..., n.
(c) Realization of Optimal Randomized Strategy of L. The controller-encoder strategy {ē_i^{*}(·,·): i=0, 1, ..., n} realizes the optimal randomized strategy, that is,
P^{ē*}(A_i^{g,*} ∈ da_i | y^{i−1}) = π_i^{g,*}(da_i | y^{i−1}) ~ N(Γ_{i,i−1}^{*} Y_{i−1}^{g,*}, K_{Z_i}^{*}), i=0, ..., n.
(d) FTH Information CC-Capacity Achieving Controller-Encoder Strategy. The strategy {ē_i^{*}(·,·): i=0, ..., n} achieves the FTH information CC-Capacity, that is, the following identities hold.
CC-Capacity Achieving Controller-Encoder
P. Consider an example Gaussian-G-RCM-2 of M. The method of O. can be repeated by replacing {(Γ*i,i-1, K*Z
Q. Consider the example of N.
(a) Gaussian RV Message X ~ N(0, σ_X^2). Then the message is X_{i+1} = X_i, X_0 = X, i=0, ..., n−1. The Rate Distortion Function (RDF) of the Gaussian RV X subject to Mean-Square Error distortion is given by
R(Δ) = (1/2) log(σ_X^2/Δ), 0 < Δ ≤ σ_X^2.
Let X̂_{i|i} ≜ E{X | Y^{g,*,i}}, i=0, ..., n, be the decoder of the message X. Calculating the time-invariant scalar version (i.e., p=q=1) of the optimal controller-encoder of O., after (n+1) uses of the control system, the Mean-Square Error (MSE) of the decoder decays geometrically, according to the expression
E|X − X̂_{n|n}|^2 = σ_X^2 e^{−2 C_{0,n}(κ)},
where for large enough n, we have C_{0,n}(κ) = (n+1) C(κ), and C(κ) is the capacity of the control system given in N. for the cases |C| < 1 and |C| > 1 (stable and unstable). Letting Δ = Σ_{n|n} we obtain
R(Δ)=C0,n(κ).
This implies the controller-encoder-decoder strategy meets the RDF with equality and no other controller-encoder-decoder scheme, no matter how complex it can be, can achieve a smaller MSE. Moreover, limn→∞E|X−{circumflex over (X)}n|n|2=0. We can also compute the error probability for any finite n.
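A simulation sketch of this scalar scheme, written by the editor with assumed parameters (|C| < 1, so Γ* = 0 and the action reduces to the scaled estimation error, as in O.): the controller-encoder sends A_i = Δ_i (X − X̂_{i−1|i−1}) with Δ_i chosen so that E[A_i²] = κ, the decoder runs the least-squares (Kalman) update on the innovations, and the MSE should contract by the factor K_V/(D²κ + K_V) = e^{−2C(κ)} per use of the control system.

    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed illustrative parameters: stable scalar system (|C| < 1), so Gamma* = 0 and
    # the entire power budget kappa is spent on communication (K_Z* = kappa).
    C, D, K_V = 0.5, 1.0, 1.0
    kappa = 2.0
    sigma_X2 = 4.0                      # variance of the Gaussian message X
    n_uses, n_trials = 20, 5000

    cap = 0.5 * np.log(1.0 + D**2 * kappa / K_V)         # C(kappa) in this regime, nats/use

    mse = np.zeros(n_uses)
    for _ in range(n_trials):
        X = rng.normal(0.0, np.sqrt(sigma_X2))
        x_hat, Sigma, y_prev = 0.0, sigma_X2, 0.0
        for i in range(n_uses):
            Delta = np.sqrt(kappa / Sigma)                   # scales the error to power kappa
            A = Delta * (X - x_hat)                          # controller-encoder action
            y = C * y_prev + D * A + rng.normal(0.0, np.sqrt(K_V))
            nu = y - C * y_prev                              # innovation available to the decoder
            gain = Sigma * D * Delta / (D**2 * kappa + K_V)  # Kalman gain
            x_hat = x_hat + gain * nu                        # decoder update X_hat_{i|i}
            Sigma = Sigma * K_V / (D**2 * kappa + K_V)       # exact error-covariance recursion
            y_prev = y
            mse[i] += (X - x_hat)**2
    mse /= n_trials

    for i in (0, 4, 9, 19):
        predicted = sigma_X2 * np.exp(-2.0 * (i + 1) * cap)
        print(f"after {i+1:2d} uses: empirical MSE {mse[i]:.3e}, predicted {predicted:.3e}")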
(b) Equiprobable messages X ∈ {0, 1, ..., M_n}, n=0, 1, .... Similar to (a), we can apply the Schalkwijk-Kailath coding scheme to show that, for M_n ≜ exp{(n+1)R} such that R < C(κ),
the probability of Maximum Likelihood (ML) decoding error decreases doubly exponentially in (n+1).
(c) The analysis of the multi-dimensional versions is also feasible using the method described.
R. In the method of L.-Q.
(i) the optimal controller-encoder strategy controls the controlled process {Yi: i=0, . . . , n} and encodes the information process {Xi: i=0, . . . , n}, and
(ii) the information process is reconstructed at the output of the control system, using a minimum mean-square error decoder, via the Kalman filter.
(iii)
is a variant of per unit time infinite horizon stochastic optimal control theory; it reveals that optimal randomized strategies are capable of information transfer, over the control system, which acts as a feedback channel, precisely as in Shannon's operational definition of Capacity of Noisy Channels.
S. The methods described in this patent, for simultaneous control and encoding of information apply to any dynamical system model with inputs and outputs, such as, biological control models, financial portfolio optimization and hedging models, quantum dynamical control systems, etc.
This application is a continuation of U.S. patent application Ser. No. 15/461,462, which was entitled “The Information Transfer in Stochastic Optimal Control Theory with Information Theoretic Criterial and Application” and filed on Mar. 16, 2017 which claims priority to and the benefit of the filing date of U.S. Provisional Patent Application Ser. No. 62/309,315, which was entitled “The Information Transfer in Stochastic Optimal Control Theory with Information Theoretic Criterial and Application” and filed on Mar. 16, 2016. The entire disclosure of this application is hereby expressly incorporated by reference herein for all uses and purposes.
Number | Name | Date | Kind |
---|---|---|---|
6192238 | Piirainen | Feb 2001 | B1 |
7676007 | Choi et al. | Mar 2010 | B1 |
20050010400 | Murashima | Jan 2005 | A1 |
20110125702 | Gorur Narayana Srinivasa et al. | May 2011 | A1 |
20110128922 | Chen | Jun 2011 | A1 |
20130007264 | Effros et al. | Jan 2013 | A1 |
20130287068 | Ashikhmin et al. | Oct 2013 | A1 |
20150098533 | Rusek | Apr 2015 | A1 |
20150312601 | Novotny | Oct 2015 | A1 |
Number | Date | Country |
---|---|---|
WO-2008024967 | Feb 2008 | WO |
WO-2009084896 | Jul 2009 | WO |
Entry |
---|
C.E. Shannon, “A mathematical theory of communication”, Bell Syst. Tech. J., vol. 27, Jul. 1948 (Reprinted). |
C.E. Shannon, “Coding theorems for a discrete source with a fidelity criterion,” IRE Nat. Conv. Rec., pt. 4, vol. 27, pp. 325-350, Jul. 1950. |
T.M. Cover and J.A. Thomas, Elements of Information Theory, 2nd ed. John Wiley & Sons, Inc., Hoboken, New Jersey, 2006. |
P.R. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification, and Adaptive Control. Prentice Hall, 1986. |
C.D. Charalambous, C.K. Kourtellaris, and I. Tzortzis, “Information transfer in stochastic optimal control with randomized strategies and directed information criterion”, in 55th IEEE Conference on Decision and Control (CDC), Dec. 12-14, 2016. |
J.L. Massey, “Causality, feedback and directed information”, in International Symposium on Information Theory and its Applications (ISITA '90), Nov. 27-30, 1990 (Pre-Print). |
H. Permuter, T. Weissman, and A Goldsmith, “Finite state channels with time-invariant deterministic feedback,” IEEE Trans. Inf. Theory, vol. 55, No. 2, pp. 644-662, Feb. 2009. |
C.D. Charalambous and P.A. Stavrou, “Directed information on abstract spaces: Properties and variational equalities”, IEEE Transactions on Information Theory, vol. 62, No. 11, pp. 6019-6052, 2016. |
G. Kramer, “Capacity results for the discrete memoryless network”, IEEE Transactions on Information Theory, vol. 49, No. 1, pp. 4-21, Jan. 2003. |
R. S. Liptser and A. N. Shiryaev, Statistics of Random Processes: I. General Theory, 2nd ed. Springer-Verlag, Berlin, New York 2001. |
J.P.M. Schalkwijk and T. Kailath, “A coding scheme for additive noise channels with feedback-I: no bandwidth constraints,” IEEE Transactions on Information Theory, vol. 12, pp. 172-182, Apr. 1966. |
Blahut, “Computation of channel capacity and rate-distortion functions”, IEEE Transactions on Information Theory, vol. 18, No. 4, pp. 460-473, Jul. 1972. |
Chen et al., “The Capacity of Finite-State Markov Channels with Feedback”, IEEE Transactions on Information Theory, vol. 51, No. 3, pp. 780-798, Mar. 2005. |
Cover et al., “Gaussian Feedback Capacity,” IEEE Transactions on Information Theory, vol. 35, No. 1, pp. 37-43, Jan. 1989. |
Dobrushin, “General Formulation of Shannon's Main Theorem of Information Theory,” Usp. Math. Nauk., vol. 14, pp. 3-104, 1959, translated in Am. Math. Soc. Trans., 33:323-438. |
Dobrushin, “Information Transmission in a Channel with Feedback,” Theory of Probability and its Applications, vol. 3, No. 2, pp. 367-383, 1958. |
Elishco et al., “Capacity and Coding of the Ising Channel with Feedback,” IEEE Transactions on Information Theory, vol. 60, No. 9, pp. 5138-5149, Jun. 2014. |
Gamal et al., Network Information Theory, Cambridge University Press, Dec. 2011. |
Kim, “A Coding Theorem for a Class of Stationary Channels with Feedback,” IEEE Transactions on Information Theory, vol. 54, No. 4, pp. 1488-1499, Apr. 2008. |
Kim, “Feedback Capacity of Stationary Gaussian Channels,” IEEE Transactions on Information Theory, vol. 56, No. 1, pp. 57-85, 2010. |
Kramer, “Topics in Multi-User Information Theory,” Foundations and Trends in Communications and Information Theory, vol. 4, Nos. 4-5, pp. 265-444, 2007. |
Permuter et al., “Capacity of a Post Channel With and Without Feedback,” IEEE Transactions on Information Theory, vol. 60, No. 9, pp. 5138-5149, 2014. |
Permuter et al., “Capacity of the Trapdoor Channel with Feedback,” IEEE Transactions on Information Theory, vol. 56, No. 1, pp. 57-85, Apr. 2010. |
Tatikonda et al., “The Capacity of Channels with Feedback,” IEEE Transactions on Information Theory, vol. 55, No. 1, pp. 323-349, Jan. 2009. |
Verdu et al., “A General Formula for Channel Capacity,” IEEE Transactions on Information Theory, vol. 40, No. 4, pp. 1147-1157, Jul. 1994. |
Yang et al., “Feedback Capacity of Finite-State Machine Channels,” Information Theory, IEEE Transactions on, vol. 51, No. 3, pp. 799-810, Mar. 2005. |
Yang et al., “On the Feedback Capacity of Power-Constrained Gaussian Noise Channels with Memory,” IEEE Transactions on Information Theory, vol. 53, No. 3, pp. 929-954, Mar. 2007. |
European Search Report in Corresponding EP Application No. 15165990.1, dated Sep. 24, 2015. |
Non-Final Office Action in U.S. Appl. No. 14/700,495 dated Oct. 2, 2015. |
Notice of Allowance in U.S. Appl. No. 14/700,495 dated Feb. 12, 2016. |
C.D. Charalambous et al., “Stochastic Optimal Control with Randomized Strategies and Directed Information Criterion,” 8 pgs. |
C.D. Charalambous et al., “Stochastic Optimal Control with Randomized Strategies and Directed Information Criterion.” |
C.D. Charalambous et al., “The Value of Information & Information Transfer in Stochastic Optimal Control Theory,” 8 pgs. |
R. E. Blahut, “Principles and Practice of Information Theory,” ser. in Electrical and Computer Engineering. Reading, MA: Addison-Wesley Publishing Company, 1987. |
R. T. Gallager, “Information Theory and Reliable Communications,” John Wiley & Sons, Inc., Hoboken, New Jersey 2006. |
I. I. Gihman and A. V. Skorohod, “Controlled Stochastic Processes,” Springer-Verlag, 1979. |
O. Hernandez-Lerma and J. Lasserre, “Discrete-Time Markov Control Processes: Basic Optimality Criteria,” ser. Applications of Mathematics Stochastic Modelling and Applied Probability, Springer-Verlag, 1996. |
G. Kramer, “Directed Information for Channels with Feedback,” Ph.D. dissertation, Swiss Federal Institute of Technology (ETH), Dec. 1998. |
N. U. Ahmed and C. D. Charalambous, “Stochastic Minimum Principle for Partially Observed Systems Subject to Continuous and Jump Diffusion Processes and Driven by Relaxed Controls,” SIAM Journal on Control Optimization, vol. 51, No. 4, pp. 3235-3257, 2013. |
H. Marko, “The Bidirectional Communication Theory—A Generalization of Information Theory,” IEEE Transactions on Communications, vol. 21, No. 12, pp. 1345-1351, Dec. 1973. |
C. Kourtellaris and C. D. Charalambous, “Information Structures of Capacity Achieving Distributions for Feedback Channels With Memory and Transmission Cost: Stochastic Optimal Control & Variational Equalities—Part I,” IEEE Transactions on Information Theory, 2015, submitted, Nov. 2015. |
P. E. Caines, “Linear Stochastic Systems,” ser. Wiley Series in Probability and Statistics, John Wiley & Sons, Inc., New York 1988. |
M. Pinsker, “Information and Information Stability of Random Variables and Processes”, Holden-Day Inc., San Francisco, 1964. |
S. Ihara, “Information Theory for Continuous Systems”, World Scientific 1993. |
Non-Final Office Action for U.S. Appl. No. 15/461,462 dated Dec. 20, 2018. |
Final Office Action for U.S. Appl. No. 15/461,462 dated Aug. 12, 2019. |
Notice of Allowance for U.S. Appl. No. 15/461,462 dated Dec. 18, 2019. |
Number | Date | Country | |
---|---|---|---|
20200257258 A1 | Aug 2020 | US |
Number | Date | Country | |
---|---|---|---|
62309315 | Mar 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15461462 | Mar 2017 | US |
Child | 16859411 | US |