The present invention pertains to a predictive device that models the dynamic input/output relationships of a physical process, particularly in the process industries such as hydrocarbons, polymers, pulp and paper, and utilities. The predictive device is primarily for multivariable process control, but is also applicable to dynamic process monitoring, or to provide a continuous stream of inferred measurements in place of costly or infrequent laboratory or analyzer measurements.
Most existing industrial products designed for multivariable model predictive control (MPC) employ linear step-response models or finite impulse response (FIR) models. These approaches result in over-parameterization of the models (Qin and Badgwell, 1996). For example, the dynamics of a first order single input/single output (SISO) process, which can be represented with only three parameters (gain, time constant and dead-time) in parametric form, typically require from 30 to 120 coefficients to describe in a step-response or FIR model. This over-parameterization problem is exacerbated for non-linear models, since standard non-parametric approaches, such as Volterra series, lead to an exponential growth in the number of parameters to be identified. An alternative way to overcome these problems for non-linear systems is the use of parametric models such as input-output Nonlinear Auto-Regressive with eXogenous inputs (NARX) models. Though NARX models are found in many case studies, a problem with NARX models using feed forward neural networks is that they offer only short-term predictions (Su, et al, 1992). MPC controllers require dynamic models capable of providing long-term predictions. Recurrent neural networks with internal or external feedback connections provide a better solution to the long-term prediction problem, but training such models is very difficult.
The approach described in (Graettinger, et al, 1994) and (Zhao, et al, 1997) provides a partial solution to this dilemma. The process model is identified based on a set of decoupled first order dynamic filters. The use of a group of first order dynamic filters in the input layer of the model enhances noise immunity by eliminating the output interaction found in NARX models. This structure circumvents the difficulty of training a recurrent neural network, while achieving good long-term predictions. However, using this structure to identify process responses that are second order or higher can result in over sensitive coefficients and in undesirable interactions between the first order filters. In addition, this approach usually results in an oversized model structure in order to achieve sufficient accuracy, and the model is not capable of modeling complex dynamics such as oscillatory effects. In the single input variable case, this first order structure is a special case of a more general nonlinear modeling approach described in (Sentoni et al., 1996) that is proven to be able to approximate any discrete, causal, time invariant, nonlinear SISO process with fading memory. In this approach a Laguerre expansion creates a cascaded configuration of a low pass and several identical band pass first order filters. One of the problems of this approach is that it may require an excessively large degree of expansion to obtain sufficient accuracy. Also, it has not been known until now how to extend this methodology in a practical way to a multi-input system.
This invention addresses many essential issues for practical non-linear multivariable MPC. It provides the capability to accurately identify non-linear dynamic processes with a structure that is simple enough to allow incorporation of process knowledge, is computationally fast enough for practical non-linear MPC, and can be configured with sufficient accuracy in a practical manner.
The present invention is a dynamic predictive device that predicts or estimates values of process variables that are dynamically dependent on other measured process variables. This invention is especially suited to application in a model predictive control (MPC) system. The predictive device receives input data under the control of an external device controller. The predictive device operates in either configuration mode or one of three runtime modes—prediction mode, horizon mode, or reverse horizon mode.
The primary runtime mode is the prediction mode. In this mode, the input data are such as might be received from a distributed control system (DCS) as found in a manufacturing process. The device controller ensures that a contiguous stream of data from the DCS is provided to the predictive device at a synchronous discrete base sample time. The device controller operates the predictive device once per base sample time and receives the prediction from the output of the predictive device.
After the prediction mode output is available, the device controller can switch to horizon mode in the interval before the next base sample time. The predictive device can be operated many times during this interval and thus the device controller can conduct a series of experimental scenarios in which a sequence of input data can be specified by the device controller. The sequence of input data can be thought of as a data path the inputs will follow over a forward horizon. The sequence of predictions at the output of the predictive device is a predicted output path over a prediction horizon and is passed to the device controller for analysis, optimization, or control. The device controller informs the predictive device at the start of an experimental path and synchronizes the presentation of the path with the operation of the device. Internally, horizon mode operates exactly the same way as prediction mode, except that the dynamic states are maintained separately so that the predictive device can resume normal prediction mode operation at the next base sample time. In addition, the outputs of the filter units are buffered over the course of the path and are used during reverse horizon operation of the device.
The purpose of reverse horizon mode is to obtain the sensitivities of the predictive device to changes in an input path. Reverse horizon mode can only be set after horizon mode operation has occurred. The device controller first informs the predictive device of the index of the point in the output path for which sensitivities are required. The device controller then synchronizes the reverse operation of the predictive device with the output of sensitivity data at the input paths of the device.
In forward operation, each input is scaled and shaped by a preprocessing unit before being passed to a corresponding delay unit which time-aligns data to resolve dead time effects such as pipeline transport delay. Modeling dead-times is an important issue for an MPC system. In practical MPC, prediction horizons are usually set large enough so that both dynamics and dead-time effects are taken into account; otherwise the optimal control path may be based on short term information, and the control behavior may become oscillatory or unstable. In the preferred embodiment, the predictive device is predicting a single measurement, and the dead-time units align data relative to the time of that measurement. If predictions at several measurement points are required, then several predictive devices are used in parallel. During configuration mode, the dead times are automatically estimated using training data collected from the plant. In the preferred embodiment the training method consists of constructing individual auto-regressive models between each input and the output at a variety of dead-times, and choosing the dead time corresponding to the best such model. As with other components of the invention, manual override of the automatic settings is possible and should be used if there is additional process knowledge that allows a more appropriate setting.
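By way of illustration, the following Python sketch shows one way to implement the dead time search just described: fit an auto-regressive model between each input and the output at each candidate dead time and keep the dead time whose model fits best. The function name, the fixed number of output lags, and the use of ordinary least squares are illustrative assumptions, not part of the specification.

```python
import numpy as np

def estimate_dead_time(u, y, d_min, d_max, n_lags=2):
    """Pick the dead time whose auto-regressive fit explains y best.

    For each candidate d, fit y(k) from [y(k-1)..y(k-n_lags), u(k-d)]
    by least squares and keep the d with the smallest residual error.
    """
    k0 = max(n_lags, d_max)            # common start so scores are comparable
    best_d, best_sse = d_min, np.inf
    for d in range(d_min, d_max + 1):
        X = np.array([np.r_[y[k - n_lags:k], u[k - d]]
                      for k in range(k0, len(y))])
        t = y[k0:]
        theta, *_ = np.linalg.lstsq(X, t, rcond=None)
        sse = float(np.sum((t - X @ theta) ** 2))
        if sse < best_sse:
            best_d, best_sse = d, sse
    return best_d
```

The bounds d_min and d_max correspond to the operator-specified limits dmin≦d≦dmax discussed in section V.3.2.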
Each dead time unit feeds a dynamic filter unit. The dynamic filter units are used to represent the dynamic information in the process. Internally the dynamic filter units recursively maintain a vector of states. The states derive their values from states at the previous time step and from the current input value. This general filter type can be represented by what is known to those skilled in the art as a discrete state space equation. The preferred embodiment imposes a much-simplified structure on the filter unit that allows for fast computation for MPC and also allows intelligent override of the automatic settings. This simplified structure is composed of first and second order loosely coupled subfilters, only one of which receives direct input from the corresponding delay unit. The practical identification of this filter structure is an essential part of this invention.
The outputs of the dynamic filter units are passed to a non-linear analyzer that embodies a static mapping of the filter states to an output value. The exact nature of the non-linear analyzer is not fundamental to this invention. It can embody a non-linear mapping such as a Non-linear Partial Least Squares model or a Neural Network, or a hybrid combination of linear model and non-linear model. The preferred embodiment makes use of a hybrid model. The reason for this is that a non-parametric non-linear model identified from dynamic data (such as a neural net) cannot, by its nature, be fully analyzed and validated prior to use. The non-linearity of the model means that different dynamic responses will be seen at different operating points. If the process being modeled is truly non-linear, these dynamic responses will be an improvement over linear dynamic models in operating regions corresponding to the training data, but may be erroneous in previously unseen operating regions. When the non-linear model is used within the context of MPC, erroneous responses, especially those indicating persistent and invalid gain reversals can create instabilities in the MPC controller. With a hybrid approach, a non-linear model is used to model the errors between the linear dynamic model and the true process. The hybrid dynamic model is a parallel combination of the linear dynamic model with the error correction model. The dynamic response of the linear model can be analyzed completely prior to use, since the gains are fixed and independent of the operating point. The process engineer can examine and approve these gains prior to closing the loop on the process and is assured of responses consistent with the true process. However, the linear dynamic response will be sub-optimal for truly non-linear processes. In online operation of the hybrid model within an MPC framework, the responses of the linear model and the hybrid model can be monitored independently and compared. In operating regions where the non-linear model shows persistently poor response, control can be switched, either automatically or by the operator, back to the safety of the linear model.
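The parallel hybrid combination described above can be sketched as follows. This is a minimal illustration; the class and the two callables are hypothetical stand-ins for the linear dynamic model and the error correction model, each mapping filter states to a scalar prediction.

```python
class HybridModel:
    """Parallel hybrid of an analyzable linear dynamic model and a
    non-linear model of the residual between the linear model and
    the true process (both callables are hypothetical stand-ins)."""
    def __init__(self, linear_model, error_model):
        self.linear_model = linear_model
        self.error_model = error_model

    def predict(self, states, use_nonlinear=True):
        y_lin = self.linear_model(states)
        if not use_nonlinear:
            return y_lin          # fall back to the validated linear model
        return y_lin + self.error_model(states)
```

In online operation, calling predict(states, use_nonlinear=False) provides the fallback path described above when the non-linear correction shows persistently poor response.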
The output of the non-linear analyzer is passed through a postprocessing unit that converts the internal units to engineering units.
The importance of this invention is that its structure is shown to be able to approximate a large class of non-linear processes (any discrete, causal, time invariant, nonlinear multi-input/single output (MISO) process with fading memory), but is still simple enough to allow incorporation of process knowledge, is computationally fast enough for practical non-linear MPC, and can be configured with sufficient accuracy in a practical manner.
The textual description of the present invention makes detailed reference to the accompanying drawings.
V.1 Forward Runtime Operation of the Prediction Device
The figures and equations in this detailed description refer to an index k that represents a data point in a sequence of data points. This index has different meanings depending on whether the forward operational mode of the device is prediction mode or horizon mode.
In prediction mode data is provided at a regular sampling interval Δt to the input nodes (18) of the device. Data is passed in a forward direction through the device. For simplicity of notation, the sample point T0+kΔt is denoted by the index k.
In horizon mode, a sequence of data representing a forward data path is provided to the inputs. This data path may represent a proposed path for manipulated variables for process control purposes, or may represent a holding of the inputs to constant values in order to determine the steady state output of the device. The starting point of this path is taken to be the most recent input sample provided in prediction mode. Index 0 represents this starting point and index k represents the kth data point in this path.
V.1.1 Forward Runtime Operation of a Preprocessing Unit
Each input feeds a preprocessing unit (20) which is used to convert the engineering units of each data value to a common normalized unit whose lower and upper limits are, by preference, −1 and 1 respectively, or 0 and 1 respectively.
The preprocessing unit can also shape the data by passing it through a non-linear transformation. However, the preferred embodiment uses a simple scale and offset as shown in FIG. 2 and equation (1):
u(k)=suE(k)+o (1)
where uE(k) is the value of an input in engineering units, and u(k) is the preprocessed value in normalized units. The scale and offset values, as stored in the configuration file (30—FIG. 1), are determined during configuration mode as described in section V.3.1.
V.1.2 Forward Runtime Operation of a Delay Unit
Data flows from each preprocessing unit to a corresponding delay unit (22). The forward run-time operation of the delay unit (22) is shown in FIG. 3 and equation (2). The output ud(k)(304) of an individual delay unit (300) is equal to the input u(k) (302) delayed by d sample times. The value of d may be different for each delay unit (22) and is retrieved from the configuration file (30—FIG. 1). This may be implemented as a shift register with a tap at the dth unit.
ud(k)=u(k−d) (2)
This equation can also be written in terms of the unit delay operator q−1:
ud(k)=q−du(k)
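As a minimal sketch of the shift register implementation suggested above (class and method names are illustrative):

```python
from collections import deque

class DelayUnit:
    """Shift register with a tap at the d-th element, per equation (2)."""
    def __init__(self, d, fill=0.0):
        self.buf = deque([fill] * (d + 1), maxlen=d + 1)

    def step(self, u_k):
        self.buf.append(u_k)    # newest sample enters the register
        return self.buf[0]      # the tap returns u(k - d)
```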
V.1.3 Forward Runtime Operation of the Filter Units
Referring again to FIG. 4, each dynamic filter unit (24) is composed of a cascade of subfilters: a primary subfilter, which receives direct input from the corresponding delay unit (22), followed by one or more secondary subfilters that receive their input from the preceding subfilter.
The primary subfilter maintains a vector (412) of states x1(k) at each time k. An internal single time step delay unit (414) feeds the vector state to a coupling unit (420) and to a matrix unit (416). The matrix unit converts the delayed state vector (418) and feeds it to a vector addition unit (408). The input to the filter unit ud(k) is expanded and linearly scaled by the input coupling unit (410) to a vector of values of the same dimension as the state vector. The vector addition unit then combines its two input streams to produce the vector of states for the current time. The operation just described for the primary subfilter is conveniently described in mathematical matrix and column vector notation as:
x1(k)=A1x1(k−1)+b1ud(k) (3)
Such an equation is known, to those skilled in the art, as a linear state space equation with a single input. If no structure is imposed on A1 or b1, then further subfilters are unnecessary since the cascaded subfilter structure can be subsumed into a single complicated primary subfilter. However, the preferred subfilter structures as described below, or similar to those described below, are essential for a practical embodiment and application of the invention.
The subfilter coupling unit (420) determines how state values at time k−1 affect the state units in the next subfilter at time k. In mathematical terms, the subfilter coupling unit uses the coupling matrix Γ2 to perform a linear transformation of state vector x1(k−1) which is passed to the vector addition unit of the next subfilter. The operation of a secondary subfilter is conveniently described in mathematical matrix and vector notation as:
xs(k)=Asxs(k−1)+Γsxs−1(k−1)+bsud(k) (4)
In the preferred embodiment, the subfilters are all of first or second order. A first order subfilter maintains just one state. The preferred embodiment for a first order primary subfilter (500) is shown in FIG. 5. The vectorizing unit (502) and the matrix unit (504) collapse to become scaling operations so that the state vector (506) is represented by:
x1(k)=λ1x1(k−1)+(1−λ1)ud(k) (5)
The preferred embodiment for a first order secondary subfilter (600) is shown in FIG. 6. The secondary subfilter receives no direct input, but instead receives cascaded input from the previous subfilter. The preferred coupling is a loose coupling scheme (602) in which only the last state component of the previous subfilter contributes. Note that the previous subfilter is not required to be a first order subfilter. The state vector (606) is represented by:
xs(k)=λsxs(k−1)+(1−λs)xs−1,last(k−1) (6)
where the matrix unit λs (604) is a scalar.
Second order subfilters maintain two states. The preferred embodiment for a second order primary subfilter (700) is shown in FIG. 7. In this figure, the state vector x1(k) is shown in terms of its two components x11(k) (708) and x12(k) (710). The vectorizing unit (702) creates two inputs to the vector addition unit (714), the second of which is fixed at zero. The delayed states (704) and (706) are fed to the matrix unit (712) whose outputs are also fed to the vector addition unit (714), which adds the matrix transformed states to the vectorized inputs, producing the current state. Note that due to the (1,0) structure of the second matrix row, and the zero second component of the vectorizing unit, the current second state component (710) is just equal to the delayed first component (704):
x11(k)=a11x11(k−1)+a12x12(k−1)+(1−a11−a12)ud(k)
x12(k)=x11(k−1) (7)
The preferred embodiment for a second order secondary subfilter (800) is shown in FIG. 8. In this figure, the state vector xs(k) is shown in terms of its two components xs1(k) (808) and xs2(k) (810). The preferred coupling with the previous subfilter unit is a loose coupling scheme (802) in which only the last state component of the previous subfilter contributes to the first state component of the current subfilter. Note that the previous subfilter is not required to be of any particular order. The output of the coupling unit is fed to the addition unit (814). The delayed states (804) and (806) are fed to the state matrix unit (812) whose outputs are also fed to the vector addition unit (814), which adds the state matrix transformed states to the output of the coupling unit, producing the current state. Note that due to the (1,0) structure of the second state matrix row, and the zero second row of the coupling matrix, the current second state component (810) is just equal to the delayed first component (804):
xs1(k)=as1xs1(k−1)+as2xs2(k−1)+(1−as1−as2)xs−1,last(k−1)
xs2(k)=xs1(k−1) (8)
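The following sketch implements one base-sample-time update of a filter unit built from the subfilter recursions of equations (5) through (8), assuming the preferred loose coupling; the function and parameter names are chosen for illustration.

```python
import numpy as np

def filter_unit_step(states, params, u_d):
    """One base-sample-time update of a filter unit built from the
    preferred first and second order subfilters, equations (5)-(8).

    states: list of state vectors, one per subfilter (length 1 or 2).
    params: list of subfilter specs: ('first', lam) or ('second', a1, a2).
    Only the primary subfilter sees u_d directly; each secondary subfilter
    is driven by the delayed last state component of the subfilter before
    it (the preferred loose coupling). Returns the updated state list.
    """
    new_states = []
    drive = u_d                         # primary subfilter input, eq (5)/(7)
    for (kind, *coef), x in zip(params, states):
        if kind == 'first':
            lam = coef[0]
            new = np.array([lam * x[0] + (1.0 - lam) * drive])
        else:                           # second order, eq (7)/(8)
            a1, a2 = coef
            x1 = a1 * x[0] + a2 * x[1] + (1.0 - a1 - a2) * drive
            new = np.array([x1, x[0]])  # second state = delayed first state
        drive = x[-1]                   # loose coupling uses the delayed state
        new_states.append(new)
    return new_states
```

For example, params=[('second', a11, a12), ('first', lam)] realizes a second order primary subfilter loosely coupled to a first order secondary subfilter.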
If the device is operating in horizon mode, current states along the path are maintained in a separate storage area so as not to corrupt the prediction mode states. In horizon mode, k indexes the input path and the states are initialized at the start of the path (k=0) to the prediction mode states. In addition the states at the output of the filter unit are buffered for use in reverse horizon mode.
V.1.4 Forward Runtime Operation of the Non-Linear Analyzer
Referring again to FIG. 1, the non-linear analyzer (26) receives the current states output by the filter units (24) and maps them through its static model, such as a neural net or the hybrid model described above, to produce the normalized prediction y(k) at its output node.
V.1.5 Forward Runtime Operation of the Postprocessing Unit
The postprocessing unit (32) in FIG. 1 converts the normalized prediction y(k) received from the non-linear analyzer (26) into engineering units:
yE(k)=sy(k)+o (10)
The scale s and offset o, as stored in the configuration file (30—FIG. 1), map normalized units back to the engineering units of the predicted measurement and are determined during configuration mode as described in section V.3.1.
V.2 Reverse Runtime Operation of the Prediction Device
The reverse horizon mode of operation is only allowed immediately following horizon mode operation. Horizon mode operation buffers the states (28) output by the filter units (24) over the course of the forward path. The purpose of reverse horizon mode is to obtain the sensitivity of any point y(k) of the prediction path (output by the device in horizon mode) with respect to any point in the input path u(l).
In order to use the invention for process control applications, the mathematical derivatives of the prediction with respect to the inputs are required. The mathematical derivatives measure how sensitive a state is in response to a small change in an input. The dynamic nature of the predictive device means that a change in input at time k will start to have an effect on the output as soon as the minimum dead-time has passed and will continue to have an effect infinitely into the future. In most practical applications systems are identified to have fading memory so that the effect into the future recedes with time. For MPC applications the aim is to plan a sequence of moves for the inputs corresponding to manipulated variables (MVs). The effect of these moves needs to be predicted on the controlled variables (CVs) along a prediction path. A constrained optimization algorithm is then used to find the move sequences that predict an optimal prediction path according to some desired criteria.
In reverse horizon mode, the external device controller specifies the output path index k. The device then outputs in sequence the sensitivities (64) in reverse order at the input nodes of the device. In the detailed description below, the sensitivity of the output YE(k) of the device with respect to any variable v is represented by Ωkv. It is this sensitivity value, rather than an external data value that is fed back through the device when operating in reverse horizon mode.
V.2.1 Reverse Runtime Operation of the Postprocessing Unit
The reverse operation of the postprocessing unit (32) is to scale data received at its output node using the inverse of the feedforward scaling shown in equation (10):
Ωky(k)=sΩkyE(k) (11)
Since the sensitivity of the output with respect to itself is:
ΩkyE(k)=1 (12)
the postprocessing unit always receives the value of 1 at its output node in reverse operation.
V.2.2 Reverse Runtime Operation of the Non-Linear Analyzer
The reverse runtime operation of a neural net model is well known to those skilled in the art and is shown in FIG. 10. The output from the reverse operation of the postprocessing unit Ωky(k) is presented at the output node of the non-linear analyzer (26). The information flows in a reverse manner through the non-linear analyzer (26) and the resulting sensitivities (62), namely the values Ωkxs(k) with respect to the states of each filter unit, are output at the input nodes of the non-linear analyzer (26).
V.2.3 Reverse Runtime Operation of a Filter Unit
The effect of a change in the delayed input ud(l) on the sequence of states being output from a filter unit (24) in horizon mode is complex, due to the dependence of a subfilter's states on both the previous subfilter's states and the subfilter's own previous states. An efficient solution can be derived using the chain rule for ordered derivatives (Werbos, 1994) and is achieved by the reverse operation of the filter unit (24). In reverse horizon mode, the output of each filter unit (24) receives the vector of sensitivities Ωkxs(k) propagated back from the non-linear analyzer (26) operating in reverse mode, and these are propagated backward in time through the transposed subfilter operations:
Ωkxs(l−1)=AsTΩkxs(l)+Γs+1TΩkxs+1(l) (13)
Ωkud(l)=b1TΩkx1(l)+ . . . +bSTΩkxS(l) (14)
where ΓS+1 is taken to be zero.
The operation of these equations is shown in FIG. 11.
The reverse operation of a matrix operation (1132, 1134) or a vector operation (1136) is represented mathematically as the transpose of the forward operation. The physical justification for this is shown in FIG. 12.
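The sketch below illustrates this reverse pass for the general subfilter recursion of equation (4), using the transposed operations just described. It is a minimal illustration of equations (13) and (14) as reconstructed above, not a definitive implementation, and all names are assumptions.

```python
import numpy as np

def reverse_filter_unit(A, Gamma, b, grad_states):
    """Reverse (adjoint) pass through one filter unit via the chain rule
    for ordered derivatives; a sketch of equations (13) and (14).

    A, Gamma, b: per-subfilter matrices/vectors of equation (4); Gamma[0]
    is unused since the primary subfilter has no coupling input.
    grad_states: list of sensitivities dYE(k)/dx_s at the target index k,
    as delivered by the reverse pass of the non-linear analyzer.
    Yields dYE(k)/du_d(l) for l = k, k-1, ... until the caller stops.
    """
    S = len(A)
    g = [gs.copy() for gs in grad_states]
    while True:
        # equation (14): collect the input sensitivity at this time step
        yield sum(float(b[s] @ g[s]) for s in range(S))
        # equation (13): push state sensitivities one step back in time
        g = [A[s].T @ g[s] + (Gamma[s + 1].T @ g[s + 1] if s + 1 < S else 0.0)
             for s in range(S)]
```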
V.2.4 Reverse Runtime Operation of a Delay Unit
The reverse operation of a delay unit (22) corresponds to a delay in the reverse sequencing:
Ωku(l)=Ωkud(l+d) (15)
V.2.5 Reverse Runtime Operation of a Preprocessing Unit
The reverse operation of a preprocessing unit (20) is to scale data received at its output node using the inverse of the feedforward scaling shown in equation (1):
ΩkuE(l)=sΩku(l) (16)
V.3 Configuration Mode
The predictive device is configured, in the preferred embodiment, using training data collected from the process. However, a process engineer can override any automated configuration settings. The training data should comprise one or more data sets collected at the same base sample time that will be used by the external device controller to present data to the predictive device in prediction mode. Each set of data should represent a contiguous sequence of representative process operation.
In order to allow operator approval or override of the configuration settings, the training of the predictive device is done in stages, each stage representing a major component of the predictive device.
V.3.1 Configuring the Preprocessing and Postprocessing Units
The scale and offset of a preprocessing or postprocessing unit is determined from the desire to map the minimum Emin and maximum Emax of the corresponding variable's engineering units to the minimum Nmin and maximum Nmax of the normalized units:
s=(Nmax−Nmin)/(Emax−Emin) (17)
o=Nmin−sEmin (18)
The preferred normalized units have Nmin=−1, Nmax=1. The engineering units may be different for each input variable, leading to a different scale and offset for each preprocessing/postprocessing unit.
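A minimal sketch of this configuration calculation, per equations (17) and (18), with illustrative names:

```python
def fit_scale_offset(e_min, e_max, n_min=-1.0, n_max=1.0):
    """Equations (17)-(18): solve u = s*uE + o so that [e_min, e_max]
    maps onto [n_min, n_max]."""
    s = (n_max - n_min) / (e_max - e_min)
    return s, n_min - s * e_min

# e.g. an input spanning 0-150 engineering units mapped to [-1, 1]:
s, o = fit_scale_offset(0.0, 150.0)     # s = 2/150, o = -1.0
```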
V.3.2 Configuring a Delay Unit
The configuration of a delay unit (22) is not a central aspect of this application. In the preferred embodiment, the dead time d is estimated automatically from training data as described in section V.1 (by comparing auto-regressive models at a variety of candidate dead times) and is constrained to lie within operator-specified bounds:
dmin≦d≦dmax
V.3.3 Configuring a Filter Unit
A practical means of configuring a filter unit (24) is an essential aspect of this invention. The preferred method of configuration is initialized using the simplified filter structure shown in FIG. 13 and proceeds iteratively through the following steps.
Step 1
The operator specifies an appropriate dominant time constant Ti associated with each input variable. This can be specified from engineering knowledge or through an automated approach such as Frequency Analysis or a Back Propagation Through Time algorithm. The value of the initial time constant is not critical; the proposed configuration method automatically searches the dominant time range for the best values.
Step 2
For each input, initialize the filter structure in FIG. 13 with a spread of decoupled first order subfilters (five in the preferred embodiment) whose time constants span a range around the current estimate of the dominant time constant Ti.
In this simple filter structure, each subfilter (1302, 1304, 1306) yields a corresponding single state (1312, 1314, 1316) which is decoupled from the other subfilter states. This initial filter structure represents the equation
x(k)=Ax(k−1)+Bud(k) (19)
which has a simplified diagonal block structure of the form
A=diag(A1, . . . , AN), B=diag(b1, . . . , bN) (20)
Ai=diag(λi1, . . . , λi5), bi=[(1−λi1) . . . (1−λi5)]T (21)
so that each state obeys the first order recursion of equation (5) with its own time constant.
Step 3
Map the contiguous input training data through the delay units (22) and filter structure (24) to obtain a set of training state vectors {X(k)|k=1, . . . , T}. Then find a vector c that provides the best linear mapping of the states to the corresponding target outputs {Y(k)|k=1, . . . , T}. One way of doing this is to use the Partial Least Squares method that is well known to those skilled in the art. This results in a multi-input, single-output (MISO) state space system {A, b, cT} in which equations (19), (20), and (21) are supplemented by the equation:
y(k)=cTx(k) (22)
where
cT=[c1T c2T . . . cNT] (23)
is partitioned conformally with the block structure of the state vector.
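The preceding two steps can be sketched as follows. Ordinary least squares is shown here as a stand-in for the Partial Least Squares fit named in the text, and all names are illustrative.

```python
import numpy as np

def run_diagonal_filters(u_d, lams):
    """Step 2: map one delayed input stream through decoupled first order
    filters x_s(k) = lam_s*x_s(k-1) + (1-lam_s)*u_d(k), eq (19)-(21)."""
    X = np.zeros((len(u_d), len(lams)))
    x = np.zeros(len(lams))
    for k, u in enumerate(u_d):
        x = lams * x + (1.0 - lams) * u
        X[k] = x
    return X

# Step 3: stack the state trajectories of all inputs and fit the output map.
# X_all = np.hstack([run_diagonal_filters(u_i, lams_i) for each input i])
# c, *_ = np.linalg.lstsq(X_all, Y, rcond=None)    # y(k) = c^T x(k), eq (22)
```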
Step 4
Balance each subsystem {Ai, bi, ciT} of the MISO block diagonal system based on controllability and observability theory. The balancing procedure allows order reduction of a state space system by transforming the states so that the controllability and observability properties of the original system are substantially concentrated in the first part of the state vector.
For each input variable, indexed by i, perform the balancing procedure on the sub-system {Ai, bi, ciT}. Balancing of a linear state space system is a method of reduction well known to those skilled in the art. Other methods of model reduction, such as Hankel reduction, can be substituted. A summary of the balancing method is now given.
For each sub-system {Ai, bi, ciT}, compute the controllability and observability Gramians Pi>0, Qi>0 that satisfy the equations:
AiPiAiT−Pi=−bibiT
AiTQiAi−Qi=−ciciT (24)
Find a matrix Ri, using the Cholesky factorization method, such that
Pi=RiTRi. (25)
Using the singular value decomposition method, diagonalize to obtain the following decomposition:
RiQiRiT=UiΣi2UiT (26)
Define
Ti−1=RiTUiΣi−1/2 (27)
then
TiPiTiT=(TiT)−1QiTi−1=Σi (28)
and the balanced subsystem is obtained through a similarity transform on the states as:
Âi=TiAiTi−1, b̂i=Tibi, ĉiT=ciTTi−1 (29)
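A sketch of the balancing procedure of equations (24) through (29), assuming SciPy's discrete Lyapunov solver is acceptable for computing the Gramians; the function name is illustrative, and the subsystem is assumed controllable and observable so that both Gramians are positive definite.

```python
import numpy as np
from scipy.linalg import cholesky, solve_discrete_lyapunov, svd

def balance_subsystem(A, b, c):
    """Balance one {A_i, b_i, c_i^T} subsystem, per equations (24)-(29).
    Returns the balanced realization and the diagonal Sigma used for
    order selection in step 6."""
    # equation (24): A P A^T - P = -b b^T and A^T Q A - Q = -c c^T
    P = solve_discrete_lyapunov(A, np.outer(b, b))
    Q = solve_discrete_lyapunov(A.T, np.outer(c, c))
    R = cholesky(P, lower=False)              # equation (25): P = R^T R
    U, sig2, _ = svd(R @ Q @ R.T)             # equation (26): R Q R^T = U S^2 U^T
    Sigma = np.sqrt(sig2)
    T_inv = R.T @ U @ np.diag(Sigma ** -0.5)  # equation (27)
    T = np.linalg.inv(T_inv)
    # equation (29): similarity transform to the balanced realization
    return T @ A @ T_inv, T @ b, c @ T_inv, Sigma
```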
Step 5
Using the balanced subsystems, find the dominant time constant for each input by reducing each balanced model to a first order model. This is done by considering the dynamics of all but the first state of each input's filter unit (24) to have reached steady state, which yields an updated estimate of the dominant time constant Ti for each input.
Check the convergence of the dominant time constant estimation: if the change in the estimate satisfies |Ti−Ti(previous)|<ε for every input, or the number of iterations has exceeded the maximum allowable, go to step 6. Otherwise, return to step 2. The maximum number of iterations and ε are parameters of the training method.
Step 6
Once an accurate estimate of the dominant time constant is available for each input variable, the eigenvalues {λisP|s=1, . . . , 5} of the controllability gramian P̂i (equivalently the observability gramian) are calculated; these are always positive and real because the controllability gramian is positive definite. The final order Si of each filter unit (24) is then calculated as the smallest Si such that
(λi1P+ . . . +λiSiP)/(λi1P+ . . . +λi5P)≧θ (34)
where θ is a parameter of the training method and is a value less than 1, a good practical value being 0.95. This order represents the total number of states of an individual filter unit (24).
After determining the model order, truncate the Âi matrix so that just the first Si states are used; this truncation is done by selecting the upper left Si×Si submatrix of Âi. Then calculate the Si eigenvalues of the truncated Âi matrix {λis|s=1, . . . , Si}. Now configure each filter unit (24) using the preferred first and second order subfilter configurations with the preferred couplings as shown in FIG. 5 through FIG. 8. Use a first order subfilter for each real eigenvalue. Use a second order subfilter for each pair of complex conjugate eigenvalues {λ, λ̄}, where, in the notation of FIG. 7 and FIG. 8:
a11=λ+λ̄
a12=−λλ̄ (35)
The preferred ordering of these subfilter units is according to time-constant, with the fastest unit being the primary subfilter.
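This final stage can be sketched as follows. The diagonal of Σ returned by the balancing step serves as the Gramian eigenvalues λisP of the balanced system, and the mapping to subfilter coefficients follows equation (35); the function name and the pole-magnitude ordering heuristic are illustrative.

```python
import numpy as np

def configure_filter_unit(A_hat, Sigma, theta=0.95):
    """Step 6 sketch: choose the order S_i from the cumulative share of the
    balanced Gramian eigenvalues (entries of Sigma), truncate, and map the
    eigenvalues of the truncated A to subfilter coefficients (eq (35))."""
    share = np.cumsum(Sigma) / np.sum(Sigma)
    S_i = int(np.searchsorted(share, theta)) + 1   # smallest order reaching theta
    A_t = A_hat[:S_i, :S_i]                        # upper left S_i x S_i block
    subfilters = []
    for lam in np.linalg.eigvals(A_t):
        if abs(lam.imag) < 1e-9:                   # real pole: first order unit
            subfilters.append((abs(lam.real), ('first', float(lam.real))))
        elif lam.imag > 0:                         # one entry per conjugate pair
            a11 = 2.0 * lam.real                   # lam + conj(lam)
            a12 = -abs(lam) ** 2                   # -lam * conj(lam)
            subfilters.append((abs(lam), ('second', a11, a12)))
    subfilters.sort(key=lambda t: t[0])            # fastest pole first (primary)
    return [spec for _, spec in subfilters]
```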
Another favored approach is to perform model reduction by initializing with Laguerre type filter units as described in section V.4.2, rather than with the simple diagonal filter structure of FIG. 13. A sufficient number of Laguerre type filter units spans the full range of dynamics in the process, so the iterative procedure described above is not needed. In fact, a non-linear model reduction can be achieved by performing a linear model reduction on the linear system whose states are defined by the Laguerre filters and whose outputs are defined by the pre-transformed values at the hidden layer of the neural net:
ξ1(k), . . . ,ξH(k)
V.3.4 Configuring the Non-Linear Analyzer
The configuration of the non-linear analyzer (26) is not a central aspect of this application. The non-linear analyzer (26) is trained to optimally map the outputs of the filter units (24) to the corresponding target output. Training of a neural net is described in detail in (Bishop, 1995), for example. In one embodiment, the non-linear analyzer is replaced by the apparatus for a constrained non-linear approximator disclosed in U.S. patent application Ser. No. 09/892,586, herein incorporated by reference. In particular, page 33, lines 26-29, as filed in application Ser. No. 09/892,586, describes the additional calculations needed when the inputs to the non-linear approximator are filtered states.
V.4 Universality of the Prediction Device
The predictive device is shown, in this section, to be able to approximate any time invariant, causal, fading memory system (defined below). In order to prove this, some precise notation and definitions will be needed.
V.4.1 Notation and Definitions for Universality Proof
Let Z denote the integers, Z+ the non-negative integers and Z− the non-positive integers respectively. A variable u represents a vector or a sequence in accordance with the context, while u(k) represents a value of the sequence at the particular time k.
For any positive integer N>0, RN denotes the normed linear space of real N-vectors (viewed as column vectors) with norm |u|=max1≦i≦N|ui|. Matrices are denoted in uppercase bold. Functions are denoted in italic lowercase if they are scalars and in bold if they are vector valued.
Let lN∞(Z) (respectively lN∞(Z+) and lN∞(Z−)) be the space of bounded RN-valued sequences defined on Z (respectively Z+ and Z−) with the norm:
∥u∥∞=supkεZ|u(k)|
For every decreasing sequence w:Z+→(0,1],
define the following weighted norm:
∥u∥w=supkεZ−|u(k)|w(−k)
A function F:lN∞(Z−)→R is called a functional on lN∞(Z−), and a function ℑ:lN∞(Z−)→l∞(Z) is called an operator. As a notational simplification the parentheses around the arguments of functionals and operators are usually dropped; for example, Fu rather than F[u] and ℑu(k) rather than ℑ[u](k).
Two specific operators are important. The delay operator defined by
Qdu(k)≡u(k−d)
and the truncation operator P defined by
Pu(k)=u(k), k≦0
The following definitions make precise the terms used to characterize the class of systems approximated by the predictive device.
Time invariant: An operator ℑ is time-invariant if Qdℑ=ℑQd ∀dεZ.
Causality: ℑ is causal if u(l)=v(l)∀l≦k →ℑu(k)=ℑv(k).
Fading Memory: ℑ:lN∞(Z)→l∞(Z) has fading memory on a subset K−⊂lN∞(Z−) if there is a decreasing sequence w:Z+→(0,1] such that for each u, vεK− and given ε>0 there is a δ>0 such that
∥u−v∥w<δ→|ℑu(0)−ℑv(0)|<ε
Every sequence u in lN∞(Z−) can be associated with a causal extension sequence uc in lN∞(Z) defined as:
uc(k)=u(k) for k≦0; uc(k)=0 for k>0
and each time invariant causal operator ℑ can be associated with a functional F on lN∞(Z−) defined by
Fu=ℑuc(0)
The operator ℑ can be recovered from its associated functional F via
ℑu(k)=FPQ−ku (36)
Then, ℑ is continuous if and only if F is, so the above equations establish a one to one correspondence between time invariant causal continuous operators and functionals F on lN∞(Z−). Next, the definition of the Laguerre systems is given. These can be configured in the general filter structure of FIG. 4.
V.4.2 Laguerre Systems
The set of the Laguerre systems is defined in the complex z-transform plane as:
Lsi(z)=(√(1−ai2)/(z−ai))((1−aiz)/(z−ai))s−1, s=1, . . . , Si
where ai (|ai|<1) is the real Laguerre pole associated with input i, and s indexes the cascade of a low pass filter followed by identical all-pass stages.
The whole set of Laguerre systems can be expressed in a state space form that shows a decoupled input form and therefore can be mapped to the general filter structure in FIG. 4. Each filter unit (24) is configured as a single structured {Ai, bi} subfilter, in which Ai is a lower triangular matrix and bi=[1 0 . . . 0]T.
The key point here is that the representation is decoupled by input. Balancing can be done to decrease the order of the Laguerre systems, and similarity transforms can be done on the Laguerre filters in order to simplify the configuration to utilize the preferred subfilter units. Similarity transformations do not affect the accuracy of the representation and so proving that the use of Laguerre filters decoupled by input approximate any time invariant, causal, fading memory system is equivalent to proving the preferred subfilter structure can approximate any such system. The balancing is a practical mechanism to reduce order without degrading performance.
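For illustration, one textbook state space realization of a discrete Laguerre filter bank is sketched below. It differs from the {Ai, bi} parameterization quoted above only by a similarity transform, which, as noted, does not affect the accuracy of the representation; the function name and the folding of the output gain into the readout are assumptions.

```python
import numpy as np

def laguerre_state_space(a, n_states):
    """One textbook realization of a discrete Laguerre filter bank with
    pole a in (-1, 1): x(k) = A x(k-1) + b u(k), A lower triangular."""
    A = np.zeros((n_states, n_states))
    b = np.zeros(n_states)
    for i in range(n_states):
        A[i, i] = a                     # common low pass pole on the diagonal
        b[i] = (-a) ** i
        for j in range(i):              # cascaded all-pass coupling terms
            A[i, j] = (1.0 - a * a) * (-a) ** (i - j - 1)
    return A, b
```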
V.4.3 Proof of Approximation Ability of Laguerre Systems
First some preliminary results are stated:
Stone-Weierstrass Theorem (Boyd, 1985)
Suppose E is a compact metric space and G a set of continuous functionals on E that separates points, that is, for any distinct u, vεE there is a GεG such that Gu≠Gv. Then for any continuous functional F on E and given ε>0, there are functionals
GsiεG, s=1, . . . , Si, i=1, . . . , N
and a polynomial p: RS→R, where S=S1+ . . . +SN, such that for all uεE
|Fu−p(G11u, . . . , GSNNu)|<ε
The reason for the group indexing, which is not necessary for a general statement of the Stone-Weierstrass theorem, will become apparent in Lemma 2, where each group of functionals is associated with the Laguerre operators acting on one input. In addition, three lemmas are necessary before the theorem can be proved.
Lemma 1: K−≡{uεlN∞(Z−)|∥u∥∞≦c1} is compact with the ∥•∥w norm.
Proof: Let u(p) be any sequence in K−. We will find a u(0)εK− and a subsequence of u(p) converging in the ∥•∥w norm to u(0). It is well known that K− is not compact in lN∞(Z−) with the usual supremum norm ∥•∥∞ (Kolmogorov, 1980). For each l, let K−[−l,0] be the restriction of K− to [−l,0]. K−[−l,0] is uniformly bounded by c1 and finite dimensional, hence compact in lN∞[−l,0]. Since K−[−l,0] is compact for every l, a diagonal argument yields a subsequence u(pj) and a u(0)εK− such that
u(pj)(k)→u(0)(k) for every kεZ−, uniformly on each restriction [−l,0] (37)
Now, let ε>0. Since w(k)→0 as k→∞, we can find an m0>0 such that w(m0)≦ε/c1. Since u(pj) and u(0) are bounded by c1, for every k≦−m0
|u(pj)(k)−u(0)(k)|w(−k)≦2c1w(m0)≦2ε (38)
Now from equation (37) we can find m1 such that for all j≧m1
sup−m0≦k≦0|u(pj)(k)−u(0)(k)|w(−k)<ε (39)
so by equation (38) and equation (39) we can conclude that for all j≧m1
∥u(pj)−u(0)∥w≦2ε
which proves that K− is compact.
Lemma 2. The set of functionals {Gsi} associated with the discrete Laguerre operators is continuous with respect to the ∥•∥w norm, that is, given any ε>0 there exists a δ>0 such that
∥u−v∥w<δ→|Gsiu−Gsiv|<ε
Proof: Consider the functional Gsi(•) associated with the Laguerre operator Lsi(•).
Given ε>0, choose a δ>0 such that:
|ui−vi|w<δ→|Gsiui−Gsivi|<ε (40)
This is possible due to the continuity of the one dimensional Laguerre operators with respect to the weighted norm as shown in (Sentoni et al, 1996). Therefore, from equation (40) and the definition of the functionals
∥u−v∥w<δ→|ui−vi|w<δ→|Gsiu−Gsiv|=|Gsiui−Gsivi|<ε (41)
which proves Lemma 2
Lemma 3. The {Gsi} separate points in lN∞(Z−), that is, for any distinct u, vεlN∞(Z−) there is a GsiεG such that Gsiu≠Gsiv.
Proof. Suppose u, vεlN∞(Z−) are equal except for the i-th component. Then
Gsiu≠Gsiv⇔Gsiui≠Gsivi (42)
by the definition of the functionals. It is known from one dimensional theory (Sentoni et al, 1996) that for any distinct ui, viεl∞(Z−) there is a Gsi such that Gsiui≠Gsivi; this result together with equation (42) proves Lemma 3.
Approximation Theorem
Now given ε>0, Lemmas 1, 2, 3 together with the Stone-Weierstrass theorem imply that given any continuous functional F on K−, there is a polynomial p: RS→R such that for all uεK−
|Fu−p(G11u, . . . , GSNNu)|<ε (43)
Because the Laguerre systems are continuous and act on a bounded space, the values Gsiu lie in bounded real intervals, so the polynomial p can be replaced by any static model that acts as a universal approximator on a bounded input space, for example, a neural net. In other words, (43) can be replaced by
|Fu−NN(G11u, . . . , GSNNu)|<ε (44)
A time invariant causal operator ℑ can be recovered from its associated functional through equation (36) as
ℑu(k)=FPQ−ku
Now let uεK and kεZ, so PQ−kuεK−, hence
|FPQ−ku−NN(G11PQ−ku, . . . , GSNNPQ−ku)|<ε (45)
Since the last equation is true for all kεZ, we conclude that for all uεK
∥ℑu−ℑ̂u∥∞<ε
where ℑ̂ denotes the operator realized by the Laguerre systems followed by the neural net.
In other words, it is possible to approximate any nonlinear discrete time invariant operator having fading memory on K, with a finite set of discrete Laguerre systems followed by a single hidden layer neural net. This completes the proof.
V.5 Equivalents
Although the foregoing details refer to particular preferred embodiments of the invention, it should be understood that the invention is not limited to these details. Substitutions and alterations, which will occur to those of ordinary skill in the art, can be made to the detailed embodiments without departing from the spirit of the invention. These modifications are intended to be within the scope of the present invention.
This application is a continuation-in-part of application Ser. No. 09/160,128 now U.S. Pat. No. 6,453,308, filed Sep. 24, 1998, which claims the benefit of U.S. Provisional Application No. 60/060,638 filed Oct. 1, 1997, the contents of which are incorporated herein by reference in their entirety.
References Cited — U.S. Patent Documents:

Number | Name | Date | Kind
---|---|---|---
5535135 | Bush et al. | Jul 1996 | A
5659667 | Buescher et al. | Aug 1997 | A
5877954 | Klimasauskas et al. | Mar 1999 | A
5992383 | Scholten et al. | Nov 1999 | A
6278962 | Klimasauskas et al. | Aug 2001 | B1
6751602 | Kotoulas et al. | Jun 2004 | B1
6823244 | Breed | Nov 2004 | B1

References Cited — Foreign Patent Documents:

Number | Date | Country
---|---|---
WO 9728669 | Aug 1997 | WO
WO 9917175 | Apr 1999 | WO

Prior Publication Data:

Number | Date | Country
---|---|---
20020178133 A1 | Nov 2002 | US

Related U.S. Application Data:

Number | Date | Country
---|---|---
60/060,638 (provisional) | Oct 1997 | US

Continuation Data:

Relation | Number | Date | Country
---|---|---|---
Parent | 09/160,128 | Sep 1998 | US
Child | 10/045,668 | — | US