Patent Application 20030110148

Publication Number: 20030110148
Date Filed: October 19, 2001
Date Published: June 12, 2003
Abstract
A control system for optimizing a shock absorber having a non-linear kinetic characteristic is described. The control system uses a fitness (performance) function that is based on the physical laws of minimum entropy and biologically inspired constraints relating to mechanical constraints and/or rider comfort, driveability, etc. In one embodiment, a genetic analyzer is used in an off-line mode to develop a teaching signal. An information filter is used to filter the teaching signal to produce a compressed teaching signal. The compressed teaching signal can be approximated online by a fuzzy controller that operates using knowledge from a knowledge base. In one embodiment, the control system includes a learning system, such as a neural network that is trained by the compressed training signal. The learning system is used to create a knowledge base for use by an online fuzzy controller. The online fuzzy controller is used to program a linear controller.
Description
BACKGROUND
[0001] 1. Field of the Invention
[0002] The disclosed invention relates generally to control systems, and more particularly to electronically controlled suspension systems.
[0003] 2. Description of the Related Art
[0004] Feedback control systems are widely used to maintain the output of a dynamic system at a desired value in spite of external disturbances that would displace it from the desired value. For example, a household space-heating furnace, controlled by a thermostat, is an example of a feedback control system. The thermostat continuously measures the air temperature inside the house, and when the temperature falls below a desired minimum temperature the thermostat turns the furnace on. When the interior temperature reaches the desired minimum temperature, the thermostat turns the furnace off. The thermostat-furnace system maintains the household temperature at a substantially constant value in spite of external disturbances such as a drop in the outside temperature. Similar types of feedback controls are used in many applications.
[0005] A central component in a feedback control system is a controlled object, a machine, or a process that can be defined as a “plant”, having an output variable or performance characteristic to be controlled. In the above example, the “plant” is the house, the output variable is the interior air temperature in the house and the disturbance is the flow of heat (dispersion) through the walls of the house. The plant is controlled by a control system. In the above example, the control system is the thermostat in combination with the furnace. The thermostat-furnace system uses a simple on-off feedback control system to maintain the temperature of the house. In many control environments, such as motor shaft position or motor speed control systems, simple on-off feedback control is insufficient. More advanced control systems rely on combinations of proportional feedback control, integral feedback control, and derivative feedback control. A feedback control based on a sum of proportional feedback, plus integral feedback, plus derivative feedback, is often referred to as PID control.
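For illustration only, a discrete-time PID control law of the kind described above can be sketched as follows; the toy plant, the gains Kp, Ki, Kd, and the time step are placeholder choices and are not taken from this disclosure.

```python
# Minimal discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and approximate the derivative term.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a first-order plant x' = -x + u toward a reference of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(1000):
    u = pid.update(1.0 - x)          # error = reference - plant output
    x += (-x + u) * 0.01             # simple Euler step of the plant
print(round(x, 3))
```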
[0006] A PID control system is a linear control system that is based on a dynamic model of the plant. In classical control systems, a linear dynamic model is obtained in the form of dynamic equations, usually ordinary differential equations. The plant is assumed to be relatively linear, time invariant, and stable. However, many real-world plants are time-varying, highly non-linear, and unstable. For example, the dynamic model may contain parameters (e.g., masses, inductance, aerodynamics coefficients, etc.), which are either only approximately known or depend on a changing environment. If the parameter variation is small and the dynamic model is stable, then the PID controller may be satisfactory. However, if the parameter variation is large or if the dynamic model is unstable, then it is common to add adaptive or intelligent (AI) control functions to the PID control system.
[0007] AI control systems use an optimizer, typically a non-linear optimizer, to program the operation of the PID controller and thereby improve the overall operation of the control system.
[0008] Classical advanced control theory is based on the assumption that near equilibrium points all controlled “plants” can be approximated as linear systems. Unfortunately, this assumption is rarely true in the real world. Most plants are highly nonlinear, and often do not have simple control algorithms. In order to meet this need for nonlinear control, systems have been developed that use soft computing concepts such as genetic algorithms, fuzzy neural networks, fuzzy controllers and the like. By these techniques, the control system evolves (changes) over time to adapt itself to changes that may occur in the controlled “plant” and/or in the operating environment.
[0009] Currently, self-organizing control systems based on fuzzy controllers suffer from two drawbacks. First, when a genetic analyzer is used to develop a teaching signal for a fuzzy neural network, the teaching signal typically contains unnecessary stochastic noise, making it difficult to later develop an approximation to the teaching signal. Second, the fitness functions used for the genetic analyzers in self-organizing suspension control systems typically optimize the control system according to some desired control paradigm without reference to human factors such as rider comfort.
SUMMARY
[0010] The present invention solves these and other problems by providing a control system for optimizing a shock absorber system having a non-linear kinetic characteristic. The control system uses a fitness (performance) function that is based on the physical laws of minimum entropy and biologically inspired constraints relating to rider comfort, driveability, etc. In one embodiment, a genetic analyzer is used in an off-line mode to develop a teaching signal. An information filter is used to filter the teaching signal to produce a compressed teaching signal. The compressed teaching signal can be approximated online by a fuzzy controller that operates using knowledge from a knowledge base. The control system can be used to control complex plants described by nonlinear, unstable, dissipative models. The control system is configured to use smart simulation techniques for controlling the shock absorber (plant).
[0011] In one embodiment, the control system comprises a learning system, such as a neural network that is trained by a genetic analyzer. The genetic analyzer uses a fitness function that maximizes sensor information while minimizing entropy production based on biologically-inspired constraints.
[0012] In one embodiment, a suspension control system uses a difference between the time differential (derivative) of entropy (called the production entropy rate) from the learning control unit and the time differential of the entropy inside the controlled process (or a model of the controlled process) as a measure of control performance. In one embodiment, the entropy calculation is based on a thermodynamic model of an equation of motion for a controlled process plant that is treated as an open dynamic system.
[0013] The control system is trained by a genetic analyzer that generates a teaching signal. The optimized control system provides an optimum control signal based on data obtained from one or more sensors. For example, in a suspension system, a plurality of angle and position sensors can be used. In an off-line learning mode (e.g., in the laboratory, factory, service center, etc.), fuzzy rules are evolved using a kinetic model (or simulation) of the vehicle and its suspension system. Data from the kinetic model is provided to an entropy calculator that calculates input and output entropy production of the model. The input and output entropy productions are provided to a fitness function calculator that calculates a fitness function as a difference in entropy production rates for the genetic analyzer constrained by one or more constraints obtained from rider preferences. The genetic analyzer uses the fitness function to develop a training signal for the off-line control system. The training signal is filtered to produce a compressed training signal. Control parameters from the off-line control system are then provided to an online control system in the vehicle that, using information from a knowledge base, develops an approximation to the compressed training signal.
[0014] In one embodiment, the invention includes a method for controlling a nonlinear object (a plant) by obtaining an entropy production difference between a time differentiation (dSu/dt) of the entropy of the plant and a time differentiation (dSc/dt) of the entropy provided to the plant from a controller. A genetic algorithm that uses the entropy production difference as a fitness (performance) function evolves a control rule in an off-line controller. The nonlinear stability characteristics of the plant are evaluated using a Lyapunov function. The genetic analyzer minimizes entropy and maximizes sensor information content. Filtered control rules from the off-line controller are provided to an online controller to control a suspension system. In one embodiment, the online controller controls the damping factor of one or more shock absorbers (dampers) in the vehicle suspension system.
[0015] In some embodiments, the control method also includes evolving a control rule relative to a variable of the controller by means of a genetic algorithm. The genetic algorithm uses a fitness function based on a difference between a time differentiation of the entropy of the plant (dSu/dt) and a time differentiation (dSc/dt) of the entropy provided to the plant. The variable can be corrected by using the evolved control rule.
[0016] In one embodiment, the invention comprises a self-organizing control system adapted to control a nonlinear plant. The AI control system includes a simulator configured to use a thermodynamic model of a nonlinear equation of motion for the plant. The thermodynamic model is based on a Lyapunov function (V), and the simulator uses the function V to analyze control for a state stability of the plant. The control system calculates an entropy production difference between a time differentiation of the entropy of said plant (dSu/dt) and a time differentiation (dSc/dt) of the entropy provided to the plant by a low-level controller that controls the plant. The entropy production difference is used by a genetic algorithm to obtain an adaptation function wherein the entropy production difference is minimized in a constrained fashion. The genetic algorithm provides a teaching signal. The teaching signal is filtered to remove stochastic noise to produce a filtered teaching signal. The filtered teaching signal is provided to a fuzzy logic classifier that determines one or more fuzzy rules by using a learning process. The fuzzy logic controller is also configured to form one or more control rules that set a control variable of the controller in the vehicle.
[0017] In yet another embodiment, the invention comprises a new physical measure of control quality based on minimum production entropy, and the use of this measure as a fitness function of a genetic algorithm in optimal control system design. This method provides a local entropy feedback loop in the control system. The entropy feedback loop provides for optimal control structure design by relating stability of the plant (using a Lyapunov function) and controllability of the plant (based on production entropy of the control system). The control system is applicable to a wide variety of control systems, including, for example, control systems for mechanical systems, bio-mechanical systems, robotics, electro-mechanical systems, etc.
BRIEF DESCRIPTION OF THE FIGURES
[0018] The above and other aspects, features, and advantages of the present invention will be more apparent from the following description thereof presented in connection with the following drawings.
[0019]
FIG. 1 illustrates a general structure of a self-organizing intelligent control system based on soft computing.
[0020]
FIG. 2 illustrates the structure of a self-organizing intelligent suspension control system with physical and biological measures of control quality based on soft computing.
[0021]
FIG. 3 illustrates the process of constructing the Knowledge Base (KB) for the Fuzzy Controller (FC).
[0022]
FIG. 4 shows twelve typical road profiles.
[0023]
FIG. 5 shows a normalized auto-correlation function for different velocities of motion along the road number 9 from FIG. 4.
[0024]
FIG. 6A is a plot showing results of stochastic simulations based on a one-dimensional Gaussian probability density function.
[0025]
FIG. 6B is a plot showing results of stochastic simulations based on a one-dimensional uniform probability density function.
[0026]
FIG. 6C is a plot showing results of stochastic simulations based on a one-dimensional Rayleigh probability density function.
[0027]
FIG. 6D is a plot showing results of stochastic simulations based on a two-dimensional Gaussian probability density function.
[0028]
FIG. 6E is a plot showing results of stochastic simulations based on a two-dimensional uniform probability density function.
[0029]
FIG. 6F is a plot showing results of stochastic simulations based on a two-dimensional hyperbolic probability density function.
[0030]
FIG. 7 illustrates a full car model.
[0031]
FIG. 8A shows a control damper layout for a suspension-controlled vehicle having adjustable dampers.
[0032]
FIG. 8B shows an adjustable damper for the suspension-controlled vehicle.
[0033]
FIG. 8C shows fluid flow for soft and hard damping in the adjustable damper from FIG. 8B.
[0034]
FIG. 9 shows damper force characteristics for the adjustable dampers illustrated in FIG. 8.
[0035]
FIG. 10 shows the structure of an SSCQ for use in connection with a simulation model of the full car and suspension system.
[0036]
FIG. 11 is a flowchart showing operation of the SSCQ.
[0037]
FIG. 12 shows time intervals associated with the operating mode of the SSCQ.
[0038]
FIG. 13 is a flowchart showing operation of the SSCQ in connection with the GA.
[0039]
FIG. 14 shows the genetic analyzer process and the operations of reproduction, crossover, and mutation.
[0040]
FIG. 15 shows results of variables for the fuzzy neural network.
[0041]
FIG. 16A shows control of a four-wheeled vehicle using two controllers.
[0042]
FIG. 16B shows control of a four-wheeled vehicle using a single controller to control all four wheels.
[0043]
FIG. 17 shows an information filter application results for compression of a normal distribution stochastic signal, including: (a) the source signal; (b) the row-wise information distribution before compression; (c) the signal after compression; and (d) the information distribution after compression.
[0044]
FIG. 18 shows the teaching signal Kc provided by the information filter 241 shown in FIG. 2.
[0045]
FIG. 19 shows the output of the simulation system of control quality for the intelligent control suspension system.
[0046]
FIG. 20 shows the simulation results of fuzzy control of a suspension system.
[0047]
FIG. 21 shows the square of heave jerk for controlled and uncontrolled systems.
[0048]
FIG. 22 shows the square of pitch jerk for controlled and uncontrolled systems.
[0049]
FIG. 23 shows the square of roll jerk for controlled and uncontrolled systems.
[0050] In the drawings, the first digit of any three-digit element reference number generally indicates the number of the figure in which the referenced element first appears. The first two digits of any four-digit element reference number generally indicate the figure in which the referenced element first appears.
DESCRIPTION
[0051]
FIG. 1 is a block diagram of a control system 100 for controlling a plant based on soft computing. In the controller 100, a reference signal y is provided to a first input of an adder 105. An output of the adder 105 is an error signal ε, which is provided to an input of a Fuzzy Controller (FC) 143 and to an input of a Proportional-Integral-Differential (PID) controller 150. An output of the PID controller 150 is a control signal u*, which is provided to a control input of a plant 120 and to a first input of an entropy-calculation module 132. A disturbance m(t) 110 is also provided to an input of the plant 120. An output of the plant 120 is a response x, which is provided to a second input of the entropy-calculation module 132 and to a second input of the adder 105. The second input of the adder 105 is negated such that the output of the adder 105 (the error signal ε) is the value of the first input minus the value of the second input.
[0052] An output of the entropy-calculation module 132 is provided as a fitness function to a Genetic Analyzer (GA) 131. An output solution from the GA 131 is provided to an input of a FNN 142. An output of the FNN 142 is provided as a knowledge base to the FC 143. An output of the FC 143 is provided as a gain schedule to the PID controller 150.
[0053] The GA 131 and the entropy calculation module 132 are part of a Simulation System of Control Quality (SSCQ) 130. The FNN 142 and the FC 143 are part of a Fuzzy Logic Classifier System (FLCS) 140.
[0054] Using a set of inputs, and the fitness function 132, the genetic algorithm 131 works in a manner similar to a biological evolutionary process to arrive at a solution which is, hopefully, optimal. The genetic algorithm 131 generates sets of “chromosomes” (that is, possible solutions) and then sorts the chromosomes by evaluating each solution using the fitness function 132. The fitness function 132 determines where each solution ranks on a fitness scale. Chromosomes (solutions) which are more fit are those chromosomes which correspond to solutions that rate high on the fitness scale. Chromosomes which are less fit are those chromosomes which correspond to solutions that rate low on the fitness scale.
[0055] Chromosomes that are more fit are kept (survive) and chromosomes that are less fit are discarded (die). New chromosomes are created to replace the discarded chromosomes. The new chromosomes are created by crossing pieces of existing chromosomes and by introducing mutations.
[0056] The PID controller 150 has a linear transfer function and thus is based upon a linearized equation of motion for the controlled “plant” 120. Prior art genetic algorithms used to program PID controllers typically use simple fitness functions and thus do not solve the problem of poor controllability typically seen in linearization models. As is the case with most optimizers, the success or failure of the optimization often ultimately depends on the selection of the performance (fitness) function.
[0057] Evaluating the motion characteristics of a nonlinear plant is often difficult, in part due to the lack of a general analysis method. Conventionally, when controlling a plant with nonlinear motion characteristics, it is common to find certain equilibrium points of the plant and to linearize the motion characteristics of the plant in the vicinity of an equilibrium point. Control is then based on evaluating the pseudo (linearized) motion characteristics near the equilibrium point. This technique is scarcely, if at all, effective for plants described by models that are unstable or dissipative.
[0058] Computation of optimal control based on soft computing includes the GA 131 as the first step of global search for an optimal solution on a fixed space of positive solutions. The GA searches for a set of control weights for the plant. First, the weight vector K={k1, . . . , kn} is used by a conventional proportional-integral-differential (PID) controller 150 in the generation of a signal u*=δ(K) which is applied to the plant. The entropy S(δ(K)) associated with the behavior of the plant 120 under this signal is used as a fitness function by the GA 131 to produce a solution that gives minimum entropy production. The GA 131 is repeated several times at regular time intervals in order to produce a set of weight vectors K. The vectors K generated by the GA 131 are then provided to the FNN 142 and the output of the FNN 142 to the fuzzy controller 143. The output of the fuzzy controller 143 is a collection of gain schedules for the PID controller 150 that controls the plant. For the soft computing system 100 based on a genetic analyzer, there is very often no real control law in the classical control sense, but rather, control is based on a physical control law such as minimum entropy production.
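The following is a highly simplified sketch of the kind of search loop described above: a genetic algorithm proposes PID gain vectors K, each vector is evaluated by simulating the closed loop, and an entropy-production-like quantity (here, damper dissipation divided by a nominal temperature) serves as the fitness to be minimized. The toy plant, the dissipation-based entropy estimate, the gain ranges, and the GA settings are all assumptions made for this sketch and are not taken from the patent.

```python
import random

def simulate(gains, dt=0.01, steps=500):
    """Closed-loop simulation of a toy damped oscillator under PID control.
    Returns an entropy-production-like measure (dissipated power / temperature)."""
    kp, ki, kd = gains
    x, v, integral, prev_e = 1.0, 0.0, 0.0, 0.0   # start displaced from the set point
    c, T_nominal = 0.5, 300.0                     # damping coefficient, nominal temperature (assumed)
    s_production = 0.0
    for _ in range(steps):
        e = 0.0 - x
        integral += e * dt
        u = kp * e + ki * integral + kd * (e - prev_e) / dt
        prev_e = e
        a = u - c * v - x                         # toy plant: unit mass, unit stiffness
        v += a * dt
        x += v * dt
        s_production += (c * v * v / T_nominal) * dt   # dS/dt ~ dissipated power / temperature
    return s_production

def genetic_search(pop_size=20, generations=30):
    pop = [[random.uniform(0, 10) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate)         # lower entropy production = fitter
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.1:              # mutation
                child[random.randrange(3)] = random.uniform(0, 10)
            children.append(child)
        pop = parents + children
    return min(pop, key=simulate)

print(genetic_search())
```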
[0059] In order to realize an intelligent mechatronic suspension control system, the structure depicted in FIG. 1 is modified, as shown in FIG. 2, to produce a system 200 for controlling a plant, such as a suspension system. The system 200 is similar to the system 100 with the addition of an information filter 241 and biologically-inspired constraints 233 in the fitness function 132. The information filter 241 is placed between the GA 131 and the FNN 142 such that a solution vector output Ki from the GA 131 is provided to an input of the information filter 241. An output of the information filter 241 is a filtered solution vector Kc that is provided to the input of the FNN 142. In FIG. 2, the disturbance 110 is a road signal m(t) (e.g., measured data or data generated via stochastic simulation). In FIG. 2, the plant 120 is a suspension system and car body. The fitness function 132, in addition to entropy production rate, includes biologically-inspired constraints based on mechanical and/or human factors. In one embodiment, the filter 241 includes an information compressor that reduces unnecessary noise in the input signal of the FNN 142. In FIG. 2, the PID controller 150 is shown as a proportional damping force controller.
[0060] As shown in FIG. 3, realization of the structure depicted in FIG. 2 is divided into four development stages. The development stages include a teaching signal acquisition stage 301, a teaching signal compression stage 302, a teaching signal approximation stage 303, and a knowledge base verification stage 304.
[0061] The teaching signal acquisition stage 301 includes the acquisition of a robust teaching signal without the loss of information. In one embodiment, the stage 301 is realized using stochastic simulation of a full car with a Simulation System of Control Quality (SSCQ) under stochastic excitation of a road signal. The stage 301 is based on models of the road, the car body, and the suspension system. Since the desired suspension system control typically aims for the comfort of a human, it is also useful to develop a representation of human needs, and transfer these representations into the fitness function 132 as constraints 233.
[0062] The output of the stage 301 is a robust teaching signal Ki, which contains information regarding the car behavior and corresponding behavior of the control system.
[0063] Behavior of the control system is obtained from the output of the GA 131, and behavior of the car is the response of the model to this control signal. Since the teaching signal Ki is generated by a genetic algorithm, the teaching signal Ki typically has some unnecessary stochastic noise in it. The stochastic noise can make it difficult to realize (or develop a good approximation for) the teaching signal Ki. Accordingly, in a second stage 302, the information filter 241 is applied to the teaching signal Ki to generate a compressed teaching signal Kc. The information filter 241 is based on a theorem of Shannon's information theory (the theorem of compression of data). The information filter 241 reduces the content of the teaching signal by removing that portion of the teaching signal Ki that corresponds to unnecessary information. The output of the second stage 302 is a compressed teaching signal Kc.
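The patent describes the information filter only in terms of Shannon's data-compression theorem, so the following sketch is an illustrative guess at the flavor of such a filter rather than the actual algorithm: it estimates a histogram-based information content for each row of the teaching signal and keeps only rows whose information exceeds a threshold, treating the rest as unnecessary noise. The bin count and threshold are arbitrary placeholders.

```python
import numpy as np

def row_information(row, bins=16):
    """Histogram-based Shannon entropy (bits) of one row of the teaching signal."""
    hist, _ = np.histogram(row, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def information_filter(K_i, threshold_bits=1.0):
    """Keep only the rows of the teaching signal whose information content
    exceeds the threshold; the rest is treated as unnecessary noise."""
    info = np.array([row_information(r) for r in K_i])
    return K_i[info > threshold_bits], info

# Example: a teaching-signal-like matrix with informative and nearly constant rows.
rng = np.random.default_rng(0)
K_i = np.vstack([rng.normal(size=(5, 200)),                                  # informative rows
                 np.full((3, 200), 0.01) + rng.normal(scale=1e-4, size=(3, 200))])
K_c, info = information_filter(K_i)
print(K_i.shape, "->", K_c.shape)
```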
[0064] The third stage 303 includes approximation of the compressed teaching signal Kc by building a fuzzy inference system using a fuzzy logic classifier (FLC) based on a Fuzzy Neural Network (FNN). Information of car behavior can be used for training an input part of the FNN, and corresponding information of controller behavior can be used for output-part training of the FNN.
[0065] The output of the third stage 303 is a knowledge base (KB) for the FC 143 obtained in such a way that it has the knowledge of car behavior and knowledge of the corresponding controller behavior with the control quality introduced as a fitness function in the first stage 301 of development. The KB is a data file containing control laws of the parameters of the fuzzy controller, such as type of membership functions, number of inputs, outputs, rule base, etc.
[0066] In the fourth stage 304, the KB can be verified in simulations and in experiments with a real car, and it is possible to check its performance by measuring parameters that have been optimized.
[0067] To summarize, the development of the KB for an intelligent control suspension system includes:
[0068] I. Obtaining a stochastic model of the road or roads.
[0069] II. Obtaining a realistic model of a car and its suspension system.
[0070] III. Development of a Simulation System of Control Quality with the car model for genetic algorithm fitness function calculation, and introduction of human needs in the fitness function.
[0071] IV. Development of the information compressor (information filter).
[0072] V. Approximation of the teaching signal with a fuzzy logic classifier system (FLCS) and obtaining the KB for the FC.
[0073] VI. Verification of the KB in experiment and/or in simulations of the full car model with fuzzy control.
[0074] I. Obtaining Stochastic Models of the Roads
[0075] It is convenient to consider different types of roads as stochastic processes with different auto-correlation functions and probability density functions. FIG. 4 shows twelve typical road profiles. Each profile shows distance along the road (on the x-axis), and altitude of the road (on the y-axis) with respect to a reference altitude. FIG. 5 shows a normalized auto-correlation function for different velocities of motion along the road number 9 (from FIG. 4). In FIG. 5, a curve 501 and a curve 502 show the normalized auto-correlation function for a velocity θ=1 meter/sec, a curve 503 shows the normalized auto-correlation function for θ=5 meter/sec, and a curve 504 shows the normalized auto-correlation function for θ=10 meter/sec.
[0076] The results of statistical analysis of actual roads, as shown in FIG. 4, show that it is useful to consider the road signals as stochastic processes using the following three typical auto-correlation functions.
R(τ) = B(0) exp{−α1θ|τ|};   (1.1)
R(τ) = B(0) exp{−α1θ|τ|} cos β1θτ;   (1.2)
[The third auto-correlation function, equation (1.3), is given in the original as an image and is not reproduced here.]
[0077] where α1 and β1 are the values of coefficients for single velocity of motion. The ranges of values of these coefficients are obtained from experimental data as:
[0078] α1=0.014 to 0.111; β1=0.025 to 0.140.
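As an illustration of how a coefficient of the form α1θ might be estimated from road data, the following sketch builds a synthetic first-order autoregressive road profile (an assumption, standing in for measured data), computes its normalized auto-correlation function, and fits the exponential model (1.1) by a log-linear least-squares fit.

```python
import numpy as np

def normalized_autocorrelation(x, max_lag):
    """R(tau)/B(0) for lags 0..max_lag-1 of a zero-mean road profile."""
    x = x - x.mean()
    b0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * b0) for k in range(max_lag)])

# Synthetic road profile: a first-order autoregressive signal whose theoretical
# auto-correlation decays as exp(-a*|tau|), i.e. the form of equation (1.1).
rng = np.random.default_rng(1)
a_true, n = 0.05, 20000
road = np.zeros(n)
for i in range(1, n):
    road[i] = (1.0 - a_true) * road[i - 1] + rng.normal(scale=0.1)

R = normalized_autocorrelation(road, max_lag=60)
lags = np.arange(60)
# Least-squares fit of log R(tau) = -(alpha1*theta)*tau over the initial lags.
mask = R > 0
alpha_theta = -np.polyfit(lags[mask], np.log(R[mask]), 1)[0]
print("estimated alpha1*theta =", round(alpha_theta, 4),
      " (decay rate used to generate the data ~", a_true, ")")
```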
[0079] For convenience, the roads are divided into three classes:
[0080] A. √B(0) ≦ 10 sm—small obstacles;
[0081] B. √B(0) = 10 sm to 20 sm—medium obstacles;
[0082] C. √B(0) > 20 sm—large obstacles.
[0083] The presented auto-correlation functions and their parameters are used for stochastic simulations of different types of roads using forming filters. The methodology of the forming filter structure can be described for the first type of auto-correlation function (1.1) with different probability density functions.
[0084] Consider a stationary stochastic process X(t) defined on the interval [xl,xr], which can be either bounded or unbounded. Without loss of generality, assume that X(t) has a zero mean. Then xl<0 and xr>0. With the knowledge of the probability density p(x) and the spectral density ΦXX(ω) of X(t), one can establish a procedure to model the process X(t).
[0085] Let the spectral density be of the following low-pass type:
ΦXX(ω) = α σ² / [π(α² + ω²)],   (2.1)
[0086] where σ2 is the mean-square value of X(t). If X(t) is also a diffusive Markov process, then it is governed by the following stochastic differential equation in the Ito sense (see Appendix I):
dX = −αX dt + D(X) dB(t),   (2.2)
[0087] where α is the same parameter as in (2.1), B(t) is a unit Wiener process, and the coefficients −αX and D(X) are known as the drift and diffusion coefficients, respectively. To demonstrate that this is the case, multiply (2.2) by X(t−τ) and take the ensemble average to yield
dR(τ)/dτ = −α R(τ), τ > 0,   (2.3)
[0088] where R(τ) is the correlation function of X(t), namely, R(τ)=E[X(t−τ)X(t)]. Equation (2.3) has a solution
R(τ) = A exp(−α|τ|)   (2.4)
[0089] in which A is arbitrary. By choosing A=σ2, equations (2.1) and (2.4) become a Fourier transform pair. Thus equation (2.2) generates a process X(t) with a spectral density (2.1). Note that the diffusion coefficient D(X) has no influence on the spectral density.
[0090] Now it is useful to determine D(X) so that X(t) possesses a given stationary probability density p(x). The Fokker-Planck equation, governing the probability density p(x) of X(t) in the stationary state, is obtained from equation (2.2) as follows:
dG/dx = 0,  where G = −αx p(x) − (1/2) d[D²(x) p(x)]/dx,   (2.5)
[0091] where G is known as the probability flow. Since X(t) is defined on [xl,xr], G must vanish at the two boundaries x=xl and x=xr. In the present one-dimensional case, G must vanish everywhere; consequently, equation (2.5) reduces to
αx p(x) + (1/2) d[D²(x) p(x)]/dx = 0.   (2.6)
[0092] Integration of equation (2.6) results in
D²(x) p(x) = −2α ∫_{xl}^{x} u p(u) du + C,   (2.7)
[0093] where C is an integration constant. To determine the integration constant C, two cases are considered. For the first case, if xl=−∞, or xr=∞, or both, then p(x) must vanish at the infinite boundary; thus C=0 from equation (2.7). For the second case, if both xl and xr are finite, then the drift coefficient −αxl at the left boundary is positive, and the drift coefficient −αxr at the right boundary is negative, indicating that the average probability flows at the two boundaries are directed inward. However, the existence of a stationary probability density implies that all sample functions must remain within [xl,xr], which requires additionally that the diffusion coefficient vanish at the two boundaries, namely, D2(xl)=D2(xr)=0. This is satisfied only if C=0. In either case,
D²(x) = −(2α / p(x)) ∫_{xl}^{x} u p(u) du.   (2.8)
[0094] Function D2(x), computed from equation (2.8), is non-negative, as it should be, since p(x)≧0 and the mean value of X(t) is zero. Thus the stochastic process X(t) generated from (2.2) with D(x) given by (2.8) possesses a given stationary probability density p(x) and the spectral density (2.1).
[0095] The Ito type stochastic differential equation (2.2) may be converted to that of the Stratonovich type (see Appendix I) as follows:
8
[0096] where ξ(t) is a Gaussian white noise with a unit spectral density. Equation (2.9) is better suited for simulating sample functions. Some illustrative examples are given below.
[0097] Example 1: Assume that X(t) is uniformly distributed, namely
p(x) = 1/(2Δ),  −Δ ≦ x ≦ Δ,   (2.10)
[0098] Substituting (2.10) into (2.8)
D²(x) = α(Δ² − x²).   (2.11)
[0099] In this case, the desired Ito equation is given by
dX = −αX dt + √(α(Δ² − X²)) dB(t).   (2.12)
[0100] It is of interest to note that a family of stochastic processes can be obtained from the following generalized version of (2.12):
dX = −αX dt + √(αβ(Δ² − X²)) dB(t).   (2.13)
[0101] Their appearances are strikingly diverse, yet they share the same spectral density (2.1).
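A minimal numerical sketch of the forming filter (2.12) follows. The Euler-Maruyama stepping, the step size, the boundary clipping, and the parameter values are illustrative choices rather than the solver used in the patent; the sample variance should approach 1/3, the variance of a uniform distribution on [−Δ, Δ] with Δ = 1.

```python
import numpy as np

def simulate_uniform_forming_filter(alpha=1.0, delta=1.0, dt=1e-3, steps=200000, seed=2):
    """Integrate dX = -alpha*X dt + sqrt(alpha*(delta^2 - X^2)) dB(t)  (equation 2.12).
    The stationary distribution should be uniform on [-delta, delta] and the
    spectral density should have the low-pass form (2.1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = 0.0
    for i in range(1, steps):
        diff = alpha * max(delta**2 - x[i - 1] ** 2, 0.0)   # keep the radicand non-negative
        x[i] = x[i - 1] - alpha * x[i - 1] * dt + np.sqrt(diff * dt) * rng.normal()
        x[i] = np.clip(x[i], -delta, delta)                  # numerical guard at the boundaries
    return x

x = simulate_uniform_forming_filter()
print("sample mean:", round(x.mean(), 3), " sample variance:", round(x.var(), 3),
      " (uniform on [-1, 1] has variance 1/3)")
```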
[0102] Example 2: Let X(t) be governed by a Rayleigh distribution
p(x) = γ²x exp(−γx),  γ > 0,  0 ≦ x < ∞.   (2.14)
[0103] Its centralized version Y(t)=X(t)−2/γ has a probability density
p(y) = γ(γy + 2) exp(−(γy + 2)),  −2/γ ≦ y < ∞.   (2.15)
[0104] From equation (2.8),
10
[0105] The Ito equation for Y(t) is
11
[0106] and the correspondence equation for X(t) in the Stratonovich form is
12
[0107] Note that the spectral density of X(t) contains a delta function (4/γ2)δ(ω) due to the nonzero mean 2/γ.
[0108] Example 3: Consider a family of probability densities, which obeys an equation of the form
13
[0109] Equation (2.19) can be integrated to yield
p(x) = C1 exp(∫ J(x) dx),   (2.20)
[0110] where C1 is a normalization constant. In this case
D²(x) = −2α exp[−J(x)] ∫ x exp[J(x)] dx.   (2.21)
[0111] Several special cases may be noted. Let
J(x) = −γx² − δx⁴,  −∞ < x < ∞,   (2.22)
[0112] where γ can be arbitrary if δ>0. Substitution of equation (2.22) into equation (2.8) leads to
14
[0113] where erfc(y) is the complementary error function defined as
15
[0114] The case of γ<0 and δ>0 corresponds to a bimodal distribution, and the case of γ>0 and δ=0 corresponds to a Gaussian distribution.
[0115] The Pearson family of probability distributions corresponds to
16
[0116] In the special case of a0+b1=0,
17
[0117] From the results of statistical analysis of forming filters with auto-correlation function (1.1) one can describe typical structure of forming filters as in Table 2.1:
TABLE 2.1
The Structures of Forming Filters for Typical Probability Density Functions p(x)

Auto-correlation function    Probability density function    Forming filter structure
(image in original)          Gaussian                        (image in original)
(image in original)          Uniform                         (image in original)
(image in original)          Rayleigh                        (image in original)
(image in original)          Pearson                         (image in original)
[0118] The structure of a forming filter with an auto-correlation function given by equations (1.2) and (1.3) is derived as follows. A two-dimensional (2D) system is used to generate a narrow-band stochastic process with the spectrum peak located at a nonzero frequency. The following pair of Ito equations describes a large class of 2D systems:
dx1 = (a11x1 + a12x2) dt + D1(x1, x2) dB1(t),
dx2 = (a21x1 + a22x2) dt + D2(x1, x2) dB2(t),   (3.1)
[0119] where Bi, i=1,2 are two independent unit Wiener processes.
[0120] For a system to be stable and to possess a stationary probability density, it is required that a11<0, a22<0 and a11a22−a12a21>0. Multiplying (3.1) by x1(t−τ) and taking the ensemble average, gives
26
[0121] where R11(τ)=M[x1(t−τ)x1(t)], R12(τ)=M[x1(t−τ)x2(t)] with initial conditions R11(0)=m11=M[x12], R12(0)=m12=M[x1x2].
[0122] Differential equations (3.2) in the time domain can be transformed (using the Fourier transform) into algebraic equations in the frequency domain as follows
27
[0123] where {overscore (R)}ij(ω) define the following integral Fourier transformation:
28
[0124] Then the spectral density S11(ω) of x1(t) can be obtained as
29
[0125] where Re denotes the real part.
[0126] Since Rij(τ)→0 as τ→∞, it can be shown that
30
[0127] and equation (3.3) is obtained using this relation.
[0128] Solving equation (3.3) for {overscore (R)}ij(ω) and taking its real part, gives
31
[0129] where A1=a11+a22, and A2=a11a22−a12a21.
[0130] Expression (3.5) is the general expression for a narrow-band spectral density. The constants aij, i, j=1,2, can be adjusted to obtain a best fit for a target spectrum. The task is to determine non-negative functions D1²(x1, x2) and D2²(x1, x2) for a given p(x1,x2).
[0131] Forming filters for simulation of non-Gaussian stochastic processes can be derived as follows. The Fokker-Planck-Kolmogorov (FPK) equation for the joint density p(x1,x2) of x1(t) and x2(t) in the stationary state is given as
32
[0132] If such D12(x1,x2) and D22(x1,x2) functions can be found, then the equations of forming filters for the simulation in the Stratonovich form are given by
33
[0133] where ξi(t), i=1,2, are two independent unit Gaussian white noises.
[0134] Filters (3.1) and (3.6) are non-linear filters for simulation of non-Gaussian random processes. Two typical examples are provided.
[0135] Example 1: Consider two independent uniformly distributed stochastic processes x1 and x2, namely,
34
[0136] −Δ1≦x1≦Δ1, −Δ2≦x2≦Δ2.
[0137] In this case, from the FPK equation, one obtains
35
[0138] which is satisfied if
D1² = −a11(Δ1² − x1²),  D2² = −a22(Δ2² − x2²).
[0139] The two non-linear equations in (3.6) are now
36
[0140] which generate a uniformly distributed stochastic process x1(t) with a spectral density given by (3.5).
[0141] Example 2: Consider a joint stationary probability density of x1(t) and x2(t) in the form
p(x1, x2) = ρ(λ) = C1(λ + b)^(−δ),  b > 0, δ > 1, and
37
[0142] A large class of probability densities can be fitted in this form. In this case
38
[0143] The forming filter equations (3.6) for this case can be described as follows
39
[0144] If σik(x,t) are bounded functions (see Appendix I) and the functions Fi(x,t) satisfy the Lipschitz condition ∥F(x′)−F(x)∥≦K∥x′−x∥, K=const>0, then for every smoothly-varying realization of the process y(t) the stochastic equations can be solved by the method of successive substitution, which is convergent and defines smoothly-varying trajectories x(t). Thus, the Markovian process x(t) has smooth trajectories with probability 1. This result can be used as a background for numerical stochastic simulation.
[0145] The stochastic differential equation for the variable xi is given by
40
[0146] These equations can be integrated using two different algorithms: the Milshtein method and the Heun method. In the Milshtein method, the solution of the stochastic differential equation (4.1) is computed by means of the following recursive relations:
41
[0147] where ηi(t) are independent Gaussian random variables and the variance is equal to 1.
[0148] The second term in equation (4.2) is included because equation (4.2) is interpreted in the Stratonovich sense. The order of numerical error in the Milshtein method is δt. Therefore, a small δt (i.e., δt=1×10−4 for σ2=1) is used, while its computational effort per time step is relatively small. For large σ, where fluctuations are rapid and large, a longer integration period and a small δt are needed, and the Milshtein method quickly becomes impractical.
[0149] The Heun method is based on the second-order Runge-Kutta method, and integrates the stochastic equation by using the following recursive equation:
42
[0150] where
yi(t) = xi(t) + F(xi(t))δt + G(xi(t)) √(σ²δt) ηi(t).
[0151] The Heun method accepts larger δt than the Milshtein method without a significant increase in computational effort per step. The Heun method is usually used for σ2>2.
[0152] The time step δt can be chosen by using a stability condition, and so that averaged magnitudes do not depend on δt within statistical errors. For example, δt=5×10−4 for σ2=1 and δt=1×10−5 for σ2=15. The Gaussian random numbers for the simulation were generated by using the Box-Muller-Wiener algorithms or a fast numerical inversion method.
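The Milshtein recursion (4.2) and the Heun corrector are given in the original as images, so the sketch below follows textbook forms of the two schemes for a scalar equation dx/dt = F(x) + G(x)ξ(t); the particular F, G, noise intensity, and step sizes are assumptions made for demonstration rather than the patent's road models.

```python
import numpy as np

rng = np.random.default_rng(3)

def F(x):            # drift of the toy process
    return -x

def G(x):            # state-dependent noise amplitude
    return 0.5 * (1.0 + 0.1 * x * x)

def dG(x):           # derivative of G, used by the Milshtein correction term
    return 0.5 * 0.2 * x

def milshtein_step(x, dt, sigma2):
    eta = rng.normal()
    noise = np.sqrt(sigma2 * dt) * eta
    # Drift step + diffusion step + Milshtein (Stratonovich-interpreted) correction term.
    return x + F(x) * dt + G(x) * noise + 0.5 * sigma2 * dt * G(x) * dG(x) * eta * eta

def heun_step(x, dt, sigma2):
    eta = rng.normal()
    noise = np.sqrt(sigma2 * dt) * eta
    y = x + F(x) * dt + G(x) * noise                                      # predictor stage
    return x + 0.5 * (F(x) + F(y)) * dt + 0.5 * (G(x) + G(y)) * noise     # corrector stage

x_m = x_h = 0.0
for _ in range(100000):
    x_m = milshtein_step(x_m, dt=1e-4, sigma2=1.0)   # small step, as suggested for sigma^2 = 1
    x_h = heun_step(x_h, dt=1e-3, sigma2=1.0)        # Heun tolerates a larger step
print(round(x_m, 3), round(x_h, 3))
```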
[0153] Table 5.1 summarizes the stochastic simulation of typical road signals.
TABLE 5.1

Types of Correlation Function    Types of Probability Density Function    Forming Filter Equations    Results of Stochastic Simulations
(image in original)              1D Gaussian                              (image in original)         FIG. 6A
(image in original)              1D Uniform                               (image in original)         FIG. 6B
(image in original)              1D Rayleigh                              (image in original)         FIG. 6C
(image in original)              2D Gaussian                              (image in original)         FIG. 6D
(image in original)              2D Uniform                               (image in original)         FIG. 6E
(image in original)              2D Hyperbolic                            (image in original)         FIG. 6F
[0154]
FIG. 7 shows a vehicle body 710 with coordinates for describing position of the body 710 with respect to wheels 701-704 and suspension system. A global reference coordinate xr, yr, zr{r} is assumed to be at the geometric center Pr of the vehicle body 710. The following are the transformation matrices to describe the local coordinates for the suspension and its components:
[0155] {2} is a local coordinate in which an origin is the center of gravity of the vehicle body 710;
[0156] {7} is a local coordinate in which an origin is the center of gravity of the suspension;
[0157] {10n} is a local coordinate in which an origin is the center of gravity of the n'th arm;
[0158] {12n} is a local coordinate in which an origin is the center of gravity of the n'th wheel;
[0159] {13n} is a local coordinate in which an origin is a contact point of the n'th wheel relative to the road surface; and
[0160] {14} is a local coordinate in which an origin is a connection point of the stabilizer.
[0161] Note that in the development that follows, the wheels 702, 701, 704, and 703 are indexed using “i”, “ii”, “iii”, and “iv”, respectively.
[0162] As indicated, “n” is a coefficient indicating wheel positions such as i, ii, iii, and iv for left front, right front, left rear and right rear respectively. The local coordinate systems x0, y0, and z0{0} are expressed by using the following conversion matrix that moves the coordinate {r} along a vector (0,0,z0)
57
[0163] Rotating the vector {r} along yr with an angle β makes a local coordinate system x0c, y0c, z0c{0r} with a transformation matrix 0c0T.
58
[0164] Transferring {0r} through the vector (a1n, 0,0) makes a local coordinate system x0f, y0f, z0f{0f} with a transformation matrix 0r0fT.
59
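A small sketch of the kind of homogeneous transformation matrices being assembled here (a translation along z by z0, a rotation about the y axis by β, then a translation along x by a1n); the actual matrices in the patent are given as images, so this shows only the generic construction, and the numeric values are placeholders.

```python
import numpy as np

def translate(dx, dy, dz):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def rotate_y(beta):
    """4x4 homogeneous transform for a rotation by beta about the y axis."""
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Translation along z by z0, rotation about y by beta, then translation along x by a1n,
# chained to give the composite transform from the last local frame back to {r}.
z0, beta, a1n = 0.30, 0.05, 1.20                 # placeholder values
T_composite = translate(0, 0, z0) @ rotate_y(beta) @ translate(a1n, 0, 0)

point_local = np.array([0.0, 0.0, 0.0, 1.0])     # origin of the local frame
print(T_composite @ point_local)                 # its coordinates in the global frame {r}
```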
[0165] The above procedure is repeated to create other local coordinate systems with the following transformation matrices.
60
[0166] Coordinates for the wheels (index n: i for the left front, ii for the right front, etc.) are generated as follows. Transferring {1n} through the vector (0, b2n, 0) makes local coordinate system x3n, y3n, z3n{3n} with transformation matrix 1f3nT.
61
[0167] Some of the matrices are sub-assembled to make the calculation simpler.
62
[0168] Parts of the model are described both in local coordinate systems and in relations to the coordinate {r} or {1n} referenced to the vehicle body 710.
[0169] In the local coordinate systems:
63
[0170] In the global reference coordinate system {r}:
64
[0171] where ζn is substituted by,
ζn=−γn−θn
[0172] because of the link mechanism to support a wheel at this geometric relation.
[0173] The stabilizer linkage point is in the local coordinate system {1n}. The stabilizer works as a spring in which force is proportional to the difference of displacement between both arms in a local coordinate system {1n} fixed to the body 710.
65
[0174] Kinetic energy, potential energy and dissipative functions for the <Body>, <Suspension>, <Arm>, <Wheel> and <Stabilizer> are developed as follows. Kinetic energy and potential energy except by springs are calculated based on the displacement referred to the inertial global coordinate {r}. Potential energy by springs and dissipative functions are calculated based on the movement in each local coordinate.
66
[0175] Substituting man with mwn and e1n with e3n in the equation for the arm, yields an equation for the wheel as:
67
[0176] Therefore the total kinetic energy is:
68
where
mba = mb(a0 + a1i)
mbbI = mb(b0² + c0²) + Ibx
mbaI = mb(a0 + a1i)² + Iby
msawn = msn + man + mwn
msawan = (msn + man + mwn)a1n
msawbn = (msn + man + mwn)b2n
msawcn = msn c1n + (man + mwn)c2n    (5.57)
msaw2n = (msn + man + mwn)a1n²
maw2In = man e1n² + mwn e3n² + Iaxn
maw1n = man e1n + mwn e3n
maw2n = man e1n² + mwn e3n²
[0177] Hereafter, variables and coefficients that have the index “n” imply, implicitly or explicitly, summation over n=i, ii, iii, and iv.
[0178] The total potential energy is:
70
where
mba = mb(a0 + a1i)
msawan = (msn + man + mwn)a1n
msawbn = (msn + man + mwn)b2n
msawcn = msn c1n + (man + mwn)c2n
γii = −γi    (5.61)
[0179] The Lagrangian is written as:
71
[0180] The dissipative function is:
72
[0181] The constraints are based on geometrical constraints, and the touch point of the road and the wheel. The geometrical constraint is expressed as
e2n cos θn = −(z6n − d1n) sin ηn
e2n sin θn − (z6n − d1n) cos ηn = c1n − c2n    (5.82)
[0182] The touch point of the road and the wheel is defined as
73
[0183] where Rn(t) is road input at each wheel.
[0184] Differentials are:
θ̇n e2n sin θn − ż6n sin ηn − η̇n(z6n − d1n) cos ηn = 0
θ̇n e2n cos θn − ż6n cos ηn + η̇n(z6n − d1n) sin ηn = 0
74
[0185] Since the differentials of these constraints are written as
75
[0186] then the values alnj are obtained as follows.
[0187] a1n0=0
[0188] a2n0=0
[0189] a3n0=1
[0190] a1n1=0, a1n2=0, a1n3=−(z6n−d1n)cos ηn, a1n4=e2n sin θn, a1n5=−sin ηn, a1n6=0
a2n1=0, a2n2=0, a2n3=(z6n−d1n)sin ηn, a2n4=e2n cos θn, a2n5=−cos ηn, a2n6=0
76
[0191] From the above, Lagrange's equation becomes
77
[0192] where
[0193] q0 = z0
[0194] From the differentiated constraints it follows that:
θ̈n e2n sin θn + θ̇n² e2n cos θn − z̈6n sin ηn − ż6n η̇n cos ηn − η̈n(z6n − d1n) cos ηn − η̇n ż6n cos ηn + η̇n²(z6n − d1n) sin ηn = 0
θ̈n e2n cos θn − θ̇n² e2n sin θn − z̈6n cos ηn + ż6n η̇n sin ηn + η̈n(z6n − d1n) sin ηn + η̇n ż6n sin ηn + η̇n²(z6n − d1n) cos ηn = 0    (5.114)
[0195]
79
[0196] Supplemental differentiation of equation (5.113) for the later entropy production calculation yields:
kwn ż12n = −cwn z̈12n + λ̇3n cos α cos β − α̇ λ3n sin α cos β − β̇ λ3n cos α sin β    (5.118)
[0197] therefore
80
[0198] or from the third equation of constraint:
81
[0199] Equations for entropy production are developed below. Minimum entropy production (for use in the fitness function of the genetic algorithm) is expressed as:
82
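The patent's exact minimum-entropy-production expression is given as an image and is not reproduced here. As a purely illustrative stand-in for the idea, the sketch below accumulates a difference between two entropy production rates of the kind discussed earlier (dSu/dt for the plant and dSc/dt for the controller), each modeled as dissipated or supplied power divided by a nominal temperature; the sample values are made up.

```python
def entropy_production_rates(damper_force, relative_velocity, control_power, temperature=300.0):
    """Illustrative entropy production rates (W/K): plant dissipation vs. controller input."""
    dS_plant_dt = abs(damper_force * relative_velocity) / temperature     # dSu/dt
    dS_control_dt = abs(control_power) / temperature                      # dSc/dt
    return dS_plant_dt, dS_control_dt

def fitness(samples, dt=0.01):
    """Accumulate the entropy-production difference over a trajectory; smaller is better."""
    total = 0.0
    for force, velocity, power in samples:
        dSu, dSc = entropy_production_rates(force, velocity, power)
        total += (dSu - dSc) * dt
    return total

# Three made-up samples of (damper force [N], relative velocity [m/s], control power [W]).
print(fitness([(120.0, 0.4, 20.0), (80.0, -0.2, 10.0), (150.0, 0.6, 35.0)]))
```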
[0200]
FIG. 8A shows the vehicle body 710 and the wheels 702 and 704 (the wheels 701 and 703 are hidden). FIG. 8A also shows dampers 801-804 configured to provide adjustable damping for the wheels 701-704 respectively. In one embodiment, the dampers 801-804 are electronically-controlled dampers. In one embodiment, a stepping motor actuator on each damper controls an oil valve. Oil flow in each rotary valve position determines the damping factor provided by the damper.
[0201]
FIG. 8B shows an adjustable damper 817 having an actuator 818 that controls a rotary valve 812. A hard-damping valve 811 allows fluid to flow in the adjustable damper 817 to produce hard damping. A soft-damping valve 813 allows fluid to flow in the adjustable damper 817 to produce soft damping. The rotary valve 812 controls the amount of fluid that flows through the soft-damping valve 813. The actuator 818 controls the rotary valve 812 to allow more or less fluid to flow through the soft-damping valve 813, thereby producing a desired damping. In one embodiment, the actuator 818 is a stepping motor. The actuator 818 receives control signals from the controller 810.
[0202]
FIG. 8C shows fluid flow through the soft-damping valve 813 when the rotary valve 812 is opened. FIG. 8C also shows fluid flow through the hard-damping valve 811 when the rotary valve 812 is closed.
[0203]
FIG. 9 shows damper force versus piston speed characteristics when the rotary valve is placed in a hard damping position and in a soft damping position. The valve is controlled by the stepping motor to be placed between the soft and the hard damping positions to generate intermediate damping force.
[0204] The SSCQ 130, shown in FIG. 2, is an off-line block that produces the teaching signal Ki for the FLCS 140. FIG. 10 shows the structure of an SSCQ 1030 for use in connection with a simulation model of the full car and suspension system. The SSCQ 1030 is one embodiment of the SSCQ 130. In addition to the SSCQ 1030, FIG. 10 also shows a stochastic road signal generator 1010, a suspension system simulation model 1020, a proportional damping force controller 1050, and a timer 1021. The SSCQ 1030 includes a mode selector 1029, an output buffer 1001, a GA 1031, a buffer 1022, a proportional damping force controller 1034, a fitness function calculator 1032, and an evaluation model 1036.
[0205] The Timer 1021 controls the activation moments of the SSCQ 1030. An output of the timer 1021 is provided to an input of the mode selector 1029. The mode selector 1029 controls operational modes of the SSCQ 1030. In the SSCQ 1030, a reference signal y is provided to a first input of the fitness function calculator 1032. An output of the fitness function calculator 1032 is provided to an input of the GA 1031. A CGSe output of the GA 1031 is provided to a training input of the damping force controller 1034 through the buffer 1022. An output of the damping force controller 1034 is provided to an input of the evaluation model 1036. An Xe output of the evaluation model 1036 is provided to a second input of the fitness function calculator 1032. A CGSi output of the GA 1031 is provided (through the buffer 1001) to a training input of the damping force controller 1050. A control output from the damping force controller 1050 is provided to a control input of the suspension system simulation model 1020. The stochastic road signal generator 1010 provides a stochastic road signal to a disturbance input of the suspension system simulation model 1020 and to a disturbance input of the evaluation model 1036. A response output Xi from the suspension system simulation model 1020 is provided to a training input of the evaluation model 1036. The output vector Ki from the SSCQ 1030 is obtained by combining the CGSi output from the GA 1031 (through the buffer 1001) and the response signal Xi from the suspension system simulation model 1020.
[0206] Road signal generator 1010 generates a road profile. The road profile can be generated from stochastic simulations as described above in connection with FIGS. 4-6F, or the road profile can be generated from measured road data. The road signal generator 1010 generates a road signal for each time instant (e.g., each clock cycle) generated by the timer 1021.
[0207] The simulation model 1020 is a kinetic model of the full car and suspension system with equations of motion, as obtained, for example, in connection with FIG. 7. In one embodiment, the simulation model 1020 is integrated using high-precision ordinary differential equation solvers.
[0208] The SSCQ 1030 is an optimization module that operates on a discrete time basis. In one embodiment, the sampling time of the SSCQ 1030 is the same as the sampling time of the control system 1050. Entropy production rate is calculated by the evaluation model 1036, and the entropy values are included into the output (Xe) of the evaluation model 1036.
[0209] The following designations regarding time moments are used herein:
[0210] T=Moments of SSCQ calls
[0211] Tc=the sampling time of the control system 1050
[0212] Te=the evaluation (observation) time of the SSCQ 1030
[0213] tc=the integration interval of the simulation model 1004 with fixed control parameters, tc∈[T;T+Tc]
[0214] te=Evaluation (Observation) time interval of the SSCQ, te∈[T;T+Te]
[0215]
FIG. 11 is a flowchart showing operation of the SSCQ 1030 as follows:
[0216] 1. At the initial moment (T=0) the SSCQ 1030 is activated and the SSCQ 1030 generates the initial control signal CGSi(T).
[0217] 2. The simulation model 1020 is integrated using the road signal from the stochastic road generator 1010 and the control signal CGSi(T) on a first time interval tc to generate the output Xi.
[0218] 3. The output Xi, together with the output CGSi(T), is saved into the data file 1060 as a teaching signal Ki.
[0219] 4. The time interval T is incremented by Tc(T=T+Tc).
[0220] 5. The sequence 1-4 is repeated a desired number of times (that is, while T<TF). In one embodiment, the sequence 1-4 is repeated until the end of the road signal is reached.
[0221] Regarding step 1 above, the SSCQ block has two operating modes:
[0222] 1. Updating of the buffer 1001 using the GA 1031
[0223] 2. Extraction of the output CGSi(T) from the buffer 1001.
[0224] The operating mode of the SSCQ 1030 is controlled by the mode selector 1029 using information regarding the current time moment T, as shown in FIG. 12. At intervals of Te the SSCQ 1030 updates the output buffer 1001 with results from the GA 1031. During the interval Te at each interval Tc, the SSCQ extracts the vector CGSi from the output buffer 1001.
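A schematic sketch of the timing behavior just described (GA mode refilling the output buffer every Te, extraction of CGSi every Tc, and integration of the simulation model on each interval, as in steps 1-5 above) is shown below; the GA, extraction, and integration routines are stubbed placeholders rather than the patent's components.

```python
def run_sscq(n_steps, Tc, steps_per_Te, run_ga, extract_from_buffer, integrate_model):
    """Outer SSCQ loop: refill the output buffer with GA results every Te = steps_per_Te * Tc,
    extract CGSi every Tc, and integrate the simulation model on each [T, T + Tc] interval."""
    teaching_signal = []
    output_buffer = {}
    for k in range(n_steps):
        T = k * Tc
        if k % steps_per_Te == 0:                           # GA mode: update the buffer
            output_buffer = run_ga(T, steps_per_Te, Tc)
        cgs_i = extract_from_buffer(output_buffer, k)        # extraction mode
        x_i = integrate_model(cgs_i, T, T + Tc)
        teaching_signal.append((T, cgs_i, x_i))              # accumulate the teaching signal Ki
    return teaching_signal

# Stub implementations so the loop runs end to end (all values are placeholders).
run_ga = lambda T, m, Tc: {k: (0.0, 0.0, 0.0, 0.0) for k in range(m)}   # valve positions VPFL..VPRR
extract = lambda buf, k: buf[k % len(buf)]
integrate = lambda cgs, t0, t1: 0.0
print(len(run_sscq(n_steps=100, Tc=0.01, steps_per_Te=10, run_ga=run_ga,
                   extract_from_buffer=extract, integrate_model=integrate)))
```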
[0225]
FIG. 13 is a flowchart 1300 showing operation of the SSCQ 1030 in connection with the GA 1031 to compute the control signal CGSi. The flowchart 1300 begins at a decision block 1301, where the operating mode of the SSCQ 1030 is determined. If the operating mode is a GA mode, then the process advances to a step 1302; otherwise, the process advances to a step 1310. In the step 1302, the GA 1031 is initialized, the evaluation model 1036 is initialized, the output buffer 1001 is cleared, and the process advances to a step 1303. In the step 1303, the GA 1031 is started, and the process advances to a step 1304 where an initial population of chromosomes is generated. The process then advances to a step 1305 where a fitness value is assigned to each chromosome. The process of assigning a fitness value to each chromosome is shown in an evaluation function calculation, shown as a sub-flowchart having steps 1322-1325. In the step 1322, the current states of Xi(T) are initialized as initial states of the evaluation model 1036, and the current chromosome is decoded and stored in the evaluation buffer 1022. The sub-process then advances to the step 1323. The step 1323 is provided to integrate the evaluation model 1036 on time interval te using the road signal from the road generator 1010 and the control signal CGSe(te) from the evaluation buffer 1022. The process then advances to the step 1324 where a fitness value is calculated by the fitness function calculator 1032 by using the output Xe from the evaluation model 1036. The output Xe is a response from the evaluation model 1036 to the control signals CGSe(te) which are coded into the current chromosome. The process then advances to the step 1325 where the fitness value is returned to the step 1305. After the step 1305, the process advances to a decision block 1306 to test for termination of the GA. If the GA is not to be terminated, then the process advances to a step 1307 where a new generation of chromosomes is generated, and the process then returns to the step 1305 to evaluate the new generation. If the GA is to be terminated, then the process advances to the step 1309, where the best chromosome of the final generation of the GA, is decoded and stored in the output buffer 1001. After storing the decoded chromosome, the process advances to the step 1310 where the current control value CGSi(T) is extracted from the output buffer 1001.
[0226] The structure of the output buffer 1001 is shown below as a set of row vectors, where the first element of each row is a time value, and the other elements of each row are the control parameters associated with these time values. The values for each row include damper valve positions VPFL, VPFR, VPRL, VPRR, corresponding to front-left, front-right, rear-left, and rear-right, respectively.
Time        CGSi
T           VPFL(T)          VPFR(T)          VPRL(T)          VPRR(T)
T + Tc      VPFL(T + Tc)     VPFR(T + Tc)     VPRL(T + Tc)     VPRR(T + Tc)
. . .       . . .            . . .            . . .            . . .
T + Te      VPFL(T + Te)     VPFR(T + Te)     VPRL(T + Te)     VPRR(T + Te)
[0227] The output buffer 1001 stores optimal control values for evaluation time interval te from the control simulation model, and the evaluation buffer 1022 stores temporal control values for evaluation on the interval te for calculation of the fitness function.
[0228] Two simulation models are used. The simulation model 1020 is used for simulation and the evaluation model 1036 is used for evaluation. There are many different methods for numerical integration of systems of differential equations. Practically, these methods can be classified into two main classes: (1) variable-step integration methods with control of integration error; and (2) fixed-step integration methods without integration error control.
[0229] Numerical integration using methods of type (1) is very precise, but time-consuming. Methods of type (2) are typically faster, but with smaller precision. During each SSCQ call in the GA mode, the GA 1031 evaluates the fitness function 1032 many times and each fitness function calculation requires integration of the model of dynamic system (the integration is done each time). By choosing a small-enough integration step size, it is possible to adjust a fixed-step solver such that the integration error on a relatively small time interval (like the evaluation interval te) will be small and it is possible to use the fixed-step integration in the evaluation loop for integration of the evaluation model 1036. In order to reduce total integration error it is possible to use the result of high-order variable-step integration of the simulation model 1020 as initial conditions for evaluation model integration. The use of variable-step solvers to integrate the evaluation model can provide better numerical precision, but at the expense of greater computational overhead and thus longer run times, especially for complicated models.
[0230] The fitness function calculation block 1032 computes a fitness function using the reference signal Y and the response (Xe) from the evaluation model 1036 (due to the control signal CGSe(te) provided to the evaluation module 1036).
[0231] The fitness function 1032 is computed as a vector of selected components of a matrix (xe) and its squared absolute value using the following form:
83
[0232] where:
[0233] i denotes the indexes of state variables whose absolute value should be minimized; j denotes the indexes of state variables whose control error should be minimized; k denotes the indexes of state variables whose frequency components should be minimized; and wr, r=i,j,k, are weighting factors which represent the importance of the corresponding parameter from the point of view of human perception. By setting these weighting factors, it is possible to emphasize those elements of the evaluation model output that are correlated with the desired human requirements (e.g., handling, ride quality, etc.). In one embodiment, the weighting factors are initialized using empirical values and then adjusted using experimental results.
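Because the fitness formula appears only as an image in the source, the sketch below shows one plausible reading of the weighted sum described in this paragraph: squared magnitudes for the i-indexed variables, squared control error for the j-indexed variables, and squared filtered frequency components for the k-indexed variables. The function signature, index sets, weights, and bandpass placeholder are illustrative assumptions, not the patent's formula.

def fitness(xe, y_ref, i_idx, j_idx, k_idx, w, bandpass):
    """Plausible reading of the weighted fitness over the evaluation interval.

    xe      : dict mapping state-variable index -> list of samples (rows of Xe)
    y_ref   : dict mapping index -> list of reference samples (signal Y)
    bandpass: function returning the frequency component of interest of a signal
    w       : dict mapping index -> weighting factor
    """
    total = 0.0
    for i in i_idx:                                   # minimize absolute value
        total += w[i] * sum(v * v for v in xe[i])
    for j in j_idx:                                   # minimize control error
        total += w[j] * sum((v - r) ** 2 for v, r in zip(xe[j], y_ref[j]))
    for k in k_idx:                                   # minimize selected frequency components
        total += w[k] * sum(v * v for v in bandpass(xe[k]))
    return total

# Toy usage: one variable of each kind, and a trivial "bandpass" that keeps the signal as-is.
xe = {0: [0.2, -0.1], 1: [1.0, 1.2], 2: [0.05, 0.04]}
y_ref = {1: [1.0, 1.0]}
print(fitness(xe, y_ref, [0], [1], [2], {0: 1.0, 1: 0.5, 2: 2.0}, lambda s: s))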
[0234] Extraction of frequency components can be done using standard digital filtering design techniques for obtaining the filter parameters. Digital filtering can be provided by a standard difference equation applied to elements of the matrix Xe:
84
[0235] where a, b are parameters of the filter, N is the number of the current point, and nb, na describe the order of the filter. In the case of a Butterworth filter, nb=na.
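A direct-form difference-equation filter of the kind referred to above can be sketched as shown below, using the common convention in which a[0] scales the current output sample. The example coefficients form a simple first-order low-pass filter and are illustrative; they are not a designed Butterworth filter from the patent.

def difference_filter(b, a, x):
    """Apply y[n] = (b[0]x[n] + ... + b[nb]x[n-nb] - a[1]y[n-1] - ... - a[na]y[n-na]) / a[0]."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        acc -= sum(a[j] * y[n - j] for j in range(1, len(a)) if n - j >= 0)
        y.append(acc / a[0])
    return y

# Illustrative first-order low-pass coefficients (not designed Butterworth values).
b = [0.1]
a = [1.0, -0.9]
print(difference_filter(b, a, [1.0] * 10))   # step response approaches 1.0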
[0236] In one embodiment, the GA 1031 is a global search algorithm based on the mechanics of natural genetics and natural selection. In the genetic search, each design variable is represented by a finite-length binary string, and these binary strings are connected head-to-tail to form a single binary string. Possible solutions are coded or represented by a population of binary strings. Genetic transformations analogous to biological reproduction and evolution are subsequently used to improve and vary the coded solutions. Usually, three principal operators, i.e., reproduction (selection), crossover, and mutation, are used in the genetic search.
[0237] The reproduction process biases the search toward producing more fit members in the population and eliminating the less fit ones. Hence, a fitness value is first assigned to each string (chromosome) in the population. One simple approach to selecting members from an initial population to participate in reproduction is to assign each member a probability of selection on the basis of its fitness value. A new population pool of the same size as the original is then created with a higher average fitness value.
[0238] The process of reproduction simply results in more copies of the dominant or fit designs being present in the population. The crossover process allows for an exchange of design characteristics among members of the population pool with the intent of improving the fitness of the next generation. Crossover is executed by selecting strings from two mating parents, randomly choosing two sites on the strings, and exchanging the string segments between the chosen sites.
[0239] Mutation safeguards the genetic search process from a premature loss of valuable genetic material during reproduction and crossover. The process of mutation simply chooses a few members from the population pool according to the probability of mutation and switches a 0 to a 1, or vice versa, at randomly selected sites on the chromosome.
[0240]
FIG. 14 illustrates the processes of reproduction, crossover and mutation on a set of chromosomes in a genetic analyzer. A population of strings is first transformed into decimal codes and then sent into the physical process 1407 for computing the fitness of the strings in the population. A biased roulette wheel 1402 is created, in which each string has a roulette wheel slot sized in proportion to its fitness. A spin of the weighted roulette wheel yields a reproduction candidate. In this way, strings with higher fitness produce a higher number of offspring in the succeeding generation. Once a string has been selected for reproduction, a replica of the string based on its fitness is created and then entered into a mating pool 1401 to await the further genetic operations. After reproduction, a new population of strings is generated through the evolutionary processes of crossover 1404 and mutation 1405 to produce a new parent population 1406. Finally, the whole genetic process, as mentioned above, is repeated again and again until an optimal solution is found.
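The three operators can be sketched on binary strings as follows. The population, fitness values, string length, crossover sites, and mutation probability are all illustrative.

import random

def roulette_select(population, fitnesses):
    """Biased roulette wheel: each string gets a slot proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for string, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return string
    return population[-1]

def crossover(parent1, parent2):
    """Two-point crossover: exchange the segment between two randomly chosen sites."""
    i, j = sorted(random.sample(range(1, len(parent1)), 2))
    return parent1[:i] + parent2[i:j] + parent1[j:], parent2[:i] + parent1[i:j] + parent2[j:]

def mutate(string, p=0.01):
    """Flip each bit with small probability p."""
    return "".join(bit if random.random() > p else ("1" if bit == "0" else "0") for bit in string)

population = ["0110100101", "1110001010", "0001111011", "1010101010"]
fitnesses = [3.0, 5.5, 1.2, 4.3]
mom, dad = roulette_select(population, fitnesses), roulette_select(population, fitnesses)
child1, child2 = crossover(mom, dad)
print(mutate(child1), mutate(child2))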
[0241] The Fuzzy Logic Control System (FLCS) 240 shown in FIG. 2 includes the information filter 241, the FNN 142 and the FC 143. The information filter 241 compresses the teaching signal Ki to obtain the simplified teaching signal Kc, which is used with the FNN 142. The FNN 142, by interpolation of the simplified teaching signal Kc, obtains the knowledge base (KB) for the FC 143.
[0242] As described above, the output of the SSCQ is a teaching signal Ki that contains information about the behavior of the controller and the reaction of the controlled object to that control. Genetic algorithms in general perform a stochastic search. The output of such a search typically contains much unnecessary information (e.g., stochastic noise), and as a result such a signal can be difficult to interpolate. In order to exclude the unnecessary information from the teaching signal Ki, the information filter 241 (based on Shannon's information theory) is provided. For example, suppose that A is a message source that produces the message a with probability p(a), and further suppose that it is desired to represent the messages with sequences of binary digits (bits) that are as short as possible. It can be shown that the mean length L of these bit sequences is bounded from below by the Shannon entropy H(A) of the source: L≧H(A), where
H(A) = −Σa p(a) log2 p(a).
[0243] Furthermore, if entire blocks of independent messages are coded together, then the mean number L̄ of bits per message can be brought arbitrarily close to H(A).
[0244] This noiseless coding theorem shows the importance of the Shannon entropy H(A) for the information theory. It also provides the interpretation of H(A) as a mean number of bits necessary to code the output of A using an ideal code. Each bit has a fixed ‘cost’ (in units of energy or space or money), so that H(A) is a measure of the tangible resources necessary to represent the information produced by A.
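The bound L ≥ H(A) can be illustrated numerically. The sketch below computes H(A) for an assumed four-message source; the probabilities are illustrative.

import math

def shannon_entropy(probabilities):
    """H(A) = -sum p(a) * log2 p(a), in bits per message."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

# Illustrative source: four messages with unequal probabilities.
p = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p))   # 1.75 bits: any uniquely decodable code needs at least 1.75 bits/message on average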
[0245] In classical statistical mechanics, in fact, the statistical entropy is formally identical to the Shannon entropy. The entropy of a macrostate can be interpreted as the number of bits that would be required to specify the microstate of the system.
[0246] Suppose x1, . . . , xN are N independent, identically distributed random variables, each with mean x̄ and finite variance. Given δ, ε>0, there exists N0 such that, for N≧N0,
P{ |(x1 + x2 + . . . + xN)/N − x̄| > δ } < ε.
[0247] This result is known as the weak law of large numbers: a sufficiently long sequence of independent, identically distributed random variables will, with a probability approaching unity, have an average that is close to the mean of each variable.
[0248] The weak law can be used to derive a relation between the Shannon entropy H(A) and the number of 'likely' sequences of N identical random variables. Assume that a message source A produces the message a with probability p(a). A sequence α=a1a2 . . . aN of N independent messages from the same source will occur in the ensemble of all N-sequences with probability P(α)=p(a1)·p(a2) . . . p(aN). Now define a random variable for each message by x=−log2 p(a), so that H(A)=x̄. It is easy to see that
−(1/N) log2 P(α) = (1/N) Σj xj, where xj = −log2 p(aj).
[0249] From the weak law, it follows that, if ε,δ>0, then for sufficiently large N
P{ |−(1/N) log2 P(α) − H(A)| ≤ δ } > 1 − ε
for N-sequences α. It is possible to partition the set of all N-sequences into two subsets:
[0251] a) A set Λ of “likely” sequences for which
2^(−N(H(A)+δ)) ≤ P(α) ≤ 2^(−N(H(A)−δ));
[0252] b) A set of ‘unlikely’ sequences with total probability less than ε, for which this inequality fails.
[0253] This makes it possible to exclude the 'unlikely' information from the set Λ, which leaves a set of sequences Λ1 with approximately the same amount of information as the set Λ, but with a smaller number of sequences.
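A small numerical sketch of this partition is shown below: sequences whose empirical per-symbol information −(1/N)log2 P(α) stays within δ of H(A) are placed in the 'likely' set, and the total probability of the remaining sequences plays the role of ε. The two-message source, N, and δ are illustrative.

import itertools
import math

p = {"a": 0.7, "b": 0.3}                      # illustrative two-message source
H = -sum(q * math.log2(q) for q in p.values())
N, delta = 10, 0.2

likely, unlikely_prob = [], 0.0
for seq in itertools.product(p, repeat=N):
    prob = math.prod(p[s] for s in seq)
    info_rate = -math.log2(prob) / N          # empirical bits per message
    if abs(info_rate - H) <= delta:
        likely.append(seq)                    # member of the "likely" set
    else:
        unlikely_prob += prob                 # contributes to the total probability epsilon

print(len(likely), 2 ** N, round(unlikely_prob, 3))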
[0254] The FNN 142 is used to find the relations between the (Input) and (Output) components of the teaching signal Kc. The FNN 142 is a tool that allows modeling of a system based on a fuzzy logic data structure, starting from the sampling of a process/function expressed in terms of input-output value pairs (patterns). Its primary capability is the automatic generation of a database containing the inference rules and the parameters describing the membership functions. The generated fuzzy logic knowledge base (KB) represents an optimized approximation of the process/function provided as input. The FNN performs rule extraction and membership function parameter tuning using different learning methods, such as error back-propagation, fuzzy clustering, etc. The KB includes a rule base and a database. The rule base stores the information of each fuzzy rule. The database stores the parameters of the membership functions. Usually, in the training stage of the FNN, the parts of the KB are obtained separately.
[0255] An example of a KB of a suspension system fuzzy controller obtained using the FNN 142 is presented in FIG. 15. The knowledge base of a fuzzy controller includes two parts: a database, where the parameters of the membership functions are stored, and a rule base, where the fuzzy rules are stored. In the example shown in FIG. 15, the fuzzy controller has two inputs (ANT1) and (ANT2), which are the pitch angle acceleration and the roll angle acceleration, and four output variables (CONS1, . . . , CONS4), which are the valve positions of the FL, FR, RL, and RR wheels, respectively. Each input variable has 5 membership functions, which gives a total of 25 rules.
[0256] The type of fuzzy inference system in this case is a zero-order Sugeno-Takagi Fuzzy inference system. In this case the rule base has the form presented in the list below.
[0257] IF ANT1 is MBF1—1 and ANT2 is MBF2—1 then CONS1 is A1—1 and . . . and CONS4 is A4—1
[0258] IF ANT1 is MBF1—1 and ANT2 is MBF2—2 then CONS1 is A1—2 and . . . and CONS4 is A4—2
[0259] . . .
[0260] IF ANT1 is MBF1—5 and ANT2 is MBF2—5 then CONS1 is A1—25 and . . . and CONS4 is A4—25
[0261] In the example above there are only 25 possible combinations of input membership functions, so it is possible to use all of the possible rules. However, when the number of input variables is large, the phenomenon known as "rule blow-up" takes place. For example, if the number of input variables is 6, and each of them has 5 membership functions, then the total number of rules could be N=5^6=15625 rules. In this case, practical realization of such a rule base would be almost impossible due to hardware limitations of existing fuzzy controllers. There are different strategies to avoid this problem, such as assigning a fitness value to each rule and excluding rules with small fitness from the rule base. The rule base will be incomplete, but realizable.
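The rule-count arithmetic and the fitness-based pruning strategy mentioned above can be sketched as follows; the per-rule fitness values and the pruning threshold are illustrative.

# Rule blow-up: with 6 inputs and 5 membership functions each, a complete rule base has
n_rules = 5 ** 6
print(n_rules)          # 15625

# Pruning strategy: keep only rules whose (illustrative) fitness exceeds a threshold,
# producing an incomplete but realizable rule base.
import random
random.seed(0)
rules = [{"id": k, "fitness": random.random()} for k in range(n_rules)]
kept = [r for r in rules if r["fitness"] > 0.9]
print(len(kept))        # roughly 10 percent of the complete rule base survives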
[0262] The FC 143 is an on-line device that generates the control signals from the sensor input information using the following steps: (1) fuzzification; (2) fuzzy inference; and (3) defuzzification.
[0263] Fuzzification is the transfer of numerical data from the sensors into the linguistic plane by assigning a membership degree to each membership function. The input membership function parameters stored in the knowledge base of the fuzzy controller are used.
[0264] Fuzzy inference is a procedure that generates a linguistic output from the set of linguistic inputs obtained after fuzzification. In order to perform the fuzzy inference, the rules and the output membership functions from the knowledge base are used.
[0265] Defuzzification is the process of converting the linguistic information back into the numerical plane. Usually, defuzzification includes selecting the center of gravity of the resulting linguistic membership function.
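The three steps can be sketched for the zero-order Sugeno-Takagi case described earlier: triangular membership functions perform the fuzzification, product firing strengths perform the inference, and a weighted average of the rule constants performs the defuzzification. The membership parameters and rule constants below are illustrative, not values from the knowledge base of FIG. 15.

def triangle(x, left, peak, right):
    """Triangular membership degree of x (fuzzification)."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

# Two inputs (e.g., pitch and roll angle acceleration), two membership functions each (illustrative).
mbf1 = [(-2.0, -1.0, 0.5), (-0.5, 1.0, 2.0)]   # ANT1 membership parameters
mbf2 = [(-2.0, -1.0, 0.5), (-0.5, 1.0, 2.0)]   # ANT2 membership parameters

# Zero-order Sugeno rules: one constant consequent per (MBF1_i, MBF2_j) pair (illustrative).
consequents = {(0, 0): 0.2, (0, 1): 0.4, (1, 0): 0.6, (1, 1): 0.8}

def sugeno_output(ant1, ant2):
    num, den = 0.0, 0.0
    for i, m1 in enumerate(mbf1):
        for j, m2 in enumerate(mbf2):
            strength = triangle(ant1, *m1) * triangle(ant2, *m2)   # inference (product of degrees)
            num += strength * consequents[(i, j)]                  # weighted rule consequent
            den += strength
    return num / den if den > 0.0 else 0.0                          # defuzzification (weighted average)

print(sugeno_output(0.1, -0.3))   # crisp valve-position command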
[0266] Fuzzy control of a suspension system is aimed at coordinating the damping factors of each damper to control the parameters of motion of the car body. The parameters of motion can include, for example, pitching motion, rolling motion, heave movement, and/or derivatives of these parameters. Fuzzy control in this case can be realized in different ways, and with a different number of fuzzy controllers. For example, in one embodiment shown in FIG. 16A, fuzzy control is implemented using two separate controllers, one for the front wheels and one for the rear wheels, where a first fuzzy controller 1601 controls front-wheel damper actuators 1603 and 1604 and a second fuzzy controller 1602 controls rear-wheel damper actuators 1605 and 1606. In another embodiment, shown in FIG. 16B, a single controller 1610 controls the actuators 1603-1606.
[0267] As described above in connection with Equation 7.3, it is possible to exclude the 'unlikely' information from the set Λ, which leaves the set of sequences Λ1 having approximately the same amount of information as the set Λ, but with a smaller number of sequences. The results of applying such a filtering algorithm are illustrated in FIGS. 17 and 18. In FIG. 17, the filtering is applied to a matrix of normally distributed values. The source signal (shown in curve (a) of FIG. 17) is a data matrix containing 100 rows and one column of Gaussian (normally) distributed values with mean zero and σ²=1. Labels on the X axis denote the row number. The value of ε is chosen as the mean value of the information distribution (as shown in curve (b) of FIG. 17). After applying the information filter, the resulting filtered signal has less than 40 percent of the rows of the source signal (as shown in curve (c) of FIG. 17). The other rows carry little significant information and can be eliminated. The information distribution of the resulting signal is shown in curve (d) of FIG. 17. The number of "in-out" teaching pairs in the compressed signal Kc (N=3587), shown in FIG. 18, is almost half the number of "in-out" teaching pairs in the entire teaching signal Ki (N=7201), shown in FIG. 19. The form of the teaching signals remains the same, and both teaching signals have approximately the same amount of information. FIGS. 18 and 19 include curves showing the speed of the vehicle.
[0268]
FIGS. 18 and 19 also include curves showing acceleration of the front-left portion of the vehicle body 710 (labeled BA_fl in FIGS. 18 and 19), acceleration of the front-right portion of the vehicle body 710 (labeled BA_fr in FIGS. 18 and 19), acceleration of the rear-left portion of the vehicle body 710 (labeled BA_rl in FIGS. 18 and 19), and acceleration of the rear-right portion of the vehicle body 710 (labeled BA_rr in FIGS. 18 and 19), in meters per second squared. FIGS. 18 and 19 also include curves showing vertical velocity of the front-left damper 802 (labeled DV_fl in FIGS. 18 and 19), vertical velocity of the front-right damper 801 (labeled DV_fr in FIGS. 18 and 19), vertical velocity of the rear-left damper 804 (labeled DV_rl in FIGS. 18 and 19), and vertical velocity of the rear-right damper 803 (labeled DV_rr in FIGS. 18 and 19), in meters per second. Finally, FIGS. 18 and 19 include curves showing valve position of the front-left damper 802 (VP_fl), valve position of the front-right damper 801 (VP_fr), valve position of the rear-left damper 804 (VP_rl), and valve position of the rear-right damper 803 (VP_rr). The x-axes of the plots shown in FIGS. 18-19 indicate the sample points. The curves in FIG. 18 correspond to 3587 sample points. The curves in FIG. 19 correspond to 7201 sample points.
[0269]
FIG. 20 shows simulation results of fuzzy control of a suspension system, when a fuzzy logic classifier system, such as the fuzzy logic classifier system 140, is trained using the entire teaching signal Ki (shown in FIG. 19). FIG. 20 also shows simulation results of fuzzy control of the same system when the fuzzy logic classifier system 140 is trained using the filtered teaching signal Kc (shown in FIG. 18). In the first case (solid line), the controller was trained using the full teaching signal Ki. In the second case (dashed line), the controller was trained with the teaching signal, which was filtered using the information filter 241. In FIG. 20, heave (Z0), pitch (BT) and roll (AL) movements, velocities and accelerations are shown. FIG. 20 shows the similarity of the system outputs in both cases, indicating the positive result of teaching signal filtering using the information filter 241.
[0270] FIGS. 21-23 show experimental results of fuzzy control. For the curves shown in FIGS. 21-23, the fitness function in the genetic algorithm is designed to minimize:
(heave jerk)² + (pitch jerk)² + (roll jerk)² + (entropy production)²
[0271]
FIG. 21 shows the square of the controlled (solid line) and uncontrolled (dashed line) heave jerk. FIG. 22 shows the square of the controlled (solid line) and uncontrolled (dashed line) pitch jerk. FIG. 23 shows the square of the controlled (solid line) and uncontrolled (dashed line) roll jerk.
[0272] Appendix I: Mathematical and Physical Background of Stochastic Simulation
[0273] 1. Stochastic Integrals and Stochastic Differential Equations
[0274] 1.1. Stochastic Integrals
[0275] Consider different types of stochastic integrals in order of increasing complexity:
[0276] A.
90
[0277] where y(τ) is the process of Brownian motion (M[y(τj+1)−y(τj)]=0; M[(y(τj+1)−y(τj))²]=τj+1−τj; and, if (τj,τj+1)∩(τk,τk+1)=∅, then the increments are independent and M[(y(τj+1)−y(τj))(y(τk+1)−y(τk))]=0), and x(τ) is a smoothly Markovian process. Define the stochastic integrals of types A, B, and C as limits in mean-square (l.i.m.) of integral sums.
[0278] Case A: Assume that Φ(τ) is a smooth function and
91
[0279] In this case: l.i.m.
92
[0280] for n→∞, where Φn(τ) is a sequence of step functions. Define the integral for a step function as
93
[0281] where (τj,τj+1) is the step of a function Φn(τ) (see Fig.I.6).
[0282] The limit in mean-square for the sequence of random functions Sn(t) exists if M[Sn−Sm]²→0 for n,m→∞. In this case M[Sn(t)−Sm(t)]² = ∫[Φn(τ)−Φm(τ)]²dτ, and this integral is similar to the Riemannian integral.
[0283] Case B: Assume that Φ(y,t) is the differentiable function of both arguments and
94
[0284] The stochastic Ito integral (case B) is:
95
[0285] The function y(τ) is a non-differentiable function with unbounded variation, and the transformation rules and calculation of this integral differ from the case of a differentiable function y(τ).
[0286] Example: calculate the integral
96
[0287] In this case
97
[0288] Denote Δj+1=y(τj+1)−y(τj); s=τ0; t=τN. Then
98
[0289] For a differentiable function y(τ)
99
[0290] It is possible to introduce a definition of the stochastic integral that is stable under the transition from processes with differentiable trajectories to processes with non-differentiable trajectories.
[0291] In case B, introduce the following definition (the Stratonovich symmetrical stochastic integral):
100
[0292] or in the equivalent form:
101
[0293] The first expression describes the case of Brownian motion with a constant diffusion coefficient, M[(y(τj+1)−y(τj))²]=τj+1−τj, and the second expression describes the more general case of non-homogeneous Brownian motion:
M{[y(τj+1)−y(τj)]² | y(τj)=y} = σ²(y(τj),τj)[τj+1−τj] + o(τj+1−τj).
[0294] An ordinary differential equation
dy/dt = A(y,t)
[0295] can be considered as the limit for Δt→0 of the finite-difference equation Δy=A(y,t)Δt+o(Δt). In the case of a stochastic differential equation, the increment Δy includes a random addend, Δy=A(y,t)Δt+Φ(y,t,Δt,αt), where the αt are independent random values and time is discrete. For a non-degenerate stochastic disturbance in the limit,
Δy=A(y,t)Δt+f(y,t,αt)√Δt  (1)
[0296] The relation (1) is similar to the equation describing the pre-limit model of Brownian motion (the Wiener process). The process of Brownian motion can be described as the limit of a process with a discrete set of states. Consider a particle performing a random walk on a lattice with a temporal step Δt and a spatial step δ, δ=√Δt. During the time t=nΔt the particle displacement is
103
[0297] ,
[0298] where Δxk=δ or −δ, each with probability ½. Using the central limit theorem, it is straightforward to show that for Δt→0 the probability density of the random-walk process x(t) tends to a Gaussian probability density.
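The pre-limit lattice walk described here can be simulated directly. The sketch below checks that the displacement after time t has mean near zero and variance near t, consistent with the Wiener limit; the step size and sample count are illustrative.

import random
import statistics

def lattice_walk(t=1.0, dt=1e-3):
    """Random walk with steps +/- sqrt(dt), each with probability 1/2."""
    delta = dt ** 0.5
    x = 0.0
    for _ in range(int(t / dt)):
        x += delta if random.random() < 0.5 else -delta
    return x

samples = [lattice_walk() for _ in range(2000)]
print(statistics.mean(samples), statistics.pvariance(samples))   # mean near 0, variance near t = 1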
[0299] As a further example, for a sequence ξ={ξ1,ξ2, . . . , ξn} of mutually independent random values with the same probability distribution, mathematical expectation M[ξi]=a, and variance Dξi=b, the probability density of the sum of random values
104
[0300] tends to a Gaussian probability density function with the parameters M[ηn]=na, Dηn=nb:
105
If the density functions of the random addends are not identical, then application of the central limit theorem requires the Lindeberg condition:
106
[0302] for n→∞, where τ>0 is an arbitrary constant.
[0303] Also, the trajectories of the pre-limit random walk converge with probability 1 (the broken lines xΔ(t) converge in the limit to a continuous trajectory x(t)) as Δt→0. The limiting random process x(t) is called a Wiener process. The trajectory of the Wiener process satisfies:
107
[0304] for Δt→0. With probability 1, the trajectory x(t) is not differentiable at any point t and, at the same time, the trajectory x(t) is continuous and satisfies a uniform continuity condition: for small values of |t2−t1|, |x(t2)−x(t1)|≦K|t2−t1|^(1/2), where K=const. Scaling the x axis by √B gives Brownian motion with the diffusion coefficient B. Brownian motion with a drift coefficient A corresponds to the case of Brownian motion with unequal probabilities of up and down steps of the random walk on the lattice.
[0305] As an example, consider that the probability of an up step is p and of a down step is q=1−p, with p≠q. Then M[Δxk]=(p−q)√(BΔt)=AΔt. In this case
108
[0306] For Δt→0 the probabilities of up and down displacements tend to ½. According to this property (notwithstanding the infinite velocity), the mean displacement is finite during a finite time. Define all smoothly Markovian processes as the sum of a large number of small increments and
109
[0307] for k>2.
[0308] For this case
110
[0309] As an example of a pre-limit model of Brownian motion, consider the random process xΔ(t) with discrete time, t=nΔ, and continuously distributed state:
111
[0310] where η(jΔ) are the values of a stationary Gaussian random process with the auto-correlation function Rη(τ)=M[η(t)η(t+τ)]=σ²R(τ). The function R(τ) decreases as τ increases. Define τcor as the time such that, for |t2−t1|>τcor, the Gaussian random values η(t1) and η(t2) are weakly correlated and, accordingly, weakly dependent. For the sum xΔ(t) of weakly dependent Gaussian values, one can use the central limit theorem. In this case, it is desirable to construct the process xΔ(t) so that the mathematical expectation of the quadratic increment for one step Δ is proportional to Δ:
112
[0311] In this case, the stochastic process xΔ(t) for Δ→0 turns into the process of Brownian motion. Thus, as Δ=τcor decreases, the auto-correlation function of the stochastic process η(t) must change as follows: the graphic form of the function Rη(τ) must simultaneously be compressed along the τ axis and stretched along the ordinate axis. The area under this curve remains constant:
113
In the limit, Rη(τ)→Cδ(τ), where δ(τ) is the Dirac delta-function. The generalized limiting random process η(t) is the Gaussian δ-correlated stochastic process (white noise of level C) with constant spectral density:
114
[0313] One can define different pre-limit models xΔ(t) of Brownian motion by changing the type of the auto-correlation function Rη(τ).
[0314] Example 1. Let
115
[0315] where auto-correlation function is
116
[0316] and for ω0→∞, Rη(τ)→Cδ(τ). In this case the process
117
[0317] becomes Brownian motion.
[0318] Example 2. Let
118
[0319] and Rη(τ)=αe^(−α|τ|)→δ(τ) for α→∞.
[0320] The smoothly Markovian processes are connected with the process of Brownian motion and can be obtained as the solutions of stochastic differential equations with Brownian motion as the external force (diffusion processes).
[0321] By defining M[f²(y,t,αt)]=B(y,t) and M[f(y,t,αt)]=0 in equation (1), the probability density function p(y,t) of the stochastic solution of equation (1) is defined as the solution of the Fokker-Planck-Kolmogorov equation:
∂p(y,t)/∂t = −∂[A(y,t)p(y,t)]/∂y + (1/2)∂²[B(y,t)p(y,t)]/∂y²  (2)
[0322] In general form, stochastic differential equations can be described as:
120
[0323] where yk(t) are independent processes of Brownian motion, or
121
[0324] where ξk(t) are independent Gaussian white noises; x={x1,x2, . . . , xn}.
[0325] The solutions of equations (3) and (4) can be defined as the equivalent stochastic integral equalities
122
[0326] If the stochastic integrals in equation (5) are defined as Ito stochastic integrals, then equations (3) and (4) are stochastic Ito differential equations. If they are defined as symmetrical Stratonovich integrals, then equations (3) and (4) are stochastic Stratonovich differential equations. In the latter case, it is possible to use wide-band Gaussian processes as the stochastic processes ξk(t).
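Equations of the form (4) are commonly simulated with the Euler-Maruyama scheme, which replaces the white-noise term by independent Gaussian increments of variance Δt. The scalar drift and diffusion below (a mean-reverting, Ornstein-Uhlenbeck-type example) are illustrative and are not the patent's plant model.

import random

def euler_maruyama(a, sigma, x0, t_end, dt=1e-3):
    """Simulate dx = a(x,t) dt + sigma(x,t) dy(t), with dy ~ N(0, dt) increments."""
    x, t = x0, 0.0
    while t < t_end:
        x += a(x, t) * dt + sigma(x, t) * random.gauss(0.0, dt ** 0.5)
        t += dt
    return x

# Illustrative drift and diffusion: mean-reverting toward zero with constant noise level.
a = lambda x, t: -1.5 * x
sigma = lambda x, t: 0.3
print(euler_maruyama(a, sigma, x0=1.0, t_end=2.0))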
[0327] The Ito stochastic integral is defined as:
123
[0328] The symmetrical Stratonovich stochastic integral is defined as:
124
[0329] The difference between these two stochastic integrals is equal to:
125
[0330] If the function Φ(y,t) is independent of y, then both definitions of the stochastic integral are equivalent:
126
[0331] and JS≡JI.
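The difference between the two definitions can be seen numerically for Φ(y,t)=y: along a single Brownian path, the Ito sum approximates (y(t)²−y(s)²)/2 − (t−s)/2, while the symmetrical (midpoint) sum approximates (y(t)²−y(s)²)/2, so their difference approaches (t−s)/2. The path length and step count below are illustrative.

import random

def brownian_path(t=1.0, n=100000):
    dt = t / n
    y = [0.0]
    for _ in range(n):
        y.append(y[-1] + random.gauss(0.0, dt ** 0.5))
    return y

y = brownian_path()
ito = sum(y[j] * (y[j + 1] - y[j]) for j in range(len(y) - 1))              # left-endpoint sum
strat = sum(0.5 * (y[j] + y[j + 1]) * (y[j + 1] - y[j]) for j in range(len(y) - 1))  # midpoint sum

# For Phi(y) = y on [0, 1]: strat - ito is close to 1/2, matching the theoretical correction term.
print(ito, strat, strat - ito)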
[0332] Equations (3) and (4) define a Markovian process. From definition (5), it is clear that, for given initial states x(t0)∈X0, the probability distribution of x(t) for t>t0 depends only on the states x(t0) and is independent of the past before the moment t0, while
127
[0333] and the increments of yk(τ) are independent of the values before the moment t0.
[0334] A Markovian process can be fully characterized by its local characteristics:
M[xi(t+Δt)−xi(t)|x,t] = Ai(x,t)Δt + o(Δt),
M[(xi(t+Δt)−xi(t))(xj(t+Δt)−xj(t))|x,t] = Bij(x,t)Δt + o(Δt).
[0335] For Ito stochastic differential equations
128
[0336] Using the relation between Ito and Stratonovich integrals from equation (8) it is possible to obtain for symmetrical stochastic differential equations:
129
[0337] The probability density function can be defined as the solution of equation (2).
Claims
- 1. An optimization control method for a shock absorber comprising the steps of:
obtaining a difference between a time differential of entropy inside a shock absorber and a time differential of entropy given to said shock absorber from a control unit that controls said shock absorber; and optimizing at least one control parameter of said control unit by using a genetic algorithm, said genetic algorithm using said difference as a fitness function, said fitness function constrained by at least one biologically-inspired constraint.
- 2. The optimization control method of claim 1, wherein said time differential of said step of optimizing reduces an entropy provided to said shock absorber from said control unit.
- 3. The optimization control method of claim 1, wherein said control unit comprises a fuzzy neural network, and wherein a value of a coupling coefficient for a fuzzy rule is optimized by using said genetic algorithm.
- 4. The optimization control method of claim 1, wherein said control unit comprises an offline module and an online control module, said method further including the steps of optimizing a control parameter based on said genetic algorithm by using said performance function, determining said control parameter of said online control module based on said control parameter, and controlling said shock absorber using said online control module.
- 5. The optimization control method of claim 4, wherein said offline module provides optimization using a simulation model, said simulation model based on a kinetic model of a vehicle suspension system.
- 6. The optimization control method of claim 4, wherein said shock absorber is arranged to alter a damping force by altering a cross-sectional area of an oil passage, and said control unit controls a throttle valve to thereby adjust said cross-sectional area of said oil passage.
- 7. A method for control of a plant comprising the steps of: calculating a first entropy production rate corresponding to an entropy production rate of a control signal provided to a model of said plant; calculating a second entropy production rate corresponding to an entropy production rate of said model of said plant; determining a fitness function for a genetic optimizer using said first entropy production rate and said second entropy production rate; providing said fitness function to said genetic optimizer; providing a teaching output from said genetic optimizer to an information filter; providing a compressed teaching signal from said information filter to a fuzzy neural network, said fuzzy neural network configured to produce a knowledge base; providing said knowledge base to a fuzzy controller, said fuzzy controller using an error signal and said knowledge base to produce a coefficient gain schedule; and providing said coefficient gain schedule to a linear controller.
- 8. The method of claim 7, wherein said genetic optimizer minimizes entropy production under one or more constraints.
- 9. The method of claim 8, wherein at least one of said constraints is related to a user-perceived evaluation of control performance.
- 10. The method of claim 7, wherein said model of said plant comprises a model of a suspension system.
- 11. The method of claim 7, wherein said second control system is configured to control a physical plant.
- 12. The method of claim 7, wherein said second control system is configured to control a shock absorber.
- 13. The method of claim 7, wherein said second control system is configured to control a damping rate of a shock absorber.
- 14. The method of claim 7, wherein said linear controller receives sensor input data from one or more sensors that monitor a vehicle suspension system.
- 15. The method of claim 14, wherein at least one of said sensors is an acceleration sensor that measures a vertical acceleration.
- 16. The method of claim 14, wherein at least one of said sensors is a length sensor that measures a change in length of at least a portion of said suspension system.
- 17. The method of claim 14, wherein at least one of said sensors is an angle sensor that measures an angle of at least a portion of said suspension system with respect to said vehicle.
- 18. The method of claim 14, wherein at least one of said sensors is an angle sensor that measures an angle of a first portion of said suspension system with respect to a second portion of said suspension system.
- 19. The method of claim 7, wherein said second control system is configured to control a throttle valve in a shock absorber.
- 20. A control apparatus comprising: off-line optimization means for determining a control parameter from an entropy production rate to produce a knowledge base from a compressed teaching signal; and online control means for using said knowledge base to develop a control parameter to control a plant.