System for intelligent control of an engine based on soft computing

Abstract
A reduced control system suitable for control of an engine as a nonlinear plant is described. The reduced control system is configured to use a reduced sensor set for controlling the plant without significant loss of control quality (accuracy) as compared to an optimal control system with an optimum sensor set. The control system calculates the information content provided by the reduced sensor set as compared to the information content provided by the optimum set. The control system also calculates the difference between the entropy production rate of the plant and the entropy production rate of the controller. A genetic optimizer is used to tune a fuzzy neural network in the reduced controller. A fitness function for the genetic optimizer provides optimum control accuracy in the reduced control system by minimizing the difference in entropy production while maximizing the sensor information content.
Description


BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention


[0003] The disclosed invention relates generally to engine control systems, and more particularly to electronic control systems for internal combustion engines.


[0004] 2. Description of the Related Art


[0005] Feedback control systems are widely used to maintain the output of a dynamic system at a desired value in spite of external disturbance forces that would move the output away from the desired value. For example, a household furnace controlled by a thermostat is an example of a feedback control system. The thermostat continuously measures the air temperature of the house, and when the temperature falls below a desired minimum temperature, the thermostat turns the furnace on. When the furnace has warmed the air above the desired minimum temperature, then the thermostat turns the furnace off. The thermostat-furnace system maintains the household temperature at a constant value in spite of external disturbances such as a drop in the outside air temperature. Similar types of feedback control are used in many applications.


[0006] A central component in a feedback control system is a controlled object, otherwise known as a process “plant,” whose output variable is to be controlled. In the above example, the plant is the house, the output variable is the air temperature of the house, and the disturbance is the flow of heat through the walls of the house. The plant is controlled by a control system. In the above example, the control system is the thermostat in combination with the furnace. The thermostat-furnace system uses simple on-off feedback control to maintain the temperature of the house. In many control environments, such as motor shaft position or motor speed control systems, simple on-off feedback control is insufficient. More advanced control systems rely on combinations of proportional feedback control, integral feedback control, and derivative feedback control.


[0007] A proportional-integral-derivative (PID) control system is a linear control system that is based on a dynamic model of the plant. In classical control systems, a linear dynamic model is obtained in the form of dynamic equations, usually ordinary differential equations. The plant is assumed to be relatively linear, time invariant, and stable. However, many real-world plants are time varying, highly nonlinear, and unstable. For example, the dynamic model may contain parameters (e.g., masses, inductances, aerodynamic coefficients, etc.) which are either poorly known or depend on a changing environment. If the parameter variation is small and the dynamic model is stable, then the PID controller may be sufficient. However, if the parameter variation is large, or if the dynamic model is unstable, then it is common to add adaptive or intelligent (AI) control to the PID control system.


[0008] AI control systems use an optimizer, typically a nonlinear optimizer, to program the operation of the PID controller and thereby improve the overall operation of the control system. The optimizers used in many AI control systems rely on a genetic algorithm. Using a set of inputs and a fitness function, the genetic algorithm works in a manner similar to the process of evolution to arrive at a solution which is, hopefully, optimal. The genetic algorithm generates sets of chromosomes (corresponding to possible solutions) and then sorts the chromosomes by evaluating each solution using the fitness function. The fitness function determines where each solution ranks on a fitness scale. Chromosomes which are more fit correspond to solutions that rate high on the fitness scale. Chromosomes which are less fit correspond to solutions that rate low on the fitness scale. Chromosomes that are more fit are kept (survive) and chromosomes that are less fit are discarded (die). New chromosomes are created to replace the discarded chromosomes. The new chromosomes are created by crossing pieces of existing chromosomes and by introducing mutations.
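
For illustration only, the selection, crossover, and mutation cycle described above can be sketched as follows for a real-valued chromosome; the population size, mutation rate, and example fitness function are arbitrary assumptions and not part of the disclosed system.

```python
import random

def evolve(fitness, chrom_len=3, pop_size=20, generations=50,
           mutation_rate=0.1, bounds=(-10.0, 10.0)):
    """Toy genetic algorithm: rank chromosomes by fitness, keep the best
    half (they survive), and refill the population by crossover and mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # most fit first
        survivors = pop[:pop_size // 2]           # less fit chromosomes die
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, chrom_len)  # single-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mutation_rate:   # occasional mutation
                child[random.randrange(chrom_len)] = random.uniform(lo, hi)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Example: evolve K = [k1, k2, k3] toward an (assumed) target gain schedule.
best = evolve(lambda k: -sum((ki - t) ** 2 for ki, t in zip(k, [2.0, 0.5, 0.1])))
print(best)
```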


[0009] The PID controller has a linear transfer function and thus is based upon a linearized equation of motion for the plant. Prior art genetic algorithms used to program PID controllers typically use simple fitness functions and thus do not solve the problem of poor controllability typically seen in linearization models. As is the case with most optimizers, the success or failure of the optimization often ultimately depends on the selection of the performance (fitness) function.


[0010] Evaluating the motion characteristics of a nonlinear plant is often difficult, in part due to the lack of a general analysis method. Conventionally, when controlling a plant with nonlinear motion characteristics, it is common to find certain equilibrium points of the plant and the motion characteristics of the plant are linearized in a vicinity near an equilibrium point. Control is then based on evaluating the pseudo (linearized) motion characteristics near the equilibrium point. This technique works poorly, if at all, for plants described by models that are unstable or dissipative.



SUMMARY OF THE INVENTION

[0011] The present invention solves these and other problems by providing a new AI control system that allows a reduced number of sensors to be used without a significant loss in control accuracy. The new AI control system is self-organizing and uses a fitness (performance) function that is based on the physical laws of minimum entropy and maximum sensor information. The self-organizing control system may be used to control complex plants described by nonlinear, unstable, dissipative models. The reduced control system is configured to use smart simulation techniques for controlling the plant, despite the reduction in the number of sensors, without significant loss of control quality (accuracy) as compared to an optimal control system. In one embodiment, the reduced control system comprises a neural network that is trained by a genetic analyzer. The genetic analyzer uses a fitness function that maximizes information while minimizing entropy production.


[0012] In one embodiment, the reduced control system is applied to an internal combustion engine to provide control without the use of extra sensors, such as, for example, an oxygen sensor. The reduced control system develops a reduced control signal from a reduced sensor set. The reduced control system is trained by a genetic analyzer that uses a control signal developed by an optimized control system. The optimized control system provides an optimum control signal based on data obtained from temperature sensors, air-flow sensors, and an oxygen sensor. In an off-line learning mode, the optimum control signal is subtracted from a reduced control signal (developed by the reduced control system) and the difference is provided to an information calculator. The information calculator provides an information criteria to the genetic analyzer. Data from the reduced sensor set is also provided to an entropy model, which calculates a physical criteria based on entropy. The physical criteria is also provided to the genetic analyzer. The genetic analyzer uses both the information criteria and the physical criteria to develop a training signal for the reduced control system.


[0013] In one embodiment, a reduced control system is applied to a vehicle suspension to provide control of the suspension system using data from a reduced number of sensors. The reduced control system develops a reduced control signal from a reduced sensor set. The reduced control system is trained by a genetic analyzer that uses a control signal developed by an optimized control system. The optimized control system provides an optimum control signal based on data obtained from a plurality of angle and position sensors. In an off-line learning mode, the optimum control signal is subtracted from a reduced control signal (developed by the reduced control system) and the difference is provided to an information calculator. In one embodiment, the reduced control system uses a vertical accelerometer mounted near the center of the vehicle. The information calculator provides an information criteria to the genetic analyzer. Data from the reduced sensor set is also provided to an entropy model, which calculates a physical criteria based on entropy. The physical criteria is also provided to the genetic analyzer. The genetic analyzer uses both the information criteria and the physical criteria to develop a training signal for the reduced control system.


[0014] In one embodiment, the invention includes a method for controlling a nonlinear object (a plant) by obtaining an entropy production difference between a time differentiation (dSu/dt) of the entropy of the plant and a time differentiation (dSc/dt) of the entropy provided to the plant from a controller. A genetic algorithm that uses the entropy production difference as a fitness (performance) function evolves a control rule for a low-level controller, such as a PID controller. The nonlinear stability characteristics of the plant are evaluated using a Lyapunov function. The evolved control rule may be further corrected by additional evolution using an information function that compares the information available from an optimum sensor system with the information available from a reduced sensor system. The genetic analyzer minimizes entropy and maximizes sensor information content.


[0015] In some embodiments, the control method may also include evolving a control rule relative to a variable of the controller by means of a genetic algorithm. The genetic algorithm uses a fitness function based on a difference between a time differentiation of the entropy of the plant (dSu/dt) and a time differentiation (dSc/dt) of the entropy provided to the plant. The variable may be corrected by using the evolved control rule.


[0016] In another embodiment, the invention comprises an AI control system adapted to control a nonlinear plant. The AI control system includes a simulator configured to use a thermodynamic model of a nonlinear equation of motion for the plant. The thermodynamic model is based on a Lyapunov function (V), and the simulator uses the function V to analyze control for a state stability of the plant. The AI control system calculates an entropy production difference between a time differentiation of the entropy of said plant (dSu/dt) and a time differentiation (dSc/dt) of the entropy provided to the plant by a low-level controller that controls the plant. The entropy production difference is used by a genetic algorithm to obtain an adaptation function in which the entropy production difference is minimized. The genetic algorithm provides a teaching signal to a fuzzy logic classifier that determines a fuzzy rule by using a learning process. The fuzzy logic controller is also configured to form a control rule that sets a control variable of the low-level controller.


[0017] In one embodiment, the low-level controller is a linear controller such as a PID controller. The learning processes may be implemented by a fuzzy neural network configured to form a look-up table for the fuzzy rule.


[0018] In yet another embodiment, the invention comprises a new physical measure of control quality based on minimum production entropy, and the use of this measure as a fitness function of a genetic algorithm in optimal control system design. This method provides a local entropy feedback loop in the control system. The entropy feedback loop provides for optimal control structure design by relating stability of the plant (using a Lyapunov function) and controllability of the plant (based on production entropy of the control system). The control system is applicable to all control systems, including, for example, control systems for mechanical systems, bio-mechanical systems, robotics, electro-mechanical systems, etc.







BRIEF DESCRIPTION OF THE DRAWINGS

[0019] The advantages and features of the disclosed invention will readily be appreciated by persons skilled in the art from the following detailed description when read in conjunction with the drawings listed below.


[0020]
FIG. 1 is a block diagram showing an example of an AI control method in the prior art.


[0021]
FIG. 2 is a block diagram showing an embodiment of an AI control method in accordance with one aspect of the present invention.


[0022]
FIG. 3A is a block diagram of an optimal control system.


[0023]
FIG. 3B is a block diagram of a reduced control system.


[0024]
FIG. 4A is an overview block diagram of a reduced control system that uses entropy-based soft computing.


[0025]
FIG. 4B is a detailed block diagram of a reduced control system that uses entropy-based soft computing.


[0026]
FIG. 5A is a system block diagram of a fuzzy logic controller.


[0027]
FIG. 5B is a computing block diagram of the fuzzy logic controller shown in FIG. 5A.


[0028]
FIG. 6 is a diagram of an internal combustion piston engine with sensors.


[0029]
FIG. 7 is a block diagram of a control system for controlling the internal combustion piston engine shown in FIG. 6.


[0030]
FIG. 8 is a schematic diagram of one half of an automobile suspension system.


[0031]
FIG. 9 is a block diagram of a control system for controlling the automobile suspension system shown in FIG. 8.







DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0032] A feedback control system is commonly used to control an output variable of a process or plant in the face of some disturbance. Linear feedback control systems typically use combinations of proportional feedback control, integral feedback control, and derivative feedback control. Feedback that is the sum of proportional plus integral plus derivative feedback is often referred to as PID control. The Laplace transform of an output u(s) of a PID controller is given by:
u(s) = G(s)e(s) = \left[ k_1 + \frac{k_2}{s} + k_3 s \right] e(s)  (1)


[0033] In the above equation, G(s) is the transfer function of the PID controller, e(s) is the controller input, u(s) is the controller output, k1 is the coefficient for proportional feedback, k2 is the coefficient for integral feedback, and k3 is the coefficient for derivative feedback. The coefficients k1, k2, and k3 may be represented by a coefficient vector K, where K=[k1, k2, k3]. The vector K is commonly called a Coefficient Gain Schedule (CGS). The values of the coefficients K used in the linear PID control system are based on a dynamic model of the plant. When the plant is unstable, nonlinear, and/or time-variant, then the coefficients in K are often controlled by an AI control system.
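
For illustration only, a discrete-time version of equation (1) could be computed as in the following sketch; the class name, sample period, and gain values are hypothetical and are not taken from the disclosure.

```python
class PIDController:
    """Discrete-time PID: u = k1*e + k2*integral(e) + k3*de/dt,
    with K = [k1, k2, k3] acting as the coefficient gain schedule."""
    def __init__(self, k1, k2, k3, dt):
        self.k1, self.k2, self.k3 = k1, k2, k3
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.k1 * error + self.k2 * self.integral + self.k3 * derivative

pid = PIDController(k1=2.0, k2=0.5, k3=0.1, dt=0.01)
u = pid.update(error=1.0)   # controller output u(t) for the current error e(t)
```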


[0034]
FIG. 1 shows a typical prior art AI control system 100. An input y(t) of the control system 100 is provided to a plus input of an adder 104 and an output x(t) of a plant 110 is provided to a minus input of the adder 104. An output of the adder 104 is provided as an error signal e(t) to an error signal input of a PID controller 106. An output u(t) of the PID controller 106 is provided to a first input of an adder 108. A disturbance m(t) is provided to a second input of the adder 108. An output u*(t) of the adder 108 is provided to an input of the plant 110. The plant 110 has a transfer function H(s) and an output x(t), where x(t)←→X(s) (where the symbol ←→ denotes the Laplace transform) and X(s)=H(s)u*(s). An output of the genetic algorithm 116 is provided to an input of a Fuzzy logic Neural Network (FNN) 118 and an output of the fuzzy neural network 118 is provided to a Fuzzy Controller (FC) 120. An output of the fuzzy controller 120 is a set of coefficients K, which are provided to a coefficient input of the PID controller 106.


[0035] The error signal e(t) provided to the PID controller 106 is the difference between the desired plant output value y(t) and the actual plant output value x(t). The PID controller 106 is designed to minimize the error represented by e(t) (the error being the difference between the desired and actual output signals). The PID controller 106 minimizes the error e(t) by generating an output signal u(t) which will move the output signal x(t) from the plant 110 closer to the desired value. The genetic algorithm 116, fuzzy neural network 118, and fuzzy controller 120 monitor the error signal e(t) and modify the gain schedule K of the PID controller 106 in order to improve the operation of the PID controller 106.


[0036] The PID controller 106 constitutes a reverse model relative to the plant 110. The genetic algorithm 116 evolves an output signal α based on a performance function ƒ. Plural candidates for α are produced and these candidates are paired, whereby plural chromosomes (parents) are produced. The chromosomes are evaluated and sorted from best to worst by using the performance function ƒ. After the evaluation of all parent chromosomes, good offspring chromosomes are selected from among the plural parent chromosomes, and some offspring chromosomes are randomly selected. The selected chromosomes are crossed so as to produce the parent chromosomes for the next generation. Mutation may also be provided. The second-generation parent chromosomes are also evaluated (sorted) and go through the same evolutionary process to produce the next-generation (i.e., third-generation) chromosomes. This evolutionary process continues until it reaches a predetermined generation or until the evaluation function ƒ finds a chromosome with a certain value. The outputs of the genetic algorithm are the chromosomes of the last generation. These chromosomes become the input information α provided to the fuzzy neural network 118.


[0037] In the fuzzy neural network 118, a fuzzy rule to be used in the fuzzy controller 120 is selected from a set of rules. The selected rule is determined based on the input information α from the genetic algorithm 116. Using the selected rule, the fuzzy controller 120 generates a gain schedule K for the PID controller 106. The vector coefficient gain schedule K is provided to the PID controller 106 and thus adjusts the operation of the PID controller 106 so that the PID controller 106 is better able to minimize the error signal e(t).


[0038] Although the AI controller 100 is advantageous for accurate control in regions near linearized equilibrium points, the accuracy deteriorates in regions away from the linearized equilibrium points. Moreover, the AI controller 100 is typically slow or even unable to catch up with changes in the environment surrounding the plant 110. The PID controller 106 has a linear transfer function G(s) and thus is based upon a linearized equation of motion for the plant 110. Since the evaluation function ƒ used in the genetic algorithm 116 is based only on the information related to the input e(t) of the linear PID controller 106, the controller 100 does not solve the problem of poor controllability typically seen in linearization models. Furthermore, the output results, both in the gain schedule K and the output x(t), often fluctuate greatly, depending on the nature of the performance function ƒ used in the genetic algorithm 116. The genetic algorithm 116 is a nonlinear optimizer that optimizes the performance function ƒ. As is the case with most optimizers, the success or failure of the optimization often ultimately depends on the selection of the performance function ƒ.


[0039] The present invention solves these and other problems by providing a new AI control system 200 shown in FIG. 2. Unlike prior AI control systems, the control system 200 is self-organizing and uses a new performance function ƒ, which is based on the physical law of minimum entropy. An input y(t) of the control system 200 is provided to a plus input of an adder 204 and an output x(t) of a plant 210 is provided to a minus input of the adder 204. An output of the adder 204 is provided as an error signal e(t) to an error signal input of a PID controller 206 and to an input of a fuzzy controller 220. An output u(t) of the PID controller 206 is provided to a first input of an adder 208 and to a first input of an entropy calculator (EC) 214. A disturbance m(t) is provided to a second input of the adder 208. An output u*(t) of the adder 208 is provided to an input of the plant 210. The plant 210 has a transfer function H(s) and an output x(t), such that X(s)=H(s)u*(s), where x(t)←→X(s). The output x(t) is provided to a second input of the entropy calculator 214 and to the minus input of the adder 204. An output of the entropy calculator 214 is provided to an input of a genetic algorithm 216 and an output of the genetic algorithm 216 is provided to an input of a Fuzzy logic Neural Network (FNN) 218. An output of the fuzzy neural network 218 is provided to a rules selector input 222 of the fuzzy controller 220. A Coefficient Gain Schedule (CGS) output 212 of the fuzzy controller 220 is provided to a gain schedule input of the PID controller 206.


[0040] The combination of the genetic algorithm 216 and the entropy calculator 214 comprises a Simulation System of Control Quality 215. The combination of the fuzzy neural network 218 and the fuzzy controller 220 comprises a Fuzzy Logic Classifier System FLCS 219. The combination of the plant 210 and the adder 208 comprises a disturbed plant model 213. The disturbed plant signal u*(t)=u(t)+m(t), and the disturbance m(t) are typically unobservable.


[0041] The error signal e(t) provided to the PID controller 206 is the difference between the desired plant output value y(t) and the actual plant output value x(t). The PID controller 206 is designed to minimize the error represented by e(t). The PID controller 206 minimizes the error e(t) by generating an output signal u(t) which will move the output signal x(t) from the plant 210 closer to the desired value. The fuzzy controller 220 monitors the error signal e(t) and modifies the gain schedule K of the PID controller 206 according to a fuzzy control rule selected by the fuzzy neural network 218.


[0042] The genetic algorithm 216 provides a teaching signal KT to the fuzzy neural network 218. The teaching signal KT is a global optimum solution of a coefficient gain schedule K generated by the genetic algorithm 216.


[0043] The PID controller 206 constitutes a reverse model relative to the plant 210. The genetic algorithm 216 evolves an output signal α based on a performance function ƒ. Plural candidates for α are produced and these candidates are paired, whereby plural chromosomes (parents) are produced. The chromosomes are evaluated and sorted from best to worst by using the performance function ƒ. After the evaluation of all parent chromosomes, good offspring chromosomes are selected from among the plural parent chromosomes, and some offspring chromosomes are randomly selected. The selected chromosomes are crossed so as to produce the parent chromosomes for the next generation. Mutation is also employed. The second-generation parent chromosomes are also evaluated (sorted) and go through the same evolutionary process to produce the next-generation (i.e., third-generation) chromosomes. This evolutionary process continues until it reaches a predetermined generation or until the evaluation function ƒ finds a chromosome with a certain value. Then, a component from a chromosome of the last generation becomes the final output, i.e., the input information α provided to the fuzzy neural network 218.


[0044] In the fuzzy neural network 218, a fuzzy rule to be used in the fuzzy controller 220 is selected from a set of rules. The selected rule is determined based on the input information α from the genetic algorithm 216. Using the selected rule, the fuzzy controller 220 generates a gain schedule K for the PID controller 206. This is provided to the PID controller 206 and thus adjusts the operation of the PID controller 206 so that the PID controller 206 is better able to minimize the error signal e(t).


[0045] The fitness function ƒ for the genetic algorithm is given by
f = \min \frac{dS}{dt}  (2), \quad \text{where} \quad \frac{dS}{dt} = \frac{dS_c}{dt} - \frac{dS_u}{dt}  (3)


[0046] The quantity dSu/dt represents the rate of entropy production in the output x(t) of the plant 210. The quantity dSc/dt represents the rate of entropy production in the output u(t) of the PID controller 206.
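
For illustration only, one way the fitness of equations (2) and (3) might be evaluated from sampled entropy production rates is sketched below; the accumulation over time steps and the sign convention (an optimizer that maximizes fitness is assumed) are illustrative choices, not the disclosed implementation.

```python
def entropy_fitness(dSu_dt, dSc_dt):
    """Accumulate the entropy production difference dS/dt = dSc/dt - dSu/dt
    (eq. 3) over a sampled trajectory; the sign is flipped so that a
    maximizing genetic optimizer drives the difference toward zero (eq. 2)."""
    return -sum(abs(sc - su) for su, sc in zip(dSu_dt, dSc_dt))

# Example with three sampled time steps of each entropy production rate.
print(entropy_fitness(dSu_dt=[0.10, 0.12, 0.11], dSc_dt=[0.09, 0.12, 0.13]))
```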


[0047] Entropy is a concept that originated in physics to characterize the heat, or disorder, of a system. It can also be used to provide a measure of the uncertainty of a collection of events, or, for a random variable, of a distribution of probabilities. The entropy function provides a measure of the lack of information in a probability distribution. To illustrate, assume that p(x) represents a probabilistic description of the known state of a parameter, that is, p(x) is the probability that the parameter is equal to the value x. If p(x) is uniform, then the parameter is equally likely to hold any value, and an observer will know little about the parameter. In this case, the entropy function is at its maximum. However, if one of the elements of p(x) occurs with a probability of one, then an observer will know the parameter exactly and have complete information about it. In this case, the entropy of p(x) is at its minimum possible value. Thus, by providing a measure of uniformity, the entropy function allows quantification of the information in a probability distribution.
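
For illustration only, the following sketch computes the entropy of a discrete probability distribution, showing the maximum for a uniform distribution and the minimum when one outcome has probability one; this is the standard Shannon entropy, used here purely as an example.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution p (list of probabilities)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: maximum, log(4) ≈ 1.386
print(entropy([1.0, 0.0, 0.0, 0.0]))      # complete information: 0.0
```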


[0048] It is possible to apply these entropy concepts to parameter recovery by maximizing the entropy measure of a distribution of probabilities while constraining the probabilities so that they satisfy a statistical model given measured moments or data. Through this optimization, the distribution that has the least possible information that is consistent with the data may be found. In a sense, one is translating all of the information in the data into the form of a probability distribution. Thus, the resultant probability distribution contains only the information in the data without imposing additional structure. In general, entropy techniques are used to formulate the parameters to be recovered in terms of probability distributions and to describe the data as constraints for the optimization. Using entropy formulations, it is possible to perform a wide range of estimations, address ill-posed problems, and combine information from varied sources without having to impose strong distributional assumptions.


[0049] Entropy-based control of the plant in FIG. 2 is based on obtaining the difference between a time differentiation (dSu/dt) of the entropy of the plant and a time differentiation (dSc/dt) of the entropy provided to the plant from a low-level controller that controls the plant 210, and then evolving a control rule using a genetic algorithm. The time derivative of the entropy is called the entropy production rate. The genetic algorithm uses the difference between the entropy production rate of the plant (dSu/dt) and the entropy production rate of the low-level controller (dSc/dt) as a performance function. Nonlinear operation characteristics of the physical plant 210 are calculated by using a Lyapunov function.


[0050] The dynamic stability properties of the plant 210 near an equilibrium point can be determined by use of Lyapunov functions. Let V(x) be a continuously differentiable scalar function defined in a domain D⊂Rn that contains the origin. The function V(x) is said to be positive definite if V(0)=0 and V(x)>0 for x≠0. The function V(x) is said to be positive semidefinite if V(x)≧0 for all x. A function V(x) is said to be negative definite or negative semidefinite if −V(x) is positive definite or positive semidefinite, respectively. The derivative of V along the trajectories of ẋ=ƒ(x) is given by:
\dot{V}(x) = \sum_{i=1}^{n} \frac{\partial V}{\partial x_i}\,\dot{x}_i = \frac{\partial V}{\partial x}\,f(x)  (4)


[0051] where ∂V/∂x is a row vector whose ith component is ∂V/∂xi and the components of the n-dimensional vector ƒ(x) are locally Lipschitz functions of x, defined for all x in the domain D. The Lyapunov stability theorem states that the origin is stable if there is a continuously differentiable positive definite function V(x) such that V̇(x) is negative definite. A function V(x) satisfying these conditions is called a Lyapunov function.
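
For illustration only, equation (4) can be evaluated numerically as in the following sketch, which uses a damped harmonic oscillator and the candidate function V(x) = ½(x1² + x2²); the example system is an assumption chosen for simplicity and is not the plant of the invention.

```python
def f(x):
    """Example dynamics x' = f(x): a damped harmonic oscillator."""
    x1, x2 = x
    return [x2, -x1 - 0.5 * x2]

def V(x):
    """Candidate Lyapunov function V(x) = 1/2 (x1^2 + x2^2), positive definite."""
    return 0.5 * (x[0] ** 2 + x[1] ** 2)

def V_dot(x):
    """Equation (4): V_dot = sum_i (dV/dx_i) * f_i(x), which is x . f(x) here."""
    return sum(xi * fi for xi, fi in zip(x, f(x)))

# Here V_dot = -0.5 * x2^2 <= 0 along trajectories, indicating stability
# of the origin for this example system.
print(V([1.0, -0.5]), V_dot([1.0, -0.5]))
```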


[0052] Calculation of the Lyapunov dynamic stability and entropy production for a closed nonlinear mechanical system is demonstrated by using the Holmes-Rand (Duffing-Van der Pol) nonlinear oscillator as an example. The Holmes-Rand oscillator is described by the equation:




\ddot{x} + (\alpha + \beta x^2)\dot{x} - \gamma x + x^3 = 0  (5)



[0053] where α, β, and γ are constant parameters. A Lyapunov function for the Holmes-Rand oscillator is given by:
V = \frac{1}{2}\dot{x}^2 + U(x), \quad \text{where} \quad U = \frac{1}{4}x^4 - \frac{1}{2}\gamma x^2  (6)


[0054] Entropy production diS/dt for the Holmes-Rand oscillator is given by the equation:
\frac{d_i S}{dt} = (\alpha + \beta x^2)\dot{x}^2  (7)


[0055] Equation 5 can be rewritten as:
\ddot{x} + (\alpha + \beta x^2)\dot{x} + \frac{\partial U}{\partial x} = 0  (8)


[0056] After multiplying both sides of the above equation by {dot over (x)}, then dV/dt can be calculated as:
\frac{dV}{dt} = \ddot{x}\dot{x} + \frac{\partial U}{\partial x}\dot{x} = -\frac{1}{T}\frac{d_i S}{dt}  (9)


[0057] where T is a normalizing factor.
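
For illustration only, the following sketch integrates the Holmes-Rand oscillator of equation (5) with a simple Euler step and checks numerically that dV/dt from equation (9), with T taken as 1, balances the entropy production of equation (7); the parameter values, step size, and initial state are arbitrary assumptions.

```python
alpha, beta, gamma = 0.1, 0.2, 1.0   # illustrative constants
dt = 1e-3
x, v = 1.0, 0.0                      # state: x and dx/dt

for _ in range(1000):
    a = -(alpha + beta * x**2) * v + gamma * x - x**3   # eq. (5) solved for x''
    dV_dt = v * a + (x**3 - gamma * x) * v              # eq. (9): x''x' + (dU/dx)x'
    diS_dt = (alpha + beta * x**2) * v**2               # eq. (7)
    assert abs(dV_dt + diS_dt) < 1e-9                   # dV/dt = -diS/dt with T = 1
    x, v = x + v * dt, v + a * dt                       # Euler integration step
```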


[0058] An interrelation between a Lyapunov function and the entropy production in an open dynamic system can be established by assuming a Lyapunov function of the form
V = \frac{1}{2}\sum_{i=1}^{n}\left(q_i^2 + S^2\right)  (10)


[0059] where S=Su−Sc, and the qi are generalized coordinates. It is possible to introduce the entropy function S in the Lyapunov function V because entropy S is also a scalar function of time. Differentiation of V with respect to time gives:
\frac{dV}{dt} = \sum_{i=1}^{6} q_i\dot{q}_i + S\dot{S}  (11)


[0060] In this case, {dot over (q)}i=φi(qi, τ, t), S=Su−Sc, {dot over (S)}={dot over (S)}u−{dot over (S)}c, and thus:
\frac{dV}{dt} = \sum_{i=1}^{6} q_i\varphi_i(q_i,\tau,t) + (S_u - S_c)\left(\frac{dS_u}{dt} - \frac{dS_c}{dt}\right)  (12)


[0061] A special case occurs when β=0 and the Holmes-Rand oscillator reduces to a force-free Duffing oscillator, wherein:
\frac{d_i S}{dt} = -\alpha\dot{x}^2 \quad \text{(Duffing oscillator)}  (13)


[0062] A Van der Pol oscillator is described by the equation:




\ddot{x} + (x^2 - 1)\dot{x} + x = 0  (14)



[0063] and the entropy production is given by:
\frac{d_i S}{dt} = \frac{1}{T}\left(x^2 - 1\right)\dot{x}^2 \quad \text{(Van der Pol oscillator)}  (15)


[0064] For a micro-mobile robot in fluid, a mechanical model is given by:
m_1\ddot{x}_1 + \frac{C_d\rho}{2}A_1\left|\dot{x}_1\right|\dot{x}_1 + K_1(x_1 - x_0 - l_1\theta_0) - K_2(x_2 - x_1 - l_2\theta_1) = 0  (16)

m_2\ddot{x}_2 + \frac{C_d\rho}{2}A_2\left|\dot{x}_2\right|\dot{x}_2 + K_2(x_2 - x_1 - l_2\theta_1) - K_3(x_3 - x_2 - l_3\theta_2) = 0  (17)

m_3\ddot{x}_3 + \frac{C_d\rho}{2}A_3\left|\dot{x}_3\right|\dot{x}_3 + K_3(x_3 - x_2 - l_3\theta_2) = 0  (18)

where: \theta_{n+1} = -\frac{1}{2}\theta_n + \frac{3}{2}\frac{1}{l_{n+1}}(x_{n+1} - x_n)  (19)


[0065] Values for a particular micro-mobile robot are given in Table 1 below.
TABLE 1

Item    Value         Units
m1      1.6 × 10−7    kg
m2      1.4 × 10−6    kg
m3      2.4 × 10−6    kg
l1      2.0 × 10−3    m
l2      4.0 × 10−3    m
l3      4.0 × 10−3    m
K1      61.1          N/m
K2      13.7          N/m
K3      23.5          N/m
A1      4.0 × 10−6    m2
A2      2.4 × 10−6    m2
A3      4.0 × 10−6    m2
Cd      1.12          —
ρ       1000          kg/m3


[0066] Entropy production for the micro-mobile robot is given by the equation:
\frac{dS_i}{dt} = \sum_{n=1}^{3}\frac{C_d\rho}{2}A_n\left|\dot{x}_n\right|\dot{x}_n^2  (20)


[0067] and the Lyapunov Function is given by:
V = \sum_{i=1}^{3}\frac{m_i\dot{x}_i^2}{2} + \sum_{i=1}^{3}\frac{K_i\left(x_i - x_{i-1} - l_i\theta_{i-1}\right)^2}{2} + \frac{S^2}{2}  (21)


[0068] where S=Si−Sc and Sc is the entropy of a controller with torque τ.
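
For illustration only, equation (20) can be evaluated from the constants of Table 1 and a set of sampled link velocities, as sketched below; the velocity values are arbitrary examples.

```python
Cd, rho = 1.12, 1000.0               # drag coefficient and fluid density (Table 1)
A = [4.0e-6, 2.4e-6, 4.0e-6]         # cross sections A1..A3 in m^2 (Table 1)

def entropy_production(xdot):
    """Equation (20): sum over the links of (Cd*rho/2) * An * |x_n'| * x_n'^2."""
    return sum(0.5 * Cd * rho * An * abs(v) * v**2 for An, v in zip(A, xdot))

print(entropy_production([1e-3, -2e-3, 0.5e-3]))   # example link velocities in m/s
```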


[0069] The necessary and sufficient conditions for Lyapunov stability of a plant are given by the relationship:
\sum_{i} q_i\varphi_i(q_i,\tau,t) < (S_i - S_c)\left(\frac{dS_c}{dt} - \frac{dS_i}{dt}\right), \quad \frac{dS_c}{dt} > \frac{dS_i}{dt}  (22)


[0070] According to the above equation, stability of a plant can be achieved with “negentropy” Sc (in the terminology used by Brillouin) where a change of negentropy dSc/dt in the control system 206 is subtracted from a change of entropy dSi/dt in the motion of the plant 210.


[0071] The robust AI control system 200 provides improved control of mechanical systems in stochastic environments (e.g., active vibration control), intelligent robotics and electro-mechanical systems (e.g., mobile robot navigation, manipulators, collective mobile robot control), bio-mechanical systems (e.g., power assist systems, control of artificially replaced organs in medical systems such as artificial lung ventilation), micro electro-mechanical systems (e.g., micro robots in fluids), etc.


[0072] The genetic algorithm realizes the search for optimal controllers with a simple structure using the principle of minimum entropy production. The fuzzy neural network controller gives a more flexible structure to controllers with smaller torque, and its learning process produces less entropy than a genetic analyzer alone. Thus, an instinct mechanism produces less entropy than an intuition mechanism. However, the time necessary to achieve optimal control with the learning process of the fuzzy neural network (instinct) is greater than with the global search of the genetic algorithm (intuition).


[0073] Realization of coordinated action between the look-up tables of the fuzzy controller 220 is accomplished by the genetic algorithm and the fuzzy neural network. In particular, the structure 200 provides a multimode fuzzy controller coupled with a linear or nonlinear neural network 218. The control system 200 is a realization of a self-organizing AI control system with intuition and instinct. In the adaptive controller 200, the feedback gains of the PID controller 206 are changed according to the quantum fuzzy logic, and approximate reasoning is provided by the use of nonlinear dynamic motion equations.


[0074] The fuzzy tuning rules for the gains ki are shaped by the learning system in the fuzzy neural network 218 with acceleration of fuzzy rules on the basis of global inputs provided by the genetic algorithm 216. The control system 200 is thus a hierarchical, two-level control system that is intelligent “in small.” The lower (execution) level is provided by a traditional PID controller 206, and the upper (coordination) level is provided by a KB (with a fuzzy inference module in the form of production rules with different models of fuzzy implication) together with fuzzification and de-fuzzification components.


[0075] The genetic algorithm 216 simulates an intuition mechanism of choosing the optimal structure of the PID controller 206 by using the fitness function, which is the measure of the entropy production, and the evolution function, which in this case is entropy.



Reduced Sensor Control Systems

[0076] In one embodiment, the entropy-based control system is applied to a reduced control system wherein the number of sensors has been reduced from that of an optimal system.


[0077]
FIG. 3A shows an optimal control system x 302 controlling a plant 304. The optimal control system x produces an output signal x. A group of m sensors collect state information from the plant 304 and provide the state information as feedback to the optimal control system x 302. The control system x 302 is optimal not in the sense that it is perfect or best, but rather, the control system x 302 is optimal in the sense that it provides a desired control accuracy for controlling the output of the plant 304.


[0078]
FIG. 3B shows a reduced control system y 312 controlling the plant 304. The reduced control system y 312 provides an output signal y. A group of n sensors (where n<m) collect state information from the plant 304 and provide the state information as feedback to the reduced control system y 312. The reduced control system 312 is reduced because it receives information from fewer sensors than the optimal control system 302 (n is less than m).


[0079] The reduced control system 312 is configured to use smart simulation techniques for controlling the plant 304, despite the reduction in the number of sensors, without significant loss of control quality (accuracy) as compared to the optimal control system 302. The plant 304 may be either stable or unstable, and the control system 312 can use either a complete model of the plant 304 or a partial model of the plant 304.


[0080] The reduced control system 312 provides reliability, sufficient accuracy, robustness of control, stability of itself and the plant 304, and lower cost (due, in part, to the reduced number of sensors needed for control). The reduced control system 312 is useful for robust intelligent control of plants such as, for example, suspension systems, engines, helicopters, ships, robots, micro electro-mechanical systems, etc.


[0081] In some embodiments, reduction in the number of sensors may be used, for example, to integrate sensors as intelligent systems (e.g., “sensor-actuator-microprocessor” with the preliminary data processing), and for simulation of robust intelligent control system with a reduced number of sensors.


[0082] The amount of information in the reduced output signal y (with n sensors) is preferably close to the amount of information in the optimal output signal. In other words, the reduced (variable) output y is similar to the optimal output x, such that (x≈y). The estimated accuracy ε of the output y as related to x is M[(x−y)2]≦ε2.


[0083] In some embodiments, the reduced control system 312 is adapted to minimize the entropy production in the control system 312 and in the plant 304.


[0084] The general structure of a reduced control system is shown in FIGS. 4A and 4B. FIG. 4A is a block diagram showing a reduced control system 480 and an optimal control system 420. The optimal control system 420, together with an optimizer 440 and a sensor compensator 460, is used to teach the reduced control system 480. In FIG. 4A, a desired signal (representing the desired output) is provided to an input of the optimal control system 420 and to an input of the reduced control system 480. The optimal control system 420, having m sensors, provides an output sensor signal xb and an optimal control signal xa. The reduced control system 480 provides an output sensor signal yb and a reduced control signal ya. The signals xb and yb include data from k sensors, where k≦m−n. The k sensors are typically the sensors that are not common between the sensor systems 422 and 482. The signals xb and yb are provided to first and second inputs of a subtractor 491. An output of the subtractor 491 is a signal εb, where εb=xb−yb. The signal εb is provided to a sensor input of a sensor compensator 460. The signals xa and ya are provided to first and second inputs of a subtractor 490. An output of the subtractor 490 is a signal εa where εa=xa−ya. The signal εa is provided to a control signal input of the sensor compensator 460. A control information output of the sensor compensator 460 is provided to a control information input of the optimizer 440. A sensor information output of the sensor compensator 460 is provided to a sensor information input of the optimizer 440. A sensor signal 483 from the reduced control system 480 is also provided to an input of the optimizer 440. An output of the optimizer 440 provides a teaching signal 443 to an input of the reduced control system 480.


[0085] In the description that follows, off-line mode typically refers to a calibration mode, wherein the control object 428 (and the control object 488) is run with an optimal set of m sensors. In one embodiment, the off-line mode is run at the factory or repair center where the additional sensors (i.e., the sensors that are in the m set but not in the n set) are used to train the FNN1 426 and the FNN2 486. The online mode typically refers to an operational mode (e.g., normal mode) where the system is run with only the n set of sensors.


[0086]
FIG. 4B is a block diagram showing details of the blocks in FIG. 4A. In FIG. 4B, the output signal xb is provided by an output of a sensor set m 422 having m sensors, given by m=k+k1. The information from the sensor system m 422 is a signal (or group of signals) having optimal information content I1. In other words, the information I1 is the information from the complete set of m sensors in the sensor system 422. The output signal xa is provided by an output of a control unit 425. The control signal xa is provided to an input of a control object 428. An output of the control object 428 is provided to an input of the sensor system 422. Information Ik from the set k of sensors is provided to an online learning input of a fuzzy neural network (FNN1) 426 and to an input of a first Genetic Algorithm (GA1) 427. Information Ik1 from the set of sensors k1 in the sensor system 422 is provided to an input of a control object model 424. An off-line tuning signal output from the algorithm GA1 427 is provided to an off-line tuning signal input of the FNN1 426. A control output from the FNN1 426 is the control signal xa, which is provided to a control input of the control object 428. The control object model 424 and the FNN1 426 together comprise an optimal fuzzy control unit 425.


[0087] Also in FIG. 4B, the sensor compensator 460 includes a multiplier 462, a multiplier 466, an information calculator 468, and an information calculator 464. The multiplier 462 and the information calculator 464 are used in online (normal) mode. The multiplier 466 and the information calculator 468 are provided for off-line checking.


[0088] The signal εa from the output of the adder 490 is provided to a first input of the multiplier 462 and to a second input of the multiplier 462. An output of the multiplier 462, being a signal εa2, is provided to an input of the information calculator 464. The information calculator 464 computes Ha(y)≦I(xa,ya). An output of the information calculator 464 is an information criteria for accuracy and reliability, I(xa,ya)→max.


[0089] The signal εb from the output of the adder 491 is provided to a first input of the multiplier 466 and to a second input of the multiplier 466. An output of the multiplier 466, being a signal εb2, is provided to an input of the information calculator 468. The information calculator 468 computes Hb(y)≦I(xb,yb). An output of the information calculator 468 is an information criteria for accuracy and reliability, I(xb,yb)→max.


[0090] The optimizer 440 includes a second Genetic Algorithm (GA2) 444 and an entropy model 442. The signal I(xa,ya)→max from the information calculator 464 is provided to a first input of the algorithm (GA2) 444 in the optimizer 440. An entropy signal S→min is provided from an output of the entropy model 442 to a second input of the algorithm GA2 444. The signal I(xb,yb)→max from the information calculator 468 is provided to a third input of the algorithm (GA2) 444 in the optimizer 440.


[0091] The signals I(xa,ya)→max and I(xb,yb)→max provided to the first and third inputs of the algorithm (GA2) 444 are information criteria, and the entropy signal S(k2)→min provided to the second input of the algorithm GA2 444 is a physical criteria based on entropy. An output of the algorithm GA2 444 is a teaching signal for the FNN2 486 described below.


[0092] The reduced control system 480 includes a reduced sensor set 482, a control object model 484, the FNN2 486, and a control object 488. When run in a special off-line checking (verification) mode, the sensor system 482 also includes the set of sensors k. The control object model 484 and the FNN2 486 together comprise a reduced fuzzy control unit 485. An output of the control object 488 is provided to an input of the sensor set 482. An I2 output of the sensor set 482 contains information from the set n of sensors, where n=(k1+k2)<m, such that I2<I1. The information I2 is provided to a tuning input of the FNN2 486, to an input of the control object model 484, and to an input of the entropy model 442. The teaching signal 443 from the algorithm GA2 444 is provided to a teaching signal input of the FNN2 486. A control output from the FNN2 486 is the signal ya, which is provided to a control input of the control object 488.


[0093] The control object models 424 and 484 may be either full or partial models. Although FIGS. 4A and 4B show the optimal system 420 and the reduced system 480 as separate systems, typically the systems 420 and 480 are the same system. The system 480 is “created” from the system 420 by removing the extra sensors and training the neural network. Thus, typically, the control object models 424 and 484 are the same. The control object 428 and 488 are also typically the same.


[0094]
FIG. 4B shows an off-line tuning arrow mark from the GA1 427 to the FNN1 426 and from the GA2 444 to the FNN2 486. FIG. 4B also shows an online learning arrow mark 429 from the sensor system 422 to the FNN1 426. Tuning by the GA2 444 means changing a set of bonding coefficients in the FNN2 486. The bonding coefficients are changed (using, for example, an iterative or trial and error process) so that I(x,y) trends toward a maximum and S trends toward a minimum. In other words, the information of the coded set of the coefficients is sent to the FNN2 486 from the GA2 444 as I(x,y) and S are evaluated. Typically, the bonding coefficients in the FNN2 486 are tuned in off-line mode at a factory or service center.


[0095] The teaching signal 429 is a signal that operates on the FNN1 426 during operation of the optimal control system 420 with an optimal sensor set. Typically, the teaching signal 429 is provided by sensors that are not used with the reduced control system 480 when the reduced control system is operating in online mode. The GA1 427 tunes the FNN1 426 during off-line mode. The signal lines associated with xb and yb are dashed to indicate that the xb and yb signals are typically used only during a special off-line checking (i.e., verification) mode. During the verification mode, the reduced control system is run with an optimal set of sensors. The additional sensor information is provided to the optimizer 440, and the optimizer 440 verifies that the reduced control system 480 operates with the desired, almost optimal, accuracy.


[0096] For stable and unstable control objects with a non-linear dissipative mathematical model description and a reduced number of sensors (or a different set of sensors), the control system design is connected with calculation of an output accuracy of the control object and reliability of the control system according to the information criteria I(xa,ya)→max and I(xb,yb)→max. Control system design is also connected with checking the stability and robustness of the control system and the control object according to the physical criteria S(k2)→min.


[0097] In a first step, the genetic algorithm GA2 444, with a fitness function described as the maximum of mutual information between the optimal control signal xa and a reduced control signal ya, is used to develop the teaching signal 443 for the fuzzy neural network FNN2 486 in an off-line simulation. The fuzzy neural network FNN2 486 is realized using a learning process with back-propagation for adaptation to the teaching signal, and develops a lookup-table for changing the parameters of a PID-controller in the controller 485. This provides sufficient conditions for achieving the desired control reliability with sufficient accuracy.


[0098] In a second step, the genetic algorithm GA2 444, with a fitness function described as the minimum entropy production rate dS/dt (calculated according to a mathematical model of the control object 488), is used to realize a node correction lookup-table in the FNN2 486. This approach provides stability and robustness of the reduced control system 480 with reliable and sufficient accuracy of control. This provides a sufficient condition of design for the robust intelligent control system with the reduced number of sensors.


[0099] The first and second steps above need not be done in the listed order or sequentially. In the simulation of unstable objects, both steps are preferably done in parallel using the fitness function as the sum of the physical and the information criteria.
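
For illustration only, a combined fitness of the kind described above might be formed as in the following sketch; the weighting of the information criterion against the physical (entropy) criterion, and the function and parameter names, are assumptions.

```python
def combined_fitness(mutual_information, entropy_production,
                     w_info=1.0, w_phys=1.0):
    """Sum of the information criterion (to be maximized) and the physical
    criterion (entropy production, to be minimized, hence subtracted)."""
    return w_info * mutual_information - w_phys * entropy_production

# A genetic optimizer would rank candidate FNN2 coefficient sets by this value,
# preferring high mutual information I(x,y) and low entropy production S.
print(combined_fitness(mutual_information=1.2, entropy_production=0.3))
```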


[0100] After the simulation of a lookup-table for the FNN2 486, the mathematical model of the control object 484 is exchanged for the sensor system 482 in order to check the qualitative characteristics between the reduced control system with the reduced number of sensors and the reduced control system with a full complement (optimum number) of sensors. Parallel optimization in the GA2 444 with two fitness functions is used to realize the global correction of a lookup-table in the FNN2 486.


[0101] The entropy model 442 extracts data from the sensor information I2 to help determine the desired number of sensors for measurement and control of the control object 488.


[0102]
FIGS. 4A and 4B show the general case when the reduction of sensors excludes the measurement sensors in the output of the control object and the comparison of the control signals on the information criteria is possible. The sensor compensator 460 computes the information criteria as a maximum of mutual information between the two control signals xa and ya (used as the first fitness function for the GA2 444). The entropy model 442 presents the physical criteria as a minimum production entropy (used as the second fitness function in the GA2 444) using the information from the sensors 482. The output of the GA2 444 is the teaching signal 443 for the FNN2 486 that is used on-line to develop the reduced control signal ya such that the quality of the reduced control signal ya is similar to the quality of the optimal control signal xa. Thus, the optimizer 440 provides stability and robustness of control (using the physical criteria), and reliability with sufficient accuracy (using the information criteria).


[0103] With off-line checking, the optimizer 440 provides correction of the control signal ya from the FNN2 486 using new information criteria. Since the information is additive, it is possible to do the online/off-line steps in sequence, or in parallel. In off-line checking, the sensor system 482 typically uses all of the sensors only for checking of the quality and correction of the control signal ya. This approach provides the desired reliability and quality of control, even when the control object 488 is unstable.


[0104] For the reduced sensor system 482 (with n sensors) the FNN2 486 preferably uses a learning and adaptation process instead of a Fuzzy Controller (FC) algorithm. FIG. 5 illustrates the similarity between a FC 501 and a FNN 540.


[0105] As shown in FIG. 5A, the structure of the FNN 540 is similar to the structure of the FC 501. FIG. 5A shows a PID controller 503 having a first input 535 to receive a desired sensor output value and a second input 536 to receive an actual sensor output value 536. The desired sensor output value is provided to a plus input of an adder 520 and the actual sensor output value 536 is provided to a minus input of the adder 520. An output of the adder 520 is provided to a proportional output 531, to an input of an integrator 521, and to an input of a differentiator 522. An output of the integrator is provided to an integral output 532, and an output of the differentiator is provided to a differentiated output 533.


[0106] The Fuzzy logic Controller (FC) 501 includes a fuzzification interface 504 (shown in FIG. 5B), a Knowledge Base (KB) 502, a fuzzy logic rules calculator 506, and a defuzzification interface 508. The PID outputs 531-533 are provided to PID inputs of the fuzzification interface 504. An output of the fuzzification interface 504 is provided to a first input of the fuzzy logic rules calculator 506. An output of the KB 502 is provided to a second input of the fuzzy logic rules calculator 506. An output of the fuzzy logic rules calculator 506 is provided to an input of the defuzzification interface 508, and an output of the defuzzification interface 508 is provided as a control output UFC of the FC 501.


[0107] The Fuzzy Neural Network (FNN) 540 includes six neuron layers 550-555. The neuron layer 550 is an input layer and has three neurons, corresponding to the three PID output signals 531-533. The neuron layer 555 is an output layer and includes a single neuron. The neuron layers 551-553 are hidden layers (e.g., not visible from either the input or the output). The neuron layer 551 has six neurons in two groups of three, where each group is associated with one of the three input neurons in the neuron layer 550. The neuron layer 552 includes eight neurons. Each neuron in the neuron layer 552 has three inputs, where each input is associated with one of the three groups from the neuron layer 551. The neuron layer 553 has eight neurons. Each neuron in the neuron layer 553 receives input from all of the neurons in the neuron layer 552. The neuron layer 554 is a visible layer having eight neurons. Each neuron in the neuron layer 554 has four inputs and receives data from one of the neurons in the neuron layer 553 (i.e., the first neuron in the neuron layer 554 receives input from the first neuron in the neuron layer 553, the second from the second, etc.). Each neuron in the neuron layer 554 also receives input from all of the PID output signals 531-533. The output neuron layer 555 is a single neuron with eight inputs, and the single neuron receives data from all of the neurons in the layer 554 (a generalized Takagi-Sugeno FC model).
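
For illustration only, the final, generalized Takagi-Sugeno stage of such a network might combine eight rule activations as sketched below; the Gaussian membership functions, random rule parameters, and input values are placeholder assumptions and do not represent the trained FNN 540.

```python
import math, random

P, I, D = 0.4, 0.1, -0.2                       # the three PID signals (layer 550 inputs)

def gaussian(x, center, width=1.0):
    """Placeholder membership function for the hidden layers."""
    return math.exp(-((x - center) / width) ** 2)

random.seed(0)
rules = [{"centers": [random.uniform(-1, 1) for _ in range(3)],
          "coeffs":  [random.uniform(-1, 1) for _ in range(4)]} for _ in range(8)]

# Hidden layers: each of the eight rules receives a firing strength from the
# memberships of (P, I, D).
strengths = [gaussian(P, r["centers"][0]) *
             gaussian(I, r["centers"][1]) *
             gaussian(D, r["centers"][2]) for r in rules]

# Output layer 555 (single neuron): weighted average of rule consequents that
# are linear in P, I, D (a Takagi-Sugeno style combination).
outputs = [r["coeffs"][0] + r["coeffs"][1]*P + r["coeffs"][2]*I + r["coeffs"][3]*D
           for r in rules]
u = sum(w * o for w, o in zip(strengths, outputs)) / sum(strengths)
print(u)
```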


[0108] The number of hidden layers in the FNN 540 typically corresponds to the number of elements in the FC 501. One distinction between the FC 501 and the FNN 540 lies in the way in which the knowledge base 502 is formed. In the FC 501, the parameters and membership functions are typically fixed and do not change during control. By contrast, the FNN 540 uses learning and adaptation processes, such that when running on-line (e.g., in normal mode) the FNN 540 changes the parameters of the membership functions in the “IF . . . THEN” portions of the production rules.


[0109] The entropy-based control system in FIGS. 4A and 4B achieves a desired control accuracy in the output signal of a reduced control system y, as related to an optimal system x, by using the statistical criteria of the accuracy given above as:




M[(x−y)2]≦ε2  (23)



[0110] The optimal output signal x has a constant dispersion σx2, and the variable output signal y has a variable dispersion σy2. For a desired control accuracy ε, the ε-entropy Hε(y) is a measure of the uncertainty in the measurement of the process y as related to the process x, where the processes x and y differ from each other in some metric by ε.


[0111] The ε-entropy Hε(y) can be calculated as
H_\varepsilon(y) = H(y) - \varepsilon\log\frac{n-1}{\varepsilon} - (1-\varepsilon)\log\frac{1}{1-\varepsilon}  (24)


[0112] where H(y) is the entropy of the process y and n is a number of point measurements in the process y. Further,
H(y) = -\sum_{k=1}^{n} p_k\log p_k  (25)


[0113] where pk is the probability that one of the output values of the process y is equal to the value of the output values of the process x.
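
For illustration only, equations (24) and (25) might be evaluated as in the following sketch; the probability values, the use of the natural logarithm, and the value of ε are arbitrary assumptions.

```python
import math

def H(p):
    """Equation (25): entropy of the point probabilities p_k."""
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

def H_eps(p, eps):
    """Equation (24): epsilon-entropy for n point measurements of the process y."""
    n = len(p)
    return (H(p) - eps * math.log((n - 1) / eps)
            - (1 - eps) * math.log(1 / (1 - eps)))

print(H_eps([0.7, 0.2, 0.1], eps=0.05))
```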


[0114] In general, the form of the ε-entropy is defined as




H_{\varepsilon}(y) = \inf_{p(x,y)} I(x,y)  (26)

[0115] with the accuracy M[(x−y)2]≦ε2.


[0116] In general, the amount of information, I(x,y), is given by
I(x,y) = M\left[\log\frac{p(x,y)}{p(x)\,p(y)}\right]  (27)


[0117] When x and y are Gaussian processes, then
I(x,y) = \frac{1}{2}\log\left(1 + \frac{1}{(2+\sigma)\sigma}\right), \quad \text{where} \quad \sigma = \frac{\sigma_y^2}{\sigma_x^2}  (28)
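
For illustration only, equation (28) might be evaluated as in the following sketch; the dispersion values are arbitrary examples.

```python
import math

def gaussian_mutual_information(var_y, var_x):
    """Equation (28): I(x,y) = 1/2 * log(1 + 1/((2 + sigma) * sigma)),
    with sigma = var_y / var_x."""
    sigma = var_y / var_x
    return 0.5 * math.log(1 + 1 / ((2 + sigma) * sigma))

print(gaussian_mutual_information(var_y=0.8, var_x=1.0))
```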


[0118] The amount of information, given by I(x,y), satisfies the inequality




I(x,y)≧Hε(y)  (29)



[0119] and, it may be seen that




M[(x−y)2]≧[εy(I(x,y))]2  (30)



[0120] where the function εy(I(x,y)) is the inverse of Hε(y):




H_{\varepsilon_y(H)}(y) = H  (31)



[0121] and thus


εy(I(x,y))≦εy(Hε(y))=M[(x−y)2]  (32)


[0122] Thus, it is possible to bound the mean-square error M[(x−y)2]≦ε2 from below by the known amount of information I(x,y).


[0123] The calculation of


\max_{\sigma_y^2} I(x,y)  (33)

[0124] determines the reliability of the measurements of the process y, as related to the optimal process x, with the accuracy


εy(I(x,y))≦M[(x−y)2]≦ε2  (34)


[0125] In equation (28), the amount of information I(x,y) in the process y as related to the process x will have a maximal value Imax when σy2→σx2. Moreover, at Imax, the probability p(x,y) of the similarity between the processes x and y is (p(y≈x)→1), and thus, the minimum value of ε2 is observed. The calculation of equation (33) for the control process y compensates for the loss of information, and minimizes the statistical risk, caused by the reduction in the number of sensors.


[0126] Engine Control


[0127] In one embodiment, the reduced sensor, entropy-based, control system is applied to an engine control system, such as, for example, an internal combustion engine, a piston engine, a diesel engine, a jet engine, a gas-turbine engine, a rocket engine, etc. FIG. 6 shows an internal combustion piston engine having four sensors: an intake air temperature sensor 602, a water temperature sensor 604, a crank angle sensor 606, and an air-fuel ratio sensor 608. The air temperature sensor 602 measures the air temperature in an intake manifold 620. A fuel injector 628 provides fuel to the air in the intake manifold 620. The intake manifold 620 provides air and fuel to a combustion chamber 622. Burning of the fuel-air mixture in the combustion chamber 622 drives a piston 628. The piston 628 is connected to a crank 626 such that movement of the piston 628 turns the crank 626. The crank angle sensor 606 measures the rotational position of the crank 626. The water temperature sensor 604 measures the temperature of water in a water jacket 630 surrounding the combustion chamber 622 and the piston 628. Exhaust gases from the combustion chamber 622 are provided to an exhaust manifold 624, and the air-fuel ratio sensor 608 measures the ratio of air to fuel in the exhaust gases.


[0128]
FIG. 7 is a block diagram showing a reduced control system 780 and an optimal control system 720. The optimal control system 720, together with an optimizer 740 and a sensor compensator 760, is used to teach the reduced control system 780. In FIG. 7, a desired signal (representing the desired engine output) is provided to an input of the optimal control system 720 and to an input of the reduced control system 780. The optimal control system 720, having five sensors, provides an optimal control signal xa and a sensor output signal xb. The reduced control system 780 provides a reduced control output signal ya and an output sensor signal yb. The signals xb and yb include data from the A/F sensor 608. The signals xb and yb are provided to first and second inputs of a subtractor 791. An output of the subtractor 791 is a signal εb, where εb=xb−yb. The signal εb is provided to a sensor input of the sensor compensator 760. The signals xa and ya are provided to first and second inputs of a subtractor 790. An output of the subtractor 790 is a signal εa, where εa=xa−ya. The signal εa is provided to a control signal input of the sensor compensator 760. A control information output of the sensor compensator 760 is provided to a control information input of the optimizer 740. A sensor information output of the sensor compensator 760 is provided to a sensor information input of the optimizer 740. A sensor signal 783 from the reduced control system 780 is also provided to an input of the optimizer 740. An output of the optimizer 740 provides a teaching signal 747 to an input of the reduced control system 780.


[0129] The output signal xb is provided by an output of a sensor system 722 having five sensors, including, the intake air temperature sensor 602, the water temperature sensor 604, the crank angle sensor 606, the pressure sensor 607, and the air-fuel ratio (A/F) sensor 608. The information from the sensor system 722 is a group of signals having optimal information content I1. In other words, the information I1 is the information from the complete set of five sensors in the sensor system 722.


[0130] The output signal xa is provided by an output of a control unit 725. The control signal xa is provided to an input of an engine 728. An output of the engine 728 is provided to an input of the sensor system 722. Information Ik from the A/F sensor 608 is provided to an online learning input of a fuzzy neural network (FNN) 726 and to an input of a first Genetic Algorithm (GA1) 727. Information Ik1 from the set of four sensors excluding the A/F sensor 608 is provided to an input of an engine model 724. An off-line tuning signal output from the algorithm GA1 727 is provided to an off-line tuning signal input of the FNN 726. A control output from the FNN 726 is a fuel injector control signal U1, which is provided to a control input of the engine 728. The signal U1 is also the signal xa. The engine model 724 and the FNN 726 together comprise the optimal control unit 725.


[0131] The sensor compensator 760 includes a multiplier 762, a multiplier 766, an information calculator 768, and an information calculator 764. The multiplier 762 and the information calculator 764 are used in online (normal) mode. The multiplier 766 and the information calculator 768 are provided for off-line checking.


[0132] The signal εa from the output of the subtractor 790 is provided to a first input of the multiplier 762 and to a second input of the multiplier 762. An output of the multiplier 762, being a signal εa², is provided to an input of the information calculator 764. The information calculator 764 computes Ha(y)≦I(xa,ya). An output of the information calculator 764 is an information criterion for accuracy and reliability, I(xa,ya)→max.


[0133] The signal εb from the output of the subtractor 791 is provided to a first input of the multiplier 766 and to a second input of the multiplier 766. An output of the multiplier 766, being a signal εb², is provided to an input of the information calculator 768. The information calculator 768 computes Hb(y)≦I(xb,yb). An output of the information calculator 768 is an information criterion for accuracy and reliability, I(xb,yb)→max.


[0134] The optimizer 740 includes a second Genetic Algorithm (GA2) 744 and a thermodynamic (entropy) model 742. The signal I(xa,ya)→max from the information calculator 764 is provided to a first input of the algorithm (GA2) 744 in the optimizer 740. An entropy signal S→min is provided from an output of the thermodynamic model 742 to a second input of the algorithm GA2 744. The signal I(xb,yb)→max from the information calculator 768 is provided to a third input of the algorithm (GA2) 744 in the optimizer 740.


[0135] The signals I(xa,ya)→max and I(xb,yb)→max provided to the first and third inputs of the algorithm (GA2) 744 are information criteria, and the entropy signal S→min provided to the second input of the algorithm GA2 744 is a physical criterion based on entropy. An output of the algorithm GA2 744 is a teaching signal for the FNN 786.
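
The optimizer 740 therefore needs a scalar fitness that rewards high information content and low entropy production. The sketch below is one possible form of such a fitness function; the additive weighting and the example candidate scores are assumptions, since the text specifies only the directions I(x,y)→max and S→min.

```python
def fitness(info_a, info_b, entropy_production, w_info=1.0, w_entropy=1.0):
    """Scalar fitness for a genetic optimizer (illustrative).

    info_a, info_b     : information criteria I(xa, ya) and I(xb, yb), to be maximized.
    entropy_production : entropy-production criterion, to be minimized.
    Higher return value = fitter candidate teaching signal.
    """
    return w_info * (info_a + info_b) - w_entropy * entropy_production

# Ranking two hypothetical candidates (info_a, info_b, entropy_production):
candidates = {"A": (0.9, 0.7, 0.2), "B": (0.8, 0.8, 0.6)}
best = max(candidates, key=lambda k: fitness(*candidates[k]))
print(best)
```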


[0136] The reduced control system 780 includes a reduced sensor system 782, an engine model 784, the FNN 786, and an engine 788. The reduced sensor system 782 includes all of the engine sensors in the sensor system 722 except the A/F sensor 608. When run in a special off-line checking mode, the sensor system 782 also includes the A/F sensor 608. The engine model 784 and the FNN 786 together comprise a reduced control unit 785. An output of the engine 788 is provided to an input of the sensor set 782. An I2 output of the sensor set 782 contains information from four engine sensors, such that I2<I1. The information I2 is provided to an input of the control object model 784, and to an input of the thermodynamic model 742. The teaching signal 747 from the algorithm GA2 744 is provided to a teaching signal input of the FNN 786. A control output from the FNN 786 is an injector control signal U2, which is also the signal ya.


[0137] Operation of the system shown in FIG. 7 is in many respects similar to the operation of the system shown in FIGS. 4A, 4B, and 5B.


[0138] The thermodynamic model 742 is built using the thermodynamic relationship between entropy production and temperature information from the water temperature sensor 604 (TW) and the air temperature sensor 602 (TA). The entropy production S(TW, TA) is calculated using the relationship

\[ S = \frac{c\left[\ln(T_W/T_A)\right]^2}{\Delta\tau - \ln(T_W/T_A)} \quad (35) \]


[0139] where Δτ is the duration of a finite process.
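
Equation (35) can be evaluated directly from the two temperature sensors that remain in the reduced sensor set. A minimal sketch (the specific heat, temperatures, and duration below are placeholder numbers):

```python
import numpy as np

def entropy_production(T_w, T_a, c, delta_tau):
    """S(T_W, T_A) per equation (35): S = c*[ln(T_W/T_A)]^2 / (delta_tau - ln(T_W/T_A))."""
    r = np.log(T_w / T_a)
    return c * r**2 / (delta_tau - r)

# Hypothetical water/air temperatures (kelvin) and finite process duration.
print(entropy_production(T_w=360.0, T_a=300.0, c=4.18, delta_tau=2.0))
```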


[0140] The engine model 784 is analyzed from a thermodynamic point of view as a steady-state model where the maximum work, which is equal to the minimum entropy production, is delivered from a finite resource fluid and a bath. Moreover, the engine model is analyzed as a dissipative, finite-time, generalization of the evolutionary Carnot problem in which the temperature driving force between two interacting subsystems varies during the constant time Δτ (a reversible cycle).


[0141] The presence of reversible cycles fixes, automatically, the first-law efficiency of each infinitesimal stage at the Carnot level
\[ \eta = 1 - \frac{T_e}{T} \quad (36) \]


[0142] where T is the instantaneous temperature of a finite resource (such as the water temperature) and Te is the temperature of the environment or an infinite bath (such as the air temperature). The unit mass of a resource releases the heat dq=−c dT, where c is the specific heat. The classical, rate- and duration-independent, function of available energy (dissipative energy) Ex follows by integration of the product

\[ \eta\,dq = -c\left(1 - \frac{T_e}{T}\right) dT \quad (37) \]


[0143] between the limits T and Te. The integration yields the well known expression
\[ W = \int_{T}^{T_e} c\left(1 - \frac{T_e}{T'}\right) dT' = \Delta h - T_e\,\Delta s \quad (38) \]


[0144] The problem becomes non-trivial in the case where a finite-rate process is considered, because in the finite-rate process, the efficiency differs from the Carnot efficiency. The finite-rate efficiency should be determined before the integration of the product −cdT can be evaluated. The integration then leads to a generalized available energy associated with the extremal release of the mechanical work in a finite time.


[0145] The local efficiency of an infinitesimal unit of the finite-rate process is
\[ \eta = \frac{W}{Q_1} \quad (39) \]


[0146] where Q1 is the cumulative heat flux flowing from the upper reservoir. A flux Q2 is the cumulative heat flux flowing to the lower reservoir. While this local first-law efficiency is still described by the Carnot formula
\[ \eta = 1 - \frac{T_2'}{T_1'} \quad (40) \]


[0147] where T2 ≤ T2′ ≤ T1′ ≤ T1, this efficiency is nonetheless lower than the efficiency of the unit working between the boundary temperatures T1 and T2=Te, as the former applies to the intermediate temperatures T1′ and T2′. By solving equation (40) along with the reversible entropy balance of the Carnot differential subsystem

\[ \frac{d\gamma_1\,(T_1 - T_1')}{T_1'} = \frac{d\gamma_2\,(T_2' - T_2)}{T_2'} \quad (41) \]


[0148] one obtains the primed temperatures as functions of the variables T1, T2=Te and η. In equation (41), the parameters γ1 and γ2 are the partial conductances, and thus link the heat sources with the working fluid of the engine at high and low temperatures. The differentials of the parameters γ1 and γ2 can be expressed as dγi=αi dAi, for i=1, 2, where the values αi are the heat transfer coefficients and the values dAi are the upper and lower exchange surface areas.


[0149] The associated driving heat flux dQ1=dγ1(T1−T1′) is then found in the form
\[ dQ_1 = d\gamma\left[T_1 - \frac{T_2}{1-\eta}\right] \quad (42) \]


[0150] from which the efficiency-power characteristic follows in the form
\[ \eta = 1 - \frac{T_2}{T_1 - dQ_1/d\gamma} \quad (43) \]


[0151] From the energy balance of the driving fluid, the heat power variable Q1 satisfies dQ1=−Gc dT, where dT is the differential temperature drop of the driving fluid, G is the finite-mass flux of the driving fluid whose constant specific heat capacity equals c. Using the above definitions, and the differential heat balance of the driving fluid, the control term dQ1/dγ from equation (43) may be written (with the subscript 1 omitted) as

\[ \frac{dQ}{d\gamma} = -\frac{dT}{d\tau} \quad (44) \]


[0152] The negative of the derivative dQ/dγ is the control variable u of the process. In short, the above equation says that u = Ṫ, or that the control variable u equals the rate of the temperature change with respect to the non-dimensional time τ. Thus, equation (43) becomes the simple, finite-rate generalization of the Carnot formula

\[ \eta = 1 - \frac{T_e}{T + \dot{T}} \quad (45) \]


[0153] When T>Te, the derivative Ṫ is negative for the engine mode. This is because the driving fluid releases the energy to the engine as part of the work production.
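
A short numerical illustration of equation (45): along a falling temperature trajectory (engine mode, Ṫ<0) the finite-rate efficiency stays below the quasi-static Carnot value 1−Te/T. The exponential trajectory below is hypothetical and chosen only to make Ṫ easy to evaluate.

```python
import numpy as np

T_e = 300.0                      # environment (air) temperature, kelvin
tau = np.linspace(0.0, 1.0, 6)   # non-dimensional time
T = 360.0 * np.exp(-0.05 * tau)  # hypothetical driving-fluid temperature, falling in engine mode
T_dot = np.gradient(T, tau)      # dT/dtau (negative here)

eta_finite_rate = 1.0 - T_e / (T + T_dot)   # equation (45)
eta_carnot = 1.0 - T_e / T                  # quasi-static limit, T_dot -> 0

for t, ef, ec in zip(tau, eta_finite_rate, eta_carnot):
    print(f"tau={t:.1f}  finite-rate eta={ef:.4f}  Carnot eta={ec:.4f}")
```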


[0154] Work Functionals for an Infinite Sequence of Infinitesimal Processes


[0155] The cumulative power delivered per unit fluid flow W/G is obtained by the integration of the product of η and dQ/G=−c dT between an arbitrary initial temperature T and a final temperature Tf of the fluid. This integration yields the specific work of the flowing fluid in the form of the functional

\[ W[T, T_f] \equiv \frac{W}{G} = -\int_{T}^{T_f} c\left(1 - \frac{T_e}{T + \dot{T}}\right)\dot{T}\, d\tau \quad (46) \]


[0156] The notation [T,Tf] means the passage of the vector T≡(T,τ) from its initial state to its final state. For the above functional, the work maximization problem can be stated for the engine mode of the process by
\[ W_{\max} = \max\left\{-\int_{T}^{T_f} L(T,\dot{T})\, d\tau\right\} = \max\left\{-\int_{T}^{T_f} c\left(1 - \frac{T_e}{T+\dot{T}}\right)\dot{T}\, d\tau\right\} \quad (47) \]


[0157] In equation (47), the function L(T,{dot over (T)}) is the Lagrangian. The above Lagrangian functional represents the total power per unit mass flux of the fluid, which is the quantity of the specific work dimension, and thus the direct relation to the specific energy of the fluid flow. In the quasi-static limit of vanishing rates, where dT/dτ=0, the above work functional represents the change of the classical energy according to equation (38).


[0158] For the engine mode of the process, the dissipative energy itself is obtained as the maximum of the functional in equation (47), with the integration limits Ti=Te and Tf=T.


[0159] An alternative form of the specific work can be written as the functional
\[ W[T, T_f] \equiv \frac{W}{G} = -\int_{T}^{T_f} c\left(1 - \frac{T_e}{T}\right) dT - T_e\int_{T}^{T_f} \frac{c\,\dot{T}^2}{T(T+\dot{T})}\, d\tau = -\int_{T}^{T_f} c\left(1 - \frac{T_e}{T}\right) dT - T_e S \quad (48) \]


[0160] in which the first term is the classical “reversible” term and the second term is the product of the equilibrium temperature and the entropy production S, where
\[ S = \int_{T}^{T_f} \frac{c\,\dot{T}^2}{T(T+\dot{T})}\, d\tau \quad (49) \]


[0161] The entropy generation rate is referred here to the unit mass flux of the driving fluid, and thus to the specific entropy dimension of the quantity S. The quantity S is distinguished from the specific entropy of the driving fluid, s, in equation (38). The entropy generation S is a quadratic function of the process rate, u=dT/dτ, only in the case when no work is produced, corresponding to the vanishing efficiency η=0 or the equality T+Ṫ=Te. For the active heat transfer (non-vanishing η) case, the entropy production appears to be a non-quadratic function of rates, represented by the integrand of equation (48). Applying the maximum operation for the functional in equation (48) at the fixed temperatures and times, it is seen that the role of the first term (i.e., the potential term) is inessential, and the problem of the maximum released work is equivalent to the associated problem of the minimum entropy production. This confirms the role of the entropy generation minimization in the context of the extremum work problem. The consequence of the conclusion is that a problem of the extremal work and an associated fixed-end problem of the minimum entropy generation have the same solutions.
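
The split in equation (48) can be checked numerically: discretize a temperature path, evaluate the reversible term and the entropy production S of equation (49), and confirm that the two bookkeepings of the specific work agree. The path and constants in this sketch are placeholders.

```python
import numpy as np

c, T_e = 1.0, 300.0
tau = np.linspace(0.0, 1.0, 2001)
T = 360.0 * (300.0 / 360.0) ** tau          # hypothetical path from 360 K toward 300 K
u = np.gradient(T, tau)                     # dT/dtau

# Specific work from the rate-dependent integrand of equation (46)/(48).
w_direct = -np.trapz(c * (1.0 - T_e / (T + u)) * u, tau)

# Reversible term minus T_e times the entropy production S of equation (49).
w_reversible = -np.trapz(c * (1.0 - T_e / T) * u, tau)
S = np.trapz(c * u**2 / (T * (T + u)), tau)

print(w_direct, w_reversible - T_e * S)     # the two evaluations should agree closely
```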


[0162] Hamilton-Jacobi Approach to Minimum Entropy Generation


[0163] The work extremization problems can be broken down to the variational calculus for the Lagrangian
\[ L = c\left(1 - \frac{T_e}{T + \dot{T}}\right)\dot{T} \quad (50) \]


[0164] The Euler-Lagrange equations for the problem of extremal work and the minimum entropy production lead to a second-order differential equation

\[ T\ddot{T} - \dot{T}^2 = 0 \quad (51) \]


[0165] which characterizes the optimal trajectories of all considered processes. A first integral of the above equation provides a formal analog of the mechanical energy as
\[ E \equiv \dot{T}\,\frac{\partial L}{\partial \dot{T}} - L = \frac{c\,T_e\,\dot{T}^2}{(T+\dot{T})^2} \quad (52) \]


[0166] An equation for the optimal temperature flow from the condition E=h in equation (38) is given by
\[ \dot{T} = \frac{\pm\sqrt{h/(cT_e)}}{1 \mp \sqrt{h/(cT_e)}}\,T = \xi T, \qquad \text{where } \xi = \frac{\ln(T_f/T_i)}{\tau_f - \tau_i} \quad (53) \]


[0167] The power of dynamic programming (DP) methods when applied to problems of this sort lies in the fact that, regardless of local constraints, the optimal performance functions satisfy an equation of the Hamilton-Jacobi-Bellman type (the HJB equation) with the same state variables as those for the unconstrained problem. Only the numerical values of the optimizing control sets and the optimal performance functions differ in constrained and unconstrained cases.


[0168] The engine problem may be correctly described by a backward HJB-equation. The backward HJB-equation is associated with the optimal work or energy as an optimal integral function I defined on the initial states (i.e., temperatures), and, accordingly, refers to the engine mode or process approaching an equilibrium. The function I is often called the optimal performance function for the work integral.


[0169] The maximum work delivery (whether constrained or unconstrained) is governed by the characteristic function
\[ I(\tau_f, T_f, \tau_i, T_i) \equiv \max W[T_i, T_f] = \max\left\{\int_{\tau_i}^{\tau_f}\left[-c\left(1 - \frac{T_e}{T+u}\right)u\right] d\tau\right\} \quad (54) \]


[0170] The quantity I in equation (54) describes the extremal value of the work integral W(Ti,Tf). It characterizes the extremal value of the work released for the prescribed temperatures Ti and Tf when the total process duration is τƒ−τi. While the knowledge of the characteristic function I only is sufficient for a description of the extremal properties of the problem, other functions of this sort are nonetheless useful for characterization of the problem.
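
Equation (54) can be approximated by a small backward dynamic-programming recursion over a discretized temperature and time grid: at each stage the control u=ΔT/Δτ is chosen to maximize the work increment f0Δτ plus the value already computed for the next stage. The grid sizes, temperature range, and constants below are assumptions, so this is only a crude numeric sketch of the idea, not the optimizer described in the text.

```python
import numpy as np

c, T_e = 1.0, 300.0
T_grid = np.linspace(300.0, 360.0, 61)      # discretized driving-fluid temperature
n_steps, d_tau = 20, 0.05                   # total duration tau_f - tau_i = 1.0

# I[k, j]: best work obtainable from time step k and temperature T_grid[j] onward.
I = np.zeros((n_steps + 1, len(T_grid)))
for k in range(n_steps - 1, -1, -1):
    for j, T in enumerate(T_grid):
        best = -np.inf
        for T_next in T_grid:
            u = (T_next - T) / d_tau                 # control u = temperature rate
            if T + u <= 0.0:                         # keep the integrand physical
                continue
            f0 = -c * (1.0 - T_e / (T + u)) * u      # work production rate, equation (55)
            j_next = int(np.argmin(np.abs(T_grid - T_next)))
            best = max(best, f0 * d_tau + I[k + 1, j_next])
        I[k, j] = best

print(I[0, -1])   # approximate maximum work starting from the hottest grid point
```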


[0171] Here the problem is transformed into the equivalent problem in which one seeks the maximum of the final work coordinate x0f = Wf for the system described by the following set of differential equations

\[ \frac{dW}{d\tau} = -c\left(1 - \frac{T_e}{u+T}\right)u \equiv f_0(T,u), \qquad \frac{dT}{d\tau} = u \equiv f_1(T,u), \qquad \frac{dx_2}{d\tau} = 1 \equiv f_2(T,u) \quad (55) \]


[0172] The state of the system described by equations (55) is described by the enlarged state vector X=(x0, x1, x2) which is composed of the three state coordinates x0=W, x1=T, and x2=τ. At this point, it is convenient to introduce a set of optimal performance functions Θi and Θf corresponding to the initial and final work coordinates, respectively. The function Θi works in a space of one dimension larger than I, involves the work coordinate x0=W, and is defined as

\[ \max_{u} W_f \equiv \Theta_i(W_i, \tau_i, T_i, \tau_f, T_f) = W_i + I(\tau_i, T_i, \tau_f, T_f) \quad (56) \]

[0173] A function V is also conveniently defined in the enlarged space of variables as

\[ V = W_f - \Theta_i(W_i, \tau_i, T_i, \tau_f, T_f) = W_f - W_i - I(\tau_i, T_i, \tau_f, T_f) \quad (57) \]



[0174] Two mutually equal maxima, at constant Wi and at constant Wf, are described by the extremal functions

\[ V_i(W_i, \tau_i, T_i, \tau_f, T_f) = V_f(\tau_i, T_i, W_f, \tau_f, T_f) \equiv 0 \quad (58) \]



[0175] which vanish identically along all optimal paths. These functions can be written in terms of a wave-function V as follows:

\[ \max V = \max\left\{W_f - W_i - I(\tau_f, T_f, \tau_i, T_i)\right\} = 0 \quad (59) \]



[0176] The equation for the integral work function I in the narrow space of the coordinates (T,τ), which does not involve the coordinate W, follows immediately from the condition V=0. (Development of the backward DP algorithm and equations for Θi(Xi) are provided later in the discussion that follows.)


[0177] Solution of the HJB equations begins by noting that the system of three state equations in equation (55) has the state variables x0=W, x1=T and x2=τ. The state equations from equation (55) can be written in the general form

\[ \frac{dx_\beta}{d\tau} = f_\beta(x, u), \qquad \beta = 0, 1, 2 \quad (60) \]


[0178] For small Δτ, from equation (60) it follows that


\[ \Delta x_\beta = f_\beta(x, u)\,\Delta\tau + O(\varepsilon^2) \quad (61) \]


[0179] In equation (61), the symbol O(ε²) denotes second-order and higher terms. The second-order and higher terms possess the property that

\[ \lim_{\Delta\tau \to 0} \frac{O(\varepsilon^2)}{\Delta\tau} \to 0 \quad (62) \]


[0180] The optimal final work for the whole path in the interval [τi, τf] is the maximum of the criterion

\[ W_f \equiv \Theta_i(x_i + \Delta x) = \Theta_i(W_i + \Delta W,\; T_i + \Delta T,\; \tau_i + \Delta\tau) \quad (63) \]



[0181] Expanding equation (63) in a Taylor series yields
\[ W_f = \Theta_i(X_i) + \frac{\partial\Theta_i}{\partial X_\beta^i}\,\Delta X_\beta + O(\varepsilon^2) = \Theta_i(W_i, T_i, \tau_i) + \frac{\partial\Theta_i}{\partial W_i}\Delta W + \frac{\partial\Theta_i}{\partial T_i}\Delta T + \frac{\partial\Theta_i}{\partial \tau_i}\Delta\tau + O(\varepsilon^2) \quad (64) \]


[0182] Substituting equation (61) into equation (64) and performing an appropriate extremization in accordance with Bellman's principle of optimality yields, for the variation of the initial point,

\[ \max_u W_f = \max_u\left\{\Theta_i(X_i) + \frac{\partial\Theta_i}{\partial X_\beta^i}\,f_\beta(X, u)\,\Delta\tau + O(\varepsilon^2)\right\} \quad (65) \]


[0183] After reduction of Θi, and the division of both sides of equation (65) by Δτ, the passage to the limit Δτ→0 subject to the condition
\[ \lim_{\Delta\tau \to 0} \frac{O(\varepsilon^2)}{\Delta\tau} \to 0 \quad (66) \]


[0184] yields the backward HJB equation of the optimal control problem. For the initial point of the extremal path, the backward DP equation is
\[ \max_u\left\{\frac{\partial\Theta_i}{\partial X_\beta^i}\,f_\beta(X,u)\right\} = \max_u\left\{\frac{\partial\Theta_i}{\partial W_i}\dot{W}_i(T_i,u_i) + \frac{\partial\Theta_i}{\partial T_i}\dot{T}_i(T_i,u_i) + \frac{\partial\Theta_i}{\partial \tau_i}\right\} = -\min_u\left\{\frac{\partial V}{\partial \tau_i}\right\} = \max_u\left\{\frac{\partial V}{\partial(-\tau_i)}\right\} = 0 \quad (67) \]


[0185] The properties of V=Wf−Θi have been used in the second line of the above equation.


[0186] The partial derivative of V with respect to the independent variable τ can remain outside of the bracket of this equation as well. Using the relationships
\[ \frac{\partial V}{\partial W_i} = -\frac{\partial\Theta_i}{\partial W_i} = -1 \quad (68) \]


[0187] and Ẇ = f0 = −L (see, e.g., equations (47) and (50)), leads to the extremal work function

\[ \frac{\partial V}{\partial \tau_i} + \min_u\left\{\frac{\partial V}{\partial T_i}u_i + L_i(T_i, u_i)\right\} = 0 \qquad (\max W_f) \quad (69) \]


[0188] In terms of the integral function of optimal work, I=Wƒ−Wi−V, equation (69) becomes
\[ \frac{\partial I}{\partial \tau_i} + \min_u\left\{\frac{\partial I}{\partial T_i}u_i + f_0(T_i, u_i)\right\} = 0 \quad (70) \]


[0189] As long as the optimal control u is found in terms of the state, time, and gradient components of the extremal performance function I, the passage from the quasi-linear HJB equation to the corresponding nonlinear Hamilton-Jacobi equation is possible.


[0190] The maximization of equation (70) with respect to the rate u leads to two equations, the first of which describes the optimal control u expressed through the variables T and z=∂I/∂T as follows:
\[ \frac{\partial I}{\partial T} = -\frac{\partial f_0(T,u)}{\partial u} \quad (71) \]


[0191] and the second is the original equation (70) without the extremization sign
\[ \frac{\partial I}{\partial \tau} + \frac{\partial I}{\partial T}\,u + f_0(T,u) = 0 \quad (72) \]


[0192] In the preceding two equations, the index i is omitted. Using the momentum-type variable z=∂I/∂T, and using equation (71) written in the form

\[ z = -\frac{\partial f_0(T,u)}{\partial u} = \frac{\partial L(T,u)}{\partial u} \quad (73) \]


[0193] leads to the energy-type Hamiltonian of the extremal process as

\[ H = z\,u(z,T) + f_0(z,T) \quad (74) \]



[0194] Using the energy-type Hamiltonian and equation (72) leads to the Hamilton-Jacobi equation for the integral I as

\[ \frac{\partial I}{\partial \tau} + H\!\left(T, \frac{\partial I}{\partial T}\right) = 0 \quad (75) \]


[0195] Equation (75) differs from the HJB equation in that equation (75) refers to an extremal path only, and H is the extremal Hamiltonian. The above formulas are applied to the concrete Lagrangian L=−ƒ0, where ƒ0 is the intensity of the mechanical work production.


[0196] The basic integral W[Ti,Tƒ] written in the form of equation (47)
\[ W_{\max} = \max\left\{-\int_{T}^{T_f} L(T,\dot{T})\, d\tau\right\} = \max\left\{-\int_{T}^{T_f} c\left(1 - \frac{T_e}{T+\dot{T}}\right)\dot{T}\, d\tau\right\} \quad (76) \]


[0197] whose extremal value is the function I(Ti, τi, Tf, τf). The momentum-like variable (equal to the temperature adjoint) is then

\[ z \equiv -\frac{\partial f_0}{\partial u} = c\left(1 - \frac{T_e T}{(T+u)^2}\right) \quad (77) \]


[0198] and
\[ u = \left(\frac{T_e T}{1 - z/c}\right)^{1/2} - T \quad (78) \]


[0199] The Hamilton-Jacobi partial differential equation for the maximum work problem (engine mode of the system) deals with the initial coordinates, and has the form
\[ \frac{\partial I}{\partial \tau} + H\!\left(T, \frac{\partial I}{\partial T}\right) = 0, \qquad H\!\left(T, \frac{\partial I}{\partial T}\right) = c\left[\sqrt{T_e} - \sqrt{T\left(1 - \frac{1}{c}\frac{\partial I}{\partial T}\right)}\right]^2 \quad (79) \]


[0200] The variational front-end problem for the maximum work W is equivalent to the variational fixed-end problem of the minimum entropy production.
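
The closed form in equation (79) can be spot-checked numerically: pick a temperature and a rate u, evaluate z from equation (77), and compare zu+f0 (equation (74)) against c[√Te−√(T(1−z/c))]². The numbers below are hypothetical.

```python
import numpy as np

c, T_e = 1.0, 300.0
T, u = 350.0, -10.0                                  # hypothetical state and control

z = c * (1.0 - T_e * T / (T + u) ** 2)               # equation (77)
f0 = -c * (1.0 - T_e / (T + u)) * u                  # work production rate f0(T, u)
H_direct = z * u + f0                                # equation (74)
H_closed = c * (np.sqrt(T_e) - np.sqrt(T * (1.0 - z / c))) ** 2   # equation (79)

print(H_direct, H_closed)                            # the two values should agree
```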


[0201] Hamilton-Jacobi Approach to Minimum Entropy Production


[0202] The specific entropy production is described by the functional from equation (49) as
\[ S_\sigma = \int_{0}^{\tau_f} L_\sigma\, d\tau = \int_{0}^{\tau_f} \frac{c\,u^2}{T(T+u)}\, d\tau \quad (80) \]


[0203] The minimum of the functional can be described by the optimal function Iσ(Tii,Tƒ, τƒ) and the Hamilton-Jacobi equation is then
\[ \frac{\partial I_\sigma}{\partial \tau} + H_\sigma = 0, \qquad H_\sigma = c\left[1 - \sqrt{1 - \frac{T}{c}\frac{\partial I_\sigma}{\partial T}}\right]^2 \quad (81) \]


[0204] The two functionals (that of the work and that of the entropy generation) yield the same extremal.


[0205] Considering the Hamilton-Jacobi equation for the engine, integration of equation (54) along an extremal path leads to a function which describes the optimal specific work

\[ I(T_i, T_f, \tau_i, \tau_f) = c(T_i - T_f) - \frac{c\,T_e}{1+\xi}\,\ln\frac{T_i}{T_f} \quad (82) \]


[0206] where the parameter ξ is defined in equation (53). The extremal specific work between two arbitrary states follows in the form
\[ I(T_i, T_f, \tau_i, \tau_f) = c(T_i - T_f) - c\,T_e\ln\frac{T_i}{T_f} - \frac{c\,T_e\left[\ln(T_i/T_f)\right]^2}{\tau_f - \tau_i - \ln(T_i/T_f)} \quad (83) \]


[0207] From equation (83), with Ti=TW and Tf=TA, the minimal integral of the entropy production is

\[ S_\sigma = \frac{c\left[\ln(T_W/T_A)\right]^2}{\Delta\tau - \ln(T_W/T_A)}, \qquad \text{where } \Delta\tau = \tau_f - \tau_i \quad (84) \]


[0208] The function from equation (83) satisfies the backward Hamilton-Jacobi equation, which is equation (79), and the function (84) satisfies the Hamilton-Jacobi equation (81).


[0209] The Lyapunov function 𝔉 for the system of equation (60) can be defined as

\[ \mathfrak{F} = \frac{1}{2}\sum_{\beta=1}^{3} x_\beta^2 + \frac{1}{2}S^2 \quad (85) \]

and

\[ \frac{d\mathfrak{F}}{d\tau} = \sum_{\beta=1}^{3} x_\beta f_\beta + (S_u - S_c)\left(\frac{dS_u}{d\tau} - \frac{dS_c}{d\tau}\right) \quad (86) \]


[0210] The relation (86) is the necessary and sufficient condition for stability and robustness of control with minimum entropy in the control object and the control system.
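
A minimal sketch of how the stability check in equations (85) and (86) can be evaluated at one time instant. The state derivatives fβ and the entropy signals Su, Sc would come from the plant model and the thermodynamic model 742; the values below are placeholders, and S in equation (85) is taken as the difference Su−Sc, consistent with equation (86).

```python
import numpy as np

def lyapunov_and_rate(x, f, S_u, S_c, dS_u, dS_c):
    """Candidate Lyapunov function (85) and its rate (86) at one time instant."""
    V = 0.5 * np.dot(x, x) + 0.5 * (S_u - S_c) ** 2
    V_dot = np.dot(x, f) + (S_u - S_c) * (dS_u - dS_c)
    return V, V_dot

# Hypothetical snapshot of the enlarged state and the entropy signals.
V, V_dot = lyapunov_and_rate(x=np.array([0.1, -0.2, 0.05]),
                             f=np.array([-0.3, 0.4, -0.1]),
                             S_u=1.2, S_c=1.0, dS_u=-0.05, dS_c=0.02)
print(V, V_dot)   # V_dot <= 0 indicates the minimum-entropy stability condition holds here
```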


[0211] Suspension Control


[0212] In one embodiment, the reduced control system of FIGS. 4A and 4B is applied to a suspension control system, such as, for example, in an automobile, truck, tank, motorcycle, etc.


[0213]
FIG. 8 is a schematic diagram of one half of an automobile suspension system. In FIG. 8, a right wheel 802 is connected to a right axle 803. A spring 804 controls the angle of the axle 803 with respect to a body 801. A left wheel 812 is connected to a left axle 813, and a spring 814 controls the angle of the axle 813. A torsion bar 820 controls the angle of the left axle 813 with respect to the right axle 803.


[0214]
FIG. 9 is a block diagram of a control system for controlling the automobile suspension system shown in FIG. 8. The block diagram of FIG. 9 is similar to the diagram of FIG. 4B, showing an optimal control system 920 and a reduced control system 980.


[0215] The optimal control system 920 includes an optimal sensor system 922 that provides information I1 to an input of a control object model 924. An output of the control object model 924 is provided to a first genetic analyzer GA1 927, and the GA1 927 provides an off-line tuning signal to a first fuzzy neural network FNN1 926. An output from the FNN1 926, and an output from the optimal sensor system 922, are provided to a control unit 925. An output from the control unit 925 is provided to a control object 928. The control object 928 is the automobile suspension and body shown in FIG. 8.


[0216] The reduced control system 980 includes a reduced sensor system 982 that provides reduced sensor information I2 to an input of a control unit 985. The control unit 985 is configured by a second fuzzy neural network FNN2 986. An output from the control unit 985 is provided to a control object 988. The control object 988 is also the automobile suspension and body shown in FIG. 8.


[0217] The FNN2 986 is tuned by a second genetic analyzer GA2 that (like the GA2 444 in FIG. 4) maximizes an information signal I(x,y) and minimizes an entropy signal S. The information signal I(x,y) is computed from a difference between the control and sensor signals produced by the optimal control system 920 and the reduced control system 980. The entropy signal S is computed from a mathematical model of the control object 988.


[0218] The optimal sensor system 922 includes a pitch angle sensor, a roll angle sensor, four position sensors, and four angle sensors as described below.


[0219] The reduced sensor system 982 includes a single sensor, a vertical accelerometer placed near the center of the vehicle body 801.


[0220] A Half Car Coordinate Transformation


[0221] 1. Description of transformation matrices


[0222] 1.1 Global reference coordinate xr, yr, zr {r} is assumed to be at the pivot center Pr of a rear axle.


[0223] The following are the transformation matrices to describe the local coordinates for:


[0224] {2} the gravity center of the body.


[0225] {7} the gravity center of the suspension.


[0226] {10} the gravity center of the arm.


[0227] {12} the gravity center of the wheel.


[0228] {13} the touch point of the wheel to the road.


[0229] {14} the stabilizer linkage point.


[0230] 1.2 Transformation matrices.


[0231] Rotating {r} along yr with angle β makes a local coordinate system x0r, y0r, z0r, {0r} with a transformation matrix r0rT.
\[ {}^{r}_{0r}T = \begin{bmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (87) \]


[0232] Transferring {0r} through the vector (a1, 0, 0) makes a local coordinate system x0f, y0f, z0f {0f} with a transformation matrix 0r0fT.
\[ {}^{0r}_{0f}T = \begin{bmatrix} 1 & 0 & 0 & a_1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (88) \]


[0233] The above procedure is repeated to create other local coordinate systems with the following transformation matrices.
\[ {}^{0f}_{1f}T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (89) \qquad {}^{1f}_{2}T = \begin{bmatrix} 1 & 0 & 0 & a_0 \\ 0 & 1 & 0 & b_0 \\ 0 & 0 & 1 & c_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (90) \]


[0234] 1.3 Coordinates for the front wheels 802, 812 (index n:i for the left, ii for the right) are generated as follows.


[0235] Transferring {1f} through the vector (0, b2n, 0) makes local coordinate system x3n, y3n, z3n {3n} with transformation matrix 1f3nT.
\[ {}^{1f}_{3n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&b_{2n}\\0&0&1&0\\0&0&0&1 \end{bmatrix} \quad (91) \qquad {}^{3n}_{4n}T = \begin{bmatrix} 1&0&0&0\\0&\cos\gamma_n&-\sin\gamma_n&0\\0&\sin\gamma_n&\cos\gamma_n&0\\0&0&0&1 \end{bmatrix} \quad (92) \]

\[ {}^{4n}_{5n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&c_{1n}\\0&0&0&1 \end{bmatrix} \quad (93) \qquad {}^{5n}_{6n}T = \begin{bmatrix} 1&0&0&0\\0&\cos\eta_n&-\sin\eta_n&0\\0&\sin\eta_n&\cos\eta_n&0\\0&0&0&1 \end{bmatrix} \quad (94) \]

\[ {}^{6n}_{7n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&z_{6n}\\0&0&0&1 \end{bmatrix} \quad (95) \qquad {}^{4n}_{8n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&c_{2n}\\0&0&0&1 \end{bmatrix} \quad (96) \]

\[ {}^{8n}_{9n}T = \begin{bmatrix} 1&0&0&0\\0&\cos\theta_n&-\sin\theta_n&0\\0&\sin\theta_n&\cos\theta_n&0\\0&0&0&1 \end{bmatrix} \quad (97) \qquad {}^{9n}_{10n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&e_{1n}\\0&0&1&0\\0&0&0&1 \end{bmatrix} \quad (98) \]

\[ {}^{9n}_{11n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&e_{3n}\\0&0&1&0\\0&0&0&1 \end{bmatrix} \quad (99) \qquad {}^{11n}_{12n}T = \begin{bmatrix} 1&0&0&0\\0&\cos\zeta_n&-\sin\zeta_n&0\\0&\sin\zeta_n&\cos\zeta_n&0\\0&0&0&1 \end{bmatrix} \quad (100) \]

\[ {}^{12n}_{13n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&z_{12n}\\0&0&0&1 \end{bmatrix} \quad (101) \qquad {}^{9n}_{14n}T = \begin{bmatrix} 1&0&0&0\\0&1&0&e_{0n}\\0&0&1&0\\0&0&0&1 \end{bmatrix} \quad (102) \]
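
The elementary matrices (91)-(102) are all x-axis rotations, y-axis rotations, or pure translations, so composing local frames is a matter of products of homogeneous transformation matrices. The helper sketch below follows that pattern; the angle and offset values are placeholders.

```python
import numpy as np

def rot_x(phi):
    """Homogeneous rotation about the local x axis (pattern of equations (89), (92), ...)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]], dtype=float)

def rot_y(phi):
    """Homogeneous rotation about the local y axis (pattern of equation (87))."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]], dtype=float)

def trans(a, b, c):
    """Homogeneous translation by the vector (a, b, c) (pattern of equations (88), (90), ...)."""
    T = np.eye(4)
    T[:3, 3] = [a, b, c]
    return T

# Example: chain {r} -> {0r} -> {0f} -> {1f}, i.e. the product of (87), (88) and (89),
# with placeholder angle and offset values.
beta, alpha, a1 = 0.02, 0.01, 1.3
T_r_1f = rot_y(beta) @ trans(a1, 0, 0) @ rot_x(alpha)
print(T_r_1f @ np.array([0.0, 0.0, 0.0, 1.0]))   # origin of {1f} expressed in {r}
```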


[0236] 1.4 Some matrices are sub-assembled to make the calculation simpler.
\[ {}^{r}_{1f}T = {}^{r}_{0r}T\,{}^{0r}_{0f}T\,{}^{0f}_{1f}T = \begin{bmatrix} \cos\beta & \sin\beta\sin\alpha & \sin\beta\cos\alpha & a_1\cos\beta \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha & -a_1\sin\beta \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (103) \]

\[ {}^{r}_{4n}T = {}^{r}_{1f}T\,{}^{1f}_{3n}T\,{}^{3n}_{4n}T = \begin{bmatrix} \cos\beta & \sin\beta\sin(\alpha+\gamma_n) & \sin\beta\cos(\alpha+\gamma_n) & b_{2n}\sin\beta\sin\alpha + a_1\cos\beta \\ 0 & \cos(\alpha+\gamma_n) & -\sin(\alpha+\gamma_n) & b_{2n}\cos\alpha \\ -\sin\beta & \cos\beta\sin(\alpha+\gamma_n) & \cos\beta\cos(\alpha+\gamma_n) & b_{2n}\cos\beta\sin\alpha - a_1\sin\beta \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (104) \]

\[ {}^{4n}_{7n}T = {}^{4n}_{5n}T\,{}^{5n}_{6n}T\,{}^{6n}_{7n}T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\eta_n & -\sin\eta_n & -z_{6n}\sin\eta_n \\ 0 & \sin\eta_n & \cos\eta_n & c_{1n} + z_{6n}\cos\eta_n \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (105) \]

\[ {}^{4n}_{10n}T = {}^{4n}_{8n}T\,{}^{8n}_{9n}T\,{}^{9n}_{10n}T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta_n & -\sin\theta_n & e_{1n}\cos\theta_n \\ 0 & \sin\theta_n & \cos\theta_n & c_{2n} + e_{1n}\sin\theta_n \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (106) \]

\[ {}^{4n}_{12n}T = {}^{4n}_{8n}T\,{}^{8n}_{9n}T\,{}^{9n}_{11n}T\,{}^{11n}_{12n}T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos(\theta_n+\zeta_n) & -\sin(\theta_n+\zeta_n) & e_{3n}\cos\theta_n \\ 0 & \sin(\theta_n+\zeta_n) & \cos(\theta_n+\zeta_n) & c_{2n} + e_{3n}\sin\theta_n \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (107) \]


[0237] 2. Description of all the parts of the model both in local coordinate systems and relations to the coordinate {r} or {1f}.


[0238] 2.1 Description in local coordinate systems.
\[ P^{2}_{\mathrm{body}} = P^{7n}_{\mathrm{susp},n} = P^{10n}_{\mathrm{arm},n} = P^{12n}_{\mathrm{wheel},n} = P^{13n}_{\mathrm{touchpoint},n} = P^{14n}_{\mathrm{stab},n} = \begin{bmatrix} 0\\0\\0\\1 \end{bmatrix} \quad (108) \]


[0239] 2.2 Description in global reference coordinate system {r}.
\[ P^{r}_{\mathrm{body}} = {}^{r}_{1f}T\,{}^{1f}_{2}T\,P^{2}_{\mathrm{body}} = \begin{bmatrix} a_0\cos\beta + b_0\sin\beta\sin\alpha + c_0\sin\beta\cos\alpha + a_1\cos\beta \\ b_0\cos\alpha - c_0\sin\alpha \\ -a_0\sin\beta + b_0\cos\beta\sin\alpha + c_0\cos\beta\cos\alpha - a_1\sin\beta \\ 1 \end{bmatrix} \quad (109) \]

\[ P^{r}_{\mathrm{susp},n} = {}^{r}_{4n}T\,{}^{4n}_{7n}T\,P^{7n}_{\mathrm{susp},n} = \begin{bmatrix} \{z_{6n}\cos(\alpha+\gamma_n+\eta_n) + c_{1n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_1\cos\beta \\ -z_{6n}\sin(\alpha+\gamma_n+\eta_n) - c_{1n}\sin(\alpha+\gamma_n) + b_{2n}\cos\alpha \\ \{z_{6n}\cos(\alpha+\gamma_n+\eta_n) + c_{1n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - a_1\sin\beta \\ 1 \end{bmatrix} \quad (110) \]

\[ P^{r}_{\mathrm{arm},n} = {}^{r}_{4n}T\,{}^{4n}_{10n}T\,P^{10n}_{\mathrm{arm},n} = \begin{bmatrix} \{e_{1n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_1\cos\beta \\ e_{1n}\cos(\alpha+\gamma_n+\theta_n) - c_{2n}\sin(\alpha+\gamma_n) + b_{2n}\cos\alpha \\ \{e_{1n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - a_1\sin\beta \\ 1 \end{bmatrix} \quad (111) \]

\[ P^{r}_{\mathrm{wheel},n} = {}^{r}_{4n}T\,{}^{4n}_{12n}T\,P^{12n}_{\mathrm{wheel},n} = \begin{bmatrix} \{e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_1\cos\beta \\ e_{3n}\cos(\alpha+\gamma_n+\theta_n) - c_{2n}\sin(\alpha+\gamma_n) + b_{2n}\cos\alpha \\ \{e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - a_1\sin\beta \\ 1 \end{bmatrix} \quad (112) \]

\[ P^{r}_{\mathrm{touchpoint},n} = {}^{r}_{4n}T\,{}^{4n}_{12n}T\,{}^{12n}_{13n}T\,P^{13n}_{\mathrm{touchpoint},n} = \begin{bmatrix} \{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_1\cos\beta \\ -z_{12n}\sin\alpha + e_{3n}\cos(\alpha+\gamma_n+\theta_n) - c_{2n}\sin(\alpha+\gamma_n) + b_{2n}\cos\alpha \\ \{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - a_1\sin\beta \\ 1 \end{bmatrix} \quad (113) \]


[0240] where ζn is substituted by

ζn = −γn − θn

[0241] because of the link mechanism that supports the wheel in this geometric relation.


[0242] 2.3 Description of the stabilizer linkage point in local coordinate system {1f}.


[0243] The stabilizer works as a spring in which force is proportional to the difference of displacement between both arms in a local coordinate system {1f} fixed to the body.
\[ P^{1f}_{\mathrm{stab},n} = {}^{1f}_{3n}T\,{}^{3n}_{4n}T\,{}^{4n}_{8n}T\,{}^{8n}_{9n}T\,{}^{9n}_{14n}T\,P^{14n}_{\mathrm{stab},n} = \begin{bmatrix} 0 \\ e_{0n}\cos(\gamma_n+\theta_n) - c_{2n}\sin\gamma_n + b_{2n} \\ e_{0n}\sin(\gamma_n+\theta_n) + c_{2n}\cos\gamma_n \\ 1 \end{bmatrix} \quad (114) \]


[0244] 3. Kinetic energy, potential energy and dissipative functions for the <Body>, <Suspension>, <Arm>, <Wheel> and <Stabilizer>.


[0245] Kinetic energy and potential energy except by springs are calculated based on the displacement referred to the inertial global coordinate {r}. Potential energy by springs and dissipative functions are calculated based on the movement in each local coordinate.


[0246] <Body>
\[ T^{tr}_{b} = \frac{1}{2} m_b\left(\dot{x}_b^2 + \dot{y}_b^2 + \dot{z}_b^2\right) \quad (115) \]


[0247] where


[0248]
\[ x_b = (a_0 + a_1)\cos\beta + (b_0\sin\alpha + c_0\cos\alpha)\sin\beta \]

[0249]
\[ y_b = b_0\cos\alpha - c_0\sin\alpha \]

[0250]
\[ z_b = -(a_0 + a_1)\sin\beta + (b_0\sin\alpha + c_0\cos\alpha)\cos\beta \quad (116) \]


[0251] and
For the generalized coordinates q_{j,k} = β, α:

\[ \frac{\partial x_b}{\partial\beta} = -(a_0+a_1)\sin\beta + (b_0\sin\alpha + c_0\cos\alpha)\cos\beta, \qquad \frac{\partial x_b}{\partial\alpha} = (b_0\cos\alpha - c_0\sin\alpha)\sin\beta \]
\[ \frac{\partial y_b}{\partial\beta} = 0, \qquad \frac{\partial y_b}{\partial\alpha} = -b_0\sin\alpha - c_0\cos\alpha \]
\[ \frac{\partial z_b}{\partial\beta} = -(a_0+a_1)\cos\beta - (b_0\sin\alpha + c_0\cos\alpha)\sin\beta, \qquad \frac{\partial z_b}{\partial\alpha} = (b_0\cos\alpha - c_0\sin\alpha)\cos\beta \quad (117) \]


[0252] and thus
75Tbtr=12mb(x.b2+y.b2+z.b2)=12mbj,k(xbqjxbqkq.jq.k+ybqjybqkq.jq.k+zbqjzqkq.jq.k)=12mbβ.2{-(a0+a1)sinβ+(b0sinα+c0cosα)cosβ}2+α.2{(b0cosα-c0sinα)sinβ}2+α.2(-b0sinα-c0cosα)2+β.2{-(a0+a1)cosβ-(b0sinα+((c0cosα)sinβ}))2+α.2{(b0cosα-c0sinα)cosβ}2+2α.β[{-(a0+a1)sinβ+(b0sinα+c0cosα)cosβ}(b0cosα-c0sinα)sinβ+{-(a0+a1)cosβ-(b0sinα+c0cosα)sinβ}(b0cosα-c0sinα)cosβ]=12mbα.2(b02+c02)+β.2{(a0+a1)2+(b0sinα+c0cosα)2}-2α.β.(a0+a1)(b0cosα-c0sinα)(118)Tbr0=12(Ibxωbx2+Ibyωby2+Ibzωbz2)whereωbx=α.ωby=β.ωbz=0Tbro=12(Ibxα.2+Ibyβ.2)Ub=mbgzb=mbg{-(a0+a1)sinβ+(b0sinα+c0cosα)cosβ}<Suspension>Tsntr=12msn(x.sn2+y.sn2+z.sn2)where(119)xsn={z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}sinβ+a1ncosβysn=-z6nsin(α+γn+ηn)-c1nsin(α+γn)+b2ncosαzsn={z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}cosβ-a1nsinβ(120)qj,k=z6n,ηn,α,βxz6n=cos(α+γn+ηn)sinβxsnηn=-z6nsin(α+γn+ηn)sinβxsnα={-z6nsin(α+γn+ηn)-c1nsin(α+γn)+b2ncosα}sinβxsnβ={z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}cosβ-a1nsinβysnz6n=-sin(α+γn+ηn)ysnηn=-z6ncos(α+γn+ηn)ysnα=-z6ncos(α+γn+ηn)-c1ncos(α+γn)-b2nsinαysnβ=0(121)zsnz6n=cos(α+γn+ηn)cosβzsnηn=-z6nsin(α+γn+ηn)cosβzsnα={-z6nsin(α+γn+ηn)-c1nsin(α+γn)+b2ncosα}cosβ(122)Tsntr=12msn(x.sn2+y.sn2+z.sn2)=12msnj,k(xsnqjxsnqkq.jq.k+ysnqjysnqkq.jq.k+zsnqjzsnqkq.jq.k)(123)=12msnz.6n2+η.n2z6n2+α.2[z6n2+c1n2+b2n2+2{z6nc1ncosηn-z6nb2nsin(γn+ηn)-c1nb2nsinγn}]+β.2[{(z6ncos(α+γn+ηn)+((c1ncos(α+γn)+b2nsinα)}))2+a1n2]+2z.6nα.{c1nsinηn+b2ncos(γn+ηn)}-2z.6nβ.a1ncos(α+γn+ηn)+2η.nα.z6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+2η.nβ.z6na1nsin(α+γn+ηn)+c1nsin(α+γn)-b2ncosα}Tsnro0Usn=msngzsn+12ksn(z6n-lsn)2=msngzsn[{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}cosβ-a1nsinβ]+12ksn(z6n-lsn)2(124)Fsn=-12csnz.6n2<Arm>(125)Tantr=12man(x.an2+y.an2+z.an2)(126)


[0253] where


[0254]
\[ x_{an} = \{e_{1n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_{1n}\cos\beta \]

[0255]
\[ y_{an} = e_{1n}\cos(\alpha+\gamma_n+\theta_n) - c_{2n}\sin(\alpha+\gamma_n) + b_{2n}\cos\alpha \]

[0256]
\[ z_{an} = \{e_{1n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - a_{1n}\sin\beta \quad (127) \]


[0257] and
76qj,k=θn,α,βxanθn=e1ncos(α+γn+θn)sinβxanα={e1ncos(α+γn+θn)-c2nsin(α+γn)+b2ncosα}sinβxanβ={e1nsin(α+γn+θn)+c2nsin(α+γn)+b2nsinα}cosβ-a1nsinβyanθn=-e1nsin(α+γn+θn)yanα=-e1nsin(α+γn+θn)-c2ncos(α+γn)-b2nsinαyanβ=0zanθn=e1ncos(α+γn+θn)cosβzanα={e1ncos(α+γn+θn)-c2nsin(α+γn)+b2ncosα}cosβzanβ=-{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}sinβ-a1ncosβthus(128)Tantr=12man(x.an2+y.an2+z.an2)=12manj,k(xanqjxanqkq.jq.k+yanqjyanqkq.jq.k+zanqjzanqkq.jq.k)=12manθ.n2e1n2+α.2[e1n2+c2n2+b2n2-2{e1nc2nsinθn+e1nb2ncos(γn+θn)+c2nb2nsinγn}]+β.2[{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+a1n2]+2θ.α.e1n{e1n-c2nsinθn+b2ncos(γn+θn)}-2θ.nβ.e1na1ncos(α+γn+θn)-2α.β.a1n{e1ncos(α+γn+θ)n-c1nsin(α+γn)+b2ncosα}(129)Tanro=12Iaxωax2=12Iax(α.+θ.n)2(130)Uan=mangzan=mang[{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}cosβ-a1nsinβ]<Wheel>(131)Twntr=12mwn(x.wn2+y.wn2+z.wn2)(132)


[0258] where


[0259]
\[ x_{wn} = \{e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_{1n}\cos\beta \]

[0260]
\[ y_{wn} = e_{3n}\cos(\alpha+\gamma_n+\theta_n) - c_{2n}\sin(\alpha+\gamma_n) + b_{2n}\cos\alpha \]

[0261]
\[ z_{wn} = \{e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - a_{1n}\sin\beta \quad (133) \]


[0262] Substituting man with mwn and e1n with e3n in the equation for arm, yields an equation for the wheel as:
77Twntr=12mwnθ.n2e3n2+α.2[e3n2+c2n2+b2n2-2{e3nc2nsinθn+e3nb2ncos(γn+θn)+c2nb2nsinγn}]+β.2[{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+a1n2]+2θ.α.e3n{e3n-c2nsinθn+b2ncos(γn+θn)}-2θ.nβ.e3na1ncos(α+γn+θn)-2α.β.a1n{e3ncos(α+γn+θn)-c1nsin(α+γn)+b2ncosα}Twnro=0Uwn=mwngzwn+12kwn(z12n-lwn)2=mwng[{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}cosβ-a1nsinβ]+12kwn(z12n-lwn)2(134)Fwn=-12cwnz.12n2<Stabilizer>(135)Tzntr0(136)Tznro0(137)Uzn12kzn(zzi-zzii)2=(12kzn(e0isin(γi+θi)-e0iisin(γii+θii)})2=12kzne0i2{sin(γi+θi)+sin(γii+θii)}2wheree0ii=-e0i(138)Fzn0(139)


[0263] Therefore the total kinetic energy is:
78Ttot=Tbtr+ni,ii&LeftBracketingBar;Tsntr+Tantr+Twntr+Tbro+Tanro(140)TtotTbtr+ni,ii&LeftBracketingBar;Tsntr+Tantr+Twntr+Tbro+Tanro&RightBracketingBar;=12mbα.2(b02+c02)+β.2{(a0+a1)2+(b0sinα+c0cosα)2}-2α.β.(a0+a1)(b0cosα-c0sinα)+ni,ii&LeftBracketingBar;12msnz.6n2+η.n2z6n2+α.2[z6n2+c1n2+b2n2+2{z6nc1ncosηn-z6nb2nsin(γn+ηn)-c1nb2nsinγn}]+β.2[{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}2+a1n2]+2z.6nα.{c1nsinηn+b2ncos(γn+ηn)}-2z.6nβ.a1ncos(α+γn+ηn)+2η.nα.z6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+2η.nβ.z6na1nsin(α+γn+ηn)+2α.β.a1n{z6nsin(α+γn+ηn)+c1nsin(α+γn)-b2ncosα}+12manθ.n2e1n2+α.2[e1n2+c2n2+b2n2-2{e1nc2nsinθn-e1nb2ncos(γn+θn)+c2nb2nsinγn}]+β.2[{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+a1n2]+2θ.α.e1n{e1n-c2nsinθn+b2ncos(γn+θn)}-2θ.nβ.e1na1ncos(α+γn+θn)-2α.β.a1n{e1ncos(α+γn+θn)-c1nsin(α+γn)+b2ncosα}+12mwnθ.n2e3n2+α.2[e3n2+c2n2+b2n2-2{e3nc2nsinθn-33nb2ncos(γn+θn)+c2nb2nsinγn}]+β.2[{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+a1n2]+2θ.α.e3n{e3n-c2nsinθn+b2ncos(γn+θn)}-2θ.nβ.e3na1ncos(α+γn+θn)-2α.β.a1n(e3ncos(α+γn+θn)-c1nsin(α+γn)+b2ncosα}+12(Ibxα.2+Ibyβ.2)+12Ianx(α.+θ.n)2&RightBracketingBar;(141)=12[α.2mbbI+β.2{mba1+mb(b0sinα+c0cosα)2}-2α.β.mba(b0cosα-c0sinα)]+12ni,ii&LeftBracketingBar;msn(z.6n2+η.n2z6n2)+θ.n2maw2In+α.2msawIn+msnz6n[z6n+2msn{c1ncosηn-b2nsin(γn+ηn)}]-2maw1n{c2nsinθn-b2ncos(γn+θn)}+β.2msaw2n+msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}2+man{e1sin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+mwn{e3sin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+2z.6nα.msn{c1nsinηn+b2ncos(γn+ηn)}-2z.6nβ.ma1ncos(α+γn+ηn)+2η.nα.msnz6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+2η.nβ.msnz6na1nsin(α+γn+ηn)+2θ.α.[maw2In-mawIn{c2nsinθn-b2ncos(γn+θn)}]-2θ.β.maw1na1ncos(α+γn+θn)+2α.β.a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)}&RightBracketingBar;(142)


[0264] where


[0265]
\[ m_{ba} = m_b(a_0+a_1) \]

[0266]
\[ m_{bbI} = m_b(b_0^2+c_0^2) + I_{bx} \]

[0267]
\[ m_{ba1} = m_b(a_0+a_1)^2 + I_{by} \]

[0268]
\[ m_{sawan} = (m_{sn}+m_{an}+m_{wn})\,a_{1n} \]

[0269]
\[ m_{sawbn} = (m_{sn}+m_{an}+m_{wn})\,b_{2n} \]

[0270]
\[ m_{sawcn} = m_{sn}c_{1n} + (m_{an}+m_{wn})c_{2n} \quad (143) \]

[0271]
\[ m_{saw2n} = (m_{sn}+m_{an}+m_{wn})\,a_{1n}^2 \]

[0272]–[0273]
\[ m_{saw1n} = m_{an}e_{1n}^2 + m_{wn}e_{3n}^2 + m_{sn}\left(c_{1n}^2 + b_{2n}^2 - 2c_{1n}b_{2n}\sin\gamma_n\right) + (m_{an}+m_{wn})\left(c_{2n}^2 + b_{2n}^2 - 2c_{2n}b_{2n}\sin\gamma_n\right) + I_{axn} \]

[0274]
\[ m_{aw2In} = m_{an}e_{1n}^2 + m_{wn}e_{3n}^2 + I_{axn} \]

[0275]
\[ m_{aw1n} = m_{an}e_{1n} + m_{wn}e_{3n} \]

[0276]
\[ m_{aw2n} = m_{an}e_{1n}^2 + m_{wn}e_{3n}^2 \]



[0277] Hereafter, variables and constants having the index 'n' imply, implicitly or explicitly, summation over n = i, ii.


[0278] Total potential energy is:
79Utot=Ub+ni,n&LeftBracketingBar;Usn+Uan+Uwn+Uzn(144)=mbg{-(a0+a1)sinβ+(b0sinα+c0cosα)cosβ}+ni,si&LeftBracketingBar;msng[{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}cosβ-a1nsinβ]+12ksn(z6n-lsn)2+mang[{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}cosβ-a1nsinβ]+mwng[{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}cosβ-a1nsinβ]+12kwn(z12n-lwn)2+12kzn[e0n{sin(γi+((θi)+sin(γii+θii)}+c2n(cosγi-cosγzi)]))2&RightBracketingBar;(145)=g{-mbasinβ+mb(b0sinα+c0cosα)cosβ}+12kzie0i2{sin(γi+θi)+sin(γi+θi)+sin(γii+θii)}2+ni,ng[{msnz6ncos(α+γn+ηn)+maw1nsin(α+γn+θn)+msawcncos(α+γn)+msawbnsinα}cosβ-msawansinβ]+12ksn(z6n-lsn)2+12kwn(z12n-l2n)2(146)


[0279] where


[0280]
\[ m_{ba} = m_b(a_0+a_1) \]

[0281]
\[ m_{sawan} = (m_{sn}+m_{an}+m_{wn})\,a_{1n} \]

[0282]
\[ m_{sawbn} = (m_{sn}+m_{an}+m_{wn})\,b_{2n} \]

[0283]
\[ m_{sawcn} = m_{sn}c_{1n} + (m_{an}+m_{wn})c_{2n} \]

[0284]
\[ \gamma_{ii} = -\gamma_{i} \quad (147) \]


[0285] 4. Lagrange's Equation


[0286] The Lagrangian is written as:
80L=Ttot-Utot=12[α.2mbbI+β.2{mbaI+mb(b0sinα+c0cosα)2}-2α.β.mba(b0cosα-c0sinα)]+12ni,ii&LeftBracketingBar;msn(z.6n2+η.n2z6n2)+θ.n2maw2In+α.2msawIn+msnz6n[z6n+2{c1ncosηn-b2nsin(γn+ηn)}]-2maw1n{c2nsinθn-b2ncos(γn+θn)}+β.2msaw2n+msn{z6ncos(α+(γn+ηn)+c1ncos(α+γn)+b2nsinα})2+man{e1sin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+m2n{e3sin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+2z.6nα.msn{c1nsinηn+b2ncos(γn+ηn)}-2z.6nβ.msna1ncos(α+γn+ηn)+2η.nα.msnz6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+2η.nβ.msnz6na1nsin(α+γn+ηn)+2θ.α.[maw2In-maw1n{c2nsinθn-b2ncos(γn+θn)}]-2θ.β.maw1na1ncos(α+γn+θn)+2α.β.a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)}&RightBracketingBar;-g{-mbasinβ+mb(b0sinα+c0cosα)cosβ}+12kzie0i2{sin(γi+θi)+sin(γii+θii)}2-ni,iig[{msnz6ncos(α+γn+ηn)+maw1nsin(α+γn+θn)+msawcncos(α+γn)+msawbnsinα}cosβ-msawansinβ]+12ksn(z6n-lsn)2+12kwn(z12n-lwn)2(148)Lβ=g{mbacosβ+mb(b0sinα+c0cosα)sinβ}+ni,iig[{msnz6ncos(α+γn+ηn)+maw1nsin(α+γn+θn)+msawcncos(α+γn)+msawbnsinα}sinβ+msawancosβ](149)Lα={β.2mb(b0cosα-c0sinα)+α.β.mba}(b0sinα+c0cosα)+ni,ii&LeftBracketingBar;+β.2msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}{-z6nsin(α+γn+ηn)-c1nsin(α+γn)+b2ncosα}+man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{e1cos(α+γn+θn)-c2nsin(α+γn)+b2ncosα}+mwn{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{e3cos(α+γn+θn)-c2nsin(α+γn)+b2ncosα}+z.6nβ.msna1nsin(α+γn+ηn)+η.nβ.msnz6na1ncos(α+γn+ηn)+θ.β.maw1na1nsin(α+γn+θn)+α.β.a1n{msawcncos(α+γn)+msawbnsinα+msnz6ncos(α+γn+ηn)+maw1nsin(α+γn+θn)}&RightBracketingBar;-gmb(b0cosα-c0sinα)cosβ+ni,iig{msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)+msawcnsin(α+γn)-msawbncosα}cosβ(150)Lηn=α.2msnz6n{-c1nsinηn-b2ncos(γn+ηn)}+β.2msn[z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}{-z6nsin(α+γn+ηn)}+z.6nα.msn{c1ncosηn-b2nsin(γn+ηn)}+z.6nβ.msna1nsin(α+γn+ηn)-η.nα.msnz6n{c1nsinηn+b2ncos(γn+ηn)}+η.nβ.msnz6na1ncos(α+γn+ηn)+α.β.a1nmsnz6ncos(α+γn+ηn)+gmsnz6nsin(α+γn+ηn)cosβ(151)Lθn=-kzie0i2{sin(γi+θi)+sin(γii+θii)}cos(γn+θn)-α.2maw1n{c2ncosθn+b2nsin(γn+θn)}+β.2man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}e1ncos(α+γn+θn)+mwn{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}e3ncos(α+γn+θn)-θ.α.maw1n{c2ncosθn+b2nsin(γn+θ)}+θ.β.maw1na1nsin(α+γn+θn)+α.β.a1nmaw1nsin(α+γn+θn)-gmaw1ncos(α+γn+θn)cosβ(152)Lz6n=msnη.n2z6n+α.2msn[z6n+{c1ncosηn-b2nsin(γ+ηn)}]+β.2msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}cos(α+γn+ηn)+η.nα.msn{2z6n+c1ncosηn-b2nsin(γn+ηn)}+η.nβ.msna1nsin(α+γn+ηn)+α.β.a1nmsnsin(α+γn+ηn)-gmsncos(α+γn+ηn)cosβ-ksn(z6n-lsn)(153)Lz12n=-kwn(z12n-lwn)(154)Lβ.=β.msaw2n+mbaI+mb(b0sinα+c0cosα)2+msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}2+man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+mwn{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2-α.mba(b0cosα-c0sinα)-z.6nmsna1ncos(α+γn+ηn)+η.nmsnz6na1nsin(α+γn+ηn)-θ.maw1na1ncos(α+γn+θn)+α.a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)}(155)t(Lβ.)=β¨msaw2n+mbaI+mb(b0sinα+c0cosα)2+msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}2+man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+mwn{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+2β.α.mb(b0sinα+c0cosα)(b0cosα-c0sinα)+msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}{z.6ncos(α+γn+ηn)-(α.+η.n)z6nsin(α+γn+ηn)-α.[c1nsin(α+γn)-b2ncosα]}+man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{(α.+θ.n)e1ncos(α+γn+θn)-α.[c2nsin(α+γn)-b2ncosα]}+mwn{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{α.+θ.n)e3nsin(α+γn+θn)-α.[c2nsin(α+γn)-b2ncosα]}-α¨mba{b0cosα-c0sinα)+α.2mba(b0sinα+c0cosα)-z¨6nmsna1ncos(α+γn+ηn)+z.6n(α.+ηn)msna1nsin(α+γn+ηn)+η¨nmsnz6na1nsin(α+γn+ηn)+η.nmsnz.6na1nsin(α+ηn+ηn)+η.n(α.+η.n)msnz6nz1ncos(α+γn+ηn)-θ¨nmaw1na1ncos(α+γn+θn)+θ.n(α.+θ.n)maw1na1nsin(α+γn+θn)+α¨a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)}+α.a1n{α.msa
wcncos(α+γn)+α.msawbnsinα+(α.+η.n)msnz6ncos(α+γn+ηn)+msnz.6nsin(α+γn+ηn)+(α.+θ.n)maw1nsin(α+γn+θ)}(156)Lα.=α.mbbI-β.mba(b0cosα-c0sinα)+α.msawIn+msnz6n[z6n+2{c1ncosηn-b2nsin(γn+ηn)}]-2maw1n{c2nsinθn-b2ncos(γn+θ)}+z.6nmsn{c1nsinηn+b2ncos(γn+ηn)}+η.nmsnz6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+θ.[maw2In-maw1n{c2nsinθn-b2ncos(γn+θ)}]+β.a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θ)}(157)t(Lα.)=-β¨mba(b0cosα-c0sinα)+β.α.mba(b0sinα+c0cosα)+α¨mbbI+msawIn+msnz6n[z6n+2{c1ncosηn-b2nsin(γn+ηn)}]-2maw1n{c2nsinθn-b2ncos(γn+θn)}+α.msnz.6n[z6n+2{c1ncosηn-b2nsin(γn+ηn)}]+msnz6n[z.6n-2η.n{c1nsinηn+b2ncos(γn+ηn)}]-2θ.nmaw1n{c2ncosθn+b2nsin(γn+θn)}+z¨6nmsn{c1nsinηn+b2ncos(γn+ηn)}+z.6nη.nmsn{c1ncosηn-b2nsin(γn+ηn)}+η¨nmsnz6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+η.nmsnz.6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+η.nmsnz6n{z.6n-η.n[c1nsinηn+b2ncos(γn+ηn)]}+θ¨[maw2In-maw1n{c2nsinθn-b2ncos(γn+θn)}]-θ.n2maw1n{c2ncosθn+b2nsin(γn+θn]}]+β¨a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)}+β.a1n{α.[msawcncos(α+γn)+msawbnsinα]+msnz.6nsin(α+γn+ηn)+(α.+θ.n)maw1nsin(α+γn+θn)}(158)Lη.n=msnη.nz6n2+α.msnz6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+β.msnz6na1nsin(α+γn+ηn)(159)t(Lη.n)=msnη¨nz6n2+2msnη.nz.6nz6n+α¨msnz6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+α.msnz.6n{z6n+c1ncosηn-b2nsin(γn+ηn)}+α.msnz6n{z.6n-η.n[c1nsinηn+b2ncos(γn+ηn)]}+β¨msnz6na1nsin(α+γn+ηn)+β.msnz.6na1nsin(α+γn+ηn)+β.(α.+η.n)msnz6na1ncos(α+γn+ηn)(160)Lθ.n=θ.nmaw2In+α.[maw2In-maw1n{c2nsinθn-b2ncos(γn+θn)}]-β.maw1na1ncos(α+γn+θn)-(161)α.θ.nmaw1n{c2ncosθn+b2nsin(γn+θn)}-β¨maw1na1ncos(α+γn+θn)+β.(α.+θ.n)maw1na1nsin(α+γn+θn)(162)Lz.6n=msnz.6n+α.msn{c1nsinηn+b2ncos(γn+ηn)}-β.msna1ncos(α+γn+ηn)(163)t(Lz.6n)=msnz¨6n+α¨msn{c1nsinηn+b2ncos(γn+ηn)}+α.η.nmsn{c1ncosηn-b2nsin(γn+ηn)}-β¨msna1ncos(α+γn+ηn)-β.(α.+η.n)msna1nsin(α+γn+ηn)(164)Lz.12n=0(165)t(Lz.12n)=0(166)


[0287] The dissipative function is:
\[ F_{tot} = -\frac{1}{2}\left(c_{sn}\dot{z}_{6n}^2 + c_{wn}\dot{z}_{12n}^2\right) \quad (167) \]


[0288] The constraints are based on geometrical constraints, and the touch point of the road and the wheel. The geometrical constraint is expressed as




\[ e_{2n}\cos\theta_n = -(z_{6n} - d_{1n})\sin\eta_n \]
\[ e_{2n}\sin\theta_n - (z_{6n} - d_{1n})\cos\eta_n = c_{1n} - c_{2n} \quad (168) \]



[0289] The touch point of the road and the wheel is defined as


[0290]
\[ z_{tn} = z_{P^{r}_{\mathrm{touchpoint},n}} = \{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - a_{1n}\sin\beta = R_n(t) \quad (169) \]


[0294] where Rn(t) is road input at each wheel.
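
The constraint (169) ties the generalized coordinates to the road profile. The sketch below evaluates the touch-point height for one set of coordinates and compares it with a hypothetical road input Rn(t); in a simulation this residual is what the constraint forces λ drive to zero. All numbers are placeholders.

```python
import numpy as np

def touch_point_height(alpha, beta, gamma_n, theta_n, z12n, e3n, c2n, b2n, a1n):
    """z-coordinate of the wheel-road contact point, equation (169)."""
    bracket = (z12n * np.cos(alpha)
               + e3n * np.sin(alpha + gamma_n + theta_n)
               + c2n * np.cos(alpha + gamma_n)
               + b2n * np.sin(alpha))
    return bracket * np.cos(beta) - a1n * np.sin(beta)

# Hypothetical coordinates and geometry (metres / radians), and a road input R_n(t).
z_t = touch_point_height(alpha=0.01, beta=0.005, gamma_n=0.02, theta_n=-0.015,
                         z12n=0.30, e3n=0.25, c2n=0.10, b2n=0.70, a1n=1.20)
R_n = 0.02 * np.sin(2 * np.pi * 1.5)          # placeholder road profile value
print(z_t, z_t - R_n)                         # the constraint drives this residual to zero
```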


[0295] Differentials are:


[0296]
\[ \dot{\theta}_n e_{2n}\sin\theta_n - \dot{z}_{6n}\sin\eta_n - \dot{\eta}_n(z_{6n}-d_{1n})\cos\eta_n = 0 \]

[0297]
\[ \dot{\theta}_n e_{2n}\cos\theta_n - \dot{z}_{6n}\cos\eta_n + \dot{\eta}_n(z_{6n}-d_{1n})\sin\eta_n = 0 \]

[0298]
\[ \{\dot{z}_{12n}\cos\alpha - \dot{\alpha}z_{12n}\sin\alpha + (\dot{\alpha}+\dot{\theta}_n)e_{3n}\cos(\alpha+\gamma_n+\theta_n) - \dot{\alpha}c_{2n}\sin(\alpha+\gamma_n) + \dot{\alpha}b_{2n}\cos\alpha\}\cos\beta - \dot{\beta}\left[\{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_{1n}\cos\beta\right] - \dot{R}_n(t) = 0 \quad (170) \]


[0302] Since the differentials of these constraints are written as
\[ \sum_j a_{lnj}\,dq_j + a_{lnt}\,dt = 0 \qquad (l = 1, 2, 3;\; n = i, ii) \quad (171) \]


[0303] then the values alnj are obtained as follows.


[0304] a1n1=0, a1n2=0, a1n3=−(z6n−d1n)cos ηn, a1n4=e2n sin ηn, a1n5=− sin ηn, a1n6=0


[0305] a2n1=0, a2n2=0, a2n3=(z6n−d1n)sin ηn, a2n4=e2n cos ηn, a2n5=− cos ηn, a2n6=0


[0306] a3n1=−{z12n cos α+e3n sin(α+γnn)+c2n cos(α+γn)+b2n sin α} sin β+a1n cos β,


[0307] a3n2={−z12n sin α+e3n cos(α+γnn)−c2n sin(α+γn)+b2n cos α} cos β,


[0308] a3n3=0, a3n4=e3n cos(α+γnn)cos β, a3n5=0, a3n6= cos α cos β  (172)


[0309] From the above, Lagrange's equation becomes
\[ \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_j}\right) - \frac{\partial L}{\partial q_j} = Q_j + \sum_{l,n}\lambda_{ln}\,a_{lnj} \quad (173) \]


[0310] where


[0311] q1=β, q2=α, q3ii, q4ii, q5i=z6i, q6i=z12i


[0312] q3iiii, q4iiii, q5ii=z6ii, q6ii=z12ii  (174)


[0313] can be obtained as follows:
84t(Lβ.)-Lβ=(Lβ.)+l,nλlnalnj(175)β¨msaw2n+mba1+mb(b0sinα+c0cosα)2+(176)msn{z6ncos(α+γn+ηn)+clncos(α+γn)+b2nsinα}2+man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2n+sinα}2+mwn{3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}2+2β.α.mb(b0sinα+c0cosα)(b0cosα-c0sinα)+msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}{z.6ncos(α+γn+ηn)-(α.+η.n)z6nsin(α+γn+ηn)-α.[c1nsin(α+γn)-b2ncosα]}+man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{(α.+θ.n)e1ncos(α+γn+θn)-α.[c2nsin(α+γn)-b2ncosα]}+mwn{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{(α.+θ.n)e3nsin(α+γn+θn)-α.[c2nsin(α+γn)-b2ncosα]}-α¨mbn(b0cosα-c0sinα)+α.2mba(b0sinα+c0sinα)+-z¨6nmsna1ncos(α+γn+ηn)+z.6n(α.+η.n)msna1nsin(α+γn+ηn)+η¨nmsnz6na1nsin(α+γn+ηn)+η.nmsnz.6na1nsin(α+γn+ηn)+η.n(α.+η.n)msnz6na1ncos(α+γn+ηn)-θ¨nmaw1na1ncos(α+γn+θn)+θ.n(α.+θ.n)maw1na1nsin(α+γn+θn)+α¨a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)}+α.a1n{α.msawcncos(α+γn)+α.msawbnsinα+(α.+η.n)msnz6ncos(α+γn+ηn)+msnz.6nsin(α+γn+ηn)+(α.+θ.n)maw1nsin(α+γn+θn)}-g{mbacosβ+mb(b0sinα+c0cosα)sinβ}-ni,iig[{msnz6ncos(α+γn+ηn)+maw1nsin(α+γn+θn)+msawcncos(α+γn)+msawbnsinα}sinβ+msawancosβ]=λ3n[-{z12ncosα+e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}sinβ+a1ncosβ]β¨(msaw2n+mbaI+mbA12+msnB12+manB22+mwnB32)+(177)2β.[α.mbA1A2+msnB1{z.6nCαγηn-(α.+η.n)z6nSαγηn-α.A4}+manB2{(α.+θ.n)e1nCαγθn-α.A6}+mwnB3{(α.+θ.n)e3nSαγθn-α.A6}]-α¨mbaA2+α.2mbaA1-z¨6nmsna1nCαγηn+2z.6n(α.+η.n)+η¨nmsnz6na1nSαγηn+η.n(2α.+η.n)msna1nSαγηn+η¨nmsnz6na1nCαγηn-θ¨nmaw1na1nCαγθn+θ.n(2α.+θ.n)maw1na1nSαγθn+α¨a1n{msawcnSαγn-msawbnCα+msnz6nSαγηn-maw1nCαγθn}+α.2a1n{msawcnCαγn+msawbnSα+msnz6nCαγηn+maw1nSαγθn}-g[mbuCβ+mbA1Sβ+{msnz6nCαγηn+maw1nSαγθn+msawcnCαγn+msawbnSα}Sβ+msawanCβ]=λ3n[-{z12nCα+e3nSaγθn+c2nCαγn+b2nSα}Sβ+a1nCβ]β¨=2β.[α.mbA1A2+msnB1{z.6nCαγηn(α.+η.n)z6nSαγηn-α.A4}+manB2{(α.+θ.n)e1nCαγθn-α.A6}+mwnB3{(α.+θ.n)e3nSαγθn-α.A6}]-α¨mbaA2+α.2mbaA1-z¨6nmsnα1nCαγηn+2z.6n(α.+η.n)msna1nSαγηn+η¨nmsnz6na1nSαγηn+η.n(2α.+η.n)msnz6na1nCαγηn-θ¨nmaw1na1nCαγθn+θ.n(2α.+θ.n)maw1na1nSαγθn+α¨a1n{msawcnSαγn-msawbnCα+msnz6nSαγηn-maw1nCαγθn}+α.2a1n{msawcnCαγn+msawbnSα+msnz6nCαγηn+maw1nSαγθn}-g[mbaCβ+mbA1Sβ+{msnz6nCαγηn+maw1nSαγθn+msawcnCαγn+msawbnSα}Sβ+msawanCβ]+λ3n{(z12nCα++e3nSαγθn+c2nCαγn+b2nSα)Sβ-a1nCβ}-(msaw2n+mbaI+mbA12+msnB12+manB22+mwnB32(178)t(Lα.)-Lα=Fα.+1,nλ1na1n2(179)-β¨mba(b0cosα-c0sinα)+β.α.mba(b0sinα+c0cosα)+(180)α¨mbb1+msaw1n+msnz6n[z6n+2{c1ncosηn-b2nsin(γn+ηn)}]-2maw1n{c2nsinθn-b2ncos(γn+θn)}+α.msnz.6n[z6n+2{c1ncosηn-b2nsin(γn+ηn)}]+msnz6n[z.6n-2η.n{c1nsinηn+b2ncos(γn+ηn)}]-2θ.nmaw1n{c2ncosθn+b2nsin(γn+θn)}+z¨6nmsn{c1nsinηn+b2ncos(γn+ηn)}+z.6nη.nmsn{c1ncosηn-b2nsin(γn+ηn)}+η¨nmsnz6n{z.6n-η.n[c1nsinηn+b2ncos(γn+ηn)]}+θ¨[maw21n-maw1n{c2nsinθn-b2ncos(γn+θn)}]-θ.n2maw1n{c2ncosθn+b2nsin(γn+θn)}]+β¨a1n{msawcnsin(α+γn)-msawbncosα+msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)}+β.a1n{α.[msawcncos(α+γn)+msawbnsinα]+msnz.6nsin(α+γn+ηn)+(α.+η.n)msnz6ncos(α+γn+ηn)+(α.+θ.n)maw1nsin(α+γn+θn)}-{β.2mb(b0cosα-c0sinα)+α.β.mba}(b0sinα+c0cosα)-ni,ii&RightBracketingBar;+β.2msn{z6ncos(α+γn+ηn)+c1ncos(α+γn)+b2nsinα}{-z6nsin(α+γn+ηn)-c1nsin(α+γn)+b2ncosα}+man{e1nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{e1cos(α+γn+θn)-c2nsin(α+γn)+b2ncosα}+mwn{e3nsin(α+γn+θn)+c2ncos(α+γn)+b2nsinα}{e3cos(α+γn+θn)-c2nsin(α+γn)+b2ncosα}+z.6nβ.msna1nsin(α+γn+ηn)+η.nβ.msnz6na1ncos(α+γn+ηn)+θ.β.maw1na1nsin(α+γn+θn)+α.β.a1n{msawencos(α+γn)+msawbnsinα+msnz6ncos(α+γn+ηn)+maw1nsin(α+γn+θn)}&RightBracketingBar;+gmb(b0cosα-c0sinα)cosβ-ni,ug{msnz6nsin(α+γn+ηn)-maw1ncos(α+γn+θn)+msawcnsin(α+γn)-msawbncosα}cosβ=λ3n{-z12nsinα+e3ncos(α+γn+θn)-c2nsin(α+γn)+b2ncosα}cosβ-β¨mbaA2+α¨{mbb1+msaw1n+msnz6n(z6n+2E1n)-2maw1nH1n}+(181)2α.{msnz.6n(z6n+E1n)-msnz6
nη.nE2n+θ.nmaw1nH2n}+z¨6nmsnE2n+z.6nη.nmsnE1n+η¨nmsnz6n{z6n+E1n}+η.nmsnz.6n{2z6n+E1n}-η.n2msnz6nE2n+θ¨(maw21n-maw1nH1n)-θ.n2maw1nH2n+β¨α1n(msawcnSαγn-msawbnCα+msnz6nSαγηn-maw1nCαγθn)+β.α1n{α.(msawcnCαγn+msawbnSα)+msnz.6nSαγηn+(α.+η.n)msnz6nCαγηn+(α.+θ.n)maw1nSαγθn}-β.2mbA2A1-[β.2{msnB1(-z6Saγn-A4)+manB2(e1Cαγθn-A6)+mwnB3(e3Cαγθn-A6)}+z.6nβ.msna1nSαγηn+η.nβ.msnz6na1nCαγηn+θ.β.maw1na1nSαγθn+α.β.a1n{msawcnCαγn+msawbnSα+msnz6nCαγηn+maw1nSαγθn}]+gmbA2Cβ-g{msnz6nSαγηn-maw1nCαγθn+msawcnSαγn-msawbnCα}Cβ=λ3n(-z12nSα+e3nCαγθn-c2nSαγn+b2nCα)Cβ-β¨mbaA2+α¨{mbb1+msaw1n+msnz6n(z6n+2E1n)-2maw1nH1n}+(182)msn(2α.z.6n+η¨nz6n+2η.nz.6n)(z6n+E1n)-2α.(msnz6nη.nE2n+θ.nmaw1nH2n)+z¨6nmsnE2n-η.n2msnz6nE2n+θ¨(maw21n-maw1nH1n)-θ.n2maw1nH2n+β¨α1n(msawcnSαγn-msawbnCα+msnz6nSαγηn-maw1nCαγθn)-β.2{mbA2A1+msnB1(-z6Saγn-A4)+manB2(e1Cαγθn-A6)+mwnB3(e3Cαγθn-A6)}+gmbA2Cβ-g{msnz6nSαγηn-maw1nCαγθn+msawcnSαγn-msawbnCα}Cβ=λ3n(-z12nSα+e3nCαγθn-c2nSαγn+b2nCα)Cβα¨=msn(2α.z.6n+η¨nz6n+2η.nz.6n)(z6n+E1n)-2α.(msnz6nη.nE2n+θ.nmaw1nH2n)+z¨6nmsnE2n-η.n2msnz6nE2n+θ¨(maw2ln-maw1nH1n)-θ.n2maw1nH2n+β¨a1n(msawcnSαγn-msawbnCα+msnz6nSαγηn-maw1nCαγθn)-β.2{mbA2A1+msnB1(-z6nSαγηnA4)+manB2(e1Cαγθn-A6)+mwnB3(e3Cαγθn-A6)}+gmbA2Cβ-g{msnz6nSαγηn-maw1nCαγθn+msawcnSαγn-msawbnCα}Cβ-β¨mbaA2+λ3n(z12nSα-e3nCαγθn+c2nSαγn-b2nCα)Cβ-{mbb1+msaw1n+msnz6n(z6n+2E1n)-2maw1nH1n}(183)t(Lη.n)-Lη.n=Fη.n+l,nλlnaln3l=1,2,3n=i,ii(184)




m


sn
{umlaut over (η)}nz6n2+2msn{dot over (η)}n{dot over (z)}6n+{umlaut over (α)}msnz6n{z6n+c1n cos ηn−b2n sin (γnn)}+{dot over (α)}msn{dot over (z)}6n{z6n+c1n cos ηn−b2n sin (γnn)}+{dot over (α)}msnz6n{{dot over (z)}6n−{dot over (η)}n[c1n sin ηn+b2n cos (γnn)]}+{umlaut over (β)}msnZ6na1n sin(α+γnn)+{dot over (β)}msn{dot over (z)}6na1n sin(α+γnn)+{dot over (β)}({dot over (α)}+{dot over (η)}n)msnz6na1n cos(α+γnn) −{dot over (α)}2msnz6n{−c1n sin ηn−b2n cos(γnn)}+{dot over (β)}2msn{z6n cos(α+γnn)+c1n cos (α+γn)+b2n sin α}{−z6n sin (α+γnn)}+{dot over (z)}6n{dot over (α)}msn{c1n cos ηn−b2a sin(γnn)}+{dot over (z6n)}{dot over (β)}msna1a sin (α+γnn) −{dot over (η)}n{dot over (α)}msnz6a{c1n sin ηn+b2n cos(γnηn)}+{dot over (η)}n{dot over (β)}msnz6na1n cos(α+γnn) +{dot over (αβ)}a1nmsnz6n cos(α+γnn)+gmsnz6n sin(α+γnn)cos β=−λ1n(z6n−d1n)cos ηn2n(z6n−d1n)sin ηn  (185)



[0314]

m


sn
{umlaut over (η)}nz6n2+2msn{dot over (η)}n{dot over (z)}6nz6n+{umlaut over (α)}msnz6n{z6n+E1}+{dot over (α)}msn{dot over (z)}6n{2z6n+E1}−{dot over (α)}msnz6n{dot over (η)}nE2+{umlaut over (β)}msnz6na1nSαγηn+{dot over (β)}msn{dot over (z)}6na1nSαγηn+{dot over (β)}({dot over (α)}+{dot over (η)}n)msnz6na1nCαγηn +{dot over (α)}2msnz6nE22msnB1z6aSαγηn−{dot over (z)}6a{dot over (α)}msnE1−{dot over (z)}6n{dot over (β)}msna1nSαγηn+{dot over (η)}n{dot over (α)}msnz6nE2−{dot over (η)}n{dot over (β)}msnz6na1nCαγηn −{dot over (αβ)}a1nmsnz6nCαγηn−gmsnz6nSαγηnCβ=−λ1n(z6n−d1n)Cηn2n(z6n−d1n)Sηn  (186)




m


sn


z


6n
{{umlaut over (η)}nz6n+2{dot over (η)}n{dot over (z)}6n+{umlaut over (α)}(z6n+E1)+2{dot over (αz)}6n+{umlaut over (β)}a1nSαγηn+{dot over (α)}2E2+{dot over (β)}2B1Sαγηn−gSαγηnCβ}=−λ1n(z6n−d1n)Cηnλ2n(z6n−d1n)Sηn  (187)



[0315]

85
















\[ \lambda_{1n} = \frac{m_{sn} z_{6n}\left\{\ddot{\eta}_n z_{6n} + 2\dot{\eta}_n\dot{z}_{6n} + \ddot{\alpha}(z_{6n}+E_1) + 2\dot{\alpha}\dot{z}_{6n} + \ddot{\beta}a_{1n}S_{\alpha\gamma\eta n} + \dot{\alpha}^2 E_2 + \dot{\beta}^2 B_1 S_{\alpha\gamma\eta n} - g S_{\alpha\gamma\eta n} C_\beta\right\} - \lambda_{2n}(z_{6n}-d_{1n})S_{\eta n}}{-(z_{6n}-d_{1n})C_{\eta n}} \quad (188) \]

\[ \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{\theta}_n}\right) - \frac{\partial L}{\partial \theta_n} = F_{\dot{\theta}_n} + \sum_{l,n}\lambda_{ln}\,a_{ln4} \qquad (l = 1, 2, 3;\; n = i, ii) \quad (189) \]
[0316] {umlaut over (θ)}nmaw2In+{umlaut over (α)}[maw2In−maw1n{c2n sin θn−b2n cos(γnn)}−{dot over (αθ)}nmaw1n{c2n cos θn+b2n sin(γnn)}−{umlaut over (β)}maw1na1n cos(α+γnn)+{dot over (β)}({dot over (α)}+{dot over (θ)}n)maw1na1n sin(α+γnn) −[kzie0i2{ sin(γii)−sin(γiiii)} cos(γnn)Xs −{dot over (α)}2maw1n{c2n cos θn+b2n sin(γnn)}+{dot over (β)}2man{e1n sin(α+γnθn)+c2n cos (α+γn)+b2n sin α}e1n cos (α+γnn) +mwn{e3n sin (α+γnn)+c2n cos (α+γn)+b2n sin α}e3n cos (α+γnn) −{dot over (θα)}maw1n{c2n cos θn+b2n sin (γnn)}+{dot over (θβ)}maw1na1n sin (α+γnn) +{dot over (αβ)}a1nmaw1n sin (α+γnn)−gmaw1n cos (α+γnn) cos β]=λ1ne2n sin θn2ne2n cos θn3neen cos (α+γnn) cos β  (190)


[0317] {umlaut over (θ)}nmaw2n+{umlaut over (α)}(maw2In−maw1nH1)−{dot over (αθ)}nmaw1nH2−{umlaut over (β)}maw1na1nCαγθn+{dot over (β)}({dot over (α)}+{dot over (θ)}n)maw1na1nSαγθn −[−kzie0i2{ sin(γii)+sin(γiiii)} cos(γnn)XS−{dot over (α)}2maw1nH2+{dot over (β)}2(manB2e1nCαγθn+mwnB3e3nCαγθn) −{dot over (θα)}aw1nH2+{dot over (θβ)}maw1na1nSαγθn+{dot over (αβ)}a1nSαγθn−gmaw1nCαγθnC62 ]=λ1ne2nSθn2ne2nCθn3ne3nCαγθnC62   (191)


[0318] {umlaut over (θ)}nmaw2In+{umlaut over (α)}(maw2In−maw1nH1)−{umlaut over (β)}maw1na1nCαγθn+{dot over (α)}2maw1nH2 −{dot over (β)}2(manB2e1nCαγθn+mwnB3nCαγθn) +gmaw1nCαγθnCβ+kzie0i2{ sin(γii)+sin(γiiii)} cos(γnn) =λ1ne2nSθn2ne2nCθn3ne3nCαγθnCβ  (192)
86λ1n=α¨(maw21n-maw1nH1)-β¨maw1na1nCαγθn+α.2maw1nH2-β.2(manB2e1nCαγθn+mwnB3e3nCαγθn)+gmaw1nCαγθnCβ-λ1ne2nSθn-λ2ne2nCθn-λ3ne3nCαγθnCβ+kzie0i2{sin(γi+θi)+sin(γii+θii)}cos(γn+θn)-maw21n(193)t(Lz.6n)-Lz6n=Fz.6n+l,nλlnaln5l=1,2,3n=i,ii(194)


[0319]

m_{sn}\ddot z_{6n} + \ddot\alpha m_{sn}\{c_{1n}\sin\eta_n + b_{2n}\cos(\gamma_n+\eta_n)\} + \dot\alpha\dot\eta_n m_{sn}\{c_{1n}\cos\eta_n - b_{2n}\sin(\gamma_n+\eta_n)\} - \ddot\beta m_{sn} a_{1n}\cos(\alpha+\gamma_n+\eta_n) + \dot\beta(\dot\alpha+\dot\eta_n) m_{sn} a_{1n}\sin(\alpha+\gamma_n+\eta_n) - m_{sn}\dot\eta_n^2 z_{6n} + \dot\alpha^2 m_{sn}[z_{6n} + \{c_{1n}\cos\eta_n - b_{2n}\sin(\gamma_n+\eta_n)\}] + \dot\beta^2 m_{sn}\{z_{6n}\cos(\alpha+\gamma_n+\eta_n) + c_{1n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos(\alpha+\gamma_n+\eta_n) + \dot\eta_n\dot\alpha m_{sn}\{2 z_{6n} + c_{1n}\cos\eta_n - b_{2n}\sin(\gamma_n+\eta_n)\} + \dot\eta_n\dot\beta m_{sn} a_{1n}\sin(\alpha+\gamma_n+\eta_n) + \dot\alpha\dot\beta a_{1n} m_{sn}\sin(\alpha+\gamma_n+\eta_n) - g m_{sn}\cos(\alpha+\gamma_n+\eta_n)\cos\beta - k_{sn}(z_{6n}-l_{sn}) = -c_{sn}\dot z_{6n} - \lambda_{1n}\sin\eta_n - \lambda_{2n}\cos\eta_n   (195)


[0320]

m_{sn}\{\ddot z_{6n} + \ddot\alpha E_2 - \ddot\beta a_{1n} C_{\alpha\gamma\eta n} - \dot\eta_n^2 z_{6n} - \dot\alpha^2(z_{6n}+E_1) - \dot\beta^2 B_1 C_{\alpha\gamma\eta n} - 2\dot\eta_n\dot\alpha z_{6n} + g C_{\alpha\gamma\eta n} C_\beta\} + k_{sn}(z_{6n}-l_{sn}) = -c_{sn}\dot z_{6n} - \lambda_{1n} S_{\eta n} - \lambda_{2n} C_{\eta n}   (196)
\lambda_{2n} = \frac{m_{sn}\{\ddot z_{6n} + \ddot\alpha E_2 - \ddot\beta a_{1n} C_{\alpha\gamma\eta n} - \dot\eta_n^2 z_{6n} - \dot\alpha^2(z_{6n}+E_1) - \dot\beta^2 B_1 C_{\alpha\gamma\eta n} - 2\dot\eta_n\dot\alpha z_{6n} + g C_{\alpha\gamma\eta n} C_\beta\} + k_{sn}(z_{6n}-l_{sn}) + c_{sn}\dot z_{6n} + \lambda_{1n} S_{\eta n}}{-C_{\eta n}}   (197)

\frac{d}{dt}\left(\frac{\partial L}{\partial\dot z_{12n}}\right) - \frac{\partial L}{\partial z_{12n}} = F_{\dot z_{12n}} + \sum_{l,n}\lambda_{ln} a_{ln6}, \qquad l = 1, 2, 3; \quad n = i, ii

k_{wn}(z_{12n}-l_{wn}) = -c_{wn}\dot z_{12n} + \lambda_{3n}\cos\alpha\cos\beta = -c_{wn}\dot z_{12n} + \lambda_{3n} C_\alpha C_\beta   (198)

\lambda_{3n} = \frac{c_{wn}\dot z_{12n} + k_{wn}(z_{12n}-l_{wn})}{C_\alpha}   (199)
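Equations (188), (197), and (199) give the constraint forces by direct evaluation once the generalized coordinates and their rates are known. As a minimal illustration, the following Python sketch evaluates λ3n from equation (199) as written above; the function and argument names are assumptions made for this sketch and are not part of the specification.

    import math

    def lambda_3n(c_wn, k_wn, z_12n, z_12n_dot, l_wn, alpha):
        # Wheel constraint force, equation (199):
        # lambda_3n = (c_wn*z12n_dot + k_wn*(z12n - l_wn)) / cos(alpha)
        return (c_wn * z_12n_dot + k_wn * (z_12n - l_wn)) / math.cos(alpha)

λ1n and λ2n (equations (188) and (197)) can be evaluated in the same direct way once the accelerations appearing on their right-hand sides are available.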


[0321] From the differentiated constraints it follows that:


[0322] \dot\theta_n e_{2n}\sin\theta_n - \dot z_{6n}\sin\eta_n - \dot\eta_n(z_{6n}-d_{1n})\cos\eta_n = 0

[0323] \dot\theta_n e_{2n}\cos\theta_n - \dot z_{6n}\cos\eta_n + \dot\eta_n(z_{6n}-d_{1n})\sin\eta_n = 0   (200)

[0324] and

[0325] \{\dot z_{12n}\cos\alpha - \dot\alpha z_{12n}\sin\alpha + (\dot\alpha+\dot\theta_n) e_{3n}\cos(\alpha+\gamma_n+\theta_n) - \dot\alpha c_{2n}\sin(\alpha+\gamma_n) + \dot\alpha b_{2n}\cos\alpha\}\cos\beta - \dot\beta[\{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_{1n}\cos\beta] - \dot R_n(t) = 0   (201)


[0326] Differentiating equation (200) with respect to time yields:


[0327] \ddot\theta_n e_{2n} S_{\theta n} + \dot\theta_n^2 e_{2n} C_{\theta n} - \ddot z_{6n} S_{\eta n} - \dot z_{6n}\dot\eta_n C_{\eta n} - \ddot\eta_n(z_{6n}-d_{1n}) C_{\eta n} - \dot\eta_n\dot z_{6n} C_{\eta n} + \dot\eta_n^2(z_{6n}-d_{1n}) S_{\eta n} = 0

[0328] \ddot\theta_n e_{2n} C_{\theta n} - \dot\theta_n^2 e_{2n} S_{\theta n} - \ddot z_{6n} C_{\eta n} + \dot z_{6n}\dot\eta_n S_{\eta n} + \ddot\eta_n(z_{6n}-d_{1n}) S_{\eta n} + \dot\eta_n\dot z_{6n} S_{\eta n} + \dot\eta_n^2(z_{6n}-d_{1n}) C_{\eta n} = 0   (202)
\ddot\eta_n = \frac{\ddot\theta_n e_{2n} S_{\theta n} + \dot\theta_n^2 e_{2n} C_{\theta n} - \ddot z_{6n} S_{\eta n} - 2\dot\eta_n\dot z_{6n} C_{\eta n} + \dot\eta_n^2(z_{6n}-d_{1n}) S_{\eta n}}{(z_{6n}-d_{1n}) C_{\eta n}}   (203)

\ddot z_{6n} = \frac{\ddot\theta_n e_{2n} C_{\theta n} - \dot\theta_n^2 e_{2n} S_{\theta n} + \ddot\eta_n(z_{6n}-d_{1n}) S_{\eta n} + 2\dot\eta_n\dot z_{6n} S_{\eta n} + \dot\eta_n^2(z_{6n}-d_{1n}) C_{\eta n}}{C_{\eta n}}   (204)

and

\dot z_{12n} = \frac{\{\dot\alpha z_{12n} S_\alpha - (\dot\alpha+\dot\theta_n) e_{3n} C_{\alpha\gamma\theta n} + \dot\alpha c_{2n} S_{\alpha\gamma n} - \dot\alpha b_{2n} C_\alpha\} C_\beta + \dot\beta[\{z_{12n} C_\alpha + e_{3n} S_{\alpha\gamma\theta n} + c_{2n} C_{\alpha\gamma n} + b_{2n} S_\alpha\} S_\beta + a_{1n} C_\beta] + \dot R_n(t)}{C_\alpha C_\beta}   (205)
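Because z̈6n appears in equation (203) and η̈n appears in equation (204), the two accelerations are most conveniently obtained by solving the differentiated constraints (202) as a 2 by 2 linear system. The sketch below does this with NumPy; the function name, argument list, and use of numpy.linalg.solve are assumptions made for illustration only.

    import numpy as np

    def eta_z6_accelerations(theta_dd, theta_d, theta, eta, eta_d,
                             z6, z6_d, e2, d1):
        """Solve the differentiated constraints (202) for eta_dd and z6_dd.

        Written as A @ [eta_dd, z6_dd] = b, with the known theta terms
        moved to the right-hand side; one wheel (n = i or ii).
        """
        S_eta, C_eta = np.sin(eta), np.cos(eta)
        S_th, C_th = np.sin(theta), np.cos(theta)

        # Coefficients of eta_dd and z6_dd in the two equations of (202).
        A = np.array([[-(z6 - d1) * C_eta, -S_eta],
                      [ (z6 - d1) * S_eta, -C_eta]])
        # Remaining (known) terms moved to the right-hand side.
        b = np.array([
            -(theta_dd * e2 * S_th + theta_d**2 * e2 * C_th
              - 2.0 * eta_d * z6_d * C_eta + eta_d**2 * (z6 - d1) * S_eta),
            -(theta_dd * e2 * C_th - theta_d**2 * e2 * S_th
              + 2.0 * eta_d * z6_d * S_eta + eta_d**2 * (z6 - d1) * C_eta),
        ])
        eta_dd, z6_dd = np.linalg.solve(A, b)
        return eta_dd, z6_dd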


[0329] Supplemental differentiation of equation (198) for the later entropy production calculation yields:




k_{wn}\dot z_{12n} = -c_{wn}\ddot z_{12n} + \dot\lambda_{3n} C_\alpha C_\beta - \dot\alpha\lambda_{3n} S_\alpha C_\beta - \dot\beta\lambda_{3n} C_\alpha S_\beta   (206)



[0330] therefore
\ddot z_{12n} = \frac{\dot\lambda_{3n} C_\alpha C_\beta - \dot\alpha\lambda_{3n} S_\alpha C_\beta - \dot\beta\lambda_{3n} C_\alpha S_\beta - k_{wn}\dot z_{12n}}{c_{wn}}   (207)


[0331] or from the third equation of constraint:


[0332] \{\ddot z_{12n}\cos\alpha - \dot z_{12n}\dot\alpha\cos\alpha - \ddot\alpha z_{12n}\sin\alpha - \dot\alpha\dot z_{12n}\sin\alpha - \dot\alpha^2 z_{12n}\cos\alpha + (\ddot\alpha+\dot\theta_n) e_{3n}\cos(\alpha+\gamma_n+\theta_n) - (\dot\alpha+\dot\theta_n)^2 e_{3n}\sin(\alpha+\gamma_n+\theta_n) - \ddot\alpha c_{2n}\sin(\alpha+\gamma_n) - \dot\alpha^2 c_{2n}\cos(\alpha+\gamma_n) + \ddot\alpha b_{2n}\cos\alpha - \dot\alpha^2 b_{2n}\sin\alpha\}\cos\beta - \dot\beta\{\dot z_{12n}\cos\alpha - \dot\alpha z_{12n}\sin\alpha + (\dot\alpha+\dot\theta_n) e_{3n}\cos(\alpha+\gamma_n+\theta_n) - \dot\alpha c_{2n}\sin(\alpha+\gamma_n) + \dot\alpha b_{2n}\cos\alpha\}\sin\beta - \ddot\beta[\{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_{1n}\cos\beta] - \dot\beta[\{\dot z_{12n}\cos\alpha - \dot\alpha z_{12n}\sin\alpha + (\dot\alpha+\dot\theta_n) e_{3n}\cos(\alpha+\gamma_n+\theta_n) - (\dot\alpha+\dot\gamma_n) c_{2n}\sin(\alpha+\gamma_n) + \dot\alpha b_{2n}\cos\alpha\}\sin\beta + \dot\beta\{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - \dot\beta a_{1n}\sin\beta] - \ddot R_n(t) = 0   (208)
\ddot z_{12n} = \frac{\{-\dot z_{12n}\dot\alpha\cos\alpha - \ddot\alpha z_{12n}\sin\alpha - \dot\alpha\dot z_{12n}\sin\alpha - \dot\alpha^2 z_{12n}\cos\alpha + (\ddot\alpha+\dot\theta_n) e_{3n}\cos(\alpha+\gamma_n+\theta_n) - (\dot\alpha+\dot\theta_n)^2 e_{3n}\sin(\alpha+\gamma_n+\theta_n) - \ddot\alpha c_{2n}\sin(\alpha+\gamma_n) - \dot\alpha^2 c_{2n}\cos(\alpha+\gamma_n) + \ddot\alpha b_{2n}\cos\alpha - \dot\alpha^2 b_{2n}\sin\alpha\}\cos\beta - \dot\beta\{\dot z_{12n}\cos\alpha - \dot\alpha z_{12n}\sin\alpha + (\dot\alpha+\dot\theta_n) e_{3n}\cos(\alpha+\gamma_n+\theta_n) - \dot\alpha c_{2n}\sin(\alpha+\gamma_n) + \dot\alpha b_{2n}\cos\alpha\}\sin\beta - \ddot\beta[\{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\sin\beta + a_{1n}\cos\beta] - \dot\beta[\{\dot z_{12n}\cos\alpha - \dot\alpha z_{12n}\sin\alpha + (\dot\alpha+\dot\theta_n) e_{3n}\cos(\alpha+\gamma_n+\theta_n) - (\dot\alpha+\dot\gamma_n) c_{2n}\sin(\alpha+\gamma_n) + \dot\alpha b_{2n}\cos\alpha\}\sin\beta + \dot\beta\{z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha\}\cos\beta - \dot\beta a_{1n}\sin\beta] - \ddot R_n(t)}{-\cos\alpha\cos\beta}   (209)


[0333] 5. Summarization for simulation.
\ddot\beta = \frac{2\dot\beta[\dot\alpha m_b A_1 A_2 + m_{sn} B_1\{\dot z_{6n} C_{\alpha\gamma\eta n} - (\dot\alpha+\dot\eta_n) z_{6n} S_{\alpha\gamma\eta n} - \dot\alpha A_4\} + m_{an} B_2\{(\dot\alpha+\dot\theta_n) e_{1n} C_{\alpha\gamma\theta n} - \dot\alpha A_6\} + m_{wn} B_3\{(\dot\alpha+\dot\theta_n) e_{3n} S_{\alpha\gamma\theta n} - \dot\alpha A_6\}] - \ddot\alpha m_{ba} A_2 + \dot\alpha^2 m_{ba} A_1 - \ddot z_{6n} m_{sn} a_{1n} C_{\alpha\gamma\eta n} + 2\dot z_{6n}(\dot\alpha+\dot\eta_n) m_{sn} a_{1n} S_{\alpha\gamma\eta n} + \ddot\eta_n m_{sn} z_{6n} a_{1n} S_{\alpha\gamma\eta n} + \dot\eta_n(2\dot\alpha+\dot\eta_n) m_{sn} z_{6n} a_{1n} C_{\alpha\gamma\eta n} + \ddot\theta_n m_{aw1n} a_{1n} C_{\alpha\gamma\theta n} + \dot\theta_n(2\dot\alpha+\dot\theta_n) m_{aw1n} a_{1n} S_{\alpha\gamma\theta n} + \ddot\alpha a_{1n}\{m_{sawcn} S_{\alpha\gamma n} - m_{sawbn} C_\alpha + m_{sn} z_{6n} S_{\alpha\gamma\eta n} - m_{aw1n} C_{\alpha\gamma\theta n}\} + \dot\alpha^2 a_{1n}\{m_{sawcn} C_{\alpha\gamma n} + m_{sawbn} S_\alpha + m_{sn} z_{6n} C_{\alpha\gamma\eta n} + m_{aw1n} S_{\alpha\gamma\theta n}\} - g[m_{ba} C_\beta + m_b A_1 S_\beta + \{m_{sn} z_{6n} C_{\alpha\gamma\eta n} + m_{aw1n} S_{\alpha\gamma\theta n} + m_{sawcn} C_{\alpha\gamma n} + m_{sawbn} S_\alpha\} S_\beta + m_{sawan} C_\beta] - \lambda_{3n}\{(z_{12n} C_\alpha + e_{3n} S_{\alpha\gamma\theta n} + c_{2n} C_{\alpha\gamma n} + b_{2n} S_\alpha) S_\beta - a_{1n} C_\beta\}}{-(m_{saw2n} + m_{baI} + m_b A_1^2 + m_{sn} B_1^2 + m_{an} B_2^2 + m_{wn} B_3^2)}   (210)

*The sign of λ3n is modified judging from the behavior of the model.

\ddot\alpha = \frac{m_{sn}(2\dot\alpha\dot z_{6n} + \ddot\eta_n z_{6n} + 2\dot\eta_n\dot z_{6n})(z_{6n}+E_{1n}) - 2\dot\alpha(m_{sn} z_{6n}\dot\eta_n E_{2n} + \dot\theta_n m_{aw1n} H_{2n}) + \ddot z_{6n} m_{sn} E_{2n} - \dot\eta_n^2 m_{sn} z_{6n} E_{2n} + \ddot\theta_n(m_{aw2In} - m_{aw1n} H_{1n}) - \dot\theta_n^2 m_{aw1n} H_{2n} + \ddot\beta a_{1n}(m_{sawcn} S_{\alpha\gamma n} - m_{sawbn} C_\alpha + m_{sn} z_{6n} S_{\alpha\gamma\eta n} - m_{aw1n} C_{\alpha\gamma\theta n}) - \dot\beta^2\{m_b A_2 A_1 - m_{sn} B_1(z_{6n} S_{\alpha\gamma\eta n} + A_4) + m_{an} B_2(e_{1n} C_{\alpha\gamma\theta n} - A_6) + m_{wn} B_{3n}(e_{3n} C_{\alpha\gamma\theta n} - A_6)\} + g m_b A_2 C_\beta - g\{m_{sn} z_{6n} S_{\alpha\gamma\eta n} - m_{aw1n} C_{\alpha\gamma\theta n} + m_{sawcn} S_{\alpha\gamma n} - m_{sawbn} C_\alpha\} C_\beta - \ddot\beta m_{ba} A_2 + \lambda_{3n}(z_{12n} S_\alpha - e_{3n} C_{\alpha\gamma\theta n} + c_{2n} S_{\alpha\gamma n} - b_{2n} C_\alpha) C_\beta}{-(m_{bbI} + m_{sawIn} + m_{sn} z_{6n}(z_{6n} + 2E_{1n}) - 2 m_{aw1n} H_{1n})}   (211)

\lambda_{1n} = \frac{m_{sn} z_{6n}\{\ddot\eta_n z_{6n} + 2\dot\eta_n\dot z_{6n} + \ddot\alpha(z_{6n}+E_1) + 2\dot\alpha\dot z_{6n} + \ddot\beta a_{1n} S_{\alpha\gamma\eta n} + \dot\alpha^2 E_2 + \dot\beta^2 B_1 S_{\alpha\gamma\eta n} - g S_{\alpha\gamma\eta n} C_\beta\} - \lambda_{2n}(z_{6n}-d_{1n}) S_{\eta n}}{-(z_{6n}-d_{1n}) C_{\eta n}}   (212)

\ddot\theta_n = \frac{\ddot\alpha(m_{aw2In} - m_{aw1n} H_1) - \ddot\beta m_{aw1n} a_{1n} C_{\alpha\gamma\theta n} + \dot\alpha^2 m_{aw1n} H_2 - \dot\beta^2(m_{an} B_2 e_{1n} C_{\alpha\gamma\theta n} + m_{wn} B_3 e_{3n} C_{\alpha\gamma\theta n}) + g m_{aw1n} C_{\alpha\gamma\theta n} C_\beta - \lambda_{1n} e_{2n} S_{\theta n} - \lambda_{2n} e_{2n} C_{\theta n} - \lambda_{3n} e_{3n} C_{\alpha\gamma\theta n} C_\beta + \frac{1}{2} k_{zi} e_{0i}^2\{\sin(\gamma_i+\theta_i) + \sin(\gamma_{ii}+\theta_{ii})\}\cos(\gamma_n+\theta_n)}{-m_{aw2In}}   (213)


[0334] In the above expression, the potential energy of the stabilizer is halved because there are left and right portions of the stabilizer.
\lambda_{2n} = \frac{m_{sn}\{\ddot z_{6n} + \ddot\alpha E_2 - \ddot\beta a_{1n} C_{\alpha\gamma\eta n} - \dot\eta_n^2 z_{6n} - \dot\alpha^2(z_{6n}+E_1) - \dot\beta^2 B_1 C_{\alpha\gamma\eta n} - 2\dot\eta_n\dot\alpha z_{6n} + g C_{\alpha\gamma\eta n} C_\beta\} + k_{sn}(z_{6n}-l_{sn}) + c_{sn}\dot z_{6n} + \lambda_{1n} S_{\eta n}}{-C_{\eta n}}   (214)

\lambda_{3n} = \frac{c_{wn}\dot z_{12n} + k_{wn}(z_{12n}-l_{wn})}{C_\alpha}   (215)

\ddot\eta_n = \frac{\ddot\theta_n e_{2n} S_{\theta n} + \dot\theta_n^2 e_{2n} C_{\theta n} - \ddot z_{6n} S_{\eta n} - 2\dot\eta_n\dot z_{6n} C_{\eta n} + \dot\eta_n^2(z_{6n}-d_{1n}) S_{\eta n}}{(z_{6n}-d_{1n}) C_{\eta n}}   (216)

\ddot z_{6n} = \frac{\ddot\theta_n e_{2n} C_{\theta n} - \dot\theta_n^2 e_{2n} S_{\theta n} + \ddot\eta_n(z_{6n}-d_{1n}) S_{\eta n} + 2\dot\eta_n\dot z_{6n} S_{\eta n} + \dot\eta_n^2(z_{6n}-d_{1n}) C_{\eta n}}{C_{\eta n}}   (217)

\dot z_{12n} = \frac{\{\dot\alpha z_{12n} S_\alpha - (\dot\alpha+\dot\theta_n) e_{3n} C_{\alpha\gamma\theta n} + \dot\alpha c_{2n} S_{\alpha\gamma n} - \dot\alpha b_{2n} C_\alpha\} C_\beta + \dot\beta[\{z_{12n} C_\alpha + e_{3n} S_{\alpha\gamma\theta n} + c_{2n} C_{\alpha\gamma n} + b_{2n} S_\alpha\} S_\beta + a_{1n} C_\beta] + \dot R_n(t)}{C_\alpha C_\beta}   (218)


[0335] where


[0336] n=i,ii


[0337] m_{ba} = m_b(a_0 + a_1)

[0338] m_{bbI} = m_b(b_0^2 + c_0^2) + I_{bx}

[0339] m_{baI} = m_b(a_0 + a_1)^2 + I_{by}

[0340] m_{sawan} = (m_{sn} + m_{an} + m_{wn}) a_{1n}

[0341] m_{sawbn} = (m_{sn} + m_{an} + m_{wn}) b_{2n}

[0342] m_{sawcn} = m_{sn} c_{1n} + (m_{an} + m_{wn}) c_{2n}

[0343] m_{sawIn} = m_{an} e_{1n}^2 + m_{wn} e_{3n}^2 + m_{sn}(c_{1n}^2 + b_{2n}^2 - 2 c_{1n} b_{2n}\sin\gamma_n)

[0344] + (m_{an} + m_{wn})(c_{2n}^2 + b_{2n}^2 - 2 c_{1n} b_{2n}\sin\gamma_n) + I_{axn}

[0345] m_{aw2In} = m_{an} e_{1n}^2 + m_{wn} e_{3n}^2 + I_{axn}

[0346] m_{aw1n} = m_{an} e_{1n} + m_{wn} e_{3n}

[0347] m_{aw2n} = m_{an} e_{1n}^2 + m_{wn} e_{3n}^2

[0348] A_1 = b_0\sin\alpha + c_0\cos\alpha

[0349] A_2 = b_0\cos\alpha - c_0\sin\alpha

[0350] A_{4n} = c_{1n}\sin(\alpha+\gamma_n) - b_{2n}\cos\alpha

[0351] A_{6n} = c_{2n}\sin(\alpha+\gamma_n) - b_{2n}\cos\alpha

[0352] B_{1n} = z_{6n}\cos(\alpha+\gamma_n+\eta_n) + c_{1n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha

[0353] B_{2n} = e_{1n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha

[0354] B_{3n} = e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha

[0355] E_{1n} = c_{1n}\cos\eta_n - b_{2n}\sin(\gamma_n+\eta_n)

[0356] E_{2n} = c_{1n}\sin\eta_n + b_{2n}\cos(\gamma_n+\eta_n)

[0357] H_{1n} = c_{2n}\sin\theta_n - b_{2n}\cos(\gamma_n+\theta_n)

[0358] H_{2n} = c_{2n}\cos\theta_n + b_{2n}\sin(\gamma_n+\theta_n)   (219)


[0359] S_\alpha = \sin\alpha, \; S_\beta = \sin\beta, \; S_{\alpha\gamma n} = \sin(\alpha+\gamma_n), \; S_{\alpha\gamma\eta n} = \sin(\alpha+\gamma_n+\eta_n), \; S_{\alpha\gamma\theta n} = \sin(\alpha+\gamma_n+\theta_n)


[0360] C_\alpha = \cos\alpha, \; C_\beta = \cos\beta, \; C_{\alpha\gamma n} = \cos(\alpha+\gamma_n), \; C_{\alpha\gamma\eta n} = \cos(\alpha+\gamma_n+\eta_n), \; C_{\alpha\gamma\theta n} = \cos(\alpha+\gamma_n+\theta_n)   (220)


[0361] The initial conditions are:


[0362] {dot over (β)}={dot over (α)}={dot over (η)}n={dot over (θ)}n={dot over (z)}6n={dot over (z)}12n=0  (221)


[0363] β=α=0  (222)
z_{6n} = l_{sn} + l_{s0n}, \quad l_{s0n} \approx \frac{0.6\, m_b}{2 k_{sn}}\, g, \qquad z_{12n} = l_{wn} + l_{w0n}, \quad l_{w0n} \approx \frac{0.6\,(m_b + m_{sn} + m_{an} + m_{wn})}{2 k_{wn}}\, g   (223)

\eta_n = \cos^{-1}\frac{(z_{6n}-d_{1n})^2 + (c_{1n}-c_{2n})^2 - e_{2n}^2}{-2(c_{1n}-c_{2n})(z_{6n}-d_{1n})}   (224)

\theta_n = \sin^{-1}\frac{(z_{6n}-d_{1n})^2 - e_{2n}^2 - (c_{1n}-c_{2n})^2}{-2 e_{2n}(c_{1n}-c_{2n})}   (225)


[0364] R_n(0) = z_{12n}\cos\alpha + e_{3n}\sin(\alpha+\gamma_n+\theta_n) + c_{2n}\cos(\alpha+\gamma_n) + b_{2n}\sin\alpha   (226)


[0365] where lsn is the free length of the suspension spring, ls0n is the initial suspension spring deflection at one g, lwn is the free length of the spring component of the wheel, and lw0n is the initial wheel spring deflection at one g.
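To make the summarization concrete, the sketch below sets the static initial conditions from equations (221)-(225) and then advances the state with a simple semi-implicit Euler loop in which the accelerations of equations (210)-(218) would be evaluated at every step. The function names, the dictionary-based state, and the generic acceleration callback are assumptions made for this illustration; the specification itself does not prescribe an integration scheme.

    import math

    def initial_deflections(m_b, m_sn, m_an, m_wn, k_sn, k_wn,
                            l_sn, l_wn, c_1n, c_2n, d_1n, e_2n, g=9.81):
        """Static one-g initial conditions per equations (223)-(225), one wheel."""
        l_s0n = 0.6 * m_b * g / (2.0 * k_sn)                          # (223)
        l_w0n = 0.6 * (m_b + m_sn + m_an + m_wn) * g / (2.0 * k_wn)
        z_6n, z_12n = l_sn + l_s0n, l_wn + l_w0n
        eta_n = math.acos(((z_6n - d_1n)**2 + (c_1n - c_2n)**2 - e_2n**2)
                          / (-2.0 * (c_1n - c_2n) * (z_6n - d_1n)))    # (224)
        theta_n = math.asin(((z_6n - d_1n)**2 - e_2n**2 - (c_1n - c_2n)**2)
                            / (-2.0 * e_2n * (c_1n - c_2n)))           # (225)
        return z_6n, z_12n, eta_n, theta_n

    def integrate(q0, v0, accel_fn, dt, n_steps):
        """Semi-implicit Euler loop; accel_fn(q, v, t) should return the
        accelerations of equations (210)-(218) for the current state."""
        q, v = dict(q0), dict(v0)
        trajectory = [(0.0, dict(q))]
        for k in range(n_steps):
            a = accel_fn(q, v, k * dt)
            for name in v:
                v[name] += dt * a[name]   # update velocities first,
                q[name] += dt * v[name]   # then the coordinates
            trajectory.append(((k + 1) * dt, dict(q)))
        return trajectory

Per equations (221)-(222), all generalized velocities and the angles α and β start at zero, so only the deflections and link angles above need to be computed before the loop begins.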


[0366] IV. Equations for Entropy Production


[0367] Minimum entropy production (for use in the fitness function of the genetic algorithm) is expressed as:
\frac{\partial S_\beta}{\partial t} = \frac{-2\dot\beta^2[\dot\alpha m_b A_1 A_2 + m_{sn} B_1\{\dot z_{6n} C_{\alpha\gamma\eta n} - (\dot\alpha+\dot\eta_n) z_{6n} S_{\alpha\gamma\eta n} - \dot\alpha A_4\} + m_{an} B_2\{(\dot\alpha+\dot\theta_n) e_{1n} C_{\alpha\gamma\theta n} - \dot\alpha A_6\} + m_{wn} B_3\{(\dot\alpha+\dot\theta_n) e_{3n} S_{\alpha\gamma\theta n} - \dot\alpha A_6\}]}{m_{saw2n} + m_{baI} + m_b A_1^2 + m_{sn} B_1^2 + m_{an} B_2^2 + m_{wn} B_3^2}   (227)

\frac{\partial S_\alpha}{\partial t} = \frac{-2\dot\alpha^2\{m_{sn}\dot\alpha\dot z_{6n}(z_{6n}+E_{1n}) + m_{sn} z_{6n}\dot\eta_n E_{2n} + \dot\theta_n m_{aw1n} H_{2n}\}}{m_{bbI} + m_{sawIn} + m_{sn} z_{6n}(z_{6n}+2E_{1n}) - 2 m_{aw1n} H_{1n}}   (228)

\frac{\partial S_{\eta n}}{\partial t} = \dot\eta_n^3\tan\eta_n - \frac{2\dot\eta_n^2\dot z_{6n}}{z_{6n}-d_{1n}}   (229)

\frac{\partial S_{z6n}}{\partial t} = 2\dot\eta_n\dot z_{6n}^2\tan\eta_n   (230)

\frac{\partial S_{z12n}}{\partial t} = \dot z_{12n}^2(\dot\alpha + \dot\alpha\tan\alpha + 2\dot\beta\tan\beta)   (231)
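As a minimal sketch of how these entropy production terms can enter the fitness function of the genetic algorithm, the following Python function evaluates the rates of equations (229)-(231) for one wheel; the α and β rates (227)-(228) follow the same pattern but require the full set of mass and geometry constants. The function name and the idea of summing the rates into a single penalty are assumptions for illustration only.

    import math

    def entropy_production_rates(eta, eta_dot, z6, z6_dot, z12_dot,
                                 alpha, alpha_dot, beta, beta_dot, d_1n):
        """Entropy production rates per equations (229)-(231), one wheel."""
        dS_eta = (eta_dot**3 * math.tan(eta)
                  - 2.0 * eta_dot**2 * z6_dot / (z6 - d_1n))           # (229)
        dS_z6 = 2.0 * eta_dot * z6_dot**2 * math.tan(eta)              # (230)
        dS_z12 = z12_dot**2 * (alpha_dot + alpha_dot * math.tan(alpha)
                               + 2.0 * beta_dot * math.tan(beta))      # (231)
        return dS_eta, dS_z6, dS_z12

A fitness term for the genetic algorithm can then, for example, accumulate these rates over a simulation run and favor candidate controllers that keep the accumulated entropy production small, consistent with the minimum entropy production criterion stated in paragraph [0367].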



Other Embodiments

[0368] Although the foregoing has been a description and illustration of specific embodiments of the invention, various modifications and changes can be made thereto by persons skilled in the art, without departing from the scope and spirit of the invention as defined by the following claims.


Claims
  • 1. A method for controlling an internal combustion engine, comprising the steps of: measuring first information from said engine by using a first plurality of sensors; providing said first information to a first engine control system, said first engine control system configured to provide a desired accuracy for said engine, said first control system providing a first control signal; measuring second information from said engine by using a second plurality of sensors, where said second plurality of sensors comprises fewer sensors than said first plurality of sensors; providing said second information to a second engine control system, said second engine control system providing a second control signal; and configuring said second engine control system using said first engine control signal and said second engine control signal.
  • 2. The method of claim 1, wherein said step of configuring comprises generating a physical criteria and generating an information criteria.
  • 3. The method of claim 2, wherein said physical criteria is calculated by an entropy model based on thermodynamic properties of said engine.
  • 4. The method of claim 3, wherein said physical criteria is calculated by an entropy model.
  • 5. The method of claim 4, wherein said second control system is adapted to reduce an entropy production in said second control system and said engine.
  • 6. The method of claim 5, wherein said thermodynamic model is based on engine air temperature and engine water temperature.
  • 7. The method of claim 2, wherein said optimizer uses a genetic algorithm having a fitness function, wherein a portion of said fitness function is based on entropy.
  • 8. The method of claim 2, wherein said step of configuring further comprises providing said physical criteria and said information criteria to a genetic algorithm having a fitness function, said fitness function based on entropy.
  • 9. The method of claim 2, wherein said step of configuring further comprises providing a training signal to a fuzzy neural network in said second control system.
  • 10. The method of claim 1, wherein said first plurality of sensors and said second plurality of sensors comprise a temperature sensor.
  • 11. The method of claim 1, wherein said first plurality of sensors comprises an oxygen sensor.
  • 12. The method of claim 1, wherein said fuzzy neural network is trained in an off-line mode.
  • 13. A control apparatus configured to control an engine, said apparatus comprising: engine control means for generating an engine control signal based on information from a plurality of sensors measuring said engine, said engine control means trained by optimizer means for generating a training signal, said optimizer means generating said training signal using said control signal and an optimal control signal provided by an optimal control means.
  • 14. A control system adapted to control an engine, comprising: a reduced plurality of sensors configured to measure first information about said engine; a first engine controller configured to receive at least a portion of said first information, said first engine controller trained to produce a first control signal, where said first engine controller is trained to use relatively more of said at least a portion of said first information signal in order to reduce an entropy production of said engine.
  • 15. The apparatus of claim 14, wherein said first engine controller comprises a fuzzy neural network configured to be trained by a genetic analyzer having a first fitness function, said first fitness function configured to increase mutual information between said first control signal and a second control signal, said second control signal provided by a second controller configured to receive information from a second plurality of sensors, wherein said second plurality of sensors is greater than said first plurality of sensors.
  • 16. The apparatus of claim 15, wherein said genetic analyzer further comprises a second fitness function configured to reduce entropy production rate of said engine.
  • 17. The apparatus of claim 16, wherein said genetic analyzer is configured to use said second fitness function to realize a node correction in said fuzzy neural network.
  • 18. The apparatus of claim 15, wherein said first plurality of sensors comprises a temperature sensor.
  • 19. The apparatus of claim 15, wherein said second plurality of sensors comprises an oxygen sensor.
  • 20. The apparatus of claim 15, wherein said first plurality of sensors comprises a water temperature sensor, an air temperature sensor, and an airflow sensor.
  • 21. The apparatus of claim 15, wherein said first plurality of sensors comprises a water temperature sensor, an air temperature sensor, an airflow sensor, and an oxygen sensor.
  • 22. The apparatus of claim 15, wherein said first control signal comprises an injector control signal configured to control a fuel injector.
RELATED APPLICATION

[0001] This application is a continuation of U.S. application Ser. No. 09/176,987, filed on Oct. 22, 1998.

Continuations (1)
Number Date Country
Parent 09176987 Oct 1998 US
Child 09776413 Feb 2001 US