This application is based on and hereby claims priority to European Application No. 05105786.7 filed on 29 Jun. 2005, the contents of which are hereby incorporated by reference.
It is already well known that tolerances of important influence factors must be taken into account when planning a technical system or a technical product. Conventionally, high safety margins are provided in order to take no risk, or risks as low as possible, in the planning of a system. This may lead to high fabrication and/or operation costs. For some special applications, software packages exist:
COMREL
COMREL is based on FORM/SORM and exists in two variants (FORM/SORM are first and second order reliability methods). COMREL is intended for the reliability analysis of components. COMREL consists of two parts: COMREL-TI for time-invariant and COMREL-TV for time-variant reliability analysis. The basis for both program parts is the first order (FORM) or second order (SORM) method. COMREL-TI can be supplied separately; COMREL-TV is based on COMREL-TI. COMREL uses two alternative, efficient and robust algorithms to find the so-called beta point (the point of locally highest constraint or failure probability). The beta point is the basis of the FORM/SORM method for the probability integration. Other options for probability integration are mean value first order (MVFO), Monte Carlo simulation, adaptive simulation, spherical simulation and several importance sampling schemes. 44 different probability distributions (univariate stochastic models) are usable. Arbitrary dependency structures can be generated with the aid of the Rosenblatt transformation, the equivalent correlation coefficients according to Nataf or Der Kiureghian, or the Hermite models. In addition to the reliability index, importance values for all relevant input values are calculated: the global influence of the basic variables on the reliability index, sensitivities and elasticities for the distribution parameters, the mean values and the standard deviations of the basic variables, and sensitivities and elasticities for deterministic parameters in the constraint or failure function. From the sensitivity analysis, partial safety factors are derived. Parameter studies can be performed for arbitrary values, e.g. for a distribution parameter, a correlation coefficient or a deterministic parameter. Based on a parameter study, charts of the reliability index, of the failure or survival probability, of the influences of basic variables or of deterministic parameters, and of the expectation value of a cost function can be generated. All results are available as a structured text file and as a file for the generation of plots. Charts generated in COMREL and formatted with the extensive plot options can be exported with the usual Windows facilities (clipboard, metafile, bitmap). If necessary, a detailed printout of intermediate results for a failure search can be generated.
NESSUS
NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is an integrated finite element program with probabilistic loads. It provides probabilistic sensitivities relating to μ and σ via FORM/SORM/FPI (fast probability integration) and offers connections to ANSYS, ABAQUS and DYNA3D. NESSUS is a modular computer software system for performing probabilistic analysis of structural/mechanical components and systems. NESSUS combines state-of-the-art probabilistic algorithms with general-purpose numerical analysis methods to compute the probabilistic response and reliability of engineered systems. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. Many deterministic modeling tools can be used, such as finite element, boundary element, hydrocodes, and user-defined Fortran subroutines.
DARWIN
DARWIN (Design Assessment of Reliability With Inspection)
This software integrates finite element stress analysis results, fracture-mechanics based life assessment for low-cycle fatigue, material anomaly data, probability of anomaly detection and inspection schedules to determine the probability of fracture of a rotor disc as a function of applied operating cycles. The program also indicates the regions of the disk most likely to fail, and the risk reduction associated with single and multiple inspections. This software will be enhanced to handle anomalies in cast/wrought and powder nickel disks and manufacturing and maintenance-induced surface defects in all disk materials in the near future.
The programs NESSUS, DARWIN and COMREL have a certain distribution in industry. All those programs merely concern mechanical reliability analysis. Finite element packages are integrated in which stochastics are directly incorporated. Thus the stochastic distribution of the load can directly be converted into the distributions of the displacements, and a component part can thereby be divided into risk zones. For selected external finite element programs, interfaces exist in NESSUS and COMREL. Moreover, DARWIN and COMREL offer merely a limited instationary analysis; the process variables can be stochastic processes only to a limited extent. Stochastic optimization is not integrated in NESSUS, DARWIN or COMREL.
Nonlinear optimization algorithms cope with the problem
min f({right arrow over (x)}) subject to g({right arrow over (x)})≦0 (1)
where g({right arrow over (x)}) is a constraint, especially a failure, of an arbitrary value, the constraint or failure being caused by deterministic input parameters.
When {right arrow over (x)} are no longer deterministic variables but stochastic random variables (e.g. normally distributed random variables {right arrow over (x)} ∈ N({right arrow over (μ)}, Σ)), the deterministic optimization problem (1) passes into the following probabilistic optimization problem:
min E(f({right arrow over (x)})) subject to P(g({right arrow over (x)})≦0)≦tol (2)
That is, the expectation value respectively mean value of the target size, E(f({right arrow over (x)})), is minimized, and the constraints may be violated only up to a prescribed probability tolerance tol. The mean values {right arrow over (μ)} of the input parameters are the design parameters.
A popular method for the computation of the response of a stochastic system is the Monte-Carlo method. The computation of the mean value and the variance of the system y=f({right arrow over (x)}) is presented in the following Table:
Monte Carlo method:
START: Determine a set {right arrow over (x)}1, . . . , {right arrow over (x)}m, which represents the distribution of the input parameters.
LOOP: Evaluate the system response yj=f({right arrow over (x)}j) for j=1, . . . , m.
END: Estimate the mean value E(Y)≅(1/m)Σjyj and the variance V(Y)≅(1/(m−1))Σj(yj−E(Y))2.
In order to assure a correct computation of the characteristic sizes E(Y) and V(Y), the size m of the ensemble must be very large. Hence, embedding the Monte-Carlo method into a framework of optimization is difficult in practical cases: to handle computational fluid dynamics or large Finite Element problems in reasonable time, even a supercomputer or a large cluster of workstations would not suffice.
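For illustration, a minimal Monte-Carlo sketch in Python may look as follows; the model function f and the input distribution are hypothetical stand-ins for an expensive CFD or Finite Element evaluation, and the point of the sketch is that each of the m samples costs one full model call.

```python
import numpy as np

def f(x):
    # Hypothetical, cheap stand-in for an expensive CFD or Finite Element evaluation.
    return np.exp(-3.0 * x[0]) + 2.0 * np.arctan(x[0]) * (x[1] ** 2 + 1.0)

rng = np.random.default_rng(0)
mu = np.array([0.5, 0.0])        # assumed mean values of the input parameters
sigma = np.array([0.1, 0.2])     # assumed standard deviations (independent inputs)

m = 100_000                      # ensemble size; must be very large for stable E(Y), V(Y)
samples = rng.normal(mu, sigma, size=(m, 2))
y = np.array([f(x) for x in samples])   # one full model call per sample -> the bottleneck

E_Y = y.mean()                   # Monte-Carlo estimate of E(Y)
V_Y = y.var(ddof=1)              # Monte-Carlo estimate of V(Y)
print(f"E(Y) ~ {E_Y:.4f}, V(Y) ~ {V_Y:.4f}")
```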
An aspect is to efficiently reduce the costs of designing nonlinear technical systems, such as technical products or technical processes. In particular, optimized operating points of the technical systems should be found easily, efficiently and within a short period of time, and computational fluid dynamics or large Finite Element problems should be handled in reasonable time. Accordingly, the optimization of the system should be "time efficient", that is, the period of time necessary for achieving an optimized result should be short in comparison with known methods, for example the Monte-Carlo method.
The present method was developed in order to optimize nonlinear technical systems which are afflicted with uncertainties. Input parameters or model parameters of general technical systems may fluctuate, i.e. may have an underlying stochastic distribution. These uncertainties of the input parameters are carried over to the target values, which thus also have underlying stochastic distributions. A continuous auxiliary function is calculated on the basis of these stochastic dependencies. Afterwards, using the auxiliary function ({tilde over (ƒ)}i({right arrow over (x)}); {tilde over (g)}i({right arrow over (x)})), at least one optimizing parameter (E({tilde over (ƒ)}i({right arrow over (x)})); Var({tilde over (ƒ)}i({right arrow over (x)})); P({tilde over (g)}i({right arrow over (x)}))) is generated. An objective function is used for optimizing the optimizing parameter, thereby generating a discrete technical system dependence ({tilde over (ƒ)}i({right arrow over (x)}i); {tilde over (g)}i({right arrow over (x)}i)). This dependence corresponds to an interpolation point. The newly generated interpolation point is used for making the stochastic dependencies more accurate by adding the interpolation point to the stochastic dependencies of the technical system. Again a continuous auxiliary function is calculated by interpolation in order to repeat the two-step cycle. The cycle can be repeated until the difference of successive optimized optimizing parameters (|E({tilde over (ƒ)}i({right arrow over (x)}i))−E({tilde over (ƒ)}i+1({right arrow over (x)}i+1))|; |Var({tilde over (ƒ)}i({right arrow over (x)}i))−Var({tilde over (ƒ)}i+1({right arrow over (x)}i+1))|; |P({tilde over (g)}i({right arrow over (x)}i))−P({tilde over (g)}i+1({right arrow over (x)}i+1))|) is as low as desired. The last additional discrete technical system dependence ({tilde over (ƒ)}p({right arrow over (x)}p); {tilde over (g)}p({right arrow over (x)}p)) belonging thereto is usable as an optimal technical system operating point. Thus the technical system, described by certain physical values (physical values of the nonlinear technical system can be length, area, volume, angle, time, frequency, velocity, acceleration, mass, density, force, moment, work, energy, power, pressure, tension, viscosity, and all further physical kinds of quantities), is optimized stochastically. For example, {tilde over (ƒ)}p({right arrow over (x)}p) is the transmitting power of a transmitter depending on the area and/or the alignment angle of an antenna. Knowing the method described herein, one skilled in the art is able to stochastically optimize arbitrary technical systems based on technical and/or physical parameters without being inventive.
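A minimal sketch of the two-step cycle, under simplifying assumptions, is given below. The quadratic least-squares surrogate, the optimizer and the placeholder system function are illustrative choices only, and for brevity the sketch optimizes the auxiliary function itself rather than a statistic such as E({tilde over (ƒ)}i({right arrow over (x)})) derived from it.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_system(x):
    # Placeholder for the real technical system y = f(x) (one call = one expensive simulation).
    return (x[0] - 1.0) ** 2 + np.sin(3.0 * x[0]) + (x[1] + 0.5) ** 2

def fit_quadratic(X, y):
    # Continuous auxiliary function: full quadratic surface fitted by least squares.
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: (c[0] + c[1] * x[0] + c[2] * x[1]
                      + c[3] * x[0] ** 2 + c[4] * x[0] * x[1] + c[5] * x[1] ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(8, 2))            # initial interpolation points
y = np.array([expensive_system(x) for x in X])

prev_opt = np.inf
for i in range(20):                                 # two-step cycles i = 1, 2, ...
    auxiliary = fit_quadratic(X, y)                 # step 1: continuous auxiliary function
    res = minimize(auxiliary, X[np.argmin(y)],      # step 2: optimize the auxiliary function
                   method="L-BFGS-B", bounds=[(-3.0, 3.0), (-3.0, 3.0)])
    x_new = res.x
    y_new = expensive_system(x_new)                 # new discrete technical system dependence
    X = np.vstack([X, x_new])                       # add interpolation point -> higher accuracy
    y = np.append(y, y_new)
    if abs(prev_opt - res.fun) < 1e-6:              # stop once successive optima hardly change
        break
    prev_opt = res.fun

print("operating point ~", x_new, " system value ~", y_new)
```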
Alternative or cumulative objective functions may be used. In contrast to the state of the art, the present method is a stochastic optimizer allowing a common interface. In this case the present method performs optimization without modeling. The focal point clearly lies on stochastic optimization.
An additional discrete technical system dependence ({tilde over (ƒ)}i({right arrow over (x)}i); {tilde over (g)}i({right arrow over (x)}i)) can be the basis for an additional interpolation point used for calculating a continuous auxiliary function with a higher accuracy than the preceding continuous auxiliary function. Repetition is performed by executing a first two-step cycle (i=1), followed by a second two-step cycle (i=2), followed by a third two-step cycle (i=3) and so on, up to a last two-step cycle with i=p. Accordingly i=1, 2, 3 . . . p, or in other terms i ∈ {1, 2, . . . , p} ⊂ N.
According to the present method an improved approach is proposed. Based on the present method, optimizing the mean value and optimizing the variance of the target parameter are two efficient alternatives for objective functions. The objective function can be provided by optimizing the mean value (E(f({right arrow over (x)}))) determined by the formula
E(y)=∫f({right arrow over (x)})ρ({right arrow over (x)})d{right arrow over (x)} (3)
and/or by optimizing the variance (Var(f({right arrow over (x)}))) of the target value (y=f({right arrow over (x)})) determined by the formula
Var(y)=∫(f({right arrow over (x)})−E(y))2ρ({right arrow over (x)})d{right arrow over (x)} (4)
where ρ({right arrow over (x)}) is a probability density function of the input parameter distribution. The preceding integrals for the system y=f({right arrow over (x)}) are calculated numerically. The efficient calculation of the mean value (expectation value) and of the variance of a system is discussed in DE 10308314.6 "Statistische Analyse eines technischen Ausgangsparameters unter Berücksichtigung der Sensitivität", the content of which is hereby incorporated into the disclosure of the present description.
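As an illustration of how such moment integrals can be evaluated numerically for a one-dimensional, normally distributed input parameter, a Gauss-Hermite quadrature sketch is given below; the target function, mean and standard deviation are hypothetical, and this quadrature rule is merely one possible choice, not necessarily the one used in the referenced analysis.

```python
import numpy as np

def f(x):
    return np.exp(-3.0 * x) + 2.0 * np.arctan(x)    # hypothetical target function y = f(x)

mu, sigma = 0.5, 0.1                                # assumed normal input distribution

# Gauss-Hermite quadrature approximates integrals of the form int f(x) rho(x) dx
# for a normal density rho; here it evaluates the mean value (3) and the variance (4).
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
x = mu + sigma * nodes
w = weights / np.sqrt(2.0 * np.pi)                  # weights of the standard normal density

E_y = np.sum(w * f(x))
Var_y = np.sum(w * (f(x) - E_y) ** 2)
print(f"E(y) ~ {E_y:.6f}, Var(y) ~ {Var_y:.6f}")
```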
According to an advantageous embodiment, optimizing is performed by minimizing or maximizing the mean value (E(f({right arrow over (x)}))) and/or minimizing the magnitude of the variance (|Var(f({right arrow over (x)}))|) of the target value (y=f({right arrow over (x)})).
According to another advantageous embodiment the objective function is alternatively or cumulatively provided by optimizing the tolerances (σi, i=1, . . . , n) of the input parameters ({right arrow over (x)}), the input parameter tolerances (σi) especially being maximized.
The calculation of the maximally allowed tolerances with the present method can provide efficient reductions of costs.
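A rough sketch of such a tolerance maximization, assuming a hypothetical limit-state function and a common scale factor for all tolerances σi, could use a simple bisection on Monte-Carlo estimates of the failure probability; the method itself would rather rely on the approximations described further below, since Monte-Carlo estimates of small probabilities are coarse.

```python
import numpy as np

rng = np.random.default_rng(9)
eps = rng.standard_normal((50_000, 2))        # fixed standard normal samples
mu = np.array([0.0, 0.0])                     # fixed design means
tol = 1e-3                                    # allowed constraint or failure probability

def g(x):
    # Hypothetical constraint function; failure for g(x) <= 0.
    return 4.0 - x[..., 0] - x[..., 1]

def failure_prob(sigma):
    # Monte-Carlo estimate of P(g <= 0) for given input tolerances sigma_i.
    return (g(mu + sigma * eps) <= 0.0).mean()

# Bisection on a common scale factor for the tolerances: enlarge the input scatter
# as far as possible while the failure probability stays within tol.
lo, hi = 0.01, 5.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if failure_prob(np.array([mid, mid])) <= tol:
        lo = mid
    else:
        hi = mid

print(f"maximal common tolerance ~ {lo:.3f}, P_f ~ {failure_prob(np.array([lo, lo])):.5f}")
```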
According to another advantageous embodiment optimizing is performed by additionally keeping a constraint like a failure probability
P(g({right arrow over (x)})≦0)=∫g({right arrow over (x)})≦0ρ({right arrow over (x)})d{right arrow over (x)} (6)
of another value under or equal to a prescribed probability tolerance (tol), where ρ({right arrow over (x)}) is a probability density function of the input parameter distribution. Thus general nonlinear deterministic constraints may be added to the optimization problem. Accordingly the present method is capable of treating stochastic constraints. Hence, the present method can keep a constraint, e.g. the failure probability in a mechanical system, within a prescribed limit of tolerance. Keeping a constraint or failure probability under or equal to a prescribed probability tolerance (tol) means keeping a probability of differences to constraints (P(g({right arrow over (x)})≦0)) under or equal to a prescribed probability tolerance (tol). Constraints can be nonlinear deterministic and/or stochastic.
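A sketch of such a chance-constrained optimization, with a hypothetical target function, a hypothetical constraint function and sampled probability estimates, might look as follows; fixed random numbers (common random numbers) keep the sampled objective and constraint deterministic for the optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
eps = rng.standard_normal((20_000, 2))        # fixed noise (common random numbers)
sigma = np.array([0.1, 0.1])                  # assumed input tolerances
tol = 0.05                                    # prescribed probability tolerance

def f(x):
    return (x[..., 0] - 1.0) ** 2 + x[..., 1] ** 2       # hypothetical target function

def g(x):
    return 1.2 - x[..., 0] - x[..., 1]                   # hypothetical constraint, failure for g <= 0

def mean_f(mu):
    return f(mu + sigma * eps).mean()                    # sampled estimate of E(f)

def failure_prob(mu):
    return (g(mu + sigma * eps) <= 0.0).mean()           # sampled estimate of P(g <= 0)

# Minimize E(f) subject to P(g <= 0) <= tol; COBYLA handles the inequality constraint
# without derivatives, which suits the sampled probability estimate.
res = minimize(mean_f, x0=[0.0, 0.0], method="COBYLA",
               constraints=[{"type": "ineq", "fun": lambda mu: tol - failure_prob(mu)}])
print("design means:", res.x, " E(f) ~", mean_f(res.x), " P_f ~", failure_prob(res.x))
```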
According to another advantageous embodiment the objective function is alternatively or cumulatively provided by optimizing, especially minimizing, a constraint or failure probability (P(g({right arrow over (x)})≦0)=∫g({right arrow over (x)})≦0ρ({right arrow over (x)})d{right arrow over (x)}) of another value, where ρ({right arrow over (x)}) is a probability density function of the input parameter distribution.
According to the advantageous embodiment of mixed stochastic input parameters ({right arrow over (x)}) and deterministic input parameters ({right arrow over (x)}D), ({right arrow over (x)}) is substituted by ({right arrow over (x)}, {right arrow over (x)}D) in the formulas and/or a constraint, like the failure of another value caused by the deterministic input parameters (h({right arrow over (x)}D)), is limited to ≦0.
According to another advantageous embodiment optimizing is performed by using sensitivities ∂f({right arrow over (x)})/∂xi of the target value (y=f({right arrow over (x)})) with respect to the input parameters ({right arrow over (x)}). As for the deterministic optimization (1), the sensitivities with respect to the input variables are required. The efficient calculation of the mean value (expectation value) and of the variance of a system is discussed in DE 10308314.6 "Statistische Analyse eines technischen Ausgangsparameters unter Berücksichtigung der Sensitivität", the content of which is again hereby incorporated into the disclosure of the present description.
The present probabilistic optimizer method is able to treat stochastic design variables with normal distributions and beta distributions. Both distributions may be handled at the same time, and the normally distributed variables may also be dependent. The normally distributed design variables have the density ρ({right arrow over (x)})=(2π)−n/2(det Σ)−1/2exp(−½({right arrow over (x)}−{right arrow over (μ)})TΣ−1({right arrow over (x)}−{right arrow over (μ)})).
The beta distributed design variables have the density ρ(x)=(x−a)p−1(b−x)q−1/(B(p,q)(b−a)p+q−1) for a≦x≦b, with shape parameters p, q>0, support [a, b] and the beta function B(p,q).
The beta distribution has the advantage that asymmetric distributions and, as a special case, even the uniform distribution can also be represented. If the input distributions are given in terms of discrete points, the parameters of the normal distribution may be identified by the Gauss-Newton algorithm.
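As an illustration of identifying the normal distribution parameters from discrete points, the following sketch fits μ and σ by a damped Gauss-Newton iteration (Levenberg-Marquardt); the discrete density points are synthetic assumptions, not data from the method.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

# Discrete points of an assumed input density to which a normal density is fitted.
rng = np.random.default_rng(2)
x_pts = np.linspace(-1.0, 2.0, 15)
rho_pts = norm.pdf(x_pts, loc=0.4, scale=0.3) * (1.0 + 0.02 * rng.standard_normal(15))

def residuals(params):
    mu, log_sigma = params
    # The log-parametrization keeps the standard deviation positive during the iteration.
    return norm.pdf(x_pts, loc=mu, scale=np.exp(log_sigma)) - rho_pts

# Levenberg-Marquardt ("lm") is a damped Gauss-Newton iteration on the residuals.
fit = least_squares(residuals, x0=[0.0, 0.0], method="lm")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"identified mu ~ {mu_hat:.3f}, sigma ~ {sigma_hat:.3f}")
```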
According to another advantageous embodiment, firstly a stochastic evaluation of the technical system is performed on the basis of a nonlinear technical system function and of the density functions of the system input parameters, and/or by calculating the mean value E(y)=∫f({right arrow over (x)})ρ({right arrow over (x)})d{right arrow over (x)} of the target value.
According to another advantageous embodiment, in case of discretely distributed input parameters ({right arrow over (x)}j with j=1,2,3, . . . , m) and corresponding discrete target values (yj=ƒ({right arrow over (x)}j)), the following steps can be performed: generating a nonlinear auxiliary model for the technical product or the technical process, especially by a polynomial approximation to the discrete data; further, time-efficiently optimizing the technical system using one or more of the objective functions including a statistic representation, thereby generating an operating point of the nonlinear technical system. Especially "high dimensional" input parameters ({right arrow over (x)}j), with j=1,2,3, . . . , m and for example with m≧20 (for "j" and "m" see the above described Monte-Carlo method), should be handled relatively rapidly, for example in comparison with the Monte-Carlo method. Other possibilities are for example m≧30, m≧50, m≧100, m≧500, m≧1000 . . . .
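A sketch of such a data-based auxiliary model, assuming synthetic discrete data (xj, yj) and a full quadratic polynomial fitted by least squares, may read as follows; once fitted, the cheap surrogate can be evaluated very often, so statistics of the target value become affordable.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete data (x_j, y_j), j = 1..m, e.g. measured operating points of a process.
m = 200
X = rng.normal([0.5, 0.0], [0.2, 0.3], size=(m, 2))
y = np.exp(-X[:, 0]) * (X[:, 1] ** 2 + 1.0) + 0.01 * rng.standard_normal(m)

def design(X):
    # Full quadratic polynomial basis as the nonlinear auxiliary model.
    X = np.atleast_2d(X)
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])

coeff, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # polynomial approximation to the data

def auxiliary(X):
    return design(X) @ coeff

# The cheap auxiliary model can be evaluated very often, so objective functions with a
# statistic representation (e.g. E(y)) become affordable even for many input parameters.
samples = rng.normal([0.6, 0.0], [0.2, 0.3], size=(100_000, 2))
print("auxiliary-model estimate of E(y) at a candidate design:", auxiliary(samples).mean())
```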
According to another advantageous embodiment the input parameters ({right arrow over (x)}) satisfy common stochastic differential equations, whose density evolution is described by the Fokker-Planck equation.
According to further advantageous embodiments the Response Surface Methods are the Hermite, the Laguerre, the Legendre and/or the Jacobi approximation. Other methods are also possible.
These and other aspects and advantages will become more apparent and more readily appreciated from the following description of the exemplary embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to the preferred embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
The difference between the approximations of the present method and the Monte-Carlo evaluations may be observed in the accompanying drawings.
Probabilistic Design Goals
The deterministic optimization problem (1) determines the optimal operating point. By changing the probabilistic design parameters, some Six Sigma relevant goals, i.e. probabilistic design goals, can be achieved which can be treated by the present probabilistic optimizer method. Depending on the objective function(s) used, the following embodiments for optimization goals can be achieved:
1. Stochastic evaluation of nonlinear systems: Given a general non-linear system and the density functions of the system input parameters, the present method is able to compute the stochastic response of the system without any Monte-Carlo evaluation. To be concrete, the mean value, the variance, the density function and the cumulative density function of the system response may be computed. Thus, parametric studies of the system may be performed.
2. Probabilistic optimization: The mean value of the system y=f({right arrow over (x)}) is minimized. Additionally, another value could be limited to a given probability (constraint or failure probability).
3. Robust design: The variance of the system y=f({right arrow over (x)}) is minimized. That is, the system is shifted into states, which are not sensitive with respect to perturbations of the input parameters. Additionally, another value could be limited to a given probability (constraint or failure probability).
4. Robust design optimization: Any combination of the preceding cases may be optimized, e.g. by minimizing the weighted sum α·E(f({right arrow over (x)}))+β·Var(f({right arrow over (x)})) (12).
The variables “α” and “β” are weighting factors for weighting the mean value and the variance. Looking for a robust operating point and looking for a probabilistically optimal operating point may be competing targets. Therefore, all combinations of the weighted sum (12) may be reasonable; a sketch illustrating this weighted sum is given after this list of goals. In a further embodiment of the present method, the Pareto set of the two objectives is computed; see the accompanying drawings.
5. Minimization of constraint or failure probability: In many cases, it makes sense to minimize the failure probability directly (instead of limiting failure probability by a given value).
Constraints can also be optimized by maximizing a constraint probability.
6. Maximization of input tolerances: Questions of cost lead to the following problem: How inaccurately may a system or product be produced while keeping its constraint or failure probability within a given tolerance? Let {right arrow over (x)} be independent random variables, e.g. {right arrow over (x)} ∈ N({right arrow over (μ)}, diag(σ1, . . . σn)).
The question is how large the variances σi, i=1, . . . , n may be chosen while keeping the constraint or failure probability within a given tolerance.
7. Mixed deterministic and probabilistic design variables: When modeling technical systems, deterministic and probabilistic design variables often arise at the same time. The optimization problem (11) then becomes:
The present method is able to treat the above optimization problem (15). The variables “α” and “β” are weighting factors for weighting the mean value and the variance. The density function and the accumulated density function of the system output y=f({right arrow over (x)}) are calculated numerically in every case. The stochastic sensitivities, that is, the derivatives of the output moments with respect to the input moments, are a byproduct of the optimization. They are available in every state of the system. Points (1)-(7) suggest that the system input must be normally distributed ({right arrow over (x)} ∈ N({right arrow over (μ)}, Σ)). The present method is, however, also able to treat mixed normal and beta distributions.
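As announced above in item 4, a sketch of the weighted mean-variance trade-off, with a hypothetical one-dimensional system and Monte-Carlo moment estimates in place of the surrogate-based moments, may look as follows; sweeping the weights α and β traces an approximation of the Pareto set of probabilistic optimality and robustness.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
samples = rng.standard_normal(5_000)          # fixed draws (common random numbers)
sigma = 0.15                                  # assumed input scatter

def f(x):
    return np.sin(3.0 * x) + 0.1 * x ** 2     # hypothetical one-dimensional system

def moments(mu):
    y = f(mu + sigma * samples)               # response under input scatter around mu
    return y.mean(), y.var()

# Weighted-sum robust design objective alpha*E(y) + beta*Var(y); sweeping the weights
# traces an approximation of the Pareto set of probabilistic optimality and robustness.
for alpha in np.linspace(0.0, 1.0, 11):
    beta = 1.0 - alpha
    def objective(mu, a=alpha, b=beta):
        E_y, V_y = moments(mu[0])
        return a * E_y + b * V_y
    res = minimize(objective, x0=[0.0], method="Nelder-Mead")
    E_y, V_y = moments(res.x[0])
    print(f"alpha={alpha:.1f}  mu*={res.x[0]:+.3f}  E(y)={E_y:+.3f}  Var(y)={V_y:.4f}")
```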
Highlights of the present method are especially:
Based on the present method, an instationary analysis can also be performed. Therewith the process variable or input variable can satisfy common stochastic differential equations, whose density evolution is described by the Fokker-Planck equation. An instationary optimization, e.g. the optimization of the period of life, is not known in the state of the art.
Design of Experiments (DOE)
It may happen that no physical model is available for a complicated process. In this case, the present method is able to construct an auxiliary model from discrete data of the system. With this auxiliary model, all the analyses of the present method given in the last section may be performed. Of course the validity of such a model is only given in a small range. To demonstrate this, a comparison of the nonlinear model with the auxiliary model is given.
Consider a very simple nonlinear model given by
f(x, y)=(exp(−3*x)+2*arctan(x)+exp(8−c))*(y*y+1) (16)
X and Y are normally distributed random variables with
A stochastic analysis by RODEO gives the corresponding mean value of the nonlinear system:
E(f(x, y))=4.47 (18)
Now we want to minimize the system f(x, y) in the stochastic sense.
In a first step a deterministic optimization is performed:
leads to the values
x=0.14, y=0.0, f(x, y)=0.93, E(f(x, y))=2.37 (20)
Using stochastic optimization by the present method, see last sections,
leads to the values
μ1=0.6, μ2=0.0, f(x, y)=1.25, E(f(x, y))=1.42 (22)
First, we can state that stochastic optimization results in a higher deterministic value (f(x, y)=1.25) but in a much smaller stochastic value (E(f(x, y))=1.42).
In the next step, we assume that we no longer have a nonlinear model but discrete normally distributed values (17) and additionally the corresponding discrete system response. With a random generator, normally distributed values are generated in the range (x, y) ∈ [5; 95]×[−0.4; 1.4]. The present method is able to fit an auxiliary model to this data.
Also with this model, the present method is able to improve the operating point.
Stochastic minimization of the mean value leads to
μ1=5.7, μ2=0.0, E(f(x, y))=3.2 (23)
which is an improvement over (18).
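A sketch of the deterministic versus stochastic optimization of a similar toy model may read as follows; the model constants, standard deviations and optimizer are illustrative assumptions and the sketch does not reproduce the exact numbers (18)-(23).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
eps = rng.standard_normal((20_000, 2))        # fixed noise for a reproducible comparison
sigma = (0.5, 0.4)                            # illustrative standard deviations

def f(x, y):
    # Toy nonlinear model in the spirit of (16); the constants are illustrative only.
    return (np.exp(-3.0 * x) + 2.0 * np.arctan(x)) * (y * y + 1.0)

def mean_f(mu):
    xs = mu[0] + sigma[0] * eps[:, 0]
    ys = mu[1] + sigma[1] * eps[:, 1]
    return f(xs, ys).mean()                   # sampled estimate of E(f(x, y))

det = minimize(lambda v: f(v[0], v[1]), x0=[0.5, 0.0], method="Nelder-Mead")   # deterministic
sto = minimize(mean_f, x0=[0.5, 0.0], method="Nelder-Mead")                    # stochastic

print("deterministic optimum:", det.x, " f =", round(det.fun, 3), " E(f) ~", round(mean_f(det.x), 3))
print("stochastic optimum:   ", sto.x, " f =", round(f(*sto.x), 3), " E(f) ~", round(mean_f(sto.x), 3))
```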
Applications of the Present Method
To sum up, Design For Six Sigma (DFSS) or probabilistic design is a task of ongoing interest when manufacturing products or controlling processes. These methodologies try to analyze in which way uncertainties of the design parameters influence the system response and try to improve the system or product. The present probabilistic optimizer method is designed to support the goals of Six Sigma, see section "probabilistic design goals". There are two main applications of the present method:
Generally the present method was developed to optimize systems or products whose influence parameters fluctuate. Optimizing can mean that the system or product is made as robust as possible or as optimal as possible in a probabilistic sense.
Many technical processes (aerodynamics, electromagnetism or structural mechanics) can be simulated by commercially distributed software packages. These software packages can be coupled with the present method to optimize, in a probabilistic sense, predetermined goals like aerodynamic efficiency, electromagnetic emission behavior or mechanical stability.
For many complicated processes no models exist. In these cases, a data-based optimization can be performed with RODEO.
In the following, possible applications of the present method are shown. These are merely examples; the actual application range of the present method is much greater.
1. An airline wants to reduce its delays. Firstly, possible influence factors are determined, for example desk time, baggage dispatch, start slots, etc. On many subsequent days, data of these influence factors and of the resulting delays are collected. The present method locates the greatest influence factors and performs a data-based optimization (see section "design of experiments").
2. The weight of a product should be minimized, while the mechanical stability should not be lower than a given limit. The wall thicknesses of the product fluctuate, since the rolling machines merely guarantee a certain accuracy. Therewith the weight of the product also varies, and the mechanical stability can merely be guaranteed with a predetermined probability. The mechanical stability can be calculated with a finite element package. The present method calculates the minimal expectation value respectively minimal mean value of the weight (see section "probabilistic design goals", item 2).
3. Many technical apparatuses and measuring devices must fulfill predetermined accuracy requirements. Many influence factors and their variability lead to the final accuracy. First, the present method locates the most important influence factors in view of the final accuracy. Second, the variability (inaccuracies) of the single influence factors can be maximized with the target that the final accuracy keeps the demanded value (see section "probabilistic design goals", item 6).
4. The operating point of a plant should be determined. On the one hand the operating point should be optimal relating to one criterion, on the other hand the plant should be insensitive to fluctuations of the influence factors. The present method calculates the Pareto set of probabilistic optimality and robustness of the plant. Based on this, the user can decide which compromise between optimality and robustness to select (see section "probabilistic design goals", item 4).
5. The crash behavior of a car is investigated. It is demanded that the negative acceleration of a dummy does not exceed a certain value. An essential influence factor is the sheet metal thickness, which is a random variable because of the inaccuracy of the rolling machines. It is now demanded that the expectancy value of the sheet metal thickness be as small as possible, while the negative acceleration should not exceed a certain value with a pre-given probability. There exists a known method for the simulation of the crash behavior, and this known method can be coupled with the present method (see section "probabilistic design goals", item 2).
A further example for a nonlinear technical system may be an antenna configuration, whereby an input parameter is the length of the transmitter part and a target value is the transmitting power.
The present method is not limited by the application examples stated above. The examples are merely seen as possible embodiments of the present method.
The present method uses mathematical formulas, which are practically utilized, to improve all kinds of nonlinear technical systems. Input parameters and/or target values of the nonlinear technical system can be length, area, volume, angle, time, frequency, velocity, acceleration, mass, density, force, moment, work, energy, power, pressure, tension, viscosity, and all further physical kinds of quantities (see "Taschenbuch der Physik", Kuchling, Verlag Harri Deutsch, Thun and Frankfurt/Main, 1985, chapter 3.6). Examples for technical systems are means of transport like cars or airplanes, electronic circuits, power stations, phones, turbines, antennas, fabrication processes of all industrial goods and so on. In all cases input parameters and target values are identified and used for optimizing. An improvement takes place especially in comparison with conventional design.
An embodiment of the present method is a certain software named "RODEO", standing for "robust design optimizer". According to the method for optimization of technical systems with uncertainties, an optimization model is proposed which uses a target function including the expectancy value E(y) of the technical system y=f({right arrow over (x)}) or the variance Var(y) or a combination of both values. A possible constraint can be used in the proposed optimization model and can be a failure probability Pf being held within a given tolerance. The expectancy value E(y) and the variance Var(y) are given by the formulas
E(y)=∫f({right arrow over (x)})ρ({right arrow over (x)})d{right arrow over (x)} (3)
Var(y)=∫(f({right arrow over (x)})−E(y))2ρ({right arrow over (x)})d{right arrow over (x)} (4).
The failure probability is given by the formula Pf=P(g({right arrow over (x)})≦0)=∫g({right arrow over (x)})≦0ρ({right arrow over (x)})d{right arrow over (x)} (6).
The integrals in equations (3), (4) and (6) usually cannot be calculated analytically. The overall method to solve the optimization model can only be efficient if the integral calculation methods are efficient. The methods used are therefore described subsequently.
Methods for Calculating Expectancy Value and Variance
The methods used belong to the class of so-called Response Surface methods. Specifically, two variants can be used:
Using the Taylor approximation, the function f({right arrow over (x)}) is developed quadratically around the mean value: fT({right arrow over (x)})=f({right arrow over (μ)})+∇f({right arrow over (μ)})T({right arrow over (x)}−{right arrow over (μ)})+½({right arrow over (x)}−{right arrow over (μ)})TH({right arrow over (μ)})({right arrow over (x)}−{right arrow over (μ)}), where H({right arrow over (μ)}) is the Hessian matrix of f,
and the approximation fT({right arrow over (x)}) is inserted into equations (3) and (4), respectively:
E(y)≅∫fT({right arrow over (x)})ρ({right arrow over (x)})d{right arrow over (x)} (8)
Var(y)≅∫(fT({right arrow over (x)})−E(y))2ρ({right arrow over (x)})d{right arrow over (x)} (9).
The integrals in (8) and (9) can now be calculated exactly.
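As an illustration, for normally distributed inputs and the quadratic Taylor model the moment integrals reduce to closed-form expressions in the gradient and the Hessian; the following sketch uses finite differences and an assumed model function, not the method's own derivative computation.

```python
import numpy as np

def f(x):
    # Hypothetical model function y = f(x).
    return np.exp(-3.0 * x[0]) + 2.0 * np.arctan(x[0]) * (x[1] ** 2 + 1.0)

mu = np.array([0.5, 0.0])
Sigma = np.diag([0.1 ** 2, 0.2 ** 2])          # assumed input covariance

# Gradient and Hessian at the mean value by central finite differences.
h, n = 1e-4, len(mu)
grad, H = np.zeros(n), np.zeros((n, n))
for i in range(n):
    ei = np.eye(n)[i] * h
    grad[i] = (f(mu + ei) - f(mu - ei)) / (2.0 * h)
    for j in range(n):
        ej = np.eye(n)[j] * h
        H[i, j] = (f(mu + ei + ej) - f(mu + ei - ej)
                   - f(mu - ei + ej) + f(mu - ei - ej)) / (4.0 * h * h)

# For the quadratic Taylor model and Gaussian inputs the moment integrals are exact:
E_y = f(mu) + 0.5 * np.trace(H @ Sigma)
Var_y = grad @ Sigma @ grad + 0.5 * np.trace(H @ Sigma @ H @ Sigma)
print(f"E(y) ~ {E_y:.5f}, Var(y) ~ {Var_y:.6f}")
```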
A higher approximation accuracy is achieved by a Hermite approximation. y=f({right arrow over (x)}) is approximated by a Hermite approach of second order:
Herewith the Hermite polynomials are given by
H0({right arrow over (x)})=1, H1i({right arrow over (x)})=xi, H2ij({right arrow over (x)})=xixj−δij.
By inserting fH({right arrow over (x)}) instead of f({right arrow over (x)}) into (3) and (4), respectively, an approximation of the expectancy value and of the variance is obtained, analogously to the Taylor approximation. The coefficients in (10) are calculated, for example, by solving the following least squares problem:
for given interpolation points (evaluation places) {right arrow over (x)}k. Since function evaluations are expensive, an adaptive method is used: one starts with only a few interpolation points and adaptively adds further interpolation points as long as the approximation of the expectancy value or of the variance still changes. The adaptive optimizing method generates further advantages, such that in the first optimizing steps the integrals can be calculated with a low accuracy, whereas they should be more accurate close to the solution point. For high-dimensional problems the accuracy of the integral approximation can also be adaptively fitted.
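A sketch of the second-order Hermite approach with least-squares coefficients and adaptive addition of interpolation points, for an assumed model in standard normal variables, may read as follows; the stopping tolerance and the way new points are chosen are illustrative simplifications.

```python
import numpy as np

def f(X):
    # Hypothetical model in standard normal variables x1, x2.
    return np.exp(-0.5 * X[:, 0]) * (X[:, 1] ** 2 + 1.0)

def hermite_basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    # Second-order Hermite approach: H0 = 1, H1i = xi, H2ij = xi*xj - delta_ij.
    return np.column_stack([np.ones(len(X)), x1, x2,
                            x1 * x1 - 1.0, x1 * x2, x2 * x2 - 1.0])

rng = np.random.default_rng(6)
X = rng.standard_normal((8, 2))               # a few initial interpolation points
test = rng.standard_normal((50_000, 2))       # cheap evaluation set for the fitted surrogate

prev = (np.inf, np.inf)
for _ in range(40):                           # safety cap on the adaptive cycles
    coeff, *_ = np.linalg.lstsq(hermite_basis(X), f(X), rcond=None)   # least-squares fit (11)
    y_hat = hermite_basis(test) @ coeff
    E_y, Var_y = y_hat.mean(), y_hat.var()
    if abs(E_y - prev[0]) < 1e-3 and abs(Var_y - prev[1]) < 1e-3:
        break                                 # approximation no longer changes -> stop
    prev = (E_y, Var_y)
    X = np.vstack([X, rng.standard_normal((4, 2))])   # adaptively add interpolation points

print(f"E(y) ~ {E_y:.4f}, Var(y) ~ {Var_y:.4f}, interpolation points used: {len(X)}")
```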
Method for Calculating the Failure Probability
For ease of representation it is assumed that the random variables are independent and standard normally distributed. The methods are also usable in the general case; merely a transformation must be executed beforehand.
In a first step a point {right arrow over (x)}* of the highest failure probability, a so-called beta point, is determined. This point results from the solution of the following optimizing problem:
min ∥{right arrow over (x)}∥2 subject to g({right arrow over (x)})≦0 (12)
Two variants for calculating the integrals in (6) are used:
Let {right arrow over (x)}* be the solution of the above optimizing problem (the beta point) and β=∥{right arrow over (x)}*∥. In the linear approximation (FORM: "first order reliability method"), g({right arrow over (x)}) is approximated by
g({right arrow over (x)})≅aT({right arrow over (x)}−{right arrow over (x)}*).
Accordingly the following approximation for Pf is generated:
Pf≅φ(−β)
where φ is the cumulative distribution function of the standard normal distribution.
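A sketch of this FORM approximation for a hypothetical limit-state function is given below: the beta point is obtained from the constrained minimization (12) and Pf is approximated by φ(−β); the limit-state function and the optimizer are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(x):
    # Hypothetical limit-state function in standard normal space; failure for g(x) <= 0.
    return 3.0 - x[0] - 0.5 * x[1] ** 2

# Beta point: the point of the failure domain closest to the origin,
# i.e. min ||x||^2 subject to g(x) <= 0 (problem (12)).
res = minimize(lambda x: x @ x, x0=np.array([1.0, 1.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)}])   # -g(x) >= 0 <=> g(x) <= 0

x_star = res.x
beta = np.linalg.norm(x_star)
Pf_form = norm.cdf(-beta)                     # FORM approximation P_f ~ phi(-beta)
print("beta point:", x_star, " beta ~", round(beta, 3), " P_f ~", round(Pf_form, 5))
```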
A higher accuracy is achieved by using the Hermite approximation gH({right arrow over (x)}) of the function g({right arrow over (x)}) close to the beta point. Analogously to the approach (10), the following equation is obtained:
The coefficients are also determined by the least squares problem (11). To evaluate the quality of the approximation, the beta point {right arrow over (x)}H* relating to the Hermite approximation is determined. The beta point {right arrow over (x)}H* results from the solution of the following optimizing problem:
min ∥{right arrow over (x)}∥2 subject to gH({right arrow over (x)})≦0 (14)
Again an adaptive method is applied: interpolation points are added as long as {right arrow over (x)}H* and the main curvatures at {right arrow over (x)}H* still change. For evaluating the failure probability Pf, the integral in (6) is transformed into:
where Γg is the indicator function of g({right arrow over (x)}):
The approximation of Pf is
The integral in (16) can be efficiently calculated for example by “importance sampling”.
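For illustration, an importance-sampling estimate of Pf that centers the sampling density at the beta point of the preceding sketch may read as follows; the limit-state function, the sampling density and the sample size are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def g(x):
    # Same hypothetical limit-state function as in the FORM sketch above.
    return 3.0 - x[..., 0] - 0.5 * x[..., 1] ** 2

x_star = np.array([1.0, 2.0])                 # beta point, approximately as found above

rng = np.random.default_rng(7)
m = 20_000
# Sampling density h centered at the beta point instead of at the origin.
samples = rng.normal(loc=x_star, scale=1.0, size=(m, 2))

rho = multivariate_normal(mean=[0.0, 0.0]).pdf(samples)    # original standard normal density
h = multivariate_normal(mean=x_star).pdf(samples)          # importance-sampling density
indicator = (g(samples) <= 0.0).astype(float)              # indicator function of the failure domain

Pf_is = np.mean(indicator * rho / h)
print(f"P_f (importance sampling) ~ {Pf_is:.5f}")
```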
Monte Carlo methods are usable only for small systems; for optimization problems they are not usable at all. "Standard" Response Surface methods (that is, without adaptation) are also still too costly, at least for optimization problems.
By using adaptive Response Surface Methods, efficient methods are obtained also for complex optimization tasks with technically pertinent target functions (see above) and "chance constraints" (failure probabilities) as constraints. The adaptation can be performed in two steps:
Using the adaptation, a given tolerance is achieved with minimal effort.
The system also includes permanent or removable storage, such as magnetic and optical discs, RAM, ROM, etc. on which the process and data structures of the present invention can be stored and distributed. The processes can also be distributed via, for example, downloading over a network such as the Internet. The system can output the results to a display device, printer, readily accessible memory or another computer on a network.
A description has been provided with particular reference to preferred embodiments thereof and examples, but it will be understood that variations and modifications can be effected within the spirit and scope of the claims which may include the phrase “at least one of A, B and C” as an alternative expression that means one or more of A, B and C may be used, contrary to the holding in Superguide v. DIRECTV, 358 F3d 870, 69 USPQ2d 1865 (Fed. Cir. 2004).
Number | Date | Country | Kind |
---|---|---|---|
05105786 | Jun 2005 | EP | regional |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2006/062099 | 5/5/2006 | WO | 00 | 12/31/2007 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2007/000366 | 1/4/2007 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
20020062212 | Nakatsuka | May 2002 | A1 |
20040059621 | Jameson | Mar 2004 | A1 |
Number | Date | Country |
---|---|---|
103 08 314 | Sep 2004 | DE |
Number | Date | Country | |
---|---|---|---|
20090112534 A1 | Apr 2009 | US |