The present invention relates to a method for generating a model ensemble that estimates at least one output variable of a physical process as a function of at least one input variable, the model ensemble being formed from a sum of model outputs from a plurality of models that have been weighted with a weighting factor.
In the development of internal combustion engines, there are legal requirements that must be taken into account with respect to emissions, in particular of NOx, soot, CO, CO2, etc., and with respect to consumption. A combustion engine control device is calibrated for this purpose during development in such a manner that these requirements are met during operation of the combustion engine. Calibration here means that certain parameters of the combustion engine, such as the air/fuel ratio, the recirculation of exhaust gas into the cylinder, the ignition timing, etc., are specified as a function of a particular state of the combustion engine, such as torque, speed, engine coolant temperature, etc. For example, corresponding engine maps are stored in the control device and read out during operation of the combustion engine in order to identify the control parameters for a particular state. Because of the many influencing variables, calibration is a very time-consuming and expensive process that is primarily carried out on specialized test benches. For this, a combustion engine is mounted on a test bench and connected to a loading machine that simulates specific, predetermined load conditions. In the course of this, predetermined load cycles (driving cycles) are usually run on the test bench. During the load cycles, the emission values and/or the consumption of the combustion engine are measured as a function of the prevailing state. After evaluation of the recorded measurement values, the control parameters are changed in the control device and the process is repeated until a satisfactory calibration is achieved. Time on the test bench is very expensive, however, and should be reduced as much as possible.
Therefore, methods have been developed to simplify the calibration and, in particular, to reduce test bench times. These methods are often based on models of the emission or consumption behavior of the combustion engine or, more generally, on models of a physical process. What is required, therefore, is a sufficiently precise model of the physical process that can then be used for calibration of the control device. Methods for automated model identification of non-linear processes (for example, of the NOx emission or of the consumption) are already known, as described in WO 2013/131836 A2, for example. These methods are each based on a predetermined model structure, such as a neural network, a Kriging model or a local model network. The model structure chosen thus defines model parameters that are determined by the method for automated model identification. Data in the form of measurement values are collected on a test bench for this, and the model is parameterized or trained using these data. Consequently, only a small number of test runs using an actual combustion engine on an actual test bench is necessary. Using the trained model, the influence of particular control parameters on emissions or on consumption can then be examined without the need for further test bench tests.
The dissertation by Hartmann, B., "Local model networks for identification and test design of non-linear systems," University of Siegen, January 2014, goes into more detail on local model networks as a model structure. In local model networks, local models that are valid over partial ranges of the input variable range are defined in a known way. The output of a local model network over the whole input range then results from the sum of the outputs of the local models weighted using validity functions. A local model thus estimates only a locally valid output value, and hence only a portion of the output value of the model network.
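As a rough, hypothetical illustration of such a model structure, the following sketch computes the output of a one-dimensional local model network as the validity-weighted sum of local affine models; the Gaussian validity functions and the local model parameters are assumptions for illustration only and are not taken from the cited dissertation.

```python
import numpy as np

def local_model_network(u, centers, widths, thetas):
    """Output of a simple one-dimensional local model network at input u.

    centers, widths: parameters of Gaussian validity functions (assumed here).
    thetas: list of (offset, slope) pairs of the local affine models.
    """
    # Validity functions, normalized so that they sum to one over all local models
    phi = np.exp(-0.5 * ((u - centers) / widths) ** 2)
    phi = phi / phi.sum()
    # Outputs of the locally valid models: y_j = offset_j + slope_j * u
    y_local = np.array([offset + slope * u for (offset, slope) in thetas])
    # Network output: sum of the local outputs weighted with their validity functions
    return float(np.dot(phi, y_local))

# Example: three local models covering the input range [0, 1]
centers = np.array([0.2, 0.5, 0.8])
widths = np.array([0.15, 0.15, 0.15])
thetas = [(0.0, 1.0), (0.3, 0.5), (0.6, 0.2)]
print(local_model_network(0.4, centers, widths, thetas))
```

Each local model contributes only where its validity function is large, which mirrors the locally valid output values described above.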
However, the selection of the best model structure for modeling a specific behavior of a combustion engine (the NOx emission, for example) is itself a demanding task, and the best choice is not immediately obvious.
For this reason, so-called model ensembles have already been employed. Here, different models are trained that are then weighted in order to achieve the best possible estimation for a specific behavior of a combustion engine (emission or consumption, for example). The output of the complete model (of the model ensemble) results from a weighted sum of the outputs of the individual models. The weighting factors must therefore be determined for a model ensemble. A method often used for determining the weighting factors is based on the Akaike Information Criterion, as described in Akaike, H. “Information theory and an extension of the maximum likelihood principle,” Proceedings 2nd International Symposium on Information Theory, Budapest 1973, pp. 267-281. Hartmann's dissertation also describes model ensembles using weighting factors according to an Akaike Information Criterion.
The plausibility of a model Mj is evaluated using the Akaike Information Criterion AICj. For each model Mj, both the model error Ej and the complexity of model Mj enter into this criterion.
Using the model error Ej, the deviation of the model output variable ŷj of model Mj from the actual measured output y of the process is evaluated. In the Akaike Information Criterion AIC, the mean square error MSEj of model Mj at N different known input variables u of model Mj is used as model error Ej. The mean square error MSEj of the jth model Mj is calculated in the well-known manner as MSEj = (1/N)·Σi=1…N (y(ui) − ŷj(ui))².
Complexity is evaluated in the Akaike Information Criterion simply in the form of the number pj of model parameters of the jth model Mj. Known modifications also exist: in the case of a neural network as the model structure for the jth model Mj, for example, the number of effective parameters, which can be calculated according to known methods not detailed further here, is often employed for evaluating the complexity. It is also known to weight the complexity using a factor α, the so-called risk aversion parameter.
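In the cited literature, this criterion is commonly written with a logarithmic error term and the complexity penalty described above; a form consistent with this description is, for example,

AICj = N·log(MSEj) + α·pj,

where N is the number of data points, MSEj is the mean square error of the jth model Mj, pj is its number of (effective) model parameters and α is the risk aversion parameter (classically α = 2).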
The weighting factors wj for the individual models Mj are then calculated from the plausibility determined in this way according to the Akaike Information Criterion AICj and are normalized to 1.
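A minimal numerical sketch of this weighting scheme, assuming the usual Akaike-weight form wj proportional to exp(-AICj/2) (the exact form used may differ), could look as follows:

```python
import numpy as np

def akaike_weights(aic_values):
    """Weighting factors from AIC values, assuming the usual Akaike-weight form
    w_j proportional to exp(-AIC_j / 2); the weights are normalized to 1."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()      # shift for numerical stability (does not change the weights)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# The model with the smallest AIC (here the second one) receives the largest weight
print(akaike_weights([103.2, 101.7, 110.4]))
```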
The problem with this Akaike Information Criterion AIC is that, although it can be calculated quickly, it is designed for a large number N of known data points (measured output variables y at a specified input variable u). The known data points are thus often also those data points by which the models Mj were trained. Moreover, the model structures (for the number of parameters pj) of models Mj must be known so that the plausibility and thus the weighting factors wj can be calculated.
For this application, however, the lowest possible number of known data points is desired, because the expense for test bench tests and measurements on the test bench should be reduced as much as possible.
An adapted Akaike Information Criterion has already been proposed for a smaller number N of data points. This adapted Akaike Information Criterion, however, also delivers unsatisfactory results for the small number of data points available here. Within the meaning of the present invention, a small number of data points is understood as a number N that is comparable to the number pj of model parameters, thus N ≈ pj, N and pj preferably being of the same order of magnitude. In particular, it is a goal of the invention to keep the number N of measured data points as low as possible in order to keep the necessary number of test bench tests or measurements on the test bench as low as possible.
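The small-sample correction commonly found in the literature, which presumably corresponds to the adapted criterion referred to above, adds a penalty term that grows as N approaches pj:

AICc,j = AICj + (2·pj·(pj + 1)) / (N − pj − 1).

As N − pj − 1 approaches zero this penalty dominates (and the correction is no longer defined for N ≤ pj + 1), which illustrates why even a corrected criterion of this kind becomes unreliable when N ≈ pj.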
Further, it is often also the case in the present application that the models Mj are available as already trained models whose model structure is completely unknown. The models Mj of the model ensemble can thus in part also be present as unknown black boxes. The known Akaike Information Criterion AIC cannot be used on such unknown models Mj for generating a model ensemble, however, because the number pj of model parameters is not known. Because of these disadvantages of the Akaike Information Criterion AIC under the present conditions (a small number of available data points, possibly no knowledge of the model structure), it cannot be used, or at least not satisfactorily, for the generation of a model ensemble.
It is therefore an object of the present invention to specify a method for the generation of a model ensemble that can manage with a low number of available data points, and thus a low number of test bench tests, and that requires no knowledge of the model structures of the models in the model ensemble.
In order to be able to determine the weighting factors for a good model ensemble from such a small number of actually measured, available data points and for partially unknown models, according to the invention an empirical complexity measurement, which evaluates the deviation of the model output variables from the output variables of the actual physical process over a predetermined range of input variables, and a model error are determined for each model, and a surface information criterion is formed from the empirical complexity measurement and the model error, from which the weighting factor for the model ensemble can be derived. The model structure is thus not evaluated as in the Akaike Information Criterion; instead, an empirical complexity measurement is used that evaluates the complexity of a model based on the deviation of the model from the underlying physical process. Not only the deviation at the measured data points (the model error) is evaluated here, but also the deviation between these data points, thus over a complete input variable range, which is reflected in the empirical complexity measurement. Knowledge of the model structures of the models in the model ensemble is thus no longer necessary. Through the use of the empirical complexity measurement, the number of necessary data points can also be greatly reduced, so that the time on the test bench for measuring the necessary data points can likewise be greatly shortened.
The model error of the jth model can be calculated simply and quickly as the mean square error between the output variables measured at the input variables (data points) of the physical process and the model output variables calculated at these input variables, according to the equation Ej = MSEj = (1/N)·Σi=1…N (y(ui) − ŷj(ui))².
Especially advantageously, the empirical complexity measurement of the jth model is calculated either from the surface of the model output variable ŷj over the input variable range or from the variance of the model output variable ŷj, as described in more detail below. Using these empirical complexity measures allows an especially good model ensemble to be determined, one that is in particular better than any individual model of the model ensemble.
In order to have a degree of freedom for the determination of the weighting factors, the empirical complexity measurement is preferably weighted with a complexity aversion parameter.
In a simple embodiment of the invention, the weighting factors for each model of the model ensemble can be calculated directly from the surface information criteria of the individual models, analogously to the weighting according to the Akaike Information Criterion described above.
In a particularly advantageous embodiment of the invention, it is provided that the surface information criterion for the model ensemble is formed from an error matrix F, which includes the model errors of the models, and from a complexity measurement matrix C, which includes the empirical complexity measurements of the models Mj, the error matrix and the complexity measurement matrix each being weighted twice with a weighting vector that includes the weighting factors of the models, according to the formula SIC = wᵀFw + wᵀCw, and the surface information criterion of the model ensemble being minimized with respect to the weighting factors. In this way, weighting factors of the model ensemble can be determined by optimization that result in an especially small error between the model output variables of the model ensemble and the actual physical process.
It is advantageous in this context if the error matrix is calculated as the product F = EᵀE of a matrix E with itself, the matrix E being formed from the deviations E = (y(ui) − ŷj(ui)) at the N data points.
It is especially advantageous if the complexity measurement matrix is weighted by a complexity aversion parameter because one can thereby obtain a degree of freedom by which it is possible to reduce the error between model output variables of the model ensemble and the actual process even further.
For this purpose, it can be provided in an especially advantageous embodiment of the invention that the weighting vectors for different complexity aversion parameters are calculated and the weighting vector associated with a selected complexity aversion parameter is chosen as the optimum weighting vector for the model ensemble, or that the weighting vectors for different complexity aversion parameters are calculated and a relationship based on the Mallows criterion described below is minimized over the weighting vectors calculated for the different complexity aversion parameters in order to determine the optimum weighting vector.
The present invention is explained in more detail below with reference to the figures.
A model ensemble 1, as illustrated in the figures, comprises a plurality of models Mj whose model output variables ŷj are combined into the output variable ŷ of model ensemble 1.
Using model ensemble 1, or the models Mj included therein, a physical process, e.g. an emission or consumption variable of a combustion engine such as the NOx emission, the CO or CO2 emission or the fuel consumption, is estimated as the output variable ŷ of model ensemble 1, or as the model output variable ŷj of a model Mj. In the following description, for the sake of simplicity, a single output variable y is assumed without limitation of general applicability, although an output variable vector y made up of a plurality of output variables y is of course also possible.
In model ensemble 1, each model output variable ŷj is weighted with a weighting factor wj, and the output variable ŷ of model ensemble 1 is the weighted sum of the model output variables ŷj of the individual models Mj, in the form ŷ(u) = Σj wj·ŷj(u).
In the description, for simplicity's sake, ŷ and ŷj are also used instead of the more precise notation ŷ(u) and ŷj(u). With respect to the weighting factors wj, the boundary conditions wj ∈ [0, 1] and Σj wj = 1
are preferably to be taken into consideration. The problem thus arises of how best to determine the weighting factors wj so that the output variable y of the physical process is approximated as well as possible by model ensemble 1, or by its output variable ŷ. The goal here, of course, is for model ensemble 1 to estimate the output variable y of the physical process over the complete input variable range U, or over the range of interest, better than the best individual model Mj of model ensemble 1.
This basic relationship is illustrated in the figures.
In order to evaluate the complexity of the jth model Mj, an empirical complexity measurement cj is used according to the invention that does not evaluate the model structure as in the prior art, but instead evaluates the deviation of model output variable ŷj from the output variable y of the physical process over a specified input variable range U. In contrast to a model error E, which relates to the deviation between model Mj and the physical process at specific measured data points, empirical complexity measurement cj evaluates the deviation over a complete input variable range U, thus specifically also between the measured data points. Different approaches are available for an evaluation of this sort.
In a first approach, the surface of the model output variable ŷj over the input variable range U is used for the evaluation; for this purpose, an integral over the gradient of the model output variable ŷj is formed. The inventive idea behind this can also be explained with reference to the figures.
In this integral, ∇ is the known Nabla operator with respect to the input variables in the input variable vector u, that is, the vector of the partial derivatives with respect to the individual input variables.
The integral is determined over a specified input variable range u ∈ U, preferably over the whole range. This integral increases monotonically with the surface of the model output variable ŷj. The surface of the model output variable ŷj over the input variable range U is thus evaluated here as the empirical complexity measurement cj.
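One plausible numerical realization of such a surface-type measure, given here as an illustrative assumption rather than the exact formula of the invention, approximates the integral of the gradient magnitude of ŷj over a grid covering the input variable range U:

```python
import numpy as np

def surface_complexity(model, u_grid):
    """Approximate a surface-type complexity measure of a one-dimensional model:
    the integral of the magnitude of the gradient of the model output over the
    input variable range (illustrative assumption, not the exact patent formula)."""
    y_hat = np.array([model(u) for u in u_grid])
    grad = np.gradient(y_hat, u_grid)                  # numerical derivative d(y_hat)/du
    du = np.diff(u_grid)
    # trapezoidal integration of |d(y_hat)/du| over the input range U
    return float(np.sum(0.5 * (np.abs(grad[:-1]) + np.abs(grad[1:])) * du))

# Example: a smooth model versus a strongly oscillating (more "complex") model
u = np.linspace(0.0, 1.0, 501)
smooth = lambda x: 0.5 * x
wiggly = lambda x: 0.5 * x + 0.1 * np.sin(40.0 * x)
print(surface_complexity(smooth, u), surface_complexity(wiggly, u))
```

The strongly oscillating model yields the larger value, reflecting its larger "surface" over U.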
As an alternative empirical complexity measurement cj, which likewise evaluates the deviation of model Mj, or of the model output variable ŷj, from the output variable y of the physical process, the variance of the model output variable ŷj can be employed. The variance (also designated the second moment of a random variable) is, as is well known, the expected square deviation of a random variable from its expected value. Applied to the present invention, the model output variable ŷj at the available N data points is compared, using the variance, to the model output variable ŷj between these data points; this is designated here as the variability. The idea behind this is that a model Mj having an increased variability generally predicts the underlying physical process over the input variable range U worse than a model Mj having a lower variability. This is because the better a model Mj approximates the measured data points, i.e. the more complex the model Mj becomes, the greater the probability of an increased variability. If the variability becomes too large, however, the risk of overfitting for model Mj also increases. The typical behavior of such an overfitted or too-complex model Mj is a greatly varying model output variable ŷj over the input variable range U, which in turn can lead to a larger deviation between the actual output variable y and the model output variable ŷj. This variability based on the variance can be mapped onto the empirical complexity measurement cj by calculating cj from a corresponding variance expression for the model output variable ŷj.
It is clear that there are additional possibilities for evaluating the deviation between model Mj and the physical process, or between the output variable y of the process and the model output variable ŷj. The basic idea remains unaltered, namely that the larger the empirical complexity measurement cj, the more complex the underlying model Mj is. The empirical complexity measurement cj therefore also evaluates the complexity of model Mj.
According to the invention, a surface information criterion SICj of the jth model Mj is derived from the empirical complexity measurement cj. Analogous to the above Akaike Information Criterion AIC of the prior art, it is again formed from the model error Ej of model Mj and the empirical complexity measurement cj, therefore SICj = Ej + αK·cj. The mean square error MSEj, for example, can again be used as model error Ej, although any other model error Ej, such as the mean absolute deviation, could obviously also be used.
The parameter αK ∈ [0, ∞) preferably used in the surface information criterion SICj serves as a complexity aversion parameter. It represents the only degree of freedom with which the complexity of the models Mj of model ensemble 1 can be further penalized. The larger the complexity aversion parameter αK becomes, the more strongly the complexity enters into the surface information criterion SICj. Small complexity aversion parameters αK therefore favor more complex models Mj, meaning models Mj having more degrees of freedom (a larger number of model parameters pj).
Analogous to the known Akaike Information Criterion, the weighting factors wj can again be determined from the surface information criteria SICj of the individual models, wherein wj ∈ [0, 1] and Σj wj = 1 can preferably be considered as boundary conditions. Although a model ensemble 1 can already be formed in this way which, under the given conditions, approximates the actual process better, meaning with a smaller error, than a model ensemble formed using the Akaike Information Criterion AIC, the quality of model ensemble 1 can be further improved according to the invention. The approach for this is explained below.
It can be shown that the mean square model error MSE and the empirical complexity measurement c of model ensemble 1 with respect to a weighting vector w, which includes the weighting factors wj of the j models Mj, can each be represented as a quadratic function of the model errors Ej and the empirical complexity measurements cj of the models Mj, so that the surface information criterion of the model ensemble takes the form SIC = wᵀFw + αK·wᵀCw. Within this, the optional complexity aversion parameter αK again represents a degree of freedom in the determination of the weighting factors wj of the j models Mj.
In this context, F designates an error matrix that includes the model errors Ej of the models Mj, and C a complexity measurement matrix that includes the empirical complexity measurements cj of the models Mj. In the case of the mean square error MSEj as model error Ej, and with a matrix E formed from the deviations y(ui) − ŷj(ui) for all i = 1 … N data points and all j models, the error matrix F results as the product of the matrix E with itself, according to F = EᵀE. Depending upon the empirical complexity measurement cj chosen, the complexity measurement matrix C results, for example, from corresponding expressions, each involving the model output variable vector ŷa, which contains the model output variables ŷj of the j models, thus ŷa = {ŷ1 … ŷj}. The matrices F and C can thus be calculated in advance and, above all, without knowledge of the models Mj, their model structure or the number of model parameters pj.
For determining the weighting factors wj (or, analogously, the weighting vector w), the surface information criterion SIC of model ensemble 1 can be optimized, in particular minimized, with regard to the weighting factors wj for a specified complexity aversion parameter αK. An optimization problem of the form minw {wᵀFw + αK·wᵀCw} can be derived from this.
As can easily be recognized, this is a quadratic optimization problem that can be solved quickly and efficiently using available standard solution algorithms for a predetermined complexity aversion parameter αK. The conditions wj ∈ [0, 1] and Σj wj = 1 preferably apply as boundary conditions for the optimization. Any initial weighting vector w can be specified.
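A sketch of this optimization step, assuming that the matrices F and C have already been computed as described above and using a generic SQP solver (the concrete solver is an implementation choice, not prescribed here), could look as follows:

```python
import numpy as np
from scipy.optimize import minimize

def ensemble_weights(F, C, alpha_k):
    """Minimize SIC(w) = w^T F w + alpha_k * w^T C w subject to w_j in [0, 1]
    and sum(w) = 1."""
    m = F.shape[0]                                   # number of models in the ensemble
    sic = lambda w: w @ F @ w + alpha_k * (w @ C @ w)
    constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * m
    w0 = np.full(m, 1.0 / m)                         # any initial weighting vector
    result = minimize(sic, w0, method='SLSQP', bounds=bounds, constraints=constraints)
    return result.x

# Example with small positive semi-definite stand-in matrices for three models
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))                          # stand-in for the deviation matrix E
B = rng.normal(size=(5, 3))
F, C = A.T @ A, B.T @ B
print(ensemble_weights(F, C, alpha_k=0.5))
```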
The result of the optimization of the surface information criterion SIC of model ensemble 1 for the determination of the weighting factors wj is described with reference to the figures.
From these figures it can be deduced that the error of model ensemble 1 with respect to the actual process depends on the chosen complexity aversion parameter αK, so that an optimum complexity aversion parameter αK,opt can be determined.
To accomplish this, the associated weighting vectors wαK are first determined, as described above, for a number of different complexity aversion parameters αK.
Using the known Mallows criterion, that complexity aversion parameter αK is chosen as the optimum complexity aversion parameter αK,opt which solves the following optimization problem.
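In the notation used here, and following the Mallows model-averaging criterion of Hansen (2007) cited below, this optimization problem can be expected to take a form such as

αK,opt = argminαK { wαKᵀ·F·wαK + 2·σ²·pᵀ·wαK },

in which wαK is the weighting vector previously determined for the respective complexity aversion parameter αK.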
Within this, F is again the error matrix (F = EᵀE) and σ is the standard deviation of the available data points, which, however, is generally not known. There are known methods (as described in Hansen, B. E. "Least squares model averaging," Econometrica, 75(4), 2007, pp. 1175-1189, for example) to estimate the standard deviation σ from the available data points. The vector p again includes, for all j models Mj, the number of model parameters pj. Knowledge of the models Mj, or of their model structures, is therefore required for this step.
This optimization is not, however, solved directly, but over the initially determined set of weighting vectors wαK calculated for the different complexity aversion parameters αK.
This means that the weighting vector w associated with the specific complexity aversion parameter αK that yields the minimum of this expression is selected as the optimum weighting vector wopt.
A model ensemble determined according to the invention is used, for example, for calibrating a technical system, such as a combustion engine. In the calibration, in order to optimize at least one output variable of the technical system, the control variables of the technical system, by which the technical system is controlled, are varied in a specified operational state of the technical system that is defined by state variables or a state variable vector. The optimization of the output variables by variation of the control variables is generally formulated and solved as an optimization problem; sufficient known methods exist for this. The control variables determined in this manner are stored as a function of the respective operational state, for example in the form of characteristic maps or tables. This stored relationship can then be used to control the technical system as a function of the actual operational state (which is measured or otherwise determined, for example estimated). This means that the stored control variables for the relevant operational state are read out from the stored relationship and used to control the technical process. In the case of a combustion engine as the technical system, the operational state is often described using measurable variables such as speed and torque, although other variables such as the engine coolant temperature, the ambient temperature, etc., can also be used. In a combustion engine, the position of a turbocharger with variable turbine geometry, the position of an exhaust-gas recirculation system or the injection timing are often used as control variables. The output variable to be optimized in a combustion engine is typically a consumption and/or emission variable (for example, NOx, CO, CO2, etc.). Calibration of a combustion engine thus ensures, by setting correct control variables, that consumption and/or emissions during operation are minimal.
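As a simplified, hypothetical sketch of this calibration workflow (the function names, the choice of two control variables and the quadratic stand-in for the trained ensemble are assumptions for illustration), the control variables can be optimized per operating point and stored in a map:

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_map(ensemble_model, operating_points, x0, bounds):
    """For each operating point (e.g. speed, torque), find the control variables
    that minimize the ensemble-estimated output (e.g. NOx) and store them as a map."""
    calibration_map = {}
    for op in operating_points:
        # Cost: ensemble estimate at this operating point for control variables x
        cost = lambda x, op=op: ensemble_model(np.concatenate([np.asarray(op), x]))
        result = minimize(cost, x0, method='L-BFGS-B', bounds=bounds)
        calibration_map[op] = result.x               # control variables stored for this state
    return calibration_map

# Hypothetical example with two control variables (e.g. EGR position, injection timing);
# the quadratic dummy stands in for a trained model ensemble.
dummy_ensemble = lambda z: (z[2] - 0.3) ** 2 + (z[3] - 0.1) ** 2
operating_points = [(1500.0, 50.0), (2000.0, 80.0)]  # (speed, torque) operating points
print(calibrate_map(dummy_ensemble, operating_points,
                    x0=np.array([0.5, 0.0]), bounds=[(0.0, 1.0), (-0.2, 0.2)]))
```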
Number | Date | Country | Kind |
---|---|---|---|
A 50202/2015 | Mar 2015 | AT | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2016/055307 | 3/11/2016 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/146528 | 9/22/2016 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
20050203725 | Jenny et al. | Sep 2005 | A1 |
20140188768 | Bonissone | Jul 2014 | A1 |
Number | Date | Country |
---|---|---|
2013131836 | Sep 2013 | WO |
Entry |
---|
Tindle (Cold Engine Emissions Optimization Using Model Based Calibration, (24 pages)). (Year: 2007). |
Christoph Hametner, et al.: "Dynamic NOx emission modelling using local model networks" International Journal of Engine Research, Professional Engineering Publishing, GB, pp. 928-933, vol. 15. 6 pages. Published: Dec. 1, 2014. |
Sequenz, Heiko: "Emission Modelling and Model-Based Optimisation of the Engine Control" Dissertation, TU Darmstadt. Published: Feb. 25, 2013, 188 pages. http://tuprints.ulb.tu-darmstadt.de/3948/1/Emission Modelling and Model-Based Optimisation of the Engine Control-Dissertation Heiko Sequenz.pdf. |
Knut-Andreas Lie: “Finite-Element Methods and Numerical Linear Algebra” Published: Jan. 1, 2005, 30 pages. http://www.uio.no/studier/emner/matnat/ifi/INF2340/v05/foiler/sim06.pdf. |
Akaike, Hirotugu: "Information Theory and An Extension of the Maximum Likelihood Principle" In Second International Symposium on Information Theory, 1973, 267-281. 15 pages. |
Hartmann, Benjamin: "Lokale Modellnetze zur Identifikation und Versuchsplanung nichtlinearer Systeme" Dissertation: Schriftenreihe der Arbeitsgruppe Mess- und Regelungstechnik - Mechatronik, Department Maschinenbau der Universität Siegen. Published: Jan. 24, 2014, 202 pages. http://d-mb.info/1050874226/34. |
Sung, Alexander, et al.: "Modellbasierte Online-Optimierung in der Simulation und am Motorenprüfstand" MTZ - Motortechnische Zeitschrift. Published: Jan. 2007, vol. 68, Issue 1, 8 pages. http://www.ra.cs.unituebingen.de/publikationen/2007/OnlineOpti_BMW_m01-07-09.pdf. |
Christoph Hametner, et al.: "Nonlinear System Identification through Local Model Approaches: Partitioning Strategies and Parameter Estimation" Published: Jan. 1, 2010; 18 pages. http://cdn.intechopen.com/pdrs-wm/11739.pdf. |
Austrian Search Report Application No. A50202/2015 dated Mar. 11, 2016 1 Page. |
International Search Report and Written Opinion Application No. PCT/EP2016/055307 Completed: Jun. 17, 2016; dated Jun. 24, 2016 13 Pages. |
International Search Report and Written Opinion Translation Application No. PCT/EP2016/055307 Completed Date: Jun. 17, 2016; dated Jun. 24, 2016 3 Pages. |
Number | Date | Country | |
---|---|---|---|
20180112580 A1 | Apr 2018 | US |