The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 102018215061.3 filed on Sep. 5, 2018, which is expressly incorporated herein by reference in its entirety.
The present invention relates to a method including a safety condition for active learning for modelling dynamic systems with the aid of time series based on Gaussian processes, a system that has been trained using this method, a computer program which includes instructions that are configured to carry out the method when it is executed on a computer, a machine-readable memory medium on which the computer program is stored, and a computer which is configured to carry out the method.
Safe exploration during active learning is described in "Safe Exploration for Active Learning with Gaussian Processes" by J. Schreiter, D. Nguyen-Tuong, M. Eberts, B. Bischoff, H. Markert and M. Toussaint (ECML/PKDD, Volume 9286, 2015). There, data are selectively detected in a static state.
Active learning deals with the sequential identification of data for learning an unknown function. In the process, the data points for identification are selected sequentially in such a way that the pieces of information available for approximating the unknown function are maximized. The general aim is to create an accurate model without providing more pieces of information than are necessary. In this way, the modelling becomes more efficient, since potentially cost-intensive measurements may be avoided.
Active learning is commonplace for classifying data, for example, for identifying images. For active learning in the case of time series models that represent physical systems, the data must be generated in such a way that relevant dynamic processes are able to be detected.
This means that the physical system must be stimulated by dynamic movement in the input area by input curves in such a way that the collected data, i.e., input and output curves, contain as many pieces of information about the dynamics as possible. Examples of input curves that may be used include sine functions, ramp functions, step functions, and white noise. When stimulating the physical system, however, it is also necessary to observe safety requirements. The stimulation must not damage the physical system as the input area is being dynamically explored.
It is important, therefore, to identify areas in which the dynamic stimulation may be safely carried out.
An example method according to the present invention may have an advantage that it combines dynamic exploration, active exploration and safe exploration.
Dynamic exploration in this case is understood to mean the detection of pieces of information under changing conditions of the system to be measured. Active exploration aims at detecting pieces of information as rapidly as possible, the pieces of information being detected sequentially in such a way that many pieces of information are able to be detected in a short period of time. In other words, the information gain of each individual measurement is maximized. Finally, safe exploration ensures that the system to be measured is, as far as possible, not damaged.
These three types of exploration may be combined using the example method according to the present invention.
Advantageous refinements of and improvements on the example method are described herein.
The present invention provides an active learning environment with dynamic exploration (active learning) for time series models based on Gaussian processes, which takes the aspect of safety into consideration by deriving a suitable criterion for the dynamic exploration of the input area.
Active learning is useful in a series of applications, such as in simulations and forecasts. The goal of learning methods is, in general, to create a model that describes reality. For this purpose, a real process, a real system or a real object, also referred to below as the objective, is measured in the sense that pieces of information about the objective are detected. The model of reality created in this way may then be used in a simulation or forecast instead of the objective. An advantage of this approach lies in the resulting savings: the real process, in which most of the resources are consumed, does not have to be repeated, and the object or the system is not exposed to the process to be simulated and is therefore not consumed, damaged or modified in the process.
It is advantageous if the model describes reality as accurately as possible. In the present invention, it is particularly advantageous that active learning may be used while taking safety conditions into consideration. These safety conditions are to ensure that the objective to be detected is influenced negatively/critically as little as possible, for example, in the sense that the object or the system is damaged.
In the present invention, a Gaussian process having a time series structure is used, for example, with the non-linear exogenous structure or with the non-linear autoregressive exogenous structure. By dynamically exploring the input area, input curves and output curves or output measurements appropriate for the time series model are generated. The output measurements, i.e., the identified data, serve as pieces of information for the time series model. In the process, the input curve is parameterized in successive curve sections, for example, successive sections of ramp functions or of step functions, which, given safety requirements and preceding observations, are determined gradually using an explorative approach.
The respectively subsequent section is determined while taking the previous observations into consideration in such a way that the information gain with respect to a criterion relating to the model is maximized.
In the process, a Gaussian process having non-linear exogenous structures is used with a suitable exploration criterion as a time series model. At the same time, an additional Gaussian model is used in order to forecast safe input areas with respect to the given safety demands. The sections of the input curve are determined by solving an optimization problem with secondary conditions for taking the safety forecast into consideration.
Exemplary applications of the present invention are, for example, test benches for internal combustion engines, in which processes in the engines are intended to be simulated. Parameters to be detected in this case are, for example, pressure values, exhaust values, consumption values, power values, etc. Another application is, for example, the learning of dynamic models for robot controllers, in which a dynamic model is to be learned that maps joint positions onto joint torques of the robot and that may be used for controlling the robot. This model may be actively learned through exploration of the joint area; however, this should be carried out in a safe manner so that the movement limits of the joints are not exceeded, as a result of which the robot could be damaged. Another application is, for example, the learning of a dynamic model that is used as a substitute for a physical sensor. The data for learning this model may be actively generated and measured through exploration on the physical system. A safe exploration is essential in this case, since a measurement in an unsafe region may damage the physical system. Another application is, for example, the learning of the behavior of a chemical reaction, in which the safety requirements may relate to parameters such as temperature, pressure, acidity or the like.
Exemplary embodiments of the present invention are shown in the figures and are explained in greater detail below.
The approximation of an unknown function $f: X \subset \mathbb{R}^d \to Y \subset \mathbb{R}$ is to be achieved. In the case of time series models such as, for example, the well-known non-linear exogenous (NX) model, the input area is made up of discrete values, the so-called manipulated variables.
With $x_k$ denoting the input point at time $k$, $x_k = (u_k, u_{k-1}, \ldots, u_{k-\tilde{d}+1})$ is applicable, $(u_k)_k$ being the sequence of manipulated variables with $u_k \in \pi \subset \mathbb{R}$.
The elements $u_k$ are measured on the physical system and need not be equidistant. For reasons of simpler notation, equidistance is assumed by way of example. In general, the control curves are continuous signals and may be explicitly controlled.
Data in the form of n successive curve sections $D_n^f = \{\tau_i, \rho_i\}_{i=1}^{n}$ are observed in the learning environment of the model, the input curve $\tau_i$ being a matrix made up of m input points of dimension d, i.e., $\tau_i = (x_1^i, \ldots, x_m^i) \in \mathbb{R}^{d \times m}$. Output curve $\rho_i$ includes the m corresponding output measurements, i.e., $\rho_i = (y_1^i, \ldots, y_m^i) \in \mathbb{R}^{m}$.
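By way of illustration only, the assembly of the NX input vectors and of one curve section may be sketched in Python as follows; the function name, the control signal and the placeholder measurement are illustrative assumptions and not part of the method itself.

```python
import numpy as np

def nx_inputs(u, d_tilde):
    """Build NX regressors x_k = (u_k, u_{k-1}, ..., u_{k-d_tilde+1})
    from a discretized control signal u (1-D array).
    Row j corresponds to the j-th time step at which a full regressor exists."""
    m = len(u) - d_tilde + 1
    X = np.empty((m, d_tilde))
    for j in range(m):
        # most recent control value first, then its d_tilde - 1 predecessors
        X[j] = u[j + d_tilde - 1::-1][:d_tilde]
    return X

# One curve section: tau_i collects m input points of dimension d = d_tilde
# (stored here row-wise), rho_i the m corresponding output measurements.
u_section = np.linspace(0.0, 1.0, 12)      # illustrative control values
tau_i = nx_inputs(u_section, d_tilde=3)    # shape (m, d)
rho_i = np.sin(tau_i.sum(axis=1))          # placeholder for measured outputs
```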
The next curve section $\tau_{n+1}$ to be input as stimulation into the physical system is to be determined in such a way that the information gain of $D_{n+1}^f$ with respect to the modelling of f is increased, while taking safety conditions into consideration.
As an approximation of the function f, a Gaussian process (hereinafter abbreviated as GP) is used, which is established by its mean value function $\mu(x)$ and its covariance function $k(x_i, x_j)$, i.e., $f(x_i) \sim \mathcal{GP}(\mu(x_i), k(x_i, x_j))$.
Assuming noisy observations of the input and output curves, the shared distribution according to the Gaussian process is given as $p(P_n \mid T_n) = \mathcal{N}(P_n \mid 0, K_n + \sigma^2 I)$, $P_n \in \mathbb{R}^{n \cdot m}$ being a vector that concatenates the output curves, and $T_n \in \mathbb{R}^{n \cdot m \times d}$ being a matrix containing the input curves. The covariance matrix is represented by $K_n \in \mathbb{R}^{n \cdot m \times n \cdot m}$. As an illustration, a Gaussian kernel is used as the covariance function, i.e., $k(x_i, x_j) = \sigma_f^2 \exp\!\left(-\tfrac{1}{2}(x_i - x_j)^T \Lambda_f^2 (x_i - x_j)\right)$, which is parameterized by $\theta_f = (\sigma_f^2, \Lambda_f^2)$. A zero vector $0 \in \mathbb{R}^{n \cdot m}$ is also assumed as the mean value, $I$ being the $nm$-dimensional identity matrix and $\sigma^2$ the output noise variance.
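By way of illustration, the Gaussian kernel and the covariance of the shared distribution may be sketched as follows, assuming that $\Lambda_f$ is a diagonal matrix whose diagonal entries are given by the vector lam_f; all names and numerical values are illustrative.

```python
import numpy as np

def gaussian_kernel(A, B, sigma_f2, lam_f):
    """Gaussian covariance function k(x_i, x_j) = sigma_f^2 *
    exp(-1/2 (x_i - x_j)^T Lambda_f^2 (x_i - x_j)), with Lambda_f = diag(lam_f).
    A: (p, d) array, B: (q, d) array of input points; returns a (p, q) matrix."""
    diff = (A[:, None, :] - B[None, :, :]) * lam_f
    return sigma_f2 * np.exp(-0.5 * np.sum(diff**2, axis=-1))

# Covariance of the shared distribution p(P_n | T_n) = N(P_n | 0, K_n + sigma^2 I),
# here for illustrative stacked input points T_n.
T_n = np.random.rand(20, 3)                     # n*m = 20 points of dimension d = 3
lam_f = np.array([2.0, 2.0, 2.0])               # diagonal of Lambda_f
K_n = gaussian_kernel(T_n, T_n, sigma_f2=1.0, lam_f=lam_f)
sigma2 = 0.01                                   # output noise variance
joint_cov = K_n + sigma2 * np.eye(len(T_n))
```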
Under the given shared distribution, the forecast distribution $p(\rho^* \mid \tau^*, D_n^f)$ for a new curve section $\tau^*$ may be expressed as

$p(\rho^* \mid \tau^*, D_n^f) = \mathcal{N}(\rho^* \mid \mu(\tau^*), \Sigma(\tau^*))$, [equation A]

where

$\mu(\tau^*) = k(\tau^*, T_n)^T (K_n + \sigma^2 I)^{-1} P_n$ [equation B]

and

$\Sigma(\tau^*) = k^{**}(\tau^*, \tau^*) - k(\tau^*, T_n)^T (K_n + \sigma^2 I)^{-1} k(\tau^*, T_n)$, [equation C]

$k^{**} \in \mathbb{R}^{m \times m}$ being a matrix with $k_{ij}^{**} = k(x_i^*, x_j^*)$. The matrix $k(\tau^*, T_n) \in \mathbb{R}^{n \cdot m \times m}$ further contains the kernel evaluations of $\tau^*$ with respect to the previous n input curves. Since the covariance matrix is completely filled, input points x correlate both within a curve section and across different curves, and these correlations are utilized for planning the next curve. Since the matrix $K_n + \sigma^2 I$ potentially has a high dimension $n \cdot m$, its inversion may be time-consuming, so that GP approximation techniques may be used.
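A minimal sketch of equations B and C, using a Cholesky factorization instead of forming the inverse of $K_n + \sigma^2 I$ explicitly; the kernel argument is assumed to be a covariance function such as the one sketched above, and all names are illustrative.

```python
import numpy as np

def gp_forecast(tau_star, T_n, P_n, sigma2, kernel):
    """Forecast distribution p(rho* | tau*, D_n^f) = N(mu(tau*), Sigma(tau*))
    according to equations A to C."""
    K_n = kernel(T_n, T_n)             # (n*m, n*m) covariance of the observed inputs
    K_s = kernel(T_n, tau_star)        # k(tau*, T_n), shape (n*m, m)
    K_ss = kernel(tau_star, tau_star)  # k**, shape (m, m)
    L = np.linalg.cholesky(K_n + sigma2 * np.eye(len(T_n)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, P_n))  # (K_n + sigma^2 I)^-1 P_n
    V = np.linalg.solve(L, K_s)
    mu = K_s.T @ alpha                 # equation B
    Sigma = K_ss - V.T @ V             # equation C
    return mu, Sigma
```

For large $n \cdot m$, the Cholesky factorization itself scales cubically, which is one reason the GP approximation techniques mentioned above may be preferable.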
The safety status of the system is described by an unknown function g, with $g: X \subset \mathbb{R}^d \to Z \subset \mathbb{R}$, which assigns to each input point x a safety value z, which serves as a safety indicator. Values z are determined using pieces of information from the system, and are configured in such a way that for all values of z that are greater than or equal to zero, the corresponding input point x is considered safe.
Such safety values z are a function of the respective system and may, as explained above, embody system-dependent values for safe or unsafe pressure values, exhaust values, consumption values, power values, joint position values, movement limits, sensor values, temperature values, acidity values or the like.
The values of z are generally continuous and indicate the distance of a given point x from the unknown safety limit in the input area. Thus, the safety level for a curve τ may be ascertained with the given function g or with an estimation thereof. A curve is classified as safe if the probability that its safety values z are greater than zero is sufficiently great. Since g is unknown, it is likewise approximated using a Gaussian process, so that for a new curve section $\tau^*$ the forecast distribution is
$p(\zeta^* \mid \tau^*, D_n^g) = \mathcal{N}(\zeta^* \mid \mu_g(\tau^*), \Sigma_g(\tau^*))$, [equation D]
$\mu_g(\tau^*)$ and $\Sigma_g(\tau^*)$ being the corresponding mean value and covariance. The variables $\mu_g$ and $\Sigma_g$ are calculated as shown in equations B and C, however, with $Z_n \in \mathbb{R}^{n \cdot m}$ as the target vector, which concatenates all $\zeta_i$. By using a GP for approximating g, the safety condition $\xi(\tau)$ may be calculated for a curve τ as follows:
$\xi(\tau) = \int_{z_1 \geq 0, \ldots, z_m \geq 0} \mathcal{N}(\zeta \mid \mu_g(\tau), \Sigma_g(\tau)) \, d\zeta$ [equation E]
The calculation of ξ(τ) is generally difficult to carry out analytically, and thus an approximation may be used such as, for example, a Monte-Carlo simulation or expectation propagation.
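A simple Monte-Carlo approximation of ξ(τ), sketched here with illustrative names, draws samples from the forecast distribution of equation D and counts the fraction in which all safety values are non-negative.

```python
import numpy as np

def xi_monte_carlo(mu_g, Sigma_g, n_samples=10_000, rng=None):
    """Monte-Carlo approximation of xi(tau) according to equation E:
    the probability that all m safety values z_1, ..., z_m of the curve,
    drawn from N(mu_g(tau), Sigma_g(tau)), are greater than or equal to zero."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.multivariate_normal(mu_g, Sigma_g, size=n_samples)
    return float(np.mean(np.all(samples >= 0.0, axis=1)))
```

A curve section would then be treated as safe if the estimated ξ exceeds 1 − α, as required by equation G below.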
For the efficient selection of an optimal τ, the curve must be parameterized in a suitable manner. One possibility is to perform the parameterization already in the input area. The parameterization of the curve may be implemented, for example, as ramp functions or step functions.
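As an illustrative sketch of such a parameterization, one curve section may be described by a single parameter η, for example the end value of a ramp that starts at the last applied control value; the function name is an assumption for illustration.

```python
import numpy as np

def ramp_section(u_last, eta, m):
    """Parameterize one curve section as a ramp in the control variable:
    m control values changing linearly from the last applied value u_last
    to the target value eta, which is the free parameter to be optimized."""
    return np.linspace(u_last, eta, m)
```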
For a curve parameterization with a forecast distribution according to equation A and safety conditions according to equation E, the next curve section $\tau_{n+1}(\eta^*)$ may be obtained by solving the following optimization problem with secondary conditions:
$\eta^* = \arg\max_{\eta \in \pi} J(\Sigma(\eta))$ [equation F]

so that $\xi(\eta) > 1 - \alpha$, [equation G]

$\eta \in \pi$ representing the curve parameterization and J an optimality criterion.
According to equation F, the predictive variance Σ from equation A is used for the exploration. This is a covariance matrix, which is mapped onto a real number by the optimality criterion J, as shown in equation F. Different optimality criteria may be used for J as a function of the system. Thus, J may, for example, be the determinant, which is equivalent to maximizing the volume of the forecast confidence ellipsoid of the multivariate normal distribution, the trace, which is equivalent to maximizing the average forecast variance, or the maximum eigenvalue, which is equivalent to maximizing the largest axis of the forecast confidence ellipsoid. However, other optimality criteria are also conceivable.
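The three optimality criteria mentioned above may be written down directly; the following sketch maps a forecast covariance matrix Σ onto a real number (the function name is illustrative, and in practice the log-determinant is often preferred for numerical reasons).

```python
import numpy as np

def optimality_criterion(Sigma, kind="determinant"):
    """Map the forecast covariance matrix Sigma onto a real number J(Sigma)."""
    if kind == "determinant":       # volume of the forecast confidence ellipsoid
        return np.linalg.det(Sigma)
    if kind == "trace":             # proportional to the average forecast variance
        return np.trace(Sigma)
    if kind == "max_eigenvalue":    # largest axis of the forecast confidence ellipsoid
        return float(np.linalg.eigvalsh(Sigma).max())
    raise ValueError(f"unknown criterion: {kind}")
```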
Referring to the figures, a first exemplary sequence of the method proceeds as follows.
A new curve section $\tau_{n+1}$ is subsequently determined in step 160 according to equations F and G by optimizing η.
The determined curve section $\tau_{n+1}$ is subsequently used as input in step 170, and $\rho_{n+1}$ and $\zeta_{n+1}$ are measured in this area on the physical system.
The regression and safety models are then updated in step 150. Regression model f is updated according to equation A using $D_n^f = \{\tau_i, \rho_i\}_{i=1}^{n}$, and safety model g is updated according to equation D using $D_n^g = \{\tau_i, \zeta_i\}_{i=1}^{n}$.
Steps 150 through 170 in this case are passed through N times. In addition to a previously established number of passes, an automatic ending after a termination condition is reached is also possible. This could be based, for example, on the training error (an error metric between model prediction and system response) or on the additional potential information gain (ending when the optimality criterion becomes too small).
Subsequently, the regression model and the safety model are output in step 190.
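Purely as an illustration, one pass of steps 150 through 170 may be sketched as follows, assuming the illustrative helpers introduced above (nx_inputs, gaussian_kernel, gp_forecast, xi_monte_carlo, ramp_section, optimality_criterion), a simple grid search over the curve parameter η, and, for simplicity, the same kernel hyperparameters for the regression and safety models; this is one possible realization, not the method as claimed.

```python
import numpy as np

def one_pass(T_n, P_n, Z_n, u_last, eta_grid, m, d_tilde, sigma2, alpha, lam_f):
    """Select the next curve section (steps 150/160), to be applied and measured
    on the physical system in step 170. Returns the chosen parameter eta* or
    None if no candidate satisfies the safety condition of equation G."""
    kernel = lambda A, B: gaussian_kernel(A, B, sigma_f2=1.0, lam_f=lam_f)
    best_eta, best_J = None, -np.inf
    for eta in eta_grid:                                    # candidate parameterizations
        u_cand = ramp_section(u_last, eta, m + d_tilde - 1)
        tau = nx_inputs(u_cand, d_tilde)                    # candidate curve section
        _, Sigma = gp_forecast(tau, T_n, P_n, sigma2, kernel)       # regression model f
        mu_g, Sigma_g = gp_forecast(tau, T_n, Z_n, sigma2, kernel)  # safety model g
        if xi_monte_carlo(mu_g, Sigma_g) > 1.0 - alpha:     # safety condition, equation G
            J = optimality_criterion(Sigma, "determinant")  # exploration criterion, eq. F
            if J > best_J:
                best_eta, best_J = eta, J
    return best_eta
```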
Referring to the figures, a second exemplary sequence of the method proceeds as follows.
The part of the method encompassing steps 240 through 280 is carried out N times, k being the control variable, i.e., indicating the instantaneous pass.
Regression model f according to equation A is first updated in step 240 using $D_k^f = \{\tau_i, \rho_i\}_{i=1}^{k}$. In step 250, safety model g according to equation D is updated using $D_k^g = \{\tau_i, \zeta_i\}_{i=1}^{k}$. In the first pass of steps 240 through 280, steps 240 and 250 may be omitted.
A new curve section τn+1 is subsequently determined in step 260 according to equations F and G by optimizing η.
The determined curve section $\tau_{n+1}$ is subsequently used as input in step 270, and $\rho_{n+1}$ and $\zeta_{n+1}$ are measured in this area on the physical system.
The input and output curves obtained in the preceding steps are then added to $D_{k-1}^f$ and $D_{k-1}^g$ in step 280.
After the repetitions of steps 240 through 280 are completed, step 290 follows, in which the regression model and the safety model are updated and output.
The incremental updating of the GP model for new data, i.e., step 150, respectively steps 240 and 250, may be carried out efficiently, for example, by a rank-one update of the matrix. An NX structure is shown here by way of example in combination with the GP model for time series modelling; however, the general non-linear autoregressive exogenous case may also be used, i.e., a GP with NARX input structure, where $x_k = (y_k, y_{k-1}, \ldots, y_{k-q}, u_k, u_{k-1}, \ldots, u_{k-d})$. In this case, the forecast mean value of $p(\rho \mid \tau, D_n^f)$, for example, may be used as a substitute for $y_k$ for optimizing and planning the next curve section. The input stimulation of the system is nevertheless carried out via the manipulated variable $u_k$ in the case of NARX.
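For completeness, the NARX regressor construction mentioned above may be sketched as follows (illustrative names); during planning, the forecast mean values may be substituted for the not-yet-measured output values y, as stated above.

```python
import numpy as np

def narx_inputs(y, u, q, d):
    """Build NARX regressors x_k = (y_k, y_{k-1}, ..., y_{k-q},
    u_k, u_{k-1}, ..., u_{k-d}) from output values y (measured or forecast)
    and control values u; both are 1-D arrays of equal length."""
    start = max(q, d)
    rows = [np.concatenate([y[k - q:k + 1][::-1], u[k - d:k + 1][::-1]])
            for k in range(start, len(u))]
    return np.asarray(rows)
```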