This application is the US National Stage of International Application No. PCT/EP2011/055664 filed Apr. 12, 2011, and claims the benefit thereof. The International Application claims the benefits of German Application No. 10 2010 014 906.3 DE filed Apr. 14, 2010. All of the applications are incorporated by reference herein in their entirety.
The invention relates to a method for computer-aided learning of a recurrent neural network for modeling a dynamic system and to a method for predicting the observables of a dynamic system on the basis of a learned recurrent neural network, and to a corresponding computer program product.
Recurrent neural networks are used nowadays in various fields of application as an appropriate way of modeling the changes over time of a dynamic system, such that a recurrent neural network learned using training data of the dynamic system can accurately predict the observables (observable states) of the system in question. Such recurrent neural networks model, as states of the dynamic system, not only the observables but also unknown hidden states of the dynamic system, wherein generally only a causal information flow, i.e. one proceeding forward in time, between consecutive states is considered. However, dynamic systems are often based on the principle that future predictions concerning observables also play a role in the changes over time of the states of the system. Such dynamic systems are often only inadequately described by known recurrent neural networks.
An object is to create a method for computer-aided learning of a recurrent neural network that will provide better modeling of dynamic systems.
This object is achieved by the independent claims. Developments of the invention are defined in the dependent claims.
The method according to the invention is used for computer-aided learning of a recurrent neural network for modeling a dynamic system which is characterized at respective points in time by an observable vector comprising one or more observables (i.e. observable states of the dynamic system) as entries. This method can be applied to any dynamic systems. It can be used, for example, to model energy price and/or commodity price movements. The method likewise enables any technical system that changes dynamically over time to be modeled on the basis of corresponding observable state variables of the technical system in order thereby to predict observables of the technical system using an appropriately learned network. For example, the method can be usefully employed to model a gas turbine and/or a wind turbine.
The recurrent neural network in the method according to the invention comprises a first subnetwork in the form of a causal network which describes an information flow proceeding forward in time between first state vectors of the dynamic system, wherein a first state vector at a respective point in time comprises one or more first entries which are each assigned to an entry of the observable vector, as well as one or more hidden (i.e. unobservable) states of the dynamic system. In order also to take future changes over time of the dynamic system into account in the recurrent neural network, a second subnetwork in the form of a retro-causal network is provided, wherein the retro-causal network describes an information flow proceeding backward in time between second state vectors of the dynamic system, wherein a second state vector at a respective point in time comprises one or more second entries which are each assigned to an entry of the observable vector, as well as one or more hidden states of the dynamic system. In the recurrent neural network, the observable vector at a respective point in time is determined such that the first entries of the first state vector are combined with the second entries of the second state vector. Finally, the causal and the retro-causal network are learned based on training data containing a sequence of consecutive known observable vectors.
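The combined forward/backward information flow described above can be sketched as follows. This is an illustrative reading, not the claimed implementation: the layout of the state vectors (observable entries first, hidden entries after), the dimensions, and the untrained random transition matrices are assumptions for demonstration; the combination of the entries by addition follows the embodiment described later.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_obs, n_hidden = 6, 2, 3
n_state = n_obs + n_hidden  # observable entries first, hidden entries after

# Illustrative (untrained) transition matrices; in practice these are learned.
A = rng.normal(scale=0.2, size=(n_state, n_state))        # causal network
A_retro = rng.normal(scale=0.2, size=(n_state, n_state))  # retro-causal network

# Causal pass: information flows forward in time between first state vectors.
s = np.zeros((T, n_state))
s[0] = rng.normal(size=n_state)
for t in range(1, T):
    s[t] = A @ np.tanh(s[t - 1])

# Retro-causal pass: information flows backward in time
# between second state vectors.
s_retro = np.zeros((T, n_state))
s_retro[-1] = rng.normal(size=n_state)
for t in range(T - 2, -1, -1):
    s_retro[t] = A_retro @ np.tanh(s_retro[t + 1])

# Observable vector at each time: combination (here, sum) of the
# observable entries of the causal and retro-causal state vectors.
y = s[:, :n_obs] + s_retro[:, :n_obs]
```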
The method according to the invention is characterized in that a dynamic system is described by a recurrent neural network which takes into account both an information flow from the past to the future and an information flow from the future to the past. This enables dynamic systems to be suitably modeled in which the observables at a respective point in time are also influenced by predicted future observable values.
In a particularly preferred embodiment, during learning of the causal and retro-causal network at a respective point in time for which a known observable vector from the training data exists, the first and second entries of the first and second state vectors are corrected using the difference between the observable vector determined in the recurrent neural network and the known observable vector at the respective point in time. The first and second state vectors with the corrected first and second entries then continue to be used for learning. In this way, at a respective point in time so-called teacher forcing is achieved whereby observables determined in the recurrent neural network are always matched to observables according to the training data.
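The teacher-forcing correction can be sketched as below. The text fixes only that the observable entries of both state vectors are corrected using the difference between the network's observable vector and the known one; splitting that residual equally between the two state vectors is an assumption made here so that the corrected sum reproduces the training observable exactly.

```python
import numpy as np

def teacher_force(s, s_retro, y_known, n_obs):
    """Correct the observable (first/second) entries of both state vectors
    using the difference between the network observable and the known
    observable. The equal split of the residual is an assumption."""
    residual = (s[:n_obs] + s_retro[:n_obs]) - y_known
    s_c, sr_c = s.copy(), s_retro.copy()
    s_c[:n_obs] -= residual / 2    # correct first entries
    sr_c[:n_obs] -= residual / 2   # correct second entries
    return s_c, sr_c

s = np.array([0.4, -0.1, 0.7, 0.2])    # 2 observable + 2 hidden entries
s_retro = np.array([0.1, 0.3, -0.5, 0.9])
y_known = np.array([1.0, 0.0])          # observable vector from training data

s_c, sr_c = teacher_force(s, s_retro, y_known, n_obs=2)
```

After the correction, the combined observable matches the training data while the hidden entries are left untouched, and the corrected state vectors continue to be used for learning.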
In another particularly preferred embodiment, the causal and retro-causal networks are learned based on error-back-propagation with shared weights. This method of error-back-propagation with shared weights will be sufficiently familiar to the average person skilled in the art and is frequently used for learning in recurrent neural networks. By using this method, simple and efficient learning of the recurrent neural network is achieved.
In another preferred embodiment of the method according to the invention, in the recurrent neural network the observable vector is determined at a respective point in time such that the respective first and second entries which are assigned to the same entry of the observable vector are added.
In another embodiment of the method according to the invention, during learning of the causal and retro-causal network a target value is determined at a respective point in time for which a known observable vector according to the training data exists, which target value constitutes the difference vector between the observable vector determined in the recurrent neural network and the known observable vector at the respective point in time. Predefined here as the learning optimization target is the minimization of the sum of the absolute values or squared absolute values of the difference vectors at the respective points in time for which a known observable vector from the training data exists. This provides a simple means of ensuring that the recurrent neural network correctly models the dynamics of the system in question.
In another embodiment of the method according to the invention, in the causal network a first state vector at a respective point in time is converted to a first state vector at a subsequent point in time by multiplication by a matrix assigned to the causal network and application of an activation function. In a particularly preferred variant, first the activation function is applied to the state vector at the respective point in time and only subsequently is multiplication by the matrix assigned to the causal network performed. This ensures that observables can be described which are not limited by the value range of the activation function.
In another embodiment of the method according to the invention, in the retro-causal network a second state vector at a respective point in time is converted into a second state vector at a previous point in time by multiplication by a matrix assigned to the retro-causal network and application of an activation function. Once again, first the activation function is preferably applied to the second state vector at the respective point in time and only subsequently is multiplication by the matrix assigned to the retro-causal network performed. This ensures also for the retro-causal network that observables can be described which are not limited by the value range of the activation function.
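The effect of the preferred ordering (activation first, then the matrix) can be seen in a small numeric sketch; the matrix and state values are illustrative only.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])      # illustrative transition matrix
s = np.array([5.0, -5.0])       # current state vector

# Preferred variant: tanh is applied first, then the matrix, so the next
# state (and hence the observables) is not confined to tanh's range (-1, 1).
s_next_pref = A @ np.tanh(s)

# Conventional ordering for comparison: matrix first, then activation,
# which clips every entry of the next state into (-1, 1).
s_next_conv = np.tanh(A @ s)
```

With these values, the preferred ordering yields entries near 2 and -3, outside the activation's value range, whereas the conventional ordering cannot leave (-1, 1).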
In a particularly preferred variant, the above-described activation functions are tanh (hyperbolic tangent) functions, which are frequently used in recurrent neural networks.
In addition to the method described above, the invention comprises a method for predicting observables of a dynamic system whereby the prediction is carried out using a recurrent neural network which is learned using the inventive learning process based on training data comprising known observable vectors of the dynamic system.
The invention additionally relates to a computer program product having program code stored on a machine-readable medium for carrying out the methods described above when the program is run on a computer.
Exemplary embodiments of the invention will now be described in detail with reference to the accompanying drawings in which:
Recurrent neural networks for modeling the behavior over time of a dynamic system are sufficiently known from the prior art. These networks generally comprise a plurality of layers which generally contain a plurality of neurons and can be suitably learned based on training data from known states of the dynamic system such that future states of the dynamic system can be predicted.
A suitably learned recurrent neural network as shown in
enables the known observable vector ydt to be converted into an observable vector which contains not only the entries for the known observables but also entries for the other hidden states which, however, are all set to zero. This matrix
comprises a number of columns corresponding to the number of observables and a number of rows corresponding to the dimension of the state vector sτ. In the upper portion, the matrix forms a square identity matrix and the remaining rows of the matrix contain exclusively zeros. The network in
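Written out, and reconstructed from the verbal description above (the matrix itself is not reproduced in the text): with $n$ observables and a state dimension of $n+h$ (where $h$ is the number of hidden states), the matrix stacks an identity block on top of a zero block, so that

$$
\begin{pmatrix} \mathrm{Id}_{n} \\ 0_{h \times n} \end{pmatrix} y_t^d \;=\; \begin{pmatrix} y_t^d \\ 0 \end{pmatrix},
$$

i.e. the known observables fill the upper entries of the state-sized vector and all hidden entries are set to zero.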
The linking shown in
The learning is based on the following optimization target:
In other words, the matrix A is sought which minimizes the quadratic error, summed over the time instants t−m ≤ τ ≤ t, between observable vectors determined via the network and known observable vectors.
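In symbols, a reconstruction of this target from the verbal description (the equation itself is not reproduced in the text): with $y_\tau$ the observable vector determined via the network and $y_\tau^d$ the known observable vector from the training data,

$$
\min_{A} \;\sum_{\tau=t-m}^{t} \left\lVert y_\tau - y_\tau^d \right\rVert^2 .
$$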
The teacher forcing described above is also employed in the recurrent neural network used in the method according to the invention, but in modified variants which are illustrated in
Using the structure of the network according to
Similarly to the network in
Using the architecture according to
In the preceding, suitable learning of a causal network having an information flow proceeding forward in time was described. The invention is based on the insight that a causal model is not always suitable for describing a dynamic system. In particular, there are dynamic systems which also have a retro-causal information flow in the reverse time direction, from the future to the present. These are dynamic systems whose changes over time are influenced by planning involving the prediction of future observables. For the change over time of a corresponding state vector of the dynamic system, not only preceding state vectors but also predicted future state vectors are therefore taken into account. For example, regarding the market price movements of energy or commodities, the price is determined not only by supply and demand, but also by the planning aspects of the sellers/buyers for the sale/purchase of energy or commodities.
The method according to the invention is based on the concept of modeling a dynamic system such that an information flow is considered not only in the causal direction from the past to the future, but also an information flow in the retro-causal direction from the future to the past. Such an information flow can be implemented by a retro-causal network. Such a network is depicted in
The invention is accordingly based on a combination of a causal network with a retro-causal network, thereby providing a recurrent neural network having an information flow both from the past to the future and from the future to the past. This makes it possible also to model dynamic systems in which predicted future states play a role in the dynamic progression of the states.
Based on the network in
to the state vector sτ or sτ′, teacher forcing is again achieved for each time step τ≦t. In
In order to implement learning according to
The inventive method described in the foregoing has a number of advantages. In particular, dynamic systems can also be learned in which future predicted states of the dynamic system influence the current state. The method can be used for different dynamic systems. For example, the dynamic system can represent the changes over time of energy or more specifically electricity prices and/or commodity prices, wherein various types of energy (e.g. gas, oil) and/or commodities as well as other economic factors such as the conversion of different currencies and share indices can be taken into account as observables. Using a recurrent neural network learned by appropriate training data, suitable predictions concerning future price movements for energy and/or commodities can be made. Another field of application is modeling the dynamic behavior of a technical system. For example, the recurrent neural network according to the invention can be used to predict the observable states of a gas turbine and/or of a wind turbine or also of any other technical systems.
Number | Date | Country | Kind |
---|---|---|---|
10 2010 014 906 | Apr 2010 | DE | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2011/055664 | 4/12/2011 | WO | 00 | 12/12/2012 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2011/128313 | 10/20/2011 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7464061 | Grothmann et al. | Dec 2008 | B2 |
7953683 | Minamino et al. | May 2011 | B2 |
Number | Date | Country |
---|---|---|
102008014126 | Oct 2009 | DE |
Entry |
---|
Caglayan Erdem, Hans Georg Zimmermann, "Segmental Duration Control With Asymmetric Causal and Retro-Causal Neural Networks", Proceedings of the Fourth ISCA Tutorial and Research Workshop on Speech Synthesis (SSW-4), Perthshire, Scotland, Sep. 1, 2001, pp. 1-6. |
Achim F. Muller, Hans Georg Zimmermann, “Symbolic Prosody Modeling by Causal Retro-causal NNs with Variable Context Length”, ICANN '01 Proceedings of the International Conference on Artificial Neural Networks, 2001, pp. 57-64. |
T. G. Barbounis and J. B. Theocharis, “Locally Recurrent Neural Networks for Wind Speed Predictions Using Spatial Correlation”, Information Sciences, vol. 177, 2007, pp. 5775-5797. |
Jovina Roman and Akhtar Jameel, "Backpropagation and Recurrent Neural Networks in Financial Analysis of Multiple Stock Returns", Proceedings of the 29th Annual Hawaii International Conference on System Sciences, 1996, pp. 454-460. |
Dreyfus, G.: Neural Networks: Methodology and Applications, Springer-Verlag, 2005. In the Internet: http://www.springerlink.com/content/978-3-540-22980-3/contents/ or Http://lab.fs.uni-lj.si/lasin/www/teaching/neural/doc/Dreyfus2005.pdf, found on Oct. 21, 2010; Others. |
NeuroSolutions: NeuroSolutions Product Summary, Web-Archive Version: In the Internet on May 29, 2008: http://web.archive.org/web/20080529145212/http://www.neurosolutions.com/products/ns/features.html; Others. |
Mike Schuster et al., "Bidirectional Recurrent Neural Networks" in: IEEE Transactions on Signal Processing, IEEE Service Center, New York, NY, US, vol. 45, No. 11, Nov. 1, 1997, pp. 2673-2681; Magazine. |
Achim F. Müller, Hans Georg Zimmermann, “Symbolic Prosody Modeling by Causal Retro-causal NNs with Variable Context Length”, in: Artificial Neural Networks—ICANN 2001, Jan. 1, 2001, Springer Verlag, pp. 57-64; Book. |
Baldi P. et al., “A Machine Learning Strategy for Protein Analysis”, in: IEEE Intelligent Systems, IEEE, US, vol. 17, No. 2, Mar. 1, 2002, pp. 28-35; Magazine. |
Zimmermann H.-G. et al., "Dynamical Consistent Recurrent Neural Networks", Neural Networks, 2005. Proceedings, 2005 IEEE International Joint Conference on, Montreal, Que., Canada, Jul. 31-Aug. 4, 2005, Piscataway, NJ, USA, IEEE, US, vol. 3, Jul. 31, 2005, pp. 1537-1541; Magazine. |
Caglayan Erdem, Hans-Georg Zimmermann: "Segmental Duration Control With Asymmetric Causal Retro-Causal Neural Networks", http://www.isca-speech.org/archive, Aug. 21, 2001, found on the Internet on Jul. 6, 2011, URL: http://www.isca-speech.org/archive_open/archive_papers/ssw4/ssw4_119.pdf; Others. |
Number | Date | Country | |
---|---|---|---|
20130204815 A1 | Aug 2013 | US |