1. Field of the Invention
This invention generally pertains to systems having states, and in particular to methods for determining a sequence of actions for such systems.
2. Discussion of the Related Art
A generalized method and arrangement for determining a sequence of actions for a system having states, wherein a transition in state between two states is performed on the basis of an action, is discussed by Neuneier in “Enhancing Q-Learning for Optimal Asset Allocation”, appearing in the Proceedings of the Neural Information Processing Systems, NIPS 1997. Neuneier describes a financial market as an example of a system which has states. His system is described as a Markov Decision Problem (MDP).
The characteristics of a Markov Decision Problem are represented below by way of summary:
X set of possible states of the system, e.g. X=ℝm,
A(xt) set of possible actions in the state xt,
p(xt+1|xt, at) transition probability from the state xt into the subsequent state xt+1 under the action at,
r(xt, at, xt+1) gain with expectation R(xt, at).
Starting from observable variables, denoted below as training data, the aim is to determine a strategy, that is to say a sequence of functions
π={μ0, μ1, . . . , μT}, (3)
which at each instant t map each state xt onto an action at in accordance with the action rule
μt(xt)=at (4)
Such a strategy is evaluated by an optimization function.
The optimization function specifies the expectation of the gains accumulated over time for a given strategy π and a start state x0.
The so-called Q-learning method is described by Neuneier as an example of a method of approximative dynamic programming.
An optimum evaluation function V*(x) is defined by
V*(x)=maxπ Vπ(x) ∀x εX (5)
where
γ denoting a prescribable reduction factor which is formed from a prescribable variable zεℝ+. (8)
A Q-evaluation function Q*(xt,at) is formed within the Q-learning method for each pair (state xt, action at) in accordance with the following rule:
On the basis of the respective tuple (xt, xt+1, at, rt), the Q-values Q*(x, a) are adapted in the (k+1)th iteration with a prescribed learning rate ηk in accordance with the following learning rule:
Usually, the so-called Q-values Q*(x,a) are approximated for various actions by a function approximator in each case, for example a neural network or a polynomial classifier, with a weighting vector wa, which contains weights of the function approximator.
A function approximator is, for example, a neural network, a polynomial classifier or a combination of a neural network with a polynomial classifier.
It therefore holds that:
Q*(x, a)≈Q(x; wa). (11)
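As a non-authoritative illustration of the approximation Q*(x, a)≈Q(x; wa), the following Python sketch keeps one weight vector wa per action and evaluates a linear function approximator; the feature map and all dimensions are assumptions made purely for this example.

```python
import numpy as np

# Minimal sketch of per-action function approximators Q(x; w_a), assuming a
# simple linear model; a neural network or polynomial classifier could be
# substituted without changing the interface.
def features(x):
    """Map a raw state vector x to a feature vector (here x plus a bias term)."""
    return np.append(np.asarray(x, dtype=float), 1.0)

class QApproximator:
    def __init__(self, state_dim, actions):
        # One weight vector w_a per action a, so Q*(x, a) is approximated by Q(x; w_a).
        self.weights = {a: np.zeros(state_dim + 1) for a in actions}

    def value(self, x, a):
        return float(self.weights[a] @ features(x))

    def greedy_action(self, x):
        # Select the action whose approximated Q-value is largest.
        return max(self.weights, key=lambda a: self.value(x, a))

q = QApproximator(state_dim=3, actions=["accept", "refuse"])
print(q.value([0.2, 0.5, 0.1], "accept"), q.greedy_action([0.2, 0.5, 0.1]))
```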
Changes in the weights in the weighting vector wa are based on a temporal difference dt which is formed in accordance with the following rule:
The following adaptation rule for the weights of the neural network, which are included in the weighting vector wa, follows for the Q-learning method with the use of a neural network:
The neural network representing the system of a financial market as described by Neuneier is trained using the training data which describe information on changes in prices on a financial market as time series values.
A further method of approximative dynamic programming is the so-called TD(λ) learning method. This method is discussed in R. S. Sutton, "Learning to Predict by the Methods of Temporal Differences", appearing in Machine Learning, Vol. 3, pages 9–44, 1988.
Furthermore, it is known from M. Heger, "Risk and Reinforcement Learning: Concepts and Dynamic Programming", ZKW Bericht No. 8/94, Zentrum für Kognitionswissenschaften [Center for Cognitive Sciences], Bremen University, December 1994, that a risk is associated with a strategy π and an initial state xt. A method for risk avoidance is also discussed by Heger, cited above.
The following optimization function, which is also referred to as an expanded Q-function Qπ(xt, at), is used in the Heger method:
The expanded Q-function Qπ(xt, at) describes the worst case if the action at is executed in the state xt and the strategy π is followed thereupon.
The optimization function Qπ(xt, at) is then given by the following rule:
A substantial disadvantage of this procedure is that only the worst case is taken into account when finding the strategy. However, this inadequately reflects the requirements of a wide variety of technical systems.
In “Dynamic Programming and Optimal Control”, Athena Scientific, Belmont, Mass., 1995, D.P. Bertsekas formulates access control for a communications network and routing within the communications network as a problem of dynamic programming.
Therefore, the present invention is based on the problem of specifying a method and a system for determining a sequence of actions which achieve increased flexibility in determining the required strategy.
In a method for computer-aided determination of a sequence of actions for a system which has states, a transition in state between two states being performed on the basis of an action, the determination of the sequence of actions is performed in such a way that a sequence of states resulting from the sequence of actions is optimized with regard to a prescribed optimization function, the optimization function including a variable parameter with the aid of which it is possible to set a risk which the resulting sequence of states has with respect to a prescribed state of the system.
A system for determining a sequence of actions for a system which has states, a transition in state between two states being performed on the basis of an action, has a processor which is set up in such a way that the determination of the sequence of actions can be performed in such a way that a sequence of states resulting from the sequence of actions is optimized with regard to a prescribed optimization function, the optimization function including a variable parameter with the aid of which it is possible to set a risk which the resulting sequence of states has with respect to a prescribed state of the system.
Thus, the present invention offers a method for determining a sequence of actions at a freely prescribable level of accuracy when finding a strategy for a possible closed-loop control or open-loop control of the system or, in general, for influencing it. Hence, the embodiments described below are valid both for the method and for the system.
Approximative dynamic programming is used for the purpose of determination, for example a method based on Q-learning or a method based on TD(λ)-learning.
Within Q-learning, the optimization function OFQ is preferably formed in accordance with the following rule:
OFQ=Q(x; wa),
x denoting a state in a state space X
a denoting an action from an action space A, and
wa denoting the weights of a function approximator which belong to the action a.
The following adaptation step is executed during Q-learning in order to determine the optimum weights wa of the function approximator:
wt+1a=wta+ηt·κ(dt)·∇Q(xt; wta)
with the abbreviation
dt=r(xt, at, xt+1)+γ·maxaεA Q(xt+1; wta)−Q(xt; wta),
xt, xt+1 respectively denoting a state in the state space X,
at denoting an action from an action space A,
γ denoting a prescribable reduction factor,
wta denoting the weighting vector before the adaptation step,
wt+1a denoting the weighting vector after the adaptation step,
ηt(t=1, . . . ) denoting a prescribable step size sequence,
κε[−1; 1] denoting a risk monitoring parameter,
κ denoting a risk monitoring function κ (ξ)=(1−κsign(ξ))ξ,
∇Q(;) denoting the derivative of the function approximator with respect to its weights, and
r(xt, at, xt+1) denoting a gain upon the transition of state from the state xt to the subsequent state xt+1.
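Purely as an illustrative sketch, the adaptation step listed above can be written out in Python for the special case of a linear function approximator; the concrete update, the feature map and all numerical values are assumptions of this example, not a definitive implementation.

```python
import numpy as np

def kappa_fn(xi, kappa):
    """Risk monitoring function kappa(xi) = (1 - kappa*sign(xi)) * xi, kappa in [-1, 1]."""
    return (1.0 - kappa * np.sign(xi)) * xi

def q_learning_step(w, x_t, a_t, r_t, x_next, actions, gamma, eta, kappa, feat):
    """One risk-sensitive Q-learning adaptation step for the weights w[a_t].

    w assigns each action a its weight vector w_a of a linear approximator
    Q(x; w_a) = w_a . feat(x); feat is an assumed feature map.
    """
    q_next = max(w[a] @ feat(x_next) for a in actions)       # max over Q(x_{t+1}; w_a)
    d_t = r_t + gamma * q_next - w[a_t] @ feat(x_t)          # temporal difference d_t
    grad = feat(x_t)                                         # gradient of a linear Q w.r.t. w_{a_t}
    w[a_t] = w[a_t] + eta * kappa_fn(d_t, kappa) * grad      # weight adaptation, risk-weighted
    return w

feat = lambda x: np.append(np.asarray(x, float), 1.0)
w = {a: np.zeros(3) for a in (0, 1)}
w = q_learning_step(w, [0.4, 0.7], 0, 1.0, [0.5, 0.6], (0, 1),
                    gamma=0.9, eta=0.1, kappa=0.5, feat=feat)
print(w[0])
```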
The optimization function is preferably formed in accordance with the following rule within the TD(λ)-learning method:
OFTD=J(x;w)
x denoting a state in a state space X,
a denoting an action from an action space A, and
w denoting the weights of a function approximator.
The following adaptation step is executed during TD(λ)-learning in order to determine the optimum weights w of the function approximator:
wt+1=wt+ηt·κ(dt)·zt
with the abbreviations
dt=r(xt, at, xt+1)+γJ(xt+1; wt)−J(xt; wt),
zt=λ·γ·zt−1+∇J(xt; wt),
z1=0
xt, xt+1 respectively denoting a state in the state space X,
at denoting an action from an action space A,
γ denoting a prescribable reduction factor,
wt denoting the weighting vector before the adaptation step,
wt+1 denoting the weighting vector after the adaptation step,
ηt (t=1, . . . ) denoting a prescribable step size sequence,
κε[−1; 1] denoting a risk monitoring parameter,
κ denoting a risk monitoring function κ(ξ)=(1−κsign(ξ))ξ,
∇J(;) denoting the derivative of the function approximator with respect to its weights, and
r(xt, at, xt+1) denoting a gain upon the transition of state from the state xt to the subsequent state xt+1.
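By way of a hedged illustration, the TD(λ) adaptation step with the risk monitoring function can be sketched as follows for a linear function approximator J(x; w); the feature map and all numbers are assumptions of this example.

```python
import numpy as np

def kappa_fn(xi, kappa):
    """Risk monitoring function kappa(xi) = (1 - kappa*sign(xi)) * xi."""
    return (1.0 - kappa * np.sign(xi)) * xi

def td_lambda_step(w, z, x_t, r_t, x_next, gamma, lam, eta, kappa, feat):
    """One risk-sensitive TD(lambda) step implementing
    d_t = r + gamma*J(x_{t+1}; w) - J(x_t; w),
    z_t = lam*gamma*z_{t-1} + grad J(x_t; w),
    w   = w + eta * kappa(d_t) * z_t,
    for a linear J(x; w) = w . feat(x) (an assumption of this sketch)."""
    phi_t, phi_next = feat(x_t), feat(x_next)
    d_t = r_t + gamma * (w @ phi_next) - (w @ phi_t)   # temporal difference
    z = lam * gamma * z + phi_t                        # eligibility trace; grad of linear J is phi_t
    w = w + eta * kappa_fn(d_t, kappa) * z             # risk-weighted weight adaptation
    return w, z

feat = lambda x: np.append(np.asarray(x, float), 1.0)
w, z = np.zeros(3), np.zeros(3)
w, z = td_lambda_step(w, z, [0.4, 0.7], 1.0, [0.5, 0.6],
                      gamma=0.9, lam=0.8, eta=0.1, kappa=0.5, feat=feat)
print(w, z)
```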
It is an object of the present invention to provide a technical system and method for determining a sequence of actions using measured values.
It is another object of the present invention to provide a technical system and method that can be subjected to open-loop control or closed-loop control with the use of a determined sequence of actions.
It is a further object of the invention to provide a technical system and method modeled as a Markov Decision Problem.
It is an additional object of the invention to provide a technical system and method that can be used in a traffic management system.
It is yet another object of the invention to provide a technical system and method that can be used in a communications system, such that a sequence of actions is used to carry out access control, routing or path allocation.
It is yet a further object of the invention to provide a technical system and method for a financial market modeled by a Markov Decision Problem, wherein a change in an index of stocks, or a change in a rate of exchange on a foreign exchange market, makes it possible to intervene in the market in accordance with a sequence of determined actions.
These and other objects of the invention will be apparent from a careful review of the following detailed description of the preferred embodiments, which is to read in conjunction with a review of the accompanying drawing figures.
The system 201 is in a state xt at an instant t. The state xt can be observed by an observer of the system. On the basis of an action at from the set A(xt) of actions possible in the state xt, atεA(xt), the system makes a transition with a certain probability into a subsequent state xt+1 at a subsequent instant t+1.
As illustrated diagrammatically in the drawing, the observer 200 obtains a gain rt 204
rt=r(xt, at, xt+1) εℝ, (1)
which is a function of the action at 203 and the original state xt at the instant t as well as of the subsequent state xt+1 of the system at the subsequent instant t+1.
The gain rt can assume a positive or negative scalar value, depending on whether the decision leads, with regard to a prescribable criterion, to a positive or a negative system development, for example to an increase in capital stock or to a loss.
In a further time step, the observer 200 of the system 201 decides on the basis of the observable variables 202, 204 of the subsequent state xt+1 in favor of a new action at+1, etc.
A sequence of
State: xtεX
Action: atεA(xt)
Subsequent state: xt+1εX
Gain: rt=r(xt, at, xt+1) εℝ
describes a trajectory of the system which is evaluated by a performance criterion which accumulates the individual gains rt over the instants t. It is assumed by way of simplification in a Markov Decision Problem that the state xt and the action at contain all the information needed to describe the transition probability p(xt+1|·) of the system from the state xt to the subsequent state xt+1.
In formal terms, this means that:
p(xt+1|xt, . . . , x0, at, . . . , a0)=p(xt+1|xt, at). (2)
p(xt+1|xt, at) denotes a transition probability for the subsequent state xt+1 for a given state xt and given action at.
In a Markov Decision Problem, future states of the system 201 are thus not a function of states and actions which lie further in the past than one time step.
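The observer loop described above can be pictured with the following toy simulation; the two-state system, its transition probabilities and its gains are invented solely for illustration and do not stem from the embodiments of the invention.

```python
import random

# Toy Markov Decision Problem for illustration only: two states, two actions.
# p[(x, a)] lists pairs (subsequent state, probability); r is the gain r(x_t, a_t, x_{t+1}).
p = {("s0", "a0"): [("s0", 0.7), ("s1", 0.3)], ("s0", "a1"): [("s1", 1.0)],
     ("s1", "a0"): [("s0", 0.5), ("s1", 0.5)], ("s1", "a1"): [("s1", 1.0)]}
r = lambda x, a, x_next: 1.0 if x_next == "s1" else 0.0

def step(x, a):
    """Sample the subsequent state x_{t+1} from p(x_{t+1} | x_t, a_t) only."""
    states, probs = zip(*p[(x, a)])
    return random.choices(states, weights=probs)[0]

x, trajectory = "s0", []
for t in range(5):
    a = random.choice(["a0", "a1"])                 # the observer decides on an action a_t
    x_next = step(x, a)                             # transition depends only on (x_t, a_t)
    trajectory.append((x, a, x_next, r(x, a, x_next)))
    x = x_next
print(trajectory)
```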
The communications network 300 has a multiplicity of switching units 301a, 301b, . . . , 301i, . . . , 301n, which are interconnected via connections 302a, 302b, . . . , 302j, . . . , 302m. A first terminal 303 is connected to a first switching unit 301a. From the first terminal 303, the first switching unit 301a is sent a request message 304 which requests the reservation of a prescribed bandwidth within the communications network 300 for the purpose of transmitting data, such as video data or text data.
It is determined in the first switching unit 301a, in accordance with a strategy described below, whether the requested bandwidth is available in the communications network 300 on a specified, requested connection in step 305. The request is refused in step 306 if this is not the case. If sufficient bandwidth is available, it is checked in checking step 307 whether the bandwidth can be reserved.
The request is refused in step 308 if this is not the case. Otherwise, the first switching unit 301a selects a route from the first switching unit 301a via further switching units 301i to a second terminal 309 with which the first terminal 303 wishes to communicate, and a connection is initialized in step 310.
The starting point below is a communications network 300 which comprises a set of switching units
N={1, . . . , n, . . . , N} (17)
and a set of physical connections
L={1, . . . , l, . . . , L}, (18)
a physical connection l having a capacity of B(l) bandwidth units.
A set
M={1, . . . , m, . . . , M} (19)
of different types of service m is available, a type of service m being characterized by
a bandwidth requirement b(m),
an average connection time
and
a gain c(m) which is obtained whenever a call request of the corresponding type of service m is accepted.
The gain c(m) is given by the amount of money which a network operator of the communications network 300 bills a subscriber for a connection of the type of service. Clearly, the gain c(m) reflects different priorities, which can be prescribed by the network operator and which he associates with different services.
A physical connection l can simultaneously provide any desired combination of communications connections as long as the bandwidth used for the communications connections does not exceed the bandwidth available overall for the physical connection.
If a new communications connection of type m is requested between a first node i and a second node j (terminals are also denoted as nodes), the requested communications connection can, as represented above, either be accepted or be refused. If the communications connection is accepted, a route is selected from a set of prescribed routes. This selection is denoted as a routing. b(m) bandwidth units are used in the communications connection of type m for each physical connection along the selected route for the duration of the connection.
Thus, during access control, also referred to as call admission control, a route can be selected within the communications network 300 only when the selected route has sufficient bandwidth available. The aim of the access control and of the routing is to maximize a long term gain which is obtained by acceptance of the requested connections.
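The bandwidth constraint underlying this access control can be illustrated by a small feasibility check over the physical connections of a candidate route; the data layout, connection names and numbers below are assumptions of this sketch, not the representation used in the embodiment.

```python
# Illustrative sketch: a route is admissible for a service of type m only if every
# physical connection l on the route still has at least b(m) free bandwidth units.
capacity = {"l1": 10, "l2": 6, "l3": 8}        # B(l): capacity of each physical connection
used = {"l1": 7, "l2": 5, "l3": 2}             # bandwidth units currently in use per connection
bandwidth_req = {"voice": 1, "video": 4}       # b(m): bandwidth requirement per type of service

def route_feasible(route, service):
    """True if the requested type of service fits on every physical connection of the route."""
    return all(used[l] + bandwidth_req[service] <= capacity[l] for l in route)

print(route_feasible(["l1", "l3"], "video"))   # False: l1 has only 3 free bandwidth units
print(route_feasible(["l2", "l3"], "voice"))   # True
```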
At an instant t, the technical system which is the communications network 300 is in a state xt which is described by a list of the routes of the existing connections, the list showing how many connections of which type of service are using the respective routes at the instant t.
Events ω, by means of which a state xt can be transferred into a subsequent state xt+1, are the arrival of new connection request messages or the termination of a connection existing in the communications network 300.
In this embodiment, an action at at an instant t, prompted by a connection request, is the decision as to whether the connection request is to be accepted or refused and, if the connection is accepted, the selection of the route through the communications network 300.
The aim is to determine a sequence of actions, that is to say to learn a strategy which assigns an action to each state xt, in such a way that the following rule is maximized:
E{Σk e−βtk g(xtk, ωk, atk)},
E{.} denoting an expectation,
tk denoting an instant at which a kth event takes place,
g(xtk, ωk, atk) denoting the gain which is associated with the kth event, and
β denoting a reduction factor which evaluates an immediate gain as being more valuable than a gain at instants lying further in the future.
Different implementations of a strategy normally lead to different overall gains G.
The aim is to maximize the expectation of the overall gain G in accordance with a rule J, it being possible to set a risk that the overall gain G of a specific implementation of access control and of a routing strategy falls below this expectation.
The TD(λ)-learning method is used to carry out the access control and the routing.
The following target function is used in this embodiment:
A denoting an action space with a prescribed number of actions which are respectively available in a state xt,
τ denoting a first instant at which a first event ω occurs, and
xt+1 denoting a subsequent state of the system.
An approximated value of the target value J*(xt) is learned and stored by employing a function approximator 400 (compare FIG. 4).
Training data are data previously measured in the communications network 300 and relating to the behavior of the communications network 300 in the case of incoming connection requests 304 and of terminations of connections. This time sequence of states is stored, and these training data are used to train the function approximator 400 in accordance with the learning method described below.
The number of connections of one type of service m in each case on a route of the communications network 300 serves as input variable at each input 401, 402, 403 of the function approximator 400. These input variables are represented symbolically by blocks in the drawing.
One output variable is the approximated target value {tilde over (J)}, which is formed in accordance with the following rule:
The input variables of the component function approximators 510, 520, which are present at the inputs 511, 512, 513 of the first component function approximator 510, or at the inputs 521, 522 and 523 of the second component function approximator 520 are, in turn, respectively a number of types of service of a type m in a physical connection r in each case, symbolized by blocks 514, 515, 516 for the first component function approximator, and 524, 525 and 526 for the second component function approximator 520.
Component output variables 530, 531, 532, 533 are fed to an adder unit 540, and the approximated target variable {tilde over (J)} is formed as output variable of the adder unit.
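The decomposition into component function approximators whose outputs are summed by an adder unit can be sketched as follows; the linear components and the concrete input layout are assumptions made for this example only.

```python
import numpy as np

class ComponentApproximator:
    """One component function approximator; a linear model is assumed for the sketch."""
    def __init__(self, n_inputs):
        self.w = np.zeros(n_inputs + 1)          # weights including a bias term

    def __call__(self, counts):
        # counts: numbers of connections per type of service on the routes seen by this component.
        return float(self.w @ np.append(np.asarray(counts, float), 1.0))

class DecomposedApproximator:
    def __init__(self, components):
        self.components = components

    def value(self, inputs_per_component):
        # Adder unit: the approximated target variable J~ is the sum of the component outputs.
        return sum(c(x) for c, x in zip(self.components, inputs_per_component))

J = DecomposedApproximator([ComponentApproximator(3), ComponentApproximator(3)])
print(J.value([[2, 0, 1], [1, 1, 0]]))
```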
Let it be assumed that the communications network 300 is in the state xtk when a connection request of a type of service m between a node i and a node j arrives.
A list of permitted routes between the nodes i and j is denoted by R(i, j), and a list of all possible routes is denoted by {tilde over (R)}(i, j, xtk), a subset of the routes R(i, j) which could implement the requested connection with regard to the available and requested bandwidth.
For each possible route r, rε{tilde over (R)}(i,j,xtk), an approximated target value {tilde over (J)} is determined for the state which results when the requested connection is set up along the route r. This is illustrated in the drawing figures. That route r* for which this approximated target value is greatest is selected.
A check is made in step 104 as to whether the gain c(m) plus the approximated target value of the state resulting from acceptance along the route r* is smaller than the approximated target value {tilde over (J)}(xtk) of the current state xtk. If this is the case, the connection request 304 is rejected in step 105; otherwise, the connection is accepted and "switched through" to the node j along the selected route r* in step 106.
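The decision procedure just described can be summarized in the following sketch, in which the value function j_tilde and the state bookkeeping apply_route are placeholders assumed for this example; it only mirrors the route selection and the acceptance test, not the embodiment's actual data structures.

```python
def admit_and_route(state, routes, service_gain, j_tilde, apply_route):
    """Sketch of the access-control and routing decision.

    state        : current network state x_tk
    routes       : admissible routes R~(i, j, x_tk) for the requested connection
    service_gain : gain c(m) of the requested type of service
    j_tilde      : approximated target value J~(state)       (placeholder assumption)
    apply_route  : state reached when the connection uses a given route (placeholder assumption)
    Returns the selected route r*, or None if the request is rejected.
    """
    if not routes:
        return None                                           # no route with sufficient bandwidth
    # Select the route r* whose resulting state has the highest approximated target value.
    best = max(routes, key=lambda r: j_tilde(apply_route(state, r)))
    # Reject if the gain plus the value of the resulting state falls short of the current value.
    if service_gain + j_tilde(apply_route(state, best)) < j_tilde(state):
        return None
    return best

# Tiny usage example with stand-in functions (assumptions of this sketch).
j_tilde = lambda s: -sum(s.values())                          # prefer lightly loaded states
apply_route = lambda s, r: {**s, r: s.get(r, 0) + 1}
print(admit_and_route({"r1": 2, "r2": 0}, ["r1", "r2"], 5.0, j_tilde, apply_route))
```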
Weights of the function approximator 400, 500, which are adapted to the training data in the TD(λ)-learning method, are stored in a parameter vector Θ for each instant t, such that optimized access control and optimized routing are achieved.
During the training phase, the weighting parameters are adapted to the training data applied to the function approximator.
A risk parameter κ is defined with the aid of which a desired risk, which the system has with regard to a prescribed state owing to a sequence of actions and states, can be set in accordance with the following rules:
−1≦κ<0: risky learning,
κ=0: neutral learning with regard to the risk,
0<κ<1: risk-avoiding learning,
κ=1: worst-case learning.
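A few evaluated values make the effect of the risk parameter visible: for 0<κ≤1 positive temporal differences are damped and negative ones are amplified (risk-avoiding up to worst-case learning), for κ<0 the opposite holds, and κ=0 leaves the differences unchanged. The short Python sketch below merely tabulates κ(ξ)=(1−κ·sign(ξ))ξ for a few assumed values.

```python
def risk_weight(xi, kappa):
    """kappa(xi) = (1 - kappa*sign(xi)) * xi for a risk parameter kappa in [-1, 1]."""
    sign = (xi > 0) - (xi < 0)
    return (1 - kappa * sign) * xi

for kappa in (-0.5, 0.0, 0.5, 1.0):
    # Positive and negative temporal differences of equal size are weighted asymmetrically.
    print(kappa, risk_weight(+1.0, kappa), risk_weight(-1.0, kappa))
# kappa = 0.5: +1.0 -> 0.5 (damped),  -1.0 -> -1.5 (amplified)  => risk-avoiding learning
# kappa = 1.0: +1.0 -> 0.0,           -1.0 -> -2.0              => worst-case learning
```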
Furthermore, a prescribable parameter 0≦λ≦1 and a step size sequence γk are prescribed in the learning method.
The weighting values of the weighting vector Θ are adapted to the training data on the basis of each event ωtk in accordance with the following adaptation rule:
Θk=Θk−1+γk·κ(dk)·ztk, (28)
in which case
dk=e−β(tk+1−tk)·(g(xtk+1, ωk+1, atk+1)+{tilde over (J)}(xtk+1; Θk−1))−{tilde over (J)}(xtk; Θk−1), (29)
ztk=λ·e−β(tk−tk−1)·ztk−1+∇{tilde over (J)}(xtk; Θk−1), (30)
and
κ(ξ)=(1−κsign(ξ))ξ. (31)
It is assumed that: z1=0.
The function g(xtk, ωk, atk) denotes the immediate gain: it is equal to the gain c(m) whenever the kth event ωk is a connection request of the type of service m which is accepted, and is equal to zero otherwise.
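For illustration, the event-driven adaptation of the parameter vector Θ can be sketched as follows; the exact form of the temporal difference dk follows the reconstruction given above, and the linear model for {tilde over (J)}, the feature map, the common discount interval used for the eligibility trace and all numbers are assumptions of this sketch.

```python
import numpy as np

def kappa_fn(xi, kappa):
    """Risk control function kappa(xi) = (1 - kappa*sign(xi)) * xi."""
    return (1.0 - kappa * np.sign(xi)) * xi

def event_update(theta, z, x_k, x_next, g_next, dt_next, gamma_k, lam, beta, kappa, feat):
    """One event-driven TD(lambda) step for a linear J~(x; theta) (sketch under stated assumptions).

    dt_next : time t_{k+1} - t_k between two events,  beta : reduction factor,
    g_next  : immediate gain associated with the (k+1)th event,  gamma_k : step size.
    """
    disc = np.exp(-beta * dt_next)                               # e^{-beta (t_{k+1} - t_k)}
    d_k = disc * (g_next + theta @ feat(x_next)) - theta @ feat(x_k)
    z = lam * disc * z + feat(x_k)                               # discounted eligibility trace
    theta = theta + gamma_k * kappa_fn(d_k, kappa) * z           # risk-weighted adaptation of Theta
    return theta, z

feat = lambda x: np.append(np.asarray(x, float), 1.0)
theta, z = np.zeros(3), np.zeros(3)
theta, z = event_update(theta, z, [2, 1], [2, 2], g_next=5.0, dt_next=0.3,
                        gamma_k=0.05, lam=0.7, beta=0.1, kappa=0.5, feat=feat)
print(theta)
```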
Thus, as described above, a sequence of actions is determined with regard to a connection request such that a connection request is either rejected or accepted on the basis of an action. The determination is performed taking account of an optimization function in which the risk can be set by means of a risk control parameter κε[−1; 1] in a variable fashion.
A road 600 is shown on which automobiles 601, 602, 603, 604, 605 and 606 are being driven. Conductor loops 610, 611 integrated into the road 600 receive electric signals in a known way and feed the electric signals 615, 616 to a computer 620 via an input/output interface 621. In an analog-to-digital converter 622 connected to the input/output interface 621, the electric signals are digitized into a time series and stored in a memory 623, which is connected by a bus 624 to the analog-to-digital converter 622 and a processor 625. Via the input/output interface 621, a traffic management system 650 is fed control signals 651, by means of which it is possible to set a prescribed speed stipulation 652 in the traffic management system 650, or else further particulars of traffic regulations, which are displayed via the traffic management system 650 to drivers of the vehicles 601, 602, 603, 604, 605 and 606.
The following local state variables are used in this case for the purpose of traffic modeling:
traffic flow rate v,
vehicle density p (p=number of vehicles per kilometer),
traffic flow q (q=number of vehicles per hour, with q=v*p), and
speed restrictions 652 displayed by the traffic management system 650 at an instant in each case.
The local state variables are measured as described above by using the conductor loops 610, 611.
These variables (v(t), p(t), q(t)) therefore represent a state of the technical system of “traffic” at a specific instant t.
In this embodiment, the system is therefore a traffic system which is controlled by using the traffic management system 650, and an extended Q-learning method is described as method of approximative dynamic programming.
The state xt is described by a state vector
x(t)=(v(t), p(t), q(t)) (34)
The action at denotes the speed restriction 652, which is displayed at the instant t by the traffic management system 650. The gain r(xt, at, xt+1) describes the quality of the traffic flow which was measured between the instants t and t+1 by the conductor loops 610 and 611.
In this embodiment, r(xt, at, xt+1) denotes
the average speed of the vehicles in the time interval [t, t+1]
or
the number of vehicles which have passed the conductor loops 610 and 611 in the time interval [t, t+1]
or
the variance of the vehicle speeds in the time interval [t, t+1],
or
a weighted sum of the above variables.
A value of the optimization function OFQ is determined for each possible action at, that is to say for each speed restriction which can be displayed by the traffic management system 650, the optimization function OFQ being realized in each case by a neural network which supplies the estimated value.
This results in a set of evaluation variables for the various actions at in the system state xt. Those actions at for which the maximum evaluation variable OFQ has been determined in the current system state xt are selected in a control phase from the possible actions at, that is to say from the set of the speed restrictions which can be displayed by the traffic management system 650.
In accordance with this embodiment, the adaptation rule, known from the Q-learning method, for calculating the optimization function OFQ is extended by a risk control function κ(.), which takes account of the risk.
In turn, the risk control parameter κ is prescribed in accordance with the strategy from the first exemplary embodiment in the interval of [−1≦κ≦1], and represents the risk which a user wishes to run in the application with regard to the control strategy to be determined.
The following evaluation function OFQ is used in accordance with this exemplary embodiment:
OFQ=Q(x; wa), (35)
x=(v; p; q) denoting a state of the traffic system,
a denoting a speed restriction from the action space A of all speed restrictions which can be displayed by the traffic management system 650, and
wa denoting the weights of the neural network which belong to the speed restriction a.
The following adaptation step is executed in Q-learning in order to determine the optimum weights wa of the neural network:
wt+1a=wta+ηt·κ(dt)·∇Q(xt; wta)
using the abbreviation:
dt=r(xt, at, xt+1)+γ·maxaεA Q(xt+1; wta)−Q(xt; wta),
xt, xt+1 denoting in each case a state of the traffic system in accordance with rule (34),
at denoting an action, that is to say a speed restriction which can be displayed by the traffic management system 650,
γ denoting a prescribable reduction factor,
wta denoting the weighting vector before the adaptation step,
wt+1a denoting the weighting vector after the adaptation step,
ηt(t=1, . . . ) denoting a prescribable step size sequence,
κε[−1; 1] denoting a risk control parameter,
κ denoting a risk control function κ (ξ)=(1−κsign(ξ))ξ,
∇Q(;) denoting the derivative of the neural network with respect to its weights, and
r(xt, at, xt+1) denoting a gain upon the transition in state from the state xt to the subsequent state xt+1.
An action at can be selected at random from the possible actions at during learning. It is not necessary in this case to select the action at which has led to the largest evaluation variable.
The adaptation of the weights has to be performed in such a way that not only is a traffic control achieved which is optimized in terms of the expectation of the optimization function, but account is also taken of the variance of the control results.
This is particularly advantageous since the state vector x(t) models the actual system of traffic only inadequately in some aspects, and so unexpected disturbances can thereby occur. Thus, the dynamics of the traffic, and therefore of its modeling, depend on further factors such as weather, proportion of trucks on the road, proportion of mobile homes, etc., which are not always integrated in the measured variables of the state vector x(t). In addition, it is not always ensured that the road users immediately implement the new speed instructions in accordance with the traffic management system.
A control phase on the real system using the traffic management system takes place in accordance with the following steps:
1. The state xt is measured at the instant t at various points in the traffic system and yields a state vector
x(t):=(v(t), p(t), q(t)).
2. A value of the optimization function is determined for all possible actions at, and that action at with the highest evaluation in the optimization function is selected.
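This control phase can be pictured with the following sketch, in which the per-action neural networks are replaced by arbitrary stand-in evaluators and the displayable speed restrictions and measured values are invented for illustration.

```python
def select_speed_restriction(state, q_per_action):
    """Control phase: evaluate OF_Q = Q(x; w_a) for every displayable speed restriction a
    and select the action with the highest evaluation (stand-in evaluators assumed)."""
    return max(q_per_action, key=lambda a: q_per_action[a](state))

# Stand-in evaluators for three displayable speed restrictions (assumed values only).
q_per_action = {
    80:  lambda x: 0.8 * x["q"] - 0.10 * x["p"],
    100: lambda x: 0.7 * x["q"] - 0.05 * x["p"],
    120: lambda x: 0.6 * x["q"],
}
state = {"v": 90.0, "p": 35.0, "q": 3150.0}      # measured (v, p, q) at the instant t
print(select_speed_restriction(state, q_per_action))
```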
Although modifications and changes may be suggested by those skilled in the art to which this invention pertains, it is the intention of the inventors to embody within the patent warranted hereon all changes and modifications that may reasonably and properly come under the scope of their contribution to the art.