UNCERTAINTY-DIRECTED TRAINING OF A REINFORCEMENT LEARNING AGENT FOR TACTICAL DECISION-MAKING

Information

  • Patent Application
  • Publication Number: 20230242144
  • Date Filed: April 20, 2020
  • Date Published: August 03, 2023
Abstract
A method of providing a reinforcement learning, RL, agent for decision-making to be used in controlling an autonomous vehicle. The method includes: a plurality of training sessions, in which the RL agent interacts with a first environment including the autonomous vehicle, each training session having a different initial value and yielding a state-action value function Qk(s, a) dependent on state and action; an uncertainty evaluation on the basis of a variability measure for the plurality of state-action value functions evaluated for one or more state-action pairs corresponding to possible decisions by the trained RL agent; additional training, in which the RL agent interacts with a second environment including the autonomous vehicle, wherein the second environment differs from the first environment by an increased exposure to a subset of state-action pairs for which the variability measure indicates a relatively higher uncertainty.
Description
TECHNICAL FIELD

The present disclosure relates to the field of autonomous vehicles and in particular to a method of providing a reinforcement learning agent for decision-making to be used in controlling an autonomous vehicle.


BACKGROUND

The decision-making task of an autonomous vehicle is commonly divided into strategic, tactical, and operational decision-making, also called navigation, guidance and stabilization. In short, tactical decisions refer to high-level, often discrete, decisions, such as when to change lanes on a highway, or whether to stop or go at an intersection. This invention primarily targets the tactical decision-making field.


Tactical decision-making is challenging due to the diverse set of environments the vehicle needs to handle, the interaction with other road users, and the uncertainty associated with sensor information. Manually predicting all situations that can occur and coding a suitable behavior is infeasible. An attractive option is therefore to consider methods based on machine learning to train a decision-making agent.


Conventional decision-making methods are based on predefined rules and implemented as hand-crafted state machines. Other classical methods treat the decision-making task as a motion planning problem. Although these methods are successful in many cases, one drawback is that they are designed for specific driving situations, which makes it hard to scale them to the complexity of real-world driving.


Reinforcement learning (RL) has previously been applied to decision-making for autonomous driving in simulated environments. See for example C. J. Hoel, K. Wolff and L. Laine, “Automated speed and lane change decision making using deep reinforcement learning”, Proceedings of the 21st International Conference on Intelligent Transportation Systems (ITSC), 4-7 Nov. 2018, pp. 2148-2155 [doi:10.1109/ITSC.2018.8569568]. However, agents trained by RL in previous works can only be expected to output rational decisions in situations that are close to the training distribution. Indeed, a fundamental problem with these methods is that no matter what situation the agents are facing, they will always output a decision, with no suggestion or indication of the uncertainty of the decision or of whether the agent has experienced anything similar during its training. If, for example, an agent that was trained for one-way highway driving were deployed in a scenario with oncoming traffic, it would still output decisions, without any warning that these are presumably of much lower quality. A more subtle case of insufficient training is one where an agent that has been exposed to a nominal or normal highway driving environment suddenly faces a speeding driver or an accident that creates standstill traffic.


A precaution that has been taken in view of such shortcomings is comprehensive real-world testing in confined environments and/or with a safety driver on board, combined with successive refinements. Testing and refinement are iterated until the decision-making agent makes an acceptably low number of observed errors and is thus deemed fit for use outside the testing environment. This is onerous, time-consuming and drains resources from other aspects of research and development.


SUMMARY

One objective of the present invention is to make available methods and devices for assessing the need for additional training of a decision-making agent, such as an RL agent. A particular objective is to provide methods and devices for determining the situations on which the additional training of the decision-making agent should focus. Such methods and devices may preferably include a safety criterion that determines whether the trained decision-making agent is confident enough about a given state-action pair (corresponding to a possible decision) or about a given state, so that, in the negative case, the agent can be given additional training aimed at this situation.


These and other objectives are achieved by the invention according to the independent claims. The dependent claims define example embodiments of the invention.


In a first aspect of the invention, there is provided a method of providing an RL agent for decision-making to be used in controlling an autonomous vehicle. The method, which may be performed by processing circuitry arranged in the autonomous vehicle or separate therefrom, comprises K≥2 training sessions, in which the RL agent interacts with a first environment E1 including the autonomous vehicle, wherein each training session has a different initial value and yields a state-action value function Qk(s, a) (k=1, 2, . . . , K) dependent on state s and action a. The K training sessions can be performed sequentially with respect to time, or in parallel with each other. Next follows an uncertainty evaluation on the basis of a variability measure cv(⋅,⋅) for the plurality of state-action value functions evaluated for one or more state-action pairs (ŝl, âl) (l=1, 2, . . . , L, where L≥1) corresponding to possible decisions by the trained RL agent. The method further comprises initiating additional training, either by actually performing this training or by providing instructions as to how the RL agent is to be trained. In the additional training, the RL agent interacts with a second environment E2 including the autonomous vehicle, wherein the second environment differs from the first environment by an increased exposure to a subset of state-action pairs for which the variability measure indicates a relatively higher uncertainty. This subset may be written B = {(ŝl, âl): cv(ŝl, âl) > Cv}, where Cv is a threshold variability which may be predefined or dynamically chosen in any of the ways described below. Introducing the index set LB ⊂ {1, 2, . . . , L}, the subset may equivalently be expressed as B = {(ŝl, âl): l ∈ LB}.


Accordingly, the uncertainty of an lth possible decision to take action âl in state ŝl is assessed on the basis of a measure of the statistical variability of K observations, namely the K state-action value functions evaluated for this state-action pair: Q1(ŝl, âl), Q2(ŝl, âl), . . . , QK(ŝl, âl). The uncertainty is assessed on the basis of the variability measure, that is, either by considering its value without processing or by considering a quantity derived from the variability measure, e.g., after normalization, scaling, combination with other relevant factors etc. The inventors have realized that additional training of the RL agent is particularly efficient if focused on the subset of such possible state-action pairs for which the uncertainty is relatively higher. This is achieved by increasing, in the second environment E2, the exposure to this subset compared to the first environment E1; in particular, the second environment's incidence of states or situations which the state-action pairs in this subset involve may be higher. This makes it possible to provide an RL agent with a desired safety level in a shorter time.
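By way of illustration, the selection of the subset B from ensemble disagreement can be sketched in a few lines of Python. The helper name, the choice of the coefficient of variation as variability measure, and the toy Q values below are assumptions made for the example only and are not prescribed by the disclosure.

```python
import numpy as np

def uncertainty_subset(q_values, threshold):
    """Return the subset B of state-action pairs whose variability measure
    (here: coefficient of variation) exceeds the threshold variability C_v.

    q_values maps a state-action pair (s, a) to the K values
    Q_1(s, a), ..., Q_K(s, a) obtained from the K training sessions.
    """
    subset = []
    for pair, q in q_values.items():
        q = np.asarray(q, dtype=float)
        c_v = q.std() / abs(q.mean())  # standard deviation normalised by the mean
        if c_v > threshold:
            subset.append(pair)
    return subset

# Toy example: three candidate decisions evaluated by K = 5 sub-agents.
q_values = {
    ("S1", "left"):   [0.50, 0.95, 0.40, 0.80, 0.30],      # sub-agents disagree
    ("S1", "remain"): [0.70, 0.71, 0.69, 0.70, 0.72],       # sub-agents agree
    ("S2", "yes"):    [0.300, 0.301, 0.300, 0.299, 0.300],
}
print(uncertainty_subset(q_values, threshold=0.02))  # -> [('S1', 'left')]
```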


It is understood that the exposure to the subset of state-action pairs in the first environment E1 may have been zero, that is, such state-action pairs were altogether missing. Furthermore, the second environment E2 may expose the autonomous vehicle to the subset of state-action pairs only.


As used herein, an “RL agent” may be understood as software instructions implementing a mapping from a state s to an action a. The term “environment” refers to a simulated or real-world environment, in which the autonomous vehicle (or its model/avatar in the case of a simulated environment) operates. A mathematical model of the RL agent's interaction with an “environment” in this sense is given below. A “variability measure” includes any suitable measure for quantifying statistical dispersion, such as a variance, a range of variation, a deviation, a variation coefficient, an entropy etc. Generally, all terms used in the claims are to be construed according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to “a/an/the element, apparatus, component, means, step, etc.” are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.


In one embodiment, the uncertainty evaluation is guided by a preceding traffic sampling in which state-action pairs encountered by the autonomous vehicle are recorded on the basis of at least one physical sensor signal. The physical sensor signal may be provided by a physical sensor in the vehicle so as to detect current conditions of the driving environment or internal states prevailing in the vehicle. The autonomous vehicle may be operating in real-world traffic. The uncertainty evaluation may then relate to the state-action pairs recorded during the traffic sampling. An advantage with this embodiment is that the traffic sampling may help reveal situations that have been overlooked, or not given sufficient weight, in the first environment E1. This may broaden the distribution of situations that the RL agent can handle.


In embodiments where no traffic sampling is performed, the method may instead traverse the complete set of possible state-action pairs and evaluate the uncertainty for each of these by calculating the variability measure. This may be combined with manual or experience-based elimination of state-action pairs that are unlikely to occur in the intended use case and therefore need not be evaluated.


In one embodiment, the first or the second environment, or both, are simulated environments. If the second environment E2 is a simulated environment, it may be generated from the subset B of state-action pairs for which the variability measure indicates a relatively higher uncertainty. The second environment E2 may be generated from the subset B by manually setting up a scenario that will involve these state-action pairs, e.g., by using them as initial conditions or as situations that will deterministically occur as the system evolves (e.g., an avoidable collision). It may be convenient to first determine which states occur, or which states occur most frequently, in the subset B of state-action pairs. With the notation introduced above, the states to be considered can be written SB = {s: s = ŝl for some l ∈ LB}. These states are then used to generate the second environment E2. Furthermore, the second environment E2 may be generated from the state-action pairs in subset B by adding the subset B to the first environment E1.


As stated above, the subset of state-action pairs on which the additional training is to be focused can be expressed in terms of a threshold variability Cv, which gives B = {(ŝl, âl): cv(ŝl, âl) > Cv}. In one embodiment, the threshold variability Cv is predefined and may correspond to the maximum uncertainty that is acceptable for allowing the RL agent to perform a specific decision-making task. For example, if the coefficient of variation is used as the variability measure, then one may set Cv = 0.02. This constitutes an absolute selection criterion. Alternatively, and in particular in the early stages of training an RL agent, the threshold variability may be set dynamically in response to the distribution of the variability for the plurality of state-action pairs evaluated. As an example, one may set Cv equal to the nth percentile of the values of the variability measure, which will cause the worst (in the sense of most uncertain) 100 − n percent of the state-action pairs to have increased exposure in the second environment E2. This may be described as a relative selection criterion, which takes into account the statistics of the variability measure values; a brief sketch of such a percentile-based choice follows.
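A minimal sketch of the relative (percentile-based) selection criterion is shown below; it reuses the cv values that appear later in Table 1, and the function name and percentile value are illustrative assumptions rather than details of the disclosure.

```python
import numpy as np

def dynamic_threshold(cv_values, percentile):
    """Relative selection criterion: choose C_v as the n-th percentile of the
    observed variability values, so that the most uncertain (100 - n) percent
    of the evaluated state-action pairs fall above the threshold."""
    return float(np.percentile(cv_values, percentile))

cv_values = [0.011, 0.015, 0.440, 0.005, 0.006, 0.101, 0.017, 0.026,
             0.034, 0.015, 0.125, 0.033, 0.017, 0.002, 0.009]
C_v = dynamic_threshold(cv_values, percentile=80.0)
flagged = [cv for cv in cv_values if cv > C_v]
# The three most uncertain of the 15 evaluated pairs (20 percent) exceed the threshold.
print(C_v, flagged)
```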


In various embodiments, the RL agent may be implemented by at least one neural network. In particular, K neural networks may be utilized to perform the K training sessions. Each of the K neural networks may be initialized with an independently sampled set of weights.


The invention is not dependent on the specific type of RL agent but can be embodied using a policy-based or value-based RL agent. Specifically, the RL agent may include a policy network and a value network. The RL agent may be obtained by a policy-gradient algorithm, such as an actor-critic algorithm. As another example, the RL agent may be a Q-learning agent, such as a deep Q network (DQN).


In a second aspect of the invention, there is provided an arrangement for controlling an autonomous vehicle. The arrangement, which may correspond to functional or physical components of a computer or distributed computing system, includes processing circuitry and memory which implement an RL agent configured to interact with a first environment including the autonomous vehicle in a plurality of training sessions, each training session having a different initial value and yielding a state-action value function Qk(s, a) dependent on state and action. The processing circuitry and memory furthermore implement a training manager which is configured to

    • estimate an uncertainty on the basis of a variability measure for the plurality of state-action value functions evaluated for one or more state-action pairs corresponding to possible decisions by the trained RL agent, and
    • initiate additional training, in which the RL agent interacts with a second environment including the autonomous vehicle, wherein the second environment differs from the first environment by an increased exposure to a subset of state-action pairs for which the variability measure indicates a relatively higher uncertainty.


      It is emphasized that the arrangement may be located in the autonomous vehicle or separate from it. Further, the inventors envision distributed embodiments where the RL agent and the training manager, which may be in use during initial training stages but is not necessary while the autonomous vehicle operates in traffic, are implemented by respective, physically separate circuitry sections.


In some embodiments, the arrangement may further comprise a vehicle control interface configured to record state-action pairs encountered by the autonomous vehicle on the basis of at least one physical sensor in the autonomous vehicle. The training manager will estimate the uncertainty at least for these recorded state-action pairs.


In a third aspect, the invention provides a computer program for executing the vehicle control method on an arrangement with these characteristics. The computer program may be stored or distributed on a data carrier. As used herein, a “data carrier” may be a transitory data carrier, such as modulated electromagnetic or optical waves, or a non-transitory data carrier. Non-transitory data carriers include volatile and non-volatile memories, such as permanent and non-permanent storages of the magnetic, optical or solid-state type. Still within the scope of “data carrier”, such memories may be fixedly mounted or portable.


The arrangement according to the second aspect of the invention and the computer program according to the third aspect have the same or similar effects and advantages as the method according to the first aspect. The embodiments and further developments described above in method terms are equally applicable to the second and third aspects.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are described, by way of example, with reference to the accompanying drawings, on which:



FIG. 1 is a flowchart of a method according to an embodiment of the invention;



FIG. 2 is a block diagram of an arrangement for controlling an autonomous vehicle according to another embodiment of the invention;



FIG. 3 shows an architecture of a neural network of an RL agent; and



FIG. 4 is a plot of the mean uncertainty of a chosen action over 5 million training steps in an example.





DETAILED DESCRIPTION

The aspects of the present invention will now be described more fully with reference to the accompanying drawings, on which certain embodiments of the invention are shown. These aspects may, however, be embodied in many different forms and the described embodiments should not be construed as limiting; rather, they are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of all aspects of the invention to those skilled in the art.


Reinforcement learning is a subfield of machine learning, where an agent interacts with some environment to learn a policy π(s) that maximizes the future expected return. Reference is made to the textbook R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed., MIT Press (2018).


The policy π(s) defines which action a to take in each state s. When an action is taken, the environment transitions to a new state s′ and the agent receives a reward r. The reinforcement learning problem can be modeled as a Markov Decision Process (MDP), which is defined by the tuple (S, A, T, R, γ), where S is the state space, A is the action space, T is a state transition model (or evolution operator), R is a reward model, and γ is a discount factor. This model can also be considered to represent the RL agent's interaction with the training environment. At every time step t, the goal of the agent is to choose an action a that maximizes the discounted return,







R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}.







In Q-learning, the agent tries to learn the optimal action-value function Q*(s, a), which is defined as








Q^*(s, a) = \max_{\pi} \mathbb{E}\big[\, R_t \mid s_t = s,\ a_t = a,\ \pi \,\big].






From the optimal action-value function, the policy is derived as per







\pi(s) = \operatorname*{argmax}_{a} Q^*(s, a).
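As a small worked example of these two definitions, the snippet below computes a discounted return for a short reward sequence and reads off the greedy policy from tabulated Q values; the numbers are toy values chosen for illustration only.

```python
def discounted_return(rewards, gamma):
    """R_t = sum over k of gamma**k * r_{t+k}, for a finite reward sequence."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

def greedy_policy(q, state):
    """pi(s) = argmax_a Q*(s, a); q[state] maps each action to its Q value."""
    return max(q[state], key=q[state].get)

print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))              # 1.0 + 0.0 + 0.81 * 2.0 = 2.62
print(greedy_policy({"s0": {"left": 0.2, "remain": 0.7}}, "s0"))  # 'remain'
```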






An embodiment of the invention is illustrated by FIG. 1, which is a flowchart of a method 100 for providing an RL agent for decision-making to be used in controlling an autonomous vehicle. In the embodiment illustrated, the method begins with a plurality of training sessions 110-1, 110-2, . . . , 110-K (K≥2), which may be carried out in a simultaneous or at least time-overlapping fashion. In each training session, which has its own initial value, the RL agent interacts with a first environment E1 that includes the autonomous vehicle (or, if the environment is simulated, a model of the vehicle). The kth training session returns a state-action value function Qk(s, a), for any 1≤k≤K, from which a decision-making policy may be derived in the manner described above. Preferably, all K state-action value functions are combined into a combined state-action value function Q̄(s, a), which may represent a central tendency, such as the mean, of the K state-action value functions:








\bar{Q}(s, a) = \frac{1}{K} \sum_{k=1}^{K} Q_k(s, a).
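For concreteness, the combined value Q̄(s, a) and the variability measure used later in the uncertainty evaluation 114 can both be read off the same ensemble, as in this minimal sketch (the closures standing in for trained sub-agents are hypothetical placeholders):

```python
import numpy as np

def ensemble_statistics(q_functions, state, action):
    """Evaluate the K state-action value functions for one pair and return the
    combined value Q_bar(s, a) together with the coefficient of variation c_v(s, a)."""
    q = np.array([qk(state, action) for qk in q_functions], dtype=float)
    q_bar = q.mean()             # combined state-action value
    c_v = q.std() / abs(q_bar)   # variability measure used in the uncertainty evaluation
    return q_bar, c_v

# K = 3 toy sub-agents that nearly agree on the value of this state-action pair.
q_functions = [lambda s, a, v=v: v for v in (0.60, 0.62, 0.58)]
print(ensemble_statistics(q_functions, "s", "a"))  # approximately (0.60, 0.027)
```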







The inventors have realized that the uncertainty of a possible decision corresponding to the state-action pair (ŝ, â) can be estimated on the basis of the variability of the numbers Q1(ŝ, â), Q2(ŝ, â), . . . , QK(ŝ, â). The variability may be measured as the standard deviation, coefficient of variation (i.e., standard deviation normalized by mean), variance, range, mean absolute difference or the like. The variability measure is denoted cv(ŝ, â) in this disclosure, whichever definition is used. Conceptually, a goal of the method 100 is to determine a training set SB of those states for which the RL agent will benefit from additional training:






SB = {s ∈ S: cv(s, a) > Cv for some a ∈ A(s)},


where Cv is the threshold variability and A(s) is the set of possible actions in state s. If the threshold Cv is predefined, its value may represent a desired safety level at which the autonomous vehicle is to be operated. It may have been determined or calibrated by traffic testing and may be based on the frequency of decisions deemed erroneous, collisions, near-collisions, road departures and the like. A possible alternative is to set the threshold dynamically, e.g., in such manner that a predefined percentage of the state-action pairs are to have increased exposure during the additional training.


To determine the need for additional training, the disclosed method 100 includes an uncertainty evaluation 114 of at least some of the RL agent's possible decisions, which can be represented as state-action pairs. One option is to perform a full uncertainty evaluation including also state-action pairs with a relatively low incidence in real traffic. Another option is to perform a partial uncertainty evaluation. To this end, prior to the uncertainty evaluation 114, an optional traffic sampling 112 may be performed, during which the state-action pairs that are encountered in the traffic are recorded. In the notation used above, the collection of recorded state-action pairs may be written {(ŝl, âl) ∈ S × A: 1 ≤ l ≤ L}, where l is an arbitrary index. In the uncertainty evaluation 114, the variability measure cv is computed for each recorded state-action pair, that is, for each l ∈ [1, L]. The training set SB can then be approximated as






ŜB = {s ∈ S: s = ŝl and cv(ŝl, âl) > Cv for some l ∈ [1, L]}.


Recalling that LB, as defined above, is the set of all indices for which the threshold variability Cv is exceeded, the approximate training set is equal to






ŜB = {s ∈ S: s = ŝl for some l ∈ LB}.


The method 100 then concludes with an additional training stage 116, in which the RL agent is caused to interact with a second environment E2 differing from the first environment E1 by an increased exposure to the training set SB, by an increased exposure to the approximate training set ŜB, or by another property promoting the exposure to the state-action pairs (ŝl, âl) that have indices in LB. In some embodiments, it is an RL agent corresponding to the combined state-action value function Q̄(s, a) that is caused to interact with the second environment E2. In other embodiments, the K state-action value functions Qk(s, a), which may be regarded as an equal number of sub-agents, are caused to interact with the second environment E2. Whichever option is chosen, the method 100 is suitable for providing an RL agent adapted to make decisions underlying the control of an autonomous vehicle.


To illustrate the uncertainty evaluation stage 114, an example output when L=15 and the variability measure is the coefficient of variation may be represented as in Table 1.









TABLE 1
Example uncertainty evaluation

  l   (ŝl, âl)        cv(ŝl, âl)
  1   (S1, right)     0.011
  2   (S1, remain)    0.015
  3   (S1, left)      0.440
  4   (S2, yes)       0.005
  5   (S2, no)        0.006
  6   (S3, A71)       0.101
  7   (S3, A72)       0.017
  8   (S3, A73)       0.026
  9   (S3, A74)       0.034
 10   (S3, A75)       0.015
 11   (S3, A76)       0.125
 12   (S3, A77)       0.033
 13   (S4, right)     0.017
 14   (S4, remain)    0.002
 15   (S4, left)      0.009









Here, the sets of possible actions for each state S1, S2, S3, S4 are not known. If it is assumed that the enumeration of state-action pairs for each state is exhaustive, then A(S1) = A(S4) = {right, remain, left}, A(S2) = {yes, no} and A(S3) = {A71, A72, A73, A74, A75, A76, A77}. If the enumeration is not exhaustive, then {right, remain, left} ⊂ A(S1), {yes, no} ⊂ A(S2) and so forth. For an example value of the threshold Cv = 0.020, one obtains LB = {3, 6, 8, 9, 11, 12}. The training set consists of all states for which at least one action belongs to a state-action pair with a variability measure exceeding the threshold, namely, SB = {S1, S3}, which will be the emphasis of the additional training 116.
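The selection just described, as well as the alternative state-level criterion discussed in the next paragraph, can be reproduced from Table 1 with a few lines of Python; the dictionary layout is a hypothetical convenience, not a data structure defined by the patent.

```python
# Entries of Table 1: index l -> ((state, action), cv value).
table = {
     1: (("S1", "right"), 0.011),  2: (("S1", "remain"), 0.015),
     3: (("S1", "left"),  0.440),  4: (("S2", "yes"),    0.005),
     5: (("S2", "no"),    0.006),  6: (("S3", "A71"),    0.101),
     7: (("S3", "A72"),   0.017),  8: (("S3", "A73"),    0.026),
     9: (("S3", "A74"),   0.034), 10: (("S3", "A75"),    0.015),
    11: (("S3", "A76"),   0.125), 12: (("S3", "A77"),    0.033),
    13: (("S4", "right"), 0.017), 14: (("S4", "remain"), 0.002),
    15: (("S4", "left"),  0.009),
}
C_v = 0.020

L_B = sorted(l for l, (_, cv) in table.items() if cv > C_v)
S_B = sorted({table[l][0][0] for l in L_B})
print(L_B)  # [3, 6, 8, 9, 11, 12]
print(S_B)  # ['S1', 'S3']

# Alternative criterion: keep a state only if the mean variability over its actions exceeds C_v.
mean_cv = {}
for s in {state for (state, _), _ in table.values()}:
    cvs = [cv for (state, _), cv in table.values() if state == s]
    mean_cv[s] = sum(cvs) / len(cvs)
print(sorted(s for s, m in mean_cv.items() if m > C_v))  # ['S1', 'S3'] in this example as well
```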


The training set SB can be defined in alternative ways. For example, the training set may be taken to include all states s ∈ S for which the mean variability over the possible actions A(s) exceeds the threshold Cv. This may be a proper choice if it is deemed acceptable for the RL agent to have minor points of uncertainty, as long as the bulk of its decisions is relatively reliable.



FIG. 2 illustrates an arrangement 200 for controlling an autonomous vehicle 299 according to another embodiment of the invention. The autonomous vehicle 299 may be any road vehicle or vehicle combination, including trucks, buses, construction equipment, mining equipment and other heavy equipment operating in public or non-public traffic. The arrangement 200 may be provided, at least partially, in the autonomous vehicle 299. The arrangement 200, or portions thereof, may alternatively be provided as part of a stationary or mobile controller (not shown), which communicates with the vehicle 299 wirelessly.


The arrangement 200 includes processing circuitry 210, a memory 212 and a vehicle control interface 214. The vehicle control interface 214 is configured to control the autonomous vehicle 299 by transmitting wired or wireless signals, directly or via intermediary components, to actuators (not shown) in the vehicle. In a similar fashion, the vehicle control interface 214 may receive signals from physical sensors (not shown) in the vehicle so as to detect current conditions of the driving environment or internal states prevailing in the vehicle 299. The processing circuitry 210 implements an RL agent 220 and a training manager 222 to be described next.


The RL agent 220 interacts with a first environment E1 including the autonomous vehicle 299 in a plurality of training sessions, each training session having a different initial value and yielding a state-action value function dependent on state and action. The RL agent 220 may, at least during the training phase, comprise as many sub-agents as there are training sessions, each sub-agent corresponding to a state-action value function Qk(s, a). The sub-agents may be combined into a joint RL agent, corresponding to the combined state-action value function Q̄(s, a) introduced above, for the purpose of the decision-making. The RL agent 220 according to any of these definitions can be brought to interact with a second environment E2 in which the autonomous vehicle 299 is exposed more intensely to the training set SB or its approximation ŜB, as discussed previously.


The training manager 222 is configured to estimate an uncertainty on the basis of a variability measure for the plurality of state-action value functions evaluated for a state-action pair corresponding to each of the possible decisions by the RL agent. In some embodiments, the training manager 222 does not perform a complete uncertainty estimation. For example, as suggested by the broken-line arrow, the training manager 222 may receive physical sensor data via the vehicle control interface 214 and determine on this basis a collection of state-action pairs to evaluate, {(ŝl, âl) ∈ S × A: 1 ≤ l ≤ L}. The training manager 222 is configured to estimate an uncertainty on the basis of the variability measure cv for the K state-action value functions Qk(s, a) evaluated for these state-action pairs. The state-action pairs found to be associated with a relatively higher value of the variability measure are to be focused on in additional training, which the training manager 222 will initiate.


The thus additionally trained RL agent may be used to control the autonomous vehicle 299, namely by executing decisions made by the RL agent via the vehicle control interface 214.


Returning to the description of the invention from a mathematical viewpoint, an embodiment relies on the DQN algorithm. This algorithm uses a neural network with weights θ to approximate the optimal action-value function as Q*(s, a)≈Q(s, a; θ); see further V. Mnih et al., “Human-level control through deep reinforcement learning”, Nature, vol. 518, pp. 529-533 (2015) [doi:10.1038/nature14236.]. Since the action-value function follows the Bellman equation, the weights can be optimized by minimizing the loss function







L(\theta) = \mathbb{E}\Big[ \big( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \big)^2 \Big].





As explained in Mnih, the loss is calculated for a minibatch M, and the weights θ⁻ of a target network are regularly updated to match the trained weights θ.


The DQN algorithm returns a maximum likelihood estimate of the Q values but gives no information about the uncertainty of the estimation. The risk of an action could be represented as the variance of the return when taking that action. One line of RL research focuses on obtaining an estimate of the uncertainty by statistical bootstrap; an ensemble of models is then trained on different subsets of the available data, and the distribution that is given by the ensemble is used to approximate the uncertainty. A sometimes better-performing Bayesian posterior is obtained if a randomized prior function (RPF) is added to each ensemble member; see for example I. Osband, J. Aslanides and A. Cassirer, “Randomized prior functions for deep reinforcement learning,” in: S. Bengio et al. (eds.), Adv. in Neural Inf. Process. Syst. 31 (2018), pp. 8617-8629. When RPF is used, each individual ensemble member, here indexed by k, estimates the Q values as the sum






Q_k(s, a) = f(s, a; \theta_k) + \beta p(s, a; \hat{\theta}_k),


where f, p are neural networks, with parameters θk that can be trained and further parameters θ̂k that are kept fixed. The factor β can be used to tune the importance of the prior function. When adding the prior, the loss function L(θ) defined above changes into







L(\theta_k) = \mathbb{E}_M\Big[ \big( r + \gamma \max_{a'} \big( f_{\theta_k^{-}} + \beta p_{\hat{\theta}_k} \big)(s', a') - \big( f_{\theta_k} + \beta p_{\hat{\theta}_k} \big)(s, a) \big)^2 \Big].
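To make the sum Qk(s, a) = f(s, a; θk) + βp(s, a; θ̂k) concrete, the following PyTorch sketch shows one ensemble member with a frozen, randomly initialized prior network. Layer sizes, the value of β and the class name are illustrative assumptions, not details taken from the disclosure.

```python
import torch
import torch.nn as nn

class RPFMember(nn.Module):
    """One ensemble member: Q_k(s, a) = f(s, a; theta_k) + beta * p(s, a; theta_hat_k),
    where the prior network p is randomly initialized and never trained."""

    def __init__(self, state_dim, n_actions, beta=3.0, hidden=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))
        self.f = mlp()                     # trainable network, parameters theta_k
        self.p = mlp()                     # prior network, parameters theta_hat_k
        for param in self.p.parameters():  # keep the prior fixed
            param.requires_grad = False
        self.beta = beta

    def forward(self, state):
        # Q values for all actions in the given state(s); the prior contributes no gradient.
        return self.f(state) + self.beta * self.p(state).detach()

member = RPFMember(state_dim=10, n_actions=3)
q_values = member(torch.randn(1, 10))  # shape (1, 3): one Q value per action
```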





The full ensemble RPF method, which was used in this implementation, may be represented in pseudo-code as Algorithm 1:












Algorithm 1 Ensemble RPF training process

for k ← 1 to K
    initialize θk and θ̂k randomly
    mk ← { }
i ← 0
while networks not converged
    si ← initial random state
    k ~ U{1, K}
    while episode not finished
        ai ← argmaxa Qk(si, a)
        si+1, ri ← StepEnvironment(si, ai)
        for k ← 1 to K
            if p ~ U(0, 1) < padd
                mk ← mk ∪ {(si, ai, ri, si+1)}
            M ← sample from mk
            update θk with SGD and loss L(θk)
        i ← i + 1










In the pseudo-code, the function StepEnvironment corresponds to a combination of the reward model R and state transition model T discussed above. The notation k ~ U{1, K} refers to sampling of an integer k from a uniform distribution over the integer range [1, K], and p ~ U(0, 1) denotes sampling of a real number from a uniform distribution over the open interval (0, 1).


Here, an ensemble of K trainable neural networks and K fixed prior networks are first initialized randomly. A replay memory is divided into K parallel buffers mk for the individual ensemble members (although in practice, this can be implemented in a memory-efficient way that uses only negligibly more memory than a single replay memory). To handle exploration, a random ensemble member is chosen for each training episode. Actions are then taken by greedily maximizing the Q value of the chosen ensemble member, which corresponds to a form of approximate Thompson sampling. The new experience (si, ai, ri, si+1) is then added to each ensemble buffer with probability padd. Finally, a minibatch M of experiences is sampled from each ensemble buffer and the trainable network parameters of the corresponding ensemble member are updated by stochastic gradient descent (SGD), using the second definition of the loss function given above.
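Such a training loop might be organized along the following lines; the member and environment interfaces (greedy_action, sgd_update, step_environment) are hypothetical stand-ins for the ensemble networks and the simulated environment, not an API defined by the patent.

```python
import random

def train_ensemble_rpf(members, buffers, step_environment, initial_state,
                       n_steps, p_add=0.5, batch_size=32):
    """Minimal sketch of the ensemble RPF loop of Algorithm 1.

    members:          list of K ensemble members exposing greedy_action(s) and sgd_update(batch)
    buffers:          list of K replay buffers (plain Python lists here)
    step_environment: callable (s, a) -> (s_next, r, done), combining T and R
    initial_state:    callable () -> initial random state
    """
    K = len(members)
    i = 0
    while i < n_steps:                         # stands in for "while networks not converged"
        s = initial_state()
        k = random.randrange(K)                # one member per episode: approximate Thompson sampling
        done = False
        while not done and i < n_steps:
            a = members[k].greedy_action(s)    # a_i <- argmax_a Q_k(s_i, a)
            s_next, r, done = step_environment(s, a)
            for j in range(K):
                if random.random() < p_add:    # add the experience to buffer j with probability p_add
                    buffers[j].append((s, a, r, s_next))
                if len(buffers[j]) >= batch_size:
                    batch = random.sample(buffers[j], batch_size)
                    members[j].sgd_update(batch)   # minimise L(theta_j) on the minibatch M
            s = s_next
            i += 1
```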


The presented ensemble RPF algorithm was trained in a one-way, three-lane highway driving scenario using the Simulation of Urban Mobility (SUMO) traffic simulator. The vehicle to be controlled (ego vehicle) was a 16 m long truck-trailer combination with a maximum speed of 25 m/s. At the beginning of each episode, 25 passenger cars were inserted into the simulation, with a random desired speed in the range 15 to 35 m/s. In order to create interesting traffic situations, slower vehicles were positioned in front of the ego vehicle, and faster vehicles were placed behind the ego vehicle. Each episode was terminated after N=100 timesteps, or earlier if a collision occurred or the ego vehicle drove off the road. The simulation time step was set to Δt=1 s. The passenger vehicles were controlled by the standard SUMO driver model, which consists of an adaptive cruise controller for the longitudinal motion and a lane-change model that makes tactical decisions to overtake slower vehicles. In the scenarios considered here, no strategic decisions were necessary, so the strategic part of the lane-changing model was turned off. Furthermore, in order to make the traffic situations more demanding, the cooperation level of the lane-changing model was set to zero. Overtaking was allowed both on the left and the right side of another vehicle, and each lane change took 4 s to complete. This environment was modeled by defining a corresponding state space S, action space A, state transition model T, and reward model R.



FIG. 3 illustrates the architecture of the neural network used in this embodiment. The architecture includes a temporal convolutional neural network (CNN) structure, which makes the training faster and, at least in some use cases, gives better results than a standard fully connected (FC) architecture. By applying CNN layers and a max pooling layer to the part of the input that describes the surrounding vehicles, the output of the network becomes independent of the ordering of the surrounding vehicles in the input vector, and the architecture allows a varying input vector size. Rectified linear units (ReLUs) are used as activation functions for all layers except the last, which has a linear activation function. The architecture also includes a dueling structure that separates the estimation of the state value V(s) and the action advantage A(s, a).
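A PyTorch sketch of such an architecture is given below: a small network for the ego-vehicle part of the input, a kernel-size-1 convolution plus max pooling over the surrounding-vehicle part (which gives order independence and tolerates a varying number of vehicles), and a dueling head. All layer sizes, input dimensions and the way V(s) and A(s, a) are recombined are assumptions made for illustration, not details taken from FIG. 3.

```python
import torch
import torch.nn as nn

class DuelingCNNQNet(nn.Module):
    """Sketch of the idea behind FIG. 3: ego-state branch, permutation-invariant
    vehicle branch, and a dueling head Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, ego_dim=4, veh_dim=4, n_actions=9, hidden=64):
        super().__init__()
        self.ego_net = nn.Sequential(nn.Linear(ego_dim, hidden), nn.ReLU())
        # Conv1d with kernel size 1 applies the same small network to every surrounding vehicle.
        self.veh_net = nn.Sequential(nn.Conv1d(veh_dim, hidden, kernel_size=1), nn.ReLU())
        self.value_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1))
        self.adv_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, n_actions))

    def forward(self, ego, vehicles):
        # ego: (batch, ego_dim); vehicles: (batch, veh_dim, n_vehicles), n_vehicles may vary.
        ego_feat = self.ego_net(ego)
        veh_feat = self.veh_net(vehicles).max(dim=2).values  # max pool over the vehicle axis
        x = torch.cat([ego_feat, veh_feat], dim=1)
        value = self.value_head(x)                           # V(s)
        adv = self.adv_head(x)                               # A(s, a), linear output layer
        return value + adv - adv.mean(dim=1, keepdim=True)

net = DuelingCNNQNet()
q = net(torch.randn(2, 4), torch.randn(2, 4, 7))  # 2 states, 7 surrounding vehicles -> shape (2, 9)
```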


In an example, the RL agent was trained in the simulated environment described above. After every 50000 added training samples, henceforth called training steps, the agent was evaluated on 100 different test episodes. These test episodes were randomly generated in the same way as the training episodes but were not present during the training. The test episodes were also kept identical for all the test phases.


To gain insight into how the uncertainty estimation evolves during the training process, and to illustrate how to set the uncertainty threshold parameter Cv, FIG. 4 shows the coefficient of variation cv for the chosen action during the test episodes as a function of the number of training steps (scale in millions of steps). Each plotted value is an average over the 100 test episodes of that test phase. FIG. 4 shows the uncertainty of the chosen action, whereas the uncertainty for not-chosen actions may be higher. After around four million training steps, the coefficient of variation settles at around 0.01, with a small spread in values, which may justify setting the threshold at or around Cv=0.02.


To assess the ability of the RPF ensemble agent to cope with unseen situations, the agent obtained after five million training steps was deployed in scenarios that had not been included in the training episodes. In various situations that involved an oncoming vehicle, the uncertainty estimate was consistently high, cv ≈ 0.2. The fact that this value is one order of magnitude above the proposed threshold value Cv = 0.02, along with several further examples, suggests that the criterion cv(s, a) > Cv for including this state-action pair (or the state that it involves) in the additional training is a robust and reliable guideline.


The aspects of the present disclosure have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims. In particular, the disclosed approach to estimating the uncertainty of a possible decision by an RL agent is applicable in machine learning more generally, also outside of the field of autonomous vehicles. It may be advantageous wherever the reliability of a possible decision is expected to influence personal safety, material values, information quality, user experience and the like, and where the problem of precisely focusing additional training arises.

Claims
  • 1. A method of providing a reinforcement learning, RL, agent for decision-making to be used in controlling an autonomous vehicle, the method comprising: a plurality of training sessions, in which the RL agent interacts with a first environment including the autonomous vehicle, each training session having a different initial value and yielding a state-action value function Qk(s, a) dependent on state and action; an uncertainty evaluation on the basis of a variability measure for the plurality of state-action value functions evaluated for one or more state-action pairs corresponding to possible decisions by the trained RL agent; additional training, in which the RL agent interacts with a second environment including the autonomous vehicle, wherein the second environment differs from the first environment by an increased exposure to a subset of state-action pairs for which the variability measure indicates a relatively higher uncertainty.
  • 2. The method of claim 1, further comprising: traffic sampling, in which state-action pairs encountered by the autonomous vehicle are recorded on the basis of at least one physical sensor signal, wherein the uncertainty evaluation relates to the recorded state-action pairs.
  • 3. The method of claim 1, wherein the first and/or the second environment is a simulated environment.
  • 4. The method of claim 3, wherein the second environment is generated from the subset of state-action pairs.
  • 5. The method of claim 1, wherein the state-action pairs in the subset have a variability measure exceeding a predefined threshold.
  • 6. The method of claim 1, wherein the additional training includes modifying said plurality of state-action value functions in respective training sessions.
  • 7. The method of claim 1, wherein the additional training includes modifying a combined state-action value function representing a central tendency of said plurality of state-action value functions.
  • 8. The method of claim 1, wherein the RL agent is configured for tactical decision-making.
  • 9. The method of claim 1, wherein the RL agent includes at least one neural network.
  • 10. The method of claim 9, wherein the RL agent is obtained by a policy gradient algorithm, such as an actor-critic algorithm.
  • 11. The method of claim 9, wherein the RL agent is a Q-learning agent, such as a deep Q network, DQN.
  • 12. The method of claim 9, wherein the training sessions use an equal number of neural networks.
  • 13. The method of claim 9, wherein the initial value corresponds to a randomized prior function, RPF.
  • 14. The method of claim 1, wherein the variability measure is one or more of: a variance, a range, a deviation, a variation coefficient, an entropy.
  • 15. An arrangement for controlling an autonomous vehicle, comprising: processing circuitry and memory implementing a reinforcement learning, RL, agent configured to interact with a first environment including the autonomous vehicle in a plurality of training sessions, each training session having a different initial value and yielding a state-action value function Qk(s, a) dependent on state and action, the processing circuitry and memory further implementing a training manager configured to: estimate an uncertainty on the basis of a variability measure for the plurality of state-action value functions evaluated for one or more state-action pairs corresponding to possible decisions by the trained RL agent, and initiate additional training, in which the RL agent interacts with a second environment including the autonomous vehicle, wherein the second environment differs from the first environment by an increased exposure to a subset of state-action pairs for which the variability measure indicates a relatively higher uncertainty.
  • 16. The arrangement of claim 15, further comprising a vehicle control interface configured to record state-action pairs encountered by the autonomous vehicle on the basis of at least one physical sensor in the autonomous vehicle, wherein the training manager is configured to estimate the uncertainty for the recorded state-action pairs.
  • 17. A computer program comprising instructions to cause a processor to perform the method of claim 1.
  • 18. A data carrier carrying the computer program of claim 17.
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2020/061007 4/20/2020 WO