METHOD AND DEVICE FOR REINFORCEMENT LEARNING

Information

  • Patent Application
  • Publication Number
    20230132482
  • Date Filed
    October 14, 2022
  • Date Published
    May 04, 2023
  • CPC
    • G06N20/00
  • International Classifications
    • G06N20/00
Abstract
A device and method for reinforcement learning. The method includes providing parameters of a policy for reinforcement learning, determining a behavior policy depending on the policy, sampling a training data set with the behavior policy, and determining an update for the parameters with an objective function, wherein the objective function maps a difference between an estimate for an expected reward when following the policy and an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update. Or, the method includes providing a distribution for parameters of a policy for reinforcement learning, determining a behavior policy depending on the policy, sampling a training data set with the behavior policy, and determining an update for the distribution with another objective function.
Description
CROSS REFERENCE

The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 10 2021 212 277.9 filed on Oct. 29, 2021, which is expressly incorporated herein by reference in its entirety.


BACKGROUND INFORMATION

The present invention relates to a device, a computer program and a computer-implemented method for machine learning.


“Relative Entropy Policy Search,” by Jan Peters, Katharina Mülling, Yasemin Altun, in Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), 2010, describes aspects of Relative Entropy Policy Search.


SUMMARY

According to an example embodiment of the present invention, a method for reinforcement learning comprises providing parameters of a policy for reinforcement learning, determining a behavior policy depending on the policy, sampling a training data set with the behavior policy, and determining an update for the parameters with an objective function, wherein the objective function maps a difference between an estimate for an expected reward when following the policy and an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update; or the method comprises providing a distribution for parameters of a policy for reinforcement learning, determining a behavior policy depending on the policy, sampling a training data set with the behavior policy, and determining an update for the distribution with an objective function, wherein the objective function maps a difference between an expectancy value for an estimate for an expected reward when following the policy and an expectancy value for an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update. This way, it is not necessary to determine a closed-form solution to a Relative Entropy Policy Search problem. The updated policy is instead found by optimizing an objective function that corresponds to a lower bound that can be computed from training data.


According to an example embodiment of the present invention, the method may comprise determining the update for the distribution depending on the distribution that results in a value of the objective function that is larger than a value of the objective function that results for at least one other distribution. This way, the policy is found by optimizing the objective function regarding the distribution of the parameters of the policy.


Preferably, the method comprises determining the update for the distribution depending on the distribution that maximizes the value of the objective function.


According to an example embodiment of the present invention, the method may comprise providing a reference distribution over the parameter values, and providing a confidence parameter, wherein the objective function comprises a term that depends on a sum of the confidence parameter and a Kullback-Leibler divergence between the distribution and the reference distribution. This term accounts for an uncertainty that arises from estimating the expected reward using the training data set.


According to an example embodiment of the present invention, the method may comprise sampling parameters from the reference distribution or from the distribution, and determining the behavior policy depending on the parameter values that are sampled from the distribution. This way, the policy is found by optimizing the objective function regarding the parameters that define the distribution. The parameters of the policy are derivable from the distribution afterwards.


According to an example embodiment of the present invention, the method may comprise determining the parameter values that result in a value of the objective function that is larger than a value of the objective function that results for other parameter values. This way, the policy is found by optimizing the objective function regarding the parameters of the policy.


Preferably, the method comprises determining the parameter values that maximize the value of the objective function.


According to an example embodiment of the present invention, the method may comprise determining the behavior policy depending on initial parameter values or depending on the parameter values.


According to an example embodiment of the present invention, the method may comprise determining the policy depending on the parameter values, or determining the distribution and sampling the parameters of the policy from the distribution.


According to an example embodiment of the present invention, the method may comprise receiving input data and determining output data from the input data with the policy.


According to an example embodiment of the present invention, a device for reinforcement learning is configured, in particular with an input and an output and at least one processor and at least one storage, for executing steps in the method(s) disclosed herein.


According to an example embodiment of the present invention, a computer program is provided that comprises computer-readable instructions that, when executed on a computer, cause the computer to perform the method(s) disclosed herein.


Further advantageous embodiments of the present invention are derivable from the following description and the figures.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 schematically depicts a part of a device for reinforcement learning, according to an example embodiment of the present invention.



FIG. 2 depicts steps in a first example embodiment of a method for reinforcement learning, according to the present invention.



FIG. 3 depicts steps in a second example embodiment of the method for reinforcement learning, according to the present invention.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS


FIG. 1 depicts schematically a part of a device 100 for reinforcement learning. The device 100 comprises at least one processor 102 and at least one storage 104. The at least one storage 104 may store a computer program that comprises computer-readable instructions that, when executed on a computer, cause the computer to perform a method that will be described below with reference to FIG. 2 and FIG. 3. The device 100 is configured for executing steps in the method, in particular when the at least one processor 102 executes instructions of the computer program.


The device 100 in the example comprises an input 106 and an output 108. The input 106 is configured for receiving input data. The output 108 is configured to output output data.


The input 106 may be configured for receiving the input data from a sensor 110. The sensor 110 may comprise a camera or a microphone. The input data may comprise at least one of digital images, e.g. video, radar, LiDAR, ultrasonic, motion, thermal images, sonar, or digital audio signals.


The device 100 may be configured for detecting anomalies in the input data, classifying the input data, detecting a presence of objects in the input data or performing a semantic segmentation on the input data, e.g., regarding traffic signs, road surfaces, pedestrians, vehicles.


The device 100 may be configured for controlling an apparatus 112. The apparatus 112 may be a vehicle or a robot. The device 100 may be configured for controlling the apparatus 112 depending on whether an anomaly is detected in the input data or not. The device 100 may be configured for controlling the apparatus 112 depending on a classification of the input data. The device 100 may be configured for controlling the apparatus 112 depending on whether the presence of an object is detected in the input data or not. The device 100 may be configured for controlling the apparatus 112 depending on a result of the semantic segmentation on the input data.


The method applies to contextual bandit problems. Input data classification and detecting anomalies may be framed as contextual bandit problems. The method also applies to other problems that can be represented as contextual bandit problems.


A contextual bandit problem is defined by a set of states S, a set of actions A, an unknown initial state distribution µ over S and an unknown stochastic reward function ρ : S × A → M([0,1]), wherein M([0,1]) denotes the set of all probability distributions over the interval [0,1], µ(s) denotes a probability mass or probability density of a state s ∈ S under the initial state distribution, and ρ(r|s,a) denotes the probability mass or probability density of a reward r ∈ [0,1] conditioned on the state s ∈ S and an action a ∈ A.


A policy π: S → M(A) is a function that maps states to distributions over actions.


The contextual bandit problem considered herein comprises parametric policies πθ : S × Θ → M(A), wherein Θ is some set of possible values that the parameter θ can take. The goal of the contextual bandit problem is to find the policy parameters θ that maximize an expected reward:






$$J(\pi_\theta) = \mathbb{E}_{s \sim \mu}\,\mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}\,\mathbb{E}_{r \sim \rho(\cdot \mid s, a)}\left[\, r \,\right]$$
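For illustration only, a parametric policy for a finite action set may, for example, be a softmax over linear scores. This concrete parameterization is an assumption made for the sketch below and is not prescribed by the present description.

import numpy as np

def pi_theta(theta, s):
    # pi_theta(.|s): softmax over linear scores theta @ s for a finite action set.
    # theta has shape (n_actions, state_dim) and s has shape (state_dim,); these
    # are hypothetical choices made only for this illustration.
    scores = theta @ s
    scores = scores - scores.max()   # numerical stability
    p = np.exp(scores)
    return p / p.sum()               # distribution over actions, i.e. an element of M(A)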





The method trains the device 100. The method may train the device 100 in particular for detecting anomalies in the input data, classifying the input data, detecting the presence of objects in the input data or performing the semantic segmentation on the input data.


Since µ and ρ are unknown, neither J(πθ) nor its gradient with respect to θ is computable. Thus, the expected reward or its gradient is estimated with a training data set D = {(s_i, a_i, r_i)}_{i=1}^n containing state, action and reward triples, where the states {s_i}_{i=1}^n are sampled independently from µ, the actions {a_i}_{i=1}^n are sampled independently from a known behavior policy b with a probability density b(a|s), and the rewards {r_i}_{i=1}^n are sampled independently from the reward distribution ρ.
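A training data set of this form may, for example, be collected as in the following sketch; sample_state and sample_reward are hypothetical stand-ins for the unknown µ and ρ, and b is assumed to return the full action distribution b(·|s).

import numpy as np

def collect_dataset(b, sample_state, sample_reward, n, seed=0):
    # Sample D = {(s_i, a_i, r_i)}_{i=1}^n with the behavior policy b.
    # sample_state() draws s ~ mu and sample_reward(s, a) draws r ~ rho(.|s, a);
    # both are placeholders for the unknown environment.
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(n):
        s = sample_state()
        probs = b(s)                              # action distribution b(.|s)
        a = int(rng.choice(len(probs), p=probs))
        r = sample_reward(s, a)
        data.append((s, a, r))
    return data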


The method comprises computing a lower bound on J(πθ). The lower bound in the example can be computed using only the training data set D.


The method comprises using this lower bound as an objective function, since maximizing a lower bound on the expected reward provides a policy πθ that has a high expected reward.


Below, two embodiments of the method are described.


A first embodiment is described referencing FIG. 2.


The first embodiment of the method for reinforcement learning comprises a step 202.


In the step 202, parameters θ of the parameterized policy πθ for reinforcement learning are provided. In the example, a predetermined number of iterations I is provided and a counter i for counting the iterations is initialized e.g. i=0.


Afterwards a step 204 is executed.


In step 204, the behavior policy b is determined depending on the parameterized policy πθ.


Afterwards a step 206 is executed.


In the step 206, the training data set D is sampled with the behavior policy b.


Afterwards a step 208 is executed.


In the step 208, an update for the parameters θ is determined with the objective function J(θ) according to the first embodiment.


The method may comprise determining the parameter values θ that result in a value of the objective function J(θ) according to the first embodiment that is larger than a value of the objective function J(θ) according to the first embodiment that results for other parameter values.


The method may comprise determining the parameter values θ that maximize the value of the objective function J(θ) according to the first embodiment.


Afterwards, a step 210 is executed.


In the step 210, the counter i for the iterations is incremented, e.g. i=i+1, and it is determined whether the counter i exceeds the predetermined number of iterations I or not.


When the counter i exceeds the predetermined number of iterations I, a step 212 is executed. Otherwise, the step 204 is executed.


In the step 212, the parameters θ and/or the parameterized policy πθ may be stored.


The training for reinforcement learning comprises the steps 202 to 212. The result of this training is the parameterized policy πθ and/or the parameters θ that result in the final iteration.


Optionally, a step 214 may be executed afterwards. In the step 214, the parameters θ and/or the parameterized policy πθ may be applied to control the apparatus 112.


Controlling the apparatus 112 may comprise receiving input data, processing the input data according to the parameterized policy πθ that results from the final iteration, and outputting output data to control the apparatus 112 that results from processing the input data with this parameterized policy πθ.
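As one possible deployment choice (an assumption, not mandated by the present description), the output data may be obtained by selecting the most probable action under the trained policy, as sketched below.

import numpy as np

def control_step(theta, s, pi_theta):
    # Map input data s to output data by picking the most probable action under
    # pi_theta; pi_theta(theta, s) is assumed to return the action distribution.
    # Sampling an action instead of taking the argmax would be an alternative.
    return int(np.argmax(pi_theta(theta, s)))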


The apparatus 112 may be controlled depending on whether an anomaly is detected in the input data or not with this parameterized policy πθ. The apparatus 112 may be controlled depending on a classification of the input data with this parameterized policy πθ. The apparatus 112 may be controlled depending on whether the presence of an object is detected in the input data or not with this parameterized policy πθ. The apparatus 112 may be controlled depending on a result of the semantic segmentation on the input data with this parameterized policy πθ.


The objective function J(θ) maps a difference between an estimate Ĵ(sg)(πθ,b,D) for an expected reward when following the parameterized policy πθ and an estimate D̂(πθ,b,D) for a distance DTV((µ,πθ)∥(µ,b)) between the policy πθ and the behavior policy b to the update for the parameters θ.


The estimate Ĵ(sg)(πθ,b,D) for the expected reward may be

$$\hat{J}^{(sg)}(\pi_\theta, b, D) = \frac{1}{n} \sum_{i=1}^{n} \frac{\pi_\theta(a_i \mid s_i)}{\left[\pi_\theta(a_i \mid s_i)\right]_{sg}}\, r_i$$

wherein [·]_sg is a stop gradient operator. This means the term [πθ(a_i|s_i)]_sg is not considered when determining a gradient of the estimate Ĵ(sg)(πθ,b,D).
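In an automatic-differentiation framework, the stop gradient operator can be realized, for example, by detaching a tensor from the computation graph. The following PyTorch sketch assumes a differentiable function policy_probs(theta, s) that returns the action probabilities; it is an illustration, not the only possible implementation.

import torch

def J_sg(theta, policy_probs, data):
    # (1/n) * sum_i pi_theta(a_i|s_i) / [pi_theta(a_i|s_i)]_sg * r_i
    # .detach() plays the role of the stop gradient operator [.]_sg, so each term
    # has value r_i but contributes the gradient r_i * grad log pi_theta(a_i|s_i).
    terms = []
    for s, a, r in data:
        p = policy_probs(theta, s)[a]
        terms.append(p / p.detach() * r)
    return torch.stack(terms).mean()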


The estimate for the distance D̂(πθ,b,D) may be

$$\hat{D}(\pi_\theta, b, D) = \frac{1}{n} \sum_{i=1}^{n} D_{TV}\!\left(\pi_\theta(\cdot \mid s_i) \,\|\, b(\cdot \mid s_i)\right)$$

wherein DTV(πθ(·|s_i)∥b(·|s_i)) is an estimate for a distance DTV((µ,πθ)∥(µ,b)) that depends on the policy πθ and on the behavior policy b, and wherein DTV((µ,πθ)∥(µ,b)) is a total variation distance.
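For a finite action set, the total variation distance between two action distributions equals half the L1 distance between their probability vectors, so the estimate may be computed as in the sketch below; pi(s) and b(s) are assumed to return the full action distributions at state s.

import numpy as np

def tv_distance(p, q):
    # Total variation distance between two discrete distributions over actions.
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

def D_hat(pi, b, data):
    # (1/n) * sum_i D_TV(pi(.|s_i) || b(.|s_i)) over the states in D.
    return float(np.mean([tv_distance(pi(s), b(s)) for s, _, _ in data]))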


The objective function J(θ) according to the first embodiment is for example

$$J(\theta) = \hat{J}^{(sg)}(\pi_\theta, b, D) - \hat{D}(\pi_\theta, b, D).$$




The update for the parameters θ may be determined iteratively. In the example, the objective function J(θ) according to the first embodiment may be maximized iteratively with respect to the parameters θ using gradient ascent or some variant of gradient ascent, e.g. Adam optimization.


The update for the parameters θ may be determined in k steps with a learning rate α.


An exemplary algorithm for implementing the method according to the first embodiment is:


Input: Initial parameters θ, learning rate α


for iteration i=1,2, ... do


Set behavior policy b←πθ


Sample D = {(s_i, a_i, r_i)}_{i=1}^n using b


for step = 1, 2, ..., k do






θ ← θ + α ∇θ J(θ)





end for


end for
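The exemplary algorithm above may, for instance, be implemented as in the following PyTorch sketch; the linear softmax policy and the environment placeholders sample_state and sample_reward (standing in for the unknown µ and ρ, with states returned as tensors) are assumptions made only for illustration.

import torch

def train_first_embodiment(theta0, sample_state, sample_reward,
                           n=256, iterations=10, k=50, lr=1e-2):
    # Sketch of the first embodiment: set b <- pi_theta, sample D with b, then
    # perform k gradient ascent steps on J(theta) = J^(sg)(pi_theta, b, D) - D^(pi_theta, b, D).
    theta = theta0.clone().requires_grad_(True)

    def probs(th, s):
        return torch.softmax(th @ s, dim=0)          # assumed linear softmax policy

    for _ in range(iterations):
        b_theta = theta.detach().clone()             # behavior policy b <- pi_theta
        data = []
        for _ in range(n):                           # sample D using b
            s = sample_state()
            a = torch.multinomial(probs(b_theta, s), 1).item()
            data.append((s, a, sample_reward(s, a)))
        for _ in range(k):                           # k ascent steps on J(theta)
            J_terms, D_terms = [], []
            for s, a, r in data:
                p = probs(theta, s)
                J_terms.append(p[a] / p[a].detach() * r)                  # stop-gradient estimator
                D_terms.append(0.5 * (p - probs(b_theta, s)).abs().sum()) # TV distance term
            J = torch.stack(J_terms).mean() - torch.stack(D_terms).mean()
            grad, = torch.autograd.grad(J, theta)
            with torch.no_grad():
                theta += lr * grad                   # gradient ascent
    return theta.detach()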


A second embodiment is described referencing FIG. 3.


The second embodiment of the method for reinforcement learning comprises a step 300.


In the step 300, a reference distribution P over the parameter values θ is provided.


The reference distribution P may come from a parametric family of distributions, e.g. normal distributions. The reference distribution P may be

$$P = \mathcal{N}\!\left(\mu_P, \sigma_P^2 I\right)$$

wherein µP denotes the mean and σP² denotes the variance of the reference distribution P and I denotes the identity matrix. This means the distribution P is a diagonal normal distribution.


Afterwards a step 302 is executed.


In the step 302, a distribution Q for parameters θ of the parameterized policy πθ is determined. The distribution Q may come from a parametric family of distributions, e.g. normal distributions. The distribution Q may be

$$Q = \mathcal{N}\!\left(\mu_Q, \sigma_Q^2 I\right)$$

wherein µQ denotes the mean and σQ² denotes the variance of the distribution Q and I denotes the identity matrix. This means the distribution Q is a diagonal normal distribution.


The mean µQ may be initialized with µQ ← µP.


The variance σQ² may be initialized with σQ² ← σP².






In the example, a predetermined number of iterations I is provided and a counter i for counting the iterations is initialized e.g. i=0.


Afterwards a step 304 is executed.


In the step 304, parameters θ of the parameterized policy πθ for reinforcement learning are provided. In the example, the parameters θ are sampled from the distribution Q.
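A minimal sketch of steps 300 to 304 for diagonal normal distributions is given below; the dimension of θ and the numerical values are hypothetical and chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)

dim = 8                              # dimension of theta (hypothetical)
mu_P = np.zeros(dim)                 # mean of the reference distribution P
sigma2_P = 0.1 * np.ones(dim)        # per-dimension variance of P (diagonal covariance)

mu_Q = mu_P.copy()                   # initialize mu_Q <- mu_P
sigma2_Q = sigma2_P.copy()           # initialize sigma_Q^2 <- sigma_P^2

theta = mu_Q + np.sqrt(sigma2_Q) * rng.standard_normal(dim)   # theta ~ Q = N(mu_Q, sigma_Q^2 I)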


Afterwards a step 306 is executed.


In step 306 the behavior policy b is determined depending on the parameterized policy πθ.


Afterwards a step 308 is executed.


In the step 308, the training data set D is sampled with the behavior policy b.


Afterwards a step 310 is executed.


In the step 310, an update for distribution Q is determined with the objective function J(Q) according to the second embodiment.


The method may comprise determining the distribution Q that results in a value of the objective function J(Q) according to the second embodiment that is larger than a value of the objective function J(Q) according to the second embodiment that results for other distributions Q.


In the example, the mean µQ and the variance σQ² that result in the value of the objective function J(Q) being larger than at least one other value of the objective function J(Q) that results for another mean and/or variance are determined.


The method may comprise determining the distribution Q that maximizes the value of the objective function J(Q) according to the second embodiment.


In the example, the mean µQ and the variance σQ² that maximize the objective function J(Q) are determined.


Afterwards, a step 312 is executed.


In the step 312, the counter i for the iterations is incremented, e.g. i=i+1, and it is determined whether the counter i exceeds the predetermined number of iterations I or not.


When the counter i exceeds the predetermined number of iterations I, a step 314 is executed. Otherwise, the step 304 is executed.


In the step 314, the distribution Q may be stored. In the example, the mean µQ and the variance σQ² may be stored.


The training for reinforcement learning comprises the steps 300 to 314. The result of this training is the distribution Q and/or the mean µQ and the variance σQ² that allow sampling the parameterized policy πθ and/or the parameters θ.


Optionally, a step 316 may be executed afterwards. In the step 316, the distribution Q and/or the mean µQ and the variance σQ² and/or the parameterized policy πθ may be applied to control the apparatus 112.


Controlling the apparatus 112 may comprise receiving input data, processing the input data according to the parameterized policy πθ that results from sampling from the distribution Q that is determined in the final iteration, and outputting output data to control the apparatus 112 that results from processing the input data with this parameterized policy πθ.


The apparatus 112 may be controlled depending on whether an anomaly is detected in the input data or not with this parameterized policy πθ. The apparatus 112 may be controlled depending on a classification of the input data with this parameterized policy πθ. The apparatus 112 may be controlled depending on whether the presence of an object is detected in the input data or not with this parameterized policy πθ. The apparatus 112 may be controlled depending on a result of the semantic segmentation on the input data with this parameterized policy πθ.


The objective function J(Q) according to the second embodiment maps a difference between an expectancy value Ĵ(sg)(Q,b,D) for the estimate Ĵ(sg)(πθ,b,D) for the expected reward when following the parameterized policy πθ and an expectancy value D̂(Q,b,D) for the estimate D̂(πθ,b,D) for the distance to the update for the distribution Q.


The objective function J(Q) comprises a term

$$\sqrt{\frac{D_{KL}(Q \,\|\, P) + \ln\!\left(2\sqrt{n}/\delta\right)}{2n}}$$

that depends on a sum of a confidence parameter ln(2√n/δ) and a Kullback-Leibler divergence DKL(Q∥P) between the distribution Q and the reference distribution P.


The confidence parameter ln(2√n/δ) in the example depends on a parameter δ ∈ (0,1]. This parameter δ is provided, e.g., in an initialization.


This term accounts for an uncertainty that arises from estimating the expected reward using the training data set D.
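For diagonal normal distributions Q and P, the Kullback-Leibler divergence has a closed form, so the term can be evaluated directly. The following sketch uses the square-root form of the term as reconstructed above and is an illustration only.

import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # KL(Q || P) for Q = N(mu_q, diag(var_q)) and P = N(mu_p, diag(var_p)).
    return 0.5 * float(np.sum(np.log(var_p / var_q)
                              + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0))

def uncertainty_term(mu_q, var_q, mu_p, var_p, n, delta):
    # sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)) for n samples and confidence parameter delta.
    kl = kl_diag_gauss(mu_q, var_q, mu_p, var_p)
    return float(np.sqrt((kl + np.log(2.0 * np.sqrt(n) / delta)) / (2.0 * n)))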


The expectancy value Ĵ(sg)(Q,b,D) for the estimate Ĵ(sg)(πθ,b,D) for the expected reward may be

$$\hat{J}^{(sg)}(Q, b, D) = \mathbb{E}_{\theta \sim Q}\!\left[\hat{J}^{(sg)}(\pi_\theta, b, D)\right]$$








The expectancy value D̂(Q,b,D) for the estimate D̂(πθ,b,D) for the distance DTV((µ,πθ)∥(µ,b)) between the policy πθ and the behavior policy b may be

$$\hat{D}(Q, b, D) = \mathbb{E}_{\theta \sim Q}\!\left[\hat{D}(\pi_\theta, b, D)\right]$$
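Both expectancy values may be approximated by Monte-Carlo sampling of θ from Q, as in the following sketch; estimator(theta, data) stands in for either Ĵ(sg)(πθ,b,D) or D̂(πθ,b,D) and is an assumed interface used only for illustration.

import numpy as np

def expectation_over_Q(estimator, mu_Q, sigma2_Q, data, n_samples=16, seed=0):
    # Monte-Carlo approximation of E_{theta ~ Q}[estimator(theta, data)] for a
    # diagonal normal Q = N(mu_Q, diag(sigma2_Q)).
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(n_samples):
        theta = mu_Q + np.sqrt(sigma2_Q) * rng.standard_normal(np.shape(mu_Q))
        values.append(estimator(theta, data))
    return float(np.mean(values))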








The objective function J(Q) according to the second embodiment is for example

$$J(Q) = \hat{J}^{(sg)}(Q, b, D) - \hat{D}(Q, b, D) - \sqrt{\frac{D_{KL}(Q \,\|\, P) + \ln\!\left(2\sqrt{n}/\delta\right)}{2n}}.$$




The update for the distribution Q may be determined in k steps with a learning rate α.


An exemplary algorithm for implementing the method according to the second embodiment is:


Input: Prior parameter distribution P = N(µP, σP² I), learning rate α


Initialise posterior mean µQ ← µP and variance σQ² ← σP²




for iteration i=1,2, ... do


Sample θ from Q = N(µQ, σQ² I)





Set behavior policy b ← πθ


Sample D = {(s_i, a_i, r_i)}_{i=1}^n using b


for step = 1, 2, ..., k do







µQ ← µQ + α ∇µQ J(Q)
σQ² ← σQ² + α ∇σQ² J(Q)





end for


Update prior mean µP ← µQ


end for
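One way to implement the exemplary algorithm above is sketched below in PyTorch, using a single reparameterized sample θ ∼ Q per iteration to obtain gradients with respect to µQ and σQ². The reparameterization, the linear softmax policy and the environment placeholders sample_state and sample_reward are assumptions made for illustration and are not part of the present description.

import math
import torch

def train_second_embodiment(mu_P, sigma2_P, sample_state, sample_reward,
                            n=256, iterations=10, k=50, lr=1e-2, delta=0.05):
    # Sketch of the second embodiment: sample theta ~ Q, set b <- pi_theta, sample D
    # with b, then perform k ascent steps on J(Q) with respect to mu_Q and sigma_Q^2.
    mu_Q = mu_P.clone().requires_grad_(True)
    sigma2_Q = sigma2_P.clone().requires_grad_(True)
    prior_mu = mu_P.clone()

    def probs(th, s):
        return torch.softmax(th @ s, dim=0)                    # assumed linear softmax policy

    def kl_diag(mu_q, var_q, mu_p, var_p):
        # KL(Q || P) for diagonal normal distributions.
        return 0.5 * torch.sum(torch.log(var_p / var_q)
                               + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

    for _ in range(iterations):
        eps = torch.randn_like(mu_Q)
        theta_b = (mu_Q + sigma2_Q.sqrt() * eps).detach()      # theta ~ Q, behavior policy b <- pi_theta
        data = []
        for _ in range(n):                                     # sample D using b
            s = sample_state()
            a = torch.multinomial(probs(theta_b, s), 1).item()
            data.append((s, a, sample_reward(s, a)))
        for _ in range(k):                                     # k ascent steps on J(Q)
            theta = mu_Q + sigma2_Q.clamp_min(1e-8).sqrt() * eps   # reparameterized sample
            J_terms, D_terms = [], []
            for s, a, r in data:
                p = probs(theta, s)
                J_terms.append(p[a] / p[a].detach() * r)           # stop-gradient reward estimate
                D_terms.append(0.5 * (p - probs(theta_b, s)).abs().sum())
            term = torch.sqrt((kl_diag(mu_Q, sigma2_Q, prior_mu, sigma2_P)
                               + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n))
            J = torch.stack(J_terms).mean() - torch.stack(D_terms).mean() - term
            g_mu, g_var = torch.autograd.grad(J, [mu_Q, sigma2_Q])
            with torch.no_grad():
                mu_Q += lr * g_mu
                sigma2_Q += lr * g_var
                sigma2_Q.clamp_(min=1e-8)                      # keep variances positive
        prior_mu = mu_Q.detach().clone()                       # update prior mean mu_P <- mu_Q
    return mu_Q.detach(), sigma2_Q.detach()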

Claims
  • 1. A method for reinforcement learning, wherein the method comprises the following steps: providing parameters of a policy for reinforcement learning; determining a behavior policy depending on the policy; sampling a training data set with the behavior policy; and determining an update for the parameters with an objective function; wherein the objective function maps a difference between an estimate for an expected reward when following the policy and an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update, or wherein the method comprises the following steps: providing a distribution for parameters of a policy for reinforcement learning; determining a behavior policy depending on the policy; sampling a training data set with the behavior policy; and determining an update for the distribution with an objective function; wherein the objective function maps a difference between an expectancy value for an estimate for an expected reward when following the policy and an expectancy value for an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update.
  • 2. The method according to claim 1, wherein the method further comprises determining the update for the distribution depending on a distribution that results in a value of the objective function that is larger than a value of the objective function that results for at least one other distribution.
  • 3. The method according to claim 2, wherein the method further comprises determining the update for the distribution depending on the distribution that maximizes the value of the objective function.
  • 4. The method according to claim 1, wherein the method further comprises providing a reference distribution over the parameter values, and providing a confidence parameter, wherein the objective function includes a term that depends on a sum of the confidence parameter and a Kullback-Leibler divergence between the distribution and the reference distribution.
  • 5. The method according to claim 4, wherein the method further comprises sampling parameters from the reference distribution or from the distribution, and determining the behavior policy depending on the parameter values that are sampled from the distribution.
  • 6. The method according to claim 1, wherein the method further comprises determining parameter values that result in a value of the objective function that is larger than a value of the objective function that results for other parameter values.
  • 7. The method according to claim 6, wherein the method further comprises determining the parameter values that maximize the value of the objective function.
  • 8. The method according to claim 1, wherein the method further comprises determining the behavior policy depending on initial parameter values or depending on the parameter values.
  • 9. The method according to claim 1, wherein the method comprises determining the policy depending on the parameter values or determining the distribution and sampling the parameters of the policy from the distribution.
  • 10. The method according to claim 9, wherein the method comprises receiving input data and determining output data from the input data with the policy, for controlling an apparatus.
  • 11. A device for reinforcement learning, the device comprising: an input; an output; at least one processor; and at least one storage; wherein the device is configured to: (i) provide parameters of a policy for reinforcement learning, determine a behavior policy depending on the policy, sample a training data set with the behavior policy, and determine an update for the parameters with an objective function, wherein the objective function maps a difference between an estimate for an expected reward when following the policy and an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update, or (ii) provide a distribution for parameters of a policy for reinforcement learning, determine a behavior policy depending on the policy, sample a training data set with the behavior policy, and determine an update for the distribution with an objective function, wherein the objective function maps a difference between an expectancy value for an estimate for an expected reward when following the policy and an expectancy value for an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update.
  • 12. A non-transitory computer-readable medium on which is stored a computer program including computer-readable instructions for reinforcement learning, wherein the instructions, when executed by a processor, cause the processor to perform the following steps: providing parameters of a policy for reinforcement learning; determining a behavior policy depending on the policy; sampling a training data set with the behavior policy; and determining an update for the parameters with an objective function; wherein the objective function maps a difference between an estimate for an expected reward when following the policy and an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update, or wherein the instructions, when executed by the processor, cause the processor to perform the following steps: providing a distribution for parameters of a policy for reinforcement learning; determining a behavior policy depending on the policy; sampling a training data set with the behavior policy; and determining an update for the distribution with an objective function; wherein the objective function maps a difference between an expectancy value for an estimate for an expected reward when following the policy and an expectancy value for an estimate for a distance between the policy and the behavior policy, that depends on the policy and on the behavior policy, to the update.
Priority Claims (1)
Number Date Country Kind
10 2021 212 277.9 Oct 2021 DE national