The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 10 2021 212 906.4 filed on Nov. 17, 2021, which is expressly incorporated herein by reference in its entirety.
The present disclosure relates to a method for controlling an agent.
Reinforcement Learning (RL) is a paradigm of machine learning which allows an agent such as a machine to learn to execute a desired behavior with regard to a task specification, e.g., which control measures are to be implemented in order to reach a target destination in a robot navigation scenario. Learning a strategy that generates this behavior with the aid of reinforcement learning differs from supervised learning in the way in which the training data are assembled. While in supervised learning the supplied training data are made up of matched pairs of inputs for the strategy (e.g., observations such as measured sensor values) and desired outputs (actions to be executed), there are no fixed training data in reinforcement learning. The strategy is learned from observational data which were collected through the interaction of the agent with its environment, the agent receiving a feedback signal (reward) which evaluates the actions carried out in a certain context (state).
However, reinforcement learning often requires a considerable quantity of such training data in order to converge to a correct solution. Especially when complex tasks are involved, the training therefore necessitates many interactions of the agent with the environment, which can lead to a considerable outlay when executed in the real world. However, training based solely on simulations, which would be less expensive, may lead to poor results.
For these reasons, procedures that allow for combined training in different representations of a task (e.g., in simulation and in the real world) are desirable.
According to different embodiments of the present invention, a method is provided for controlling an agent, which includes collecting training data for multiple representations of states of the agent; for every representation and using the training data, training a state encoder for mapping states to latent states in a (shared) latent state space, a state decoder for mapping latent states back from the latent state space, an action encoder for mapping actions to latent actions in a (shared) latent action space, and an action decoder for mapping latent actions back from the latent action space, as well as a transition model for latent states, shared for the representations, and a Q function model for latent states, shared for the representations, using the state encoders, the state decoders, the action encoders and the action decoders. In addition, the method includes receiving a state of the agent in one of the representations for which a control action is to be ascertained; mapping the state to one or more latent state(s) with the aid of the state encoder for the one of the representations; determining Q values for the one or the multiple latent state(s) for a set of actions with the aid of the Q function model; selecting the action having the best Q value from the set of actions as the control action; and controlling the agent according to the selected control action.
The above-described method for controlling an agent makes it possible to combine knowledge from different representations of the same task (e.g., the same environment). This accelerates the training in one of the representations by utilizing the knowledge from other representations.
In particular, training data obtained in different representations of a task are able to be combined. For instance, training may take place on a less detailed (more abstract), simpler representation of the respective task (which is possible with fewer interactions of the agent with its environment), and the obtained knowledge can then be transferred to a more complex target task actually to be solved by the agent, thereby reducing the overall training effort.
The simpler representation may also make it possible to carry out more interactions in a more rapid and/or advantageous manner (in terms of the required resources). For instance, training in the real world may be combined with training in a simulation. Since the training in the real world typically means a high expenditure, for example because it requires human assistance as well as additional hardware, this, too, may reduce the expense of the training.
For example, prior knowledge may exist about a task, e.g., a type of map or a plan or a symbolic representation of the task, which is able to be used (e.g., in the form of a suitable representation) so that the training process can be accelerated as a whole.
In comparison with the conventional training of an RL control strategy, which must be trained anew for every representation, the data efficiency is able to be increased (that is, the number of required interactions with the environment is reduced). In particular, interactions in representations that require little effort (simulations) may be used, and the required number of interactions in representations that require considerable effort (interactions with the real world) may thus be reduced.
In the following text, different exemplary embodiments of the present invention are provided.
Exemplary embodiment 1 is a method for controlling an agent as described above.
Exemplary embodiment 2 is the method according to exemplary embodiment 1, in which the training is carried out using a loss function which has a loss term that is reduced (i.e., rewards the model) when it is highly likely under the latent transition model that transitions occur between latent states onto which the state encoder (of the respective representation) maps states that have transitioned into one another in the training data.
For example, the latent transition model and the state encoder, the state decoder, the action encoder and the action decoder (or the respective parameters) for all representations are trained using a loss function whose goal it is to maximize the likelihood of the occurrence of the collected interaction data from the representations under the overall model (maximum likelihood method).
The use of such a loss function ensures that the latent transition model is suitable for all representations and does not assign a high likelihood to transitions that occur only with low probability in one of the representations.
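Purely as an illustration of such an objective (the notation below is chosen here for explanation and is not taken from the original text), the maximum-likelihood training of the shared latent transition model together with the per-representation encoders and decoders may be sketched as

\[
\max_{\theta,\,\phi_1,\dots,\phi_M}\;\sum_{i=1}^{M}\;\mathbb{E}_{(s_{t-1},\,a_{t-1},\,s_t,\,r_t)\sim\mathcal{D}_i}\Big[\log p_{\theta,\phi_i}\big(s_t, r_t \mid s_{t-1}, a_{t-1}\big)\Big],
\]

where \(\mathcal{D}_i\) denotes the interaction data collected in representation \(i\), \(\phi_i\) the parameters of the encoders and decoders of representation \(i\), and \(\theta\) the parameters of the shared latent transition model; the likelihood is evaluated by encoding into the latent space, applying the latent transition model and decoding again.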
Exemplary embodiment 3 is the method according to exemplary embodiment 2, in which the loss function has a locality-condition term which penalizes large distances in the latent state space for (likely) transitions between latent states.
In this way, a meaningful structure of the latent state space is able to be achieved, which is particularly suitable for tasks such as navigation tasks.
Exemplary embodiment 4 is the method according to exemplary embodiment 2 or 3, in which the loss function has a reinforcement-learning loss for the shared Q function model.
During the training, it is therefore possible that all components of the model are trained to reduce the value of the loss function for the training data.
Exemplary embodiment 5 is the method according to exemplary embodiment 4, in which the reinforcement-learning loss is a double deep Q-network loss.
This enables an efficient training of the shared (i.e., latent) Q function model.
Exemplary embodiment 6 is the method according to one of the exemplary embodiments 1 through 5, in which the state encoder and the action encoder map to a respective probability distribution. Instead of a single latent state, the Q function model therefore receives multiple samples (also referred to as particles) of latent states according to the distribution of latent states supplied for the state (from a respective representation) by the respective state encoder.
In other words, the components (e.g., neural networks) carry out a variational inference. This ensures, for instance, a continuous structure of the latent spaces.
Exemplary embodiment 7 is the method according to one of the exemplary embodiments 1 through 6, in which the representations have a first representation, which is a representation of states in the real world and for which training data are collected through an interaction of the agent with the real world, and the representations have a second representation, which is a representation of states in a simulation and for which training data are collected by a simulated interaction of the agent with a simulated environment.
The combination of training by an interaction in the real world and by an interaction in a simulation makes it possible to considerably reduce the training expense in comparison with an acquisition of training data only in the real world, and it also avoids a quality loss of the training such as occurs in training based purely on a simulation.
However, it is also possible to combine multiple simulations, that is, both (or more) representations may represent simulated states.
Exemplary embodiment 8 is a control device which is designed to carry out a method as recited in one of the exemplary embodiments 1 through 7.
Exemplary embodiment 9 is a computer program which includes instructions that, when executed by a processor, cause the processor to carry out a method as recited in one of the exemplary embodiments 1 through 7.
Exemplary embodiment 10 is a computer-readable medium which stores instructions that, when executed by a processor, cause the processor to carry out a method as recited in one of the exemplary embodiments 1 through 7.
In the figures, similar reference numerals generally relate to the same parts in all of the different views. The figures are not necessarily true to scale, the general emphasis instead being placed on representing the principles of the present invention. In the following description, different aspects will be described with reference to the figures.
The following detailed description relates to the figures, which show, for purposes of illustration, specific details and aspects of this disclosure in which the present invention is able to be implemented. Other aspects may be used, and structural, logical and electrical modifications may be made without departing from the scope of the present invention. The different aspects of this disclosure do not necessarily exclude one another, insofar as some aspects of this disclosure are able to be combined with one or more other aspect(s) of this disclosure to generate new aspects.
Different exemplary embodiments will be described in greater detail in the following text.
A controlled object 100 (such as a robot or a vehicle) is located in an environment 101. Controlled object 100 has a start position 102 and is meant to reach a target position 103. Obstacles 104, which the controlled object 100 is to circumvent, are located in environment 101. For example, they must not be passed by controlled object 100 (e.g., walls, trees or rocks) or should be avoided because the agent would damage or injure them (such as pedestrians).
Controlled object 100 is controlled by a control device 105 (control device 105 may be situated in controlled object 100 or possibly also be provided separately therefrom, that is, the controlled object can also be controlled remotely). In the exemplary scenario of
In addition, the embodiments are not restricted to the scenario where a controlled object such as a robot (as a whole) is to be moved between positions 102, 103, but they may also be used for the control of a robot arm, for instance, whose end effectors are to be moved between positions 102, 103 (without encountering obstacles 104), etc.
In the following text, terms like robot, vehicle, machine, etc. are therefore used as examples of the object to be controlled or the computer-controlled system (e.g., a robot with objects in its working area). The described approaches are able to be used with different types of computer-controlled machines such as robots or vehicles, among others. Hereinafter, the general term ‘agent’ is used in particular for all types of physical systems that are able to be controlled by the approaches described in the following text. However, the approaches described hereinafter can also be used for all other types of agents (e.g., also for an agent which is only simulated and does not physically exist).
In the ideal case, control device 105 has learned a control strategy that enables it to successfully control controlled object 100 (from start position 102 to target position 103, without striking obstacles 104) for a variety of scenarios (i.e., environments, start and target positions) that control device 105 has not encountered so far (during the training).
The control strategy may be learned with the aid of reinforcement learning; and in order to achieve more efficient learning at a low effort, information from different sources is combined, such as different types of data (in particular sensor data), e.g., images from a top view, coordinates of controlled object 100, one-hot vectors, . . . , or in other words, different representations of the states of the controlled object and its environment and of the actions to be performed according to the task, possibly with a different degree of detail (e.g., the resolution of images or maps).
In this context, it is not necessary for the combined (or ‘mixed’) information to be available together or at the same time; instead, it can be collected independently in each case. According to different embodiments, a control strategy is trained such that it is able to work in every representation, that is, it can suitably control the controlled object in every one of the representations that have occurred during the training, the training becoming more efficient in every representation because knowledge from the training in the other representations is utilized.
The control strategy is represented by a model. According to the different embodiments, it includes neural networks (in particular a neural Q function network). More specifically, it includes components for mapping states and actions from multiple (abstract) representations of the same task (as a Markov decision problem) to the same latent representation with a latent transition model. For every representation, there exists an individual pair of encoder and decoder for states and actions (separately for states and actions) for mapping into and out of the latent representation. Based on ascertained latent states, a neural output network of the model calculates Q values for (discrete) actions, and based on these Q values, actions are selected in the respective representation. The model as a whole is trained by combining a loss term for the latent transition model and the state encoders, state decoders, action encoders and action decoders for all representations, and a classic RL loss term for maximizing the return from interaction data collected in the different representations (in the same environment).
The model therefore makes it possible to combine information from different representations of the same task (or environment or controlled system) in a latent space.
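Purely for illustration, the following sketch shows one possible way of organizing the described components; the layer sizes, the module names and the parameterization of the outputs as Gaussian mean and log-variance are assumptions made here for the example and are not taken from the original text.

```python
# Hypothetical sketch (not the original implementation): one encoder/decoder
# pair per representation for states and actions, plus a latent transition
# model and a Q-network that are shared across all representations.
import torch.nn as nn


def mlp(n_in, n_out, hidden=64):
    return nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(), nn.Linear(hidden, n_out))


class MultiRepModel(nn.Module):
    def __init__(self, state_dims, action_dims, latent_dim, n_actions, n_particles):
        super().__init__()
        # Per-representation components; the factor 2 accounts for outputting
        # a Gaussian mean and log-variance (diagonal covariance).
        self.state_enc = nn.ModuleDict({r: mlp(d, 2 * latent_dim) for r, d in state_dims.items()})
        self.state_dec = nn.ModuleDict({r: mlp(latent_dim, 2 * d) for r, d in state_dims.items()})
        self.action_enc = nn.ModuleDict({r: mlp(d, 2 * latent_dim) for r, d in action_dims.items()})
        self.action_dec = nn.ModuleDict({r: mlp(latent_dim, 2 * d) for r, d in action_dims.items()})
        # Shared components: the latent transition model predicts a Gaussian
        # over the next latent state and the reward; the Q-network maps a
        # concatenation of sampled latent particles to one Q value per action.
        self.transition = mlp(2 * latent_dim, 2 * (latent_dim + 1))
        self.q_net = mlp(latent_dim * n_particles, n_actions)


# Example: an image-based "real" representation and a low-dimensional simulated one.
model = MultiRepModel(state_dims={"real": 128, "sim": 4},
                      action_dims={"real": 3, "sim": 3},
                      latent_dim=8, n_actions=5, n_particles=4)
```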
A locality condition may be used to produce a meaningful structure in the latent representation (in particular for navigation tasks). In the process, (large) spatial distances between temporally consecutive latent states are penalized, the more so the greater the likelihood of such a transition of latent states. As a result, (spatially) local transitions between latent states are forced.
According to different embodiments, the RL control strategy or the Q-network (that is, the neural output network of the model) is trained with the aid of an off-policy double DQN using experience replay. For example, a separate memory for state transitions is used for this purpose, which stores trajectory sections (for training the transition model). This leads to a high data efficiency during the training of this part of the architecture (the Q-network) and may reduce the variance of the gradients during the training of the Q-network.
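As a minimal sketch of such a double-DQN update on (already encoded and concatenated) latent particle inputs, assuming the hypothetical module layout from the sketch above:

```python
# Double-DQN loss sketch; tensor names and shapes are illustrative only.
import torch
import torch.nn.functional as F


def double_dqn_loss(q_net, target_q_net, z, a, r, z_next, done, gamma=0.99):
    """z, z_next: concatenated latent particles, shape (batch, K * latent_dim)."""
    q_sa = q_net(z).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(z, a) of the taken action
    with torch.no_grad():
        # Online network selects the next action, target network evaluates it.
        a_star = q_net(z_next).argmax(dim=1, keepdim=True)
        q_next = target_q_net(z_next).gather(1, a_star).squeeze(1)
        target = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q_sa, target)
```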
A high quality of the encodings (as latent values) is achievable through the use of separate encoders and decoders according to different embodiments.
Model 200 is developed in such a way that it is trained for multiple Markov decision processes (for the same task). In other words, the respective agent learns a state-action value function (in the form of a neural Q-network) and thus a control strategy (to the effect that the action having the highest Q value is selected for a state). Additional components are provided for training the Q-network. States and actions are mapped to a latent state space and a latent action space. These may be considered the state space and action space of a latent decision process. For the latent states (that is, the states from the latent state space), the Q-network calculates the Q values. During the training, the additional components help in suitably forming the latent decision process (or, in other words, in suitably training the latent transition model).
In the training, the inputs for every instant t are the action a_{t-1} that the agent has performed at the previous instant, the resulting state s_t, and the resulting reward r_t.
The trainable components of the model are the state encoder and the state decoder, the action encoder and the action decoder, the latent transition function (that is, the transition function for the latent decision process), and the Q function, which may likewise be referred to as a “latent” Q function because its inputs are latent states.
The parameters of the encoders and decoders for states and actions depend on the representation for which they are used (that is, in which the action was carried out or the state was reached; each representation may be considered a separate Markov decision process). This is meant to indicate that a separate set of encoder and decoder is used for each representation (for states and actions). The encoders and decoders may also have different topologies for different representations.
In the model 200, multiple neural networks perform a variational inference. Their output for an input is therefore a probability distribution, typically by outputting parameters of a probability density function (e.g., a mean value and covariance matrix of a multi-dimensional Gaussian distribution).
Such a network is the state encoder for an associated representation, which maps a state in the representation to a probability distribution 201 in the latent state space. According to one embodiment, it outputs probability distribution 201 as a mean value and a diagonal covariance matrix of a multidimensional Gaussian distribution. For the sake of simplicity, this output is written as the probability of the latent state z given the state s. For an associated representation, the action encoder similarly maps an action carried out in the representation to a probability distribution across the latent action space. Latent actions, that is, elements of the latent action space, are differences between two latent states. As a result, latent actions (that is, encodings of actions) have the same dimension as latent states, and the output probability distribution is also a multidimensional Gaussian distribution.
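For illustration, a state encoder of this kind may be sketched as follows (a two-headed variant of the encoders sketched further above; the concrete layer sizes are assumptions made here for the example):

```python
# Illustrative variational encoder: outputs the mean and the log-variances
# (diagonal covariance) of a multidimensional Gaussian over latent states.
# An action encoder can be built the same way; its output dimension equals the
# latent state dimension, since latent actions are differences of latent states.
import torch.nn as nn


class GaussianEncoder(nn.Module):
    def __init__(self, in_dim, latent_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, latent_dim)
        self.log_var = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)
```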
The state decoder and the action decoder are trained in such a way that they perform the mappings inverse to those of the encoders, their outputs also being probability distributions 203, 204.
During the training, for evaluating the loss function, a plurality of samples (also referred to as particles) of latent states and latent actions is taken, where k is the index of the sample and k = 1, . . . , K, that is, K is the number of samples for each time step (with time step index t). This sampling is undertaken to express the uncertainty about the true pair of latent state and latent action. The reparameterization trick may be used to differentiate through the sampling (to ascertain the gradient of the loss function for the training, or in other words, to adapt the parameters of the different neural networks by backpropagation).
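A minimal sketch of this reparameterized sampling of K particles, assuming the Gaussian parameterization used in the sketches above:

```python
# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I), so that
# gradients of the loss can be propagated back through the sampling step.
import torch


def sample_particles(mean, log_var, k):
    """mean, log_var: (batch, latent_dim); returns (batch, k, latent_dim)."""
    std = (0.5 * log_var).exp()
    eps = torch.randn(mean.shape[0], k, mean.shape[1], device=mean.device)
    return mean.unsqueeze(1) + std.unsqueeze(1) * eps
```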
Here, an index identifies the precursor in the set of samples of the preceding time step. This is because the preceding latent state must be known in order to calculate the (logarithmized) probability of a transition. This preceding state is sampled from the set of samples of the preceding time step with a probability 206, which is directly proportional to the associated weight. The latent transition model (like the encoders, the decoders and the Q function, likewise implemented in the form of a neural network) is the same for all representations and encodes both the state transitions and the reward function for the latent decision process. Given a latent (preceding) state and a latent (preceding) action, it supplies a distribution 205 across the latent (next) state and the reward (once again as parameters of a multidimensional Gaussian distribution).
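The following sketch illustrates this step: a predecessor particle is drawn with probability proportional to its weight, and the logarithmized transition probability is evaluated under the shared latent transition model (here assumed, as above, to output a Gaussian mean and log-variance over the next latent state and the reward; the function names are placeholders introduced here):

```python
# Predecessor resampling and transition log-likelihood, as a hedged sketch.
import torch


def resample_predecessors(weights_prev, k):
    """Draw, for each of k particles, an index into the previous particle set
    with probability proportional to the associated weight."""
    probs = weights_prev / weights_prev.sum(dim=-1, keepdim=True)
    return torch.multinomial(probs, num_samples=k, replacement=True)


def transition_log_prob(transition_net, z_prev, u_prev, z, r):
    """log p(z, r | z_prev, u_prev) under a diagonal Gaussian output.
    z_prev, u_prev, z: (batch, latent_dim); r: (batch, 1)."""
    out = transition_net(torch.cat([z_prev, u_prev], dim=-1))
    mean, log_var = out.chunk(2, dim=-1)
    dist = torch.distributions.Normal(mean, (0.5 * log_var).exp())
    return dist.log_prob(torch.cat([z, r], dim=-1)).sum(dim=-1)
```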
For the training of the neural networks, a loss is used which combines two components, the RL loss (a double DQN loss) and the particle filter loss. The two prefactors weighting these components are suitably selectable with the aid of experiments or can also both be set to equal one.
Thus, one of the objectives consists of minimizing the particle filter loss, which is aimed at maximizing the logarithm of the sum of the weights w_t^k.
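Purely as an illustration of this structure (the weighting symbols are hypothetical names introduced here and are not taken from the original text), the combined loss may be written as

\[
\mathcal{L} \;=\; \lambda_{\mathrm{RL}}\,\mathcal{L}_{\mathrm{RL}} \;+\; \lambda_{\mathrm{PF}}\,\mathcal{L}_{\mathrm{PF}},
\qquad
\mathcal{L}_{\mathrm{PF}} \;=\; -\sum_{t}\,\log\sum_{k=1}^{K} w_t^{k},
\]

where \(\mathcal{L}_{\mathrm{RL}}\) is the double DQN loss and \(\lambda_{\mathrm{RL}}, \lambda_{\mathrm{PF}}\) denote the two prefactors mentioned above.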
According to one embodiment, a locality condition is also introduced into the particle filter loss. It is meant to reduce the distance between two latent states the higher the probability of a direct transition between the two. For this purpose, the particle filter loss is extended, for example, by a corresponding locality term.
Prefactor λ_local is a weighting that may be suitably selected by experiments or also be set to equal one.
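One plausible form of such a locality-extended particle filter loss (again only a sketch; the exact form is not reproduced in the text above) is

\[
\mathcal{L}_{\mathrm{PF}}^{\mathrm{local}} \;=\; -\sum_{t}\,\log\sum_{k=1}^{K} w_t^{k}
\;+\; \lambda_{\mathrm{local}} \sum_{t}\sum_{k=1}^{K} w_t^{k}\,\big\lVert z_t^{k} - z_{t-1}^{j_k}\big\rVert^{2},
\]

i.e., the spatial distance between temporally consecutive latent states is penalized the more strongly, the more highly weighted (i.e., the more likely) the corresponding transition is; \(j_k\) denotes the index of the sampled predecessor particle.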
The latent Q function supplies a state-action value (Q value) for every possible action, given a set of sampled latent states. The input for the Q-network thus is a concatenation of multiple samples (particles). The entire model 200 may be considered a Q function approximator for multiple decision processes. For a given decision process (that is, a given representation), the approximator is the composition of the state encoder, the sampling (and concatenation) of latent states, and the latent Q function. According to one embodiment, the latent Q function is trained with the aid of a double DQN (double deep Q-network). It is also used in the inference (that is, in operation, i.e., in the control following the training) for determining actions for a current state (which is encoded by the state encoder). In operation, only the latent Q function and the state encoders are still used. The state encoder is therefore trained both according to the particle filter loss and according to the double DQN loss. The other components are needed only during the training.
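As a hedged sketch of this composition (reusing the hypothetical helpers from the sketches above; the encoder is assumed to output the concatenated Gaussian mean and log-variance):

```python
# Q-function approximator for one representation: encode the state, sample and
# concatenate K latent particles, evaluate the shared latent Q-network.
def q_values(state, encoder, q_net, k=4):
    mean, log_var = encoder(state).chunk(2, dim=-1)   # Gaussian over latent states
    z = sample_particles(mean, log_var, k)            # (batch, k, latent_dim)
    z_cat = z.flatten(start_dim=1)                    # concatenation of the particles
    return q_net(z_cat)                               # one Q value per discrete action
```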
Hereinafter, an example of a training algorithm according to the above procedure is indicated. The algorithm is given in pseudocode, using common English keywords such as “for”, “end”, “if” and “then”.
Inputs: the Markov decision processes (MDPs, i.e., the representations) and the model components with their parameters.
Initialization: RL ← PP ← 0 (i.e., the memories are initialized as empty).
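The following high-level sketch restates the structure of such a training procedure (all function names are placeholders introduced here; this is not the original algorithm listing): data collection in the available representations alternates with gradient steps on the combined loss, with separate memories for the double-DQN loss and for the particle filter loss.

```python
# Structural sketch of the alternating training procedure described above.
import random


def train(envs, model, collect_episode, to_transitions,
          dqn_loss_on, pf_loss_on, gradient_step,
          n_iterations=100, n_grad_steps=50, batch_size=32):
    """All callables are placeholders for the respective components."""
    replay_rl = []   # single transitions (s, a, r, s') for the double-DQN loss
    replay_pf = []   # trajectory sections for the particle filter loss
    for _ in range(n_iterations):
        # 1) Collect interaction data with the current model in every representation.
        for rep, env in envs.items():
            trajectory = collect_episode(env, model, rep)
            replay_rl.extend(to_transitions(trajectory))
            replay_pf.append(trajectory)
        # 2) Update all model components on mini-batches drawn from both memories.
        for _ in range(n_grad_steps):
            rl_batch = random.sample(replay_rl, min(batch_size, len(replay_rl)))
            pf_batch = random.sample(replay_pf, min(batch_size, len(replay_pf)))
            gradient_step(model, dqn_loss_on(model, rl_batch) + pf_loss_on(model, pf_batch))
```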
In summary, a method as illustrated in the figures is provided, including the following steps.
In 401, training data are collected for multiple representations of states of the agent.
In 402, using the training data, for every representation a state encoder, a state decoder, an action encoder and an action decoder are trained, as well as a transition model for latent states and a Q function model, each shared among the representations.
The transition model and the Q function model are trained using the state encoders, the state decoders, the action encoders and the action decoders (for the representations), the parameters of the encoders and decoders being trained as well.
It should be noted that the collecting of training data in 401 and the training in 402 are carried out in alternation; that is, the training data are collected using the current state encoder and the Q function model in the one or more representation(s) and are written into corresponding memories, and the (entire) model is trained on that basis. In other words, the collecting of the data in 401 and the training of the model in 402 are repeated in alternation until the training of the model (i.e., of the mentioned model components) is concluded.
In 403 (e.g., following the concluded training), a state of the agent for which a control action is to be ascertained is received in one of the representations.
In 404, the state is mapped to one or more latent state(s) (e.g., to samples from a distribution of latent states) with the aid of the state encoder for the one of the representations. For instance, the state is first mapped to a distribution of latent states, from which latent states are then sampled. This may be seen as mapping the state to the sampled latent states.
In 405, Q values for the one or the plurality of latent state(s) (e.g., the samples, also referred to as particles) are ascertained for a set of actions with the aid of the Q function model.
In 406, the action having the best Q value is selected from the set of actions as the control action, and the agent is controlled according to the selected control action.
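A short usage sketch of steps 403 through 406, reusing the hypothetical helpers from the sketches above (the call at the end is purely illustrative):

```python
# Encode the received state in its representation, evaluate the shared latent
# Q-network on the sampled particles, and select the action with the best Q value.
import torch


def select_control_action(state, representation, model, k=4):
    encoder = model.state_enc[representation]        # state encoder of this representation
    q = q_values(state, encoder, model.q_net, k=k)   # Q values for all discrete actions
    return int(torch.argmax(q, dim=1)[0])            # index of the selected control action

# The agent would then be controlled according to this action, e.g. (hypothetically):
# action = select_control_action(observed_state, "real", model)
```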
According to different embodiments, multiple representations in a latent model (having a latent state space, a latent action space, a latent transition function, and a latent Q function model) are thus merged, so to speak.
The method of
The approach from
One example is a self-driving car for which a control device is to be trained or a control strategy is to be learned. The car may collect training data in a real test environment, e.g., by recording camera images, or training data are able to be collected in a simulation of the test environment, in which x-y coordinates and distances from obstacles are collected instead of images as information about control states (i.e., observations), because this information can easily be determined in a simulation. The approach from
Different embodiments may receive sensor signals from different sensors such as video, radar, LiDAR, ultrasound, movement, thermal imaging, etc., and utilize the sensor signals, for instance in order to acquire sensor data with regard to states of the controlled system (e.g., a robot and an object or objects). Embodiments are able to be used for training a machine learning system and controlling a robot, e.g., robot manipulators autonomously, in order to carry out different manipulation tasks in different scenarios. In particular, embodiments are able to be used to control and monitor the execution of manipulation tasks, such as on assembly lines.
Although special embodiments have been illustrated and described here, one skilled in the art will recognize that the depicted and described special embodiments may be exchanged for a multitude of alternative and/or equivalent implementations without deviating from the protective scope of the present invention. This application is meant to cover different adaptations or variations of the example embodiments described herein.