TRAINING A POLICY NEURAL NETWORK AND A VALUE NEURAL NETWORK

Information

  • Patent Application
  • Publication Number
    20180032863
  • Date Filed
    September 29, 2016
  • Date Published
    February 01, 2018
Abstract
Methods, systems and apparatus, including computer programs encoded on computer storage media, for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score. One of the systems performs operations that include training a supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of priority to German Utility Model Application No. 20 2016 004 627.7, filed on Jul. 27, 2016, the entire contents of which are incorporated herein by reference.


BACKGROUND

This specification relates to selecting actions to be performed by a reinforcement learning agent.


Reinforcement learning agents interact with an environment by receiving an observation that characterizes the current state of the environment, and in response, performing an action. Once the action is performed, the agent receives a reward that is dependent on the effect of the performance of the action on the environment.


Some reinforcement learning systems use neural networks to select the action to be performed by the agent in response to receiving any given observation.


Neural networks are machine learning models that employ one or more layers of nonlinear units to predict an output for a received input. Some neural networks are deep neural networks that include one or more hidden layers in addition to an output layer. The output of each hidden layer is used as input to the next layer in the network, i.e., the next hidden layer or the output layer. Each layer of the network generates an output from a received input in accordance with current values of a respective set of parameters.
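
By way of illustration only, the following minimal sketch shows this layer-by-layer computation for a small fully-connected network; the choice of framework (PyTorch) and the layer sizes are assumptions introduced here and are not part of the specification.

```python
import torch
import torch.nn as nn

# Each hidden layer's output is used as the input to the next layer, and each
# layer generates its output in accordance with the current values of its own
# set of parameters (here, the weights and biases of the Linear layers).
network = nn.Sequential(
    nn.Linear(16, 32),  # hidden layer 1
    nn.ReLU(),          # nonlinear units
    nn.Linear(32, 32),  # hidden layer 2
    nn.ReLU(),
    nn.Linear(32, 4),   # output layer
)

prediction = network(torch.randn(1, 16))  # predict an output for a received input
```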


SUMMARY

This specification describes technologies that relate to reinforcement learning.


The subject matter described in this specification can be implemented in particular embodiments so as to realize one or more of the following advantages. Actions to be performed by an agent interacting with an environment that has a very large state space can be effectively selected to maximize the rewards resulting from the performance of the actions. In particular, actions can effectively be selected even when the environment has a state tree that is too large to be exhaustively searched. By using neural networks in searching the state tree, the amount of computing resources and the time required to effectively select an action to be performed by the agent can be reduced. Additionally, neural networks can be used to reduce the effective breadth and depth of the state tree during the search, reducing the computing resources required to search the tree and to select an action. By employing a training pipeline for training the neural networks as described in this specification, various kinds of training data can be effectively utilized in the training, resulting in trained neural networks with better performance.


The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an example reinforcement learning system.



FIG. 2 is a flow diagram of an example process for training a collection of neural networks for use in selecting actions to be performed by an agent interacting with an environment.



FIG. 3 is a flow diagram of an example process for selecting an action to be performed by the agent using a state tree.



FIG. 4 is a flow diagram of an example process for performing a search of an environment state tree using neural networks.





Like reference numbers and designations in the various drawings indicate like elements.


DETAILED DESCRIPTION

This specification generally describes a reinforcement learning system that selects actions to be performed by a reinforcement learning agent interacting with an environment. In order to interact with the environment, the reinforcement learning system receives data characterizing the current state of the environment and selects an action to be performed by the agent from a set of actions in response to the received data. Once the action has been selected by the reinforcement learning system, the agent performs the action to interact with the environment.


Generally, the agent interacts with the environment in order to complete one or more objectives and the reinforcement learning system selects actions in order to maximize the objectives, as represented by numeric rewards received by the reinforcement learning system in response to actions performed by the agent.


In some implementations, the environment is a real-world environment and the agent is a control system for a mechanical agent interacting with the real-world environment. For example, the agent may be a control system integrated in an autonomous or semi-autonomous vehicle navigating through the environment. In these implementations, the actions may be possible control inputs to control the vehicle and the objectives that the agent is attempting to complete are objectives for the navigation of the vehicle through the real-world environment. For example, the objectives can include one or more of: reaching a destination, ensuring the safety of any occupants of the vehicle, minimizing energy used in reaching the destination, maximizing the comfort of the occupants, and so on.


In some other implementations, the environment is a real-world environment and the agent is a computer system that generates outputs for presentation to a user.


For example, the environment may be a patient diagnosis environment such that each state is a respective patient state of a patient, i.e., as reflected by health data characterizing the health of the patient, and the agent may be a computer system for suggesting treatment for the patient. In this example, the actions in the set of actions are possible medical treatments for the patient and the objectives can include one or more of maintaining a current health of the patient, improving the current health of the patient, minimizing medical expenses for the patient, and so on.


As another example, the environment may be a protein folding environment such that each state is a respective state of a protein chain and the agent is a computer system for determining how to fold the protein chain. In this example, the actions are possible folding actions for folding the protein chain and the objective may include, e.g., folding the protein so that the protein is stable and so that it achieves a particular biological function. As another example, the agent may be a mechanical agent that performs the protein folding actions selected by the system automatically without human interaction.


In some other implementations, the environment is a simulated environment and the agent is implemented as one or more computer programs interacting with the simulated environment. For example, the simulated environment may be a virtual environment in which a user competes against a computerized agent to accomplish a goal and the agent is the computerized agent. In this example, the actions in the set of actions are possible actions that can be performed by the computerized agent and the objective may be, e.g., to win the competition against the user.



FIG. 1 shows an example reinforcement learning system 100. The reinforcement learning system 100 is an example of a system implemented as computer programs on one or more computers in one or more locations in which the systems, components, and techniques described below are implemented.


The reinforcement learning system 100 selects actions to be performed by a reinforcement learning agent 102 interacting with an environment 104. That is, the reinforcement learning system 100 receives observations, with each observation being data characterizing a respective state of the environment 104, and, in response to each received observation, selects an action from a set of actions to be performed by the reinforcement learning agent 102 in response to the observation.


Once the reinforcement learning system 100 selects an action to be performed by the agent 102, the reinforcement learning system 100 instructs the agent 102 and the agent 102 performs the selected action. Generally, the agent 102 performing the selected action results in the environment 104 transitioning into a different state.


The observations characterize the state of the environment in a manner that is appropriate for the context of use for the reinforcement learning system 100.


For example, when the agent 102 is a control system for a mechanical agent interacting with the real-world environment, the observations may be images captured by sensors of the mechanical agent as it interacts with the real-world environment and, optionally, other sensor data captured by the sensors of the agent.


As another example, when the environment 104 is a patient diagnosis environment, the observations may be data from an electronic medical record of a current patient.


As another example, when the environment 104 is a protein folding environment, the observations may be images of the current configuration of a protein chain, a vector characterizing the composition of the protein chain, or both.


In particular, the reinforcement learning system 100 selects actions using a collection of neural networks that includes at least one policy neural network, e.g., a supervised learning (SL) policy neural network 140, a reinforcement learning (RL) policy neural network 150, or both, a value neural network 160, and, optionally, a fast rollout neural network 130.


Generally, a policy neural network is a neural network that is configured to receive an observation and to process the observation in accordance with parameters of the policy neural network to generate a respective action probability for each action in the set of possible actions that can be performed by the agent to interact with the environment.


In particular, the SL policy neural network 140 is a neural network that is configured to receive an observation and to process the observation in accordance with parameters of the supervised learning policy neural network 140 to generate a respective action probability for each action in the set of possible actions that can be performed by the agent to interact with the environment.
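
By way of a non-limiting illustration, the sketch below shows one possible form of such a policy neural network for image-like observations; the class name, layer sizes, and use of PyTorch are assumptions introduced here for clarity rather than requirements of the specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNetwork(nn.Module):
    """Receives an observation and outputs a respective action probability
    for each action in the set of possible actions."""

    def __init__(self, observation_channels: int, num_actions: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(observation_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(num_actions)  # one logit per possible action

    def forward(self, observation: torch.Tensor) -> torch.Tensor:
        logits = self.head(self.trunk(observation))
        return F.softmax(logits, dim=-1)  # action probabilities
```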


When used by the reinforcement learning system 100, the fast rollout neural network 130 is also configured to generate action probabilities for actions in the set of possible actions (when generated by the fast rollout neural network 130, these probabilities will be referred to in this specification as “rollout action probabilities”), but is configured to generate an output faster than the SL policy neural network 140.


That is, the processing time necessary for the fast rollout policy neural network 130 to generate rollout action probabilities is less than the processing time necessary for the SL policy neural network 140 to generate action probabilities.


To that end, the fast rollout neural network 130 is a neural network that has an architecture that is more compact than the architecture of the SL policy neural network 140 and the inputs to the fast rollout policy neural network (referred to in this specification as “rollout inputs”) are less complex than the observations that are inputs to the SL policy neural network 140.


For example, in implementations where the observations are images, the SL policy neural network 140 may be a convolutional neural network configured to process the images while the fast rollout neural network 130 is a shallower, fully-connected neural network that is configured to receive as input feature vectors that characterize the state of the environment 104.
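
A corresponding sketch of the more compact fast rollout network is given below; the layer sizes and names are again assumptions, and the essential point is only that it is a shallower, fully-connected network over simpler rollout inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastRolloutNetwork(nn.Module):
    """A shallower, fully-connected network that consumes feature vectors
    (rollout inputs) rather than full observations, so that its processing
    time is less than that of the SL policy network."""

    def __init__(self, feature_size: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_size, 64),
            nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, rollout_input: torch.Tensor) -> torch.Tensor:
        return F.softmax(self.net(rollout_input), dim=-1)  # rollout action probabilities
```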


The RL policy neural network 150 is a neural network that has the same neural network architecture as the SL policy neural network 140 and therefore generates the same kind of output. However, as will be described in more detail below, in implementations where the system 100 uses both the RL policy neural network and the SL policy neural network, because the RL policy neural network 150 is trained differently from the SL policy neural network 140, once both neural networks are trained, parameter values differ between the two neural networks.


The value neural network 160 is a neural network that is configured to receive an observation and to process the observation to generate a value score for the state of the environment characterized by the observation. Generally, the value neural network 160 has a neural network architecture that is similar to that of the SL policy neural network 140 and the RL policy neural network 150 but has a different type of output layer from that of the SL policy neural network 140 and the RL policy neural network 150, e.g., a regression output layer, that results in the output of the value neural network 160 being a single value score.
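
A sketch of a value neural network of this kind, with a regression output layer producing a single value score, is shown below; the architecture details are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class ValueNetwork(nn.Module):
    """Similar trunk to the policy networks, but with a regression output
    layer that produces a single value score for the input observation."""

    def __init__(self, observation_channels: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(observation_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(1)  # regression output layer

    def forward(self, observation: torch.Tensor) -> torch.Tensor:
        return self.head(self.trunk(observation)).squeeze(-1)  # value score
```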


To allow the agent 102 to effectively interact with the environment 104, the reinforcement learning system 100 includes a neural network training subsystem 110 that trains the neural networks in the collection to determine trained values of the parameters of the neural networks.


When used by the system 100 in selecting actions, the neural network training subsystem 110 trains the fast rollout neural network 130 and the SL policy neural network 140 on labeled training data using supervised learning and trains the RL policy neural network 150 and the value neural network 160 based on interactions of the agent 102 with a simulated version of the environment 104.


Generally, the simulated version of the environment 104 is a virtualized environment that simulates how actions performed by the agent 102 would affect the state of the environment 104.


For example, when the environment 104 is a real-world environment and the agent is an autonomous or semi-autonomous vehicle, the simulated version of the environment is a motion simulation environment that simulates navigation through the real-world environment. That is, the motion simulation environment simulates the effects of various control inputs on the navigation of the vehicle through the real-world environment.


As another example, when the environment 104 is a patient diagnosis environment, the simulated version of the environment is a patient health simulation that simulates effects of medical treatments on patients. For example, the patient health simulation may be a computer program that receives patient information and a treatment to be applied to the patient and outputs the effect of the treatment on the patient's health.


As another example, when the environment 104 is a protein folding environment, the simulated version of the environment is a simulated protein folding environment that simulates effects of folding actions on protein chains. That is, the simulated protein folding environment may be a computer program that maintains a virtual representation of a protein chain and models how performing various folding actions will influence the protein chain.


As another example, when the environment 104 is the virtual environment described above, the simulated version of the environment is a simulation in which the user is replaced by another computerized agent.


Training the collection of neural networks is described in more detail below with reference to FIG. 2.


The reinforcement learning system 100 also includes an action selection subsystem 120 that, once the neural networks in the collection have been trained, uses the trained neural networks to select actions to be performed by the agent 102 in response to a given observation.


In particular, the action selection subsystem 120 maintains data representing a state tree of the environment 104. The state tree includes nodes that represent states of the environment 104 and directed edges that connect nodes in the tree. An outgoing edge from a first node to a second node in the tree represents an action that was performed in response to an observation characterizing the first state and resulted in the environment transitioning into the second state.


While the data is logically described as a tree, the state tree can be represented by the action selection subsystem 120 using any of a variety of convenient physical data structures, e.g., as multiple triples or as an adjacency list.


The action selection subsystem 120 also maintains edge data for each edge in the state tree that includes (i) an action score for the action represented by the edge, (ii) a visit count for the action represented by the edge, and (iii) a prior probability for the action represented by the edge.


At any given time, the action score for an action represents the current likelihood that the agent 102 will complete the objectives if the action is performed, the visit count for the action is the current number of times that the action has been performed by the agent 102 in response to observations characterizing the respective first state represented by the respective first node for the edge, and the prior probability represents the likelihood that the action is the action that should be performed by the agent 102 in response to observations characterizing the respective first state as determined by the output of one of the neural networks, i.e., and not as determined by subsequent interactions of the agent 102 with the environment 104 or the simulated version of the environment 104.
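
For illustration, the edge data described above can be held in a simple structure such as the following sketch; the class and field names are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeData:
    """Per-edge statistics maintained by the action selection subsystem."""
    prior_probability: float   # (iii) likelihood taken from a neural network output
    visit_count: int = 0       # (ii) times the action has been performed from this node
    action_score: float = 0.0  # (i) current likelihood of completing the objectives

@dataclass
class Node:
    """A node of the state tree; outgoing edges are keyed by action."""
    edges: dict = field(default_factory=dict)  # action -> (EdgeData, child Node or None)
```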


The action selection subsystem 120 updates the data representing the state tree and the edge data for the edges in the state tree from interactions of the agent 102 with the simulated version of the environment 104 using the trained neural networks in the collection. In particular, the action selection subsystem 120 repeatedly performs searches of the state tree to update the tree and edge data. Performing a search of the state tree to update the state tree and the edge data is described in more detail below with reference to FIG. 4.


In some implementations, the action selection subsystem 120 performs a specified number of searches or performs searches for a specified period of time to finalize the state tree and then uses the finalized state tree to select actions to be performed by the agent 102 in interacting with the actual environment 104, i.e., and not the simulated version of the environment.


In other implementations, however, the action selection subsystem 120 continues to update the state tree by performing searches as the agent 102 interacts with the actual environment 104, i.e., as the agent 102 continues to interact with the environment 104, the action selection subsystem 120 continues to update the state tree.


In any of these implementations, however, when an observation is received by the reinforcement learning system 100, the action selection subsystem 120 selects the action to be performed by the agent 102 using the current edge data for the edges that are outgoing from the node in the state tree that represents the state characterized by the observation. Selecting an action is described in more detail below with reference to FIG. 3.



FIG. 2 is a flow diagram of an example process 200 for training a collection of neural networks for use in selecting actions to be performed by an agent interacting with an environment. For convenience, the process 200 will be described as being performed by a system of one or more computers located in one or more locations. For example, a reinforcement learning system, e.g., the reinforcement learning system 100 of FIG. 1, appropriately programmed in accordance with this specification, can perform the process 200.


The system trains the SL policy neural network and, when included, the fast rollout policy neural network on labeled training data using supervised learning (step 202).


The labeled training data for the SL policy neural network includes multiple training observations and, for each training observation, an action label that identifies an action that was performed in response to the training observation.


For example, the action labels may identify, for each training observation, an action that was performed by an expert, e.g., an agent being controlled by a human actor, when the environment was in the state characterized by the training observation.


In particular, the system trains the SL policy neural network to generate action probabilities that match the action labels for the labeled training data by adjusting the values of the parameters of the SL policy neural network from initial values of the parameters to trained values of the parameters. For example, the system can train the SL policy neural network using asynchronous stochastic gradient descent updates to maximize the log likelihood of the action identified by the action label for a given training observation.
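
A minimal sketch of one such supervised update is shown below, assuming the `PolicyNetwork` sketch above and a standard stochastic gradient optimizer; maximizing the log likelihood of the labeled action is implemented as minimizing the negative log likelihood.

```python
import torch
import torch.nn.functional as F

def sl_training_step(policy_net, optimizer, observations, action_labels):
    """One supervised learning update on a batch of labeled training data."""
    optimizer.zero_grad()
    probabilities = policy_net(observations)       # [batch, num_actions]
    log_probs = torch.log(probabilities + 1e-8)
    loss = F.nll_loss(log_probs, action_labels)    # negative log likelihood of the labels
    loss.backward()
    optimizer.step()                               # stochastic gradient descent update
    return loss.item()
```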


As described above, the fast rollout policy neural network is a network that generates outputs faster than the SL policy neural network, i.e., because the architecture of the fast rollout policy neural network is more compact than the architecture of the SL policy neural network and the inputs to the fast rollout policy neural network are less complex than the inputs to the SL policy neural network.


Thus, the labeled training data for the fast rollout policy neural network includes training rollout inputs, and for each training rollout input, an action label that identifies an action that was performed in response to the rollout input. For example, the labeled training data for the fast rollout policy neural network may be the same as the labeled training data for the SL policy neural network but with the training observations being replaced with training rollout inputs that characterize the same states as the training observations.


Like the SL policy neural network, the system trains the fast rollout neural network to generate rollout action probabilities that match the action labels in the labeled training data by adjusting the values of the parameters of the fast rollout neural network from initial values of the parameters to trained values of the parameters. For example, the system can train the fast rollout neural network using stochastic gradient descent updates to maximize the log likelihood of the action identified by the action label for a given training rollout input.


The system initializes initial values of the parameters of the RL policy neural network to the trained values of the SL policy neural network (step 204). As described before, the RL policy neural network and the SL policy neural network have the same network architecture, and the system initializes the values of the parameters of the RL policy neural network to match the trained values of the parameters of the SL policy neural network.


The system trains the RL policy neural network while the agent interacts with the simulated version of the environment (step 206).


That is, after initializing the values, the system trains the RL policy neural network to adjust the values of the parameters of the RL policy neural network using reinforcement learning from data generated from interactions of the agent with the simulated version of the environment.


During these interactions, the actions that are performed by the agent are selected using the RL policy neural network in accordance with current values of the parameters of the RL policy neural network.


In particular, the system trains the RL policy neural network to adjust the values of the parameters of the RL policy neural network to generate action probabilities that represent, for each action, a predicted likelihood that the long-term reward that will be received will be maximized if the action is performed by the agent in response to the observation instead of any other action in the set of possible actions. Generally, the long-term reward is a numeric value that is dependent on the degree to which the one or more objectives are completed during interaction of the agent with the environment.


To train the RL policy neural network, the system completes an episode of interaction of the agent while the actions were being selected using the RL policy neural network and then generates a long-term reward for the episode. The system generates the long-term reward based on the outcome of the episode, i.e., on whether the objectives were completed during the episode. For example, the system can set the reward to one value if the objectives were completed and to another, lower value if the objectives were not completed.


The system then trains the RL policy neural network on the training observations in the episode to adjust the values of the parameters using the long-term reward, e.g., by computing policy gradient updates and adjusting the values of the parameters using those policy gradient updates using a reinforcement learning technique, e.g., REINFORCE.


The system can determine final values of the parameters of the RL policy neural network by repeatedly training the RL policy neural network on episodes of interaction.
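
Steps 204 and 206 can be sketched as follows; the episode-level REINFORCE update shown is a simplified illustration (a baseline term is often subtracted in practice), and the function and argument names are assumptions introduced here.

```python
import torch

def initialize_rl_from_sl(rl_policy_net, sl_policy_net):
    """Step 204: the two networks share an architecture, so the RL network is
    initialized to the trained parameter values of the SL network."""
    rl_policy_net.load_state_dict(sl_policy_net.state_dict())

def reinforce_update(rl_policy_net, optimizer, observations, actions, long_term_reward):
    """Step 206 (sketch): scale the log likelihood of each action selected
    during the episode by the long-term reward generated for that episode."""
    optimizer.zero_grad()
    probabilities = rl_policy_net(observations)                    # [T, num_actions]
    selected = probabilities.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(long_term_reward * torch.log(selected + 1e-8)).sum()  # policy gradient
    loss.backward()
    optimizer.step()
```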


The system trains the value neural network on training data generated from interactions of the agent with the simulated version of the environment (step 208).


In particular, the system trains the value neural network to generate a value score for a given state of the environment that represents the predicted long-term reward resulting from the environment being in the state by adjusting the values of the parameters of the value neural network.


The system generates training data for the value neural network from the interaction of the agent with the simulated version of the environment. The interactions can be the same as the interactions used to train the RL policy neural network, or can be interactions during which actions performed by the agent are selected using a different action selection policy, e.g., the SL policy neural network, the RL policy neural network, or another action selection policy.


The training data includes training observations and, for each training observation, the long-term reward that resulted from the training observation.


For example, the system can select one or more observations randomly from each episode of interaction and then associate the observation with the reward for the episode to generate the training data.


As another example, the system can select one or more observations randomly from each episode, simulate the remainder of the episode by selecting actions using one of the policy neural networks, by randomly selecting actions, or both, and then determine the reward for the simulated episode. The system can then randomly select one or more observations from the simulated episode and associate the reward for the simulated episode with the observations to generate the training data.


The system then trains the value neural network on the training observations using supervised learning to determine trained values of the parameters of the value neural network from initial values of the parameters of the neural network. For example, the system can train the value neural network using asynchronous gradient descent to minimize the mean squared error between the value scores and the actual long-term reward received.
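
A sketch of this training data generation and regression training is given below, assuming the `ValueNetwork` sketch above; the one-observation-per-episode sampling shown corresponds to the first example described above.

```python
import random
import torch
import torch.nn.functional as F

def value_training_example(episode_observations, episode_reward):
    """Pair a randomly selected observation from an episode with the
    long-term reward that resulted from that episode."""
    return random.choice(episode_observations), episode_reward

def value_training_step(value_net, optimizer, observations, rewards):
    """One supervised update minimizing the mean squared error between the
    value scores and the long-term rewards actually received."""
    optimizer.zero_grad()
    value_scores = value_net(observations)     # [batch]
    loss = F.mse_loss(value_scores, rewards)
    loss.backward()
    optimizer.step()
    return loss.item()
```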



FIG. 3 is a flow diagram of an example process 300 for selecting an action to be performed by the agent using a state tree. For convenience, the process 300 will be described as being performed by a system of one or more computers located in one or more locations. For example, a reinforcement learning system, e.g., the reinforcement learning system 100 of FIG. 1, appropriately programmed in accordance with this specification, can perform the process 300.


The system receives a current observation characterizing a current state of the environment (step 302) and identifies a current node in the state tree that represents the current state (step 304).


Optionally, prior to selecting the action to be performed by the agent in response to the current observation, the system searches or continues to search the state tree until an action is to be selected (step 306). That is, in some implementations, the system is allotted a certain time period after receiving the observation to select an action. In these implementations, the system continues performing searches as described below with reference to FIG. 4, starting from the current node in the state tree until the allotted time period elapses. The system can then update the state tree and the edge data based on the searches before selecting an action in response to the current observation. In some of these implementations, the system searches or continues searching only if the edge data indicates that the action to be selected may be modified as a result of the additional searching.


The system selects an action to be performed by the agent in response to the current observation using the current edge data for outgoing edges from the current node (step 308).


In some implementations, the system selects the action represented by the outgoing edge having the highest action score as the action to be performed by the agent in response to the current observation. In some other implementations, the system selects the action represented by the outgoing edge having the highest visit count as the action to be performed by the agent in response to the current observation.
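
Using the `Node`/`EdgeData` structures from the earlier sketch, step 308 under the visit-count variant can be illustrated as follows (selecting by highest action score is the analogous alternative).

```python
def select_action_for_agent(current_node):
    """Pick the action whose outgoing edge from the current node has the
    highest visit count."""
    return max(current_node.edges.items(),
               key=lambda item: item[1][0].visit_count)[0]
```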


The system can continue performing the process 300 in response to received observations until the interaction of the agent with the environment terminates. In some implementations, the system continues performing searches of the environment using the simulated version of the environment, e.g., using one or more replicas of the agent to perform the actions to interact with the simulated version, independently from selecting actions to be performed by the agent to interact with the actual environment.



FIG. 4 is a flow diagram of an example process 400 for performing a search of an environment state tree using neural networks. For convenience, the process 400 will be described as being performed by a system of one or more computers located in one or more locations. For example, a reinforcement learning system, e.g., the reinforcement learning system 100 of FIG. 1, appropriately programmed in accordance with this specification, can perform the process 400.


The system receives data identifying a root node for the search, i.e., a node representing an initial state of the simulated version of the environment (step 402).


The system selects actions to be performed by the agent to interact with the environment by traversing the state tree until the environment reaches a leaf state, i.e., a state that is represented by a leaf node in the state tree (step 404).


That is, in response to each received observation characterizing an in-tree state, i.e., a state encountered by the agent starting from the initial state until the environment reaches the leaf state, the system selects an action to be performed by the agent in response to the observation using the edge data for the outgoing edges from the in-tree node representing the in-tree state.


In particular, for each outgoing edge from an in-tree node, the system determines an adjusted action score for the edge based on the action score for the edge, the visit count for the edge, and the prior probability for the edge. Generally, the system computes the adjusted action score for a given edge by adding to the action score for the edge a bonus that is proportional to the prior probability for the edge but decays with repeated visits to encourage exploration. For example, the bonus may be directly proportional to a ratio that has the prior probability as the numerator and a constant, e.g., one, plus the visit count as the denominator.


The system then selects the action represented by the edge with the highest adjusted action score as the action to be performed by the agent in response to the observation.
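
A sketch of this in-tree selection rule, again using the `EdgeData` structure from the earlier sketch and an assumed exploration constant, is shown below.

```python
def adjusted_action_score(edge, exploration_constant=1.0):
    """Action score plus a bonus proportional to the prior probability that
    decays with repeated visits, e.g. P / (1 + N)."""
    bonus = exploration_constant * edge.prior_probability / (1 + edge.visit_count)
    return edge.action_score + bonus

def select_in_tree_action(node):
    """Traversal step: select the outgoing edge with the highest adjusted action score."""
    return max(node.edges.items(),
               key=lambda item: adjusted_action_score(item[1][0]))[0]
```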


The system continues selecting actions to be performed by the agent in this manner until an observation is received that characterizes a leaf state that is represented by a leaf node in the state tree. Generally, a leaf node is a node in the state tree that has no child nodes, i.e., is not connected to any other nodes by an outgoing edge.


The system expands the leaf node using one of the policy neural networks (step 406). That is, in some implementations, the system uses the SL policy neural network in expanding the leaf node, while in other implementations, the system uses the RL policy neural network.


To expand the leaf node, the system adds a respective new edge to the state tree for each action that is a valid action to be performed by the agent in response to the leaf observation. The system also initializes the edge data for each new edge by setting the visit count and action score for the new edge to zero. To determine the prior probability for each new edge, the system processes the leaf observation using the policy neural network, i.e., either the SL policy neural network or the RL policy neural network depending on the implementation, and uses the action probabilities generated by the network as the prior probabilities for the corresponding edges. In some implementations, the temperature of the output layer of the policy neural network is reduced when generating the prior probabilities to smooth out the probability distribution defined by the action probabilities.
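
Step 406 can be sketched as below, reusing the earlier `EdgeData` and `PolicyNetwork` sketches; the temperature handling is shown only schematically, `valid_actions` is assumed to be a list of integer action indices, and the argument names are assumptions.

```python
import torch

def expand_leaf(leaf_node, leaf_observation, policy_net, valid_actions, temperature=1.0):
    """Add a new edge for each valid action with zero visit count and action
    score, using the policy network's action probabilities as the priors."""
    with torch.no_grad():
        probs = policy_net(leaf_observation.unsqueeze(0)).squeeze(0)
    if temperature != 1.0:                       # optional temperature adjustment
        probs = probs.pow(1.0 / temperature)
        probs = probs / probs.sum()
    for action in valid_actions:
        leaf_node.edges[action] = (EdgeData(prior_probability=probs[action].item()), None)
```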


The system evaluates the leaf node using the value neural network and, optionally, the fast rollout policy neural network to generate a leaf evaluation score for the leaf node (step 408).


To evaluate the leaf node using the value neural network, the system processes the observation characterizing the leaf state using the value neural network to generate a value score for the leaf state that represents a predicted long-term reward received as a result of the environment being in the leaf state.


To evaluate the leaf node using the fast rollout policy neural network, the system performs a rollout until the environment reaches a terminal state by selecting actions to be performed by the agent using the fast rollout policy neural network. That is, for each state encountered by the agent during the rollout, the system receives rollout data characterizing the state and processes the rollout data using the fast rollout policy neural network that has been trained to receive the rollout data to generate a respective rollout action probability for each action in the set of possible actions. In some implementations, the system then selects the action having a highest rollout action probability as the action to be performed by the agent in response to the rollout data characterizing the state. In some other implementations, the system samples from the possible actions in accordance with the rollout action probabilities to select the action to be performed by the agent.


The terminal state is a state in which the objectives have been completed or a state which has been classified as a state from which the objectives cannot be reasonably completed. Once the environment reaches the terminal state, the system determines a rollout long-term reward based on the terminal state. For example, the system can set the rollout long-term reward to a first value if the objective was completed in the terminal state and a second, lower value if the objective is not completed as of the terminal state.
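
The rollout phase can be sketched as follows; the simulated-environment interface (`is_terminal`, `rollout_input`, `step`, `objectives_completed`) and the two reward values are hypothetical placeholders introduced only for illustration.

```python
import torch

def fast_rollout(simulated_env, rollout_net, completed_reward=1.0, failed_reward=0.0):
    """Select actions with the fast rollout network until a terminal state is
    reached, then return a rollout long-term reward based on that state."""
    while not simulated_env.is_terminal():
        rollout_input = simulated_env.rollout_input()      # simpler than a full observation
        with torch.no_grad():
            probs = rollout_net(rollout_input.unsqueeze(0)).squeeze(0)
        action = torch.multinomial(probs, 1).item()        # sample (or probs.argmax().item())
        simulated_env.step(action)
    return completed_reward if simulated_env.objectives_completed() else failed_reward
```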


The system then either uses the value score as the leaf evaluation score for the leaf node or, if both the value neural network and the fast rollout policy neural network are used, combines the value score and the rollout long-term reward to determine the leaf evaluation score for the leaf node. For example, when combined, the leaf evaluation score can be a weighted sum of the value score and the rollout long-term reward.
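
Combining the two evaluations can be sketched as a weighted sum; the mixing weight is an assumed parameter, not a value specified here.

```python
def leaf_evaluation_score(value_score, rollout_reward=None, mixing_weight=0.5):
    """Either the value score alone, or a weighted sum of the value score and
    the rollout long-term reward when the fast rollout network is also used."""
    if rollout_reward is None:
        return value_score
    return (1.0 - mixing_weight) * value_score + mixing_weight * rollout_reward
```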


The system updates the edge data for the edges traversed during the search based on the leaf evaluation score for the leaf node (step 410).


In particular, for each edge that was traversed during the search, the system increments the visit count for the edge by a predetermined constant value, e.g., by one. The system also updates the action score for the edge using the leaf evaluation score by setting the action score equal to the new average of the leaf evaluation scores of all searches that involved traversing the edge.
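
Step 410 can be sketched as an incremental running-average update over the traversed edges, again using the earlier `EdgeData` structure.

```python
def update_traversed_edges(traversed_edges, leaf_evaluation_score):
    """Increment each traversed edge's visit count and keep its action score
    equal to the average leaf evaluation score over all searches through it."""
    for edge in traversed_edges:
        edge.visit_count += 1
        edge.action_score += (leaf_evaluation_score - edge.action_score) / edge.visit_count
```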


While the description of FIG. 4 describes actions being selected for the agent interacting with the environment, it will be understood that the process 400 may instead be performed to search the state tree using the simulated version of the environment, i.e., with actions being selected to be performed by the agent or a replica of the agent to interact with the simulated version of the environment.


In some implementations, the system distributes the searching of the state tree, i.e., by running multiple different searches in parallel on multiple different machines, i.e., computing devices.


For example, the system may implement an architecture that includes a master machine that executes the main search, many remote worker CPUs that execute asynchronous rollouts, and many remote worker GPUs that execute asynchronous policy and value network evaluations. The entire state tree may be stored on the master, which only executes the in-tree phase of each simulation. The leaf positions are communicated to the worker CPUs, which execute the rollout phase of simulation, and to the worker GPUs, which compute network features and evaluate the policy and value networks.


In some cases, the system does not update the edge data until a predetermined number of searches have been performed since a most-recent update of the edge data, e.g., to improve the stability of the search process in cases where multiple different searches are being performed in parallel.


Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.


The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be or further include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.


A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.


The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).


Computers suitable for the execution of a computer program can be based on, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.


Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.


To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.


Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.


The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.


While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.


Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.


Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.

Claims
  • 1. A neural network training system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score, the operations comprising: training a supervised learning policy neural network, wherein the supervised learning policy neural network is configured to receive the observation and to process the observation in accordance with parameters of the supervised learning policy neural network to generate a respective action probability for each action in a set of possible actions that can be performed by the agent to interact with the environment, and wherein training the supervised learning policy neural network comprises training the supervised learning policy neural network on labeled training data using supervised learning to determine trained values of the parameters of the supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data generated from interactions of the agent with a simulated version of the environment using reinforcement learning to determine trained values of the parameters of the reinforcement learning policy neural network from the initial values; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state by training the value neural network on third training data generated from interactions of the agent with the simulated version of the environment using supervised learning to determine trained values of the parameters of the value neural network from initial values of the parameters of the value neural network.
  • 2. The system of claim 1, wherein the environment is a real-world environment, and wherein the actions in the set of actions are possible control inputs to control the interaction of the agent with the environment.
  • 3. The system of claim 2, wherein the environment is a real-world environment, wherein the agent is a control system for an autonomous or semi-autonomous vehicle navigating through the real-world environment, wherein the actions in the set of actions are possible control inputs to control the autonomous or semi-autonomous vehicle, and wherein the simulated version of the environment is a motion simulation environment that simulates navigation through the real-world environment.
  • 4. The system of claim 2, wherein the predicted long-term reward received by the agent reflects a predicted degree to which objectives for the navigation of the vehicle through the real-world environment will be satisfied as a result of the environment being in the state.
  • 5. The system of claim 1, wherein the environment is a patient diagnosis environment, wherein the observation characterizes a patient state of a patient, wherein the agent is a computer system for suggesting treatment for the patient, wherein the actions in the set of actions are possible medical treatments for the patient, and wherein the simulated version of the environment is a patient health simulation that simulates effects of medical treatments on patients.
  • 6. The system of claim 1, wherein the environment is a protein folding environment, wherein the observation characterizes a current state of a protein chain, wherein the agent is a computer system for determining how to fold the protein chain, wherein the actions are possible folding actions for folding the protein chain, and wherein the simulated version of the environment is a simulated protein folding environment that simulates effects of folding actions on protein chains.
  • 7. The system of claim 1, wherein the environment is a virtualized environment in which a user competes against a computerized agent to accomplish a goal, wherein the agent is the computerized agent, wherein the actions in the set of actions are possible actions that can be performed by the computerized agent in the virtualized environment, and wherein the simulated version of the environment is a simulation in which the user is replaced by another computerized agent.
  • 8. The system of claim 1, wherein training the reinforcement learning policy neural network on the second training data comprises selecting actions to be performed by the agent while interacting with the simulated version of the environment using the reinforcement learning policy neural network.
  • 9. The system of claim 1, wherein training the reinforcement learning policy network on the second training data comprises: training the reinforcement learning policy network to generate action probabilities that represent, for each action, a predicted likelihood that the long-term reward will be maximized if the action is performed by the agent in response to the observation instead of any other action in the set of possible actions.
  • 10. The system of claim 1, wherein the labeled training data comprises a plurality of training observations and, for each training observation, an action label, wherein each training observation characterizes a respective training state, and wherein the action label for each training observation identifies an action that was performed in response to the training observation.
  • 11. The system of claim 10, wherein training the supervised learning policy neural network on the labeled training data comprises: training the supervised learning policy neural network to generate action probabilities that match the action labels for the training observations.
  • 12. The system of claim 1, the operations further comprising: training a fast rollout policy neural network on the labeled training data, wherein the fast rollout policy neural network is configured to receive a rollout input characterizing the state and to process the rollout input to generate a respective rollout action probability for each action in the set of possible actions, and wherein a processing time necessary for the fast rollout policy neural network to generate the rollout action probabilities is less than a processing time necessary for the supervised learning policy neural network to generate the action probabilities.
  • 13. The system of claim 12, wherein the rollout input characterizing the state contains less data than the observation characterizing the state.
  • 14. The system of claim 12, the operations further comprising: using the fast rollout policy neural network to evaluate states of the environment as part of searching a state tree of states of the environment, wherein the state tree is used to select actions to be performed by the agent in response to received observations.
  • 15. The system of claim 1, the operations further comprising: using the trained value neural network to evaluate states of the environment as part of searching a state tree of states of the environment, wherein the state tree is used to select actions to be performed by the agent in response to received observations.
  • 16. A method of training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score, the method comprising: training a supervised learning policy neural network, wherein the supervised learning policy neural network is configured to receive the observation and to process the observation in accordance with parameters of the supervised learning policy neural network to generate a respective action probability for each action in a set of possible actions that can be performed by the agent to interact with the environment, and wherein training the supervised learning policy neural network comprises training the supervised learning policy neural network on labeled training data using supervised learning to determine trained values of the parameters of the supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data generated from interactions of the agent with a simulated version of the environment using reinforcement learning to determine trained values of the parameters of the reinforcement learning policy neural network from the initial values; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state by training the value neural network on third training data generated from interactions of the agent with the simulated version of the environment using supervised learning to determine trained values of the parameters of the value neural network from initial values of the parameters of the value neural network.
  • 17. The method of claim 16, wherein training the reinforcement learning policy neural network on the second training data comprises selecting actions to be performed by the agent while interacting with the simulated version of the environment using the reinforcement learning policy neural network.
  • 18. The method of claim 16, wherein training the reinforcement learning policy network on the second training data comprises: training the reinforcement learning policy network to generate action probabilities that represent, for each action, a predicted likelihood that the long-term reward will be maximized if the action is performed by the agent in response to the observation instead of any other action in the set of possible actions.
  • 19. The method of claim 16, wherein the labeled training data comprises a plurality of training observations and, for each training observation, an action label, wherein each training observation characterizes a respective training state, and wherein the action label for each training observation identifies an action that was performed in response to the training observation.
  • 20. One or more non-transitory computer storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations for training a value neural network that is configured to receive an observation characterizing a state of an environment being interacted with by an agent and to process the observation in accordance with parameters of the value neural network to generate a value score, the operations comprising: training a supervised learning policy neural network, wherein the supervised learning policy neural network is configured to receive the observation and to process the observation in accordance with parameters of the supervised learning policy neural network to generate a respective action probability for each action in a set of possible actions that can be performed by the agent to interact with the environment, and wherein training the supervised learning policy neural network comprises training the supervised learning policy neural network on labeled training data using supervised learning to determine trained values of the parameters of the supervised learning policy neural network; initializing initial values of parameters of a reinforcement learning policy neural network having a same architecture as the supervised learning policy network to the trained values of the parameters of the supervised learning policy neural network; training the reinforcement learning policy neural network on second training data generated from interactions of the agent with a simulated version of the environment using reinforcement learning to determine trained values of the parameters of the reinforcement learning policy neural network from the initial values; and training the value neural network to generate a value score for the state of the environment that represents a predicted long-term reward resulting from the environment being in the state by training the value neural network on third training data generated from interactions of the agent with the simulated version of the environment using supervised learning to determine trained values of the parameters of the value neural network from initial values of the parameters of the value neural network.
Priority Claims (1)
Number Date Country Kind
202016004627.7 Jul 2016 DE national