Friday, March 13, 2015

AI's State of Affairs

Human-level control through deep reinforcement learning
The theory of reinforcement learning provides a normative account [1], deeply rooted in psychological [2] and neuroscientific [3] perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems [4, 5], the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms [3]. While reinforcement learning agents have achieved some successes in a variety of domains [6, 7, 8], their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks [9, 10, 11] to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games [12]. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
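For concreteness, the objective at the heart of the deep Q-network (as given in the paper) is a temporal-difference loss on the network parameters \theta_i, with experience tuples (s, a, r, s') drawn uniformly from a replay memory D and a separate, periodically updated target network with parameters \theta_i^-:

    L_i(\theta_i) = \mathbb{E}_{(s,a,r,s') \sim U(D)} \left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta_i^-) - Q(s, a; \theta_i) \right)^2 \right]

Here \gamma is the discount factor; minimizing this loss pushes the predicted action value Q(s, a; \theta_i) toward the bootstrapped target r + \gamma \max_{a'} Q(s', a'; \theta_i^-).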

***


This demo follows the Deep Q-Learning algorithm described in Playing Atari with Deep Reinforcement Learning, a DeepMind paper presented at the NIPS 2013 Deep Learning Workshop. The paper is a nice demonstration of a fairly standard (model-free) reinforcement learning algorithm, Q-Learning, learning to play Atari games.
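As a minimal sketch of the underlying idea, here is plain tabular Q-Learning in Python. The paper and the demo replace the table with a neural network, and all constants below are illustrative rather than taken from either:

import random
from collections import defaultdict

# Tabular Q-Learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# Minimal illustration only; the demo approximates Q with a network, not a table.

ALPHA = 0.1    # learning rate (illustrative)
GAMMA = 0.9    # discount factor (illustrative)
EPSILON = 0.1  # exploration rate (illustrative)
N_ACTIONS = 5

Q = defaultdict(lambda: [0.0] * N_ACTIONS)  # state -> list of action values

def choose_action(state):
    """Epsilon-greedy: explore with probability EPSILON, else act greedily."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    values = Q[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    """One temporal-difference backup toward the bootstrapped target."""
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])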

In this demo, instead of Atari games, we'll start out with something simpler: a 2D agent that has 9 eyes pointing at different angles ahead, and every eye senses 3 values along its direction (up to a certain maximum visibility distance): distance to a wall, distance to a green thing, or distance to a red thing. The agent navigates by choosing one of 5 actions that turn it by different angles. The red things are apples and the agent gets a positive reward for eating them. The green things are poison and the agent gets a negative reward for eating them. Training takes a few tens of minutes with the current parameter settings.
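The demo itself runs in JavaScript; as a rough Python analogue of its input/output shape, the sensor readings form a 27-dimensional state (9 eyes x 3 values) that a small network maps to 5 Q-values, one per turning action. The hidden-layer size and weights below are illustrative assumptions, not the demo's actual settings:

import numpy as np

# Rough analogue of the demo's setup: 9 eyes x 3 readings = 27 inputs,
# mapped by a small network to Q-values for the 5 turning actions.
# (The actual demo is in JavaScript; sizes here are illustrative.)

N_EYES, READINGS_PER_EYE = 9, 3
N_INPUTS = N_EYES * READINGS_PER_EYE  # 27
N_HIDDEN = 50                         # assumed hidden-layer size
N_ACTIONS = 5

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (N_HIDDEN, N_INPUTS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, N_HIDDEN))
b2 = np.zeros(N_ACTIONS)

def q_values(sensors):
    """Forward pass: sensor vector -> one Q-value per action."""
    h = np.maximum(0.0, W1 @ sensors + b1)  # ReLU hidden layer
    return W2 @ h + b2

# Example: pick the greedy action for one sensor reading.
state = rng.random(N_INPUTS)  # entries in [0, 1): normalized distances
action = int(np.argmax(q_values(state)))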

Over time, the agent learns to avoid states that lead to low rewards and to pick actions that lead to better states instead.
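One ingredient that makes this learning stable in practice, in both the paper's agent and this style of demo, is experience replay: transitions are stored and training happens on random mini-batches rather than only on the most recent step. A minimal sketch, with capacity and batch size chosen arbitrarily:

import random
from collections import deque

# Minimal experience-replay buffer: store (state, action, reward, next_state)
# transitions and sample random mini-batches to decorrelate training data.
# (Capacity and batch size are illustrative, not the demo's settings.)

class ReplayMemory:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))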

***


Code for Human-Level Control through Deep Reinforcement Learning
Please click here to download the code associated with DeepMind's Nature letter, "Human-level control through deep reinforcement learning".
