arXiv Analytics

arXiv:1507.04296 [cs.LG]

Massively Parallel Methods for Deep Reinforcement Learning

Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, David Silver

Published 2015-07-15 (Version 1)

We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement the Deep Q-Network algorithm (DQN). Our distributed algorithm was applied to 49 Atari 2600 games from the Arcade Learning Environment, using identical hyperparameters across all games. It surpassed non-distributed DQN in 41 of the 49 games and also reduced the wall-clock time required to achieve these results by an order of magnitude on most games.
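
The four components above map naturally onto an actor/learner loop around a shared parameter store and a shared experience store. The sketch below is a minimal, single-process Python illustration of that structure, with threads standing in for distributed workers, a toy linear Q-function standing in for the deep network, and a random-reward stand-in environment; all names here are illustrative assumptions, it omits details such as the paper's target network, and it is not the authors' implementation.

```python
# Single-process sketch of an actor/learner architecture in the style the
# abstract describes. Toy stand-ins throughout: threads instead of machines,
# a linear Q-function instead of a deep network, random rewards instead of ALE.
import collections
import random
import threading

NUM_ACTIONS = 4
FEATURES = 8

class ParameterServer:
    """Central copy of the Q-function parameters (stand-in for the distributed network)."""
    def __init__(self):
        self.lock = threading.Lock()
        self.weights = [[0.0] * FEATURES for _ in range(NUM_ACTIONS)]  # one weight row per action

    def pull(self):
        with self.lock:
            return [row[:] for row in self.weights]

    def push(self, grads, lr=0.01):
        with self.lock:
            for a in range(NUM_ACTIONS):
                for i in range(FEATURES):
                    self.weights[a][i] -= lr * grads[a][i]

class ReplayStore:
    """Shared experience store: actors append transitions, learners sample them."""
    def __init__(self, capacity=10000):
        self.buffer = collections.deque(maxlen=capacity)
        self.lock = threading.Lock()

    def add(self, transition):
        with self.lock:
            self.buffer.append(transition)

    def sample(self, n):
        with self.lock:
            return random.sample(list(self.buffer), min(n, len(self.buffer)))

def q_values(weights, state):
    return [sum(w * x for w, x in zip(weights[a], state)) for a in range(NUM_ACTIONS)]

def actor(ps, store, steps=1000, epsilon=0.1):
    """Generates behaviour: pull latest parameters, act epsilon-greedily, store experience."""
    state = [random.random() for _ in range(FEATURES)]
    for _ in range(steps):
        weights = ps.pull()
        if random.random() < epsilon:
            action = random.randrange(NUM_ACTIONS)
        else:
            qs = q_values(weights, state)
            action = qs.index(max(qs))
        reward = random.random()                      # toy environment, not ALE
        next_state = [random.random() for _ in range(FEATURES)]
        store.add((state, action, reward, next_state))
        state = next_state

def learner(ps, store, steps=1000, gamma=0.99, batch=32):
    """Trains from stored experience: sample a batch, push DQN-style gradients."""
    for _ in range(steps):
        transitions = store.sample(batch)
        if not transitions:
            continue
        weights = ps.pull()                           # no separate target network in this sketch
        grads = [[0.0] * FEATURES for _ in range(NUM_ACTIONS)]
        for s, a, r, s2 in transitions:
            target = r + gamma * max(q_values(weights, s2))
            td_error = q_values(weights, s)[a] - target
            for i in range(FEATURES):                 # gradient of 0.5 * td_error^2
                grads[a][i] += td_error * s[i] / len(transitions)
        ps.push(grads)

ps, store = ParameterServer(), ReplayStore()
workers = [threading.Thread(target=actor, args=(ps, store)) for _ in range(2)]
workers += [threading.Thread(target=learner, args=(ps, store)) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

The structural point the abstract emphasises is visible here: actors and learners communicate only through the experience store and the central parameters, so each pool of workers can be scaled out independently.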

Comments: Presented at the Deep Learning Workshop, International Conference on Machine Learning, Lille, France, 2015
Categories: cs.LG, cs.AI, cs.DC, cs.NE
Related articles:
arXiv:2407.06507 [cs.LG] (Published 2024-07-09)
Economic span selection of bridge based on deep reinforcement learning
arXiv:1803.02811 [cs.LG] (Published 2018-03-07)
Accelerated Methods for Deep Reinforcement Learning
arXiv:1812.11794 [cs.LG] (Published 2018-12-31)
Deep Reinforcement Learning for Multi-Agent Systems: A Review of Challenges, Solutions and Applications