arXiv:1803.02811 [cs.LG]

Accelerated Methods for Deep Reinforcement Learning

Adam Stooke, Pieter Abbeel

Published 2018-03-07 (Version 1)

Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire NVIDIA DGX-1 to learn successful strategies in Atari games in single-digit minutes, using both synchronous and asynchronous algorithms.
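The core idea of the abstract, stepping many simulator instances in parallel while a single network evaluates all of their observations in one GPU forward pass, can be illustrated with a minimal sketch. This is not the authors' framework; the environment class and function names below are hypothetical stand-ins, and the synchronous sampling loop is only one of the configurations the paper describes.

```python
# Minimal sketch (not the authors' code) of synchronous, GPU-batched sampling:
# many simulator instances advance in lockstep on the CPU, while one policy
# network scores all of their observations in a single batched GPU forward pass.
import numpy as np
import torch
import torch.nn as nn


class ToyEnv:
    """Hypothetical stand-in for an Atari emulator instance."""

    def __init__(self, obs_dim=16, n_actions=4):
        self.obs_dim, self.n_actions = obs_dim, n_actions

    def reset(self):
        return np.random.randn(self.obs_dim).astype(np.float32)

    def step(self, action):
        obs = np.random.randn(self.obs_dim).astype(np.float32)
        reward, done = float(action == 0), False
        return obs, reward, done


def sample_batch(envs, policy, horizon, device):
    """Collect `horizon` steps from all envs, batching inference on the GPU."""
    obs = np.stack([env.reset() for env in envs])  # (num_envs, obs_dim)
    trajectories = []
    for _ in range(horizon):
        with torch.no_grad():
            # One batched forward pass for every simulator's observation.
            logits = policy(torch.as_tensor(obs, device=device))
            actions = torch.distributions.Categorical(logits=logits).sample().cpu().numpy()
        # Simulator steps run on the CPU, one per environment instance.
        results = [env.step(a) for env, a in zip(envs, actions)]
        next_obs = np.stack([r[0] for r in results])
        rewards = np.array([r[1] for r in results], dtype=np.float32)
        trajectories.append((obs, actions, rewards))
        obs = next_obs
    return trajectories


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    envs = [ToyEnv() for _ in range(64)]  # many parallel simulator instances
    policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4)).to(device)
    batch = sample_batch(envs, policy, horizon=5, device=device)
    print(len(batch), batch[0][0].shape)  # 5 steps, observations of shape (64, 16)
```

The batch collected this way can feed either a policy-gradient update or a Q-learning update; the larger-than-standard batch sizes mentioned in the abstract come from simply increasing the number of parallel environments.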

Related articles:
arXiv:1507.04296 [cs.LG] (Published 2015-07-15)
Massively Parallel Methods for Deep Reinforcement Learning
Arun Nair et al.
arXiv:1810.12558 [cs.LG] (Published 2018-10-30)
Relative Importance Sampling For Off-Policy Actor-Critic in Deep Reinforcement Learning
arXiv:1604.08153 [cs.LG] (Published 2016-04-27)
Classifying Options for Deep Reinforcement Learning