arXiv:1907.11788 [cs.LG]

On Hard Exploration for Reinforcement Learning: a Case Study in Pommerman

Chao Gao, Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor

Published 2019-07-26 (Version 1)

How best to explore in domains with sparse, delayed, and deceptive rewards is an important open problem for reinforcement learning (RL). This paper considers one such domain, the recently proposed multi-agent benchmark of Pommerman. This domain is very challenging for RL: past work has shown that model-free RL algorithms fail to achieve significant learning without artificially reducing the environment's complexity. In this paper, we illuminate the reasons behind this failure by providing a thorough analysis of the hardness of random exploration in Pommerman. While model-free random exploration is typically futile, we develop a model-based automatic reasoning module that enables safer exploration by pruning actions that would surely lead the agent to death. We empirically demonstrate that this module can significantly improve learning.
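
To make the pruning idea concrete, the sketch below is a minimal, hypothetical Python rendering of the safe-exploration scheme the abstract describes: a one-step forward model flags actions that surely lead to death, and random exploration samples only from the remainder. Every name here (ToyForwardModel, fatal_actions, the string action labels) is an illustrative assumption, not the authors' implementation or Pommerman's actual API.

    import random

    # Pommerman's discrete action set (the real environment uses integer
    # ids; string names are used here for readability).
    ACTIONS = ["stop", "up", "down", "left", "right", "bomb"]

    class ToyForwardModel:
        """Illustrative stand-in for a model-based reasoning module.

        A real implementation would simulate bomb timers and flame
        spread to find actions that *surely* end in the agent's death;
        this stub just hard-codes one such action so the pruning logic
        below is runnable.
        """
        def fatal_actions(self, state):
            # Pretend that moving "up" walks into a flame in this state.
            return {"up"}

    def prune_fatal_actions(state, model):
        """Return the actions not flagged as certain death."""
        safe = [a for a in ACTIONS if a not in model.fatal_actions(state)]
        # If every action is provably fatal, pruning cannot help;
        # fall back to the full action set.
        return safe or list(ACTIONS)

    def safe_random_action(state, model):
        """Random exploration restricted to non-pruned actions."""
        return random.choice(prune_fatal_actions(state, model))

    if __name__ == "__main__":
        print(safe_random_action(state=None, model=ToyForwardModel()))

Falling back to the full action set when every action is flagged keeps exploration well defined even in hopeless states, where no amount of pruning can save the agent.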

Comments: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) 2019
Categories: cs.LG, stat.ML
Related articles:
arXiv:1910.07856 [cs.LG] (Published 2019-10-17)
Effect of Superpixel Aggregation on Explanations in LIME -- A Case Study with Biological Data
arXiv:2007.00072 [cs.LG] (Published 2020-06-30)
Data Movement Is All You Need: A Case Study of Transformer Networks
arXiv:2005.12729 [cs.LG] (Published 2020-05-25)
Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO