arXiv:1802.05054 [cs.LG]

GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms

Cédric Colas, Olivier Sigaud, Pierre-Yves Oudeyer

Published 2018-02-14 (Version 1)

In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focused on exploration, such as novelty search, quality-diversity, or goal exploration processes, are less sample efficient during exploitation. In this paper, we present GEP-PG, an approach that takes the best of both worlds by sequentially combining two variants of a goal exploration process with two variants of DDPG. We study the learning performance of these components and their combination on a low-dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. Among other things, we show that DDPG fails on the former and that GEP-PG obtains performance above the state of the art on the latter.
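
The sequential structure described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' code: the toy environment, the linear policy, and the outcome-based archive are hypothetical stand-ins for the actual GEP variants and DDPG implementation studied in the paper. The point is the decoupling: an exploration phase driven by goals rather than rewards fills a replay buffer, which the exploitation phase then hands to a DDPG learner.

import numpy as np

class ToyEnv:
    """Tiny 1-D continuous-action environment, used purely for illustration."""
    def reset(self):
        self.state = np.zeros(1)
        return self.state

    def step(self, action):
        self.state = self.state + action            # trivial dynamics
        reward = -abs(float(self.state[0]) - 3.0)   # reward peaks at state 3
        return self.state, reward

def gep_rollout(env, params, horizon=20):
    """Roll out a simple linear policy; return its transitions and a scalar outcome."""
    transitions, s = [], env.reset()
    for _ in range(horizon):
        a = np.tanh(params[0] * s + params[1])
        s2, r = env.step(a)
        transitions.append((s.copy(), a.copy(), r, s2.copy()))
        s = s2
    return transitions, float(s[0])                 # final state = behavioural outcome

def gep_exploration(env, n_episodes=50):
    """Goal exploration phase: sample outcomes as goals, perturb the policy
    whose past outcome was closest, and store every transition in a buffer."""
    replay_buffer, archive = [], []                  # archive holds (outcome, params) pairs
    for _ in range(n_episodes):
        goal = np.random.uniform(-5.0, 5.0)
        if archive:
            _, params = min(archive, key=lambda e: abs(e[0] - goal))
            params = params + 0.1 * np.random.randn(2)   # mutate the nearest policy
        else:
            params = np.random.randn(2)                  # bootstrap with a random policy
        transitions, outcome = gep_rollout(env, params)
        replay_buffer.extend(transitions)
        archive.append((outcome, params))
    return replay_buffer

if __name__ == "__main__":
    buffer = gep_exploration(ToyEnv())
    # Exploitation phase (not shown): a DDPG agent would be initialised with
    # `buffer` as its replay memory before continuing to learn from its own data.
    print(f"GEP phase collected {len(buffer)} transitions to seed DDPG.")

Because the exploration phase does not depend on the reward signal, the resulting buffer can contain diverse transitions even when rewards are sparse or deceptive, which is the motivation for seeding DDPG with it rather than letting DDPG explore on its own.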

Related articles:
arXiv:2107.08966 [cs.LG] (Published 2021-07-19)
Decoupling Exploration and Exploitation in Reinforcement Learning
arXiv:1205.2874 [cs.LG] (Published 2012-05-13, updated 2012-06-30)
Decoupling Exploration and Exploitation in Multi-Armed Bandits
arXiv:2312.15965 [cs.LG] (Published 2023-12-26)
Optimistic and Pessimistic Actor in RL:Decoupling Exploration and Utilization