arXiv Analytics

arXiv:1707.01495 [cs.LG]

Hindsight Experience Replay

Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba

Published 2017-07-05 (Version 1)

Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards that are sparse and binary, and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task has been completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient that makes training possible in these challenging environments. We show that our policies trained in a physics simulation can be deployed on a physical robot and successfully complete the task.
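
To illustrate the core idea of hindsight relabeling described in the abstract, the following is a minimal sketch (not the authors' implementation): after an episode ends, each transition is stored once with its original goal and again with goals that were actually achieved later in the episode, so even a failed episode yields transitions with non-trivial binary reward. The function and parameter names (relabel_episode, binary_reward, k) and the "sample a future achieved goal" strategy are illustrative assumptions for a goal-conditioned setting.

import random

def binary_reward(achieved_goal, goal):
    # Sparse, binary reward: 0 if the achieved goal matches the desired goal, else -1.
    return 0.0 if achieved_goal == goal else -1.0

def relabel_episode(episode, replay_buffer, k=4):
    # episode: list of (state, action, goal, achieved_goal, next_state) tuples.
    # Store each transition with its original goal, plus k hindsight copies whose
    # goal is replaced by a goal actually achieved at a later step of the episode.
    for t, (state, action, goal, achieved, next_state) in enumerate(episode):
        # Standard replay with the originally desired goal.
        replay_buffer.append((state, action, goal,
                              binary_reward(achieved, goal), next_state))
        # Hindsight replay: pretend a future achieved goal was the goal all along.
        future_steps = episode[t:]
        for _ in range(k):
            _, _, _, future_achieved, _ = random.choice(future_steps)
            replay_buffer.append((state, action, future_achieved,
                                  binary_reward(achieved, future_achieved),
                                  next_state))

# Toy usage with integer "positions" as states and goals:
episode = [((0,), 1, (3,), (1,), (1,)),
           ((1,), 1, (3,), (2,), (2,)),
           ((2,), 1, (3,), (3,), (3,))]
buffer = []
relabel_episode(episode, buffer)
print(len(buffer))  # 3 original transitions plus 3 * k hindsight copies

Because the relabeled transitions are simply extra entries in the replay buffer, they can feed any off-policy RL algorithm (e.g. DQN or DDPG) without modifying its update rule, which is what allows the method to be combined with an arbitrary off-policy learner.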

Related articles:
arXiv:2008.09377 [cs.LG] (Published 2020-08-21)
Curriculum Learning with Hindsight Experience Replay for Sequential Object Manipulation Tasks
arXiv:2207.01115 [cs.LG] (Published 2022-07-03)
USHER: Unbiased Sampling for Hindsight Experience Replay
arXiv:2108.07887 [cs.LG] (Published 2021-08-17)
Diversity-based Trajectory and Goal Selection with Hindsight Experience Replay