arXiv:2210.10999 [cs.LG]

Task Phasing: Automated Curriculum Learning from Demonstrations

Vaibhav Bajaj, Guni Sharon, Peter Stone

Published 2022-10-20Version 1

Applying reinforcement learning (RL) to sparse reward domains is notoriously challenging due to insufficient guiding signals. Common techniques for addressing such domains include (1) learning from demonstrations and (2) curriculum learning. While these two approaches have been studied in detail, they have rarely been considered together. This paper aims to do so by introducing a principled task phasing approach that uses demonstrations to automatically generate a curriculum sequence. Using inverse RL from (suboptimal) demonstrations, we define a simple initial task. Our task phasing approach then provides a framework to gradually increase the complexity of the task all the way to the target task, while re-tuning the RL agent in each phasing iteration. Two approaches for phasing are considered: (1) gradually increasing the proportion of time steps in which the RL agent is in control, and (2) phasing out a guiding informative reward function. We present conditions that guarantee the convergence of these approaches to an optimal policy. Experimental results on three sparse reward domains demonstrate that our task phasing approaches outperform state-of-the-art approaches in asymptotic performance.
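The abstract names the two phasing mechanisms but does not spell them out. The following is a minimal sketch of what they could look like, not the paper's implementation: the names (`phased_step`, `phased_reward`, `beta`, `lam`, the `env`/policy interfaces) and the linear annealing schedule are all assumptions introduced here for illustration.

```python
import random

def phased_step(env, rl_policy, demo_policy, state, beta):
    """Phasing approach (1): mixed control.

    With probability beta the RL agent acts; otherwise the
    demonstration-derived policy acts. Annealing beta from 0 to 1
    over the curriculum gradually hands control to the RL agent.
    (Hypothetical interface, not the paper's API.)
    """
    actor = rl_policy if random.random() < beta else demo_policy
    return env.step(actor(state))

def phased_reward(r_env, r_guide, lam):
    """Phasing approach (2): reward phase-out.

    Blends the sparse environment reward with an informative
    guiding reward (e.g., one recovered via inverse RL). Annealing
    lam from 1 to 0 phases the guidance out, so only the true
    sparse reward remains at the target task.
    """
    return r_env + lam * r_guide

# Assumed linear schedule over K phasing iterations.
K = 10
betas = [k / (K - 1) for k in range(K)]   # 0.0 -> 1.0
lams = [1 - b for b in betas]             # 1.0 -> 0.0
```

In both cases the curriculum is the sequence of intermediate tasks induced by the schedule, with the final phase (beta = 1, lam = 0) coinciding with the original sparse reward target task.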

Comments: 7 pages main paper, 7 figures, 4 pages appendix. Submitted to the AAAI 2023 conference
Categories: cs.LG, cs.AI
Related articles:
arXiv:2303.13489 [cs.LG] (Published 2023-03-23)
Boosting Reinforcement Learning and Planning with Demonstrations: A Survey
arXiv:2009.14108 [cs.LG] (Published 2020-09-29)
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
arXiv:2106.04696 [cs.LG] (Published 2021-06-08)
Curriculum Design for Teaching via Demonstrations: Theory and Applications