arXiv:1901.11478 [cs.LG]

An Optimization Framework for Task Sequencing in Curriculum Learning

Francesco Foglino, Matteo Leonetti

Published 2019-01-31 (Version 1)

Curriculum learning is gaining popularity in (deep) reinforcement learning. It can alleviate the burden of data collection and yield better exploration policies through transfer and generalization from less complex tasks. Current methods for automatic task sequencing in curriculum learning for reinforcement learning provide initial heuristic solutions, with little to no guarantee on their quality. We introduce an optimization framework for task sequencing composed of the problem definition, several candidate performance metrics for optimization, and three benchmark algorithms. We experimentally show that the two most commonly used baselines (learning with no curriculum, and with a random curriculum) perform worse than a simple greedy algorithm. Furthermore, we show theoretically and demonstrate experimentally that the three proposed algorithms provide increasing solution quality at moderately increasing computational complexity, and argue that they constitute better baselines for curriculum learning in reinforcement learning.
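The greedy baseline mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption about the general shape of such an algorithm, not the authors' implementation: at each step it appends the remaining task whose addition maximizes a user-supplied performance metric over the resulting curriculum (the `evaluate` callback and the string task identifiers are hypothetical).

```python
from typing import Callable, List, Sequence

def greedy_curriculum(
    tasks: Sequence[str],
    evaluate: Callable[[List[str]], float],
    max_length: int,
) -> List[str]:
    """Greedily build a task sequence: at each step, append the remaining
    task whose addition maximizes the given performance metric.

    `evaluate` is a hypothetical callback scoring a candidate curriculum,
    e.g. by training an agent along the sequence and measuring final
    performance on the target task.
    """
    curriculum: List[str] = []
    remaining = list(tasks)
    for _ in range(min(max_length, len(tasks))):
        best_task, best_score = None, float("-inf")
        for t in remaining:
            score = evaluate(curriculum + [t])
            if score > best_score:
                best_task, best_score = t, score
        # Stop early if no candidate improves on the current curriculum.
        if curriculum and best_score <= evaluate(curriculum):
            break
        curriculum.append(best_task)
        remaining.remove(best_task)
    return curriculum
```

With a metric that rewards matching a known easy-to-hard ordering, the sketch recovers that ordering; in practice `evaluate` would involve actual reinforcement-learning runs and dominate the cost.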
