arXiv:1011.0686 [cs.LG]

A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning

Stéphane Ross, Geoffrey J. Gordon, J. Andrew Bagnell

Published 2010-11-02; updated 2011-03-16 (Version 3)

Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm that trains a stationary deterministic policy and can be seen as a no-regret algorithm in an online learning setting. We show that any such no-regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.
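The iterative scheme the abstract describes (widely known as DAgger) can be sketched as follows: roll out the current policy, have the expert label the states it actually visits, aggregate those labeled pairs into a growing dataset, and retrain a stationary deterministic policy on the aggregate. This is a minimal illustrative sketch, not the paper's implementation; the helper names (`train`, `env_step`, `reset`) and the toy environment are assumptions for the demo.

```python
import random

def dagger(expert_policy, train, env_step, reset, n_iters=5, horizon=20):
    """Sketch of the iterative data-aggregation loop described in the abstract.
    At each iteration the learner's own policy chooses the actions (so the
    training distribution matches the distribution the policy induces), while
    the expert supplies the labels for every visited state."""
    dataset = []            # aggregated (state, expert_action) pairs
    policy = expert_policy  # the first rollout may simply follow the expert
    for _ in range(n_iters):
        state = reset()
        for _ in range(horizon):
            dataset.append((state, expert_policy(state)))  # expert labels the visited state
            state = env_step(state, policy(state))         # but the learner's policy drives
        policy = train(dataset)  # supervised learning on the aggregated dataset
    return policy

# Toy demo (hypothetical): states are integers; the expert always steps toward 0.
expert = lambda s: -1 if s > 0 else 1
step = lambda s, a: s + a
reset = lambda: random.randint(-5, 5)

def train(data):
    # Stand-in supervised learner: majority vote per sign of the state.
    pos = [a for s, a in data if s > 0]
    neg = [a for s, a in data if s <= 0]
    return lambda s: (max(set(pos), key=pos.count) if s > 0 and pos
                      else (max(set(neg), key=neg.count) if neg else 1))

learned = dagger(expert, train, step, reset)
```

The key design point, reflected in the inner loop, is that states are sampled from the distribution induced by the learner's own policy rather than the expert's, which is what lets the no-regret analysis bound performance under the distribution the learned policy actually encounters.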

Comments: Appearing in the 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011)
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:2307.09423 [cs.LG] (Published 2023-07-18)
Scaling Laws for Imitation Learning in NetHack
arXiv:1206.5290 [cs.LG] (Published 2012-06-20)
Imitation Learning with a Value-Based Prior
arXiv:2309.02473 [cs.LG] (Published 2023-09-05)
A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges