arXiv:1909.11821 [cs.LG]

Model Imitation for Model-Based Reinforcement Learning

Yueh-Hua Wu, Ting-Han Fan, Peter J. Ramadge, Hao Su

Published 2019-09-25 (Version 1)

Model-based reinforcement learning (MBRL) aims to learn a dynamics model in order to reduce the number of interactions with the real environment. However, due to estimation error, rollouts in the learned model, especially long-horizon ones, fail to match those in the real environment. This mismatch severely degrades the sample complexity of MBRL. The phenomenon can be attributed to the fact that previous works learn one-step transition models by supervised learning, which has inherent difficulty ensuring that the distributions of multi-step rollouts match. Based on this observation, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and from the real environment via WGAN. We show theoretically that matching the two distributions minimizes the difference in cumulative rewards between the real transition dynamics and the learned ones. Our experiments also show that the proposed model-imitation method outperforms the state of the art in terms of sample complexity and average return.
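As a concrete illustration of the idea in the abstract, below is a minimal sketch, in PyTorch, of training a learned dynamics model as a WGAN generator so that its multi-step rollouts are hard to distinguish from real ones under a Wasserstein critic. This is not the authors' implementation: the network architectures, the toy dimensions and horizon H, the stand-in policy and data, and the use of weight clipping for the Lipschitz constraint are all illustrative assumptions.

import torch
import torch.nn as nn

S_DIM, A_DIM, H = 3, 1, 5  # toy state/action sizes and rollout horizon (assumptions)

class Dynamics(nn.Module):
    """Learned transition model s' = f(s, a), acting as the WGAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(S_DIM + A_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, S_DIM))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class Critic(nn.Module):
    """Scores (s, a, s') transition triples; approximates a 1-Lipschitz function."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * S_DIM + A_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
    def forward(self, triple):
        return self.net(triple)

def rollout(model, s0, policy, horizon):
    """Unroll the learned model for `horizon` steps; gradients flow through."""
    s, triples = s0, []
    for _ in range(horizon):
        a = policy(s)
        s_next = model(s, a)
        triples.append(torch.cat([s, a, s_next], dim=-1))
        s = s_next
    return torch.cat(triples, dim=0)

model, critic = Dynamics(), Critic()
opt_g = torch.optim.Adam(model.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)
policy = lambda s: torch.tanh(s[..., :A_DIM])           # stand-in policy (assumption)

real_triples = torch.randn(64 * H, 2 * S_DIM + A_DIM)   # stand-in for real env rollouts
s0 = torch.randn(64, S_DIM)                             # initial states

for step in range(200):
    fake = rollout(model, s0, policy, H)
    # Critic ascends the Wasserstein estimate E_real[f] - E_fake[f]
    # (here: minimizes its negative); weight clipping keeps it roughly Lipschitz.
    c_loss = critic(fake.detach()).mean() - critic(real_triples).mean()
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-0.05, 0.05)
    # Generator (the dynamics model) descends the same estimate, pushing its
    # multi-step rollout distribution toward the real one.
    g_loss = -critic(rollout(model, s0, policy, H)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In practice the real triples would be collected from environment rollouts under the current policy, and a gradient penalty is a common alternative to weight clipping for enforcing the Lipschitz constraint.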

Related articles:
arXiv:2006.13408 [cs.LG] (Published 2020-06-24)
Control-Aware Representations for Model-based Reinforcement Learning
arXiv:2009.08586 [cs.LG] (Published 2020-09-18)
A Contraction Approach to Model-based Reinforcement Learning
arXiv:1807.03858 [cs.LG] (Published 2018-07-10)
Algorithmic Framework for Model-based Reinforcement Learning with Theoretical Guarantees