arXiv:2106.14080 [cs.LG]

Model-Advantage Optimization for Model-Based Reinforcement Learning

Nirbhay Modhe, Harish Kamath, Dhruv Batra, Ashwin Kalyan

Published 2021-06-26 (Version 1)

Model-based Reinforcement Learning (MBRL) algorithms have traditionally been designed with the goal of learning accurate dynamics of the environment. This introduces a mismatch between the objective of model learning and the overall learning problem of finding an optimal policy. Value-aware model learning, an alternative to maximum-likelihood model learning, proposes to inform the model-learning objective through the value function of the learnt policy. While this paradigm is theoretically sound, it does not scale beyond toy settings. In this work, we propose a novel value-aware objective that is an upper bound on the absolute performance difference of a policy across two models. Further, we propose a general-purpose algorithm that modifies the standard MBRL pipeline, enabling learning with value-aware objectives. Our proposed objective, in conjunction with this algorithm, is the first successful instantiation of value-aware MBRL on challenging continuous control environments, outperforming previous value-aware objectives and achieving competitive performance with respect to MLE-based MBRL approaches.
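As background for the contrast the abstract draws between maximum-likelihood and value-aware model learning, a standard value-aware formulation from prior VAML work (illustrative only; not the specific model-advantage bound proposed in this paper) fits the learned dynamics $\hat{P}$ by penalizing only those prediction errors that change the expected value of the next state under the current value estimate $V$:

\[
\mathcal{L}_{\mathrm{VAML}}(\hat{P}) \;=\; \mathbb{E}_{(s,a)\sim\mu}\!\left[\Big(\,\mathbb{E}_{s'\sim P(\cdot\mid s,a)}\big[V(s')\big] \;-\; \mathbb{E}_{s'\sim\hat{P}(\cdot\mid s,a)}\big[V(s')\big]\Big)^{2}\right],
\]

whereas MLE minimizes a divergence between the true dynamics $P$ and $\hat{P}$ regardless of how the mismatch affects $V$. Here $\mu$ denotes the state-action distribution used for model fitting; the notation is assumed for illustration rather than taken from the paper.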

Related articles:
arXiv:1804.07193 [cs.LG] (Published 2018-04-19)
Lipschitz Continuity in Model-based Reinforcement Learning
arXiv:2009.08586 [cs.LG] (Published 2020-09-18)
A Contraction Approach to Model-based Reinforcement Learning
arXiv:1807.03858 [cs.LG] (Published 2018-07-10)
Algorithmic Framework for Model-based Reinforcement Learning with Theoretical Guarantees