arXiv Analytics

arXiv:2106.08909 [cs.LG]

Offline RL Without Off-Policy Evaluation

David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna

Published 2021-06-16 (Version 1)

Most prior approaches to offline reinforcement learning (RL) have taken an iterative actor-critic approach involving off-policy evaluation. In this paper we show that simply doing one step of constrained/regularized policy improvement using an on-policy Q estimate of the behavior policy performs surprisingly well. This one-step algorithm beats the previously reported results of iterative algorithms on a large portion of the D4RL benchmark. The simple one-step baseline achieves this strong performance without many of the tricks used by previously proposed iterative algorithms and is more robust to hyperparameters. We argue that the relatively poor performance of iterative approaches is a result of the high variance inherent in doing off-policy evaluation and magnified by the repeated optimization of policies against those high-variance estimates. In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.
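As a rough illustration of the recipe described in the abstract, below is a minimal tabular sketch in Python/NumPy, assuming a small discrete MDP and a logged dataset of (s, a, r, s', a') transitions from an unknown behavior policy. The function names, Laplace smoothing, and temperature tau are illustrative assumptions, not the paper's implementation (which uses neural networks on the continuous-control D4RL tasks); the point is only to show the three ingredients: behavior cloning, an on-policy (SARSA-style) Q estimate of the behavior policy, and a single regularized improvement step with no iterative off-policy evaluation.

```python
# Illustrative tabular sketch of the one-step recipe (not the paper's code).
import numpy as np

def estimate_behavior(dataset, n_states, n_actions, smoothing=1.0):
    """Behavior cloning by empirical counts (Laplace smoothing keeps probabilities nonzero)."""
    counts = np.full((n_states, n_actions), smoothing)
    for s, a, r, s2, a2, done in dataset:
        counts[s, a] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def fit_q_sarsa(dataset, n_states, n_actions, gamma=0.99, lr=0.1, epochs=50):
    """On-policy Q estimate of the behavior policy: the bootstrap target uses the
    logged next action a', so no off-policy evaluation is performed."""
    q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s2, a2, done in dataset:
            target = r + (0.0 if done else gamma * q[s2, a2])
            q[s, a] += lr * (target - q[s, a])
    return q

def one_step_improvement(q, behavior, tau=1.0):
    """One step of regularized policy improvement: reweight the estimated behavior
    policy by exponentiated Q-values (a KL-regularized update), rather than
    repeatedly alternating policy evaluation and improvement."""
    logits = np.log(behavior) + q / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

# Usage on a toy random dataset (purely illustrative):
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
dataset = [(rng.integers(n_states), rng.integers(n_actions), rng.normal(),
            rng.integers(n_states), rng.integers(n_actions), False)
           for _ in range(1000)]
beta_hat = estimate_behavior(dataset, n_states, n_actions)
q_beta = fit_q_sarsa(dataset, n_states, n_actions)
pi = one_step_improvement(q_beta, beta_hat, tau=0.5)
print(pi.round(3))
```

In this sketch the final policy is computed once from the fixed Q estimate of the behavior policy; the iterative actor-critic methods the abstract contrasts with would instead re-evaluate Q under each successive policy, which is where the off-policy evaluation variance enters.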

Related articles:
arXiv:1910.12809 [cs.LG] (Published 2019-10-28)
Minimax Weight and Q-Function Learning for Off-Policy Evaluation
arXiv:2109.06310 [cs.LG] (Published 2021-09-13)
State Relevance for Off-Policy Evaluation
arXiv:1605.04812 [cs.LG] (Published 2016-05-16)
Off-policy evaluation for slate recommendation