arXiv:1905.00360 [cs.LG]

Information-Theoretic Considerations in Batch Reinforcement Learning

Jinglin Chen, Nan Jiang

Published 2019-05-01 (Version 1)

Value-function approximation methods that operate in batch mode are of foundational importance to reinforcement learning (RL). Finite-sample guarantees for these methods often crucially rely on two types of assumptions: (1) mild distribution shift, and (2) representation conditions that are stronger than realizability. However, the necessity ("why do we need them?") and the naturalness ("when do they hold?") of such assumptions have largely eluded the literature. In this paper, we revisit these assumptions, provide theoretical results towards answering the above questions, and take steps towards a deeper understanding of value-function approximation.
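
For context, the two assumption types admit standard formulations in the batch RL literature; the following is a sketch in common notation (the function class $\mathcal{F}$, data distribution $\mu$, and concentrability coefficient $C$ follow the usual conventions and are not quoted from the paper):

\[
\text{Realizability:}\quad Q^\star \in \mathcal{F}.
\]
\[
\text{Completeness:}\quad \mathcal{T}f \in \mathcal{F} \ \text{ for all } f \in \mathcal{F}, \quad \text{where } (\mathcal{T}f)(s,a) := r(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\Big[\max_{a'} f(s',a')\Big].
\]
\[
\text{Concentrability:}\quad \exists\, C < \infty \ \text{ such that } \ \Big\|\frac{\nu}{\mu}\Big\|_\infty \le C \ \text{ for every admissible state-action distribution } \nu.
\]

Completeness (closedness of $\mathcal{F}$ under the Bellman optimality operator $\mathcal{T}$) is strictly stronger than realizability, and a small $C$ is one standard way to make "mild distribution shift" precise.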

Related articles:
arXiv:2003.03924 [cs.LG] (Published 2020-03-09)
$Q^\star$ Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison
arXiv:2007.08202 [cs.LG] (Published 2020-07-16)
Provably Good Batch Reinforcement Learning Without Great Exploration
arXiv:2111.04279 [cs.LG] (Published 2021-11-08, updated 2022-11-29)
Batch Reinforcement Learning from Crowds