arXiv:2311.05638 [stat.ML]

Towards Instance-Optimality in Online PAC Reinforcement Learning

Aymen Al-Marjani, Andrea Tirinzoni, Emilie Kaufmann

Published 2023-10-31, Version 1

Several recent works have proposed instance-dependent upper bounds on the number of episodes needed to identify, with probability $1-\delta$, an $\varepsilon$-optimal policy in finite-horizon tabular Markov Decision Processes (MDPs). These upper bounds feature various complexity measures for the MDP, defined in terms of different notions of sub-optimality gaps. However, no lower bound has so far been established to assess the optimality of any of these complexity measures, except in the special case of MDPs with deterministic transitions. In this paper, we propose the first instance-dependent lower bound on the sample complexity required for PAC identification of a near-optimal policy in any tabular episodic MDP. Additionally, we demonstrate that the sample complexity of the PEDEL algorithm of \cite{Wagenmaker22linearMDP} closely approaches this lower bound. Given that PEDEL is computationally intractable, we pose as an open question whether our lower bound can be attained by a computationally efficient algorithm.
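As background for the abstract's complexity measures: the sub-optimality gaps it refers to are the standard quantities $\Delta_h(s,a) = V^*_h(s) - Q^*_h(s,a)$ in a finite-horizon tabular MDP. The sketch below (not from the paper; the toy MDP instance and all names are hypothetical) computes these gaps by backward induction:

```python
import numpy as np

# Minimal sketch: sub-optimality gaps Delta_h(s,a) = V*_h(s) - Q*_h(s,a)
# in a finite-horizon tabular MDP, via backward induction.
# The MDP below is a made-up toy instance for illustration only.

H, S, A = 2, 2, 2  # horizon, number of states, number of actions

# P[h, s, a] is a distribution over next states; r[h, s, a] is the mean reward.
P = np.zeros((H, S, A, S))
P[:, :, 0, 0] = 1.0          # action 0 always transitions to state 0
P[:, :, 1, 1] = 1.0          # action 1 always transitions to state 1
r = np.zeros((H, S, A))
r[:, :, 1] = 1.0             # action 1 yields reward 1 everywhere
r[:, 0, 0] = 0.5             # action 0 yields 0.5 from state 0

V = np.zeros((H + 1, S))     # terminal value V[H] = 0
Q = np.zeros((H, S, A))
for h in range(H - 1, -1, -1):           # backward induction over stages
    Q[h] = r[h] + P[h] @ V[h + 1]        # Q*_h = r_h + E[V*_{h+1}]
    V[h] = Q[h].max(axis=1)              # V*_h(s) = max_a Q*_h(s, a)

gaps = V[:H, :, None] - Q                # Delta_h(s, a) >= 0 for all (h, s, a)
print(gaps[0])                           # → [[0.5 0. ] [1.  0. ]]
```

Instance-dependent bounds of the kind discussed in the abstract scale with functions of these gaps (small gaps make an instance harder), rather than with worst-case quantities alone.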

Related articles:
arXiv:1011.5395 [stat.ML] (Published 2010-11-24)
The Sample Complexity of Dictionary Learning
arXiv:2409.01243 [stat.ML] (Published 2024-09-02)
Sample Complexity of the Sign-Perturbed Sums Method
arXiv:2106.07898 [stat.ML] (Published 2021-06-15)
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral