arXiv Analytics

arXiv:2310.02581 [stat.ML]

Online Estimation and Inference for Robust Policy Evaluation in Reinforcement Learning

Weidong Liu, Jiyuan Tu, Yichen Zhang, Xi Chen

Published 2023-10-04 (Version 1)

Recently, reinforcement learning has gained prominence in modern statistics, with policy evaluation as a key component. Unlike most of the machine learning literature on this topic, our work emphasizes statistical inference for the parameter estimates computed by reinforcement learning algorithms. Most existing analyses assume that the random rewards follow standard distributions, which limits their applicability; we instead bring the perspective of robust statistics into reinforcement learning, addressing outlier contamination and heavy-tailed rewards within a unified framework. In this paper, we develop an online robust policy evaluation procedure and establish the limiting distribution of our estimator via its Bahadur representation. Furthermore, we develop a fully online procedure that efficiently conducts statistical inference based on this asymptotic distribution. This paper bridges the gap between robust statistics and statistical inference in reinforcement learning, offering a more versatile and reliable approach to policy evaluation. Finally, we validate the efficacy of our algorithm through numerical experiments on real-world reinforcement learning tasks.
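To make the idea of robustifying policy evaluation concrete, the following is a minimal illustrative sketch, not the authors' actual procedure: a linear TD(0) update whose temporal-difference error is clipped at a Huber-style threshold, so transitions with outlier or heavy-tailed rewards contribute a bounded update. All function names, the synthetic data, and the threshold `tau` are assumptions for illustration only.

```python
import numpy as np

def huber_clip(x, tau):
    """Clip the TD error at threshold tau (the derivative of the Huber loss)."""
    return np.clip(x, -tau, tau)

def robust_td0(features, rewards, next_features, gamma=0.9, tau=1.0, lr=0.05):
    """Linear TD(0) with Huberized TD errors.

    Each observed transition contributes a bounded-influence stochastic
    update to the weight vector w, in contrast to plain TD(0), where a
    single heavy-tailed reward can produce an arbitrarily large step.
    """
    d = features.shape[1]
    w = np.zeros(d)
    for t in range(len(rewards)):
        phi, phi_next = features[t], next_features[t]
        delta = rewards[t] + gamma * phi_next @ w - phi @ w  # TD error
        w += lr * huber_clip(delta, tau) * phi               # bounded-influence step
    return w

# Synthetic heavy-tailed example: Student-t rewards with 2 degrees of freedom
rng = np.random.default_rng(0)
n, d = 2000, 3
features = rng.normal(size=(n, d))
next_features = rng.normal(size=(n, d))
rewards = rng.standard_t(df=2, size=n)

w = robust_td0(features, rewards, next_features)
print(w.shape)
```

Because the clipped TD error is bounded by `tau`, each stochastic step has bounded influence, which is the mechanism that lets a robust estimator tolerate both contamination and infinite-variance rewards; the paper's contribution is to additionally characterize the estimator's limiting distribution and run inference fully online.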

Related articles: Most relevant | Search more
arXiv:1903.04209 [stat.ML] (Published 2019-03-11)
Shapley regressions: A framework for statistical inference on machine learning models
arXiv:2505.18493 [stat.ML] (Published 2025-05-24, updated 2025-06-19)
Statistical Inference under Performativity
arXiv:2303.14281 [stat.ML] (Published 2023-03-24)
Sequential Knockoffs for Variable Selection in Reinforcement Learning