arXiv:2205.11831 [math.OC]

A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning

Eloïse Berthier, Ziad Kobeissi, Francis Bach

Published 2022-05-24 (Version 1)

Temporal-difference learning is a popular algorithm for policy evaluation. In this paper, we study the convergence of the regularized non-parametric TD(0) algorithm, in both the independent and Markovian observation settings. In particular, when TD is performed in a universal reproducing kernel Hilbert space (RKHS), we prove convergence of the averaged iterates to the optimal value function, even when it does not belong to the RKHS. We provide explicit convergence rates that depend on a source condition relating the regularity of the optimal value function to the RKHS. We illustrate this convergence numerically on a simple continuous-state Markov reward process.
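
As an illustration of the general recipe the abstract describes, here is a minimal Python sketch of regularized non-parametric TD(0) with iterate averaging. It is not the paper's exact algorithm: the Gaussian kernel, constant step size, regularization strength, and the toy one-dimensional Markov reward process are all assumptions made for the example, and the names kernel and kernel_td0 are hypothetical.

import numpy as np

# Minimal sketch (assumptions, not the paper's exact method): Gaussian kernel,
# constant step size eta, ridge regularization lam, and a toy 1-D Markov
# reward process for the demo.

def kernel(x, y, bandwidth=0.3):
    # Gaussian RBF kernel on the real line (vectorized in x).
    return np.exp(-(x - y) ** 2 / (2 * bandwidth ** 2))

def kernel_td0(transitions, gamma=0.9, eta=0.05, lam=1e-3):
    # transitions: list of (state, reward, next_state) samples.
    # Returns averaged coefficients alpha_bar and the support states, so that
    # V(s) is approximated by sum_i alpha_bar[i] * kernel(support[i], s).
    support = np.array([s for s, _, _ in transitions])
    n = len(transitions)
    alpha = np.zeros(n)       # current iterate (kernel expansion coefficients)
    alpha_bar = np.zeros(n)   # running average of the iterates

    for t, (s, r, s_next) in enumerate(transitions):
        if t > 0:
            v_s = alpha[:t] @ kernel(support[:t], s)
            v_next = alpha[:t] @ kernel(support[:t], s_next)
        else:
            v_s = v_next = 0.0
        td_error = r + gamma * v_next - v_s
        alpha[:t] *= 1.0 - eta * lam                 # ridge regularization shrinks the iterate
        alpha[t] = eta * td_error                    # new kernel atom at the current state
        alpha_bar += (alpha - alpha_bar) / (t + 1)   # average of the iterates

    return alpha_bar, support

# Demo on a toy continuous-state Markov reward process:
# s' = 0.9 s + noise, r(s) = cos(s).
rng = np.random.default_rng(0)
s, transitions = 0.0, []
for _ in range(500):
    s_next = 0.9 * s + 0.1 * rng.standard_normal()
    transitions.append((s, np.cos(s), s_next))
    s = s_next

alpha_bar, support = kernel_td0(transitions)
print("estimated V(0) =", alpha_bar @ kernel(support, 0.0))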

Related articles:
arXiv:1310.7063 [math.OC] (Published 2013-10-26, updated 2015-07-01)
On the Convergence of Decentralized Gradient Descent
arXiv:0803.2211 [math.OC] (Published 2008-03-14, updated 2010-05-09)
On Conditions for Convergence to Consensus
arXiv:1801.08691 [math.OC] (Published 2018-01-26)
On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence