arXiv:1806.02450 [cs.LG]

A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation

Jalaj Bhandari, Daniel Russo, Raghav Singal

Published 2018-06-06 (Version 1)

Temporal difference learning (TD) is a simple iterative algorithm used to estimate the value function corresponding to a given policy in a Markov decision process. Although TD is one of the most widely used algorithms in reinforcement learning, its theoretical analysis has proved challenging and few guarantees on its statistical efficiency are available. In this work, we provide a simple and explicit finite time analysis of temporal difference learning with linear function approximation. Except for a few key insights, our analysis mirrors standard techniques for analyzing stochastic gradient descent algorithms, and therefore inherits the simplicity and elegance of that literature. A final section of the paper shows that all of our main results extend to Q-learning applied to high dimensional optimal stopping problems.
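For readers unfamiliar with the algorithm the abstract describes, the sketch below shows TD(0) with linear function approximation: the value estimate is phi(s) @ theta, and theta is updated along the semi-gradient of the TD error. The toy chain MDP, random feature map, and constant step size are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of TD(0) with linear function approximation.
# The chain MDP, features, and hyperparameters below are assumptions
# chosen only to make the update rule concrete and runnable.
import numpy as np

rng = np.random.default_rng(0)

n_states, d = 5, 3          # chain MDP with 5 states, 3-dimensional features
gamma, alpha = 0.9, 0.05    # discount factor and constant step size
features = rng.standard_normal((n_states, d))  # fixed feature map phi(s)

def step(s):
    """One transition of a simple random-walk policy on the chain."""
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

theta = np.zeros(d)         # linear value estimate: V(s) ~= phi(s) @ theta
s = 0
for _ in range(10_000):
    s_next, r = step(s)
    # TD(0) update: move theta along the semi-gradient of the TD error
    td_error = r + gamma * features[s_next] @ theta - features[s] @ theta
    theta += alpha * td_error * features[s]
    s = s_next

print("estimated values:", features @ theta)
```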

Related articles:
arXiv:2210.05918 [cs.LG] (Published 2022-10-12)
Finite time analysis of temporal difference learning with linear function approximation: Tail averaging and regularisation
arXiv:2406.07892 [cs.LG] (Published 2024-06-12)
Finite Time Analysis of Temporal Difference Learning for Mean-Variance in a Discounted MDP
arXiv:1902.02234 [cs.LG] (Published 2019-02-06)
Finite-Sample Analysis for SARSA and Q-Learning with Linear Function Approximation