{ "id": "1902.02234", "version": "v1", "published": "2019-02-06T15:33:45.000Z", "updated": "2019-02-06T15:33:45.000Z", "title": "Finite-Sample Analysis for SARSA and Q-Learning with Linear Function Approximation", "authors": [ "Shaofeng Zou", "Tengyu Xu", "Yingbin Liang" ], "categories": [ "cs.LG", "stat.ML" ], "abstract": "Though the convergence of major reinforcement learning algorithms has been extensively studied, the finite-sample analysis to further characterize the convergence rate in terms of the sample complexity for problems with continuous state space is still very limited. Such a type of analysis is especially challenging for algorithms with dynamically changing learning policies and under non-i.i.d.\\ sampled data. In this paper, we present the first finite-sample analysis for the SARSA algorithm and its minimax variant (for zero-sum Markov games), with a single sample path and linear function approximation. To establish our results, we develop a novel technique to bound the gradient bias for dynamically changing learning policies, which can be of independent interest. We further provide finite-sample bounds for Q-learning and its minimax variant. Comparison of our result with the existing finite-sample bound indicates that linear function approximation achieves order-level lower sample complexity than the nearest neighbor approach.", "revisions": [ { "version": "v1", "updated": "2019-02-06T15:33:45.000Z" } ], "analyses": { "keywords": [ "linear function approximation", "finite-sample analysis", "function approximation achieves order-level", "order-level lower sample complexity", "approximation achieves order-level lower sample" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }