arXiv:1911.04384 [cs.LG]

Provably Convergent Off-Policy Actor-Critic with Function Approximation

Shangtong Zhang, Bo Liu, Hengshuai Yao, Shimon Whiteson

Published 2019-11-11, Version 1

We present the first provably convergent off-policy actor-critic algorithm (COF-PAC) with function approximation in a two-timescale form. Key to COF-PAC is the introduction of a new critic, the emphasis critic, which is trained via Gradient Emphasis Learning (GEM), a novel combination of the key ideas of Gradient Temporal Difference Learning and Emphatic Temporal Difference Learning. With the help of the emphasis critic and the canonical value function critic, we show convergence for COF-PAC, where the critics are linear and the actor can be nonlinear.
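To make the setup concrete, below is a minimal, self-contained Python sketch of a two-timescale off-policy actor-critic loop on a small random MDP; all names, constants, and update rules are illustrative assumptions, not the paper's exact algorithm. In particular, the value critic here is plain off-policy linear TD(0) and the emphatic weighting is tracked as an online followon trace, whereas COF-PAC obtains its convergence guarantee by using gradient-TD-style critic updates and by replacing the online trace with a learned emphasis critic trained via GEM (not reproduced here).

```python
import numpy as np

# Illustrative sketch only: a two-timescale off-policy actor-critic on a small
# random MDP. The critic below is plain off-policy linear TD(0) and the
# emphasis is an online followon trace; COF-PAC instead uses gradient-TD-style
# critics and a learned (GEM) emphasis critic for its convergence proof.

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))     # transition kernel P[s, a]
R = rng.normal(size=(nS, nA))                     # expected rewards
features = np.eye(nS)                             # one-hot (tabular) features

def softmax_probs(theta, s):
    z = theta[s] - theta[s].max()
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros((nS, nA))     # actor parameters (target policy, softmax)
v = np.zeros(nS)               # linear value-critic weights
b = np.full(nA, 1.0 / nA)      # fixed uniform behavior policy
alpha_v, alpha_pi = 0.05, 0.005
F, s = 1.0, 0                  # followon trace (interest = 1), start state

for t in range(200_000):
    a = rng.choice(nA, p=b)                              # behavior policy acts
    s_next = rng.choice(nS, p=P[s, a])
    r = R[s, a]
    pi = softmax_probs(theta, s)
    rho = pi[a] / b[a]                                   # importance ratio

    x, x_next = features[s], features[s_next]
    delta = r + gamma * (x_next @ v) - x @ v             # TD error

    # Faster timescale: critic update (placeholder for a gradient-TD critic).
    v += alpha_v * rho * delta * x

    # Slower timescale: actor update weighted by the emphatic trace; COF-PAC
    # replaces F with the output of its learned emphasis critic.
    grad_log = -pi.copy(); grad_log[a] += 1.0            # d log pi(a|s) / d theta[s]
    theta[s] += alpha_pi * F * rho * delta * grad_log

    F = 1.0 + gamma * rho * F                            # followon recursion
    s = s_next
```

The smaller actor step size makes the critic the faster of the two timescales, mirroring the two-timescale structure the abstract refers to.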

Comments: Optimization Foundations of Reinforcement Learning Workshop at NeurIPS 2019
Categories: cs.LG, stat.ML
Related articles:
arXiv:2003.06350 [cs.LG] (Published 2020-03-13)
Interference and Generalization in Temporal Difference Learning
arXiv:1806.02450 [cs.LG] (Published 2018-06-06)
A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation
arXiv:2402.12687 [cs.LG] (Published 2024-02-20, updated 2024-08-18)
Learning on manifolds without manifold learning