arXiv:2404.03372 [math.OC]

Elementary Analysis of Policy Gradient Methods

Jiacai Liu, Wenye Li, Ke Wei

Published 2024-04-04 (Version 1)

Projected policy gradient under the simplex parameterization, as well as policy gradient and natural policy gradient under the softmax parameterization, are fundamental algorithms in reinforcement learning. There has been a flurry of recent activity in studying these algorithms from a theoretical perspective. Despite this, their convergence behavior is still not fully understood, even given access to exact policy evaluations. In this paper, we focus on the discounted MDP setting and conduct a systematic study of the aforementioned policy optimization methods. Several novel results are presented, including 1) global linear convergence of projected policy gradient for any constant step size, 2) sublinear convergence of softmax policy gradient for any constant step size, 3) global linear convergence of softmax natural policy gradient for any constant step size, 4) global linear convergence of entropy-regularized softmax policy gradient for a wider range of constant step sizes than in existing results, 5) a tight local linear convergence rate of entropy-regularized natural policy gradient, and 6) a new and concise local quadratic convergence rate of soft policy iteration without any assumption on the stationary distribution under the optimal policy. New and elementary analysis techniques have been developed to establish these results.
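For readers unfamiliar with the methods named in the abstract, the following minimal Python sketch (not the paper's code) illustrates the softmax policy gradient and softmax natural policy gradient updates on a small tabular discounted MDP with exact policy evaluation. All function names, the step size, and the random MDP are illustrative assumptions made for this sketch.

```python
import numpy as np

def policy_eval(P, r, gamma, pi):
    """Exact policy evaluation on a tabular MDP.
    P: transitions [S, A, S], r: rewards [S, A], pi: policy [S, A].
    Returns Q^pi [S, A] and V^pi [S]."""
    S, A = r.shape
    P_pi = np.einsum('sab,sa->sb', P, pi)      # state-to-state kernel under pi
    r_pi = np.einsum('sa,sa->s', r, pi)        # expected one-step reward under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = r + gamma * np.einsum('sab,b->sa', P, V)
    return Q, V

def softmax(theta):
    z = theta - theta.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def visitation(P, pi, mu, gamma):
    """Discounted state-visitation distribution d_mu^pi (sums to 1)."""
    S = P.shape[0]
    P_pi = np.einsum('sab,sa->sb', P, pi)
    return (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu)

def softmax_pg_step(theta, P, r, mu, gamma, eta):
    """One exact policy-gradient step under the softmax parameterization."""
    pi = softmax(theta)
    Q, V = policy_eval(P, r, gamma, pi)
    adv = Q - V[:, None]
    d = visitation(P, pi, mu, gamma)
    grad = d[:, None] * pi * adv / (1 - gamma)  # gradient of V(mu) w.r.t. theta
    return theta + eta * grad

def softmax_npg_step(theta, P, r, gamma, eta):
    """One natural policy gradient step; in closed form it shifts the logits by
    eta/(1-gamma) times the advantage (a soft policy-iteration style update)."""
    pi = softmax(theta)
    Q, V = policy_eval(P, r, gamma, pi)
    return theta + eta / (1 - gamma) * (Q - V[:, None])

# Tiny random MDP to exercise the updates (illustrative only).
rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))     # P[s, a] is a distribution over next states
r = rng.uniform(size=(S, A))
mu = np.ones(S) / S                            # initial-state distribution
theta = np.zeros((S, A))
for _ in range(200):
    theta = softmax_npg_step(theta, P, r, gamma, eta=0.5)
Q, V = policy_eval(P, r, gamma, softmax(theta))
print("V under the learned policy:", V)
```

The projected policy gradient studied in the paper instead parameterizes the policy directly on the probability simplex and projects each gradient step back onto it; the sketch above only covers the two softmax-parameterized methods.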
