arXiv Analytics


arXiv:2002.04090 [math.OC]

Convergence Guarantees of Policy Optimization Methods for Markovian Jump Linear Systems

Joao Paulo Jansch-Porto, Bin Hu, Geir Dullerud

Published 2020-02-10 (Version 1)

Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning. In this paper, we investigate the convergence of policy optimization for quadratic control of Markovian jump linear systems (MJLS). First, we study the optimization landscape of direct policy optimization for MJLS and, in particular, show that despite the non-convexity of the resultant problem, the unique stationary point is the globally optimal solution. Next, we prove that the Gauss-Newton method and the natural policy gradient method converge to the optimal state-feedback controller for MJLS at a linear rate if initialized at a controller that stabilizes the closed-loop dynamics in the mean-square sense. We propose a novel Lyapunov argument to fix a key stability issue in the convergence proof. Finally, we present a numerical example to support our theory. Our work brings new insights for understanding the performance of policy learning methods on controlling unknown MJLS.
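To make the setup concrete, the following is a minimal Python sketch (not the paper's algorithm or numerical example) of the objective such policy optimization methods minimize: a quadratic cost for an MJLS controlled by a mode-dependent state-feedback policy u_k = -K_{ω(k)} x_k. All matrices, dimensions, the transition matrix, and the initial gains below are assumptions chosen purely for illustration.

```python
# Illustrative sketch: Monte Carlo estimate of the quadratic cost of an MJLS
# under a mode-dependent state-feedback policy u_k = -K[mode_k] x_k.
# All system data below are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Two operating modes, each with its own (A_i, B_i) dynamics.
A = [np.array([[1.0, 0.5], [0.0, 0.9]]),
     np.array([[0.8, 0.2], [0.1, 1.1]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[0.5], [1.0]])]
Q, R = np.eye(2), np.eye(1)

# Markov chain transition matrix governing the mode switches.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def cost(K, horizon=50, rollouts=200):
    """Monte Carlo estimate of the expected finite-horizon quadratic cost."""
    total = 0.0
    for _ in range(rollouts):
        x = rng.standard_normal(2)           # random initial state
        mode = rng.integers(2)               # random initial mode
        for _ in range(horizon):
            u = -K[mode] @ x
            total += x @ Q @ x + u @ R @ u
            x = A[mode] @ x + B[mode] @ u
            mode = rng.choice(2, p=P[mode])  # Markovian mode transition
    return total / rollouts

# One gain matrix per mode; a mean-square stabilizing initial guess is assumed.
K = [np.array([[0.4, 0.8]]), np.array([[0.3, 0.9]])]
print("estimated cost:", cost(K))
```

Gauss-Newton, natural policy gradient, and plain gradient descent all iterate directly on the per-mode gains K_i to reduce this kind of cost; the paper's results concern the convergence of such iterations to the optimal gains.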

Related articles
arXiv:2305.03938 [math.OC] (Published 2023-05-06)
Adam-family Methods for Nonsmooth Optimization with Convergence Guarantees
arXiv:1810.04059 [math.OC] (Published 2018-10-09)
Dynamic Optimization with Convergence Guarantees
arXiv:2210.17465 [math.OC] (Published 2022-10-31)
Convergence Guarantees of a Distributed Network Equivalence Algorithm for Distribution-OPF