arXiv:1909.06065 [math.OC]
Riemannian Proximal Gradient Methods
Published 2019-09-13, Version 1
In the Euclidean setting, the proximal gradient method and its accelerated variants form a class of efficient algorithms for optimization problems with a decomposable objective. However, due to the lack of linear structure on a generic manifold, such methods for similar problems constrained to a manifold remain little studied. In this paper we develop and analyze generalizations of the proximal gradient methods, with and without acceleration, for problems on Riemannian manifolds. Global convergence of the Riemannian proximal gradient method is established under mild assumptions. Under additional assumptions, $O(1/k)$ and $O(1/k^2)$ convergence rates are derived for the method and its accelerated variant, respectively. To the best of our knowledge, this is the first work to establish convergence rates for Riemannian proximal gradient methods on nonsmooth Riemannian optimization problems. Empirical comparisons show that the proposed Riemannian proximal gradient methods are competitive with existing methods.
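For context, a minimal sketch of the Euclidean proximal gradient method (ISTA) that the paper generalizes to manifolds, applied to the decomposable objective $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$. The problem data, step size, and iteration count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, n_iters=500):
    """ISTA: gradient step on the smooth part, prox step on the nonsmooth part."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)             # gradient of 0.5*||Ax - b||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative sparse recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1)
```

The Riemannian variants in the paper replace the straight-line gradient step with a step in the tangent space followed by a retraction back to the manifold, which is where the loss of linearity the abstract mentions comes in.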