arXiv:1906.07355 [math.OC]

Escaping from saddle points on Riemannian manifolds

Yue Sun, Nicolas Flammarion, Maryam Fazel

Published 2019-06-18 (Version 1)

We consider minimizing a nonconvex, smooth function $f$ on a Riemannian manifold $\mathcal{M}$. We show that a perturbed version of the Riemannian gradient descent algorithm converges to a second-order stationary point (and hence is able to escape saddle points on the manifold). The rate of convergence depends as $1/\epsilon^2$ on the accuracy $\epsilon$, matching a rate previously known only for unconstrained smooth minimization. The rate depends polylogarithmically on the manifold dimension $d$, and hence is almost dimension-free; it also depends polynomially on the parameters describing the curvature of the manifold and the smoothness of the function. While the unconstrained (Euclidean) setting is well studied, our result is the first to establish such a rate for nonconvex, manifold-constrained problems.
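To make the idea concrete, here is a minimal sketch of perturbed Riemannian gradient descent specialized to the unit sphere: take Riemannian gradient steps, and when the Riemannian gradient is small (i.e., near a first-order stationary point, possibly a saddle), inject noise in the tangent space so the iterate can slide off. The projection retraction, the step size, the perturbation radius, and the function/variable names below are illustrative assumptions for this sketch, not the paper's exact algorithm or constants.

import numpy as np

def perturbed_rgd_sphere(grad_f, x0, step=0.01, eps=1e-3,
                         perturb_radius=1e-2, max_iter=10000, seed=0):
    """Sketch of perturbed Riemannian gradient descent on {x : ||x|| = 1}.

    grad_f : callable returning the Euclidean gradient of f at x.
    The Riemannian gradient is the projection of grad_f(x) onto the
    tangent space T_x = {v : <v, x> = 0}; the retraction used here is
    metric projection back onto the sphere (a common choice, assumed
    for illustration).
    """
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        g = grad_f(x)
        rg = g - np.dot(g, x) * x          # project onto tangent space at x
        if np.linalg.norm(rg) < eps:
            # Near a first-order stationary point: add tangent-space noise
            # so the iterate can escape a saddle point.
            xi = rng.standard_normal(x.shape)
            xi -= np.dot(xi, x) * x        # keep the noise tangent to the sphere
            rg = rg + perturb_radius * xi
        x = x - step * rg                  # gradient step in the tangent direction
        x /= np.linalg.norm(x)             # retract back onto the manifold
    return x

# Example (hypothetical): f(x) = x^T A x on the sphere. The eigenvectors of A
# are the stationary points; only the eigenvector for the smallest eigenvalue
# is a minimizer, the others are saddles or maxima.
A = np.diag([3.0, 1.0, -2.0])
x_star = perturbed_rgd_sphere(lambda x: 2 * A @ x, x0=np.array([1.0, 0.0, 0.0]))

Starting from the eigenvector $e_1$, the Riemannian gradient is exactly zero, so unperturbed gradient descent would never move; the tangent-space perturbation lets the iterate escape and converge toward the minimizer $\pm e_3$.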

Related articles:
arXiv:1705.10412 [math.OC] (Published 2017-05-29)
Gradient Descent Can Take Exponential Time to Escape Saddle Points
arXiv:1111.5280 [math.OC] (Published 2011-11-22, updated 2013-11-19)
Stochastic gradient descent on Riemannian manifolds
arXiv:2008.11091 [math.OC] (Published 2020-08-25)
Unconstrained optimisation on Riemannian manifolds