arXiv:1111.5280 [math.OC]
Stochastic gradient descent on Riemannian manifolds
Published 2011-11-22, updated 2013-11-19 (Version 4)
Stochastic gradient descent is a simple approach to finding the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidean case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications and is illustrated here by four examples. In particular, a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.
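To make the update concrete, here is a minimal sketch (not taken from the paper) of Riemannian stochastic gradient descent on the unit sphere: the noisy Euclidean gradient is projected onto the tangent space at the current iterate, and the step is mapped back to the manifold by a retraction (here, renormalization). The test problem, step-size schedule, and function names are illustrative assumptions.

```python
import numpy as np

def tangent_project(x, g):
    # Project a Euclidean gradient g onto the tangent space at x (x on the sphere).
    return g - np.dot(x, g) * x

def sphere_retract(x, v):
    # Retraction on the unit sphere: step in the tangent direction, then renormalize.
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_sgd(x0, noisy_grad, steps=1000, gamma0=0.5):
    # Riemannian SGD with Robbins-Monro-type step sizes gamma_k = gamma0 / (k + 1).
    x = x0 / np.linalg.norm(x0)
    for k in range(steps):
        g = noisy_grad(x)                    # noisy Euclidean gradient sample
        rg = tangent_project(x, g)           # Riemannian gradient on the sphere
        x = sphere_retract(x, -(gamma0 / (k + 1)) * rg)
    return x

# Hypothetical test problem: estimate the leading eigenvector of A by
# minimizing -x^T A x on the sphere, with noise added to each gradient sample.
rng = np.random.default_rng(0)
A = np.diag([3.0, 1.0, 0.5])
noisy_grad = lambda x: -2.0 * (A + 0.1 * rng.standard_normal((3, 3))) @ x
x_star = riemannian_sgd(rng.standard_normal(3), noisy_grad)
print(x_star)  # should be close to +/- the first basis vector
```

The projection-plus-retraction structure is the generic pattern; on other manifolds the two helper functions would be replaced by that manifold's tangent projection and retraction (or exponential map).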
Comments: A slightly shorter version has been published in the IEEE Transactions on Automatic Control
Journal: IEEE Transactions on Automatic Control, Vol. 58 (9), pp. 2217-2229, Sept. 2013
Keywords: Riemannian manifold, stochastic gradient descent, cost function, convergence
Tags: journal article
Related articles:
arXiv:1906.07355 [math.OC] (Published 2019-06-18)
Escaping from saddle points on Riemannian manifolds
arXiv:2008.11091 [math.OC] (Published 2020-08-25)
Unconstrained optimisation on Riemannian manifolds
arXiv:2404.10029 [math.OC] (Published 2024-04-15)
Federated Learning on Riemannian Manifolds with Differential Privacy