arXiv:1509.09002 [cs.LG]

Convergence of Stochastic Gradient Descent for PCA

Ohad Shamir

Published 2015-09-30 (Version 1)

We consider the problem of principal component analysis (PCA) in a streaming stochastic setting, where our goal is to find a direction of approximate maximal variance, based on a stream of i.i.d. data points in $\mathbb{R}^d$. A simple and computationally cheap algorithm for this is stochastic gradient descent (SGD), which incrementally updates its estimate based on each new data point. However, due to the non-convex nature of the problem, analyzing its performance has been a challenge. In particular, existing guarantees rely on a non-trivial eigengap assumption on the covariance matrix, which is intuitively unnecessary. In this note, we provide (to the best of our knowledge) the first eigengap-free convergence guarantees for SGD in the context of PCA. This also partially resolves an open problem posed in [Hardt and Price, 2014].
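The abstract does not spell out the update rule, but the standard SGD iteration for streaming PCA takes a stochastic gradient step on the Rayleigh quotient and keeps the iterate bounded. The following NumPy sketch illustrates that style of update; the step size, random initialization, and projection onto the unit ball are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sgd_pca(stream, d, eta=0.05, rng=None):
    """Streaming SGD sketch for the top principal component.

    Each point x contributes a stochastic gradient x (x^T w) of
    w^T E[x x^T] w; the iterate is projected back onto the unit ball.
    Step size and initialization here are assumptions for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)            # random unit-norm start
    for x in stream:
        w = w + eta * x * (x @ w)     # stochastic gradient step
        norm = np.linalg.norm(w)
        if norm > 1.0:                # project onto the unit ball
            w /= norm
    return w

# Toy usage: i.i.d. Gaussian data with one dominant direction.
rng = np.random.default_rng(0)
d, n = 20, 5000
top = np.zeros(d); top[0] = 1.0
cov = 0.1 * np.eye(d) + 2.0 * np.outer(top, top)
data = rng.multivariate_normal(np.zeros(d), cov, size=n)
w = sgd_pca(data, d, eta=0.05, rng=rng)
print("alignment with true top direction:", abs(w @ top))
```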

Related articles:
arXiv:1212.1824 [cs.LG] (Published 2012-12-08, updated 2012-12-28)
Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
arXiv:1509.01240 [cs.LG] (Published 2015-09-03)
Train faster, generalize better: Stability of stochastic gradient descent
arXiv:1411.1134 [cs.LG] (Published 2014-11-05)
Global Convergence of Stochastic Gradient Descent for Some Nonconvex Matrix Problems