arXiv Analytics

arXiv:0907.1020 [math.OC]

Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema

Vladislav B. Tadic

Published 2009-07-06, updated 2013-09-17 (version 5)

The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Łojasiewicz gradient inequality), convergence of the algorithm iterates to a single limit point is demonstrated, and relatively tight bounds on the convergence rate are derived. In sharp contrast to existing asymptotic results, the results presented here allow the objective function to have multiple and non-isolated minima. They also offer new insight into the asymptotic properties of several classes of recursive algorithms routinely used in engineering, statistics, machine learning and operations research.
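To fix ideas, here is a minimal sketch of the setting, with notation assumed for illustration (the abstract itself states no formulas): the stochastic gradient recursion with decreasing step sizes, and the Łojasiewicz gradient inequality in its classical form for real-analytic objectives.

\[
\theta_{n+1} = \theta_n - \alpha_n \bigl( \nabla f(\theta_n) + \xi_n \bigr),
\qquad \sum_{n} \alpha_n = \infty, \quad \sum_{n} \alpha_n^2 < \infty,
\]

where f is the objective function, \alpha_n are the step sizes, and \xi_n is the gradient-estimation noise. For a real-analytic f and any point \hat{\theta}, the Łojasiewicz gradient inequality guarantees constants C > 0, \mu \in (0, 1/2] and a neighborhood U of \hat{\theta} such that

\[
|f(\theta) - f(\hat{\theta})|^{1 - \mu} \le C \, \|\nabla f(\theta)\| \qquad \text{for all } \theta \in U.
\]

Since this inequality does not require the minima of f to be isolated, it is the kind of condition under which single limit-point convergence and rate bounds can be obtained; the precise rate exponents in the paper will depend on the Łojasiewicz exponent and the step-size decay, for which the paper itself should be consulted.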

Related articles:
arXiv:0904.4229 [math.OC] (Published 2009-04-27)
Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima
arXiv:1601.00194 [math.OC] (Published 2016-01-02)
Convergence Rate of Distributed ADMM over Networks
arXiv:1204.0301 [math.OC] (Published 2012-04-02)
Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels