arXiv Analytics

arXiv:0904.4229 [math.OC]

Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima

Vladislav B. Tadić

Published 2009-04-27 (Version 2)

The convergence rate of stochastic gradient search is analyzed in this paper. Using arguments based on differential geometry and Łojasiewicz inequalities, tight bounds on the convergence rate of general stochastic gradient algorithms are derived. In contrast to existing results, the results presented in this paper allow the objective function to have multiple, non-isolated minima, impose no restriction on the values of the Hessian of the objective function, and do not require the algorithm estimates to have a single limit point. Applying these new results, the convergence rate of recursive prediction error identification algorithms is studied. The convergence rate of supervised and temporal-difference learning algorithms is also analyzed using the results derived in the paper.
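
A minimal illustration of the kind of recursion the abstract refers to (an assumed toy sketch added here, not code or an example from the paper): the stochastic gradient iteration theta_{n+1} = theta_n - gamma_n * (grad f(theta_n) + xi_n) with decreasing step sizes gamma_n and zero-mean noise xi_n, applied to an objective whose minimizers form the whole unit circle and are therefore multiple and non-isolated. The objective, starting point, noise level, and step-size schedule below are arbitrary choices made for the sketch.

# Toy sketch only (assumed example, not from the paper): stochastic gradient search
# on f(x, y) = (x^2 + y^2 - 1)^2, whose minimizers form the entire unit circle,
# i.e. a set of multiple, non-isolated minima.
import numpy as np

rng = np.random.default_rng(0)

def grad_f(theta):
    # Gradient of f(x, y) = (x^2 + y^2 - 1)^2 is 4 * (x^2 + y^2 - 1) * (x, y).
    return 4.0 * (theta @ theta - 1.0) * theta

theta = np.array([1.5, 0.5])                  # arbitrary starting point (assumed)
for n in range(1, 50001):
    gamma = 0.05 / n                          # decreasing steps: sum diverges, sum of squares converges
    noise = rng.normal(scale=0.1, size=2)     # zero-mean gradient noise
    theta = theta - gamma * (grad_f(theta) + noise)

print("final objective value:", (theta @ theta - 1.0) ** 2)                    # decays toward 0
print("distance of iterate from the unit circle:", abs(np.linalg.norm(theta) - 1.0))

In this toy run the objective value decays to zero while the iterate settles near some point of the circle of minima; which point is reached depends on the noise, which is exactly the situation where single-limit-point assumptions break down.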

Related articles: Most relevant | Search more
arXiv:0907.1020 [math.OC] (Published 2009-07-06, updated 2013-09-17)
Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema
arXiv:1601.00194 [math.OC] (Published 2016-01-02)
Convergence Rate of Distributed ADMM over Networks
arXiv:1204.0301 [math.OC] (Published 2012-04-02)
Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels