arXiv:2409.12243 [math.OC]

Convergence of Markov Chains for Constant Step-size Stochastic Gradient Descent with Separable Functions

David Shirokoff, Philip Zaleski

Published 2024-09-18 (Version 1)

Stochastic gradient descent (SGD) is a popular algorithm for minimizing objective functions that arise in machine learning. For constant step-size SGD, the iterates form a Markov chain on a general state space. Focusing on a class of separable (non-convex) objective functions, we establish a "Doeblin-type decomposition," in which the state space decomposes into a uniformly transient set and a disjoint union of absorbing sets. Each absorbing set contains a unique invariant measure, and the set of all invariant measures is the convex hull of these measures. Moreover, the set of invariant measures is shown to be a global attractor for the Markov chain, with a geometric convergence rate. The theory is highlighted with examples showing that: (1) the diffusion approximation can fail to characterize the long-time dynamics of SGD; (2) the global minimum of an objective function may lie outside the support of the invariant measures (i.e., even if initialized at the global minimum, the SGD iterates will eventually leave it); and (3) bifurcations may enable the SGD iterates to transition between two local minima. Key ingredients in the theory are viewing the SGD dynamics as a monotone iterated function system and establishing a "splitting condition" of Dubins and Freedman (1966) and Bhattacharya and Lee (1988).
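To make the Markov-chain viewpoint concrete, the following is a minimal Python sketch (not taken from the paper) that simulates constant step-size SGD on a toy one-dimensional non-convex objective, where each stochastic gradient is one of finitely many maps chosen at random, i.e., an iterated function system. The objective, noise model, step size, and burn-in length are all illustrative assumptions; the histogram of the long-run iterates serves only as an empirical stand-in for an invariant measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_component(x, xi):
    # Gradient of a double-well component f_xi(x) = (x^2 - 1)^2 / 4 + xi * x.
    # Averaging over xi in {-0.3, +0.3} recovers the double-well gradient
    # x^3 - x. This toy objective is an assumption for illustration.
    return x**3 - x + xi

eta = 0.1          # constant step size
n_steps = 200_000  # length of the simulated chain
burn_in = 10_000   # discard the transient before sampling

x = 2.0            # initial iterate, started away from the wells
samples = []
for k in range(n_steps):
    xi = rng.choice([-0.3, 0.3])         # random component: picks one map
    x = x - eta * grad_component(x, xi)  # one SGD step = one Markov transition
    if k >= burn_in:
        samples.append(x)

samples = np.array(samples)
# The empirical distribution of the iterates approximates an invariant
# measure of the chain; which region carries the mass depends on the
# initialization and the noise realization.
print("fraction of mass near x = +1:", np.mean(np.abs(samples - 1.0) < 0.2))
print("fraction of mass near x = -1:", np.mean(np.abs(samples + 1.0) < 0.2))
```

In this sketch the update x_{k+1} = x_k - eta * grad_component(x_k, xi_k) with i.i.d. xi_k is exactly a random composition of two maps, which is the iterated-function-system structure the abstract refers to; the abstract's absorbing sets correspond here to the neighborhoods of the two wells that the chain cannot escape once the noise amplitude and step size confine it.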
