arXiv:2003.02395 [stat.ML]

On the Convergence of Adam and Adagrad

Alexandre Défossez, Léon Bottou, Francis Bach, Nicolas Usunier

Published 2020-03-05 (Version 1)

We provide a simple proof of the convergence of the optimization algorithms Adam and Adagrad under the assumptions of smooth gradients and an almost sure uniform bound on the $\ell_\infty$ norm of the gradients. This work builds on the techniques introduced by Ward et al. (2019) and extends them to the Adam optimizer. We show that, in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound that is explicit in the constants of the problem, the parameters of the optimizer, and the total number of iterations $N$. This bound can be made arbitrarily small. In particular, Adam with a learning rate $\alpha=1/\sqrt{N}$ and a momentum parameter on squared gradients $\beta_2=1 - 1/N$ achieves the same rate of convergence $O(\ln(N)/\sqrt{N})$ as Adagrad. Thus, it is possible to use Adam as a finite horizon version of Adagrad, much like constant step size SGD can be used instead of its asymptotically converging decaying step size version.
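To make the "Adam as a finite-horizon Adagrad" scaling concrete, below is a minimal NumPy sketch that runs standard Adam with the horizon-dependent choices $\alpha = 1/\sqrt{N}$ and $\beta_2 = 1 - 1/N$ alongside plain Adagrad on a toy smooth objective. The function names, the quadratic test problem, and the remaining hyperparameters ($\beta_1$, $\epsilon$, Adagrad's step size) are illustrative assumptions, not taken from the paper; the sketch only mirrors the hyperparameter scaling stated in the abstract.

```python
import numpy as np

def adam(grad_fn, x0, N, beta1=0.9, eps=1e-8):
    """Adam tuned as a finite-horizon Adagrad: alpha = 1/sqrt(N), beta2 = 1 - 1/N."""
    alpha = 1.0 / np.sqrt(N)
    beta2 = 1.0 - 1.0 / N
    x = x0.astype(float).copy()
    m = np.zeros_like(x)   # first-moment (momentum) estimate
    v = np.zeros_like(x)   # second-moment estimate of squared gradients
    for t in range(1, N + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= alpha * m_hat / (np.sqrt(v_hat) + eps)
    return x

def adagrad(grad_fn, x0, N, alpha=0.1, eps=1e-8):
    """Plain Adagrad: per-coordinate step scaled by accumulated squared gradients."""
    x = x0.astype(float).copy()
    s = np.zeros_like(x)
    for _ in range(N):
        g = grad_fn(x)
        s += g * g
        x -= alpha * g / (np.sqrt(s) + eps)
    return x

# Toy smooth objective f(x) = 0.5 * ||A x||^2, with bounded gradients on the iterates.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) / 5.0
grad = lambda x: A.T @ (A @ x)
x0 = rng.standard_normal(5)

N = 10_000
print("Adam    ||grad||:", np.linalg.norm(grad(adam(grad, x0, N))))
print("Adagrad ||grad||:", np.linalg.norm(grad(adagrad(grad, x0, N))))
```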

Comments: 19 pages, 0 figures, preprint version
Categories: stat.ML, cs.LG
Related articles:
arXiv:2006.07904 [stat.ML] (Published 2020-06-14)
An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias
arXiv:2409.18804 [stat.ML] (Published 2024-09-27)
Convergence of Diffusion Models Under the Manifold Hypothesis in High-Dimensions
arXiv:2409.06938 [stat.ML] (Published 2024-09-11)
k-MLE, k-Bregman, k-VARs: Theory, Convergence, Computation