arXiv:2307.11782 [math.OC]

Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case

Meixuan He, Yuqing Liang, Jinlan Liu, Dongpo Xu

Published 2023-07-20 (Version 1)

Adam is a commonly used stochastic optimization algorithm in machine learning. However, its convergence is still not fully understood, especially in the non-convex setting. This paper explores hyperparameter settings that guarantee the convergence of vanilla Adam and tackles the challenge of non-ergodic convergence, the notion most relevant to practical applications. The primary contributions are summarized as follows. Firstly, we introduce precise definitions of ergodic and non-ergodic convergence, which cover nearly all forms of convergence for stochastic optimization algorithms, and we emphasize the superiority of non-ergodic convergence over ergodic convergence. Secondly, we establish a weaker sufficient condition for the ergodic convergence guarantee of Adam, allowing a more relaxed choice of hyperparameters. On this basis, we obtain an almost sure ergodic convergence rate for Adam that is arbitrarily close to $o(1/\sqrt{K})$. More importantly, we prove, for the first time, that the last iterate of Adam converges to a stationary point for non-convex objectives. Finally, we obtain a non-ergodic convergence rate of $O(1/K)$ for function values under the Polyak-Łojasiewicz (PL) condition. These findings build a solid theoretical foundation for applying Adam to non-convex stochastic optimization problems.
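
For orientation, the LaTeX sketch below writes out the standard vanilla Adam recursion (bias correction omitted for brevity), one common way to contrast ergodic with non-ergodic (last-iterate) guarantees, and the PL condition mentioned above. The specific constants, the squared-norm criterion, and the exact form of the definitions are illustrative assumptions based on standard conventions; the paper's precise definitions may differ.

% Illustrative sketch of the quantities discussed in the abstract.
% The exact definitions and constants are assumptions, not taken from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

\section*{Vanilla Adam recursion (standard form, no bias correction)}
With stochastic gradient $g_k = \nabla f(x_k;\xi_k)$, step size $\eta_k$, momentum
parameters $\beta_1,\beta_2 \in [0,1)$ and stabilizer $\epsilon > 0$:
\begin{align*}
  m_k &= \beta_1 m_{k-1} + (1-\beta_1)\, g_k, \\
  v_k &= \beta_2 v_{k-1} + (1-\beta_2)\, g_k^2, \\
  x_{k+1} &= x_k - \eta_k \,\frac{m_k}{\sqrt{v_k} + \epsilon}.
\end{align*}

\section*{Ergodic vs.\ non-ergodic convergence (illustrative definitions)}
An \emph{ergodic} guarantee controls an average or minimum over the trajectory, e.g.
\[
  \min_{1 \le k \le K} \mathbb{E}\,\|\nabla f(x_k)\|^2 \;\le\; \frac{C}{\sqrt{K}},
\]
whereas a \emph{non-ergodic} (last-iterate) guarantee controls the final iterate itself,
\[
  \lim_{K \to \infty} \|\nabla f(x_K)\| = 0 \quad \text{almost surely}.
\]

\section*{Polyak--\L{}ojasiewicz (PL) condition}
$f$ satisfies the PL condition with constant $\mu > 0$ if
\[
  \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr)
  \quad \text{for all } x,
\]
under which the abstract reports an $O(1/K)$ rate for the function-value gap $f(x_K) - f^*$.

\end{document}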

Related articles:
arXiv:1607.00345 [math.OC] (Published 2016-07-01)
Convergence Rate of Frank-Wolfe for Non-Convex Objectives
arXiv:2307.14364 [math.OC] (Published 2023-07-25)
Federated Distributionally Robust Optimization with Non-Convex Objectives: Algorithm and Analysis
arXiv:2305.17298 [math.OC] (Published 2023-05-26)
Dual Bounds for Redistricting Problems with Non-Convex Objectives