arXiv Analytics

arXiv:2502.03701 [math.OC]

First-ish Order Methods: Hessian-aware Scalings of Gradient Descent

Oscar Smee, Fred Roosta, Stephen J. Wright

Published 2025-02-06 (Version 1)

Gradient descent is the primary workhorse for optimizing large-scale problems in machine learning. However, its performance is highly sensitive to the choice of learning rate. A key limitation of gradient descent is its lack of natural scaling, which often necessitates expensive line searches or heuristic tuning to determine an appropriate step size. In this paper, we address this limitation by incorporating Hessian information to scale the gradient direction. By accounting for the curvature of the function along the gradient, our adaptive, Hessian-aware scaling method provides a local unit-step-size guarantee, even in nonconvex settings. Near a local minimum that satisfies the second-order sufficient conditions, our approach achieves linear convergence with a unit step size. We show that our method converges globally under a significantly weaker version of the standard Lipschitz gradient smoothness assumption. Even when Hessian information is inexact, the local unit-step-size guarantee and global convergence properties remain valid under mild conditions. Finally, we validate our theoretical results empirically on a range of convex and nonconvex machine learning tasks, showcasing the effectiveness of the approach.
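To make the core idea concrete, the sketch below shows one classical way to scale the gradient step by the curvature of the function along the gradient: the step length alpha_k = ||g_k||^2 / (g_k^T H_k g_k), computed from a single Hessian-vector product. This is only a minimal illustration of "Hessian-aware scaling" in the spirit of the abstract, not the paper's actual algorithm; the hvp oracle, the curvature threshold, and the fallback step for nonpositive curvature are all assumptions made for the example.

import numpy as np

def hessian_scaled_gd(grad, hvp, x0, tol=1e-8, max_iter=500, fallback_step=1e-2):
    # Gradient descent with a curvature-based step length along -grad(x):
    #   alpha_k = ||g_k||^2 / (g_k^T H_k g_k)   when that curvature is positive,
    # i.e. the minimizer of the local quadratic model along the gradient direction.
    # When g^T H g <= 0 (possible in nonconvex problems) we fall back to a small
    # fixed step; the paper's actual safeguard may differ.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break
        curvature = g @ hvp(x, g)          # g^T H(x) g via a Hessian-vector product
        if curvature > 1e-12 * gnorm2:
            alpha = gnorm2 / curvature     # curvature-scaled ("unit") step along -g
        else:
            alpha = fallback_step          # nonpositive curvature: conservative step
        x = x - alpha * g
    return x

# Toy usage on a convex quadratic f(x) = 0.5 x^T A x - b^T x (hypothetical example).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b
hvp = lambda x, v: A @ v                   # constant Hessian here, so H(x) v = A v

x_hat = hessian_scaled_gd(grad, hvp, x0=np.zeros(2))
print(x_hat, np.linalg.solve(A, b))        # the two should agree closely

On a quadratic this reduces to steepest descent with exact line search, which converges linearly; the abstract's contribution concerns how such curvature-aware scaling behaves in nonconvex settings, with inexact Hessian information, and under weakened smoothness assumptions.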
