arXiv Analytics

arXiv:2201.11643 [math.OC]

From the Ravine method to the Nesterov method and vice versa: a dynamical system perspective

H. Attouch, J. Fadili

Published 2022-01-27, Version 1

We revisit the Ravine method of Gelfand and Tsetlin from a dynamical system perspective, study its convergence properties, and highlight its similarities to and differences from the Nesterov accelerated gradient method. The two methods are closely related: each can be deduced from the other by reversing the order of the extrapolation and gradient operations in its definition. They enjoy similar fast convergence of values and convergence of iterates for general convex objective functions. We also establish the high-resolution ODE of the Ravine and Nesterov methods, and reveal an additional geometric damping term driven by the Hessian in both methods. This allows us to prove fast convergence towards zero of the gradients not only for the Ravine method but also, for the first time, for the Nesterov method. We further highlight connections to other algorithms stemming from more subtle discretization schemes, and finally describe a Ravine version of the proximal-gradient algorithms for general structured smooth + non-smooth convex optimization problems.
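The structural link described above can be illustrated with a minimal sketch (not the paper's exact formulation) contrasting the two update orders on the toy quadratic f(x) = 0.5 x², whose gradient is x. The function names, the step size, and the momentum coefficient α_k = (k − 1)/(k + 2) are standard illustrative choices, not taken from the article:

```python
def grad(x):
    """Gradient of f(x) = 0.5 * x**2."""
    return x

def nesterov(x0, step=0.5, iters=200):
    """Nesterov: extrapolate first, then take the gradient step."""
    x_prev = x = x0
    for k in range(1, iters + 1):
        alpha = (k - 1) / (k + 2)          # standard momentum coefficient
        y = x + alpha * (x - x_prev)       # extrapolation
        x_prev, x = x, y - step * grad(y)  # gradient step at the extrapolated point
    return x

def ravine(y0, step=0.5, iters=200):
    """Ravine: gradient step first, then extrapolate."""
    w_prev = y0 - step * grad(y0)
    y = y0
    for k in range(1, iters + 1):
        w = y - step * grad(y)             # gradient step
        alpha = (k - 1) / (k + 2)
        y = w + alpha * (w - w_prev)       # extrapolation of the gradient iterates
        w_prev = w
    return y
```

On this example, both sequences converge quickly to the minimizer at 0; the only difference between the two loops is the order in which the extrapolation and gradient operations are applied, which is precisely the relationship the abstract points to.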

Related articles:
arXiv:2403.06708 [math.OC] (Published 2024-03-11)
Tikhonov Regularization for Stochastic Non-Smooth Convex Optimization in Hilbert Spaces
arXiv:2402.02461 [math.OC] (Published 2024-02-04, updated 2024-02-07)
Zeroth-order Median Clipping for Non-Smooth Convex Optimization Problems with Heavy-tailed Symmetric Noise
arXiv:1711.01850 [math.OC] (Published 2017-11-06)
A universal modification of the linear coupling method