arXiv Analytics

arXiv:2211.08578 [math.OC]

Anderson acceleration of gradient methods with energy for optimization problems

Hailiang Liu, Jia-Hao He, Xuping Tian

Published 2022-11-15, Version 1

Anderson acceleration (AA), an efficient technique for speeding up the convergence of fixed-point iterations, can also be adapted to accelerate optimization methods. We propose a novel optimization algorithm by adapting Anderson acceleration to the energy adaptive gradient method (AEGD) [arXiv:2010.05109]. Although AEGD is not a fixed-point iteration, we examine the feasibility of our algorithm in light of the convergence results for AEGD. We also quantify the accelerated convergence rate of AA applied to gradient descent by a factor of the gain at each implementation of the Anderson mixing. Our experimental results show that the proposed algorithm requires little tuning of hyperparameters and exhibits notably fast convergence.
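
The core mechanism referenced in the abstract, Anderson mixing of past fixed-point iterates, can be illustrated concretely. Below is a minimal Python sketch of windowed Anderson acceleration AA(m) applied to the ordinary gradient-descent fixed-point map g(x) = x - lr * grad(x); the function name anderson_accelerated_gd, the step size lr, and the memory parameter m are illustrative assumptions, and the sketch is not the AA-AEGD scheme proposed in the paper.

```python
import numpy as np

def anderson_accelerated_gd(grad, x0, lr=0.1, m=5, max_iter=500, tol=1e-8):
    """Windowed Anderson acceleration AA(m) applied to the plain
    gradient-descent fixed-point map g(x) = x - lr * grad(x).

    Generic sketch only; lr and m are illustrative, and this is not
    the AA-AEGD scheme of the paper.
    """
    g = lambda y: y - lr * grad(y)       # fixed-point map of gradient descent
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []              # past g(x_k) and residuals f_k = g(x_k) - x_k
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            break
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:          # keep a window of at most m+1 residuals
            G_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                       # no history yet: plain fixed-point step
            continue
        # Least-squares mixing problem:
        #   min_gamma || f_k - dF @ gamma ||,  where dF[:, j] = f_{j+1} - f_j
        F = np.column_stack(F_hist)
        dF = F[:, 1:] - F[:, :-1]
        gamma, *_ = np.linalg.lstsq(dF, F_hist[-1], rcond=None)
        # Recover affine weights alpha (summing to 1) from gamma
        alpha = np.zeros(len(F_hist))
        alpha[-1] = 1.0
        alpha[1:] -= gamma
        alpha[:-1] += gamma
        # Anderson-mixed iterate: x_{k+1} = sum_i alpha_i * g(x_i)
        x = np.column_stack(G_hist) @ alpha
    return x

# Usage on a simple strongly convex quadratic F(x) = 0.5 * x^T A x
A = np.diag([1.0, 10.0, 100.0])
x_star = anderson_accelerated_gd(lambda x: A @ x, x0=np.ones(3), lr=0.01, m=5)
print(np.linalg.norm(x_star))            # should be close to 0
```

Restricting the history to at most m+1 residuals keeps the least-squares mixing problem small, which is what makes Anderson acceleration cheap relative to the cost of the fixed-point map itself.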

Related articles:
arXiv:2105.08317 [math.OC] (Published 2021-05-18, updated 2022-04-19)
An Augmented Lagrangian Method for Optimization Problems with Structured Geometric Constraints
arXiv:2110.04882 [math.OC] (Published 2021-10-10, updated 2022-04-29)
First- and Second-Order Analysis for Optimization Problems with Manifold-Valued Constraints
arXiv:2402.12090 [math.OC] (Published 2024-02-19)
Characterization of optimization problems that are solvable iteratively with linear convergence