arXiv:1805.09077 [math.OC]
Accelerating the Fast Gradient Method
Published 2018-05-23 (Version 1)
A set of accelerated first-order optimisation algorithms for minimising strictly convex functions is proposed. Using only gradient information, these algorithms achieve a worst-case convergence rate proportional to $1-\left(\frac{\mu}{L}\right)^{1/N}$ for $N = 1, 2, 3, \dots$, which can exceed that of the fast gradient method. However, the improvement in the convergence rate comes at the expense of a loss of robustness. Robustness is recovered by an adaptive restarting controller that guarantees iterate convergence and recovers the faster convergence rates.
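To make the ingredients concrete, here is a minimal Python sketch of the classical fast gradient method for a $\mu$-strongly convex, $L$-smooth function, combined with a simple function-value restart heuristic. The restart test below is the standard O'Donoghue-Candès heuristic, used purely for illustration; it is not necessarily the adaptive restarting controller proposed in the paper, and the function names and step sizes are assumptions.

```python
import numpy as np

def fgm_with_restart(f, grad, x0, L, mu, iters=500):
    """Fast gradient method with a function-value restart heuristic (sketch)."""
    # Constant momentum coefficient for the strongly convex setting.
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = y = np.asarray(x0, dtype=float)
    f_prev = f(x)
    for _ in range(iters):
        x_next = y - grad(y) / L           # gradient step from the extrapolated point
        if f(x_next) > f_prev:             # restart: drop momentum when progress stalls
            y = x                          # reset the extrapolation sequence
            x_next = y - grad(y) / L
        y = x_next + beta * (x_next - x)   # momentum (extrapolation) step
        x, f_prev = x_next, f(x_next)
    return x

# Example: minimise a strictly convex quadratic f(x) = 0.5 x^T A x
# with L = 100 and mu = 1 (condition number L/mu = 100).
A = np.diag([100.0, 1.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = fgm_with_restart(f, grad, x0=[1.0, 1.0], L=100.0, mu=1.0)
```

Plain fast gradient descent of this form attains the $N = 1$ rate; the paper's proposed algorithms improve the exponent to $1/N$ for larger $N$, with restarting used to restore robustness.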
Comments: 8 pages, 6 figures
Categories: math.OC
Related articles:
arXiv:1610.01349 [math.OC] (Published 2016-10-05)
A Fast Gradient Method for Nonnegative Sparse Regression with Self Dictionary
arXiv:2003.01322 [math.OC] (Published 2020-03-03)
Randomized Primal-Dual Algorithms for Composite Convex Minimization with Faster Convergence Rates
arXiv:2003.05667 [math.OC] (Published 2020-03-12)
Fast Gradient Method for Model Predictive Control with Input Rate and Amplitude Constraints