arXiv Analytics

arXiv:2203.02204 [math.OC]

Sharper Bounds for Proximal Gradient Algorithms with Errors

Anis Hamadouche, Yun Wu, Andrew M. Wallace, Joao F. C. Mota

Published 2022-03-04 | Version 1

We analyse the convergence of the proximal gradient algorithm for convex composite problems in the presence of gradient and proximal computational inaccuracies. We derive new, tighter deterministic and probabilistic bounds, which we use to verify a simulated (MPC) and a synthetic (LASSO) optimization problem solved on a reduced-precision machine in combination with an inaccurate proximal operator. We also show that the probabilistic bounds are more robust for algorithm verification and more accurate for application performance guarantees. Under some statistical assumptions, we further prove that certain cumulative error terms follow a martingale property, and, consistent with observations in, e.g., \cite{schmidt2011convergence}, we show how accelerating the algorithm amplifies the gradient and proximal computational errors.
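The setting of the abstract, a proximal gradient method whose gradient and proximal steps are each corrupted by small computational errors, can be sketched for the LASSO case. This is a minimal illustration, not the paper's experimental setup: the problem dimensions, the additive-noise error model, and the error magnitude `eps` are all assumptions chosen for demonstration.

```python
import numpy as np

# Inexact proximal gradient (ISTA) for the LASSO problem
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# with both the gradient and the proximal (soft-thresholding) step
# perturbed by additive noise, loosely mimicking reduced-precision
# arithmetic. All sizes and error levels here are illustrative
# assumptions, not values from the paper.

rng = np.random.default_rng(0)
m, n, lam = 40, 100, 0.1
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)          # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of grad
step = 1.0 / L

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(n)
eps = 1e-6                                   # assumed per-step error level
for _ in range(500):
    grad = A.T @ (A @ x - b)
    grad += eps * rng.standard_normal(n)     # inexact gradient
    x = soft_threshold(x - step * grad, step * lam)
    x += eps * rng.standard_normal(n)        # inexact proximal step

print(objective(x))
```

With errors this small the iterates still drive the objective well below its initial value, which is the regime where the paper's bounds quantify the residual suboptimality caused by the accumulated inaccuracies.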

Related articles:
arXiv:1512.09302 [math.OC] (Published 2015-12-31)
Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
arXiv:1801.05589 [math.OC] (Published 2018-01-17)
On the Proximal Gradient Algorithm with Alternated Inertia
arXiv:1712.02623 [math.OC] (Published 2017-12-07)
The multiproximal linearization method for convex composite problems