arXiv Analytics

arXiv:2212.12256 [math.OC]

On a fixed-point continuation method for a convex optimization problem

Jean-Baptiste Fest, Tommi Heikkilä, Ignace Loris, Ségolène Martin, Luca Ratti, Simone Rebegoldi, Gesa Sarnighausen

Published 2022-12-23, Version 1

We consider a variation of the classical proximal-gradient algorithm for the iterative minimization of a cost function consisting of the sum of two terms, one smooth and the other prox-simple, whose relative weight is determined by a penalty parameter. This so-called fixed-point continuation method allows one to approximate the problem's trade-off curve, i.e. to compute the minimizers of the cost function for a whole range of values of the penalty parameter at once. The algorithm is shown to converge, and a convergence rate for the cost function values is derived. Furthermore, the method is shown to be related to iterative algorithms constructed from the $\epsilon$-subdifferential of the prox-simple term. Some numerical examples are provided.
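
For orientation, the sketch below shows the classical baseline the paper builds on: a proximal-gradient (ISTA-type) solve warm-started across a decreasing grid of penalty parameters, which traces an approximation of the trade-off curve one penalty value at a time. It is not the authors' fixed-point continuation scheme (which handles the whole range of penalty values within a single iteration); the smooth term is taken as a least-squares fit, the prox-simple term as the $\ell_1$-norm, and the data A, b and the lambda grid are purely illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_path(A, b, lambdas, n_iter=500):
    """Approximate minimizers of 0.5*||A x - b||^2 + lam*||x||_1 for each
    penalty value lam, warm-starting each solve from the previous one."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    path = []
    for lam in lambdas:
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                      # gradient of the smooth term
            x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 term
        path.append(x.copy())
    return path

# Small synthetic example (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
lam_max = np.max(np.abs(A.T @ b))
lambdas = lam_max * np.geomspace(1.0, 1e-3, num=10)       # decreasing penalty grid
solutions = prox_grad_path(A, b, lambdas)
print([int(np.count_nonzero(np.abs(x) > 1e-6)) for x in solutions])
```

The warm start is what makes sweeping the penalty parameter cheap here; the paper's contribution is to fold this continuation directly into the fixed-point iteration rather than running a separate solve per penalty value.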

Comments: 15 pages, 2 figures. Workshop on Advanced Techniques in Optimization for Machine Learning and Imaging
Categories: math.OC, cs.NA, math.NA
Related articles:
arXiv:2406.09786 [math.OC] (Published 2024-06-14)
Convergence analysis of a regularized Newton method with generalized regularization terms for convex optimization problems
arXiv:1804.05098 [math.OC] (Published 2018-04-13)
On the Differentiability of the Solution to Convex Optimization Problems
arXiv:2205.06725 [math.OC] (Published 2022-05-13)
Multi-Marginal Gromov-Wasserstein Transport and Barycenters