arXiv:2302.12105 [math.OC]

A subgradient method with constant step-size for $\ell_1$-composite optimization

Alessandro Scagliotti, Piero Colli Franzone

Published 2023-02-23 (Version 1)

Subgradient methods are the natural extension of classical gradient descent to non-smooth convex optimization problems. However, they are in general characterized by slow convergence rates and require decreasing step-sizes to converge. In this paper we propose a subgradient method with constant step-size for composite convex objectives with $\ell_1$-regularization. If the smooth term is strongly convex, we can establish a linear convergence result for the function values. This result relies on an accurate choice of the element of the subdifferential used for the update, and on proper corrective actions when regions of non-differentiability are crossed. We then propose an accelerated version of the algorithm, based on conservative inertial dynamics and an adaptive restart strategy. Finally, we test the performance of our algorithms on some strongly and non-strongly convex examples.
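To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of a constant step-size subgradient iteration for $F(x) = f(x) + \lambda\|x\|_1$ with a strongly convex quadratic $f$. The particular subgradient choice at zero coordinates and the handling of non-differentiability crossings used by the authors are not reproduced here; the selection rule below (cancelling the smooth gradient when possible) is only an illustrative assumption, as are the function and parameter names.

```python
import numpy as np

def subgradient_l1_descent(grad_f, lam, x0, step, n_iter=500):
    """Constant step-size subgradient sketch for F(x) = f(x) + lam * ||x||_1.

    The subgradient of the l1 term is sign(x_i) away from zero; at x_i == 0
    we pick the element of [-lam, lam] that best cancels the smooth gradient.
    This is a hypothetical illustrative choice, not the paper's rule.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        g_smooth = grad_f(x)
        g_l1 = lam * np.sign(x)
        at_zero = (x == 0)
        g_l1[at_zero] = np.clip(-g_smooth[at_zero], -lam, lam)
        x = x - step * (g_smooth + g_l1)
    return x

# Example: f(x) = 0.5 * ||A x - b||^2, strongly convex when A has full column rank.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
grad_f = lambda x: A.T @ (A @ x - b)
x_hat = subgradient_l1_descent(grad_f, lam=0.1, x0=np.zeros(10), step=1e-3)
```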

Related articles: Most relevant | Search more
arXiv:1808.06274 [math.OC] (Published 2018-08-20)
Iteration-Complexity of the Subgradient Method on Riemannian Manifolds with Lower Bounded Curvature
arXiv:2403.15749 [math.OC] (Published 2024-03-23)
Horoballs and the subgradient method
arXiv:2108.11832 [math.OC] (Published 2021-08-26)
Subgradient methods near active manifolds: saddle point avoidance, local convergence, and asymptotic normality