arXiv:2104.10911 [math.OC]

Converting ADMM to a Proximal Gradient for Convex Optimization Problems

Ryosuke Shimmura, Joe Suzuki

Published 2021-04-22 (Version 1)

In machine learning and data science, the efficiency of solving optimization problems is often a central concern. In sparse estimation, such as the fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem. The latter involves a time-consuming matrix inversion (division) in its updates, whereas efficient accelerations such as FISTA (the fast iterative shrinkage-thresholding algorithm) have been developed for the former. This paper proposes a general method for converting an ADMM solution into a proximal gradient method, assuming that the constraints and objectives are strongly convex. We then apply it to sparse estimation problems, such as sparse convex clustering and trend filtering, and show by numerical experiments that a significant improvement in efficiency is obtained.
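For a concrete picture of the contrast the abstract draws, here is a minimal sketch, not the authors' conversion method, of the two standard solvers applied to the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1: ADMM's x-update requires a linear solve (the "matrix division" above), while FISTA needs only matrix-vector products plus an elementwise soft-thresholding prox. All function names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Standard ADMM for the lasso: the x-update is a linear solve."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)    # forming/solving this is the costly part
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # "matrix division"
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

def fista_lasso(A, b, lam, n_iter=200):
    """FISTA: only matrix-vector products and a cheap prox per iteration."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
    x = y = np.zeros(n)
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - (A.T @ (A @ y - b)) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

In this sketch, ADMM's per-iteration cost is dominated by the solve with A.T @ A + rho*I, whereas FISTA costs only matrix-vector products per iteration; avoiding that kind of linear solve is the sort of efficiency gain the abstract refers to.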

Related articles:
arXiv:2402.05215 [math.OC] (Published 2024-02-07)
Geometric characterizations of Lipschitz stability for convex optimization problems
arXiv:1908.03075 [math.OC] (Published 2019-08-07)
Domain-Driven Solver (DDS): a MATLAB-based Software Package for Convex Optimization Problems in Domain-Driven Form
arXiv:1403.6526 [math.OC] (Published 2014-03-25, updated 2015-07-01)
A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework