arXiv:1801.01088 [math.OC]

Convergence rates of Forward--Douglas--Rachford splitting method

Cesare Molinari, Jingwei Liang, Jalal Fadili

Published 2018-01-03 (Version 1)

In recent years, operator splitting methods have become ubiquitous in non-smooth optimization owing to their simplicity and efficiency. In this paper, we consider the Forward--Douglas--Rachford splitting method (FDR) [10,40] and study both its global and local convergence rates. For the global rate, we establish an $o(1/k)$ convergence rate in terms of a Bregman divergence suitably designed for the objective function. Moreover, when specializing to the Forward--Backward splitting method, we show that the convergence rate of the objective function is in fact $o(1/k)$ for a wide range of descent step-sizes. Locally, under the assumption that the non-smooth part of the optimization problem is partly smooth, we establish local linear convergence of the method. More precisely, we show that the sequence generated by the FDR method first (i) identifies a smooth manifold in a finite number of iterations, and then (ii) enters a local linear convergence regime, whose rate can be characterized, for instance, in terms of the structure of the underlying active smooth manifold. To illustrate the usefulness of these results, we report several numerical experiments on problems arising in fields of application including signal/image processing, inverse problems and machine learning.
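For readers unfamiliar with the scheme, the following is a minimal numerical sketch (not taken from the paper) of one common formulation of the FDR iteration for $\min_{x \in V} F(x) + R(x)$, where $F$ is smooth with $L$-Lipschitz gradient, $R$ is proximable and $V$ is a closed vector subspace. The toy problem, variable names and step-size choice below are illustrative assumptions, not the paper's experimental setup.

import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: the proximity operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fdr(grad_F, prox_R, P_V, z0, gamma, lam=1.0, n_iter=2000):
    # One common form of the FDR iteration for min_{x in V} F(x) + R(x):
    #   x_k     = P_V(z_k)
    #   u_k     = prox_{gamma R}(2 x_k - z_k - gamma * grad F(x_k))
    #   z_{k+1} = z_k + lam * (u_k - x_k)
    z = z0.copy()
    for _ in range(n_iter):
        x = P_V(z)
        u = prox_R(2.0 * x - z - gamma * grad_F(x), gamma)
        z = z + lam * (u - x)
    return P_V(z)

# Toy instance (illustrative): F(x) = 0.5 * ||Ax - b||^2, R = mu * ||.||_1,
# and V = ker(C) for a synthetic linear constraint C x = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)); b = rng.standard_normal(30)
C = rng.standard_normal((5, 50)); mu = 0.1
P = np.eye(50) - np.linalg.pinv(C) @ C   # orthogonal projector onto ker(C)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad F
x_star = fdr(grad_F=lambda x: A.T @ (A @ x - b),
             prox_R=lambda v, t: prox_l1(v, mu * t),
             P_V=lambda x: P @ x,
             z0=np.zeros(50), gamma=1.0 / L)

Under standard assumptions, a step-size $\gamma \in (0, 2/L)$ together with a suitable relaxation $\lambda$ suffices for convergence; in this $\ell_1$-regularized example, the active smooth manifold is the set of vectors sharing the support of the solution, so the paper's finite identification and local linear regime would correspond to the iterates' support stabilizing after finitely many iterations.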
