arXiv:1609.08177 [math.OC]
Extragradient Method in Optimization: Convergence and Complexity
Trong Phong Nguyen, Edouard Pauwels, Emile Richard, Bruce W. Suter
Published 2016-09-26 (Version 1)
We consider the extragradient method for minimizing the sum of two functions, the first one smooth and the second convex. Under the Kurdyka-Łojasiewicz assumption, we prove that the sequence produced by the extragradient method converges to a critical point of the problem and has finite length. The analysis is extended to the case when both functions are convex. We provide a $1/k$ convergence rate, which is classical for gradient methods. Furthermore, we show that the recent \textit{small-prox} complexity result can be applied to this method. The study of the extragradient method is also the occasion to describe an exact line search for proximal decomposition methods. We provide details for the implementation of this scheme for the $\ell_1$ regularized least squares problem and give numerical results which suggest that combining nonaccelerated methods with exact line search can be a competitive choice.
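For composite problems of this form, the extragradient iteration is commonly written as a prediction step $y_k = \mathrm{prox}_{s g}(x_k - s \nabla f(x_k))$ followed by a correction step $x_{k+1} = \mathrm{prox}_{s g}(x_k - s \nabla f(y_k))$. Below is a minimal sketch of this two-step scheme specialized to the $\ell_1$ regularized least squares problem mentioned in the abstract; it is not the authors' code. The function names (`extragradient_l1_ls`, `soft_threshold`) and the fixed step size are illustrative assumptions, whereas the paper additionally discusses an exact line search for choosing the step.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def extragradient_l1_ls(A, b, lam, step, n_iter=500):
    # Sketch of the extragradient iteration for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    # with a fixed step size (an assumption; the paper also
    # considers an exact line search).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_x = A.T @ (A @ x - b)                         # gradient of the smooth part at x_k
        y = soft_threshold(x - step * grad_x, step * lam)  # prediction: y_k
        grad_y = A.T @ (A @ y - b)                         # gradient re-evaluated at y_k
        x = soft_threshold(x - step * grad_y, step * lam)  # correction: x_{k+1}
    return x

# Usage on synthetic data; step < 1/L with L the Lipschitz
# constant of the gradient (squared spectral norm of A).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
L = np.linalg.norm(A, 2) ** 2
x_hat = extragradient_l1_ls(A, b, lam=0.1, step=1.0 / L)
```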