arXiv:1611.08022 [math.OC]
A Stronger Convergence Result on the Proximal Incremental Aggregated Gradient Method
Nuri Denizcan Vanli, Mert Gurbuzbalaban, Asu Ozdaglar
Published 2016-11-23 (Version 1)
We study the convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions (where the sum is strongly convex) and a non-smooth convex function. At each iteration, the PIAG method moves along an aggregated gradient, formed by incrementally updating gradients of component functions at least once in the last $K$ iterations, and then takes a proximal step with respect to the non-smooth function. We show that the PIAG algorithm attains an iteration complexity that grows linearly in the condition number of the problem and the delay parameter $K$. This improves upon the best previously known global linear convergence rate of the PIAG algorithm in the literature, which has a quadratic dependence on $K$.
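For concreteness, the PIAG iteration described above can be sketched in standard notation (the symbols $x_k$, $\eta$, $f_i$, $h$, and $\tau_i^k$ are our own choices, not taken from the abstract): with component functions $f_1,\dots,f_N$, non-smooth term $h$, and step size $\eta$, one update is roughly

$$
x_{k+1} \;=\; \operatorname{prox}_{\eta h}\!\Big( x_k \;-\; \eta \sum_{i=1}^{N} \nabla f_i\big(x_{k-\tau_i^k}\big) \Big),
\qquad 0 \le \tau_i^k \le K,
$$

where each gradient in the aggregate may be evaluated at an outdated iterate that is at most $K$ iterations old, matching the delay parameter $K$ in the stated complexity bound.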