arXiv:2407.13564 [math.OC]

Convergence result for the gradient-push algorithm and its application to boost up the Push-DIGing algorithm

Hyogi Choi, Woocheol Choi, Gwangil Kim

Published 2024-07-18, Version 1

The gradient-push algorithm is a fundamental algorithm for the distributed optimization problem \begin{equation} \min_{x \in \mathbb{R}^d} f(x) = \sum_{j=1}^n f_j (x), \end{equation} where each local cost $f_i$ is known only to agent $i$ for $1 \leq i \leq n$ and the agents are connected by a directed graph. In this paper, we obtain convergence results for the gradient-push algorithm with constant stepsize whose range is sharp in terms of the order of the smoothness constant $L>0$. Precisely, under the two settings: 1) each local cost $f_i$ is strongly convex and $L$-smooth, 2) each local cost $f_i$ is convex quadratic and $L$-smooth while the aggregate cost $f$ is strongly convex, we show that the gradient-push algorithm with stepsize $\alpha>0$ converges to an $O(\alpha)$-neighborhood of the minimizer of $f$ for a range $\alpha \in (0, c/L]$ with a value $c>0$ independent of $L>0$. As a benefit of this result, we suggest a hybrid algorithm that runs the gradient-push algorithm with a relatively large stepsize $\alpha>0$ for a number of iterations and then switches to the Push-DIGing algorithm. A numerical test verifies that the hybrid algorithm significantly enhances the performance of the Push-DIGing algorithm. The convergence results for the gradient-push algorithm are also supported by numerical tests.
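The iteration studied above can be illustrated with a minimal NumPy sketch of the standard (sub)gradient-push update over a directed graph. The directed ring, the one-dimensional quadratic local costs $f_i(x) = \tfrac12 (x - b_i)^2$, the data vector `b`, and the stepsize are all hypothetical choices for illustration, not the paper's experimental setup.

```python
import numpy as np

# Hypothetical problem: min_x sum_i f_i(x), f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer is the mean of the b_i (here 2.5).
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])       # illustrative local data
grad = lambda i, x: x - b[i]             # gradient of the local cost f_i

# Column-stochastic mixing matrix for a directed ring 0->1->2->3->0:
# each agent keeps weight 1/2 and pushes 1/2 to its out-neighbor.
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5

alpha = 0.05                              # constant stepsize
x = np.zeros(n)                           # push-sum numerators
y = np.ones(n)                            # push-sum weights
z = x / y                                 # local estimates of the minimizer

for _ in range(2000):
    w = A @ x                             # mix numerators along the digraph
    y = A @ y                             # mix push-sum weights
    z = w / y                             # de-bias by the push-sum weight
    x = w - alpha * np.array([grad(i, z[i]) for i in range(n)])

# With a constant stepsize, each z_i settles in an O(alpha)-neighborhood
# of the global minimizer mean(b) = 2.5.
```

The division by the push-sum weights `y` is what corrects the bias that a merely column-stochastic (rather than doubly stochastic) mixing matrix would otherwise introduce on a directed graph.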

Related articles: Most relevant | Search more
arXiv:math/0004064 [math.OC] (Published 2000-04-11)
The fractional-order controllers: Methods for their synthesis and application
arXiv:1403.2816 [math.OC] (Published 2014-03-12, updated 2015-04-17)
S-Lemma with Equality and Its Applications
arXiv:1206.4897 [math.OC] (Published 2012-06-21)
Robust Eigenvector of a Stochastic Matrix with Application to PageRank