{ "id": "2407.13564", "version": "v1", "published": "2024-07-18T14:36:26.000Z", "updated": "2024-07-18T14:36:26.000Z", "title": "Convergence result for the gradient-push algorithm and its application to boost up the Push-DIging algorithm", "authors": [ "Hyogi Choi", "Woocheol Choi", "Gwangil Kim" ], "categories": [ "math.OC", "cs.SY", "eess.SY" ], "abstract": "The gradient-push algorithm is a fundamental algorithm for the distributed optimization problem \\begin{equation} \\min_{x \\in \\mathbb{R}^d} f(x) = \\sum_{j=1}^n f_j (x), \\end{equation} where each local cost $f_j$ is only known to agent $a_i$ for $1 \\leq i \\leq n$ and the agents are connected by a directed graph. In this paper, we obtain convergence results for the gradient-push algorithm with constant stepsize whose range is sharp in terms the order of the smoothness constant $L>0$. Precisely, under the two settings: 1) Each local cost $f_i$ is strongly convex and $L$-smooth, 2) Each local cost $f_i$ is convex quadratic and $L$-smooth while the aggregate cost $f$ is strongly convex, we show that the gradient-push algorithm with stepsize $\\alpha>0$ converges to an $O(\\alpha)$-neighborhood of the minimizer of $f$ for a range $\\alpha \\in (0, c/L]$ with a value $c>0$ independent of $L>0$. As a benefit of the result, we suggest a hybrid algorithm that performs the gradient-push algorithm with a relatively large stepsize $\\alpha>0$ for a number of iterations and then go over to perform the Push-DIGing algorithm. It is verified by a numerical test that the hybrid algorithm enhances the performance of the Push-DIGing algorithm significantly. The convergence results of the gradient-push algorithm are also supported by numerical tests.", "revisions": [ { "version": "v1", "updated": "2024-07-18T14:36:26.000Z" } ], "analyses": { "keywords": [ "gradient-push algorithm", "convergence result", "push-diging algorithm", "local cost", "application" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }