arXiv Analytics

arXiv:2110.12347 [math.OC]

Acceleration in Distributed Optimization under Similarity

Ye Tian, Gesualdo Scutari, Tianyu Cao, Alexander Gasnikov

Published 2021-10-24, updated 2022-04-10 (version 2)

We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes. The loss functions of the agents are assumed to be \textit{similar}, due to statistical data similarity or otherwise. To reduce the number of communications needed to reach a target solution accuracy, we propose a \textit{preconditioned, accelerated} distributed method. An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\log\frac{1}{\varepsilon}\big)$ communication steps, where $\beta/\mu$ is the relative condition number between the global and local loss functions, and $\rho$ characterizes the connectivity of the network. This rate matches (up to poly-log factors) the communication complexity lower bounds of gossip-based distributed algorithms applied to the class of problems of interest. Numerical results show significant communication savings with respect to existing accelerated distributed schemes, especially when solving ill-conditioned problems.
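
The quantities driving the rate are the relative condition number $\beta/\mu$ and the network parameter $\rho$. The following minimal NumPy sketch is not the paper's algorithm; it is a hypothetical illustration (the quadratic losses, perturbation scale, ring topology, and Metropolis weights are all assumptions) of how these quantities could be computed for a synthetic family of similar local Hessians and a gossip matrix.

```python
# Hypothetical sketch (not the paper's method): for quadratic local losses
# f_i(x) = 0.5 * x^T A_i x, estimate the quantities appearing in the rate:
#   mu   -- strong convexity of the global loss (smallest eigenvalue of the mean Hessian)
#   beta -- Hessian similarity, max_i ||A_i - A_global||_2
#   rho  -- second-largest eigenvalue modulus of a gossip (mixing) matrix W
import numpy as np

rng = np.random.default_rng(0)
m, d = 8, 20                      # number of agents, problem dimension

# Shared Hessian component plus small agent-specific symmetric perturbations,
# mimicking statistically similar local losses.
A_shared = rng.standard_normal((d, d))
A_shared = A_shared @ A_shared.T / d + np.eye(d)
A_locals = []
for _ in range(m):
    E = rng.standard_normal((d, d))
    A_locals.append(A_shared + 0.1 * (E + E.T) / (2 * np.sqrt(d)))

A_global = sum(A_locals) / m
mu = np.linalg.eigvalsh(A_global)[0]                                 # smallest eigenvalue
beta = max(np.linalg.norm(A_i - A_global, 2) for A_i in A_locals)    # similarity parameter

# Metropolis-weight gossip matrix of a ring graph; rho < 1 reflects connectivity.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 1 / 3
    W[i, (i - 1) % m] = 1 / 3
    W[i, (i + 1) % m] = 1 / 3
rho = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]

print(f"relative condition number beta/mu ~ {beta / mu:.3f}")
print(f"network connectivity rho ~ {rho:.3f}")
print(f"communication-round scaling sqrt((beta/mu)/(1-rho)) ~ "
      f"{np.sqrt((beta / mu) / (1 - rho)):.2f} (times log(1/eps))")
```

In this setup, shrinking the perturbation scale makes the local losses more similar, lowering $\beta/\mu$ and hence the predicted number of communication rounds, while a poorly connected network pushes $\rho$ toward 1 and inflates it.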

Related articles:
arXiv:1702.05122 [math.OC] (Published 2017-02-16)
Exact Diffusion for Distributed Optimization and Learning --- Part I: Algorithm Development
arXiv:1805.12591 [math.OC] (Published 2018-05-31)
On Acceleration with Noise-Corrupted Gradients
arXiv:1911.02410 [math.OC] (Published 2019-11-06)
DISROPT: a Python Framework for Distributed Optimization