arXiv Analytics

arXiv:1510.02146 [math.OC]

Distributed Gradient Descent over Directed Graphs

Chenguang Xi, Qiong Wu, Usman A. Khan

Published 2015-10-07, Version 1

In this paper, we propose a distributed algorithm, called Directed-Distributed Gradient Descent (D-DGD), to solve multi-agent optimization problems over directed graphs. Existing algorithms mostly address similar problems under the assumption of undirected networks, i.e., they require the weight matrices to be doubly-stochastic. The row-stochasticity of the weight matrix guarantees that all agents reach consensus, while the column-stochasticity ensures that each agent's local gradient contributes equally to the global objective. In a directed graph, however, it may not be possible to construct a doubly-stochastic weight matrix in a distributed manner. We overcome this difficulty by augmenting each agent's state with an additional variable that records the change in the state evolution. In each iteration, the algorithm simultaneously constructs a row-stochastic matrix and a column-stochastic matrix instead of a single doubly-stochastic matrix. The convergence of the new weight matrix, which depends on both the row-stochastic and column-stochastic matrices, ensures that agents reach both consensus and optimality. The analysis shows that the proposed algorithm converges at a rate of $O(\frac{\ln k}{\sqrt{k}})$, where $k$ is the number of iterations.
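The idea in the abstract can be illustrated with a small simulation: each agent keeps its usual state plus a surplus variable, mixes states through a row-stochastic matrix `A` built from in-neighbors, and splits its surplus through a column-stochastic matrix `B` built from out-neighbors. The update rules, the coupling parameter `eps`, and the `1/sqrt(k)` step size below are illustrative assumptions in the spirit of the description, not the paper's exact equations.

```python
import numpy as np

def ddgd_sketch(adj, grads, x0, steps=600, eps=0.05):
    """Illustrative surplus-based distributed gradient descent sketch.

    adj[i][j] = 1 means there is a directed edge j -> i (j sends to i).
    grads: one scalar gradient function per agent.
    """
    n = len(x0)
    A = np.zeros((n, n))  # row-stochastic: each agent averages what it receives
    B = np.zeros((n, n))  # column-stochastic: each agent splits what it sends
    for i in range(n):
        in_nb = [j for j in range(n) if adj[i][j] or j == i]
        for j in in_nb:
            A[i, j] = 1.0 / len(in_nb)
    for j in range(n):
        out_nb = [i for i in range(n) if adj[i][j] or i == j]
        for i in out_nb:
            B[i, j] = 1.0 / len(out_nb)

    x = np.array(x0, dtype=float)
    y = np.zeros(n)  # surplus: records the change in the state evolution
    for k in range(1, steps + 1):
        g = np.array([grads[i](x[i]) for i in range(n)])
        alpha = 1.0 / np.sqrt(k)          # diminishing step size
        x_new = A @ x + eps * y - alpha * g
        y = (x - A @ x) + B @ y - eps * y  # surplus absorbs the mixing residue
        x = x_new
    return x
```

A useful property of this structure is conservation: because `B` is column-stochastic, the total `sum(x) + sum(y)` changes only through the gradient term, so the network-wide average tracks a centralized descent on the sum of local objectives even though `A` alone need not be doubly-stochastic.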

Related articles: Most relevant | Search more
arXiv:2309.12480 [math.OC] (Published 2023-09-21)
Global Uniform Ultimate Boundedness of Semi-Passive Systems Interconnected over Directed Graphs
arXiv:2406.14011 [math.OC] (Published 2024-06-20)
Primal-Dual Strategy (PDS) for Composite Optimization Over Directed graphs
arXiv:2009.02706 [math.OC] (Published 2020-09-06)
On the probabilistic feasibility of solutions in multi-agent optimization problems under uncertainty