{ "id": "1510.02146", "version": "v1", "published": "2015-10-07T21:48:54.000Z", "updated": "2015-10-07T21:48:54.000Z", "title": "Distributed Gradient Descent over Directed Graphs", "authors": [ "Chenguang Xi", "Qiong Wu", "Usman A. Khan" ], "categories": [ "math.OC" ], "abstract": "In this paper, we propose a distributed algorithm, called Directed-Distributed Gradient Descent (D-DGD), to solve multi-agent optimization problems over directed graphs. Existing algorithms mostly deal with similar problems under the assumption of undirected networks, i.e., requiring the weight matrices to be doubly-stochastic. The row-stochasticity of the weight matrix guarantees that all agents reach consensus, while the column-stochasticity ensures that each agent's local gradient contributes equally to the global objective. In a directed graph, however, it may not be possible to construct a doubly-stochastic weight matrix in a distributed manner. We overcome this difficulty by augmenting an additional variable for each agent to record the change in the state evolution. In each iteration, the algorithm simultaneously constructs a row-stochastic matrix and a column-stochastic matrix instead of only a doubly-stochastic matrix. The convergence of the new weight matrix, depending on the row-stochastic and column-stochastic matrices, ensures agents to reach both consensus and optimality. The analysis shows that the proposed algorithm converges at a rate of $O(\\frac{\\ln k}{\\sqrt{k}})$, where $k$ is the number of iterations.", "revisions": [ { "version": "v1", "updated": "2015-10-07T21:48:54.000Z" } ], "analyses": { "keywords": [ "directed graph", "agents local gradient contributes", "column-stochastic matrix", "multi-agent optimization problems", "doubly-stochastic weight matrix" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable", "inspire": 1396868 } } }