arXiv:0803.1202 [math.OC]

Distributed Subgradient Methods and Quantization Effects

Angelia Nedić, Alex Olshevsky, Asuman Ozdaglar, John N. Tsitsiklis

Published 2008-03-08 (Version 1)

We consider a convex unconstrained optimization problem that arises in a network of agents whose goal is to cooperatively optimize the sum of the individual agent objective functions through local computations and communications. For this problem, we use averaging algorithms to develop distributed subgradient methods that can operate over a time-varying topology. Our focus is on the convergence rate of these methods and the degradation in performance when only quantized information is available. Based on our recent results on the convergence time of distributed averaging algorithms, we derive improved upper bounds on the convergence rate of the unquantized subgradient method. We then propose a distributed subgradient method under the additional constraint that agents can only store and communicate quantized information, and we provide bounds on its convergence rate that highlight the dependence on the number of quantization levels.
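As a rough illustration of the kind of update the abstract describes, the sketch below shows one consensus-based subgradient iteration with an optional uniform quantizer applied to the communicated values. This is not the paper's exact algorithm or analysis: the averaging weights, step size, quantizer resolution, and helper names are all illustrative assumptions.

```python
import numpy as np

def quantize(x, delta):
    """Uniform quantizer with resolution delta (illustrative assumption)."""
    return delta * np.round(x / delta)

def distributed_subgradient_step(x, A, subgrads, alpha, delta=None):
    """
    One consensus-based subgradient iteration (sketch, not the paper's exact method).

    x        : (n, d) array; row i holds agent i's current estimate
    A        : (n, n) doubly stochastic averaging matrix for this time step
    subgrads : list of callables; subgrads[i](x_i) returns a subgradient of f_i at x_i
    alpha    : step size
    delta    : if given, agents exchange quantized values with resolution delta
    """
    msgs = quantize(x, delta) if delta is not None else x    # values actually communicated
    mixed = A @ msgs                                          # local averaging over the network
    grads = np.array([g(xi) for g, xi in zip(subgrads, x)])  # local subgradients
    return mixed - alpha * grads

# Tiny usage example: 3 agents minimizing sum_i (x - c_i)^2 over a complete graph
n, d = 3, 1
c = np.array([[1.0], [2.0], [6.0]])
subgrads = [lambda xi, ci=ci: 2 * (xi - ci) for ci in c]
A = np.full((n, n), 1.0 / n)   # equal doubly stochastic weights (static here for simplicity)
x = np.zeros((n, d))
for k in range(200):
    x = distributed_subgradient_step(x, A, subgrads, alpha=0.05, delta=0.01)
# each row of x ends up near the global minimizer mean(c) = 3.0, up to quantization error
```

With a constant step size and quantized messages, the iterates settle in a neighborhood of the optimum whose size grows as the quantization resolution coarsens, which is the trade-off the abstract's convergence-rate bounds quantify.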

Related articles:
arXiv:0904.4229 [math.OC] (Published 2009-04-27)
Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima
arXiv:1204.0301 [math.OC] (Published 2012-04-02)
Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels
arXiv:1503.05601 [math.OC] (Published 2015-03-18)
A New Perspective of Proximal Gradient Algorithms