arXiv Analytics

arXiv:1911.05979 [math.OC]

Towards an $O(\frac{1}{t})$ convergence rate for distributed dual averaging

Changxin Liu, Huiping Li, Yang Shi

Published 2019-11-14 (Version 1)

Recently, distributed dual averaging has received increasing attention due to its superiority in handling constraints and dynamic networks in multiagent optimization. However, the distributed dual averaging methods reported so far consider nonsmooth problems and attain a convergence rate of $O(\frac{1}{\sqrt{t}})$. To achieve an improved convergence guarantee for smooth problems, this work proposes a second-order consensus scheme that helps each agent locally track the global dual variable more accurately. This new scheme, in conjunction with the smoothness of the objective, ensures that the accumulation of consensus error over time caused by incomplete global information is bounded from above. Then, a rigorous investigation of dual averaging with inexact gradient oracles is carried out to compensate for the consensus error and achieve an $O(\frac{1}{t})$ convergence rate. The proposed method is examined on a large-scale LASSO problem.
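
To fix ideas, the following Python sketch illustrates generic distributed dual averaging on a LASSO-type problem of the form $\min_x \sum_i \frac{1}{2}\|A_i x - b_i\|^2 + \lambda\|x\|_1$, where agents mix their dual variables through a doubly stochastic matrix W. It uses the standard first-order consensus step and the $O(\frac{1}{\sqrt{t}})$ step-size schedule that the paper improves upon; the function names, the mixing matrix W, and the step-size choice are illustrative assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(v, tau):
    """Closed-form prox of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_dual_averaging(A_list, b_list, W, lam=0.1, T=500):
    """Generic (first-order consensus) distributed dual averaging sketch."""
    n_agents = len(A_list)
    d = A_list[0].shape[1]
    z = np.zeros((n_agents, d))   # local dual variables (running gradient sums)
    x = np.zeros((n_agents, d))   # local primal iterates

    for t in range(1, T + 1):
        # Local gradients of the smooth least-squares terms.
        grads = np.array([A_list[i].T @ (A_list[i] @ x[i] - b_list[i])
                          for i in range(n_agents)])
        # Consensus step on the dual variables, then accumulate local gradients.
        z = W @ z + grads
        # Composite prox step: with a quadratic prox function the update has
        # the closed form of soft-thresholding the scaled dual variable.
        beta = np.sqrt(t)
        x = soft_threshold(-z / beta, lam * t / beta)

    return x.mean(axis=0)

In the paper's method, the plain consensus step above (z = W @ z + grads) is replaced by the proposed second-order consensus scheme so that each agent tracks the global dual variable more accurately; combined with smoothness of the objective and the inexact-oracle analysis, this is what yields the improved $O(\frac{1}{t})$ rate.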

Related articles
arXiv:0904.4229 [math.OC] (Published 2009-04-27)
Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima
arXiv:1204.0301 [math.OC] (Published 2012-04-02)
Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels
arXiv:1503.05601 [math.OC] (Published 2015-03-18)
A New Perspective of Proximal Gradient Algorithms