arXiv Analytics

arXiv:1803.07741 [math.OC]

A Distributed Stochastic Gradient Tracking Method

Shi Pu, Angelia Nedić

Published 2018-03-21 (Version 1)

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all the local cost functions. Assuming that agents have access only to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method. We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant step-size choice). More importantly, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size, which is comparable to the performance of a centralized stochastic gradient algorithm. Numerical examples further demonstrate the effectiveness of the method.
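
For intuition, the following is a minimal simulation sketch of a gradient-tracking-style update under the assumptions stated in the abstract (smooth, strongly convex local costs; unbiased stochastic gradient estimates; a constant step size). The quadratic local costs, the ring-graph mixing matrix W, and the values of the step size alpha and noise level sigma are illustrative assumptions for this sketch, not the setup used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 10, 5          # number of agents, problem dimension (illustrative)
    alpha = 0.01          # constant step size (assumed value)
    sigma = 0.1           # standard deviation of gradient noise (assumed value)

    # Illustrative local costs f_i(x) = 0.5 * ||A_i x - b_i||^2; stand-in data,
    # not from the paper.
    A = rng.standard_normal((n, d, d)) + 2.0 * np.eye(d)
    b = rng.standard_normal((n, d))

    def noisy_grad(i, x):
        # Unbiased estimate of grad f_i(x): exact gradient plus zero-mean noise.
        return A[i].T @ (A[i] @ x - b[i]) + sigma * rng.standard_normal(d)

    # Doubly stochastic mixing matrix W for a ring graph (assumed topology).
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    x = rng.standard_normal((n, d))            # local iterates x_i
    g_prev = np.array([noisy_grad(i, x[i]) for i in range(n)])
    y = g_prev.copy()                          # trackers y_i, initialized at local gradients

    for _ in range(2000):
        x_next = W @ (x - alpha * y)           # mix with neighbors, step along tracker
        g_next = np.array([noisy_grad(i, x_next[i]) for i in range(n)])
        y = W @ y + g_next - g_prev            # update tracker of the average gradient
        x, g_prev = x_next, g_next

    x_bar = x.mean(axis=0)
    print("max distance of an agent from the network average:", np.abs(x - x_bar).max())

In this sketch, each agent mixes its iterate with its neighbors' and moves along its tracker y_i; the tracker combines neighbors' trackers with the change in the agent's own stochastic gradient, so that it follows the network-average gradient.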

Related articles:
arXiv:1805.11454 [math.OC] (Published 2018-05-25)
Distributed Stochastic Gradient Tracking Methods
arXiv:2003.07180 [math.OC] (Published 2020-03-13)
Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
arXiv:2105.12597 [math.OC] (Published 2021-05-26)
Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks