arXiv:2105.12597 [math.OC]

Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks

Wenjie Li, Mohamad Assaad

Published 2021-05-26 (Version 1)

We consider a distributed convex optimization problem over a network that is time-varying and not always strongly connected. The local cost function of each node is affected by a stochastic process, and all nodes collaborate to minimize the average of their local cost functions. The major challenge is that the gradients of the cost functions are assumed to be unavailable and must be estimated solely from numerical observations of the cost function values; this setting is known as zeroth-order stochastic convex optimization (ZOSCO). In this paper we take a first step toward the distributed optimization problem in the ZOSCO setting. The proposed algorithm performs two basic steps at each iteration: i) each node updates a local variable using a single-point gradient estimator of its own local cost function based on a random perturbation; ii) each node exchanges its local variable with its direct neighbors and then performs a weighted average. When the cost functions are smooth and strongly convex, the attainable optimization error is $O(T^{-1/2})$ after $T$ iterations, which is notable because $O(T^{-1/2})$ is the optimal convergence rate for the ZOSCO problem. We also investigate the optimization error for general Lipschitz convex functions and obtain a rate of $O(T^{-1/4})$.
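To make the two-step structure concrete, here is a minimal sketch of one iteration of such a scheme: a single-point zeroth-order gradient estimate built from one noisy function evaluation at a randomly perturbed point, followed by a weighted average over direct neighbors. The step size eta, smoothing radius delta, and weight matrix W below are illustrative assumptions, not the paper's exact choices or tuning.

```python
import numpy as np

def single_point_grad_estimate(f_noisy, x, delta, rng):
    """Estimate grad f(x) from a single noisy evaluation at a perturbed point."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    return (d / delta) * f_noisy(x + delta * u) * u

def distributed_zo_step(xs, local_costs, W, eta, delta, rng):
    """One iteration: (i) local zeroth-order update, (ii) neighbor averaging."""
    n = len(xs)
    # (i) each node moves against its single-point gradient estimate
    updated = [xs[i] - eta * single_point_grad_estimate(local_costs[i], xs[i], delta, rng)
               for i in range(n)]
    # (ii) weighted average with direct neighbors (W[i][j] > 0 only for network links)
    return [sum(W[i][j] * updated[j] for j in range(n)) for i in range(n)]

# Toy usage (hypothetical setup): 3 nodes, quadratic costs with observation noise.
rng = np.random.default_rng(0)
targets = [np.array([1.0, -1.0]), np.array([0.0, 2.0]), np.array([-1.0, 0.5])]
costs = [lambda x, a=a: np.sum((x - a) ** 2) + 0.01 * rng.standard_normal() for a in targets]
W = [[0.5, 0.5, 0.0], [0.25, 0.5, 0.25], [0.0, 0.5, 0.5]]  # fixed here; time-varying in general
xs = [np.zeros(2) for _ in range(3)]
for t in range(1, 2001):
    xs = distributed_zo_step(xs, costs, W, eta=1.0 / np.sqrt(t), delta=t ** -0.25, rng=rng)
```

With diminishing step size and smoothing radius as above, the node variables drift toward consensus near the minimizer of the average cost; the precise rates stated in the abstract rely on the paper's assumptions and parameter schedules.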

Related articles:
arXiv:2003.07180 [math.OC] (Published 2020-03-13)
Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
arXiv:2009.11069 [math.OC] (Published 2020-09-23)
Towards accelerated rates for distributed optimization over time-varying networks
arXiv:2307.01655 [math.OC] (Published 2023-07-04)
Decentralized optimization with affine constraints over time-varying networks