arXiv:1806.04207 [math.OC]

Swarming for Faster Convergence in Stochastic Optimization

Shi Pu, Alfredo Garcia

Published 2018-06-11 (Version 1)

We study a distributed framework for stochastic optimization which is inspired by models of collective motion found in nature (e.g., swarming) and has mild communication requirements. Specifically, we analyze a scheme in which each of $N > 1$ independent threads implements, in a distributed and unsynchronized fashion, a stochastic gradient-descent algorithm perturbed by a swarming potential. Assuming the overhead caused by synchronization is not negligible, we show that the swarming-based approach exhibits better performance than a centralized algorithm (based upon the average of $N$ observations) in terms of real-time convergence speed. We also derive an error bound that is monotonically decreasing in network size and connectivity. We characterize the scheme's finite-time performance for both convex and non-convex objective functions.
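To make the update rule concrete, below is a minimal sketch of the kind of scheme the abstract describes: each of $N$ threads takes a noisy gradient step and is additionally pulled toward its neighbors' iterates by a linear attraction (swarming) term. The step sizes, noise model, and complete-graph topology are illustrative assumptions, not the paper's exact choices; the paper's threads also run unsynchronized, whereas this sketch updates all threads in lockstep for simplicity.

```python
import numpy as np

def swarming_sgd(grad_f, dim, N=10, steps=2000, alpha=0.01, beta=0.1,
                 noise_std=0.5, adjacency=None, rng=None):
    """Sketch of swarming-perturbed SGD (not the paper's exact scheme).

    Each of N threads takes a noisy gradient step plus a linear
    attraction toward its neighbors' current iterates.
    All parameter names/values here are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    if adjacency is None:
        # Assumed topology: complete graph (all-to-all communication).
        adjacency = np.ones((N, N)) - np.eye(N)
    x = rng.standard_normal((N, dim))  # independent initial iterates

    for _ in range(steps):
        # Noisy gradient for each thread (stochastic first-order oracle).
        g = np.array([grad_f(x[i]) + noise_std * rng.standard_normal(dim)
                      for i in range(N)])
        # Swarming term: sum_j a_ij (x_j - x_i), attraction to neighbors.
        swarm = adjacency @ x - adjacency.sum(axis=1, keepdims=True) * x
        # Gradient step perturbed by the swarming potential.
        x = x - alpha * g + alpha * beta * swarm

    return x.mean(axis=0)  # average of the swarm as the final estimate

# Usage: minimize the quadratic f(x) = ||x||^2 / 2, whose gradient is x.
if __name__ == "__main__":
    x_final = swarming_sgd(grad_f=lambda x: x, dim=5)
    print(np.linalg.norm(x_final))  # should be close to 0
```

The attraction term keeps the threads' iterates clustered, so the noise in their individual gradient estimates tends to average out across the swarm without requiring the threads to explicitly synchronize on a shared average.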

Related articles:
arXiv:2210.05995 [math.OC] (Published 2022-10-12)
SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization
arXiv:2402.03982 [math.OC] (Published 2024-02-06)
On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions
arXiv:2306.04174 [math.OC] (Published 2023-06-07)
End-to-End Learning for Stochastic Optimization: A Bayesian Perspective