arXiv:1512.02970 [cs.LG]

Scaling Up Distributed Stochastic Gradient Descent Using Variance Reduction

Soham De, Gavin Taylor, Tom Goldstein

Published 2015-12-09 (Version 1)

Variance-reduced stochastic gradient descent methods minimize model-fitting problems over large datasets with low iteration complexity and fast asymptotic convergence rates. However, they scale poorly in distributed settings. In this paper, we propose CentralVR, a highly parallel variance-reduction method whose performance scales linearly with the number of worker nodes. We also propose distributed versions of popular variance-reduction methods that support a high degree of parallelization. Unlike existing distributed stochastic gradient schemes, CentralVR exhibits linear performance gains up to thousands of cores on massive datasets.
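
The abstract does not spell out CentralVR's update rule, so as a point of reference the sketch below illustrates the underlying variance-reduction idea in the style of SVRG on a single machine: maintain a snapshot point and its full gradient, and correct each stochastic gradient with the snapshot's gradient so the estimator's variance shrinks near the snapshot. The function names (`svrg`, `grad_i`) and the least-squares example are illustrative choices, not the paper's algorithm or code.

```python
# Minimal SVRG-style variance-reduced SGD sketch (not CentralVR itself).
import numpy as np

def svrg(grad_i, w0, n, lr=0.1, epochs=10, inner_steps=None, rng=None):
    """Minimize (1/n) * sum_i f_i(w), where grad_i(w, i) returns grad f_i(w)."""
    rng = np.random.default_rng() if rng is None else rng
    inner_steps = n if inner_steps is None else inner_steps
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()                                   # snapshot point
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Unbiased, variance-reduced gradient estimate: the correction
            # term vanishes as w approaches the snapshot w_snap.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w -= lr * g
    return w

if __name__ == "__main__":
    # Example: least-squares fit, f_i(w) = 0.5 * (x_i . w - y_i)^2
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true
    grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]
    w_hat = svrg(grad_i, np.zeros(5), n=len(X), lr=0.05, epochs=20, rng=rng)
    print("parameter error:", np.linalg.norm(w_hat - w_true))
```

In a distributed setting of the kind the abstract describes, one would shard the data across worker nodes and aggregate statistics such as the full gradient at a central point; the specific scheme CentralVR uses to achieve linear scaling is detailed in the paper, not here.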

Related articles:
arXiv:1512.01708 [cs.LG] (Published 2015-12-05)
Variance Reduction for Distributed Stochastic Gradient Descent
arXiv:2106.10796 [cs.LG] (Published 2021-06-21)
CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation
arXiv:2001.05918 [cs.LG] (Published 2020-01-16)
Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent