arXiv:1805.10559 [stat.ML]

cpSGD: Communication-efficient and differentially-private distributed SGD

Naman Agarwal, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, H. Brendan McMahan

Published 2018-05-27 (Version 1)

Distributed stochastic gradient descent is an important subroutine in distributed learning. A setting of particular interest is when the clients are mobile devices, where two important concerns are communication efficiency and the privacy of the clients. Several recent works have focused on reducing the communication cost or introducing privacy guarantees, but none of the proposed communication-efficient methods are known to be privacy preserving, and none of the known privacy mechanisms are known to be communication efficient. To this end, we study algorithms that achieve both communication efficiency and differential privacy. For $d$ variables and $n \approx d$ clients, the proposed method uses $O(\log \log(nd))$ bits of communication per client per coordinate and ensures constant privacy. We also extend and improve previous analysis of the \emph{Binomial mechanism}, showing that it achieves nearly the same utility as the Gaussian mechanism while requiring fewer representation bits, which can be of independent interest.
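
To make the abstract's key contrast concrete, the sketch below illustrates (in rough terms, not as the paper's actual cpSGD protocol) why Binomial noise pairs naturally with communication efficiency: a client quantizes its clipped gradient to a small integer grid and adds discrete, centered Binomial noise, so each coordinate remains a small integer that can be encoded in few bits, whereas the Gaussian mechanism perturbs real values and needs full-precision floats. The function names, quantization scheme, and all parameters (clip_norm, levels, trials, p, sigma) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch only: contrasts a discrete Binomial-noise perturbation of a
# quantized gradient with a continuous Gaussian-noise perturbation. It does not
# implement the privacy accounting or aggregation of cpSGD.

def quantize(grad, clip_norm=1.0, levels=16):
    """Clip the gradient to norm clip_norm and map each coordinate to an integer in [0, levels-1]."""
    g = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    scaled = (g + clip_norm) / (2 * clip_norm) * (levels - 1)
    return np.round(scaled).astype(np.int64)

def binomial_mechanism(int_grad, trials=64, p=0.5):
    """Add centered Binomial(trials, p) noise; the output stays on a small integer grid."""
    noise = np.random.binomial(trials, p, size=int_grad.shape) - int(trials * p)
    return int_grad + noise

def gaussian_mechanism(grad, sigma=0.5):
    """Reference point: real-valued Gaussian noise, which requires transmitting floats."""
    return grad + np.random.normal(0.0, sigma, size=grad.shape)

rng = np.random.default_rng(0)
grad = rng.normal(size=10)
q = quantize(grad)
print("quantized:       ", q)
print("binomial-noised: ", binomial_mechanism(q))
print("gaussian-noised: ", np.round(gaussian_mechanism(grad), 3))
```

The design point the sketch is meant to convey: because the Binomial-noised coordinates are bounded integers, each one can be sent with a bounded number of bits, which is what enables the paper's $O(\log \log(nd))$ per-coordinate communication claim; the Gaussian baseline offers no such bound.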

Related articles:
arXiv:2410.08934 [stat.ML] (Published 2024-10-11)
The Effect of Personalization in FedProx: A Fine-grained Analysis on Statistical Accuracy and Communication Efficiency
arXiv:2008.04975 [stat.ML] (Published 2020-08-11)
FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching
arXiv:2109.01326 [stat.ML] (Published 2021-09-03)
Statistical Estimation and Inference via Local SGD in Federated Learning