arXiv:2102.08166 [cs.LG]

Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?

Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, John Stephan

Published 2021-02-16 (Version 1)

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML). Specifically, we study whether a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and $(\alpha,f)$-Byzantine resilience. To the best of our knowledge, this is the first work to tackle this problem from a theoretical point of view. A key finding of our analyses is that the classical approaches to these two (seemingly) orthogonal issues are incompatible. More precisely, we show that a direct composition of these techniques makes the guarantees of the resulting SGD algorithm depend unfavourably upon the number of parameters in the ML model, making the training of large models practically infeasible. We validate our theoretical results through numerical experiments on publicly available datasets, showing that it is impractical to ensure DP and Byzantine resilience simultaneously.
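The "direct composition" the abstract refers to pairs two standard building blocks: each worker clips its stochastic gradient and adds Gaussian noise (the usual DP mechanism), and the server combines the workers' reports with a Byzantine-resilient aggregation rule. Below is a minimal sketch of one such round in NumPy; the function names and parameters are illustrative, and coordinate-wise median stands in for a generic robust aggregator (the paper's actual $(\alpha,f)$-resilient rules and its privacy accounting are not reproduced here).

```python
import numpy as np

def dp_noised_gradient(grad, clip_norm=1.0, sigma=1.0, rng=None):
    # Per-worker DP step: clip the gradient to L2 norm `clip_norm`,
    # then add Gaussian noise scaled by the clipping threshold.
    rng = rng if rng is not None else np.random.default_rng()
    clipped = grad / max(1.0, np.linalg.norm(grad) / clip_norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def coordinate_wise_median(grads):
    # Byzantine-resilient aggregation: coordinate-wise median of the
    # workers' (noised) gradient reports.
    return np.median(np.stack(grads), axis=0)

# One toy round with 5 workers and a d-dimensional model.
rng = np.random.default_rng(0)
d, n_workers = 10, 5
true_grad = rng.normal(size=d)
reports = [dp_noised_gradient(true_grad, sigma=2.0, rng=rng)
           for _ in range(n_workers)]
update = coordinate_wise_median(reports)
```

Note that the DP noise is added independently to every coordinate, so its total magnitude grows with the model dimension; the abstract's central claim is that this is where the composed guarantees break down, degrading with the number of parameters.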

Related articles:
arXiv:2107.01895 [cs.LG] (Published 2021-07-05)
Optimizing the Numbers of Queries and Replies in Federated Learning with Differential Privacy
arXiv:2012.07828 [cs.LG] (Published 2020-12-14, updated 2021-08-23)
Robustness Threats of Differential Privacy
arXiv:2211.10708 [cs.LG] (Published 2022-11-19)
A Survey on Differential Privacy with Machine Learning and Future Outlook