arXiv Analytics

arXiv:1911.10071 [cs.LG]

Federated Learning with Bayesian Differential Privacy

Aleksei Triastcyn, Boi Faltings

Published 2019-11-22 (Version 1)

We consider the problem of reinforcing federated learning with formal privacy guarantees. We propose to employ Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, to provide sharper privacy loss bounds. We adapt the Bayesian privacy accounting method to the federated setting and suggest multiple improvements for more efficient privacy budgeting at different levels. Our experiments show a significant advantage over the state-of-the-art differential privacy bounds for federated learning on image classification tasks, including a medical application, bringing the privacy budget below 1 at the client level and below 0.1 at the instance level. Lower amounts of noise also benefit model accuracy and reduce the number of communication rounds.
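The abstract's tighter privacy bounds translate into less noise injected per round. As a minimal sketch of where that noise enters, the following illustrates the clip-and-noise step commonly used in differentially private federated learning (DP-FedAvg style): each client's update is norm-clipped and perturbed with Gaussian noise before aggregation. The function names and parameter values here are illustrative assumptions, not the paper's actual method or accounting.

```python
import numpy as np

def privatize_client_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client update to bound its sensitivity, then add Gaussian noise.

    This is the standard Gaussian mechanism applied per client; tighter
    accounting (e.g., the Bayesian accounting discussed above) would allow
    a smaller noise_multiplier for the same privacy budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Server-side aggregation of privatized updates (federated averaging).
rng = np.random.default_rng(42)
client_updates = [np.ones(4) * s for s in (0.5, 2.0, 3.0)]
aggregate = np.mean(
    [privatize_client_update(u, rng=rng) for u in client_updates], axis=0
)
```

With `noise_multiplier=0` the function reduces to pure norm clipping, which is a convenient way to sanity-check the sensitivity bound in isolation.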

Comments: Accepted at 2019 IEEE International Conference on Big Data (IEEE Big Data 2019). 10 pages, 2 figures, 4 tables
Categories: cs.LG, cs.AI, cs.CR, cs.DC, stat.ML
Related articles:
- arXiv:2009.06005 [cs.LG] (Published 2020-09-13): FLaPS: Federated Learning and Privately Scaling
- arXiv:2005.02503 [cs.LG] (Published 2020-05-05): Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning
- arXiv:2011.14818 [cs.LG] (Published 2020-11-25): Advancements of federated learning towards privacy preservation: from federated learning to split learning