arXiv Analytics

arXiv:2005.02503 [cs.LG]

Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning

Semih Yagli, Alex Dytso, H. Vincent Poor

Published 2020-05-05, Version 1

Machine learning algorithms operating on mobile networks can be categorized into three settings. First is the classical setting, in which end-user devices send their data to a central server, where the data are used to train a model. Second is the distributed setting, in which each device trains its own model and sends its model parameters to a central server, where these parameters are aggregated into one final model. Third is the federated learning setting, in which, at any given time $t$, a certain number of active end users train on their own local data together with feedback provided by the central server, and then send their newly estimated model parameters to the server. The server then aggregates these new parameters, updates its own model, and feeds the updated parameters back to all end users, continuing this process until convergence. The main objective of this work is to provide an information-theoretic framework for all of the aforementioned learning paradigms. Using this framework, we develop upper and lower bounds on the generalization error, together with bounds on the privacy leakage, in the classical, distributed, and federated learning settings.

Keywords: Federated Learning, Distributed Learning, Machine Learning, Model Aggregation.
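The federated loop described in the abstract — a subset of active clients trains locally on the current server model, the server averages the returned parameters and broadcasts them back — can be illustrated with a minimal sketch. This is not the authors' method; it assumes a toy one-parameter least-squares model and simple parameter averaging, with all names (`local_update`, `federated_round`) hypothetical:

```python
import random

def local_update(params, data, lr=0.1):
    # One gradient step of a least-squares fit y = w*x on this client's local data.
    w = params
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(server_params, client_datasets, num_active=2):
    # At each round, only a subset of "active" clients trains locally
    # starting from the parameters fed back by the server.
    active = random.sample(client_datasets, num_active)
    updates = [local_update(server_params, data) for data in active]
    # The server aggregates the new parameters (here: a plain average)
    # to update its own model.
    return sum(updates) / len(updates)

# Toy data: each client observes noisy samples of y = 3*x.
random.seed(0)
clients = [[(x, 3 * x + random.gauss(0, 0.01)) for x in (1.0, 2.0)]
           for _ in range(4)]

w = 0.0
for _ in range(200):  # iterate the round until (approximate) convergence
    w = federated_round(w, clients, num_active=2)
print(round(w, 2))  # converges near the true slope 3
```

In this sketch the distributed setting of the abstract corresponds to a single round with every client active, and the classical setting to pooling all client data at the server before training.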

Comments: Accepted for publication in Proceedings of 21st IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2020. arXiv version is 10pt font, 6 Pages. This is the same document as the SPAWC version, except that the conference version is written with 9pt font to meet the strict page margin requirements
Categories: cs.LG, cs.CR, cs.DC, cs.NE, stat.ML
Related articles:
arXiv:1908.07873 [cs.LG] (Published 2019-08-21)
Federated Learning: Challenges, Methods, and Future Directions
arXiv:1911.04559 [cs.LG] (Published 2019-11-11)
Privacy is What We Care About: Experimental Investigation of Federated Learning on Edge Devices
arXiv:2001.08300 [cs.LG] (Published 2020-01-22)
Data Selection for Federated Learning with Relevant and Irrelevant Data at Clients