arXiv Analytics

arXiv:2010.13600 [cs.LG]

Optimal Importance Sampling for Federated Learning

Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Published 2020-10-26Version 1

Federated learning involves a mixture of centralized and decentralized processing tasks, where a server regularly selects a sample of the agents, and these in turn sample their local data to compute stochastic gradients for their learning updates. This process runs continually. The sampling of both agents and data is generally uniform; in this work, however, we consider non-uniform sampling. We derive optimal importance sampling strategies for both agent and data selection, and show that non-uniform sampling without replacement improves the performance of the original FedAvg algorithm. We run experiments on regression and classification problems to illustrate the theoretical results.
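The two-level sampling scheme the abstract describes can be sketched in a few lines. The snippet below is a minimal toy illustration, not the paper's method: it runs FedAvg-style rounds on a synthetic least-squares problem, samples clients without replacement with probabilities proportional to their local gradient norms (one plausible importance score; the paper derives the optimal choice), and debiases each sampled gradient by its inverse selection probability so the aggregate update approximately matches full participation. All names and the gradient-norm scoring are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: N clients, each holding a local least-squares problem.
N, d, K = 10, 3, 4          # clients, model dimension, clients sampled per round
w_true = rng.normal(size=d)
A = [rng.normal(size=(20, d)) for _ in range(N)]
b = [Ai @ w_true + 0.01 * rng.normal(size=20) for Ai in A]

def local_grad(w, i):
    """Gradient of client i's local least-squares loss (1/2)||A_i w - b_i||^2 / n_i."""
    return A[i].T @ (A[i] @ w - b[i]) / len(b[i])

w = np.zeros(d)
lr = 0.1
for t in range(200):
    # Importance scores: proportional to local gradient norms (an assumption here,
    # standing in for the optimal weights derived in the paper).
    scores = np.array([np.linalg.norm(local_grad(w, i)) for i in range(N)]) + 1e-12
    p = scores / scores.sum()
    # Non-uniform client sampling without replacement.
    sel = rng.choice(N, size=K, replace=False, p=p)
    # Reweight each sampled gradient by 1/(N * p_i) so the aggregate is an
    # (approximately) unbiased estimate of the full-participation update.
    g = sum(local_grad(w, i) / (N * p[i]) for i in sel) / K
    w -= lr * g

print(np.linalg.norm(w - w_true))  # should be small: w approaches w_true
```

Uniform sampling corresponds to `p = np.full(N, 1.0 / N)`; the importance-weighted variant reduces the variance of the aggregated update when client gradients are heterogeneous.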

Related articles:
arXiv:2001.08300 [cs.LG] (Published 2020-01-22)
Data Selection for Federated Learning with Relevant and Irrelevant Data at Clients
arXiv:1910.06799 [cs.LG] (Published 2019-10-14)
Federated Learning for Coalition Operations
D. Verma et al.
arXiv:1902.01046 [cs.LG] (Published 2019-02-04)
Towards Federated Learning at Scale: System Design