
arXiv:2107.10663 [stat.ML]

Fed-ensemble: Improving Generalization through Model Ensembling in Federated Learning

Naichen Shi, Fan Lai, Raed Al Kontar, Mosharaf Chowdhury

Published 2021-07-21 (Version 1)

In this paper we propose Fed-ensemble: a simple approach that brings model ensembling to federated learning (FL). Instead of aggregating local models to update a single global model, Fed-ensemble uses random permutations to update a group of K models and then obtains predictions through model averaging. Fed-ensemble can be readily utilized within established FL methods and does not impose a computational overhead, as it only requires one of the K models to be sent to a client in each communication round. Theoretically, we show that predictions on new data from all K models belong to the same predictive posterior distribution under a neural tangent kernel regime. This result in turn sheds light on the generalization advantages of model averaging. We also illustrate that Fed-ensemble has an elegant Bayesian interpretation. Empirical results show that our model has superior performance over several FL algorithms on a wide range of data sets, and excels in heterogeneous settings often encountered in FL applications.
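The loop below is a minimal NumPy sketch of the training scheme the abstract describes: K models, a random permutation that assigns exactly one model to each client per round (so per-round client cost matches single-model FL), FedAvg-style aggregation for each model separately, and prediction by averaging the K outputs. The toy linear-regression clients, the hyperparameters, and the helper names (local_update, predict) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

K = 4             # number of ensemble members (assumed value)
NUM_CLIENTS = 20
ROUNDS = 50
DIM = 5

# Toy heterogeneous clients: each holds a shifted local linear-regression dataset.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(30, DIM)) + rng.normal(scale=0.5, size=DIM)  # per-client shift
    y = X @ true_w + rng.normal(scale=0.1, size=30)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """One client's local gradient steps on squared loss, starting from the server model w."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# K ensemble members, each with its own random initialization.
models = [rng.normal(scale=0.1, size=DIM) for _ in range(K)]

for rnd in range(ROUNDS):
    # Random permutation: each client receives exactly one of the K models this round.
    assignment = rng.permuted(np.arange(NUM_CLIENTS) % K)
    updates = [[] for _ in range(K)]
    for c, (X, y) in enumerate(clients):
        k = assignment[c]
        updates[k].append(local_update(models[k], X, y))
    # Server aggregates each model's returned updates independently (FedAvg-style mean).
    for k in range(K):
        if updates[k]:
            models[k] = np.mean(updates[k], axis=0)

def predict(x):
    """Ensemble prediction: average the K models' outputs."""
    return np.mean([x @ w for w in models])

x_new = rng.normal(size=DIM)
print("ensemble prediction:", predict(x_new), "target:", x_new @ true_w)

Note the design point this sketch makes concrete: only the assigned model travels to a client in a given round, so communication and local computation per round are the same as in single-model FL, while the server still maintains K models for averaging at prediction time.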

Related articles:
arXiv:2211.14115 [stat.ML] (Published 2022-11-25)
Inverse Solvability and Security with Applications to Federated Learning
arXiv:2208.11512 [stat.ML] (Published 2022-08-22)
FedOS: using open-set learning to stabilize training in federated learning
arXiv:2107.03770 [stat.ML] (Published 2021-07-08)
Federated Learning as a Mean-Field Game