{ "id": "2107.10663", "version": "v1", "published": "2021-07-21T14:40:14.000Z", "updated": "2021-07-21T14:40:14.000Z", "title": "Fed-ensemble: Improving Generalization through Model Ensembling in Federated Learning", "authors": [ "Naichen Shi", "Fan Lai", "Raed Al Kontar", "Mosharaf Chowdhury" ], "categories": [ "stat.ML", "cs.LG" ], "abstract": "In this paper we propose Fed-ensemble: a simple approach that bringsmodel ensembling to federated learning (FL). Instead of aggregating localmodels to update a single global model, Fed-ensemble uses random permutations to update a group of K models and then obtains predictions through model averaging. Fed-ensemble can be readily utilized within established FL methods and does not impose a computational overhead as it only requires one of the K models to be sent to a client in each communication round. Theoretically, we show that predictions on newdata from all K models belong to the same predictive posterior distribution under a neural tangent kernel regime. This result in turn sheds light onthe generalization advantages of model averaging. We also illustrate thatFed-ensemble has an elegant Bayesian interpretation. Empirical results show that our model has superior performance over several FL algorithms,on a wide range of data sets, and excels in heterogeneous settings often encountered in FL applications.", "revisions": [ { "version": "v1", "updated": "2021-07-21T14:40:14.000Z" } ], "analyses": { "keywords": [ "federated learning", "improving generalization", "model ensembling", "sheds light onthe generalization advantages", "fed-ensemble" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }