arXiv:2303.17942 [cs.LG]

Benchmarking FedAvg and FedCurv for Image Classification Tasks

Bruno Casella, Roberto Esposito, Carlo Cavazzoni, Marco Aldinucci

Published 2023-03-31 (Version 1)

Classic Machine Learning techniques require training on data available in a single data lake. However, aggregating data from different owners is not always convenient, for reasons including security, privacy and secrecy. Data carry a value that might vanish when shared with others; avoiding data sharing enables industrial applications where security and privacy are of paramount importance, making it possible to train global models by implementing only local policies, which can be run independently and even on air-gapped data centres. Federated Learning (FL) is a distributed machine learning approach which has emerged as an effective way to address privacy concerns by sharing only local AI models while keeping the data decentralized. Two critical challenges of Federated Learning are managing heterogeneous systems in the same federated network and dealing with real data, which are often not independently and identically distributed (non-IID) among the clients. In this paper, we focus on the second problem, i.e., the statistical heterogeneity of the data in the same federated network. In this setting, local models might stray far from the local optimum of the complete dataset, possibly hindering the convergence of the federated model. Several Federated Learning algorithms aimed at tackling the non-IID setting, such as FedAvg, FedProx and Federated Curvature (FedCurv), have already been proposed. This work provides an empirical assessment of the behaviour of FedAvg and FedCurv in common non-IID scenarios. Results show that the number of epochs per round is an important hyper-parameter that, when tuned appropriately, can lead to significant performance gains while reducing the communication cost. As a side product of this work, we release the non-IID versions of the datasets we used, so as to facilitate further comparisons by the FL community.
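
To make the FedAvg aggregation step discussed above concrete, the following is a minimal, self-contained sketch of sample-size-weighted model averaging on a server. It is an illustration only, not the authors' implementation: the function name, the NumPy representation of client models, and the example client sizes are all assumptions for demonstration purposes.

```python
# Minimal sketch of FedAvg-style server aggregation (illustrative only).
# Each client's model is represented as a list of NumPy arrays, one per layer.
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Return the sample-size-weighted average of client model parameters.

    client_weights: list of per-client models; each model is a list of np.ndarray layers.
    client_sizes:   number of local training samples per client (aggregation weights).
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        # Weighted sum of this layer across clients.
        layer_avg = sum(
            (size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Example: three clients, each holding a single 2x2 weight matrix,
# with deliberately unbalanced (non-IID-like) dataset sizes.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [100, 200, 700]
global_model = fedavg_aggregate(clients, sizes)
print(global_model[0])  # pulled towards the largest client's parameters
```

In this sketch, the number of local epochs each client runs before its model is sent for aggregation corresponds to the epochs-per-round hyper-parameter whose effect the paper evaluates; FedCurv would additionally add a curvature-based penalty to each client's local objective, which is not shown here.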

Comments: 12 pages, Proceedings of ITADATA22, The 1st Italian Conference on Big Data and Data Science; Published on CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073), Vol. 3340, pp. 99-110, 2022
Journal: CEUR Workshop Proceedings, Vol. 3340, pp. 99-110, 2022
Categories: cs.LG, cs.CV, cs.DC
Subjects: 68T07, I.2.6