arXiv Analytics

arXiv:2108.03508 [cs.LG]

The Effect of Training Parameters and Mechanisms on Decentralized Federated Learning based on MNIST Dataset

Zhuofan Zhang, Mi Zhou, Kaicheng Niu, Chaouki Abdallah

Published 2021-08-07 (Version 1)

Federated Learning is an approach suited to training models on decentralized data, but its requirement for a central "server" node is a bottleneck. In this paper, we first introduce the notion of Decentralized Federated Learning (DFL). We then run experiments on several setups: changing the model aggregation frequency, switching from independent and identically distributed (IID) dataset partitioning to non-IID partitioning with partial global sharing, using different optimization methods across clients, and breaking models into segments with partial sharing. All experiments are run on the MNIST handwritten digits dataset. We observe that these altered training procedures are generally robust, albeit non-optimal. We also observe training failures when the variance between model weights becomes too large. The open-source experiment code is available on GitHub at https://github.com/zhzhang2018/DecentralizedFL.
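To make the setting concrete, below is a minimal, hypothetical sketch of decentralized aggregation with a tunable aggregation frequency. It is not the authors' implementation (see the linked repository): the ring topology, the `AGG_EVERY` parameter, the toy quadratic loss, and the helper names `local_step` and `neighbor_average` are all illustrative assumptions standing in for actual MNIST training.

```python
# Hypothetical sketch of decentralized federated averaging (not the authors' code).
# Clients run local SGD and periodically average weights with ring neighbors,
# replacing the central server's aggregation step.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 4   # peers in the decentralized network (no central server)
AGG_EVERY = 5     # aggregation frequency: average with neighbors every 5 local steps
DIM = 10          # toy model: a single weight vector instead of a full MNIST classifier

# Each client holds its own copy of the model weights.
weights = [rng.normal(size=DIM) for _ in range(NUM_CLIENTS)]

def local_step(w, lr=0.1):
    """One local SGD step on a toy quadratic loss ||w - 1||^2 (stand-in for MNIST training)."""
    grad = 2.0 * (w - np.ones_like(w))
    return w - lr * grad

def neighbor_average(all_weights, i):
    """Average client i's weights with its two ring neighbors (decentralized aggregation)."""
    left, right = (i - 1) % NUM_CLIENTS, (i + 1) % NUM_CLIENTS
    return (all_weights[left] + all_weights[i] + all_weights[right]) / 3.0

for step in range(1, 51):
    # Local training on each client's private data shard.
    weights = [local_step(w) for w in weights]
    # Periodic peer-to-peer averaging instead of a server-side FedAvg step.
    if step % AGG_EVERY == 0:
        weights = [neighbor_average(weights, i) for i in range(NUM_CLIENTS)]

# After training, client models should have converged toward one another.
spread = max(np.linalg.norm(weights[i] - weights[0]) for i in range(NUM_CLIENTS))
print(f"max weight divergence across clients: {spread:.4f}")
```

Raising `AGG_EVERY` corresponds to less frequent aggregation, which the paper's experiments vary; large divergence between client weights before averaging is the kind of condition under which the abstract reports training failures.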

Related articles:
arXiv:2501.03119 [cs.LG] (Published 2025-01-06)
From Models to Network Topologies: A Topology Inference Attack in Decentralized Federated Learning
arXiv:1905.06731 [cs.LG] (Published 2019-05-16)
BrainTorrent: A Peer-to-Peer Environment for Decentralized Federated Learning
arXiv:2104.07365 [cs.LG] (Published 2021-04-15)
D-Cliques: Compensating NonIIDness in Decentralized Federated Learning with Topology