arXiv Analytics

arXiv:2104.07365 [cs.LG]

D-Cliques: Compensating NonIIDness in Decentralized Federated Learning with Topology

Aurélien Bellet, Anne-Marie Kermarrec, Erick Lavoie

Published 2021-04-15 (Version 1)

The convergence speed of machine learning models trained with Federated Learning is significantly affected by non-independent and identically distributed (non-IID) data partitions, even more so in a fully decentralized setting without a central server. In this paper, we show that the impact of local class bias, an important type of data non-IIDness, can be significantly reduced by carefully designing the underlying communication topology. We present D-Cliques, a novel topology that reduces gradient bias by grouping nodes into interconnected cliques such that the local joint distribution within a clique is representative of the global class distribution. We also show how to adapt the updates of decentralized SGD to obtain unbiased gradients and to implement an effective momentum with D-Cliques. Our empirical evaluation on MNIST and CIFAR10 demonstrates that our approach provides a convergence speed similar to that of a fully-connected topology with a significant reduction in the number of edges and messages. In a 1000-node topology, D-Cliques requires 98% fewer edges and 96% fewer total messages, with further possible gains using a small-world topology across cliques.
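To make the clique-construction idea concrete, below is a minimal, illustrative Python sketch (not the authors' implementation). It assumes each node's local data is dominated by a single class, greedily fills each clique so that it covers as many classes as possible, and then links cliques with a simple ring of inter-clique edges. The function names (build_d_cliques, interclique_edges) and the ring inter-clique topology are hypothetical simplifications for illustration.

```python
import random
from collections import defaultdict

def build_d_cliques(node_classes, clique_size):
    """Greedily group nodes into cliques whose combined class mix
    approximates the global class distribution.

    node_classes: dict mapping node id -> dominant class of its local data.
    clique_size: target number of nodes per clique (e.g. number of classes).
    Returns a list of cliques, each a list of node ids.
    """
    # Bucket nodes by their dominant class.
    by_class = defaultdict(list)
    for node, cls in node_classes.items():
        by_class[cls].append(node)
    for nodes in by_class.values():
        random.shuffle(nodes)

    cliques = []
    while any(by_class.values()):
        clique = []
        # Take one node per class, starting from the classes with the most
        # remaining nodes, until the clique is full or nodes run out.
        for cls in sorted(by_class, key=lambda c: -len(by_class[c])):
            if len(clique) == clique_size:
                break
            if by_class[cls]:
                clique.append(by_class[cls].pop())
        cliques.append(clique)
    return cliques

def interclique_edges(cliques):
    """Connect cliques in a ring by linking one representative node of each
    clique to a node of the next clique (one simple inter-clique topology)."""
    edges = []
    for i, clique in enumerate(cliques):
        nxt = cliques[(i + 1) % len(cliques)]
        edges.append((clique[0], nxt[0]))
    return edges

# Example: 100 nodes, 10 classes, each node's data skewed toward one class.
nodes = {n: n % 10 for n in range(100)}
cliques = build_d_cliques(nodes, clique_size=10)
ring = interclique_edges(cliques)
print(len(cliques), "cliques;", len(ring), "inter-clique edges")
```

In this sketch, each clique's combined data covers all ten classes, so gradients averaged within a clique are far less biased than each node's local gradient; the sparse ring keeps the total edge count far below that of a fully-connected graph, which is the trade-off the abstract quantifies.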

Related articles: Most relevant | Search more
arXiv:2306.09750 [cs.LG] (Published 2023-06-16)
Fedstellar: A Platform for Decentralized Federated Learning
arXiv:2501.03119 [cs.LG] (Published 2025-01-06)
From Models to Network Topologies: A Topology Inference Attack in Decentralized Federated Learning
arXiv:2407.05141 [cs.LG] (Published 2024-07-06)
Impact of Network Topology on Byzantine Resilience in Decentralized Federated Learning