arXiv:2407.04460 [cs.LG]

Smart Sampling: Helping from Friendly Neighbors for Decentralized Federated Learning

Lin Wang, Yang Chen, Yongxin Guo, Xiaoying Tang

Published 2024-07-05 (Version 1)

Federated Learning (FL) is gaining widespread interest for its ability to share knowledge while preserving privacy and reducing communication costs. Unlike Centralized FL, Decentralized FL (DFL) employs a network architecture that eliminates the need for a central server, allowing direct communication among clients and leading to significant communication resource savings. However, due to data heterogeneity, not all neighboring nodes contribute to enhancing the local client's model performance. In this work, we introduce AFIND+, a simple yet efficient algorithm for sampling and aggregating neighbors in DFL, with the aim of leveraging collaboration to improve clients' model performance. AFIND+ identifies helpful neighbors, adaptively adjusts the number of selected neighbors, and strategically aggregates the sampled neighbors' models based on their contributions. Numerical results on real-world datasets with diverse data partitions demonstrate that AFIND+ outperforms other sampling algorithms in DFL and is compatible with most existing DFL optimization algorithms.
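The abstract outlines three steps: identify helpful neighbors, adapt how many are kept, and aggregate their models by contribution. The paper's actual AFIND+ rules are not reproduced here, but the general pattern can be sketched as a toy illustration: keep only neighbors whose model lowers the local loss (which adaptively fixes the number selected), then mix the local model with a contribution-weighted average of the kept neighbors. All names, the 1-D linear model, and the 50/50 mixing coefficient are my own simplifying assumptions, not the paper's method.

```python
def local_loss(w, data):
    """Mean squared error of a toy 1-D linear model y = w * x on local data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def select_and_aggregate(own_w, neighbor_ws, data):
    """Hypothetical sketch of 'friendly neighbor' sampling and aggregation.

    A neighbor is kept only if its model improves on the local loss, so the
    number of selected neighbors adapts to how helpful the neighborhood is.
    Kept neighbors are averaged with weights proportional to their loss
    improvement, then mixed with the local model (fixed 0.5/0.5 mix here,
    an arbitrary choice for illustration).
    """
    base = local_loss(own_w, data)
    gains = {}
    for nid, w in neighbor_ws.items():
        improvement = base - local_loss(w, data)
        if improvement > 0:  # only neighbors that actually help locally
            gains[nid] = improvement
    if not gains:  # no helpful neighbor: keep the local model unchanged
        return own_w, []
    total = sum(gains.values())
    neighbor_avg = sum(neighbor_ws[nid] * g / total for nid, g in gains.items())
    new_w = 0.5 * own_w + 0.5 * neighbor_avg
    return new_w, sorted(gains)
```

In this sketch, a neighbor holding the true model (w = 2 on data generated by y = 2x) is selected while a harmful one (w = -1) is filtered out, and the local model moves toward the helpful neighbor's weights.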

Related articles:
arXiv:1905.06731 [cs.LG] (Published 2019-05-16)
BrainTorrent: A Peer-to-Peer Environment for Decentralized Federated Learning
arXiv:2405.08252 [cs.LG] (Published 2024-05-14)
Smart Sampling: Self-Attention and Bootstrapping for Improved Ensembled Q-Learning
arXiv:2104.07365 [cs.LG] (Published 2021-04-15)
D-Cliques: Compensating NonIIDness in Decentralized Federated Learning with Topology