arXiv:2006.09637 [cs.LG]

FedCD: Improving Performance in non-IID Federated Learning

Kavya Kopparapu, Eric Lin, Jessica Zhao

Published 2020-06-17, Version 1

Federated learning has been widely applied to enable decentralized devices, each holding its own local data, to learn a shared model. However, learning from real-world data is challenging because it is rarely independently and identically distributed (IID) across edge devices, a key assumption of current high-performing, low-bandwidth algorithms. We present a novel approach, FedCD, which clones and deletes models to dynamically group devices with similar data. Experiments on the CIFAR-10 dataset show that FedCD achieves higher accuracy and faster convergence than a FedAvg baseline on non-IID data while incurring minimal computation, communication, and storage overhead.
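The abstract only states that FedCD clones and deletes models to group devices with similar data; it does not spell out the procedure. Below is a minimal, hypothetical sketch of one way such a clone-and-delete loop could look, using toy numpy parameter vectors in place of neural networks. Everything here is an illustrative assumption rather than the authors' implementation: the follow-the-best-model rule, the thresholds CLONE_SCORE and DELETE_SCORE, and the helpers local_update and local_score (stand-ins for local SGD and local validation accuracy).

```python
import numpy as np

rng = np.random.default_rng(42)

NUM_DEVICES = 12
DIM = 8              # toy parameter dimension; the real models are neural nets
ROUNDS = 5
CLONE_SCORE = 0.5    # hypothetical threshold above which a model is cloned
DELETE_SCORE = 0.2   # hypothetical threshold below which a model is deleted

# Toy non-IID setup: each device's "data" is a target vector drawn near one
# of two cluster centers, so devices fall into two latent groups.
centers = [rng.normal(size=DIM), rng.normal(size=DIM)]
targets = [centers[d % 2] + 0.1 * rng.normal(size=DIM) for d in range(NUM_DEVICES)]

models = {0: rng.normal(size=DIM)}   # model id -> parameter vector

def local_update(params, target, lr=0.5):
    # One gradient step on ||params - target||^2 (stand-in for local SGD).
    return params - lr * (params - target)

def local_score(params, target):
    # Score in (0, 1]; higher means params fit this device's data better
    # (stand-in for validation accuracy on the device's local data).
    return 1.0 / (1.0 + np.linalg.norm(params - target))

for r in range(ROUNDS):
    # 1. Each device follows whichever current model scores best on its data.
    follows = {d: max(models, key=lambda m: local_score(models[m], targets[d]))
               for d in range(NUM_DEVICES)}

    # 2. FedAvg within each model's group of followers.
    for m in list(models):
        group = [d for d, mid in follows.items() if mid == m]
        if group:
            models[m] = np.mean(
                [local_update(models[m], targets[d]) for d in group], axis=0)

    # 3. Clone well-scoring models (a slightly perturbed copy lets subgroups
    #    specialize) and delete models that score poorly or lost all followers.
    next_id = max(models) + 1
    for m in list(models):
        group = [d for d, mid in follows.items() if mid == m]
        score = (np.mean([local_score(models[m], targets[d]) for d in group])
                 if group else 0.0)
        if not group or (score < DELETE_SCORE and len(models) > 1):
            del models[m]
        elif score > CLONE_SCORE:
            models[next_id] = models[m] + 0.01 * rng.normal(size=DIM)
            next_id += 1

print(f"{len(models)} model(s) remain after {ROUNDS} rounds: ids {sorted(models)}")
```

In this sketch, per-group averaging plays the role of FedAvg within each emergent cluster, while cloning lets a group split when its devices' data diverge; the paper presumably drives these decisions with validation metrics on real models rather than the toy stand-ins used here.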

Related articles:
arXiv:2006.09791 [cs.LG] (Published 2020-06-17)
Optimizing Grouped Convolutions on Edge Devices
arXiv:2407.18114 [cs.LG] (Published 2024-07-25)
Unsupervised Training of Neural Cellular Automata on Edge Devices
arXiv:2210.03204 [cs.LG] (Published 2022-10-06)
Enabling Deep Learning on Edge Devices