arXiv:2403.14371 [cs.LG]

Loop Improvement: An Efficient Approach for Extracting Shared Features from Heterogeneous Data without Central Server

Fei Li, Chu Kiong Loo, Wei Shiung Liew, Xiaofeng Liu

Published 2024-03-21 (Version 1)

In federated learning, data heterogeneity significantly degrades performance. A typical remedy is to split model parameters into shared and personalized components, a concept also relevant to multi-task learning. Addressing this, we propose "Loop Improvement" (LI), a novel method that enhances this separation and the resulting feature extraction without requiring a central server or data exchange among participants. Our experiments show LI's advantages in several settings. In personalized federated learning, LI consistently outperforms the state-of-the-art FedALA algorithm in accuracy across diverse scenarios, and its feature extractor closely matches the performance obtained by aggregating data from all clients. In the global-model setting, combining LI with stacked personalized layers and an additional network likewise yields results comparable to training on pooled client data. Furthermore, LI extends to multi-task learning, streamlining the extraction of features shared across tasks and removing the need to train all tasks simultaneously; this not only improves individual task performance but also matches the accuracy of classic multi-task learning methods in which all tasks are trained jointly. LI combines a loop topology with layer-wise and end-to-end training and is compatible with various neural network architectures. The paper also examines the theoretical underpinnings of LI's effectiveness, offering insights into its potential applications. The code is available at https://github.com/axedge1983/LI
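The serverless loop topology the abstract describes can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: here each "client" owns a scalar quadratic objective (w - target)^2 standing in for its heterogeneous local loss, and a shared parameter w is passed around the ring of clients, each taking a few local gradient steps before handing it to the next. The function names, learning rate, and loop count are all illustrative assumptions.

```python
# Toy sketch of loop-topology training without a central server.
# Each client refines the shared parameter in turn; the parameter
# circulates around the ring for several full loops.

def local_update(w, target, lr=0.01, steps=5):
    """Client-side update: a few gradient steps on the local loss (w - target)^2."""
    for _ in range(steps):
        w -= lr * 2.0 * (w - target)
    return w

def loop_training(targets, loops=20):
    """Pass the shared parameter around the ring of clients `loops` times."""
    w = 0.0
    for _ in range(loops):
        for t in targets:  # ring order: client 0 -> 1 -> ... -> back to 0
            w = local_update(w, t)
    return w

# Heterogeneous local optima standing in for non-IID client data.
targets = [1.0, 2.0, 3.0]
w_shared = loop_training(targets)
```

With small local steps, the circulating parameter settles near a compromise of the clients' local optima, mimicking how a shared feature extractor can absorb common structure without any client ever sharing its data. The real method, per the abstract, applies this loop with layer-wise and end-to-end training of neural networks rather than scalar updates.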

Related articles:
arXiv:1906.01736 [cs.LG] (Published 2019-06-04)
Distributed Training with Heterogeneous Data: Bridging Median and Mean Based Algorithms
arXiv:2303.02278 [cs.LG] (Published 2023-03-04, updated 2023-06-05)
Federated Virtual Learning on Heterogeneous Data with Local-global Distillation
arXiv:2110.04175 [cs.LG] (Published 2021-10-08, updated 2022-01-31)
RelaySum for Decentralized Deep Learning on Heterogeneous Data