
arXiv:2108.12124 [cs.LG]

Canoe: A System for Collaborative Learning for Neural Nets

Harshit Daga, Yiwen Chen, Aastha Agrawal, Ada Gavrilovska

Published 2021-08-27, Version 1

For highly distributed environments such as edge computing, collaborative learning approaches eschew the dependence on a global, shared model in favor of models tailored for each location. Creating tailored models for individual learning contexts reduces the amount of data transfer, while collaboration among peers provides acceptable model performance. Collaboration assumes, however, the availability of knowledge transfer mechanisms, which are not trivial for deep learning models, where knowledge isn't easily attributed to precise model slices. We present Canoe, a framework that facilitates knowledge transfer for neural networks. Canoe provides new system support for dynamically extracting significant parameters from a helper node's neural network and combines them with a multi-model boosting-based approach to improve the predictive performance of the target node. The evaluation of Canoe with different PyTorch and TensorFlow neural network models demonstrates that the knowledge transfer mechanism improves the model's adaptiveness to changes by up to 3.5X compared to learning in isolation, while affording a reduction of several orders of magnitude in data movement costs compared to federated learning.
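The two mechanisms the abstract names — extracting a helper node's significant parameters and combining models in a boosting-style ensemble — can be illustrated with a minimal NumPy sketch. This is a hypothetical rendering, not the paper's implementation: it assumes a magnitude-based significance criterion and a simple weighted combination of per-model scores, and the function names (`extract_significant`, `transfer`, `boosted_predict`) are invented for illustration.

```python
import numpy as np

def extract_significant(params, frac=0.5):
    """Mask selecting the largest-magnitude fraction of a parameter tensor.
    (Assumed significance criterion; Canoe's actual extraction may differ.)"""
    flat = np.abs(params).ravel()
    k = max(1, int(frac * flat.size))
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    return np.abs(params) >= threshold

def transfer(target, helper, frac=0.5):
    """Graft the helper's significant parameter slice into a copy of the target."""
    mask = extract_significant(helper, frac)
    out = target.copy()
    out[mask] = helper[mask]
    return out

def boosted_predict(models, weights, x):
    """Weighted combination of per-model scores (multi-model ensemble)."""
    return sum(w * m(x) for m, w in zip(models, weights))
```

A target node would apply `transfer` per layer to pull in only the helper's most significant weights (keeping data movement far below shipping the full model), then serve predictions through `boosted_predict` over its local and transferred models.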

Related articles:
arXiv:2008.00742 [cs.LG] (Published 2020-08-03)
Collaborative Learning as an Agreement Problem
arXiv:2006.00082 [cs.LG] (Published 2020-05-29)
Meta Clustering for Collaborative Learning
arXiv:2205.02652 [cs.LG] (Published 2022-05-05)
Can collaborative learning be private, robust and scalable?