arXiv:1611.09726 [cs.CV]

Gossip training for deep learning

Michael Blot, David Picard, Matthieu Cord, Nicolas Thome

Published 2016-11-29 (Version 1)

We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying an individual gradient descent to a local variable. We propose a new way to share information between the threads, inspired by gossip algorithms and exhibiting good consensus convergence properties. Our method, called GoSGD, has the advantage of being fully asynchronous and decentralized. We compare our method to the recent EASGD of \cite{elastic} on CIFAR-10; the experiments show encouraging results.
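The gossip exchange described in the abstract can be pictured with a small simulation. The sketch below is not the authors' GoSGD implementation; it assumes a toy quadratic loss, a fixed gossip probability `P_GOSSIP`, and simple pairwise averaging between workers, all of which are illustrative choices standing in for the paper's actual update and mixing rules.

```python
# Illustrative sketch of gossip-averaged SGD (not the authors' GoSGD code).
# Hypothetical toy problem: each worker minimizes 0.5 * ||x - TARGET||^2
# with noisy gradients, and occasionally pushes its parameters to a random
# peer, which averages them in. This mimics asynchronous, decentralized
# information sharing between threads.

import random
import numpy as np

DIM = 10                  # parameter dimension (toy setting)
N_WORKERS = 4             # number of local workers/threads
STEPS = 2000              # total simulated local updates
LR = 0.05                 # SGD learning rate
P_GOSSIP = 0.1            # probability of gossiping after a local step
TARGET = np.ones(DIM)     # optimum of the toy loss

def noisy_grad(x):
    """Stochastic gradient of 0.5*||x - TARGET||^2 (Gaussian noise added)."""
    return (x - TARGET) + 0.1 * np.random.randn(DIM)

# Each worker keeps its own copy of the parameters (the "local variable").
params = [np.zeros(DIM) for _ in range(N_WORKERS)]

for step in range(STEPS):
    # Pick a worker at random to simulate asynchronous, uncoordinated updates.
    i = random.randrange(N_WORKERS)
    params[i] -= LR * noisy_grad(params[i])

    # With small probability, gossip: push to a random peer and average.
    if random.random() < P_GOSSIP:
        j = random.choice([k for k in range(N_WORKERS) if k != i])
        mixed = 0.5 * (params[i] + params[j])
        params[i] = mixed.copy()
        params[j] = mixed.copy()

# Consensus check: the local copies should be close to each other and to TARGET.
mean_param = np.mean(params, axis=0)
spread = max(np.linalg.norm(p - mean_param) for p in params)
print("distance to optimum:", np.linalg.norm(mean_param - TARGET))
print("max disagreement between workers:", spread)
```

Running the sketch shows both quantities shrinking as training proceeds: the occasional pairwise averaging is enough to keep the independent workers near consensus without any central parameter server or global synchronization barrier.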

Related articles:
arXiv:1709.04108 [cs.CV] (Published 2017-09-13)
Co-training for Demographic Classification Using Deep Learning from Label Proportions
arXiv:1602.00172 [cs.CV] (Published 2016-01-30)
Deep Learning For Smile Recognition
arXiv:1412.7725 [cs.CV] (Published 2014-12-24)
Automatic Photo Adjustment Using Deep Learning