Gossip training for deep learning

11/29/2016
by Michael Blot, et al.

We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descent steps to a local variable. We propose a new way to share information between the threads, inspired by gossip algorithms, that shows good consensus convergence properties. Our method, called GoSGD, has the advantage of being fully asynchronous and decentralized. We compare GoSGD with the recent elastic averaging SGD (EASGD); experiments on CIFAR-10 show encouraging results.
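The sketch below illustrates the general idea described in the abstract: several threads run local SGD on their own copy of the parameters and occasionally push their variable to a randomly chosen peer, which merges it with its own copy. The mixing rule, hyper-parameters, and toy quadratic objective are illustrative assumptions for this sketch, not the authors' exact GoSGD update.

```python
# Minimal sketch of gossip-style asynchronous SGD across threads
# (assumed mixing rule; not the paper's exact algorithm).
import queue
import random
import threading

import numpy as np

NUM_WORKERS = 4      # number of threads, each with its own local variable
STEPS = 500          # local gradient steps per worker
LR = 0.05            # SGD learning rate
GOSSIP_PROB = 0.1    # probability of pushing the local variable to a peer

# One inbox per worker; peers push (weights, mixing_weight) messages into it.
inboxes = [queue.Queue() for _ in range(NUM_WORKERS)]


def gradient(x):
    """Gradient of a toy quadratic loss ||x - 1||^2 / 2 (stands in for a CNN loss)."""
    return x - 1.0


def worker(rank, results):
    rng = random.Random(rank)
    x = np.random.randn(10)   # local copy of the model parameters
    alpha = 1.0               # mixing weight carried with the local variable

    for _ in range(STEPS):
        # Asynchronously merge any variables pushed by other workers.
        while not inboxes[rank].empty():
            x_peer, a_peer = inboxes[rank].get()
            x = (alpha * x + a_peer * x_peer) / (alpha + a_peer)
            alpha += a_peer

        # Local SGD step on the local variable.
        x -= LR * gradient(x)

        # With small probability, gossip: send the current variable and
        # half of the mixing weight to one randomly chosen peer.
        if rng.random() < GOSSIP_PROB:
            peer = rng.choice([j for j in range(NUM_WORKERS) if j != rank])
            alpha /= 2.0
            inboxes[peer].put((x.copy(), alpha))

    results[rank] = x


results = [None] * NUM_WORKERS
threads = [threading.Thread(target=worker, args=(r, results)) for r in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All local variables should reach consensus near the optimum (a vector of ones).
print(np.max([np.abs(r - 1.0).max() for r in results]))
```

The exchange is fully asynchronous and decentralized in the sense the abstract describes: no parameter server is involved, and no thread ever blocks waiting for a peer.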
