A^2CiD^2: Accelerating Asynchronous Communication in Decentralized Deep Learning

06/14/2023
by Adel Nabli, et al.

Distributed training of Deep Learning models has been critical to many recent successes in the field. Current standard methods primarily rely on synchronous centralized algorithms, which induce major communication bottlenecks and limit their usability to High-Performance Computing (HPC) environments with strong connectivity. Decentralized asynchronous algorithms are emerging as a potential alternative, but their practical applicability still lags. In this work, we focus on peer-to-peer asynchronous methods due to their flexibility and parallelization potential. To mitigate the bandwidth increase they require at large scale and in poorly connected settings, we introduce a principled asynchronous, randomized, gossip-based algorithm driven by a continuous momentum named A^2CiD^2. Beyond inducing a significant communication acceleration at no cost other than doubling the parameters, only minimal adaptation is required to incorporate A^2CiD^2 into other asynchronous approaches. We demonstrate its efficiency both theoretically and numerically. Empirically, on the ring graph, adding A^2CiD^2 has the same effect as doubling the communication rate. In particular, we show consistent improvements on the ImageNet dataset using up to 64 asynchronous workers (A100 GPUs) and various communication network topologies.
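
For intuition, the sketch below simulates plain randomized pairwise gossip averaging on a ring of workers, the communication primitive the abstract builds on. It is a minimal illustration under our own assumptions (the names n_workers and gossip_step are hypothetical), not the A^2CiD^2 update rule with its continuous momentum.

```python
import numpy as np

# Minimal sketch of randomized pairwise gossip averaging on a ring graph.
# This only illustrates the generic gossip primitive mentioned in the
# abstract; it is NOT the A^2CiD^2 algorithm itself.

rng = np.random.default_rng(0)
n_workers, dim = 8, 4

# Each worker holds its own parameter vector (e.g., local model weights).
params = rng.normal(size=(n_workers, dim))

def gossip_step(params):
    """One asynchronous-style event: a randomly chosen ring edge averages."""
    i = rng.integers(n_workers)
    j = (i + 1) % n_workers            # neighbor on the ring
    avg = 0.5 * (params[i] + params[j])
    params[i], params[j] = avg, avg.copy()

for _ in range(200):
    gossip_step(params)

# After enough pairwise exchanges, all workers approach the global average.
print(np.allclose(params, params.mean(axis=0), atol=1e-2))
```

On poorly connected topologies such as the ring, plain gossip of this kind mixes information slowly; the paper's contribution is a momentum-based correction that accelerates this mixing without increasing the communication rate.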
