Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training

02/21/2019
by Chengjie Li, et al.

Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models. As data processing moves from cloud-centric positions to edge locations, a major challenge for distributed systems is how to handle natively non-independent and identically distributed (non-IID) training data. Previous asynchronous training methods perform poorly on non-IID data because it causes the training process to fluctuate greatly, leading to abnormal convergence. We propose a gradient scheduling algorithm with global momentum (GSGM) for distributed asynchronous training on non-IID data. Our key idea is to schedule the gradients contributed by computing nodes according to a white list, so that each training node's update frequency remains even. Furthermore, our new momentum method can mitigate the biased-gradient problem. GSGM makes the model converge effectively and maintains high availability. Experimental results show that, for non-IID data training under the same experimental conditions, GSGM applied to popular optimization algorithms achieves a 20% improvement in accuracy on the Fashion-MNIST and CIFAR-10 datasets. Meanwhile, when the distributed scale is expanded on the CIFAR-100 dataset, resulting in a sparse data distribution, GSGM achieves a 37% improvement. Moreover, only GSGM converges well when the number of computing nodes is 30, compared with state-of-the-art distributed asynchronous algorithms.
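As a concrete illustration of the scheduling idea sketched in the abstract, the toy Python code below shows one plausible way a parameter server could keep a white list of computing nodes (those whose update counts have not pulled ahead of the slowest node) and fold every applied gradient into a single global momentum buffer. All names here (GSGMServer, submit, and so on) are hypothetical; this is a minimal sketch of the described mechanism under our own assumptions, not the authors' implementation.

```python
# Minimal sketch: white-list gradient scheduling with global momentum.
# A worker's gradient is applied only while that worker's update count
# sits at the minimum across all workers, which keeps per-node update
# frequencies even; ineligible gradients wait until slower nodes catch up.

import numpy as np
from collections import deque

class GSGMServer:
    def __init__(self, dim, num_workers, lr=0.01, momentum=0.9):
        self.weights = np.zeros(dim)
        self.velocity = np.zeros(dim)          # shared (global) momentum buffer
        self.lr = lr
        self.momentum = momentum
        self.update_count = [0] * num_workers  # per-worker update frequency
        self.pending = deque()                 # gradients deferred by the scheduler

    def _on_white_list(self, worker_id):
        # Eligible only if not ahead of the slowest worker.
        return self.update_count[worker_id] <= min(self.update_count)

    def submit(self, worker_id, grad):
        # Workers push gradients asynchronously; the scheduler decides order.
        self.pending.append((worker_id, np.asarray(grad)))
        self._drain()

    def _drain(self):
        # Keep applying queued gradients as long as someone becomes eligible;
        # applying one gradient can raise the minimum and unblock others.
        progressed = True
        while progressed:
            progressed = False
            deferred = deque()
            while self.pending:
                worker_id, grad = self.pending.popleft()
                if self._on_white_list(worker_id):
                    self._apply(worker_id, grad)
                    progressed = True
                else:
                    deferred.append((worker_id, grad))
            self.pending = deferred

    def _apply(self, worker_id, grad):
        # Global momentum: all workers' gradients accumulate into one shared
        # velocity, damping the bias any single stale gradient can introduce.
        self.velocity = self.momentum * self.velocity + grad
        self.weights -= self.lr * self.velocity
        self.update_count[worker_id] += 1
```

In this toy form the scheduler simply defers out-of-turn gradients rather than dropping them, and a single momentum buffer stands in for whatever per-round bookkeeping the paper's full algorithm uses; the point is only to make the "even update frequency plus global momentum" interaction concrete.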
