Adaptive Gradient Quantization for Data-Parallel SGD

10/23/2020
by Fartash Faghri, et al.

Many communication-efficient variants of SGD use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ. In both schemes, processors update their compression schemes in parallel by efficiently computing sufficient statistics of a parametric distribution. We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are also significantly more robust to the choice of hyperparameters.
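The abstract describes schemes in which each processor keeps a small set of quantization levels and periodically re-fits them to the current gradient statistics. The sketch below is only a generic illustration of that idea (norm-based quantization with levels refreshed from recent gradients); it is not the authors' ALQ or AMQ, and the function names and the quantile-based level update are assumptions made for illustration.

```python
import numpy as np

def quantize(grad, levels, rng=np.random.default_rng()):
    """Quantize normalized gradient magnitudes to a small set of levels.

    The sign and the per-vector norm are transmitted exactly; only
    |g_i| / ||g|| is compressed. Randomized rounding between the two
    bracketing levels keeps the quantized gradient unbiased in expectation.
    (Illustrative only; the paper's ALQ/AMQ choose levels differently.)
    """
    norm = np.linalg.norm(grad)
    if norm == 0:
        return grad.copy()
    r = np.clip(np.abs(grad) / norm, levels[0], levels[-1])
    lo = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
    hi = lo + 1
    p = (r - levels[lo]) / (levels[hi] - levels[lo])  # probability of rounding up
    q = np.where(rng.random(r.shape) < p, levels[hi], levels[lo])
    return np.sign(grad) * norm * q

def update_levels(recent_grads, num_levels=4):
    """Re-fit the quantization levels from recently observed gradients.

    Here the interior levels are simply placed at empirical quantiles of the
    normalized magnitudes; the paper instead fits a parametric distribution
    and minimizes expected quantization error, which this sketch does not
    reproduce.
    """
    mags = np.concatenate(
        [np.abs(g) / (np.linalg.norm(g) + 1e-12) for g in recent_grads]
    )
    inner = np.quantile(mags, np.linspace(0, 1, num_levels)[1:-1])
    # Endpoints 0 and 1 stay representable; interior levels track the data.
    return np.concatenate(([0.0], np.sort(inner), [1.0]))
```

In a data-parallel loop, each worker would quantize its local gradient with the shared levels before communication, and all workers would periodically call the level update on a buffer of recent gradients so the levels adapt as training progresses.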
