Communication-Efficient Federated Learning via Robust Distributed Mean Estimation

08/19/2021
by Shay Vargaftik, et al.

Federated learning commonly relies on algorithms such as distributed (mini-batch) SGD, where multiple clients compute their gradients and send them to a central coordinator for averaging and updating the model. To reduce transmission time and improve the scalability of the training process, clients often apply lossy compression to shrink the message sizes. DRIVE is a recent state-of-the-art algorithm that compresses gradients using one bit per coordinate (with some lower-order overhead). In this technical report, we generalize DRIVE to support any bandwidth constraint, extend it to support heterogeneous client resources, and make it robust to packet loss.
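To make the setting concrete, below is a minimal sketch of one-bit distributed mean estimation with a rotate-then-sign compressor, in the spirit of the abstract's description. It is not the authors' implementation: the randomized Hadamard transform used as the shared rotation and the particular per-client scale are illustrative assumptions, and the dimension is assumed to be a power of two.

```python
# Illustrative sketch only: rotate each gradient with a shared random rotation,
# transmit one sign bit per coordinate plus a single scalar scale, and let the
# coordinator invert the rotation and average the resulting estimates.
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform; len(x) must be a power of two."""
    y = x.astype(float).copy()
    n, h = len(y), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b
            y[i + h:i + 2 * h] = a - b
        h *= 2
    return y / np.sqrt(n)  # orthonormal, hence its own inverse

def client_compress(x, diag_signs):
    """Rotate the gradient, keep one sign bit per coordinate, plus one scalar scale."""
    z = fwht(x * diag_signs)                 # shared rotation R = H * diag(signs)
    bits = np.where(z >= 0, 1.0, -1.0)       # 1 bit per coordinate
    scale = np.dot(z, z) / np.dot(z, bits)   # assumed scale choice (preserves <x, x_hat>)
    return bits, scale

def server_decompress(bits, scale, diag_signs):
    """Invert the shared rotation to estimate the client's original gradient."""
    return fwht(scale * bits) * diag_signs   # R^{-1} = diag(signs) * H for orthonormal H

# Example: estimate the mean of n_clients gradients of dimension d.
rng = np.random.default_rng(0)
d, n_clients = 1024, 16
diag_signs = rng.choice([-1.0, 1.0], size=d)  # shared randomness (e.g., a common seed)
grads = rng.normal(size=(n_clients, d))

estimates = [server_decompress(*client_compress(g, diag_signs), diag_signs) for g in grads]
est_mean = np.mean(estimates, axis=0)
true_mean = grads.mean(axis=0)
print("normalized MSE:", np.sum((est_mean - true_mean) ** 2) / np.sum(true_mean ** 2))
```

Each client here sends d bits plus one float, matching the "one bit per coordinate with lower-order overhead" budget described above; supporting other bandwidth budgets or lossy channels, as the report does, would require modifying this basic scheme.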
