FedHe: Heterogeneous Models and Communication-Efficient Federated Learning

10/19/2021
by Chan Yun Hin, et al.

Federated learning (FL) enables edge devices to cooperatively train a model while keeping the training data local and private. A common assumption in FL is that all edge devices train the same machine learning model, for example, an identical neural network architecture. However, the computation and storage capabilities of different devices may differ. Moreover, reducing communication overheads can improve training efficiency, yet doing so remains a challenging problem in FL. In this paper, we propose a novel FL method, called FedHe, inspired by knowledge distillation, which can train heterogeneous models and supports asynchronous training processes with significantly reduced communication overheads. Our analysis and experimental results demonstrate that the proposed method outperforms state-of-the-art algorithms in terms of both communication overheads and model accuracy.
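The abstract suggests that FedHe reduces communication by exchanging distilled knowledge rather than full model weights. Below is a minimal, hypothetical sketch of one such logit-sharing scheme: each client uploads small per-class average logits, the server merges them asynchronously, and clients then train against the aggregated soft targets with a knowledge-distillation loss. The function names, the moving-average aggregation rule, and the loss weighting here are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of logit-based knowledge sharing in FL (assumed mechanism, not
# FedHe's verbatim algorithm): clients upload per-class mean logits instead
# of model weights, which is what makes heterogeneous models possible and
# keeps communication small.
import numpy as np

NUM_CLASSES = 10

def client_summary(local_logits, local_labels):
    """Per-class mean logits from a client's local predictions.
    local_logits: (n_samples, NUM_CLASSES); local_labels: (n_samples,).
    Only this (NUM_CLASSES x NUM_CLASSES) summary is uploaded."""
    summary = np.zeros((NUM_CLASSES, NUM_CLASSES))
    counts = np.zeros(NUM_CLASSES)
    for logit, label in zip(local_logits, local_labels):
        summary[label] += logit
        counts[label] += 1
    mask = counts > 0                      # classes the client actually saw
    summary[mask] /= counts[mask, None]
    return summary, mask

def server_aggregate(global_logits, summary, mask, alpha=0.5):
    """Asynchronously merge one client's summary into the global knowledge
    via an exponential moving average (a hypothetical aggregation rule)."""
    global_logits[mask] = (1 - alpha) * global_logits[mask] + alpha * summary[mask]
    return global_logits

def distillation_loss(student_logits, labels, global_logits, temp=2.0, lam=0.5):
    """Cross-entropy on hard labels plus a KL term pulling the student's
    softened outputs toward the server's aggregated soft targets."""
    def softmax(x, t=1.0):
        e = np.exp(x / t - np.max(x / t, axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    probs = softmax(student_logits)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    teacher = softmax(global_logits[labels], temp)   # soft target per sample
    student = softmax(student_logits, temp)
    kd = np.sum(teacher * (np.log(teacher + 1e-12) - np.log(student + 1e-12)),
                axis=-1).mean()
    return ce + lam * kd

# Toy usage: two clients whose "model outputs" stand in for heterogeneous
# architectures; each contributes its summary in turn (asynchronously).
rng = np.random.default_rng(0)
global_logits = np.zeros((NUM_CLASSES, NUM_CLASSES))
for _ in range(2):
    logits = rng.normal(size=(64, NUM_CLASSES))
    labels = rng.integers(0, NUM_CLASSES, size=64)
    summary, mask = client_summary(logits, labels)
    global_logits = server_aggregate(global_logits, summary, mask)
    print("local loss:", distillation_loss(logits, labels, global_logits))
```

Note the payload size: a client uploads NUM_CLASSES x NUM_CLASSES floats regardless of its model's architecture or parameter count, which is how a distillation-based scheme can cut communication relative to weight exchange.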
