Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup

06/17/2020
by Seungeun Oh, et al.

This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels compared to FL.
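The two-way mixup step can be pictured with a minimal sketch. The code below is only an illustration of the idea stated in the abstract (devices upload linear mixtures of raw samples, and the server inversely mixes them across different devices), not the authors' exact algorithm: the mixing ratio lam, the 2x2 inverse-mixing map, and the lam != 0.5 requirement are assumptions made here for concreteness.

import numpy as np

rng = np.random.default_rng(0)

def device_mixup(x1, x2, lam):
    """Device side: upload only a linear mixture of two raw samples."""
    return lam * x1 + (1.0 - lam) * x2

def server_inverse_mixup(mix_a, mix_b, lam):
    """Server side (illustrative, not the exact Mix2FLD rule): invert the
    2x2 mixing map, but feed it mixed samples from two *different* devices,
    so the outputs still blend both devices' data. Needs lam != 0.5 so the
    map is invertible."""
    mix_map = np.array([[lam, 1.0 - lam],
                        [1.0 - lam, lam]])
    inv_map = np.linalg.inv(mix_map)
    stacked = np.stack([mix_a, mix_b])      # shape (2, d)
    return inv_map @ stacked                # two synthetic seed samples, shape (2, d)

# Toy run with 4-dimensional "samples"; raw data never leaves the devices.
a1, a2 = rng.normal(size=4), rng.normal(size=4)   # device A
b1, b2 = rng.normal(size=4), rng.normal(size=4)   # device B

lam = 0.7
mix_a = device_mixup(a1, a2, lam)   # uploaded by device A
mix_b = device_mixup(b1, b2, lam)   # uploaded by device B

seeds = server_inverse_mixup(mix_a, mix_b, lam)
print(seeds)   # server-side seed samples for the output-to-parameter conversion

Under this reading, each inverted output still combines data from two different devices, so the server obtains seed samples usable for the output-to-parameter conversion without recovering any single device's raw sample.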
