DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving Federated Learning
In this paper, we focus on facilitating differentially private quantized communication between the clients and server in federated learning (FL). Towards this end, we propose to have the clients send a private quantized version of only the unit vector along the change in their local parameters to the server, completely discarding the magnitude information. We call this algorithm DP-NormFedAvg and show that it has the same order-wise convergence rate as FedAvg on smooth quasar-convex functions (an important class of non-convex functions for modeling the optimization of deep neural networks), thereby establishing that discarding the magnitude information is not detrimental from an optimization point of view. We also introduce QTDL, a new differentially private quantization mechanism for unit-norm vectors, which we use in DP-NormFedAvg. QTDL employs discrete noise having a Laplacian-like distribution on a finite support to provide privacy. We show that under a growth-condition assumption on the per-sample client losses, the extra per-coordinate communication cost in each round incurred due to privacy by our method is 𝒪(1) with respect to the model dimension, which is an improvement over prior work. Finally, we show the efficacy of our proposed method with experiments on fully-connected neural networks trained on CIFAR-10 and Fashion-MNIST.
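To make the client-side idea concrete, here is a minimal sketch of the step the abstract describes: normalize the change in local parameters and send only a privately quantized unit direction, discarding the magnitude. The uniform grid quantizer and two-sided geometric ("discrete Laplacian-like") noise below are illustrative stand-ins, not the paper's QTDL mechanism, and all function and parameter names (e.g., client_update_direction, levels, noise_p) are hypothetical.

```python
import numpy as np

def client_update_direction(w_global, w_local, rng, levels=16, noise_p=0.1):
    """Sketch of a client's message: a noisily quantized unit vector
    along the local parameter change (magnitude is thrown away).
    Not the paper's QTDL mechanism; parameters are hypothetical."""
    delta = w_local - w_global
    direction = delta / (np.linalg.norm(delta) + 1e-12)  # keep only the unit vector

    # Quantize each coordinate onto a finite grid in [-1, 1].
    grid = np.linspace(-1.0, 1.0, levels)
    idx = np.argmin(np.abs(direction[:, None] - grid[None, :]), axis=1)

    # Add discrete noise with a Laplacian-like shape on the finite index support:
    # a difference of two geometric variables (two-sided geometric), then clip.
    noise = rng.geometric(p=noise_p, size=idx.shape) - rng.geometric(p=noise_p, size=idx.shape)
    noisy_idx = np.clip(idx + noise, 0, levels - 1)
    return grid[noisy_idx]

# Toy usage: clients send noisy unit directions; the server averages them
# and applies a step of its own chosen size, since magnitudes are not sent.
rng = np.random.default_rng(0)
d = 8
w_global = np.zeros(d)
client_dirs = [client_update_direction(w_global, rng.normal(size=d), rng) for _ in range(2)]
w_global = w_global + 0.1 * np.mean(client_dirs, axis=0)
```

Because every client's message lies on a finite per-coordinate grid, the communication cost per coordinate is a constant number of bits, which is the regime the abstract's 𝒪(1) claim refers to.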