Aggregating Gradients in Encoded Domain for Federated Learning

05/26/2022
by   Dun Zeng, et al.

Malicious attackers and an honest-but-curious server can steal private client data from the gradients uploaded in federated learning. Although current protection methods (e.g., additively homomorphic cryptosystems) can guarantee the security of the federated learning system, they incur additional computation and communication costs. To mitigate these costs, we propose a framework that enables the server to aggregate gradients in an encoded domain without accessing the raw gradients of any single client. The framework thus prevents a curious server from stealing gradients while maintaining the same prediction performance and incurring no additional communication cost. Furthermore, we theoretically prove that the proposed encoding-decoding framework is a Gaussian mechanism for differential privacy. Finally, we evaluate the framework under several federated settings, and the results demonstrate its efficacy.
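The abstract does not spell out the encoding scheme, but the core idea of aggregating in an encoded domain can be illustrated with a standard construction: each pair of clients shares a random mask that cancels in the aggregate, and each client perturbs its gradient with Gaussian noise, so the decoded sum equals the true sum plus Gaussian noise (a Gaussian mechanism). This is a hypothetical sketch for intuition, not the paper's exact protocol; all names and parameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 8  # assumed toy sizes

# Each client holds a private gradient vector.
grads = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise shared masks: client i adds mask (i, j) for j > i and
# subtracts mask (j, i) for j < i, so masks cancel in the aggregate.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

sigma = 0.01  # per-client Gaussian noise scale (assumed)

encoded = []
for i in range(n_clients):
    # Encode: true gradient + Gaussian noise + pairwise masks.
    e = grads[i] + rng.normal(scale=sigma, size=dim)
    for j in range(n_clients):
        if i < j:
            e = e + masks[(i, j)]
        elif j < i:
            e = e - masks[(j, i)]
    encoded.append(e)

# The server only sees encoded gradients; summing them cancels the
# masks, leaving the true sum plus aggregated Gaussian noise.
aggregate = np.sum(encoded, axis=0)
true_sum = np.sum(grads, axis=0)
```

Because no individual `encoded[i]` reveals `grads[i]` (it is blinded by the masks), the server learns only the noisy aggregate, matching the privacy goal described above.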
