Hercules: Boosting the Performance of Privacy-preserving Federated Learning

07/11/2022
by Guowen Xu, et al.

In this paper, we address the problem of privacy-preserving federated neural network training with N users. We present Hercules, an efficient and high-precision training framework that can tolerate collusion of up to N-1 users. Hercules follows the POSEIDON framework proposed by Sav et al. (NDSS'21), but makes a qualitative leap in performance with the following contributions: (i) we design a novel parallel homomorphic computation method for matrix operations, which enables fast Single Instruction Multiple Data (SIMD) operations over ciphertexts. For the multiplication of two h × h matrices, our method reduces the computation complexity from O(h^3) to O(h). This greatly improves the training efficiency of the neural network, since ciphertext computation is dominated by convolution operations; (ii) we present an efficient approximation of the sign function based on composite polynomial approximation. It is used to approximate non-polynomial functions (i.e., ReLU and max) with optimal asymptotic complexity. Extensive experiments on various benchmark datasets (BCW, ESR, CREDIT, MNIST, SVHN, CIFAR-10 and CIFAR-100) show that, compared with POSEIDON, Hercules obtains up to a 4× reduction in computation and communication cost.
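The abstract does not spell out Hercules' packing scheme, but the complexity gain from SIMD slot operations can be illustrated with a classic building block: the Halevi-Shoup diagonal method for a homomorphic matrix-vector product, which uses O(h) slot-wise multiplications and rotations instead of O(h^2) scalar products. The sketch below is a plaintext simulation only (numpy arrays stand in for ciphertexts, `np.roll` for a slot rotation); the function names `simd_matvec` and `rotate` are illustrative, not from the paper.

```python
import numpy as np

def rotate(v, i):
    """Cyclic rotation of the slot vector; rotate(v, i)[j] == v[(j + i) % h].
    Stands in for a homomorphic rotation of ciphertext slots."""
    return np.roll(v, -i)

def simd_matvec(A, v):
    """Matrix-vector product via the diagonal method.

    Each loop iteration performs one SIMD multiply-accumulate across all
    h slots at once, so only h rotations and h slot-wise multiplications
    are needed, versus h^2 scalar multiplications in the naive approach.
    """
    h = A.shape[0]
    acc = np.zeros(h)
    for i in range(h):
        # i-th generalized diagonal of A: d_i[j] = A[j, (j + i) % h]
        d_i = np.array([A[j, (j + i) % h] for j in range(h)])
        acc += d_i * rotate(v, i)  # one SIMD multiplication + one rotation
    return acc

A = np.arange(16, dtype=float).reshape(4, 4)
v = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(simd_matvec(A, v), A @ v)
```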
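Likewise, the paper's exact composite polynomial for the sign function is not given in the abstract. As a minimal sketch of the general technique, the odd polynomial f(t) = (3t - t^3)/2, used in the HE comparison literature (e.g., Cheon et al.), converges to sign(t) on [-1, 1] under composition, and ReLU then follows from the identity ReLU(x) = x(1 + sign(x))/2. Each iteration uses only additions and multiplications, so it is evaluable over ciphertexts; `sign_approx` and `relu_approx` below are hypothetical names, and the inputs are assumed scaled into [-1, 1].

```python
import numpy as np

def sign_approx(x, iters=8):
    """Approximate sign(x) on [-1, 1] by composing f(t) = (3t - t^3)/2.
    f fixes -1, 0, 1 and pushes every other point toward +/-1, so the
    composition converges to the sign function; each step costs
    multiplicative depth 2, which is HE-friendly."""
    for _ in range(iters):
        x = 0.5 * (3.0 * x - x ** 3)
    return x

def relu_approx(x, iters=8):
    """ReLU via the identity ReLU(x) = x * (1 + sign(x)) / 2."""
    return x * (1.0 + sign_approx(x, iters)) / 2.0

xs = np.linspace(-1, 1, 9)
print(np.round(sign_approx(xs), 3))  # close to -1 below 0, +1 above 0
print(np.round(relu_approx(xs), 3))  # close to max(x, 0)
```

Note that max(a, b) reduces to the same primitive via max(a, b) = ((a + b) + (a - b) * sign(a - b)) / 2, which is why a good sign approximation covers both of the non-polynomial functions mentioned in the abstract.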
