MPC-enabled Privacy-Preserving Neural Network Training against Malicious Attack
In the past decades, the application of secure multiparty computation (MPC) to machine learning, especially privacy-preserving neural network training, has attracted tremendous attention from both academia and industry. MPC enables several data owners to jointly train a neural network while preserving their data privacy. However, most previous works focus on the semi-honest threat model, which cannot withstand fraudulent messages sent by malicious participants. In this work, we propose a construction of efficient n-party protocols for secure neural network training that preserves the privacy of all honest participants even when a majority of the parties are malicious. Compared to other designs that provide only semi-honest security in a dishonest-majority setting, our actively secure neural network training incurs an affordable efficiency overhead. In addition, we propose a scheme that securely converts additive shares defined over an integer ring ℤ_N into additive shares over a finite field ℤ_Q. This conversion scheme is essential for correctly converting shared Beaver triples, so that the values generated in the preprocessing phase are usable in the online phase, and may be of independent interest.
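To give intuition for why the ring-to-field conversion is nontrivial, the sketch below (in Python, purely illustrative; the moduli N and Q, the party count, and the wrap-around correction are assumptions for the example and are not the paper's protocol) shows additive secret sharing and the error that appears if ℤ_N shares are naively reinterpreted modulo ℤ_Q: the shares' integer sum may wrap around N some number of times k, leaving an offset of k·N mod Q that a secure conversion scheme must eliminate without revealing k.

```python
import secrets

N = 2**64          # illustrative ring modulus for Z_N
Q = 2**61 - 1      # illustrative prime field modulus for Z_Q

def share(x, modulus, n_parties=3):
    """Split x into n additive shares that sum to x modulo `modulus`."""
    shares = [secrets.randbelow(modulus) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus):
    """Recombine additive shares modulo `modulus`."""
    return sum(shares) % modulus

x = 123456789
ring_shares = share(x, N)
assert reconstruct(ring_shares, N) == x

# Naively reducing each Z_N share modulo Q does NOT, in general, yield
# shares of x over Z_Q: the integer sum of the shares equals x + k*N for
# some wrap-around count k, so the naive reconstruction is off by k*N mod Q.
naive = reconstruct([s % Q for s in ring_shares], Q)
k = sum(ring_shares) // N            # wrap-around count (known here only
                                     # because we see all shares in the clear)
print(naive == x % Q)                # usually False
print((naive - k * N) % Q == x % Q)  # correcting for the wraps recovers x mod Q
```

In an actual MPC execution no single party learns k, since that would leak information about the shared value; a secure conversion protocol such as the one proposed in this work performs the correction obliviously.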