Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference

03/05/2022
by Maximilian Lam, et al.

Multiparty computation approaches to secure neural network inference traditionally rely on garbled circuits for securely executing nonlinear activation functions. However, garbled circuits require excessive communication between server and client, impose significant storage overheads, and incur large runtime penalties. To eliminate these costs, we propose an alternative to garbled circuits: Tabula, an algorithm based on secure lookup tables. Tabula leverages neural networks' ability to be quantized and employs a secure lookup table approach to efficiently, securely, and accurately compute neural network nonlinear activation functions. Compared to garbled circuits with quantized inputs, when computing individual nonlinear functions, our experiments show Tabula uses 35x to 70x less communication, is over 100x faster, and uses a comparable amount of storage. This leads to significant performance gains over garbled circuits with quantized inputs during secure inference on neural networks: Tabula reduces overall communication by up to 9x and achieves a speedup of up to 50x, while imposing comparable storage costs.
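The core observation is that once activations are quantized to a small bit-width, any single-input nonlinear function can be precomputed into a table with one entry per possible quantized value and evaluated with a single lookup. The sketch below illustrates only this quantization-plus-lookup idea for a ReLU; it is a minimal, insecure stand-in, not the Tabula protocol (the secret sharing that makes the lookup secure is omitted), and the function names, bit-width, and scale factor are hypothetical.

```python
import numpy as np

BITS = 8      # assumed quantization bit-width
SCALE = 0.05  # assumed fixed-point scale factor

def build_relu_table(bits=BITS, scale=SCALE):
    # Precompute the activation for every possible b-bit code:
    # interpret the code as a signed fixed-point value, apply ReLU,
    # and re-quantize the result back to a b-bit code.
    codes = np.arange(2**bits, dtype=np.int64)
    signed = np.where(codes < 2**(bits - 1), codes, codes - 2**bits)
    activated = np.maximum(signed * scale, 0.0)
    requant = np.clip(np.round(activated / scale), 0, 2**(bits - 1) - 1)
    return requant.astype(np.int64)

def relu_via_table(quantized_codes, table):
    # One table lookup per element replaces the nonlinear computation.
    return table[quantized_codes]

if __name__ == "__main__":
    table = build_relu_table()
    x = np.array([3, 200, 17, 255], dtype=np.int64)  # example 8-bit codes
    print(relu_via_table(x, table))                  # -> [ 3  0 17  0]
```

The table has only 2^b entries, which is why quantization is what makes the approach practical: for 8-bit activations the whole table fits in a few hundred bytes, and in the secure setting each activation costs a single (secret-shared) table access instead of a garbled circuit evaluation.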
