Robust error bounds for quantised and pruned neural networks

11/30/2020
by Jiaqi Li et al.

With the rise of smartphones and the internet-of-things, data is increasingly generated at the edge on local, personal devices. For privacy, latency, and energy-saving reasons, this shift is pushing machine learning towards a decentralised approach, with data and algorithms stored, and even trained, locally on devices. In this setting, the device hardware becomes the main bottleneck for model performance, creating a need for slimmed-down, more efficient neural networks. Neural network pruning and quantisation are two methods developed to meet this need, and both have demonstrated impressive reductions in computational cost without sacrificing much model performance. However, our theoretical understanding of these methods remains underdeveloped. To address this issue, this paper introduces a semi-definite program that robustly bounds the error caused by pruning and quantising a neural network. The method applies to generic neural networks, accounts for the many nonlinearities of the problem, and holds robustly for all inputs in specified sets. It is hoped that the computed bounds will give certainty to software/control/machine learning engineers implementing these algorithms efficiently on limited hardware.
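The abstract does not spell out the semi-definite program itself, but the flavour of the guarantee can be illustrated with a much cruder alternative: a layer-wise Lipschitz argument that certifies a worst-case output error between a ReLU network and its pruned or quantised copy, valid for every input in a norm ball. The sketch below is a hypothetical illustration in NumPy, not a reproduction of the paper's SDP; all function names and the toy two-layer network are assumptions for the example. It relies only on ReLU being 1-Lipschitz, so the error recursion e_{l+1} <= ||Wq_l|| e_l + ||W_l - Wq_l|| a_l + ||b_l - bq_l|| holds, where a_l bounds the activation norm of the original network.

```python
import numpy as np

def uniform_quantise(w, n_bits=8):
    """Symmetric, per-tensor uniform quantisation to n_bits."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def lipschitz_error_bound(weights, biases, q_weights, q_biases, input_radius):
    """Bound ||f(x) - f_q(x)||_2 over all ||x||_2 <= input_radius for a
    feedforward ReLU network f and its compressed copy f_q.

    Since ReLU is 1-Lipschitz,
        e_{l+1} <= ||Wq_l||_2 * e_l + ||W_l - Wq_l||_2 * a_l + ||b_l - bq_l||_2,
    where a_l bounds the original network's activation norm at layer l.
    """
    a = input_radius  # bound on the original network's activation norm
    e = 0.0           # bound on the deviation between the two networks
    for W, b, Wq, bq in zip(weights, biases, q_weights, q_biases):
        e = (np.linalg.norm(Wq, 2) * e
             + np.linalg.norm(W - Wq, 2) * a
             + np.linalg.norm(b - bq))
        a = np.linalg.norm(W, 2) * a + np.linalg.norm(b)
    return e

# Toy two-layer network with random weights (hypothetical example data).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)) / 4, rng.standard_normal((4, 16)) / 4]
biases = [rng.standard_normal(16) / 10, rng.standard_normal(4) / 10]

# Quantise then prune the weights; biases are only quantised here.
q_weights = [magnitude_prune(uniform_quantise(W, n_bits=6)) for W in weights]
q_biases = [uniform_quantise(b, n_bits=6) for b in biases]

bound = lipschitz_error_bound(weights, biases, q_weights, q_biases,
                              input_radius=1.0)
print(f"certified output error over the unit ball: <= {bound:.4f}")
```

Because the bound depends only on the difference W - Wq, the same recursion certifies pruning, quantisation, or both at once. Its drawback, and presumably the motivation for the paper's SDP, is that a Lipschitz product ignores the structure of the nonlinearities and so becomes very loose on deeper networks.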
