Renormalized Sparse Neural Network Pruning

06/21/2022
by   Michael G. Rawson, et al.

Large neural networks are heavily over-parameterized because over-parameterization improves training to optimality. Once the network is trained, however, many parameters can be zeroed, or pruned, leaving an equivalent sparse neural network. We propose renormalizing sparse neural networks in order to improve accuracy. We prove that our method's error converges to zero as the network parameters cluster or concentrate, and that without renormalization the error does not converge to zero in general. We experiment with our method on the real-world datasets MNIST, Fashion MNIST, and CIFAR-10 and confirm a large improvement in accuracy with renormalization versus standard pruning.
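A minimal sketch of the idea described in the abstract: magnitude-prune a weight matrix and then rescale the surviving weights. The renormalization rule used here, rescaling survivors so the layer's L2 weight norm matches the dense layer's, is an illustrative assumption for the sketch, not necessarily the authors' exact method; the function name `prune_and_renormalize` and the sparsity level are hypothetical.

```python
import numpy as np

def prune_and_renormalize(W, sparsity=0.9):
    """Zero the smallest-magnitude weights, then rescale the survivors.

    NOTE: the rescaling rule (match the dense layer's L2 norm) is an
    assumption made for illustration; the paper's precise renormalization
    may differ.
    """
    flat = np.abs(W).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return W.copy()
    # Magnitude threshold below which weights are pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(W) > threshold
    W_pruned = W * mask

    # Renormalize: scale surviving weights so the layer's L2 norm
    # matches that of the original dense weights.
    dense_norm = np.linalg.norm(W)
    pruned_norm = np.linalg.norm(W_pruned)
    scale = dense_norm / pruned_norm if pruned_norm > 0 else 1.0
    return W_pruned * scale

# Usage: prune 90% of a random weight matrix and check the effect.
rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))
W_sparse = prune_and_renormalize(W, sparsity=0.9)
print("nonzero fraction:", np.count_nonzero(W_sparse) / W_sparse.size)
print("norm ratio:", np.linalg.norm(W_sparse) / np.linalg.norm(W))
```

Standard pruning would stop after the masking step; the final rescaling is the extra renormalization step that the abstract reports as improving accuracy.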
