Hessian-Aware Pruning and Optimal Neural Implant

01/22/2021
by Shixing Yu, et al.

Pruning is an effective method to reduce the memory footprint and FLOPs of neural network models. However, existing pruning methods often incur significant accuracy degradation even at moderate pruning levels. To address this problem, we introduce a new Hessian-Aware Pruning (HAP) method that uses second-order sensitivity as the metric for structured pruning. In particular, we use the Hessian trace to find insensitive parameters in the neural network. This differs from magnitude-based pruning methods, which remove weights with small values. We also propose a new neural implant method, which replaces pruned spatial convolutions with point-wise convolutions. We show that this can improve the accuracy of pruned models while preserving the pruned model size. We evaluate HAP on CIFAR-10 with ResNet56, WideResNet32, PreResNet29, and VGG16, and on ImageNet with ResNet50, achieving new state-of-the-art results. Specifically, HAP achieves 94.3% accuracy (<0.1% degradation) on PreResNet29 (CIFAR-10) with more than 70% of the parameters pruned. Compared to EigenDamage <cit.>, we achieve up to 1.2% higher accuracy with fewer parameters and FLOPs. Moreover, HAP achieves 75.1% top-1 accuracy (0.5% degradation) with ResNet50 on ImageNet after pruning more than half of its parameters. Compared to the prior state-of-the-art HRank <cit.>, we achieve up to 2% higher accuracy with fewer parameters and FLOPs. The framework is open-sourced and available online.
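
For a concrete picture of the two ingredients described above, the sketch below is a minimal, self-contained PyTorch illustration, not the authors' released framework: `hessian_trace` estimates per-layer Hessian traces with Hutchinson's randomized estimator (the kind of second-order sensitivity signal HAP ranks by), and `implant_pointwise` swaps a pruned spatial convolution for a point-wise (1x1) one. The function names, the toy model, and the layer-level (rather than channel-level) ranking are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def hessian_trace(loss, params, n_samples=32):
    """Hutchinson estimator of tr(H) for each parameter tensor:
    tr(H) = E_v[v^T H v] with Rademacher probes v, where the
    Hessian-vector product H v is obtained via double backward."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    traces = [0.0] * len(params)
    for _ in range(n_samples):
        # Rademacher probe vectors (+1 / -1 with equal probability).
        vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
        gv = sum((g * v).sum() for g, v in zip(grads, vs))
        hvs = torch.autograd.grad(gv, params, retain_graph=True)
        for i, (hv, v) in enumerate(zip(hvs, vs)):
            traces[i] += (hv * v).sum().item() / n_samples
    return traces

def implant_pointwise(conv: nn.Conv2d) -> nn.Conv2d:
    """Neural-implant sketch: replace a pruned spatial convolution with
    a cheap point-wise (1x1) convolution that keeps the same channel
    counts and stride, so the surrounding layers are unaffected."""
    return nn.Conv2d(conv.in_channels, conv.out_channels,
                     kernel_size=1, stride=conv.stride,
                     padding=0, bias=conv.bias is not None)

if __name__ == "__main__":
    # Toy model and batch, purely for demonstration.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 8, 3, padding=1), nn.Flatten(),
                          nn.Linear(8 * 8 * 8, 10))
    x, y = torch.randn(4, 3, 8, 8), torch.randint(0, 10, (4,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    traces = hessian_trace(loss, [c.weight for c in convs])
    # Treat the layer with the smallest trace as least sensitive and
    # "implant" a 1x1 convolution in its place (illustration only).
    victim = min(range(len(convs)), key=lambda i: traces[i])
    replacement = implant_pointwise(convs[victim])
    print(traces, "-> implant at conv", victim)
```

In this sketch the trace is used as a coarse, layer-level sensitivity score; the paper applies the second-order criterion to structured groups (channels) and inserts the point-wise implant only where a spatial convolution has been pruned away.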
