Prune Your Model Before Distill It
Unstructured pruning removes a significant fraction of a neural network's weights, but it yields a sparse network whose architecture is identical to that of the original network. Structured pruning, on the other hand, produces an efficient architecture by removing entire channels, but its parameter reduction is less significant. In this paper, we consider transferring knowledge from an unstructured-pruned network to a network with an efficient architecture (i.e., fewer channels). In particular, we apply knowledge distillation (KD) in which the teacher is a sparse network obtained by unstructured pruning and the student has the efficient architecture. We observe that learning from the pruned teacher is more effective than learning from the unpruned teacher. We further obtain promising experimental results showing that unstructured pruning can improve the performance of knowledge distillation in general.
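The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: a trained teacher is first sparsified by unstructured magnitude pruning (its architecture is unchanged, only its weight tensors become sparse), and a student with fewer channels is then trained with a standard KD loss against the sparse teacher. The model definitions, channel widths, sparsity level, temperature `T`, and mixing weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def make_cnn(width, num_classes=10):
    """Small CNN; the student uses fewer channels than the teacher."""
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(width, num_classes),
    )

teacher = make_cnn(width=64)   # assumed to be already trained
student = make_cnn(width=16)   # efficient architecture: fewer channels

# (1) Unstructured pruning: zero out the smallest-magnitude weights of the
# teacher. The network architecture stays the same; the weights become sparse.
for module in teacher.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# (2) Knowledge distillation: the sparse teacher provides the soft targets.
def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

optimizer = torch.optim.SGD(student.parameters(), lr=0.05, momentum=0.9)
x = torch.randn(8, 3, 32, 32)           # dummy batch for illustration
y = torch.randint(0, 10, (8,))

teacher.eval()
with torch.no_grad():
    teacher_logits = teacher(x)          # forward pass through the pruned teacher
loss = kd_loss(student(x), teacher_logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a real training run the pruned teacher would typically be fine-tuned after pruning and the distillation loop run over a full dataset; the snippet only illustrates how the sparse teacher and the narrower student fit together in the KD objective.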