Unlike conventional Knowledge Distillation (KD), Self-KD allows a ne...
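To make the contrast with teacher-based KD concrete, below is a minimal PyTorch sketch of one common Self-KD variant in which the network distills from a frozen snapshot of its own earlier predictions. The function name, the snapshot source, and the T/alpha defaults are illustrative assumptions, not any specific paper's method.

import torch.nn.functional as F

def self_kd_loss(student_logits, snapshot_logits, labels, T=4.0, alpha=0.5):
    # Supervised term on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft term: the network matches softened predictions produced by a
    # frozen snapshot of itself (e.g., weights from an earlier epoch),
    # so no separate teacher network is needed.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(snapshot_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl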
Knowledge distillation (KD) is an effective framework that aims to trans...
Knowledge distillation often hinges on how to define and transfer knowled...
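For reference, here is a minimal PyTorch sketch of the widely used logit-matching KD objective (softened KL divergence between teacher and student outputs, in the style of Hinton et al.); the temperature and weighting defaults are illustrative choices.

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hard-label cross-entropy on the ground truth.
    ce = F.cross_entropy(student_logits, labels)
    # Softened KL between student and teacher distributions; the T*T factor
    # keeps gradient magnitudes comparable across temperatures.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl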
We present a collaborative learning method called Mutual Contrastive Lea...
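A hedged sketch of the cross-network contrastive idea: two cohort networks embed the same batch, and an InfoNCE term pulls their embeddings of the same sample together while pushing apart embeddings of different samples. This is a generic illustration under assumed names (z1, z2, tau), not the paper's exact MCL formulation.

import torch
import torch.nn.functional as F

def cross_network_infonce(z1, z2, tau=0.1):
    # z1, z2: (N, D) embeddings of the same batch from two cohort networks.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (N, N) cross-network similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Positives sit on the diagonal (both networks' views of the same sample);
    # every off-diagonal entry acts as a negative.
    return F.cross_entropy(logits, targets)

# A symmetric mutual term over a cohort pair could then be:
# loss = cross_network_infonce(z1, z2) + cross_network_infonce(z2, z1)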
Filter pruning is widely used to reduce the computation of deep learning...
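As an illustration of the standard magnitude criterion for filter pruning, the sketch below ranks a Conv2d layer's output filters by L1 norm (the Li et al., 2017 style criterion) and rebuilds the layer with only the strongest ones. The helper name and keep_ratio are assumptions, and propagating the kept indices into the next layer's input channels is left to the caller.

import torch
import torch.nn as nn

def prune_conv_filters(conv, keep_ratio=0.5):
    # One importance score per output filter: the L1 norm of its weights.
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = torch.topk(scores, n_keep).indices.sort().values
    # Rebuild a smaller layer holding only the surviving filters.
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned, keep  # `keep` indexes the next layer's input channels

Because whole filters are removed, the resulting layer is genuinely smaller and faster without any sparse-kernel support, which is the usual argument for filter (structured) pruning over weight-level pruning.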
Network pruning is widely used to compress Deep Neural Networks (DNNs). ...
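For unstructured magnitude pruning, PyTorch's torch.nn.utils.prune module provides masking utilities; a minimal sketch follows, where the toy model and the 80% sparsity level are arbitrary illustrative choices.

import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the smallest-magnitude 80% of weights in every Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")   # bake the mask into the tensor

Note that the pruned weights remain stored as dense zero-filled tensors, so actual speedups require sparse kernels or structured pruning.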