CLIP has become a promising language-supervised visual pre-training fram...
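The snippet above refers to CLIP-style language-supervised pre-training; as a point of reference, below is a minimal sketch of its symmetric contrastive objective, assuming paired image/text embedding batches and a learnable logit scale. These names and defaults are illustrative assumptions, not details taken from the paper itself.

    import torch
    import torch.nn.functional as F

    def clip_contrastive_loss(image_emb, text_emb, logit_scale):
        # Normalize both embedding sets so similarities are cosine similarities.
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        # Pairwise similarity matrix; matched image/text pairs lie on the diagonal.
        logits = logit_scale * image_emb @ text_emb.t()
        targets = torch.arange(logits.size(0), device=logits.device)
        # Symmetric cross-entropy over image-to-text and text-to-image directions.
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2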
Deep neural networks have achieved remarkable performance for artificial...
In this report, we describe the technical details of our submission to t...
Class-Incremental Learning (CIL) aims to solve the neural networks' cata...
Unlike the conventional Knowledge Distillation (KD), Self-KD allows a ne...
The teacher-free online Knowledge Distillation (KD) aims to train an ens...
Existing works often focus on reducing the architecture redundancy for a...
Current Knowledge Distillation (KD) methods for semantic segmentation of...
Generative models often incur the catastrophic forgetting problem when t...
Knowledge distillation (KD) is an effective framework that aims to trans...
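As a reference for the KD framework mentioned above, here is a minimal sketch of the standard temperature-scaled distillation loss with soft and hard targets; the temperature T and weight alpha are illustrative defaults, not values from this work.

    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Soft targets: KL divergence between temperature-softened distributions.
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction='batchmean') * (T * T)
        # Hard targets: ordinary cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard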
The underwater acoustic channel is one of the most challenging communica...
Knowledge distillation often involves how to define and transfer knowled...
We present a collaborative learning method called Mutual Contrastive Lea...
Filter pruning is widely used to reduce the computation of deep learning...
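To make the filter-pruning setting concrete, the sketch below ranks a convolution layer's filters by their L1 norm and keeps only the strongest ones; this criterion and the keep ratio are generic illustrations of filter pruning, not the specific method proposed in the paper.

    import torch

    def l1_filter_mask(conv_weight, keep_ratio=0.5):
        # conv_weight has shape (out_channels, in_channels, kH, kW).
        # Score each output filter by the L1 norm of its weights.
        scores = conv_weight.abs().sum(dim=(1, 2, 3))
        n_keep = max(1, int(keep_ratio * scores.numel()))
        keep_idx = torch.argsort(scores, descending=True)[:n_keep]
        mask = torch.zeros_like(scores, dtype=torch.bool)
        mask[keep_idx] = True
        return mask  # True for filters to keep, False for filters to prune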
Network pruning is widely used to compress Deep Neural Networks (DNNs). ...
Existing Online Knowledge Distillation (OKD) aims to perform collaborati...
Deep convolutional neural networks (CNNs) always non-linearly aggregate t...
Existing approaches to improve the performance of convolutional neural ...
This paper proposes to use an interpretable method to dissect the channe...
Latest algorithms for automatic neural architecture search perform remar...
We design a highly efficient architecture called Gated Convolutional Net...
In this work, we propose a heuristic genetic algorithm (GA) for pruning ...