CLIP has become a promising language-supervised visual pre-training fram...
Deep neural networks have achieved remarkable performance for artificial...
In this report, we describe the technical details of our submission to t...
Class-Incremental Learning (CIL) aims to solve the neural networks' cata...
Unlike the conventional Knowledge Distillation (KD), Self-KD allows a ne...
The teacher-free online Knowledge Distillation (KD) aims to train an ens...
Existing works often focus on reducing the architecture redundancy for a...
Current Knowledge Distillation (KD) methods for semantic segmentation of...
Knowledge distillation (KD) is an effective framework that aims to trans...
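For readers unfamiliar with the KD objective referenced throughout these abstracts, the sketch below shows the canonical soft-target loss of Hinton et al. (2015): a temperature-softened KL term toward the teacher blended with the usual cross-entropy. It is a generic baseline, not the specific method of any paper listed here, and all tensor names are illustrative.

```python
# A minimal sketch of the canonical KD objective (Hinton et al., 2015),
# not the specific method of any paper above. All names are illustrative.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend soft-label distillation with the usual cross-entropy."""
    # Soften both distributions with temperature T; the KL term pulls
    # the student's distribution toward the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 rescales gradients back to the hard-label scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```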
Knowledge distillation often involves how to define and transfer knowled...
We present a collaborative learning method called Mutual Contrastive Lea...
The target of 2D human pose estimation is to locate the keypoints of bod...
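Most 2D pose estimators of the kind referenced here predict one heatmap per keypoint and read coordinates off the peaks. The sketch below shows that generic decoding step, assuming a (K, H, W) heatmap array; it is not any listed paper's exact pipeline.

```python
# Generic heatmap-to-keypoint decoding, the common output stage of
# 2D pose estimators; a sketch, not any specific paper's pipeline.
import numpy as np

def decode_heatmaps(heatmaps):
    """heatmaps: (K, H, W) array, one channel per body keypoint.
    Returns (K, 2) (x, y) coordinates and (K,) confidences."""
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1)
    idx = flat.argmax(axis=1)           # peak location per keypoint
    conf = flat.max(axis=1)             # peak value as confidence
    ys, xs = np.divmod(idx, W)          # unravel flat index to (row, col)
    return np.stack([xs, ys], axis=1), conf
```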
Network pruning is widely used to compress Deep Neural Networks (DNNs). ...
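As a point of reference for the pruning work in this list, here is a minimal sketch of global magnitude pruning, the simplest baseline criterion: zero out the smallest-magnitude weights network-wide. The papers listed here propose more elaborate criteria, which this sketch does not reproduce.

```python
# A minimal sketch of global magnitude pruning, the simplest DNN
# pruning baseline; not the criterion proposed in the paper above.
import torch

def magnitude_prune(model, sparsity=0.5):
    """Zero out the smallest-magnitude weights across conv/linear layers."""
    weights = [m.weight.data for m in model.modules()
               if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    all_w = torch.cat([w.abs().flatten() for w in weights])
    threshold = torch.quantile(all_w, sparsity)   # global magnitude cutoff
    for w in weights:
        w.mul_((w.abs() > threshold).float())     # apply binary mask in place
```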
Existing Online Knowledge Distillation (OKD) methods aim to perform collaborati...
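The collaborative setup these OKD abstracts refer to can be illustrated with a two-peer mutual-distillation loss in the spirit of Deep Mutual Learning (Zhang et al., 2018), where each student adds a KL term toward the other's detached output. This is a generic sketch, not the exact scheme of the paper above.

```python
# Two peer students distilling from each other online (in the spirit of
# Deep Mutual Learning); a sketch, not the cited paper's exact scheme.
import torch.nn.functional as F

def mutual_kd_losses(logits_a, logits_b, labels, T=3.0):
    """Each peer gets cross-entropy plus KL toward the other's output."""
    def kl(p_logits, q_logits):
        # detach() stops gradients from flowing into the peer acting as teacher
        return F.kl_div(F.log_softmax(p_logits / T, dim=1),
                        F.softmax(q_logits.detach() / T, dim=1),
                        reduction="batchmean") * T * T
    loss_a = F.cross_entropy(logits_a, labels) + kl(logits_a, logits_b)
    loss_b = F.cross_entropy(logits_b, labels) + kl(logits_b, logits_a)
    return loss_a, loss_b
```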
Deep convolutional neural networks (CNNs) always non-linearly aggregate t...
This paper proposes to use an interpretable method to dissect the channe...
Recently, multi-resolution networks (such as Hourglass, CPN, HRNet, etc....
Latest algorithms for automatic neural architecture search perform remar...
We design a highly efficient architecture called Gated Convolutional Net...
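As a rough illustration of the general gating idea behind such architectures (a feature path modulated by a learned sigmoid gate), here is a generic gated convolution block; the paper's actual design may well differ.

```python
# A generic gated convolution block (feature path modulated by a learned
# sigmoid gate). Illustrative only; the paper's actual block may differ.
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        # Gate in [0, 1] decides, per location and channel, how much
        # of the feature response passes through.
        return self.feature(x) * torch.sigmoid(self.gate(x))
```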
In this work, we propose a heuristic genetic algorithm (GA) for pruning ...
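The general GA-pruning pattern the abstract describes can be sketched as a search over binary channel-keep masks with selection, crossover, and mutation. The `fitness` function (e.g., validation accuracy of the pruned network minus a FLOPs penalty) is left abstract here, and none of the paper's specific heuristics are reproduced.

```python
# A bare-bones genetic algorithm over binary channel-keep masks, the
# general GA-pruning pattern; the paper's heuristics are not reproduced.
import random

def ga_prune(n_channels, fitness, pop=20, gens=30, p_mut=0.05):
    """fitness(mask) -> float; e.g. accuracy of the pruned net minus
    a FLOPs penalty (left abstract in this sketch)."""
    popu = [[random.randint(0, 1) for _ in range(n_channels)]
            for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popu, key=fitness, reverse=True)
        parents = scored[: pop // 2]               # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_channels)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]               # bit-flip mutation
            children.append(child)
        popu = parents + children
    return max(popu, key=fitness)                  # best mask found
```

A caller would supply something like `ga_prune(64, fitness)`, where `fitness` trains or evaluates the network restricted to the kept channels.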