Self-Supervised Pre-training of Vision Transformers for Dense Prediction Tasks

05/30/2022
by Jaonary Rabarisoa, et al.

We present a new self-supervised pre-training strategy for Vision Transformers aimed at dense prediction tasks. It is based on a contrastive loss across views that compares pixel-level representations to global image representations. This strategy produces local features better suited to dense prediction tasks than contrastive pre-training based on global image representations alone. Furthermore, our approach does not suffer when the batch size is reduced, since the number of negative examples available to the contrastive loss is on the order of the number of local features rather than the number of images. We demonstrate the effectiveness of our pre-training strategy on two dense prediction tasks: semantic segmentation and monocular depth estimation.
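To make the idea concrete, here is a minimal PyTorch sketch of the kind of local-to-global contrastive objective the abstract describes: an InfoNCE-style loss in which each pixel-level feature of one view is pulled toward the global representation of the other view of the same image, with the global representations of the other images in the batch serving as negatives. The function name, tensor shapes, and temperature are our own assumptions for illustration; the paper's actual loss may differ in its details.

```python
import torch
import torch.nn.functional as F

def local_global_nce(local_feats, global_feats, temperature=0.1):
    """Illustrative local-to-global InfoNCE loss (not the paper's exact formulation).

    local_feats:  (B, N, D) pixel-level features from view 1
                  (B images, N patch/pixel tokens, D channels).
    global_feats: (B, D) global image representations from view 2
                  (e.g. a [CLS] token or pooled features).
    """
    B, N, D = local_feats.shape
    q = F.normalize(local_feats, dim=-1).reshape(B * N, D)  # one query per local feature
    k = F.normalize(global_feats, dim=-1)                   # one key per image

    # Similarity of every local feature to every global representation.
    logits = q @ k.t() / temperature                        # (B*N, B)

    # Local feature i of image b has the global representation of
    # image b (from the other view) as its positive; all other
    # columns are negatives. Because all B*N local features contribute
    # comparisons, the number of negative pairs scales with the number
    # of local features, not just with the number of images.
    targets = torch.arange(B, device=q.device).repeat_interleave(N)
    return F.cross_entropy(logits, targets)

# Toy usage: 4 images, 196 patch tokens, 256-d features.
local_v1 = torch.randn(4, 196, 256)  # pixel-level features of view 1
global_v2 = torch.randn(4, 256)      # global features of view 2
loss = local_global_nce(local_v1, global_v2)
```

In practice one would symmetrize the loss across the two views and feed it features from a Vision Transformer backbone; the sketch only shows why the negatives multiply with the number of local features, which is what relaxes the large-batch requirement typical of image-level contrastive methods.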
