Self-Contrastive Learning

06/29/2021
by Sangmin Bae, et al.

This paper proposes a novel contrastive learning framework, coined Self-Contrastive (SelfCon) Learning, that self-contrasts among multiple outputs from different levels of a network. We confirmed that the SelfCon loss guarantees a lower bound on the mutual information (MI) between the intermediate and final representations. Moreover, we empirically showed, via various MI estimators, that the SelfCon loss correlates strongly with increased MI and with better classification performance. In our experiments, SelfCon surpasses supervised contrastive (SupCon) learning without requiring a multi-viewed batch and at a lower computational cost. In particular, on ResNet-18 we achieved a top-1 classification accuracy of 76.45%, which is 2.87% higher than that of SupCon. We found that mitigating both the vanishing-gradient and overfitting issues makes our method outperform its counterparts.
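The sketch below illustrates how a SelfCon-style objective might look in PyTorch, assuming a backbone with an auxiliary projection head attached to an intermediate block. The function name `selfcon_loss` and the exact positive-pair construction are illustrative assumptions, not the authors' reference implementation; the point is only that the intermediate and final representations of a single view contrast against each other, so no multi-viewed batch is required.

```python
# Hypothetical sketch of a SelfCon-style supervised contrastive loss.
# z_mid and z_last are assumed to come from an intermediate (sub-network)
# projection head and the final projection head of the same input view.
import torch
import torch.nn.functional as F


def selfcon_loss(z_mid, z_last, labels, temperature=0.1):
    """Contrast intermediate and final features of one view (no multi-viewed batch)."""
    # Stack both feature levels into a single (2B, D) batch and L2-normalize.
    feats = F.normalize(torch.cat([z_mid, z_last], dim=0), dim=1)
    labels = torch.cat([labels, labels], dim=0)  # (2B,)

    # Pairwise cosine similarities, temperature-scaled.
    sim = feats @ feats.T / temperature  # (2B, 2B)

    # Exclude self-similarity from both positives and the softmax denominator.
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float('-inf'))

    # Positives: same class label (including the other-level feature of the same sample).
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-softmax over each anchor's similarities, averaged over its positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    loss = -pos_log_prob.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()


if __name__ == "__main__":
    # Toy usage with random features standing in for the two projection heads.
    B, D = 8, 128
    z_mid, z_last = torch.randn(B, D), torch.randn(B, D)
    labels = torch.randint(0, 4, (B,))
    print(selfcon_loss(z_mid, z_last, labels).item())
```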
