Contrastive Learning with Stronger Augmentations

04/15/2021
by Xiao Wang, et al.

Representation learning has advanced significantly with the development of contrastive learning methods. Most of these methods benefit from data augmentations that are carefully designed to preserve an image's identity, so that images transformed from the same instance can still be retrieved. However, such carefully designed transformations prevent us from exploring the novel patterns exposed by other, stronger transformations. Meanwhile, as found in our experiments, strong augmentations distort the images' structures, making retrieval difficult. Thus, we propose a general framework called Contrastive Learning with Stronger Augmentations (CLSA) to complement current contrastive learning approaches. Here, the distribution divergence between the weakly and strongly augmented images over a representation bank is used to supervise the retrieval of strongly augmented queries from a pool of instances. Experiments on ImageNet and downstream datasets show that the information from strongly augmented images can significantly boost performance. For example, CLSA achieves 76.2% top-1 accuracy with a standard ResNet-50 architecture and a fine-tuned single-layer classifier, almost the same level as the 76.5% of its supervised counterpart. Code and pre-trained models are available at https://github.com/maple-research-lab/CLSA.
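The core idea described above — using the weak view's similarity distribution over a representation bank to supervise the strong view's retrieval — can be sketched as a divergence (cross-entropy) loss between the two distributions. The snippet below is a minimal NumPy illustration, not the authors' implementation; the function name `clsa_ddm_loss`, the temperature value, and the use of plain cross-entropy as the divergence are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def clsa_ddm_loss(q_weak, q_strong, bank, temperature=0.2):
    """Sketch of a distribution-divergence loss over a representation bank.

    q_weak, q_strong: L2-normalized embeddings (N, D) of the weakly and
        strongly augmented views of the same N images.
    bank: L2-normalized representation bank (K, D) of stored keys.
    Returns the mean cross-entropy between the weak view's similarity
    distribution (used as a fixed target) and the strong view's.
    """
    # Similarity distribution of each view over the K bank entries.
    p_weak = softmax(q_weak @ bank.T / temperature)        # (N, K) target
    p_strong = softmax(q_strong @ bank.T / temperature)    # (N, K) prediction
    # Cross-entropy: the weak distribution supervises the strong view,
    # instead of forcing the distorted strong view into a hard instance label.
    return float(-(p_weak * np.log(p_strong + 1e-12)).sum(axis=1).mean())
```

In a full training loop this term would be added to a standard contrastive (e.g. InfoNCE) loss on the weak views, with the weak-view distribution treated as a constant target (no gradient through it).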
