Adversarial Momentum-Contrastive Pre-Training

12/24/2020
by Cong Xu, et al.

Deep neural networks are vulnerable both to semantically invariant corruptions and to imperceptible artificial perturbations. Although data augmentation can improve robustness against the former, it offers no guarantee against the latter; adversarial training exhibits the opposite trade-off. Recent studies have shown that adversarial self-supervised pre-training helps extract representations that are invariant under both data augmentations and adversarial perturbations. Building on the idea of MoCo, this paper proposes a novel adversarial momentum-contrastive (AMOC) pre-training approach that maintains two dynamic memory banks of historical clean and adversarial representations, respectively, so as to exploit discriminative representations that remain consistent over long training horizons. Compared with existing self-supervised pre-training approaches, AMOC can use a smaller batch size and fewer training epochs yet learn more robust features. Empirical results show that the proposed approach further improves the current state-of-the-art adversarial robustness.
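
The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of how a MoCo-style objective with two memory banks (one holding clean keys, one holding adversarial keys) might be wired up. The encoder, queue size, momentum, temperature, the simplified FIFO queue update, and the pairing of queries with queues are illustrative assumptions rather than the paper's actual design; the adversarial views x_adv are assumed to be produced by an external attack (e.g., PGD on the contrastive loss).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AMOCSketch(nn.Module):
    """Illustrative MoCo-style setup with separate clean and adversarial memory banks.
    All hyperparameters and the queue/loss pairing are placeholder assumptions."""

    def __init__(self, encoder_fn, dim=128, queue_size=4096, momentum=0.999, temperature=0.2):
        super().__init__()
        self.encoder_q = encoder_fn()   # query encoder, trained by backprop
        self.encoder_k = encoder_fn()   # key encoder, updated by momentum
        for p_q, p_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            p_k.data.copy_(p_q.data)
            p_k.requires_grad = False
        self.m = momentum
        self.t = temperature
        # Two dynamic memory banks of historical key representations (stored as columns).
        self.register_buffer("clean_queue", F.normalize(torch.randn(dim, queue_size), dim=0))
        self.register_buffer("adv_queue", F.normalize(torch.randn(dim, queue_size), dim=0))

    @torch.no_grad()
    def _momentum_update(self):
        # Key encoder follows the query encoder as an exponential moving average.
        for p_q, p_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            p_k.data = self.m * p_k.data + (1.0 - self.m) * p_q.data

    @torch.no_grad()
    def _enqueue(self, queue, keys):
        # Simplified FIFO update: drop the oldest keys, append the newest batch.
        return torch.cat([queue[:, keys.shape[0]:], keys.t()], dim=1)

    def contrastive_loss(self, q, k_pos, queue):
        # InfoNCE: one positive key per query, negatives drawn from a memory bank.
        l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)
        l_neg = torch.einsum("nc,ck->nk", q, queue.clone().detach())
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)

    def forward(self, x_clean, x_adv):
        # Queries from the online encoder for both the clean and the adversarial view.
        q_clean = F.normalize(self.encoder_q(x_clean), dim=1)
        q_adv = F.normalize(self.encoder_q(x_adv), dim=1)
        with torch.no_grad():
            self._momentum_update()
            k_clean = F.normalize(self.encoder_k(x_clean), dim=1)
            k_adv = F.normalize(self.encoder_k(x_adv), dim=1)
        # Contrast each view against the opposite key and its memory bank
        # (this particular pairing is an assumption for illustration).
        loss = (self.contrastive_loss(q_adv, k_clean, self.clean_queue)
                + self.contrastive_loss(q_clean, k_adv, self.adv_queue))
        self.clean_queue = self._enqueue(self.clean_queue, k_clean)
        self.adv_queue = self._enqueue(self.adv_queue, k_adv)
        return loss


# Illustrative usage with a toy encoder (placeholder, not the paper's backbone):
# model = AMOCSketch(lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128)))
# loss = model(torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))
```

As in MoCo, keeping the key encoder as a slowly moving average of the query encoder is what allows the two queues to hold representations that stay consistent across many past batches without requiring a large in-batch set of negatives, which is why such a scheme can work with a smaller batch size.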
