Small in-distribution changes in 3D perspective and lighting fool both CNNs and Transformers

06/30/2021
by Spandan Madan, et al.

Neural networks are susceptible to small transformations, including 2D rotations and shifts, image crops, and even changes in object colors. This is often attributed to biases in the training dataset and to the lack of 2D shift-invariance that comes from not respecting the sampling theorem. In this paper, we challenge this hypothesis by training and testing on unbiased datasets, and show that networks are brittle to both small 3D perspective changes and lighting variations, which cannot be explained by dataset bias or lack of shift-invariance. To find these in-distribution errors, we introduce an evolution strategies (ES) based approach, which we call CMA-Search. Despite training with a large-scale (0.5 million images), unbiased dataset of camera and light variations, in over 71% of cases CMA-Search can find camera parameters in the vicinity of a correctly classified image that lead to in-distribution misclassifications with less than a 3.6% change in parameters. For lighting variations, CMA-Search finds misclassifications in 33% of cases, again with only a small change in the parameters. Finally, we extend this method to find misclassifications in the vicinity of ImageNet images for both ResNet and OpenAI's CLIP model.
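The abstract only names CMA-Search at a high level; the snippet below is a minimal, hypothetical sketch of how an evolution-strategies search over camera parameters might be organized, using the off-the-shelf `cma` package's ask/tell loop. The `render_scene` and `classify_true_label_confidence` functions are toy placeholders standing in for the paper's 3D renderer and trained network, and the 0.5 decision threshold and the parameter vector are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an evolution-strategies search, in the spirit of the
# paper's CMA-Search, probing camera parameters around a correctly classified
# scene. `render_scene` and `classify_true_label_confidence` are toy stand-ins
# for the paper's renderer and network, not the authors' code.
import numpy as np
import cma  # pip install cma


def render_scene(camera_params: np.ndarray) -> np.ndarray:
    """Toy placeholder renderer: produces a fake 'image' from the parameters."""
    rng = np.random.default_rng(abs(hash(camera_params.tobytes())) % (2**32))
    return rng.random((64, 64, 3)) * camera_params.sum()


def classify_true_label_confidence(image: np.ndarray) -> float:
    """Toy placeholder classifier: confidence assigned to the true label."""
    return float(np.tanh(image.mean()))


def cma_search(start_params: np.ndarray, sigma: float = 0.05, max_iters: int = 200):
    """Search the vicinity of `start_params` for a nearby misclassification.

    The fitness is the classifier's confidence in the true label, so the ES
    pushes it down; confidence below 0.5 is treated as a prediction flip.
    """
    es = cma.CMAEvolutionStrategy(start_params.tolist(), sigma)
    for _ in range(max_iters):
        candidates = es.ask()
        scores = [classify_true_label_confidence(render_scene(np.asarray(c)))
                  for c in candidates]
        es.tell(candidates, scores)
        best = int(np.argmin(scores))
        if scores[best] < 0.5:
            return np.asarray(candidates[best])  # nearby in-distribution error
    return None  # no misclassification found within the search budget


if __name__ == "__main__":
    start = np.array([1.0, 0.3, 2.0, 0.0, 0.5, 1.2])  # e.g. camera position + look-at
    result = cma_search(start)
    print("Nearby misclassifying camera parameters:", result)
```

The same loop could presumably be run over light parameters instead of camera parameters, and a margin between the true-class score and the runner-up class would be an equally reasonable fitness choice.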
