Divide, Denoise, and Defend against Adversarial Attacks

Deep neural networks, despite their success across a wide range of machine learning tasks, are known to be highly vulnerable to adversarial perturbations. Improving the robustness of neural networks against these attacks is important, especially for security-critical applications. To defend against such attacks, we propose dividing the input image into multiple patches, denoising each patch independently, and reconstructing the image, without losing significant image content. The proposed defense mechanism is non-differentiable, which makes it non-trivial for an adversary to apply gradient-based attacks. Moreover, we do not fine-tune the network on adversarial examples, making it more robust against unknown attacks. We present a thorough analysis of the trade-off between accuracy and robustness against adversarial attacks, and evaluate our method under black-box, grey-box, and white-box settings. The proposed method outperforms the state of the art by a significant margin on the ImageNet dataset under grey-box attacks while maintaining good accuracy on clean images. We also establish a strong baseline for a novel white-box attack.
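For intuition, the sketch below illustrates the divide-denoise-reconstruct idea as a preprocessing step applied before classification. The patch size (56x56) and the median filter are hypothetical stand-ins chosen for illustration; the abstract does not specify the patch size or the per-patch denoiser used in the paper.

```python
# Minimal sketch of a divide-denoise-reconstruct preprocessing defense.
# Assumptions (not from the paper): non-overlapping 56x56 patches and a
# median filter as an illustrative stand-in for the per-patch denoiser.
import numpy as np
from scipy.ndimage import median_filter


def divide_denoise_reconstruct(image: np.ndarray, patch: int = 56) -> np.ndarray:
    """Split an HxWxC image into non-overlapping patches, denoise each
    patch independently, and stitch the patches back together."""
    h, w, _ = image.shape
    out = image.copy()
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = image[y:y + patch, x:x + patch]
            # Per-patch denoising; size=(3, 3, 1) filters spatially
            # without mixing color channels.
            out[y:y + patch, x:x + patch] = median_filter(block, size=(3, 3, 1))
    return out


if __name__ == "__main__":
    # Example: preprocess a random 224x224 RGB "image" before feeding it
    # to a classifier.
    adversarial = np.random.rand(224, 224, 3).astype(np.float32)
    cleaned = divide_denoise_reconstruct(adversarial, patch=56)
    print(cleaned.shape)  # (224, 224, 3)
```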
