Improving Network Robustness against Adversarial Attacks with Compact Convolution

12/03/2017
by Rajeev Ranjan et al.

Though Convolutional Neural Networks (CNNs) have surpassed human-level performance on tasks such as object classification and face verification, they can easily be fooled by adversarial attacks. These attacks add a small perturbation to the input image that causes the network to misclassify the sample. In this paper, we focus on neutralizing adversarial attacks by exploring the effect of loss functions such as CenterLoss and L2-Softmax Loss on robustness to adversarial perturbations. Additionally, we propose compact convolution, a novel method of convolution that, when incorporated into conventional CNNs, improves their robustness. Compact convolution ensures that the features at every layer are bounded and close to each other. Extensive experiments show that Compact Convolutional Networks (CCNs) neutralize multiple types of attacks and perform better than existing methods at defending against adversarial attacks.
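For intuition, here is a minimal PyTorch sketch of the kind of per-layer feature compactness the abstract describes: the output of a convolution is L2-normalized across channels and rescaled, so activations at every layer stay on a bounded hypersphere. The module name `CompactConv2d`, the fixed scale `alpha`, and the placement of the normalization are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompactConv2d(nn.Module):
    """Hypothetical sketch of a "compact" convolution: a standard
    convolution followed by per-position L2 normalization of the
    channel vector, keeping features bounded at every layer."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 alpha=10.0, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, **kwargs)
        # alpha is an assumed fixed scale, analogous to the scale
        # factor used in the L2-Softmax Loss.
        self.alpha = alpha

    def forward(self, x):
        y = self.conv(x)
        # Project each spatial position's channel vector onto a sphere
        # of radius alpha, so features are bounded and close together.
        return self.alpha * F.normalize(y, p=2, dim=1)

# Usage: a drop-in replacement for nn.Conv2d in a conventional CNN.
layer = CompactConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```

Under this reading, bounding intermediate features limits how far a small input perturbation can move the representation at any layer, which is the stated mechanism for the improved robustness.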
