On Configurable Defense against Adversarial Example Attacks

12/06/2018
by Bo Luo, et al.

Machine learning systems based on deep neural networks (DNNs) have gained mainstream adoption in many applications. Recently, however, DNNs have been shown to be vulnerable to adversarial example attacks that apply slight perturbations to the inputs. Existing defense mechanisms against such attacks try to improve the overall robustness of the system, but they do not differentiate between targeted attacks, even though the impacts of those attacks may vary significantly. To tackle this problem, in this work we propose a novel configurable defense mechanism that can flexibly tune the robustness of the system against different targeted attacks to satisfy application requirements. This is achieved by refining the DNN loss function with an attack-sensitive matrix that represents the impacts of the different targeted attacks. Experimental results on the CIFAR-10 and GTSRB datasets demonstrate the efficacy of the proposed solution.
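
The abstract does not give the exact form of the refined loss, so the following is only a rough sketch of the general idea: augmenting cross-entropy with a cost term driven by an attack-sensitivity matrix. The function sensitivity_weighted_loss, the matrix S, and the weight alpha are illustrative assumptions, not the paper's actual formulation. A minimal PyTorch sketch:

    import torch
    import torch.nn.functional as F

    def sensitivity_weighted_loss(logits, targets, S, alpha=1.0):
        """Cross-entropy plus an expected-misclassification-cost penalty.

        S is a hypothetical (num_classes x num_classes) attack-sensitivity
        matrix: S[i, j] >= 0 is the cost of classifying true class i as
        class j (with S[i, i] = 0). alpha trades off plain accuracy
        against the configured sensitivities.
        """
        ce = F.cross_entropy(logits, targets)
        probs = F.softmax(logits, dim=1)  # (batch, num_classes)
        # Expected cost: take the cost row for each sample's true class
        # and weight it by the predicted class probabilities.
        expected_cost = (S[targets] * probs).sum(dim=1).mean()
        return ce + alpha * expected_cost

    # Toy usage on a 10-class problem (e.g., a GTSRB-sized subset):
    num_classes = 10
    S = torch.ones(num_classes, num_classes) - torch.eye(num_classes)
    S[0, 3] = 5.0  # treat "class 0 attacked into class 3" as especially harmful

    logits = torch.randn(4, num_classes, requires_grad=True)
    targets = torch.tensor([0, 1, 2, 3])
    loss = sensitivity_weighted_loss(logits, targets, S)
    loss.backward()

With S set to all zeros the penalty vanishes and the loss reduces to standard cross-entropy; raising S[i, j] makes the confusion "true class i classified as class j" more expensive during training, which is one plausible way to tune robustness against specific targeted attacks.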
