A Formalization of Robustness for Deep Neural Networks

03/24/2019
by Tommaso Dreossi, et al.

Deep neural networks have been shown to lack robustness to small input perturbations. The process of generating the perturbations that expose this lack of robustness is known as adversarial input generation, and it depends on the goals and capabilities of the adversary. In this paper, we propose a unifying formalization of the adversarial input generation process from a formal methods perspective. We provide a definition of robustness that is general enough to capture different formulations. The expressiveness of our formalization is demonstrated by modeling and comparing a variety of adversarial attack techniques.
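
As a hedged illustration of the kind of property such a formalization is meant to capture, the snippet below states the standard local (pointwise) robustness condition in LaTeX; the symbols f, x, epsilon, and the choice of norm are generic placeholders, not the paper's own notation.

    % Illustrative only: local robustness of a classifier f at an input x,
    % with perturbation radius \epsilon (placeholder notation, not the paper's).
    % f is \epsilon-locally robust at x iff every input in the \epsilon-ball
    % around x receives the same label as x.
    \[
      \forall x' .\; \lVert x' - x \rVert \le \epsilon
      \;\Longrightarrow\; f(x') = f(x)
    \]
    % An adversarial input is then any witness x' that violates this implication.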
