TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation

03/18/2021
by Todd Huster, et al.

Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model. Detection of backdoors in trained models without access to the training data or example triggers is an important open problem. In this paper, we identify an interesting property of these models: adversarial perturbations transfer from image to image more readily in poisoned models than in clean models. This holds for a variety of model and trigger types, including triggers that are not linearly separable from clean data. We use this feature to detect poisoned models in the TrojAI benchmark, as well as additional models.
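
The detection signal described above can be made concrete: craft an adversarial perturbation on one clean image, apply it to other clean images, and measure how often it still fools the model. Below is a minimal PyTorch sketch of that idea. It is illustrative only, not the authors' actual TOP procedure; the use of FGSM, the epsilon value, the function names, and the decision threshold are all assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=0.03):
    """Craft an untargeted FGSM perturbation for the images in x.
    (FGSM is an assumed stand-in for whatever attack one prefers.)"""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (eps * x.grad.sign()).detach()

def transfer_score(model, loader, eps=0.03, device="cpu"):
    """Fraction of *other* clean images misclassified when a perturbation
    crafted on a single source image is applied to them. A high score is
    the suspicious-transferability signal described in the abstract."""
    model.eval().to(device)
    it = iter(loader)
    src_x, src_y = next(it)
    # Craft the perturbation on one image; its (1, C, H, W) shape
    # broadcasts over any later batch.
    delta = fgsm_perturbation(
        model, src_x[:1].to(device), src_y[:1].to(device), eps
    )

    fooled, total = 0, 0
    with torch.no_grad():
        for x, y in it:
            x, y = x.to(device), y.to(device)
            pred = model((x + delta).clamp(0, 1)).argmax(dim=1)
            fooled += (pred != y).sum().item()
            total += y.numel()
    return fooled / max(total, 1)

# Illustrative decision rule: compare against scores observed on models
# known to be clean. The cutoff 0.5 is a placeholder, not a value from
# the paper.
# is_poisoned = transfer_score(model, clean_loader) > 0.5
```

In practice a fixed cutoff would be calibrated against a reference set of known-clean models, since the baseline transferability of a single perturbation varies with architecture and dataset.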
