Defensive Distillation is Not Robust to Adversarial Examples

07/14/2016
by Nicholas Carlini, et al.

We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.
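For context, defensive distillation (Papernot et al., 2016) is the defense shown broken here: a teacher network is trained with its softmax divided by a high temperature T, a distilled network is then trained at the same T on the teacher's soft labels, and the distilled network is deployed at T=1, which saturates its softmax and was believed to mask the gradients attackers rely on. The sketch below is a minimal illustration of that training recipe, assuming PyTorch, a toy linear classifier, random stand-in data, and T=20; it is not the authors' code.

```python
# Minimal sketch of defensive distillation (the attacked defense).
# The network, data, and temperature below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 20.0  # distillation temperature (assumed value; the defense uses a high T)

def make_net(in_dim=10, n_classes=3):
    # Tiny stand-in classifier; the original work uses conv nets on MNIST/CIFAR-10.
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

# Toy data, for illustration only.
x = torch.randn(256, 10)
y = torch.randint(0, 3, (256,))

# 1) Train the teacher on hard labels, with its softmax at temperature T.
teacher = make_net()
opt = torch.optim.Adam(teacher.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(teacher(x) / T, y)
    loss.backward()
    opt.step()

# 2) Train the distilled (defended) network on the teacher's soft labels,
#    again at temperature T.
with torch.no_grad():
    soft_labels = F.softmax(teacher(x) / T, dim=1)

student = make_net()
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
    loss.backward()
    opt.step()

# 3) At test time the student runs at T=1; the saturated softmax was thought
#    to blunt gradient-based attacks, which is the claim this paper refutes.
preds = student(x).argmax(dim=1)
```

The paper's result is that this defense does not stop targeted misclassification: an attacker who adapts to the temperature trick finds adversarial examples against the distilled network about as easily as against an undefended one.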
