Disrupting Model Training with Adversarial Shortcuts

06/12/2021
by Ivan Evtimov, et al.

When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes. Successful model training may be preventable with carefully designed dataset modifications, and we present a proof-of-concept approach for the image classification setting. We propose methods based on the notion of adversarial shortcuts, which encourage models to rely on non-robust signals rather than semantic features, and our experiments demonstrate that these measures successfully prevent deep learning models from achieving high accuracy on real, unmodified data examples.
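The full construction is in the paper, but the core idea of an adversarial shortcut admits a minimal sketch: overlay each training image with a simple pattern that is perfectly correlated with its label, so a model can fit the labels from the overlay alone and never needs to learn semantic features that transfer to clean data. The function name, the per-class random pattern, and the strength parameter below are illustrative assumptions, not the authors' exact method.

    import numpy as np

    def add_adversarial_shortcut(image, label, strength=0.1):
        # Deterministic per-class noise pattern: every image of class
        # `label` receives the same overlay, making the pattern a
        # perfect (but non-semantic) predictor of the label.
        rng = np.random.default_rng(seed=label)
        pattern = rng.uniform(-1.0, 1.0, size=image.shape)
        # Keep pixel values in the valid [0, 1] range for float images.
        return np.clip(image + strength * pattern, 0.0, 1.0)

    # Example: poison a toy batch of random float "images" in [0, 1].
    images = np.random.rand(4, 32, 32, 3).astype(np.float32)
    labels = np.array([0, 1, 2, 3])
    poisoned = np.stack([add_adversarial_shortcut(img, lbl)
                         for img, lbl in zip(images, labels)])

A model trained on data poisoned this way can reach high training accuracy by reading the overlay alone, which is consistent with the abstract's claim that accuracy on real, unmodified examples stays low.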
