A law of adversarial risk, interpolation, and label noise

07/08/2022
by Daniel Paleka, et al.

In supervised learning, it has been shown that, in many settings, label noise in the training data can be interpolated (fit exactly) without penalty to test accuracy. We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem relating adversarial risk to label noise in terms of the data distribution. Our results are almost sharp without accounting for the inductive bias of the learning algorithm. We also show that inductive bias makes the effect of label noise much stronger.
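The phenomenon can be seen in a minimal toy experiment (this is an illustration of the general claim, not the paper's construction): a 1-nearest-neighbor classifier interpolates its training set exactly, so any flipped labels become points where a small input perturbation flips the prediction. The setup, noise rate, and the lower-bound estimator below are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup (illustrative only): the true label is sign(x), and a
# 1-nearest-neighbor classifier interpolates the training set exactly,
# flipped labels included.
n, noise_rate = 2000, 0.1
X = rng.uniform(-1.0, 1.0, size=n)
y = np.sign(X)
flip = rng.random(n) < noise_rate   # label noise: flip ~10% of labels
y[flip] = -y[flip]

def adv_risk_lb(eps, n_test=2000):
    """Lower bound on adversarial risk at perturbation radius eps.

    For 1-NN, moving the input onto any training point within eps whose
    label disagrees with the true label forces a wrong prediction, so the
    fraction of test points with such a neighbor lower-bounds the risk.
    """
    xs = rng.uniform(-0.9, 0.9, size=n_test)
    hits = [np.any((np.abs(X - x) <= eps) & (y != np.sign(x))) for x in xs]
    return float(np.mean(hits))

for eps in (0.0, 0.01, 0.05):
    print(f"eps={eps:.2f}  adversarial risk >= {adv_risk_lb(eps):.3f}")
```

Even though the flipped points barely affect clean test accuracy, the adversarial risk grows rapidly with the perturbation radius: once the eps-ball around a typical test point contains some wrongly-labeled training point, the interpolating classifier can be attacked there.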
