Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning

11/24/2020
by   Luiz F. O. Chamon, et al.

Prediction credibility measures, in the form of confidence intervals or probability distributions, are fundamental in statistics and machine learning to characterize model robustness, detect out-of-distribution samples (outliers), and protect against adversarial attacks. To be effective, these measures should (i) account for the wide variety of models used in practice, (ii) be computable for trained models or at least avoid modifying established training procedures, (iii) forgo the use of data, which can expose them to the same robustness issues and attacks as the underlying model, and (iv) come with theoretical guarantees. These principles underlie the framework developed in this work, which expresses credibility as a risk-fit trade-off, i.e., a compromise between how much the fit can be improved by perturbing the model input and the magnitude of this perturbation (risk). Using a constrained optimization formulation and duality theory, we analyze this compromise and show that this balance can be determined counterfactually, without having to test multiple perturbations. This results in an unsupervised, a posteriori method of assigning prediction credibility for any (possibly non-convex) differentiable model, from RKHS-based solutions to any (feedforward, convolutional, graph) neural network architecture. Its use is illustrated in data filtering and defense against adversarial attacks.
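To make the risk-fit trade-off mentioned above concrete, one schematic way to write it is as a constrained problem over an input perturbation; this is an illustrative sketch only, with symbols (delta, f_theta, ell, epsilon) chosen for exposition rather than taken from the paper:

\min_{\delta} \ \|\delta\|^2 \quad \text{(risk: size of the input perturbation)}
\quad \text{s.t.} \quad \ell\big(f_\theta(x + \delta)\big) \le \ell\big(f_\theta(x)\big) - \epsilon \quad \text{(fit: required improvement)}

Informally, the harder it is to improve the fit at x without a large perturbation, the more credible the prediction at x; the abstract's duality-based, counterfactual analysis is what allows this balance to be assessed without explicitly solving for multiple perturbations.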

