Robustness Quantification for Classification with Gaussian Processes

05/28/2019
by Arno Blaas, et al.

We consider Bayesian classification with Gaussian processes (GPs) and define the robustness of a classifier in terms of the worst-case difference in the classification probabilities with respect to input perturbations. For a subset of the input space T⊆R^m, such properties reduce to computing the infimum and supremum of the classification probabilities over all points in T. Unfortunately, computing these values is very challenging, as the classification probabilities cannot be expressed analytically. Nevertheless, using the theory of Gaussian processes, we develop a framework that, for a given dataset D, a compact set of input points T⊆R^m and an error threshold ϵ>0, computes lower and upper bounds on the classification probabilities by over-approximating the exact range with an error bounded by ϵ. We provide an experimental comparison of several approximate inference methods for classification on tasks associated with the MNIST and SPAM datasets, showing that our results enable quantification of uncertainty in adversarial classification settings.
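To illustrate the quantities involved, the sketch below empirically estimates the minimum and maximum class probability of a GP classifier over a small L∞ ball T around a test input. This is only a naive grid search on a hypothetical toy dataset, not the paper's ϵ-bounded over-approximation framework, and it provides no formal guarantees; the classifier, kernel, perturbation radius and grid resolution are all assumptions made for illustration.

```python
# Naive illustration (not the paper's certified method): grid-search the
# predictive class probability of a GP classifier over an L-infinity ball T
# around a test point to approximate inf/sup of p(class=1 | x) for x in T.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy 2-D binary classification data (a stand-in for SPAM/MNIST features).
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0).fit(X, y)

def prob_range_over_box(x_star, radius, n_grid=50):
    """Empirically estimate min/max of p(class=1 | x) for x in the
    L-infinity ball of the given radius around x_star (2-D inputs only).
    Unlike the paper's framework, this grid search carries no error bound."""
    g = np.linspace(-radius, radius, n_grid)
    dx, dy = np.meshgrid(g, g)
    pts = x_star + np.column_stack([dx.ravel(), dy.ravel()])
    probs = gpc.predict_proba(pts)[:, 1]
    return probs.min(), probs.max()

x_star = X[0]
lo, hi = prob_range_over_box(x_star, radius=0.1)
print(f"p(class=1) at x*: {gpc.predict_proba(x_star[None])[0, 1]:.3f}")
print(f"empirical range over T: [{lo:.3f}, {hi:.3f}]")
```

A wide gap between the empirical minimum and maximum suggests the prediction is sensitive to perturbations in T; the paper's contribution is to bound this range rigorously, with the over-approximation error controlled by ϵ.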
