Monge beats Bayes: Hardness Results for Adversarial Training

06/08/2018
by   Zac Cranko, et al.

The last few years have seen extensive empirical study of the robustness of neural networks, with a concerning conclusion: several state-of-the-art approaches are highly sensitive to adversarial perturbations of their inputs. This has prompted a surge of interest in learning with defense mechanisms against specific adversaries, known as adversarial training. Despite some impressive advances, little is known about how best to frame a resource-bounded adversary so that it is severely detrimental to learning, a non-trivial problem that entails, at a minimum, the choice of loss and class of classifiers. We suggest a formal answer to this question, and pin down a simple sufficient property for any given class of adversaries to be detrimental to learning. This property involves a central measure of "harmfulness" that generalizes the well-known class of integral probability metrics, and hence the maximum mean discrepancy. A key feature of our result is that it holds for all proper losses; for a popular subset of these, the optimisation of this central measure turns out to be independent of the loss. We then give a sufficient condition for this property to hold for Lipschitz classifiers, which relies on framing the problem in terms of optimal transport theory. Finally, we deliver a negative boosting result showing how weakly contractive adversaries for an RKHS can be combined to build a maximally detrimental adversary, show that some existing implemented adversaries are proxies for our optimal transport adversaries, and provide a toy experiment assessing such adversaries in a simple setting, demonstrating that adversarial training can grant additional robustness at test time.
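The abstract's measure of "harmfulness" generalizes integral probability metrics such as the maximum mean discrepancy (MMD). As background for that notion only (this is not the paper's measure), here is a minimal sketch of the standard biased empirical MMD estimate with a Gaussian kernel; the function names, bandwidth, and sample sizes are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd_squared(X, Y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    # MMD^2(P, Q) ~ mean k(x, x') + mean k(y, y') - 2 mean k(x, y).
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))  # samples from P
Y = rng.normal(2.0, 1.0, size=(200, 2))  # samples from a shifted Q
Z = rng.normal(0.0, 1.0, size=(200, 2))  # fresh samples from P

print(mmd_squared(X, Y))  # clearly positive: distributions differ
print(mmd_squared(X, Z))  # near zero: same distribution
```

A small empirical MMD says a witness function in the kernel's unit ball cannot separate the two samples, which is the sense in which IPM-style quantities measure how far an adversary has moved the data distribution.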
