Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks

06/08/2019
by Maksym Andriushchenko, et al.

The problem of adversarial examples has been studied extensively for neural networks. However, for boosting, in particular boosted decision trees and decision stumps, there are almost no results, even though boosted decision trees, e.g. XGBoost, are quite popular due to their interpretability and good prediction performance. We show in this paper that for boosted decision stumps the exact min-max optimal robust loss and test error for an l_∞-attack can be computed in O(nT log T), where T is the number of decision stumps and n the number of data points, and that an optimal update step of the ensemble can be done in O(n^2 T log T). For boosted trees the exact computation is intractable, but we show how to optimize an upper bound on the robust loss. To the best of our knowledge, these are the first algorithms directly optimizing provable robustness guarantees in the area of boosting. We make the code of all our experiments publicly available at https://github.com/max-andr/provably-robust-boosting
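To make the stump setting concrete, below is a minimal Python sketch of the exact robust-margin computation the abstract refers to. The key observation is that an l_∞ adversary can perturb each coordinate independently, and each decision stump depends on a single coordinate, so the worst-case ensemble output decomposes over coordinates and is found by checking a small set of candidate perturbations per coordinate. The stump layout (coord, threshold, w_left, w_right) and all names here are illustrative assumptions, not the paper's actual code.

from collections import defaultdict

def exact_robust_margin(stumps, x, y, eps):
    # Exact minimal margin y * F(x') over ||x' - x||_inf <= eps for a stump
    # ensemble F(x') = sum over stumps of (w_left if x'[coord] < threshold
    # else w_right). Hypothetical layout: stumps is a list of tuples
    # (coord, threshold, w_left, w_right); the label y is +1 or -1.
    by_coord = defaultdict(list)
    for j, b, wl, wr in stumps:
        by_coord[j].append((b, wl, wr))
    margin = 0.0
    for j, group in by_coord.items():
        lo, hi = x[j] - eps, x[j] + eps
        # Restricted to coordinate j, F is piecewise constant with breakpoints
        # at the thresholds, so one candidate per piece suffices: the left
        # endpoint of the interval and every threshold falling inside it.
        candidates = [lo] + [b for b, _, _ in group if lo < b <= hi]
        # Coordinates do not interact under an l_inf budget, so the adversary
        # minimizes each coordinate's contribution independently.
        margin += min(
            y * sum(wl if v < b else wr for b, wl, wr in group)
            for v in candidates
        )
    return margin

# Exact robust test error: fraction of points whose worst-case margin is
# non-positive, e.g.
#   robust_err = sum(exact_robust_margin(stumps, x, y, eps) <= 0
#                    for x, y in test_set) / len(test_set)

Note that this naive candidate scan is quadratic in the number of stumps per coordinate; the O(nT log T) complexity quoted above suggests the paper additionally sorts the thresholds and accumulates partial sums, which this sketch omits for clarity.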
