Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters

05/23/2021
by Javier Carnerero-Cano et al.

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem. This allows us to formulate optimal attacks, select hyperparameters, and evaluate robustness under worst-case conditions. We apply this formulation to logistic regression using L_2 regularization, empirically show the limitations of previous strategies, and evidence the benefits of using L_2 regularization to dampen the effect of poisoning attacks.
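To illustrate the key idea, here is a minimal sketch (not the authors' implementation) of a poisoning attack on L_2-regularized logistic regression in which the victim re-selects the regularization hyperparameter after each poisoning update, as the minimax bilevel formulation requires. The synthetic data, the candidate grid for the hyperparameter, and the zeroth-order random search are all illustrative assumptions; the paper derives gradient-based optimal attacks instead.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

# Synthetic binary task: clean training and validation sets, plus a small
# set of attacker-controlled poison points (features and labels assumed).
def make_data(n, d=2):
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

X_tr, y_tr = make_data(200)
X_val, y_val = make_data(200)
X_p = rng.normal(size=(10, 2))        # attacker-controlled features
y_p = rng.integers(0, 2, size=10)     # attacker-chosen labels

def victim_fit(X_poison):
    """Inner problem: train on clean + poison data, selecting the L2
    strength (sklearn's C = 1/lambda) by validation loss, so the
    hyperparameter reacts to the poison rather than staying fixed."""
    X = np.vstack([X_tr, X_poison])
    y = np.concatenate([y_tr, y_p])
    best = None
    for C in [0.01, 0.1, 1.0, 10.0]:  # illustrative candidate grid
        clf = LogisticRegression(C=C, penalty="l2").fit(X, y)
        loss = log_loss(y_val, clf.predict_proba(X_val))
        if best is None or loss < best[0]:
            best = (loss, clf)
    return best  # (victim's validation loss, fitted model)

# Outer problem: the attacker maximizes the victim's validation loss by
# perturbing the poison features (zeroth-order search for brevity).
for step in range(50):
    base_loss, _ = victim_fit(X_p)
    candidate = X_p + 0.1 * rng.normal(size=X_p.shape)
    if victim_fit(candidate)[0] > base_loss:
        X_p = candidate

print("final victim validation loss:", victim_fit(X_p)[0])
```

An attack evaluated this way faces a moving target: if the poison inflates the training loss, the hyperparameter selection can respond with stronger regularization, which is why attacks computed against a fixed lambda overstate the damage.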
