A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning
In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorithm that incorporates prior knowledge through nonsmooth regularization. For datasets of size n, our approach requires O(n^{2/3}/ε) gradient evaluations to reach ε-accuracy and, when a certain error bound holds, the complexity improves to O(κ n^{2/3} log(1/ε)). These rates are n^{1/3} times better than those achieved by typical, full gradient methods.
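The abstract does not spell out the trimming formulation, but the joint "detect-and-fit" problem it describes is commonly written as minimizing, over both the model parameters and a set of capped-simplex sample weights, the weighted average loss plus a nonsmooth regularizer; for fixed parameters, the optimal weights simply select the h best-fit samples. As a rough illustration only, here is a minimal sketch of that alternation for a trimmed least-squares problem with an L1 regularizer, using a plain proximal-gradient step. Everything here is an assumption for illustration: the function names (`trimmed_prox_gradient`, `soft_threshold`), the squared loss, the L1 penalty, and the full-gradient update are not from the paper, whose actual algorithm is a stochastic, variance-reduced variant that achieves the rates quoted above.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def trimmed_prox_gradient(A, b, h, lam=0.1, step=None, iters=500):
    """Illustrative sketch (not the paper's SMART algorithm) of
        min_x  (1/h) * [sum of the h smallest losses f_i(x)] + lam * ||x||_1,
    with f_i(x) = 0.5 * (a_i @ x - b_i)^2. Each iteration:
      (1) keep the h samples with the smallest current losses
          (the exact minimizer over capped-simplex trimming weights), then
      (2) take a gradient step on those samples followed by the L1 prox.
    """
    n, d = A.shape
    if step is None:
        # Conservative step size: 1 / Lipschitz bound of the trimmed-loss gradient.
        step = h / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(d)
    for _ in range(iters):
        residuals = A @ x - b
        losses = 0.5 * residuals ** 2
        keep = np.argsort(losses)[:h]            # (1) trim: keep h best-fit samples
        grad = A[keep].T @ residuals[keep] / h   # gradient of the trimmed loss
        x = soft_threshold(x - step * grad, step * lam)  # (2) prox-gradient step
    return x

# Hypothetical usage: 10% of responses are grossly contaminated.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(n)
b[:20] += 10.0                                   # contaminate 20 of 200 samples
x_hat = trimmed_prox_gradient(A, b, h=160, lam=0.01)
```

Because the inner minimization over the trimming weights has the closed-form "keep the h smallest losses" solution, the alternation above genuinely optimizes the joint trimming-and-fitting objective; the paper's contribution is to replace the full gradient in step (2) with variance-reduced stochastic gradients, which is what yields the n^{2/3} dependence in the complexity bounds.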