Making Convex Loss Functions Robust to Outliers using e-Exponentiated Transformation
In this paper, we propose a novel e-exponentiated transformation, 0.5 < e < 1, for loss functions. When the transformation is applied to a convex loss function, the transformed loss enjoys the following desirable property: for a one-layer network, all stationary points of the empirical risk are global minima. Moreover, for a deep linear network, if the risk is differentiable at all local minima and the hidden layers are at least as wide as either the input layer or the output layer, then every local minimum is a global minimum. Using a novel generalization error bound, we show theoretically that the transformed loss function has a tighter bound on datasets corrupted by outliers. Empirically, the accuracy obtained with the transformed loss function can be significantly better than that obtained with the original loss function, and is comparable to that of other state-of-the-art methods in the presence of label noise.
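The mechanism behind the robustness claim can be illustrated with a minimal sketch. Assuming the e-exponentiated transformation raises the base loss to the power e (an assumption for illustration; the abstract does not spell out the definition), an outlier's contribution to the risk grows far more slowly under the transformed loss than under the original convex loss:

```python
def squared_loss(residual):
    """Base convex loss: squared error on a residual."""
    return residual ** 2

def e_transformed(loss_value, e=0.75):
    """Hypothetical sketch of the e-exponentiated transformation:
    raise the base loss to the power e, with 0.5 < e < 1 (assumed form)."""
    return loss_value ** e

# An inlier with residual 1 vs. an outlier with residual 100.
r_inlier, r_outlier = 1.0, 100.0

# Under squared loss, the outlier dominates by a factor of 10,000.
base_ratio = squared_loss(r_outlier) / squared_loss(r_inlier)

# Under the transformed loss (e = 0.75), its dominance shrinks to 1,000.
robust_ratio = (e_transformed(squared_loss(r_outlier))
                / e_transformed(squared_loss(r_inlier)))

print(base_ratio, robust_ratio)
```

Because 0.5 < e < 1, the transformation is monotone (so minimizers of each per-sample loss are preserved) while damping the influence of large losses, which is consistent with the tighter generalization bound claimed for outlier-corrupted data.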