Midpoint Regularization: from High Uncertainty Training to Conservative Classification

06/26/2021
by Hongyu Guo, et al.

Label Smoothing (LS) improves model generalization by penalizing models for producing overconfident output distributions. For each training sample, the LS strategy smooths the one-hot encoded training signal by distributing part of its probability mass over the non-ground-truth classes. We extend this technique by considering example pairs, a method coined PLS. PLS first creates midpoint samples by averaging random sample pairs, and then learns a smoothing distribution during training for each of these midpoint samples, resulting in midpoints with high-uncertainty labels for training. We empirically show that PLS significantly outperforms LS, achieving up to 30% relative classification error reduction. We also show visually that PLS produces very low winning softmax scores for both in- and out-of-distribution samples.
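As a rough illustration of the two target-construction schemes described above, the following PyTorch sketch builds standard LS targets and PLS-style midpoint samples. It is a minimal sketch, not the paper's implementation: the function names are hypothetical, the fixed smoothing factor epsilon and the equal 0.5/0.5 averaging of pairs are assumptions, and the paper additionally learns a smoothing distribution for each midpoint during training, which is not shown here.

import torch
import torch.nn.functional as F

def label_smoothing_targets(labels, num_classes, epsilon=0.1):
    # Standard LS: keep (1 - epsilon) mass on the ground-truth class
    # and spread epsilon uniformly over the non-ground-truth classes.
    targets = torch.full((labels.size(0), num_classes),
                         epsilon / (num_classes - 1))
    targets.scatter_(1, labels.unsqueeze(1), 1.0 - epsilon)
    return targets

def pls_midpoints(x, y, num_classes):
    # PLS-style midpoint construction (assumed form): average random
    # sample pairs to create midpoint inputs, and average their one-hot
    # labels to obtain high-uncertainty targets. The paper learns a
    # per-midpoint smoothing distribution on top of this during training.
    perm = torch.randperm(x.size(0))
    x_mid = 0.5 * (x + x[perm])                 # midpoint inputs
    y_onehot = F.one_hot(y, num_classes).float()
    y_mid = 0.5 * (y_onehot + y_onehot[perm])   # two-class, 0.5/0.5 targets
    return x_mid, y_mid

Averaging two samples from different classes yields a target with at most 0.5 mass on any class, which is what drives the low winning softmax scores the abstract reports.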
