Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation

06/12/2019
by Erik Englesson, et al.

In this work we aim to obtain computationally efficient uncertainty estimates with deep networks. To this end, we propose a modified knowledge distillation procedure that achieves state-of-the-art uncertainty estimates for both in-distribution and out-of-distribution samples. Our contributions include a) demonstrating and adapting to distillation's regularization effect, b) proposing a novel target teacher distribution, c) introducing a simple augmentation procedure to improve out-of-distribution uncertainty estimates, and d) shedding light on the distillation procedure through a comprehensive set of experiments.
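To make the distillation setting concrete, below is a minimal sketch of a standard temperature-scaled soft-target distillation loss (in the style of Hinton et al.), on top of which a modified teacher target distribution such as the one proposed here could be substituted. The function names and the NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the class axis (numerically stable)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic soft-target distillation loss: KL(teacher || student) on
    temperature-scaled distributions, scaled by T^2 so its gradient
    magnitude matches the usual hard-label cross-entropy. This is the
    standard formulation, not the paper's modified target distribution."""
    p = softmax(teacher_logits, T)                     # teacher soft targets
    log_q = np.log(softmax(student_logits, T) + 1e-12) # student log-probs
    kl = (p * (np.log(p + 1e-12) - log_q)).sum(axis=1).mean()
    return kl * T * T
```

When the student's logits match the teacher's exactly, the loss is zero; it grows as the student's predictive distribution diverges from the teacher's, which is what lets the student inherit the teacher's (soft) uncertainty.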
