Why Out-of-distribution Detection in CNNs Does Not Like Mahalanobis – and What to Use Instead

10/13/2021
by   Kamil Szyc, et al.

Convolutional neural networks deployed for real-world classification tasks need to recognize inputs that are far from, or out-of-distribution (OoD) with respect to, the known training data. To achieve this, many methods estimate class-conditional posterior probabilities and derive confidence scores from the posterior distributions. Recent works propose modeling the posterior distributions at different layers of the CNN (i.e., for low- and high-level features) as multivariate Gaussians, which leads to confidence scores based on the Mahalanobis distance. However, this procedure involves estimating a probability density in high-dimensional data from an insufficient number of observations (e.g., the feature dimensionalities at the last two layers of the ResNet-101 model are 2048 and 1024, with ca. 1000 observations per class available to estimate the density). In this work, we address this problem. We show that in many OoD studies on high-dimensional data, methods based on the Local Outlier Factor (LOF) outperform the parametric, Mahalanobis distance-based methods. This motivates us to propose a nonparametric, LOF-based method of generating confidence scores for CNNs. We performed several feasibility studies involving ResNet-101 and EfficientNet-B3, using CIFAR-10 and ImageNet as known data, and CIFAR-100, SVHN, ImageNet2010, Places365, or ImageNet-O as outliers. We demonstrate that nonparametric, LOF-based confidence estimation can improve on the current Mahalanobis-based SOTA or achieve similar performance in a simpler way.
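The contrast between the two scoring strategies can be sketched on toy feature vectors. This is a minimal illustration, not the paper's implementation: the feature dimensionality, sample counts, and hyperparameters (e.g., `n_neighbors`) are illustrative stand-ins for penultimate-layer CNN features, and a single Gaussian replaces the paper's class-conditional setup.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Toy stand-ins for penultimate-layer CNN features (the paper works with
# e.g. 2048-d ResNet-101 features and ca. 1000 observations per class;
# dimensionality and counts here are illustrative only).
train_feats = rng.normal(0.0, 1.0, size=(500, 32))  # known / in-distribution
ood_feats = rng.normal(4.0, 1.0, size=(50, 32))     # far from training data

# Parametric baseline: Mahalanobis distance under a Gaussian fit of the
# training features (negated so that higher score = higher confidence).
mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
prec = np.linalg.inv(cov)

def mahalanobis_score(x):
    d = x - mu
    # per-row quadratic form d^T * prec * d
    return -np.sqrt(np.einsum("ij,jk,ik->i", d, prec, d))

# Nonparametric alternative: LOF with novelty=True can score unseen samples;
# score_samples returns higher values for more inlier-like points.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(train_feats)

for name, score in [("Mahalanobis", mahalanobis_score), ("LOF", lof.score_samples)]:
    # in-distribution samples should receive higher confidence on average
    print(name, score(train_feats).mean() > score(ood_feats).mean())
```

Both scores are computed purely from training-set features, so either can be attached to a fixed, pretrained CNN without retraining; the LOF variant avoids estimating (and inverting) a high-dimensional covariance matrix.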
