Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data
Deep neural networks only learn to map in-distribution inputs to their corresponding ground-truth labels during training, without differentiating out-of-distribution samples from in-distribution ones. This follows from the assumption that all samples are independent and identically distributed, with no distributional distinction. Consequently, a network pretrained on in-distribution samples treats out-of-distribution samples as in-distribution and makes high-confidence predictions on them at test time. To address this issue, we draw out-of-distribution samples from the vicinity distribution of the in-distribution training samples and use them to learn to reject predictions on out-of-distribution inputs. We introduce a Cross-class Vicinity Distribution by assuming that an out-of-distribution sample generated by mixing multiple in-distribution samples does not share any class with its constituents. We thus improve the discriminability of a pretrained network by finetuning it with out-of-distribution samples drawn from the cross-class vicinity distribution, where each out-of-distribution input is assigned a complementary label. Experiments on various in-/out-of-distribution datasets show that the proposed method significantly outperforms existing methods at discriminating between in- and out-of-distribution samples.
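To make the idea concrete, below is a minimal sketch of how one might instantiate the two ingredients the abstract describes: synthesizing pseudo out-of-distribution inputs by mixing in-distribution samples, and penalizing probability mass on the constituent classes as complementary labels. The mixing scheme (pairwise Beta-weighted interpolation) and the exact loss form are assumptions for illustration, not the authors' precise formulation; function names such as `mix_ood_samples` and `complementary_loss` are hypothetical.

```python
import torch
import torch.nn.functional as F

def mix_ood_samples(x, y, num_classes, alpha=1.0):
    """Create pseudo out-of-distribution inputs by convexly mixing pairs of
    in-distribution samples, and record the constituent classes so they can
    be treated as complementary (forbidden) labels.

    Note: pairwise mixing with a Beta-sampled weight is an illustrative
    choice; the paper's cross-class vicinity distribution may mix samples
    differently.
    """
    perm = torch.randperm(x.size(0), device=x.device)
    lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0),)).to(x.device)
    lam = lam.view(-1, *([1] * (x.dim() - 1)))          # broadcast over input dims
    x_mix = lam * x + (1.0 - lam) * x[perm]
    # multi-hot mask of the classes the mixed sample must NOT belong to
    constituents = (F.one_hot(y, num_classes) + F.one_hot(y[perm], num_classes)).clamp(max=1)
    return x_mix, constituents.float()

def complementary_loss(logits, constituents):
    """Penalize probability mass placed on the constituent classes,
    i.e. treat them as complementary labels: -sum_c log(1 - p_c)."""
    probs = torch.softmax(logits, dim=1).clamp(max=1.0 - 1e-6)
    return -(torch.log1p(-probs) * constituents).sum(dim=1).mean()

# Illustrative finetuning step: keep the usual cross-entropy on in-distribution
# data and add the complementary term on the mixed pseudo-OOD inputs.
def finetune_step(model, x, y, num_classes, ood_weight=1.0):
    x_mix, constituents = mix_ood_samples(x, y, num_classes)
    loss_id = F.cross_entropy(model(x), y)
    loss_ood = complementary_loss(model(x_mix), constituents)
    return loss_id + ood_weight * loss_ood
```

At test time, a sample whose predicted probabilities are uniformly low (i.e. no class stands out) can then be rejected as out-of-distribution; the relative weight `ood_weight` between the two terms is another assumed hyperparameter.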