Efficient Image Evidence Analysis of CNN Classification Results

01/05/2018
by   Keyang Zhou, et al.

Convolutional neural networks (CNNs) define the current state of the art in image recognition. With their growing popularity, especially in critical applications such as medical image analysis and self-driving cars, confirmability is becoming an issue. The black-box nature of trained predictors makes it difficult to trace failure cases or to understand the internal reasoning that leads to a result. In this paper we introduce a novel, efficient method to visualise the evidence that leads to decisions in CNNs. In contrast to network fixation or saliency map methods, our method illustrates the evidence for or against a classifier's decision in input pixel space approximately 10 times faster than previous methods. We also show that our approach is less prone to noise and focuses on the most relevant input regions, making it more accurate and interpretable. Moreover, by introducing simplifications we link our method to other visualisation methods, providing a general explanation for gradient-based visualisation techniques. We believe that our work makes network introspection more feasible for debugging and understanding deep convolutional networks, which will increase trust between humans and deep learning models.
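
The abstract contrasts the proposed method with gradient-based visualisation techniques such as saliency maps. As context for that comparison, below is a minimal sketch of the standard gradient saliency baseline, not the paper's method: the gradient of the class score with respect to the input is taken as a per-pixel relevance estimate. The model choice, file path, and preprocessing are illustrative assumptions, written here in PyTorch.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Illustrative setup: any pretrained CNN classifier works; ResNet-18 is an
# arbitrary choice, and "input.jpg" is a placeholder path.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Gradient of the top class score with respect to the input pixels.
scores = model(img)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Per-pixel saliency: absolute gradient, maximised over colour channels.
# Note this captures only the magnitude of a pixel's influence; it does not
# give the signed evidence for or against a decision that the paper targets.
saliency = img.grad.abs().max(dim=1)[0].squeeze(0)  # shape: (224, 224)
```

Baselines of this kind are known to be noisy and diffuse, which is the shortcoming the abstract claims the proposed method addresses.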
