Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative

06/08/2018
by Junghoon Seo, et al.

SmoothGrad and VarGrad are techniques that enhance the empirical quality of standard saliency maps by adding noise to the input. However, few works have provided a rigorous theoretical interpretation of these methods. We analytically formalize the result of these noise-adding methods and observe two interesting properties. First, SmoothGrad does not make the gradient of the score function smooth. Second, VarGrad is independent of the gradient of the score function. We believe that our findings provide a clue to revealing the relationship between local explanation methods of deep neural networks and higher-order partial derivatives of the score function.
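The two noise-adding schemes the abstract refers to can be stated concretely: SmoothGrad averages input gradients over Gaussian-perturbed copies of the input, while VarGrad takes the element-wise variance of those gradients. The following is a minimal sketch, not the authors' implementation; the quadratic score function `S(x) = sum(x^2)` (with analytic gradient `2x`) and the function names `smoothgrad`/`vargrad` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy score function S(x) = sum(x^2), so grad S(x) = 2x (an assumption
# for illustration; any differentiable model gradient could be used).
def grad_score(x):
    return 2.0 * x

def smoothgrad(x, sigma=0.1, n=500):
    # SmoothGrad: mean of gradients at n noise-perturbed inputs.
    noisy = x + rng.normal(0.0, sigma, size=(n,) + x.shape)
    grads = np.array([grad_score(z) for z in noisy])
    return grads.mean(axis=0)

def vargrad(x, sigma=0.1, n=500):
    # VarGrad: element-wise variance of gradients at noise-perturbed inputs.
    noisy = x + rng.normal(0.0, sigma, size=(n,) + x.shape)
    grads = np.array([grad_score(z) for z in noisy])
    return grads.var(axis=0)

x = np.array([1.0, -2.0, 0.5])
sg = smoothgrad(x)  # close to the plain gradient 2x for this linear gradient
vg = vargrad(x)     # close to 4*sigma^2 for every coordinate, regardless of x
```

This toy case already hints at the paper's second observation: because the gradient here is linear, the VarGrad map reduces to a constant (`4 * sigma**2`) that carries no information about the gradient at `x` itself.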
