A Framework for Rationale Extraction for Deep QA models

10/09/2021
by Sahana Ramnath et al.

As neural-network-based QA models become deeper and more complex, there is a demand for robust frameworks that can expose a model's rationale for its predictions. Current techniques that provide insight into a model's workings either depend on adversarial datasets or propose models with explicit explanation-generation components; both are time-consuming and challenging to extend to existing models and new datasets. In this work, we use 'Integrated Gradients' to extract rationales from existing state-of-the-art models for the task of Reading Comprehension based Question Answering (RCQA). On detailed analysis and comparison with collected human rationales, we find that although 40-80% of the extracted rationale words coincide with the human rationale (precision), only 6-19% of the human rationale is present in the extracted rationale (recall).
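Integrated Gradients attributes a model's prediction to its inputs by accumulating gradients along a straight-line path from a baseline input to the actual input. The sketch below illustrates the method on a toy differentiable function with NumPy, not on the paper's QA models; the function, baseline, and step count are illustrative assumptions:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Midpoint-rule Riemann approximation of Integrated Gradients:
    (x - baseline) * integral over a in [0,1] of grad f(baseline + a*(x - baseline)).
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy model: f(x) = sum(x^2), with analytic gradient 2x (hypothetical example).
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)  # a common choice of baseline is the all-zeros input
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, np.isclose(attr.sum(), f(x) - f(baseline)))
```

In a QA setting, the same idea would be applied to token embeddings, and tokens with the largest attribution scores would be taken as the extracted rationale.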
