Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly

04/28/2022
by Spencer Whitehead, et al.

Machine learning has advanced dramatically, narrowing the accuracy gap to humans in multimodal tasks like visual question answering (VQA). However, while humans can say "I don't know" when they are uncertain (i.e., abstain from answering a question), such ability has been largely neglected in multimodal research, despite the importance of this problem to the usage of VQA in real settings. In this work, we promote a problem formulation for reliable VQA, where we prefer abstention over providing an incorrect answer. We first enable abstention capabilities for several VQA models, and analyze both their coverage, the portion of questions answered, and risk, the error on that portion. For that, we explore several abstention approaches. We find that although the best-performing models achieve over 71% accuracy on the VQA v2 dataset, introducing the option to abstain by directly using a model's softmax scores limits them to answering less than 8% of the questions to achieve a low risk of error (i.e., 1%). This motivates us to utilize a multimodal selection function to directly estimate the correctness of the predicted answers, which we show can triple the coverage from, for example, 5.0% to 16.7% at 1% risk. While it is important to analyze both coverage and risk, these metrics have a trade-off which makes comparing VQA models challenging. To address this, we also propose an Effective Reliability metric for VQA that places a larger cost on incorrect answers compared to abstentions. This new problem formulation, metric, and analysis for VQA provide the groundwork for building effective and reliable VQA models that have the self-awareness to abstain if and only if they don't know the answer.
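The coverage/risk trade-off and the abstention-penalizing Effective Reliability score described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function names, the 0.9 confidence threshold, the penalty `cost`, and the toy confidence/accuracy values are all assumptions for demonstration.

```python
import numpy as np

def coverage_risk(confidence, accuracy, threshold):
    """Answer only when confidence >= threshold; abstain otherwise.
    Returns (coverage, risk): the portion of questions answered,
    and the mean error on that answered portion."""
    answered = confidence >= threshold
    coverage = answered.mean()
    risk = 0.0 if not answered.any() else (1.0 - accuracy[answered]).mean()
    return coverage, risk

def effective_reliability(confidence, accuracy, threshold, cost=1.0):
    """Score each question: its accuracy if answered correctly,
    -cost if answered incorrectly, 0 if abstained; then average.
    This penalizes wrong answers more than abstentions."""
    answered = confidence >= threshold
    per_question = np.where(
        answered,
        np.where(accuracy > 0, accuracy, -cost),  # wrong answers cost -cost
        0.0,                                      # abstentions score 0
    )
    return per_question.mean()

# Illustrative data: confidences for 4 predicted answers and their
# per-question VQA accuracies in [0, 1] (1.0 = fully correct).
conf = np.array([0.95, 0.20, 0.80, 0.99])
acc = np.array([1.0, 0.0, 0.0, 1.0])

cov, risk = coverage_risk(conf, acc, threshold=0.9)      # → 0.5, 0.0
phi = effective_reliability(conf, acc, threshold=0.9)    # → 0.5
```

Lowering the threshold raises coverage but also risk: at `threshold=0.5` above, coverage grows to 0.75 but one wrong answer is now given, so risk rises to 1/3 and the effective reliability drops to 0.25, illustrating why the metric rewards abstaining exactly when the model would be wrong.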

