DocVQA: A Dataset for VQA on Document Images

07/01/2020
by   Minesh Mathew, et al.

We present a new dataset for Visual Question Answering on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. We provide a detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models need to improve specifically on questions where understanding the structure of the document is crucial.
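To make the dataset's scale concrete (50,000 questions over 12,000+ document images), here is a minimal sketch of iterating over DocVQA-style annotations. It assumes the annotations ship as a JSON file whose records carry question, image, and answers fields; the field names and the filename are illustrative assumptions, so check the official release at docvqa.org for the exact schema.

    import json
    from collections import Counter

    def load_questions(path):
        """Load question-answer records from a DocVQA-style JSON file."""
        with open(path, "r", encoding="utf-8") as f:
            annotations = json.load(f)
        # Assumed layout: a top-level "data" list of question records.
        return annotations["data"]

    def summarize(records):
        """Print dataset-level counts and one sample question-answer pair."""
        images = Counter(r["image"] for r in records)
        print(f"{len(records)} questions over {len(images)} document images")
        # Each question may list several acceptable ground-truth answers.
        sample = records[0]
        print("Q:", sample["question"])
        print("A:", sample["answers"])

    if __name__ == "__main__":
        summarize(load_questions("train_v1.0.json"))  # hypothetical filename

A loop like this is the natural starting point for the baselines mentioned above: each record pairs a document image with a question and a set of acceptable answers, which is the input format both VQA and reading-comprehension models expect.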
