Encyclopedic VQA: Visual questions about detailed properties of fine-grained categories

06/15/2023
by   Thomas Mensink, et al.

We propose Encyclopedic-VQA, a large-scale visual question answering (VQA) dataset featuring visual questions about detailed properties of fine-grained categories and instances. It contains 221k unique question+answer pairs, each matched with (up to) 5 images, resulting in a total of 1M VQA samples. Moreover, our dataset comes with a controlled knowledge base derived from Wikipedia, marking the evidence that supports each answer. Empirically, we show that our dataset poses a hard challenge for large vision+language models, as they perform poorly on it: PaLI [14] is state-of-the-art on OK-VQA [37], yet it only achieves 13.0% accuracy on our dataset. Moreover, we experimentally show that progress on answering our encyclopedic questions can be achieved by augmenting large models with a mechanism that retrieves relevant information from the knowledge base. An oracle experiment with perfect retrieval achieves 87.0% accuracy, and an automatic retrieval-augmented prototype yields 48.8% accuracy. We believe that our dataset enables future research on retrieval-augmented vision+language models.
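The retrieval-augmented setup the abstract describes can be sketched as a two-stage pipeline: first retrieve an evidence passage from the knowledge base, then condition the answering model on the question plus that evidence. The sketch below is illustrative only, assuming toy lexical (word-overlap) retrieval and hypothetical passage contents; the paper's actual retriever and knowledge base are not reproduced here.

```python
# Minimal sketch of a retrieval-augmented VQA pipeline (illustrative only;
# the knowledge-base entries and function names are hypothetical, not from
# the paper).
from collections import Counter

# Toy stand-in for the Wikipedia-derived knowledge base.
knowledge_base = {
    "Golden Gate Bridge": "The Golden Gate Bridge opened in 1937 and spans "
                          "the Golden Gate strait.",
    "Eiffel Tower": "The Eiffel Tower was completed in 1889 for the Paris "
                    "World's Fair.",
}

def retrieve(question: str, kb: dict) -> str:
    """Return the passage with the most word overlap with the question."""
    q_words = Counter(question.lower().split())
    def overlap(passage: str) -> int:
        return sum((q_words & Counter(passage.lower().split())).values())
    return max(kb.values(), key=overlap)

def build_prompt(question: str, kb: dict) -> str:
    """Augment the question with retrieved evidence before answering."""
    evidence = retrieve(question, kb)
    # A real system would pass this prompt (plus the image) to a large
    # vision+language model; here we just return the augmented input.
    return f"Question: {question}\nEvidence: {evidence}"

print(build_prompt("When was the Eiffel Tower completed?", knowledge_base))
```

The design point is that the heavy model never has to memorize encyclopedic facts; it only has to read the retrieved evidence, which is why oracle retrieval closes most of the gap in the paper's experiments.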
