WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset

05/09/2023
by   Andrea Burns, et al.

Webpages have been a rich resource for language and vision-language tasks, yet only pieces of webpages are kept: image-caption pairs, long text articles, or raw HTML, never all in one place. As a result, webpage tasks have received little attention, and structured image-text data remains underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage 2M (WikiWeb2M) suite, the first to retain the full set of images, text, and structure data available in a page. WikiWeb2M can be used for tasks like page description generation, section summarization, and contextual image captioning.
