MixGen: A New Multi-Modal Data Augmentation

06/16/2022
by Xiaoshuai Hao, et al.

Data augmentation is a necessity for improving data efficiency in deep learning. For vision-language pre-training, previous works augment data only for images or only for text. In this paper, we present MixGen: a joint data augmentation for vision-language representation learning to further improve data efficiency. It generates new image-text pairs with semantic relationships preserved by interpolating images and concatenating text. It is simple and can be plugged into existing pipelines in a plug-and-play fashion. We evaluate MixGen on four architectures, including CLIP, ViLT, ALBEF, and TCL, across five downstream vision-language tasks to show its versatility and effectiveness. For example, adding MixGen to ALBEF pre-training leads to absolute performance improvements on downstream tasks: image-text retrieval (+6.2 on Flickr30K zero-shot), visual grounding (+0.9), visual reasoning (+0.9), and visual entailment (+0.4).
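The core operation is straightforward: mix two images pixel-wise and concatenate their captions. Below is a minimal sketch, assuming a PyTorch image batch and raw caption strings; the function name mixgen, the num_mix parameter, and the within-batch pairing scheme are illustrative assumptions rather than the paper's exact implementation.

    import torch

    def mixgen(images, texts, num_mix, lam=0.5):
        # Sketch of the joint augmentation: interpolate two images and
        # concatenate their captions to form a new, semantically valid pair.
        # images: (B, C, H, W) float tensor; texts: list of B caption strings.
        # lam = 0.5 reflects the fixed mixing ratio described in the paper.
        assert 2 * num_mix <= images.size(0), "need two disjoint samples per mix"
        for k in range(num_mix):
            # Image mixup: convex combination of two images.
            images[k] = lam * images[k] + (1.0 - lam) * images[num_mix + k]
            # Text concatenation keeps the content of both captions.
            texts[k] = texts[k] + " " + texts[num_mix + k]
        return images, texts

Each mixed pair replaces an original pair in the batch, so the augmentation adds no extra forward passes and can be dropped in front of any existing vision-language pre-training loop.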
