MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization

03/14/2022
by Alexander Kunitsyn, et al.

In this work we present a new state of the art on the text-to-video retrieval task on MSR-VTT, LSMDC, MSVD, YouCook2 and TGIF, obtained by a single model. Three different data sources are combined: weakly-supervised videos, crowd-labeled text-image pairs, and text-video pairs. A careful analysis of available pre-trained networks helps to select the ones that provide the best prior knowledge. We introduce a three-stage training procedure that provides high knowledge-transfer efficiency and allows noisy datasets to be used during training without degrading the prior knowledge. Additionally, double positional encoding is used for better fusion of the different modalities, and a simple method for processing non-square inputs is suggested.
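The abstract only names the double positional encoding idea, so the following is a minimal sketch of one plausible reading of it, not the authors' implementation: each token entering the fusion transformer receives two additive embeddings, one for its temporal position within its modality sequence and one identifying the modality (expert) it came from. The class name DoublePositionalFusion, the dimensions, and the example feature streams are all assumptions made for illustration.

import torch
import torch.nn as nn


class DoublePositionalFusion(nn.Module):
    """Fuse per-modality feature sequences by adding two encodings to each
    token: a temporal positional embedding and a modality embedding, then
    passing the concatenated sequence through a transformer encoder.
    (Illustrative sketch only; not the paper's actual architecture.)"""

    def __init__(self, dim=512, num_modalities=3, max_len=64, num_layers=4):
        super().__init__()
        self.temporal_pos = nn.Embedding(max_len, dim)          # position within a modality sequence
        self.modality_pos = nn.Embedding(num_modalities, dim)   # which modality the token belongs to
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, modality_feats):
        """modality_feats: list of tensors, each of shape (batch, seq_len_m, dim)."""
        tokens = []
        for m, feats in enumerate(modality_feats):
            b, t, _ = feats.shape
            pos_ids = torch.arange(t, device=feats.device)
            mod_ids = torch.full((t,), m, dtype=torch.long, device=feats.device)
            # "Double" positional encoding: temporal position + modality identity.
            tokens.append(feats + self.temporal_pos(pos_ids) + self.modality_pos(mod_ids))
        fused = torch.cat(tokens, dim=1)                         # (batch, total_len, dim)
        return self.encoder(fused)


# Example usage with hypothetical video, audio and motion feature streams.
if __name__ == "__main__":
    fusion = DoublePositionalFusion(dim=512, num_modalities=3)
    video = torch.randn(2, 16, 512)
    audio = torch.randn(2, 8, 512)
    motion = torch.randn(2, 12, 512)
    out = fusion([video, audio, motion])
    print(out.shape)  # torch.Size([2, 36, 512])

In this reading, the modality embedding plays the same disambiguating role for "which expert produced this token" that the temporal embedding plays for "when did it occur", which is what makes the encoding "double".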
