Learning to Locate Visual Answer in Video Corpus Using Question
We introduce a new task, video corpus visual answer localization (VCVAL), which aims to locate the visual answer to a natural language question within a large collection of untrimmed, unsegmented instructional videos. The task requires a range of skills: vision-language interaction, video retrieval, passage comprehension, and visual answer localization. In this paper, we propose a cross-modal contrastive global-span (CCGS) method for VCVAL that jointly trains the video corpus retrieval and visual answer localization subtasks. Specifically, we first enrich the video question-answer semantics by injecting element-wise visual information into a pre-trained language model, and then design a novel global-span predictor that locates the visual answer points from the fused cross-modal features. Global-span contrastive learning is adopted to rank the span points of positive samples above those of negative samples within a global-span matrix. We further reconstruct a dataset, MedVidCQA, on which the VCVAL task is benchmarked. Experimental results show that the proposed method outperforms competitive baselines on both the video corpus retrieval and visual answer localization subtasks. Finally, we present detailed analyses of extensive experiments, paving a new path for instructional video understanding and opening avenues for further research.
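To make the global-span idea concrete, the sketch below illustrates one way such an objective could be formed: per-frame start/end logits from the positive video and several negative videos are concatenated into a single global span, so that answer-point prediction competes across the whole candidate set and negative spans are suppressed. This is a minimal, hypothetical sketch for intuition only, not the authors' implementation; all names (e.g., global_span_contrastive_loss, pos_logits) are assumptions.

```python
# Minimal sketch (assumed, not the paper's code): a simplified global-span
# objective where the positive video's answer span must outrank all frames
# of the negative videos after concatenation into one global span.
import torch
import torch.nn.functional as F

def global_span_contrastive_loss(pos_logits, neg_logits_list, start_idx, end_idx):
    """pos_logits: (T_pos, 2) start/end logits for the positive video.
    neg_logits_list: list of (T_i, 2) start/end logits for negative videos.
    start_idx, end_idx: gold answer-span frame indices in the positive video.
    """
    # Concatenate all candidate videos into a single "global span".
    all_logits = torch.cat([pos_logits] + neg_logits_list, dim=0)  # (T_total, 2)

    # Normalize start/end scores over the whole global span, so frames from
    # negative videos directly compete with the positive video's frames.
    start_log_probs = F.log_softmax(all_logits[:, 0], dim=0)
    end_log_probs = F.log_softmax(all_logits[:, 1], dim=0)

    # The positive video occupies the first T_pos positions, so the gold
    # indices need no offset; maximizing their probability pushes negative
    # spans down.
    return -(start_log_probs[start_idx] + end_log_probs[end_idx]) / 2

# Toy usage with random tensors standing in for fused cross-modal logits.
pos = torch.randn(50, 2)                       # 50 frames in the positive video
negs = [torch.randn(40, 2), torch.randn(60, 2)]
print(global_span_contrastive_loss(pos, negs, start_idx=10, end_idx=20))
```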