Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference

06/27/2021
by Riko Suzuki, et al.

This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, focusing on intentional and aspectual expressions that describe dynamic human actions. The dataset consists of 200 videos, 5,554 action labels, and 1,942 action triplets of the form <subject, predicate, object> that can be translated into logical semantic representations. The dataset is expected to be useful for evaluating multimodal inference systems between videos and semantically complex sentences, including those involving negation and quantification.
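
As an illustration of the triplet-to-logic translation, a triplet such as <person, open, door> could be rendered as a first-order formula in a Neo-Davidsonian event semantics (a minimal sketch; the triplet and the exact encoding used in the paper may differ):

    ∃e (open(e) ∧ subj(e, person) ∧ obj(e, door))

Negation and quantification would then compose over such formulas; for example, ¬∃e (open(e) ∧ subj(e, person) ∧ obj(e, door)) would express that the person does not open the door.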
