FlipDial: A Generative Model for Two-Way Visual Dialogue

02/11/2018
by   Daniela Massiceti, et al.

We present FlipDial, a generative model for visual dialogue that simultaneously plays the role of both participants in a visually-grounded dialogue. Given context in the form of an image and an associated caption summarising its contents, FlipDial learns both to answer and to ask questions, and can generate entire sequences of dialogue (question-answer pairs) that are diverse and relevant to the image. To do this, FlipDial relies on a simple but surprisingly powerful idea: it uses convolutional neural networks (CNNs) to encode entire dialogues directly, implicitly capturing dialogue context, and conditional variational autoencoders (CVAEs) to learn the generative model. FlipDial outperforms the state-of-the-art baseline on the sequential answering task (1VD) on the VisDial dataset by a significant margin of 12 points in Mean Rank. We are the first to extend this paradigm to full two-way visual dialogue (2VD), where our model generates both questions and answers in sequence, grounded in the image, and for which we propose a set of novel evaluation metrics.
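To make the core idea concrete, here is a minimal NumPy sketch of the two pieces the abstract names: a CNN that encodes an embedded dialogue sequence into a fixed-size vector, and a conditional decoder that maps a latent sample plus the image/caption context to a generated output. All dimensions, weights, and function names are illustrative assumptions, not the paper's actual architecture; only a forward pass is shown, with no training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, w):
    """1D 'valid' convolution over a token sequence.
    x: (T, D) token embeddings; w: (K, D, F) kernels -> (T-K+1, F)."""
    K, D, F = w.shape
    T = x.shape[0]
    out = np.empty((T - K + 1, F))
    for t in range(T - K + 1):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
    return out

def encode_dialogue(tokens, w):
    """CNN encoder: convolve over the dialogue tokens, ReLU,
    then max-pool over time to get one summary vector."""
    h = np.maximum(conv1d_valid(tokens, w), 0.0)
    return h.max(axis=0)  # (F,)

def cvae_decode(context, z, W_dec, b_dec):
    """Conditional decoder: generate an output embedding from a
    latent sample z and the (image + caption) context vector."""
    inp = np.concatenate([z, context])
    return np.tanh(W_dec @ inp + b_dec)

# Hypothetical dimensions (not taken from the paper)
T, D, K, F = 12, 16, 3, 8   # tokens, embed dim, kernel width, filters
C, Z, OUT = 10, 4, 16       # context dim, latent dim, output dim

tokens = rng.normal(size=(T, D))           # embedded dialogue so far
w_conv = rng.normal(size=(K, D, F)) * 0.1
context = rng.normal(size=C)               # image + caption features

d_enc = encode_dialogue(tokens, w_conv)    # dialogue summary, shape (F,)

# At generation time, sample z from the prior and decode conditioned
# on the context; in a full CVAE, training would use a recognition
# network q(z | dialogue, context) and the ELBO.
z = rng.normal(size=Z)
W_dec = rng.normal(size=(OUT, Z + C)) * 0.1
b_dec = np.zeros(OUT)
next_utterance_emb = cvae_decode(context, z, W_dec, b_dec)  # shape (OUT,)
```

Sampling different values of `z` for the same context yields different decoded outputs, which is the mechanism behind generating diverse question-answer sequences for a single image.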
