Multimodal Dialogs (MMD): A large-scale dataset for studying multimodal domain-aware conversations

04/01/2017
by Amrita Saha, et al.

While multimodal conversation agents are gaining importance in several domains such as retail and travel, deep learning research in this area has been limited primarily by the lack of large-scale, open chatlogs. To overcome this bottleneck, in this paper we introduce the task of multimodal, domain-aware conversations and propose the MMD benchmark dataset. The dataset was gathered in close coordination with a large number of domain experts in the retail domain and consists of over 150K conversation sessions between shoppers and sales agents, comprising over 6.5 million utterances. With this dataset, we propose five new sub-tasks for multimodal conversations along with their evaluation methodology. We also propose two novel multimodal neural models in the encode-attend-decode paradigm and demonstrate their performance on two of the sub-tasks, namely text response generation and best image response selection. These experiments establish baseline performance and open new research directions for each of these sub-tasks.
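To make the encode-attend-decode paradigm concrete, the sketch below shows a minimal multimodal model of that general shape in PyTorch: a GRU encodes the dialog context text, precomputed image feature vectors (e.g., from a CNN) are projected into the same hidden space, and a GRU decoder attends over the combined text-and-image memory at each generation step. This is an illustrative assumption of how such a model could be wired, not the authors' exact architecture; all class names, dimensions, and the feature-extraction setup are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalEncodeAttendDecode(nn.Module):
    """Illustrative encode-attend-decode sketch for multimodal dialog.

    Assumptions (not from the paper): text is encoded with a single GRU,
    image features are precomputed fixed-size vectors, and the decoder
    uses a simple additive-style attention over the joint memory.
    """

    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, img_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.img_proj = nn.Linear(img_dim, hid_dim)  # project CNN features
        self.decoder = nn.GRUCell(emb_dim + hid_dim, hid_dim)
        self.attn = nn.Linear(hid_dim * 2, 1)        # attention scorer
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_tokens, img_feats, tgt_tokens):
        # Encode dialog context text: (B, T, H); h is (1, B, H)
        enc_states, h = self.encoder(self.embed(src_tokens))
        # Project image features into the same space: (B, I, H)
        img_states = self.img_proj(img_feats)
        # Multimodal memory the decoder attends over: (B, T+I, H)
        memory = torch.cat([enc_states, img_states], dim=1)

        h = h.squeeze(0)                             # decoder init state
        logits = []
        for t in range(tgt_tokens.size(1)):
            emb = self.embed(tgt_tokens[:, t])       # teacher forcing
            # Score each memory slot against the current decoder state
            h_exp = h.unsqueeze(1).expand(-1, memory.size(1), -1)
            scores = self.attn(torch.cat([memory, h_exp], dim=-1)).squeeze(-1)
            # Context vector: attention-weighted sum of the memory
            ctx = (F.softmax(scores, dim=-1).unsqueeze(-1) * memory).sum(1)
            h = self.decoder(torch.cat([emb, ctx], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)            # (B, T_out, vocab)
```

Under the same assumptions, the best-image-response-selection sub-task could reuse the encoder side: score each candidate image by a similarity between its projected feature vector and the encoded dialog context, then rank candidates by that score.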
