Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text

04/06/2016
by Subhashini Venugopalan, et al.

This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of YouTube videos as well as two large movie description datasets, showing significant improvements in grammaticality while modestly improving descriptive quality.
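
The sketch below is not the paper's released code; it is a minimal illustration, under assumed PyTorch-style modules, of two generic ways linguistic knowledge from text corpora can be injected into an LSTM caption decoder: initializing the word-embedding layer from pretrained distributional vectors, and late-fusing the decoder's next-word distribution with that of an external language model trained on text alone. All class names, layer sizes, and the fusion weight alpha are illustrative assumptions.

```python
# Hypothetical sketch; names, sizes, and fusion weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID = 5000, 300, 512  # illustrative vocabulary/embedding/hidden sizes


class CaptionDecoder(nn.Module):
    """LSTM decoder conditioned on a video feature vector."""

    def __init__(self, pretrained_embeddings=None):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        if pretrained_embeddings is not None:
            # (1) Distributional semantics: copy pretrained word vectors
            #     (e.g., word2vec/GloVe-style) into the embedding layer.
            self.embed.weight.data.copy_(pretrained_embeddings)
        self.lstm = nn.LSTM(EMB + HID, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, words, video_feat):
        # words: (B, T) token ids; video_feat: (B, HID) pooled video feature
        emb = self.embed(words)                                # (B, T, EMB)
        vid = video_feat.unsqueeze(1).expand(-1, emb.size(1), -1)
        h, _ = self.lstm(torch.cat([emb, vid], dim=-1))
        return self.out(h)                                     # (B, T, VOCAB)


class TextLM(nn.Module):
    """External LSTM language model trained on large text corpora."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, words):
        h, _ = self.lstm(self.embed(words))
        return self.out(h)


def fused_log_probs(decoder, lm, words, video_feat, alpha=0.5):
    # (2) Late fusion: weighted mix of the two models' log-probabilities.
    p_dec = F.log_softmax(decoder(words, video_feat), dim=-1)
    p_lm = F.log_softmax(lm(words), dim=-1)
    return alpha * p_dec + (1 - alpha) * p_lm


if __name__ == "__main__":
    dec, lm = CaptionDecoder(), TextLM()
    words = torch.randint(0, VOCAB, (2, 7))   # dummy partial captions
    video_feat = torch.randn(2, HID)          # dummy video features
    print(fused_log_probs(dec, lm, words, video_feat).shape)  # (2, 7, VOCAB)
```

At decoding time, the fused log-probabilities would replace the decoder's own scores when picking the next word; the weight alpha controls how strongly the text-only language model influences word choice.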
