Generating Descriptions for Sequential Images with Local-Object Attention and Global Semantic Context Modelling

12/02/2020
by Jing Su, et al.

In this paper, we propose an end-to-end CNN-LSTM model with a local-object attention mechanism for generating descriptions of sequential images. To produce coherent descriptions, we capture global semantic context with a multi-layer perceptron that learns the dependencies between the images in a sequence. A parallel LSTM network decodes the sequence of descriptions. Experimental results show that our model outperforms the baseline on three evaluation metrics over the datasets published by Microsoft.
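The abstract outlines three components: a CNN encoder with local-object attention over image regions, an MLP that fuses per-image features into a global semantic context, and parallel LSTM branches that each decode one image's description. Below is a minimal PyTorch sketch of that pipeline; all names, layer sizes, and tensor shapes (e.g. `feat_dim`, `num_regions`, the shared `LSTMCell`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the CNN-LSTM story model described in the abstract.
import torch
import torch.nn as nn


class LocalObjectAttention(nn.Module):
    """Soft attention over CNN region features, conditioned on the decoder state."""

    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim + hidden_dim, 1)

    def forward(self, regions, h):
        # regions: (batch, num_regions, feat_dim); h: (batch, hidden_dim)
        h_exp = h.unsqueeze(1).expand(-1, regions.size(1), -1)
        alpha = torch.softmax(self.score(torch.cat([regions, h_exp], -1)), dim=1)
        return (alpha * regions).sum(dim=1)  # attended local-object feature


class StoryModel(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=512, vocab_size=10000, seq_len=5):
        super().__init__()
        self.seq_len = seq_len
        # MLP fusing all per-image features into one global semantic context,
        # modelling dependencies between the sequential images.
        self.global_mlp = nn.Sequential(
            nn.Linear(feat_dim * seq_len, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.attend = LocalObjectAttention(feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        # One LSTM cell shared by the parallel per-image decoder branches.
        self.lstm = nn.LSTMCell(hidden_dim + feat_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, region_feats, captions):
        # region_feats: (batch, seq_len, num_regions, feat_dim) from a CNN
        # captions:     (batch, seq_len, max_words) token ids (teacher forcing)
        B = region_feats.size(0)
        mean_feats = region_feats.mean(dim=2)       # (B, seq_len, feat_dim)
        g = self.global_mlp(mean_feats.flatten(1))  # global semantic context
        logits = []
        for i in range(self.seq_len):               # parallel decoder branches
            h = c = g.new_zeros(B, g.size(1))
            step_logits = []
            for t in range(captions.size(2)):
                local = self.attend(region_feats[:, i], h)  # local-object attention
                x = torch.cat([self.embed(captions[:, i, t]), local, g], -1)
                h, c = self.lstm(x, (h, c))
                step_logits.append(self.out(h))
            logits.append(torch.stack(step_logits, dim=1))
        return torch.stack(logits, dim=1)  # (B, seq_len, max_words, vocab)
```

A sequence of five images would enter as region features of shape (B, 5, num_regions, 512), and each of the five branches emits word logits for its own description while sharing the same global context vector.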
