Transfer learning from language models to image caption generators: Better models may not transfer better

01/01/2019
by Marc Tanti, et al.

When designing a neural caption generator, a convolutional neural network can be used to extract image features. Is it possible to also use a neural language model to extract sentence prefix features? We answer this question by trying different ways to transfer the recurrent neural network and embedding layer from a neural language model to an image caption generator. We find that image caption generators with transferred parameters perform better than those trained from scratch, even when the language model is pre-trained only on the text of the same captions dataset that the caption generator will later be trained on. We also find that the best language models (in terms of perplexity) do not result in the best caption generators after transfer learning.
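The core idea is to reuse the embedding and recurrent layers of a pre-trained language model as the sentence-prefix encoder of a caption generator. The sketch below is a minimal illustration of that kind of parameter transfer, not the authors' exact architecture: the model classes, layer sizes, and the additive merge of image and prefix features are illustrative assumptions.

```python
# Minimal sketch of transferring a language model's embedding and RNN
# weights into a caption generator (illustrative, not the paper's code).
import torch
import torch.nn as nn

VOCAB, EMB, HID, IMG_FEATS = 10_000, 256, 512, 2048  # assumed sizes

class LanguageModel(nn.Module):
    """Plain next-word language model: embedding -> LSTM -> softmax."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)

class CaptionGenerator(nn.Module):
    """Caption generator: RNN prefix features are merged with CNN image
    features (here by simple addition) before the output softmax."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.img_proj = nn.Linear(IMG_FEATS, HID)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens, image_features):
        h, _ = self.rnn(self.embed(tokens))               # prefix features
        img = self.img_proj(image_features).unsqueeze(1)  # (batch, 1, HID)
        return self.out(h + img)                          # merge and predict

# Pre-train (or load) the language model, then copy the shared layers.
lm = LanguageModel()   # assume already trained on caption text
cap = CaptionGenerator()
cap.embed.load_state_dict(lm.embed.state_dict())  # transfer embedding layer
cap.rnn.load_state_dict(lm.rnn.state_dict())      # transfer RNN weights

# The transferred layers can be frozen or fine-tuned along with the rest.
for p in cap.embed.parameters():
    p.requires_grad = False
```

Whether the transferred layers are frozen or fine-tuned is one of the design choices the paper compares; the snippet above only shows the mechanics of copying parameters between the two models.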
