Experiments with LVT and FRE for Transformer model

04/26/2020
by Ilshat Gibadullin, et al.

In this paper, we experiment with the Large Vocabulary Trick (LVT) and feature-rich encoding (FRE) applied to the Transformer model for text summarization. Since we could not achieve better results than the analogous RNN-based sequence-to-sequence model, we tried further model variants to find out what improves the results and what deteriorates them.
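The Large Vocabulary Trick named in the abstract is commonly described (following Jean et al. and Nallapati et al.) as restricting the decoder's softmax, per mini-batch, to the words occurring in that batch's source texts plus the globally most frequent target words. The sketch below is a hypothetical illustration of that vocabulary-restriction step only; the function name and parameters are assumptions, not the authors' implementation.

```python
# Hedged sketch of the Large Vocabulary Trick (LVT) vocabulary restriction:
# the decoder softmax for a mini-batch is limited to the words of that
# batch's source documents plus the K most frequent target words.
# All names here are illustrative, not from the paper.
from collections import Counter

def build_lvt_vocab(batch_sources, global_freq, k_frequent):
    """Return the restricted decoder vocabulary for one mini-batch."""
    vocab = set()
    for tokens in batch_sources:
        vocab.update(tokens)  # words seen in this batch's sources
    # always include the K globally most frequent target words
    vocab.update(w for w, _ in global_freq.most_common(k_frequent))
    return vocab

# usage: a toy corpus frequency table and a two-document batch
freq = Counter({"the": 100, "a": 80, "cat": 5, "dog": 4, "ran": 3})
batch = [["cat", "sat"], ["dog", "ran"]]
print(sorted(build_lvt_vocab(batch, freq, 2)))
# → ['a', 'cat', 'dog', 'ran', 'sat', 'the']
```

Because the per-batch vocabulary is far smaller than the full target vocabulary, the softmax over it is correspondingly cheaper, which is the point of the trick.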
