Split and Rephrase: Better Evaluation and a Stronger Baseline

05/02/2018
by Roee Aharoni, et al.

Splitting and rephrasing a complex sentence into several shorter sentences that convey the same meaning is a challenging problem in NLP. We show that while vanilla seq2seq models can reach high scores on the proposed benchmark (Narayan et al., 2017), they suffer from memorization of the training set, which contains more than 89% of the unique simple sentences in the validation and test sets. To address this, we present a new train-development-test data split and neural models augmented with a copy-mechanism, outperforming the best reported baseline by 8.68 BLEU and fostering further progress on the task.
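The copy-mechanism referenced above follows the general pointer/copy idea from the seq2seq literature: at each decoding step, the output distribution mixes a standard generation distribution over the vocabulary with a distribution that copies tokens directly from the source sentence, weighted by the attention. The sketch below illustrates that mixture only; it is not the authors' implementation, and the function name, argument layout, and the assumption that `p_gen` is already computed elsewhere are all illustrative.

```python
import numpy as np

def copy_augmented_distribution(vocab_logits, attn_weights, src_token_ids,
                                p_gen, vocab_size):
    """Illustrative copy-mechanism mixture (not the paper's exact model).

    vocab_logits:  (vocab_size,) decoder scores over the output vocabulary
    attn_weights:  (src_len,) attention weights over source tokens (sum to 1)
    src_token_ids: (src_len,) vocabulary ids of the source tokens
    p_gen:         scalar in [0, 1], probability of generating vs. copying
    """
    # Generation distribution: softmax over the decoder's vocabulary scores.
    gen_dist = np.exp(vocab_logits - vocab_logits.max())
    gen_dist /= gen_dist.sum()

    # Copy distribution: scatter the attention mass onto the source tokens'
    # vocabulary ids (repeated source tokens accumulate their weights).
    copy_dist = np.zeros(vocab_size)
    np.add.at(copy_dist, src_token_ids, attn_weights)

    # Final output distribution is a convex combination of generate and copy.
    return p_gen * gen_dist + (1.0 - p_gen) * copy_dist
```

Because splitting largely reuses words from the input sentence, letting the decoder copy source tokens directly is what makes such a mechanism a natural fit for this task.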
