Boosting coherence of language models

10/15/2021
by Nikolay Malkin, et al.

The naturalness of long-range information structure, or coherence, remains a challenge in language generation. Large language models have learned such structure only imperfectly: their long-form generations differ from natural text on measures of coherence. To reduce this divergence, we propose coherence boosting, an inference procedure that increases the influence of distant context on next-token prediction. We demonstrate the benefits of coherence boosting with pretrained models through distributional analyses of generated ordinary text and dialog responses. We also find that coherence boosting applied to state-of-the-art models on various zero-shot NLP tasks yields performance gains with no additional training.
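The abstract does not spell out the mechanism, but one natural instantiation of boosting the effect of distant context is a log-linear contrast between next-token predictions conditioned on the full context and on only a short suffix of it. The sketch below illustrates that idea with GPT-2 via Hugging Face Transformers; the function name, the boosting weight `alpha`, and the suffix length `short_len` are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: contrast full-context next-token logits against logits
# computed from only the recent tokens, so that tokens favored solely by
# the local context are down-weighted and distant context gains influence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

def boosted_logits(context: str, alpha: float = 0.5, short_len: int = 8) -> torch.Tensor:
    """Next-token logits boosted toward the distant context.

    Combines the two predictions log-linearly (assumed form):
        (1 + alpha) * logits(full context) - alpha * logits(short suffix)
    """
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        full = model(ids).logits[0, -1]                    # conditioned on everything
        short = model(ids[:, -short_len:]).logits[0, -1]   # conditioned on recent tokens only
    return (1 + alpha) * full - alpha * short

# Usage: greedy next-token choice under the boosted distribution.
next_id = boosted_logits("The Eiffel Tower is in the city of").argmax().item()
print(tokenizer.decode(next_id))
```

Because the combination happens purely at decoding time, this kind of procedure requires no additional training, consistent with the claim above.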
