Model Criticism for Long-Form Text Generation

10/16/2022
by Yuntian Deng, et al.

Language models have demonstrated the ability to generate highly fluent text; however, it remains unclear whether their output retains coherent high-level structure (e.g., story progression). Here, we propose to apply a statistical tool, model criticism in latent space, to evaluate the high-level structure of the generated text. Model criticism compares the distributions between real and generated data in a latent space obtained according to an assumptive generative process. Different generative processes identify specific failure modes of the underlying model. We perform experiments on three representative aspects of high-level discourse – coherence, coreference, and topicality – and find that transformer-based language models are able to capture topical structures but have a harder time maintaining structural coherence or modeling coreference.
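
As a rough illustration of the latent-space comparison described above (a minimal sketch of the general idea, not the authors' exact procedure): if each document is mapped to a latent vector under an assumed generative process, the real and generated latent samples can be compared with a two-sample statistic such as the maximum mean discrepancy (MMD). The `encode` function, kernel choice, and bandwidth below are placeholder assumptions.

```python
# Sketch of latent-space model criticism: embed real and generated documents
# into a shared latent space and compare the two empirical distributions with
# a two-sample statistic (here, a biased V-statistic estimate of squared MMD).
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """RBF kernel matrix between the rows of x and the rows of y."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_squared(real_latents, gen_latents, bandwidth=1.0):
    """Squared Maximum Mean Discrepancy between two latent samples."""
    k_rr = rbf_kernel(real_latents, real_latents, bandwidth)
    k_gg = rbf_kernel(gen_latents, gen_latents, bandwidth)
    k_rg = rbf_kernel(real_latents, gen_latents, bandwidth)
    return k_rr.mean() + k_gg.mean() - 2 * k_rg.mean()

# Hypothetical usage: `encode` maps a document to a latent vector under an
# assumed generative process (e.g., posterior topic proportions).
# real_latents = np.stack([encode(d) for d in real_docs])
# gen_latents  = np.stack([encode(d) for d in generated_docs])
# print(mmd_squared(real_latents, gen_latents))
```

A large discrepancy under a topic-based latent space would point to failures of topical structure, while a discrepancy under an entity- or section-based latent space would point to problems with coreference or coherence.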
