Open-Domain Dialog Evaluation using Follow-Ups Likelihood

09/12/2022
by Maxime De Bruyn, et al.

Automatic evaluation of open-domain dialogs remains an unsolved problem: existing methods do not correlate strongly with human annotations. This paper presents a new automated evaluation method based on follow-ups: we measure the probability that a language model will continue the conversation with a fixed set of follow-up utterances (e.g., "not really relevant here", "what are you trying to say"). Compared against twelve existing methods, our new evaluation achieves the highest correlation with human evaluations.
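To make the idea concrete, here is a minimal sketch of a follow-ups likelihood score built on the HuggingFace transformers library. The model choice (GPT-2), the list of negative follow-ups, and the averaging details are illustrative assumptions, not the authors' exact setup: a response is scored by how improbable the language model finds a set of negative follow-ups after it.

```python
# Sketch of the follow-ups likelihood idea (assumptions: GPT-2 as the
# scoring model, two negative follow-ups, mean token log-probability).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical fixed set of negative follow-ups from the abstract's examples.
NEGATIVE_FOLLOW_UPS = [
    "not really relevant here",
    "what are you trying to say",
]

def follow_up_log_likelihood(context: str, follow_up: str) -> float:
    """Average log-probability of `follow_up` tokens given `context`."""
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    follow_ids = tokenizer(" " + follow_up, return_tensors="pt").input_ids
    input_ids = torch.cat([context_ids, follow_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so slice the follow-up span.
    start = context_ids.shape[1]
    log_probs = torch.log_softmax(logits[0, start - 1 : -1], dim=-1)
    token_lp = log_probs.gather(1, follow_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.mean().item()

def dialog_score(context: str) -> float:
    """Higher is better: negate the mean likelihood of negative follow-ups."""
    lls = [follow_up_log_likelihood(context, f) for f in NEGATIVE_FOLLOW_UPS]
    return -sum(lls) / len(lls)

print(dialog_score("User: How do I reset my router?\n"
                   "Bot: Hold the reset button for ten seconds."))
```

A coherent, on-topic response should make the model assign lower probability to follow-ups like "not really relevant here", yielding a higher score; an incoherent response makes those follow-ups more plausible continuations and lowers the score.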
