Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News

04/27/2021
by Ashkan Kazemi, et al.

In this paper, we explore the construction of natural language explanations for news claims, with the goal of assisting fact-checking and news evaluation applications. We experiment with two methods: (1) an extractive method based on Biased TextRank – a resource-effective unsupervised graph-based algorithm for content extraction; and (2) an abstractive method based on the GPT-2 language model. We perform comparative evaluations on two misinformation datasets in the political and health news domains, and find that the extractive method shows the most promise.
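The extractive method centers on Biased TextRank, a variant of TextRank in which the random-walk restart distribution is biased toward a query (here, the news claim), so sentences relevant to the claim rank higher. The following is a minimal sketch of that idea, not the authors' implementation: the Jaccard token-overlap similarity and the example sentences are illustrative assumptions (a real system would use embedding similarity over the source article).

```python
import re

def tokenize(text):
    """Lowercase word tokens as a set."""
    return set(re.findall(r"\w+", text.lower()))

def similarity(a, b):
    """Jaccard overlap between token sets (a simple stand-in for
    the embedding-based similarity a real system would use)."""
    ta, tb = tokenize(a), tokenize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def biased_textrank(sentences, query, damping=0.85, iters=50):
    """Rank sentences with a PageRank-style walk whose restart
    distribution is biased toward the query, so query-relevant
    sentences accumulate higher scores."""
    n = len(sentences)
    # Bias vector: normalized similarity of each sentence to the query.
    bias = [similarity(s, query) for s in sentences]
    total = sum(bias) or 1.0
    bias = [b / total for b in bias]
    # Pairwise sentence-similarity edges, used as transition weights.
    w = [[similarity(sentences[i], sentences[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    out = [sum(row) or 1.0 for row in w]  # out-weight normalizers
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) * bias[i]
                  + damping * sum(scores[j] * w[j][i] / out[j]
                                  for j in range(n))
                  for i in range(n)]
    return scores

# Illustrative toy example (made-up sentences and claim).
sentences = [
    "The stadium hosted a football match on Sunday.",
    "Officials reported that the new vaccine passed clinical trials.",
    "Local shops reported higher sales during the holidays.",
]
claim = "Did the new vaccine pass clinical trials?"
scores = biased_textrank(sentences, claim)
best = max(range(len(sentences)), key=scores.__getitem__)
print(sentences[best])  # the claim-relevant sentence ranks first here
```

The top-ranked sentences under this biased scoring then serve directly as the extractive explanation.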
