IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What to Emphasize?

07/24/2020
by Jaeyoul Shin, et al.

We propose a novel method for determining which words deserve to be emphasized in written text for visual media, relying only on the information in the self-attention distributions of pre-trained language models (PLMs). With extensive experiments and analyses, we show that 1) our zero-shot approach outperforms a reasonable baseline that adopts TF-IDF, and that 2) several attention heads in PLMs are specialized for emphasis selection, confirming that PLMs are capable of recognizing important words in sentences.

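To make the idea concrete, the sketch below shows one way to turn self-attention distributions into per-word emphasis scores: each word is scored by the attention mass it receives from the other tokens in a single head. This is a minimal illustration using the HuggingFace `transformers` library, not the authors' released code; the `bert-base-uncased` checkpoint and the layer/head indices are illustrative assumptions, not the specialized heads identified in the paper.

```python
# Sketch: score words by the self-attention they receive in one head of a PLM.
# Assumptions: HuggingFace `transformers`, bert-base-uncased, and an arbitrary
# (layer, head) choice -- not the heads reported in the paper.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def emphasis_scores(sentence: str, layer: int = 8, head: int = 3):
    """Score each word by the attention mass it receives in one head."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: one tensor per layer, shape (batch, heads, seq_len, seq_len)
    attn = out.attentions[layer][0, head]      # (seq_len, seq_len), rows = queries
    received = attn.sum(dim=0)                 # total attention each token receives
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    # Merge WordPiece sub-tokens back into words and drop special tokens
    words, scores = [], []
    for tok, score in zip(tokens, received.tolist()):
        if tok in ("[CLS]", "[SEP]"):
            continue
        if tok.startswith("##") and words:
            words[-1] += tok[2:]
            scores[-1] += score
        else:
            words.append(tok)
            scores.append(score)
    return list(zip(words, scores))

print(emphasis_scores("Never give up on your dreams"))
```

In a zero-shot setting like the one described above, words would then be ranked by these scores (or by scores aggregated over several heads) and compared against a TF-IDF baseline; no emphasis-labeled training data is required.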