Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media

05/22/2020
by Xiangjue Dong, et al.

We present a transformer-based sarcasm detection model that accounts for the context of the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention over the target utterance and the relevant context in the thread. The context-aware models are evaluated on two social media datasets, Twitter and Reddit, and show an improvement of 3.1% over the baseline, achieving an F1-score of 79.0 and becoming one of the highest performing systems among the 36 participants in this shared task.
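The abstract gives no implementation details, but context-aware transformer models of this kind typically flatten the conversation thread and the target utterance into a single input sequence joined by special tokens before encoding. The sketch below is a hypothetical illustration of that input construction only (the token names follow BERT conventions; `build_input` and its truncation policy are assumptions, not the paper's actual code):

```python
def build_input(context_turns, target, max_tokens=512,
                cls="[CLS]", sep="[SEP]"):
    """Flatten a conversation thread plus the target utterance into one
    token sequence: [CLS] c1 [SEP] c2 [SEP] ... target [SEP].

    Whitespace tokenization stands in for a real subword tokenizer.
    If the sequence exceeds max_tokens, the oldest context turns are
    dropped first, since the target utterance must always be kept.
    """
    segments = [turn.split() for turn in context_turns] + [target.split()]
    # Each segment costs len(segment) + 1 (its trailing [SEP]); the
    # leading [CLS] costs one more. Drop oldest context until it fits.
    while len(segments) > 1 and \
            sum(len(s) + 1 for s in segments) + 1 > max_tokens:
        segments.pop(0)
    tokens = [cls]
    for seg in segments:
        tokens.extend(seg)
        tokens.append(sep)
    return tokens

# Example: two context turns followed by the (sarcastic) target utterance.
tokens = build_input(["that went really well", "oh sure"],
                     "great job everyone")
```

The resulting token sequence would then be mapped to ids and fed to a pretrained transformer encoder, with the `[CLS]` representation passed to a binary sarcasm classifier.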
