BERT based Transformers lead the way in Extraction of Health Information from Social Media

04/15/2021
by Sidharth R, et al.

This paper describes our submissions for the Social Media Mining for Health (SMM4H) 2021 shared tasks. We participated in 2 tasks: (1) Classification, extraction and normalization of adverse drug effect (ADE) mentions in English tweets (Task-1) and (2) Classification of COVID-19 tweets containing symptoms (Task-6). Our approach for the first task uses the language representation model RoBERTa with a binary classification head. For the second task, we use BERTweet, based on RoBERTa. Fine-tuning is performed on the pre-trained models for both tasks. The models are placed on top of a custom domain-specific processing pipeline. Our system ranked first among all the submissions for subtask-1(a) with an F1-score of 61%. On subtask-1(b), our system obtained an F1-score of 50%, which improves on the score averaged across all submissions. The BERTweet model achieved an F1 score of 94% on Task-6.
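To make the described approach concrete, the sketch below shows how a pre-trained RoBERTa encoder with a binary classification head can be fine-tuned for tweet-level ADE classification. This is a minimal illustration assuming the Hugging Face transformers library; the model names, example tweets, and hyperparameters are assumptions for illustration and are not the authors' actual configuration or pipeline.

```python
# Illustrative sketch: fine-tuning RoBERTa with a binary classification head
# for ADE classification in tweets. Names and data below are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-base"  # Task-6 could instead use a BERTweet checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labelled tweets: 1 = contains an ADE mention, 0 = does not.
texts = ["this med gave me a terrible headache", "feeling great after my morning run"]
labels = torch.tensor([1, 0])

# Tokenize; a domain-specific processing pipeline (handle normalization,
# emoji handling, etc.) would normally run before this step.
batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")

# One fine-tuning step: cross-entropy loss from the classification head
# placed on top of the pre-trained encoder.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

# Inference: argmax over the two logits gives the predicted class.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())
```

In practice such a model would be trained for several epochs over the full shared-task training set and evaluated with the official F1-based metrics; the single step above only illustrates the fine-tuning mechanics.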

