Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023?

12/19/2022
by Shuheng Liu, et al.

Named Entity Recognition (NER) is an important and well-studied task in natural language processing. The classic CoNLL-2003 English dataset, published almost 20 years ago, is commonly used to train and evaluate named entity taggers. The age of this dataset raises the question of how well these models perform when applied to modern data. In this paper, we present CoNLL++, a new annotated test set that mimics the process used to create the original CoNLL-2003 test set as closely as possible, except with data collected in 2020. Using CoNLL++, we evaluate the generalization of more than 20 different models to modern data and observe that their generalization behavior varies widely. The F1 scores of large transformer-based models pre-trained on recent data drop much less than those of models using static word embeddings, and RoBERTa-based and T5 models achieve comparable F1 scores on both CoNLL-2003 and CoNLL++. Our experiments show that achieving good generalizability requires a combined effort of developing larger models and continuing pre-training on in-domain and recent data. These results suggest that standard evaluation methodology may have underestimated progress on named entity recognition over the past 20 years: in addition to improving performance on the original CoNLL-2003 dataset, we have also improved the ability of our models to generalize to modern data.
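To make the evaluation setup concrete, here is a minimal sketch (in Python, using the Hugging Face transformers library and seqeval) of how one might score a single fine-tuned tagger on both test sets and compare entity-level F1. The checkpoint name and file paths are placeholders for illustration, not the authors' actual models or data release.

    import torch
    from seqeval.metrics import f1_score
    from transformers import AutoModelForTokenClassification, AutoTokenizer

    # Placeholder checkpoint: any CoNLL-style token-classification model works here.
    MODEL = "dslim/bert-base-NER"
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForTokenClassification.from_pretrained(MODEL)

    def read_conll(path):
        """Read a CoNLL-format file into (words, gold BIO tags) pairs."""
        sentences, words, tags = [], [], []
        for line in open(path, encoding="utf-8"):
            line = line.strip()
            if line.startswith("-DOCSTART-"):
                continue
            if not line:
                if words:
                    sentences.append((words, tags))
                    words, tags = [], []
                continue
            cols = line.split()
            words.append(cols[0])
            tags.append(cols[-1])  # the NER tag is the last column
        if words:
            sentences.append((words, tags))
        return sentences

    def tag_sentence(words):
        """Predict one BIO label per word, using the first sub-token of each word."""
        enc = tok(words, is_split_into_words=True, return_tensors="pt", truncation=True)
        with torch.no_grad():
            pred_ids = model(**enc).logits[0].argmax(-1).tolist()
        labels, prev = [], None
        for i, word_id in enumerate(enc.word_ids()):
            if word_id is not None and word_id != prev:
                labels.append(model.config.id2label[pred_ids[i]])
            prev = word_id
        labels += ["O"] * (len(words) - len(labels))  # pad if truncation dropped words
        return labels

    def evaluate(path):
        gold, pred = [], []
        for words, tags in read_conll(path):
            gold.append(tags)
            pred.append(tag_sentence(words))
        return f1_score(gold, pred)

    # The gap between the two scores is the generalization drop the paper measures.
    print("CoNLL-2003 F1:", evaluate("conll2003_test.txt"))  # hypothetical path
    print("CoNLL++   F1:", evaluate("conllpp_test.txt"))     # hypothetical path

Entity-level F1, as computed by seqeval, is the standard CoNLL metric: a predicted entity counts as correct only if both its span and its type match the gold annotation.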
