A Rigorous Study on Named Entity Recognition: Can Fine-tuning Pretrained Model Lead to the Promised Land?
Fine-tuning pretrained models has achieved promising performance on standard NER benchmarks. Generally, these benchmarks are blessed with strong name regularity, high mention coverage, and sufficient context diversity. Unfortunately, when scaling NER to open situations, these advantages may no longer hold, which raises the critical question of whether fine-tuned pretrained models can still work well under these conditions. As no currently available dataset allows us to investigate this problem, this paper proposes to conduct randomization tests on standard benchmarks. Specifically, we erase name regularity, mention coverage, and context diversity respectively from the benchmarks, in order to explore their impact on the generalization ability of models. Moreover, we construct a new open NER dataset that focuses on entity types with weak name regularity, such as book, song, and movie. From both the randomization tests and empirical experiments, we draw the following conclusions: 1) name regularity is vital for generalization to unseen mentions; 2) high mention coverage may undermine the model's generalization ability; and 3) context patterns may not require enormous amounts of data to capture when using pretrained supervised models.
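To make the randomization-test idea concrete, below is a minimal sketch, under our own assumptions rather than the paper's exact protocol, of how name regularity could be erased from a CoNLL-style BIO-tagged corpus: every token inside an entity mention is replaced with a random string, so surface forms carry no signal while the surrounding context and the gold labels stay intact. The function name and the random-string scheme are illustrative choices, not taken from the paper.

```python
import random
import string

def erase_name_regularity(tokens, tags, seed=None):
    """Replace every entity-mention token with a random string so that
    name (surface-form) regularity carries no signal, while context
    tokens and BIO tags are preserved.

    `tokens` is a list of word strings; `tags` is a parallel list of
    BIO labels (e.g. "B-PER", "I-PER", "O").
    """
    rng = random.Random(seed)
    new_tokens = []
    for token, tag in zip(tokens, tags):
        if tag == "O":
            # Context tokens are left untouched.
            new_tokens.append(token)
        else:
            # Random lowercase string of the same length as the original,
            # destroying any name-like pattern in the mention.
            new_tokens.append(
                "".join(rng.choice(string.ascii_lowercase) for _ in token)
            )
    return new_tokens, tags

# Example: "John Smith" loses its name regularity, but a model can
# still exploit the context pattern around the mention.
tokens = ["John", "Smith", "said", "hello"]
tags = ["B-PER", "I-PER", "O", "O"]
print(erase_name_regularity(tokens, tags, seed=0))
```

Analogous perturbations could target the other two properties, e.g. resampling the train/test split so that no test mention appears in training (removing mention coverage) or duplicating a small set of sentence templates (reducing context diversity).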