Remedying BiLSTM-CNN Deficiency in Modeling Cross-Context for NER

08/29/2019
by Peng-Hsuan Li, et al.

Recent research has prevalently used BiLSTM-CNN as a core module for NER in a sequence-labeling setup. This paper formally shows the limitation of BiLSTM-CNN encoders in modeling cross-context patterns for each word, i.e., patterns crossing past and future for a specific time step. Two types of cross-structures are used to remedy the problem: a BiLSTM variant with cross-links between layers, and a multi-head self-attention mechanism. These cross-structures bring consistent improvements across a wide range of NER domains for a core system using BiLSTM-CNN without additional gazetteers, POS taggers, language modeling, or multi-task supervision. The model surpasses comparable previous models on OntoNotes 5.0 and WNUT 2017 by 1.4 points, especially improving on emerging, complex, confusing, and multi-token entity mentions, which shows the importance of remedying the core module of NER.
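To make the idea concrete, below is a minimal sketch (not the authors' released code) of the second cross-structure: multi-head self-attention applied on top of a BiLSTM encoder, so that each time step can combine information from both its past and future context in one step. The class name CrossContextEncoder and all dimensions are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: adding multi-head self-attention on top of a BiLSTM
# so each position attends over the whole sequence (past and future).
import torch
import torch.nn as nn


class CrossContextEncoder(nn.Module):
    def __init__(self, emb_dim: int, hidden_dim: int, n_heads: int = 8):
        super().__init__()
        # Plain BiLSTM: forward and backward states are only concatenated,
        # so one layer cannot model patterns spanning both directions.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)
        # Self-attention lets every position mix past and future features.
        self.attn = nn.MultiheadAttention(hidden_dim, n_heads, batch_first=True)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, emb_dim) word + CNN character features
        h, _ = self.bilstm(embeddings)          # (batch, seq_len, hidden_dim)
        cross, _ = self.attn(h, h, h)           # cross-context features
        return torch.cat([h, cross], dim=-1)    # fed to the tagging layer


# Usage: encode 2 sentences of length 5 with 100-dim input features.
enc = CrossContextEncoder(emb_dim=100, hidden_dim=256)
out = enc(torch.randn(2, 5, 100))
print(out.shape)  # torch.Size([2, 5, 512])
```

The attention output is concatenated with the BiLSTM states rather than replacing them, which is one plausible way to expose both the local recurrent features and the cross-context features to the downstream sequence-labeling layer.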
