Why KDAC? A general activation function for knowledge discovery

11/27/2021
by Zhenhua Wang, et al.

Named entity recognition based on deep learning (DNER) can effectively mine expected knowledge from large-scale unstructured and semi-structured text, and has gradually become the paradigm of knowledge discovery. Currently, Tanh, ReLU and Sigmoid dominate DNER, yet these activation functions suffer from vanishing gradients, the absence of negative outputs, or non-differentiable points, which may impede DNER's exploration of knowledge through omitted or incompletely represented latent semantics. To surmount this obstacle, we present a novel and general activation function termed KDAC. Specifically, KDAC aggregates and inherits the merits of Tanh and ReLU, both of which are widely leveraged across knowledge domains. Its positive region adopts an adaptive linear design inspired by ReLU, while its negative region exploits the interaction between exponential and linear terms to overcome vanishing gradients and the lack of negative values. Crucially, non-differentiable points are detected and eliminated by a smooth approximation. We perform experiments with a BERT-BiLSTM-CNN-CRF model on six benchmark datasets covering different knowledge domains: Weibo, Clinical, E-commerce, Resume, HAZOP and People's Daily. The experimental results show that KDAC is advanced and effective, providing a more generalized activation that improves the performance of DNER. We hope that KDAC can serve as a promising alternative activation function in DNER and contribute to the construction of knowledge.
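The abstract describes KDAC only qualitatively (an adaptive linear positive region, an exponential-linear negative region, and a smooth fix for non-differentiable points); it does not reproduce the paper's exact formula. A minimal PyTorch sketch of an activation with those stated properties might look like the following, where the class name KDACSketch, the learnable slope parameter a, and the sigmoid blending are all illustrative assumptions, not the authors' definition:

```python
import torch
import torch.nn as nn

class KDACSketch(nn.Module):
    """Illustrative activation with KDAC's stated properties (assumed form).

    - positive region: adaptive linear, ReLU-inspired, with a learnable slope
    - negative region: exponential-linear (ELU-style), so outputs can be
      negative and the gradient stays non-zero for finite inputs
    - a smooth sigmoid gate blends the two regions, so there is no
      non-differentiable point at the origin
    """

    def __init__(self, alpha: float = 1.0):
        super().__init__()
        # learnable slope for the adaptive linear positive region (assumption)
        self.a = nn.Parameter(torch.tensor(alpha))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(x)          # smooth transition between regions
        pos = self.a * x                 # adaptive linear, ReLU-inspired
        neg = torch.expm1(x)             # exp(x) - 1: negative outputs, bounded below by -1
        return gate * pos + (1.0 - gate) * neg

# usage: the sketch is differentiable everywhere, unlike plain ReLU at 0
act = KDACSketch()
print(act(torch.linspace(-3.0, 3.0, steps=7)))
```

The sigmoid gate is one common way to smoothly approximate a piecewise activation; the paper's actual smoothing scheme may differ.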
