Enhancing Contrastive Learning with Noise-Guided Attack: Towards Continual Relation Extraction in the Wild

05/11/2023
by   Ting Wu, et al.

The principle of continual relation extraction (CRE) involves adapting to emerging novel relations while preserving old knowledge. While current endeavors in CRE succeed in preserving old knowledge, they tend to fail when exposed to contaminated data streams. We attribute this to their reliance on the artificial assumption that the data stream contains no annotation errors, which hinders real-world applications of CRE. Considering the ubiquity of noisy labels in real-world datasets, in this paper we formalize a more practical learning scenario, termed noisy-CRE. Building on this challenging setting, we develop a noise-resistant contrastive framework, Noise-guided Attack in Contrastive Learning (NaCL), to learn incrementally arriving corrupted relations. Compared to directly discarding noisy samples or relabeling them (which is often infeasible), we show that modifying the feature space to match the given noisy labels via attacking can better enrich contrastive representations. Extensive empirical validation highlights that NaCL achieves consistent performance improvements with increasing noise rates, outperforming state-of-the-art baselines.
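The abstract does not spell out NaCL's training procedure, but the core idea of "modifying the feature space to match the given noisy labels via attacking" can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual method: we model the attack as an FGSM-style step that nudges a feature vector toward the class prototype of its (possibly noisy) label, and pair it with a standard supervised contrastive loss. The function names, the prototype representation, and the hyperparameters `eps` and `temp` are all hypothetical.

```python
import numpy as np

def noise_guided_attack(x, prototypes, noisy_label, eps=0.1):
    """Hypothetical FGSM-style attack: move feature x a small step
    toward the prototype of its (possibly noisy) label.

    The gradient of the dot-product similarity <x, p> with respect
    to x is simply p, so the sign step uses the prototype directly.
    """
    grad = prototypes[noisy_label]
    return x + eps * np.sign(grad)

def sup_con_loss(feats, labels, temp=0.1):
    """Standard supervised contrastive loss over L2-normalized features:
    anchors with the same label are treated as positives."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temp
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / count
```

Under this sketch, the attacked features are guaranteed to be at least as similar to their labeled prototype as before (the dot product increases by `eps` times the L1 norm of the prototype), so the contrastive loss is computed on representations that have been pulled toward the given, possibly noisy, labels rather than on discarded or relabeled samples.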
