LabelVizier: Interactive Validation and Relabeling for Technical Text Annotations

03/31/2023
by Xiaoyu Zhang et al.

With the rapid accumulation of text data produced by data-driven techniques, the task of extracting "data annotations" (concise, high-quality data summaries from unstructured raw text) has become increasingly important. Recent advances in weak supervision and crowd-sourcing provide promising ways to efficiently create annotations (labels) for large-scale technical text data. However, such annotations may fail in practice because of changes in annotation requirements, application scenarios, and modeling goals, in which case label validation and relabeling by domain experts are required. To address this issue, we present LabelVizier, a human-in-the-loop workflow that incorporates domain knowledge and user-specific requirements to reveal actionable insights into annotation flaws and then produce better-quality labels for large-scale multi-label datasets. We implement our workflow as an interactive notebook that supports flexible error profiling, in-depth annotation validation for three error types, and efficient annotation relabeling at different data scales. We evaluated the efficiency and generalizability of our workflow with two use cases and four expert reviews. The results indicate that LabelVizier is applicable in various application scenarios and can assist domain experts with different knowledge backgrounds in efficiently improving technical text annotation quality.
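As a rough illustration of the kind of error profiling the abstract mentions (not the authors' implementation), the sketch below flags suspicious multi-label annotations in a notebook setting with pandas; the column names `text` and `labels` and the flagging heuristics are assumptions for demonstration only.

```python
# Illustrative sketch only: profile a multi-label annotation table for
# potentially flawed labels (hypothetical columns `text` and `labels`).
import pandas as pd

# Toy annotated dataset; in practice this would be loaded from a file.
df = pd.DataFrame({
    "text": [
        "Valve housing shows fatigue cracking near the weld seam.",
        "Sensor reading drifted after thermal cycling.",
        "Unit passed inspection.",
    ],
    "labels": [
        ["fatigue", "weld-defect"],
        [],                                                       # possibly a missing annotation
        ["fatigue", "sensor-drift", "weld-defect", "corrosion"],  # possibly over-labeled
    ],
})

# Simple error profile: flag records with no labels or unusually many labels.
df["n_labels"] = df["labels"].apply(len)
df["flag_missing"] = df["n_labels"] == 0
df["flag_overlabeled"] = df["n_labels"] > df["n_labels"].median() + 1

# Records surfaced for expert review and relabeling.
to_review = df[df["flag_missing"] | df["flag_overlabeled"]]
print(to_review[["text", "labels"]])
```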
