Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking

11/17/2022
by Jihyun Lee, et al.

In dialogue state tracking (DST), labeling the dataset involves considerable human labor. We propose a new self-training framework for few-shot generative DST that utilizes unlabeled data. Our self-training method iteratively improves the model through pseudo labeling and employs Purpose Preserving Augmentation (PPAug) to prevent overfitting. Our method improves performance by approximately 4 points over the baseline in the few-shot 10% setting.
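To make the described self-training loop concrete, the sketch below illustrates one plausible reading of the procedure: pseudo-label unlabeled dialogues with the current model, keep only confident predictions, add purpose-preserving augmentations of them, and retrain. The helper names (train, predict, augment), the confidence-threshold filtering, and the number of rounds are illustrative assumptions, not details taken from the paper.

```python
from typing import Callable, List, Tuple

# Placeholder types for illustration only.
Dialogue = str   # a raw dialogue transcript
State = dict     # slot-value pairs forming the dialogue state
Example = Tuple[Dialogue, State]


def self_train(
    labeled: List[Example],
    unlabeled: List[Dialogue],
    train: Callable[[List[Example]], object],
    predict: Callable[[object, Dialogue], Tuple[State, float]],
    augment: Callable[[Dialogue], Dialogue],
    rounds: int = 3,
    threshold: float = 0.9,
):
    """Hypothetical iterative self-training loop with pseudo-labeling."""
    pool = list(labeled)
    model = train(pool)  # initial few-shot model trained on labeled data

    for _ in range(rounds):
        # Pseudo-label unlabeled dialogues, keeping only confident predictions.
        pseudo: List[Example] = []
        for dialogue in unlabeled:
            state, confidence = predict(model, dialogue)
            if confidence >= threshold:
                pseudo.append((dialogue, state))

        # Augment the surface form of each dialogue while preserving its
        # purpose (the slots and values), so the pseudo-label remains valid.
        augmented = [(augment(d), s) for d, s in pseudo]

        # Retrain on the original labels plus pseudo-labeled and augmented data.
        model = train(pool + pseudo + augmented)

    return model
```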
