Self-Training for Unsupervised Parsing with PRPN

05/27/2020
by Anhad Mohananey, et al.

Neural unsupervised parsing (UP) models learn to parse without access to syntactic annotations, while being optimized for another task like language modeling. In this work, we propose self-training for neural UP models: we leverage aggregated annotations predicted by copies of our model as supervision for future copies. To be able to use our model's predictions during training, we extend a recent neural UP architecture, the PRPN (Shen et al., 2018a), such that it can be trained in a semi-supervised fashion. We then add examples with parses predicted by our model to our unlabeled UP training data. Our self-trained model outperforms the PRPN by 8.1% F1 and the previous state of the art by 1.6% F1. In addition, we show that our architecture can also be helpful for semi-supervised parsing in ultra-low-resource settings.
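The procedure described above can be summarized as a simple loop: train copies of the model, aggregate their predicted parses, and reuse those parses as supervision for the next copies. Below is a minimal Python sketch of this generic self-training recipe. It is only an illustration of the idea, not the authors' implementation; the callables build_model, train, predict, and aggregate are hypothetical placeholders standing in for PRPN training, parse prediction, and the aggregation of predicted annotations.

```python
def self_train(unlabeled_sentences, build_model, train, predict, aggregate,
               num_copies=4, num_rounds=2):
    """Generic self-training loop (a sketch, not the authors' code):
    train model copies, aggregate their predicted parses, and feed
    those parses back as supervision for the next round of copies."""
    pseudo_labels = []  # (sentence, predicted parse) pairs used as supervision
    models = []
    for _ in range(num_rounds):
        models = []
        for _ in range(num_copies):
            model = build_model()
            # First round: purely unsupervised (pseudo_labels is empty);
            # later rounds train semi-supervised on the pseudo-labeled parses.
            train(model, unlabeled_sentences, pseudo_labels)
            models.append(model)
        # Aggregate the copies' predictions (e.g., keep parses the copies
        # agree on) and use them as supervision in the next round.
        pseudo_labels = aggregate([predict(m, unlabeled_sentences)
                                   for m in models])
    return models
```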
