Prior Knowledge Integration for Neural Machine Translation using Posterior Regularization

11/02/2018
by   Jiacheng Zhang, et al.

Although neural machine translation has made significant progress recently, integrating multiple overlapping, arbitrary sources of prior knowledge remains a challenge. In this work, we propose to use posterior regularization as a general framework for integrating prior knowledge into neural machine translation. We represent prior knowledge sources as features in a log-linear model, which guides the learning process of the neural translation model. Experiments on Chinese-English translation show that our approach leads to significant improvements.
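To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a posterior-regularized training loss over a candidate set of translations: the model's posterior P is pulled toward a log-linear prior Q defined by prior-knowledge features, via a KL term combined with the usual likelihood term. The function name `pr_loss`, the fixed feature weights `gamma`, and the convention that candidate 0 is the reference are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def pr_loss(model_logits, feature_matrix, gamma, lam=0.5):
    """Sketch of a posterior-regularized loss for one source sentence.

    model_logits:   unnormalized scores the NMT model assigns to each
                    candidate translation (hypothetical candidate set)
    feature_matrix: one row of prior-knowledge feature values per candidate
    gamma:          log-linear feature weights (assumed fixed here; in
                    practice they would also be learned)
    lam:            interpolation weight between likelihood and KL terms
    """
    p = softmax(np.asarray(model_logits, dtype=float))   # model posterior P
    q = softmax(feature_matrix @ gamma)                  # log-linear prior Q
    kl = float(np.sum(q * (np.log(q) - np.log(p))))      # KL(Q || P)
    nll = -float(np.log(p[0]))                           # assume candidate 0 is the reference
    return (1 - lam) * nll + lam * kl
```

Minimizing the KL term encourages the neural model to agree with the feature-based prior wherever the prior is confident, while the likelihood term keeps the model fit to the parallel data.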
