From Credit Assignment to Entropy Regularization: Two New Algorithms for Neural Sequence Prediction

04/29/2018
by   Zihang Dai, et al.

In this work, we study the credit assignment problem in reward augmented maximum likelihood (RAML) learning, and establish a theoretical equivalence between the token-level counterpart of RAML and entropy-regularized reinforcement learning. Inspired by this connection, we propose two sequence prediction algorithms: one extends RAML with fine-grained credit assignment, and the other improves Actor-Critic with systematic entropy regularization. On two benchmark datasets, we show that the proposed algorithms outperform RAML and Actor-Critic respectively, providing new alternatives for sequence prediction.
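For context on the two objectives the abstract connects, a hedged sketch of the standard formulations (the notation below — temperature τ, reward r, reference sequence y* — is assumed, not taken from this source): RAML maximizes likelihood under an exponentiated-reward distribution, while entropy-regularized RL adds a scaled entropy bonus to the expected reward.

```latex
% RAML objective (Norouzi et al., 2016): likelihood under the
% exponentiated-payoff distribution q, with temperature \tau.
\mathcal{L}_{\mathrm{RAML}}(\theta)
  = \mathbb{E}_{y \sim q(y \mid y^{*};\, \tau)}
    \left[ \log p_\theta(y \mid x) \right],
\qquad
q(y \mid y^{*};\, \tau) \propto \exp\!\big( r(y, y^{*}) / \tau \big)

% Entropy-regularized RL objective: expected reward plus a
% \tau-weighted entropy term over the model's output distribution.
\mathcal{L}_{\mathrm{ERL}}(\theta)
  = \mathbb{E}_{y \sim p_\theta(y \mid x)}
    \left[ r(y, y^{*}) \right]
  + \tau \, \mathcal{H}\big( p_\theta(\cdot \mid x) \big)
```

The two objectives differ in which distribution the expectation is taken under (the fixed exponentiated-reward distribution q versus the model p itself); the paper's token-level equivalence result relates these sequence-level forms.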
