Improving Generalization of Transformer for Speech Recognition with Parallel Schedule Sampling and Relative Positional Embedding

11/01/2019
by   Pan Zhou, et al.

The Transformer has recently shown promising results in many sequence-to-sequence transformation tasks. It replaces the recurrent neural networks (RNN) of the attention-based encoder-decoder (AED) with stacks of feed-forward self-attention layers in both the encoder and the decoder. Each self-attention layer learns temporal dependence by incorporating sinusoidal positional embeddings of the tokens in a sequence, which allows parallel computation and therefore faster training iterations than the sequential operation of an RNN. The greater depth of the Transformer also lets it outperform RNN-based AED. However, this parallelization makes it hard to apply schedule sampling during training, and self-attention with sinusoidal positional embeddings may degrade performance on longer sequences that contain similar acoustic or semantic information at different positions. To address these problems, we propose parallel schedule sampling (PSS) and relative positional embedding (RPE) to help the Transformer generalize to unseen data. Our proposed methods achieve 7% relative improvement for short utterances and 30% for long utterances on a 10,000-hour ASR task.
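The abstract notes that the Transformer's parallel decoding makes ordinary schedule sampling awkward, since the model's own previous predictions are not available step by step. A common way to recover it in parallel, and a plausible reading of PSS, is a two-pass scheme: a first teacher-forced pass produces predictions for all positions at once, some ground-truth tokens are then replaced by those predictions, and a second pass computes the loss on the mixed inputs. The sketch below illustrates this idea only; the `decoder` callable, tensor shapes, and `pss_step` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of parallel schedule sampling (PSS) for a Transformer
# decoder. Assumes a PyTorch-style decoder(tgt_in, memory) -> logits [B, T, V].
import torch

def pss_step(decoder, memory, tgt_in, tgt_out, sample_prob, criterion):
    """One hypothetical training step with parallel schedule sampling.

    memory      : encoder outputs [B, S, D]
    tgt_in      : ground-truth decoder inputs, shifted right [B, T]
    tgt_out     : ground-truth targets [B, T]
    sample_prob : probability of feeding the model's own prediction
                  instead of the ground-truth token
    """
    # Pass 1: teacher forcing in parallel to obtain model predictions.
    with torch.no_grad():
        first_logits = decoder(tgt_in, memory)      # [B, T, V]
        first_preds = first_logits.argmax(dim=-1)   # [B, T]

    # Mix inputs: with probability sample_prob, replace each ground-truth
    # token (except BOS) by the first-pass prediction for that position.
    replace = torch.rand_like(tgt_in, dtype=torch.float) < sample_prob
    replace[:, 0] = False                           # never replace BOS
    mixed_in = tgt_in.clone()
    # first_preds[:, t] predicts tgt_out[:, t], which feeds position t+1
    mixed_in[:, 1:] = torch.where(replace[:, 1:],
                                  first_preds[:, :-1],
                                  tgt_in[:, 1:])

    # Pass 2: ordinary parallel decoding on the mixed inputs; the loss is
    # still computed against the ground-truth targets.
    logits = decoder(mixed_in, memory)
    return criterion(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
```

Because both passes run all target positions in parallel, this keeps the Transformer's training speed while still exposing the decoder to its own (possibly erroneous) predictions, which is the generalization benefit schedule sampling is meant to provide.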
