Transformer with Gaussian weighted self-attention for speech enhancement

10/13/2019
by   Jaeyoung Kim, et al.

The Transformer architecture has recently replaced recurrent neural networks such as LSTM or GRU on many natural language processing (NLP) tasks by achieving new state-of-the-art performance. Self-attention is the core building block of the Transformer: it not only enables parallelization of sequence computation but also provides a constant path length between symbols, which is essential for learning long-range dependencies. However, the Transformer has not performed well on speech enhancement because it does not account for the physical characteristics of speech and noise. In this paper, we propose Gaussian weighted self-attention, which attenuates attention weights according to the distance between the target and context symbols. Experimental results show that the proposed attention scheme significantly improves over both the original Transformer and recurrent networks.
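To make the idea concrete, here is a minimal NumPy sketch of the mechanism the abstract describes: standard scaled dot-product attention whose scores are attenuated by a Gaussian of the position distance |i - j| before the softmax. The function name, the `sigma` parameter, and the exact placement of the weighting are illustrative assumptions; the paper's precise formulation may differ.

```python
import numpy as np

def gaussian_weighted_attention(q, k, v, sigma=2.0):
    """Scaled dot-product attention with Gaussian distance weighting.

    Hypothetical sketch: scores between positions i and j are scaled by
    exp(-|i - j|^2 / (2 * sigma^2)), so distant context contributes less.
    q, k, v: arrays of shape (seq_len, d_model).
    """
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)                  # (t, t) attention scores
    idx = np.arange(t)
    dist = np.abs(idx[:, None] - idx[None, :])     # |i - j| position distances
    g = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))  # Gaussian weights in (0, 1]
    weighted = g * scores                          # attenuate far-away context
    # softmax over context positions
    e = np.exp(weighted - weighted.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)
    return attn @ v

# Usage: self-attention over 5 time steps of 4-dim features.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 4))
out = gaussian_weighted_attention(x, x, x, sigma=2.0)
print(out.shape)  # (5, 4)
```

A small `sigma` concentrates each output frame on its immediate neighborhood, which matches the intuition that nearby frames are the most informative for denoising speech.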
