Streaming Transformer-based Acoustic Models Using Self-attention with Augmented Memory

05/16/2020
by   Chunyang Wu, et al.

Transformer-based acoustic modeling has achieved great success for both hybrid and sequence-to-sequence speech recognition. However, it requires access to the full sequence, and the computational cost grows quadratically with respect to the input sequence length. These factors limit its adoption for streaming applications. In this work, we propose a novel augmented-memory self-attention, which attends on a short segment of the input sequence and a bank of memories. The memory bank stores the embedding information for all the processed segments. On the LibriSpeech benchmark, our proposed method outperforms all existing streamable Transformer methods by a large margin and achieves over 15% relative error reduction compared with the widely used LC-BLSTM baseline. Our findings are also confirmed on some large internal datasets.
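The mechanism described above can be sketched as follows. This is a minimal, hypothetical single-head illustration, not the paper's implementation: it assumes random matrices in place of learned projections, and mean pooling as the segment summary written into the memory bank. For each segment, queries come only from the current segment, while keys and values cover the memory bank plus the current segment, so the per-segment cost stays bounded instead of growing quadratically with the full sequence.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def augmented_memory_attention(segments, d_model, rng):
    """Toy single-head self-attention over short segments plus a memory bank.

    segments: list of arrays of shape (segment_len, d_model)
    Returns the concatenated outputs and the final memory bank.
    """
    # Random projections stand in for learned query/key/value weights.
    Wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    memory_bank = np.zeros((0, d_model))  # one embedding per processed segment
    outputs = []
    for seg in segments:
        # Keys/values span the memory bank and the current short segment only.
        context = np.concatenate([memory_bank, seg], axis=0)
        q = seg @ Wq
        k = context @ Wk
        v = context @ Wv
        attn = softmax(q @ k.T / np.sqrt(d_model))
        outputs.append(attn @ v)
        # Summarize the processed segment (mean pooling here, as a stand-in)
        # and append it to the memory bank for later segments.
        summary = seg.mean(axis=0, keepdims=True)
        memory_bank = np.concatenate([memory_bank, summary], axis=0)
    return np.concatenate(outputs, axis=0), memory_bank
```

Because each step attends over `segment_len + num_processed_segments` positions rather than the whole input, the model can emit output segment by segment, which is what makes the approach streamable.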
