Recognizing long-form speech using streaming end-to-end models

10/24/2019
by Arun Narayanan, et al.

All-neural end-to-end (E2E) automatic speech recognition (ASR) systems that use a single neural network to transduce audio to word sequences have been shown to achieve state-of-the-art results on several tasks. In this work, we examine the ability of E2E models to generalize to unseen domains, where we find that models trained on short utterances fail to generalize to long-form speech. We propose two complementary solutions to address this: training on diverse acoustic data, and LSTM state manipulation to simulate long-form audio when training using short utterances. On a synthesized long-form test set, adding data diversity improves word error rate (WER) by 90% relative; simulating long-form training improves it by 67% relative; the combination doesn't improve over data diversity alone. On a real long-form call-center test set, adding data diversity improves WER by 40% relative. Simulating long-form training on top of data diversity improves performance by an additional 27% relative.
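The LSTM state-manipulation idea described above can be pictured as carrying recurrent state across consecutive short training segments instead of resetting it, so the network is exposed to long-form statistics even though each training example is short. Below is a minimal sketch of that mechanism, with a toy scalar recurrence standing in for the LSTM; all function names and parameters here are illustrative assumptions, not code from the paper.

```python
# Hedged sketch: carrying recurrent state across short segments to
# simulate long-form audio. A toy exponential-moving-average cell
# stands in for the LSTM used in the streaming E2E ASR model.

def toy_rnn_step(state, x, alpha=0.9):
    """Stand-in for one LSTM step; the state is a single float."""
    return alpha * state + (1 - alpha) * x

def run_segment(xs, init_state=0.0):
    """Run the toy cell over one segment; return outputs and final state."""
    state = init_state
    outputs = []
    for x in xs:
        state = toy_rnn_step(state, x)
        outputs.append(state)
    return outputs, state

def train_pass(segments, carry_state=True):
    """Process consecutive short segments.

    carry_state=False: conventional training -- state resets per segment.
    carry_state=True : simulated long-form training -- the final state of
                       segment k initializes segment k+1.
    """
    state = 0.0
    all_outputs = []
    for seg in segments:
        outputs, state = run_segment(seg, init_state=state if carry_state else 0.0)
        all_outputs.extend(outputs)
    return all_outputs

# A "long" utterance chopped into three short segments.
long_utterance = [1.0] * 30
segments = [long_utterance[i:i + 10] for i in range(0, 30, 10)]

# With state carryover, chunked processing reproduces a single pass over
# the full utterance; with resets, each segment restarts from scratch.
carried = train_pass(segments, carry_state=True)
full, _ = run_segment(long_utterance)
assert carried == full
```

With `carry_state=True` the segment boundaries become invisible to the recurrence, which is the sense in which short utterances can stand in for long-form audio during training; with `carry_state=False` every segment looks like the start of a fresh, short utterance.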
