MLP-ASR: Sequence-length agnostic all-MLP architectures for speech recognition

02/17/2022
by   Jin Sakuma, et al.

We propose multi-layer perceptron (MLP)-based architectures suitable for variable-length input. MLP-based architectures, recently proposed for image classification, can only be used with inputs of a fixed, pre-defined size. However, many types of data, such as acoustic signals, are naturally variable in length. We propose three approaches to extend MLP-based architectures to sequences of arbitrary length: the first uses a circular convolution applied in the Fourier domain, the second applies a depthwise convolution, and the third relies on a shift operation. We evaluate the proposed architectures on an automatic speech recognition task with the Librispeech and Tedlium2 corpora. The best of the proposed MLP-based architectures improves WER by 1.0/0.9 on the Librispeech test-clean/test-other sets and by 0.8/1.1 on Tedlium2, while using a smaller model than the self-attention-based architecture.
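The first approach above rests on the circular convolution theorem: a circular convolution along the time axis can be computed as an elementwise product in the Fourier domain, and because the kernel can be padded to whatever the current sequence length happens to be, the same learned weights apply to inputs of any length. The following is a minimal NumPy sketch of that idea (the function name `circular_conv_fft` and the kernel-padding convention are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def circular_conv_fft(x, w):
    """Circular convolution of x along the time axis, via the Fourier domain.

    x: (T, D) feature sequence of arbitrary length T.
    w: (K,) learned 1-D kernel, shared across all D channels here for brevity.

    The kernel is zero-padded (or truncated) to length T, so the same
    weights serve any sequence length -- this is what makes the token
    mixing sequence-length agnostic.
    """
    T = x.shape[0]
    k = np.zeros(T)
    n = min(T, len(w))
    k[:n] = w[:n]
    # Convolution theorem: circular_conv(x, k) = ifft(fft(x) * fft(k)).
    return np.real(np.fft.ifft(np.fft.fft(x, axis=0) * np.fft.fft(k)[:, None], axis=0))
```

Since the FFT is taken at the input's own length, no part of the operation fixes a maximum sequence length in advance, unlike a dense token-mixing MLP whose weight matrix is tied to one specific T.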
