Mixture-of-Expert Conformer for Streaming Multilingual ASR

05/25/2023
by Ke Hu, et al.

End-to-end models with large capacity have significantly improved multilingual automatic speech recognition, but their computation cost poses challenges for on-device applications. We propose a streaming, truly multilingual Conformer incorporating mixture-of-expert (MoE) layers that learn to activate only a subset of parameters in training and inference. The MoE layer consists of a softmax gate which chooses the best two experts among many in forward propagation. The proposed MoE layer offers efficient inference by activating a fixed number of parameters as the number of experts increases. We evaluate the proposed model on a set of 12 languages and achieve an average 11.9% relative improvement in WER over the baseline. Compared to an adapter model using ground truth information, our MoE model achieves similar WER and activates a similar number of parameters, but without any language information. We further show around 3% ...
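To illustrate the gating mechanism described above, the following is a minimal NumPy sketch of a top-2 softmax-gated MoE feed-forward layer: a softmax gate scores all experts and only the two highest-scoring experts are evaluated per frame, so the number of activated parameters stays fixed as experts are added. All names and dimensions (MoELayer, d_model, d_ff, num_experts) are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


class MoELayer:
    """Top-2 softmax-gated mixture-of-expert feed-forward layer (sketch)."""

    def __init__(self, d_model=256, d_ff=1024, num_experts=8, seed=0):
        rng = np.random.default_rng(seed)
        # Gating network: a single linear projection to per-expert logits.
        self.w_gate = rng.normal(scale=0.02, size=(d_model, num_experts))
        # Each expert is a small two-layer feed-forward network.
        self.w1 = rng.normal(scale=0.02, size=(num_experts, d_model, d_ff))
        self.w2 = rng.normal(scale=0.02, size=(num_experts, d_ff, d_model))

    def __call__(self, x):
        """x: (num_frames, d_model) -> (num_frames, d_model)."""
        gate_probs = softmax(x @ self.w_gate)            # (frames, experts)
        # Indices of the two highest-scoring experts for each frame.
        top2 = np.argsort(gate_probs, axis=-1)[:, -2:]   # (frames, 2)
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            # Renormalize the two selected gate values so they sum to 1.
            probs = gate_probs[t, top2[t]]
            probs = probs / probs.sum()
            # Only the two chosen experts are computed for this frame.
            for p, e in zip(probs, top2[t]):
                h = np.maximum(x[t] @ self.w1[e], 0.0)   # ReLU expert FFN
                out[t] += p * (h @ self.w2[e])
        return out


# Usage: 10 frames of 256-dimensional encoder features.
layer = MoELayer()
y = layer(np.random.randn(10, 256))
print(y.shape)  # (10, 256)
```

Note that regardless of how many experts the layer holds, each frame touches only two expert feed-forward networks plus the gate, which is the property the abstract highlights for efficient on-device inference.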

