End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoders

05/04/2023
by Jixuan Wang, et al.

It is challenging to extract semantic meaning directly from audio signals in spoken language understanding (SLU), due to the lack of textual information. Popular end-to-end (E2E) SLU models use sequence-to-sequence automatic speech recognition (ASR) models to extract textual embeddings as input for inferring semantics, but these require computationally expensive auto-regressive decoding. In this work, we leverage self-supervised acoustic encoders fine-tuned with Connectionist Temporal Classification (CTC) to extract textual embeddings, and we train with joint CTC and SLU losses for utterance-level SLU tasks. Experiments show that our model achieves 4% absolute improvement over the previous state-of-the-art (SOTA) dialogue act classification model on the DSTC2 dataset and 1.3% absolute improvement over the SOTA SLU model on the SLURP dataset.
