Multi-Channel Auto-Encoder for Speech Emotion Recognition

10/25/2018
by Zefang Zong, et al.

Inferring emotional state from users' queries plays an important role in enhancing the capability of voice dialogue applications. Although several related works have obtained satisfactory results, performance can still be improved. In this paper, we propose a novel framework for emotion recognition from acoustic information, named the multi-channel auto-encoder (MTC-AE). MTC-AE contains multiple local DNNs based on different low-level descriptors with different statistics functions, which are partly concatenated together, enabling the structure to consider both local and global features simultaneously. Experiments on the benchmark IEMOCAP dataset show that our method significantly outperforms the existing state-of-the-art, achieving 64.8% leave-one-speaker-out unweighted accuracy, which is 2.4% higher than the best previously reported result on this dataset.
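The core idea of combining per-descriptor local networks with a global feature view can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the channel dimensions, layer sizes, embedding width, and the four emotion classes are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def dense(in_dim, out_dim):
    # He-style random initialization for one fully connected layer
    return rng.normal(0.0, np.sqrt(2.0 / in_dim), size=(in_dim, out_dim))

# Hypothetical setup: three low-level-descriptor (LLD) channels
# (e.g. MFCC, pitch, energy statistics) of different sizes, plus one
# global statistics vector computed over all descriptors.
channel_dims = [39, 10, 6]
global_dim = 20
n_classes = 4  # e.g. angry / happy / neutral / sad, as in IEMOCAP setups

# One local encoder per channel, plus a classifier over the fused vector
local_encoders = [dense(d, 16) for d in channel_dims]
classifier = dense(16 * len(channel_dims) + global_dim, n_classes)

def forward(channels, global_feats):
    # Encode each LLD channel with its own local network ...
    local_embeddings = [relu(x @ w) for x, w in zip(channels, local_encoders)]
    # ... then concatenate the local embeddings with the global features,
    # so the classifier sees local and global views simultaneously.
    fused = np.concatenate(local_embeddings + [global_feats])
    return fused @ classifier  # one logit per emotion class

channels = [rng.normal(size=d) for d in channel_dims]
global_feats = rng.normal(size=global_dim)
logits = forward(channels, global_feats)
print(logits.shape)  # → (4,)
```

The fusion step (concatenating partial local embeddings with global features before classification) is the structural point the abstract describes; the auto-encoder pre-training of each local DNN is omitted here for brevity.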
