Multimodal Emotion Recognition Using Multimodal Deep Learning

02/26/2016
by   Wei Liu, et al.

To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt a multimodal deep learning approach to construct affective models from multiple physiological signals. For the unimodal enhancement task, we show that the best recognition accuracy of 82.11% is achieved with shared representations generated by the Deep AutoEncoder (DAE) model. For the multimodal facilitation task, we demonstrate that the Bimodal Deep AutoEncoder (BDAE) achieves a mean accuracy of 91.01%, which is much superior to the state-of-the-art approaches. For the cross-modal learning task, our experimental results demonstrate that a mean accuracy of 66.34% is achieved using shared representations generated by the EEG-based DAE as training samples and shared representations generated by the eye-based DAE as testing samples, and vice versa.
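
The BDAE fuses features from two modalities (for example, EEG and eye movement signals) into a single shared representation, which is used both for reconstruction during unsupervised pre-training and as input to a downstream emotion classifier. The sketch below is a minimal, assumed illustration of this idea in PyTorch; the layer sizes, activations, and training loop are placeholders chosen for the example, not the authors' exact configuration.

import torch
import torch.nn as nn

class BDAE(nn.Module):
    """Bimodal deep autoencoder: two encoders -> shared code -> two decoders."""
    def __init__(self, eeg_dim=310, eye_dim=33, hidden_dim=100, shared_dim=50):
        super().__init__()
        # Modality-specific encoders (dimensions are illustrative placeholders)
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden_dim), nn.Sigmoid())
        self.eye_enc = nn.Sequential(nn.Linear(eye_dim, hidden_dim), nn.Sigmoid())
        # Fusion layer producing the shared representation
        self.shared = nn.Sequential(nn.Linear(2 * hidden_dim, shared_dim), nn.Sigmoid())
        # Decoders reconstruct each modality from the shared code
        self.eeg_dec = nn.Linear(shared_dim, eeg_dim)
        self.eye_dec = nn.Linear(shared_dim, eye_dim)

    def encode(self, eeg, eye):
        h = torch.cat([self.eeg_enc(eeg), self.eye_enc(eye)], dim=1)
        return self.shared(h)

    def forward(self, eeg, eye):
        z = self.encode(eeg, eye)
        return self.eeg_dec(z), self.eye_dec(z)

# Unsupervised pre-training on reconstruction loss; for emotion recognition,
# the shared code z (not the reconstructions) would feed a separate classifier
# such as a linear SVM.
model = BDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
eeg_batch, eye_batch = torch.randn(32, 310), torch.randn(32, 33)  # dummy data
eeg_hat, eye_hat = model(eeg_batch, eye_batch)
loss = nn.functional.mse_loss(eeg_hat, eeg_batch) + nn.functional.mse_loss(eye_hat, eye_batch)
opt.zero_grad()
loss.backward()
opt.step()

In the cross-modal setting described above, a classifier would, for instance, be trained on shared codes computed from the EEG-based DAE and evaluated on shared codes computed from the eye-based DAE, or vice versa.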
