Framewise approach in multimodal emotion recognition in OMG challenge

05/03/2018
by Grigoriy Sterling, et al.

In this report we describe our approach, which achieves 53% unweighted accuracy over 7 emotions and mean squared errors of 0.05 for arousal and 0.09 for valence in the OMG emotion recognition challenge. The results were obtained with an ensemble of single-modality models trained separately on the voice and face data from each video. We treat each stream as a sequence of frames: features are estimated per frame and then processed by a recurrent neural network. An audio frame here means a short 0.4-second spectrogram interval. For feature extraction from face images we used our own ResNet pretrained on the AffectNet database. Each short spectrogram was likewise treated as an image and processed by a convolutional network; as the base audio model we used a ResNet pretrained on a speaker recognition task. Predictions from the two modalities were fused at the decision level, improving on the single-channel approaches by a few percent.
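The decision-level fusion step can be sketched as below. This is a minimal illustration, not the authors' implementation: the fusion weight and the per-class probability vectors are hypothetical placeholders for the outputs of the face and audio models.

```python
import numpy as np

def fuse_predictions(face_probs, audio_probs, w_face=0.5):
    """Decision-level fusion: weighted average of the per-class
    probabilities produced by the face and audio models, followed
    by an argmax over the 7 emotion classes."""
    fused = w_face * face_probs + (1.0 - w_face) * audio_probs
    return int(np.argmax(fused))

# Hypothetical softmax outputs over 7 emotion classes.
face_probs = np.array([0.10, 0.50, 0.10, 0.10, 0.10, 0.05, 0.05])
audio_probs = np.array([0.20, 0.30, 0.30, 0.05, 0.05, 0.05, 0.05])
print(fuse_predictions(face_probs, audio_probs))  # -> 1
```

Averaging probabilities rather than hard labels lets a confident modality outvote an uncertain one, which is one common reason decision-level fusion gains a few percent over either channel alone.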
