The XMUSPEECH System for Multi-channel Multi-party Meeting Transcription Challenge

02/11/2022
by Jie Wang, et al.

This paper describes the system developed by the XMUSPEECH team for the Multi-channel Multi-party Meeting Transcription Challenge (M2MeT). For the speaker diarization task, we propose a multi-channel speaker diarization system that obtains spatial information about the speaker through Direction of Arrival (DOA) technology. A speaker-spatial embedding is generated from the x-vector and the s-vector derived from Filter-and-Sum Beamforming (FSB), which makes the embedding more robust. Specifically, we propose a novel multi-channel sequence-to-sequence neural network architecture named the Discriminative Multi-stream Neural Network (DMSNet), which consists of an Attention Filter-and-Sum block (AFSB) and a Conformer encoder. We explore DMSNet to address the overlapped speech problem on multi-channel audio. Compared with an LSTM-based overlapped speech detection (OSD) module, we achieve a decrease of 10.1%. With the DMSNet-based OSD module, the 13.44% diarization error rate (DER) of the cluster-based diarization system decreases significantly on the evaluation and test sets.
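To make the filter-and-sum front end concrete, the following is a minimal NumPy sketch of filter-and-sum beamforming: each microphone channel is passed through its own FIR filter and the filtered channels are summed into a single signal. The function name, array shapes, and toy filters are illustrative assumptions for this sketch, not the paper's implementation or its learned AFSB module.

```python
import numpy as np

def filter_and_sum_beamform(channels, filters):
    """Filter each microphone channel with its own FIR filter, then sum.

    channels : (C, T) array, one row per microphone signal.
    filters  : (C, L) array of per-channel FIR taps (e.g. delay filters
               derived from a DOA estimate); illustrative only.
    Returns a single (T,) beamformed signal.
    """
    out = np.zeros(channels.shape[1])
    for ch, taps in zip(channels, filters):
        # mode="same" keeps the filtered channel aligned with length T.
        out += np.convolve(ch, taps, mode="same")
    return out

# Toy usage: 8 channels of noise; centered unit-impulse "filters" make
# the beamformer reduce to a plain across-channel sum.
x = np.random.randn(8, 16000)
h = np.zeros((8, 17))
h[:, 8] = 1.0
y = filter_and_sum_beamform(x, h)
```

In the system described above, such a beamformed stream would feed the s-vector extractor, whose output is combined with the x-vector to form the speaker-spatial embedding.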
