ResNeXt and Res2Net Structure for Speaker Verification
ResNet-based architectures have been widely adopted as speaker embedding extractors in speaker verification systems. Their standard topology and modularized design ease the human effort of hyperparameter tuning. Therefore, width and depth are left as the two major dimensions for further improving ResNet's representation power. However, simply increasing width or depth is not efficient. In this paper, we investigate the effectiveness of two new structures, i.e., ResNeXt and Res2Net, for the speaker verification task. They introduce two additional effective dimensions to improve the model's representation capacity, called cardinality and scale, respectively. Experimental results on VoxCeleb data demonstrated that increasing these two dimensions is more efficient than going deeper or wider. Experiments on two internal test sets with mismatched acoustic conditions also confirmed the generalization ability of the ResNeXt and Res2Net architectures. In particular, with the Res2Net structure, our best model achieved state-of-the-art performance on the VoxCeleb1 test set by reducing the EER by 18.5% relative. Moreover, performance on short utterances has been largely improved as a result of the Res2Net module's multi-scale feature representation ability.
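The scale dimension introduced by Res2Net refers to splitting the channels of a residual block into several groups and processing them hierarchically, so that later groups see an increasingly large receptive field. A minimal sketch of this split-and-hierarchical-add pattern is shown below, using NumPy and an identity placeholder in place of the per-group 3x3 convolutions; the function name and shapes are illustrative, not the authors' code:

```python
import numpy as np

def res2net_split(x, scale=4, transform=None):
    """Hierarchical multi-scale processing inside a Res2Net block (sketch).

    x: feature map of shape (channels, height, width); channels must be
    divisible by `scale`. `transform` stands in for the per-group 3x3
    convolution of the real module (identity placeholder here).
    """
    if transform is None:
        transform = lambda t: t  # placeholder for the learned conv
    groups = np.split(x, scale, axis=0)  # x_1 ... x_s along channels
    outputs = [groups[0]]                # y_1 = x_1: first group passes through
    prev = None
    for i in range(1, scale):
        # y_2 = K_2(x_2); y_i = K_i(x_i + y_{i-1}) for i > 2
        inp = groups[i] if prev is None else groups[i] + prev
        prev = transform(inp)
        outputs.append(prev)
    return np.concatenate(outputs, axis=0)  # same shape as the input

x = np.random.randn(64, 8, 8)
y = res2net_split(x, scale=4)
print(y.shape)  # (64, 8, 8)
```

With the identity placeholder, the first two groups pass through unchanged while later groups accumulate the outputs of earlier ones, which is the mechanism behind the multi-scale receptive fields the abstract credits for the short-utterance gains.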