Non-Contrastive Self-Supervised Learning of Utterance-Level Speech Representations

08/10/2022
by Jaejin Cho, et al.

Considering the abundance of unlabeled speech data and the high cost of labeling, unsupervised learning methods can be essential for better system development. Among the most successful are contrastive self-supervised methods, which require negative sampling: drawing alternative samples to contrast with the current sample (the anchor). Without labels, however, it is hard to ensure that all negative samples belong to classes different from the anchor's. This paper applies a non-contrastive self-supervised learning method to an unlabeled speech corpus to learn utterance-level embeddings. We used DIstillation with NO labels (DINO), proposed in computer vision, and adapted it to the speech domain. Unlike contrastive methods, DINO does not require negative sampling. These embeddings were evaluated on speaker verification and emotion recognition. In speaker verification, the unsupervised DINO embedding with cosine scoring provided 4.38% EER, outperforming the best contrastive self-supervised method by 40% relative. An iterative pseudo-labeling training pipeline, requiring no speaker labels, further improved the EER to 1.89%. In emotion recognition, the DINO embedding performed at 60.87, 79.21, and 56.98% on IEMOCAP, Crema-D, and MSP-Podcast, respectively. These results suggest that the DINO embedding generalizes across speech applications.
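
The abstract does not include implementation details, so the following is only a minimal sketch of how a DINO-style teacher-student objective can be applied to utterance-level speech features, followed by cosine scoring of the resulting embeddings. Everything here is an illustrative assumption (the toy UtteranceEncoder, projection sizes, temperatures, momentum values, and random "speech" inputs), not the authors' configuration; it only demonstrates the core idea that no negative samples are needed because collapse is avoided by centering and sharpening the teacher outputs and updating the teacher as an exponential moving average of the student.

```python
# Hypothetical sketch of a DINO-style non-contrastive objective for
# utterance-level speech embeddings (all sizes/hyperparameters are assumptions).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class UtteranceEncoder(nn.Module):
    """Toy encoder: mean-pools frame features into an utterance embedding,
    then projects it onto a set of prototypes for the DINO loss."""
    def __init__(self, feat_dim=80, embed_dim=256, num_prototypes=1024):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, embed_dim), nn.ReLU(),
                                      nn.Linear(embed_dim, embed_dim))
        self.head = nn.Linear(embed_dim, num_prototypes)

    def forward(self, feats):                      # feats: (batch, frames, feat_dim)
        emb = self.backbone(feats).mean(dim=1)     # utterance-level embedding
        return emb, self.head(emb)                 # embedding + prototype logits


def dino_loss(student_logits, teacher_logits, center,
              student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between centered, sharpened teacher targets and the student.
    No negative pairs are used; centering + sharpening prevent collapse."""
    targets = F.softmax((teacher_logits - center) / teacher_temp, dim=-1).detach()
    log_probs = F.log_softmax(student_logits / student_temp, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()


student = UtteranceEncoder()
teacher = copy.deepcopy(student)                   # teacher starts as a copy
for p in teacher.parameters():
    p.requires_grad_(False)                        # teacher is updated by EMA only

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
center = torch.zeros(1024)                         # running center of teacher outputs
momentum, center_momentum = 0.996, 0.9

for step in range(3):                              # a few steps on random dummy "speech"
    # Two augmented views (e.g. different segments of the same utterance).
    view1, view2 = torch.randn(8, 200, 80), torch.randn(8, 200, 80)

    _, s1 = student(view1)
    _, s2 = student(view2)
    with torch.no_grad():
        _, t1 = teacher(view1)
        _, t2 = teacher(view2)

    # Cross-view prediction: student on one view matches teacher on the other.
    loss = 0.5 * (dino_loss(s1, t2, center) + dino_loss(s2, t1, center))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # EMA update of the teacher weights and of the output center.
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1 - momentum)
        batch_center = torch.cat([t1, t2]).mean(dim=0)
        center = center * center_momentum + batch_center * (1 - center_momentum)

# At evaluation time, utterance embeddings feed cosine scoring for verification:
emb_a, _ = teacher(torch.randn(1, 200, 80))
emb_b, _ = teacher(torch.randn(1, 200, 80))
score = F.cosine_similarity(emb_a, emb_b).item()   # compare against a decision threshold
print(f"loss={loss.item():.3f}  cosine score={score:.3f}")
```

In this sketch the cosine score between two utterance embeddings would be thresholded to accept or reject a trial; the abstract's iterative pseudo-labeling stage, which clusters such embeddings to generate labels for further training, is not shown.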
