Disentangled Speech Representation Learning for One-Shot Cross-lingual Voice Conversion Using β-VAE

10/25/2022
by Hui Lu, et al.

We propose an unsupervised learning method that disentangles speech into a content representation and a speaker identity representation. We apply this method to the challenging one-shot cross-lingual voice conversion task to demonstrate the effectiveness of the disentanglement. Inspired by β-VAE, we introduce a learning objective that balances the amount of information captured by the content and speaker representations. In addition, inductive biases from the architectural design and the training dataset further encourage the desired disentanglement. Both objective and subjective evaluations show the effectiveness of the proposed method in speech disentanglement and in one-shot cross-lingual voice conversion.
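As background for the β-VAE-inspired objective mentioned above: in a standard β-VAE, the loss is the reconstruction error plus a KL term weighted by a coefficient β, where β > 1 pressures the latent code toward the prior and limits the information it can carry. The sketch below shows this generic objective for a diagonal-Gaussian posterior against a standard-normal prior; it is an illustration of the β-VAE idea, not the paper's exact formulation, and the function names (`gaussian_kl`, `beta_vae_loss`) are our own.

```python
import math

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    # for a diagonal-Gaussian approximate posterior.
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))

def beta_vae_loss(recon_error, mu, logvar, beta=1.0):
    # beta > 1 increases the pressure on the latent code to match
    # the prior, trading reconstruction fidelity for a more
    # compressed (and often more disentangled) representation.
    # beta = 1 recovers the standard VAE evidence lower bound.
    return recon_error + beta * gaussian_kl(mu, logvar)
```

In the disentanglement setting described above, tuning such a weight controls how much information each branch (content vs. speaker) is allowed to encode.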
