Investigating Deep Neural Transformations for Spectrogram-based Musical Source Separation

12/02/2019
by Woosung Choi, et al.

Musical Source Separation (MSS) is a signal processing task that aims to separate a mixed musical signal into its individual acoustic sources, such as the singing voice or drums. Although many machine learning-based methods have recently been proposed for the MSS task, no existing work evaluates and directly compares the various types of networks. In this paper, we design a variety of neural transformation methods, including time-invariant methods, time-frequency methods, and mixtures of the two transformations. Our experiments provide abundant material for future work by comparing several transformation methods. We train our models on raw complex-valued STFT outputs and achieve state-of-the-art SDR performance on the MUSDB18 singing voice separation task by a large margin of 1.0 dB.
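The abstract mentions training on raw complex-valued STFT outputs rather than magnitude spectrograms. A minimal sketch of what such an input looks like, using SciPy's STFT (this is an illustration under assumed parameters, not the authors' code; the window size and sample rate are placeholders):

```python
import numpy as np
from scipy.signal import stft, istft

sr = 44100                      # MUSDB18 tracks are sampled at 44.1 kHz
mixture = np.random.randn(sr)   # placeholder 1-second mono mixture

# Complex-valued STFT: shape (freq_bins, time_frames), complex dtype.
# A spectrogram-based separation model takes this (or its real/imag
# channels) as input instead of the magnitude alone.
_, _, spec = stft(mixture, fs=sr, nperseg=2048, noverlap=1536)

# A source estimate in the time domain is recovered via the inverse STFT.
_, rec = istft(spec, fs=sr, nperseg=2048, noverlap=1536)
```

Working on the complex STFT lets a model correct phase as well as magnitude, which magnitude-masking approaches cannot do.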
