Closing the Training/Inference Gap for Deep Attractor Networks

11/05/2019
by Cyril Cadoux, et al.

This paper improves the deep attractor network (DANet) approach by closing the gap between its training and inference procedures. During training, DANet relies on attractors that are computed from the ground-truth separations. Since this information is not available at inference time, the attractors have to be estimated, typically by k-means clustering. This results in two mismatches. The first stems from using classical k-means with the Euclidean norm, whereas during training the masks are computed with the dot-product similarity; we show that using spherical k-means instead already improves the performance of DANet. The second is that training never uses estimated attractors at all; we show that k-means clustering can be fully incorporated into DANet training. This removes the training/inference gap and yields a scale-invariant signal-to-distortion ratio (SI-SDR) improvement of 1.1 dB on the Wall Street Journal corpus (WSJ0).
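To illustrate the first mismatch, the sketch below (NumPy, with hypothetical variable names; not the authors' code) shows a spherical k-means, in which embeddings and centroids are L2-normalized and points are assigned by dot-product (cosine) similarity, i.e. the same similarity DANet uses to compute masks, rather than by the Euclidean distance of classical k-means.

    import numpy as np

    def spherical_kmeans(embeddings, n_clusters=2, n_iters=50, seed=0):
        """Minimal spherical k-means sketch.

        embeddings: (n_points, dim) array, e.g. DANet T-F embeddings.
        Assignments use dot-product similarity on the unit sphere,
        matching the similarity used to compute masks during training.
        """
        rng = np.random.default_rng(seed)
        # Work on the unit sphere: normalize every embedding to length 1.
        X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        # Initialize centroids from randomly chosen points.
        centroids = X[rng.choice(len(X), n_clusters, replace=False)]
        for _ in range(n_iters):
            # Assignment step: highest dot-product similarity instead of
            # the smallest Euclidean distance of classical k-means.
            labels = np.argmax(X @ centroids.T, axis=1)
            # Update step: mean direction of each cluster, re-normalized
            # back onto the unit sphere.
            for k in range(n_clusters):
                members = X[labels == k]
                if len(members):
                    c = members.mean(axis=0)
                    centroids[k] = c / np.linalg.norm(c)
        return centroids, labels

    # Hypothetical usage: cluster embedding vectors into one attractor
    # per speaker, then score T-F bins by dot-product similarity.
    emb = np.random.default_rng(1).normal(size=(1000, 20))
    attractors, labels = spherical_kmeans(emb, n_clusters=2)
    mask_logits = emb @ attractors.T  # a softmax over speakers would follow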
