Autoencoders for music sound synthesis: a comparison of linear, shallow, deep and variational models
This study investigates the use of non-linear unsupervised dimensionality reduction techniques to compress a music dataset into a low-dimensional representation and to use that representation for the synthesis of new sounds. We systematically compare (shallow) autoencoders (AE) and deep autoencoders (DAE) with principal component analysis (PCA) for encoding the high-resolution short-term amplitude spectrum of a large and dense dataset of music notes into a lower-dimensional vector (which is then converted back into a synthetic amplitude spectrum used for sound resynthesis). In addition, we report results obtained with variational autoencoders (VAE), which, to our knowledge, have never been applied to the processing of musical sounds. Our experiments were conducted on the publicly available multi-instrument and multi-pitch database NSynth. Interestingly, and contrary to recent results in the image processing literature, they show that PCA systematically outperforms the shallow AE and that only a deep architecture (DAE) achieves a lower reconstruction error. Since the optimization criterion of a deep VAE is the sum of the reconstruction error and a regularization term, the VAE naturally reaches a lower reconstruction accuracy than the DAE; however, we show that VAEs still outperform PCA while providing a low-dimensional latent space with attractive "usability" properties.
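As a rough illustration of the models being compared (not the authors' actual implementation), the sketch below defines a shallow AE and a VAE in PyTorch operating on amplitude-spectrum frames of dimension N_FREQ compressed to D latent dimensions. The values N_FREQ = 513 and D = 16, the tanh activations, and the hidden size are placeholder assumptions, not configurations reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FREQ, D = 513, 16  # hypothetical spectrum size and latent dimension

class ShallowAE(nn.Module):
    """Single-hidden-layer AE: the non-linear counterpart of PCA."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(N_FREQ, D)
        self.dec = nn.Linear(D, N_FREQ)

    def forward(self, x):
        z = torch.tanh(self.enc(x))  # low-dimensional latent code
        return self.dec(z)           # synthetic amplitude spectrum

class VAE(nn.Module):
    """Encoder predicts the mean and log-variance of q(z|x); the decoder
    reconstructs the spectrum from a sampled latent vector."""
    def __init__(self, hidden=128):
        super().__init__()
        self.enc = nn.Linear(N_FREQ, hidden)
        self.mu = nn.Linear(hidden, D)
        self.logvar = nn.Linear(hidden, D)
        self.dec = nn.Sequential(nn.Linear(D, hidden), nn.Tanh(),
                                 nn.Linear(hidden, N_FREQ))

    def forward(self, x):
        h = torch.tanh(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    """Reconstruction error plus the KL regularizer KL(q(z|x) || N(0, I));
    this extra term is why the VAE's reconstruction accuracy is below
    that of a plain DAE trained on reconstruction error alone."""
    rec = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

For the linear baseline, sklearn.decomposition.PCA(n_components=D) fit on the same spectrum frames would provide the reference against which the AE variants are compared.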