Sparsity in Variational Autoencoders
When working in high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders becomes, unexpectedly, sparse. We highlight and investigate this phenomenon, which seems to suggest that, at least for a given architecture, there exists an intrinsic internal dimension of the data. This observation can be used in two ways: to check whether the network has sufficient internal capacity, augmenting it until sparsity appears, or conversely to reduce the size of the network by removing links to zeroed-out neurons. Sparsity also explains the reduced variability one may sometimes observe when randomly sampling the latent space of a variational autoencoder for generation.
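As a minimal sketch of how such sparsity can be measured in practice (not the paper's own procedure), one can average the per-dimension KL divergence between the encoder's posterior and the standard normal prior over a dataset: dimensions whose KL is nearly zero have collapsed onto the prior and carry no information. The function name `latent_sparsity`, the `kl_threshold` value, and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def latent_sparsity(mu, logvar, kl_threshold=0.01):
    """Estimate how many latent dimensions a VAE actually uses.

    mu, logvar: arrays of shape (num_samples, latent_dim) holding the
    encoder's posterior means and log-variances over a dataset.
    A dimension whose average KL to the N(0, 1) prior is (nearly) zero
    has collapsed onto the prior and encodes nothing.
    """
    # Per-sample, per-dimension KL( N(mu, sigma^2) || N(0, 1) )
    kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)
    kl_per_dim = kl.mean(axis=0)           # average over the dataset
    active = kl_per_dim > kl_threshold     # dimensions still in use
    return int(active.sum()), kl_per_dim

# Hypothetical usage: mu/logvar would come from a trained encoder;
# here they are simulated, with half the dimensions collapsed.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent_dim, n = 64, 10_000
    mu = rng.normal(size=(n, latent_dim))
    logvar = rng.normal(scale=0.1, size=(n, latent_dim))
    mu[:, latent_dim // 2:] = 0.0      # collapsed dims match the prior
    logvar[:, latent_dim // 2:] = 0.0
    n_active, _ = latent_sparsity(mu, logvar)
    print(f"active latent dimensions: {n_active} / {latent_dim}")
```

Counting active dimensions this way gives a concrete handle on the "intrinsic internal dimension" mentioned above and on which neurons could be pruned.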