Learning and Inference in Imaginary Noise Models
Inspired by recent developments in learning smoothed densities with empirical Bayes, we study variational autoencoders with a decoder that is tailored to the random variable Y = X + N(0, σ^2 I_d). A notion of smoothed variational inference emerges, where the smoothing is implicitly enforced by the noise model of the decoder; "implicit" because during training the encoder only sees clean samples. This is the concept of the imaginary noise model, where the noise model dictates the functional form of the variational lower bound L(σ), but the noisy data are never seen during training. The model is named σ-VAE. We prove that all σ-VAEs are equivalent to each other via a simple β-VAE expansion: L(σ_2) ≡ L(σ_1, β), where β = σ_2^2/σ_1^2. We prove a similar result for the Laplace distribution in the exponential family. Empirically, we report an intriguing power law D_KL ∝ 1/σ for the trained models, and we study inference in the σ-VAE for unseen noisy data. The experiments are performed on MNIST, where we show that, quite remarkably, the model can make reasonable inferences on extremely noisy samples even though it has not seen any during training. The vanilla VAE breaks down completely in this regime. We finish with a hypothesis (the XYZ hypothesis) on the findings reported here.
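The β-VAE equivalence stated above can be made concrete with a short numerical sketch. The snippet below assumes a Gaussian decoder N(x̂(z), σ^2 I_d), a standard-normal prior, and a diagonal-Gaussian posterior; the function names and the check are illustrative and not taken from the paper. It verifies that the negative bound at noise level σ_2 matches the β-weighted bound at σ_1, up to a positive rescaling and an additive constant, with β = σ_2^2/σ_1^2.

```python
import numpy as np

def sigma_vae_neg_elbo(x, x_hat, mu, logvar, sigma):
    """Per-sample negative bound -L(sigma) for a decoder N(x_hat, sigma^2 I_d).

    mu, logvar parameterize the diagonal Gaussian posterior q(z|x); the prior is N(0, I).
    (Illustrative sketch; not the paper's implementation.)
    """
    d = x.shape[-1]
    # Negative Gaussian log-likelihood of x under N(x_hat, sigma^2 I_d)
    recon = np.sum((x - x_hat) ** 2) / (2 * sigma ** 2) + 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    # Closed-form KL(q(z|x) || N(0, I))
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon + kl

def beta_vae_neg_elbo(x, x_hat, mu, logvar, sigma, beta):
    """beta-VAE objective -L(sigma, beta) with the same Gaussian decoder at noise level sigma."""
    d = x.shape[-1]
    recon = np.sum((x - x_hat) ** 2) / (2 * sigma ** 2) + 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon + beta * kl

# Numerical check: L(sigma_2) == (1/beta) * L(sigma_1, beta) + const, with beta = sigma_2^2 / sigma_1^2
rng = np.random.default_rng(0)
x, x_hat = rng.normal(size=8), rng.normal(size=8)
mu, logvar = rng.normal(size=4), rng.normal(size=4)
s1, s2 = 0.3, 0.9
beta = s2 ** 2 / s1 ** 2
lhs = sigma_vae_neg_elbo(x, x_hat, mu, logvar, s2)
rhs = beta_vae_neg_elbo(x, x_hat, mu, logvar, s1, beta) / beta
d = x.shape[-1]
# The two objectives differ only by the sigma-dependent normalization constants
const = 0.5 * d * (np.log(2 * np.pi * s2 ** 2) - np.log(2 * np.pi * s1 ** 2) / beta)
print(np.isclose(lhs, rhs + const))  # True
```

Because a positive rescaling and an additive constant do not change the optimizer of the objective, training a σ-VAE at noise level σ_2 is equivalent to training a β-VAE at noise level σ_1 with β = σ_2^2/σ_1^2, which is the sense of the equivalence L(σ_2) ≡ L(σ_1, β) in the abstract.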