Doubly Stochastic Adversarial Autoencoder

07/19/2018
by Mahdi Azarafrooz, et al.

Any autoencoder network can be turned into a generative model by imposing an arbitrary prior distribution on its hidden code vector. The Variational Autoencoder (VAE) [2] imposes the prior with a KL-divergence penalty, whereas the Adversarial Autoencoder (AAE) [1] does so with a generative adversarial network (GAN) [3]. The GAN trades the complexity of sampling algorithms for the complexity of searching for a Nash equilibrium in a minimax game; such minimax architectures are trained with data examples and gradients flowing through a generator and an adversary. A straightforward modification of the AAE is to replace the adversary with the maximum mean discrepancy (MMD) test [4-5]. This replacement yields a new type of probabilistic autoencoder, which is also discussed in our paper. We propose a novel probabilistic autoencoder in which the adversary of the AAE is replaced with a space of stochastic functions. This replacement introduces a new source of randomness, which can be viewed as a continuous control for encouraging exploration: it prevents the adversary from fitting too closely to the generator and therefore leads to a more diverse set of generated samples.
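To make the two replacements concrete, below is a minimal PyTorch sketch (an illustration written for this summary, not code from the paper). `rbf_mmd2` is the kind of MMD penalty that can stand in for the AAE's adversary, and `doubly_stochastic_mmd2` shows one plausible way a "space of stochastic functions" could be realized: random Fourier features of the kernel that are resampled at every training step, so the penalty is stochastic over both the minibatch and the kernel's feature space. The Gaussian kernel, feature map, and all names here are assumptions; the paper's actual construction may differ.

```python
import math
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y (shape
    [n, d] and [m, d]) under a Gaussian (RBF) kernel -- a drop-in
    replacement for the adversary's discrimination loss."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def doubly_stochastic_mmd2(x, y, num_features=128, sigma=1.0):
    """Squared MMD approximated with random Fourier features of the RBF
    kernel. The features (w, b) are redrawn on every call, so the
    estimate is random over both the minibatch and the feature space --
    the extra source of randomness described in the abstract (assumed
    instantiation)."""
    d = x.size(1)
    w = torch.randn(d, num_features) / sigma    # random frequencies ~ N(0, I/sigma^2)
    b = 2 * math.pi * torch.rand(num_features)  # random phases ~ U[0, 2*pi]
    def phi(z):
        return math.sqrt(2.0 / num_features) * torch.cos(z @ w + b)
    # MMD^2 as the squared distance between mean feature embeddings.
    return (phi(x).mean(0) - phi(y).mean(0)).pow(2).sum()

# Hypothetical training step, assuming encoder/decoder nets enc and dec:
#   z_q = enc(x)                      # codes from the data
#   z_p = torch.randn_like(z_q)       # samples from the imposed prior
#   loss = ((dec(z_q) - x) ** 2).mean() + lam * doubly_stochastic_mmd2(z_q, z_p)
```

Because the feature draw changes at each step, the penalty never settles into one fixed critic of the generator, which is one way to read the abstract's claim that the added randomness discourages the adversary from overfitting.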
