Selective Sampling and Mixture Models in Generative Adversarial Networks

02/02/2018
by Karim Said Barsim, et al.

In this paper, we propose a multi-generator extension to the adversarial training framework, in which the objective of each generator is to represent a unique component of a target mixture distribution. In the training phase, the generators cooperate to represent the target distribution as a mixture while maintaining distinct manifolds. In contrast to traditional generative models, inference from a particular generator after training amounts to selective sampling from a single component of the target distribution. We demonstrate the feasibility of the proposed architecture both analytically and with basic Multi-Layer Perceptron (MLP) models trained on the MNIST dataset.
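To make the setup concrete, the following is a minimal PyTorch sketch of the kind of multi-generator adversarial loop the abstract describes. It is an illustration under assumptions, not the authors' implementation: the number of generators K, the uniform mixing weights, and the network sizes are placeholders, and the sketch omits whatever objective the paper uses to keep the generators' manifolds distinct.

```python
# A minimal sketch (assumptions noted below) of a multi-generator GAN:
# K generators jointly model a mixture, one discriminator judges samples
# drawn from a randomly chosen generator. Not the paper's exact method.
import torch
import torch.nn as nn

K, Z_DIM, X_DIM = 4, 64, 784  # assumed: number of generators, latent size, MNIST pixels

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

generators = nn.ModuleList(mlp(Z_DIM, X_DIM) for _ in range(K))
discriminator = nn.Sequential(mlp(X_DIM, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generators.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    # Sampling a generator uniformly makes the implicit model the mixture
    # (1/K) * sum_k p_{G_k}(x); the paper may weight components differently.
    k = torch.randint(K, (1,)).item()
    fake = generators[k](torch.randn(b, Z_DIM))

    # Discriminator update: real data vs. samples from the mixture.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real_batch), torch.ones(b, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: the chosen component tries to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()

def selective_sample(k, n=16):
    # Inference from one fixed generator corresponds to selective sampling
    # from a single component of the learned mixture.
    with torch.no_grad():
        return generators[k](torch.randn(n, Z_DIM))
```

Note that without an additional separation term, nothing in this sketch prevents the generators from collapsing onto overlapping manifolds; enforcing distinct components is precisely the part the paper addresses.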
