Classification Representations Can be Reused for Downstream Generations

Contrary to the convention of using supervision only for class-conditioned generative modeling, this work explores and demonstrates the feasibility of reusing a supervised representation space, learned by a discriminative classifier, for the downstream task of sample generation. Unlike generative modeling approaches that aim to model the manifold distribution, we directly represent the given data manifold in the classification space and leverage properties of the latent representations to generate new representations that are guaranteed to belong to the same class. Interestingly, such representations allow for controlled sample generation for any given class from existing samples and do not require enforcing a prior distribution. We show that these latent representations can be manipulated (using convex combinations of n samples, n ≥ 2) to yield meaningful new samples. Experiments on image datasets of varying resolutions demonstrate that the downstream generations achieve higher classification accuracy than those of existing conditional generative models while remaining competitive in terms of FID.
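A minimal sketch of the convex-combination step described above, in PyTorch. The `encoder` (classifier backbone producing latent codes) and `decoder` (a generator mapping codes back to samples) are hypothetical placeholders not specified by the abstract; only the convex mixing of n ≥ 2 same-class codes is the technique being illustrated.

```python
import torch

def convex_combination(latents: torch.Tensor) -> torch.Tensor:
    """Mix n same-class latent codes (n >= 2) with random convex weights.

    latents: tensor of shape (n, d) -- classification-space representations
    of n samples sharing a class label.
    Returns a single (d,) code lying inside their convex hull, so it stays
    on the same side of the classifier's decision boundaries.
    """
    n = latents.shape[0]
    # Dirichlet(1, ..., 1) samples uniformly on the simplex:
    # w_i >= 0 and sum_i w_i = 1, i.e. a valid convex combination.
    weights = torch.distributions.Dirichlet(torch.ones(n)).sample()
    return weights @ latents  # (n,) @ (n, d) -> (d,)

# Usage sketch (encoder/decoder are assumed, not from the paper):
# z = convex_combination(encoder(x_same_class))  # new in-class code
# x_new = decoder(z.unsqueeze(0))                # generated sample
```

Because a convex combination of points stays within their convex hull, any region of the latent space the classifier assigns to one class also contains the mixed code, which is what permits class-preserving generation without fitting a prior distribution.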
