Rényi Generative Adversarial Networks

06/03/2020
by Himesh Bhatia, et al.

We propose a loss function for generative adversarial networks (GANs) based on Rényi information measures with parameter α. More specifically, we formulate the GAN's generator loss function in terms of Rényi cross-entropy functionals. We demonstrate that, for any α, this generalized loss function preserves the equilibrium point of the original GAN loss, with the analysis carried out via the Jensen-Rényi divergence, a natural extension of the Jensen-Shannon divergence. We also prove that the Rényi-centric loss function reduces to the original GAN loss function as α → 1. We show empirically that the proposed loss function, when implemented on both DCGAN (with L_1 normalization) and StyleGAN architectures, confers performance benefits thanks to the extra degree of freedom provided by the parameter α. More specifically, we show improvements in: (a) the quality of the generated images, as measured via the Fréchet Inception Distance (FID) score (e.g., a best FID of 8.33 for RényiStyleGAN vs. 9.7 for StyleGAN when evaluated on 64×64 CelebA images), and (b) training stability. Although applied to GANs in this study, the proposed approach is generic and can be used in other applications of information theory to deep learning, e.g., AI bias or privacy.
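For intuition, here is a minimal PyTorch sketch of what such a Rényi generator loss could look like. It assumes the formulation in which each expected-log term of the standard (saturating) generator loss is replaced by its Rényi counterpart (1/(α−1)) log E[(·)^(α−1)], which recovers the expected log as α → 1; the function names and the eps clamp are illustrative choices, not taken from the paper.

```python
import torch

def renyi_log_term(p: torch.Tensor, alpha: float, eps: float = 1e-7) -> torch.Tensor:
    """Rényi analogue of E[log p]:  (1/(alpha - 1)) * log E[p^(alpha - 1)].

    By L'Hopital's rule this converges to E[log p] as alpha -> 1,
    mirroring the abstract's claim that the Rényi-centric loss reduces
    to the original GAN loss in that limit.
    """
    p = p.clamp(min=eps, max=1.0)            # avoid log(0) and 0**(negative)
    if abs(alpha - 1.0) < 1e-6:              # alpha ~ 1: Shannon (classical) limit
        return p.log().mean()
    return torch.log((p ** (alpha - 1.0)).mean()) / (alpha - 1.0)

def renyi_generator_loss(d_fake: torch.Tensor, alpha: float) -> torch.Tensor:
    """Saturating-style generator loss: minimize the Rényi analogue of
    E_z[log(1 - D(G(z)))], where d_fake = D(G(z)) lies in (0, 1)."""
    return renyi_log_term(1.0 - d_fake, alpha)

# Illustrative usage: in a real training loop, d_fake would be the
# discriminator's output on a batch of generated images.
d_fake = torch.rand(64).clamp(0.01, 0.99)
loss = renyi_generator_loss(d_fake, alpha=3.0)  # alpha is the tunable degree of freedom
```

Note that, consistent with the abstract, only the generator's objective is generalized here; the discriminator can keep its usual loss, and α becomes a hyperparameter to tune for image quality and training stability.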
