Varying k-Lipschitz Constraint for Generative Adversarial Networks

03/16/2018
by Kanglin Liu, et al.

Generative Adversarial Networks (GANs) are powerful generative models, but they suffer from training instability. The recently proposed Wasserstein GAN with gradient penalty (WGAN-GP) makes progress toward stable training. The gradient penalty serves to enforce a Lipschitz constraint on the discriminator. Further investigation shows that the gradient penalty may restrict the capacity of the discriminator. As a replacement, we introduce a varying k-Lipschitz constraint. The proposed varying k-Lipschitz constraint yields better image quality and significantly faster training when tested on GAN architectures. In addition, we introduce an effective convergence measure, which correlates well with image quality.
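For orientation, below is a minimal PyTorch sketch of the standard WGAN-GP gradient penalty (Gulrajani et al., 2017), which pushes the norm of the discriminator's gradient toward 1. The `target_k` argument illustrates one plausible way a varying k-Lipschitz target could be substituted for the fixed target of 1; this parameterization is an assumption for illustration only, as the abstract does not give the paper's exact formulation.

```python
import torch


def gradient_penalty(discriminator, real, fake, target_k=1.0):
    """Penalize deviation of the discriminator's gradient norm from target_k.

    target_k = 1.0 recovers the standard WGAN-GP penalty; a varying k
    (hypothetical here) would change this target, e.g. over training.
    """
    batch_size = real.size(0)
    # Random interpolation between real and fake samples (assumes image tensors
    # of shape [batch, channels, height, width]).
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    scores = discriminator(interpolated)
    gradients = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]

    grad_norm = gradients.view(batch_size, -1).norm(2, dim=1)
    # Penalize the squared deviation of the gradient norm from the target.
    return ((grad_norm - target_k) ** 2).mean()
```

In training, this term is typically scaled by a coefficient (e.g. 10) and added to the discriminator loss; the abstract's claim is that relaxing the fixed target of 1 avoids restricting discriminator capacity.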

