Towards Addressing GAN Training Instabilities: Dual-objective GANs with Tunable Parameters

02/28/2023
by Monica Welfert et al.

In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using α-loss, a tunable classification loss, to obtain (α_D, α_G)-GANs, parameterized by (α_D, α_G) ∈ (0, ∞]^2. For a sufficiently large number of samples and sufficiently large capacities for G and D, we show that the resulting non-zero-sum game simplifies to minimizing an f-divergence under appropriate conditions on (α_D, α_G). In the finite-sample and finite-capacity setting, we define an estimation error that quantifies the gap in the generator's performance relative to the optimal setting with infinite samples, and we derive upper bounds on this error that are order optimal under certain conditions. Finally, we highlight the value of tuning (α_D, α_G) in alleviating training instabilities on the synthetic 2D Gaussian mixture ring and Stacked MNIST datasets.
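For concreteness, the following is a minimal PyTorch sketch of the objectives described above, assuming the standard α-loss definition ℓ_α(ŷ) = (α/(α−1))(1 − ŷ^(1−1/α)) on the probability ŷ assigned to the true class, which recovers log-loss as α → 1 and the soft 0-1 loss at α = ∞. The function names and the saturating form of the generator objective are illustrative, not the paper's reference implementation.

```python
import torch

def alpha_loss(p_true: torch.Tensor, alpha: float) -> torch.Tensor:
    """alpha-loss of the probability assigned to the true class.

    Assumed definition from the alpha-loss literature:
    alpha -> 1 recovers log-loss; alpha = inf gives 1 - p_true.
    """
    if alpha == float("inf"):
        return 1.0 - p_true
    if abs(alpha - 1.0) < 1e-8:
        return -torch.log(p_true)  # log-loss limit
    return (alpha / (alpha - 1.0)) * (1.0 - p_true.pow(1.0 - 1.0 / alpha))

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor,
                       alpha_d: float) -> torch.Tensor:
    # D classifies real samples as label 1 and generated samples as label 0,
    # minimizing its alpha_D-loss (equivalently, maximizing V_{alpha_D}).
    return (alpha_loss(d_real, alpha_d).mean()
            + alpha_loss(1.0 - d_fake, alpha_d).mean())

def generator_loss(d_fake: torch.Tensor, alpha_g: float) -> torch.Tensor:
    # G minimizes V_{alpha_G}; only the generated-sample term depends on G,
    # so G maximizes the alpha_G-loss D incurs on fakes (saturating form).
    return -alpha_loss(1.0 - d_fake, alpha_g).mean()
```

Setting α_D = α_G = 1 recovers the vanilla (saturating) GAN objectives, so tuning the pair away from (1, 1) interpolates between loss shapes with different gradient behavior for each player.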
