Private GANs, Revisited

02/06/2023
by Alex Bie et al.

We show that the canonical approach for training differentially private GANs – updating the discriminator with differentially private stochastic gradient descent (DPSGD) – can yield significantly improved results after modifications to training. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between the generator and discriminator necessary for successful GAN training. We show that a simple fix – taking more discriminator steps between generator steps – restores parity and improves results. Additionally, with the goal of restoring parity between the generator and discriminator, we experiment with other modifications to improve discriminator training and see further improvements in generation quality. Our results demonstrate that on standard benchmarks, DPSGD outperforms all alternative GAN privatization schemes.
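The recipe the abstract describes can be summarized in a short sketch: wrap only the discriminator's optimizer in DP-SGD and take several discriminator steps for every generator step. The following is a minimal, illustrative PyTorch/Opacus loop, not the authors' implementation; the architectures, the value of n_d, the noise multiplier, the clipping bound, and all other hyperparameters are placeholder assumptions rather than values from the paper.

```python
# Minimal sketch (not the paper's code): privatize only the discriminator
# with DP-SGD via Opacus, and take n_d discriminator steps per generator step.
import torch
from torch import nn
from opacus import PrivacyEngine

latent_dim = 64
n_d = 5  # discriminator steps per generator step (illustrative value)

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                              nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for the sensitive dataset (e.g., flattened images).
private_data = torch.randn(1024, 784)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(private_data), batch_size=64, shuffle=True)

# Attach DP-SGD (per-example clipping + Gaussian noise) to the discriminator only.
privacy_engine = PrivacyEngine()
discriminator, d_opt, loader = privacy_engine.make_private(
    module=discriminator,
    optimizer=d_opt,
    data_loader=loader,
    noise_multiplier=1.0,    # placeholder; calibrate to the target (eps, delta)
    max_grad_norm=1.0,       # per-example gradient clipping bound (placeholder)
    poisson_sampling=False,  # fixed-size batches to keep the sketch simple
)

bce = nn.BCEWithLogitsLoss()
data_iter = iter(loader)

def next_real_batch():
    """Cycle through the private data loader."""
    global data_iter
    try:
        (batch,) = next(data_iter)
    except StopIteration:
        data_iter = iter(loader)
        (batch,) = next(data_iter)
    return batch

for g_step in range(100):
    # n_d noisy (DP) discriminator updates per generator update.
    for _ in range(n_d):
        real = next_real_batch()
        z = torch.randn(real.size(0), latent_dim)
        fake = generator(z).detach()  # no gradient flows into the generator here
        d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1)) +
                  bce(discriminator(fake), torch.zeros(real.size(0), 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()  # Opacus clips per-example gradients and adds noise

    # One generator update: it only touches real data through the privatized
    # discriminator, so by post-processing no extra noise is added on this step.
    z = torch.randn(64, latent_dim)
    g_loss = bce(discriminator(generator(z)), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    d_opt.zero_grad()  # discard discriminator grads accumulated by this backward
```

The inner loop is where the abstract's "simple fix" lives: increasing n_d gives the noisily trained discriminator more updates to keep pace with the noise-free generator.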
