UGAN: Untraceable GAN for Multi-Domain Face Translation

07/26/2019
by   Defa Zhu, et al.

Multi-domain image-to-image translation has received increasing attention in the computer vision community. However, translated images often retain characteristics of the source domain. In this paper, we propose a novel Untraceable GAN (UGAN) to tackle this phenomenon of source retaining. Specifically, the discriminator of UGAN contains a novel source classifier that tells which domain an image was translated from, in order to determine whether the translated image still retains characteristics of the source domain. After this adversarial training converges, the translator is able to synthesize target-only characteristics and erase source-only characteristics. In this way, the source domain of the synthesized image becomes untraceable. We perform extensive experiments, and the results demonstrate that the proposed UGAN produces superior results to the state-of-the-art StarGAN on three face editing tasks: face aging, makeup, and expression editing. The source code will be made publicly available.
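The adversarial objective described above can be sketched with a toy example. This is a minimal illustration, not the paper's actual loss: it assumes a softmax source classifier and cross-entropy losses, with hypothetical logits standing in for the discriminator's output. The discriminator is trained to trace the true source domain, while the translator is trained so that the same classifier assigns the image to the target domain, making the source untraceable.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over domain scores
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(logits, label):
    # Negative log-probability of the given domain label
    return -np.log(softmax(logits)[label])

# Hypothetical source-classifier logits for 3 domains;
# here the classifier still "traces" domain 0 most strongly.
logits = np.array([2.0, 0.5, -1.0])
src_domain, tgt_domain = 0, 2

# Discriminator objective (sketch): identify the true source domain
d_src_loss = cross_entropy(logits, src_domain)

# Translator objective (sketch): make the translated image appear
# to come from the target domain, erasing source-only traces
g_untraceable_loss = cross_entropy(logits, tgt_domain)
```

At convergence, the translator has driven the target-domain loss down, meaning the source classifier can no longer tell which domain the image came from.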
