A recurrent cycle consistency loss for progressive face-to-face synthesis

04/14/2020
by Enrique Sanchez, et al.

This paper addresses a major flaw of the cycle consistency loss when used to preserve the input appearance in the face-to-face synthesis domain. In particular, we show that the images generated by a network trained using this loss conceal noise that hinders their use in further tasks. To overcome this limitation, we propose a "recurrent cycle consistency loss" which, for different sequences of target attributes, minimises the distance between the output images independently of any intermediate step. We empirically validate not only that our loss enables the re-use of generated images, but also that it improves their quality. In addition, we propose the first network that covers the task of unconstrained landmark-guided face-to-face synthesis. Contrary to previous works, our proposed approach enables the transfer of a particular set of input features to a large span of poses and expressions, whereby the target landmarks become the ground-truth points. We then evaluate the consistency with which our proposed approach synthesises faces at the target landmarks. To the best of our knowledge, we are the first to propose a loss that overcomes the limitation of the cycle consistency loss, and the first to propose an "in-the-wild" landmark-guided synthesis approach. Code and models for this paper can be found at https://github.com/ESanchezLozano/GANnotation
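As a rough illustration of the core idea, a recurrent cycle consistency term can be sketched as penalising the gap between chaining the generator through a sequence of targets and mapping the input directly to each target in one step. The PyTorch sketch below is a minimal, hypothetical rendering of that idea, not the paper's exact formulation: the names `G` (a generator taking an image and target landmarks), `x`, and `landmark_seq` are assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def recurrent_cycle_consistency_loss(G, x, landmark_seq):
    """Hypothetical sketch of a recurrent cycle consistency loss.

    G            -- generator: (image batch, target landmarks) -> image batch
    x            -- input face images, shape (B, 3, H, W)
    landmark_seq -- list of target landmark maps [t1, t2, ..., tN]

    For every prefix of the target sequence, the image obtained by
    chaining the generator through the intermediate targets is pulled
    towards the image obtained by mapping the input directly to that
    target, so the result does not depend on the intermediate steps.
    """
    loss = x.new_zeros(())   # scalar accumulator on the same device/dtype as x
    chained = x
    for t in landmark_seq:
        chained = G(chained, t)   # progressive synthesis through intermediates
        direct = G(x, t)          # single-step synthesis to the same target
        loss = loss + F.l1_loss(chained, direct)
    return loss / len(landmark_seq)
```

Under this sketch, driving the loss to zero makes the generator's output at a given target invariant to the path taken through earlier targets, which is what permits generated images to be safely re-used as inputs.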
