Assembling Semantically-Disentangled Representations for Predictive-Generative Models via Adaptation from Synthetic Domain
Deep neural networks form high-level hierarchical representations of their input data, and many researchers have demonstrated that these representations enable a variety of useful applications. However, such representations typically reflect the statistics of the training data and may not align with the semantic representation that an application requires. Conditional models are commonly used to overcome this mismatch, but they require large annotated datasets, which are scarce and costly to create. In this paper, we show that semantically-aligned representations can instead be generated with the help of a physics-based engine. This is accomplished by creating a synthetic dataset with decoupled attributes, learning an encoder for the synthetic dataset, and augmenting prescribed attributes from the synthetic domain with attributes from the real domain. We show that the proposed method (SYNTH-VAE-GAN) can construct a conditional predictive-generative model of human face attributes without relying on real-data labels.
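The abstract describes the core mechanism only in prose: encode an image into an attribute part (aligned with the decoupled synthetic attributes) and a residual part, then recombine prescribed synthetic-domain attribute codes with real-domain content. The following is a minimal PyTorch sketch of that idea under stated assumptions; the module names, dimensions, and the final attribute-swap usage are hypothetical illustrations, not the authors' SYNTH-VAE-GAN implementation, and the GAN discriminator and training losses are omitted for brevity.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB image to a VAE latent split into an attribute code
    (intended to align with the decoupled synthetic attributes) and a
    residual code carrying the remaining content. Dimensions are assumed."""
    def __init__(self, attr_dim=8, res_dim=56):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        z_dim = attr_dim + res_dim
        self.mu = nn.Linear(128 * 8 * 8, z_dim)
        self.logvar = nn.Linear(128 * 8 * 8, z_dim)
        self.attr_dim = attr_dim

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + eps * sigma
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z[:, :self.attr_dim], z[:, self.attr_dim:], mu, logvar

class Decoder(nn.Module):
    """Reconstructs an image from the concatenated attribute and residual
    codes; in a full VAE-GAN this generator would also feed a discriminator."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.fc = nn.Linear(z_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),    # 32 -> 64
        )

    def forward(self, z_attr, z_res):
        h = self.fc(torch.cat([z_attr, z_res], dim=1))
        return self.net(h.view(-1, 128, 8, 8))

# Hypothetical usage: impose prescribed attributes encoded from a synthetic
# image onto the residual content encoded from a real image.
enc, dec = Encoder(), Decoder()
x_real = torch.randn(4, 3, 64, 64)  # stand-in for a real-face batch
x_syn = torch.randn(4, 3, 64, 64)   # stand-in for a synthetic batch
attr_syn, _, _, _ = enc(x_syn)
_, res_real, _, _ = enc(x_real)
x_edit = dec(attr_syn, res_real)    # synthetic attributes, real content
```

The key design point this sketch illustrates is that the attribute slice of the latent can be learned against the synthetic dataset, where attributes are decoupled by construction, so no real-data labels are needed to condition generation.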