Adversarial Regression. Generative Adversarial Networks for Non-Linear Regression: Theory and Assessment

10/18/2019
by Yoann Boget, et al.

Adversarial Regression is a proposal for performing high-dimensional non-linear regression with uncertainty estimation. We use a Conditional Generative Adversarial Network (CGAN) to obtain an estimate of the full predictive distribution for a new observation. Generative Adversarial Networks (GANs) are implicit generative models that produce samples from a distribution approximating the distribution of the data. The conditional version (CGAN) optimizes the objective min_G max_D V(D, G) = E_{x∼p_r(x)}[log D(x, y)] + E_{z∼p_z(z)}[log(1 − D(G(z, y)))]. An approximate solution can be found by training two neural networks simultaneously to model D and G, and feeding G with a random noise vector z. After training, G(z, y) approximately follows p_data(x, y); by fixing y, G(z | y) approximately follows p_data(x | y). By sampling z, we can therefore obtain samples approximately distributed according to p(x | y), the predictive distribution of x for a new y. We ran experiments varying the loss function, data distribution, sample size, size of the noise vector, and other settings. Although we observed differences, no configuration consistently outperformed the others; the quality of a CGAN for regression depends on fine-tuning a range of hyperparameters. More broadly, the results show that CGANs are a very promising method for uncertainty estimation in high-dimensional non-linear regression.
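The sketch below illustrates the idea described in the abstract: a generator conditioned on y is trained against a discriminator with the CGAN objective, and the predictive distribution p(x | y) is then approximated by sampling many noise vectors z for a fixed y. This is a minimal illustration, not the authors' implementation; the PyTorch setup, network sizes, dimensions, and the non-saturating generator loss are assumptions made for the example.

```python
# Minimal CGAN-for-regression sketch (illustrative only; not the paper's code).
import torch
import torch.nn as nn

NOISE_DIM, Y_DIM, X_DIM = 8, 4, 1  # illustrative sizes, not taken from the paper

class Generator(nn.Module):
    """Maps a noise vector z and a condition y to a sample x ~ p(x | y)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + Y_DIM, 64), nn.ReLU(),
            nn.Linear(64, X_DIM),
        )
    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))

class Discriminator(nn.Module):
    """Scores a pair (x, y) as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(X_DIM + Y_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_real, y):
    batch = x_real.size(0)
    z = torch.randn(batch, NOISE_DIM)
    x_fake = G(z, y)

    # Discriminator step: maximize log D(x, y) + log(1 - D(G(z, y), y)).
    d_loss = bce(D(x_real, y), torch.ones(batch, 1)) + \
             bce(D(x_fake.detach(), y), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step (non-saturating variant): maximize log D(G(z, y), y).
    g_loss = bce(D(x_fake, y), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, approximate the predictive distribution p(x | y_new)
# by sampling many z for the same fixed condition y_new.
y_new = torch.randn(1, Y_DIM).repeat(1000, 1)
with torch.no_grad():
    samples = G(torch.randn(1000, NOISE_DIM), y_new)  # empirical draw from p(x | y_new)
```

From `samples` one can compute a point prediction (e.g. the mean) and uncertainty summaries such as quantiles or the standard deviation, which is the sense in which the abstract speaks of estimating the full predictive distribution.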
