On the capacity of deep generative networks for approximating distributions

01/29/2021
by Yunfei Yang, et al.

We study the efficacy and efficiency of deep generative networks for approximating probability distributions. We prove that neural networks can transform a one-dimensional source distribution into a distribution arbitrarily close to a high-dimensional target distribution in Wasserstein distance. Upper bounds on the approximation error are obtained in terms of the networks' width and depth. The approximation error grows at most linearly in the ambient dimension, and the approximation order depends only on the intrinsic dimension of the target distribution. In contrast, when f-divergences are used as metrics between distributions, the approximation property is different: we prove that in order to approximate the target distribution in f-divergence, the dimension of the source distribution cannot be smaller than the intrinsic dimension of the target distribution. Therefore, f-divergences are less adequate than Wasserstein distances as metrics of distributions for generating samples.
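The core idea above can be illustrated with a minimal sketch (not the paper's construction): any piecewise-linear map is exactly realizable by a one-hidden-layer ReLU network, so we build one that interpolates the inverse CDF of a target distribution at a few knots, push a one-dimensional uniform source through it, and measure the empirical Wasserstein-1 distance to target samples. The choice of Exp(1) as the target and the knot placement are illustrative assumptions.

```python
import math
import random

# Illustrative "generator": a piecewise-linear interpolant of the inverse CDF
# of Exp(1) at k knots in (0, 1). Any such map is realizable by a shallow
# ReLU network, so it stands in for a trained generative network here.
def make_generator(k):
    xs = [i / (k + 1) for i in range(1, k + 1)]      # knots, avoiding p=1
    ys = [-math.log(1 - x) for x in xs]              # inverse CDF of Exp(1)
    def g(z):
        if z <= xs[0]:
            return ys[0]
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if z <= x1:
                t = (z - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)            # linear interpolation
        return ys[-1]                                # clip the upper tail
    return g

def wasserstein1(a, b):
    # For equal-size 1-D samples, empirical W1 is the mean absolute
    # difference of the sorted samples (monotone optimal coupling).
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

random.seed(0)
n = 20000
g = make_generator(64)
source = [random.random() for _ in range(n)]           # 1-D source U(0, 1)
pushed = [g(z) for z in source]                        # push-forward samples
target = [random.expovariate(1.0) for _ in range(n)]   # target Exp(1)

print("W1(pushed, target):", round(wasserstein1(pushed, target), 3))
print("W1(source, target):", round(wasserstein1(source, target), 3))
```

With 64 knots the push-forward is already much closer to the target than the raw source is, and adding knots (i.e. network width) shrinks the gap further, mirroring the width/depth error bounds stated in the abstract.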
