Shallow neural network representation of polynomials
We show that d-variate polynomials of degree R can be represented on [0,1]^d as shallow neural networks of width 2(R+d)^d. Moreover, using the shallow neural network (SNN) representation of localized Taylor polynomials of univariate C^β-smooth functions, we derive, for shallow networks, the minimax optimal rate of convergence, up to a logarithmic factor, to an unknown univariate regression function.
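As a purely numerical illustration of the width claim (not the paper's construction, whose activation and exact weights are specified in the full text), the sketch below fits a shallow network of width 2(R+d)^d to a degree-R polynomial on [0,1]^d using random hidden weights and least-squares output weights; the tanh activation and the specific target polynomial are assumptions made here for the example.

```python
# Minimal sketch: a width-2(R+d)^d shallow network approximating a
# d-variate polynomial on [0,1]^d. Random hidden weights + least-squares
# output weights; this only illustrates the stated width, not the
# paper's exact representation.
import numpy as np

rng = np.random.default_rng(0)

d, R = 2, 3                       # input dimension and polynomial degree
width = 2 * (R + d) ** d          # width from the abstract: 2(R+d)^d

def target(x):
    # An arbitrary degree-R polynomial in d variables (assumed example).
    return 1.0 + 2.0 * x[:, 0] - x[:, 1] ** 2 + 0.5 * x[:, 0] ** 2 * x[:, 1]

# Training points sampled from [0,1]^d.
X = rng.uniform(0.0, 1.0, size=(2000, d))
y = target(X)

# One hidden layer with tanh activation (an assumption; the paper's
# activation may differ), random inner weights, output weights by
# linear least squares.
W = rng.normal(size=(d, width))
b = rng.normal(size=width)
H = np.tanh(X @ W + b)                        # hidden-layer features
coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # outer weights

# Check the approximation error on fresh points.
X_test = rng.uniform(0.0, 1.0, size=(1000, d))
err = np.max(np.abs(np.tanh(X_test @ W + b) @ coef - target(X_test)))
print(f"width = {width}, sup-norm error on test points ~ {err:.3e}")
```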