Approximation of Smoothness Classes by Deep ReLU Networks

07/30/2020
by Mazen Ali, et al.

We consider approximation rates of sparsely connected deep rectified linear unit (ReLU) and rectified power unit (RePU) neural networks for functions in Besov spaces B^α_q(L^p) in arbitrary dimension d, on bounded or unbounded domains. We show that RePU networks with a fixed activation function attain optimal approximation rates for functions in the Besov space B^α_τ(L^τ) on the critical embedding line 1/τ = α/d + 1/p for arbitrary smoothness order α > 0. Moreover, we show that ReLU networks attain near-optimal rates for any Besov space strictly above the critical line. Using interpolation theory, this implies that the entire range of smoothness classes at or above the critical line is (near-)optimally approximated by deep ReLU/RePU networks.
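As a worked instance of the critical embedding line (the specific values d = 2, p = 2, α = 2 are illustrative choices, not taken from the abstract): 1/τ = α/d + 1/p gives 1/τ = 2/2 + 1/2 = 3/2, i.e. τ = 2/3, so the Besov space sitting on the critical line in this case is B^2_{2/3}(L^{2/3}).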

