L^p sampling numbers for the Fourier-analytic Barron space
In this paper, we consider Barron functions f : [0,1]^d → ℝ of smoothness σ > 0, i.e., functions that can be written as f(x) = ∫_{ℝ^d} F(ξ) e^{2πi⟨x,ξ⟩} dξ with ∫_{ℝ^d} |F(ξ)| · (1 + |ξ|)^σ dξ < ∞. For σ = 1, these functions play a prominent role in machine learning, since they can be efficiently approximated by (shallow) neural networks without suffering from the curse of dimensionality. For these functions, we study the following question: given m point samples f(x_1), …, f(x_m) of an unknown Barron function f : [0,1]^d → ℝ of smoothness σ, how well can f be recovered from these samples, for an optimal choice of the sampling points and of the reconstruction procedure? Denoting the optimal reconstruction error measured in L^p by s_m(σ; L^p), we show that m^{-1/max{p,2} - σ/d} ≲ s_m(σ; L^p) ≲ (ln(e + m))^{α(σ,d)/p} · m^{-1/max{p,2} - σ/d}, where the implied constants depend only on σ and d, and where α(σ,d) stays bounded as d → ∞.
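Up to the logarithmic factor, both sides of the bound above decay like m^{-e} with rate exponent e = 1/max{p,2} + σ/d. A minimal Python sketch of this exponent (the function name `decay_exponent` is ours, not from the paper) illustrates how, for the classical case σ = 1 in L^2, the rate approaches the Monte Carlo rate m^{-1/2} as the dimension d grows:

```python
def decay_exponent(p: float, sigma: float, d: int) -> float:
    """Exponent e such that s_m(sigma; L^p) ~ m^{-e}, up to log factors."""
    return 1.0 / max(p, 2.0) + sigma / d

# For sigma = 1 (the classical Barron class) in L^2, the exponent
# 1/2 + 1/d tends to 1/2 as the dimension d grows:
for d in (2, 10, 100):
    print(d, decay_exponent(p=2.0, sigma=1.0, d=d))
```

This makes the dimension dependence explicit: the curse of dimensionality enters only through the additive term σ/d, which vanishes as d → ∞.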