Neural network approximation of coarse-scale surrogates in numerical homogenization
Coarse-scale surrogate models in the context of numerical homogenization of linear elliptic problems with arbitrarily rough diffusion coefficients rely on the efficient solution of fine-scale sub-problems on local subdomains, whose solutions are then employed to deduce appropriate coarse contributions to the surrogate model. However, in the absence of periodicity and scale separation, the reliability of such models requires the local subdomains to cover the whole domain, which may result in high offline costs, in particular for parameter-dependent and stochastic problems. This paper justifies the use of neural networks for the approximation of coarse-scale surrogate models by analyzing their approximation properties. For a prototypical and representative numerical homogenization technique, the Localized Orthogonal Decomposition method, we show that a single neural network is sufficient to approximate the coarse contributions of all occurring coefficient-dependent local sub-problems for a non-trivial class of diffusion coefficients up to arbitrary accuracy. We present rigorous upper bounds on the depth and number of non-zero parameters for such a network to achieve a given accuracy. Further, we analyze the overall error of the resulting neural network-enhanced numerical homogenization surrogate model.
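To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of the abstract's central idea: one feed-forward network that maps a sampled local diffusion coefficient on an element patch to the entries of the corresponding coarse-scale (LOD-style) local contribution. All names, patch and layer sizes, and the random stand-in training data are hypothetical placeholders; in practice the targets would come from solving the fine-scale corrector problems offline.

```python
# Illustrative sketch only: a single network approximating coefficient-to-coarse-contribution maps.
import torch
import torch.nn as nn

PATCH_CELLS = 16 * 16   # fine cells resolving the coefficient on one patch (assumed size)
COARSE_DOFS = 4         # coarse degrees of freedom coupled by one local contribution (assumed)

class LocalSurrogateNet(nn.Module):
    """Maps a flattened coefficient patch to a vector of local coarse-matrix entries."""
    def __init__(self, width=256, depth=4):
        super().__init__()
        layers, d_in = [], PATCH_CELLS
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        layers += [nn.Linear(d_in, COARSE_DOFS * COARSE_DOFS)]
        self.net = nn.Sequential(*layers)

    def forward(self, coeff_patch):
        return self.net(coeff_patch)

# Offline training loop sketch: pairs (coefficient patch, local coarse contribution).
# Random data is used here purely to keep the example self-contained and runnable.
model = LocalSurrogateNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(100):
    coeff = torch.rand(32, PATCH_CELLS)                  # sampled rough coefficients
    target = torch.rand(32, COARSE_DOFS * COARSE_DOFS)   # stand-in for true local contributions
    optimizer.zero_grad()
    loss = loss_fn(model(coeff), target)
    loss.backward()
    optimizer.step()
```

Once trained, such a network replaces the repeated solution of coefficient-dependent local sub-problems in the online stage, which is precisely where the savings for parameter-dependent and stochastic problems would arise.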