Implicit bias with Ritz-Galerkin method in understanding deep learning for solving PDEs

02/19/2020
by   Jihong Wang, et al.

This paper studies the difference between the Ritz-Galerkin (R-G) method and deep neural network (DNN) methods in solving partial differential equations (PDEs), with the aim of better understanding deep learning. To this end, we consider a particular Poisson problem in which the right-hand side f is known only at n sample points, while the number of bases (neurons) is much larger than n, a setting common in DNN-based methods. Through both theoretical and numerical studies, we show that the R-G method solves this problem with a piecewise linear function, because it treats the discrete sample points as a linear combination of Dirac delta functions. In contrast, DNNs solve the problem with a much smoother function, as indicated by previous studies of the frequency principle (F-Principle) (Xu et al., 2019 [15]; Zhang et al., 2019 [17]); that is, DNN methods implicitly impose regularity on the function that interpolates the discrete sample points. Our work shows that, when the implicit bias of DNNs is taken into account, traditional methods such as the finite element method (FEM) can provide insights into understanding DNNs.
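To illustrate the piecewise-linear behavior described above, the following is a minimal 1D sketch, assuming the Poisson problem -u'' = f on [0,1] with homogeneous Dirichlet boundary conditions; the sample locations, quadrature weights, and the choice of f are illustrative assumptions, not the paper's exact setup. If the sampled right-hand side is treated as a combination of Dirac delta functions, the solution is a weighted sum of Green's functions, each of which is piecewise linear, so the recovered solution has kinks at the sample points.

```python
# Sketch only: Dirac-delta interpretation of sampled data in a 1D Poisson problem.
# The Green's function of -u'' on [0,1] with u(0) = u(1) = 0 is piecewise linear,
# so a delta-type right-hand side yields a piecewise linear solution.
import numpy as np
import matplotlib.pyplot as plt

def green(x, s):
    """Green's function of -u'' on [0,1] with homogeneous Dirichlet BCs."""
    return np.where(x <= s, x * (1.0 - s), s * (1.0 - x))

# Illustrative (assumed) choices: a smooth f observed at n sample points.
f = lambda x: np.sin(2.0 * np.pi * x)
n = 8
xs = np.linspace(0.1, 0.9, n)   # sample locations (assumed)
weights = f(xs) / n             # crude quadrature weights (assumed)

x = np.linspace(0.0, 1.0, 1000)
# Solution when the data is read as sum_i w_i * delta(x - x_i): piecewise linear.
u = sum(w * green(x, s) for w, s in zip(weights, xs))

plt.plot(x, u, label="solution for delta-type data (piecewise linear)")
plt.scatter(xs, np.interp(xs, x, u), c="k", s=15, label="kinks at sample points")
plt.xlabel("x"); plt.legend(); plt.show()
```

A smooth DNN-based fit of the same n samples would, by contrast, produce a solution without these kinks, which is the regularizing effect the abstract attributes to the implicit bias of DNNs.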
