Neural Networks with Inputs Based on Domain of Dependence and A Converging Sequence for Solving Conservation Laws, Part I: 1D Riemann Problems
Recent research on solving partial differential equations (PDEs) with deep neural networks (DNNs) has demonstrated that spatiotemporal function approximators defined by automatic differentiation are effective for approximating nonlinear problems, e.g. Burgers' equation, heat conduction equations, the Allen-Cahn and other reaction-diffusion equations, and the Navier-Stokes equations. Meanwhile, researchers have applied automatic differentiation in physics-informed neural networks (PINNs) to solve nonlinear hyperbolic systems of conservation laws with highly discontinuous transitions, such as Riemann problems, via an inverse problem formulation in a data-driven approach. However, it remains a challenge for forward methods using DNNs to resolve discontinuities in nonlinear conservation laws without knowing part of the solution. In this study, we incorporate first-order numerical schemes into DNNs to set up the loss functional approximator instead of using auto-differentiation from a traditional deep learning framework, e.g. the TensorFlow package, which improves the effectiveness of capturing discontinuities in Riemann problems. In particular, the 2-Coarse-Grid neural network (2CGNN) and the 2-Diffusion-Coefficient neural network (2DCNN) are introduced in this work. We use two solutions of a conservation law from a converging sequence, computed with a low-cost numerical scheme and restricted to the domain of dependence of a space-time grid point, as the input for a neural network to predict the high-fidelity solution at that point. Despite the smeared input solutions, the networks output sharp approximations to solutions containing shocks and contacts, and they are efficient to use once trained.
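The abstract describes feeding a network, for each space-time grid point, the values of two low-cost coarse solutions restricted to that point's domain of dependence. A minimal sketch of that input-assembly step is below; it assumes a simple symmetric stencil of radius r as a stand-in for the CFL-bounded domain of dependence, and all names (gather_inputs, stencil radius, the sample profiles) are illustrative, not from the paper.

```python
def gather_inputs(coarse_a, coarse_b, j, r):
    """Build the feature vector for grid point j: values of two low-cost
    solutions within a stencil of radius r, clamped at the boundaries.
    The stencil stands in for the (CFL-bounded) domain of dependence."""
    n = len(coarse_a)
    idx = [min(max(j + k, 0), n - 1) for k in range(-r, r + 1)]
    return [coarse_a[i] for i in idx] + [coarse_b[i] for i in idx]

# Example: two smeared step profiles, as a first-order scheme would produce
# for a Riemann problem on successively refined coarse grids.
u1 = [1.0] * 5 + [0.5] + [0.0] * 4          # coarser solution
u2 = [1.0] * 5 + [0.7, 0.3] + [0.0] * 3     # next solution in the sequence
features = gather_inputs(u1, u2, 5, 2)
# features has 2 * (2r + 1) = 10 entries feeding the network's input layer
```

The network itself (an MLP in a standard deep learning framework) would then map this 10-entry vector to the high-fidelity value at grid point j.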