A Non-gradient DG Method for Second-order Elliptic Equations in Non-divergence Form
L^1-based optimization is widely used in image denoising, machine learning, and related applications. One of the main features of such an approach is that it naturally provides a sparse structure in the numerical solutions. In this paper, we study an L^1-based mixed DG method for second-order elliptic equations in non-divergence form. Elliptic PDEs in non-divergence form arise in the linearization of fully nonlinear PDEs. Due to the nature of these equations, classical finite element methods based on variational forms cannot be employed directly. In this work, we propose a new optimization scheme coupling the classical DG framework with a recently developed L^1 optimization technique. Convergence analysis in both the energy norm and the L^∞ norm is obtained under weak regularity assumptions. Such L^1 models are nondifferentiable, which renders traditional gradient-based solvers inapplicable in this setting. To overcome this difficulty, we characterize solutions of the L^1 optimization problem as fixed points of proximity equations and apply a matrix splitting technique to obtain a class of fixed-point proximity algorithms with convergence analysis. Numerical examples in both smooth and nonsmooth settings illustrate that, with a careful choice of bases for the finite-dimensional spaces, the numerical solutions exhibit a sparse structure, validating the theoretical results.
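To illustrate the fixed-point proximity idea in its simplest form, the sketch below solves an L^1-regularized least-squares model rather than the paper's DG scheme; the model problem, step size, and parameters are illustrative assumptions. The solution of min_x 0.5||Ax - b||^2 + λ||x||_1 is characterized as a fixed point of x = prox(x - t·A^T(Ax - b)), where the proximity operator of the L^1 norm is componentwise soft-thresholding:

```python
# Minimal sketch of a fixed-point proximity iteration (ISTA-style) for
#     min_x  0.5 * ||A x - b||^2 + lam * ||x||_1.
# Its solution satisfies the proximity fixed-point equation
#     x = prox_{t*lam*||.||_1}( x - t * A^T (A x - b) ),
# where the proximity operator of the L^1 norm is soft-thresholding.
# This is a toy model problem, not the paper's DG discretization.
import numpy as np

def soft_threshold(v, tau):
    """Proximity operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fixed_point_proximity(A, b, lam, n_iter=2000):
    """Iterate the proximity fixed-point equation until (approximate) convergence."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/||A||_2^2 guarantees convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - t * A.T @ (A @ x - b), t * lam)
    return x

# Small demo: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 20, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = fixed_point_proximity(A, b, lam=0.1)
```

The nondifferentiability of the L^1 term is handled entirely by the closed-form proximity operator, so no gradient of the full objective is ever required; the matrix splitting in the paper generalizes this same mechanism to the mixed DG system.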