h-analysis and data-parallel physics-informed neural networks
We explore the data-parallel acceleration of physics-informed machine learning (PIML) schemes, with a focus on physics-informed neural networks (PINNs) on multi-graphics-processing-unit (GPU) architectures. To develop scale-robust PIML models for sophisticated applications (e.g., those involving complex, high-dimensional domains, non-linear operators, or multi-physics), which may require large numbers of training points, we detail a protocol based on the Horovod training framework. This protocol is backed by h-analysis, including a new convergence bound for the generalization error. We show that the acceleration is straightforward to implement, does not compromise training, and proves to be highly efficient, paving the way toward generic scale-robust PIML. Extensive numerical experiments of increasing complexity illustrate its robustness and consistency, offering a wide range of possibilities for real-world simulations.
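The core of data-parallel training as used by Horovod can be illustrated without the framework itself: each worker (GPU) holds a shard of the training points, computes a local gradient, and an allreduce step averages the gradients before the optimizer update. The sketch below, a simplified stand-in using NumPy rather than the paper's actual Horovod/PINN code, shows that for equal-size shards the averaged per-worker gradient matches the full-batch gradient, which is why the acceleration need not compromise training; the model, data, and worker count here are illustrative assumptions.

```python
import numpy as np

def local_grad(w, xs, ys):
    # Gradient of the local mean-squared loss (1/n) * sum_i (w*x_i - y_i)^2 w.r.t. w
    return np.mean(2.0 * xs * (w * xs - ys))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1024)   # stand-in for PINN training/collocation points
y = 3.0 * x                            # synthetic targets (true w = 3)
w = 0.0                                # current parameter value

P = 4                                  # number of simulated workers (GPUs)
shards_x = np.split(x, P)              # equal-size shards, one per worker
shards_y = np.split(y, P)

# Allreduce step: average the per-worker gradients (what Horovod does across GPUs)
g_avg = np.mean([local_grad(w, sx, sy) for sx, sy in zip(shards_x, shards_y)])

# Single-worker full-batch gradient for comparison
g_full = local_grad(w, x, y)
assert np.isclose(g_avg, g_full)       # data-parallel gradient equals full-batch gradient
```

In practice, Horovod wraps the framework's gradient computation (e.g., via a distributed optimizer) so that this averaging happens transparently across GPUs each step.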