Recent works have shown that deep neural networks can be employed to solve partial differential equations, giving rise to the framework of physics-informed neural networks (Raissi et al., 2019). We introduce a generalization of these methods in the form of a scaling parameter that balances the relative importance of the different constraints imposed by partial differential equations. A mathematical motivation for these generalized methods is provided, showing that for linear and well-posed partial differential equations the functional form is convex. We then derive a choice of the scaling parameter that is optimal with respect to a measure of relative error. Because this optimal choice relies on full knowledge of the analytical solution, we also propose a heuristic method to approximate it. The proposed methods are compared numerically to the original methods on a variety of model partial differential equations, with the number of data points updated adaptively. For several problems, including high-dimensional PDEs, the proposed methods significantly enhance accuracy.
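The weighting idea summarized above can be sketched as a loss with a single scalar balancing the interior PDE-residual term against the boundary-condition term. The sketch below is illustrative only and not the paper's exact formulation; the function name, the placement of the weight, and the mean-squared aggregation are all assumptions.

```python
import numpy as np

def weighted_pinn_loss(residual_interior, residual_boundary, lam):
    """Illustrative weighted PINN loss (not the paper's exact form).

    residual_interior : PDE residual evaluated at interior collocation points
    residual_boundary : mismatch with the boundary/initial conditions
    lam               : scalar weight balancing the two constraint terms
    """
    loss_pde = np.mean(residual_interior ** 2)   # interior PDE constraint
    loss_bc = np.mean(residual_boundary ** 2)    # boundary constraint
    return loss_pde + lam * loss_bc

# Example: two interior residuals of 1.0, one boundary residual of 2.0,
# weight lam = 0.5 gives 1.0 + 0.5 * 4.0 = 3.0
loss = weighted_pinn_loss(np.array([1.0, 1.0]), np.array([2.0]), 0.5)
```

In a training loop, `lam` would be fixed (or adapted heuristically, as the abstract describes) and the loss minimized over the network parameters by gradient descent.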

doi.org/10.1016/j.cam.2021.113887
Journal of Computational and Applied Mathematics
Centrum Wiskunde & Informatica, Amsterdam (CWI), The Netherlands

van der Meer, R., Oosterlee, K., & Borovykh, A. (2022). Optimally weighted loss functions for solving PDEs with Neural Networks. Journal of Computational and Applied Mathematics, 405. doi:10.1016/j.cam.2021.113887