We propose a new neural-network-based large-eddy simulation (LES) framework for the incompressible Navier-Stokes equations, built on the paradigm "discretize first, filter and close next". This leads to full model-data consistency and allows neural closure models to be employed in the same environment in which they were trained. Since the LES discretization error is included in the learning process, the closure models can learn to account for the discretization. Furthermore, we employ a divergence-consistent discrete filter defined through face-averaging and provide novel theoretical and numerical filter analysis. This filter preserves the discrete divergence-free constraint by construction, unlike general discrete filters such as volume-averaging filters. We show that using a divergence-consistent LES formulation coupled with a convolutional neural closure model produces stable and accurate results for both a-priori and a-posteriori training, while a general (divergence-inconsistent) LES model requires a-posteriori training or other stability-enforcing measures.
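The divergence-consistency property of the face-averaging filter can be illustrated numerically. The sketch below (not the authors' code; grid sizes, variable names, and the factor-2 coarsening are illustrative assumptions) builds an exactly divergence-free velocity field on a 2D staggered (MAC) grid from a discrete streamfunction, applies a face-averaging filter, and checks that the coarse-grid field satisfies the coarse discrete divergence-free constraint to machine precision.

```python
import numpy as np

# Illustrative sketch: face-averaging filter on a 2D staggered (MAC) grid.
# Axis 0 is x, axis 1 is y; u lives on vertical faces, v on horizontal faces.
N, h = 16, 1.0 / 16                         # fine grid resolution and spacing
rng = np.random.default_rng(0)
psi = rng.standard_normal((N + 1, N + 1))   # streamfunction at grid nodes

# Discrete curl of psi gives an exactly divergence-free staggered field.
u = (psi[:, 1:] - psi[:, :-1]) / h          # shape (N+1, N)
v = -(psi[1:, :] - psi[:-1, :]) / h         # shape (N, N+1)
div_fine = (u[1:, :] - u[:-1, :]) / h + (v[:, 1:] - v[:, :-1]) / h

# Face-averaging filter (coarsening factor 2): each coarse face velocity is
# the mean of the fine face velocities lying on that coarse face. A
# volume-averaging filter would instead mix velocities from neighboring
# faces and would not, in general, preserve the divergence constraint.
H = 2 * h
U = 0.5 * (u[::2, 0::2] + u[::2, 1::2])     # shape (N//2 + 1, N//2)
V = 0.5 * (v[0::2, ::2] + v[1::2, ::2])     # shape (N//2, N//2 + 1)
div_coarse = (U[1:, :] - U[:-1, :]) / H + (V[:, 1:] - V[:, :-1]) / H

# Both divergences vanish up to floating-point rounding.
print(np.max(np.abs(div_fine)), np.max(np.abs(div_coarse)))
```

The coarse divergence of each coarse cell is proportional to the sum of the fine divergences over the underlying 2x2 block of fine cells, which is why the constraint is preserved by construction rather than only approximately.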


Agdestein, S., & Sanderse, B. (2025). Discretize first, filter next: Learning divergence-consistent closure models for large-eddy simulation. Journal of Computational Physics, 522, 113577:1–113577:34. doi:10.1016/j.jcp.2024.113577