Many real-world physical processes, such as fluid flows and molecular dynamics, are understood well enough that their behaviour can be accurately translated into mathematical systems of equations, which can then be solved by a computer algorithm. This process forms the basis of the research field of Scientific Computing, and although it is very successful, performing accurate simulations can be computationally expensive. As a result, techniques have been developed for Model Order Reduction (MOR), which aim to drastically reduce the complexity of such systems of equations while sacrificing little accuracy compared to the original Full-Order Model (FOM). The resulting Reduced-Order Models (ROMs) often need to include a correction (or ‘closure’) term to account for the error introduced by the reduction. In recent years, Machine Learning (ML) has become a popular way to obtain such closure terms. However, research on ML for closure terms differs from more general ML research in that relevant domain knowledge (i.e. applicable laws of physics or statistical observations) can be used to design the ML model. Moreover, an ML closure model does not need to learn the entire dynamics of the underlying system, but only the error between the true dynamics of the FOM and the approximate dynamics of the ROM.
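The idea of supplementing reduced dynamics with a learned closure term can be sketched as follows. This is a minimal illustration, not the thesis's actual models: the linear ROM right-hand side, the tiny two-layer network, and the forward-Euler step are all assumptions made for the example.

```python
import numpy as np

def rom_rhs(u, A):
    """Approximate (reduced) dynamics: du/dt ~ A @ u (illustrative choice)."""
    return A @ u

def closure(u, W1, b1, W2, b2):
    """A small neural network modelling only the ROM's error,
    not the full dynamics of the underlying system."""
    h = np.tanh(W1 @ u + b1)
    return W2 @ h + b2

def closed_rom_rhs(u, A, params):
    """ROM dynamics corrected by the learned closure term."""
    return rom_rhs(u, A) + closure(u, *params)

def step(u, dt, A, params):
    """One forward-Euler step of the closed ROM; any ODE solver could be used."""
    return u + dt * closed_rom_rhs(u, A, params)
```

Because the closure only has to represent the (typically small) discrepancy between FOM and ROM, a much smaller network can suffice than one that must learn the dynamics from scratch.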

In this thesis, several sets of experiments are performed that aim to assess the efficacy of simple ML closure models on a number of problems in the form of ordinary or partial differential equations, and to inform future uses of ML in ROM by comparing several ML architectures and training procedures. Even simple ML closure models are found to perform drastically better than models without a closure term, while also outperforming ‘pure’ ML models that learn the dynamics from scratch, without using the approximate ROM dynamics as prior knowledge. Furthermore, models that are formulated to be continuous in time (as the underlying processes are) outperform models that are discrete in time, and models with domain knowledge embedded in their design outperform models without such embedded knowledge.
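The continuous-versus-discrete distinction can be made concrete with a small sketch. Both functions below are illustrative assumptions: a discrete-in-time model learns the solution map for one fixed step size, whereas a continuous-in-time model learns a right-hand side that any ODE solver (here forward Euler, for simplicity) can integrate with an arbitrary step size.

```python
import numpy as np

def discrete_model(u, net):
    """Maps u_n directly to u_{n+1}; implicitly tied to the
    step size used during training."""
    return net(u)

def continuous_model(u, t_span, dt, rhs):
    """Integrates du/dt = rhs(u) with forward Euler from t_span[0]
    to t_span[1]; dt may differ from anything seen in training."""
    t0, t1 = t_span
    n_steps = int(round((t1 - t0) / dt))
    for _ in range(n_steps):
        u = u + dt * rhs(u)
    return u
```

For example, with the known right-hand side `rhs = lambda u: -u`, the continuous formulation recovers exponential decay at whatever resolution the solver is run.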

As for training procedures, several methods are compared. Although one method clearly outperforms the others, the specific problem considered determines which ODE solvers are applicable, which in turn influences the suitability of the different training procedures. Finally, some models are compared that allow for a memory effect, in which future states depend not only on the current state but also on past states. While models with memory effects have been found to perform well in other works, they do not outperform simpler memoryless models on the problem considered in this thesis.
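One common way to give a closure model a memory effect is a recurrent hidden state that accumulates information from past states, in contrast to a memoryless closure that sees only the current state. The sketch below is a hypothetical illustration of that contrast; the weight names and the simple tanh recurrence are assumptions, not the thesis's architectures.

```python
import numpy as np

def memoryless_closure(u, W, b):
    """Closure term computed from the current state only."""
    return np.tanh(W @ u + b)

def memory_closure(u, h, Wu, Wh, bh, Wout):
    """Closure term with memory: the hidden state h is updated from the
    current state u and the previous hidden state, so the output depends
    on the whole trajectory seen so far. Returns (closure_term, new_h)."""
    h_new = np.tanh(Wu @ u + Wh @ h + bh)
    return Wout @ h_new, h_new
```

The extra expressiveness of the recurrent variant comes with more parameters to fit, which is one plausible route to the overfitting discussed below.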

Nevertheless, the value of ML for closure terms is clear, since the accuracy of a numerical method can be improved significantly by supplementing it with a relatively small neural network. However, more research is needed to compare the performance of such closure models against purely numerical methods, preferably on more complex test problems. Finally, ML closure models with memory should be examined more critically, to determine how models can be obtained that do not suffer from overfitting.