This is an up-to-date introduction to, and overview of, the Minimum Description Length (MDL) Principle, a theory of inductive inference that can be applied to general problems in statistics, machine learning, and pattern recognition. Although MDL was originally based on data-compression ideas, this introduction can be read without any knowledge of compression. It takes into account all major developments since 2007, when the last extensive overview was written. These include new methods for model selection, model averaging, and hypothesis testing, as well as the first completely general definition of MDL estimators. Incorporating these developments, MDL can be seen as a powerful extension of both penalized-likelihood and Bayesian approaches, in which penalization functions and prior distributions are replaced by more general luckiness functions, average-case methodology is replaced by a more robust worst-case approach, and methods classically viewed as highly distinct, such as AIC vs. BIC and cross-validation vs. Bayes, can to a large extent be viewed from a unified perspective.
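To make the connection with penalized likelihood concrete, the following is a minimal sketch of model selection by a crude two-part MDL code length, assuming Gaussian regression noise and the standard (k/2) log n approximation for the parameter part, which coincides with BIC; the refined MDL codes discussed in the overview are more sophisticated. All function and variable names below are illustrative, not taken from the paper.

```python
# Illustrative sketch only: crude two-part MDL for polynomial degree
# selection, under a Gaussian noise assumption. Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def two_part_mdl(x, y, degree):
    """Crude two-part code length: L(data | model) + L(model).

    L(data | model) is the negative log-likelihood at the maximum-
    likelihood fit; L(model) is approximated by (k/2) * log(n), where
    k counts the free parameters (coefficients plus noise variance).
    """
    n = len(y)
    coeffs = np.polyfit(x, y, degree)           # maximum-likelihood fit
    residuals = y - np.polyval(coeffs, x)
    sigma2 = np.mean(residuals ** 2)            # ML estimate of noise variance
    neg_log_lik = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                              # degree+1 coefficients + variance
    return neg_log_lik + 0.5 * k * np.log(n)

# Synthetic data from a cubic; the MDL score should favor a low degree.
x = np.linspace(-1, 1, 100)
y = 1.0 - 2.0 * x + 0.5 * x ** 3 + rng.normal(scale=0.1, size=x.size)

scores = {d: two_part_mdl(x, y, d) for d in range(1, 9)}
best = min(scores, key=scores.get)
print(f"selected degree: {best}")
```

Here the (k/2) log n term plays the role of the penalization function; in the MDL framework described above it is replaced by a code length derived from a more general luckiness function.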


Grünwald, P., & Roos, T. (2019). Minimum description length revisited. Mathematics for Industry, 11(1), 1930001:1–1930001:29. doi:10.1142/S2661335219300018