Minimum Description Length (MDL) inference is based on the intuition that understanding the available data can be defined in terms of the ability to compress the data, i.e., to describe it in full using a shorter representation. This brief introduction discusses the design of the various codes used to implement MDL, focusing on the philosophically intriguing concepts of luckiness and regret: a good MDL code exhibits good performance in the worst case over all possible data sets, but achieves even better performance when the data turn out to be simple (although we suggest making no a priori assumptions to that effect). We then discuss how data compression relates to performance in various learning tasks, including parameter estimation, parametric and nonparametric model selection, and sequential prediction of outcomes from an unknown source. Finally, we briefly outline the history of MDL and its technical and philosophical relationship to other approaches to learning, such as Bayesian, frequentist and prequential statistics.
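The core idea, that a model is chosen because it yields the shortest total description of the data, can be illustrated with a toy sketch. This is not code from the chapter; it is a minimal illustration of a crude two-part code for binary sequences, where the total code length is the cost of encoding a maximum-likelihood Bernoulli parameter (at precision roughly 1/sqrt(n), i.e., about (1/2) log2 n bits) plus the cost of encoding the data given that parameter, compared against a fixed fair-coin code:

```python
import math

def fair_coin_codelength(data):
    # Fixed model, no parameters to encode: each binary outcome costs 1 bit.
    return float(len(data))

def two_part_codelength(data):
    # Crude two-part code (illustrative assumption, not the chapter's exact
    # construction): first encode the ML parameter at precision 1/sqrt(n),
    # costing about (1/2) log2 n bits, then encode the data using the
    # Shannon code lengths -log2 P(x | p_hat).
    n = len(data)
    k = sum(data)
    param_bits = 0.5 * math.log2(n)
    p = k / n
    if p in (0.0, 1.0):
        data_bits = 0.0  # degenerate data is described by the parameter alone
    else:
        data_bits = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
    return param_bits + data_bits

# "Lucky" (highly compressible) data: the two-part code wins despite
# paying a few extra bits for the parameter.
biased = [1] * 90 + [0] * 10
# Balanced data: the parameter cost is pure overhead, so the fixed
# fair-coin code is shorter; this overhead is the regret.
balanced = [1, 0] * 50
```

MDL selects whichever description is shorter in total; the few extra parameter bits paid on balanced data are the worst-case regret, while the large savings on biased data illustrate luckiness.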
Publisher: Elsevier
Editors: Prasanta S. Bandyopadhyay, Malcolm Forster
Series: Handbook of the Philosophy of Science
Subject: Algorithms and Complexity

de Rooij, S., & Grünwald, P. (2011). Luckiness and Regret in Minimum Description Length Inference. In P. Bandyopadhyay & M. Forster (Eds.), Handbook of the Philosophy of Science, Volume 7: Philosophy of Statistics. Elsevier.