The no-free-lunch theorems promote a skeptical conclusion: that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must carry an inherent inductive bias that requires justification. We argue that many standard learning algorithms should instead be understood as model-dependent: in each application they also require a model, representing a bias, as input. Being generic algorithms themselves, they can be given a model-relative justification.


Sterkenburg, T., & Grünwald, P. (2021). The no-free-lunch theorems of supervised learning. Synthese, 199, 9979–10015. doi:10.1007/s11229-021-03233-1