We study generalized Bayesian inference under misspecification, i.e., when the model is 'wrong but useful'. Generalized Bayes equips the likelihood with a learning rate η. We show that for generalized linear models (GLMs), η-generalized Bayes concentrates around the best approximation of the truth within the model for a specific η ≠ 1, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We then derive MCMC samplers for generalized Bayesian lasso and logistic regression, and give examples of both simulated and real-world data in which generalized Bayes outperforms standard Bayes by a vast margin.
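For illustration, the η-generalized posterior at the heart of this setup (a standard formulation in the generalized-Bayes literature; the notation here is ours) raises the likelihood to the power η:

π_η(θ | x_1, …, x_n) ∝ π(θ) ∏_{i=1}^{n} p(x_i | θ)^η,

so that η = 1 recovers standard Bayes, while η < 1 discounts a possibly misspecified likelihood.

As a minimal sketch of how η enters an MCMC sampler (illustrative only; it assumes a Gaussian prior and plain random-walk Metropolis, whereas the paper derives dedicated samplers for the lasso and logistic cases):

import numpy as np

def log_post_eta(beta, X, y, eta, tau=1.0):
    # eta-tempered log-posterior for logistic regression:
    # eta * log-likelihood + Gaussian(0, tau^2) log-prior.
    z = X @ beta
    loglik = np.sum(y * z - np.logaddexp(0.0, z))  # stable log p(y | z), y in {0, 1}
    logprior = -0.5 * np.sum(beta ** 2) / tau ** 2
    return eta * loglik + logprior

def metropolis(X, y, eta, n_iter=5000, step=0.1, seed=0):
    # Random-walk Metropolis targeting the eta-generalized posterior.
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    lp = log_post_eta(beta, X, y, eta)
    samples = np.empty((n_iter, len(beta)))
    for t in range(n_iter):
        prop = beta + step * rng.standard_normal(len(beta))
        lp_prop = log_post_eta(prop, X, y, eta)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject on the log scale
            beta, lp = prop, lp_prop
        samples[t] = beta
    return samples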

Additional Metadata
Series arXiv.org e-Print archive
Project Safe Bayesian Inference: A Theory of Misspecification based on Statistical Learning
Grant This work was funded by the Netherlands Organisation for Scientific Research (NWO); grant id nwo/617.001.651 - Safe Bayesian Inference: A Theory of Misspecification based on Statistical Learning
Citation
de Heide, R., Kirichenko, A. A., Mehta, N. A., & Grünwald, P. D. (2019). Safe Bayesian Linear Regression. arXiv.org e-Print archive.