We study generalized Bayesian inference under misspecification, i.e. when the model is ‘wrong but useful’. Generalized Bayes equips the likelihood with a learning rate η. We show that for generalized linear models (GLMs), η-generalized Bayes concentrates around the best approximation of the truth within the model for specific η ≠ 1, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We derive MCMC samplers for generalized Bayesian lasso and logistic regression and give examples of both simulated and real-world data in which generalized Bayes substantially outperforms standard Bayes.
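To make the idea of a learning rate η concrete: the η-generalized posterior is obtained by raising the likelihood to the power η before combining it with the prior, so η = 1 recovers standard Bayes and η < 1 downweights a possibly misspecified likelihood. The sketch below is not the paper's sampler; it is a minimal random-walk Metropolis illustration of an η-tempered posterior for logistic regression, with synthetic data, a standard normal prior, and step size chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logistic-regression data (illustrative, not from the paper).
n, d = 200, 2
X = rng.normal(size=(n, d))
beta_true = np.array([1.5, -1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def log_post(beta, eta):
    """Log of the eta-generalized posterior: eta * log-likelihood + log-prior."""
    z = X @ beta
    loglik = np.sum(y * z - np.logaddexp(0.0, z))  # Bernoulli-logit log-likelihood
    logprior = -0.5 * beta @ beta                  # standard normal prior
    return eta * loglik + logprior

def metropolis(eta, n_iter=5000, step=0.1):
    """Random-walk Metropolis targeting the eta-generalized posterior."""
    beta = np.zeros(d)
    lp = log_post(beta, eta)
    samples = np.empty((n_iter, d))
    for t in range(n_iter):
        prop = beta + step * rng.normal(size=d)
        lp_prop = log_post(prop, eta)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            beta, lp = prop, lp_prop
        samples[t] = beta
    return samples

samples = metropolis(eta=0.5)          # tempered posterior, eta < 1
print(samples[1000:].mean(axis=0))     # posterior mean after burn-in
```

With η < 1 the likelihood's influence is tempered relative to the prior, which is what makes the procedure robust under misspecified noise; the sign pattern of the posterior mean still tracks the data-generating coefficients in this well-specified toy example.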

International Conference on Artificial Intelligence and Statistics
Machine Learning

de Heide, R., Kirichenko, A. A., Mehta, N. A., & Grünwald, P. D. (2020). Safe-Bayesian Generalized Linear Regression. In Proceedings of the International Conference on Artificial Intelligence and Statistics (pp. 2623–2633).