2009
Computable Bayesian Compression for Uniformly Discretizable Statistical Models
Publication
Presented at Algorithmic Learning Theory (ALT 2009), Porto, Portugal
Supplementing Vovk and V'yugin's 'if' statement, we show that Bayesian compression provides the best enumerable compression for parameter-typical data if and only if the parameter is Martin-Löf random with respect to the prior. The result is derived for uniformly discretizable statistical models, introduced here. They feature the crucial property that given a discretized parameter, we can compute how much data is needed to learn its value with little uncertainty. Exponential families and certain nonparametric models are shown to be uniformly discretizable.
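For orientation, a minimal sketch of the Bayesian mixture code the abstract refers to; the notation here is assumed for illustration and is not taken from the paper. Given a model class $\{P_\theta : \theta \in \Theta\}$ with prior $\pi$, the Bayesian code length assigned to a sample $x_{1:n}$ is

$$
-\log_2 \int_\Theta P_\theta(x_{1:n}) \, \mathrm{d}\pi(\theta),
$$

and the stated result says that, for the models considered, this matches the shortest enumerable code length on $P_\theta$-typical sequences up to an additive constant exactly when $\theta$ is Martin-Löf random with respect to $\pi$.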
| Additional Metadata | |
|---|---|
| Publisher | Springer |
| Editors | R. Gavalda, G. Lugosi, T. Zeugmann |
| Project | Learning when all models are wrong |
| Conference | Algorithmic Learning Theory |
| Organisation | Quantum Computing and Advanced System Research |
Debowski, L. J. (2009). Computable Bayesian Compression for Uniformly Discretizable Statistical Models. In R. Gavalda, G. Lugosi, & T. Zeugmann (Eds.), Algorithmic Learning Theory: 20th International Conference, ALT 2009, Porto, Portugal, October 3-5, 2009, Proceedings (pp. 53–67). Springer.