Algorithms for full-information online learning are classically tuned to minimize their worst-case regret. Modern algorithms additionally provide tighter guarantees outside the adversarial regime, most notably in the form of constant pseudoregret bounds under statistical margin assumptions. We investigate the multiscale extension of the problem where the loss ranges of the experts are vastly different. Here, the regret with respect to each expert needs to scale with its range, instead of the maximum overall range. We develop new multiscale algorithms, tuning schemes and analysis techniques to show that worst-case robustness and adaptation to easy data can be combined at a negligible cost. We further develop an extension with optimism and apply it to solve multiscale two-player zero-sum games. We demonstrate experimentally the superior performance of our scale-adaptive algorithm and discuss the subtle relationship of our results to Freund’s 2016 open problem.
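
To make the multiscale objective concrete, here is one standard way to state it (the notation below is ours, not taken from the paper, and constants and logarithmic factors are omitted). With K experts whose losses satisfy |\ell_{t,k}| \le \sigma_k, a learner playing distributions p_t over the experts aims for a per-expert guarantee of roughly

    R_T(k) \;=\; \sum_{t=1}^{T} \langle p_t, \ell_t \rangle \;-\; \sum_{t=1}^{T} \ell_{t,k} \;\lesssim\; \sigma_k \sqrt{T \log K} \qquad \text{for every expert } k,

so that the regret against expert k scales with its own range \sigma_k, rather than with the uniform worst-case bound \sigma_{\max} \sqrt{T \log K}, where \sigma_{\max} = \max_k \sigma_k.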

Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS 2022)
Machine Learning

Pérez, M., & Koolen-Wijkstra, W. (2022). Luckiness in multiscale online learning. In Advances in Neural Information Processing Systems (NeurIPS 2022).