Lipschitz and comparator-norm adaptivity in online learning
Presented at the Annual Conference on Learning Theory (July 2020), held online.
We study Online Convex Optimization in the unbounded setting, where neither predictions nor gradients are constrained. The goal is to adapt simultaneously to both the sequence of gradients and the comparator. We first develop parameter-free and scale-free algorithms for a simplified setting with hints. We present two versions: the first adapts to the squared norms of both comparator and gradients separately using $O(d)$ time per round; the second adapts to their squared inner products (which measure variance only in the comparator direction) in time $O(d^3)$ per round. We then generalize two prior reductions to the unbounded setting: one so that it no longer requires hints, and a second to deal with the range ratio problem (which already arises in prior work). We discuss their optimality in light of prior and new lower bounds. We apply our methods to obtain sharper regret bounds for scale-invariant online prediction with linear models.
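For concreteness, here is a minimal sketch of the setting behind these claims, in notation that is ours rather than the abstract's (the symbols $w_t$, $g_t$, $u$, $R_T$, $V_T$ are introduced for illustration only): at each round $t$ the learner predicts $w_t \in \mathbb{R}^d$ and then observes a gradient $g_t \in \mathbb{R}^d$, and its regret against a fixed comparator $u \in \mathbb{R}^d$ is
\[
R_T(u) \;=\; \sum_{t=1}^{T} \langle g_t,\, w_t - u \rangle .
\]
The two adaptivity targets mentioned above are then the separate quantities $\|u\|^2$ and $V_T = \sum_{t=1}^{T} \|g_t\|^2$ for the first version, and the directional quantity $\sum_{t=1}^{T} \langle g_t, u \rangle^2$ (gradient variance measured only along the comparator) for the second; up to logarithmic factors, the corresponding regret bounds scale as $\|u\|\sqrt{V_T}$ and $\sqrt{\sum_{t=1}^{T} \langle g_t, u \rangle^2}$, respectively. See the paper for the precise statements.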
Keywords: Online Convex Optimization, Parameter-Free Online Learning, Scale-Invariant Online Algorithms
Conference: Annual Conference on Learning Theory
Mhammedi, Z., & Koolen-Wijkstra, W. M. (2020). Lipschitz and comparator-norm adaptivity in online learning. In Proceedings of Machine Learning Research (pp. 2858–2887).