2017
Fast rates with high probability in exp-concave statistical learning
Publication
Presented at the International Conference on Artificial Intelligence and Statistics (April 2017), Fort Lauderdale, FL, USA
We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk O(d log(1/δ)/n) with probability at least 1 − δ. The core technique is to boost the confidence of recent in-expectation O(d/n) excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that with probability 1 − δ the standard ERM method obtains excess risk O(d(log(n) + log(1/δ))/n).
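For intuition, here is a minimal sketch of the boost-the-confidence device, not the paper's exact procedure: the squared loss, the linear model class, the k = ⌈log(1/δ)⌉ split, and the names `erm_least_squares` and `boosted_erm` are all assumptions chosen for concreteness. Candidates are trained by ERM on disjoint chunks, and the final selection is itself an ERM over a finite set, which is where a Bernstein condition keeps the extra cost small.

```python
import numpy as np

def erm_least_squares(X, y):
    """ERM for the (exp-concave) squared loss over linear predictors."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def sq_risk(w, X, y):
    """Empirical squared-loss risk of the linear predictor w."""
    return np.mean((X @ w - y) ** 2)

def boosted_erm(X, y, delta=0.01):
    """Confidence-boosting sketch: train ~log(1/delta) ERM candidates on
    disjoint chunks of half the data, then select among the finitely many
    candidates by empirical risk on the held-out half.  Assumes n is large
    relative to log(1/delta) so every chunk is non-empty."""
    n = len(y)
    k = max(1, int(np.ceil(np.log(1.0 / delta))))   # number of candidates
    half = n // 2
    chunks = np.array_split(np.arange(half), k)
    candidates = [erm_least_squares(X[idx], y[idx]) for idx in chunks]
    # ERM over the finite candidate set; under a Bernstein condition
    # (implied by exp-concavity) this selection step is cheap.
    risks = [sq_risk(w, X[half:], y[half:]) for w in candidates]
    return candidates[int(np.argmin(risks))]
```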
We further show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner.
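A minimal sketch of a generic online-to-batch conversion follows; the online gradient descent step is an assumed stand-in for whatever online learner carries the regret bound (for exp-concave losses one would more naturally use something like Online Newton Step).

```python
import numpy as np

def ogd_step(w, x, y, t, eta0=1.0):
    """One online gradient descent step on the squared loss; a stand-in
    for any online learner with a regret guarantee."""
    grad = 2.0 * (x @ w - y) * x
    return w - (eta0 / np.sqrt(t + 1)) * grad

def online_to_batch(step, X, y, w0):
    """Feed the i.i.d. sample to the online learner one point at a time
    and return the average iterate as the batch predictor."""
    w, iterates = w0.copy(), []
    for t in range(len(y)):
        iterates.append(w.copy())
        w = step(w, X[t], y[t], t)
    return np.mean(iterates, axis=0)
```

In expectation, Jensen's inequality bounds the averaged predictor's excess risk by the learner's average regret; the result quoted above is the high probability counterpart of that conversion.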
Lastly, we present two high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. The first bound is for a purely exponential weights type algorithm, obtains a nearly optimal rate, and has no explicit dependence on the Lipschitz continuity of the loss. The second bound requires Lipschitz continuity but obtains the optimal rate.
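For background, a generic exponential weights update over a finite model class looks as follows; this is a plain sketch under assumed inputs, not the paper's quantile-adaptive algorithms.

```python
import numpy as np

def exp_weights(losses, eta):
    """Exponential weights over m models: losses[t, i] is model i's loss
    on example t; returns the mixture weights used at each round.  For an
    eta-exp-concave loss, predicting with the weighted mixture gives
    regret at most log(m)/eta."""
    n, m = losses.shape
    cum = np.zeros(m)
    history = np.empty((n, m))
    for t in range(n):
        w = np.exp(-eta * (cum - cum.min()))   # shift to stabilize exp
        history[t] = w / w.sum()
        cum += losses[t]
    return history
```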
| Additional Metadata | |
|---|---|
| Conference | International Conference on Artificial Intelligence and Statistics |
| Organisation | Machine Learning |
| Citation | Mehta, N. (2017). Fast rates with high probability in exp-concave statistical learning. In International Conference on Artificial Intelligence and Statistics (pp. 1085–1093). |