2011-12-01
Learning Eigenvectors for Free
Publication
Presented at the Annual Conference on Advances in Neural Information Processing Systems (NIPS 2011), Granada.
We extend the classical problem of predicting a sequence of outcomes from a finite alphabet to the matrix domain. In this extension, the alphabet of n outcomes is replaced by the set of all dyads, i.e. outer products uu^T where u is a unit-length vector in R^n. Whereas in the classical case the goal is to learn (i.e. sequentially predict as well as) the best multinomial distribution, in the matrix case we desire to learn the density matrix that best explains the observed sequence of dyads. We show how popular online algorithms for learning a multinomial distribution can be extended to learn density matrices. Intuitively, learning the n^2 parameters of a density matrix is much harder than learning the n parameters of a multinomial distribution. Surprisingly, we prove that the worst-case regrets of certain classical algorithms and their matrix generalizations are identical. The reason is that the worst-case sequence of dyads shares a common eigensystem, i.e. the worst-case regret is already achieved in the classical case. So these matrix algorithms learn the eigenvectors without any regret.
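To make the prediction setting concrete, here is a minimal sketch of a matrix analogue of a smoothed (Laplace/Krichevsky-Trofimov-style) multinomial estimator, in the spirit the abstract describes. The update rule, the smoothing constant `alpha`, and the function names are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

def dyad(u):
    """Outcome u u^T for a vector u in R^n (normalized here to unit length)."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    return np.outer(u, u)

def density_matrix_log_loss(dyads, alpha=0.5):
    """Sequential prediction of dyads with a smoothed empirical density matrix.

    Before trial t the learner predicts
        W_t = (S_{t-1} + alpha * I) / ((t-1) + alpha * n),
    where S_{t-1} is the sum of the dyads seen so far. Each dyad has trace 1,
    so W_t is positive definite with tr(W_t) = 1, i.e. a density matrix.
    On outcome u_t u_t^T the learner incurs the log loss
        -log tr(W_t u_t u_t^T) = -log u_t^T W_t u_t.
    """
    n = dyads[0].shape[0]
    S = np.zeros((n, n))
    total_loss = 0.0
    for t, X in enumerate(dyads):
        W = (S + alpha * np.eye(n)) / (t + alpha * n)
        total_loss += -np.log(np.trace(W @ X))
        S += X
    return total_loss

# When every dyad is e_i e_i^T for a standard basis vector e_i, W_t stays
# diagonal and the scheme reduces to the classical smoothed multinomial
# estimator -- the sense in which the matrix case contains the classical one.
rng = np.random.default_rng(0)
sequence = [dyad(rng.normal(size=4)) for _ in range(200)]
print(density_matrix_log_loss(sequence))
```

Read against the abstract, the regret claim says that for the right classical algorithms, moving from n multinomial parameters to an n x n density matrix costs nothing extra in the worst case: the eigenvectors are learned for free.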
Additional Metadata | |
---|---|
Status | in print |
Conference | Annual Conference on Advances in Neural Information Processing Systems |
Organisation | Algorithms and Complexity |
Koolen-Wijkstra, W., Kotlowski, W., & Warmuth, M. K. (2011). Learning Eigenvectors for Free. In Proceedings of the Annual Conference on Advances in Neural Information Processing Systems 2011 (in print).