2010
Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions
Publication
In this paper we consider the problem of learning an n × n kernel matrix from m (> 1) similarity matrices under general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms which require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle Cartesian products of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n/ε²) iterations; in each iteration one solves an MKL involving m kernels and m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decompositions. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
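The paper's EMKL algorithm itself is not reproduced here, but the abstract's per-iteration cost (one eigen-decomposition per n × n block) can be illustrated with a minimal sketch of an entropy-based mirror descent step over a Cartesian product of unit-trace psd matrices. This is a generic matrix-exponentiated-gradient update under the von Neumann entropy mirror map, not the authors' exact method; the function names, the step size `eta`, and the unit-trace normalization are illustrative assumptions.

```python
import numpy as np

def matrix_exp_sym(A):
    """exp(A) for a symmetric matrix, via one eigen-decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def matrix_log_sym(A):
    """log(A) for a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def md_step(Ks, grads, eta):
    """One mirror descent step over a Cartesian product of m unit-trace
    psd blocks (von Neumann entropy mirror map). Each block update is a
    matrix-exponentiated-gradient step, so the iteration costs m
    eigen-decompositions of n x n matrices, matching the per-iteration
    cost stated in the abstract."""
    out = []
    for K, G in zip(Ks, grads):
        # Exponentiated-gradient update exp(log K - eta * G),
        # renormalized back to unit trace to stay in the feasible set.
        M = matrix_exp_sym(matrix_log_sym(K) - eta * G)
        out.append(M / np.trace(M))
    return out
```

In this reading, avoiding these eigen-decompositions (as the abstract says REKL does) is what makes the restricted variant faster per iteration.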
| Additional Metadata | |
|---|---|
| Conference | Annual Conference on Advances in Neural Information Processing Systems |
| Organisation | Centrum Wiskunde & Informatica, Amsterdam (CWI), The Netherlands |
| Citation | Kundu, A., Tankasali, V., Bhattacharyya, C., & Ben-Tal, A. (2010). Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions. In Advances in Neural Information Processing Systems. |