
L(2)-norm multiple kernel learning and its application to biomedical data fusion


Bibliographic Details
Main Authors: Yu, Shi; Falck, Tillmann; Daemen, Anneleen; Tranchevent, Leon-Charles; Suykens, Johan AK; De Moor, Bart; Moreau, Yves
Format: Text
Language: English
Published: BioMed Central 2010
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2906488/
https://www.ncbi.nlm.nih.gov/pubmed/20529363
http://dx.doi.org/10.1186/1471-2105-11-309
Description
Summary: BACKGROUND: This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL), such as L(∞), L(1), and L(2) MKL. In particular, L(2) MKL is a novel method that leads to non-sparse optimal kernel coefficients, in contrast to the sparse kernel coefficients optimized by the existing L(∞) MKL method. In real biomedical applications, L(2) MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources.

RESULTS: We provide a theoretical analysis of the relationship between the L(2) optimization of kernels in the dual problem and the L(2) coefficient regularization in the primal problem. Understanding the dual L(2) problem grants a unified view on MKL and enables us to extend the L(2) method to a wide range of machine learning problems. We implement L(2) MKL for ranking and classification problems and compare its performance with the sparse L(∞) and the averaging L(1) MKL methods. The experiments are carried out on six real biomedical data sets and two large-scale UCI data sets. L(2) MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L(2) MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for processing large-scale data sets.

CONCLUSIONS: This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources are relevant to the problem at hand and want to avoid the "winner-takes-all" effect seen in L(∞) MKL, which can be detrimental to performance in prospective studies. The notion of optimizing L(2) kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM-based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has performance comparable to the conventional SVM MKL algorithms. Moreover, large-scale numerical experiments indicate that, when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL.

AVAILABILITY: The MATLAB code of the algorithms implemented in this paper can be downloaded from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html.
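The abstract describes combining multiple kernels into one kernel matrix and training an LSSVM on the result. The sketch below is not the authors' MATLAB implementation; it is a minimal NumPy illustration of the idea, using the averaging (L1-style) baseline mentioned in the abstract, i.e. uniform weights over the candidate kernels rather than weights optimized by L(2) MKL. The kernel bandwidths, regularization constant, and toy data are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma):
    # Gaussian (RBF) kernel from pairwise squared Euclidean distances.
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def lssvm_train(K, y, C=1.0):
    # LSSVM dual: one linear system instead of a QP,
    #   [ 0    y^T        ] [ b     ]   [ 0 ]
    #   [ y    Omega + I/C ] [ alpha ] = [ 1 ]
    # with Omega_ij = y_i y_j K_ij.
    n = len(y)
    Omega = K * np.outer(y, y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

# Toy problem: two well-separated Gaussian blobs with labels -1 / +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (20, 2)),
               rng.normal(+1.0, 0.5, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])

# Two candidate kernels; uniform weights correspond to the averaging
# (L1) baseline — L(2) MKL would instead optimize non-sparse weights.
kernels = [rbf_kernel(X, X, g) for g in (0.5, 2.0)]
theta = np.full(len(kernels), 1.0 / len(kernels))
K = sum(t * Km for t, Km in zip(theta, kernels))

b, alpha = lssvm_train(K, y, C=10.0)
pred = np.sign(K @ (alpha * y) + b)
acc = np.mean(pred == y)            # training accuracy on the toy data
print(acc)
```

Because the LSSVM constraints are equalities, training reduces to solving a single symmetric linear system; this is the property the paper exploits when arguing that LSSVM MKL scales better than QP-based SVM MKL.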