Applications of Spectral Gradient Algorithm for Solving Matrix ℓ(2,1)-Norm Minimization Problems in Machine Learning

Bibliographic Details
Main Authors: Xiao, Yunhai; Wang, Qiuyu; Liu, Lihong
Format: Online Article Text
Language: English
Published: Public Library of Science, 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5115710/
https://www.ncbi.nlm.nih.gov/pubmed/27861526
http://dx.doi.org/10.1371/journal.pone.0166169
Description
Summary: The main purpose of this study is to propose, analyze, and test a spectral gradient algorithm for solving a convex minimization problem. The considered problem covers the matrix ℓ(2,1)-norm regularized least-squares model, which is widely used in multi-task learning to capture features shared across tasks. To solve the problem, we first minimize a quadratic approximation of the objective function to derive a search direction at the current iterate. We show that this direction is automatically a descent direction and reduces to the original spectral gradient direction when the regularization term is removed. Second, we incorporate a nonmonotone line search along this direction to improve the algorithm's numerical performance. Furthermore, we show that the proposed algorithm converges to a critical point under mild conditions. An attractive feature of the proposed algorithm is that it is easy to implement and requires only the gradient of the smooth function and the objective function values at each step. Finally, experiments on synthetic data verify that the proposed algorithm works well and outperforms the methods it is compared against.
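The approach described in the summary can be illustrated with a minimal sketch: a spectral (Barzilai–Borwein) step on the smooth least-squares term, a row-wise soft-thresholding step (the proximal operator of the ℓ(2,1)-norm), and a simple nonmonotone backtracking safeguard. This is not the authors' exact algorithm; function names, defaults, and the specific line-search rule here are illustrative assumptions.

```python
import numpy as np

def prox_l21(W, tau):
    """Row-wise soft-thresholding: proximal operator of tau * ||.||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12)) * W

def spectral_prox_grad(A, B, lam, max_iter=500, memory=5, tol=1e-10):
    """Sketch of a spectral gradient method with a nonmonotone safeguard for
    min_W 0.5*||A W - B||_F^2 + lam*||W||_{2,1} (illustrative, not the paper's code)."""
    W = np.zeros((A.shape[1], B.shape[1]))
    obj = lambda V: (0.5 * np.linalg.norm(A @ V - B) ** 2
                     + lam * np.sum(np.linalg.norm(V, axis=1)))
    grad = A.T @ (A @ W - B)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # safe initial step (1/Lipschitz)
    hist = [obj(W)]
    for _ in range(max_iter):
        step = alpha
        while True:  # nonmonotone backtracking: compare to the recent max
            W_new = prox_l21(W - step * grad, step * lam)
            if obj(W_new) <= max(hist[-memory:]) - 1e-12 or step < 1e-14:
                break
            step *= 0.5
        s = W_new - W
        if np.linalg.norm(s) < tol:
            W = W_new
            break
        grad_new = A.T @ (A @ W_new - B)
        y = grad_new - grad
        sy = np.sum(s * y)
        alpha = np.sum(s * s) / sy if sy > 1e-12 else 1.0  # BB spectral step
        W, grad = W_new, grad_new
        hist.append(obj(W))
    return W
```

Because the gradient of the smooth term is linear, the Barzilai–Borwein quotient here always lands between the reciprocals of the extreme eigenvalues of AᵀA, which is what makes the spectral step well behaved in practice.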