Limited-Memory Fast Gradient Descent Method for Graph Regularized Nonnegative Matrix Factorization
Main Authors:
Format: Online Article, Text
Language: English
Published: Public Library of Science, 2013
Subjects:
Online Access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3804530/
- https://www.ncbi.nlm.nih.gov/pubmed/24204761
- http://dx.doi.org/10.1371/journal.pone.0077162
Summary: Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix $X \in \mathbb{R}_{+}^{m \times n}$ into the product of two lower-rank nonnegative factor matrices, i.e., $W \in \mathbb{R}_{+}^{m \times r}$ and $H \in \mathbb{R}_{+}^{r \times n}$ ($r < \min(m, n)$), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or the Kullback-Leibler (KL) divergence between $X$ and $WH$. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method was proposed for optimizing NMF; it accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory, and 2) the Hessian inverse operation and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the product of the inverse Hessian and the gradient when searching for the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD in comparison with representative GNMF solvers.
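The MUR criticized in the summary is, for plain NMF under the squared Euclidean loss, the classic Lee-Seung multiplicative update. The minimal sketch below (Python with NumPy; the function name `mur_nmf` is ours, and the graph-regularization term that GNMF adds to the `H` update is omitted for brevity) illustrates why each iteration is cheap yet convergence is slow: every update is one fixed-size step along the rescaled negative gradient.

```python
import numpy as np

def mur_nmf(X, r, iters=200, eps=1e-10):
    """Multiplicative update rule (MUR) for plain NMF under the
    squared Euclidean loss (Lee & Seung). GNMF additionally folds
    a graph Laplacian term into the H update, omitted here."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # Each factor moves one step along the rescaled negative
        # gradient with a fixed, non-optimal step size -- the
        # source of MUR's slow convergence noted in the summary.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```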
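The device L-FGD borrows from L-BFGS is the standard two-loop recursion, which approximates the product of the inverse Hessian with the gradient from a handful of stored curvature pairs instead of forming or inverting the dense Hessian. A hedged sketch follows (the function name and interface are ours, not the paper's); in L-FGD this product would drive the Newton search for the optimal step size.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: approximate
    (inverse Hessian) @ grad from the k most recent curvature
    pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i.
    Assumes at least one stored pair, newest appended last."""
    q = grad.copy()
    alphas = []
    # First loop: walk the pairs from newest to oldest.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    # Scale by gamma * I, the usual initial inverse-Hessian guess.
    s, y = s_list[-1], y_list[-1]
    r = q * ((s @ y) / (y @ y))
    # Second loop: walk back from oldest to newest.
    for a, rho, s, y in reversed(alphas):
        b = rho * (y @ r)
        r += (a - b) * s
    return r  # approximates H^{-1} @ grad in O(k * dim) time
```

The appeal, as the summary argues, is that memory and time scale with the number of stored pairs and the problem dimension rather than with the square of the dimension, which is what makes the dense Hessian of MFGD prohibitive.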