
Limited-Memory Fast Gradient Descent Method for Graph Regularized Nonnegative Matrix Factorization

Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X into the product of two lower-rank nonnegative factor matrices, i.e., X ≈ WH, where the shared rank of the factors W and H is much smaller than the dimensions of X, and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or the Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it converges slowly because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method was proposed for optimizing NMF; it accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because (1) the high-dimensional Hessian matrix is dense and requires too much memory, and (2) inverting the Hessian and multiplying the inverse by the gradient take too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the product of the inverse Hessian and the gradient used to search for the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance when optimizing KL-divergence based GNMF on two popular face image datasets (ORL and PIE) and two text corpora (Reuters and TDT2). The experimental results confirm the effectiveness of L-FGD in comparison with representative GNMF solvers.
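For concreteness, the multiplicative update rule the abstract refers to is, for plain KL-divergence NMF, the classic Lee-Seung update. Below is a minimal NumPy sketch of it; it omits GNMF's graph-regularization term, and the function name, initialization, and iteration count are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def mur_kl_nmf(X, r, iters=200, eps=1e-10, seed=0):
        # Lee-Seung multiplicative updates for KL-divergence NMF.
        # Each sweep multiplies the factors by nonnegative ratios, i.e. one
        # step along the rescaled negative gradient with a fixed,
        # non-optimal step size: the source of MUR's slow convergence.
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, r)) + eps  # nonnegative random initialization
        H = rng.random((r, n)) + eps
        for _ in range(iters):
            WH = W @ H + eps
            # H_aj <- H_aj * (sum_i W_ia X_ij / (WH)_ij) / (sum_i W_ia)
            H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)
            WH = W @ H + eps
            # W_ia <- W_ia * (sum_j H_aj X_ij / (WH)_ij) / (sum_j H_aj)
            W *= ((X / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
        return W, H

Calling W, H = mur_kl_nmf(X, r=10) on any nonnegative array X returns nonnegative factors with X ≈ WH; the fixed multiplicative step size here is exactly what MFGD and L-FGD replace with an optimized one.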

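The key device in L-FGD, per the abstract, is using L-BFGS to approximate the product of the inverse Hessian and the gradient during the step-size search, instead of forming and inverting the dense Hessian as MFGD does. The sketch below is the generic textbook two-loop recursion under assumed variable names, not the paper's code.

    import numpy as np

    def two_loop_direction(grad, s_hist, y_hist):
        # Standard L-BFGS two-loop recursion (Nocedal): approximate
        # (inverse Hessian) @ grad from the stored curvature pairs
        # s_k = x_{k+1} - x_k and y_k = grad_{k+1} - grad_k, so the
        # dense Hessian is never formed, stored, or inverted.
        q = grad.copy()
        rhos = [1.0 / (y @ s) for s, y in zip(s_hist, y_hist)]
        alphas = []
        for s, y, rho in zip(reversed(s_hist), reversed(y_hist), reversed(rhos)):
            a = rho * (s @ q)
            alphas.append(a)
            q -= a * y
        # Initial inverse-Hessian guess: gamma * I with gamma = s'y / y'y.
        gamma = (s_hist[-1] @ y_hist[-1]) / (y_hist[-1] @ y_hist[-1]) if s_hist else 1.0
        r = gamma * q
        for s, y, rho, a in zip(s_hist, y_hist, rhos, reversed(alphas)):
            b = rho * (y @ r)
            r += (a - b) * s
        return r  # step direction for minimization is -r

Storing only the last few (s, y) pairs keeps memory linear in the problem dimension, versus quadratic for the dense Hessian that makes MFGD expensive.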

Bibliographic Details
Main Authors: Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng
Format: Online, Article, Text
Language: English
Published: Public Library of Science, 2013
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3804530/
https://www.ncbi.nlm.nih.gov/pubmed/24204761
http://dx.doi.org/10.1371/journal.pone.0077162
collection PubMed
id pubmed-3804530
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal PLoS One
published 2013-10-21
license © 2013 Guan et al (http://creativecommons.org/licenses/by/4.0/). This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
topic Research Article