Robust Generalized Low Rank Approximations of Matrices
Main Authors: | Shi, Jiarong; Yang, Wei; Zheng, Xiuyun |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2015 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4569376/ https://www.ncbi.nlm.nih.gov/pubmed/26367116 http://dx.doi.org/10.1371/journal.pone.0138028 |
_version_ | 1782390034454806528 |
author | Shi, Jiarong Yang, Wei Zheng, Xiuyun |
author_facet | Shi, Jiarong Yang, Wei Zheng, Xiuyun |
author_sort | Shi, Jiarong |
collection | PubMed |
description | In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority over the SVD in computation time and compression ratio. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an ℓ1-norm optimization problem which minimizes the ℓ1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this ℓ1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to the general tensor case. Finally, extensive experiments on synthetic data show that RGLRAM can exactly recover both the low rank and the sparse components, while this may be difficult for previous state-of-the-art algorithms. We also discuss three issues concerning RGLRAM: sensitivity to initialization, generalization ability, and the relationship between running time and the size/number of matrices. Moreover, experimental results on face images with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than other methods. |
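The description outlines the RGLRAM scheme: an ℓ1-norm objective over the approximation errors, solved by Augmented Lagrange Multipliers with GLRAM-style bilateral projections L and R shared across all matrices. Below is a minimal Python sketch of that general pattern, not the paper's exact algorithm: the function names (`rglram`, `soft_threshold`), the identity initialization, and the parameters `mu`, `rho`, `tol`, `max_iter` are all assumptions for illustration, and the update order follows the common inexact-ALM recipe (soft-thresholding for the ℓ1 proximal step, eigen-updates for the projections).

```python
import numpy as np

def soft_threshold(A, tau):
    # Entrywise soft-thresholding: the proximal operator of the l1-norm.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def rglram(Xs, r1, r2, mu=1e-3, rho=1.5, tol=1e-7, max_iter=500):
    """Hypothetical sketch: split each X_i into L @ M_i @ R.T + E_i,
    shrinking sum_i ||E_i||_1 with an inexact ALM loop."""
    m, n = Xs[0].shape
    L = np.eye(m, r1)                    # assumed initialization
    R = np.eye(n, r2)
    Es = [np.zeros_like(X) for X in Xs]  # sparse error estimates
    Ys = [np.zeros_like(X) for X in Xs]  # Lagrange multipliers
    for _ in range(max_iter):
        # Work on the "denoised" matrices shifted by the multipliers.
        Ds = [X - E + Y / mu for X, E, Y in zip(Xs, Es, Ys)]
        # GLRAM-style updates: top eigenvectors of the aggregate
        # covariance-like sums give the left and right projections.
        SL = sum(D @ R @ R.T @ D.T for D in Ds)
        L = np.linalg.eigh(SL)[1][:, -r1:]
        SR = sum(D.T @ L @ L.T @ D for D in Ds)
        R = np.linalg.eigh(SR)[1][:, -r2:]
        Ms = [L.T @ D @ R for D in Ds]   # per-matrix core matrices
        # l1 proximal step: soft-threshold the residuals.
        Es = [soft_threshold(X - L @ M @ R.T + Y / mu, 1.0 / mu)
              for X, M, Y in zip(Xs, Ms, Ys)]
        # Dual ascent on the multipliers, then grow the penalty mu.
        resid = [X - L @ M @ R.T - E for X, M, E in zip(Xs, Ms, Es)]
        Ys = [Y + mu * r for Y, r in zip(Ys, resid)]
        mu *= rho
        if max(np.abs(r).max() for r in resid) < tol:
            break
    return L, Ms, R, Es
```

Given a list of corrupted matrices `Xs`, calling `L, Ms, R, Es = rglram(Xs, r1, r2)` would return the shared projections, the per-matrix cores, and the sparse error estimates; the low rank reconstruction of `Xs[i]` is then `L @ Ms[i] @ R.T`.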
format | Online Article Text |
id | pubmed-4569376 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-4569376 2015-09-18 Robust Generalized Low Rank Approximations of Matrices Shi, Jiarong Yang, Wei Zheng, Xiuyun PLoS One Research Article In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority over the SVD in computation time and compression ratio. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an ℓ1-norm optimization problem which minimizes the ℓ1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this ℓ1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to the general tensor case. Finally, extensive experiments on synthetic data show that RGLRAM can exactly recover both the low rank and the sparse components, while this may be difficult for previous state-of-the-art algorithms. We also discuss three issues concerning RGLRAM: sensitivity to initialization, generalization ability, and the relationship between running time and the size/number of matrices. Moreover, experimental results on face images with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than other methods. Public Library of Science 2015-09-14 /pmc/articles/PMC4569376/ /pubmed/26367116 http://dx.doi.org/10.1371/journal.pone.0138028 Text en © 2015 Shi et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article Shi, Jiarong Yang, Wei Zheng, Xiuyun Robust Generalized Low Rank Approximations of Matrices |
title | Robust Generalized Low Rank Approximations of Matrices |
title_full | Robust Generalized Low Rank Approximations of Matrices |
title_fullStr | Robust Generalized Low Rank Approximations of Matrices |
title_full_unstemmed | Robust Generalized Low Rank Approximations of Matrices |
title_short | Robust Generalized Low Rank Approximations of Matrices |
title_sort | robust generalized low rank approximations of matrices |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4569376/ https://www.ncbi.nlm.nih.gov/pubmed/26367116 http://dx.doi.org/10.1371/journal.pone.0138028 |
work_keys_str_mv | AT shijiarong robustgeneralizedlowrankapproximationsofmatrices AT yangwei robustgeneralizedlowrankapproximationsofmatrices AT zhengxiuyun robustgeneralizedlowrankapproximationsofmatrices |