A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning
Deep representations have attracted much attention owing to their strong performance on various tasks. However, the limited interpretability of deep representations poses a major challenge for real-world applications. To alleviate this challenge, this paper proposes a deep matrix factorization method with non-negative constraints to learn interpretable, part-based deep representations of big data. Specifically, a deep architecture is designed with a supervisor network that suppresses noise in the data and a student network that learns interpretable deep representations, forming an end-to-end framework for pattern mining. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, comprising a symmetric loss, an apposition loss, and a non-negative constraint loss, which ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the deep matrix factorization method.
Main Authors: Chen, Zhikui; Jin, Shan; Liu, Runze; Zhang, Jianing
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8329448/ https://www.ncbi.nlm.nih.gov/pubmed/34354579 http://dx.doi.org/10.3389/fnbot.2021.701194
_version_ | 1783732506556628992 |
author | Chen, Zhikui; Jin, Shan; Liu, Runze; Zhang, Jianing |
author_facet | Chen, Zhikui; Jin, Shan; Liu, Runze; Zhang, Jianing |
author_sort | Chen, Zhikui |
collection | PubMed |
description | Deep representations have attracted much attention owing to their strong performance on various tasks. However, the limited interpretability of deep representations poses a major challenge for real-world applications. To alleviate this challenge, this paper proposes a deep matrix factorization method with non-negative constraints to learn interpretable, part-based deep representations of big data. Specifically, a deep architecture is designed with a supervisor network that suppresses noise in the data and a student network that learns interpretable deep representations, forming an end-to-end framework for pattern mining. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, comprising a symmetric loss, an apposition loss, and a non-negative constraint loss, which ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the deep matrix factorization method. |
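The description above outlines a deep non-negative matrix factorization: the data matrix is decomposed through several stacked factor layers, with non-negativity enforced so the learned representations stay part-based and interpretable. As a rough illustration of that core idea only (not the authors' supervisor–student architecture or their interpretability loss), the sketch below fits X ≈ W₁W₂…W_L H by projected gradient descent, clamping every factor to the non-negative orthant after each step; the function name, layer sizes, and hyperparameters are illustrative assumptions:

```python
import numpy as np
from functools import reduce

def _chain(mats, dim):
    """Product of a (possibly empty) list of matrices, seeded with I_dim."""
    return reduce(lambda A, B: A @ B, mats, np.eye(dim))

def deep_nmf(X, layer_sizes, n_iter=300, lr=1e-4, seed=0):
    """Fit X ≈ W_1 W_2 ... W_L H with all factors non-negative, by gradient
    steps on the squared Frobenius error followed by clamping at zero."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    dims = [m] + list(layer_sizes)
    Ws = [rng.random((dims[i], dims[i + 1])) for i in range(len(layer_sizes))]
    H = rng.random((dims[-1], n))
    for _ in range(n_iter):
        E = _chain(Ws + [H], m) - X                 # reconstruction residual
        for i in range(len(Ws)):
            left = _chain(Ws[:i], m)                 # W_1 ... W_{i-1}
            right = _chain(Ws[i + 1:] + [H], dims[i + 1])  # W_{i+1} ... W_L H
            grad = left.T @ E @ right.T              # d(0.5||E||^2)/dW_i
            Ws[i] = np.maximum(Ws[i] - lr * grad, 0.0)   # project onto >= 0
        H = np.maximum(H - lr * _chain(Ws, m).T @ E, 0.0)
    return Ws, H
```

Shrinking the layer sizes (e.g., 6 then 3 for a 12-row matrix) yields a hierarchy of increasingly compact non-negative parts. The paper's full method goes further: a supervisor network suppresses noise and transfers its knowledge to the student network through the symmetric, apposition, and non-negative constraint losses.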
format | Online Article Text |
id | pubmed-8329448 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8329448 2021-08-04 A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning. Chen, Zhikui; Jin, Shan; Liu, Runze; Zhang, Jianing. Front Neurorobot (Neuroscience). Deep representations have attracted much attention owing to their strong performance on various tasks. However, the limited interpretability of deep representations poses a major challenge for real-world applications. To alleviate this challenge, this paper proposes a deep matrix factorization method with non-negative constraints to learn interpretable, part-based deep representations of big data. Specifically, a deep architecture is designed with a supervisor network that suppresses noise in the data and a student network that learns interpretable deep representations, forming an end-to-end framework for pattern mining. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, comprising a symmetric loss, an apposition loss, and a non-negative constraint loss, which ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the deep matrix factorization method. Frontiers Media S.A. 2021-07-20 /pmc/articles/PMC8329448/ /pubmed/34354579 http://dx.doi.org/10.3389/fnbot.2021.701194 Text en Copyright © 2021 Chen, Jin, Liu and Zhang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience; Chen, Zhikui; Jin, Shan; Liu, Runze; Zhang, Jianing; A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning |
title | A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning |
title_full | A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning |
title_fullStr | A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning |
title_full_unstemmed | A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning |
title_short | A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning |
title_sort | deep non-negative matrix factorization model for big data representation learning |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8329448/ https://www.ncbi.nlm.nih.gov/pubmed/34354579 http://dx.doi.org/10.3389/fnbot.2021.701194 |
work_keys_str_mv | AT chenzhikui adeepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning AT jinshan adeepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning AT liurunze adeepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning AT zhangjianing adeepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning AT chenzhikui deepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning AT jinshan deepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning AT liurunze deepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning AT zhangjianing deepnonnegativematrixfactorizationmodelforbigdatarepresentationlearning |