SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction

The application of deep neural networks (DNN) in edge computing has emerged as a consequence of the need for real-time, distributed responses from different devices in a large number of scenarios. To this end, shrinking these original structures is urgent due to the high number of parameters needed to represent them. As a consequence, the most representative components of the different layers are kept in order to keep the network's accuracy as close as possible to that of the entire network. To do so, two different approaches have been developed in this work. First, the Sparse Low Rank Method (SLR) has been applied to two different Fully Connected (FC) layers to observe its effect on the final response, and the method has also been applied to the last of these layers as a duplicate. In contrast, SLRProp has been proposed as a variant in which the relevance of each component of the previous FC layer is weighed as the sum of the products of that neuron's absolute value and the relevances of the neurons in the last FC layer that are connected to it. Thus, the relationship of relevances across layers was considered. Experiments have been carried out on well-known architectures to determine whether the relevances across layers have less effect on the final response of the network than the independent intra-layer relevances.
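
The abstract does not give the exact formula, so the following is only a minimal NumPy sketch of one plausible reading of the SLRProp relevance rule, not the authors' implementation; the function name propagate_relevance, the boolean connectivity mask, and the use of the previous layer's neuron values as the "absolute values" are illustrative assumptions.

import numpy as np

def propagate_relevance(prev_values, last_relevance, connectivity):
    # prev_values: (n_prev,) values of the previous FC layer's neurons (assumed measure).
    # last_relevance: (n_last,) relevances already computed for the last FC layer.
    # connectivity: (n_prev, n_last) boolean mask of connections (all True for a dense FC layer).
    # Relevance of neuron i = |value_i| * sum of the relevances of the connected last-layer neurons.
    connected_relevance = (connectivity * last_relevance[np.newaxis, :]).sum(axis=1)
    return np.abs(prev_values) * connected_relevance

# Toy usage: a dense 4-neuron to 3-neuron FC connection; the lowest-relevance neurons
# would be the pruning candidates.
prev_values = np.array([0.5, -1.2, 0.1, 2.0])
last_relevance = np.array([0.3, 0.9, 0.1])
mask = np.ones((4, 3), dtype=bool)
print(propagate_relevance(prev_values, last_relevance, mask))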

Bibliographic Details
Main Authors: Garmendia-Orbegozo, Asier, Nuñez-Gonzalez, Jose David, Anton, Miguel Angel
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006865/
https://www.ncbi.nlm.nih.gov/pubmed/36904922
http://dx.doi.org/10.3390/s23052718
_version_ 1784905376601210880
author Garmendia-Orbegozo, Asier
Nuñez-Gonzalez, Jose David
Anton, Miguel Angel
author_facet Garmendia-Orbegozo, Asier
Nuñez-Gonzalez, Jose David
Anton, Miguel Angel
author_sort Garmendia-Orbegozo, Asier
collection PubMed
description The application of deep neural networks (DNN) in edge computing has emerged as a consequence of the need for real-time, distributed responses from different devices in a large number of scenarios. To this end, shrinking these original structures is urgent due to the high number of parameters needed to represent them. As a consequence, the most representative components of the different layers are kept in order to keep the network's accuracy as close as possible to that of the entire network. To do so, two different approaches have been developed in this work. First, the Sparse Low Rank Method (SLR) has been applied to two different Fully Connected (FC) layers to observe its effect on the final response, and the method has also been applied to the last of these layers as a duplicate. In contrast, SLRProp has been proposed as a variant in which the relevance of each component of the previous FC layer is weighed as the sum of the products of that neuron's absolute value and the relevances of the neurons in the last FC layer that are connected to it. Thus, the relationship of relevances across layers was considered. Experiments have been carried out on well-known architectures to determine whether the relevances across layers have less effect on the final response of the network than the independent intra-layer relevances.
format Online
Article
Text
id pubmed-10006865
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10006865 2023-03-12 SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction Garmendia-Orbegozo, Asier Nuñez-Gonzalez, Jose David Anton, Miguel Angel Sensors (Basel) Article The application of deep neural networks (DNN) in edge computing has emerged as a consequence of the need for real-time, distributed responses from different devices in a large number of scenarios. To this end, shrinking these original structures is urgent due to the high number of parameters needed to represent them. As a consequence, the most representative components of the different layers are kept in order to keep the network's accuracy as close as possible to that of the entire network. To do so, two different approaches have been developed in this work. First, the Sparse Low Rank Method (SLR) has been applied to two different Fully Connected (FC) layers to observe its effect on the final response, and the method has also been applied to the last of these layers as a duplicate. In contrast, SLRProp has been proposed as a variant in which the relevance of each component of the previous FC layer is weighed as the sum of the products of that neuron's absolute value and the relevances of the neurons in the last FC layer that are connected to it. Thus, the relationship of relevances across layers was considered. Experiments have been carried out on well-known architectures to determine whether the relevances across layers have less effect on the final response of the network than the independent intra-layer relevances. MDPI 2023-03-02 /pmc/articles/PMC10006865/ /pubmed/36904922 http://dx.doi.org/10.3390/s23052718 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Garmendia-Orbegozo, Asier
Nuñez-Gonzalez, Jose David
Anton, Miguel Angel
SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction
title SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction
title_full SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction
title_fullStr SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction
title_full_unstemmed SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction
title_short SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction
title_sort slrprop: a back-propagation variant of sparse low rank method for dnns reduction
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006865/
https://www.ncbi.nlm.nih.gov/pubmed/36904922
http://dx.doi.org/10.3390/s23052718
work_keys_str_mv AT garmendiaorbegozoasier slrpropabackpropagationvariantofsparselowrankmethodfordnnsreduction
AT nunezgonzalezjosedavid slrpropabackpropagationvariantofsparselowrankmethodfordnnsreduction
AT antonmiguelangel slrpropabackpropagationvariantofsparselowrankmethodfordnnsreduction