
Compressing Deep Networks by Neuron Agglomerative Clustering

In recent years, deep learning models have achieved remarkable successes in various applications, such as pattern recognition, computer vision, and signal processing. However, high-performance deep architectures are often accompanied by a large storage space and long computational time, which make it difficult to fully exploit many deep neural networks (DNNs), especially in scenarios in which computing resources are limited. In this paper, to tackle this problem, we introduce a method for compressing the structure and parameters of DNNs based on neuron agglomerative clustering (NAC). Specifically, we utilize the agglomerative clustering algorithm to find similar neurons; these similar neurons and the connections linked to them are then agglomerated together. Using NAC, the number of parameters and the storage space of DNNs are greatly reduced, without the support of an extra library or hardware. Extensive experiments demonstrate that NAC is very effective for the neuron agglomeration of both the fully connected and convolutional layers, which are common building blocks of DNNs, delivering similar or even higher network accuracy. Specifically, on the benchmark CIFAR-10 and CIFAR-100 datasets, using NAC to compress the parameters of the original VGGNet by 92.96% and 81.10%, respectively, the compact networks obtained still outperform the original networks.
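
The abstract describes the core of NAC: cluster neurons whose weight vectors are similar, then merge each cluster, together with its connections, into a single representative neuron. Below is a minimal Python sketch of that idea for a single fully connected layer. The specific choices here are illustrative assumptions, not the paper's exact formulation: neurons are represented by their incoming weight vectors, merged neurons average their incoming weights and biases, and outgoing weights are summed so the next layer's pre-activations stay approximately unchanged.

    # Sketch of neuron agglomerative clustering (NAC) for one fully
    # connected layer. Merge rules are illustrative assumptions, not
    # the paper's exact method.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def compress_fc_layer(W_in, b, W_out, n_clusters):
        """Agglomerate similar neurons of one hidden layer.

        W_in  : (n_neurons, n_inputs)  incoming weights (one row per neuron)
        b     : (n_neurons,)           biases
        W_out : (n_next, n_neurons)    outgoing weights to the next layer
        """
        # Group neurons whose incoming weight vectors are similar.
        labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(W_in)

        W_in_c = np.zeros((n_clusters, W_in.shape[1]))
        b_c = np.zeros(n_clusters)
        W_out_c = np.zeros((W_out.shape[0], n_clusters))
        for k in range(n_clusters):
            members = labels == k
            # Representative neuron: average of the cluster's incoming weights.
            W_in_c[k] = W_in[members].mean(axis=0)
            b_c[k] = b[members].mean()
            # Sum outgoing weights so downstream pre-activations are
            # approximately preserved (similar neurons fire similarly).
            W_out_c[:, k] = W_out[:, members].sum(axis=1)
        return W_in_c, b_c, W_out_c

    # Example: shrink a 512-neuron hidden layer to 64 agglomerated neurons.
    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(512, 256))
    b = rng.normal(size=512)
    W_out = rng.normal(size=(10, 512))
    W_in_c, b_c, W_out_c = compress_fc_layer(W_in, b, W_out, n_clusters=64)
    print(W_in_c.shape, W_out_c.shape)  # (64, 256) (10, 64)

Applied layer by layer, this kind of merging shrinks both the parameter count and the stored weight matrices using only standard array operations, which is consistent with the abstract's claim that NAC needs no extra library or hardware support.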


Bibliographic Details
Main Authors: Wang, Li-Na, Liu, Wenxue, Liu, Xiang, Zhong, Guoqiang, Roy, Partha Pratim, Dong, Junyu, Huang, Kaizhu
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660330/
https://www.ncbi.nlm.nih.gov/pubmed/33114078
http://dx.doi.org/10.3390/s20216033
_version_ 1783608984934023168
author Wang, Li-Na
Liu, Wenxue
Liu, Xiang
Zhong, Guoqiang
Roy, Partha Pratim
Dong, Junyu
Huang, Kaizhu
author_facet Wang, Li-Na
Liu, Wenxue
Liu, Xiang
Zhong, Guoqiang
Roy, Partha Pratim
Dong, Junyu
Huang, Kaizhu
author_sort Wang, Li-Na
collection PubMed
description In recent years, deep learning models have achieved remarkable successes in various applications, such as pattern recognition, computer vision, and signal processing. However, high-performance deep architectures are often accompanied by a large storage space and long computational time, which make it difficult to fully exploit many deep neural networks (DNNs), especially in scenarios in which computing resources are limited. In this paper, to tackle this problem, we introduce a method for compressing the structure and parameters of DNNs based on neuron agglomerative clustering (NAC). Specifically, we utilize the agglomerative clustering algorithm to find similar neurons; these similar neurons and the connections linked to them are then agglomerated together. Using NAC, the number of parameters and the storage space of DNNs are greatly reduced, without the support of an extra library or hardware. Extensive experiments demonstrate that NAC is very effective for the neuron agglomeration of both the fully connected and convolutional layers, which are common building blocks of DNNs, delivering similar or even higher network accuracy. Specifically, on the benchmark CIFAR-10 and CIFAR-100 datasets, using NAC to compress the parameters of the original VGGNet by 92.96% and 81.10%, respectively, the compact networks obtained still outperform the original networks.
format Online
Article
Text
id pubmed-7660330
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-76603302020-11-13 Compressing Deep Networks by Neuron Agglomerative Clustering Wang, Li-Na Liu, Wenxue Liu, Xiang Zhong, Guoqiang Roy, Partha Pratim Dong, Junyu Huang, Kaizhu Sensors (Basel) Article In recent years, deep learning models have achieved remarkable successes in various applications, such as pattern recognition, computer vision, and signal processing. However, high-performance deep architectures are often accompanied by a large storage space and long computational time, which make it difficult to fully exploit many deep neural networks (DNNs), especially in scenarios in which computing resources are limited. In this paper, to tackle this problem, we introduce a method for compressing the structure and parameters of DNNs based on neuron agglomerative clustering (NAC). Specifically, we utilize the agglomerative clustering algorithm to find similar neurons; these similar neurons and the connections linked to them are then agglomerated together. Using NAC, the number of parameters and the storage space of DNNs are greatly reduced, without the support of an extra library or hardware. Extensive experiments demonstrate that NAC is very effective for the neuron agglomeration of both the fully connected and convolutional layers, which are common building blocks of DNNs, delivering similar or even higher network accuracy. Specifically, on the benchmark CIFAR-10 and CIFAR-100 datasets, using NAC to compress the parameters of the original VGGNet by 92.96% and 81.10%, respectively, the compact networks obtained still outperform the original networks. MDPI 2020-10-23 /pmc/articles/PMC7660330/ /pubmed/33114078 http://dx.doi.org/10.3390/s20216033 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Wang, Li-Na
Liu, Wenxue
Liu, Xiang
Zhong, Guoqiang
Roy, Partha Pratim
Dong, Junyu
Huang, Kaizhu
Compressing Deep Networks by Neuron Agglomerative Clustering
title Compressing Deep Networks by Neuron Agglomerative Clustering
title_full Compressing Deep Networks by Neuron Agglomerative Clustering
title_fullStr Compressing Deep Networks by Neuron Agglomerative Clustering
title_full_unstemmed Compressing Deep Networks by Neuron Agglomerative Clustering
title_short Compressing Deep Networks by Neuron Agglomerative Clustering
title_sort compressing deep networks by neuron agglomerative clustering
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660330/
https://www.ncbi.nlm.nih.gov/pubmed/33114078
http://dx.doi.org/10.3390/s20216033
work_keys_str_mv AT wanglina compressingdeepnetworksbyneuronagglomerativeclustering
AT liuwenxue compressingdeepnetworksbyneuronagglomerativeclustering
AT liuxiang compressingdeepnetworksbyneuronagglomerativeclustering
AT zhongguoqiang compressingdeepnetworksbyneuronagglomerativeclustering
AT royparthapratim compressingdeepnetworksbyneuronagglomerativeclustering
AT dongjunyu compressingdeepnetworksbyneuronagglomerativeclustering
AT huangkaizhu compressingdeepnetworksbyneuronagglomerativeclustering