
Kernel-wise difference minimization for convolutional neural network compression in metaverse

Convolutional neural networks have achieved remarkable success in computer vision research. However, to further improve their performance, network models have become increasingly complex and require more memory and computational resources. As a result, model compression has become an essential area of research in recent years. In this study, we focus on the best-case scenario for Huffman coding, which involves data with lower entropy. Building on this concept, we formulate a compression with a filter-wise difference minimization problem and propose a novel algorithm to solve it. Our approach involves filter-level pruning, followed by minimizing the difference between filters. Additionally, we perform filter permutation to further enhance compression. Our proposed algorithm achieves a compression rate of 94× on Lenet-5 and 50× on VGG16. The results demonstrate the effectiveness of our method in significantly reducing the size of deep neural networks while maintaining a high level of accuracy. We believe that our approach holds great promise in advancing the field of model compression and can benefit various applications that require efficient neural network models. Overall, this study provides important insights and contributions toward addressing the challenges of model compression in deep neural networks.

Bibliographic Details
Main Author: Chang, Yi-Ting
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Big Data
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10438991/
https://www.ncbi.nlm.nih.gov/pubmed/37600500
http://dx.doi.org/10.3389/fdata.2023.1200382
author Chang, Yi-Ting
collection PubMed
description Convolutional neural networks have achieved remarkable success in computer vision research. However, to further improve their performance, network models have become increasingly complex and require more memory and computational resources. As a result, model compression has become an essential area of research in recent years. In this study, we focus on the best-case scenario for Huffman coding, which involves data with lower entropy. Building on this concept, we formulate a compression with a filter-wise difference minimization problem and propose a novel algorithm to solve it. Our approach involves filter-level pruning, followed by minimizing the difference between filters. Additionally, we perform filter permutation to further enhance compression. Our proposed algorithm achieves a compression rate of 94× on Lenet-5 and 50× on VGG16. The results demonstrate the effectiveness of our method in significantly reducing the size of deep neural networks while maintaining a high level of accuracy. We believe that our approach holds great promise in advancing the field of model compression and can benefit various applications that require efficient neural network models. Overall, this study provides important insights and contributions toward addressing the challenges of model compression in deep neural networks.
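The abstract's starting premise — Huffman coding compresses best when the data has low entropy — and the intuition behind difference minimization (subtracting similar filters concentrates values near zero, lowering entropy) can be illustrated with a small sketch. This is not the paper's algorithm; all helper names below are illustrative only.

```python
import heapq
from collections import Counter

def huffman_code_lengths(data):
    """Build Huffman code lengths for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (total frequency, tiebreaker, {symbol: code length}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two least-frequent subtrees; every symbol inside them
        # moves one level deeper, so its code grows by one bit.
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: length + 1 for s, length in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def encoded_bits(data):
    """Total bits needed to Huffman-encode `data`."""
    lengths = huffman_code_lengths(data)
    freq = Counter(data)
    return sum(freq[s] * lengths[s] for s in freq)

# A skewed (low-entropy) stream versus a uniform stream of the same length:
low_entropy = [0] * 90 + [1] * 5 + [2] * 5
uniform = list(range(10)) * 10
print(encoded_bits(low_entropy), encoded_bits(uniform))  # the skewed stream needs far fewer bits

# Delta-encoding two similar filters (the rough idea behind difference
# minimization) yields values clustered at zero — a low-entropy stream:
a = [3, 7, 2, 9]
b = [3, 8, 2, 9]
delta = [x - y for x, y in zip(b, a)]  # [0, 1, 0, 0]
```

The more similar adjacent filters are made (via difference minimization and permutation), the more the delta stream is dominated by a few small values, which is exactly the regime where Huffman coding pays off.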
format Online Article Text
id pubmed-10438991
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10438991 2023-08-19 Kernel-wise difference minimization for convolutional neural network compression in metaverse Chang, Yi-Ting Front Big Data Big Data Frontiers Media S.A. 2023-08-04 /pmc/articles/PMC10438991/ /pubmed/37600500 http://dx.doi.org/10.3389/fdata.2023.1200382 Text en Copyright © 2023 Chang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Kernel-wise difference minimization for convolutional neural network compression in metaverse
topic Big Data