
Structural Compression of Convolutional Neural Networks with Applications in Interpretability

Deep convolutional neural networks (CNNs) have been successful in many tasks in machine vision; however, millions of weights in the form of thousands of convolutional filters in CNNs make them difficult for human interpretation or understanding in science. In this article, we introduce a greedy structural compression scheme to obtain smaller and more interpretable CNNs, while achieving close to original accuracy. The compression is based on pruning filters with the least contribution to the classification accuracy, i.e., those with the lowest Classification Accuracy Reduction (CAR) importance index. We demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities, such as color filters. These compressed networks are easier to interpret because they retain the filter diversity of uncompressed networks with an order of magnitude fewer filters. Finally, a variant of CAR is introduced to quantify the importance of each image category to each CNN filter. Specifically, the most and the least important class labels are shown to be meaningful interpretations of each filter.
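
The abstract above outlines a greedy CAR-based structural compression: each convolutional filter is scored by how much classification accuracy drops when it is removed, and the lowest-scoring filters are pruned first. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the model_accuracy helper, the synthetic per-filter contributions, and the target filter count are hypothetical stand-ins for a real CNN and validation set.

import numpy as np

rng = np.random.default_rng(0)
n_filters = 16
# Synthetic per-filter "contributions" standing in for a real CNN evaluation.
contributions = rng.uniform(0.0, 0.05, size=n_filters)

def model_accuracy(mask):
    # Toy stand-in for evaluating the network on a validation set with only the
    # filters flagged True in `mask` kept: a base rate plus the contribution of
    # each kept filter. A real implementation would run inference here.
    return 0.5 + contributions[mask].sum()

def car_index(mask, j):
    # CAR importance of filter j: the drop in classification accuracy when
    # filter j is pruned from the currently kept set.
    pruned = mask.copy()
    pruned[j] = False
    return model_accuracy(mask) - model_accuracy(pruned)

# Greedy structural compression: repeatedly prune the filter with the lowest CAR index.
mask = np.ones(n_filters, dtype=bool)
target = 4  # number of filters to keep (illustrative choice)
while mask.sum() > target:
    kept = np.flatnonzero(mask)
    scores = [car_index(mask, j) for j in kept]
    mask[kept[int(np.argmin(scores))]] = False

print("kept filters:", np.flatnonzero(mask).tolist())
print("accuracy after pruning:", round(float(model_accuracy(mask)), 3))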

Bibliographic Details
Main Authors: Abbasi-Asl, Reza; Yu, Bin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Big Data
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8427695/
https://www.ncbi.nlm.nih.gov/pubmed/34514381
http://dx.doi.org/10.3389/fdata.2021.704182
author Abbasi-Asl, Reza
Yu, Bin
collection PubMed
description Deep convolutional neural networks (CNNs) have been successful in many tasks in machine vision; however, millions of weights in the form of thousands of convolutional filters in CNNs make them difficult for human interpretation or understanding in science. In this article, we introduce a greedy structural compression scheme to obtain smaller and more interpretable CNNs, while achieving close to original accuracy. The compression is based on pruning filters with the least contribution to the classification accuracy, i.e., those with the lowest Classification Accuracy Reduction (CAR) importance index. We demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities, such as color filters. These compressed networks are easier to interpret because they retain the filter diversity of uncompressed networks with an order of magnitude fewer filters. Finally, a variant of CAR is introduced to quantify the importance of each image category to each CNN filter. Specifically, the most and the least important class labels are shown to be meaningful interpretations of each filter.
format Online
Article
Text
id pubmed-8427695
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8427695 2021-09-10. Structural Compression of Convolutional Neural Networks with Applications in Interpretability. Abbasi-Asl, Reza; Yu, Bin. Front Big Data (Big Data). Frontiers Media S.A., 2021-08-26. /pmc/articles/PMC8427695/ /pubmed/34514381 http://dx.doi.org/10.3389/fdata.2021.704182. Text en. Copyright © 2021 Abbasi-Asl and Yu. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Structural Compression of Convolutional Neural Networks with Applications in Interpretability
title Structural Compression of Convolutional Neural Networks with Applications in Interpretability
title_full Structural Compression of Convolutional Neural Networks with Applications in Interpretability
title_fullStr Structural Compression of Convolutional Neural Networks with Applications in Interpretability
title_full_unstemmed Structural Compression of Convolutional Neural Networks with Applications in Interpretability
title_short Structural Compression of Convolutional Neural Networks with Applications in Interpretability
title_sort structural compression of convolutional neural networks with applications in interpretability
topic Big Data
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8427695/
https://www.ncbi.nlm.nih.gov/pubmed/34514381
http://dx.doi.org/10.3389/fdata.2021.704182