Interpretable Neuron Structuring with Graph Spectral Regularization


Bibliographic Details
Main Authors: Tong, Alexander, van Dijk, David, Stanley, Jay S., Amodio, Matthew, Yim, Kristina, Muhle, Rebecca, Noonan, James, Wolf, Guy, Krishnaswamy, Smita
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8201816/
https://www.ncbi.nlm.nih.gov/pubmed/34131660
http://dx.doi.org/10.1007/978-3-030-44584-3_40
_version_ 1783707873745829888
author Tong, Alexander
van Dijk, David
Stanley, Jay S.
Amodio, Matthew
Yim, Kristina
Muhle, Rebecca
Noonan, James
Wolf, Guy
Krishnaswamy, Smita
author_facet Tong, Alexander
van Dijk, David
Stanley, Jay S.
Amodio, Matthew
Yim, Kristina
Muhle, Rebecca
Noonan, James
Wolf, Guy
Krishnaswamy, Smita
author_sort Tong, Alexander
collection PubMed
description While neural networks are powerful approximators used to classify or embed data into lower dimensional spaces, they are often regarded as black boxes with uninterpretable features. Here we propose Graph Spectral Regularization for making hidden layers more interpretable without significantly impacting performance on the primary task. Taking inspiration from spatial organization and localization of neuron activations in biological networks, we use a graph Laplacian penalty to structure the activations within a layer. This penalty encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data via co-activations of a hidden layer of the neural network. We show numerous uses for this additional structure including cluster indication and visualization in biological and image data sets.
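The description above explains the core mechanism: a graph Laplacian penalty on a hidden layer's activations, encouraging neurons connected in a chosen graph to activate similarly. The following is a minimal sketch of that idea, not the authors' reference implementation; it assumes PyTorch, a small two-layer network, a fixed path graph over the hidden neurons, and an illustrative penalty weight lam. The penalty is the Laplacian quadratic form h^T L h averaged over the batch.

    # Minimal sketch (illustrative assumptions, not the paper's code): a graph
    # Laplacian penalty on a hidden layer, added to the primary task loss.
    import torch
    import torch.nn as nn

    def laplacian_from_edges(num_nodes, edges):
        """Build a dense combinatorial graph Laplacian L = D - A from an edge list."""
        A = torch.zeros(num_nodes, num_nodes)
        for i, j in edges:
            A[i, j] = 1.0
            A[j, i] = 1.0
        D = torch.diag(A.sum(dim=1))
        return D - A

    class GraphRegularizedMLP(nn.Module):
        def __init__(self, in_dim, hidden_dim, out_dim, laplacian, lam=1e-2):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, out_dim)
            self.register_buffer("L", laplacian)  # fixed, predetermined neuron graph
            self.lam = lam

        def forward(self, x):
            h = torch.relu(self.fc1(x))  # structured hidden layer
            # Graph Spectral Regularization term: batch mean of h^T L h, which is
            # small when activations are smooth across edges of the neuron graph.
            penalty = self.lam * torch.einsum("bi,ij,bj->b", h, self.L, h).mean()
            return self.fc2(h), penalty

    # Usage: add the penalty to the primary task loss (here, classification).
    edges = [(i, i + 1) for i in range(7)]  # path graph over 8 hidden neurons (illustrative)
    model = GraphRegularizedMLP(in_dim=20, hidden_dim=8, out_dim=3,
                                laplacian=laplacian_from_edges(8, edges))
    x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
    logits, penalty = model(x)
    loss = nn.functional.cross_entropy(logits, y) + penalty
    loss.backward()

The paper's learned feature-space variant would instead build the graph from co-activations of the hidden layer rather than from a predetermined edge list; the penalty form stays the same.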
format Online
Article
Text
id pubmed-8201816
institution National Center for Biotechnology Information
language English
publishDate 2020
record_format MEDLINE/PubMed
spelling pubmed-8201816 2021-06-14 Interpretable Neuron Structuring with Graph Spectral Regularization Tong, Alexander van Dijk, David Stanley, Jay S. Amodio, Matthew Yim, Kristina Muhle, Rebecca Noonan, James Wolf, Guy Krishnaswamy, Smita Adv Intell Data Anal Article While neural networks are powerful approximators used to classify or embed data into lower dimensional spaces, they are often regarded as black boxes with uninterpretable features. Here we propose Graph Spectral Regularization for making hidden layers more interpretable without significantly impacting performance on the primary task. Taking inspiration from spatial organization and localization of neuron activations in biological networks, we use a graph Laplacian penalty to structure the activations within a layer. This penalty encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data via co-activations of a hidden layer of the neural network. We show numerous uses for this additional structure including cluster indication and visualization in biological and image data sets. 2020-04-22 2020-04 /pmc/articles/PMC8201816/ /pubmed/34131660 http://dx.doi.org/10.1007/978-3-030-44584-3_40 Text en https://creativecommons.org/licenses/by/4.0/ Open Access. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
spellingShingle Article
Tong, Alexander
van Dijk, David
Stanley, Jay S.
Amodio, Matthew
Yim, Kristina
Muhle, Rebecca
Noonan, James
Wolf, Guy
Krishnaswamy, Smita
Interpretable Neuron Structuring with Graph Spectral Regularization
title Interpretable Neuron Structuring with Graph Spectral Regularization
title_full Interpretable Neuron Structuring with Graph Spectral Regularization
title_fullStr Interpretable Neuron Structuring with Graph Spectral Regularization
title_full_unstemmed Interpretable Neuron Structuring with Graph Spectral Regularization
title_short Interpretable Neuron Structuring with Graph Spectral Regularization
title_sort interpretable neuron structuring with graph spectral regularization
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8201816/
https://www.ncbi.nlm.nih.gov/pubmed/34131660
http://dx.doi.org/10.1007/978-3-030-44584-3_40
work_keys_str_mv AT tongalexander interpretableneuronstructuringwithgraphspectralregularization
AT vandijkdavid interpretableneuronstructuringwithgraphspectralregularization
AT stanleyjays interpretableneuronstructuringwithgraphspectralregularization
AT amodiomatthew interpretableneuronstructuringwithgraphspectralregularization
AT yimkristina interpretableneuronstructuringwithgraphspectralregularization
AT muhlerebecca interpretableneuronstructuringwithgraphspectralregularization
AT noonanjames interpretableneuronstructuringwithgraphspectralregularization
AT wolfguy interpretableneuronstructuringwithgraphspectralregularization
AT krishnaswamysmita interpretableneuronstructuringwithgraphspectralregularization