Neural network interpretation using descrambler groups
The lack of interpretability and trust is a much-criticized feature of deep neural networks. In fully connected nets, the signaling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe in scientific computing and digital signal processing (DSP), where neural nets perform abstract mathematical transformations that do not reduce to features or concepts. We present here a group-theoretical procedure that attempts to bring inner-layer signaling into a human-readable form, the assumption being that this form exists and has identifiable and quantifiable features—for example, smoothness or locality. We applied the proposed method to DEERNet (a DSP network used in electron spin resonance) and managed to descramble it. We found considerable internal sophistication: the network spontaneously invents a bandpass filter, a notch filter, a frequency axis rescaling transformation, frequency-division multiplexing, group embedding, spectral filtering regularization, and a map from harmonic functions into Chebyshev polynomials—in 10 min of unattended training from a random initial guess.
Main Authors: | Amey, Jake L.; Keeley, Jake; Choudhury, Tajwar; Kuprov, Ilya |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | National Academy of Sciences, 2021 |
Subjects: | Physical Sciences |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7865153/ https://www.ncbi.nlm.nih.gov/pubmed/33500352 http://dx.doi.org/10.1073/pnas.2016917118 |
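The abstract describes a group-theoretical descrambling procedure: a trainable orthogonal "descrambler" matrix is applied to a layer's weights and optimized against a human-readability criterion such as smoothness. Below is a minimal illustrative sketch of that idea, assuming the descrambler is parametrized as the matrix exponential of a skew-symmetric generator and scored by a second-difference roughness penalty; the function name, optimizer settings, and the specific criterion are assumptions for illustration, not the authors' published code.

```python
# Minimal sketch of weight-matrix descrambling (assumptions as noted above):
# search the orthogonal group for a matrix S such that S @ W looks smooth,
# on the premise that a human-readable form of the layer exists up to a rotation.
import torch

def descramble(W: torch.Tensor, n_steps: int = 2000, lr: float = 0.05) -> torch.Tensor:
    """Return an orthogonal descrambler S for a trained weight matrix W."""
    n = W.shape[0]
    A = torch.zeros(n, n, requires_grad=True)   # generator of the rotation
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(n_steps):
        S = torch.matrix_exp(A - A.T)           # exp of a skew-symmetric matrix is orthogonal
        V = S @ W                               # candidate descrambled weights
        # hypothetical readability criterion: second-difference roughness of each row
        loss = (V[:, 2:] - 2 * V[:, 1:-1] + V[:, :-2]).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.matrix_exp(A - A.T).detach()
```

Because S is orthogonal, S.T @ S is the identity, so it can be inserted between consecutive linear operations without changing what the network computes; S @ W can then be inspected (for example, plotted as an image) in place of the raw scrambled weights.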
_version_ | 1783647784593784832 |
---|---|
author | Amey, Jake L.; Keeley, Jake; Choudhury, Tajwar; Kuprov, Ilya |
author_facet | Amey, Jake L.; Keeley, Jake; Choudhury, Tajwar; Kuprov, Ilya |
author_sort | Amey, Jake L. |
collection | PubMed |
description | The lack of interpretability and trust is a much-criticized feature of deep neural networks. In fully connected nets, the signaling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe in scientific computing and digital signal processing (DSP), where neural nets perform abstract mathematical transformations that do not reduce to features or concepts. We present here a group-theoretical procedure that attempts to bring inner-layer signaling into a human-readable form, the assumption being that this form exists and has identifiable and quantifiable features—for example, smoothness or locality. We applied the proposed method to DEERNet (a DSP network used in electron spin resonance) and managed to descramble it. We found considerable internal sophistication: the network spontaneously invents a bandpass filter, a notch filter, a frequency axis rescaling transformation, frequency-division multiplexing, group embedding, spectral filtering regularization, and a map from harmonic functions into Chebyshev polynomials—in 10 min of unattended training from a random initial guess. |
format | Online Article Text |
id | pubmed-7865153 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | National Academy of Sciences |
record_format | MEDLINE/PubMed |
spelling | pubmed-7865153 2021-02-17 Neural network interpretation using descrambler groups Amey, Jake L.; Keeley, Jake; Choudhury, Tajwar; Kuprov, Ilya Proc Natl Acad Sci U S A Physical Sciences The lack of interpretability and trust is a much-criticized feature of deep neural networks. In fully connected nets, the signaling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe in scientific computing and digital signal processing (DSP), where neural nets perform abstract mathematical transformations that do not reduce to features or concepts. We present here a group-theoretical procedure that attempts to bring inner-layer signaling into a human-readable form, the assumption being that this form exists and has identifiable and quantifiable features—for example, smoothness or locality. We applied the proposed method to DEERNet (a DSP network used in electron spin resonance) and managed to descramble it. We found considerable internal sophistication: the network spontaneously invents a bandpass filter, a notch filter, a frequency axis rescaling transformation, frequency-division multiplexing, group embedding, spectral filtering regularization, and a map from harmonic functions into Chebyshev polynomials—in 10 min of unattended training from a random initial guess. National Academy of Sciences 2021-02-02 2021-01-26 /pmc/articles/PMC7865153/ /pubmed/33500352 http://dx.doi.org/10.1073/pnas.2016917118 Text en Copyright © 2021 the Author(s). Published by PNAS. This open access article is distributed under Creative Commons Attribution License 4.0 (CC BY) (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Physical Sciences; Amey, Jake L.; Keeley, Jake; Choudhury, Tajwar; Kuprov, Ilya; Neural network interpretation using descrambler groups |
title | Neural network interpretation using descrambler groups |
title_full | Neural network interpretation using descrambler groups |
title_fullStr | Neural network interpretation using descrambler groups |
title_full_unstemmed | Neural network interpretation using descrambler groups |
title_short | Neural network interpretation using descrambler groups |
title_sort | neural network interpretation using descrambler groups |
topic | Physical Sciences |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7865153/ https://www.ncbi.nlm.nih.gov/pubmed/33500352 http://dx.doi.org/10.1073/pnas.2016917118 |
work_keys_str_mv | AT ameyjakel neuralnetworkinterpretationusingdescramblergroups AT keeleyjake neuralnetworkinterpretationusingdescramblergroups AT choudhurytajwar neuralnetworkinterpretationusingdescramblergroups AT kuprovilya neuralnetworkinterpretationusingdescramblergroups |