
BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization


Bibliographic Details
Main Authors: Gao, Yuyang, Ascoli, Giorgio A., Zhao, Liang
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8203915/
https://www.ncbi.nlm.nih.gov/pubmed/34140886
http://dx.doi.org/10.3389/fnbot.2021.567482
_version_ 1783708261042618368
author Gao, Yuyang
Ascoli, Giorgio A.
Zhao, Liang
author_facet Gao, Yuyang
Ascoli, Giorgio A.
Zhao, Liang
author_sort Gao, Yuyang
collection PubMed
description Deep neural networks (DNNs) are known for extracting useful information from large amounts of data. However, the representations learned by DNNs are typically hard to interpret, especially in dense layers. One crucial issue with classical DNN models such as the multilayer perceptron (MLP) is that neurons in the same layer are conditionally independent of each other, which hinders co-training and the emergence of higher modularity. In contrast to DNNs, biological neurons in mammalian brains display substantial dependency patterns. Specifically, biological neural networks encode representations via so-called neuronal assemblies: groups of neurons interconnected by strong synaptic interactions and sharing joint semantic content. The resulting population coding is essential for human cognitive and mnemonic processes. Here, we propose a novel Biologically Enhanced Artificial Neuronal assembly (BEAN) regularization that models neuronal correlations and dependencies, inspired by cell assembly theory from neuroscience. Experimental results show that BEAN enables the formation of interpretable neuronal functional clusters and consequently promotes a sparse, memory- and computation-efficient network without loss of model performance. Moreover, our few-shot learning experiments demonstrate that BEAN can also enhance the generalizability of the model when training samples are extremely limited.
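The description above summarizes BEAN only at a high level; this record does not include the paper's actual formulation. As a rough illustration of the general idea it describes (a regularization term that encourages neurons in the same layer to form correlated functional groups rather than acting independently), here is a minimal NumPy sketch. The grouping scheme, the within-group variance penalty, and all names are illustrative assumptions, not the published BEAN method:

```python
import numpy as np

def assembly_regularizer(W, n_groups=4):
    """Toy assembly-style penalty on a layer's weight matrix W
    (one row per neuron). Neurons are partitioned into fixed groups,
    and each group is penalized for deviating from its mean weight
    vector, so training pulls neurons within a group toward shared,
    'assembly-like' representations. Purely illustrative; not the
    paper's BEAN formulation."""
    n_neurons = W.shape[0]
    groups = np.array_split(np.arange(n_neurons), n_groups)
    penalty = 0.0
    for g in groups:
        centroid = W[g].mean(axis=0)               # mean weight vector of the group
        penalty += np.sum((W[g] - centroid) ** 2)  # within-group variance
    return penalty

# Example: a random 8-neuron layer with 16 inputs; the penalty would be
# added to the task loss with some coefficient during training.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
print(assembly_regularizer(W))
```

Note that the penalty is zero exactly when every neuron in a group shares the same weight vector, which is why minimizing it encourages cluster formation; a real method would balance this against the task loss to avoid collapsing all neurons in a group into duplicates.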
format Online
Article
Text
id pubmed-8203915
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8203915 2021-06-16 BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization Gao, Yuyang Ascoli, Giorgio A. Zhao, Liang Front Neurorobot Neuroscience Frontiers Media S.A. 2021-06-01 /pmc/articles/PMC8203915/ /pubmed/34140886 http://dx.doi.org/10.3389/fnbot.2021.567482 Text en Copyright © 2021 Gao, Ascoli and Zhao. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Gao, Yuyang
Ascoli, Giorgio A.
Zhao, Liang
BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization
title BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization
title_full BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization
title_fullStr BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization
title_full_unstemmed BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization
title_short BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization
title_sort bean: interpretable and efficient learning with biologically-enhanced artificial neuronal assembly regularization
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8203915/
https://www.ncbi.nlm.nih.gov/pubmed/34140886
http://dx.doi.org/10.3389/fnbot.2021.567482
work_keys_str_mv AT gaoyuyang beaninterpretableandefficientlearningwithbiologicallyenhancedartificialneuronalassemblyregularization
AT ascoligiorgioa beaninterpretableandefficientlearningwithbiologicallyenhancedartificialneuronalassemblyregularization
AT zhaoliang beaninterpretableandefficientlearningwithbiologicallyenhancedartificialneuronalassemblyregularization