BEAN: Interpretable and Efficient Learning With Biologically-Enhanced Artificial Neuronal Assembly Regularization
Deep neural networks (DNNs) are known for extracting useful information from large amounts of data. However, the representations learned in DNNs are typically hard to interpret, especially in dense layers. One crucial issue of the classical DNN model such as multilayer perceptron (MLP) is that neuro...
Main authors: | Gao, Yuyang; Ascoli, Giorgio A.; Zhao, Liang |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2021 |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8203915/ https://www.ncbi.nlm.nih.gov/pubmed/34140886 http://dx.doi.org/10.3389/fnbot.2021.567482 |
Similar items
- Input Complexity Affects Long-Term Retention of Statistically Learned Regularities in an Artificial Language Learning Task
  by: Jost, Ethan, et al.
  Published: (2019)
- Interpretable Neuron Structuring with Graph Spectral Regularization
  by: Tong, Alexander, et al.
  Published: (2020)
- Potential Synaptic Connectivity of Different Neurons onto Pyramidal Cells in a 3D Reconstruction of the Rat Hippocampus
  by: Ropireddy, Deepak, et al.
  Published: (2011)
- An ontological approach to describing neurons and their relationships
  by: Hamilton, David J., et al.
  Published: (2012)
- Digital Reconstructions of Neuronal Morphology: Three Decades of Research Trends
  by: Halavi, Maryam, et al.
  Published: (2012)