The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding
The recent “deep learning revolution” in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to adhere more closely to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
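To make the comparison concrete, below is a minimal sketch (not the authors' code; layer sizes, learning rates, and the toy data are illustrative assumptions) of the two unsupervised building blocks the abstract contrasts: a Restricted Boltzmann Machine trained with one step of contrastive divergence (CD-1), and a deterministic autoencoder trained by backpropagating reconstruction error. A sparsity pressure, which the article also examines, could be added to either model by penalizing high mean hidden activation.

```python
# Minimal NumPy sketch of the two unsupervised architectures compared in the
# article. Layer sizes, learning rate, and the toy batch below are assumptions
# for illustration, not the paper's actual settings.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Stochastic generative network with symmetric (bidirectional) weights,
    trained with one step of contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def cd1_update(self, v0):
        # Positive phase: hidden probabilities and a binary sample, given data.
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step down to the visibles and back up.
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # CD-1 update: difference between data-driven and model-driven statistics.
        n = len(v0)
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)

class Autoencoder:
    """Deterministic network trained by backpropagating reconstruction error."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W_enc = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.W_dec = rng.normal(0.0, 0.01, size=(n_hidden, n_visible))
        self.lr = lr

    def step(self, v):
        h = sigmoid(v @ self.W_enc)      # encode
        r = sigmoid(h @ self.W_dec)      # reconstruct
        # Squared reconstruction error, backpropagated through both layers.
        d_out = (r - v) * r * (1.0 - r)  # delta at the output layer
        d_hid = (d_out @ self.W_dec.T) * h * (1.0 - h)
        n = len(v)
        self.W_dec -= self.lr * (h.T @ d_out) / n
        self.W_enc -= self.lr * (v.T @ d_hid) / n
        # A penalty on the mean of `h` could be added here to obtain the
        # sparse variants the article also evaluates.

# Toy usage: a batch of 32 random 64-dimensional "retinal" input vectors.
v = rng.random((32, 64))
rbm = RBM(64, 16)
rbm.cd1_update(v)
ae = Autoencoder(64, 16)
ae.step(v)
```

The supervised baseline the abstract mentions can be sketched the same way: a feed-forward network trained with backpropagation to map one spatial reference frame onto another. The encoding below (head-centered position as the sum of retinal and eye position, scalar positions, no biases) is a simplifying assumption, not the paper's exact setup.

```python
class SupervisedMapper:
    """Feed-forward network mapping between spatial reference frames,
    trained with error backpropagation on explicit targets (biases omitted
    for brevity)."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        self.W1 = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, size=(n_hidden, n_out))
        self.lr = lr

    def step(self, x, t):
        h = sigmoid(x @ self.W1)
        y = h @ self.W2                  # linear output layer
        d_out = y - t                    # squared-error gradient at the output
        d_hid = (d_out @ self.W2.T) * h * (1.0 - h)
        n = len(x)
        self.W2 -= self.lr * (h.T @ d_out) / n
        self.W1 -= self.lr * (x.T @ d_hid) / n

# Toy usage: learn head-centered position = retinal position + eye position.
retinal = rng.uniform(-1, 1, size=(32, 1))
eye = rng.uniform(-1, 1, size=(32, 1))
net = SupervisedMapper(n_in=2, n_hidden=8, n_out=1)
net.step(np.hstack([retinal, eye]), retinal + eye)
```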
Main Authors: | Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2017 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5360096/ https://www.ncbi.nlm.nih.gov/pubmed/28377709 http://dx.doi.org/10.3389/fncom.2017.00013 |
_version_ | 1782516534919299072 |
---|---|
author | Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco |
author_facet | Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco |
author_sort | Testolin, Alberto |
collection | PubMed |
description | The recent “deep learning revolution” in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to adhere more closely to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. |
format | Online Article Text |
id | pubmed-5360096 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-5360096 2017-04-04 The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco Front Comput Neurosci Neuroscience The recent “deep learning revolution” in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to adhere more closely to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. Frontiers Media S.A. 2017-03-21 /pmc/articles/PMC5360096/ /pubmed/28377709 http://dx.doi.org/10.3389/fncom.2017.00013 Text en Copyright © 2017 Testolin, De Filippo De Grazia and Zorzi. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding |
title | The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding |
title_full | The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding |
title_fullStr | The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding |
title_full_unstemmed | The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding |
title_short | The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding |
title_sort | role of architectural and learning constraints in neural network models: a case study on visual space coding |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5360096/ https://www.ncbi.nlm.nih.gov/pubmed/28377709 http://dx.doi.org/10.3389/fncom.2017.00013 |
work_keys_str_mv | AT testolinalberto theroleofarchitecturalandlearningconstraintsinneuralnetworkmodelsacasestudyonvisualspacecoding AT defilippodegraziamichele theroleofarchitecturalandlearningconstraintsinneuralnetworkmodelsacasestudyonvisualspacecoding AT zorzimarco theroleofarchitecturalandlearningconstraintsinneuralnetworkmodelsacasestudyonvisualspacecoding AT testolinalberto roleofarchitecturalandlearningconstraintsinneuralnetworkmodelsacasestudyonvisualspacecoding AT defilippodegraziamichele roleofarchitecturalandlearningconstraintsinneuralnetworkmodelsacasestudyonvisualspacecoding AT zorzimarco roleofarchitecturalandlearningconstraintsinneuralnetworkmodelsacasestudyonvisualspacecoding |