
Multiplicative processing in the modeling of cognitive activities in large neural networks

Bibliographic Details
Main Authors: Valle-Lisboa, Juan C., Pomi, Andrés, Mizraji, Eduardo
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10480136/
https://www.ncbi.nlm.nih.gov/pubmed/37681105
http://dx.doi.org/10.1007/s12551-023-01074-5
collection PubMed
description Explaining the foundation of cognitive abilities in the processing of information by neural systems has been part of biophysics since its beginnings, from McCulloch and Pitts' pioneering work within the Chicago school of biophysics in the 1940s to the interdisciplinary cybernetics meetings of the 1950s, inseparable from the birth of computing and artificial intelligence. Since then, neural network models have traveled a long path in both the biophysical and the computational disciplines. The biological, neurocomputational aspect reached representational maturity with the Distributed Associative Memory models developed in the early 1970s. In this framework, the inclusion of signal-signal multiplication within neural network models was presented as a necessity: it provides matrix associative memories with adaptive, context-sensitive associations while greatly enhancing their computational capabilities. In this review, we show that several of the most successful neural network models use some form of signal multiplication. We present several classical models that included this kind of multiplication and the computational reasons for its inclusion. We then turn to the different proposals for the biophysical implementation that underlies these computational capacities. We highlight the important ideas put forth by theoretical models using a tensor product representation and show that these models endow memories with the context-dependent adaptive capabilities needed for evolutionary adaptation to changing and unpredictable environments. Finally, we show how the powerful abilities of contemporary deep-learning models, inspired by neural networks, also depend on multiplications, and we discuss some perspectives opened by this wide panorama.
The computational relevance of multiplications calls for new avenues of research to uncover the mechanisms our nervous system uses to achieve multiplication.
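The context-dependent association via signal multiplication that the abstract describes can be illustrated with a small tensor (Kronecker) product memory sketch. This is an illustrative toy under simplifying assumptions (orthonormal context vectors, a single shared cue, randomly chosen unit output vectors), not an implementation of the specific models the review covers:

```python
import numpy as np

# Toy context-dependent matrix associative memory: the same cue
# vector f retrieves different outputs g1, g2 depending on a
# multiplicative context vector p1 or p2.

rng = np.random.default_rng(0)

def unit(n):
    """Random unit vector of dimension n."""
    v = rng.normal(size=n)
    return v / np.linalg.norm(v)

f = unit(8)                                          # shared input cue
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthonormal contexts
g1, g2 = unit(5), unit(5)                            # context-dependent outputs

# The memory stores outer products of each output with the Kronecker
# product of its context and the cue: M = g1 (p1 ⊗ f)^T + g2 (p2 ⊗ f)^T
M = np.outer(g1, np.kron(p1, f)) + np.outer(g2, np.kron(p2, f))

# Retrieval multiplies context and cue before addressing the memory;
# orthogonal contexts select the matching association exactly.
out1 = M @ np.kron(p1, f)
out2 = M @ np.kron(p2, f)

print(np.allclose(out1, g1), np.allclose(out2, g2))  # True True
```

Because the contexts are orthonormal and f is a unit vector, the cross-term vanishes at retrieval, so each context cleanly gates which stored association the cue evokes.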
id pubmed-10480136
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-10480136 2023-09-07 Multiplicative processing in the modeling of cognitive activities in large neural networks. Valle-Lisboa, Juan C.; Pomi, Andrés; Mizraji, Eduardo. Biophys Rev (Review). Springer Berlin Heidelberg, 2023-06-22. Text, English. /pmc/articles/PMC10480136/ /pubmed/37681105 http://dx.doi.org/10.1007/s12551-023-01074-5 © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the Creative Commons licence is provided, and any changes are indicated. Images or other third-party material in this article are included in the article's Creative Commons licence unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and the intended use is not permitted by statutory regulation or exceeds the permitted use, permission must be obtained directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
topic Review