Abstract representations emerge naturally in neural networks trained to perform multiple tasks
Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability to generalize arises from a specific representational geometry, that we call abstract and that is referred to as disentangled in machine learning.
Main Authors: | Johnston, W. Jeffrey; Fusi, Stefano |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9950464/ https://www.ncbi.nlm.nih.gov/pubmed/36823136 http://dx.doi.org/10.1038/s41467-023-36583-0 |
_version_ | 1784893169756798976 |
---|---|
author | Johnston, W. Jeffrey Fusi, Stefano |
author_facet | Johnston, W. Jeffrey Fusi, Stefano |
author_sort | Johnston, W. Jeffrey |
collection | PubMed |
description | Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability to generalize arises from a specific representational geometry, that we call abstract and that is referred to as disentangled in machine learning. These abstract representations have been observed in recent neurophysiological studies. However, it is unknown how they emerge. Here, using feedforward neural networks, we demonstrate that the learning of multiple tasks causes abstract representations to emerge, using both supervised and reinforcement learning. We show that these abstract representations enable few-sample learning and reliable generalization on novel tasks. We conclude that abstract representations of sensory and cognitive variables may emerge from the multiple behaviors that animals exhibit in the natural world, and, as a consequence, could be pervasive in high-level brain regions. We also make several specific predictions about which variables will be represented abstractly. |
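The description above refers to feedforward networks trained jointly on multiple tasks defined over shared latent variables. The sketch below is not the authors' code; the architecture, layer sizes, and task definitions are illustrative assumptions. It shows the basic multi-task setup in NumPy: binary latent variables are nonlinearly entangled into an input, a shared hidden layer feeds one readout per task, and all tasks are trained together with a single gradient step per batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 2 binary latent variables, entangled into a 20-d input.
n_latent, n_inp, n_hidden, n_tasks = 2, 20, 50, 2
mix = rng.normal(size=(n_latent, n_inp))  # fixed "entangling" map

def make_batch(n):
    z = rng.choice([-1.0, 1.0], size=(n, n_latent))           # latent variables
    x = np.tanh(z @ mix + 0.1 * rng.normal(size=(n, n_inp)))  # entangled input
    y = (z + 1) / 2            # one classification task per latent variable
    return x, y

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Shared hidden layer W1; one sigmoid readout per task in the columns of W2.
W1 = 0.1 * rng.normal(size=(n_inp, n_hidden))
W2 = 0.1 * rng.normal(size=(n_hidden, n_tasks))

lr = 0.1
for step in range(2000):
    x, y = make_batch(64)
    h = np.maximum(0.0, x @ W1)      # ReLU hidden layer (shared by all tasks)
    p = sigmoid(h @ W2)              # per-task predictions
    d2 = (p - y) / len(x)            # cross-entropy gradient w.r.t. logits
    grad_W2 = h.T @ d2
    dh = d2 @ W2.T
    dh[h <= 0] = 0.0                 # backprop through ReLU
    W1 -= lr * (x.T @ dh)
    W2 -= lr * grad_W2

x, y = make_batch(500)
h = np.maximum(0.0, x @ W1)
acc = ((sigmoid(h @ W2) > 0.5) == y).mean()
print(f"mean multi-task accuracy: {acc:.2f}")
```

Because all tasks share the hidden layer, the network is pushed toward a common representation of the latent variables; probing the geometry of `h` (e.g., whether a decoder for one variable generalizes across values of the other) is how abstractness is assessed in the paper.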
format | Online Article Text |
id | pubmed-9950464 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-99504642023-02-25 Abstract representations emerge naturally in neural networks trained to perform multiple tasks Johnston, W. Jeffrey Fusi, Stefano Nat Commun Article Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability to generalize arises from a specific representational geometry, that we call abstract and that is referred to as disentangled in machine learning. These abstract representations have been observed in recent neurophysiological studies. However, it is unknown how they emerge. Here, using feedforward neural networks, we demonstrate that the learning of multiple tasks causes abstract representations to emerge, using both supervised and reinforcement learning. We show that these abstract representations enable few-sample learning and reliable generalization on novel tasks. We conclude that abstract representations of sensory and cognitive variables may emerge from the multiple behaviors that animals exhibit in the natural world, and, as a consequence, could be pervasive in high-level brain regions. We also make several specific predictions about which variables will be represented abstractly. Nature Publishing Group UK 2023-02-23 /pmc/articles/PMC9950464/ /pubmed/36823136 http://dx.doi.org/10.1038/s41467-023-36583-0 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. 
If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Johnston, W. Jeffrey Fusi, Stefano Abstract representations emerge naturally in neural networks trained to perform multiple tasks |
title | Abstract representations emerge naturally in neural networks trained to perform multiple tasks |
title_full | Abstract representations emerge naturally in neural networks trained to perform multiple tasks |
title_fullStr | Abstract representations emerge naturally in neural networks trained to perform multiple tasks |
title_full_unstemmed | Abstract representations emerge naturally in neural networks trained to perform multiple tasks |
title_short | Abstract representations emerge naturally in neural networks trained to perform multiple tasks |
title_sort | abstract representations emerge naturally in neural networks trained to perform multiple tasks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9950464/ https://www.ncbi.nlm.nih.gov/pubmed/36823136 http://dx.doi.org/10.1038/s41467-023-36583-0 |
work_keys_str_mv | AT johnstonwjeffrey abstractrepresentationsemergenaturallyinneuralnetworkstrainedtoperformmultipletasks AT fusistefano abstractrepresentationsemergenaturallyinneuralnetworkstrainedtoperformmultipletasks |