The geometry of representational drift in natural and artificial neural networks
Main Authors: | Aitken, Kyle; Garrett, Marina; Olsen, Shawn; Mihalas, Stefan
---|---
Format: | Online Article Text
Language: | English
Published: | Public Library of Science, 2022
Subjects: | Research Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9731438/ https://www.ncbi.nlm.nih.gov/pubmed/36441762 http://dx.doi.org/10.1371/journal.pcbi.1010716
Field | Value
---|---
_version_ | 1784845901799358464
author | Aitken, Kyle; Garrett, Marina; Olsen, Shawn; Mihalas, Stefan
author_facet | Aitken, Kyle; Garrett, Marina; Olsen, Shawn; Mihalas, Stefan
author_sort | Aitken, Kyle |
collection | PubMed |
description | Neurons in sensory areas encode/represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed “representational drift”. In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for behaviorally relevant stimuli. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks in which representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. We therefore conclude that representational drift in biological networks may be driven by an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting.
format | Online Article Text |
id | pubmed-9731438 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-9731438 2022-12-09 The geometry of representational drift in natural and artificial neural networks. Aitken, Kyle; Garrett, Marina; Olsen, Shawn; Mihalas, Stefan. PLoS Comput Biol, Research Article. Public Library of Science 2022-11-28 /pmc/articles/PMC9731438/ /pubmed/36441762 http://dx.doi.org/10.1371/journal.pcbi.1010716 Text en © 2022 Aitken et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle | Research Article; Aitken, Kyle; Garrett, Marina; Olsen, Shawn; Mihalas, Stefan; The geometry of representational drift in natural and artificial neural networks
title | The geometry of representational drift in natural and artificial neural networks |
title_full | The geometry of representational drift in natural and artificial neural networks |
title_fullStr | The geometry of representational drift in natural and artificial neural networks |
title_full_unstemmed | The geometry of representational drift in natural and artificial neural networks |
title_short | The geometry of representational drift in natural and artificial neural networks |
title_sort | geometry of representational drift in natural and artificial neural networks |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9731438/ https://www.ncbi.nlm.nih.gov/pubmed/36441762 http://dx.doi.org/10.1371/journal.pcbi.1010716 |
work_keys_str_mv | AT aitkenkyle thegeometryofrepresentationaldriftinnaturalandartificialneuralnetworks AT garrettmarina thegeometryofrepresentationaldriftinnaturalandartificialneuralnetworks AT olsenshawn thegeometryofrepresentationaldriftinnaturalandartificialneuralnetworks AT mihalasstefan thegeometryofrepresentationaldriftinnaturalandartificialneuralnetworks AT aitkenkyle geometryofrepresentationaldriftinnaturalandartificialneuralnetworks AT garrettmarina geometryofrepresentationaldriftinnaturalandartificialneuralnetworks AT olsenshawn geometryofrepresentationaldriftinnaturalandartificialneuralnetworks AT mihalasstefan geometryofrepresentationaldriftinnaturalandartificialneuralnetworks |
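
The abstract above describes two measurements whose mechanics are easy to picture in code: projecting the day-to-day drift of a stimulus representation onto the principal directions of its in-session ("in-class") variance, and testing whether a linear classifier trained on one day's responses still separates stimuli on a later day. Below is a minimal sketch of both on synthetic data; it is not the authors' analysis pipeline, and all shapes, noise scales, and the drift vector are assumptions invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 100

# Anisotropic in-session ("in-class") variability: a handful of
# high-variance directions, mimicking structured trial-to-trial noise.
scales = np.r_[3.0 * np.ones(5), 0.3 * np.ones(n_neurons - 5)]
mean_a, mean_b = rng.normal(size=n_neurons), rng.normal(size=n_neurons)

def responses(mean):
    """Synthetic single-session responses: trials x neurons."""
    return mean + scales * rng.normal(size=(n_trials, n_neurons))

day1_a, day1_b = responses(mean_a), responses(mean_b)

# Synthetic "day 2": both class means drift by a random vector.
drift = 0.3 * rng.normal(size=n_neurons)
day2_a, day2_b = responses(mean_a + drift), responses(mean_b + drift)

# 1) Alignment of the drift direction with the top in-class variance
#    directions of the day-1 responses to stimulus A (|cosine| per PC).
pca = PCA(n_components=5).fit(day1_a)
drift_vec = day2_a.mean(axis=0) - day1_a.mean(axis=0)
unit_drift = drift_vec / np.linalg.norm(drift_vec)
print("alignment with top-5 in-class PCs:",
      np.round(np.abs(pca.components_ @ unit_drift), 2))

# 2) A linear classifier trained on day 1, evaluated on day 2.
X1 = np.vstack([day1_a, day1_b])
X2 = np.vstack([day2_a, day2_b])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
clf = LogisticRegression(max_iter=1000).fit(X1, y)
print("day-1 accuracy:", clf.score(X1, y))
print("day-2 accuracy:", clf.score(X2, y))
```

In this toy the drift is random, so the alignment scores stay near chance; the paper's finding is that in the neural data the analogous measurement concentrates on the directions of most in-class variance while the linear decode remains largely intact.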
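The proposed mechanism, continual learning in the presence of dropout, can likewise be sketched with a toy network. The following is a hedged illustration of the idea, not the authors' model: a two-layer ReLU regressor keeps training on a stationary task while hidden units are randomly masked, and we check how much the set of most active hidden units for a fixed probe set turns over between checkpoints. All sizes, rates, and the task itself are assumptions chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, batch = 20, 50, 32
drop_p, lr = 0.5, 0.05

# Toy regression task the network keeps learning throughout.
W_true = rng.normal(size=(n_in, 1))
W1 = 0.1 * rng.normal(size=(n_in, n_hidden))
W2 = 0.1 * rng.normal(size=(n_hidden, 1))

probe = rng.normal(size=(200, n_in))   # fixed "stimulus" set for read-out
checkpoints = []

for step in range(4001):
    x = rng.normal(size=(batch, n_in))
    y = x @ W_true
    mask = (rng.random(n_hidden) > drop_p).astype(float)  # fresh dropout mask

    # Forward pass with inverted dropout on the hidden layer.
    pre = x @ W1
    h = np.maximum(pre, 0.0) * mask / (1.0 - drop_p)
    err = h @ W2 - y

    # Backprop for mean squared error; plain SGD update.
    grad_W2 = h.T @ err / batch
    grad_pre = (err @ W2.T) * mask / (1.0 - drop_p) * (pre > 0)
    W1 -= lr * (x.T @ grad_pre / batch)
    W2 -= lr * grad_W2

    if step % 1000 == 0:
        h_probe = np.maximum(probe @ W1, 0.0)   # dropout off at read-out
        checkpoints.append(np.argsort(h_probe.mean(axis=0))[-10:])

# "Turnover": how many of the 10 most active hidden units persist
# between consecutive checkpoints (smaller overlap = more drift).
for a, b in zip(checkpoints, checkpoints[1:]):
    print("top-10 unit overlap:", len(set(a) & set(b)))
```

The design point this is meant to convey: under dropout the task solution is degenerate across many hidden configurations, so the masking noise can diffuse the representation along directions that leave the output intact, which matches the qualitative signature reported above: substantial turnover in the units carrying a representation while a linear read-out survives.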