
Grid Cells, Place Cells, and Geodesic Generalization for Spatial Reinforcement Learning


Bibliographic Details
Main Authors: Gustafson, Nicholas J., Daw, Nathaniel D.
Format: Online Article Text
Language: English
Published: Public Library of Science 2011
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3203050/
https://www.ncbi.nlm.nih.gov/pubmed/22046115
http://dx.doi.org/10.1371/journal.pcbi.1002235
author Gustafson, Nicholas J.
Daw, Nathaniel D.
author_facet Gustafson, Nicholas J.
Daw, Nathaniel D.
author_sort Gustafson, Nicholas J.
collection PubMed
description Reinforcement learning (RL) provides an influential characterization of the brain's mechanisms for learning to make advantageous choices. An important problem, though, is how complex tasks can be represented in a way that enables efficient learning. We consider this problem through the lens of spatial navigation, examining how two of the brain's location representations—hippocampal place cells and entorhinal grid cells—are adapted to serve as basis functions for approximating value over space for RL. Although much previous work has focused on these systems' roles in combining upstream sensory cues to track location, revisiting these representations with a focus on how they support this downstream decision function offers complementary insights into their characteristics. Rather than localization, the key problem in learning is generalization between past and present situations, which may not match perfectly. Accordingly, although neural populations collectively offer a precise representation of position, our simulations of navigational tasks verify the suggestion that RL gains efficiency from the more diffuse tuning of individual neurons, which allows learning about rewards to generalize over longer distances given fewer training experiences. However, work on generalization in RL suggests the underlying representation should respect the environment's layout. In particular, although it is often assumed that neurons track location in Euclidean coordinates (that a place cell's activity declines “as the crow flies” away from its peak), the relevant metric for value is geodesic: the distance along a path, around any obstacles. We formalize this intuition and present simulations showing how Euclidean, but not geodesic, representations can interfere with RL by generalizing inappropriately across barriers. 
Our proposal that place and grid responses should be modulated by geodesic distances suggests novel predictions about how obstacles should affect spatial firing fields, which provides a new viewpoint on data concerning both spatial codes.
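The abstract's central contrast can be made concrete with a small sketch. The following is a hypothetical illustration (not the authors' code): a place cell's tuning is modeled as a Gaussian of distance from its field centre, and a Euclidean metric is compared with a geodesic one computed by breadth-first search around a barrier. The grid layout, field centre, and tuning width `sigma` are all assumptions made for the example.

```python
# Hypothetical sketch of Euclidean vs. geodesic place-field generalization.
from collections import deque
import math

# 5x5 grid with a vertical wall; '#' cells are impassable.
GRID = [
    ".....",
    "..#..",
    "..#..",
    "..#..",
    ".....",
]
ROWS, COLS = len(GRID), len(GRID[0])

def geodesic_distances(start):
    """BFS shortest-path (geodesic) distance from `start` to every reachable cell."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < ROWS and 0 <= nc < COLS
                    and GRID[nr][nc] != "#" and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def place_cell_activity(distance, sigma=1.5):
    """Gaussian tuning curve over distance from the field centre (assumed form)."""
    return math.exp(-distance ** 2 / (2 * sigma ** 2))

centre = (2, 1)  # field centre just left of the wall
probe = (2, 3)   # probe location just right of the wall
geo = geodesic_distances(centre)[probe]                  # 6 steps around the wall
euc = math.hypot(probe[0] - centre[0], probe[1] - centre[1])  # 2.0 through the wall
# A Euclidean field is still active at the probe, so reward learned at the centre
# leaks across the barrier; a geodesic field is nearly silent there and does not.
print(euc, geo, place_cell_activity(euc), place_cell_activity(geo))
```

On this layout the Euclidean distance across the wall is 2.0 while the geodesic distance is 6, so the Euclidean field generalizes value across the barrier while the geodesic field does not, mirroring the paper's argument.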
format Online
Article
Text
id pubmed-3203050
institution National Center for Biotechnology Information
language English
publishDate 2011
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-3203050 2011-11-01 Grid Cells, Place Cells, and Geodesic Generalization for Spatial Reinforcement Learning Gustafson, Nicholas J. Daw, Nathaniel D. PLoS Comput Biol Research Article Public Library of Science 2011-10-27 /pmc/articles/PMC3203050/ /pubmed/22046115 http://dx.doi.org/10.1371/journal.pcbi.1002235 Text en Gustafson, Daw. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
title Grid Cells, Place Cells, and Geodesic Generalization for Spatial Reinforcement Learning
title_sort grid cells, place cells, and geodesic generalization for spatial reinforcement learning
topic Research Article
work_keys_str_mv AT gustafsonnicholasj gridcellsplacecellsandgeodesicgeneralizationforspatialreinforcementlearning
AT dawnathanield gridcellsplacecellsandgeodesicgeneralizationforspatialreinforcementlearning