
Compositional clustering in task structure learning


Bibliographic Details
Main Authors: Franklin, Nicholas T., Frank, Michael J.
Format: Online Article Text
Language: English
Published: Public Library of Science 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5929577/
https://www.ncbi.nlm.nih.gov/pubmed/29672581
http://dx.doi.org/10.1371/journal.pcbi.1006116
_version_ 1783319434310451200
author Franklin, Nicholas T.
Frank, Michael J.
author_facet Franklin, Nicholas T.
Frank, Michael J.
author_sort Franklin, Nicholas T.
collection PubMed
description Humans are remarkably adept at generalizing knowledge between experiences in a way that can be difficult for computers. Often, this entails generalizing constituent pieces of experiences that do not fully overlap with, but nonetheless share useful similarities with, previously acquired knowledge. However, it is often unclear how knowledge gained in one context should generalize to another. Previous computational models and data suggest that rather than learning about each individual context, humans build latent abstract structures and learn to link these structures to arbitrary contexts, facilitating generalization. In these models, task structures that are more popular across contexts are more likely to be revisited in new contexts. However, these models can only re-use policies as a whole and are unable to transfer knowledge about the transition structure of the environment even if only the goal has changed (or vice-versa). This contrasts with ecological settings, where some aspects of task structure, such as the transition function, will be shared between contexts separately from other aspects, such as the reward function. Here, we develop a novel non-parametric Bayesian agent that forms independent latent clusters for transition and reward functions, affording separable transfer of their constituent parts across contexts. We show that the relative performance of this agent compared to an agent that jointly clusters reward and transition functions depends on environmental task statistics: the mutual information between transition and reward functions and the stochasticity of the observations. We formalize our analysis through an information-theoretic account of the priors, and propose a meta-learning agent that dynamically arbitrates between strategies across task domains to optimize a statistical tradeoff.
format Online
Article
Text
id pubmed-5929577
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-5929577 2018-05-11 Compositional clustering in task structure learning Franklin, Nicholas T. Frank, Michael J. PLoS Comput Biol Research Article Humans are remarkably adept at generalizing knowledge between experiences in a way that can be difficult for computers. Often, this entails generalizing constituent pieces of experiences that do not fully overlap with, but nonetheless share useful similarities with, previously acquired knowledge. However, it is often unclear how knowledge gained in one context should generalize to another. Previous computational models and data suggest that rather than learning about each individual context, humans build latent abstract structures and learn to link these structures to arbitrary contexts, facilitating generalization. In these models, task structures that are more popular across contexts are more likely to be revisited in new contexts. However, these models can only re-use policies as a whole and are unable to transfer knowledge about the transition structure of the environment even if only the goal has changed (or vice-versa). This contrasts with ecological settings, where some aspects of task structure, such as the transition function, will be shared between contexts separately from other aspects, such as the reward function. Here, we develop a novel non-parametric Bayesian agent that forms independent latent clusters for transition and reward functions, affording separable transfer of their constituent parts across contexts. We show that the relative performance of this agent compared to an agent that jointly clusters reward and transition functions depends on environmental task statistics: the mutual information between transition and reward functions and the stochasticity of the observations. We formalize our analysis through an information-theoretic account of the priors, and propose a meta-learning agent that dynamically arbitrates between strategies across task domains to optimize a statistical tradeoff. Public Library of Science 2018-04-19 /pmc/articles/PMC5929577/ /pubmed/29672581 http://dx.doi.org/10.1371/journal.pcbi.1006116 Text en © 2018 Franklin, Frank http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Franklin, Nicholas T.
Frank, Michael J.
Compositional clustering in task structure learning
title Compositional clustering in task structure learning
title_full Compositional clustering in task structure learning
title_fullStr Compositional clustering in task structure learning
title_full_unstemmed Compositional clustering in task structure learning
title_short Compositional clustering in task structure learning
title_sort compositional clustering in task structure learning
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5929577/
https://www.ncbi.nlm.nih.gov/pubmed/29672581
http://dx.doi.org/10.1371/journal.pcbi.1006116
work_keys_str_mv AT franklinnicholast compositionalclusteringintaskstructurelearning
AT frankmichaelj compositionalclusteringintaskstructurelearning
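
Note on the abstract above: it contrasts an agent that jointly clusters transition and reward functions with a compositional agent that clusters them independently under non-parametric Bayesian priors. The sketch below is a minimal illustration, assuming Chinese Restaurant Process (CRP) priors over cluster assignments; it is not the authors' implementation, and the names (crp_prior, alpha, the example assignments) are illustrative assumptions rather than identifiers from the paper.

```python
# Minimal sketch (not the authors' code) of CRP-style clustering priors.
# Joint clustering ties each context to one (transition, reward) cluster;
# compositional clustering assigns transition and reward clusters separately.
from collections import Counter

def crp_prior(assignments, alpha=1.0):
    """Return P(cluster | earlier assignments) under a CRP with concentration alpha.

    `assignments` lists the cluster ids already given to earlier contexts;
    the probability of opening a previously unused cluster is stored under 'new'.
    """
    counts = Counter(assignments)
    n = len(assignments)
    probs = {cluster: count / (n + alpha) for cluster, count in counts.items()}
    probs['new'] = alpha / (n + alpha)  # chance of a novel cluster for the next context
    return probs

# Joint clustering: one cluster id covers both the transition and reward function.
joint_assignments = ['k1', 'k1', 'k2']
print(crp_prior(joint_assignments))

# Independent (compositional) clustering: a new context can reuse a popular
# transition cluster while still receiving a novel reward cluster, or vice versa.
transition_assignments = ['t1', 't1', 't1']   # same transition structure reused
reward_assignments = ['r1', 'r2', 'r3']       # goals differ across contexts
print(crp_prior(transition_assignments))
print(crp_prior(reward_assignments))
```

As the abstract notes, which prior generalizes better depends on the task statistics: when transition and reward functions carry high mutual information across contexts, tying them together in a joint cluster is informative, whereas low mutual information favors clustering them independently.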