
Human and Machine Learning in Non-Markovian Decision Making

Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends only on the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1]. Here, we examine the model’s performance and compare it with human learning and a Bayes optimal reference, which provides an upper bound on performance. We find that, in all cases, our spiking-neuron population model describes human performance well.

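As a rough illustration of the policy-gradient idea mentioned in the abstract (not the authors' spiking-neuron model, which is specified in the article itself), the Python sketch below trains a REINFORCE-style learner on a toy task whose reward depends on the previous action, so the feedback is non-Markovian for a memoryless learner. The task, the logistic policy, and all parameter values are hypothetical choices made for this example only.

import numpy as np

rng = np.random.default_rng(0)

def run(trials=2000, lr=0.1, use_memory=True):
    # Logistic policy over a small feature vector; with use_memory=True the
    # features include the previous action, which restores the Markov
    # property for the learner. (Hypothetical toy setup, not from the paper.)
    n_features = 2 if use_memory else 1
    w = np.zeros(n_features)
    baseline = 0.0                 # running-average reward used as a baseline
    prev_action = 0
    rewards = []
    for _ in range(trials):
        x = np.ones(n_features)
        if use_memory:
            x[1] = prev_action     # one-step memory trace of the last choice
        p = 1.0 / (1.0 + np.exp(-w @ x))   # probability of choosing action 1
        a = int(rng.random() < p)
        # Non-Markovian feedback: reward is given only for alternating,
        # so the outcome depends on more than the current state and action.
        r = 1.0 if a != prev_action else 0.0
        # REINFORCE update: w += lr * (r - baseline) * d log pi(a|x) / d w
        w += lr * (r - baseline) * (a - p) * x
        baseline += 0.05 * (r - baseline)
        prev_action = a
        rewards.append(r)
    return np.mean(rewards[-500:])     # average reward over the last 500 trials

print("with memory trace:", run(use_memory=True))    # typically learns to alternate
print("memoryless policy:", run(use_memory=False))   # cannot exceed 0.5 expected reward

The contrast between the two runs is only meant to make the Markovian versus non-Markovian distinction concrete; how the paper's spiking-neuron model handles such conditions, and how the Bayes optimal reference is computed, is described in the article itself.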
Bibliographic Details
Main Authors: Clarke, Aaron Michael, Friedrich, Johannes, Tartaglia, Elisa M., Marchesotti, Silvia, Senn, Walter, Herzog, Michael H.
Format: Online Article Text
Language: English
Published: Public Library of Science 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4405578/
https://www.ncbi.nlm.nih.gov/pubmed/25898139
http://dx.doi.org/10.1371/journal.pone.0123105
_version_ 1782367649177534464
author Clarke, Aaron Michael
Friedrich, Johannes
Tartaglia, Elisa M.
Marchesotti, Silvia
Senn, Walter
Herzog, Michael H.
author_facet Clarke, Aaron Michael
Friedrich, Johannes
Tartaglia, Elisa M.
Marchesotti, Silvia
Senn, Walter
Herzog, Michael H.
author_sort Clarke, Aaron Michael
collection PubMed
description Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends only on the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1]. Here, we examine the model’s performance and compare it with human learning and a Bayes optimal reference, which provides an upper bound on performance. We find that, in all cases, our spiking-neuron population model describes human performance well.
format Online
Article
Text
id pubmed-4405578
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-4405578 2015-05-07 Human and Machine Learning in Non-Markovian Decision Making Clarke, Aaron Michael Friedrich, Johannes Tartaglia, Elisa M. Marchesotti, Silvia Senn, Walter Herzog, Michael H. PLoS One Research Article Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends only on the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1]. Here, we examine the model’s performance and compare it with human learning and a Bayes optimal reference, which provides an upper bound on performance. We find that, in all cases, our spiking-neuron population model describes human performance well. Public Library of Science 2015-04-21 /pmc/articles/PMC4405578/ /pubmed/25898139 http://dx.doi.org/10.1371/journal.pone.0123105 Text en © 2015 Clarke et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Clarke, Aaron Michael
Friedrich, Johannes
Tartaglia, Elisa M.
Marchesotti, Silvia
Senn, Walter
Herzog, Michael H.
Human and Machine Learning in Non-Markovian Decision Making
title Human and Machine Learning in Non-Markovian Decision Making
title_full Human and Machine Learning in Non-Markovian Decision Making
title_fullStr Human and Machine Learning in Non-Markovian Decision Making
title_full_unstemmed Human and Machine Learning in Non-Markovian Decision Making
title_short Human and Machine Learning in Non-Markovian Decision Making
title_sort human and machine learning in non-markovian decision making
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4405578/
https://www.ncbi.nlm.nih.gov/pubmed/25898139
http://dx.doi.org/10.1371/journal.pone.0123105
work_keys_str_mv AT clarkeaaronmichael humanandmachinelearninginnonmarkoviandecisionmaking
AT friedrichjohannes humanandmachinelearninginnonmarkoviandecisionmaking
AT tartagliaelisam humanandmachinelearninginnonmarkoviandecisionmaking
AT marchesottisilvia humanandmachinelearninginnonmarkoviandecisionmaking
AT sennwalter humanandmachinelearninginnonmarkoviandecisionmaking
AT herzogmichaelh humanandmachinelearninginnonmarkoviandecisionmaking