
Navigational Behavior of Humans and Deep Reinforcement Learning Agents

Rapid advances in the field of Deep Reinforcement Learning (DRL) over the past several years have led to artificial agents (AAs) capable of producing behavior that meets or exceeds human-level performance in a wide variety of tasks. However, research on DRL frequently lacks adequate discussion of the low-level dynamics of the behavior itself and instead focuses on meta-level or global-level performance metrics. In doing so, the current literature lacks perspective on the qualitative nature of AA behavior, leaving questions regarding the spatiotemporal patterning of their behavior largely unanswered. The current study explored the degree to which the navigation and route selection trajectories of DRL agents (i.e., AAs trained using DRL) through simple obstacle ridden virtual environments were equivalent (and/or different) from those produced by human agents. The second and related aim was to determine whether a task-dynamical model of human route navigation could not only be used to capture both human and DRL navigational behavior, but also to help identify whether any observed differences in the navigational trajectories of humans and DRL agents were a function of differences in the dynamical environmental couplings.


Bibliographic Details
Main Authors: Rigoli, Lillian M., Patil, Gaurav, Stening, Hamish F., Kallen, Rachel W., Richardson, Michael J.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8493935/
https://www.ncbi.nlm.nih.gov/pubmed/34630238
http://dx.doi.org/10.3389/fpsyg.2021.725932
_version_ 1784579215393292288
author Rigoli, Lillian M.
Patil, Gaurav
Stening, Hamish F.
Kallen, Rachel W.
Richardson, Michael J.
author_facet Rigoli, Lillian M.
Patil, Gaurav
Stening, Hamish F.
Kallen, Rachel W.
Richardson, Michael J.
author_sort Rigoli, Lillian M.
collection PubMed
description Rapid advances in the field of Deep Reinforcement Learning (DRL) over the past several years have led to artificial agents (AAs) capable of producing behavior that meets or exceeds human-level performance in a wide variety of tasks. However, research on DRL frequently lacks adequate discussion of the low-level dynamics of the behavior itself and instead focuses on meta-level or global-level performance metrics. In doing so, the current literature lacks perspective on the qualitative nature of AA behavior, leaving questions regarding the spatiotemporal patterning of their behavior largely unanswered. The current study explored the degree to which the navigation and route selection trajectories of DRL agents (i.e., AAs trained using DRL) through simple obstacle ridden virtual environments were equivalent (and/or different) from those produced by human agents. The second and related aim was to determine whether a task-dynamical model of human route navigation could not only be used to capture both human and DRL navigational behavior, but also to help identify whether any observed differences in the navigational trajectories of humans and DRL agents were a function of differences in the dynamical environmental couplings.
format Online
Article
Text
id pubmed-8493935
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-84939352021-10-07 Navigational Behavior of Humans and Deep Reinforcement Learning Agents Rigoli, Lillian M. Patil, Gaurav Stening, Hamish F. Kallen, Rachel W. Richardson, Michael J. Front Psychol Psychology Rapid advances in the field of Deep Reinforcement Learning (DRL) over the past several years have led to artificial agents (AAs) capable of producing behavior that meets or exceeds human-level performance in a wide variety of tasks. However, research on DRL frequently lacks adequate discussion of the low-level dynamics of the behavior itself and instead focuses on meta-level or global-level performance metrics. In doing so, the current literature lacks perspective on the qualitative nature of AA behavior, leaving questions regarding the spatiotemporal patterning of their behavior largely unanswered. The current study explored the degree to which the navigation and route selection trajectories of DRL agents (i.e., AAs trained using DRL) through simple obstacle ridden virtual environments were equivalent (and/or different) from those produced by human agents. The second and related aim was to determine whether a task-dynamical model of human route navigation could not only be used to capture both human and DRL navigational behavior, but also to help identify whether any observed differences in the navigational trajectories of humans and DRL agents were a function of differences in the dynamical environmental couplings. Frontiers Media S.A. 2021-09-22 /pmc/articles/PMC8493935/ /pubmed/34630238 http://dx.doi.org/10.3389/fpsyg.2021.725932 Text en Copyright © 2021 Rigoli, Patil, Stening, Kallen and Richardson. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Psychology
Rigoli, Lillian M.
Patil, Gaurav
Stening, Hamish F.
Kallen, Rachel W.
Richardson, Michael J.
Navigational Behavior of Humans and Deep Reinforcement Learning Agents
title Navigational Behavior of Humans and Deep Reinforcement Learning Agents
title_full Navigational Behavior of Humans and Deep Reinforcement Learning Agents
title_fullStr Navigational Behavior of Humans and Deep Reinforcement Learning Agents
title_full_unstemmed Navigational Behavior of Humans and Deep Reinforcement Learning Agents
title_short Navigational Behavior of Humans and Deep Reinforcement Learning Agents
title_sort navigational behavior of humans and deep reinforcement learning agents
topic Psychology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8493935/
https://www.ncbi.nlm.nih.gov/pubmed/34630238
http://dx.doi.org/10.3389/fpsyg.2021.725932
work_keys_str_mv AT rigolilillianm navigationalbehaviorofhumansanddeepreinforcementlearningagents
AT patilgaurav navigationalbehaviorofhumansanddeepreinforcementlearningagents
AT steninghamishf navigationalbehaviorofhumansanddeepreinforcementlearningagents
AT kallenrachelw navigationalbehaviorofhumansanddeepreinforcementlearningagents
AT richardsonmichaelj navigationalbehaviorofhumansanddeepreinforcementlearningagents