
Pathfinding in stochastic environments: learning vs planning

Among the main challenges associated with navigating a mobile robot in complex environments are partial observability and stochasticity. This work proposes a stochastic formulation of the pathfinding problem, assuming that obstacles of arbitrary shapes may appear and disappear at random moments of time. Moreover, we consider the case when the environment is only partially observable for an agent. We study and evaluate two orthogonal approaches to tackling the problem of reaching the goal under such conditions: planning and learning. Within planning, an agent constantly re-plans and updates its path based on the history of observations using a search-based planner. Within learning, an agent asynchronously learns to optimize a policy function using recurrent neural networks (we propose an original, efficient, and scalable approach). We carry out an extensive empirical evaluation of both approaches, which shows that the learning-based approach scales better with the increasing number of unpredictably appearing/disappearing obstacles, while the planning-based one is preferable when the environment is close to deterministic (i.e., external disturbances are rare). Code is available at https://github.com/Tviskaron/pathfinding-in-stochastic-envs.


Bibliographic Details
Main Authors: Skrynnik, Alexey, Andreychuk, Anton, Yakovlev, Konstantin, Panov, Aleksandr
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9455045/
https://www.ncbi.nlm.nih.gov/pubmed/36091975
http://dx.doi.org/10.7717/peerj-cs.1056
_version_ 1784785496284594176
author Skrynnik, Alexey
Andreychuk, Anton
Yakovlev, Konstantin
Panov, Aleksandr
author_facet Skrynnik, Alexey
Andreychuk, Anton
Yakovlev, Konstantin
Panov, Aleksandr
author_sort Skrynnik, Alexey
collection PubMed
description Among the main challenges associated with navigating a mobile robot in complex environments are partial observability and stochasticity. This work proposes a stochastic formulation of the pathfinding problem, assuming that obstacles of arbitrary shapes may appear and disappear at random moments of time. Moreover, we consider the case when the environment is only partially observable for an agent. We study and evaluate two orthogonal approaches to tackling the problem of reaching the goal under such conditions: planning and learning. Within planning, an agent constantly re-plans and updates its path based on the history of observations using a search-based planner. Within learning, an agent asynchronously learns to optimize a policy function using recurrent neural networks (we propose an original, efficient, and scalable approach). We carry out an extensive empirical evaluation of both approaches, which shows that the learning-based approach scales better with the increasing number of unpredictably appearing/disappearing obstacles, while the planning-based one is preferable when the environment is close to deterministic (i.e., external disturbances are rare). Code is available at https://github.com/Tviskaron/pathfinding-in-stochastic-envs.
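The planning-based approach described in the abstract (constant re-planning from the observation history using a search-based planner) can be illustrated with a minimal loop: keep an optimistic map that treats unseen cells as free, sense a local window around the agent, and re-run the planner whenever an observation changes the map. This is only a sketch of the general technique, not the authors' implementation: the 4-connected grid, the A* planner, and the sensing radius are all assumptions made for illustration.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[y][x] == 1 means blocked. Cells are (x, y)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, None)]
    came, g_best = {}, {start: 0}
    while open_heap:
        f, g, cur, parent = heapq.heappop(open_heap)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:                       # reconstruct path start -> goal
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g + 1
                if ng < g_best.get((nx, ny), float("inf")):
                    g_best[(nx, ny)] = ng
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny), cur))
    return None  # no path on the current map

def navigate(true_grid, start, goal, radius=1, max_steps=100):
    """Re-planning loop under partial observability: unknown cells are assumed
    free, the agent senses a (2*radius+1)^2 window each step, and it re-plans
    whenever the observation changes its map."""
    H, W = len(true_grid), len(true_grid[0])
    known = [[0] * W for _ in range(H)]       # optimistic (unknown-as-free) map
    pos, trace = start, [start]
    path = astar(known, pos, goal)
    for _ in range(max_steps):
        if pos == goal:
            return trace
        changed = False                        # sense the local window
        x0, y0 = pos
        for y in range(max(0, y0 - radius), min(H, y0 + radius + 1)):
            for x in range(max(0, x0 - radius), min(W, x0 + radius + 1)):
                if known[y][x] != true_grid[y][x]:
                    known[y][x] = true_grid[y][x]
                    changed = True
        if changed or not path or pos not in path:
            path = astar(known, pos, goal)     # re-plan on new information
        if not path:
            return None                        # goal currently unreachable
        pos = path[path.index(pos) + 1]        # take one step along the plan
        trace.append(pos)
    return None
```

In a stochastic environment the sensing step would also pick up obstacles that appeared or disappeared since the last observation, triggering the same re-planning branch; the static 3x3 example below only shows the mechanics of the loop.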
format Online
Article
Text
id pubmed-9455045
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-94550452022-09-09 Pathfinding in stochastic environments: learning vs planning Skrynnik, Alexey Andreychuk, Anton Yakovlev, Konstantin Panov, Aleksandr PeerJ Comput Sci Artificial Intelligence Among the main challenges associated with navigating a mobile robot in complex environments are partial observability and stochasticity. This work proposes a stochastic formulation of the pathfinding problem, assuming that obstacles of arbitrary shapes may appear and disappear at random moments of time. Moreover, we consider the case when the environment is only partially observable for an agent. We study and evaluate two orthogonal approaches to tackling the problem of reaching the goal under such conditions: planning and learning. Within planning, an agent constantly re-plans and updates its path based on the history of observations using a search-based planner. Within learning, an agent asynchronously learns to optimize a policy function using recurrent neural networks (we propose an original, efficient, and scalable approach). We carry out an extensive empirical evaluation of both approaches, which shows that the learning-based approach scales better with the increasing number of unpredictably appearing/disappearing obstacles, while the planning-based one is preferable when the environment is close to deterministic (i.e., external disturbances are rare). Code is available at https://github.com/Tviskaron/pathfinding-in-stochastic-envs. PeerJ Inc. 2022-08-18 /pmc/articles/PMC9455045/ /pubmed/36091975 http://dx.doi.org/10.7717/peerj-cs.1056 Text en ©2022 Skrynnik et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed.
For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Skrynnik, Alexey
Andreychuk, Anton
Yakovlev, Konstantin
Panov, Aleksandr
Pathfinding in stochastic environments: learning vs planning
title Pathfinding in stochastic environments: learning vs planning
title_full Pathfinding in stochastic environments: learning vs planning
title_fullStr Pathfinding in stochastic environments: learning vs planning
title_full_unstemmed Pathfinding in stochastic environments: learning vs planning
title_short Pathfinding in stochastic environments: learning vs planning
title_sort pathfinding in stochastic environments: learning vs planning
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9455045/
https://www.ncbi.nlm.nih.gov/pubmed/36091975
http://dx.doi.org/10.7717/peerj-cs.1056
work_keys_str_mv AT skrynnikalexey pathfindinginstochasticenvironmentslearningvsplanning
AT andreychukanton pathfindinginstochasticenvironmentslearningvsplanning
AT yakovlevkonstantin pathfindinginstochasticenvironmentslearningvsplanning
AT panovaleksandr pathfindinginstochasticenvironmentslearningvsplanning