Image-based robot navigation with task achievability

Image-based robot action planning is becoming an active area of research owing to recent advances in deep learning. To evaluate and execute robot actions, recently proposed approaches require estimating the optimal, cost-minimizing path between two states, such as the shortest distance or time. To estimate this cost, parametric models consisting of deep neural networks are widely used. However, such models require large amounts of correctly labeled data to estimate the cost accurately. In real robotic tasks, collecting such data is not always feasible, and the robot itself may have to collect it. In this study, we show empirically that when a model is trained on data collected autonomously by a robot, its estimates can be too inaccurate to perform the task: the larger the maximum predicted distance, the less accurate the estimate, and the robot fails to navigate the environment. To overcome this issue, we propose an alternative metric, “task achievability” (TA), defined as the probability that the robot reaches the goal state within a specified number of timesteps. Unlike an optimal-cost estimator, a TA estimator can be trained on both optimal and non-optimal trajectories, which leads to stable estimation. We demonstrate the effectiveness of TA through robot navigation experiments in an environment resembling a real living room, and show that TA-based navigation reaches different target positions even when conventional cost-estimator-based navigation fails.
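
As a rough illustration of the idea described in the abstract, the sketch below trains a classifier to predict the probability of reaching a goal observation within a fixed horizon, using labels that any trajectory (optimal or not) can provide, and then uses it to score candidate next observations during navigation. It is a minimal sketch, not the authors' implementation: it assumes precomputed image feature vectors rather than raw images, a small PyTorch MLP, and a horizon of 50 timesteps; the names TAEstimator, make_labels, and select_action and all dimensions are illustrative assumptions.

import torch
import torch.nn as nn

HORIZON = 50  # assumed "specified number of timesteps" for task achievability

class TAEstimator(nn.Module):
    """Predicts P(goal reached within HORIZON steps | current observation, goal observation)."""
    def __init__(self, obs_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # Concatenate current and goal features and map to a probability.
        logits = self.net(torch.cat([obs, goal], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)

def make_labels(traj_len: int, goal_index: int, horizon: int = HORIZON) -> torch.Tensor:
    # Label state t with 1.0 if the goal state (at goal_index) occurs within `horizon`
    # steps after t, else 0.0. Optimal and non-optimal trajectories both yield labels.
    return torch.tensor([1.0 if 0 <= goal_index - t <= horizon else 0.0
                         for t in range(traj_len)])

def train_step(model, optimizer, obs_batch, goal_batch, label_batch):
    # One binary cross-entropy update on (observation, goal, reached-within-horizon) tuples.
    optimizer.zero_grad()
    pred = model(obs_batch, goal_batch)
    loss = nn.functional.binary_cross_entropy(pred, label_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def select_action(model, goal_obs: torch.Tensor, candidate_next_obs: torch.Tensor) -> int:
    # Greedy navigation sketch: score each candidate next observation (e.g., produced by a
    # learned dynamics model) by its predicted TA toward the goal and pick the best one.
    # goal_obs is 1-D (obs_dim,); candidate_next_obs is (num_candidates, obs_dim).
    scores = model(candidate_next_obs, goal_obs.expand_as(candidate_next_obs))
    return int(torch.argmax(scores))

In the paper's setting the estimator consumes images, so a convolutional encoder would replace the feature-vector assumption used here.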

Bibliographic Details
Main Authors: Ishihara, Yu; Takahashi, Masaki
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10264687/
https://www.ncbi.nlm.nih.gov/pubmed/37323640
http://dx.doi.org/10.3389/frobt.2023.944375
Collection: PubMed, National Center for Biotechnology Information (record pubmed-10264687, MEDLINE/PubMed format)
Journal: Front Robot AI (Robotics and AI)
Published online: 2023-05-31
Copyright © 2023 Ishihara and Takahashi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. Use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.