
Leveraging explainability for understanding object descriptions in ambiguous 3D environments

Bibliographic Details
Main Authors: Doğan, Fethiye Irmak, Melsión, Gaspar I., Leite, Iolanda
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9872646/
https://www.ncbi.nlm.nih.gov/pubmed/36704241
http://dx.doi.org/10.3389/frobt.2022.937772
Description: For effective human-robot collaboration, it is crucial for robots to understand requests from users perceiving the three-dimensional space and ask reasonable follow-up questions when there are ambiguities. While comprehending the users’ object descriptions in the requests, existing studies have focused on this challenge for limited object categories that can be detected or localized with existing object detection and localization modules. Further, they have mostly focused on comprehending the object descriptions using flat RGB images without considering the depth dimension. On the other hand, in the wild, it is impossible to limit the object categories that can be encountered during the interaction, and 3-dimensional space perception that includes depth information is fundamental in successful task completion. To understand described objects and resolve ambiguities in the wild, for the first time, we suggest a method leveraging explainability. Our method focuses on the active areas of an RGB scene to find the described objects without putting the previous constraints on object categories and natural language instructions. We further improve our method to identify the described objects considering depth dimension. We evaluate our method in varied real-world images and observe that the regions suggested by our method can help resolve ambiguities. When we compare our method with a state-of-the-art baseline, we show that our method performs better in scenes with ambiguous objects which cannot be recognized by existing object detectors. We also show that using depth features significantly improves performance in scenes where depth data is critical to disambiguate the objects and across our evaluation dataset that contains objects that can be specified with and without the depth dimension.
Published in: Front Robot AI (Robotics and AI), Frontiers Media S.A., 2023-01-04
Copyright © 2023 Doğan, Melsión and Leite. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.