Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction
Allocentric semantic 3D maps are highly useful for a variety of human–machine interaction tasks, since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or be missing for the participants owing to their different perspectives, particularly when considering the viewpoint of a small robot, which differs significantly from that of a human. To overcome this issue and establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective, which we transfer and adapt to the small robot’s perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot’s perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints, and we show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real time, so the approach enables interactive applications.
Main Authors: Kopácsi, László; Baffy, Benjámin; Baranyi, Gábor; Skaf, Joul; Sörös, Gábor; Szeier, Szilvia; Lőrincz, András; Sonntag, Daniel
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10255526/ https://www.ncbi.nlm.nih.gov/pubmed/37299853 http://dx.doi.org/10.3390/s23115126
Field | Value
---|---
_version_ | 1785056893810507776
author | Kopácsi, László; Baffy, Benjámin; Baranyi, Gábor; Skaf, Joul; Sörös, Gábor; Szeier, Szilvia; Lőrincz, András; Sonntag, Daniel
author_facet | Kopácsi, László; Baffy, Benjámin; Baranyi, Gábor; Skaf, Joul; Sörös, Gábor; Szeier, Szilvia; Lőrincz, András; Sonntag, Daniel
author_sort | Kopácsi, László |
collection | PubMed |
description | Allocentric semantic 3D maps are highly useful for a variety of human–machine interaction tasks, since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or be missing for the participants owing to their different perspectives, particularly when considering the viewpoint of a small robot, which differs significantly from that of a human. To overcome this issue and establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective, which we transfer and adapt to the small robot’s perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot’s perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints, and we show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real time, so the approach enables interactive applications.
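The description outlines transferring labels from an allocentric 3D semantic map into a low-viewpoint robot camera. As a rough, hypothetical illustration of the projection step only (this is not the authors' pipeline; the function and parameter names below are invented for the sketch), a pinhole projection of labeled map points into a robot view could look like this:

```python
import numpy as np

def project_labels(points, labels, K, T_world_to_cam, out_hw):
    """Rasterize a per-pixel label image from labeled 3D map points.

    points:          (N, 3) world-frame coordinates of labeled map points
    labels:          (N,)   integer class ids, one per point
    K:               (3, 3) pinhole intrinsics of the robot camera
    T_world_to_cam:  (4, 4) rigid transform from world to camera frame
    out_hw:          (height, width) of the output label image
    Returns an (H, W) int array; -1 marks pixels with no projected label.
    """
    h, w = out_hw
    # Move points into the camera frame via homogeneous coordinates.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_world_to_cam @ pts_h.T).T[:, :3]
    front = cam[:, 2] > 1e-6                 # keep points in front of the camera
    cam, lab = cam[front], labels[front]
    # Pinhole projection to pixel coordinates.
    uv = (K @ cam.T).T
    px = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)
    ok = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    px, lab, z = px[ok], lab[ok], cam[ok, 2]
    # Painter's-algorithm z-buffer: draw far-to-near so the nearest point
    # wins when several points project onto the same pixel.
    order = np.argsort(-z)
    label_img = np.full((h, w), -1, dtype=int)
    label_img[px[order, 1], px[order, 0]] = lab[order]
    return label_img
```

Pixels left at -1 would then be filled by propagating labels within superpixels and across geometrically consistent surfaces, in the spirit of the adaptation step the abstract describes.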
format | Online Article Text |
id | pubmed-10255526 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10255526 2023-06-10 Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction Kopácsi, László Baffy, Benjámin Baranyi, Gábor Skaf, Joul Sörös, Gábor Szeier, Szilvia Lőrincz, András Sonntag, Daniel Sensors (Basel) Article Allocentric semantic 3D maps are highly useful for a variety of human–machine interaction tasks, since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or be missing for the participants owing to their different perspectives, particularly when considering the viewpoint of a small robot, which differs significantly from that of a human. To overcome this issue and establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective, which we transfer and adapt to the small robot’s perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot’s perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints, and we show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real time, so the approach enables interactive applications. MDPI 2023-05-27 /pmc/articles/PMC10255526/ /pubmed/37299853 http://dx.doi.org/10.3390/s23115126 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Kopácsi, László Baffy, Benjámin Baranyi, Gábor Skaf, Joul Sörös, Gábor Szeier, Szilvia Lőrincz, András Sonntag, Daniel Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction |
title | Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction |
title_full | Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction |
title_fullStr | Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction |
title_full_unstemmed | Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction |
title_short | Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction |
title_sort | cross-viewpoint semantic mapping: integrating human and robot perspectives for improved 3d semantic reconstruction |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10255526/ https://www.ncbi.nlm.nih.gov/pubmed/37299853 http://dx.doi.org/10.3390/s23115126 |
work_keys_str_mv | AT kopacsilaszlo crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction AT baffybenjamin crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction AT baranyigabor crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction AT skafjoul crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction AT sorosgabor crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction AT szeierszilvia crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction AT lorinczandras crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction AT sonntagdaniel crossviewpointsemanticmappingintegratinghumanandrobotperspectivesforimproved3dsemanticreconstruction |