Dense captioning and multidimensional evaluations for indoor robotic scenes
The field of human-computer interaction is expanding, especially within the domain of intelligent technologies. Scene understanding, which entails the generation of advanced semantic descriptions from scene content, is crucial for effective interaction. Despite its importance, it remains a significant challenge. This study introduces RGBD2Cap, an innovative method that uses RGBD images for scene semantic description. We utilize a multimodal fusion module to integrate RGB and depth information and extract multi-level features. The method also incorporates an object detection and region proposal network, together with a top-down attention LSTM network, to generate semantic descriptions. The experimental data are derived from the ScanRefer indoor scene dataset, with RGB and depth images rendered from ScanNet's 3D scenes serving as the model's input. The method outperforms the DenseCap network on several metrics, including BLEU, CIDEr, and METEOR. Ablation studies confirm the essential role of the RGBD fusion module in the method's success. Furthermore, the practical applicability of our method was verified within the AI2-THOR embodied intelligence experimental environment, showcasing its reliability.
Main Authors: | Wang, Hua; Wang, Wenshuai; Li, Wenhao; Liu, Hong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10682356/ https://www.ncbi.nlm.nih.gov/pubmed/38034836 http://dx.doi.org/10.3389/fnbot.2023.1280501 |
_version_ | 1785150957382795264 |
---|---|
author | Wang, Hua; Wang, Wenshuai; Li, Wenhao; Liu, Hong |
author_facet | Wang, Hua; Wang, Wenshuai; Li, Wenhao; Liu, Hong |
author_sort | Wang, Hua |
collection | PubMed |
description | The field of human-computer interaction is expanding, especially within the domain of intelligent technologies. Scene understanding, which entails the generation of advanced semantic descriptions from scene content, is crucial for effective interaction. Despite its importance, it remains a significant challenge. This study introduces RGBD2Cap, an innovative method that uses RGBD images for scene semantic description. We utilize a multimodal fusion module to integrate RGB and depth information and extract multi-level features. The method also incorporates an object detection and region proposal network, together with a top-down attention LSTM network, to generate semantic descriptions. The experimental data are derived from the ScanRefer indoor scene dataset, with RGB and depth images rendered from ScanNet's 3D scenes serving as the model's input. The method outperforms the DenseCap network on several metrics, including BLEU, CIDEr, and METEOR. Ablation studies confirm the essential role of the RGBD fusion module in the method's success. Furthermore, the practical applicability of our method was verified within the AI2-THOR embodied intelligence experimental environment, showcasing its reliability. |
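The description above names the two components that do the main work: a fusion module that merges RGB and depth features, and a top-down attention LSTM that decodes per-region features into caption words. The following PyTorch sketch shows how such a pipeline can be wired together; it is an illustrative assumption, not the RGBD2Cap implementation, and every module name, layer size, and the gated fusion design is invented for demonstration.

```python
import torch
import torch.nn as nn


class RGBDFusion(nn.Module):
    """Fuse per-region RGB and depth features into one representation."""

    def __init__(self, rgb_dim=2048, depth_dim=2048, fused_dim=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(rgb_dim + depth_dim, fused_dim), nn.ReLU(inplace=True)
        )
        # A learned gate decides, per channel, how much of the joint feature to keep.
        self.gate = nn.Sequential(
            nn.Linear(rgb_dim + depth_dim, fused_dim), nn.Sigmoid()
        )

    def forward(self, rgb_feat, depth_feat):
        x = torch.cat([rgb_feat, depth_feat], dim=-1)
        return self.proj(x) * self.gate(x)


class TopDownAttentionLSTM(nn.Module):
    """Two-layer decoder in the style of top-down attention captioning."""

    def __init__(self, feat_dim=1024, embed_dim=512, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.att_lstm = nn.LSTMCell(hidden_dim + feat_dim + embed_dim, hidden_dim)
        self.lang_lstm = nn.LSTMCell(feat_dim + hidden_dim, hidden_dim)
        self.att_v = nn.Linear(feat_dim, hidden_dim)
        self.att_h = nn.Linear(hidden_dim, hidden_dim)
        self.att_out = nn.Linear(hidden_dim, 1)
        self.logit = nn.Linear(hidden_dim, vocab_size)

    def forward(self, region_feats, tokens):
        # region_feats: (B, R, feat_dim) fused region features; tokens: (B, T) word ids.
        B = region_feats.size(0)
        mean_feat = region_feats.mean(dim=1)
        h1 = c1 = h2 = c2 = region_feats.new_zeros(B, self.hidden_dim)
        logits = []
        for t in range(tokens.size(1)):
            w = self.embed(tokens[:, t])
            h1, c1 = self.att_lstm(torch.cat([h2, mean_feat, w], dim=-1), (h1, c1))
            # Attend over region features with the attention-LSTM state as the query.
            scores = self.att_out(torch.tanh(
                self.att_v(region_feats) + self.att_h(h1).unsqueeze(1)))
            alpha = torch.softmax(scores, dim=1)          # (B, R, 1)
            ctx = (alpha * region_feats).sum(dim=1)       # (B, feat_dim)
            h2, c2 = self.lang_lstm(torch.cat([ctx, h1], dim=-1), (h2, c2))
            logits.append(self.logit(h2))
        return torch.stack(logits, dim=1)                 # (B, T, vocab_size)


# Toy usage: two images, eight region proposals each, captions of twelve tokens.
fusion, decoder = RGBDFusion(), TopDownAttentionLSTM()
rgb = torch.randn(2, 8, 2048)
depth = torch.randn(2, 8, 2048)
tokens = torch.randint(0, 10000, (2, 12))
print(decoder(fusion(rgb, depth), tokens).shape)          # torch.Size([2, 12, 10000])
```

The gate here is just one common way to let a network suppress uninformative depth channels; the paper's "multi-level features" wording suggests its fusion may instead happen at several backbone stages.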
format | Online Article Text |
id | pubmed-10682356 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10682356 2023-11-30 Dense captioning and multidimensional evaluations for indoor robotic scenes Wang, Hua Wang, Wenshuai Li, Wenhao Liu, Hong Front Neurorobot Neuroscience The field of human-computer interaction is expanding, especially within the domain of intelligent technologies. Scene understanding, which entails the generation of advanced semantic descriptions from scene content, is crucial for effective interaction. Despite its importance, it remains a significant challenge. This study introduces RGBD2Cap, an innovative method that uses RGBD images for scene semantic description. We utilize a multimodal fusion module to integrate RGB and depth information and extract multi-level features. The method also incorporates an object detection and region proposal network, together with a top-down attention LSTM network, to generate semantic descriptions. The experimental data are derived from the ScanRefer indoor scene dataset, with RGB and depth images rendered from ScanNet's 3D scenes serving as the model's input. The method outperforms the DenseCap network on several metrics, including BLEU, CIDEr, and METEOR. Ablation studies confirm the essential role of the RGBD fusion module in the method's success. Furthermore, the practical applicability of our method was verified within the AI2-THOR embodied intelligence experimental environment, showcasing its reliability. Frontiers Media S.A. 2023-11-14 /pmc/articles/PMC10682356/ /pubmed/38034836 http://dx.doi.org/10.3389/fnbot.2023.1280501 Text en Copyright © 2023 Wang, Wang, Li and Liu. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
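The abstract reports improvements over DenseCap in BLEU, CIDEr, and METEOR. As a quick illustration of the n-gram overlap idea behind BLEU (CIDEr and METEOR are usually computed with the COCO caption-evaluation toolkit rather than by hand), here is a self-contained NLTK example with made-up sentences; it is not the paper's evaluation code.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference caption and model output for one detected region.
reference = ["a wooden chair next to the desk in the corner".split()]
candidate = "a wooden chair beside the desk".split()

# BLEU-4 with smoothing, since short captions often have no 4-gram matches at all.
score = sentence_bleu(
    reference, candidate,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-4: {score:.3f}")
```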
spellingShingle | Neuroscience Wang, Hua Wang, Wenshuai Li, Wenhao Liu, Hong Dense captioning and multidimensional evaluations for indoor robotic scenes |
title | Dense captioning and multidimensional evaluations for indoor robotic scenes |
title_full | Dense captioning and multidimensional evaluations for indoor robotic scenes |
title_fullStr | Dense captioning and multidimensional evaluations for indoor robotic scenes |
title_full_unstemmed | Dense captioning and multidimensional evaluations for indoor robotic scenes |
title_short | Dense captioning and multidimensional evaluations for indoor robotic scenes |
title_sort | dense captioning and multidimensional evaluations for indoor robotic scenes |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10682356/ https://www.ncbi.nlm.nih.gov/pubmed/38034836 http://dx.doi.org/10.3389/fnbot.2023.1280501 |
work_keys_str_mv | AT wanghua densecaptioningandmultidimensionalevaluationsforindoorroboticscenes AT wangwenshuai densecaptioningandmultidimensionalevaluationsforindoorroboticscenes AT liwenhao densecaptioningandmultidimensionalevaluationsforindoorroboticscenes AT liuhong densecaptioningandmultidimensionalevaluationsforindoorroboticscenes |