Indoor Scene Change Captioning Based on Multimodality Data
This study proposes a framework for describing a scene change using natural language text based on indoor scene observations conducted before and after a scene change. The recognition of scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes. Most existing scene change captioning methods detect scene changes from single-view RGB images, neglecting the underlying three-dimensional structures. Previous three-dimensional scene change captioning methods use simulated scenes consisting of geometry primitives, making them unsuitable for real-world applications. To solve these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance using datasets with various levels of complexity. Experimental results show that the models that combine RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust in change-type understanding on datasets with high complexity. The developed datasets and models contribute to the study of indoor scene change understanding.
| Main Authors: | Qiu, Yue; Satoh, Yutaka; Suzuki, Ryota; Iwata, Kenji; Kataoka, Hirokatsu |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2020 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506858/ https://www.ncbi.nlm.nih.gov/pubmed/32842516 http://dx.doi.org/10.3390/s20174761 |
_version_ | 1783585109280030720 |
---|---|
author | Qiu, Yue; Satoh, Yutaka; Suzuki, Ryota; Iwata, Kenji; Kataoka, Hirokatsu |
author_facet | Qiu, Yue; Satoh, Yutaka; Suzuki, Ryota; Iwata, Kenji; Kataoka, Hirokatsu |
author_sort | Qiu, Yue |
collection | PubMed |
description | This study proposes a framework for describing a scene change using natural language text based on indoor scene observations conducted before and after a scene change. The recognition of scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes. Most existing scene change captioning methods detect scene changes from single-view RGB images, neglecting the underlying three-dimensional structures. Previous three-dimensional scene change captioning methods use simulated scenes consisting of geometry primitives, making them unsuitable for real-world applications. To solve these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance using datasets with various levels of complexity. Experimental results show that the models that combine RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust in change-type understanding on datasets with high complexity. The developed datasets and models contribute to the study of indoor scene change understanding. |
format | Online Article Text |
id | pubmed-7506858 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7506858 2020-09-26 Indoor Scene Change Captioning Based on Multimodality Data Qiu, Yue; Satoh, Yutaka; Suzuki, Ryota; Iwata, Kenji; Kataoka, Hirokatsu Sensors (Basel) Article This study proposes a framework for describing a scene change using natural language text based on indoor scene observations conducted before and after a scene change. The recognition of scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes. Most existing scene change captioning methods detect scene changes from single-view RGB images, neglecting the underlying three-dimensional structures. Previous three-dimensional scene change captioning methods use simulated scenes consisting of geometry primitives, making them unsuitable for real-world applications. To solve these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance using datasets with various levels of complexity. Experimental results show that the models that combine RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust in change-type understanding on datasets with high complexity. The developed datasets and models contribute to the study of indoor scene change understanding. MDPI 2020-08-23 /pmc/articles/PMC7506858/ /pubmed/32842516 http://dx.doi.org/10.3390/s20174761 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Qiu, Yue; Satoh, Yutaka; Suzuki, Ryota; Iwata, Kenji; Kataoka, Hirokatsu; Indoor Scene Change Captioning Based on Multimodality Data |
title | Indoor Scene Change Captioning Based on Multimodality Data |
title_full | Indoor Scene Change Captioning Based on Multimodality Data |
title_fullStr | Indoor Scene Change Captioning Based on Multimodality Data |
title_full_unstemmed | Indoor Scene Change Captioning Based on Multimodality Data |
title_short | Indoor Scene Change Captioning Based on Multimodality Data |
title_sort | indoor scene change captioning based on multimodality data |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506858/ https://www.ncbi.nlm.nih.gov/pubmed/32842516 http://dx.doi.org/10.3390/s20174761 |
work_keys_str_mv | AT qiuyue indoorscenechangecaptioningbasedonmultimodalitydata AT satohyutaka indoorscenechangecaptioningbasedonmultimodalitydata AT suzukiryota indoorscenechangecaptioningbasedonmultimodalitydata AT iwatakenji indoorscenechangecaptioningbasedonmultimodalitydata AT kataokahirokatsu indoorscenechangecaptioningbasedonmultimodalitydata |
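
The description field above outlines the approach only at a high level: encode the "before" and "after" observations from multiple modalities (the best-performing combination being RGB images plus point cloud data), fuse them into a change representation, and decode a natural-language caption. As a rough, hypothetical illustration of such a pipeline (not the authors' published implementation: the tiny CNN, the PointNet-style point cloud encoder, the difference-based fusion, and the LSTM decoder below are all assumptions), here is a minimal PyTorch sketch:

```python
# Hypothetical sketch of a multimodal scene change captioner, loosely
# following the setup described in the abstract (RGB + point cloud in,
# caption out). All architecture choices here are illustrative assumptions.
import torch
import torch.nn as nn

class RGBEncoder(nn.Module):
    """Tiny CNN mapping an RGB image to a feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, img):                  # img: (B, 3, H, W)
        return self.fc(self.conv(img).flatten(1))

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP + max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):                  # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values

class ChangeCaptioner(nn.Module):
    """Fuses before/after scene features and decodes a caption with an LSTM."""
    def __init__(self, vocab_size, feat_dim=256, emb_dim=128):
        super().__init__()
        self.rgb_enc = RGBEncoder(feat_dim)
        self.pc_enc = PointCloudEncoder(feat_dim)
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + feat_dim, feat_dim, batch_first=True)
        self.out = nn.Linear(feat_dim, vocab_size)

    def scene_feat(self, img, pts):
        # Concatenate RGB and point cloud features, project to one vector.
        return self.fuse(torch.cat([self.rgb_enc(img), self.pc_enc(pts)], dim=1))

    def forward(self, img_before, pts_before, img_after, pts_after, tokens):
        # Change representation: difference of fused after/before features.
        change = self.scene_feat(img_after, pts_after) - \
                 self.scene_feat(img_before, pts_before)    # (B, feat_dim)
        emb = self.embed(tokens)                            # (B, T, emb_dim)
        ctx = change.unsqueeze(1).expand(-1, emb.size(1), -1)
        hidden, _ = self.lstm(torch.cat([emb, ctx], dim=2))
        return self.out(hidden)                             # (B, T, vocab)

# Smoke test with random tensors standing in for a dataset sample.
model = ChangeCaptioner(vocab_size=1000)
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1024, 3),
               torch.randn(2, 3, 64, 64), torch.randn(2, 1024, 3),
               torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 1000])
```

Subtracting the fused before/after features is one simple way to expose the change signal to the decoder; an attention-based comparison of the two scene features would be a natural alternative.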