An effective spatial relational reasoning networks for visual question answering

Bibliographic Details

Main Authors: Shen, Xiang, Han, Dezhi, Chen, Chongqing, Luo, Gaofeng, Wu, Zhongdai
Format: Online Article Text
Language: English
Published: Public Library of Science 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9704574/
https://www.ncbi.nlm.nih.gov/pubmed/36441742
http://dx.doi.org/10.1371/journal.pone.0277693
_version_ 1784840084594360320
author Shen, Xiang
Han, Dezhi
Chen, Chongqing
Luo, Gaofeng
Wu, Zhongdai
author_facet Shen, Xiang
Han, Dezhi
Chen, Chongqing
Luo, Gaofeng
Wu, Zhongdai
author_sort Shen, Xiang
collection PubMed
description Visual Question Answering (VQA) is the task of answering natural-language questions about the content of images and has attracted wide attention from researchers. Existing VQA research focuses mainly on attention mechanisms and multi-modal fusion; during image modeling it attends only to the visual semantic features of the image and ignores the importance of modeling the spatial relationships between visual objects. To address these problems, an effective spatial relationship reasoning network model is proposed that combines visual object semantic reasoning with spatial relationship reasoning to realize fine-grained multi-modal reasoning and fusion. In the semantic reasoning module, a sparse attention encoder is designed to capture contextual information effectively. In the spatial relationship reasoning module, a graph neural network attention mechanism is used to model the spatial relationships between visual objects, enabling the model to correctly answer questions that require complex spatial reasoning. Finally, a practical compact self-attention (CSA) mechanism is designed to reduce the redundancy of self-attention in its linear transformations and the number of model parameters, effectively improving the model’s overall performance. Quantitative and qualitative experiments are conducted on the VQA 2.0 and GQA benchmark datasets. The experimental results demonstrate that the proposed method performs favorably against state-of-the-art approaches. Our best single model achieves an overall accuracy of 71.18% on the VQA 2.0 dataset and 57.59% on the GQA dataset.
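The record above describes, but does not implement, graph-attention-style spatial relationship reasoning over detected visual objects. The following is a minimal illustrative Python/PyTorch sketch of one common way such reasoning is realized: attention over object features whose scores are biased by a learned function of pairwise bounding-box geometry. It is not the authors' code; the class and function names (SpatialRelationAttention, pairwise_box_geometry) and all design details are assumptions for illustration only.

    # Hypothetical sketch (not the paper's implementation): attention over detected
    # objects with logits biased by pairwise box geometry, one generic way to model
    # spatial relationships between visual objects.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    def pairwise_box_geometry(boxes: torch.Tensor) -> torch.Tensor:
        """boxes: (N, 4) as (x1, y1, x2, y2) -> (N, N, 4) relative geometry features."""
        cx = (boxes[:, 0] + boxes[:, 2]) / 2
        cy = (boxes[:, 1] + boxes[:, 3]) / 2
        w = (boxes[:, 2] - boxes[:, 0]).clamp(min=1e-3)
        h = (boxes[:, 3] - boxes[:, 1]).clamp(min=1e-3)
        # Log-scaled relative offsets and sizes, a common geometry encoding.
        dx = torch.log(torch.abs(cx[:, None] - cx[None, :]).clamp(min=1e-3) / w[:, None])
        dy = torch.log(torch.abs(cy[:, None] - cy[None, :]).clamp(min=1e-3) / h[:, None])
        dw = torch.log(w[None, :] / w[:, None])
        dh = torch.log(h[None, :] / h[:, None])
        return torch.stack([dx, dy, dw, dh], dim=-1)  # (N, N, 4)


    class SpatialRelationAttention(nn.Module):
        """Single-head attention whose scores are modulated by learned geometry biases."""

        def __init__(self, dim: int, geo_dim: int = 4):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)
            self.v = nn.Linear(dim, dim)
            self.geo = nn.Sequential(nn.Linear(geo_dim, dim), nn.ReLU(), nn.Linear(dim, 1))
            self.scale = dim ** -0.5

        def forward(self, feats: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
            # feats: (N, dim) object features; boxes: (N, 4) bounding boxes.
            q, k, v = self.q(feats), self.k(feats), self.v(feats)
            content = (q @ k.t()) * self.scale                             # (N, N) content scores
            geo_bias = self.geo(pairwise_box_geometry(boxes)).squeeze(-1)  # (N, N) geometry bias
            attn = F.softmax(content + geo_bias, dim=-1)
            return attn @ v                                                # (N, dim) relation-aware features


    # Toy usage: 6 detected objects with 512-d features and valid (x1, y1, x2, y2) boxes.
    feats = torch.randn(6, 512)
    xy = torch.rand(6, 2) * 50
    wh = torch.rand(6, 2) * 50 + 1.0
    boxes = torch.cat([xy, xy + wh], dim=-1)
    out = SpatialRelationAttention(512)(feats, boxes)
    print(out.shape)  # torch.Size([6, 512])

The sparse attention encoder and the compact self-attention (CSA) mechanism mentioned in the abstract are not detailed in this record, so they are not sketched here.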
format Online
Article
Text
id pubmed-9704574
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-9704574 2022-11-29 An effective spatial relational reasoning networks for visual question answering Shen, Xiang Han, Dezhi Chen, Chongqing Luo, Gaofeng Wu, Zhongdai PLoS One Research Article Visual Question Answering (VQA) is the task of answering natural-language questions about the content of images and has attracted wide attention from researchers. Existing VQA research focuses mainly on attention mechanisms and multi-modal fusion; during image modeling it attends only to the visual semantic features of the image and ignores the importance of modeling the spatial relationships between visual objects. To address these problems, an effective spatial relationship reasoning network model is proposed that combines visual object semantic reasoning with spatial relationship reasoning to realize fine-grained multi-modal reasoning and fusion. In the semantic reasoning module, a sparse attention encoder is designed to capture contextual information effectively. In the spatial relationship reasoning module, a graph neural network attention mechanism is used to model the spatial relationships between visual objects, enabling the model to correctly answer questions that require complex spatial reasoning. Finally, a practical compact self-attention (CSA) mechanism is designed to reduce the redundancy of self-attention in its linear transformations and the number of model parameters, effectively improving the model’s overall performance. Quantitative and qualitative experiments are conducted on the VQA 2.0 and GQA benchmark datasets. The experimental results demonstrate that the proposed method performs favorably against state-of-the-art approaches. Our best single model achieves an overall accuracy of 71.18% on the VQA 2.0 dataset and 57.59% on the GQA dataset. Public Library of Science 2022-11-28 /pmc/articles/PMC9704574/ /pubmed/36441742 http://dx.doi.org/10.1371/journal.pone.0277693 Text en © 2022 Shen et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Shen, Xiang
Han, Dezhi
Chen, Chongqing
Luo, Gaofeng
Wu, Zhongdai
An effective spatial relational reasoning networks for visual question answering
title An effective spatial relational reasoning networks for visual question answering
title_full An effective spatial relational reasoning networks for visual question answering
title_fullStr An effective spatial relational reasoning networks for visual question answering
title_full_unstemmed An effective spatial relational reasoning networks for visual question answering
title_short An effective spatial relational reasoning networks for visual question answering
title_sort effective spatial relational reasoning networks for visual question answering
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9704574/
https://www.ncbi.nlm.nih.gov/pubmed/36441742
http://dx.doi.org/10.1371/journal.pone.0277693
work_keys_str_mv AT shenxiang aneffectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT handezhi aneffectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT chenchongqing aneffectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT luogaofeng aneffectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT wuzhongdai aneffectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT shenxiang effectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT handezhi effectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT chenchongqing effectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT luogaofeng effectivespatialrelationalreasoningnetworksforvisualquestionanswering
AT wuzhongdai effectivespatialrelationalreasoningnetworksforvisualquestionanswering