
Multi-modal adaptive gated mechanism for visual question answering


Bibliographic Details
Main Authors: Xu, Yangshuyi, Zhang, Lin, Shen, Xiang
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10306234/
https://www.ncbi.nlm.nih.gov/pubmed/37379280
http://dx.doi.org/10.1371/journal.pone.0287557
author Xu, Yangshuyi
Zhang, Lin
Shen, Xiang
author_facet Xu, Yangshuyi
Zhang, Lin
Shen, Xiang
author_sort Xu, Yangshuyi
collection PubMed
description Visual Question Answering (VQA) is a multimodal task that uses natural language to ask and answer questions about image content. For multimodal tasks, obtaining accurate modality features is crucial. Existing research on VQA models approaches the problem mainly from the perspective of attention mechanisms and multimodal fusion, and tends to overlook how modality interaction learning and the noise introduced during modality fusion affect overall model performance. This paper proposes a novel and efficient multimodal adaptive gated mechanism model, MAGM. The model adds an adaptive gating mechanism to intra- and inter-modality learning and to the modality fusion process, which allows it to filter out irrelevant noise, obtain fine-grained modality features, and adaptively control how much each of the two modalities contributes to the predicted answer. In the intra- and inter-modality learning modules, self-attention gated and self-guided-attention gated units are designed to filter noise from text and image features. In the modality fusion module, an adaptive gated modality feature fusion structure is designed to obtain fine-grained modality features and improve answer accuracy. Quantitative and qualitative experiments on two VQA benchmark datasets, VQA 2.0 and GQA, show that the proposed method outperforms existing methods: MAGM achieves an overall accuracy of 71.30% on VQA 2.0 and 57.57% on GQA.
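The abstract describes two gating ideas: attention units whose outputs pass through a learned gate so that noisy text or image features are suppressed during intra- and inter-modality learning, and an adaptive gate at fusion time that controls how much each modality contributes to the predicted answer. Below is a minimal PyTorch sketch of both ideas, assuming sigmoid feature-wise gates and pooled single-vector modality features; the class names, dimensions, and exact gating formulation are illustrative assumptions rather than the authors' published implementation.

```python
import torch
import torch.nn as nn


class GatedSelfAttention(nn.Module):
    """Self-attention followed by a learned sigmoid gate that damps noisy
    features before the residual connection (hypothetical sketch of a
    'self-attention gated' unit)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(x, x, x)         # intra-modality interaction
        g = torch.sigmoid(self.gate(attended))   # per-feature gate in (0, 1)
        return x + g * attended                  # gated residual update


class AdaptiveGatedFusion(nn.Module):
    """Fuse pooled question and image vectors with an adaptive gate that
    decides, feature by feature, how much each modality contributes."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, q_feat: torch.Tensor, v_feat: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([q_feat, v_feat], dim=-1)))
        return g * q_feat + (1.0 - g) * v_feat   # convex feature-wise mixture


if __name__ == "__main__":
    tokens = torch.randn(2, 14, 512)               # e.g. 14 question tokens
    gated_tokens = GatedSelfAttention(512)(tokens)  # (2, 14, 512)

    q = gated_tokens.mean(dim=1)                    # pooled question vector
    v = torch.randn(2, 512)                         # pooled image vector
    fused = AdaptiveGatedFusion(512)(q, v)          # (2, 512) answer features
    print(fused.shape)
```

The convex combination in AdaptiveGatedFusion means that, for each feature dimension, whatever weight is not assigned to the question representation goes to the image representation, which is one simple way to realize the "adaptively control the contribution of the two modal features" behavior the abstract describes.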
format Online
Article
Text
id pubmed-10306234
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-10306234 2023-06-29 Multi-modal adaptive gated mechanism for visual question answering Xu, Yangshuyi Zhang, Lin Shen, Xiang PLoS One Research Article Visual Question Answering (VQA) is a multimodal task that uses natural language to ask and answer questions about image content. For multimodal tasks, obtaining accurate modality features is crucial. Existing research on VQA models approaches the problem mainly from the perspective of attention mechanisms and multimodal fusion, and tends to overlook how modality interaction learning and the noise introduced during modality fusion affect overall model performance. This paper proposes a novel and efficient multimodal adaptive gated mechanism model, MAGM. The model adds an adaptive gating mechanism to intra- and inter-modality learning and to the modality fusion process, which allows it to filter out irrelevant noise, obtain fine-grained modality features, and adaptively control how much each of the two modalities contributes to the predicted answer. In the intra- and inter-modality learning modules, self-attention gated and self-guided-attention gated units are designed to filter noise from text and image features. In the modality fusion module, an adaptive gated modality feature fusion structure is designed to obtain fine-grained modality features and improve answer accuracy. Quantitative and qualitative experiments on two VQA benchmark datasets, VQA 2.0 and GQA, show that the proposed method outperforms existing methods: MAGM achieves an overall accuracy of 71.30% on VQA 2.0 and 57.57% on GQA. Public Library of Science 2023-06-28 /pmc/articles/PMC10306234/ /pubmed/37379280 http://dx.doi.org/10.1371/journal.pone.0287557 Text en © 2023 Xu et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Xu, Yangshuyi
Zhang, Lin
Shen, Xiang
Multi-modal adaptive gated mechanism for visual question answering
title Multi-modal adaptive gated mechanism for visual question answering
title_full Multi-modal adaptive gated mechanism for visual question answering
title_fullStr Multi-modal adaptive gated mechanism for visual question answering
title_full_unstemmed Multi-modal adaptive gated mechanism for visual question answering
title_short Multi-modal adaptive gated mechanism for visual question answering
title_sort multi-modal adaptive gated mechanism for visual question answering
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10306234/
https://www.ncbi.nlm.nih.gov/pubmed/37379280
http://dx.doi.org/10.1371/journal.pone.0287557
work_keys_str_mv AT xuyangshuyi multimodaladaptivegatedmechanismforvisualquestionanswering
AT zhanglin multimodaladaptivegatedmechanismforvisualquestionanswering
AT shenxiang multimodaladaptivegatedmechanismforvisualquestionanswering