
Research on visual question answering based on dynamic memory network model of multiple attention mechanisms

Bibliographic Details
Main Authors: Miao, Yalin, He, Shuyun, Cheng, WenFang, Li, Guodong, Tong, Meng
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9537137/
https://www.ncbi.nlm.nih.gov/pubmed/36202900
http://dx.doi.org/10.1038/s41598-022-21149-9
collection PubMed
description Because existing visual question answering models lack long-term memory modules for answering complex questions, they easily lose effective information. To further improve the accuracy of visual question answering, this paper applies a multiple attention mechanism combining channel attention and spatial attention to memory networks for the first time, and proposes a dynamic memory network model based on this multiple attention mechanism (DMN-MA). In the episodic memory module, the model uses the multiple attention mechanism to obtain the visual vectors most relevant to answering the question through continuous memory updating, storage, and iterative inference over the question, and effectively uses contextual information for answer inference. Experimental results show that the model reaches accuracies of 64.57% and 67.18% on the large-scale public datasets COCO-QA and VQA2.0, respectively.
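The paper itself is not reproduced in this record, but the abstract's "multiple attention mechanism combining channel attention and spatial attention" can be sketched roughly as follows. This is a minimal NumPy illustration of the general pattern (channel gating followed by spatial gating, as in CBAM-style modules), not the authors' implementation: the learned MLP and convolution layers are omitted, and the shapes and ordering are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Squeeze the spatial dimensions by global average
    # pooling, then gate each channel with a sigmoid weight.
    # (A learned MLP would normally transform the pooled vector first.)
    pooled = feat.mean(axis=(1, 2))           # (C,)
    weights = sigmoid(pooled)                 # (C,), each in (0, 1)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    # feat: (C, H, W). Squeeze the channel dimension by averaging, then
    # gate each spatial position. (A learned conv would normally be used.)
    pooled = feat.mean(axis=0)                # (H, W)
    weights = sigmoid(pooled)                 # (H, W), each in (0, 1)
    return feat * weights[None, :, :]

def multiple_attention(feat):
    # Apply channel attention first, then spatial attention.
    return spatial_attention(channel_attention(feat))

# Example: a hypothetical 512-channel 14x14 CNN feature map of an image.
feat = np.random.rand(512, 14, 14)
out = multiple_attention(feat)
```

Because both gates lie in (0, 1), the output keeps the input's shape while attenuating less relevant channels and positions; in the DMN-MA, such reweighted visual vectors would feed the memory update at each inference iteration.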
id pubmed-9537137
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-9537137 2022-10-08 Sci Rep Article Nature Publishing Group UK 2022-10-06 /pmc/articles/PMC9537137/ /pubmed/36202900 http://dx.doi.org/10.1038/s41598-022-21149-9 Text en © The Author(s) 2022. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
topic Article