
Research on visual question answering based on dynamic memory network model of multiple attention mechanisms


Bibliographic Details
Main Authors: Miao, Yalin, He, Shuyun, Cheng, WenFang, Li, Guodong, Tong, Meng
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9537137/
https://www.ncbi.nlm.nih.gov/pubmed/36202900
http://dx.doi.org/10.1038/s41598-022-21149-9
Description
Summary: Because existing visual question answering models lack a long-term memory module for answering complex questions, effective information is easily lost. To further improve the accuracy of visual question answering, this paper applies a multiple attention mechanism, combining channel attention and spatial attention, to memory networks for the first time and proposes a dynamic memory network model based on this multiple attention mechanism (DMN-MA). In its episodic memory module, the model uses the multiple attention mechanism to select the visual vectors most relevant to the question through continuous memory updating, storage, and iterative inference, and it effectively exploits contextual information for answer inference. Experimental results show that the model reaches accuracies of 64.57% and 67.18% on the large-scale public datasets COCO-QA and VQA2.0, respectively.
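The combined channel-plus-spatial attention the summary describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the two-layer gating MLP weights `w1`/`w2` are placeholder parameters, the spatial gate here simply sums mean- and max-pooled maps instead of using a learned convolution, and the CBAM-style ordering (channel attention first, then spatial) is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Gate each channel of feat (C, H, W) by a weight in (0, 1).

    Spatial dims are squeezed by average and max pooling; each pooled
    vector passes through a shared two-layer MLP (w1, w2), the results
    are summed, and a sigmoid produces per-channel gates.
    """
    avg = feat.mean(axis=(1, 2))                      # (C,)
    mx = feat.max(axis=(1, 2))                        # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                   + w2 @ np.maximum(w1 @ mx, 0.0))   # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location of feat (C, H, W) by a weight in (0, 1).

    Channels are squeezed by mean and max pooling; a sigmoid of their sum
    stands in for the learned convolution a real model would apply.
    """
    gate = sigmoid(feat.mean(axis=0) + feat.max(axis=0))  # (H, W)
    return feat * gate[None, :, :]

def multiple_attention(feat, w1, w2):
    """Channel attention followed by spatial attention on one feature map."""
    return spatial_attention(channel_attention(feat, w1, w2))
```

Because both gates lie strictly in (0, 1), the output is an elementwise down-weighted copy of the input feature map; in the full model these reweighted visual features would then feed the episodic memory module's iterative inference.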