
The multi-modal fusion in visual question answering: a review of attention mechanisms

Visual Question Answering (VQA) is a significant cross-disciplinary problem at the intersection of computer vision and natural language processing: given an image and a question posed about that image, a computer must output a natural-language answer. This requires the simultaneous processing and multimodal fusion of text features and visual features, and the key component that ensures its success is the attention mechanism. Introducing attention mechanisms makes it possible to better integrate text features and image features into a compact multi-modal representation. It is therefore necessary to clarify the development status of attention mechanisms, understand the most advanced attention methods, and anticipate their future directions. In this article, we first conduct a bibliometric analysis with CiteSpace, from which we find, and reasonably speculate, that the attention mechanism has great development potential in cross-modal retrieval. Second, we discuss the classification and application of existing attention mechanisms in VQA tasks, analyze their shortcomings, and summarize current improvement methods. Finally, through the continued exploration of attention mechanisms, we believe that VQA will evolve in a smarter, more human-like direction.
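To make the fusion step concrete, below is a minimal sketch of question-guided attention over image region features, in the spirit of the mechanisms this review surveys. It is not the authors' method: the module names, dimensions, and the concatenation-based fusion at the end are all illustrative assumptions, written here in PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedAttention(nn.Module):
    """Scores each image region against the question, then pools a fused vector."""
    def __init__(self, img_dim=2048, q_dim=512, hidden=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)  # project region features
        self.q_proj = nn.Linear(q_dim, hidden)      # project question encoding
        self.score = nn.Linear(hidden, 1)           # one attention logit per region

    def forward(self, regions, question):
        # regions: (batch, num_regions, img_dim); question: (batch, q_dim)
        joint = torch.tanh(self.img_proj(regions) + self.q_proj(question).unsqueeze(1))
        weights = F.softmax(self.score(joint), dim=1)   # attention over regions
        attended = (weights * regions).sum(dim=1)       # question-weighted image vector
        # compact multi-modal representation: fuse attended image with the question
        fused = torch.cat([self.img_proj(attended), self.q_proj(question)], dim=-1)
        return fused, weights.squeeze(-1)

# Usage with random stand-ins for CNN region features and an RNN question encoding
att = QuestionGuidedAttention()
regions = torch.randn(4, 36, 2048)   # e.g., 36 detected regions per image
question = torch.randn(4, 512)
fused, weights = att(regions, question)
print(fused.shape, weights.shape)    # torch.Size([4, 1024]) torch.Size([4, 36])
```

The softmax weights show which regions the model attends to for a given question; the weighted sum plus concatenation is one simple instance of the "compact multi-modal representation" the abstract refers to.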


Bibliographic Details
Main Authors: Lu, Siyu, Liu, Mingzhe, Yin, Lirong, Yin, Zhengtong, Liu, Xuan, Zheng, Wenfeng
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2023
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280591/
https://www.ncbi.nlm.nih.gov/pubmed/37346665
http://dx.doi.org/10.7717/peerj-cs.1400
_version_ 1785060829744332800
author Lu, Siyu
Liu, Mingzhe
Yin, Lirong
Yin, Zhengtong
Liu, Xuan
Zheng, Wenfeng
author_facet Lu, Siyu
Liu, Mingzhe
Yin, Lirong
Yin, Zhengtong
Liu, Xuan
Zheng, Wenfeng
author_sort Lu, Siyu
collection PubMed
description Visual Question Answering (VQA) is a significant cross-disciplinary problem at the intersection of computer vision and natural language processing: given an image and a question posed about that image, a computer must output a natural-language answer. This requires the simultaneous processing and multimodal fusion of text features and visual features, and the key component that ensures its success is the attention mechanism. Introducing attention mechanisms makes it possible to better integrate text features and image features into a compact multi-modal representation. It is therefore necessary to clarify the development status of attention mechanisms, understand the most advanced attention methods, and anticipate their future directions. In this article, we first conduct a bibliometric analysis with CiteSpace, from which we find, and reasonably speculate, that the attention mechanism has great development potential in cross-modal retrieval. Second, we discuss the classification and application of existing attention mechanisms in VQA tasks, analyze their shortcomings, and summarize current improvement methods. Finally, through the continued exploration of attention mechanisms, we believe that VQA will evolve in a smarter, more human-like direction.
format Online
Article
Text
id pubmed-10280591
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-10280591 2023-06-21 The multi-modal fusion in visual question answering: a review of attention mechanisms Lu, Siyu Liu, Mingzhe Yin, Lirong Yin, Zhengtong Liu, Xuan Zheng, Wenfeng PeerJ Comput Sci Artificial Intelligence Visual Question Answering (VQA) is a significant cross-disciplinary problem at the intersection of computer vision and natural language processing: given an image and a question posed about that image, a computer must output a natural-language answer. This requires the simultaneous processing and multimodal fusion of text features and visual features, and the key component that ensures its success is the attention mechanism. Introducing attention mechanisms makes it possible to better integrate text features and image features into a compact multi-modal representation. It is therefore necessary to clarify the development status of attention mechanisms, understand the most advanced attention methods, and anticipate their future directions. In this article, we first conduct a bibliometric analysis with CiteSpace, from which we find, and reasonably speculate, that the attention mechanism has great development potential in cross-modal retrieval. Second, we discuss the classification and application of existing attention mechanisms in VQA tasks, analyze their shortcomings, and summarize current improvement methods. Finally, through the continued exploration of attention mechanisms, we believe that VQA will evolve in a smarter, more human-like direction. PeerJ Inc. 2023-05-30 /pmc/articles/PMC10280591/ /pubmed/37346665 http://dx.doi.org/10.7717/peerj-cs.1400 Text en ©2023 Lu et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Lu, Siyu
Liu, Mingzhe
Yin, Lirong
Yin, Zhengtong
Liu, Xuan
Zheng, Wenfeng
The multi-modal fusion in visual question answering: a review of attention mechanisms
title The multi-modal fusion in visual question answering: a review of attention mechanisms
title_full The multi-modal fusion in visual question answering: a review of attention mechanisms
title_fullStr The multi-modal fusion in visual question answering: a review of attention mechanisms
title_full_unstemmed The multi-modal fusion in visual question answering: a review of attention mechanisms
title_short The multi-modal fusion in visual question answering: a review of attention mechanisms
title_sort multi-modal fusion in visual question answering: a review of attention mechanisms
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280591/
https://www.ncbi.nlm.nih.gov/pubmed/37346665
http://dx.doi.org/10.7717/peerj-cs.1400
work_keys_str_mv AT lusiyu themultimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT liumingzhe themultimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT yinlirong themultimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT yinzhengtong themultimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT liuxuan themultimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT zhengwenfeng themultimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT lusiyu multimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT liumingzhe multimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT yinlirong multimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT yinzhengtong multimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT liuxuan multimodalfusioninvisualquestionansweringareviewofattentionmechanisms
AT zhengwenfeng multimodalfusioninvisualquestionansweringareviewofattentionmechanisms