
Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels


Bibliographic Details
Main Authors: Winterbottom, Thomas, Xiao, Sarah, McLean, Alistair, Al Moubayed, Noura
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9202627/
https://www.ncbi.nlm.nih.gov/pubmed/35721409
http://dx.doi.org/10.7717/peerj-cs.974
_version_ 1784728570608746496
author Winterbottom, Thomas
Xiao, Sarah
McLean, Alistair
Al Moubayed, Noura
author_facet Winterbottom, Thomas
Xiao, Sarah
McLean, Alistair
Al Moubayed, Noura
author_sort Winterbottom, Thomas
collection PubMed
description Bilinear pooling (BLP) refers to a family of operations recently developed for fusing features from different modalities, predominantly for visual question answering (VQA) models. Successive BLP techniques have yielded higher performance at lower computational expense, yet at the same time they have drifted further from the original motivational justification of bilinear models, instead becoming empirically motivated by task performance. Furthermore, despite significant success in text-image fusion for VQA, BLP has not gained comparable traction in video question answering (video-QA). Though BLP methods have continued to perform well on video tasks when fusing vision and non-textual features, BLP has recently been overshadowed by other vision-text fusion techniques in video-QA. We aim to add a new perspective on the empirical and motivational drift in BLP. We take a step back and discuss the motivational origins of BLP, highlighting the often-overlooked parallels to neurological theories (Dual Coding Theory and the Two-Stream Model of Vision). We carefully and experimentally ascertain the empirical strengths and limitations of BLP as a multimodal text-vision fusion technique in video-QA using two models (the TVQA baseline and the heterogeneous-memory-enhanced ‘HME’ model) and four datasets (TVQA, TGIF-QA, MSVD-QA, and EgoVQA). We examine the impact both of simply replacing feature concatenation in the existing models with BLP, and of a modified version of the TVQA baseline, which we name the ‘dual-stream’ model, designed to accommodate BLP. We find that our relatively simple integration of BLP does not increase, and mostly harms, performance on these video-QA benchmarks. Drawing on our results, recent work in BLP for video-QA, and recently proposed theoretical multimodal fusion taxonomies, we offer insight into why BLP-driven performance gains may be more difficult to achieve for video-QA benchmarks than for earlier VQA models.
We share our perspective on, and suggest solutions for, the key issues we identify with BLP techniques for multimodal fusion in video-QA. We look beyond the empirical justification of BLP techniques and propose both alternatives and improvements to multimodal fusion by drawing neurological inspiration from Dual Coding Theory and the Two-Stream Model of Vision. We qualitatively highlight the potential for neurological inspiration in video-QA by identifying the relative abundance of psycholinguistically ‘concrete’ words in the vocabularies of each text component (e.g., questions and answers) of the four video-QA datasets we experiment with.
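The abstract contrasts plain feature concatenation (the fusion used in the original TVQA and HME models) with bilinear pooling. As a minimal sketch of the distinction, the snippet below implements a low-rank ('MLB-style') bilinear fusion next to concatenation: each modality is projected into a shared space and the two projections are combined with an element-wise (Hadamard) product, approximating a full bilinear map without its cubic parameter cost. All dimensions, weights, and variable names here are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d_text, d_vis, d_joint = 300, 512, 256

# Stand-ins for a text feature (e.g., a question embedding) and a
# visual feature (e.g., a pooled frame feature).
t = rng.standard_normal(d_text)
v = rng.standard_normal(d_vis)

# Baseline fusion: simple concatenation of the two feature vectors.
concat = np.concatenate([t, v])  # shape: (d_text + d_vis,) = (812,)

# Low-rank bilinear pooling (MLB-style): project both modalities into a
# shared d_joint-dimensional space, then take the Hadamard product.
# This approximates a full bilinear form t^T W v per output unit while
# needing only (d_text + d_vis) * d_joint parameters.
U = rng.standard_normal((d_text, d_joint)) * 0.01  # illustrative weights
V = rng.standard_normal((d_vis, d_joint)) * 0.01
blp = np.tanh(t @ U) * np.tanh(v @ V)  # shape: (d_joint,) = (256,)

print(concat.shape, blp.shape)
```

Note that concatenation keeps the modalities in disjoint coordinates (any interaction must be learned by later layers), whereas the bilinear product makes every output unit depend multiplicatively on both modalities at once.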
format Online
Article
Text
id pubmed-9202627
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-92026272022-06-17 Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels Winterbottom, Thomas Xiao, Sarah McLean, Alistair Al Moubayed, Noura PeerJ Comput Sci Artificial Intelligence PeerJ Inc. 2022-06-03 /pmc/articles/PMC9202627/ /pubmed/35721409 http://dx.doi.org/10.7717/peerj-cs.974 Text en ©2022 Winterbottom et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Winterbottom, Thomas
Xiao, Sarah
McLean, Alistair
Al Moubayed, Noura
Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels
title Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels
title_full Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels
title_fullStr Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels
title_full_unstemmed Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels
title_short Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels
title_sort bilinear pooling in video-qa: empirical challenges and motivational drift from neurological parallels
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9202627/
https://www.ncbi.nlm.nih.gov/pubmed/35721409
http://dx.doi.org/10.7717/peerj-cs.974
work_keys_str_mv AT winterbottomthomas bilinearpoolinginvideoqaempiricalchallengesandmotivationaldriftfromneurologicalparallels
AT xiaosarah bilinearpoolinginvideoqaempiricalchallengesandmotivationaldriftfromneurologicalparallels
AT mcleanalistair bilinearpoolinginvideoqaempiricalchallengesandmotivationaldriftfromneurologicalparallels
AT almoubayednoura bilinearpoolinginvideoqaempiricalchallengesandmotivationaldriftfromneurologicalparallels