
On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval


Bibliographic Details
Main Authors: Gong, Yan, Cosma, Georgina, Fang, Hui
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404943/
https://www.ncbi.nlm.nih.gov/pubmed/34460761
http://dx.doi.org/10.3390/jimaging7080125
_version_ 1783746239352799232
author Gong, Yan
Cosma, Georgina
Fang, Hui
author_facet Gong, Yan
Cosma, Georgina
Fang, Hui
author_sort Gong, Yan
collection PubMed
description Visual-semantic embedding (VSE) networks create joint image–text representations to map images and texts in a shared embedding space to enable various information retrieval-related tasks, such as image–text retrieval, image captioning, and visual question answering. The most recent state-of-the-art VSE-based networks are: VSE++, SCAN, VSRN, and UNITER. This study evaluates the performance of those VSE networks for the task of image-to-text retrieval and identifies and analyses their strengths and limitations to guide future research on the topic. The experimental results on Flickr30K revealed that the pre-trained network, UNITER, achieved 61.5% on average Recall@5 for the task of retrieving all relevant descriptions. The traditional networks, VSRN, SCAN, and VSE++, achieved 50.3%, 47.1%, and 29.4% on average Recall@5, respectively, for the same task. An additional analysis was performed on image–text pairs from the top 25 worst-performing classes using a subset of the Flickr30K-based dataset to identify the limitations of the performance of the best-performing models, VSRN and UNITER. These limitations are discussed from the perspective of image scenes, image objects, image semantics, and basic functions of neural networks. This paper discusses the strengths and limitations of VSE networks to guide further research into the topic of using VSE networks for cross-modal information retrieval tasks.
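Note: the description above reports average Recall@5 for retrieving all relevant descriptions of an image. As an illustration only (this is not the authors' code; the function name, array shapes, and the 5-captions-per-image assumption for Flickr30K are assumptions), a minimal Python sketch of one common way to compute this metric is:

import numpy as np

def recall_at_k(similarity, relevant_texts, k=5):
    """Mean Recall@K for image-to-text retrieval (illustrative sketch).

    similarity     : (n_images, n_texts) array of image-text scores
                     from a VSE network, higher meaning more similar.
    relevant_texts : relevant_texts[i] is the list of text indices that
                     describe image i (e.g. 5 captions per Flickr30K image).
    Returns the average, over images, of the fraction of an image's
    relevant texts that appear among its top-k retrieved texts.
    """
    recalls = []
    for i, relevant in enumerate(relevant_texts):
        top_k = np.argsort(-similarity[i])[:k]            # k best-scoring texts
        hits = len(set(top_k.tolist()) & set(relevant))   # relevant texts retrieved
        recalls.append(hits / len(relevant))
    return float(np.mean(recalls))

# Toy run: 2 images, 10 candidate texts, 5 relevant texts per image.
rng = np.random.default_rng(0)
scores = rng.random((2, 10))
ground_truth = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
print(recall_at_k(scores, ground_truth, k=5))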
format Online
Article
Text
id pubmed-8404943
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-84049432021-10-28 On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval Gong, Yan Cosma, Georgina Fang, Hui J Imaging Article Visual-semantic embedding (VSE) networks create joint image–text representations to map images and texts in a shared embedding space to enable various information retrieval-related tasks, such as image–text retrieval, image captioning, and visual question answering. The most recent state-of-the-art VSE-based networks are: VSE++, SCAN, VSRN, and UNITER. This study evaluates the performance of those VSE networks for the task of image-to-text retrieval and identifies and analyses their strengths and limitations to guide future research on the topic. The experimental results on Flickr30K revealed that the pre-trained network, UNITER, achieved 61.5% on average Recall@5 for the task of retrieving all relevant descriptions. The traditional networks, VSRN, SCAN, and VSE++, achieved 50.3%, 47.1%, and 29.4% on average Recall@5, respectively, for the same task. An additional analysis was performed on image–text pairs from the top 25 worst-performing classes using a subset of the Flickr30K-based dataset to identify the limitations of the performance of the best-performing models, VSRN and UNITER. These limitations are discussed from the perspective of image scenes, image objects, image semantics, and basic functions of neural networks. This paper discusses the strengths and limitations of VSE networks to guide further research into the topic of using VSE networks for cross-modal information retrieval tasks. MDPI 2021-07-26 /pmc/articles/PMC8404943/ /pubmed/34460761 http://dx.doi.org/10.3390/jimaging7080125 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Gong, Yan
Cosma, Georgina
Fang, Hui
On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval
title On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval
title_full On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval
title_fullStr On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval
title_full_unstemmed On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval
title_short On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval
title_sort on the limitations of visual-semantic embedding networks for image-to-text information retrieval
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404943/
https://www.ncbi.nlm.nih.gov/pubmed/34460761
http://dx.doi.org/10.3390/jimaging7080125
work_keys_str_mv AT gongyan onthelimitationsofvisualsemanticembeddingnetworksforimagetotextinformationretrieval
AT cosmageorgina onthelimitationsofvisualsemanticembeddingnetworksforimagetotextinformationretrieval
AT fanghui onthelimitationsofvisualsemanticembeddingnetworksforimagetotextinformationretrieval