Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning
Information retrieval across multiple modalities has attracted much attention from academics and practitioners. One key challenge of cross-modal retrieval is bridging the heterogeneity gap between modalities. Most existing methods jointly construct a common subspace, but very little attention has been paid to the varying importance of the different fine-grained regions within each modality, which significantly limits how well the extracted multimodal information is used. This study therefore proposes a novel text-image cross-modal retrieval approach that constructs a dual attention network and an enhanced relation network (DAER). Specifically, the dual attention network precisely extracts fine-grained weight information from text and images, while the enhanced relation network widens the differences between data from different categories in order to improve the accuracy of similarity computation. Comprehensive experimental results on three widely used datasets (Wikipedia, Pascal Sentence, and XMediaNet) show that the proposed approach is effective and outperforms existing cross-modal retrieval methods.
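The abstract describes the DAER pipeline only at a high level: one attention branch per modality weights fine-grained units (image regions, text words), and a relation network scores text-image pairs. As a rough, hypothetical sketch of that idea (not the authors' implementation: the module names, the 512-dimensional features, and the concatenation-based MLP scoring are all assumptions for illustration), a minimal PyTorch version could look like this:

```python
# Illustrative sketch only -- not the authors' DAER code. Assumes
# pre-extracted region features (images) and word features (text).
import torch
import torch.nn as nn

class ModalAttention(nn.Module):
    """Weights each fine-grained unit (region or word) and pools them."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, feats):                   # feats: (batch, n_units, dim)
        weights = torch.softmax(self.score(feats), dim=1)  # per-unit weights
        return (weights * feats).sum(dim=1)     # attended embedding: (batch, dim)

class RelationNetwork(nn.Module):
    """Learns a similarity score for a text-image embedding pair."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid())    # score in [0, 1]

    def forward(self, img_emb, txt_emb):
        pair = torch.cat([img_emb, txt_emb], dim=-1)
        return self.mlp(pair).squeeze(-1)

# "Dual" attention: one independent attention branch per modality.
img_att, txt_att = ModalAttention(512), ModalAttention(512)
relation = RelationNetwork(512)

img_regions = torch.randn(8, 36, 512)   # e.g., 36 region features per image
txt_words = torch.randn(8, 20, 512)     # e.g., 20 word features per caption
similarity = relation(img_att(img_regions), txt_att(txt_words))  # shape: (8,)
```

In this reading, "dual" simply means one attention branch per modality, and the relation network learns the similarity function rather than relying on a fixed metric such as cosine distance.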
Main Authors: | Huang, Zhao; Hu, Haowu; Su, Miao |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10452985/ https://www.ncbi.nlm.nih.gov/pubmed/37628246 http://dx.doi.org/10.3390/e25081216 |
_version_ | 1785095807518638080 |
---|---|
author | Huang, Zhao; Hu, Haowu; Su, Miao
author_facet | Huang, Zhao; Hu, Haowu; Su, Miao
author_sort | Huang, Zhao |
collection | PubMed |
description | Information retrieval across multiple modalities has attracted much attention from academics and practitioners. One key challenge of cross-modal retrieval is bridging the heterogeneity gap between modalities. Most existing methods jointly construct a common subspace, but very little attention has been paid to the varying importance of the different fine-grained regions within each modality, which significantly limits how well the extracted multimodal information is used. This study therefore proposes a novel text-image cross-modal retrieval approach that constructs a dual attention network and an enhanced relation network (DAER). Specifically, the dual attention network precisely extracts fine-grained weight information from text and images, while the enhanced relation network widens the differences between data from different categories in order to improve the accuracy of similarity computation. Comprehensive experimental results on three widely used datasets (Wikipedia, Pascal Sentence, and XMediaNet) show that the proposed approach is effective and outperforms existing cross-modal retrieval methods. |
format | Online Article Text |
id | pubmed-10452985 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10452985 2023-08-26 Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning Huang, Zhao; Hu, Haowu; Su, Miao Entropy (Basel) Article Information retrieval across multiple modalities has attracted much attention from academics and practitioners. One key challenge of cross-modal retrieval is bridging the heterogeneity gap between modalities. Most existing methods jointly construct a common subspace, but very little attention has been paid to the varying importance of the different fine-grained regions within each modality, which significantly limits how well the extracted multimodal information is used. This study therefore proposes a novel text-image cross-modal retrieval approach that constructs a dual attention network and an enhanced relation network (DAER). Specifically, the dual attention network precisely extracts fine-grained weight information from text and images, while the enhanced relation network widens the differences between data from different categories in order to improve the accuracy of similarity computation. Comprehensive experimental results on three widely used datasets (Wikipedia, Pascal Sentence, and XMediaNet) show that the proposed approach is effective and outperforms existing cross-modal retrieval methods. MDPI 2023-08-16 /pmc/articles/PMC10452985/ /pubmed/37628246 http://dx.doi.org/10.3390/e25081216 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Huang, Zhao Hu, Haowu Su, Miao Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning |
title | Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning |
title_full | Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning |
title_fullStr | Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning |
title_full_unstemmed | Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning |
title_short | Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning |
title_sort | hybrid daer based cross-modal retrieval exploiting deep representation learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10452985/ https://www.ncbi.nlm.nih.gov/pubmed/37628246 http://dx.doi.org/10.3390/e25081216 |
work_keys_str_mv | AT huangzhao hybriddaerbasedcrossmodalretrievalexploitingdeeprepresentationlearning AT huhaowu hybriddaerbasedcrossmodalretrievalexploitingdeeprepresentationlearning AT sumiao hybriddaerbasedcrossmodalretrievalexploitingdeeprepresentationlearning |