A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval

Cross-Modal Hashing (CMH) retrieval methods have garnered increasing attention within the information retrieval research community due to their capability to deal with large amounts of data thanks to the computational efficiency of hash-based methods. To date, the focus of cross-modal hashing methods has been on training with paired data. Paired data refers to samples with one-to-one correspondence across modalities, e.g., image and text pairs where the text sample describes the image. However, real-world applications produce unpaired data that cannot be utilised by most current CMH methods during the training process. Models that can learn from unpaired data are crucial for real-world applications such as cross-modal neural information retrieval where paired data is limited or not available to train the model. This paper provides (1) an overview of the CMH methods when applied to unpaired datasets, (2) proposes a framework that enables pairwise-constrained CMH methods to train with unpaired samples, and (3) evaluates the performance of state-of-the-art CMH methods across different pairing scenarios.


Bibliographic Details
Main Authors: Williams-Lekuona, Mikel, Cosma, Georgina, Phillips, Iain
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9785405/
https://www.ncbi.nlm.nih.gov/pubmed/36547493
http://dx.doi.org/10.3390/jimaging8120328
_version_ 1784858040601673728
author Williams-Lekuona, Mikel
Cosma, Georgina
Phillips, Iain
author_facet Williams-Lekuona, Mikel
Cosma, Georgina
Phillips, Iain
author_sort Williams-Lekuona, Mikel
collection PubMed
description Cross-Modal Hashing (CMH) retrieval methods have garnered increasing attention within the information retrieval research community due to their capability to deal with large amounts of data thanks to the computational efficiency of hash-based methods. To date, the focus of cross-modal hashing methods has been on training with paired data. Paired data refers to samples with one-to-one correspondence across modalities, e.g., image and text pairs where the text sample describes the image. However, real-world applications produce unpaired data that cannot be utilised by most current CMH methods during the training process. Models that can learn from unpaired data are crucial for real-world applications such as cross-modal neural information retrieval where paired data is limited or not available to train the model. This paper provides (1) an overview of the CMH methods when applied to unpaired datasets, (2) proposes a framework that enables pairwise-constrained CMH methods to train with unpaired samples, and (3) evaluates the performance of state-of-the-art CMH methods across different pairing scenarios.
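The abstract above refers to the computational efficiency of hash-based retrieval: both modalities are mapped to compact binary codes, and cross-modal search reduces to cheap Hamming-distance comparisons. The following sketch is not the paper's method; it is a generic illustration of that idea, where the random projections standing in for trained image and text hashing networks are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_binary_code(features, projection):
    """Map real-valued features to a {0,1} hash code via the sign of a projection."""
    return (features @ projection > 0).astype(np.uint8)

n_bits, dim = 16, 8
proj_img = rng.standard_normal((dim, n_bits))  # hypothetical image hash function
proj_txt = rng.standard_normal((dim, n_bits))  # hypothetical text hash function

image_db = rng.standard_normal((100, dim))     # 100 database image feature vectors
text_query = rng.standard_normal(dim)          # one text query feature vector

db_codes = to_binary_code(image_db, proj_img)
query_code = to_binary_code(text_query, proj_txt)

# Hamming distance = number of differing bits between two binary codes.
hamming = np.count_nonzero(db_codes != query_code, axis=1)
top5 = np.argsort(hamming)[:5]                 # indices of the 5 nearest images
```

In a trained CMH model the two projections would be learned so that semantically related image/text pairs receive nearby codes; the retrieval step itself stays this simple, which is what makes hashing attractive at scale.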
format Online
Article
Text
id pubmed-9785405
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-97854052022-12-24 A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval Williams-Lekuona, Mikel Cosma, Georgina Phillips, Iain J Imaging Article Cross-Modal Hashing (CMH) retrieval methods have garnered increasing attention within the information retrieval research community due to their capability to deal with large amounts of data thanks to the computational efficiency of hash-based methods. To date, the focus of cross-modal hashing methods has been on training with paired data. Paired data refers to samples with one-to-one correspondence across modalities, e.g., image and text pairs where the text sample describes the image. However, real-world applications produce unpaired data that cannot be utilised by most current CMH methods during the training process. Models that can learn from unpaired data are crucial for real-world applications such as cross-modal neural information retrieval where paired data is limited or not available to train the model. This paper provides (1) an overview of the CMH methods when applied to unpaired datasets, (2) proposes a framework that enables pairwise-constrained CMH methods to train with unpaired samples, and (3) evaluates the performance of state-of-the-art CMH methods across different pairing scenarios. MDPI 2022-12-15 /pmc/articles/PMC9785405/ /pubmed/36547493 http://dx.doi.org/10.3390/jimaging8120328 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Williams-Lekuona, Mikel
Cosma, Georgina
Phillips, Iain
A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval
title A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval
title_full A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval
title_fullStr A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval
title_full_unstemmed A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval
title_short A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval
title_sort framework for enabling unpaired multi-modal learning for deep cross-modal hashing retrieval
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9785405/
https://www.ncbi.nlm.nih.gov/pubmed/36547493
http://dx.doi.org/10.3390/jimaging8120328
work_keys_str_mv AT williamslekuonamikel aframeworkforenablingunpairedmultimodallearningfordeepcrossmodalhashingretrieval
AT cosmageorgina aframeworkforenablingunpairedmultimodallearningfordeepcrossmodalhashingretrieval
AT phillipsiain aframeworkforenablingunpairedmultimodallearningfordeepcrossmodalhashingretrieval
AT williamslekuonamikel frameworkforenablingunpairedmultimodallearningfordeepcrossmodalhashingretrieval
AT cosmageorgina frameworkforenablingunpairedmultimodallearningfordeepcrossmodalhashingretrieval
AT phillipsiain frameworkforenablingunpairedmultimodallearningfordeepcrossmodalhashingretrieval