Cross-modal semantic autoencoder with embedding consensus


Bibliographic Details
Main Authors: Sun, Shengzi, Guo, Binghui, Mi, Zhilong, Zheng, Zhiming
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8514517/
https://www.ncbi.nlm.nih.gov/pubmed/34645836
http://dx.doi.org/10.1038/s41598-021-92750-7
_version_ 1784583408021667840
author Sun, Shengzi
Guo, Binghui
Mi, Zhilong
Zheng, Zhiming
author_facet Sun, Shengzi
Guo, Binghui
Mi, Zhilong
Zheng, Zhiming
author_sort Sun, Shengzi
collection PubMed
description Cross-modal retrieval has become a popular research topic, since multi-modal data are heterogeneous and the similarities between different forms of information deserve attention. Traditional single-modal methods reconstruct the original information but fail to consider the semantic similarity between different kinds of data. In this work, a cross-modal semantic autoencoder with embedding consensus (CSAEC) is proposed, which maps the original data to a low-dimensional shared space that retains semantic information. To account for the similarity between modalities, an autoencoder is used to associate each feature projection with a semantic code vector. In addition, regularization and sparsity constraints are applied to the low-dimensional matrices to balance the reconstruction errors. The high-dimensional data are transformed into semantic code vectors, and the models for the different modalities are constrained by shared parameters to achieve denoising. Experiments on four multi-modal data sets show that query results are improved and effective cross-modal retrieval is achieved. Further, CSAEC can also be applied to related fields such as deep learning and subspace learning. The model overcomes obstacles in traditional methods by using deep learning to convert multi-modal data into abstract representations, which yields better accuracy and better recognition results.
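The core idea in the abstract — projecting heterogeneous modalities into a shared low-dimensional semantic space and retrieving across modalities by similarity there — can be sketched with a minimal NumPy example. This is an illustrative sketch only, not the authors' CSAEC implementation: the toy data, dimensions, and the closed-form ridge-regression encoders (standing in for the paper's regularized autoencoder projections) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-modal data: 100 paired samples with image features (64-d) and
# text features (32-d), generated from a shared 10-d semantic code space.
# All shapes and noise levels are illustrative assumptions.
n, d_img, d_txt, d_sem = 100, 64, 32, 10
S = rng.standard_normal((n, d_sem))  # shared semantic codes
X = S @ rng.standard_normal((d_sem, d_img)) + 0.1 * rng.standard_normal((n, d_img))
Y = S @ rng.standard_normal((d_sem, d_txt)) + 0.1 * rng.standard_normal((n, d_txt))

def ridge_encoder(F, S, lam=1e-2):
    """Closed-form ridge regression mapping features F to semantic codes S.
    The l2 penalty plays the role of the regularization term that balances
    reconstruction error in the objective described by the abstract."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ S)

W_img = ridge_encoder(X, S)  # image -> shared semantic space
W_txt = ridge_encoder(Y, S)  # text  -> shared semantic space

def l2n(A):
    # Row-normalize so dot products become cosine similarities.
    return A / np.linalg.norm(A, axis=1, keepdims=True)

# Cross-modal retrieval: embed both modalities, rank text items for each
# image query by cosine similarity in the shared space.
img_codes, txt_codes = l2n(X @ W_img), l2n(Y @ W_txt)
sim = img_codes @ txt_codes.T          # (n image queries) x (n text items)
ranks = np.argsort(-sim, axis=1)
recall_at_1 = np.mean(ranks[:, 0] == np.arange(n))
print(f"image->text Recall@1: {recall_at_1:.2f}")
```

Because both encoders target the same semantic codes, paired image/text items land near each other in the shared space, which is what makes cross-modal ranking meaningful; the paper's full model additionally imposes sparsity and cross-modal consensus constraints that this linear sketch omits.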
format Online
Article
Text
id pubmed-8514517
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-8514517 2021-10-14 Cross-modal semantic autoencoder with embedding consensus Sun, Shengzi Guo, Binghui Mi, Zhilong Zheng, Zhiming Sci Rep Article
Nature Publishing Group UK 2021-10-13 /pmc/articles/PMC8514517/ /pubmed/34645836 http://dx.doi.org/10.1038/s41598-021-92750-7 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Sun, Shengzi
Guo, Binghui
Mi, Zhilong
Zheng, Zhiming
Cross-modal semantic autoencoder with embedding consensus
title Cross-modal semantic autoencoder with embedding consensus
title_full Cross-modal semantic autoencoder with embedding consensus
title_fullStr Cross-modal semantic autoencoder with embedding consensus
title_full_unstemmed Cross-modal semantic autoencoder with embedding consensus
title_short Cross-modal semantic autoencoder with embedding consensus
title_sort cross-modal semantic autoencoder with embedding consensus
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8514517/
https://www.ncbi.nlm.nih.gov/pubmed/34645836
http://dx.doi.org/10.1038/s41598-021-92750-7
work_keys_str_mv AT sunshengzi crossmodalsemanticautoencoderwithembeddingconsensus
AT guobinghui crossmodalsemanticautoencoderwithembeddingconsensus
AT mizhilong crossmodalsemanticautoencoderwithembeddingconsensus
AT zhengzhiming crossmodalsemanticautoencoderwithembeddingconsensus