
Margin-Based Modal Adaptive Learning for Visible-Infrared Person Re-Identification


Bibliographic Details
Main Authors: Zhao, Qianqian, Wu, Hanxiao, Zhu, Jianqing
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921303/
https://www.ncbi.nlm.nih.gov/pubmed/36772466
http://dx.doi.org/10.3390/s23031426
author Zhao, Qianqian
Wu, Hanxiao
Zhu, Jianqing
collection PubMed
description Visible-infrared person re-identification (VIPR) has great potential for intelligent transportation systems in smart cities, but it is challenging because of the huge modal discrepancy between visible and infrared images. Although visible and infrared data can be viewed as two domains, VIPR is not identical to domain adaptation, which would massively eliminate modal discrepancies. Because VIPR has complete identity information in both the visible and infrared modalities, once domain adaptation is overemphasized, the discriminative appearance information in the visible and infrared domains would drain away. Therefore, we propose a novel margin-based modal adaptive learning (MMAL) method for VIPR in this paper. In each domain, we apply triplet and label-smoothing cross-entropy loss functions to learn appearance-discriminative features. Between the two domains, we design a simple yet effective marginal maximum mean discrepancy (M³D) loss function that avoids an excessive suppression of modal discrepancies, protecting the features' discriminative ability in each domain. As a result, our MMAL method can learn modal-invariant yet appearance-discriminative features that improve VIPR. The experimental results show that our MMAL method achieves state-of-the-art VIPR performance; e.g., on the RegDB dataset in the visible-to-infrared retrieval mode, the rank-1 accuracy is 93.24% and the mean average precision is 83.77%.
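
As an illustrative aside on the mechanism summarized in the description, the sketch below shows one plausible way to place a margin on a maximum mean discrepancy (MMD) loss so that the modal gap is only penalized above a threshold. The RBF kernel, the hinge form, and all names (rbf_kernel, marginal_mmd_loss, sigma, margin) are assumptions made for illustration, not the authors' exact M³D formulation.

# Minimal PyTorch sketch of a margin-based MMD loss (illustrative, not the paper's exact M³D definition).
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between two feature batches of shape (N, D) and (M, D).
    dist2 = torch.cdist(x, y) ** 2
    return torch.exp(-dist2 / (2.0 * sigma ** 2))

def marginal_mmd_loss(visible_feats, infrared_feats, margin=0.1, sigma=1.0):
    # Empirical (biased) squared MMD between the visible and infrared feature batches.
    k_vv = rbf_kernel(visible_feats, visible_feats, sigma).mean()
    k_ii = rbf_kernel(infrared_feats, infrared_feats, sigma).mean()
    k_vi = rbf_kernel(visible_feats, infrared_feats, sigma).mean()
    mmd2 = k_vv + k_ii - 2.0 * k_vi
    # Hinge at the margin: once the modal gap is below the margin, stop shrinking it,
    # so per-domain appearance-discriminative information is not drained away.
    return torch.clamp(mmd2 - margin, min=0.0)

# Example use: combine with per-domain triplet and label-smoothing cross-entropy losses.
visible = torch.randn(32, 512)   # visible-domain features (batch, dim)
infrared = torch.randn(32, 512)  # infrared-domain features (batch, dim)
loss = marginal_mmd_loss(visible, infrared, margin=0.1, sigma=1.0)
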
format Online
Article
Text
id pubmed-9921303
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9921303 2023-02-12 Margin-Based Modal Adaptive Learning for Visible-Infrared Person Re-Identification Zhao, Qianqian; Wu, Hanxiao; Zhu, Jianqing. Sensors (Basel), Article. MDPI 2023-01-27 /pmc/articles/PMC9921303/ /pubmed/36772466 http://dx.doi.org/10.3390/s23031426 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Margin-Based Modal Adaptive Learning for Visible-Infrared Person Re-Identification
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921303/
https://www.ncbi.nlm.nih.gov/pubmed/36772466
http://dx.doi.org/10.3390/s23031426