Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification
To address inadequate feature extraction caused by factors such as occlusion and illumination in person re-identification tasks, this paper proposes a person re-identification model with joint cross-consistency learning and multi-feature fusion. An attention mechanism and a mixed pooling module are first embedded in the residual network so that the model adaptively focuses on the more informative regions of person images. Second, the dataset is randomly divided into two subsets according to camera perspective, and a feature classifier is trained for each subset. Two classifiers with subset-specific knowledge then guide the model to extract features unrelated to camera perspective, endowing the image features with domain invariance and alleviating differences in perspective, pose, background, and other factors across images. Next, multi-level features are fused through a feature pyramid to attend to the more critical information in the image. Finally, a combination of Cosine Softmax loss, triplet loss, and cluster-center loss is proposed to train the model and reconcile the differences of the multiple losses in the optimization space. The Rank-1 accuracy of the proposed model reaches 95.9% and 89.7% on the Market-1501 and DukeMTMC-reID datasets, respectively, indicating good feature extraction capability.
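The loss combination described in the abstract (Cosine Softmax loss, triplet loss, and cluster-center loss) can be sketched as follows. This is a minimal PyTorch illustration under common re-ID conventions, not the paper's implementation: the function names, the scale and margin values, and the loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

def cosine_softmax_loss(features, class_weights, labels, scale=16.0):
    # Logits are scaled cosine similarities between L2-normalized
    # features and L2-normalized class weight vectors.
    logits = scale * F.normalize(features, dim=1) @ F.normalize(class_weights, dim=1).t()
    return F.cross_entropy(logits, labels)

def batch_hard_triplet_loss(features, labels, margin=0.3):
    # Batch-hard mining: for each anchor, take the farthest same-identity
    # sample and the closest different-identity sample within the batch.
    dist = torch.cdist(features, features)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    hardest_pos = dist.masked_fill(~same | eye, 0.0).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def cluster_center_loss(features, centers, labels):
    # Pull each feature toward the center vector of its identity
    # (center-loss style; how centers are updated is left out here).
    return ((features - centers[labels]) ** 2).sum(dim=1).mean()

def combined_reid_loss(features, class_weights, centers, labels,
                       w_triplet=1.0, w_center=0.0005):
    # Weighted sum of the three objectives; the weights are assumptions.
    return (cosine_softmax_loss(features, class_weights, labels)
            + w_triplet * batch_hard_triplet_loss(features, labels)
            + w_center * cluster_center_loss(features, centers, labels))
```

In practice the class weights would be the final classifier's weight matrix and the centers a running per-identity mean, both updated during training.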
Main Authors: Ren, Danping; He, Tingting; Dong, Huisheng
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9735728/ https://www.ncbi.nlm.nih.gov/pubmed/36502088 http://dx.doi.org/10.3390/s22239387
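The abstract's fusion of multi-level features through a feature pyramid can be illustrated with an FPN-style top-down pathway: project each backbone stage to a common width, upsample the deeper maps, and sum. A hedged sketch only — the `PyramidFusion` name and the ResNet-50 stage widths are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidFusion(nn.Module):
    # Top-down feature-pyramid fusion. `in_channels` follows the last
    # three ResNet-50 stages by assumption; `out_channels` is the
    # common width all stages are projected to.
    def __init__(self, in_channels=(512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])

    def forward(self, feats):
        # feats: shallow-to-deep list of maps, each (B, C_i, H_i, W_i).
        laterals = [conv(f) for conv, f in zip(self.lateral, feats)]
        out = laterals[-1]  # start from the deepest (most semantic) map
        for lat in reversed(laterals[:-1]):
            # Upsample the running map to the shallower resolution and sum.
            out = lat + F.interpolate(out, size=lat.shape[-2:], mode="nearest")
        return out
```

The fused map keeps the shallowest stage's resolution, so fine-grained detail and deep semantics both contribute to the final descriptor.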
| Field | Value |
|---|---|
| _version_ | 1784846842728546304 |
author | Ren, Danping He, Tingting Dong, Huisheng |
author_facet | Ren, Danping He, Tingting Dong, Huisheng |
author_sort | Ren, Danping |
collection | PubMed |
description | To address inadequate feature extraction caused by factors such as occlusion and illumination in person re-identification tasks, this paper proposes a person re-identification model with joint cross-consistency learning and multi-feature fusion. An attention mechanism and a mixed pooling module are first embedded in the residual network so that the model adaptively focuses on the more informative regions of person images. Second, the dataset is randomly divided into two subsets according to camera perspective, and a feature classifier is trained for each subset. Two classifiers with subset-specific knowledge then guide the model to extract features unrelated to camera perspective, endowing the image features with domain invariance and alleviating differences in perspective, pose, background, and other factors across images. Next, multi-level features are fused through a feature pyramid to attend to the more critical information in the image. Finally, a combination of Cosine Softmax loss, triplet loss, and cluster-center loss is proposed to train the model and reconcile the differences of the multiple losses in the optimization space. The Rank-1 accuracy of the proposed model reaches 95.9% and 89.7% on the Market-1501 and DukeMTMC-reID datasets, respectively, indicating good feature extraction capability. |
format | Online Article Text |
id | pubmed-9735728 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-97357282022-12-11 Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification Ren, Danping He, Tingting Dong, Huisheng Sensors (Basel) Article To address inadequate feature extraction caused by factors such as occlusion and illumination in person re-identification tasks, this paper proposes a person re-identification model with joint cross-consistency learning and multi-feature fusion. An attention mechanism and a mixed pooling module are first embedded in the residual network so that the model adaptively focuses on the more informative regions of person images. Second, the dataset is randomly divided into two subsets according to camera perspective, and a feature classifier is trained for each subset. Two classifiers with subset-specific knowledge then guide the model to extract features unrelated to camera perspective, endowing the image features with domain invariance and alleviating differences in perspective, pose, background, and other factors across images. Next, multi-level features are fused through a feature pyramid to attend to the more critical information in the image. Finally, a combination of Cosine Softmax loss, triplet loss, and cluster-center loss is proposed to train the model and reconcile the differences of the multiple losses in the optimization space. The Rank-1 accuracy of the proposed model reaches 95.9% and 89.7% on the Market-1501 and DukeMTMC-reID datasets, respectively, indicating good feature extraction capability. MDPI 2022-12-01 /pmc/articles/PMC9735728/ /pubmed/36502088 http://dx.doi.org/10.3390/s22239387 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Ren, Danping He, Tingting Dong, Huisheng Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification |
title | Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification |
title_full | Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification |
title_fullStr | Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification |
title_full_unstemmed | Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification |
title_short | Joint Cross-Consistency Learning and Multi-Feature Fusion for Person Re-Identification |
title_sort | joint cross-consistency learning and multi-feature fusion for person re-identification |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9735728/ https://www.ncbi.nlm.nih.gov/pubmed/36502088 http://dx.doi.org/10.3390/s22239387 |
work_keys_str_mv | AT rendanping jointcrossconsistencylearningandmultifeaturefusionforpersonreidentification AT hetingting jointcrossconsistencylearningandmultifeaturefusionforpersonreidentification AT donghuisheng jointcrossconsistencylearningandmultifeaturefusionforpersonreidentification |
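The "mixed pooling module" the abstract embeds in the residual network is commonly realized as a learnable blend of global average and global max pooling. The sketch below illustrates that idea; the class name, the single learnable blend weight, and its initial value are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MixedPooling(nn.Module):
    # Blend global average pooling (context) with global max pooling
    # (salient responses); the blend coefficient is learned end-to-end.
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # assumed init

    def forward(self, x):
        # x: (B, C, H, W) feature map from the backbone.
        avg = x.mean(dim=(2, 3))   # global average pool -> (B, C)
        mx = x.amax(dim=(2, 3))    # global max pool     -> (B, C)
        return self.alpha * avg + (1 - self.alpha) * mx
```

Compared with average pooling alone, the max branch lets strong local evidence (e.g. a visible body part under occlusion) survive into the global descriptor.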