
Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification

The occlusion problem is very common in pedestrian retrieval scenarios. When persons are occluded by various obstacles, the noise caused by the occluded area greatly affects the retrieval results. However, many previous pedestrian re-identification (Re-ID) methods ignore this problem. To solve it, we propose a semantic-guided alignment model that uses image semantic information to separate useful information from occlusion noise.

Full Description

Bibliographic Details
Main Authors: Yang, Qin, Wang, Peizhi, Fang, Zihan, Lu, Qiyong
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7472299/
https://www.ncbi.nlm.nih.gov/pubmed/32784411
http://dx.doi.org/10.3390/s20164431
_version_ 1783578956901908480
author Yang, Qin
Wang, Peizhi
Fang, Zihan
Lu, Qiyong
author_facet Yang, Qin
Wang, Peizhi
Fang, Zihan
Lu, Qiyong
author_sort Yang, Qin
collection PubMed
description The occlusion problem is very common in pedestrian retrieval scenarios. When persons are occluded by various obstacles, the noise caused by the occluded area greatly affects the retrieval results. However, many previous pedestrian re-identification (Re-ID) methods ignore this problem. To solve it, we propose a semantic-guided alignment model that uses image semantic information to separate useful information from occlusion noise. In the image preprocessing phase, we use a human semantic parsing network to generate probability maps. These maps show which regions of images are occluded, and the model automatically crops images to preserve the visible parts. In the construction phase, we fuse the probability maps with the global features of the image, and semantic information guides the model to focus on visible human regions and extract local features. During the matching process, we propose a measurement strategy that only calculates the distance of public areas (visible human areas on both images) between images, thereby suppressing the spatial misalignment caused by non-public areas. Experimental results on a series of public datasets confirm that our method outperforms previous occluded Re-ID methods, and it achieves top performance in the holistic Re-ID problem.
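The measurement strategy described above (computing distances only over "public" areas, i.e. human regions visible in both images) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `public_area_distance`, the part-level feature layout, and the binary visibility indicators are all assumptions made for the example.

```python
import numpy as np

def public_area_distance(feats_a, feats_b, vis_a, vis_b):
    """Distance restricted to 'public' parts: local regions judged
    visible in BOTH images by the semantic parsing probability maps.

    feats_a, feats_b: (P, D) arrays of P local part features.
    vis_a, vis_b:     (P,) binary visibility indicators per part.
    """
    # A part contributes only if it is visible in both images.
    public = (np.asarray(vis_a) * np.asarray(vis_b)).astype(bool)
    if not public.any():
        # No comparable region: the pair cannot be matched reliably.
        return float("inf")
    # Average Euclidean distance over the shared visible parts, so
    # occluded (non-public) regions add no spatial-misalignment noise.
    d = np.linalg.norm(feats_a[public] - feats_b[public], axis=1)
    return float(d.mean())
```

Restricting the average to mutually visible parts is what suppresses the noise from occluded regions: a part missing from either image simply drops out of the comparison instead of contributing a large, meaningless distance.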
format Online
Article
Text
id pubmed-7472299
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-74722992020-09-04 Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification Yang, Qin Wang, Peizhi Fang, Zihan Lu, Qiyong Sensors (Basel) Article The occlusion problem is very common in pedestrian retrieval scenarios. When persons are occluded by various obstacles, the noise caused by the occluded area greatly affects the retrieval results. However, many previous pedestrian re-identification (Re-ID) methods ignore this problem. To solve it, we propose a semantic-guided alignment model that uses image semantic information to separate useful information from occlusion noise. In the image preprocessing phase, we use a human semantic parsing network to generate probability maps. These maps show which regions of images are occluded, and the model automatically crops images to preserve the visible parts. In the construction phase, we fuse the probability maps with the global features of the image, and semantic information guides the model to focus on visible human regions and extract local features. During the matching process, we propose a measurement strategy that only calculates the distance of public areas (visible human areas on both images) between images, thereby suppressing the spatial misalignment caused by non-public areas. Experimental results on a series of public datasets confirm that our method outperforms previous occluded Re-ID methods, and it achieves top performance in the holistic Re-ID problem. MDPI 2020-08-08 /pmc/articles/PMC7472299/ /pubmed/32784411 http://dx.doi.org/10.3390/s20164431 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Yang, Qin
Wang, Peizhi
Fang, Zihan
Lu, Qiyong
Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification
title Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification
title_full Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification
title_fullStr Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification
title_full_unstemmed Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification
title_short Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification
title_sort focus on the visible regions: semantic-guided alignment model for occluded person re-identification
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7472299/
https://www.ncbi.nlm.nih.gov/pubmed/32784411
http://dx.doi.org/10.3390/s20164431
work_keys_str_mv AT yangqin focusonthevisibleregionssemanticguidedalignmentmodelforoccludedpersonreidentification
AT wangpeizhi focusonthevisibleregionssemanticguidedalignmentmodelforoccludedpersonreidentification
AT fangzihan focusonthevisibleregionssemanticguidedalignmentmodelforoccludedpersonreidentification
AT luqiyong focusonthevisibleregionssemanticguidedalignmentmodelforoccludedpersonreidentification