
IPD-Net: Infrared Pedestrian Detection Network via Adaptive Feature Extraction and Coordinate Information Fusion

Infrared pedestrian detection has significant theoretical research value and a wide range of applications. Owing to their imaging mechanism, infrared images can be used for pedestrian detection at night and in severe weather. However, the lack of pedestrian feature information in infrared images and the small scale of pedestrian objects make it difficult for detection networks to extract features and accurately detect small-scale pedestrians. To address these issues, this paper proposes an infrared pedestrian detection network based on YOLOv5, named IPD-Net. First, an adaptive feature extraction module (AFEM) is designed in the backbone network, in which a residual structure with a stepwise selective kernel enables the model to better extract features under different receptive field sizes. Second, a coordinate attention feature pyramid network (CA-FPN) is designed to enhance the deep feature maps with location information through a coordinate attention module, giving the network better object localization capability. Finally, shallow features are introduced into the feature fusion network to improve the detection accuracy of weak and small objects. Experimental results on the large infrared image dataset ZUT show that the mean Average Precision (mAP50) of our model improves by 3.6% over that of YOLOv5s. In addition, IPD-Net shows accuracy improvements of varying degrees over other state-of-the-art methods.
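
To make the coordinate-attention idea concrete, the sketch below shows a minimal PyTorch-style coordinate attention block of the kind the abstract says CA-FPN uses to inject location information into deep feature maps: spatial context is pooled separately along the height and width axes, so the resulting attention weights retain positional information in each direction. The class name, reduction ratio, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a coordinate attention block (hypothetical layer sizes).
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Pools along H and W separately, then re-weights the feature map with
    direction-aware attention maps that keep coordinate information."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)  # assumed bottleneck width
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Pool along each spatial axis, keeping the other axis intact.
        x_h = x.mean(dim=3, keepdim=True)                        # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (n, c, w, 1)
        # Shared 1x1 transform over the concatenated coordinate descriptors.
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        # Direction-aware attention weights along height and width.
        a_h = torch.sigmoid(self.conv_h(y_h))                         # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))     # (n, c, 1, w)
        return x * a_h * a_w


# Example: re-weight a deep feature map before top-down fusion in an FPN-style neck.
feat = torch.randn(1, 256, 20, 20)
print(CoordinateAttention(256)(feat).shape)  # torch.Size([1, 256, 20, 20])
```

In an FPN-style neck, a block like this would typically be applied to each deep feature map before fusion, which is consistent with the localization improvement the abstract attributes to CA-FPN.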

Bibliographic Details
Main Authors: Zhou, Lun; Gao, Song; Wang, Simin; Zhang, Hengsheng; Liu, Ruochen; Liu, Jiaming
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9696594/
https://www.ncbi.nlm.nih.gov/pubmed/36433562
http://dx.doi.org/10.3390/s22228966
author Zhou, Lun
Gao, Song
Wang, Simin
Zhang, Hengsheng
Liu, Ruochen
Liu, Jiaming
collection PubMed
format Online
Article
Text
id pubmed-9696594
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
journal Sensors (Basel)
published 2022-11-19
rights © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title IPD-Net: Infrared Pedestrian Detection Network via Adaptive Feature Extraction and Coordinate Information Fusion
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9696594/
https://www.ncbi.nlm.nih.gov/pubmed/36433562
http://dx.doi.org/10.3390/s22228966