Improved YOLOv3 Integrating SENet and Optimized GIoU Loss for Occluded Pedestrian Detection

Bibliographic Details
Main Authors: Zhang, Qiangbo; Liu, Yunxiang; Zhang, Yu; Zong, Ming; Zhu, Jianlin
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10675795/
https://www.ncbi.nlm.nih.gov/pubmed/38005475
http://dx.doi.org/10.3390/s23229089
Description
Summary: Occluded pedestrian detection faces significant challenges: false positives and false negatives in crowded, occluded scenes reduce detection accuracy. To overcome this problem, we propose an improved you-only-look-once version 3 (YOLOv3) based on squeeze-and-excitation networks (SENet) and an optimized generalized intersection over union (GIoU) loss for occluded pedestrian detection, named YOLOv3-Occlusion (YOLOv3-Occ). The proposed network model incorporates squeeze-and-excitation networks (SENet) into YOLOv3, assigning greater weights to the features of the unoccluded parts of pedestrians and thereby addressing the problem of extracting features from the visible regions. For the loss function, a new generalized intersection over union (intersection over ground truth) (GIoU(IoG)) loss is developed on the basis of the GIoU loss to keep the areas of the predicted pedestrian boxes consistent, which tackles the problem of inaccurate pedestrian localization. The proposed method, YOLOv3-Occ, was validated on the CityPersons and COCO2014 datasets. Experimental results show that the proposed method obtains a 1.2% MR^(−2) gain on the CityPersons dataset and a 0.7% mAP@50 improvement on the COCO2014 dataset.
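
The summary only sketches the two components, so the PyTorch code below is an illustrative reconstruction rather than the authors' implementation: a standard squeeze-and-excitation block of the kind inserted into YOLOv3 to reweight feature channels, and a bounding-box loss that augments GIoU with an intersection-over-ground-truth (IoG) term. The weighting factor lam and the exact way GIoU and IoG are combined are assumptions, since the paper's precise GIoU(IoG) formulation is not given in this record.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation channel attention (Hu et al., 2018)."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial average
            self.fc = nn.Sequential(                 # excitation: per-channel gating weights
                nn.Linear(channels, channels // reduction, bias=False),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels, bias=False),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                             # reweight feature channels

    def giou_iog_loss(pred: torch.Tensor, target: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        """Sketch of a GIoU loss with an extra IoG (intersection over ground truth) term.
        Boxes are (x1, y1, x2, y2) tensors of shape (N, 4); 'lam' is a hypothetical weight."""
        # intersection of predicted and ground-truth boxes
        ix1 = torch.max(pred[:, 0], target[:, 0])
        iy1 = torch.max(pred[:, 1], target[:, 1])
        ix2 = torch.min(pred[:, 2], target[:, 2])
        iy2 = torch.min(pred[:, 3], target[:, 3])
        inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

        area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
        area_g = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
        union = area_p + area_g - inter

        # smallest enclosing box C used by the GIoU penalty
        cx1 = torch.min(pred[:, 0], target[:, 0])
        cy1 = torch.min(pred[:, 1], target[:, 1])
        cx2 = torch.max(pred[:, 2], target[:, 2])
        cy2 = torch.max(pred[:, 3], target[:, 3])
        area_c = (cx2 - cx1) * (cy2 - cy1)

        iou = inter / union.clamp(min=1e-7)
        giou = iou - (area_c - union) / area_c.clamp(min=1e-7)
        iog = inter / area_g.clamp(min=1e-7)         # discourages boxes that shrink away from the ground truth
        return (1.0 - giou) + lam * (1.0 - iog)

The IoG term is one plausible reading of "keeping the predicted box areas consistent": because IoG normalizes the overlap by the ground-truth area, it penalizes predictions that cover only a fragment of the annotated pedestrian, which GIoU alone tolerates when the prediction lies entirely inside the ground-truth box.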