FEA-Swin: Foreground Enhancement Attention Swin Transformer Network for Accurate UAV-Based Dense Object Detection

Bibliographic Details
Main Authors: Xu, Wenyu; Zhang, Chaofan; Wang, Qi; Dai, Pangda
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9502707/
https://www.ncbi.nlm.nih.gov/pubmed/36146340
http://dx.doi.org/10.3390/s22186993
author Xu, Wenyu
Zhang, Chaofan
Wang, Qi
Dai, Pangda
collection PubMed
description UAV-based object detection has recently attracted considerable attention due to its diverse applications. Most existing convolutional neural network (CNN)-based object detection models perform well in common object detection cases. However, because objects in UAV images are spatially distributed in a very dense manner, these methods have limited performance for UAV-based object detection. In this paper, we propose a novel transformer-based object detection model to improve the accuracy of object detection in UAV images. To detect dense objects competently, an advanced foreground enhancement attention Swin Transformer (FEA-Swin) framework is designed by integrating context information into the original backbone of a Swin Transformer. Moreover, to avoid losing information about small objects, an improved weighted bidirectional feature pyramid network (BiFPN) is presented by designing a skip connection operation. The proposed method aggregates feature maps from four stages and retains abundant information about small objects. Furthermore, to balance detection accuracy and efficiency, we introduce an efficient neck for the BiFPN by removing a redundant network layer. Experimental results on both public datasets and a self-made dataset demonstrate the performance of our method compared to state-of-the-art methods in terms of detection accuracy.
format Online
Article
Text
id pubmed-9502707
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9502707 2022-09-24 FEA-Swin: Foreground Enhancement Attention Swin Transformer Network for Accurate UAV-Based Dense Object Detection. Xu, Wenyu; Zhang, Chaofan; Wang, Qi; Dai, Pangda. Sensors (Basel), Article. MDPI 2022-09-15 /pmc/articles/PMC9502707/ /pubmed/36146340 http://dx.doi.org/10.3390/s22186993 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title FEA-Swin: Foreground Enhancement Attention Swin Transformer Network for Accurate UAV-Based Dense Object Detection
topic Article