Shared-Weight-Based Multi-Dimensional Feature Alignment Network for Oriented Object Detection in Remote Sensing Imagery
Main Authors:
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9824571/ https://www.ncbi.nlm.nih.gov/pubmed/36616808 http://dx.doi.org/10.3390/s23010207
Summary: Arbitrarily Oriented Object Detection in aerial images is a highly challenging task in computer vision. Most mainstream methods are built on the feature pyramid, but for remote-sensing targets the misalignment of multi-scale features remains a thorny problem. In this article, we address the feature misalignment problem of oriented object detection along three dimensions: spatial, axial, and semantic. First, for the spatial misalignment problem, we design an intra-level alignment network based on leading features that synchronizes the location information of different pyramid features through sparse sampling. For multi-oriented aerial targets, we propose an axially aware convolution to resolve the mismatch between the traditional sampling scheme and the orientation of instances. With the proposed collaborative optimization strategy based on shared weights, these two modules achieve coarse-to-fine feature alignment in the spatial and axial dimensions. Finally, we propose a hierarchical-wise semantic alignment network that addresses the semantic gap between pyramid features and copes with remote-sensing targets at varying scales by endowing the feature maps with global semantic perception across pyramid levels. Extensive experiments on several challenging aerial benchmarks show state-of-the-art accuracy and appreciable inference speed. Specifically, we achieve a mean Average Precision (mAP) of 78.11% on DOTA, 90.10% on HRSC2016, and 90.29% on UCAS-AOD.
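To make the spatial-alignment idea in the summary more concrete, the following is a minimal, illustrative sketch (not the authors' code) of aligning one pyramid level to a "leading" feature by predicting sparse per-pixel sampling offsets and resampling with bilinear interpolation. The module name `LeadingFeatureAlign`, the choice of leading level, and all layer shapes are hypothetical assumptions for illustration only.

```python
# Illustrative sketch, assuming a PyTorch-style FPN with 256-channel levels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LeadingFeatureAlign(nn.Module):
    """Warp `feat` toward `leading` using offsets predicted from both features."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a 2-channel (dx, dy) offset field from the concatenated features.
        self.offset_pred = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, leading: torch.Tensor) -> torch.Tensor:
        n, _, h, w = feat.shape
        # Bring the leading feature to the spatial size of `feat` before fusing.
        leading_up = F.interpolate(leading, size=(h, w), mode="bilinear",
                                   align_corners=False)
        offsets = self.offset_pred(torch.cat([feat, leading_up], dim=1))  # (N, 2, H, W)

        # Build a normalized sampling grid in [-1, 1] and shift it by the offsets.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat.device),
            torch.linspace(-1, 1, w, device=feat.device),
            indexing="ij",
        )
        base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        # Convert pixel offsets to normalized grid coordinates.
        norm = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)],
                            device=feat.device)
        grid = base_grid + offsets.permute(0, 2, 3, 1) * norm

        # Resample `feat` at the shifted locations, aligning it with the leading feature.
        return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)


if __name__ == "__main__":
    align = LeadingFeatureAlign(channels=256)
    p3 = torch.randn(1, 256, 64, 64)   # finer pyramid level
    p4 = torch.randn(1, 256, 32, 32)   # coarser level, used here as the leading feature
    print(align(p3, p4).shape)         # torch.Size([1, 256, 64, 64])
```

This resampling-by-predicted-offsets pattern is only one plausible reading of "synchronizing location information by sparse sampling"; the paper's actual module, its axially aware convolution, and the shared-weight optimization strategy are described in the full text linked above.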