CRABR-Net: A Contextual Relational Attention-Based Recognition Network for Remote Sensing Scene Objective
Remote sensing scene objective recognition (RSSOR) has significant application value in both military and civilian fields. Convolutional neural networks (CNNs) have greatly advanced intelligent objective recognition technology for remote sensing scenes, but most CNN-based methods for high-resolution RSSOR either use only the feature map of the last layer or directly fuse feature maps from several layers by summation, which not only ignores the useful relationship information between adjacent layers but also causes feature-map redundancy and loss, hindering recognition accuracy. In this study, a contextual relational attention-based recognition network (CRABR-Net) is presented. It extracts convolutional feature maps from different CNN layers, highlights important feature content with a simple, parameter-free attention module (SimAM), fuses adjacent feature maps through a complementary relationship feature map calculation, improves feature learning through an enhanced relationship feature map calculation, and finally uses the concatenated feature maps from different layers for RSSOR. Experimental results show that CRABR-Net exploits the relationships between different CNN layers to improve recognition performance and achieves better results than several state-of-the-art algorithms, with average accuracies of up to 96.46%, 99.20%, and 95.43% on AID, UC-Merced, and RSSCN7 under generic training ratios.
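The abstract describes a pipeline of parameter-free SimAM attention followed by fusion of adjacent feature maps. As a rough illustration, the sketch below implements SimAM gating following its published energy-based formulation on a single (C, H, W) feature map; the paper's complementary and enhanced relationship calculations are not specified in this record, so `fuse_adjacent` is a hypothetical element-wise stand-in, not the authors' method.

```python
import numpy as np

def simam(x, eps=1e-4):
    """Parameter-free SimAM attention over a (C, H, W) feature map.

    Computes a per-position inverse-energy score per channel and gates
    the input with its sigmoid; no learnable parameters are involved.
    """
    _, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)          # channel-wise spatial mean
    d = (x - mu) ** 2                                # squared deviation per position
    v = d.sum(axis=(1, 2), keepdims=True) / n        # channel-wise variance estimate
    e_inv = d / (4 * (v + eps)) + 0.5                # inverse energy (higher = more salient)
    return x * (1.0 / (1.0 + np.exp(-e_inv)))        # sigmoid-gated features

def fuse_adjacent(f_low, f_high):
    """Hypothetical fusion of two same-shaped adjacent-layer feature maps.

    An element-wise product emphasizes activations both layers agree on;
    the paper's actual relationship feature map calculation may differ.
    """
    return f_low * f_high
```

A concatenation of such fused maps from several layers would then feed the classifier, mirroring the multi-layer aggregation the abstract describes.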
| Field | Value |
|---|---|
| Main authors | Guo, Ningbo; Jiang, Mingyong; Gao, Lijing; Tang, Yizhuo; Han, Jinwei; Chen, Xiangning |
| Format | Online Article Text |
| Language | English |
| Published | MDPI, 2023 |
| Online access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490739/ · https://www.ncbi.nlm.nih.gov/pubmed/37687971 · http://dx.doi.org/10.3390/s23177514 |
author | Guo, Ningbo; Jiang, Mingyong; Gao, Lijing; Tang, Yizhuo; Han, Jinwei; Chen, Xiangning
collection | PubMed |
format | Online Article Text |
id | pubmed-10490739 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10490739 (2023-09-09); Sensors (Basel), Article; MDPI, published 2023-08-29; /pmc/articles/PMC10490739/; /pubmed/37687971; http://dx.doi.org/10.3390/s23177514
license | © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
topic | Article |