
Transformer based on channel-spatial attention for accurate classification of scenes in remote sensing image


Bibliographic Details
Main Authors: Guo, Jingxia; Jia, Nan; Bai, Jinniu
Format: Online Article (Text)
Language: English
Published: Sci Rep, Nature Publishing Group UK, 14 September 2022
Subjects: Article
Rights: © The Author(s) 2022. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9474818/
https://www.ncbi.nlm.nih.gov/pubmed/36104442
http://dx.doi.org/10.1038/s41598-022-19831-z
Description: Recently, the scenes in large high-resolution remote sensing (HRRS) datasets have been classified using convolutional neural network (CNN)-based methods. Such methods are well suited to spatial feature extraction and can classify images with relatively high accuracy. However, CNNs do not adequately capture long-range dependencies between image regions and features, even though these are essential for HRRS image processing, where the semantic content of a scene is closely tied to the spatial relationships among its elements. CNNs are also limited in handling large intra-class differences and high inter-class similarity. To overcome these challenges, this study combines the channel-spatial attention (CSA) mechanism with the Vision Transformer to propose an effective HRRS image scene classification framework, the Channel-Spatial Attention Transformer (CSAT). The proposed model extracts the channel and spatial features of HRRS images using CSA and the multi-head self-attention (MSA) mechanism in the transformer module. First, the HRRS image passes through the CSA module and is mapped into a sequence of flattened 2D patch vectors. Second, each patch vector is linearly projected to form an ordered sequence, and position embeddings together with a learnable class embedding are added so that long-range dependencies among the image features can be captured. Next, MSA extracts the image features, and the residual structure completes the encoder, mitigating the vanishing-gradient problem and helping to avoid overfitting. Finally, a multi-layer perceptron classifies the scenes in the HRRS images. The CSAT network is evaluated on three public remote sensing scene image datasets: UC-Merced, AID, and NWPU-RESISC45. The experimental results show that the proposed CSAT network outperforms a selection of state-of-the-art methods in scene classification.
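The pipeline described above maps directly onto a ViT-style architecture. Below is a minimal PyTorch sketch of that flow, not the authors' released implementation: the CBAM-style formulation of the channel-spatial attention, the placement of CSA on the raw input bands, and all module names and sizes (CSABlock, CSAT, patch size 16, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CSABlock(nn.Module):
    """Channel-spatial attention. The record does not give the exact
    formulation, so the common CBAM-style recipe is assumed here:
    a pooled-MLP channel gate followed by a 7x7-conv spatial gate."""
    def __init__(self, channels, reduction=1):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, _, _ = x.shape
        gate = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))
                             + self.channel_mlp(x.amax(dim=(2, 3))))
        x = x * gate.view(b, c, 1, 1)                        # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return x * torch.sigmoid(self.spatial_conv(s))       # spatial attention

class CSAT(nn.Module):
    """CSA front-end, ViT-style encoder (MSA + residuals), MLP head."""
    def __init__(self, image_size=224, patch=16, dim=768, depth=12,
                 heads=12, num_classes=45, in_ch=3):
        super().__init__()
        self.csa = CSABlock(in_ch)                           # CSA on input bands (assumption)
        self.patch_embed = nn.Conv2d(in_ch, dim, patch, stride=patch)
        n_patches = (image_size // patch) ** 2
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))      # learnable class embedding
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # position embeddings
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)               # MSA with residual connections
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, num_classes))

    def forward(self, x):                                    # x: (B, 3, 224, 224)
        x = self.csa(x)                                      # channel-spatial attention
        x = self.patch_embed(x).flatten(2).transpose(1, 2)   # flattened 2D patch tokens
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1) + self.pos
        x = self.encoder(x)                                  # stacked MSA encoder blocks
        return self.head(x[:, 0])                            # classify from class token

logits = CSAT()(torch.randn(2, 3, 224, 224))                 # -> (2, 45), e.g. NWPU-RESISC45
```

The sigmoid-gated pooling in CSABlock follows the usual CBAM pattern; if the paper instead applies CSA to intermediate feature maps rather than the raw bands, only the channels argument and the module's placement change.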