Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network
Building extraction from high spatial resolution remote sensing images is a hot topic in remote sensing applications and computer vision. This paper presents a supervised semantic segmentation model named Pyramid Self-Attention Network (PISANet). Its structure is simple, containing only two parts: the backbone of the network, which learns the local features of buildings from the image (short-distance context information around each pixel); and the pyramid self-attention module, which obtains the global features (long-distance context information with other pixels in the image) and the comprehensive features (including color, texture, geometric, and high-level semantic features) of buildings. The network is an end-to-end approach. In the training stage, the input is a remote sensing image and its corresponding label, and the output is a probability map (the probability that each pixel is or is not a building). In the prediction stage, the input is a remote sensing image, and the output is the building extraction result. The complexity of the network structure was reduced so that it is easy to implement. The proposed PISANet was tested on two datasets. The results show that the overall accuracy reached 94.50% and 96.15%, the intersection-over-union reached 77.45% and 87.97%, and the F1 score reached 87.27% and 93.55%, respectively. In experiments on different datasets, PISANet achieved high overall accuracy, a low error rate, and improved integrity of individual buildings.
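The two-part design described in the abstract (a feature-extracting backbone plus a pyramid self-attention module attending over multi-scale context) can be sketched in a few lines of PyTorch. The sketch below is an illustrative assumption, not the authors' published implementation: the class name, pooling sizes, and layer layout are all made up for this example. Queries come from the full-resolution feature map while keys and values are drawn from pyramid-pooled copies of it, which is one common way to capture long-range context while keeping the attention matrix small.

```python
# Minimal sketch of a pyramid self-attention block over dense feature maps.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidSelfAttention(nn.Module):
    """Self-attention where keys/values come from pyramid-pooled features."""

    def __init__(self, channels, pool_sizes=(1, 3, 6)):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.pool_sizes = pool_sizes
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C//8)
        # Build a compact key/value bank from pyramid-pooled features.
        keys, values = [], []
        for s in self.pool_sizes:
            p = F.adaptive_avg_pool2d(x, s)               # (B, C, s, s)
            keys.append(self.key(p).flatten(2))           # (B, C//8, s*s)
            values.append(self.value(p).flatten(2))       # (B, C, s*s)
        k = torch.cat(keys, dim=2)                        # (B, C//8, N)
        v = torch.cat(values, dim=2).transpose(1, 2)      # (B, N, C)
        attn = torch.softmax(q @ k, dim=-1)               # (B, HW, N)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out                       # residual fusion

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)                        # (batch, C, H, W)
    print(PyramidSelfAttention(64)(x).shape)              # [2, 64, 32, 32]
```

Because the output shape matches the input, a block like this can be dropped between backbone stages without changing the rest of the network, which is consistent with the "simple, two-part" structure the abstract emphasizes.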
Main Authors: Zhou, Dengji; Wang, Guizhou; He, Guojin; Long, Tengfei; Yin, Ranyu; Zhang, Zhaoming; Chen, Sibao; Luo, Bin
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7766463/ https://www.ncbi.nlm.nih.gov/pubmed/33348752 http://dx.doi.org/10.3390/s20247241
_version_ | 1783628724616298496 |
author | Zhou, Dengji; Wang, Guizhou; He, Guojin; Long, Tengfei; Yin, Ranyu; Zhang, Zhaoming; Chen, Sibao; Luo, Bin
author_facet | Zhou, Dengji; Wang, Guizhou; He, Guojin; Long, Tengfei; Yin, Ranyu; Zhang, Zhaoming; Chen, Sibao; Luo, Bin
author_sort | Zhou, Dengji |
collection | PubMed |
description | Building extraction from high spatial resolution remote sensing images is a hot topic in remote sensing applications and computer vision. This paper presents a supervised semantic segmentation model named Pyramid Self-Attention Network (PISANet). Its structure is simple, containing only two parts: the backbone of the network, which learns the local features of buildings from the image (short-distance context information around each pixel); and the pyramid self-attention module, which obtains the global features (long-distance context information with other pixels in the image) and the comprehensive features (including color, texture, geometric, and high-level semantic features) of buildings. The network is an end-to-end approach. In the training stage, the input is a remote sensing image and its corresponding label, and the output is a probability map (the probability that each pixel is or is not a building). In the prediction stage, the input is a remote sensing image, and the output is the building extraction result. The complexity of the network structure was reduced so that it is easy to implement. The proposed PISANet was tested on two datasets. The results show that the overall accuracy reached 94.50% and 96.15%, the intersection-over-union reached 77.45% and 87.97%, and the F1 score reached 87.27% and 93.55%, respectively. In experiments on different datasets, PISANet achieved high overall accuracy, a low error rate, and improved integrity of individual buildings.
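The description quotes three standard pixel-wise metrics: overall accuracy, intersection-over-union, and the F1 score. For reference, the sketch below shows how these are conventionally computed from binary (building / non-building) confusion-matrix counts; the pixel counts are hypothetical, chosen only to illustrate the arithmetic, not taken from the paper's experiments.

```python
# Conventional definitions of the metrics quoted in the abstract,
# computed from binary confusion-matrix counts. The counts below are
# hypothetical, for illustration only.
tp, fp, fn, tn = 870, 95, 160, 8875           # hypothetical pixel counts

overall_accuracy = (tp + tn) / (tp + fp + fn + tn)
iou = tp / (tp + fp + fn)                      # intersection-over-union
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"OA={overall_accuracy:.2%}  IoU={iou:.2%}  F1={f1:.2%}")
```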
format | Online Article Text |
id | pubmed-7766463 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7766463 2020-12-28 Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network Zhou, Dengji; Wang, Guizhou; He, Guojin; Long, Tengfei; Yin, Ranyu; Zhang, Zhaoming; Chen, Sibao; Luo, Bin Sensors (Basel) Article Building extraction from high spatial resolution remote sensing images is a hot topic in remote sensing applications and computer vision. This paper presents a supervised semantic segmentation model named Pyramid Self-Attention Network (PISANet). Its structure is simple, containing only two parts: the backbone of the network, which learns the local features of buildings from the image (short-distance context information around each pixel); and the pyramid self-attention module, which obtains the global features (long-distance context information with other pixels in the image) and the comprehensive features (including color, texture, geometric, and high-level semantic features) of buildings. The network is an end-to-end approach. In the training stage, the input is a remote sensing image and its corresponding label, and the output is a probability map (the probability that each pixel is or is not a building). In the prediction stage, the input is a remote sensing image, and the output is the building extraction result. The complexity of the network structure was reduced so that it is easy to implement. The proposed PISANet was tested on two datasets. The results show that the overall accuracy reached 94.50% and 96.15%, the intersection-over-union reached 77.45% and 87.97%, and the F1 score reached 87.27% and 93.55%, respectively. In experiments on different datasets, PISANet achieved high overall accuracy, a low error rate, and improved integrity of individual buildings. MDPI 2020-12-17 /pmc/articles/PMC7766463/ /pubmed/33348752 http://dx.doi.org/10.3390/s20247241 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Zhou, Dengji; Wang, Guizhou; He, Guojin; Long, Tengfei; Yin, Ranyu; Zhang, Zhaoming; Chen, Sibao; Luo, Bin; Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network
title | Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network |
title_full | Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network |
title_fullStr | Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network |
title_full_unstemmed | Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network |
title_short | Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network |
title_sort | robust building extraction for high spatial resolution remote sensing images with self-attention network |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7766463/ https://www.ncbi.nlm.nih.gov/pubmed/33348752 http://dx.doi.org/10.3390/s20247241 |
work_keys_str_mv | AT zhoudengji robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork AT wangguizhou robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork AT heguojin robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork AT longtengfei robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork AT yinranyu robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork AT zhangzhaoming robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork AT chensibao robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork AT luobin robustbuildingextractionforhighspatialresolutionremotesensingimageswithselfattentionnetwork |