
High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction

Bibliographic Details
Main Authors: Xu, Wenna, Deng, Xinping, Guo, Shanxin, Chen, Jinsong, Sun, Luyi, Zheng, Xiaorou, Xiong, Yingfei, Shen, Yuan, Wang, Xiaoqin
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7436155/
https://www.ncbi.nlm.nih.gov/pubmed/32707825
http://dx.doi.org/10.3390/s20154064
_version_ 1783572488075083776
author Xu, Wenna
Deng, Xinping
Guo, Shanxin
Chen, Jinsong
Sun, Luyi
Zheng, Xiaorou
Xiong, Yingfei
Shen, Yuan
Wang, Xiaoqin
author_facet Xu, Wenna
Deng, Xinping
Guo, Shanxin
Chen, Jinsong
Sun, Luyi
Zheng, Xiaorou
Xiong, Yingfei
Shen, Yuan
Wang, Xiaoqin
author_sort Xu, Wenna
collection PubMed
description Accurate and efficient extraction of cultivated land data is of great significance for agricultural resource monitoring and national food security. Deep-learning-based classification of remote-sensing images overcomes two difficulties that traditional learning methods (e.g., support vector machine (SVM), K-nearest neighbors (KNN), and random forest (RF)) face when extracting cultivated land: (1) limited performance when extracting a single land-cover type with high intra-class spectral variation, such as cultivated land with both vegetation and non-vegetation cover, and (2) limited generalization ability when handling a large dataset and applying the model to different locations. However, the “pooling” process in most deep convolutional networks, which enlarges the receptive field of the kernel by downscaling the feature maps, leads to significant detail loss in the output, including edges, gradients, and image texture. To solve this problem, this study proposed a new end-to-end extraction algorithm, the high-resolution U-Net (HRU-Net), which preserves image details by improving the skip connection structure and the loss function of the original U-Net. The proposed HRU-Net was tested in Xinjiang, China to extract cultivated land from Landsat Thematic Mapper (TM) images. The results showed that the HRU-Net achieved better performance (Acc: 92.81%; kappa: 0.81; F1-score: 0.90) than U-Net++ (Acc: 91.74%; kappa: 0.79; F1-score: 0.89), the original U-Net (Acc: 89.83%; kappa: 0.74; F1-score: 0.86), and the random forest model (Acc: 76.13%; kappa: 0.48; F1-score: 0.69). The robustness of the proposed model to intra-class spectral variation and the accuracy of the edge details were also compared, showing that the HRU-Net recovered more accurate edge details and was less affected by intra-class spectral variation. The model proposed in this study can be further applied to other land-cover types that have greater spectral diversity and require more detailed extraction.
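The accuracy (Acc), kappa, and F1 figures quoted above follow the standard definitions of overall accuracy, Cohen's kappa, and the F1-score for a binary cultivated/non-cultivated classification. The sketch below is a minimal illustration in Python/NumPy of how these three metrics are typically derived from a pixel-wise confusion matrix; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Overall accuracy, Cohen's kappa, and F1-score for binary label maps
    (1 = cultivated land, 0 = other). Inputs are arrays of the same shape."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()

    # Confusion-matrix counts
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    n = tp + tn + fp + fn

    # Overall accuracy: fraction of correctly labeled pixels
    acc = (tp + tn) / n

    # Cohen's kappa: agreement corrected for the chance agreement p_e
    p_e = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (acc - p_e) / (1 - p_e)

    # F1-score: harmonic mean of precision and recall for the positive class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)

    return acc, kappa, f1
```

Under this convention, the reported HRU-Net result on the Landsat TM test data corresponds to acc ≈ 0.928, kappa ≈ 0.81, and f1 ≈ 0.90.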
format Online
Article
Text
id pubmed-7436155
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-74361552020-08-24 High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction Xu, Wenna Deng, Xinping Guo, Shanxin Chen, Jinsong Sun, Luyi Zheng, Xiaorou Xiong, Yingfei Shen, Yuan Wang, Xiaoqin Sensors (Basel) Article Accurate and efficient extraction of cultivated land data is of great significance for agricultural resource monitoring and national food security. Deep-learning-based classification of remote-sensing images overcomes the two difficulties of traditional learning methods (e.g., support vector machine (SVM), K-nearest neighbors (KNN), and random forest (RF)) when extracting the cultivated land: (1) the limited performance when extracting the same land-cover type with the high intra-class spectral variation, such as cultivated land with both vegetation and non-vegetation cover, and (2) the limited generalization ability for handling a large dataset to apply the model to different locations. However, the “pooling” process in most deep convolutional networks, which attempts to enlarge the sensing field of the kernel by involving the upscale process, leads to significant detail loss in the output, including the edges, gradients, and image texture details. To solve this problem, in this study we proposed a new end-to-end extraction algorithm, a high-resolution U-Net (HRU-Net), to preserve the image details by improving the skip connection structure and the loss function of the original U-Net. The proposed HRU-Net was tested in Xinjiang Province, China to extract the cultivated land from Landsat Thematic Mapper (TM) images. The result showed that the HRU-Net achieved better performance (Acc: 92.81%; kappa: 0.81; F1-score: 0.90) than the U-Net++ (Acc: 91.74%; kappa: 0.79; F1-score: 0.89), the original U-Net (Acc: 89.83%; kappa: 0.74; F1-score: 0.86), and the Random Forest model (Acc: 76.13%; kappa: 0.48; F1-score: 0.69). The robustness of the proposed model for the intra-class spectral variation and the accuracy of the edge details were also compared, and this showed that the HRU-Net obtained more accurate edge details and had less influence from the intra-class spectral variation. The model proposed in this study can be further applied to other land cover types that have more spectral diversity and require more details of extraction. MDPI 2020-07-22 /pmc/articles/PMC7436155/ /pubmed/32707825 http://dx.doi.org/10.3390/s20154064 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Xu, Wenna
Deng, Xinping
Guo, Shanxin
Chen, Jinsong
Sun, Luyi
Zheng, Xiaorou
Xiong, Yingfei
Shen, Yuan
Wang, Xiaoqin
High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction
title High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction
title_full High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction
title_fullStr High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction
title_full_unstemmed High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction
title_short High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction
title_sort high-resolution u-net: preserving image details for cultivated land extraction
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7436155/
https://www.ncbi.nlm.nih.gov/pubmed/32707825
http://dx.doi.org/10.3390/s20154064
work_keys_str_mv AT xuwenna highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT dengxinping highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT guoshanxin highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT chenjinsong highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT sunluyi highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT zhengxiaorou highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT xiongyingfei highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT shenyuan highresolutionunetpreservingimagedetailsforcultivatedlandextraction
AT wangxiaoqin highresolutionunetpreservingimagedetailsforcultivatedlandextraction