
Multi-U-Net: Residual Module under Multisensory Field and Attention Mechanism Based Optimized U-Net for VHR Image Semantic Segmentation

As the acquisition of very high resolution (VHR) images becomes easier, the complex characteristics of VHR images pose new challenges to traditional machine learning semantic segmentation methods. As an excellent convolutional neural network (CNN) structure, U-Net does not require manual intervention, and its high-precision features are widely used in image interpretation. However, as an end-to-end fully convolutional network, U-Net has not explored enough information from the full scale, and there is still room for improvement. In this study, we constructed an effective network module: the residual module under a multisensory field (RMMF), which extracts multiscale features of the target, together with an attention mechanism that optimizes feature information. RMMF uses parallel convolutional layers to learn features of different scales in the network and adds shortcut connections between stacked layers to construct residual blocks, combining low-level detailed information with high-level semantic information. RMMF is universal and extensible. The convolutional layer in the U-Net network is replaced with RMMF to improve the network structure. Additionally, the multiscale convolutional network was tested using RMMF on the Gaofen-2 and Potsdam data sets. Experiments show that compared to other technologies, this method has better performance in airborne and spaceborne images.


Bibliographic Details
Main Authors: Ran, Si, Ding, Jianli, Liu, Bohua, Ge, Xiangyu, Ma, Guolin
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7961556/
https://www.ncbi.nlm.nih.gov/pubmed/33807525
http://dx.doi.org/10.3390/s21051794
author Ran, Si
Ding, Jianli
Liu, Bohua
Ge, Xiangyu
Ma, Guolin
author_sort Ran, Si
collection PubMed
description As the acquisition of very high resolution (VHR) images becomes easier, the complex characteristics of VHR images pose new challenges to traditional machine learning semantic segmentation methods. As an excellent convolutional neural network (CNN) structure, U-Net does not require manual intervention, and its high-precision features are widely used in image interpretation. However, as an end-to-end fully convolutional network, U-Net has not explored enough information from the full scale, and there is still room for improvement. In this study, we constructed an effective network module: the residual module under a multisensory field (RMMF), which extracts multiscale features of the target, together with an attention mechanism that optimizes feature information. RMMF uses parallel convolutional layers to learn features of different scales in the network and adds shortcut connections between stacked layers to construct residual blocks, combining low-level detailed information with high-level semantic information. RMMF is universal and extensible. The convolutional layer in the U-Net network is replaced with RMMF to improve the network structure. Additionally, the multiscale convolutional network was tested using RMMF on the Gaofen-2 and Potsdam data sets. Experiments show that compared to other technologies, this method has better performance in airborne and spaceborne images.
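The description above outlines the RMMF design: parallel convolutions that learn features at different scales, fused and added back to the input through a shortcut connection. The record contains no code, so the following is only a minimal PyTorch sketch of that idea; the kernel sizes (3/5/7), the 1x1 fusion, and the class and parameter names are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RMMF(nn.Module):
    """Hypothetical sketch of a residual module under a multisensory field:
    parallel convolutions with different kernel sizes capture multiple
    scales, and a shortcut connection adds the (projected) input back to
    the fused features, forming a residual block."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Parallel branches with increasing receptive fields (assumed sizes).
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        # 1x1 convolution fuses the concatenated multiscale features.
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        # 1x1 shortcut projects the input so channels match for the residual add.
        self.shortcut = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the multiscale branch outputs along the channel axis.
        multi = torch.cat(
            [self.branch3(x), self.branch5(x), self.branch7(x)], dim=1
        )
        # Fuse, add the shortcut, and apply the nonlinearity.
        return self.relu(self.fuse(multi) + self.shortcut(x))

# Such a block could stand in for a convolutional stage of a U-Net encoder:
block = RMMF(in_ch=64, out_ch=128)
out = block(torch.randn(1, 64, 32, 32))  # shape: (1, 128, 32, 32)
```

Because every branch pads to preserve spatial size, the module is a drop-in replacement for a plain convolution layer, which matches the record's claim that RMMF is "universal and extensible".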
format Online
Article
Text
id pubmed-7961556
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7961556 2021-03-17 Multi-U-Net: Residual Module under Multisensory Field and Attention Mechanism Based Optimized U-Net for VHR Image Semantic Segmentation Sensors (Basel) Article MDPI 2021-03-05 /pmc/articles/PMC7961556/ /pubmed/33807525 http://dx.doi.org/10.3390/s21051794 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Multi-U-Net: Residual Module under Multisensory Field and Attention Mechanism Based Optimized U-Net for VHR Image Semantic Segmentation
topic Article