FGCN: Image-Fused Point Cloud Semantic Segmentation with Fusion Graph Convolutional Network

Semantic segmentation is crucial for interpreting a scene in numerous applications, including autonomous driving and robotic navigation. Compared with single-modal data, multi-modal data allow a richer set of features to be extracted, which helps improve segmentation accuracy. We propose a point cloud semantic segmentation method based on a fusion graph convolutional network (FGCN), which extracts the semantic information of each point from the two modalities of images and point clouds. The two-channel k-nearest neighbors (KNN) module of the FGCN was designed to address the poor efficiency of feature extraction by exploiting image data. Notably, the FGCN applies a spatial attention mechanism to better distinguish the more important features, and fuses multi-scale features to enhance the generalization capability of the network and increase the accuracy of the semantic segmentation. In the experiments, a self-made semantic segmentation KITTI (SSKIT) dataset was constructed to evaluate the fusion effect; the mean intersection over union (MIoU) on SSKIT reaches 88.06%. On the public S3DIS dataset, the method likewise enhances data features and outperforms other methods, with an MIoU of up to 78.55%. The segmentation accuracy is significantly improved compared with existing methods, which verifies the effectiveness of the proposed improvements.
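
The abstract describes the two-channel KNN module and the spatial attention mechanism only at a high level, and the authors' implementation is not part of this record. Purely as an illustration of the general technique, the sketch below builds a k-nearest-neighbor graph over the points and gathers image-derived features over the same neighborhoods before an attention-weighted aggregation. All names, shapes, and the choice of PyTorch are assumptions, not the paper's code.

```python
# Hypothetical sketch, not the authors' FGCN: KNN-graph feature fusion of
# point and image features with a simple spatial attention gate.
import torch
import torch.nn as nn


def knn_indices(xyz, k):
    """Indices of the k nearest neighbors of each point (self included).

    xyz: (N, 3) point coordinates.
    """
    dists = torch.cdist(xyz, xyz)                    # (N, N) pairwise distances
    return dists.topk(k, largest=False).indices      # (N, k)


class FusedEdgeConv(nn.Module):
    """Edge convolution over a KNN graph with two input channels:
    point features and image features sampled at the same points."""

    def __init__(self, point_dim, image_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        edge_dim = 2 * point_dim + image_dim         # center, offset, image
        self.mlp = nn.Sequential(
            nn.Linear(edge_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )
        # Per-point attention weight in [0, 1], applied after aggregation.
        self.attn = nn.Sequential(nn.Linear(out_dim, 1), nn.Sigmoid())

    def forward(self, xyz, point_feat, image_feat):
        idx = knn_indices(xyz, self.k)               # (N, k)
        nbr_pts = point_feat[idx]                    # (N, k, Dp)
        nbr_img = image_feat[idx]                    # (N, k, Di)
        center = point_feat.unsqueeze(1).expand_as(nbr_pts)
        edges = torch.cat([center, nbr_pts - center, nbr_img], dim=-1)
        edge_out = self.mlp(edges)                   # (N, k, out_dim)
        agg = edge_out.max(dim=1).values             # max-pool over neighbors
        return agg * self.attn(agg)                  # attention-weighted


# Illustrative usage: 1024 points, 3-D coordinates, 64-D image features
# projected onto the points (all shapes are assumptions).
xyz = torch.rand(1024, 3)
img = torch.rand(1024, 64)
layer = FusedEdgeConv(point_dim=3, image_dim=64, out_dim=128)
out = layer(xyz, xyz, img)                           # (1024, 128)
```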
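
As a reference for the reported metric (88.06% MIoU on SSKIT, 78.55% on S3DIS), mean intersection over union is the per-class intersection-over-union averaged over classes; many papers accumulate a confusion matrix over the whole dataset before averaging. A minimal sketch with illustrative names:

```python
# Minimal MIoU sketch over point-wise predictions (illustrative, not the
# paper's evaluation code).
import numpy as np


def mean_iou(pred, label, num_classes):
    """Average IoU over classes present in either prediction or label."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:                 # skip classes absent from both arrays
            ious.append(inter / union)
    return float(np.mean(ious))


# Example: IoU is 1.0 for class 0, 0.5 for class 1, 0.0 for class 2 -> 0.5.
print(mean_iou(np.array([0, 1, 1]), np.array([0, 1, 2]), num_classes=3))
```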

Bibliographic Details
Main Authors: Zhang, Kun; Chen, Rui; Peng, Zidong; Zhu, Yawei; Wang, Xiaohong
Format: Online Article Text
Language: English
Published: MDPI, 2023-10-09
Journal: Sensors (Basel)
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575317/
https://www.ncbi.nlm.nih.gov/pubmed/37837167
http://dx.doi.org/10.3390/s23198338
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).