
SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion

The depth completion task aims to generate a dense depth map from a sparse depth map and the corresponding RGB image. As a data preprocessing task, the challenge is to obtain denser depth maps without affecting the real-time performance of downstream tasks. In this paper, we propose a lightweight depth completion network based on secondary guidance and spatial fusion, named SGSNet…

Full description

Bibliographic Details
Main Authors: Chen, Baifan, Lv, Xiaotian, Liu, Chongliang, Jiao, Hao
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9459817/
https://www.ncbi.nlm.nih.gov/pubmed/36080872
http://dx.doi.org/10.3390/s22176414
author Chen, Baifan
Lv, Xiaotian
Liu, Chongliang
Jiao, Hao
author_sort Chen, Baifan
collection PubMed
description The depth completion task aims to generate a dense depth map from a sparse depth map and the corresponding RGB image. As a data preprocessing task, the challenge is to obtain denser depth maps without affecting the real-time performance of downstream tasks. In this paper, we propose SGSNet, a lightweight depth completion network based on secondary guidance and spatial fusion. We design the image feature extraction module to extract features at different scales, both between and within layers, in parallel, and to generate guidance features. SGSNet then performs depth completion under secondary guidance. The first guidance uses a lightweight guidance module to quickly guide LiDAR feature extraction with the texture features of the RGB image. The second guidance uses the depth information completion module to complete the sparse depth map features and feeds them into the DA-CSPN++ module to re-guide the dense depth map. Thanks to the lightweight guidance module, the overall network runs ten times faster than the baseline while remaining relatively lightweight, reaching up to thirty frames per second, which is sufficient for sensor data extraction in large-scale SLAM and three-dimensional reconstruction. At the time of submission, SGSNet ranked first in accuracy among lightweight depth completion methods on the KITTI leaderboard; it was 37.5% faster than the top published algorithm in that ranking and placed second in the full ranking.
format Online
Article
Text
id pubmed-9459817
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9459817 2022-09-10 SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion Chen, Baifan; Lv, Xiaotian; Liu, Chongliang; Jiao, Hao. Sensors (Basel), Article. MDPI 2022-08-25 /pmc/articles/PMC9459817/ /pubmed/36080872 http://dx.doi.org/10.3390/s22176414 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland (https://creativecommons.org/licenses/by/4.0/).
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion
topic Article
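The secondary-guidance pipeline summarized in the description above (RGB texture features steering LiDAR/depth feature extraction, followed by spatial-propagation refinement in the style of DA-CSPN++) can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: all function names, shapes, and operations below are hypothetical, chosen only to show the general pattern of content-guided fusion followed by affinity-weighted propagation.

```python
import numpy as np

def guided_fusion(rgb_feat, depth_feat):
    """First guidance (sketch): modulate sparse depth features with
    RGB-derived gating weights, a common content-guided fusion pattern."""
    gate = 1.0 / (1.0 + np.exp(-rgb_feat))   # sigmoid gate from RGB features
    return gate * depth_feat + depth_feat    # guided residual fusion

def spatial_propagation(depth, affinity, iterations=3):
    """Second stage (sketch): CSPN-style local propagation -- each pixel is
    repeatedly replaced by an affinity-weighted average of its 4 neighbours."""
    h, w = depth.shape
    d = depth.copy()
    for _ in range(iterations):
        padded = np.pad(d, 1, mode="edge")
        neigh = np.stack([padded[0:h, 1:w + 1],     # neighbour above
                          padded[2:h + 2, 1:w + 1], # neighbour below
                          padded[1:h + 1, 0:w],     # neighbour left
                          padded[1:h + 1, 2:w + 2]])# neighbour right
        w_norm = affinity / affinity.sum(axis=0, keepdims=True)
        d = (w_norm * neigh).sum(axis=0)
    return d

rng = np.random.default_rng(0)
rgb_feat = rng.normal(size=(8, 8))              # stand-in for RGB guidance features
sparse_depth = np.zeros((8, 8))
sparse_depth[::3, ::3] = 5.0                    # sparse LiDAR-like depth samples
fused = guided_fusion(rgb_feat, sparse_depth)
affinity = np.abs(rng.normal(size=(4, 8, 8))) + 1e-6  # per-neighbour affinities
dense = spatial_propagation(fused, affinity)
print(dense.shape)  # (8, 8)
```

In the actual network, the gate and the affinities would be learned by the guidance modules rather than fixed as here; the sketch only conveys how RGB-derived weights can steer depth features and how local propagation densifies them.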