UnVELO: Unsupervised Vision-Enhanced LiDAR Odometry with Online Correction

Due to the complementary characteristics of visual and LiDAR information, these two modalities have been fused to facilitate many vision tasks. However, current studies of learning-based odometries mainly focus on either the visual or LiDAR modality, leaving visual–LiDAR odometries (VLOs) under-explored. This work proposes a new method to implement an unsupervised VLO, which adopts a LiDAR-dominant scheme to fuse the two modalities. We, therefore, refer to it as unsupervised vision-enhanced LiDAR odometry (UnVELO). It converts 3D LiDAR points into a dense vertex map via spherical projection and generates a vertex color map by colorizing each vertex with visual information. Further, a point-to-plane distance-based geometric loss and a photometric-error-based visual loss are, respectively, placed on locally planar regions and cluttered regions. Last, but not least, we designed an online pose-correction module to refine the pose predicted by the trained UnVELO during test time. In contrast to the vision-dominant fusion scheme adopted in most previous VLOs, our LiDAR-dominant method adopts dense representations for both modalities, which facilitates visual–LiDAR fusion. Besides, our method uses accurate LiDAR measurements instead of predicted noisy dense depth maps, which significantly improves the robustness to illumination variations, as well as the efficiency of the online pose correction. The experiments on the KITTI and DSEC datasets showed that our method outperformed previous two-frame-based learning methods. It was also competitive with hybrid methods that integrate a global optimization on multiple or all frames.

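The abstract's core representation is a dense vertex map obtained by spherically projecting the LiDAR point cloud, with pose residuals measured as point-to-plane distances. The following is a minimal NumPy sketch of these two ideas, not the authors' implementation: the image resolution (64×1024) and vertical field of view (+3°/−25°, typical of a 64-beam scanner) are assumed values.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024,
                         fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
    """Project a LiDAR point cloud (N, 3) onto an (h, w, 3) vertex map.

    Each pixel stores the 3D point that falls into it (zeros where empty),
    giving the dense range-image-style representation the abstract describes.
    """
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(points[:, 2] / np.maximum(depth, 1e-8),
                              -1.0, 1.0))               # elevation

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w                          # column
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h   # row
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    # Write far-to-near so that closer points overwrite farther ones per pixel.
    order = np.argsort(depth)[::-1]
    vertex_map = np.zeros((h, w, 3), dtype=np.float32)
    vertex_map[v[order], u[order]] = points[order]
    return vertex_map

def point_to_plane_distance(p, q, n):
    """Point-to-plane residual |n . (p - q)| between a transformed point p,
    a corresponding point q, and the unit normal n of q's local plane."""
    return np.abs(np.sum(n * (p - q), axis=-1))
```

In a loss of the kind the abstract sketches, `p` would be a point transformed by the predicted pose and `q`, `n` the matched vertex and its estimated normal in the target vertex map, with the residual applied only on locally planar regions.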
Bibliographic Details
Main Authors: Li, Bin; Ye, Haifeng; Fu, Sihan; Gong, Xiaojin; Xiang, Zhiyu
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10142647/
https://www.ncbi.nlm.nih.gov/pubmed/37112307
http://dx.doi.org/10.3390/s23083967
Record ID: pubmed-10142647
Collection: PubMed (National Center for Biotechnology Information)
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2023-04-13
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).