
VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR


Bibliographic Details
Main Authors: Peng, Gang, Zhou, Yicheng, Hu, Lu, Xiao, Li, Sun, Zhigang, Wu, Zhangang, Zhu, Xukang
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223234/
https://www.ncbi.nlm.nih.gov/pubmed/37430501
http://dx.doi.org/10.3390/s23104588
_version_ 1785049892976066560
author Peng, Gang
Zhou, Yicheng
Hu, Lu
Xiao, Li
Sun, Zhigang
Wu, Zhangang
Zhu, Xukang
author_facet Peng, Gang
Zhou, Yicheng
Hu, Lu
Xiao, Li
Sun, Zhigang
Wu, Zhangang
Zhu, Xukang
author_sort Peng, Gang
collection PubMed
description Existing visual–inertial SLAM algorithms suffer from low accuracy and poor robustness when the robot moves at a constant velocity or rotates purely, or when the scene contains insufficient visual features. To address these problems, a tightly coupled vision–IMU–2D-lidar odometry (VILO) algorithm is proposed. First, low-cost 2D lidar observations and visual–inertial observations are fused in a tightly coupled manner. Second, the low-cost 2D lidar odometry model is used to derive the Jacobian matrix of the lidar residual with respect to the state variables to be estimated, and the vision–IMU–2D-lidar residual constraint equation is constructed. Third, a nonlinear optimization method is used to obtain the optimal robot pose, solving the problem of fusing 2D lidar observations with visual–inertial information in a tightly coupled manner. The results show that the algorithm retains reliable pose-estimation accuracy and robustness in many challenging environments, and that the position error and yaw-angle error are greatly reduced. This research improves the accuracy and robustness of multi-sensor-fusion SLAM algorithms.
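The tightly coupled fusion described in the abstract amounts to stacking residuals from every sensor into one nonlinear least-squares problem and solving for the pose. The sketch below is only illustrative: it reduces the state to a planar [x, y, yaw] pose and treats the VIO estimate and the 2D lidar odometry estimate as two absolute-pose pseudo-measurements with hand-picked weights. The function names, the identity Jacobians, and the weights are assumptions for illustration; the paper's actual residuals (reprojection errors, IMU preintegration terms, and the derived lidar-residual Jacobians) are more involved.

```python
import numpy as np

def wrap_angle(a):
    # Wrap an angle to (-pi, pi] so yaw residuals stay small.
    return (a + np.pi) % (2 * np.pi) - np.pi

def residuals_and_jacobian(state, z_vio, z_lidar, w_vio, w_lidar):
    """Stack weighted residuals of two pseudo-measurements
    (a VIO pose and a 2D-lidar odometry pose) of the same
    planar state [x, y, yaw]. Illustrative only."""
    r_vio = state - z_vio
    r_lidar = state - z_lidar
    r_vio[2] = wrap_angle(r_vio[2])
    r_lidar[2] = wrap_angle(r_lidar[2])
    r = np.concatenate([w_vio * r_vio, w_lidar * r_lidar])
    # For absolute-pose residuals, each block's Jacobian w.r.t.
    # the state is just a weighted identity matrix.
    J = np.vstack([w_vio * np.eye(3), w_lidar * np.eye(3)])
    return r, J

def gauss_newton_step(state, z_vio, z_lidar, w_vio=1.0, w_lidar=2.0):
    # One Gauss-Newton update: solve (J^T J) delta = -J^T r.
    r, J = residuals_and_jacobian(state, z_vio, z_lidar, w_vio, w_lidar)
    delta = np.linalg.solve(J.T @ J, -J.T @ r)
    return state + delta
```

Because this toy problem is linear, a single Gauss–Newton step lands exactly on the weighted average of the two measurements; in the real system the residuals are nonlinear in the pose, so the solver iterates.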
format Online
Article
Text
id pubmed-10223234
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10223234 2023-05-28 VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR Peng, Gang Zhou, Yicheng Hu, Lu Xiao, Li Sun, Zhigang Wu, Zhangang Zhu, Xukang Sensors (Basel) Article Existing visual–inertial SLAM algorithms suffer from low accuracy and poor robustness when the robot moves at a constant velocity or rotates purely, or when the scene contains insufficient visual features. To address these problems, a tightly coupled vision–IMU–2D-lidar odometry (VILO) algorithm is proposed. First, low-cost 2D lidar observations and visual–inertial observations are fused in a tightly coupled manner. Second, the low-cost 2D lidar odometry model is used to derive the Jacobian matrix of the lidar residual with respect to the state variables to be estimated, and the vision–IMU–2D-lidar residual constraint equation is constructed. Third, a nonlinear optimization method is used to obtain the optimal robot pose, solving the problem of fusing 2D lidar observations with visual–inertial information in a tightly coupled manner. The results show that the algorithm retains reliable pose-estimation accuracy and robustness in many challenging environments, and that the position error and yaw-angle error are greatly reduced. This research improves the accuracy and robustness of multi-sensor-fusion SLAM algorithms. MDPI 2023-05-09 /pmc/articles/PMC10223234/ /pubmed/37430501 http://dx.doi.org/10.3390/s23104588 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Peng, Gang
Zhou, Yicheng
Hu, Lu
Xiao, Li
Sun, Zhigang
Wu, Zhangang
Zhu, Xukang
VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR
title VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR
title_full VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR
title_fullStr VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR
title_full_unstemmed VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR
title_short VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR
title_sort vilo slam: tightly coupled binocular vision–inertia slam combined with lidar
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223234/
https://www.ncbi.nlm.nih.gov/pubmed/37430501
http://dx.doi.org/10.3390/s23104588
work_keys_str_mv AT penggang viloslamtightlycoupledbinocularvisioninertiaslamcombinedwithlidar
AT zhouyicheng viloslamtightlycoupledbinocularvisioninertiaslamcombinedwithlidar
AT hulu viloslamtightlycoupledbinocularvisioninertiaslamcombinedwithlidar
AT xiaoli viloslamtightlycoupledbinocularvisioninertiaslamcombinedwithlidar
AT sunzhigang viloslamtightlycoupledbinocularvisioninertiaslamcombinedwithlidar
AT wuzhangang viloslamtightlycoupledbinocularvisioninertiaslamcombinedwithlidar
AT zhuxukang viloslamtightlycoupledbinocularvisioninertiaslamcombinedwithlidar