
A Resilient Method for Visual–Inertial Fusion Based on Covariance Tuning


Bibliographic Details
Main Authors: Li, Kailin, Li, Jiansheng, Wang, Ancheng, Luo, Haolong, Li, Xueqiang, Yang, Zidi
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9781031/
https://www.ncbi.nlm.nih.gov/pubmed/36560205
http://dx.doi.org/10.3390/s22249836
collection PubMed
description To improve localization and pose precision of visual–inertial simultaneous localization and mapping (viSLAM) in complex scenarios, it is necessary to tune the weights of the visual and inertial inputs during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization of the viSLAM process, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration in each optimization is computed to construct a covariance tuning function, producing a new covariance matrix. This is used to perform another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In the validation experiment, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision on the EuRoc dataset, at all difficulty levels.
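The covariance-tuning step summarized in the description can be sketched roughly as follows. The paper's exact tuning function is not reproduced in this record, so the block below uses a standard variance-component rescaling as a stand-in: the unit-weight RMSE of each residual block (visual reprojection, IMU preintegration) is computed against its assumed covariance, and that covariance is rescaled accordingly before the next optimization round. All names, values, and the specific rescaling rule are illustrative, not taken from the paper.

```python
import numpy as np

def unit_weight_rmse(residuals: np.ndarray, cov: np.ndarray) -> float:
    """Unit-weight RMSE (a posteriori sigma_0) of one residual block.

    A value near 1 means the assumed covariance matches the observed
    error magnitudes; a value above 1 means the noise model is
    over-confident for this block.
    """
    n = residuals.size
    # r^T * Sigma^{-1} * r, computed via a solve instead of an explicit inverse
    whitened = residuals @ np.linalg.solve(cov, residuals)
    return float(np.sqrt(whitened / n))

def tune_covariance(cov: np.ndarray, sigma0: float) -> np.ndarray:
    """Rescale a covariance block by the squared unit-weight RMSE."""
    return (sigma0 ** 2) * cov

# Toy residual blocks standing in for the visual-reprojection and
# IMU-preintegration residuals from one back-end optimization round.
r_vis = np.array([0.8, -1.2, 1.0, -0.9])   # visual reprojection residuals
r_imu = np.array([0.05, -0.02, 0.04])      # IMU preintegration residuals
cov_vis = np.eye(4)                        # assumed visual covariance
cov_imu = 0.01 * np.eye(3)                 # assumed IMU covariance

s_vis = unit_weight_rmse(r_vis, cov_vis)
s_imu = unit_weight_rmse(r_imu, cov_imu)

# A block whose residuals exceed its noise model gets its covariance
# inflated, i.e. it is down-weighted in the next nonlinear optimization;
# a block performing better than modeled is up-weighted.
cov_vis_new = tune_covariance(cov_vis, s_vis)
cov_imu_new = tune_covariance(cov_imu, s_imu)
```

In an actual viSLAM back end this rescaling would sit inside the optimization loop, feeding the tuned covariances into another round of nonlinear least squares, as the description outlines; a robust kernel or a cap on the scale factor would typically guard against degenerate residuals.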
id pubmed-9781031
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Sensors (Basel)
published 2022-12-14
license © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).