
Robust Tightly Coupled Pose Measurement Based on Multi-Sensor Fusion in Mobile Robot System

Bibliographic Details
Main Authors: Peng, Gang; Lu, Zezao; Peng, Jiaxi; He, Dingxin; Li, Xinde; Hu, Bin
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8402045/
https://www.ncbi.nlm.nih.gov/pubmed/34450964
http://dx.doi.org/10.3390/s21165522
Description
Summary: Currently, simultaneous localization and mapping (SLAM) is one of the main research topics in the robotics field. Visual-inertial SLAM, which combines a camera with an inertial measurement unit (IMU), can significantly improve robustness and makes scale weakly observable, whereas in monocular visual SLAM scale is unobservable. For ground mobile robots, introducing a wheel speed sensor can solve the weak scale observability problem and improve robustness under abnormal conditions. In this paper, a multi-sensor fusion SLAM algorithm using monocular vision, inertial, and wheel speed measurements is proposed. The sensor measurements are combined in a tightly coupled manner, and a nonlinear optimization method is used to maximize the posterior probability and solve for the optimal state estimate. Loop detection and back-end optimization are added to reduce, or even eliminate, the cumulative error of the estimated poses, ensuring global consistency of the trajectory and map. The outstanding contributions of this paper are twofold: a wheel odometer pre-integration algorithm, combining the chassis speed and IMU angular velocity, that avoids the repeated integration caused by linearization-point changes during iterative optimization; and a state initialization based on the wheel odometer and IMU that quickly and reliably computes the initial state values required by the state estimator, in both stationary and moving conditions. Comparative experiments were conducted in room-scale scenes, building-scale scenes, and visual-loss scenarios. The results show that the proposed algorithm is highly accurate, accumulating 2.2 m of error after moving 812 m (0.28%, loop-closure optimization disabled), robust, and capable of effective localization even in the event of sensor loss, including loss of vision. The accuracy and robustness of the proposed method are superior to those of monocular visual-inertial SLAM and traditional wheel odometry.
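To illustrate the idea behind the pre-integration contribution described above, the following is a minimal planar (2D) sketch, not the paper's actual SE(3) formulation with noise propagation: chassis forward speed and IMU yaw rate are integrated once into a relative pose increment expressed in the frame of the first sample, so the increment can be reused unchanged when the optimizer moves the linearization point. All names and the midpoint integration scheme are illustrative assumptions.

```python
import math

def preintegrate_wheel_odom(measurements):
    """Pre-integrate chassis speed and IMU yaw rate into a relative
    2D pose increment (dx, dy, dtheta).

    `measurements` is a list of (v, omega, dt) tuples: chassis forward
    speed v (m/s), IMU yaw rate omega (rad/s), sample interval dt (s).
    The increment is relative to the pose at the first sample, so it is
    independent of any absolute pose estimate.
    """
    dx = dy = dtheta = 0.0
    for v, omega, dt in measurements:
        # Midpoint integration of the planar kinematic model.
        theta_mid = dtheta + 0.5 * omega * dt
        dx += v * math.cos(theta_mid) * dt
        dy += v * math.sin(theta_mid) * dt
        dtheta += omega * dt
    return dx, dy, dtheta

# 1 m/s straight ahead for 1 s: the increment is ~(1.0, 0.0, 0.0).
# Because it is a relative quantity, re-linearizing the global pose in
# the optimizer does not require re-integrating the raw measurements.
increment = preintegrate_wheel_odom([(1.0, 0.0, 0.1)] * 10)
```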