Dense RGB-D SLAM with Multiple Cameras
A multi-camera dense RGB-D SLAM (simultaneous localization and mapping) system has the potential both to speed up scene reconstruction and to improve localization accuracy, thanks to multiple mounted sensors and an enlarged effective field of view. To effectively tap the potential of the system, two issues must be addressed: first, how to calibrate the system, where the sensors usually share a small or no common field of view, so as to maximize the effective field of view; second, how to fuse the location information from the different sensors. In this work, a three-Kinect system is reported. For system calibration, two calibration methods are proposed: one, based on an improved hand–eye calibration method, is suitable for systems equipped with an inertial measurement unit (IMU); the other is for pure visual SLAM without any auxiliary sensors. In the RGB-D SLAM stage, we extend and improve a state-of-the-art single-camera RGB-D SLAM method to the multi-camera setting. We track the cameras’ poses independently and, at each moment, select the one with the minimal pose error as the reference to correct the other cameras’ poses. To optimize the initial estimated poses, we extend the deformation graph with a device-number attribute that distinguishes surfels built by different cameras, and apply deformations according to the device number. We verify the accuracy of our extrinsic calibration methods in the experiment section and show satisfactory models reconstructed by our multi-camera dense RGB-D SLAM. The RMSE (root-mean-square error) of the lengths measured in our reconstructed model is 1.55 cm, similar to state-of-the-art single-camera RGB-D SLAM systems.
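The abstract describes tracking each camera independently and, at every moment, using the camera with the smallest pose error as a reference to correct the others through the calibrated rig extrinsics. The paper's exact error metric and correction rule are not given here; the following is a minimal sketch of that fusion idea, assuming 4×4 world-from-camera pose matrices and per-camera rig extrinsics (all names are illustrative):

```python
import numpy as np

def fuse_poses(poses, errors, extrinsics):
    """Reference-pose correction sketch.

    poses      : list of 4x4 world-from-camera matrices, one per camera
    errors     : per-camera tracking error (lower is better)
    extrinsics : list of 4x4 rig-from-camera transforms (the calibrated
                 extrinsics); extrinsics[i] maps camera i's frame to the
                 shared rig frame
    Returns the corrected world-from-camera poses: the reference camera's
    pose is kept, the others are re-derived through the rigid rig.
    """
    r = int(np.argmin(errors))                           # reference camera
    world_from_rig = poses[r] @ np.linalg.inv(extrinsics[r])
    return [world_from_rig @ E for E in extrinsics]
```

Because the cameras are rigidly mounted, one trusted pose plus the extrinsics determines every other camera's pose, so per-camera drift is discarded rather than averaged in.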
Main Authors: | Meng, Xinrui; Gao, Wei; Hu, Zhanyi |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2018 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6068657/ https://www.ncbi.nlm.nih.gov/pubmed/30004420 http://dx.doi.org/10.3390/s18072118 |
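For the IMU-equipped configuration, the abstract mentions an improved hand–eye calibration. The paper's specific improvement is not reproduced in this record; as a baseline, the classical AX = XB formulation can be sketched as a Park–Martin-style least-squares solve (function and variable names are illustrative):

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector of a rotation matrix (SO(3) log map)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def hand_eye(As, Bs):
    """Least-squares solution of A_i X = X B_i.

    As, Bs : lists of 4x4 relative-motion matrices observed by the two
             rigidly coupled sensors (e.g., IMU and camera)
    Returns the 4x4 extrinsic transform X between them.
    """
    # Rotation part: the rotation axes satisfy alpha_i = R_X @ beta_i,
    # solved by orthogonal Procrustes over the log-map vectors.
    alphas = np.stack([rot_log(A[:3, :3]) for A in As])
    betas = np.stack([rot_log(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(betas.T @ alphas)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # enforce a proper rotation
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    # Translation part: (R_Ai - I) t = R @ t_Bi - t_Ai, stacked and
    # solved in the least-squares sense.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution; this is why small-overlap rigs typically calibrate from motion rather than from a shared view of a target.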
_version_ | 1783343319765483520 |
---|---|
author | Meng, Xinrui Gao, Wei Hu, Zhanyi |
author_facet | Meng, Xinrui Gao, Wei Hu, Zhanyi |
author_sort | Meng, Xinrui |
collection | PubMed |
description | A multi-camera dense RGB-D SLAM (simultaneous localization and mapping) system has the potential both to speed up scene reconstruction and to improve localization accuracy, thanks to multiple mounted sensors and an enlarged effective field of view. To effectively tap the potential of the system, two issues must be addressed: first, how to calibrate the system, where the sensors usually share a small or no common field of view, so as to maximize the effective field of view; second, how to fuse the location information from the different sensors. In this work, a three-Kinect system is reported. For system calibration, two calibration methods are proposed: one, based on an improved hand–eye calibration method, is suitable for systems equipped with an inertial measurement unit (IMU); the other is for pure visual SLAM without any auxiliary sensors. In the RGB-D SLAM stage, we extend and improve a state-of-the-art single-camera RGB-D SLAM method to the multi-camera setting. We track the cameras’ poses independently and, at each moment, select the one with the minimal pose error as the reference to correct the other cameras’ poses. To optimize the initial estimated poses, we extend the deformation graph with a device-number attribute that distinguishes surfels built by different cameras, and apply deformations according to the device number. We verify the accuracy of our extrinsic calibration methods in the experiment section and show satisfactory models reconstructed by our multi-camera dense RGB-D SLAM. The RMSE (root-mean-square error) of the lengths measured in our reconstructed model is 1.55 cm, similar to state-of-the-art single-camera RGB-D SLAM systems. |
format | Online Article Text |
id | pubmed-6068657 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-6068657 2018-08-07 Dense RGB-D SLAM with Multiple Cameras Meng, Xinrui Gao, Wei Hu, Zhanyi Sensors (Basel) Article MDPI 2018-07-02 /pmc/articles/PMC6068657/ /pubmed/30004420 http://dx.doi.org/10.3390/s18072118 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Meng, Xinrui Gao, Wei Hu, Zhanyi Dense RGB-D SLAM with Multiple Cameras |
title | Dense RGB-D SLAM with Multiple Cameras |
title_full | Dense RGB-D SLAM with Multiple Cameras |
title_fullStr | Dense RGB-D SLAM with Multiple Cameras |
title_full_unstemmed | Dense RGB-D SLAM with Multiple Cameras |
title_short | Dense RGB-D SLAM with Multiple Cameras |
title_sort | dense rgb-d slam with multiple cameras |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6068657/ https://www.ncbi.nlm.nih.gov/pubmed/30004420 http://dx.doi.org/10.3390/s18072118 |
work_keys_str_mv | AT mengxinrui densergbdslamwithmultiplecameras AT gaowei densergbdslamwithmultiplecameras AT huzhanyi densergbdslamwithmultiplecameras |