3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach
In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind,...
Main Authors: | Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried
Format: | Online Article Text
Language: | English
Published: | MDPI, 2016
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5134582/ https://www.ncbi.nlm.nih.gov/pubmed/27854315 http://dx.doi.org/10.3390/s16111923
_version_ | 1782471485930078208 |
author | Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried
author_facet | Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried
author_sort | Vlaminck, Michiel |
collection | PubMed |
description | In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to ensure accurate alignment with the global 3D map. To cope with drift, the system incorporates loop closure by determining the pose error and propagating it back through the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. The average distance between corresponding point pairs in the ground-truth and estimated point clouds is approximately one centimeter for an area covering roughly 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.
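The pipeline summarized in this abstract hinges on two steps: registering each newly generated point cloud against the continuously refined global 3D map using local surface information, and correcting accumulated drift through loop closure in a pose graph. As a rough illustration of the first step only, the sketch below performs one linearized point-to-plane ICP update in plain NumPy. It is a minimal, hypothetical example written for this record, not the authors' implementation; the function names (`estimate_normals`, `point_to_plane_icp_step`) and the `global_map`/`new_scan` variables are assumptions.

```python
# Minimal sketch (not the authors' code): align a new LiDAR scan to a global map
# by estimating surface normals on the map and taking one Gauss-Newton step of
# point-to-plane ICP. Names and structure are illustrative assumptions.
import numpy as np

def estimate_normals(points, k=10):
    """Estimate a unit normal per point from the covariance of its k nearest
    neighbours (brute-force search; fine for a small illustrative cloud)."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        # normal = eigenvector of the smallest eigenvalue of the local covariance
        eigval, eigvec = np.linalg.eigh(cov)
        normals[i] = eigvec[:, 0]
    return normals

def point_to_plane_icp_step(source, target, target_normals):
    """One linearized point-to-plane ICP update: returns a 4x4 transform that
    moves `source` towards `target` along the target surface normals."""
    # nearest-neighbour correspondences (brute force for clarity)
    idx = np.array([np.argmin(np.linalg.norm(target - p, axis=1)) for p in source])
    q, n = target[idx], target_normals[idx]
    # residuals and Jacobian rows of the small-angle linearization
    r = np.einsum('ij,ij->i', source - q, n)       # signed point-to-plane distance
    A = np.hstack([np.cross(source, n), n])        # rows: [p x n, n]
    x, *_ = np.linalg.lstsq(A, -r, rcond=None)     # x = [rotation vector, translation]
    wx, wy, wz, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -wz,  wy],
                          [ wz, 1.0, -wx],
                          [-wy,  wx, 1.0]])        # small-angle rotation I + [w]x
    T[:3, 3] = [tx, ty, tz]
    return T

# usage: refine the pose of a new scan against the global map
# global_map: (N, 3) array, new_scan: (M, 3) array of LiDAR points
# T = point_to_plane_icp_step(new_scan, global_map, estimate_normals(global_map))
```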
format | Online Article Text |
id | pubmed-5134582 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-5134582 2017-01-03 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried Sensors (Basel) Article MDPI 2016-11-16 /pmc/articles/PMC5134582/ /pubmed/27854315 http://dx.doi.org/10.3390/s16111923 Text en © 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried; 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach
title | 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach |
title_full | 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach |
title_fullStr | 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach |
title_full_unstemmed | 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach |
title_short | 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach |
title_sort | 3d scene reconstruction using omnidirectional vision and lidar: a hybrid approach |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5134582/ https://www.ncbi.nlm.nih.gov/pubmed/27854315 http://dx.doi.org/10.3390/s16111923 |
work_keys_str_mv | AT vlaminckmichiel 3dscenereconstructionusingomnidirectionalvisionandlidarahybridapproach AT luonghiep 3dscenereconstructionusingomnidirectionalvisionandlidarahybridapproach AT goemanwerner 3dscenereconstructionusingomnidirectionalvisionandlidarahybridapproach AT philipswilfried 3dscenereconstructionusingomnidirectionalvisionandlidarahybridapproach |