Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation

Bibliographic Details
Main Authors: Tibebu, Haileleol, De-Silva, Varuna, Artaud, Corentin, Pina, Rafael, Shi, Xiyu
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9611591/
https://www.ncbi.nlm.nih.gov/pubmed/36298368
http://dx.doi.org/10.3390/s22208021
_version_ 1784819565357694976
author Tibebu, Haileleol
De-Silva, Varuna
Artaud, Corentin
Pina, Rafael
Shi, Xiyu
author_facet Tibebu, Haileleol
De-Silva, Varuna
Artaud, Corentin
Pina, Rafael
Shi, Xiyu
author_sort Tibebu, Haileleol
collection PubMed
description Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have focused primarily on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset named LboroAV2 using multiple sensors, including camera, light detection and ranging (LiDAR), ultrasound, e-compass and rotary encoder. We also propose an end-to-end deep learning architecture for fusing RGB images and LiDAR laser scan data for odometry applications. The proposed method consists of a convolutional encoder, a compressed representation and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces a compressed representation, which is used to visualise the network’s learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relationship between consecutive time steps. We use the Loughborough autonomous vehicle (LboroAV2) and the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry (VO) datasets to experiment with and evaluate our results. In addition to visualising the network’s learning process, our approach provides superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible.
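The description above outlines the fusion architecture at a high level: a convolutional encoder per modality extracts features and compresses them into a compact representation, and a recurrent network consumes the compressed sequence to estimate motion between consecutive time steps. Below is a minimal, illustrative PyTorch sketch of such a camera-LiDAR fusion odometry network. The module names, layer sizes, the range-image LiDAR input and the 6-DoF pose output are assumptions made for this example; the authors' released GitHub implementation may differ.

```python
# Minimal sketch of a camera + LiDAR fusion odometry network, assuming:
#  - RGB frames as 3xHxW tensors, LiDAR scans rasterised to 1xHxW range images
#  - a 6-DoF relative pose (translation + Euler angles) regressed per time step
# Layer sizes and names are illustrative, not the authors' released code.
import torch
import torch.nn as nn


def conv_encoder(in_channels: int) -> nn.Sequential:
    """Small convolutional encoder that compresses one modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=7, stride=2, padding=3),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size compressed representation
    )


class FusionOdometryNet(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.rgb_encoder = conv_encoder(in_channels=3)    # camera branch
        self.lidar_encoder = conv_encoder(in_channels=1)  # LiDAR range-image branch
        fused_dim = 2 * 128 * 4 * 4                       # concatenated compressed codes
        self.rnn = nn.LSTM(fused_dim, hidden_size, num_layers=2, batch_first=True)
        self.pose_head = nn.Linear(hidden_size, 6)        # [tx, ty, tz, roll, pitch, yaw]

    def forward(self, rgb_seq: torch.Tensor, lidar_seq: torch.Tensor) -> torch.Tensor:
        # rgb_seq: (batch, time, 3, H, W); lidar_seq: (batch, time, 1, H, W)
        b, t = rgb_seq.shape[:2]
        rgb_code = self.rgb_encoder(rgb_seq.flatten(0, 1)).flatten(1)
        lidar_code = self.lidar_encoder(lidar_seq.flatten(0, 1)).flatten(1)
        fused = torch.cat([rgb_code, lidar_code], dim=1).view(b, t, -1)
        out, _ = self.rnn(fused)       # sequential reasoning over compressed codes
        return self.pose_head(out)     # (batch, time, 6) relative poses


if __name__ == "__main__":
    net = FusionOdometryNet()
    rgb = torch.randn(2, 5, 3, 64, 192)
    lidar = torch.randn(2, 5, 1, 64, 192)
    print(net(rgb, lidar).shape)  # torch.Size([2, 5, 6])
```

The sketch fuses the two modalities by simple concatenation of their compressed codes before the recurrent layer, which is one straightforward way to realise the encoder-to-RNN pipeline the abstract describes; other fusion strategies (e.g. attention or learned gating) would slot into the same structure.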
format Online
Article
Text
id pubmed-9611591
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9611591 2022-10-28 Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation Tibebu, Haileleol De-Silva, Varuna Artaud, Corentin Pina, Rafael Shi, Xiyu Sensors (Basel) Article Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have focused primarily on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset named LboroAV2 using multiple sensors, including camera, light detection and ranging (LiDAR), ultrasound, e-compass and rotary encoder. We also propose an end-to-end deep learning architecture for fusing RGB images and LiDAR laser scan data for odometry applications. The proposed method consists of a convolutional encoder, a compressed representation and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces a compressed representation, which is used to visualise the network’s learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relationship between consecutive time steps. We use the Loughborough autonomous vehicle (LboroAV2) and the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry (VO) datasets to experiment with and evaluate our results. In addition to visualising the network’s learning process, our approach provides superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible. MDPI 2022-10-20 /pmc/articles/PMC9611591/ /pubmed/36298368 http://dx.doi.org/10.3390/s22208021 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Tibebu, Haileleol
De-Silva, Varuna
Artaud, Corentin
Pina, Rafael
Shi, Xiyu
Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
title Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
title_full Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
title_fullStr Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
title_full_unstemmed Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
title_short Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
title_sort towards interpretable camera and lidar data fusion for autonomous ground vehicles localisation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9611591/
https://www.ncbi.nlm.nih.gov/pubmed/36298368
http://dx.doi.org/10.3390/s22208021
work_keys_str_mv AT tibebuhaileleol towardsinterpretablecameraandlidardatafusionforautonomousgroundvehicleslocalisation
AT desilvavaruna towardsinterpretablecameraandlidardatafusionforautonomousgroundvehicleslocalisation
AT artaudcorentin towardsinterpretablecameraandlidardatafusionforautonomousgroundvehicleslocalisation
AT pinarafael towardsinterpretablecameraandlidardatafusionforautonomousgroundvehicleslocalisation
AT shixiyu towardsinterpretablecameraandlidardatafusionforautonomousgroundvehicleslocalisation