LiDAR-as-Camera for End-to-End Driving

The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, simulation studies have shown that depth-sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. The main goal of our study is to investigate how useful such images are as inputs to a self-driving neural network. We demonstrate that such LiDAR images are sufficient for the real-car road-following task. Models using these images as input perform at least as well as camera-based models in the tested conditions. Moreover, LiDAR images are less sensitive to weather conditions and lead to better generalization. In a secondary research direction, we reveal that the temporal smoothness of off-policy prediction sequences correlates with the actual on-policy driving ability as well as the commonly used mean absolute error does.
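As a rough illustration of the pipeline described above, the sketch below feeds a three-channel Ouster-style LiDAR image (depth, intensity, ambient radiation) into a small convolutional network that regresses a steering angle. This is a minimal PyTorch sketch under assumed details, not the architecture from the paper: the 128x1024 input resolution, the PilotNet-like layer sizes, and the single-output head are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class LidarToSteering(nn.Module):
    """Toy end-to-end model: 3-channel LiDAR image -> steering angle.
    Channel order (depth, intensity, ambient) and all layer sizes are
    assumptions for illustration, not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),  # collapse the spatial dims
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 50), nn.ReLU(),
            nn.Linear(50, 1),  # single low-level command: steering angle
        )

    def forward(self, x):
        return self.head(self.features(x))

# One fake surround-view frame: batch x (depth, intensity, ambient) x H x W.
frame = torch.rand(1, 3, 128, 1024)
print(LidarToSteering()(frame).shape)  # torch.Size([1, 1])
```

Because all three channels originate from the same sensor sweep, the network consumes them as one already-aligned image; no cross-sensor calibration or time-synchronization step has to precede the model.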

Bibliographic Details
Main Authors: Tampuu, Ardi, Aidla, Romet, van Gent, Jan Aare, Matiisen, Tambet
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007091/
https://www.ncbi.nlm.nih.gov/pubmed/36905051
http://dx.doi.org/10.3390/s23052845
_version_ 1784905432803835904
author Tampuu, Ardi
Aidla, Romet
van Gent, Jan Aare
Matiisen, Tambet
author_facet Tampuu, Ardi
Aidla, Romet
van Gent, Jan Aare
Matiisen, Tambet
author_sort Tampuu, Ardi
collection PubMed
description The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, simulation studies have shown that depth-sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. The main goal of our study is to investigate how useful such images are as inputs to a self-driving neural network. We demonstrate that such LiDAR images are sufficient for the real-car road-following task. Models using these images as input perform at least as well as camera-based models in the tested conditions. Moreover, LiDAR images are less sensitive to weather conditions and lead to better generalization. In a secondary research direction, we reveal that the temporal smoothness of off-policy prediction sequences correlates with the actual on-policy driving ability as well as the commonly used mean absolute error does.
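The secondary finding in the description, that the temporal smoothness of off-policy prediction sequences tracks on-policy driving ability as well as mean absolute error does, can be made concrete with a small sketch. The paper's precise smoothness definition is not reproduced here; the mean absolute difference between consecutive predictions, used below, is one plausible reading.

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error between predicted and recorded steering."""
    return np.mean(np.abs(pred - target))

def temporal_smoothness(pred):
    """Mean absolute change between consecutive off-policy predictions;
    lower means a smoother (less jittery) command sequence."""
    return np.mean(np.abs(np.diff(pred)))

# Toy comparison: two prediction sequences with similar MAE but
# very different jitter against the same recorded steering trace.
t = np.linspace(0.0, 10.0, 500)
target = 0.2 * np.sin(t)                                # recorded steering
smooth_pred = target + 0.05                             # biased but smooth
jittery_pred = target + np.random.uniform(-0.1, 0.1, t.size)  # noisy

for name, pred in [("smooth", smooth_pred), ("jittery", jittery_pred)]:
    print(f"{name}: MAE={mae(pred, target):.3f} "
          f"smoothness={temporal_smoothness(pred):.4f}")
```

Both sequences score an MAE of about 0.05, yet their smoothness values differ sharply, which is exactly the kind of difference an off-policy smoothness metric is meant to expose.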
format Online
Article
Text
id pubmed-10007091
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10007091 2023-03-12 LiDAR-as-Camera for End-to-End Driving Tampuu, Ardi Aidla, Romet van Gent, Jan Aare Matiisen, Tambet Sensors (Basel) Article The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, simulation studies have shown that depth-sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. The main goal of our study is to investigate how useful such images are as inputs to a self-driving neural network. We demonstrate that such LiDAR images are sufficient for the real-car road-following task. Models using these images as input perform at least as well as camera-based models in the tested conditions. Moreover, LiDAR images are less sensitive to weather conditions and lead to better generalization. In a secondary research direction, we reveal that the temporal smoothness of off-policy prediction sequences correlates with the actual on-policy driving ability as well as the commonly used mean absolute error does. MDPI 2023-03-06 /pmc/articles/PMC10007091/ /pubmed/36905051 http://dx.doi.org/10.3390/s23052845 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Tampuu, Ardi
Aidla, Romet
van Gent, Jan Aare
Matiisen, Tambet
LiDAR-as-Camera for End-to-End Driving
title LiDAR-as-Camera for End-to-End Driving
title_full LiDAR-as-Camera for End-to-End Driving
title_fullStr LiDAR-as-Camera for End-to-End Driving
title_full_unstemmed LiDAR-as-Camera for End-to-End Driving
title_short LiDAR-as-Camera for End-to-End Driving
title_sort lidar-as-camera for end-to-end driving
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007091/
https://www.ncbi.nlm.nih.gov/pubmed/36905051
http://dx.doi.org/10.3390/s23052845
work_keys_str_mv AT tampuuardi lidarascameraforendtoenddriving
AT aidlaromet lidarascameraforendtoenddriving
AT vangentjanaare lidarascameraforendtoenddriving
AT matiisentambet lidarascameraforendtoenddriving