LiDAR-as-Camera for End-to-End Driving
The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, si...
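The abstract describes the basic end-to-end setup: a neural network maps a camera image (or, in this paper, a LiDAR image treated as a camera frame) directly to a low-level driving command such as the steering angle. The sketch below is a minimal, illustrative PyTorch version of that idea, not the architecture used in the paper; the layer sizes, input resolution, and class name `EndToEndDriver` are assumptions chosen for the example.

```python
# Minimal sketch of an end-to-end driving network: a small CNN that maps an
# image-like input (camera frame or LiDAR-as-camera frame) to a steering angle.
# Architecture and hyperparameters are illustrative, not the paper's model.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Convolutional feature extractor over the input image.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)),
        )
        # Regression head producing a single low-level command: the steering angle.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 8, 100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))

# Example: one 3-channel 264x512 frame -> predicted steering angle, shape (1, 1).
model = EndToEndDriver(in_channels=3)
frame = torch.randn(1, 3, 264, 512)
steering = model(frame)
```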
Main Authors: Tampuu, Ardi; Aidla, Romet; van Gent, Jan Aare; Matiisen, Tambet
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007091/ https://www.ncbi.nlm.nih.gov/pubmed/36905051 http://dx.doi.org/10.3390/s23052845
Similar Items
- Perspective Taking in Deep Reinforcement Learning Agents
  by: Labash, Aqeel, et al.
  Published: (2020)
- LiDAR-Camera Calibration Using Line Correspondences
  by: Bai, Zixuan, et al.
  Published: (2020)
- Real time object detection using LiDAR and camera fusion for autonomous driving
  by: Liu, Haibin, et al.
  Published: (2023)
- Efficient neural decoding of self-location with a deep recurrent network
  by: Tampuu, Ardi, et al.
  Published: (2019)
- Accurate Calibration of Multi-LiDAR-Multi-Camera Systems
  by: Pusztai, Zoltán, et al.
  Published: (2018)