
Navigating an Automated Driving Vehicle via the Early Fusion of Multi-Modality


Bibliographic Details
Main Authors: Haris, Malik; Glowacz, Adam
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8878300/
https://www.ncbi.nlm.nih.gov/pubmed/35214327
http://dx.doi.org/10.3390/s22041425
Description: The ability of artificial intelligence to drive toward an intended destination is a key component of an autonomous vehicle. Different paradigms are now being employed to address artificial intelligence advancement. On the one hand, modular pipelines break down the driving model into submodels, such as perception, maneuver planning and control. On the other hand, we used the end-to-end driving method to assign raw sensor data directly to vehicle control signals. The latter is less well-studied but is becoming more popular since it is easier to use. This article focuses on end-to-end autonomous driving, using RGB pictures as the primary sensor input data. The autonomous vehicle is equipped with a camera and active sensors, such as LiDAR and Radar, for safe navigation. Active sensors (e.g., LiDAR) provide more accurate depth information than passive sensors. As a result, this paper examines whether combining the RGB from the camera and active depth information from LiDAR has better results in end-to-end artificial driving than using only a single modality. This paper focuses on the early fusion of multi-modality and demonstrates how it outperforms a single modality using the CARLA simulator.
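The abstract's central idea, early fusion, means concatenating the camera image and the LiDAR depth map channel-wise before any feature extraction, so a single network sees one multi-channel input (late fusion would instead merge per-modality features after separate encoders). A minimal sketch of that stacking step follows; the image size, the single-channel depth projection, and the [0, 1] normalization are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Hypothetical inputs: an RGB camera frame and a LiDAR depth map
# already projected onto the same image plane (H x W pixels).
H, W = 88, 200
rgb = np.random.rand(H, W, 3).astype(np.float32)    # 3-channel camera image in [0, 1]
depth = np.random.rand(H, W, 1).astype(np.float32)  # 1-channel normalized depth map

# Early fusion: stack the modalities along the channel axis BEFORE
# any feature extraction, producing one 4-channel input tensor that
# a single convolutional network can consume.
fused = np.concatenate([rgb, depth], axis=-1)

print(fused.shape)  # (88, 200, 4)
```

The single-modality baseline the paper compares against would feed only the 3-channel `rgb` array to the same network; early fusion changes just the input depth of the first layer.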
Published 2022-02-13 by MDPI in Sensors (Basel). © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).