Evaluation of 3D Vulnerable Objects’ Detection Using a Multi-Sensors System for Autonomous Vehicles

One of the primary tasks undertaken by autonomous vehicles (AVs) is object detection, which comes ahead of object tracking, trajectory estimation, and collision avoidance. Vulnerable road objects (e.g., pedestrians, cyclists, etc.) pose a greater challenge to the reliability of object detection operations due to their continuously changing behavior. The majority of commercially available AVs, and research into them, depends on employing expensive sensors. However, this hinders the development of further research on the operations of AVs. In this paper, therefore, we focus on the use of a lower-cost single-beam LiDAR in addition to a monocular camera to achieve multiple 3D vulnerable object detection in real driving scenarios, all the while maintaining real-time performance. This research also addresses the problems faced during object detection, such as the complex interaction between objects where occlusion and truncation occur, and the dynamic changes in the perspective and scale of bounding boxes. The video-processing module works upon a deep-learning detector (YOLOv3), while the LiDAR measurements are pre-processed and grouped into clusters. The output of the proposed system is objects classification and localization by having bounding boxes accompanied by a third depth dimension acquired by the LiDAR. Real-time tests show that the system can efficiently detect the 3D location of vulnerable objects in real-time scenarios.
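The fusion idea described in the abstract, i.e. attaching a depth value from clustered single-beam LiDAR returns to 2D camera bounding boxes, can be illustrated with a minimal sketch. This is not the authors' implementation; the field of view, clustering threshold, angle-to-pixel mapping, and all sample numbers below are illustrative assumptions.

```python
from statistics import median

# Toy sketch: fuse 2D detections with single-beam LiDAR clusters.
# Detections are (label, x_min, x_max) in pixels; LiDAR returns are
# (angle_deg, range_m) sorted by bearing in the same horizontal plane.
# All constants and numbers are illustrative, not from the paper.

IMG_WIDTH = 640
HFOV_DEG = 90.0  # assumed camera horizontal field of view

def angle_to_pixel(angle_deg):
    """Map a LiDAR bearing (0 deg = optical axis) to an image column."""
    return (angle_deg / HFOV_DEG + 0.5) * IMG_WIDTH

def cluster_returns(returns, max_gap_m=0.5):
    """Group consecutive returns whose ranges differ by < max_gap_m."""
    clusters, current = [], [returns[0]]
    for prev, cur in zip(returns, returns[1:]):
        if abs(cur[1] - prev[1]) < max_gap_m:
            current.append(cur)
        else:
            clusters.append(current)
            current = [cur]
    clusters.append(current)
    return clusters

def attach_depth(detections, clusters):
    """Give each 2D box the median range of the cluster centred inside it."""
    results = []
    for label, x_min, x_max in detections:
        for cluster in clusters:
            centre_px = angle_to_pixel(median(a for a, _ in cluster))
            if x_min <= centre_px <= x_max:
                depth = median(r for _, r in cluster)
                results.append((label, x_min, x_max, depth))
                break
    return results

detections = [("pedestrian", 300, 360), ("cyclist", 80, 150)]
returns = [(-25.0, 8.1), (-24.0, 8.0), (-23.0, 8.2),   # cyclist cluster
           (2.0, 4.1), (3.0, 4.0), (4.0, 4.05)]        # pedestrian cluster
clusters = cluster_returns(returns)
print(attach_depth(detections, clusters))
# [('pedestrian', 300, 360, 4.05), ('cyclist', 80, 150, 8.1)]
```

The range-gap clustering and nearest-box association stand in for the paper's pre-processing and fusion stages; a real pipeline would use calibrated extrinsics between the LiDAR and the camera rather than a fixed field-of-view mapping.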

Bibliographic Details
Main Authors: Khatab, Esraa; Onsy, Ahmed; Abouelfarag, Ahmed
Format: Online Article (Text)
Language: English
Published: MDPI, 2022
Journal: Sensors (Basel)
Collection: PubMed
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8874666/
https://www.ncbi.nlm.nih.gov/pubmed/35214569
http://dx.doi.org/10.3390/s22041663
Published online by MDPI on 21 February 2022 in Sensors (Basel).
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).