ExistenceMap-PointPillars: A Multifusion Network for Robust 3D Object Detection with Object Existence Probability Map †

Bibliographic Details
Main Authors: Hariya, Keigo; Inoshita, Hiroki; Yanase, Ryo; Yoneda, Keisuke; Suganuma, Naoki
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10610647/
https://www.ncbi.nlm.nih.gov/pubmed/37896463
http://dx.doi.org/10.3390/s23208367
Description
Summary: Recognition of surrounding objects is crucial for ensuring the safety of automated driving systems. In 3D object recognition with deep learning, several methods fuse Light Detection and Ranging (LiDAR) and camera data. LiDAR–camera fusion is widely acknowledged as effective because it provides a richer source of information for object detection than methods that rely on a single sensor. Within the framework of multistage LiDAR–camera fusion, however, it is difficult to maintain stable object recognition under adverse conditions in which object detection in camera images becomes unreliable, such as at night or in rainy weather. In this paper, we introduce “ExistenceMap-PointPillars”, a novel and effective approach to multi-sensor 3D object detection that requires only a straightforward modification of a LiDAR-based 3D object detection network. The core idea of ExistenceMap-PointPillars is to generate pseudo 2D maps that probabilistically represent the object existence regions estimated from the fused sensor data, and to incorporate these maps into the pseudo image generated from the 3D point cloud. Experiments on our proprietary dataset show that ExistenceMap-PointPillars improves the mean Average Precision (mAP) by +4.19% compared with the conventional PointPillars method. In addition, a Grad-CAM evaluation of the network’s response shows that ExistenceMap-PointPillars focuses more strongly on the object existence regions in the pseudo 2D map, which reduces the number of false positives. In summary, ExistenceMap-PointPillars is a valuable advance in 3D object detection, offering improved performance and robustness, especially in challenging environmental conditions.
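
As a rough illustration of the mechanism described in the abstract, the sketch below rasterizes a probabilistic object-existence map in bird's-eye view and concatenates it with a PointPillars-style pseudo-image before the 2D backbone. This is a minimal, hypothetical PyTorch sketch: the function and class names, the grid size, the circular rasterization, and the 1×1-convolution mixing step are all assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): injecting a probabilistic
# "existence map" into a PointPillars-style BEV pseudo-image by channel
# concatenation before the 2D detection backbone.
import torch
import torch.nn as nn


def rasterize_existence_map(detections_bev, grid_hw=(496, 432)):
    """Rasterize fused-sensor detections into a BEV probability map.

    `detections_bev` is assumed to be a list of (row, col, radius, prob)
    tuples already projected into BEV grid coordinates.
    """
    H, W = grid_hw
    exist_map = torch.zeros(1, 1, H, W)  # single probability channel
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    for r, c, rad, prob in detections_bev:
        mask = (ys - r) ** 2 + (xs - c) ** 2 <= rad ** 2
        # keep the maximum probability where existence regions overlap
        exist_map[0, 0][mask] = torch.maximum(
            exist_map[0, 0][mask], torch.tensor(float(prob))
        )
    return exist_map


class ExistenceFusion(nn.Module):
    """Concatenate the existence map onto the pillar pseudo-image."""

    def __init__(self, pseudo_image_channels=64):
        super().__init__()
        # 1x1 conv mixes the extra probability channel back down to the
        # channel count expected by the downstream 2D backbone.
        self.mix = nn.Conv2d(pseudo_image_channels + 1,
                             pseudo_image_channels, kernel_size=1)

    def forward(self, pseudo_image, exist_map):
        fused = torch.cat([pseudo_image, exist_map], dim=1)
        return self.mix(fused)


if __name__ == "__main__":
    pseudo_image = torch.randn(1, 64, 496, 432)        # pillar-scattered BEV features
    exist_map = rasterize_existence_map([(100, 200, 8, 0.9)])
    out = ExistenceFusion()(pseudo_image, exist_map)
    print(out.shape)  # torch.Size([1, 64, 496, 432])
```

Channel concatenation followed by a 1×1 convolution is just one plausible way to inject such a map; the paper's actual fusion point and operation may differ.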