ExistenceMap-PointPillars: A Multifusion Network for Robust 3D Object Detection with Object Existence Probability Map †

Bibliographic Details
Main Authors: Hariya, Keigo, Inoshita, Hiroki, Yanase, Ryo, Yoneda, Keisuke, Suganuma, Naoki
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10610647/
https://www.ncbi.nlm.nih.gov/pubmed/37896463
http://dx.doi.org/10.3390/s23208367
_version_ 1785128306330304512
author Hariya, Keigo
Inoshita, Hiroki
Yanase, Ryo
Yoneda, Keisuke
Suganuma, Naoki
author_sort Hariya, Keigo
collection PubMed
description Recognition of surrounding objects is crucial for ensuring the safety of automated driving systems. In the realm of 3D object recognition through deep learning, several methods incorporate the fusion of Light Detection and Ranging (LiDAR) and camera data. The effectiveness of the LiDAR–camera fusion approach is widely acknowledged due to its ability to provide a richer source of information for object detection compared to methods that rely solely on individual sensors. Within the framework of the LiDAR–camera multistage fusion method, challenges arise in maintaining stable object recognition, especially under adverse conditions where object detection in camera images becomes challenging, such as during night-time or in rainy weather. In this research paper, we introduce "ExistenceMap-PointPillars", a novel and effective approach for 3D object detection that leverages information from multiple sensors. This approach involves a straightforward modification of the LiDAR-based 3D object detection network. The core concept of ExistenceMap-PointPillars revolves around the integration of pseudo 2D maps, which depict the estimated object existence regions derived from the fused sensor data in a probabilistic manner. These maps are then incorporated into a pseudo image generated from a 3D point cloud. Our experimental results, based on our proprietary dataset, demonstrate the substantial improvements achieved by ExistenceMap-PointPillars. Specifically, it enhances the mean Average Precision (mAP) by a noteworthy +4.19% compared to the conventional PointPillars method. Additionally, we conducted an evaluation of the network’s response using Grad-CAM in conjunction with ExistenceMap-PointPillars, which exhibited a heightened focus on the existence regions of objects within the pseudo 2D map. This focus resulted in a reduction in the number of false positives. In summary, our research presents ExistenceMap-PointPillars as a valuable advancement in the field of 3D object detection, offering improved performance and robustness, especially in challenging environmental conditions.
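The description above gives the core mechanism only in prose: a pseudo 2D map of object existence probability, rasterized from detections fused across sensors, is merged with the pseudo image that PointPillars builds from the LiDAR point cloud before the 2D backbone runs. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the BEV grid size, voxel resolution, the Gaussian rasterization of detections, and the `ExistenceMapFusion` module are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): rasterize fused detections into a
# BEV existence-probability map and concatenate it with the PointPillars pseudo-image.
import torch
import torch.nn as nn


def existence_map_from_detections(detections, grid_hw=(496, 432), voxel_size=0.32,
                                  pc_range_min=(0.0, -69.12), sigma=2.0):
    """Rasterize detections (x [m], y [m], confidence) into a BEV probability map.

    Each detection contributes an isotropic Gaussian scaled by its confidence;
    overlapping contributions are merged with an element-wise max so values stay in [0, 1].
    All grid parameters here are assumed values for illustration.
    """
    h, w = grid_hw
    ys = torch.arange(h, dtype=torch.float32).view(h, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, w)
    exist = torch.zeros(h, w)
    for x_m, y_m, conf in detections:
        # metres -> BEV grid cells (assumed convention: x maps to columns, y to rows)
        cx = (x_m - pc_range_min[0]) / voxel_size
        cy = (y_m - pc_range_min[1]) / voxel_size
        g = conf * torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        exist = torch.maximum(exist, g)
    return exist  # (H, W), values in [0, 1]


class ExistenceMapFusion(nn.Module):
    """Fuse the existence map with the pillar pseudo-image by channel concatenation."""

    def __init__(self, pseudo_image_channels=64):
        super().__init__()
        # 1x1 conv folds the extra channel back to the width the 2D backbone expects.
        self.reduce = nn.Conv2d(pseudo_image_channels + 1, pseudo_image_channels,
                                kernel_size=1)

    def forward(self, pseudo_image, existence_map):
        # pseudo_image: (B, C, H, W) from the pillar feature net; existence_map: (B, 1, H, W)
        fused = torch.cat([pseudo_image, existence_map], dim=1)
        return self.reduce(fused)


if __name__ == "__main__":
    # Toy usage with two hypothetical detections projected into the BEV frame.
    emap = existence_map_from_detections([(20.0, 5.0, 0.9), (35.0, -10.0, 0.6)])
    pseudo_image = torch.randn(1, 64, *emap.shape)
    fused = ExistenceMapFusion(64)(pseudo_image, emap.view(1, 1, *emap.shape))
    print(fused.shape)  # torch.Size([1, 64, 496, 432])
```

Concatenating the map as one extra channel leaves the rest of the pillar network untouched, which is consistent with the abstract's framing of the method as a straightforward modification of a LiDAR-only 3D detector.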
format Online
Article
Text
id pubmed-10610647
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10610647 2023-10-28 ExistenceMap-PointPillars: A Multifusion Network for Robust 3D Object Detection with Object Existence Probability Map † Hariya, Keigo; Inoshita, Hiroki; Yanase, Ryo; Yoneda, Keisuke; Suganuma, Naoki. Sensors (Basel), Article. MDPI 2023-10-10 /pmc/articles/PMC10610647/ /pubmed/37896463 http://dx.doi.org/10.3390/s23208367 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title ExistenceMap-PointPillars: A Multifusion Network for Robust 3D Object Detection with Object Existence Probability Map †
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10610647/
https://www.ncbi.nlm.nih.gov/pubmed/37896463
http://dx.doi.org/10.3390/s23208367