Fast vehicle detection based on colored point cloud with bird’s eye view representation
RGB cameras and LiDAR are crucial sensors for autonomous vehicles, providing complementary information for accurate detection. Recent early-level fusion approaches, which enrich LiDAR data with camera features, may not achieve promising performance owing to the large difference between the two modalities. This paper presents a simple and effective vehicle detection method based on an early-fusion strategy, unified 2D BEV grids, and feature fusion. The proposed method first eliminates a large number of null points through cor-calibration. It then augments the point cloud with color information to generate a 7D colored point cloud and unifies the augmented data into 2D BEV grids. The colored BEV maps can then be fed to any 2D convolutional network. A dedicated Feature Fusion (2F) detection module extracts multi-scale features from the BEV images. Experiments on the public KITTI benchmark and the nuScenes dataset show that fusing the RGB image with the point cloud, rather than using the raw point cloud alone, leads to better detection accuracy. Moreover, the inference time of the proposed method reaches 0.05 s/frame thanks to its simple and compact architecture.
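The two pre-processing steps described in the abstract, coloring LiDAR points with RGB values via projection into the camera image and rasterizing the resulting 7D points into 2D BEV grids, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the function names, the projection-matrix name `P_velo_to_img`, the grid extents, the 0.1 m resolution, and the five-channel BEV layout (height, intensity, R, G, B) are illustrative assumptions.

```python
import numpy as np

def color_point_cloud(points, image, P_velo_to_img):
    """points: (N, 4) LiDAR array [x, y, z, intensity];
    image: (H, W, 3) uint8 RGB image;
    P_velo_to_img: (3, 4) LiDAR-to-image projection matrix (assumed name).
    Returns an (M, 7) colored point cloud [x, y, z, intensity, R, G, B],
    keeping only points that project inside the image (null points dropped)."""
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])
    uvw = xyz1 @ P_velo_to_img.T                     # homogeneous image coordinates
    depth = uvw[:, 2]
    u = uvw[:, 0] / np.where(depth > 0, depth, 1.0)  # guard against points behind the camera
    v = uvw[:, 1] / np.where(depth > 0, depth, 1.0)
    h, w = image.shape[:2]
    keep = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image[v[keep].astype(int), u[keep].astype(int)].astype(np.float32) / 255.0
    return np.hstack([points[keep], rgb])

def to_bev_grids(colored, x_range=(0.0, 70.4), y_range=(-40.0, 40.0), res=0.1):
    """Rasterize the 7D colored points into a BEV image with five assumed
    channels per cell: height, intensity, and RGB of the highest point."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((ny, nx, 5), dtype=np.float32)
    top = np.full((ny, nx), -np.inf, dtype=np.float32)   # highest z seen per cell
    ix = ((colored[:, 0] - x_range[0]) / res).astype(int)
    iy = ((colored[:, 1] - y_range[0]) / res).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for xg, yg, p in zip(ix[ok], iy[ok], colored[ok]):
        if p[2] > top[yg, xg]:                           # keep the highest point per cell
            top[yg, xg] = p[2]
            bev[yg, xg] = [p[2], p[3], p[4], p[5], p[6]]
    return bev
```

With KITTI-style calibration, `P_velo_to_img` would correspond to the composed matrix `P2 @ R0_rect @ Tr_velo_to_cam`; the resulting BEV tensor can then be fed to any 2D convolutional detector, as the abstract notes.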
Main Authors: | Wang, Lele; Huang, Yingping |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10167367/ https://www.ncbi.nlm.nih.gov/pubmed/37156868 http://dx.doi.org/10.1038/s41598-023-34479-z |
_version_ | 1785038649922945024 |
---|---|
author | Wang, Lele Huang, Yingping |
author_facet | Wang, Lele Huang, Yingping |
author_sort | Wang, Lele |
collection | PubMed |
description | RGB cameras and LiDAR are crucial sensors for autonomous vehicles, providing complementary information for accurate detection. Recent early-level fusion approaches, which enrich LiDAR data with camera features, may not achieve promising performance owing to the large difference between the two modalities. This paper presents a simple and effective vehicle detection method based on an early-fusion strategy, unified 2D BEV grids, and feature fusion. The proposed method first eliminates a large number of null points through cor-calibration. It then augments the point cloud with color information to generate a 7D colored point cloud and unifies the augmented data into 2D BEV grids. The colored BEV maps can then be fed to any 2D convolutional network. A dedicated Feature Fusion (2F) detection module extracts multi-scale features from the BEV images. Experiments on the public KITTI benchmark and the nuScenes dataset show that fusing the RGB image with the point cloud, rather than using the raw point cloud alone, leads to better detection accuracy. Moreover, the inference time of the proposed method reaches 0.05 s/frame thanks to its simple and compact architecture. |
format | Online Article Text |
id | pubmed-10167367 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10167367 2023-05-10 Fast vehicle detection based on colored point cloud with bird’s eye view representation Wang, Lele Huang, Yingping Sci Rep Article RGB cameras and LiDAR are crucial sensors for autonomous vehicles, providing complementary information for accurate detection. Recent early-level fusion approaches, which enrich LiDAR data with camera features, may not achieve promising performance owing to the large difference between the two modalities. This paper presents a simple and effective vehicle detection method based on an early-fusion strategy, unified 2D BEV grids, and feature fusion. The proposed method first eliminates a large number of null points through cor-calibration. It then augments the point cloud with color information to generate a 7D colored point cloud and unifies the augmented data into 2D BEV grids. The colored BEV maps can then be fed to any 2D convolutional network. A dedicated Feature Fusion (2F) detection module extracts multi-scale features from the BEV images. Experiments on the public KITTI benchmark and the nuScenes dataset show that fusing the RGB image with the point cloud, rather than using the raw point cloud alone, leads to better detection accuracy. Moreover, the inference time of the proposed method reaches 0.05 s/frame thanks to its simple and compact architecture. Nature Publishing Group UK 2023-05-08 /pmc/articles/PMC10167367/ /pubmed/37156868 http://dx.doi.org/10.1038/s41598-023-34479-z Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wang, Lele Huang, Yingping Fast vehicle detection based on colored point cloud with bird’s eye view representation |
title | Fast vehicle detection based on colored point cloud with bird’s eye view representation |
title_full | Fast vehicle detection based on colored point cloud with bird’s eye view representation |
title_fullStr | Fast vehicle detection based on colored point cloud with bird’s eye view representation |
title_full_unstemmed | Fast vehicle detection based on colored point cloud with bird’s eye view representation |
title_short | Fast vehicle detection based on colored point cloud with bird’s eye view representation |
title_sort | fast vehicle detection based on colored point cloud with bird’s eye view representation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10167367/ https://www.ncbi.nlm.nih.gov/pubmed/37156868 http://dx.doi.org/10.1038/s41598-023-34479-z |
work_keys_str_mv | AT wanglele fastvehicledetectionbasedoncoloredpointcloudwithbirdseyeviewrepresentation AT huangyingping fastvehicledetectionbasedoncoloredpointcloudwithbirdseyeviewrepresentation |