
Real time object detection using LiDAR and camera fusion for autonomous driving

Autonomous driving has been widely applied in commercial and industrial settings, alongside upgrades to environmental awareness systems. Tasks such as path planning, trajectory tracking, and obstacle avoidance depend strongly on the ability to perform real-time object detection and position regression. Among the most commonly used sensors, the camera provides dense semantic information but lacks accurate distance information to the target, while LiDAR provides accurate depth information but at sparse resolution. In this paper, a LiDAR-camera fusion algorithm is proposed to address this trade-off by constructing a Siamese network for object detection. Raw point clouds are projected onto the camera plane to obtain a 2D depth image. A cross feature fusion block connects the depth and RGB processing branches, so that a feature-layer fusion strategy integrates the multi-modality data. The proposed fusion algorithm is evaluated on the KITTI dataset. Experimental results demonstrate that the algorithm has superior performance and real-time efficiency. Remarkably, it outperforms other state-of-the-art algorithms at the most important moderate difficulty level and achieves excellent performance at the easy and hard levels.
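
As a concrete illustration of the first step described in the abstract (projecting raw point clouds onto the camera plane to form a 2D depth image), here is a minimal NumPy sketch assuming KITTI-style calibration matrices (P2, R0_rect, Tr_velo_to_cam, as in the KITTI devkit). The function name and the rasterisation details are illustrative assumptions; the paper's actual preprocessing is not spelled out in this record and may differ.

```python
import numpy as np

def lidar_to_depth_image(points, P2, R0_rect, Tr_velo_to_cam, height, width):
    """Project raw LiDAR points (N x 4: x, y, z, reflectance) onto the camera
    image plane and rasterise them into a sparse 2D depth image.
    P2: (3, 4) projection matrix, R0_rect: (3, 3), Tr_velo_to_cam: (3, 4)."""
    # Homogeneous LiDAR coordinates (N, 4)
    xyz1 = np.hstack([points[:, :3], np.ones((points.shape[0], 1))])

    # LiDAR frame -> rectified camera frame, shape (3, N)
    cam = R0_rect @ (Tr_velo_to_cam @ xyz1.T)

    # Discard points behind or too close to the camera
    cam = cam[:, cam[2, :] > 0.1]

    # Rectified camera frame -> pixel coordinates via the projection matrix
    pix = P2 @ np.vstack([cam, np.ones((1, cam.shape[1]))])
    u = np.round(pix[0, :] / pix[2, :]).astype(int)
    v = np.round(pix[1, :] / pix[2, :]).astype(int)
    depth = cam[2, :]

    # Keep only pixels inside the image; the nearest return wins per pixel
    depth_img = np.zeros((height, width), dtype=np.float32)
    keep = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    order = np.argsort(-depth[keep])  # far points first, nearer ones overwrite
    depth_img[v[keep][order], u[keep][order]] = depth[keep][order]
    return depth_img
```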

Bibliographic Details
Main Authors: Liu, Haibin; Wu, Chao; Wang, Huanjie
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023 (Sci Rep, online 2023-05-17)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10192255/
https://www.ncbi.nlm.nih.gov/pubmed/37198255
http://dx.doi.org/10.1038/s41598-023-35170-z
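
The abstract's second step is a cross feature fusion block that connects the depth and RGB processing branches of the Siamese network. The exact block design is not given in this record, so the PyTorch sketch below is only one plausible interpretation (concatenate, 1x1 convolution, residual update per branch); the class name, channel width, and feature-map sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossFeatureFusion(nn.Module):
    """One possible reading of a cross feature fusion block: features from the
    RGB and depth branches of a Siamese backbone are concatenated, compressed
    by 1x1 convolutions, and added back to each branch as a residual update
    (feature-layer fusion)."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_rgb = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.to_depth = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_rgb: torch.Tensor, f_depth: torch.Tensor):
        joint = torch.cat([f_rgb, f_depth], dim=1)  # fuse at the feature layer
        # Each branch keeps its own features and adds what it learns
        # from the other modality.
        return f_rgb + self.to_rgb(joint), f_depth + self.to_depth(joint)


# Usage sketch: apply the block between matching stages of the two branches.
rgb_feat = torch.randn(1, 64, 96, 312)    # features from the RGB stream
depth_feat = torch.randn(1, 64, 96, 312)  # features from the projected-depth stream
rgb_feat, depth_feat = CrossFeatureFusion(64)(rgb_feat, depth_feat)
```

The residual form lets each branch keep its modality-specific features while borrowing complementary cues from the other stream, which matches the two-branch, feature-layer fusion idea described in the abstract.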

Record ID: pubmed-10192255
Collection: PubMed (National Center for Biotechnology Information)
Record Format: MEDLINE/PubMed
License: © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. Third-party material is covered by the article's licence unless stated otherwise in a credit line; uses not permitted by the licence or by statutory regulation require permission from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/