Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane
Although numerous road segmentation studies have utilized vision data, obtaining robust classification is still challenging due to vision sensor noise and target object deformation. Long-distance images are still problematic because of blur and low resolution, and these features make distinguishing roads from objects difficult.
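The record itself carries no implementation detail, so the following is a minimal, hypothetical sketch (in NumPy, assumed; the paper's actual preprocessing is not reproduced here) of the kind of LiDAR-to-bird's-eye-view rasterization the abstract describes, mapping height, intensity, and point density into a BEV grid. All parameter values (ranges, cell resolution, channel layout) are illustrative assumptions. A companion sketch of the two-pathway fusion network follows the record fields at the end of this page.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 40.0), y_range=(-10.0, 10.0),
                 z_range=(-2.5, 1.5), resolution=0.1):
    """Rasterize a LiDAR cloud (N x 4 array: x, y, z, intensity) into a
    3-channel bird's eye view grid: max height, max intensity, point density.
    Ranges and the 0.1 m cell size are placeholder choices, not the paper's."""
    x, y, z, inten = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

    # Keep only points inside the BEV region of interest.
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z, inten = x[keep], y[keep], z[keep], inten[keep]

    # Metric coordinates -> integer grid indices (row 0 = nearest range).
    rows = ((x - x_range[0]) / resolution).astype(np.int64)
    cols = ((y - y_range[0]) / resolution).astype(np.int64)

    h = int(round((x_range[1] - x_range[0]) / resolution))
    w = int(round((y_range[1] - y_range[0]) / resolution))
    bev = np.zeros((3, h, w), dtype=np.float32)

    # Normalize height to [0, 1] so an empty cell (0) reads as "no return".
    z_norm = np.clip((z - z_range[0]) / (z_range[1] - z_range[0]), 0.0, 1.0)

    np.maximum.at(bev[0], (rows, cols), z_norm)  # max height per cell
    np.maximum.at(bev[1], (rows, cols), inten)   # max intensity per cell
    np.add.at(bev[2], (rows, cols), 1.0)         # raw point count
    bev[2] = np.minimum(np.log1p(bev[2]) / np.log(64.0), 1.0)  # squashed density
    return bev
```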
Main Authors: | Yu, Byeongjun; Lee, Dongkyu; Lee, Jae-Seol; Kee, Seok-Cheol |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8619025/ https://www.ncbi.nlm.nih.gov/pubmed/34833698 http://dx.doi.org/10.3390/s21227623 |
_version_ | 1784604889783992320 |
---|---|
author | Yu, Byeongjun; Lee, Dongkyu; Lee, Jae-Seol; Kee, Seok-Cheol |
author_facet | Yu, Byeongjun; Lee, Dongkyu; Lee, Jae-Seol; Kee, Seok-Cheol |
author_sort | Yu, Byeongjun |
collection | PubMed |
description | Although numerous road segmentation studies have utilized vision data, obtaining robust classification is still challenging due to vision sensor noise and target object deformation. Long-distance images are still problematic because of blur and low resolution, and these features make distinguishing roads from objects difficult. This study utilizes light detection and ranging (LiDAR), which generates information that camera images lack, such as distance, height, and intensity, as a reliable supplement to address this problem. In contrast to conventional approaches, additional domain transformation to a bird’s eye view space is executed to obtain long-range data with resolutions comparable to those of short-range data. This study proposes a convolutional neural network architecture that processes data transformed to a bird’s eye view plane. The network’s pathways are split into two parts to resolve calibration errors in the transformed image and point cloud. The network, which has modules that operate sequentially at various scaled dilated convolution rates, is designed to quickly and accurately handle a wide range of data. Comprehensive empirical studies using the Karlsruhe Institute of Technology and Toyota Technological Institute’s (KITTI’s) road detection benchmarks demonstrate that this study’s approach takes advantage of camera and LiDAR information, achieving robust road detection with short runtimes. Our result ranks 22nd on the KITTI leaderboard and shows real-time performance. |
format | Online Article Text |
id | pubmed-8619025 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8619025 2021-11-27 Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane Yu, Byeongjun; Lee, Dongkyu; Lee, Jae-Seol; Kee, Seok-Cheol Sensors (Basel) Article Although numerous road segmentation studies have utilized vision data, obtaining robust classification is still challenging due to vision sensor noise and target object deformation. Long-distance images are still problematic because of blur and low resolution, and these features make distinguishing roads from objects difficult. This study utilizes light detection and ranging (LiDAR), which generates information that camera images lack, such as distance, height, and intensity, as a reliable supplement to address this problem. In contrast to conventional approaches, additional domain transformation to a bird’s eye view space is executed to obtain long-range data with resolutions comparable to those of short-range data. This study proposes a convolutional neural network architecture that processes data transformed to a bird’s eye view plane. The network’s pathways are split into two parts to resolve calibration errors in the transformed image and point cloud. The network, which has modules that operate sequentially at various scaled dilated convolution rates, is designed to quickly and accurately handle a wide range of data. Comprehensive empirical studies using the Karlsruhe Institute of Technology and Toyota Technological Institute’s (KITTI’s) road detection benchmarks demonstrate that this study’s approach takes advantage of camera and LiDAR information, achieving robust road detection with short runtimes. Our result ranks 22nd on the KITTI leaderboard and shows real-time performance. MDPI 2021-11-17 /pmc/articles/PMC8619025/ /pubmed/34833698 http://dx.doi.org/10.3390/s21227623 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Yu, Byeongjun; Lee, Dongkyu; Lee, Jae-Seol; Kee, Seok-Cheol; Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane |
title | Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane |
title_full | Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane |
title_fullStr | Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane |
title_full_unstemmed | Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane |
title_short | Free Space Detection Using Camera-LiDAR Fusion in a Bird’s Eye View Plane |
title_sort | free space detection using camera-lidar fusion in a bird’s eye view plane |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8619025/ https://www.ncbi.nlm.nih.gov/pubmed/34833698 http://dx.doi.org/10.3390/s21227623 |
work_keys_str_mv | AT yubyeongjun freespacedetectionusingcameralidarfusioninabirdseyeviewplane AT leedongkyu freespacedetectionusingcameralidarfusioninabirdseyeviewplane AT leejaeseol freespacedetectionusingcameralidarfusioninabirdseyeviewplane AT keeseokcheol freespacedetectionusingcameralidarfusioninabirdseyeviewplane |
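The two architectural ideas the abstract names (pathways split in two to absorb camera-LiDAR calibration error, and modules of sequentially applied dilated convolutions at increasing rates) can be sketched as below. This is a hedged illustration in PyTorch (assumed), not the authors' published network; the channel widths, dilation rates, and late-concatenation fusion are all placeholder choices.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Dilated convolutions applied sequentially at increasing rates, echoing
    the abstract's modules at 'various scaled dilated convolution rates'.
    The rates here are illustrative, not the paper's."""
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])

    def forward(self, x):
        return self.layers(x) + x  # residual connection keeps training stable

class BEVFusionNet(nn.Module):
    """Two-pathway encoder: one stream for the BEV camera image, one for the
    BEV LiDAR grid, fused late so each stream can tolerate residual
    camera-LiDAR calibration error before features are combined."""
    def __init__(self, img_ch=3, lidar_ch=3, width=32):
        super().__init__()
        def stem(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
                DilatedBlock(width),
            )
        self.img_path = stem(img_ch)
        self.lidar_path = stem(lidar_ch)
        self.head = nn.Sequential(
            nn.Conv2d(2 * width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1),  # per-cell free-space (road) logit
        )

    def forward(self, bev_img, bev_lidar):
        feats = torch.cat([self.img_path(bev_img),
                           self.lidar_path(bev_lidar)], dim=1)
        return self.head(feats)

# Example: 400 x 200 BEV grids (40 m x 20 m at an assumed 0.1 m/cell).
net = BEVFusionNet()
logits = net(torch.randn(1, 3, 400, 200), torch.randn(1, 3, 400, 200))
print(logits.shape)  # torch.Size([1, 1, 400, 200])
```

Because every dilated convolution preserves spatial size (padding equals dilation for a 3x3 kernel), the output logit map aligns cell-for-cell with the input BEV grid, which is what a per-cell free-space segmentation needs.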