LiDAR–camera fusion for road detection using a recurrent conditional random field model
Reliable road detection is an essential task in autonomous driving systems. Two categories of sensors are commonly used, cameras and light detection and ranging (LiDAR), each of which can provide corresponding supplements. Nevertheless, existing sensor fusion methods do not fully utilize multimodal data...
Main Authors: | Wang, Lele; Huang, Yingping |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9256626/ https://www.ncbi.nlm.nih.gov/pubmed/35790795 http://dx.doi.org/10.1038/s41598-022-14438-w |
_version_ | 1784741171160940544 |
---|---|
author | Wang, Lele Huang, Yingping |
author_facet | Wang, Lele Huang, Yingping |
author_sort | Wang, Lele |
collection | PubMed |
description | Reliable road detection is an essential task in autonomous driving systems. Two categories of sensors are commonly used, cameras and light detection and ranging (LiDAR), each of which can provide corresponding supplements. Nevertheless, existing sensor fusion methods do not fully utilize multimodal data. Most of them are dominated by images and take point clouds as a supplement rather than making the best of them, and the correlation between modalities is ignored. This paper proposes a recurrent conditional random field (R-CRF) model to fuse images and point clouds for road detection. The R-CRF model integrates results (information) from modalities in a probabilistic way. Each modality is independently processed with its semantic segmentation network. The probability scores obtained are considered a unary term for individual pixel nodes in a random field, while RGB images and the densified LiDAR images are used as pairwise terms. The energy function is then iteratively optimized by mean-field variational inference, and the labelling results are refined by exploiting fully connected graphs of the RGB image and LiDAR images. Extensive experiments are conducted on the public KITTI-Road dataset, and the proposed method achieves competitive performance. |
format | Online Article Text |
id | pubmed-9256626 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9256626 2022-07-07 LiDAR–camera fusion for road detection using a recurrent conditional random field model Wang, Lele Huang, Yingping Sci Rep Article Reliable road detection is an essential task in autonomous driving systems. Two categories of sensors are commonly used, cameras and light detection and ranging (LiDAR), each of which can provide corresponding supplements. Nevertheless, existing sensor fusion methods do not fully utilize multimodal data. Most of them are dominated by images and take point clouds as a supplement rather than making the best of them, and the correlation between modalities is ignored. This paper proposes a recurrent conditional random field (R-CRF) model to fuse images and point clouds for road detection. The R-CRF model integrates results (information) from modalities in a probabilistic way. Each modality is independently processed with its semantic segmentation network. The probability scores obtained are considered a unary term for individual pixel nodes in a random field, while RGB images and the densified LiDAR images are used as pairwise terms. The energy function is then iteratively optimized by mean-field variational inference, and the labelling results are refined by exploiting fully connected graphs of the RGB image and LiDAR images. Extensive experiments are conducted on the public KITTI-Road dataset, and the proposed method achieves competitive performance. Nature Publishing Group UK 2022-07-05 /pmc/articles/PMC9256626/ /pubmed/35790795 http://dx.doi.org/10.1038/s41598-022-14438-w Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Wang, Lele Huang, Yingping LiDAR–camera fusion for road detection using a recurrent conditional random field model |
title | LiDAR–camera fusion for road detection using a recurrent conditional random field model |
title_full | LiDAR–camera fusion for road detection using a recurrent conditional random field model |
title_fullStr | LiDAR–camera fusion for road detection using a recurrent conditional random field model |
title_full_unstemmed | LiDAR–camera fusion for road detection using a recurrent conditional random field model |
title_short | LiDAR–camera fusion for road detection using a recurrent conditional random field model |
title_sort | lidar–camera fusion for road detection using a recurrent conditional random field model |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9256626/ https://www.ncbi.nlm.nih.gov/pubmed/35790795 http://dx.doi.org/10.1038/s41598-022-14438-w |
work_keys_str_mv | AT wanglele lidarcamerafusionforroaddetectionusingarecurrentconditionalrandomfieldmodel AT huangyingping lidarcamerafusionforroaddetectionusingarecurrentconditionalrandomfieldmodel |
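For readers who want a concrete feel for the fusion scheme the abstract describes, the sketch below illustrates a generic fully connected CRF refined by mean-field inference: the unary term comes from per-modality segmentation scores, and two Gaussian pairwise kernels are built from the RGB image and the densified LiDAR image. This is a minimal toy reconstruction of the standard dense-CRF formulation under those assumptions, not the authors' R-CRF implementation; all names (mean_field_crf, camera_prob, lidar_prob, theta_*) and hyperparameter values are illustrative.

```python
# Minimal sketch, assuming per-modality softmax scores are already available.
# Names and hyperparameters are illustrative, not taken from the paper.
import numpy as np

def mean_field_crf(unary_prob, rgb, lidar, n_iters=5,
                   theta_rgb=0.1, theta_lidar=0.1, theta_pos=3.0,
                   w_rgb=1.0, w_lidar=1.0):
    """Naive O(N^2) mean-field inference on a fully connected CRF.

    unary_prob: (H, W, C) class probabilities (e.g. averaged camera/LiDAR scores)
    rgb:        (H, W, 3) RGB image used in one Gaussian pairwise kernel
    lidar:      (H, W)    densified LiDAR image used in a second pairwise kernel
    """
    H, W, C = unary_prob.shape
    N = H * W
    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.stack([ys, xs], axis=-1).reshape(N, 2).astype(float)
    f_rgb = rgb.reshape(N, -1).astype(float)
    f_lidar = lidar.reshape(N, 1).astype(float)

    def gaussian(f, theta):
        # Pairwise kernel k(f_i, f_j) = exp(-|f_i - f_j|^2 / (2 theta^2))
        d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * theta ** 2))

    # One appearance kernel per modality, both modulated by spatial proximity.
    k_pos = gaussian(pos, theta_pos)
    K = (w_rgb * gaussian(f_rgb, theta_rgb) +
         w_lidar * gaussian(f_lidar, theta_lidar)) * k_pos
    np.fill_diagonal(K, 0.0)  # a pixel sends no message to itself

    unary = -np.log(unary_prob.reshape(N, C) + 1e-8)   # unary energies
    Q = unary_prob.reshape(N, C).copy()                # initialise with the unary
    for _ in range(n_iters):
        msg = K @ Q                                    # message passing on the dense graph
        pairwise = msg.sum(1, keepdims=True) - msg     # Potts compatibility transform
        logits = -unary - pairwise
        logits -= logits.max(1, keepdims=True)         # numerical stability
        Q = np.exp(logits)
        Q /= Q.sum(1, keepdims=True)                   # normalise over classes
    return Q.reshape(H, W, C)

# Toy usage: fuse two modality score maps on an 8x8 grid and refine with the CRF.
H, W, C = 8, 8, 2
rng = np.random.default_rng(0)
camera_prob = rng.dirichlet(np.ones(C), size=(H, W))   # hypothetical camera-branch scores
lidar_prob = rng.dirichlet(np.ones(C), size=(H, W))    # hypothetical LiDAR-branch scores
unary = 0.5 * (camera_prob + lidar_prob)
refined = mean_field_crf(unary, rgb=rng.random((H, W, 3)), lidar=rng.random((H, W)))
print(refined.argmax(-1))                              # road / non-road label map
```

A practical implementation would replace the naive O(N^2) kernel matrix with efficient high-dimensional filtering and learn the kernel weights jointly with the segmentation networks, but the update structure (message passing, compatibility transform, normalisation) is the same.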