
Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection


Bibliographic Details
Main Authors: Ke, Changshuo, Xu, Zhijie, Zhang, Jianqin, Zhang, Dongmei
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223488/
https://www.ncbi.nlm.nih.gov/pubmed/37430833
http://dx.doi.org/10.3390/s23104917
_version_ 1785049954044084224
author Ke, Changshuo
Xu, Zhijie
Zhang, Jianqin
Zhang, Dongmei
author_facet Ke, Changshuo
Xu, Zhijie
Zhang, Jianqin
Zhang, Dongmei
author_sort Ke, Changshuo
collection PubMed
description Lane detection is a crucial task in the field of autonomous driving, as it enables vehicles to safely navigate on the road by interpreting the high-level semantics of traffic signs. Unfortunately, lane detection is a challenging problem due to factors such as low-light conditions, occlusions, and lane line blurring. These factors increase the perplexity and indeterminacy of the lane features, making them hard to distinguish and segment. To tackle these challenges, we propose a method called low-light enhancement fast lane detection (LLFLD) that integrates the automatic low-light scene enhancement network (ALLE) with the lane detection network to improve lane detection performance under low-light conditions. Specifically, we first utilize the ALLE network to enhance the input image’s brightness and contrast while reducing excessive noise and color distortion. Then, we introduce a symmetric feature flipping module (SFFM) and a channel fusion self-attention mechanism (CFSAT) into the model, which refine the low-level features and utilize more abundant global contextual information, respectively. Moreover, we devise a novel structural loss function that leverages the inherent prior geometric constraints of lanes to optimize the detection results. We evaluate our method on the CULane dataset, a public benchmark for lane detection in various lighting conditions. Our experiments show that our approach surpasses other state-of-the-art methods in both daytime and nighttime settings, especially in low-light scenarios.
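The abstract's symmetric feature flipping idea, mirroring a feature map so that left/right lane cues reinforce each other, can be illustrated with a small toy example. This is an illustrative numpy sketch only, not the authors' SFFM (the paper's actual module design is not given in this record); the function name and the averaging fusion rule are assumptions.

```python
import numpy as np

def symmetric_feature_flip(features):
    """Toy flip-and-fuse step on a (C, H, W) feature map.

    Intuition behind the symmetry prior: lane structure is roughly
    mirror-symmetric about the image's vertical centerline, so fusing a
    feature map with its horizontal mirror lets left/right lane cues
    reinforce each other. The averaging rule here is an assumption.
    """
    flipped = features[:, :, ::-1]        # mirror along the width axis
    return 0.5 * (features + flipped)     # fuse original and mirrored cues

# A tiny (1, 2, 6) feature map with distinct values per position.
feats = np.arange(12, dtype=float).reshape(1, 2, 6)
fused = symmetric_feature_flip(feats)

# By construction, the fused map is symmetric about the width axis.
assert np.allclose(fused, fused[:, :, ::-1])
```

In a real detector this kind of operation would sit on intermediate convolutional features (e.g. via `torch.flip` on the width dimension) rather than raw arrays; the sketch only shows the symmetry property the fusion enforces.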
format Online
Article
Text
id pubmed-10223488
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-102234882023-05-28 Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection Ke, Changshuo Xu, Zhijie Zhang, Jianqin Zhang, Dongmei Sensors (Basel) Article Lane detection is a crucial task in the field of autonomous driving, as it enables vehicles to safely navigate on the road by interpreting the high-level semantics of traffic signs. Unfortunately, lane detection is a challenging problem due to factors such as low-light conditions, occlusions, and lane line blurring. These factors increase the perplexity and indeterminacy of the lane features, making them hard to distinguish and segment. To tackle these challenges, we propose a method called low-light enhancement fast lane detection (LLFLD) that integrates the automatic low-light scene enhancement network (ALLE) with the lane detection network to improve lane detection performance under low-light conditions. Specifically, we first utilize the ALLE network to enhance the input image’s brightness and contrast while reducing excessive noise and color distortion. Then, we introduce a symmetric feature flipping module (SFFM) and a channel fusion self-attention mechanism (CFSAT) into the model, which refine the low-level features and utilize more abundant global contextual information, respectively. Moreover, we devise a novel structural loss function that leverages the inherent prior geometric constraints of lanes to optimize the detection results. We evaluate our method on the CULane dataset, a public benchmark for lane detection in various lighting conditions. Our experiments show that our approach surpasses other state-of-the-art methods in both daytime and nighttime settings, especially in low-light scenarios. MDPI 2023-05-19 /pmc/articles/PMC10223488/ /pubmed/37430833 http://dx.doi.org/10.3390/s23104917 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ke, Changshuo
Xu, Zhijie
Zhang, Jianqin
Zhang, Dongmei
Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
title Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
title_full Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
title_fullStr Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
title_full_unstemmed Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
title_short Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
title_sort combining low-light scene enhancement for fast and accurate lane detection
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223488/
https://www.ncbi.nlm.nih.gov/pubmed/37430833
http://dx.doi.org/10.3390/s23104917
work_keys_str_mv AT kechangshuo combininglowlightsceneenhancementforfastandaccuratelanedetection
AT xuzhijie combininglowlightsceneenhancementforfastandaccuratelanedetection
AT zhangjianqin combininglowlightsceneenhancementforfastandaccuratelanedetection
AT zhangdongmei combininglowlightsceneenhancementforfastandaccuratelanedetection