Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles
High-precision maps are widely applied in intelligent-driving vehicles for localization and planning tasks. Vision sensors, especially monocular cameras, have become favoured in mapping approaches due to their high flexibility and low cost. However, monocular visual mapping suffers from severe performance degradation in adversarial illumination environments such as on low-light roads or in underground spaces. To address this issue, in this paper, we first introduce an unsupervised learning approach to improve keypoint detection and description on monocular camera images. By emphasizing the consistency between feature points in the learning loss, visual features in dim environments can be better extracted. Second, to suppress the scale drift in monocular visual mapping, a robust loop-closure detection scheme is presented, which integrates both feature-point verification and multi-grained image similarity measurements. With experiments on public benchmarks, our keypoint detection approach is proven robust against varied illumination. With scenario tests including both underground and on-road driving, we demonstrate that our approach is able to reduce the scale drift in reconstructing the scene and achieve a mapping accuracy gain of up to 0.14 m in textureless or low-illumination environments.
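The abstract names two technical components. For the first, an unsupervised keypoint learning loss that emphasizes consistency between feature points observed in two views of the same scene, a minimal PyTorch sketch might look as follows; the cosine descriptor term, the reprojection term, and the 0.1 weighting are illustrative assumptions, not the loss proposed in the paper.

```python
import torch
import torch.nn.functional as F

def keypoint_consistency_loss(desc_a, desc_b, pts_a, pts_b, w_geo=0.1):
    """desc_a, desc_b: (N, D) descriptors sampled at N corresponding
    keypoints in two views of the same scene (e.g. an image and a
    photometrically or geometrically augmented copy).
    pts_a, pts_b: (N, 2) keypoint locations, with pts_b already mapped
    back into view a through the known warp.
    Pulls matching descriptors together and penalizes detections that
    drift apart under the warp."""
    desc_term = 1.0 - F.cosine_similarity(desc_a, desc_b, dim=1).mean()
    geo_term = (pts_a - pts_b).norm(dim=1).mean()
    return desc_term + w_geo * geo_term  # w_geo is a hypothetical weight
```

For the second component, loop-closure detection that fuses multi-grained image similarity with feature-point verification, the sketch below only illustrates how the two checks can be coupled; the brute-force matcher, Lowe's ratio test, the RANSAC fundamental-matrix check, and all thresholds and weights are placeholders rather than the authors' design.

```python
import cv2
import numpy as np

def geometric_inliers(kp_q, desc_q, kp_c, desc_c, ratio=0.8):
    """Count correspondences between a query and a candidate keyframe
    that survive Lowe's ratio test plus a RANSAC fundamental-matrix
    check (feature-point verification)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_q, desc_c, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 8:  # 8-point minimum for estimating F
        return 0
    pts_q = np.float32([kp_q[m.queryIdx].pt for m in good])
    pts_c = np.float32([kp_c[m.trainIdx].pt for m in good])
    _, mask = cv2.findFundamentalMat(pts_q, pts_c, cv2.FM_RANSAC, 3.0, 0.99)
    return 0 if mask is None else int(mask.sum())

def accept_loop_closure(sim_coarse, sim_fine, inliers,
                        sim_thresh=0.75, inlier_thresh=30):
    """Fuse a coarse (whole-image) and a fine (region-level) similarity
    score, a stand-in for 'multi-grained' similarity, and additionally
    require geometric support before declaring a loop closure."""
    fused = 0.5 * sim_coarse + 0.5 * sim_fine  # hypothetical weighting
    return fused > sim_thresh and inliers > inlier_thresh
```

Requiring appearance and geometry to agree is the usual way to suppress perceptual aliasing in dim or textureless scenes, which is the failure mode the loop-closure scheme targets.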
Main Authors: | Tian, Wei; Wen, Yongkun; Chu, Xinning |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10058667/ https://www.ncbi.nlm.nih.gov/pubmed/36992006 http://dx.doi.org/10.3390/s23063296 |
_version_ | 1785016688616407040 |
---|---|
author | Tian, Wei; Wen, Yongkun; Chu, Xinning |
author_facet | Tian, Wei; Wen, Yongkun; Chu, Xinning |
author_sort | Tian, Wei |
collection | PubMed |
description | High-precision maps are widely applied in intelligent-driving vehicles for localization and planning tasks. Vision sensors, especially monocular cameras, have become favoured in mapping approaches due to their high flexibility and low cost. However, monocular visual mapping suffers from severe performance degradation in adversarial illumination environments such as on low-light roads or in underground spaces. To address this issue, in this paper, we first introduce an unsupervised learning approach to improve keypoint detection and description on monocular camera images. By emphasizing the consistency between feature points in the learning loss, visual features in dim environments can be better extracted. Second, to suppress the scale drift in monocular visual mapping, a robust loop-closure detection scheme is presented, which integrates both feature-point verification and multi-grained image similarity measurements. With experiments on public benchmarks, our keypoint detection approach is proven robust against varied illumination. With scenario tests including both underground and on-road driving, we demonstrate that our approach is able to reduce the scale drift in reconstructing the scene and achieve a mapping accuracy gain of up to 0.14 m in textureless or low-illumination environments. |
format | Online Article Text |
id | pubmed-10058667 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10058667 2023-03-30 Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles Tian, Wei; Wen, Yongkun; Chu, Xinning Sensors (Basel) Article High-precision maps are widely applied in intelligent-driving vehicles for localization and planning tasks. Vision sensors, especially monocular cameras, have become favoured in mapping approaches due to their high flexibility and low cost. However, monocular visual mapping suffers from severe performance degradation in adversarial illumination environments such as on low-light roads or in underground spaces. To address this issue, in this paper, we first introduce an unsupervised learning approach to improve keypoint detection and description on monocular camera images. By emphasizing the consistency between feature points in the learning loss, visual features in dim environments can be better extracted. Second, to suppress the scale drift in monocular visual mapping, a robust loop-closure detection scheme is presented, which integrates both feature-point verification and multi-grained image similarity measurements. With experiments on public benchmarks, our keypoint detection approach is proven robust against varied illumination. With scenario tests including both underground and on-road driving, we demonstrate that our approach is able to reduce the scale drift in reconstructing the scene and achieve a mapping accuracy gain of up to 0.14 m in textureless or low-illumination environments. MDPI 2023-03-21 /pmc/articles/PMC10058667/ /pubmed/36992006 http://dx.doi.org/10.3390/s23063296 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Tian, Wei; Wen, Yongkun; Chu, Xinning; Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles |
title | Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles |
title_full | Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles |
title_fullStr | Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles |
title_full_unstemmed | Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles |
title_short | Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles |
title_sort | mapping with monocular camera sensor under adversarial illumination for intelligent vehicles |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10058667/ https://www.ncbi.nlm.nih.gov/pubmed/36992006 http://dx.doi.org/10.3390/s23063296 |
work_keys_str_mv | AT tianwei mappingwithmonocularcamerasensorunderadversarialilluminationforintelligentvehicles AT wenyongkun mappingwithmonocularcamerasensorunderadversarialilluminationforintelligentvehicles AT chuxinning mappingwithmonocularcamerasensorunderadversarialilluminationforintelligentvehicles |