Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes
Depth estimation is an important part of the perception system in autonomous driving. Current studies often reconstruct dense depth maps from RGB images and sparse depth maps obtained from other sensors. However, existing methods often pay insufficient attention to latent semantic information. Considering the highly structured characteristics of driving scenes, we propose a dual-branch network to predict dense depth maps by fusing radar and RGB images. The driving scene is divided into three parts in the proposed architecture, each predicting a depth map; these are finally merged into one by a fusion strategy in order to make full use of the potential semantic information in the driving scene. In addition, a variant L1 loss function is applied in the training phase, directing the network to focus more on areas of interest when driving. Our proposed method is evaluated on the nuScenes dataset. Experiments demonstrate its effectiveness in comparison with previous state-of-the-art methods.
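The record does not include the authors' code, so the following is only a minimal PyTorch sketch of the two ideas named in the abstract: merging per-region depth predictions into one dense map, and an L1 loss that gives extra weight to regions of interest. The function names, the soft region masks, and the `roi_weight` parameter are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only; tensor shapes and names are assumptions, not taken from the paper.
import torch


def fuse_region_depths(depth_maps: torch.Tensor, region_masks: torch.Tensor) -> torch.Tensor:
    """Merge per-region depth predictions into a single dense depth map.

    depth_maps:   (B, 3, H, W), one predicted depth map per scene region.
    region_masks: (B, 3, H, W), soft region assignments that sum to 1 over the region dimension.
    """
    # Weighted sum over the region dimension yields the fused depth map.
    return (depth_maps * region_masks).sum(dim=1, keepdim=True)


def weighted_l1_loss(pred, target, valid_mask, roi_mask, roi_weight=2.0):
    """L1 loss over valid ground-truth pixels, up-weighting regions of interest.

    pred, target: (B, 1, H, W) depth maps.
    valid_mask:   (B, 1, H, W) bool mask of pixels with ground-truth depth.
    roi_mask:     (B, 1, H, W) bool mask of pixels inside regions of interest.
    """
    weights = torch.where(roi_mask, torch.full_like(pred, roi_weight), torch.ones_like(pred))
    abs_err = (pred - target).abs() * weights
    return abs_err[valid_mask].mean()


if __name__ == "__main__":
    # Toy tensors to exercise the two functions (shapes only, no real data).
    B, H, W = 2, 64, 128
    depths = torch.rand(B, 3, H, W) * 80.0                 # three per-region predictions
    masks = torch.softmax(torch.rand(B, 3, H, W), dim=1)   # soft region assignment
    fused = fuse_region_depths(depths, masks)               # (B, 1, H, W)

    gt = torch.rand(B, 1, H, W) * 80.0
    valid = gt > 1.0                                        # pretend-sparse ground truth
    roi = torch.zeros_like(valid)
    roi[..., H // 2:, :] = True                             # e.g. lower image half as region of interest
    print(weighted_l1_loss(fused, gt, valid, roi).item())
```

A real implementation would predict the region masks from the image branch and take the sparse radar depth as an additional input; this sketch only shows the region-wise merge and the weighted loss.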
Main Authors: | Li, Shuguang; Yan, Jiafu; Chen, Haoran; Zheng, Ke |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490688/ https://www.ncbi.nlm.nih.gov/pubmed/37688016 http://dx.doi.org/10.3390/s23177560 |
_version_ | 1785103898246119424 |
---|---|
author | Li, Shuguang Yan, Jiafu Chen, Haoran Zheng, Ke |
author_facet | Li, Shuguang Yan, Jiafu Chen, Haoran Zheng, Ke |
author_sort | Li, Shuguang |
collection | PubMed |
description | Depth estimation is an important part of the perception system in autonomous driving. Current studies often reconstruct dense depth maps from RGB images and sparse depth maps obtained from other sensors. However, existing methods often pay insufficient attention to latent semantic information. Considering the highly structured characteristics of driving scenes, we propose a dual-branch network to predict dense depth maps by fusing radar and RGB images. The driving scene is divided into three parts in the proposed architecture, each predicting a depth map; these are finally merged into one by a fusion strategy in order to make full use of the potential semantic information in the driving scene. In addition, a variant L1 loss function is applied in the training phase, directing the network to focus more on areas of interest when driving. Our proposed method is evaluated on the nuScenes dataset. Experiments demonstrate its effectiveness in comparison with previous state-of-the-art methods. |
format | Online Article Text |
id | pubmed-10490688 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10490688 2023-09-09 Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes Li, Shuguang Yan, Jiafu Chen, Haoran Zheng, Ke Sensors (Basel) Article Depth estimation is an important part of the perception system in autonomous driving. Current studies often reconstruct dense depth maps from RGB images and sparse depth maps obtained from other sensors. However, existing methods often pay insufficient attention to latent semantic information. Considering the highly structured characteristics of driving scenes, we propose a dual-branch network to predict dense depth maps by fusing radar and RGB images. The driving scene is divided into three parts in the proposed architecture, each predicting a depth map; these are finally merged into one by a fusion strategy in order to make full use of the potential semantic information in the driving scene. In addition, a variant L1 loss function is applied in the training phase, directing the network to focus more on areas of interest when driving. Our proposed method is evaluated on the nuScenes dataset. Experiments demonstrate its effectiveness in comparison with previous state-of-the-art methods. MDPI 2023-08-31 /pmc/articles/PMC10490688/ /pubmed/37688016 http://dx.doi.org/10.3390/s23177560 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Li, Shuguang Yan, Jiafu Chen, Haoran Zheng, Ke Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes |
title | Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes |
title_full | Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes |
title_fullStr | Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes |
title_full_unstemmed | Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes |
title_short | Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes |
title_sort | radar-camera fusion network for depth estimation in structured driving scenes |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490688/ https://www.ncbi.nlm.nih.gov/pubmed/37688016 http://dx.doi.org/10.3390/s23177560 |
work_keys_str_mv | AT lishuguang radarcamerafusionnetworkfordepthestimationinstructureddrivingscenes AT yanjiafu radarcamerafusionnetworkfordepthestimationinstructureddrivingscenes AT chenhaoran radarcamerafusionnetworkfordepthestimationinstructureddrivingscenes AT zhengke radarcamerafusionnetworkfordepthestimationinstructureddrivingscenes |