AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer
Main Authors: | Zhang, Yan; Liu, Kang; Bao, Hong; Qian, Xu; Wang, Zihan; Ye, Shiqing; Wang, Weicen
---|---|
Format: | Online Article Text
Language: | English
Published: | MDPI, 2023
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10611098/ https://www.ncbi.nlm.nih.gov/pubmed/37896496 http://dx.doi.org/10.3390/s23208400
_version_ | 1785128411901984768 |
---|---|
author | Zhang, Yan; Liu, Kang; Bao, Hong; Qian, Xu; Wang, Zihan; Ye, Shiqing; Wang, Weicen
author_facet | Zhang, Yan; Liu, Kang; Bao, Hong; Qian, Xu; Wang, Zihan; Ye, Shiqing; Wang, Weicen
author_sort | Zhang, Yan |
collection | PubMed |
description | Multi-modal sensors are key to the robust and accurate operation of autonomous driving systems, among which LiDAR and cameras are important on-board sensors. However, current fusion methods face challenges due to inconsistent multi-sensor data representations and the misalignment caused by dynamic scenes. Specifically, current fusion methods either explicitly correlate multi-sensor features using calibration parameters, ignoring the feature blurring caused by misalignment, or find correlated features across sensors through global attention, incurring rapidly escalating computational costs. To address these issues, we propose a transformer-based end-to-end multi-sensor fusion framework named the adaptive fusion transformer (AFTR). AFTR consists of an adaptive spatial cross-attention (ASCA) mechanism and a spatial temporal self-attention (STSA) mechanism. ASCA adaptively associates and fuses multi-sensor features in 3D space through learnable local attention, alleviating geometric misalignment and reducing computational costs, while STSA exchanges cross-temporal information using learnable offsets in deformable attention, mitigating displacements caused by dynamic scenes. Extensive experiments show that AFTR achieves state-of-the-art (SOTA) performance on the nuScenes 3D object detection task (74.9% NDS and 73.2% mAP) and is highly robust to misalignment (only a 0.2% NDS drop under slight noise). Ablation studies further demonstrate the effectiveness of each AFTR component. In summary, AFTR is an accurate, efficient, and robust multi-sensor data fusion framework.
format | Online Article Text |
id | pubmed-10611098 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10611098 2023-10-28 AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer Zhang, Yan; Liu, Kang; Bao, Hong; Qian, Xu; Wang, Zihan; Ye, Shiqing; Wang, Weicen Sensors (Basel) Article Multi-modal sensors are key to the robust and accurate operation of autonomous driving systems, among which LiDAR and cameras are important on-board sensors. However, current fusion methods face challenges due to inconsistent multi-sensor data representations and the misalignment caused by dynamic scenes. Specifically, current fusion methods either explicitly correlate multi-sensor features using calibration parameters, ignoring the feature blurring caused by misalignment, or find correlated features across sensors through global attention, incurring rapidly escalating computational costs. To address these issues, we propose a transformer-based end-to-end multi-sensor fusion framework named the adaptive fusion transformer (AFTR). AFTR consists of an adaptive spatial cross-attention (ASCA) mechanism and a spatial temporal self-attention (STSA) mechanism. ASCA adaptively associates and fuses multi-sensor features in 3D space through learnable local attention, alleviating geometric misalignment and reducing computational costs, while STSA exchanges cross-temporal information using learnable offsets in deformable attention, mitigating displacements caused by dynamic scenes. Extensive experiments show that AFTR achieves state-of-the-art (SOTA) performance on the nuScenes 3D object detection task (74.9% NDS and 73.2% mAP) and is highly robust to misalignment (only a 0.2% NDS drop under slight noise). Ablation studies further demonstrate the effectiveness of each AFTR component. In summary, AFTR is an accurate, efficient, and robust multi-sensor data fusion framework. MDPI 2023-10-12 /pmc/articles/PMC10611098/ /pubmed/37896496 http://dx.doi.org/10.3390/s23208400 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Zhang, Yan Liu, Kang Bao, Hong Qian, Xu Wang, Zihan Ye, Shiqing Wang, Weicen AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer |
title | AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer |
title_full | AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer |
title_fullStr | AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer |
title_full_unstemmed | AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer |
title_short | AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer |
title_sort | aftr: a robustness multi-sensor fusion model for 3d object detection based on adaptive fusion transformer |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10611098/ https://www.ncbi.nlm.nih.gov/pubmed/37896496 http://dx.doi.org/10.3390/s23208400 |
work_keys_str_mv | AT zhangyan aftrarobustnessmultisensorfusionmodelfor3dobjectdetectionbasedonadaptivefusiontransformer AT liukang aftrarobustnessmultisensorfusionmodelfor3dobjectdetectionbasedonadaptivefusiontransformer AT baohong aftrarobustnessmultisensorfusionmodelfor3dobjectdetectionbasedonadaptivefusiontransformer AT qianxu aftrarobustnessmultisensorfusionmodelfor3dobjectdetectionbasedonadaptivefusiontransformer AT wangzihan aftrarobustnessmultisensorfusionmodelfor3dobjectdetectionbasedonadaptivefusiontransformer AT yeshiqing aftrarobustnessmultisensorfusionmodelfor3dobjectdetectionbasedonadaptivefusiontransformer AT wangweicen aftrarobustnessmultisensorfusionmodelfor3dobjectdetectionbasedonadaptivefusiontransformer |
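The description above says STSA exchanges cross-temporal information "using learnable offsets in deformable attention". As a rough illustration of how deformable attention with learnable sampling offsets works in general, here is a minimal PyTorch sketch. It is not the authors' AFTR implementation, and all names (DeformableAttentionSketch, num_points, offset_net, and so on) are illustrative assumptions.

```python
# Minimal sketch of deformable attention with learnable sampling offsets,
# the general mechanism the abstract attributes to STSA. Illustrative only:
# not the authors' code; module and parameter names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttentionSketch(nn.Module):
    def __init__(self, dim: int = 256, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        # Each query predicts a few 2D sampling offsets and a weight per point,
        # so attention stays local instead of global over the whole feature map.
        self.offset_net = nn.Linear(dim, num_points * 2)
        self.weight_net = nn.Linear(dim, num_points)
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, queries, ref_points, feat_map):
        # queries:    (B, Q, C)    query embeddings (e.g., BEV queries)
        # ref_points: (B, Q, 2)    reference locations in [-1, 1] grid coords
        # feat_map:   (B, C, H, W) features to sample (e.g., a previous frame)
        B, Q, _ = queries.shape
        offsets = self.offset_net(queries).view(B, Q, self.num_points, 2)
        weights = self.weight_net(queries).softmax(dim=-1)            # (B, Q, P)
        # Sample features only at the few predicted locations.
        locs = (ref_points.unsqueeze(2) + offsets).clamp(-1.0, 1.0)   # (B, Q, P, 2)
        sampled = F.grid_sample(feat_map, locs, align_corners=False)  # (B, C, Q, P)
        sampled = sampled.permute(0, 2, 3, 1)                         # (B, Q, P, C)
        out = (weights.unsqueeze(-1) * self.value_proj(sampled)).sum(dim=2)
        return self.out_proj(out)                                     # (B, Q, C)

# Hypothetical usage: 100 queries attending to a 32x88 feature map.
attn = DeformableAttentionSketch(dim=256, num_points=4)
out = attn(torch.randn(2, 100, 256),
           torch.rand(2, 100, 2) * 2 - 1,
           torch.randn(2, 256, 32, 88))
```

Because each query samples only num_points locations instead of the full HxW map, the cost grows with the number of queries rather than the map size, which is consistent with the abstract's claim that local, learnable attention reduces computational cost relative to global attention.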