Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles
Main Authors: Shahian Jahromi, Babak; Tulabandhula, Theja; Cetin, Sabri
Format: Online Article Text
Language: English
Published: MDPI, 2019
Subjects: Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6833089/ https://www.ncbi.nlm.nih.gov/pubmed/31600922 http://dx.doi.org/10.3390/s19204357 |
_version_ | 1783466298632568832 |
author | Shahian Jahromi, Babak; Tulabandhula, Theja; Cetin, Sabri
author_facet | Shahian Jahromi, Babak; Tulabandhula, Theja; Cetin, Sabri
author_sort | Shahian Jahromi, Babak |
collection | PubMed |
description | Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most work has focused on improving accuracy; the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well in laboratory conditions using powerful computational resources; however, in real-world applications they cannot be deployed on an embedded edge computer because of their high cost and computational demands. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. The fusion framework combines a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) with a traditional Extended Kalman Filter (EKF) nonlinear state estimator, and pairs each fusion method with the camera, LiDAR, and radar sensors best suited to it. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular fusion system that is robust to sensor failure. The FCNx algorithm improves road detection accuracy over benchmark models while maintaining the real-time efficiency required for an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm outperforms baseline benchmark networks across a variety of environment scenarios. Moreover, the algorithm was implemented in a vehicle and tested on actual sensor data collected from that vehicle, performing real-time environment perception. |
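The abstract pairs the FCNx segmentation network with a traditional Extended Kalman Filter for obstacle tracking from radar. As a rough illustration of the generic EKF predict/update cycle that such a tracker performs (not the paper's implementation; the constant-velocity state layout, range/bearing measurement model, and noise values below are all assumptions for the sketch), a minimal version looks like this:

```python
import numpy as np

class ConstantVelocityEKF:
    """Minimal EKF tracking a 2-D obstacle state [x, y, vx, vy] from
    nonlinear range/bearing measurements (e.g. a radar return).
    Illustrative only; noise covariances here are placeholders."""

    def __init__(self, dt=0.1):
        self.x = np.zeros(4)           # state estimate
        self.P = np.eye(4)             # state covariance
        self.Q = np.eye(4) * 1e-3      # process noise (assumed)
        self.R = np.diag([0.1, 0.01])  # measurement noise (assumed)
        # Linear constant-velocity transition matrix
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """z = [range, bearing] measurement."""
        px, py = self.x[0], self.x[1]
        rng = np.hypot(px, py)
        h = np.array([rng, np.arctan2(py, px)])  # nonlinear h(x)
        # Jacobian of h w.r.t. the state: the EKF linearization step
        H = np.array([
            [px / rng,     py / rng,    0.0, 0.0],
            [-py / rng**2, px / rng**2, 0.0, 0.0],
        ])
        y = z - h                                     # innovation
        y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

# Tiny demo: track a target moving at (1.0, 0.5) m/s from noiseless
# range/bearing readings, initializing near the first detection.
ekf = ConstantVelocityEKF(dt=0.1)
ekf.x = np.array([5.0, 3.0, 0.0, 0.0])
target = np.array([5.0, 3.0, 1.0, 0.5])
for _ in range(50):
    target[:2] += target[2:] * 0.1
    ekf.predict()
    ekf.update(np.array([np.hypot(target[0], target[1]),
                         np.arctan2(target[1], target[0])]))
```

In a fused pipeline of this kind, the EKF's recursive predict/update structure is what keeps the tracking branch cheap enough for an embedded edge computer, in contrast to the learned segmentation branch.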
format | Online Article Text |
id | pubmed-6833089 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-68330892019-11-25 Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles Shahian Jahromi, Babak; Tulabandhula, Theja; Cetin, Sabri Sensors (Basel) Article
MDPI 2019-10-09 /pmc/articles/PMC6833089/ /pubmed/31600922 http://dx.doi.org/10.3390/s19204357 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Shahian Jahromi, Babak; Tulabandhula, Theja; Cetin, Sabri Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles
title | Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles |
title_full | Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles |
title_fullStr | Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles |
title_full_unstemmed | Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles |
title_short | Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles |
title_sort | real-time hybrid multi-sensor fusion framework for perception in autonomous vehicles |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6833089/ https://www.ncbi.nlm.nih.gov/pubmed/31600922 http://dx.doi.org/10.3390/s19204357 |
work_keys_str_mv | AT shahianjahromibabak realtimehybridmultisensorfusionframeworkforperceptioninautonomousvehicles AT tulabandhulatheja realtimehybridmultisensorfusionframeworkforperceptioninautonomousvehicles AT cetinsabri realtimehybridmultisensorfusionframeworkforperceptioninautonomousvehicles |