Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation
Stabilizing and validating the measured position of objects is an important step for high-level perception functions and for the correct processing of sensory data. The goal of this process is to detect and handle inconsistencies between different sensor measurements, which arise within the perception system. Aggregating the detections from different sensors consists of combining the sensor data into one common reference frame for each identified object, leading to the creation of a super-sensor. The data aggregation may produce errors such as false detections, misplaced object cuboids, or an incorrect number of objects in the scene; the stabilization and validation process focuses on mitigating these problems. The current paper proposes four contributions for solving the stabilization and validation task for autonomous vehicles, using the following sensors: trifocal camera, fisheye camera, long-range RADAR (Radio Detection and Ranging), and 4-layer and 16-layer LIDARs (Light Detection and Ranging). We propose two original data association methods used in the sensor fusion and tracking processes. The first data association algorithm is created for tracking LIDAR objects and combines multiple appearance and motion features to exploit the available information on road objects. The second novel data association algorithm is designed for trifocal camera objects and aims to find measurement correspondences to sensor-fused objects so that the super-sensor data are enriched with semantic class information. The implemented trifocal object association solution uses a novel polar association scheme combined with a decision tree to find the best hypothesis–measurement correlations. Another contribution, for stabilizing the position and unpredictable behavior of road objects detected by multiple types of complementary sensors, is a fusion approach based on the Unscented Kalman Filter and a single-layer perceptron. The last novel contribution addresses the validation of the 3D object position, which is solved using a fuzzy logic technique combined with a semantic segmentation image. The proposed algorithms run in real time, with a cumulative running time of 90 ms, and have been evaluated against ground truth data extracted from a high-precision GPS (Global Positioning System) with 2 cm accuracy, obtaining an average error of 0.8 m.
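The abstract names the building blocks (UKF plus perceptron fusion, polar association, fuzzy validation) without implementation detail. As a minimal sketch of the final validation step only, the Python below projects a fused object's 3D position into a semantic segmentation image and fuzzifies the class agreement around the projected point; the label IDs, membership breakpoints, window size, and the `project_to_image` helper are illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch of fuzzy 3D-position validation against a semantic
# segmentation image. Class IDs, membership breakpoints, and the projection
# helper are hypothetical stand-ins, not the paper's implementation.
import numpy as np

ROAD, SIDEWALK, VEHICLE, PEDESTRIAN = 0, 1, 2, 3  # assumed label IDs

def project_to_image(pos_3d, K):
    """Pinhole projection of a 3D point (camera coordinates) to pixel coords."""
    u = K[0, 0] * pos_3d[0] / pos_3d[2] + K[0, 2]
    v = K[1, 1] * pos_3d[1] / pos_3d[2] + K[1, 2]
    return int(round(u)), int(round(v))

def tri(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def validate_object(pos_3d, expected_classes, seg, K, win=7):
    """Fuzzy validity of a fused 3D object position.

    Fuzzifies the fraction of pixels, in a window around the projected
    position, whose semantic class matches the fused object's class.
    """
    u, v = project_to_image(pos_3d, K)
    h, w = seg.shape
    if not (0 <= u < w and 0 <= v < h):
        return 0.0  # projects outside the image: cannot be validated
    patch = seg[max(0, v - win):v + win + 1, max(0, u - win):u + win + 1]
    support = np.isin(patch, expected_classes).mean()
    # Assumed rule base: low class support -> invalid, high support -> valid.
    invalid = tri(support, -0.01, 0.0, 0.4)
    valid = tri(support, 0.3, 1.0, 1.01)
    # Defuzzify as a weighted average of the rule outputs (0 and 1).
    return (0.0 * invalid + 1.0 * valid) / max(invalid + valid, 1e-9)

# Example: a fused detection 12 m ahead, expected to be a vehicle.
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
seg = np.full((720, 1280), ROAD, dtype=np.int32)
seg[300:420, 560:720] = VEHICLE  # segmented vehicle blob
score = validate_object(np.array([0.0, 0.5, 12.0]), [VEHICLE], seg, K)
print(f"fuzzy validity score: {score:.2f}")
```

With a rule base of this shape, a detection whose projected neighborhood is dominated by its expected class defuzzifies to a score near 1, while a cuboid that projects onto unrelated classes receives a low score and can be flagged for rejection.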
Main Authors: | Muresan, Mircea Paul; Giosan, Ion; Nedevschi, Sergiu |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7070899/ https://www.ncbi.nlm.nih.gov/pubmed/32085608 http://dx.doi.org/10.3390/s20041110 |
author | Muresan, Mircea Paul; Giosan, Ion; Nedevschi, Sergiu |
---|---|
collection | PubMed |
format | Online Article Text |
id | pubmed-7070899 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7070899 (2020-03-19). Sensors (Basel), Article. MDPI, published 2020-02-18. /pmc/articles/PMC7070899/ /pubmed/32085608 http://dx.doi.org/10.3390/s20041110 Text en. © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
title | Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7070899/ https://www.ncbi.nlm.nih.gov/pubmed/32085608 http://dx.doi.org/10.3390/s20041110 |