Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching

This work presents a visual information fusion approach for robust probability-oriented feature matching. It builds on omnidirectional imaging and is tested in a visual localization framework for mobile robotics. General visual localization methods have been extensively studied and optimized in terms of performance; however, one of the main threats to the final estimate is the presence of outliers. In this paper, we present several contributions to deal with that issue. First, 3D information associated with SURF (Speeded-Up Robust Feature) points detected in the images is inferred under the Bayesian framework established by Gaussian processes (GPs). Such information represents a probability distribution for the feature points' existence, which is successively fused and updated across the robot's poses. Second, this distribution can be sampled and projected onto the next 2D image frame, in ℝ², by means of a filter-motion prediction. This strategy yields relevant areas in the image reference system where probable matches can be expected, according to the accumulated probability of feature existence. The approach thus entails an adaptive, probability-oriented matching search that concentrates on significant areas of the image but also accounts for unseen parts of the scene, thanks to an internal modulation of the probability distribution's domain, computed from the current uncertainty of the system. The main outcomes confirm robust feature matching, which produces consistent localization estimates, with the odometry prior used to estimate the scale factor. Publicly available datasets have been used to validate the design and operation of the approach. Moreover, the proposal has been compared with a standard feature-matching scheme and with a localization method based on an inverse depth parametrization. The results confirm the validity of the approach in terms of feature matching, localization accuracy, and computation time.
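To make the first contribution concrete, the following is a minimal Python sketch, not taken from the paper: the RBF kernel, the sampling of empty cells, and the precision-weighted fusion rule are all illustrative assumptions. It regresses a feature-existence probability map over a 2D grid with a Gaussian process and fuses two successive maps.

# Minimal sketch (not the authors' code): a GP model of feature-existence
# probability over a local 2D grid, fused across poses. Kernel choice,
# noise level, and the fusion rule are illustrative assumptions.
import numpy as np

def rbf_kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between point sets (n,2) and (m,2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_existence(train_xy, train_y, query_xy, noise=1e-2):
    """GP posterior mean/variance of feature existence, clipped to [0, 1]."""
    K = rbf_kernel(train_xy, train_xy) + noise * np.eye(len(train_xy))
    Ks = rbf_kernel(query_xy, train_xy)
    mean = Ks @ np.linalg.solve(K, train_y)
    var = rbf_kernel(query_xy, query_xy).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return np.clip(mean, 0.0, 1.0), np.maximum(var, 1e-9)

def fuse(mean_prev, var_prev, mean_new, var_new):
    """Precision-weighted fusion of two existence maps (one plausible way
    to realize the paper's successive update; the exact rule may differ)."""
    w_prev, w_new = 1.0 / var_prev, 1.0 / var_new
    var = 1.0 / (w_prev + w_new)
    return var * (w_prev * mean_prev + w_new * mean_new), var

# Toy usage: SURF detections -> existence=1; random background cells -> 0.
rng = np.random.default_rng(0)
detections = rng.uniform(0, 4, size=(30, 2))   # detected feature points
empties = rng.uniform(0, 4, size=(30, 2))      # sampled empty locations
xy = np.vstack([detections, empties])
y = np.hstack([np.ones(30), np.zeros(30)])
gx, gy = np.meshgrid(np.linspace(0, 4, 40), np.linspace(0, 4, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
mean, var = gp_existence(xy, y, grid)                            # pose k
mean2, var2 = fuse(mean, var, *gp_existence(xy + 0.1, y, grid))  # pose k+1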

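A second hedged sketch covers the projection step: high-probability 3D points are projected into the predicted next frame, and each candidate receives a matching window inflated by the current pose uncertainty. A pinhole model stands in for the paper's omnidirectional camera, and the window-inflation rule is an assumption, not the authors' modulation scheme.

# Illustrative sketch: project candidate 3D points into the predicted next
# frame and grow each search window with the predicted pose covariance.
import numpy as np

def project_pinhole(points_3d, R, t, fx=300.0, fy=300.0, cx=320.0, cy=240.0):
    """Project Nx3 world points into pixels under pose (R, t)."""
    cam = (R @ points_3d.T).T + t          # world -> camera frame
    z = np.clip(cam[:, 2], 1e-6, None)     # guard against division by zero
    u = fx * cam[:, 0] / z + cx
    v = fy * cam[:, 1] / z + cy
    return np.column_stack([u, v])

def search_windows(pixels, pose_cov, base_radius=8.0, gain=3.0):
    """Per-feature matching window; the sqrt-of-trace inflation is assumed."""
    r = base_radius + gain * np.sqrt(np.trace(pose_cov))
    return [(float(u) - r, float(v) - r, float(u) + r, float(v) + r)
            for u, v in pixels]

# Toy usage: lift high-probability grid cells to 3D (flat-floor assumption)
# and project them under a filter-predicted pose; SURF matching would then
# be restricted to the returned windows.
rng = np.random.default_rng(1)
pts3d = np.column_stack([rng.uniform(0, 4, (20, 2)), np.zeros(20)])
R_pred = np.eye(3)                       # predicted rotation (filter prior)
t_pred = np.array([0.0, 0.0, 2.0])       # predicted translation
P_pred = np.diag([0.02, 0.02, 0.01])     # predicted pose covariance
pix = project_pinhole(pts3d, R_pred, t_pred)
windows = search_windows(pix, P_pred)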
Bibliographic Details
Main Authors: Valiente, David; Payá, Luis; Jiménez, Luis M.; Sebastián, Jose M.; Reinoso, Óscar
Format: Online Article Text
Language: English
Published in: Sensors (Basel), MDPI, 26 June 2018
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6069515/
https://www.ncbi.nlm.nih.gov/pubmed/29949916
http://dx.doi.org/10.3390/s18072041
License: © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).