Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors
This work presents methods to create local maps and to estimate the position of a mobile robot using the global appearance of omnidirectional images. The robot carries an omnidirectional vision system, and every omnidirectional image it acquires is described with a single global appearance descriptor based on the Radon transform. Two scenarios are considered. In the first, we assume the existence of a previously built map composed of omnidirectional images captured from known positions; the goal is to estimate which map position is nearest to the current (unknown) position of the robot, using only the visual information acquired from that position. In the second, we assume a model of the environment composed of omnidirectional images, but with no information about where the images were acquired; the goal is to build a local map and to estimate the position of the robot within it. Both methods are tested with several databases (including virtual and real images), taking into consideration changes in the position of objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and robustness of both methods.
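The abstract does not fix a concrete formulation, so the following is only a minimal sketch of the kind of pipeline it describes, not the authors' method: each grayscale omnidirectional image is summarized by a single global-appearance vector derived from its Radon transform, and localization against a map of images captured from known positions is done by nearest-neighbor search over those vectors. The image size, the number of projection angles, the use of the flattened sinogram as the descriptor and the Euclidean distance are all illustrative assumptions; scikit-image's radon and resize are used for the transform.

```python
import numpy as np
from skimage.transform import radon, resize


def radon_descriptor(image, size=64, n_angles=90):
    """Global-appearance descriptor of a grayscale image: the flattened,
    L2-normalized Radon sinogram of a downscaled copy of the image.
    Size and number of projection angles are illustrative choices."""
    small = resize(image, (size, size), anti_aliasing=True)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(small, theta=theta, circle=False)
    vec = sinogram.ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)


def nearest_map_position(test_image, map_images, map_positions):
    """Localization against a map with known capture positions: return the
    position of the map image whose descriptor is closest (Euclidean
    distance) to the descriptor of the test image."""
    d_test = radon_descriptor(test_image)
    distances = [np.linalg.norm(d_test - radon_descriptor(m)) for m in map_images]
    best = int(np.argmin(distances))
    return map_positions[best], distances[best]
```

In practice the descriptors of the map images would be computed once and stored, so that only the descriptor of the current image has to be computed at localization time.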
Main Authors: | Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2015 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4634508/ https://www.ncbi.nlm.nih.gov/pubmed/26501289 http://dx.doi.org/10.3390/s151026368 |
author | Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar
collection | PubMed |
format | Online Article Text |
id | pubmed-4634508 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
journal | Sensors (Basel)
published online | 2015-10-16
rights | © 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
title | Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4634508/ https://www.ncbi.nlm.nih.gov/pubmed/26501289 http://dx.doi.org/10.3390/s151026368 |