Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter

Bibliographic Details
Main Authors: Alatise, Mary B., Hancke, Gerhard P.
Format: Online Article Text
Language: English
Published: MDPI 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5676736/
https://www.ncbi.nlm.nih.gov/pubmed/28934102
http://dx.doi.org/10.3390/s17102164
_version_ 1783277114313670656
author Alatise, Mary B.
Hancke, Gerhard P.
author_facet Alatise, Mary B.
Hancke, Gerhard P.
author_sort Alatise, Mary B.
collection PubMed
description Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection approach integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. SURF and RANSAC improve accuracy because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed for its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data and root mean square errors (RMSEs).
format Online
Article
Text
id pubmed-5676736
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-5676736 2017-11-17 Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter Alatise, Mary B. Hancke, Gerhard P. Sensors (Basel) Article Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection approach integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. SURF and RANSAC improve accuracy because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed for its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data and root mean square errors (RMSEs). MDPI 2017-09-21 /pmc/articles/PMC5676736/ /pubmed/28934102 http://dx.doi.org/10.3390/s17102164 Text en © 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Alatise, Mary B.
Hancke, Gerhard P.
Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter
title Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter
title_full Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter
title_fullStr Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter
title_full_unstemmed Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter
title_short Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter
title_sort pose estimation of a mobile robot based on fusion of imu data and vision data using an extended kalman filter
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5676736/
https://www.ncbi.nlm.nih.gov/pubmed/28934102
http://dx.doi.org/10.3390/s17102164
work_keys_str_mv AT alatisemaryb poseestimationofamobilerobotbasedonfusionofimudataandvisiondatausinganextendedkalmanfilter
AT hanckegerhardp poseestimationofamobilerobotbasedonfusionofimudataandvisiondatausinganextendedkalmanfilter
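
The abstract in the record's description field names two technical stages: a SURF-plus-RANSAC vision stage and an EKF that fuses IMU and camera data. As a minimal illustration of the vision stage (not the authors' exact pipeline), the sketch below matches SURF keypoints between a reference image of the target object and a camera frame and lets RANSAC reject outlier correspondences while fitting a homography. It assumes an opencv-contrib build with the nonfree modules enabled, since SURF lives in `cv2.xfeatures2d`; the `locate_object` helper and its thresholds are hypothetical choices, not values from the paper.

```python
import cv2
import numpy as np

def locate_object(reference, frame, hessian_threshold=400):
    # SURF detector; the Hessian threshold controls how many interest
    # points are kept (both inputs are expected as grayscale arrays).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp_ref, des_ref = surf.detectAndCompute(reference, None)
    kp_frm, des_frm = surf.detectAndCompute(frame, None)

    # Brute-force k-NN matching with Lowe's ratio test to prune weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_ref, des_frm, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.7 * n.distance]
    if len(good) < 4:
        return None  # a homography needs at least four correspondences

    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC iteratively fits the homography from random minimal subsets of
    # correspondences and flags outlier matches in `mask`.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

For the fusion stage, here is a minimal EKF sketch under an assumed planar unicycle motion model: the IMU (velocity integrated from the accelerometer, yaw rate from the gyroscope) drives the prediction step, and the vision-derived pose [x, y, theta] supplies the correction. The state layout, noise matrices, and identity measurement model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate the pose with forward velocity v (e.g. integrated from the
    accelerometer) and yaw rate omega (from the gyroscope)."""
    theta = x[2]
    x_pred = x + np.array([v * dt * np.cos(theta),
                           v * dt * np.sin(theta),
                           omega * dt])
    # Jacobian of the motion model with respect to the state [x, y, theta].
    F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the prediction with a vision pose measurement z = [x, y, theta]."""
    H = np.eye(3)                    # assume the camera measures the full pose
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# One illustrative step: predict from IMU readings, then correct with a
# vision fix; all numeric values here are made up for demonstration.
x = np.zeros(3)                      # initial pose [x, y, theta]
P = np.eye(3) * 0.1
Q = np.eye(3) * 0.01                 # assumed process noise
R = np.eye(3) * 0.05                 # assumed measurement noise
x, P = ekf_predict(x, P, v=0.2, omega=0.1, dt=0.1, Q=Q)
x, P = ekf_update(x, P, z=np.array([0.02, 0.0, 0.01]), R=R)
```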