
Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End

Bibliographic Details
Main Authors: Qiu, Xiaochen; Zhang, Hai; Fu, Wenxing; Zhao, Chenxu; Jin, Yanqiong
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6515200/
https://www.ncbi.nlm.nih.gov/pubmed/31027218
http://dx.doi.org/10.3390/s19081941
author Qiu, Xiaochen
Zhang, Hai
Fu, Wenxing
Zhao, Chenxu
Jin, Yanqiong
collection PubMed
description The research field of visual-inertial odometry has matured in recent years, but significant problems remain: users must trade off high accuracy against low computational cost, and notational confusion persists in quaternion descriptions of rotation, which, although not fatal, can create unnecessary difficulties for researchers. In this paper, we develop a visual-inertial odometry algorithm that balances precision and computation. The proposed algorithm is a filter-based solution built on the well-known multi-state constraint Kalman filter framework. To dispel the notational confusion, we derive the error-state transition equation from scratch using the more intuitive Hamilton quaternion convention, and we obtain a fully linear closed-form formulation that is readily implemented. Because the filter-based back-end is vulnerable to feature-matching outliers, a descriptor-assisted optical flow tracking front-end was developed to address this issue, requiring only negligible additional computation. In addition, an initialization procedure is implemented that automatically selects static data to initialize the filter state. The proposed methods were evaluated on a public, real-world dataset and compared with state-of-the-art solutions. The experimental results show that the proposed solution is comparable in precision and more computationally efficient than the state of the art.
format Online
Article
Text
id pubmed-6515200
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6515200 2019-05-30 Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End Qiu, Xiaochen; Zhang, Hai; Fu, Wenxing; Zhao, Chenxu; Jin, Yanqiong Sensors (Basel) Article MDPI 2019-04-25 /pmc/articles/PMC6515200/ /pubmed/31027218 http://dx.doi.org/10.3390/s19081941 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6515200/
https://www.ncbi.nlm.nih.gov/pubmed/31027218
http://dx.doi.org/10.3390/s19081941