Unsupervised Monocular Visual Odometry for Fast-Moving Scenes Based on Optical Flow Network with Feature Point Matching Constraint


Bibliographic Details
Main Authors: Zhuang, Yuji, Jiang, Xiaoyan, Gao, Yongbin, Fang, Zhijun, Fujita, Hamido
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9788516/
https://www.ncbi.nlm.nih.gov/pubmed/36560015
http://dx.doi.org/10.3390/s22249647
_version_ 1784858772894646272
author Zhuang, Yuji
Jiang, Xiaoyan
Gao, Yongbin
Fang, Zhijun
Fujita, Hamido
author_sort Zhuang, Yuji
collection PubMed
description Robust and accurate visual feature tracking is essential for good pose estimation in visual odometry. However, in fast-moving scenes, feature point extraction and matching are unstable because of blurred images and large image disparity. In this paper, we propose an unsupervised monocular visual odometry framework based on a fusion of features extracted from two sources: an optical flow network and a traditional point feature extractor. In the training process, point features are generated for scene images and the outliers of matched point pairs are filtered by FlannMatch. Meanwhile, the optical flow network, constrained by the principle of forward–backward flow consistency, is used to select another group of corresponding point pairs. The Euclidean distance between the matching points found by FlannMatch and the corresponding point pairs selected by the flow network is added to the loss function of the flow network. Compared with SURF, the trained flow network shows more robust performance in complicated fast-motion scenarios. Furthermore, we propose the AvgFlow estimation module, which selects one of the two groups of matched point pairs according to the scene motion. The camera pose is then recovered by Perspective-n-Point (PnP) or epipolar geometry. Experiments conducted on the KITTI Odometry dataset verify the effectiveness of the trajectory estimation of our approach, especially in fast-moving scenarios.
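
The description above names three classical building blocks that can be sketched independently of the learned parts of the pipeline: FLANN-based point matching with outlier filtering, the forward–backward flow consistency check, and pose recovery from epipolar geometry. The Python/OpenCV sketch below is illustrative only: it substitutes SIFT for the SURF baseline mentioned in the abstract (SURF is patent-encumbered and absent from stock OpenCV builds), and the flow fields, thresholds, and function names are assumptions, not the authors' implementation. The trained flow network and the AvgFlow selection module are not reproduced here.

# Minimal sketch of the classical components named in the abstract.
# All thresholds and the float32 HxWx2 flow-field layout are assumptions.
import cv2
import numpy as np

def flann_matched_points(img1, img2, ratio=0.7):
    """Match SIFT keypoints between two frames with a FLANN matcher,
    filtering outlier pairs with Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # KD-tree index
                                  dict(checks=50))
    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2

def fb_consistency_mask(flow_fwd, flow_bwd, eps=1.0):
    """Keep pixels where following the forward flow and then the backward
    flow (sampled at the forward-warped location) returns near the start:
    f(p) + b(p + f(p)) ~ 0."""
    h, w = flow_fwd.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow_fwd[..., 0]).astype(np.float32)
    map_y = (ys + flow_fwd[..., 1]).astype(np.float32)
    bwd_at_fwd = cv2.remap(flow_bwd.astype(np.float32),
                           map_x, map_y, cv2.INTER_LINEAR)
    err = np.linalg.norm(flow_fwd + bwd_at_fwd, axis=-1)
    return err < eps  # boolean per-pixel consistency mask

def relative_pose(pts1, pts2, K):
    """Recover relative rotation/translation via the essential matrix
    (the epipolar-geometry branch; PnP would be used instead when 3D-2D
    correspondences are available)."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

Note that for monocular input recoverPose returns the translation only up to scale; a complete system resolves scale with PnP on triangulated 3D points or other constraints, which matches the abstract's choice between PnP and epipolar geometry depending on the available correspondences.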
format Online
Article
Text
id pubmed-9788516
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9788516 2022-12-24 Unsupervised Monocular Visual Odometry for Fast-Moving Scenes Based on Optical Flow Network with Feature Point Matching Constraint. Sensors (Basel), Article. MDPI 2022-12-09 /pmc/articles/PMC9788516/ /pubmed/36560015 http://dx.doi.org/10.3390/s22249647 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Unsupervised Monocular Visual Odometry for Fast-Moving Scenes Based on Optical Flow Network with Feature Point Matching Constraint
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9788516/
https://www.ncbi.nlm.nih.gov/pubmed/36560015
http://dx.doi.org/10.3390/s22249647