Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments
We propose a robust RGB-Depth (RGB-D) Visual Odometry (VO) system to improve localization performance in indoor scenes by using geometric features, including point and line features. Previous VO/Simultaneous Localization and Mapping (SLAM) algorithms estimate low-drift camera poses under the Manhattan World (MW)/Atlanta World (AW) assumption, which limits the applicability of such systems. …
Main Authors: Yuan, Huayu; Wu, Chengfeng; Deng, Zhongliang; Yin, Jiahui
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9698556/ https://www.ncbi.nlm.nih.gov/pubmed/36433239 http://dx.doi.org/10.3390/s22228644
_version_ | 1784838849101299712 |
author | Yuan, Huayu; Wu, Chengfeng; Deng, Zhongliang; Yin, Jiahui
author_facet | Yuan, Huayu; Wu, Chengfeng; Deng, Zhongliang; Yin, Jiahui
author_sort | Yuan, Huayu |
collection | PubMed |
description | We propose a robust RGB-Depth (RGB-D) Visual Odometry (VO) system to improve localization performance in indoor scenes by using geometric features, including point and line features. Previous VO/Simultaneous Localization and Mapping (SLAM) algorithms estimate low-drift camera poses under the Manhattan World (MW)/Atlanta World (AW) assumption, which limits the applicability of such systems. In this paper, we divide indoor environments into two different scenes: MW and non-MW scenes. The Manhattan scenes are modeled as a Mixture of Manhattan Frames, in which each Manhattan Frame (MF) in itself defines a Manhattan World of a specific orientation. Moreover, we provide a method to detect MFs using the dominant directions extracted from parallel lines; this approach has lower computational complexity than existing techniques that detect MFs from planes. For MW scenes, we estimate rotational and translational motion separately. A novel method is proposed to estimate the drift-free rotation using MF observations, unit direction vectors of lines, and surface normal vectors; the translation part is then recovered from point-line tracking. In non-MW scenes, the tracked and matched dominant directions are combined with the point and line features to estimate the full 6-degree-of-freedom (DoF) camera pose. Additionally, we exploit the rotation constraints generated from multi-view observations of the dominant directions. These constraints are combined with the reprojection errors of points and lines to refine the camera pose through local-map bundle adjustment. Evaluations on both synthesized and real-world datasets demonstrate that our approach outperforms state-of-the-art methods: on synthesized datasets, the average localization accuracy is 1.5 cm, on par with the state of the art; on real-world datasets, it is 1.7 cm, a 43% improvement over state-of-the-art methods, while time consumption is reduced by 36%. |
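The description above says MFs are detected from the dominant directions of parallel lines rather than from planes, but gives no algorithmic detail. The sketch below is only one plausible reading of that idea: greedily cluster line directions into dominant directions, then search for a near-orthogonal pair and complete it to a rotation. All function names, thresholds, and the SVD projection step are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def cluster_line_directions(line_dirs, angle_tol_deg=5.0):
    """Greedily group unit line-direction vectors into dominant directions.

    A 3D line direction is sign-ambiguous, so v and -v are treated as the
    same direction. Returns one normalized mean vector per cluster.
    """
    cos_tol = np.cos(np.deg2rad(angle_tol_deg))
    clusters = []
    for v in line_dirs:
        v = v / np.linalg.norm(v)
        for c in clusters:
            if abs(v @ c[0]) > cos_tol:           # within angle_tol of this cluster
                c.append(v if v @ c[0] > 0 else -v)
                break
        else:
            clusters.append([v])
    means = [np.mean(c, axis=0) for c in clusters]
    return [m / np.linalg.norm(m) for m in means]

def detect_manhattan_frame(dominant_dirs, ortho_tol_deg=5.0):
    """Find a pair of near-orthogonal dominant directions and complete it to a
    Manhattan Frame, returned as a rotation matrix in SO(3), or None."""
    dot_tol = np.sin(np.deg2rad(ortho_tol_deg))   # |cos(90 deg +/- tol)| <= sin(tol)
    for i in range(len(dominant_dirs)):
        for j in range(i + 1, len(dominant_dirs)):
            d1, d2 = dominant_dirs[i], dominant_dirs[j]
            if abs(d1 @ d2) > dot_tol:
                continue
            d3 = np.cross(d1, d2)                 # third axis completes the frame
            R = np.column_stack([d1, d2, d3 / np.linalg.norm(d3)])
            U, _, Vt = np.linalg.svd(R)           # project onto the nearest rotation
            if np.linalg.det(U @ Vt) < 0:
                Vt[-1] *= -1                      # enforce a proper rotation (det = +1)
            return U @ Vt
    return None                                   # scene offers no Manhattan Frame

# Toy usage: noisy line directions along two orthogonal wall directions.
rng = np.random.default_rng(0)
axes = np.eye(3)
lines = [s * a + 0.02 * rng.standard_normal(3)
         for a in axes[:2] for s in (1, -1) for _ in range(10)]
mf = detect_manhattan_frame(cluster_line_directions(lines))
print(mf)  # approximately the identity rotation if detection succeeded
```

In the pipeline the abstract describes, such an MF detection would feed the decoupled estimation: the drift-free rotation from MF observations, line directions, and surface normals, with translation recovered separately from point-line tracking. That stage is not sketched here.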
format | Online Article Text |
id | pubmed-9698556 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9698556 2022-11-26 Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments Yuan, Huayu; Wu, Chengfeng; Deng, Zhongliang; Yin, Jiahui Sensors (Basel) Article MDPI 2022-11-09 /pmc/articles/PMC9698556/ /pubmed/36433239 http://dx.doi.org/10.3390/s22228644 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Yuan, Huayu Wu, Chengfeng Deng, Zhongliang Yin, Jiahui Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments |
title | Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments |
title_full | Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments |
title_fullStr | Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments |
title_full_unstemmed | Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments |
title_short | Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments |
title_sort | robust visual odometry leveraging mixture of manhattan frames in indoor environments |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9698556/ https://www.ncbi.nlm.nih.gov/pubmed/36433239 http://dx.doi.org/10.3390/s22228644 |
work_keys_str_mv | AT yuanhuayu robustvisualodometryleveragingmixtureofmanhattanframesinindoorenvironments AT wuchengfeng robustvisualodometryleveragingmixtureofmanhattanframesinindoorenvironments AT dengzhongliang robustvisualodometryleveragingmixtureofmanhattanframesinindoorenvironments AT yinjiahui robustvisualodometryleveragingmixtureofmanhattanframesinindoorenvironments |