Direct and Indirect vSLAM Fusion for Augmented Reality
Augmented reality (AR) is an emerging technology applied in many fields. One of the limitations that still prevents AR from being even more widely used relates to the accessibility of devices: indeed, the devices currently used are usually high-end, expensive glasses or mobile devices. vSLAM (visual simultaneous localization and mapping) algorithms circumvent this problem by requiring only relatively cheap cameras for AR. vSLAM algorithms can be classified as direct or indirect methods based on the type of data they use. Each class of algorithms works optimally on a particular type of scene (e.g., textured or untextured), but unfortunately with little overlap. In this work, a method is proposed to fuse a direct and an indirect method in order to achieve higher robustness and to allow AR to move seamlessly between different types of scenes. Our method is tested on three datasets against state-of-the-art direct (LSD-SLAM), semi-direct (LCSD) and indirect (ORBSLAM2) algorithms in two different scenarios: a trajectory-planning scenario and an AR scenario where a virtual object is displayed on top of the video feed; furthermore, a similar method (LCSD SLAM) is also compared to our proposal. Results show that our fusion algorithm is generally as efficient as the best of the compared algorithms, both in terms of trajectory (mean errors with respect to ground-truth trajectory measurements) and in terms of quality of the augmentation (robustness and stability). In short, we propose a fusion algorithm that, in our tests, takes the best of both the direct and indirect methods.
Main Authors: | Outahar, Mohamed; Moreau, Guillaume; Normand, Jean-Marie |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404931/ https://www.ncbi.nlm.nih.gov/pubmed/34460777 http://dx.doi.org/10.3390/jimaging7080141 |
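The abstract describes fusing a direct and an indirect vSLAM method so that tracking stays robust across textured and untextured scenes. This record does not detail the authors' actual fusion rule, so the following is only a hypothetical sketch of the general idea under assumed interfaces: run both trackers on every frame and keep the pose of whichever reports the higher confidence, with a small hysteresis margin to avoid rapid switching. The `direct` and `indirect` tracker objects, `PoseEstimate`, and the confidence scores are placeholder names, not the paper's API.

```python
# Hypothetical illustration only: per-frame switching between a direct and an
# indirect vSLAM tracker. The tracker objects passed in are placeholders for
# any implementation exposing a track(frame) -> PoseEstimate method; this is
# not the authors' implementation, which is not described in this record.
from dataclasses import dataclass

@dataclass
class PoseEstimate:
    pose: object          # e.g., a 4x4 camera-to-world matrix
    confidence: float     # tracker-specific score, assumed in [0, 1]

class FusionTracker:
    """Feeds every frame to both trackers and keeps the more confident pose."""

    def __init__(self, direct, indirect, margin=0.05):
        self.direct = direct      # e.g., a photometric (direct) method
        self.indirect = indirect  # e.g., a feature-based (indirect) method
        self.margin = margin      # hysteresis to avoid rapid back-and-forth
        self.active = indirect    # arbitrary initial choice

    def track(self, frame):
        estimates = {
            self.direct: self.direct.track(frame),
            self.indirect: self.indirect.track(frame),
        }
        # Switch only if the other tracker is clearly more confident.
        best = max(estimates, key=lambda t: estimates[t].confidence)
        if estimates[best].confidence > estimates[self.active].confidence + self.margin:
            self.active = best
        return estimates[self.active]
```

The hysteresis margin is one simple way to keep the augmentation stable when both trackers report similar confidence; a real system would also need to reconcile the two maps and scales, which this sketch deliberately ignores.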
_version_ | 1783746236439855104 |
---|---|
author | Outahar, Mohamed; Moreau, Guillaume; Normand, Jean-Marie
author_facet | Outahar, Mohamed; Moreau, Guillaume; Normand, Jean-Marie
author_sort | Outahar, Mohamed |
collection | PubMed |
description | Augmented reality (AR) is an emerging technology applied in many fields. One of the limitations that still prevents AR from being even more widely used relates to the accessibility of devices: indeed, the devices currently used are usually high-end, expensive glasses or mobile devices. vSLAM (visual simultaneous localization and mapping) algorithms circumvent this problem by requiring only relatively cheap cameras for AR. vSLAM algorithms can be classified as direct or indirect methods based on the type of data they use. Each class of algorithms works optimally on a particular type of scene (e.g., textured or untextured), but unfortunately with little overlap. In this work, a method is proposed to fuse a direct and an indirect method in order to achieve higher robustness and to allow AR to move seamlessly between different types of scenes. Our method is tested on three datasets against state-of-the-art direct (LSD-SLAM), semi-direct (LCSD) and indirect (ORBSLAM2) algorithms in two different scenarios: a trajectory-planning scenario and an AR scenario where a virtual object is displayed on top of the video feed; furthermore, a similar method (LCSD SLAM) is also compared to our proposal. Results show that our fusion algorithm is generally as efficient as the best of the compared algorithms, both in terms of trajectory (mean errors with respect to ground-truth trajectory measurements) and in terms of quality of the augmentation (robustness and stability). In short, we propose a fusion algorithm that, in our tests, takes the best of both the direct and indirect methods. |
format | Online Article Text |
id | pubmed-8404931 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8404931 2021-10-28 Direct and Indirect vSLAM Fusion for Augmented Reality. Outahar, Mohamed; Moreau, Guillaume; Normand, Jean-Marie. J Imaging, Article. MDPI 2021-08-10 /pmc/articles/PMC8404931/ /pubmed/34460777 http://dx.doi.org/10.3390/jimaging7080141 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Outahar, Mohamed; Moreau, Guillaume; Normand, Jean-Marie; Direct and Indirect vSLAM Fusion for Augmented Reality |
title | Direct and Indirect vSLAM Fusion for Augmented Reality |
title_full | Direct and Indirect vSLAM Fusion for Augmented Reality |
title_fullStr | Direct and Indirect vSLAM Fusion for Augmented Reality |
title_full_unstemmed | Direct and Indirect vSLAM Fusion for Augmented Reality |
title_short | Direct and Indirect vSLAM Fusion for Augmented Reality |
title_sort | direct and indirect vslam fusion for augmented reality |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404931/ https://www.ncbi.nlm.nih.gov/pubmed/34460777 http://dx.doi.org/10.3390/jimaging7080141 |
work_keys_str_mv | AT outaharmohamed directandindirectvslamfusionforaugmentedreality AT moreauguillaume directandindirectvslamfusionforaugmentedreality AT normandjeanmarie directandindirectvslamfusionforaugmentedreality |
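The evaluation summarized in the description field compares estimated camera trajectories against ground-truth measurements via mean errors. As an illustration of how such a comparison is commonly computed (not necessarily the paper's exact protocol), here is a minimal sketch of a mean absolute trajectory error after rigid alignment of the estimate onto the ground truth; the NumPy-based helpers and the random stand-in data at the bottom are assumptions made for the example.

```python
# Minimal sketch (not the paper's exact protocol): mean absolute trajectory
# error (ATE) between an estimated trajectory and ground truth. Both inputs
# are assumed to be N x 3 arrays of positions that are already time-associated.
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (rotation + translation) of est onto gt."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_gt).T @ (est - mu_est) / est.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # avoid a reflection solution
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    return R, t

def mean_ate(est, gt):
    """Mean translational error after aligning the estimate onto ground truth."""
    R, t = align_rigid(est, gt)
    aligned = est @ R.T + t
    return np.linalg.norm(aligned - gt, axis=1).mean()

if __name__ == "__main__":
    # Random stand-in trajectories, purely for demonstration.
    gt = np.cumsum(np.random.randn(100, 3) * 0.01, axis=0)
    est = gt + np.random.randn(100, 3) * 0.005
    print(f"mean ATE: {mean_ate(est, gt):.4f} m")
```

Aligning before measuring removes the arbitrary choice of world frame made by each SLAM system, so the reported error reflects trajectory shape rather than an offset in the starting pose.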