VSLAM method based on object detection in dynamic environments
Main Authors: | Liu, Jia; Gu, Qiyao; Chen, Dapeng; Yan, Dong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9478733/ https://www.ncbi.nlm.nih.gov/pubmed/36119718 http://dx.doi.org/10.3389/fnbot.2022.990453 |
_version_ | 1784790639314993152 |
---|---|
author | Liu, Jia Gu, Qiyao Chen, Dapeng Yan, Dong |
author_facet | Liu, Jia Gu, Qiyao Chen, Dapeng Yan, Dong |
author_sort | Liu, Jia |
collection | PubMed |
description | The field of augmented reality registration now requires SLAM systems that can adapt to more complex and highly dynamic environments. Commonly used VSLAM algorithms suffer from large pose-estimation errors and easily lose camera tracking in dynamic scenes. To address these problems, we propose a real-time tracking and mapping method that combines a Gaussian mixture model (GMM) with YOLOv3. The method builds on the ORB-SLAM2 framework and improves its tracking thread: it uses an affine transformation matrix to align consecutive frames, employs the GMM to model the background image and segment foreground dynamic regions, and then passes each detected dynamic region to the YOLO detector to identify possible dynamic targets. An improved Kalman filter predicts and tracks the detected dynamic objects during the tracking stage, and before mapping, the method filters the feature points detected in the current frame and eliminates dynamic feature points. Finally, we validate the proposed method on the TUM dataset and conduct real-time augmented reality registration experiments in a dynamic environment. The results show that the proposed method is more robust on dynamic datasets and can register virtual objects stably and in real time. (Simplified code sketches of this pipeline appear after the record fields below.) |
format | Online Article Text |
id | pubmed-9478733 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9478733 2022-09-17 VSLAM method based on object detection in dynamic environments Liu, Jia; Gu, Qiyao; Chen, Dapeng; Yan, Dong Front Neurorobot Neuroscience The field of augmented reality registration now requires SLAM systems that can adapt to more complex and highly dynamic environments. Commonly used VSLAM algorithms suffer from large pose-estimation errors and easily lose camera tracking in dynamic scenes. To address these problems, we propose a real-time tracking and mapping method that combines a Gaussian mixture model (GMM) with YOLOv3. The method builds on the ORB-SLAM2 framework and improves its tracking thread: it uses an affine transformation matrix to align consecutive frames, employs the GMM to model the background image and segment foreground dynamic regions, and then passes each detected dynamic region to the YOLO detector to identify possible dynamic targets. An improved Kalman filter predicts and tracks the detected dynamic objects during the tracking stage, and before mapping, the method filters the feature points detected in the current frame and eliminates dynamic feature points. Finally, we validate the proposed method on the TUM dataset and conduct real-time augmented reality registration experiments in a dynamic environment. The results show that the proposed method is more robust on dynamic datasets and can register virtual objects stably and in real time. Frontiers Media S.A. 2022-09-02 /pmc/articles/PMC9478733/ /pubmed/36119718 http://dx.doi.org/10.3389/fnbot.2022.990453 Text en Copyright © 2022 Liu, Gu, Chen and Yan. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Liu, Jia Gu, Qiyao Chen, Dapeng Yan, Dong VSLAM method based on object detection in dynamic environments |
title | VSLAM method based on object detection in dynamic environments |
title_full | VSLAM method based on object detection in dynamic environments |
title_fullStr | VSLAM method based on object detection in dynamic environments |
title_full_unstemmed | VSLAM method based on object detection in dynamic environments |
title_short | VSLAM method based on object detection in dynamic environments |
title_sort | vslam method based on object detection in dynamic environments |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9478733/ https://www.ncbi.nlm.nih.gov/pubmed/36119718 http://dx.doi.org/10.3389/fnbot.2022.990453 |
work_keys_str_mv | AT liujia vslammethodbasedonobjectdetectionindynamicenvironments AT guqiyao vslammethodbasedonobjectdetectionindynamicenvironments AT chendapeng vslammethodbasedonobjectdetectionindynamicenvironments AT yandong vslammethodbasedonobjectdetectionindynamicenvironments |
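The record above is purely bibliographic, but the abstract describes a concrete pipeline: affine correction between consecutive frames, GMM background modelling to segment dynamic foreground regions, YOLOv3 to confirm dynamic objects, Kalman-filter tracking, and removal of dynamic feature points before mapping. The sketches below illustrate those steps with off-the-shelf OpenCV components; they are not the authors' implementation, and all parameters, model files, and helper names are assumptions chosen for illustration. This first sketch covers the frame-correction and segmentation steps: an affine transform estimated from ORB matches aligns the previous frame to the current one, and a GMM-based background subtractor (OpenCV's MOG2) produces candidate dynamic regions.

```python
import cv2
import numpy as np

# GMM-based background subtractor (MOG2); history/varThreshold are illustrative.
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                              detectShadows=False)
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def align_previous_frame(prev_gray, curr_gray):
    """Estimate an affine transform from ORB matches between consecutive
    frames and warp the previous frame into the current view (the
    front/back frame correction mentioned in the abstract)."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return prev_gray
    matches = matcher.match(des1, des2)
    if len(matches) < 6:
        return prev_gray
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return prev_gray
    h, w = curr_gray.shape
    return cv2.warpAffine(prev_gray, M, (w, h))

def dynamic_region_boxes(gray):
    """Update the GMM with the (motion-compensated) frame and return
    bounding boxes of foreground blobs as candidate dynamic regions."""
    mask = bg_model.apply(gray)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
```

With a moving camera, the motion-compensated frame (rather than the raw frame) is what should feed the background model, so that camera motion is not mistaken for foreground motion.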
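The next step the abstract describes is sending each candidate dynamic region to the YOLO detector. A minimal way to sketch that with OpenCV's DNN module is shown below; the yolov3.cfg / yolov3.weights file names, the confidence threshold, and the choice of "person" as the only dynamic class are placeholder assumptions, not values from the paper.

```python
import cv2
import numpy as np

# YOLOv3 loaded through OpenCV's DNN module; file paths are placeholders.
yolo = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_names = yolo.getUnconnectedOutLayersNames()
DYNAMIC_CLASSES = {0}  # COCO class 0 = "person"; extend as needed.

def contains_dynamic_object(frame_bgr, box, conf_thresh=0.5):
    """Return True if YOLOv3 finds a dynamic-class object inside `box`
    (a bounding box from the GMM segmentation stage)."""
    x, y, w, h = box
    crop = frame_bgr[y:y + h, x:x + w]
    if crop.size == 0:
        return False
    blob = cv2.dnn.blobFromImage(crop, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    yolo.setInput(blob)
    for out in yolo.forward(out_names):
        for det in out:              # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            cls = int(np.argmax(scores))
            if cls in DYNAMIC_CLASSES and scores[cls] * det[4] > conf_thresh:
                return True
    return False
```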
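Finally, the abstract mentions an improved Kalman filter for tracking the detected dynamic objects and the elimination of dynamic feature points before mapping. The sketch below uses a plain constant-velocity cv2.KalmanFilter rather than the paper's improved variant; the state layout and noise covariances are illustrative assumptions. The last helper simply discards ORB keypoints that fall inside a predicted dynamic box so that only static features reach pose estimation and mapping.

```python
import cv2
import numpy as np

def make_tracker(cx, cy):
    """Kalman filter with state [cx, cy, vx, vy] and measurement [cx, cy]."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

def predict_centre(kf):
    """Predict the object's centre in the next frame."""
    pred = kf.predict()
    return float(pred[0, 0]), float(pred[1, 0])

def update_tracker(kf, cx, cy):
    """Correct the filter with the centre of a newly detected dynamic box."""
    kf.correct(np.array([[np.float32(cx)], [np.float32(cy)]]))

def filter_static_keypoints(keypoints, descriptors, dynamic_boxes):
    """Drop ORB keypoints inside any predicted dynamic box so only static
    features are used for pose estimation and map building."""
    keep_kp, keep_idx = [], []
    for i, kp in enumerate(keypoints):
        u, v = kp.pt
        inside = any(x <= u <= x + w and y <= v <= y + h
                     for (x, y, w, h) in dynamic_boxes)
        if not inside:
            keep_kp.append(kp)
            keep_idx.append(i)
    return keep_kp, (descriptors[keep_idx] if descriptors is not None else None)
```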