Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM

Bibliographic Details
Main Authors: Yu, Shumei; Wu, Junyi; Xu, Haidong; Sun, Rongchuan; Sun, Lining
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2020
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7546858/
https://www.ncbi.nlm.nih.gov/pubmed/33101002
http://dx.doi.org/10.3389/fnbot.2020.568091
author Yu, Shumei
Wu, Junyi
Xu, Haidong
Sun, Rongchuan
Sun, Lining
collection PubMed
description This paper describes an improved brain-inspired simultaneous localization and mapping (RatSLAM) that extracts visual features from saliency maps using a frequency-tuned (FT) model. In the traditional RatSLAM algorithm, the visual template feature is organized as a one-dimensional vector whose values only depend on pixel intensity; therefore, this feature is susceptible to changes in illumination intensity. In contrast to this approach, which directly generates visual templates from raw RGB images, we propose an FT model that converts RGB images into saliency maps to obtain visual templates. The visual templates extracted from the saliency maps contain more of the feature information contained within the original images. Our experimental results demonstrate that the accuracy of loop closure detection was improved, as measured by the number of loop closures detected by our method compared with the traditional RatSLAM system. We additionally verified that the proposed FT model-based visual templates improve the robustness of familiar visual scene identification by RatSLAM.
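The frequency-tuned saliency computation described above can be illustrated with a minimal sketch. This is not the authors' code: the original FT model (in the style of Achanta et al.) operates in CIELab color space, whereas this simplified version works directly on RGB channels, and the `sigma` blur parameter is an assumed value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ft_saliency(rgb, sigma=3.0):
    """Frequency-tuned saliency sketch: per-pixel distance between the
    image's mean color vector and a Gaussian-blurred version of the image.
    Operates on RGB directly (the published FT model uses CIELab)."""
    img = rgb.astype(np.float64)
    # Mean color vector over the whole image (one value per channel).
    mean_vec = img.reshape(-1, img.shape[2]).mean(axis=0)
    # Blur each channel to suppress high-frequency texture and noise.
    blurred = np.stack(
        [gaussian_filter(img[..., c], sigma) for c in range(img.shape[2])],
        axis=-1,
    )
    # Saliency = squared Euclidean distance to the mean color.
    sal = ((blurred - mean_vec) ** 2).sum(axis=-1)
    # Normalize to [0, 1] so the map can serve as a visual-template image.
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)
```

A visual template would then be extracted from this saliency map rather than from the raw RGB image, which is the substitution the abstract describes.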
format Online
Article
Text
id pubmed-7546858
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7546858 2020-10-22 Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM Yu, Shumei; Wu, Junyi; Xu, Haidong; Sun, Rongchuan; Sun, Lining Front Neurorobot Neuroscience Frontiers Media S.A. 2020-09-25 /pmc/articles/PMC7546858/ /pubmed/33101002 http://dx.doi.org/10.3389/fnbot.2020.568091 Text en Copyright © 2020 Yu, Wu, Xu, Sun and Sun. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
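The one-dimensional visual-template comparison that the abstract refers to (RatSLAM-style matching of column-intensity profiles under small horizontal shifts) can be sketched as follows; `width` and `max_shift` are illustrative parameters, not values taken from the paper.

```python
import numpy as np

def to_template(img_1ch, width=60):
    """Collapse a single-channel image (e.g., a saliency map) into a 1-D
    visual template: column means, resampled to a fixed width and
    intensity-normalized."""
    profile = img_1ch.mean(axis=0)
    idx = np.linspace(0, profile.size - 1, width)
    tpl = np.interp(idx, np.arange(profile.size), profile)
    return tpl / (tpl.sum() + 1e-12)

def template_distance(a, b, max_shift=4):
    """Minimum mean absolute difference over small horizontal shifts,
    mimicking RatSLAM's shifted template comparison; a low distance
    signals a familiar scene (loop-closure candidate)."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            d = np.abs(a[s:] - b[:b.size - s]).mean()
        else:
            d = np.abs(a[:s] - b[-s:]).mean()
        best = min(best, d)
    return best
```

In this sketch, two views of the same place produce templates with a small `template_distance`, while novel scenes exceed a match threshold and spawn a new template.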
title Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7546858/
https://www.ncbi.nlm.nih.gov/pubmed/33101002
http://dx.doi.org/10.3389/fnbot.2020.568091