LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments
Main Authors: Colomer, Sylvain; Cuperlier, Nicolas; Bresson, Guillaume; Gaussier, Philippe; Romain, Olivier
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855039/ https://www.ncbi.nlm.nih.gov/pubmed/35187091 http://dx.doi.org/10.3389/frobt.2021.703811
_version_ | 1784653568581566464 |
author | Colomer, Sylvain Cuperlier, Nicolas Bresson, Guillaume Gaussier, Philippe Romain, Olivier |
author_facet | Colomer, Sylvain Cuperlier, Nicolas Bresson, Guillaume Gaussier, Philippe Romain, Olivier |
author_sort | Colomer, Sylvain |
collection | PubMed |
description | Autonomous vehicles require precise and reliable self-localization to cope with dynamic environments. The field of visual place recognition (VPR) aims to solve this challenge by relying on the visual modality to recognize a place despite changes in the appearance of the perceived visual scene. In this paper, we propose to tackle the VPR problem following a neuro-cybernetic approach. To this end, the Log-Polar Max-Pi (LPMP) model is introduced. This bio-inspired neural network builds a neural representation of the environment via unsupervised one-shot learning. Inspired by the spatial cognition of mammals, visual information in the LPMP model is processed through two distinct pathways: a “what” pathway that extracts and learns the local visual signatures (landmarks) of a visual scene and a “where” pathway that computes their azimuth. These two pieces of information are then merged to build a visuospatial code that is characteristic of the place where the visual scene was perceived. Three main contributions are presented in this article: 1) the LPMP model is studied and compared with NetVLAD and CoHog, two state-of-the-art VPR models; 2) a test benchmark for the evaluation of VPR models according to the type of environment traveled is proposed based on the Oxford car dataset; and 3) the impact of a novel detector leading to an uneven paving of an environment is evaluated in terms of localization performance and compared to a regular paving. Our experiments show that the LPMP model can achieve comparable or better localization performance than NetVLAD and CoHog. |
format | Online Article Text |
id | pubmed-8855039 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-88550392022-02-19 LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments Colomer, Sylvain Cuperlier, Nicolas Bresson, Guillaume Gaussier, Philippe Romain, Olivier Front Robot AI Robotics and AI Autonomous vehicles require precise and reliable self-localization to cope with dynamic environments. The field of visual place recognition (VPR) aims to solve this challenge by relying on the visual modality to recognize a place despite changes in the appearance of the perceived visual scene. In this paper, we propose to tackle the VPR problem following a neuro-cybernetic approach. To this end, the Log-Polar Max-Pi (LPMP) model is introduced. This bio-inspired neural network builds a neural representation of the environment via unsupervised one-shot learning. Inspired by the spatial cognition of mammals, visual information in the LPMP model is processed through two distinct pathways: a “what” pathway that extracts and learns the local visual signatures (landmarks) of a visual scene and a “where” pathway that computes their azimuth. These two pieces of information are then merged to build a visuospatial code that is characteristic of the place where the visual scene was perceived. Three main contributions are presented in this article: 1) the LPMP model is studied and compared with NetVLAD and CoHog, two state-of-the-art VPR models; 2) a test benchmark for the evaluation of VPR models according to the type of environment traveled is proposed based on the Oxford car dataset; and 3) the impact of a novel detector leading to an uneven paving of an environment is evaluated in terms of localization performance and compared to a regular paving. Our experiments show that the LPMP model can achieve comparable or better localization performance than NetVLAD and CoHog. Frontiers Media S.A. 
2022-02-04 /pmc/articles/PMC8855039/ /pubmed/35187091 http://dx.doi.org/10.3389/frobt.2021.703811 Text en Copyright © 2022 Colomer, Cuperlier, Bresson, Gaussier and Romain. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Robotics and AI Colomer, Sylvain Cuperlier, Nicolas Bresson, Guillaume Gaussier, Philippe Romain, Olivier LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments |
title | LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments |
title_full | LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments |
title_fullStr | LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments |
title_full_unstemmed | LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments |
title_short | LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments |
title_sort | lpmp: a bio-inspired model for visual localization in challenging environments |
topic | Robotics and AI |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855039/ https://www.ncbi.nlm.nih.gov/pubmed/35187091 http://dx.doi.org/10.3389/frobt.2021.703811 |
work_keys_str_mv | AT colomersylvain lpmpabioinspiredmodelforvisuallocalizationinchallengingenvironments AT cuperliernicolas lpmpabioinspiredmodelforvisuallocalizationinchallengingenvironments AT bressonguillaume lpmpabioinspiredmodelforvisuallocalizationinchallengingenvironments AT gaussierphilippe lpmpabioinspiredmodelforvisuallocalizationinchallengingenvironments AT romainolivier lpmpabioinspiredmodelforvisuallocalizationinchallengingenvironments |
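The two-pathway scheme described in the abstract (a "what" pathway learning local landmark signatures, a "where" pathway computing their azimuths, and a merged visuospatial code stored via one-shot learning) can be sketched in code. The following is a minimal illustration only, not the paper's implementation: the histogram-based log-polar signature, the azimuth binning, all function names, and the cosine-similarity matcher are assumptions made for the sketch.

```python
import numpy as np

def azimuth_of(x, width, fov_deg=360.0):
    """'Where' pathway: map a landmark's horizontal pixel position
    to an azimuth, assuming a panoramic image spanning fov_deg."""
    return (x / width) * fov_deg

def log_polar_signature(patch, n_rho=8, n_theta=16):
    """'What' pathway: a crude log-polar intensity histogram around a
    landmark (a stand-in for the model's learned local signature)."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices(patch.shape)
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dy, dx) + 1e-9
    theta = np.arctan2(dy, dx)  # in [-pi, pi]
    rho_bin = np.clip((np.log(r) / np.log(r.max() + 1e-9) * n_rho).astype(int),
                      0, n_rho - 1)
    th_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    sig = np.zeros((n_rho, n_theta))
    np.add.at(sig, (rho_bin, th_bin), patch)  # accumulate intensities per cell
    norm = np.linalg.norm(sig)
    return (sig / norm).ravel() if norm > 0 else sig.ravel()

def place_code(signatures, azimuths, n_az_bins=12):
    """Merge 'what' and 'where': bind each landmark signature to its
    azimuth bin, then sum into a single normalized visuospatial code."""
    code = np.zeros((signatures[0].size, n_az_bins))
    for sig, az in zip(signatures, azimuths):
        code[:, int(az / 360.0 * n_az_bins) % n_az_bins] += sig
    code = code.ravel()
    n = np.linalg.norm(code)
    return code / n if n > 0 else code

class OneShotPlaceMemory:
    """One-shot learning: each place code is stored once; recognition is
    a nearest-neighbour match (cosine similarity) over stored codes."""
    def __init__(self):
        self.codes, self.labels = [], []

    def learn(self, code, label):
        self.codes.append(code)
        self.labels.append(label)

    def recognize(self, code):
        sims = [float(c @ code) for c in self.codes]
        return self.labels[int(np.argmax(sims))]
```

Under these assumptions, learning a place is a single `learn` call with the code built from that scene's landmarks, and recognition reduces to the best-matching stored code; the real model instead realizes these steps with neural populations and a Max-Pi merging rule.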