Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation

Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model’s robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
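The abstract describes the approach only at a high level, so the following is a minimal, illustrative PyTorch sketch of that general pipeline rather than the authors' implementation: a fully convolutional encoder maps each image to a dense sequence of column descriptors, an InfoNCE-style contrastive loss (assumed here; the paper's exact objective may differ) pulls corresponding columns of aligned teach/repeat image pairs together, and the horizontal displacement is read off the peak of a cross-correlation between the two descriptor sequences. All names (FCNEmbedder, column_contrastive_loss, estimate_shift), the layer sizes, and the loss formulation are assumptions made for illustration.

```python
# Minimal sketch of the pipeline described in the abstract (assumed architecture
# and loss, not the paper's released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FCNEmbedder(nn.Module):
    """Fully convolutional encoder: (B, 3, H, W) images -> (B, C, W') column descriptors."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.net(x)               # (B, C, H', W')
        feats = feats.mean(dim=2)         # collapse the height dimension -> (B, C, W')
        return F.normalize(feats, dim=1)  # unit-norm descriptor per image column


def column_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss for aligned image pairs: column i of emb_a should match
    column i of emb_b and mismatch every other column (illustrative objective)."""
    b, c, w = emb_a.shape
    a = emb_a.permute(0, 2, 1).reshape(b * w, c)    # (B*W', C) query descriptors
    p = emb_b.permute(0, 2, 1).reshape(b * w, c)    # (B*W', C) positive descriptors
    logits = a @ p.t() / temperature                # cosine similarities (descriptors are unit-norm)
    target = torch.arange(b * w, device=a.device)   # positives lie on the diagonal
    return F.cross_entropy(logits, target)


def estimate_shift(teach_emb: torch.Tensor, repeat_emb: torch.Tensor) -> torch.Tensor:
    """Cross-correlate the two descriptor sequences along the width and return the
    signed horizontal shift (in descriptor columns) that aligns them best."""
    b, c, w = teach_emb.shape
    padded = F.pad(teach_emb, (w // 2, w // 2), mode="circular")   # (B, C, W' + 2*(W'//2))
    # A grouped conv1d computes one cross-correlation per batch item.
    corr = F.conv1d(padded.reshape(1, b * c, -1),
                    repeat_emb.reshape(b, c, w),
                    groups=b).squeeze(0)                           # (B, 2*(W'//2) + 1)
    return corr.argmax(dim=1) - w // 2                             # signed shift per pair


if __name__ == "__main__":
    model = FCNEmbedder()
    teach = torch.rand(2, 3, 128, 512)               # previously recorded ("teach") images
    repeat = torch.rand(2, 3, 128, 512)              # currently perceived ("repeat") images
    emb_t, emb_r = model(teach), model(repeat)
    loss = column_contrastive_loss(emb_t, emb_r)     # training signal for aligned pairs
    shift = estimate_shift(emb_t, emb_r)             # displacement used to steer the robot
    print(loss.item(), shift.tolist())
```

The shift returned by estimate_shift is expressed in descriptor columns; multiplying it by the encoder's total stride (8 in this sketch) gives an approximate horizontal displacement in pixels, which a VT&R controller could turn into a steering correction.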

Bibliographic Details
Main Authors: Rozsypálek, Zdeněk; Broughton, George; Linder, Pavel; Rouček, Tomáš; Blaha, Jan; Mentzl, Leonard; Kusumam, Keerthy; Krajník, Tomáš
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9030179/
https://www.ncbi.nlm.nih.gov/pubmed/35458959
http://dx.doi.org/10.3390/s22082975
Collection: PubMed
Record ID: pubmed-9030179
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2022-04-13
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).