
A robust method for approximate visual robot localization in feature-sparse sewer pipes


Bibliographic Details
Main Authors: Edwards, S., Zhang, R., Worley, R., Mihaylova, L., Aitken, J., Anderson, S. R.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10115998/
https://www.ncbi.nlm.nih.gov/pubmed/37090891
http://dx.doi.org/10.3389/frobt.2023.1150508
author Edwards, S.
Zhang, R.
Worley, R.
Mihaylova, L.
Aitken, J.
Anderson, S. R.
author_facet Edwards, S.
Zhang, R.
Worley, R.
Mihaylova, L.
Aitken, J.
Anderson, S. R.
author_sort Edwards, S.
collection PubMed
description Buried sewer pipe networks present many challenges for robot localization systems, which require non-standard solutions due to the unique nature of these environments: they cannot receive signals from global positioning systems (GPS) and can also lack the visual features necessary for standard visual odometry algorithms. In this paper, we exploit the fact that pipe joints are equally spaced and develop a robot localization method based on pipe joint detection that operates in one degree of freedom along the pipe length. Pipe joints are detected in visual images from an on-board, forward-facing (electro-optical) camera using a bag-of-keypoints visual categorization algorithm, which is trained offline by unsupervised learning from images of sewer pipe joints. We augment the pipe joint detection algorithm with drift correction using vision-based manhole recognition. We evaluated the approach using real-world data recorded from three sewer pipes (of lengths 30, 50 and 90 m) and benchmarked it against a standard method for visual odometry (ORB-SLAM3). The results demonstrated that our proposed method operates more robustly and accurately in these feature-sparse pipes: ORB-SLAM3 failed completely on one tested pipe due to a lack of visual features and gave a mean absolute localization error of approximately 12%–20% on the other pipes (regularly losing track of features and having to re-initialize multiple times), whilst our method worked successfully on all tested pipes and gave a mean absolute localization error of approximately 2%–4%. In summary, our results highlight an important trade-off between modern visual odometry algorithms, which offer potentially high precision and estimate the full six degree-of-freedom pose but can be fragile in feature-sparse pipes, and simpler, approximate localization methods, which operate in one degree of freedom along the pipe length but are more robust and can lead to substantial improvements in accuracy.
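The abstract describes two components: a bag-of-keypoints detector that flags frames containing a pipe joint, and a one-degree-of-freedom localizer that counts detected joints (with known spacing) and corrects drift when a manhole is recognized. The sketch below is only a rough illustration of that idea, not the authors' released code: the ORB features, MiniBatchKMeans vocabulary, linear SVM classifier, vocabulary size, joint spacing and manhole chainages are all assumptions introduced here for the example (the paper specifies its own feature extraction, training data and parameters).

```python
"""
Illustrative sketch of a bag-of-visual-words joint detector and a 1-DOF
joint-counting localizer with manhole-based drift correction. Assumptions:
ORB descriptors, k-means vocabulary, linear SVM, N_WORDS, JOINT_SPACING_M
and the surveyed manhole positions are placeholders, not values from the paper.
Requires: numpy, opencv-python (cv2), scikit-learn.
"""
import numpy as np
import cv2
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

N_WORDS = 64           # visual-vocabulary size (assumed)
JOINT_SPACING_M = 1.0  # nominal distance between consecutive pipe joints (assumed)

orb = cv2.ORB_create(nfeatures=500)

def orb_descriptors(image_bgr):
    """Extract ORB keypoint descriptors from one camera frame."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = orb.detectAndCompute(gray, None)
    return desc  # (n, 32) uint8 array, or None if no keypoints were found

def train_vocabulary(training_images):
    """Unsupervised step: cluster descriptors into a visual vocabulary."""
    stacks = [d for d in (orb_descriptors(im) for im in training_images) if d is not None]
    all_desc = np.vstack(stacks).astype(np.float32)
    return MiniBatchKMeans(n_clusters=N_WORDS, n_init=10).fit(all_desc)

def bow_histogram(image_bgr, vocabulary):
    """Encode a frame as a normalized bag-of-keypoints histogram."""
    desc = orb_descriptors(image_bgr)
    if desc is None:
        return np.zeros(N_WORDS)
    words = vocabulary.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=N_WORDS).astype(float)
    return hist / (hist.sum() + 1e-9)

def train_joint_classifier(images, labels, vocabulary):
    """Assumed supervised step: label frames as 'joint visible' (1) or not (0)."""
    X = np.array([bow_histogram(im, vocabulary) for im in images])
    return LinearSVC().fit(X, labels)

class PipeLocalizer:
    """1-DOF position estimate: count passed joints, reset at known manholes."""
    def __init__(self, manhole_positions_m):
        self.joint_count = 0
        self.in_joint = False                 # debounce: count each joint once
        self.position_m = 0.0
        self.manholes = manhole_positions_m   # surveyed chainages (assumed known)

    def update(self, joint_detected, manhole_detected=False):
        if joint_detected and not self.in_joint:
            self.joint_count += 1             # rising edge: a new joint was passed
        self.in_joint = joint_detected
        self.position_m = self.joint_count * JOINT_SPACING_M
        if manhole_detected and self.manholes:
            # Drift correction: snap to the nearest surveyed manhole position.
            nearest = min(self.manholes, key=lambda m: abs(m - self.position_m))
            self.position_m = nearest
            self.joint_count = round(nearest / JOINT_SPACING_M)
        return self.position_m
```

The rising-edge check in `update` is one simple way to avoid counting the same joint over consecutive frames in which it remains visible; the paper's actual detection pipeline, filtering and manhole-correction details are given in the article itself.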
format Online
Article
Text
id pubmed-10115998
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10115998 2023-04-21 A robust method for approximate visual robot localization in feature-sparse sewer pipes Edwards, S. Zhang, R. Worley, R. Mihaylova, L. Aitken, J. Anderson, S. R. Front Robot AI Robotics and AI Frontiers Media S.A. 2023-03-06 /pmc/articles/PMC10115998/ /pubmed/37090891 http://dx.doi.org/10.3389/frobt.2023.1150508 Text en Copyright © 2023 Edwards, Zhang, Worley, Mihaylova, Aitken and Anderson. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Edwards, S.
Zhang, R.
Worley, R.
Mihaylova, L.
Aitken, J.
Anderson, S. R.
A robust method for approximate visual robot localization in feature-sparse sewer pipes
title A robust method for approximate visual robot localization in feature-sparse sewer pipes
title_full A robust method for approximate visual robot localization in feature-sparse sewer pipes
title_fullStr A robust method for approximate visual robot localization in feature-sparse sewer pipes
title_full_unstemmed A robust method for approximate visual robot localization in feature-sparse sewer pipes
title_short A robust method for approximate visual robot localization in feature-sparse sewer pipes
title_sort robust method for approximate visual robot localization in feature-sparse sewer pipes
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10115998/
https://www.ncbi.nlm.nih.gov/pubmed/37090891
http://dx.doi.org/10.3389/frobt.2023.1150508
work_keys_str_mv AT edwardss arobustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT zhangr arobustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT worleyr arobustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT mihayloval arobustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT aitkenj arobustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT andersonsr arobustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT edwardss robustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT zhangr robustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT worleyr robustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT mihayloval robustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT aitkenj robustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes
AT andersonsr robustmethodforapproximatevisualrobotlocalizationinfeaturesparsesewerpipes