
Visual Pose Estimation of Rescue Unmanned Surface Vehicle From Unmanned Aerial System


Bibliographic Details

Main Authors: Dufek, Jan; Murphy, Robin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805959/
https://www.ncbi.nlm.nih.gov/pubmed/33501058
http://dx.doi.org/10.3389/frobt.2019.00042
author Dufek, Jan
Murphy, Robin
collection PubMed
description This article addresses the problem of how to visually estimate the pose of a rescue unmanned surface vehicle (USV) using an unmanned aerial system (UAS) in marine mass casualty events. A UAS visually navigating the USV can help solve problems with teleoperation and manpower requirements. The solution has to estimate full pose (both position and orientation), work outdoors from an oblique view angle (up to 85° from nadir) at large distances (180 m) in real time (5 Hz), and assume both a moving UAS (up to 22 m s⁻¹) and a moving object (up to 10 m s⁻¹). None of the 58 reviewed studies satisfied all those requirements. This article presents two algorithms for visual position estimation using the object's hue (thresholding and histogramming) and four techniques for visual orientation estimation using the object's shape, all satisfying those requirements. Four physical experiments were performed to validate feasibility and compare the thresholding and histogramming algorithms. Histogramming had statistically significantly lower position estimation error than thresholding for all four trials (p-values ranged from ≈0 to 8.23263 × 10⁻²⁹), but it had statistically significantly lower orientation estimation error for only two of the trials (p-values 3.51852 × 10⁻³⁹ and 1.32762 × 10⁻⁴⁶). The mean position estimation error ranged from 7 to 43 px, while the mean orientation estimation error ranged from 0.134 to 0.480 rad. The histogramming algorithm demonstrated feasibility across variations in environmental conditions and physical settings while requiring fewer parameters than thresholding. However, three problems were identified: the orientation estimation error was large for both algorithms, both required manual tuning before each trial, and neither was robust enough to recover from significant changes in illumination conditions.
To reduce the orientation estimation error, inverse perspective warping will be necessary to reduce the perspective distortion. To eliminate the need for tuning and increase robustness, a machine learning approach to pose estimation might ultimately be a better solution.
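The hue-based position estimate and shape-based orientation estimate summarized in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the hue grid, the fixed threshold bounds `lo`/`hi`, and the use of central second moments for the blob's principal axis are all assumptions made for the example.

```python
import math

def estimate_pose(hue, lo, hi):
    """Estimate object position (blob centroid) and orientation
    (principal axis of the blob) from a 2-D grid of hue values,
    keeping only pixels whose hue falls in [lo, hi]."""
    pts = [(x, y) for y, row in enumerate(hue)
                  for x, h in enumerate(row) if lo <= h <= hi]
    if not pts:
        return None  # object not visible in this frame
    n = len(pts)
    # Position estimate: centroid of the thresholded pixels.
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    # Orientation estimate: central second moments give the
    # principal axis of the pixel blob.
    mxx = sum((x - cx) ** 2 for x, _ in pts) / n
    myy = sum((y - cy) ** 2 for _, y in pts) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pts) / n
    theta = 0.5 * math.atan2(2 * mxy, mxx - myy)
    return (cx, cy), theta

# Toy frame: a diagonal stripe of "orange" hue (30) on a zero background.
frame = [[30 if x == y else 0 for x in range(5)] for y in range(5)]
(cx, cy), theta = estimate_pose(frame, 20, 40)
```

In a real pipeline the hue grid would come from converting each video frame to HSV; the histogramming variant described in the abstract would replace the fixed `[lo, hi]` bounds with back-projection of a hue histogram sampled from the object, which is why it needs fewer hand-tuned parameters.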
format Online
Article
Text
id pubmed-7805959
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7805959 2021-01-25 Visual Pose Estimation of Rescue Unmanned Surface Vehicle From Unmanned Aerial System. Dufek, Jan; Murphy, Robin. Front Robot AI (Robotics and AI). Frontiers Media S.A. 2019-05-31 /pmc/articles/PMC7805959/ /pubmed/33501058 http://dx.doi.org/10.3389/frobt.2019.00042 Text en Copyright © 2019 Dufek and Murphy. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Visual Pose Estimation of Rescue Unmanned Surface Vehicle From Unmanned Aerial System
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805959/
https://www.ncbi.nlm.nih.gov/pubmed/33501058
http://dx.doi.org/10.3389/frobt.2019.00042