
Teleoperation and Visualization Interfaces for Remote Intervention in Space

Bibliographic Details
Main Authors: Kazanzides, Peter; Vagvolgyi, Balazs P.; Pryor, Will; Deguet, Anton; Leonard, Simon; Whitcomb, Louis L.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8673825/
https://www.ncbi.nlm.nih.gov/pubmed/34926590
http://dx.doi.org/10.3389/frobt.2021.747917

Description: Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators’ situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.

Journal: Front Robot AI (Frontiers in Robotics and AI), Robotics and AI section; published online 2021-12-01.
Copyright © 2021 Kazanzides, Vagvolgyi, Pryor, Deguet, Leonard and Whitcomb. Open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/