Vision-Based Learning from Demonstration System for Robot Arms

Bibliographic Details
Main Authors: Hwang, Pin-Jui, Hsu, Chen-Chien, Chou, Po-Yung, Wang, Wei-Yen, Lin, Cheng-Hung
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002941/
https://www.ncbi.nlm.nih.gov/pubmed/35408292
http://dx.doi.org/10.3390/s22072678
Collection: PubMed
Description: Robotic arms have been widely used in various industries and have the advantages of cost savings, high productivity, and efficiency. Although robotic arms are good at increasing efficiency in repetitive tasks, they still need to be re-programmed and optimized when new tasks are to be deployed, resulting in detrimental downtime and high cost. It is therefore the objective of this paper to present a learning from demonstration (LfD) robotic system to provide a more intuitive way for robots to efficiently perform tasks through learning from human demonstration on the basis of two major components: understanding through human demonstration and reproduction by robot arm. To understand human demonstration, we propose a vision-based spatial-temporal action detection method to detect human actions that focuses on meticulous hand movement in real time to establish an action base. An object trajectory inductive method is then proposed to obtain a key path for objects manipulated by the human through multiple demonstrations. In robot reproduction, we integrate the sequence of actions in the action base and the key path derived by the object trajectory inductive method for motion planning to reproduce the task demonstrated by the human user. Because of the capability of learning from demonstration, the robot can reproduce the tasks that the human demonstrated with the help of vision sensors in unseen contexts.
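
The abstract describes a two-stage pipeline: detected human actions are stored in an action base, a key path is induced from object trajectories across multiple demonstrations, and the two are combined for motion planning during reproduction. The paper itself provides no code; the following minimal Python sketch only illustrates that flow under stated assumptions, and every name in it (Action, ActionBase, induce_key_path, robot.plan_and_execute) is a hypothetical stand-in, not the authors' implementation.

import numpy as np
from dataclasses import dataclass, field

@dataclass
class Action:
    # One detected human action, e.g. "grasp", "move", "release",
    # with its time span in the demonstration video (seconds).
    label: str
    t_start: float
    t_end: float

@dataclass
class ActionBase:
    # Temporally ordered sequence of detected actions.
    actions: list = field(default_factory=list)

    def add(self, action):
        self.actions.append(action)
        self.actions.sort(key=lambda a: a.t_start)

def induce_key_path(demo_trajectories, n_keypoints=10):
    # Toy stand-in for the object trajectory inductive method:
    # resample each demonstrated (T, 3) object path to n_keypoints
    # by linear interpolation, then average across demonstrations.
    resampled = []
    for traj in demo_trajectories:
        traj = np.asarray(traj, dtype=float)
        idx = np.linspace(0.0, len(traj) - 1, n_keypoints)
        lo = np.floor(idx).astype(int)
        hi = np.ceil(idx).astype(int)
        w = (idx - lo)[:, None]
        resampled.append((1.0 - w) * traj[lo] + w * traj[hi])
    return np.mean(resampled, axis=0)  # (n_keypoints, 3) key path

def reproduce(action_base, key_path, robot):
    # Pair each action with a key-path waypoint and hand both to the
    # robot's motion planner (robot.plan_and_execute is hypothetical).
    for action, waypoint in zip(action_base.actions, key_path):
        robot.plan_and_execute(action.label, waypoint)
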
ID: pubmed-9002941
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2022-03-31
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).