
A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations

In manufacturing, traditional task pre-programming methods limit the efficiency of human–robot skill transfer. This paper proposes a novel task-learning strategy that enables robots to learn skills flexibly from human demonstrations and to generalize those skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm to segment the complete movements into movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model and Gaussian mixture regression (GMM-GMR) to extract an optimal trajectory that encapsulates sufficient human features, and we utilize dynamical movement primitives (DMPs) for trajectory generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm to learn goal configurations, i.e., the spatial relationships between task-relevant objects. Only one multioperation demonstration is required for learning, and robots can generalize goal configurations to new task situations while following the task execution order from the demonstration. A series of peg-in-hole experiments demonstrates that the proposed task-learning strategy obtains exact pick-and-place points and generates smooth, human-like trajectories, verifying its effectiveness.


Bibliographic Details
Main Authors: Ding, Guanwen, Liu, Yubin, Zang, Xizhe, Zhang, Xuehe, Liu, Gangfeng, Zhao, Jie
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583967/
https://www.ncbi.nlm.nih.gov/pubmed/32992888
http://dx.doi.org/10.3390/s20195505
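
The abstract above describes a threshold-based heuristic that segments the captured hand movements into movement primitives (MPs). The paper's actual features and threshold values are not given in this record, so the sketch below is only an illustration of the general idea: a minimal velocity-threshold segmenter in Python whose function name, threshold, and toy data are all hypothetical.

import numpy as np

def segment_by_velocity(positions, dt, v_thresh=0.02, min_len=10):
    # positions: (T, 3) array of hand positions sampled every dt seconds.
    # A candidate movement primitive is a maximal run of samples whose
    # speed stays above v_thresh; jitter-length runs are discarded.
    # v_thresh and min_len are illustrative values, not the paper's.
    vel = np.linalg.norm(np.gradient(positions, dt, axis=0), axis=1)
    moving = vel > v_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                        # movement onset
        elif not m and start is not None:
            if i - start >= min_len:         # keep only sustained motion
                segments.append((start, i))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))
    return segments                          # list of (start, end) indices

# Toy check: move, pause, move again.
ramp = np.linspace(0, 1, 100)[:, None] * np.array([1.0, 0.0, 0.0])
hold = np.repeat(ramp[-1:], 50, axis=0)
traj = np.vstack([ramp, hold, hold[-1] + ramp])
print(segment_by_velocity(traj, dt=0.01))    # ~[(0, 100), (150, 250)]
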
_version_ 1783599500548374528
author Ding, Guanwen
Liu, Yubin
Zang, Xizhe
Zhang, Xuehe
Liu, Gangfeng
Zhao, Jie
author_facet Ding, Guanwen
Liu, Yubin
Zang, Xizhe
Zhang, Xuehe
Liu, Gangfeng
Zhao, Jie
author_sort Ding, Guanwen
collection PubMed
description In manufacturing, traditional task pre-programming methods limit the efficiency of human–robot skill transfer. This paper proposes a novel task-learning strategy that enables robots to learn skills flexibly from human demonstrations and to generalize those skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm to segment the complete movements into movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model and Gaussian mixture regression (GMM-GMR) to extract an optimal trajectory that encapsulates sufficient human features, and we utilize dynamical movement primitives (DMPs) for trajectory generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm to learn goal configurations, i.e., the spatial relationships between task-relevant objects. Only one multioperation demonstration is required for learning, and robots can generalize goal configurations to new task situations while following the task execution order from the demonstration. A series of peg-in-hole experiments demonstrates that the proposed task-learning strategy obtains exact pick-and-place points and generates smooth, human-like trajectories, verifying its effectiveness.
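
For the GMM-GMR step named in the description, the sketch below shows the standard construction: fit a Gaussian mixture to (time, position) samples pooled from several demonstrations, then condition each component on time to regress a single mean trajectory. It is a generic reconstruction, not the authors' code; the component count, toy demonstrations, and function name are assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gmm_gmr(demos, n_components=6, n_query=100):
    # demos: list of (T_i, 1+D) arrays, rows = [time, y_1, ..., y_D].
    data = np.vstack(demos)
    gmm = GaussianMixture(n_components, covariance_type="full").fit(data)
    t_query = np.linspace(data[:, 0].min(), data[:, 0].max(), n_query)
    Y = np.zeros((n_query, data.shape[1] - 1))
    for n, t in enumerate(t_query):
        # Responsibility of each Gaussian component for the query time t.
        h = gmm.weights_ * np.array([
            norm.pdf(t, gmm.means_[k, 0], np.sqrt(gmm.covariances_[k][0, 0]))
            for k in range(gmm.n_components)])
        h /= h.sum() + 1e-12
        for k in range(gmm.n_components):
            S = gmm.covariances_[k]
            # Conditional mean of the spatial dims given time t.
            cond = gmm.means_[k, 1:] + S[1:, 0] / S[0, 0] * (t - gmm.means_[k, 0])
            Y[n] += h[k] * cond
    return t_query, Y

# Toy demos: three noisy executions of the same reach motion.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 80)
demos = [np.column_stack([t, np.sin(np.pi * t) + 0.02 * rng.standard_normal(80)])
         for _ in range(3)]
t_q, y_mean = fit_gmm_gmr(demos)   # y_mean: one smooth averaged trajectory
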
format Online
Article
Text
id pubmed-7583967
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7583967 2020-10-29 A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations Ding, Guanwen Liu, Yubin Zang, Xizhe Zhang, Xuehe Liu, Gangfeng Zhao, Jie Sensors (Basel) Article In manufacturing, traditional task pre-programming methods limit the efficiency of human–robot skill transfer. This paper proposes a novel task-learning strategy that enables robots to learn skills flexibly from human demonstrations and to generalize those skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm to segment the complete movements into movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model and Gaussian mixture regression (GMM-GMR) to extract an optimal trajectory that encapsulates sufficient human features, and we utilize dynamical movement primitives (DMPs) for trajectory generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm to learn goal configurations, i.e., the spatial relationships between task-relevant objects. Only one multioperation demonstration is required for learning, and robots can generalize goal configurations to new task situations while following the task execution order from the demonstration. A series of peg-in-hole experiments demonstrates that the proposed task-learning strategy obtains exact pick-and-place points and generates smooth, human-like trajectories, verifying its effectiveness. MDPI 2020-09-25 /pmc/articles/PMC7583967/ /pubmed/32992888 http://dx.doi.org/10.3390/s20195505 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ding, Guanwen
Liu, Yubin
Zang, Xizhe
Zhang, Xuehe
Liu, Gangfeng
Zhao, Jie
A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations
title A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations
title_full A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations
title_fullStr A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations
title_full_unstemmed A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations
title_short A Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations
title_sort task-learning strategy for robotic assembly tasks from human demonstrations
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583967/
https://www.ncbi.nlm.nih.gov/pubmed/32992888
http://dx.doi.org/10.3390/s20195505
work_keys_str_mv AT dingguanwen atasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT liuyubin atasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT zangxizhe atasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT zhangxuehe atasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT liugangfeng atasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT zhaojie atasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT dingguanwen tasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT liuyubin tasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT zangxizhe tasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT zhangxuehe tasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT liugangfeng tasklearningstrategyforroboticassemblytasksfromhumandemonstrations
AT zhaojie tasklearningstrategyforroboticassemblytasksfromhumandemonstrations
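
Finally, the dynamical movement primitives (DMPs) credited with trajectory generalization can be sketched in their standard discrete form: an Ijspeert-style transformation system driven by a learned forcing term, so a demonstrated shape can be replayed toward a new goal. This is a minimal one-dimensional illustration under textbook conventions, not the paper's implementation; the gains and basis count are arbitrary choices.

import numpy as np

class DiscreteDMP:
    # Minimal 1-DOF discrete DMP:
    #   canonical system:      tau * x' = -alpha_x * x
    #   transformation system: tau * v' = alpha*(beta*(g - y) - v) + f(x)*(g - y0)
    # f is a normalized weighted sum of Gaussian basis functions in phase x.
    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.n_basis, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))   # basis centers
        self.h = 1.0 / np.gradient(self.c) ** 2                  # basis widths
        self.w = np.zeros(n_basis)

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return x * psi @ self.w / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        # Invert the transformation system to get the forcing term the demo
        # implies, then fit one weight per basis by locally weighted regression.
        self.y0, self.g = y_demo[0], y_demo[-1]
        self.tau = tau = len(y_demo) * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(len(y_demo)) * dt / tau)
        f_target = tau**2 * ydd - self.alpha * (self.beta * (self.g - y_demo) - tau * yd)
        s = x * (self.g - self.y0)
        for i in range(self.n_basis):
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (s * psi) @ f_target / ((s * psi) @ s + 1e-10)

    def rollout(self, g=None, dt=0.01):
        # Integrate toward a (possibly new) goal g; the demo's shape is kept.
        g = self.g if g is None else g
        y, v, x, traj = self.y0, 0.0, 1.0, [self.y0]
        for _ in range(int(self.tau / dt)):
            f = self._forcing(x) * (g - self.y0)
            v += (self.alpha * (self.beta * (g - y) - v) + f) / self.tau * dt
            y += v / self.tau * dt
            x += -self.alpha_x * x / self.tau * dt
            traj.append(y)
        return np.array(traj)

# Learn from a smooth 0 -> 1 reach, then replay toward a new goal of 1.5.
t = np.linspace(0, 1, 200)
demo = 10 * t**3 - 15 * t**4 + 6 * t**5        # minimum-jerk-like profile
dmp = DiscreteDMP()
dmp.fit(demo, dt=t[1] - t[0])
new_traj = dmp.rollout(g=1.5)                  # same shape, rescaled to new goal
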