A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot

Bibliographic Details
Main Authors: Han, Jinghai; Liu, Bo; Jia, Yongle; Jin, Shoufeng; Sulowicz, Maciej; Glowacz, Adam; Królczyk, Grzegorz; Li, Zhixiong
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9227217/
https://www.ncbi.nlm.nih.gov/pubmed/35744500
http://dx.doi.org/10.3390/mi13060886
_version_ 1784734110993874944
author Han, Jinghai
Liu, Bo
Jia, Yongle
Jin, Shoufeng
Sulowicz, Maciej
Glowacz, Adam
Królczyk, Grzegorz
Li, Zhixiong
author_facet Han, Jinghai
Liu, Bo
Jia, Yongle
Jin, Shoufeng
Sulowicz, Maciej
Glowacz, Adam
Królczyk, Grzegorz
Li, Zhixiong
author_sort Han, Jinghai
collection PubMed
description This work proposes a Kinect V2-based visual method to reduce the yarn-bobbin-handling robot's dependence on human operators during grasping. In this method, a Kinect V2 camera produces three-dimensional (3D) yarn-bobbin point cloud data for the robot in a work scenario. After the noisy points are removed by a suitable filtering process, the M-estimator sample consensus (MSAC) algorithm is employed to fit a plane to the 3D cloud data; principal component analysis (PCA) is then used to coarsely register the template point cloud with the yarn-bobbin point cloud and obtain the initial position of the yarn bobbin. Lastly, the iterative closest point (ICP) algorithm refines the registration of the 3D cloud data to determine the precise pose of the yarn bobbin. To evaluate the proposed method, an experimental platform is developed to validate the grasping operation of the yarn-bobbin robot in different scenarios. The results show that the average working time of the robot system is within 10 s and the grasping success rate is above 80%, which meets industrial production requirements.
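The recognition pipeline described above (noise filtering, plane fitting, PCA-based coarse registration, ICP refinement) can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes the Open3D and NumPy libraries, uses illustrative parameter values, and substitutes Open3D's plain RANSAC plane segmentation for the MSAC variant used in the paper.

import numpy as np
import open3d as o3d


def coarse_align_pca(source_pts, target_pts):
    """Rough registration: align the principal axes and centroids of two clouds."""
    def frame(pts):
        centroid = pts.mean(axis=0)
        # Eigenvectors of the covariance matrix give the principal axes.
        _, axes = np.linalg.eigh(np.cov((pts - centroid).T))
        return centroid, axes

    c_src, r_src = frame(source_pts)
    c_tgt, r_tgt = frame(target_pts)
    rotation = r_tgt @ r_src.T
    if np.linalg.det(rotation) < 0:  # eigenvector signs are arbitrary; keep a proper rotation
        r_tgt[:, 0] *= -1
        rotation = r_tgt @ r_src.T
    transform = np.eye(4)
    transform[:3, :3] = rotation
    transform[:3, 3] = c_tgt - rotation @ c_src  # match centroids after rotating
    return transform


def estimate_bobbin_pose(scene, template):
    """Return a 4x4 pose of the bobbin template within the scene point cloud."""
    # 1. Noise removal: downsample and discard statistical outliers.
    scene = scene.voxel_down_sample(voxel_size=0.005)
    scene, _ = scene.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # 2. Plane fitting (RANSAC here, MSAC in the paper) to isolate the work surface;
    #    the points off the plane are treated as the yarn-bobbin candidate.
    _, plane_inliers = scene.segment_plane(distance_threshold=0.01,
                                           ransac_n=3, num_iterations=1000)
    bobbin = scene.select_by_index(plane_inliers, invert=True)

    # 3. PCA-based coarse registration gives the initial pose of the template.
    init = coarse_align_pca(np.asarray(template.points), np.asarray(bobbin.points))

    # 4. ICP fine registration refines that pose.
    result = o3d.pipelines.registration.registration_icp(
        template, bobbin, max_correspondence_distance=0.02, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

In a full system the returned pose would still have to be transformed into the robot's coordinate frame (for example via hand-eye calibration) before a grasp is executed; that step is outside this sketch.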
format Online
Article
Text
id pubmed-9227217
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-92272172022-06-25 A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot Han, Jinghai Liu, Bo Jia, Yongle Jin, Shoufeng Sulowicz, Maciej Glowacz, Adam Królczyk, Grzegorz Li, Zhixiong Micromachines (Basel) Article This work proposes a Kinect V2-based visual method to solve the human dependence on the yarn bobbin robot in the grabbing operation. In this new method, a Kinect V2 camera is used to produce three-dimensional (3D) yarn-bobbin point cloud data for the robot in a work scenario. After removing the noise point cloud through a proper filtering process, the M-estimator sample consensus (MSAC) algorithm is employed to find the fitting plane of the 3D cloud data; then, the principal component analysis (PCA) is adopted to roughly register the template point cloud and the yarn-bobbin point cloud to define the initial position of the yarn bobbin. Lastly, the iterative closest point (ICP) algorithm is used to achieve precise registration of the 3D cloud data to determine the precise pose of the yarn bobbin. To evaluate the performance of the proposed method, an experimental platform is developed to validate the grabbing operation of the yarn bobbin robot in different scenarios. The analysis results show that the average working time of the robot system is within 10 s, and the grasping success rate is above 80%, which meets the industrial production requirements. MDPI 2022-05-31 /pmc/articles/PMC9227217/ /pubmed/35744500 http://dx.doi.org/10.3390/mi13060886 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Han, Jinghai
Liu, Bo
Jia, Yongle
Jin, Shoufeng
Sulowicz, Maciej
Glowacz, Adam
Królczyk, Grzegorz
Li, Zhixiong
A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot
title A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot
title_full A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot
title_fullStr A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot
title_full_unstemmed A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot
title_short A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot
title_sort new kinect v2-based method for visual recognition and grasping of a yarn-bobbin-handling robot
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9227217/
https://www.ncbi.nlm.nih.gov/pubmed/35744500
http://dx.doi.org/10.3390/mi13060886
work_keys_str_mv AT hanjinghai anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT liubo anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT jiayongle anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT jinshoufeng anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT sulowiczmaciej anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT glowaczadam anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT krolczykgrzegorz anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT lizhixiong anewkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT hanjinghai newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT liubo newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT jiayongle newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT jinshoufeng newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT sulowiczmaciej newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT glowaczadam newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT krolczykgrzegorz newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot
AT lizhixiong newkinectv2basedmethodforvisualrecognitionandgraspingofayarnbobbinhandlingrobot