Robust Fastener Detection Based on Force and Vision Algorithms in Robotic (Un)Screwing Applications


Bibliographic Details
Main Authors: Espinosa Peralta, Paul, Ferre, Manuel, Sánchez-Urán, Miguel Ángel
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181754/
https://www.ncbi.nlm.nih.gov/pubmed/37177735
http://dx.doi.org/10.3390/s23094527
author Espinosa Peralta, Paul
Ferre, Manuel
Sánchez-Urán, Miguel Ángel
collection PubMed
description This article addresses one of the most demanding tasks in the manufacturing and industrial maintenance sectors: providing robots with a novel and robust solution to detect a fastener and its rotation in (un)screwing tasks over surfaces parallel to the tool. To this end, the vision system is based on an industrial camera with a dynamic exposure time, a tunable liquid crystal lens (TLCL), and active near-infrared reflectance (NIR) illumination. These camera parameters, combined with a fixed working distance (WD) and a variable or constant field of view (FOV), make it possible to work with a variety of fastener sizes under several lighting conditions. The development also uses a collaborative robot with an embedded force sensor to verify the success of fastener localization in a real test. Robust algorithms based on segmentation neural networks (SNN) and vision were developed to find the center and rotation of hexagonal fasteners in flawless, worn, scratched, and rusty conditions. The SNNs were tested on a graphics processing unit (GPU), a central processing unit (CPU), and edge devices such as the Jetson Xavier NX, the Intel Neural Compute Stick 2 (INCS2), and the M.2 Accelerator with Dual Edge TPU (DETPU), with optimization parameters such as unsigned integer (UINT) and floating-point (FP) precision, to understand their performance. A virtual programmable logic controller (PLC) was mounted on a personal computer (PC) as the main controller to process the images and save the data. Moreover, a mathematical analysis based on International Organization for Standardization (ISO) standards and manual socket wrench patents was performed to determine the maximum allowed error. In addition, the work was substantiated by exhaustive evaluation tests validating the tolerance errors, the robot forces for successfully completed tasks, and the algorithms implemented.
As a result of this work, the translation tolerance increases with fastener size, from 0.75 for M6 to 2.50 for M24, whereas the rotation tolerance decreases with size, from 5.5° for M6 to 3.5° for M24. The proposed methodology is robust to outlier contours and false vertices produced by distorted masks under non-constant illumination; it reaches an average accuracy of 99.86% for vertex detection and 100% for center detection, and the combined runtime of the SNN and the proposed algorithms is 73.91 ms on an Intel Core i9 CPU. This work is a relevant contribution to industrial robotics and improves current applications.
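The pose estimate the abstract describes (center and rotation of a hexagonal fastener head, where rotation is only meaningful modulo 60° because of the six-fold symmetry) can be sketched as follows. This is a minimal illustration assuming the six vertices have already been extracted from the segmentation mask; `hexagon_pose` is a hypothetical helper, not the authors' algorithm:

```python
import math

def hexagon_pose(vertices):
    """Estimate the center and rotation of a hexagonal fastener head
    from its six detected vertices (illustrative sketch only; not the
    algorithm from the article).

    Returns ((cx, cy), rotation_deg) with rotation in [0, 60) degrees,
    since a hexagon maps onto itself under 60-degree turns.
    """
    # Center: centroid of the six vertices.
    cx = sum(x for x, _ in vertices) / len(vertices)
    cy = sum(y for _, y in vertices) / len(vertices)
    # Rotation: circular mean of the vertex angles folded by the
    # six-fold symmetry (averaging 6*theta avoids wrap-around at 0/60).
    s = sum(math.sin(6 * math.atan2(y - cy, x - cx)) for x, y in vertices)
    c = sum(math.cos(6 * math.atan2(y - cy, x - cx)) for x, y in vertices)
    rotation = math.degrees(math.atan2(s, c) / 6) % 60.0
    return (cx, cy), rotation

# Usage: a regular hexagon centered at (10, 5), rotated 12 degrees.
verts = [(10 + 4 * math.cos(math.radians(12 + 60 * k)),
          5 + 4 * math.sin(math.radians(12 + 60 * k))) for k in range(6)]
center, rot = hexagon_pose(verts)
```

Folding the angles by the symmetry before averaging keeps the estimate stable even when individual vertices are noisy, which is the failure mode the article attributes to distorted masks.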
format Online
Article
Text
id pubmed-10181754
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10181754 2023-05-13 Robust Fastener Detection Based on Force and Vision Algorithms in Robotic (Un)Screwing Applications Espinosa Peralta, Paul Ferre, Manuel Sánchez-Urán, Miguel Ángel Sensors (Basel) Article MDPI 2023-05-06 /pmc/articles/PMC10181754/ /pubmed/37177735 http://dx.doi.org/10.3390/s23094527 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Robust Fastener Detection Based on Force and Vision Algorithms in Robotic (Un)Screwing Applications
topic Article