
GR-ConvNet v2: A Real-Time Multi-Grasp Detection Network for Robotic Grasping †


Bibliographic Details
Main Authors: Kumra, Sulabh; Joshi, Shirin; Sahin, Ferat
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9415764/
https://www.ncbi.nlm.nih.gov/pubmed/36015978
http://dx.doi.org/10.3390/s22166208
author Kumra, Sulabh
Joshi, Shirin
Sahin, Ferat
collection PubMed
description We propose a dual-module robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from an n-channel image of the scene. We present an improved version of the Generative Residual Convolutional Neural Network (GR-ConvNet v2) model that can generate robust antipodal grasps from n-channel image input at real-time speeds (20 ms). We evaluated the proposed model architecture on three standard datasets and achieved new state-of-the-art accuracies of 98.8%, 95.1%, and 97.4% on the Cornell, Jacquard, and Graspnet grasping datasets, respectively. Empirical results show that our model significantly outperformed prior work under a stricter IoU-based grasp detection metric. We conducted a suite of tests in simulation and the real world on a diverse set of previously unseen household items and objects with adversarial geometry. We demonstrate the adaptability of our approach by directly transferring the trained model to a 7 DoF robotic manipulator, achieving grasp success rates of 95.4% and 93.0% on novel household and adversarial objects, respectively. Furthermore, we validate the generalization capability of our pixel-wise grasp prediction model by evaluating it on the complex Ravens-10 benchmark tasks, some of which require closed-loop visual feedback for multi-step sequencing.
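The "stricter IoU-based grasp detection metric" referenced in the abstract builds on the rectangle metric conventionally used for the Cornell and Jacquard benchmarks: a predicted grasp rectangle counts as correct when its Jaccard index (IoU) with some ground-truth rectangle exceeds a threshold (commonly 0.25) and its orientation is within 30° of that ground truth. The sketch below is a minimal, self-contained illustration of that conventional metric, not code from the paper; the (center, width, height, angle) parameterization and the default thresholds are assumptions drawn from the broader grasp-detection literature.

```python
import math

def rect_corners(cx, cy, w, h, theta):
    """Corner points (counter-clockwise) of a grasp rectangle centered at
    (cx, cy), with jaw opening w and gripper height h, rotated by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    local = [(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)]
    return [(cx + c*x - s*y, cy + s*x + c*y) for x, y in local]

def _poly_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    a = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def _inside(p, a, b):
    # True if p lies on or to the left of the directed edge a -> b.
    return (b[0]-a[0]) * (p[1]-a[1]) - (b[1]-a[1]) * (p[0]-a[0]) >= 0

def _line_inter(p, q, a, b):
    # Intersection of segment p-q with the infinite line through a and b.
    d = (p[0]-q[0]) * (a[1]-b[1]) - (p[1]-q[1]) * (a[0]-b[0])
    t = ((p[0]-a[0]) * (a[1]-b[1]) - (p[1]-a[1]) * (a[0]-b[0])) / d
    return (p[0] + t * (q[0]-p[0]), p[1] + t * (q[1]-p[1]))

def _clip(subject, clip_poly):
    """Sutherland-Hodgman clipping of `subject` by a convex CCW `clip_poly`."""
    output = list(subject)
    for i in range(len(clip_poly)):
        a, b = clip_poly[i], clip_poly[(i + 1) % len(clip_poly)]
        inp, output = output, []
        if not inp:
            break
        prev = inp[-1]
        for cur in inp:
            if _inside(cur, a, b):
                if not _inside(prev, a, b):
                    output.append(_line_inter(prev, cur, a, b))
                output.append(cur)
            elif _inside(prev, a, b):
                output.append(_line_inter(prev, cur, a, b))
            prev = cur
    return output

def grasp_iou(g1, g2):
    """IoU of two grasp rectangles, each given as (cx, cy, w, h, theta)."""
    p1, p2 = rect_corners(*g1), rect_corners(*g2)
    inter = _clip(p1, p2)
    ai = _poly_area(inter) if len(inter) >= 3 else 0.0
    return ai / (_poly_area(p1) + _poly_area(p2) - ai)

def grasp_correct(pred, gt, iou_thresh=0.25, angle_thresh=math.radians(30)):
    """Rectangle metric: IoU above threshold AND orientation within 30 degrees."""
    d = abs(pred[4] - gt[4]) % math.pi          # grasp angle is pi-periodic
    d = min(d, math.pi - d)
    return d <= angle_thresh and grasp_iou(pred, gt) >= iou_thresh
```

A "stricter" variant of this metric, as the abstract suggests, would typically raise `iou_thresh` above the traditional 0.25; the paper itself specifies the exact thresholds used.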
format Online Article Text
id pubmed-9415764
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9415764 2022-08-27 Sensors (Basel) Article MDPI 2022-08-18 /pmc/articles/PMC9415764/ /pubmed/36015978 http://dx.doi.org/10.3390/s22166208 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title GR-ConvNet v2: A Real-Time Multi-Grasp Detection Network for Robotic Grasping †
topic Article