
Depth Image–Based Deep Learning of Grasp Planning for Textureless Planar-Faced Objects in Vision-Guided Robotic Bin-Picking

Bin-picking of small parcels and other textureless planar-faced objects is a common task at warehouses. A general color image–based vision-guided robot picking system requires feature extraction and goal image preparation of various objects. However, feature extraction for goal image matching is difficult for textureless objects. Further, prior preparation of huge numbers of goal images is impractical at a warehouse. In this paper, we propose a novel depth image–based vision-guided robot bin-picking system for textureless planar-faced objects. Our method uses a deep convolutional neural network (DCNN) model that is trained on 15,000 annotated depth images synthetically generated in a physics simulator to directly predict grasp points without object segmentation. Unlike previous studies that predicted grasp points for a robot suction hand with only one vacuum cup, our DCNN also predicts optimal grasp patterns for a hand with two vacuum cups (left cup on, right cup on, or both cups on). Further, we propose a surface feature descriptor to extract surface features (center position and normal) and refine the predicted grasp point position, removing the need for texture features for vision-guided robot control and sim-to-real modification for DCNN model training. Experimental results demonstrate the efficiency of our system, namely that a robot with 7 degrees of freedom can pick randomly posed textureless boxes in a cluttered environment with a 97.5% success rate at speeds exceeding 1000 pieces per hour.
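
The abstract describes the pipeline only at a high level: a DCNN predicts grasp points and a two-cup suction pattern from a depth image, and a surface feature descriptor then extracts the center position and normal of the target face to refine the grasp point. The paper's own descriptor is not reproduced here; the sketch below is only a minimal illustration of one standard way to recover a center and normal from a depth-image patch (back-projection with assumed pinhole intrinsics fx, fy, cx, cy, followed by a least-squares plane fit), with a hypothetical helper name `patch_center_and_normal`.

```python
# Illustrative sketch (not the paper's algorithm): estimate the center position and
# surface normal of a planar face from a depth-image patch by back-projecting pixels
# with pinhole-camera intrinsics and fitting a plane with least squares (SVD).
import numpy as np

def patch_center_and_normal(depth, u0, v0, size, fx, fy, cx, cy):
    """Return (center_xyz, unit_normal) for a size x size patch at pixel (u0, v0).

    depth : 2-D array of depth values in meters (0 = missing measurement).
    """
    us, vs = np.meshgrid(np.arange(u0, u0 + size), np.arange(v0, v0 + size))
    zs = depth[vs, us]
    valid = zs > 0                       # ignore pixels with no depth return
    us, vs, zs = us[valid], vs[valid], zs[valid]

    # Back-project pixels to 3-D camera coordinates (pinhole model).
    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    pts = np.stack([xs, ys, zs], axis=1)

    # Plane fit: the normal is the right singular vector of the centered points
    # with the smallest singular value.
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center)
    normal = vt[-1]
    if normal[2] > 0:                    # orient the normal toward the camera (-z)
        normal = -normal
    return center, normal / np.linalg.norm(normal)

# Example with synthetic values (assumed intrinsics and patch location):
depth_img = np.full((480, 640), 0.80)    # flat surface 0.8 m from the camera
c, n = patch_center_and_normal(depth_img, u0=300, v0=220, size=32,
                               fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(c, n)
```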

Bibliographic Details
Main Authors: Jiang, Ping; Ishihara, Yoshiyuki; Sugiyama, Nobukatsu; Oaki, Junji; Tokura, Seiji; Sugahara, Atsushi; Ogawa, Akihito
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7038393/
https://www.ncbi.nlm.nih.gov/pubmed/32012874
http://dx.doi.org/10.3390/s20030706
Collection: PubMed
Record ID: pubmed-7038393
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2020-01-28
License: © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).