
A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints

Robot grasping and sorting is an important task in intelligent manufacturing. However, traditional single-view manipulator grasping methods that rely on a 2D camera suffer from low efficiency and low accuracy in stacked and occluded scenes, because a single-view 2D camera misses scene information and grasp-only strategies cannot rearrange a scene that is difficult to grasp. To address this issue, a pushing-grasping collaborative method based on a deep Q-network with dual viewpoints is proposed in this paper. The method adopts an improved deep Q-network algorithm and uses an RGB-D camera to obtain RGB images and point clouds of the objects from two viewpoints, which resolves the problem of missing information. Moreover, it combines pushing and grasping actions within the deep Q-network, giving the manipulator the ability to explore actively: the trained manipulator can push objects apart to reduce stacking and occlusion, and therefore performs well in more complicated grasping scenes. In addition, the reward function of the deep Q-network is improved with a piecewise reward function that speeds up convergence. Different models and methods were trained and compared in the V-REP simulation environment; the results show that the proposed method converges quickly and that the success rate of grasping objects in unstructured scenes reaches 83.5%. The method also generalizes well, performing robustly when novel objects that the manipulator has never grasped before appear in the scene.
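For illustration only, the following minimal Python sketch shows one way the two key ideas in the abstract — choosing between a pushing and a grasping primitive from pixel-wise Q-value maps, and a piecewise reward that also credits a push that usefully rearranges the scene — could be expressed in code. The function names, array shapes, and reward values are assumptions made for the sketch, not the authors' implementation.

    import numpy as np

    def select_action(q_push: np.ndarray, q_grasp: np.ndarray):
        """Pick the primitive (push or grasp) and the image location with the
        highest predicted Q-value. q_push and q_grasp are dense Q-maps over
        image locations, e.g. of shape (num_rotations, H, W), as a fully
        convolutional Q-network might output (shapes assumed)."""
        best_push = np.unravel_index(np.argmax(q_push), q_push.shape)
        best_grasp = np.unravel_index(np.argmax(q_grasp), q_grasp.shape)
        if q_grasp[best_grasp] >= q_push[best_push]:
            return "grasp", best_grasp, float(q_grasp[best_grasp])
        return "push", best_push, float(q_push[best_push])

    def piecewise_reward(action: str, grasp_succeeded: bool, scene_changed: bool) -> float:
        """Piecewise reward: full credit for a successful grasp, partial credit
        for a push that changes the scene, nothing otherwise (values assumed)."""
        if action == "grasp" and grasp_succeeded:
            return 1.0
        if action == "push" and scene_changed:
            return 0.5
        return 0.0

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        q_push = rng.random((16, 224, 224))   # placeholder Q-maps; in practice
        q_grasp = rng.random((16, 224, 224))  # they come from the trained network
        action, location, q_value = select_action(q_push, q_grasp)
        print(action, location, q_value,
              piecewise_reward(action, grasp_succeeded=False, scene_changed=True))

In the full method described by the abstract, such Q-maps would be computed from dual-viewpoint RGB-D observations and the reward would drive ordinary deep Q-learning updates; the sketch only shows the shape of the decision and reward logic.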


Bibliographic Details
Main Authors: Peng, Gang, Liao, Jinhu, Guan, Shangbin, Yang, Jin, Li, Xinde
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8913751/
https://www.ncbi.nlm.nih.gov/pubmed/35273281
http://dx.doi.org/10.1038/s41598-022-07900-2
author Peng, Gang
Liao, Jinhu
Guan, Shangbin
Yang, Jin
Li, Xinde
author_sort Peng, Gang
collection PubMed
description Robot grasping and sorting is an important task in intelligent manufacturing. However, traditional single-view manipulator grasping methods that rely on a 2D camera suffer from low efficiency and low accuracy in stacked and occluded scenes, because a single-view 2D camera misses scene information and grasp-only strategies cannot rearrange a scene that is difficult to grasp. To address this issue, a pushing-grasping collaborative method based on a deep Q-network with dual viewpoints is proposed in this paper. The method adopts an improved deep Q-network algorithm and uses an RGB-D camera to obtain RGB images and point clouds of the objects from two viewpoints, which resolves the problem of missing information. Moreover, it combines pushing and grasping actions within the deep Q-network, giving the manipulator the ability to explore actively: the trained manipulator can push objects apart to reduce stacking and occlusion, and therefore performs well in more complicated grasping scenes. In addition, the reward function of the deep Q-network is improved with a piecewise reward function that speeds up convergence. Different models and methods were trained and compared in the V-REP simulation environment; the results show that the proposed method converges quickly and that the success rate of grasping objects in unstructured scenes reaches 83.5%. The method also generalizes well, performing robustly when novel objects that the manipulator has never grasped before appear in the scene.
format Online
Article
Text
id pubmed-8913751
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-8913751 2022-03-14 A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints Sci Rep Article Nature Publishing Group UK 2022-03-10 /pmc/articles/PMC8913751/ /pubmed/35273281 http://dx.doi.org/10.1038/s41598-022-07900-2 Text en © The Author(s) 2022. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
title A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8913751/
https://www.ncbi.nlm.nih.gov/pubmed/35273281
http://dx.doi.org/10.1038/s41598-022-07900-2