An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System
In this paper, a novel global point cloud descriptor is proposed for reliable object recognition and pose estimation, which can be effectively applied to robot grasping operations. The viewpoint feature histogram (VFH) is widely used for three-dimensional (3D) object recognition and pose estimation in...
Main Authors: Wang, Fei; Liang, Chen; Ru, Changlei; Cheng, Hongtai
Format: Online Article Text
Language: English
Published: MDPI, 2019
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6567890/
https://www.ncbi.nlm.nih.gov/pubmed/31091751
http://dx.doi.org/10.3390/s19102225
_version_ | 1783427172827922432 |
author | Wang, Fei; Liang, Chen; Ru, Changlei; Cheng, Hongtai |
author_facet | Wang, Fei; Liang, Chen; Ru, Changlei; Cheng, Hongtai |
author_sort | Wang, Fei |
collection | PubMed |
description | In this paper, a novel global point cloud descriptor is proposed for reliable object recognition and pose estimation, which can be effectively applied to robot grasping operations. The viewpoint feature histogram (VFH) is widely used for three-dimensional (3D) object recognition and pose estimation in real scenes captured by depth sensors because of its recognition performance and computational efficiency. However, when an object has a mirrored structure, VFH often cannot distinguish between poses that are mirrored relative to the viewpoint. To address this difficulty, this study presents an improved feature descriptor named the orthogonal viewpoint feature histogram (OVFH), which contains two components: a surface shape component and an improved viewpoint direction component. The improved viewpoint component is calculated from the vector orthogonal to the viewpoint direction, which is obtained from the reference frame estimated for the entire point cloud. An evaluation of OVFH on a publicly available data set indicates that it enhances the ability to distinguish between mirrored poses while preserving object recognition performance. The proposed method uses OVFH to recognize and register objects against a database and obtains precise poses with the iterative closest point (ICP) algorithm. The experimental results show that the proposed approach can effectively guide the robot to grasp objects with mirrored poses. |
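The description above outlines a recognition pipeline: compute a global descriptor for each segmented object cluster, match it against a model database, and refine the resulting pose with ICP. Below is a minimal sketch of that pipeline using the stock VFH and ICP implementations in the Point Cloud Library (PCL). It is not the authors' code: the OVFH orthogonal viewpoint component is the paper's contribution and is not available in PCL, the nearest-neighbour matching step is omitted, and the file names and search radius are illustrative assumptions.

```cpp
// Minimal sketch of a descriptor-plus-ICP recognition pipeline, assuming PCL.
// The paper's OVFH replaces the VFH viewpoint component with one built from a
// vector orthogonal to the viewpoint direction; stock PCL VFH stands in here.

#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/registration/icp.h>
#include <pcl/search/kdtree.h>
#include <iostream>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Compute one global VFH signature for a segmented object cluster.
static pcl::PointCloud<pcl::VFHSignature308>::Ptr
computeGlobalDescriptor(const CloudT::Ptr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  // Surface normals feed the surface-shape component of (O)VFH.
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.01);  // assumed radius (metres); depends on sensor and scene
  ne.compute(*normals);

  // Global descriptor: surface-shape component plus viewpoint component.
  pcl::PointCloud<pcl::VFHSignature308>::Ptr vfh(new pcl::PointCloud<pcl::VFHSignature308>);
  pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> est;
  est.setInputCloud(cloud);
  est.setInputNormals(normals);
  est.setSearchMethod(tree);
  est.compute(*vfh);  // one 308-bin histogram for the whole cluster
  return vfh;
}

int main()
{
  // Hypothetical inputs: a segmented scene cluster and the database model that
  // best matched its global descriptor (the nearest-neighbour search is omitted).
  CloudT::Ptr scene(new CloudT), model(new CloudT);
  if (pcl::io::loadPCDFile("scene_cluster.pcd", *scene) < 0 ||
      pcl::io::loadPCDFile("matched_model.pcd", *model) < 0)
    return 1;

  auto descriptor = computeGlobalDescriptor(scene);
  std::cout << "Descriptor histograms computed: " << descriptor->size() << "\n";

  // ICP refines the coarse pose from descriptor matching into a precise
  // 6-DoF transform, mirroring the final registration step in the abstract.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(model);
  icp.setInputTarget(scene);
  CloudT aligned;
  icp.align(aligned);

  if (icp.hasConverged())
    std::cout << "Estimated pose:\n" << icp.getFinalTransformation()
              << "\nICP fitness: " << icp.getFitnessScore() << std::endl;
  return 0;
}
```

In the paper's full method, the descriptor matching stage would use OVFH histograms built from the reference frame estimated for the entire cloud; the sketch only shows where that descriptor and the ICP refinement plug into the pipeline.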
format | Online Article Text |
id | pubmed-6567890 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-6567890 2019-06-17 An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System Wang, Fei Liang, Chen Ru, Changlei Cheng, Hongtai Sensors (Basel) Article In this paper, a novel global point cloud descriptor is proposed for reliable object recognition and pose estimation, which can be effectively applied to robot grasping operations. The viewpoint feature histogram (VFH) is widely used for three-dimensional (3D) object recognition and pose estimation in real scenes captured by depth sensors because of its recognition performance and computational efficiency. However, when an object has a mirrored structure, VFH often cannot distinguish between poses that are mirrored relative to the viewpoint. To address this difficulty, this study presents an improved feature descriptor named the orthogonal viewpoint feature histogram (OVFH), which contains two components: a surface shape component and an improved viewpoint direction component. The improved viewpoint component is calculated from the vector orthogonal to the viewpoint direction, which is obtained from the reference frame estimated for the entire point cloud. An evaluation of OVFH on a publicly available data set indicates that it enhances the ability to distinguish between mirrored poses while preserving object recognition performance. The proposed method uses OVFH to recognize and register objects against a database and obtains precise poses with the iterative closest point (ICP) algorithm. The experimental results show that the proposed approach can effectively guide the robot to grasp objects with mirrored poses. MDPI 2019-05-14 /pmc/articles/PMC6567890/ /pubmed/31091751 http://dx.doi.org/10.3390/s19102225 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wang, Fei Liang, Chen Ru, Changlei Cheng, Hongtai An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System |
title | An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System |
title_full | An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System |
title_fullStr | An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System |
title_full_unstemmed | An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System |
title_short | An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System |
title_sort | improved point cloud descriptor for vision based robotic grasping system |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6567890/ https://www.ncbi.nlm.nih.gov/pubmed/31091751 http://dx.doi.org/10.3390/s19102225 |
work_keys_str_mv | AT wangfei animprovedpointclouddescriptorforvisionbasedroboticgraspingsystem AT liangchen animprovedpointclouddescriptorforvisionbasedroboticgraspingsystem AT ruchanglei animprovedpointclouddescriptorforvisionbasedroboticgraspingsystem AT chenghongtai animprovedpointclouddescriptorforvisionbasedroboticgraspingsystem AT wangfei improvedpointclouddescriptorforvisionbasedroboticgraspingsystem AT liangchen improvedpointclouddescriptorforvisionbasedroboticgraspingsystem AT ruchanglei improvedpointclouddescriptorforvisionbasedroboticgraspingsystem AT chenghongtai improvedpointclouddescriptorforvisionbasedroboticgraspingsystem |