
Visual and tactile 3D point cloud data from real robots for shape modeling and completion

Representing 3D geometry for different tasks, e.g. rendering and reconstruction, is an important goal in different fields, such as computer graphics, computer vision and robotics. Robotic applications often require perception of object shape information extracted from sensory data that can be noisy...


Bibliographic Details
Main Authors: Bekiroglu, Yasemin, Björkman, Mårten, Zarzar Gandler, Gabriela, Exner, Johannes, Ek, Carl Henrik, Kragic, Danica
Format: Online Article Text
Language: English
Published: Elsevier 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7125316/
https://www.ncbi.nlm.nih.gov/pubmed/32258263
http://dx.doi.org/10.1016/j.dib.2020.105335
_version_ 1783515919224406016
author Bekiroglu, Yasemin
Björkman, Mårten
Zarzar Gandler, Gabriela
Exner, Johannes
Ek, Carl Henrik
Kragic, Danica
author_facet Bekiroglu, Yasemin
Björkman, Mårten
Zarzar Gandler, Gabriela
Exner, Johannes
Ek, Carl Henrik
Kragic, Danica
author_sort Bekiroglu, Yasemin
collection PubMed
description Representing 3D geometry for different tasks, e.g. rendering and reconstruction, is an important goal in different fields, such as computer graphics, computer vision and robotics. Robotic applications often require perception of object shape information extracted from sensory data that can be noisy and incomplete. This is a challenging task, and in order to facilitate the analysis of new methods and the comparison of different approaches to shape modeling (e.g. surface estimation), completion and exploration, we provide real sensory data acquired by exploring various objects of different complexities. The dataset includes visual and tactile readings in the form of 3D point clouds obtained using two different robot setups that are equipped with visual and tactile sensors. During data collection, the robots touch the experiment objects in a predefined manner at various exploration configurations and gather visual and tactile points in the same coordinate frame, based on calibration between the robots and the cameras used. The goal of this exhaustive exploration procedure is to sense unseen parts of the objects that are not visible to the cameras but can be sensed via tactile sensors activated at the touched areas. The data was used for shape completion and modeling via Implicit Surface representation and Gaussian-Process-based regression in the work “Object shape estimation and modeling, based on sparse Gaussian process implicit surfaces, combining visual data and tactile exploration” [3], and partially in “Enhancing visual perception of shape through tactile glances” [4], both of which study efficient exploration of objects to reduce the number of touches.
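To illustrate how visual and tactile readings like those described above could be fused into a single cloud, the following minimal Python sketch (not part of the dataset's own tooling; file contents, variable names and the calibration transform are illustrative assumptions) expresses camera-frame visual points in the robot frame via a 4x4 homogeneous calibration matrix and stacks them with tactile contact points already given in that frame.

import numpy as np

def to_homogeneous(points):
    # Append a column of ones so Nx3 points can be multiplied by a 4x4 transform.
    return np.hstack([points, np.ones((points.shape[0], 1))])

def merge_clouds(visual_points_cam, tactile_points_robot, T_robot_from_cam):
    # Transform camera-frame visual points into the robot frame, then stack
    # them with tactile points that are already expressed in the robot frame.
    visual_in_robot = (T_robot_from_cam @ to_homogeneous(visual_points_cam).T).T[:, :3]
    return np.vstack([visual_in_robot, tactile_points_robot])

# Illustrative stand-ins for real sensor readings (not taken from the dataset).
visual = np.random.rand(500, 3)   # e.g. a depth-camera point cloud
tactile = np.random.rand(20, 3)   # e.g. contact points gathered by touching
T = np.eye(4)                     # placeholder camera-to-robot calibration
merged = merge_clouds(visual, tactile, T)
print(merged.shape)               # (520, 3)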
format Online
Article
Text
id pubmed-7125316
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-71253162020-04-06 Visual and tactile 3D point cloud data from real robots for shape modeling and completion Bekiroglu, Yasemin Björkman, Mårten Zarzar Gandler, Gabriela Exner, Johannes Ek, Carl Henrik Kragic, Danica Data Brief Computer Science Representing 3D geometry for different tasks, e.g. rendering and reconstruction, is an important goal in different fields, such as computer graphics, computer vision and robotics. Robotic applications often require perception of object shape information extracted from sensory data that can be noisy and incomplete. This is a challenging task, and in order to facilitate the analysis of new methods and the comparison of different approaches to shape modeling (e.g. surface estimation), completion and exploration, we provide real sensory data acquired by exploring various objects of different complexities. The dataset includes visual and tactile readings in the form of 3D point clouds obtained using two different robot setups that are equipped with visual and tactile sensors. During data collection, the robots touch the experiment objects in a predefined manner at various exploration configurations and gather visual and tactile points in the same coordinate frame, based on calibration between the robots and the cameras used. The goal of this exhaustive exploration procedure is to sense unseen parts of the objects that are not visible to the cameras but can be sensed via tactile sensors activated at the touched areas. The data was used for shape completion and modeling via Implicit Surface representation and Gaussian-Process-based regression in the work “Object shape estimation and modeling, based on sparse Gaussian process implicit surfaces, combining visual data and tactile exploration” [3], and partially in “Enhancing visual perception of shape through tactile glances” [4], both of which study efficient exploration of objects to reduce the number of touches. Elsevier 2020-02-26 /pmc/articles/PMC7125316/ /pubmed/32258263 http://dx.doi.org/10.1016/j.dib.2020.105335 Text en © 2020 The Authors. Published by Elsevier Inc. http://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
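As a rough, non-authoritative sketch of the Gaussian process implicit surface (GPIS) regression mentioned in the description, the snippet below fits a GP to an implicit function that is 0 on observed surface points, +1 slightly outside and -1 slightly inside (off-surface samples are generated along crude radial normals); the zero level set of the GP mean then approximates the object surface. It uses scikit-learn's GaussianProcessRegressor rather than the sparse GP formulation of [3], and all constants and point clouds are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_gpis(surface_points, offset=0.1):
    # Build training targets for an implicit function f: 0 on the surface,
    # +1 outside and -1 inside, with off-surface samples placed along crude
    # normals pointing away from the cloud centroid.
    center = surface_points.mean(axis=0)
    normals = surface_points - center
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
    X = np.vstack([surface_points,
                   surface_points + offset * normals,
                   surface_points - offset * normals])
    y = np.concatenate([np.zeros(len(surface_points)),
                        np.ones(len(surface_points)),
                        -np.ones(len(surface_points))])
    kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=1e-4)
    gp = GaussianProcessRegressor(kernel=kernel)
    gp.fit(X, y)
    return gp

# Points on a unit sphere as a stand-in for a fused visual+tactile cloud.
pts = np.random.randn(200, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
gp = fit_gpis(pts)
# Query the GP mean just inside and just outside the training cloud; by the
# GPIS sign convention, negative values suggest interior, positive exterior,
# and the zero crossing marks the estimated surface.
print(gp.predict(np.array([[0.0, 0.0, 0.9], [0.0, 0.0, 1.1]])))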
spellingShingle Computer Science
Bekiroglu, Yasemin
Björkman, Mårten
Zarzar Gandler, Gabriela
Exner, Johannes
Ek, Carl Henrik
Kragic, Danica
Visual and tactile 3D point cloud data from real robots for shape modeling and completion
title Visual and tactile 3D point cloud data from real robots for shape modeling and completion
title_full Visual and tactile 3D point cloud data from real robots for shape modeling and completion
title_fullStr Visual and tactile 3D point cloud data from real robots for shape modeling and completion
title_full_unstemmed Visual and tactile 3D point cloud data from real robots for shape modeling and completion
title_short Visual and tactile 3D point cloud data from real robots for shape modeling and completion
title_sort visual and tactile 3d point cloud data from real robots for shape modeling and completion
topic Computer Science
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7125316/
https://www.ncbi.nlm.nih.gov/pubmed/32258263
http://dx.doi.org/10.1016/j.dib.2020.105335
work_keys_str_mv AT bekirogluyasemin visualandtactile3dpointclouddatafromrealrobotsforshapemodelingandcompletion
AT bjorkmanmarten visualandtactile3dpointclouddatafromrealrobotsforshapemodelingandcompletion
AT zarzargandlergabriela visualandtactile3dpointclouddatafromrealrobotsforshapemodelingandcompletion
AT exnerjohannes visualandtactile3dpointclouddatafromrealrobotsforshapemodelingandcompletion
AT ekcarlhenrik visualandtactile3dpointclouddatafromrealrobotsforshapemodelingandcompletion
AT kragicdanica visualandtactile3dpointclouddatafromrealrobotsforshapemodelingandcompletion