
An Application of Deep Learning to Tactile Data for Object Recognition under Visual Guidance †


Bibliographic Details
Main Authors: Rouhafzay, Ghazal, Cretu, Ana-Maria
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6480322/
https://www.ncbi.nlm.nih.gov/pubmed/30934907
http://dx.doi.org/10.3390/s19071534
author Rouhafzay, Ghazal
Cretu, Ana-Maria
collection PubMed
description Drawing inspiration from haptic exploration of objects by humans, the current work proposes a novel framework for robotic tactile object recognition, where visual information in the form of a set of visually interesting points is employed to guide the process of tactile data acquisition. Neuroscience research confirms the integration of cutaneous data as a response to surface changes sensed by humans with data from joints, muscles, and bones (kinesthetic cues) for object recognition. On the other hand, psychological studies demonstrate that humans tend to follow object contours to perceive their global shape, which leads to object recognition. In compliance with these findings, a series of contours are determined around a set of 24 virtual objects from which bimodal tactile data (kinesthetic and cutaneous) are obtained sequentially and by adaptively changing the size of the sensor surface according to the object geometry for each object. A virtual Force Sensing Resistor (FSR) array is employed to capture cutaneous cues. Two different methods for sequential data classification are then implemented using Convolutional Neural Networks (CNN) and conventional classifiers, including support vector machines and k-nearest neighbors. In the case of conventional classifiers, we exploit the contourlet transformation to extract features from tactile images. In the case of CNN, two networks are trained for cutaneous and kinesthetic data and a novel hybrid decision-making strategy is proposed for object recognition. The proposed framework is tested both for contours determined blindly (randomly determined contours of objects) and contours determined using a model of visual attention. Trained classifiers are tested on 4560 new sequential tactile data samples, and the CNN trained over tactile data from object contours selected by the model of visual attention yields an accuracy of 98.97%, which is the highest accuracy among all implemented approaches.
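The abstract does not specify the exact fusion rule of the hybrid decision-making strategy. As an illustrative sketch only (the function name, the weighting parameter, and the toy probabilities below are hypothetical, not taken from the paper), one simple way to combine the per-sample class probabilities of the cutaneous-data CNN and the kinesthetic-data CNN across a sequential tactile exploration is a weighted average accumulated over the sequence:

```python
def fuse_predictions(cutaneous_probs, kinesthetic_probs, w=0.5):
    """Illustrative fusion of two modality-specific classifiers.

    cutaneous_probs / kinesthetic_probs: lists of class-probability
    vectors, one vector per tactile sample along the explored contour.
    w: hypothetical weight given to the cutaneous modality.
    Returns the index of the winning object class.
    """
    n_classes = len(cutaneous_probs[0])
    evidence = [0.0] * n_classes
    for cut, kin in zip(cutaneous_probs, kinesthetic_probs):
        for c in range(n_classes):
            # Weighted average of the two modalities for this sample,
            # accumulated over the whole sequence of tactile samples.
            evidence[c] += w * cut[c] + (1.0 - w) * kin[c]
    # The class with the most accumulated evidence wins.
    return max(range(n_classes), key=lambda c: evidence[c])

# Toy two-class, two-sample sequence: both modalities favor class 0.
print(fuse_predictions([[0.8, 0.2], [0.6, 0.4]],
                       [[0.7, 0.3], [0.9, 0.1]]))
```

Accumulating evidence over the sequence before taking the argmax lets a confident prediction at one contour sample outweigh ambiguous samples elsewhere, which is the general motivation for sequence-level fusion; the paper's actual strategy may differ.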
format Online
Article
Text
id pubmed-6480322
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6480322 2019-04-29 An Application of Deep Learning to Tactile Data for Object Recognition under Visual Guidance † Rouhafzay, Ghazal; Cretu, Ana-Maria. Sensors (Basel), Article. MDPI 2019-03-29 /pmc/articles/PMC6480322/ /pubmed/30934907 http://dx.doi.org/10.3390/s19071534 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title An Application of Deep Learning to Tactile Data for Object Recognition under Visual Guidance †
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6480322/
https://www.ncbi.nlm.nih.gov/pubmed/30934907
http://dx.doi.org/10.3390/s19071534