Cross-Modal Object Recognition Is Viewpoint-Independent

Bibliographic Details

Main Authors: Lacey, Simon; Peters, Andrew; Sathian, K.
Format: Text
Language: English
Published: Public Library of Science 2007
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1964535/
https://www.ncbi.nlm.nih.gov/pubmed/17849019
http://dx.doi.org/10.1371/journal.pone.0000890
_version_ 1782134650929414144
author Lacey, Simon
Peters, Andrew
Sathian, K.
author_facet Lacey, Simon
Peters, Andrew
Sathian, K.
author_sort Lacey, Simon
collection PubMed
description BACKGROUND: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch. METHODOLOGY/PRINCIPAL FINDINGS: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores. CONCLUSIONS/SIGNIFICANCE: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.
format Text
id pubmed-1964535
institution National Center for Biotechnology Information
language English
publishDate 2007
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-1964535 2007-09-12 Cross-Modal Object Recognition Is Viewpoint-Independent Lacey, Simon Peters, Andrew Sathian, K. PLoS One Research Article BACKGROUND: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch. METHODOLOGY/PRINCIPAL FINDINGS: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores. CONCLUSIONS/SIGNIFICANCE: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.
Public Library of Science 2007-09-12 /pmc/articles/PMC1964535/ /pubmed/17849019 http://dx.doi.org/10.1371/journal.pone.0000890 Text en Lacey et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Lacey, Simon
Peters, Andrew
Sathian, K.
Cross-Modal Object Recognition Is Viewpoint-Independent
title Cross-Modal Object Recognition Is Viewpoint-Independent
title_full Cross-Modal Object Recognition Is Viewpoint-Independent
title_fullStr Cross-Modal Object Recognition Is Viewpoint-Independent
title_full_unstemmed Cross-Modal Object Recognition Is Viewpoint-Independent
title_short Cross-Modal Object Recognition Is Viewpoint-Independent
title_sort cross-modal object recognition is viewpoint-independent
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1964535/
https://www.ncbi.nlm.nih.gov/pubmed/17849019
http://dx.doi.org/10.1371/journal.pone.0000890
work_keys_str_mv AT laceysimon crossmodalobjectrecognitionisviewpointindependent
AT petersandrew crossmodalobjectrecognitionisviewpointindependent
AT sathiank crossmodalobjectrecognitionisviewpointindependent