
Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks

Humans constantly combine multi-sensory spatial information to successfully interact with objects in peripersonal space. Previous studies suggest that sensory inputs of different modalities are encoded in different reference frames. In cross-modal tasks where the target and response modalities are different, it is unclear which reference frame these multiple sensory signals are transformed to for comparison. The current study used a slant perception and parallelity paradigm to explore this issue. Participants perceived (either visually or haptically) the slant of a reference board and were asked to either adjust an invisible test board by hand manipulation or to adjust a visible test board through verbal instructions to be physically parallel to the reference board. We examined the patterns of constant error and variability of unimodal and cross-modal tasks with various reference slant angles at different reference/test locations. The results revealed that rather than a mixture of the patterns of unimodal conditions, the pattern in cross-modal conditions depended almost entirely on the response modality and was not substantially affected by the target modality. Deviations in haptic response conditions could be predicted by the locations of the reference and test board, whereas the reference slant angle was an important predictor in visual response conditions.
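
The two summary measures named in the abstract, constant error and variability, can be illustrated with a minimal sketch. This is not the authors' analysis code; the trial values, condition labels, and variable names below are invented for illustration only. It simply takes constant error as the mean signed deviation of the adjusted test slant from the reference slant, and variability as the standard deviation of those deviations within a condition.

```python
# Minimal illustrative sketch (not the authors' analysis code): computing
# constant error and variability per condition for a slant-matching task.
# Trial data and condition labels are invented for illustration.
import statistics
from collections import defaultdict

# Each trial: (target modality, response modality,
#              reference slant in degrees, adjusted test slant in degrees)
trials = [
    ("visual", "haptic", 30.0, 36.5),
    ("visual", "haptic", 30.0, 33.0),
    ("haptic", "visual", 30.0, 27.5),
    ("haptic", "visual", 30.0, 29.0),
]

errors_by_condition = defaultdict(list)
for target, response, reference, test in trials:
    # Signed deviation from the physically parallel setting.
    errors_by_condition[(target, response, reference)].append(test - reference)

for condition, errors in errors_by_condition.items():
    constant_error = statistics.mean(errors)   # systematic bias
    variability = statistics.stdev(errors)     # trial-to-trial spread
    print(condition,
          f"constant error = {constant_error:+.1f} deg, "
          f"variability (SD) = {variability:.1f} deg")
```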

Bibliographic Details
Main Authors: Liu, Juan; Ando, Hiroshi
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2018
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6056512/
https://www.ncbi.nlm.nih.gov/pubmed/30038316
http://dx.doi.org/10.1038/s41598-018-29375-w
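
The citation data behind these links can also be retrieved programmatically. The sketch below is an illustration only: it assumes the standard NCBI E-utilities efetch endpoint, which is not part of this record; only the PubMed ID (30038316) is taken from the links above.

```python
# Illustrative sketch: fetching the plain-text abstract for this record from
# PubMed via the NCBI E-utilities efetch service. Endpoint and parameters
# follow general E-utilities conventions and are not part of this record.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "pubmed",
    "id": "30038316",        # PubMed ID from the online access links above
    "rettype": "abstract",
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params

with urlopen(url) as response:
    print(response.read().decode("utf-8"))
```
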
_version_ 1783341352274100224
author Liu, Juan
Ando, Hiroshi
author_facet Liu, Juan
Ando, Hiroshi
author_sort Liu, Juan
collection PubMed
description Humans constantly combine multi-sensory spatial information to successfully interact with objects in peripersonal space. Previous studies suggest that sensory inputs of different modalities are encoded in different reference frames. In cross-modal tasks where the target and response modalities are different, it is unclear which reference frame these multiple sensory signals are transformed to for comparison. The current study used a slant perception and parallelity paradigm to explore this issue. Participants perceived (either visually or haptically) the slant of a reference board and were asked to either adjust an invisible test board by hand manipulation or to adjust a visible test board through verbal instructions to be physically parallel to the reference board. We examined the patterns of constant error and variability of unimodal and cross-modal tasks with various reference slant angles at different reference/test locations. The results revealed that rather than a mixture of the patterns of unimodal conditions, the pattern in cross-modal conditions depended almost entirely on the response modality and was not substantially affected by the target modality. Deviations in haptic response conditions could be predicted by the locations of the reference and test board, whereas the reference slant angle was an important predictor in visual response conditions.
format Online
Article
Text
id pubmed-6056512
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-60565122018-07-30 Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks Liu, Juan Ando, Hiroshi Sci Rep Article Humans constantly combine multi-sensory spatial information to successfully interact with objects in peripersonal space. Previous studies suggest that sensory inputs of different modalities are encoded in different reference frames. In cross-modal tasks where the target and response modalities are different, it is unclear which reference frame these multiple sensory signals are transformed to for comparison. The current study used a slant perception and parallelity paradigm to explore this issue. Participants perceived (either visually or haptically) the slant of a reference board and were asked to either adjust an invisible test board by hand manipulation or to adjust a visible test board through verbal instructions to be physically parallel to the reference board. We examined the patterns of constant error and variability of unimodal and cross-modal tasks with various reference slant angles at different reference/test locations. The results revealed that rather than a mixture of the patterns of unimodal conditions, the pattern in cross-modal conditions depended almost entirely on the response modality and was not substantially affected by the target modality. Deviations in haptic response conditions could be predicted by the locations of the reference and test board, whereas the reference slant angle was an important predictor in visual response conditions. Nature Publishing Group UK 2018-07-23 /pmc/articles/PMC6056512/ /pubmed/30038316 http://dx.doi.org/10.1038/s41598-018-29375-w Text en © The Author(s) 2018 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Liu, Juan
Ando, Hiroshi
Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks
title Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks
title_full Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks
title_fullStr Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks
title_full_unstemmed Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks
title_short Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks
title_sort response modality vs. target modality: sensory transformations and comparisons in cross-modal slant matching tasks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6056512/
https://www.ncbi.nlm.nih.gov/pubmed/30038316
http://dx.doi.org/10.1038/s41598-018-29375-w
work_keys_str_mv AT liujuan responsemodalityvstargetmodalitysensorytransformationsandcomparisonsincrossmodalslantmatchingtasks
AT andohiroshi responsemodalityvstargetmodalitysensorytransformationsandcomparisonsincrossmodalslantmatchingtasks