
The role of vision and proprioception in self-motion encoding: An immersive virtual reality study

Bibliographic Details

Main Authors: Bayramova, Rena; Valori, Irene; McKenna-Plumley, Phoebe E.; Callegher, Claudio Zandonella; Farroni, Teresa
Format: Online Article Text
Language: English
Published: Springer US, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8460581/
https://www.ncbi.nlm.nih.gov/pubmed/34341941
http://dx.doi.org/10.3758/s13414-021-02344-8
Collection: PubMed
Description: Past research on the advantages of multisensory input for remembering spatial information has mainly focused on memory for objects or surrounding environments. Less is known about the role of cue combination in memory for own body location in space. In a previous study, we investigated participants’ accuracy in reproducing a rotation angle in a self-rotation task. Here, we focus on the memory aspect of the task. Participants had to rotate themselves back to a specified starting position in three different sensory conditions: a blind condition, a condition with disrupted proprioception, and a condition where both vision and proprioception were reliably available. To investigate the difference between encoding and storage phases of remembering proprioceptive information, rotation amplitude and recall delay were manipulated. The task was completed in a real testing room and in immersive virtual reality (IVR) simulations of the same environment. We found that proprioceptive accuracy is lower when vision is not available and that performance is generally less accurate in IVR. In reality conditions, the degree of rotation affected accuracy only in the blind condition, whereas in IVR, it caused more errors in both the blind condition and, to a lesser degree, when proprioception was disrupted. These results indicate an improvement in encoding own body location when vision and proprioception are optimally integrated. No reliable effect of delay was found. Electronic supplementary material: The online version of this article (10.3758/s13414-021-02344-8) contains supplementary material, which is available to authorized users.
Record ID: pubmed-8460581
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Atten Percept Psychophys
Published Online: 2021-08-02

License: © The Author(s) 2021. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.