Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity

Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, and spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus with a spatio-temporally aligned visual counterpart enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as for the development of VR systems, as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
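
The HRTF rendering step the abstract refers to admits a compact illustration. The sketch below is not the authors' implementation; `render_binaural` and the synthetic impulse responses are hypothetical placeholders. It shows the standard approach: the mono source signal is convolved with the left- and right-ear head-related impulse responses (HRIRs) for the desired source direction, and the HRIR pair jointly encodes the interaural time difference, interaural level difference, and spectral cues named in the abstract. A real system would look the HRIRs up in a measured generic HRTF set rather than synthesizing them.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with per-ear HRIRs to produce stereo audio.

    The HRIR pair for a given source direction jointly encodes the
    interaural time difference (ITD), interaural level difference (ILD),
    and the spectral cues contributed by the geometry of the ear.
    """
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Synthetic placeholder HRIRs that crudely mimic a source at 90 degrees
# to the listener's left: the far (right) ear receives the sound later
# and attenuated. A 30-sample delay at 48 kHz is ~0.63 ms, close to the
# Woodworth spherical-head ITD (a/c)(theta + sin theta) ~= 0.66 ms for
# head radius a = 0.0875 m, speed of sound c = 343 m/s, theta = pi/2.
fs = 48_000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)        # 1 s, 440 Hz test tone
hrir_l = np.zeros(256); hrir_l[0] = 1.0   # near ear: immediate, full level
hrir_r = np.zeros(256); hrir_r[30] = 0.5  # far ear: delayed and quieter
stereo = render_binaural(mono, hrir_l, hrir_r)  # shape: (48255, 2)
```

With measured HRIRs selected (or interpolated) per head pose, two such convolutions per source are all a VR engine needs at playback time, which is why generic HRTFs are attractive despite their lower localization accuracy.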

Bibliographic Details
Main Authors: Berger, Christopher C., Gonzalez-Franco, Mar, Tajadura-Jiménez, Ana, Florencio, Dinei, Zhang, Zhengyou
Format: Online Article Text
Language: English
Journal: Front Neurosci
Published: Frontiers Media S.A., 2018-02-02
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5801410/
https://www.ncbi.nlm.nih.gov/pubmed/29456486
http://dx.doi.org/10.3389/fnins.2018.00021
Rights: Copyright © 2018 Berger, Gonzalez-Franco, Tajadura-Jiménez, Florencio and Zhang. Open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): http://creativecommons.org/licenses/by/4.0/