
The Oculus Rift: a cost-effective tool for studying visual-vestibular interactions in self-motion perception

Bibliographic Details
Main Authors: Kim, Juno, Chung, Charles Y. L., Nakamura, Shinji, Palmisano, Stephen, Khuu, Sieu K.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4358060/
https://www.ncbi.nlm.nih.gov/pubmed/25821438
http://dx.doi.org/10.3389/fpsyg.2015.00248
Description
Summary: For years now, virtual reality devices have been applied in the field of vision science in an attempt to improve our understanding of perceptual principles underlying the experience of self-motion. Some of this research has been concerned with exploring factors involved in the visually induced illusory perception of self-motion, known as vection. We examined the usefulness of the cost-effective Oculus Rift in generating vection in seated observers. This device has the capacity to display optic flow in world coordinates by compensating for tracked changes in 3D head orientation. We measured vection strength in three conditions of visual compensation for head movement: compensated, uncompensated, and inversely compensated. During presentation of optic flow, the observer was instructed to make periodic head oscillations (±22° horizontal excursions at approximately 0.53 Hz). We found that vection was best in the compensated condition and weakest in the inversely compensated condition. Surprisingly, vection was always better in passive viewing conditions than in conditions where active head rotations were performed. These findings suggest that vection is highly dependent on interactions between visual, vestibular, and proprioceptive information, and may be highly sensitive to limitations imposed by temporal lag in visual-vestibular coupling using this system.
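
As a rough illustration of the three compensation conditions described in the abstract, the Python sketch below shows one way the virtual camera yaw could be derived from the tracked head yaw, together with an idealized version of the ±22°, ~0.53 Hz head-oscillation profile. This is a minimal sketch under stated assumptions, not the study's actual software; all function names, the sinusoidal motion profile, and the restriction to a single yaw axis are illustrative assumptions.

```python
import math

def camera_yaw(tracked_head_yaw_deg, condition):
    """Hypothetical mapping from tracked head yaw (degrees) to virtual camera yaw
    for the three compensation conditions (illustrative only)."""
    if condition == "compensated":
        # Camera follows the head, so the optic-flow display stays world-fixed.
        return tracked_head_yaw_deg
    if condition == "uncompensated":
        # Head tracking is ignored; the display remains head-fixed.
        return 0.0
    if condition == "inversely_compensated":
        # Compensation is applied with the opposite sign, inverting the normal
        # visual consequence of the head rotation.
        return -tracked_head_yaw_deg
    raise ValueError(f"unknown condition: {condition}")

def target_head_yaw(t_seconds, amplitude_deg=22.0, frequency_hz=0.53):
    """Idealized periodic head oscillation (±22° at ~0.53 Hz) used to pace the
    observer's active head movements (assumed sinusoidal for illustration)."""
    return amplitude_deg * math.sin(2.0 * math.pi * frequency_hz * t_seconds)
```

In this sketch, the compensated case corresponds to the device's normal rendering of optic flow in world coordinates, while the other two cases deliberately remove or invert that visual compensation for the tracked head movement.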