
Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution

Bibliographic Details
Main Authors: Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2016
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4755598/
https://www.ncbi.nlm.nih.gov/pubmed/26882473
http://dx.doi.org/10.1371/journal.pone.0147501
Collection: PubMed
Description: Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate because these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio sensory substitution devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and they do so without expensive dedicated peripherals such as electrode or vibrator arrays. Using SSDs in virtual environments draws on the same skills as using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized, and autonomous SSD training, as well as new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information to perceive and interact within them successfully is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments, find doors, differentiate between them based on their features (Experiment 1, task 1) and surroundings (Experiment 1, task 2), and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to crosswalks, and so on. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like, noted their potential for complex training, and suggested many future environments they wished to experience.
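The abstract describes visual-to-audio sensory substitution: on-screen content is sonified so that a blind user can perceive it by ear. As a minimal illustrative sketch of that general principle (not the EyeMusic algorithm itself; the function name and parameters below are hypothetical), an image can be scanned column by column, with vertical position mapped to pitch and brightness mapped to loudness:

```python
import numpy as np

def sonify_image(image, duration_per_column=0.05, sample_rate=44100,
                 f_min=220.0, f_max=1760.0):
    """Sonify a 2D grayscale image (values in [0, 1]), column by column.

    Left-to-right scan: each column becomes a short audio frame in which
    higher rows map to higher frequencies and brighter pixels to louder
    partials. This only mirrors the broad principle of visual-to-audio
    sensory substitution; real devices use far more refined mappings.
    """
    n_rows, n_cols = image.shape
    n_samples = int(duration_per_column * sample_rate)
    t = np.arange(n_samples) / sample_rate
    # One sinusoid per pixel row, log-spaced so equal steps sound equal;
    # the top row gets the highest pitch.
    freqs = np.geomspace(f_max, f_min, n_rows)
    frames = []
    for col in range(n_cols):
        brightness = image[:, col]  # per-row amplitude for this column
        frame = (brightness[:, None]
                 * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        frames.append(frame)
    signal = np.concatenate(frames)
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal  # normalize to [-1, 1]

# Example: a bright diagonal line yields a pitch sweep over time.
img = np.eye(16)  # 16x16 identity matrix: diagonal from top-left corner
wave = sonify_image(img)
```

The resulting `wave` array can be written to a WAV file or sent to any audio output; the key design choice, shared with the devices the abstract discusses, is that the mapping is generic, so any on-screen image can be sonified without environment-specific support.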
Record ID: pubmed-4755598
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: PLoS One (Research Article)
Published online: 2016-02-16
© 2016 Maidenbaum et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.