A self-training program for sensory substitution devices
Main Authors: Buchs, Galit; Haimler, Benedetta; Kerem, Menachem; Maidenbaum, Shachar; Braun, Liraz; Amedi, Amir
Format: Online Article Text
Language: English
Published: Public Library of Science, 2021
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8078811/ https://www.ncbi.nlm.nih.gov/pubmed/33905446 http://dx.doi.org/10.1371/journal.pone.0250281
_version_ | 1783685109745975296 |
author | Buchs, Galit; Haimler, Benedetta; Kerem, Menachem; Maidenbaum, Shachar; Braun, Liraz; Amedi, Amir
author_facet | Buchs, Galit; Haimler, Benedetta; Kerem, Menachem; Maidenbaum, Shachar; Braun, Liraz; Amedi, Amir
author_sort | Buchs, Galit |
collection | PubMed |
description | Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous, audio-visual perceptual interleaved, and a control group that received no training. At baseline, before any EyeMusic training, participants’ identification of SSD objects was significantly above chance, highlighting the algorithm’s intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between the pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages unisensory training, which can also be easily implemented for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
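To make the visual-to-auditory principle behind devices like the EyeMusic more concrete, below is a minimal, hedged Python sketch of a generic image-to-sound mapping: the image is swept column by column from left to right, vertical position is mapped to pitch, and pixel brightness to loudness. All parameters (sample rate, column duration, base frequency, pitch spacing) and the function name image_to_sound are illustrative assumptions; this is not the published EyeMusic algorithm, which also encodes additional features such as color, but a sketch of the general class of mappings such SSDs use.

```python
# Minimal sketch of a generic visual-to-auditory mapping (hypothetical
# parameters, not the published EyeMusic algorithm): the image is swept
# column by column from left to right (x -> time), vertical position maps
# to pitch (higher row -> higher tone), and pixel brightness maps to loudness.

import numpy as np

SAMPLE_RATE = 44100        # audio sample rate in Hz (assumed)
COLUMN_DURATION = 0.05     # seconds of sound per image column (assumed)
BASE_FREQ = 220.0          # frequency of the lowest image row in Hz (assumed)
SEMITONES_PER_ROW = 2      # pitch step between adjacent rows (assumed)

def image_to_sound(image: np.ndarray) -> np.ndarray:
    """Convert a 2-D grayscale image (rows x cols, values in [0, 1])
    into a mono audio signal via a left-to-right column sweep."""
    n_rows, n_cols = image.shape
    samples_per_col = int(SAMPLE_RATE * COLUMN_DURATION)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    audio = np.zeros(n_cols * samples_per_col)

    for col in range(n_cols):
        chunk = np.zeros(samples_per_col)
        for row in range(n_rows):
            brightness = image[row, col]
            if brightness <= 0:
                continue  # silent pixel
            # Lower rows -> lower pitch; the top row gets the highest frequency.
            semitone_offset = (n_rows - 1 - row) * SEMITONES_PER_ROW
            freq = BASE_FREQ * 2 ** (semitone_offset / 12)
            chunk += brightness * np.sin(2 * np.pi * freq * t)
        start = col * samples_per_col
        audio[start:start + samples_per_col] = chunk

    # Normalize to [-1, 1] to avoid clipping when many pixels sound at once.
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# Example: a diagonal line "drawn" in an 8x8 image becomes a rising sweep.
img = np.eye(8)[::-1]      # diagonal from bottom-left to top-right
signal = image_to_sound(img)
```

Feeding the sketch a diagonal line produces a rising pitch sweep, which conveys the intuition behind the study: with training, users can learn to identify simple shapes and objects by ear alone.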
format | Online Article Text |
id | pubmed-8078811 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-8078811 2021-05-06 A self-training program for sensory substitution devices. Buchs, Galit; Haimler, Benedetta; Kerem, Menachem; Maidenbaum, Shachar; Braun, Liraz; Amedi, Amir. PLoS One, Research Article. Public Library of Science 2021-04-27 /pmc/articles/PMC8078811/ /pubmed/33905446 http://dx.doi.org/10.1371/journal.pone.0250281 Text en © 2021 Buchs et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle | Research Article; Buchs, Galit; Haimler, Benedetta; Kerem, Menachem; Maidenbaum, Shachar; Braun, Liraz; Amedi, Amir; A self-training program for sensory substitution devices
title | A self-training program for sensory substitution devices |
title_full | A self-training program for sensory substitution devices |
title_fullStr | A self-training program for sensory substitution devices |
title_full_unstemmed | A self-training program for sensory substitution devices |
title_short | A self-training program for sensory substitution devices |
title_sort | self-training program for sensory substitution devices |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8078811/ https://www.ncbi.nlm.nih.gov/pubmed/33905446 http://dx.doi.org/10.1371/journal.pone.0250281 |
work_keys_str_mv | AT buchsgalit aselftrainingprogramforsensorysubstitutiondevices AT haimlerbenedetta aselftrainingprogramforsensorysubstitutiondevices AT keremmenachem aselftrainingprogramforsensorysubstitutiondevices AT maidenbaumshachar aselftrainingprogramforsensorysubstitutiondevices AT braunliraz aselftrainingprogramforsensorysubstitutiondevices AT amediamir aselftrainingprogramforsensorysubstitutiondevices AT buchsgalit selftrainingprogramforsensorysubstitutiondevices AT haimlerbenedetta selftrainingprogramforsensorysubstitutiondevices AT keremmenachem selftrainingprogramforsensorysubstitutiondevices AT maidenbaumshachar selftrainingprogramforsensorysubstitutiondevices AT braunliraz selftrainingprogramforsensorysubstitutiondevices AT amediamir selftrainingprogramforsensorysubstitutiondevices |