
Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device

Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic for the blind areas of sighted individuals. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes’ identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for the successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants were successful at a highly above chance level after a brief 1-h-long session of online training and one on-site training session of an average of 20 min. They could even draw a 2D representation of this image in some cases. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.


Bibliographic Details
Main Authors: Shvadron, Shira; Snir, Adi; Maimon, Amber; Yizhar, Or; Harel, Sapir; Poradosu, Keinan; Amedi, Amir
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10017858/
https://www.ncbi.nlm.nih.gov/pubmed/36936618
http://dx.doi.org/10.3389/fnhum.2023.1058617
collection PubMed
description Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic for the blind areas of sighted individuals. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes’ identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for the successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants were successful at a highly above chance level after a brief 1-h-long session of online training and one on-site training session of an average of 20 min. They could even draw a 2D representation of this image in some cases. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
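The core idea described above — converting an image to sound so that spatial layout becomes audible — can be illustrated with a short sketch. This is a hedged, assumption-laden illustration of the general visual-to-auditory sonification principle (columns swept left to right over time, vertical pixel position mapped to pitch), not the published EyeMusic algorithm; the function name `sonify_image` and parameters such as `f_min` and `col_duration` are hypothetical.

```python
import numpy as np

def sonify_image(image, f_min=220.0, n_semitones=24, col_duration=0.1):
    """Map a binary image to a list of (onset_time, frequencies) events.

    Columns are swept left to right over time; each active pixel in a
    column contributes one pitch, with higher image positions mapped to
    higher frequencies. Pitch spacing is equal-tempered across the image
    height. All parameter choices here are illustrative assumptions.
    """
    image = np.asarray(image, dtype=bool)
    n_rows, n_cols = image.shape
    # Pitches spanning n_semitones above f_min, one per image row.
    pitches = f_min * 2.0 ** (np.arange(n_rows) / 12.0 * (n_semitones / max(n_rows - 1, 1)))
    events = []
    for col in range(n_cols):
        rows = np.flatnonzero(image[:, col])
        # Row 0 is the top of the image, so invert to give it the highest pitch.
        freqs = [float(pitches[n_rows - 1 - r]) for r in rows]
        events.append((col * col_duration, freqs))
    return events
```

For example, a diagonal line from top-left to bottom-right (`np.eye(4)`) yields four events whose single frequency descends over time, which is the kind of shape-to-sound correspondence participants learn to decode.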
id pubmed-10017858
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
Journal: Front Hum Neurosci (Neuroscience)
Published online: 2023-03-02, Frontiers Media S.A.
Copyright © 2023 Shvadron, Snir, Maimon, Yizhar, Harel, Poradosu and Amedi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/): use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.
topic Neuroscience