Navigating from a Depth Image Converted into Sound
Main Authors:
Format: Online Article Text
Language: English
Published: Hindawi Publishing Corporation, 2015
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4745448/
https://www.ncbi.nlm.nih.gov/pubmed/27019586
http://dx.doi.org/10.1155/2015/543492
Summary: Background. Commonly manufactured depth sensors generate depth images conveying spatial information that humans normally obtain through their eyes and hands. Various designs that convert spatial data into sound have recently been proposed as candidate sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into a melody in real time. Distance from the sensor was translated into sound intensity, lateral position into stereo modulation, and vertical position into pitch. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learned to use the system on both new paths and paths they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was performed successfully; it slightly affected, but did not prevent, effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and is ineffective at very short range.
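The depth-to-sound mapping described in the summary (distance to intensity, lateral position to stereo panning, vertical position to pitch) can be illustrated with a minimal sketch. The code below is not the MeloSee implementation; the function name, the parameter ranges, the top-of-image-equals-highest-pitch convention, and the per-column "nearest obstacle" simplification are all assumptions made for illustration only.

```python
import numpy as np

def depth_to_sound_params(depth, d_min=0.5, d_max=4.0,
                          f_low=220.0, f_high=880.0):
    """Map a 2D depth image (meters) to per-column sound parameters.

    Illustrative mapping, following the scheme in the summary:
      - distance -> intensity (closer = louder)
      - horizontal position -> stereo pan (left/right)
      - vertical position -> pitch (assumed: higher row = higher pitch)
    All names and ranges here are assumptions, not the published design.
    """
    rows, cols = depth.shape

    # Clip and normalize depth so that 1.0 = near (loud), 0.0 = far (quiet)
    d = np.clip(depth, d_min, d_max)
    intensity = 1.0 - (d - d_min) / (d_max - d_min)

    # One pitch per row, highest frequency assigned to the top row (assumption)
    pitches = np.geomspace(f_high, f_low, rows)   # shape (rows,)

    # One pan value per column: -1.0 = full left, +1.0 = full right
    pans = np.linspace(-1.0, 1.0, cols)           # shape (cols,)

    # Simplification: for each column, sound only the nearest (loudest) row
    nearest_row = np.argmax(intensity, axis=0)    # shape (cols,)
    return [
        {
            "frequency_hz": float(pitches[nearest_row[c]]),
            "amplitude": float(intensity[nearest_row[c], c]),
            "pan": float(pans[c]),
        }
        for c in range(cols)
    ]

# Example: a fake 8x8 depth frame with a close obstacle on the left side
frame = np.full((8, 8), 3.5)
frame[2:5, 1:3] = 0.8
for note in depth_to_sound_params(frame)[:3]:
    print(note)
```

A real-time system would feed these per-frame parameters to a synthesizer at the sensor's frame rate; that audio-rendering step is omitted here.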