Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

Bibliographic Details
Main Authors: Ursino, Mauro, Crisafulli, Andrea, di Pellegrino, Giuseppe, Magosso, Elisa, Cuppini, Cristiano
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2017
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5633019/
https://www.ncbi.nlm.nih.gov/pubmed/29046631
http://dx.doi.org/10.3389/fncom.2017.00089
_version_ 1783269813481635840
author Ursino, Mauro
Crisafulli, Andrea
di Pellegrino, Giuseppe
Magosso, Elisa
Cuppini, Cristiano
author_facet Ursino, Mauro
Crisafulli, Andrea
di Pellegrino, Giuseppe
Magosso, Elisa
Cuppini, Cristiano
author_sort Ursino, Mauro
collection PubMed
description The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation with a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity.
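
A minimal Python sketch of the learning scheme described above: unimodal neurons are driven by the inner product of a Gaussian stimulus with their receptive-field synapses, and the synapses are updated by Hebbian potentiation with a decay term. This is not the published model; the layer size, stimulus statistics, activation rule, and the eta/decay values are illustrative assumptions, and cross-modal synapses are omitted.

import numpy as np

# Illustrative sketch only: a 1-D topologically organized layer whose
# receptive-field synapses are shaped by Hebbian potentiation plus decay.
# All sizes and parameter values are assumptions, not the authors' ones.
n = 180                                  # one neuron per degree of azimuth
positions = np.arange(n, dtype=float)
rng = np.random.default_rng(0)

# broad, roughly topological initial receptive fields
W = np.exp(-0.5 * ((positions[None, :] - positions[:, None]) / 20.0) ** 2)
W += 0.01 * rng.random(W.shape)

def gaussian_stimulus(center, sigma):
    # external stimulus: a Gaussian bump of activity centered at `center`
    return np.exp(-0.5 * ((positions - center) / sigma) ** 2)

eta, decay = 0.02, 0.005                 # learning rate and decay term (assumed)

for _ in range(5000):
    # visual-like statistics: stimuli are more frequent and spatially
    # sharper near the center (90 deg), broader toward the periphery
    center = float(np.clip(rng.normal(90.0, 30.0), 0, n - 1))
    sigma = 2.0 + 6.0 * abs(center - 90.0) / 90.0
    s = gaussian_stimulus(center, sigma)

    u = W @ s                            # inner product of stimulus and RF synapses
    r = np.maximum(u - u.mean(), 0.0)    # crude rectified activation

    # Hebbian potentiation (outer product of post- and pre-synaptic activity)
    # with an activity-gated decay that lets unused synapses shrink
    W += eta * np.outer(r, s) - decay * W * r[:, None]
    np.clip(W, 0.0, None, out=W)

With these centrally biased, centrally sharper stimuli, each row of W tends to narrow toward the width of the stimuli that drive it, and preferred positions drift toward the center, which is the qualitative behavior (likelihood and prior encoding through training) that the abstract describes.
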
format Online
Article
Text
id pubmed-5633019
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-5633019 2017-10-18 Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study Ursino, Mauro Crisafulli, Andrea di Pellegrino, Giuseppe Magosso, Elisa Cuppini, Cristiano Front Comput Neurosci Neuroscience The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation with a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. Frontiers Media S.A. 2017-10-04 /pmc/articles/PMC5633019/ /pubmed/29046631 http://dx.doi.org/10.3389/fncom.2017.00089 Text en Copyright © 2017 Ursino, Crisafulli, di Pellegrino, Magosso and Cuppini. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Ursino, Mauro
Crisafulli, Andrea
di Pellegrino, Giuseppe
Magosso, Elisa
Cuppini, Cristiano
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
title Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
title_full Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
title_fullStr Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
title_full_unstemmed Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
title_short Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
title_sort development of a bayesian estimator for audio-visual integration: a neurocomputational study
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5633019/
https://www.ncbi.nlm.nih.gov/pubmed/29046631
http://dx.doi.org/10.3389/fncom.2017.00089
work_keys_str_mv AT ursinomauro developmentofabayesianestimatorforaudiovisualintegrationaneurocomputationalstudy
AT crisafulliandrea developmentofabayesianestimatorforaudiovisualintegrationaneurocomputationalstudy
AT dipellegrinogiuseppe developmentofabayesianestimatorforaudiovisualintegrationaneurocomputationalstudy
AT magossoelisa developmentofabayesianestimatorforaudiovisualintegrationaneurocomputationalstudy
AT cuppinicristiano developmentofabayesianestimatorforaudiovisualintegrationaneurocomputationalstudy