
Predictive feedback to V1 dynamically updates with sensory input


Bibliographic Details
Main Authors: Edwards, Grace, Vetter, Petra, McGruer, Fiona, Petro, Lucy S., Muckli, Lars
Format: Online Article (Text)
Language: English
Published: Nature Publishing Group UK, 2017
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5705713/
https://www.ncbi.nlm.nih.gov/pubmed/29184060
http://dx.doi.org/10.1038/s41598-017-16093-y
Description
Summary: Predictive coding theories propose that the brain creates internal models of the environment to predict upcoming sensory input. Hierarchical predictive coding models of vision postulate that higher visual areas generate predictions of sensory inputs and feed them back to early visual cortex. In V1, sensory inputs that do not match the predictions lead to amplified brain activation, but does this amplification process dynamically update to new retinotopic locations with eye movements? We investigated the effect of eye movements on predictive feedback using functional brain imaging and eye-tracking whilst presenting an apparent motion illusion. Apparent motion induces an internal model of motion, during which sensory predictions of the illusory motion feed back to V1. We observed attenuated BOLD responses to predicted stimuli at the new post-saccadic location in V1. Therefore, pre-saccadic predictions update their retinotopic location in time for post-saccadic input, validating dynamic predictive coding theories in V1.
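
The prediction-error logic described in the summary can be illustrated with a small numerical sketch. The toy model below is an assumption made for illustration, not the authors' analysis or data: it treats the response at each retinotopic position of a one-dimensional "V1" as a rectified mismatch between sensory input and fed-back prediction, and compares a prediction left at its pre-saccadic location with one remapped along with the eye movement. All names and values (v1_response, the gain, the positions) are hypothetical.

# Toy sketch of the prediction-error idea in the abstract.
# All names and values are illustrative assumptions, not the authors' model.
import numpy as np

N = 32                          # retinotopic positions in a 1-D "V1"
stim_pos, saccade_shift = 10, 5

def v1_response(input_drive, prediction, gain=2.0):
    # Response ~ rectified prediction error: unpredicted input is
    # amplified, predicted input is attenuated (toy predictive-coding rule).
    error = input_drive - prediction
    return np.maximum(gain * error, 0.0)

# Post-saccadic sensory input: the stimulus lands at a shifted location.
sensory = np.zeros(N)
sensory[stim_pos + saccade_shift] = 1.0

# Case 1: feedback prediction stays at the pre-saccadic location.
stale = np.zeros(N)
stale[stim_pos] = 1.0
# Case 2: feedback prediction is remapped with the eye movement.
remapped = np.zeros(N)
remapped[stim_pos + saccade_shift] = 1.0

print("stale prediction, response at stimulus:",
      v1_response(sensory, stale)[stim_pos + saccade_shift])     # amplified (2.0)
print("remapped prediction, response at stimulus:",
      v1_response(sensory, remapped)[stim_pos + saccade_shift])  # attenuated (0.0)

Under this toy rule, only the remapped prediction cancels the post-saccadic input, mirroring the attenuated BOLD responses the study reports at the new retinotopic location.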