The Dynamic Ebbinghaus: motion dynamics greatly enhance the classic contextual size illusion

Bibliographic Details
Main Authors: Mruczek, Ryan E. B.; Blair, Christopher D.; Strother, Lars; Caplovitz, Gideon P.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4332331/
https://www.ncbi.nlm.nih.gov/pubmed/25741271
http://dx.doi.org/10.3389/fnhum.2015.00077
Description
Summary: The Ebbinghaus illusion is a classic example of the influence of a contextual surround on the perceived size of an object. Here, we introduce a novel variant of this illusion, called the Dynamic Ebbinghaus illusion, in which the size and eccentricity of the surrounding inducers modulate dynamically over time. Under these conditions, the size of the central circle is perceived to change in opposition to the size of the inducers. Interestingly, this illusory effect is relatively weak when participants fixate a stationary central target: less than half the magnitude of the classic static illusion. However, when the entire stimulus translates in space, requiring a smooth pursuit eye movement to track the target, the illusory effect is greatly enhanced, almost twice the magnitude of the classic static illusion. A variety of manipulations, including target motion, peripheral viewing, and smooth pursuit eye movements, all lead to dramatic illusory effects, with the largest effect nearly four times the strength of the classic static illusion. We interpret these results in light of the fact that motion-related manipulations lead to uncertainty in the image-size representation of the target, specifically due to added noise at the level of the retinal input. We propose that the neural circuits integrating visual cues for size perception, such as retinal image size, perceived distance, and various contextual factors, weight each cue according to the level of noise or uncertainty in its neural representation. Thus, more weight is given to the influence of contextual information in deriving perceived size in the presence of stimulus and eye motion. Biologically plausible models of size perception should be able to account for the reweighting of different visual cues under varying levels of certainty.
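
The abstract does not give a formula for this reweighting, but the proposal is consistent with standard reliability-weighted (inverse-variance) cue combination. The following is a minimal sketch under that assumption; the function name, variable names, and numerical values are all hypothetical illustrations, not the authors' model:

    # Minimal sketch of reliability-weighted cue combination (an assumed
    # scheme, not the authors' implementation): each size cue is weighted
    # by the inverse of its noise variance, so noisier cues contribute
    # less to the final perceived size.

    def combine_size_cues(cue_estimates, cue_variances):
        """Combine size estimates (e.g., retinal image size and a
        contextual-surround cue) by normalized inverse-variance weighting."""
        weights = [1.0 / v for v in cue_variances]
        total = sum(weights)
        weights = [w / total for w in weights]  # normalize weights to sum to 1
        return sum(w * s for w, s in zip(weights, cue_estimates))

    # Illustration: under fixation the retinal-size cue is reliable (low
    # variance) and dominates; under pursuit or stimulus motion its variance
    # rises, so the contextual cue gains weight and the illusion grows.
    fixation = combine_size_cues([1.00, 0.80], [0.05, 0.20])  # retinal, contextual
    pursuit  = combine_size_cues([1.00, 0.80], [0.30, 0.20])
    print(fixation, pursuit)  # 0.96 vs. 0.88: pursuit shifts toward the contextual cue

On this account, the enhanced Dynamic Ebbinghaus effect falls out of the weighting alone: motion does not change the contextual cue itself, only the relative certainty assigned to the retinal image-size cue.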