
Semantic-based crossmodal processing during visual suppression

Bibliographic Details

Main Authors: Cox, Dustin; Hong, Sang Wook
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4451233/
https://www.ncbi.nlm.nih.gov/pubmed/26082736
http://dx.doi.org/10.3389/fpsyg.2015.00722
_version_ 1782374109761503232
author Cox, Dustin
Hong, Sang Wook
author_facet Cox, Dustin
Hong, Sang Wook
author_sort Cox, Dustin
collection PubMed
description To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examined (1) whether purely semantic-based multisensory integration facilitates access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition than in the incongruent audiovisual and video-only conditions. However, this facilitatory influence of semantic auditory input was observed only when the audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing elicited by semantically congruent auditory input arises from audiovisual crossmodal processing rather than semantic priming, and that this processing may occur even when visual information is not available to visual awareness.
format Online
Article
Text
id pubmed-4451233
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-4451233 2015-06-16 Front Psychol Psychology Frontiers Media S.A. 2015-06-02 /pmc/articles/PMC4451233/ /pubmed/26082736 http://dx.doi.org/10.3389/fpsyg.2015.00722 Text en Copyright © 2015 Cox and Hong. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Psychology
Cox, Dustin
Hong, Sang Wook
Semantic-based crossmodal processing during visual suppression
title Semantic-based crossmodal processing during visual suppression
title_full Semantic-based crossmodal processing during visual suppression
title_fullStr Semantic-based crossmodal processing during visual suppression
title_full_unstemmed Semantic-based crossmodal processing during visual suppression
title_short Semantic-based crossmodal processing during visual suppression
title_sort semantic-based crossmodal processing during visual suppression
topic Psychology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4451233/
https://www.ncbi.nlm.nih.gov/pubmed/26082736
http://dx.doi.org/10.3389/fpsyg.2015.00722
work_keys_str_mv AT coxdustin semanticbasedcrossmodalprocessingduringvisualsuppression
AT hongsangwook semanticbasedcrossmodalprocessingduringvisualsuppression