
Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain

Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical “hubs”) preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.


Bibliographic Details
Main Authors: Csonka, Matt, Mardmomen, Nadia, Webster, Paula J, Brefczynski-Lewis, Julie A, Frum, Chris, Lewis, James W
Format: Online Article Text
Language: English
Published: Oxford University Press 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7941256/
https://www.ncbi.nlm.nih.gov/pubmed/33718874
http://dx.doi.org/10.1093/texcom/tgab002
_version_ 1783662116788502528
author Csonka, Matt
Mardmomen, Nadia
Webster, Paula J
Brefczynski-Lewis, Julie A
Frum, Chris
Lewis, James W
author_facet Csonka, Matt
Mardmomen, Nadia
Webster, Paula J
Brefczynski-Lewis, Julie A
Frum, Chris
Lewis, James W
author_sort Csonka, Matt
collection PubMed
description Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical “hubs”) preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
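The description above names the core computation, activation likelihood estimation (ALE). For reference, here is a minimal sketch of the standard ALE procedure: each reported focus is modeled as a 3D Gaussian, foci within an experiment are combined as a voxelwise probabilistic union into a modeled-activation map, and those maps are unioned across experiments. This is not the authors' exact pipeline (ALE meta-analyses are typically run with dedicated tools such as GingerALE); the grid size, kernel width, and coordinates below are illustrative assumptions only.

import numpy as np

GRID = (91, 109, 91)   # 2 mm MNI-like grid (illustrative choice, not from the paper)
FWHM_MM = 10.0         # assumed kernel width; real ALE scales this by study sample size
SIGMA = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0))) / 2.0  # FWHM in mm -> sigma in voxels at 2 mm

def modeled_activation(foci):
    """Per-experiment map: voxelwise union of Gaussians centered on each focus."""
    zz, yy, xx = np.meshgrid(*(np.arange(n) for n in GRID), indexing="ij")
    ma = np.zeros(GRID)
    for fz, fy, fx in foci:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
        p = np.exp(-d2 / (2.0 * SIGMA ** 2))       # unnormalized Gaussian, peak = 1 (simplification)
        ma = 1.0 - (1.0 - ma) * (1.0 - p)          # probabilistic union within an experiment
    return ma

def ale_map(experiments):
    """Combine per-experiment modeled-activation maps into one ALE map."""
    ale = np.zeros(GRID)
    for foci in experiments:
        ale = 1.0 - (1.0 - ale) * (1.0 - modeled_activation(foci))
    return ale

# Two toy "experiments", each a list of (z, y, x) voxel coordinates (hypothetical).
print(ale_map([[(45, 60, 30)], [(46, 61, 31), (45, 54, 60)]]).max())

Category contrasts like those reported here (e.g., living versus nonliving events) are then typically computed as voxelwise differences between two such ALE maps, with significance assessed by permutation testing.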
format Online
Article
Text
id pubmed-7941256
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-7941256 2021-03-12 Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain Csonka, Matt Mardmomen, Nadia Webster, Paula J Brefczynski-Lewis, Julie A Frum, Chris Lewis, James W Cereb Cortex Commun Original Article Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical “hubs”) preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies. Oxford University Press 2021-01-18 /pmc/articles/PMC7941256/ /pubmed/33718874 http://dx.doi.org/10.1093/texcom/tgab002 Text en © The Author(s) 2021. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Article
Csonka, Matt
Mardmomen, Nadia
Webster, Paula J
Brefczynski-Lewis, Julie A
Frum, Chris
Lewis, James W
Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain
title Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain
title_full Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain
title_fullStr Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain
title_full_unstemmed Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain
title_short Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain
title_sort meta-analyses support a taxonomic model for representations of different categories of audio-visual interaction events in the human brain
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7941256/
https://www.ncbi.nlm.nih.gov/pubmed/33718874
http://dx.doi.org/10.1093/texcom/tgab002
work_keys_str_mv AT csonkamatt metaanalysessupportataxonomicmodelforrepresentationsofdifferentcategoriesofaudiovisualinteractioneventsinthehumanbrain
AT mardmomennadia metaanalysessupportataxonomicmodelforrepresentationsofdifferentcategoriesofaudiovisualinteractioneventsinthehumanbrain
AT websterpaulaj metaanalysessupportataxonomicmodelforrepresentationsofdifferentcategoriesofaudiovisualinteractioneventsinthehumanbrain
AT brefczynskilewisjuliea metaanalysessupportataxonomicmodelforrepresentationsofdifferentcategoriesofaudiovisualinteractioneventsinthehumanbrain
AT frumchris metaanalysessupportataxonomicmodelforrepresentationsofdifferentcategoriesofaudiovisualinteractioneventsinthehumanbrain
AT lewisjamesw metaanalysessupportataxonomicmodelforrepresentationsofdifferentcategoriesofaudiovisualinteractioneventsinthehumanbrain