Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information
Main Authors: Drijvers, Linda; Jensen, Ole; Spaak, Eelke
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc., 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7856646/ https://www.ncbi.nlm.nih.gov/pubmed/33206441 http://dx.doi.org/10.1002/hbm.25282
_version_ | 1783646288001105920 |
author | Drijvers, Linda Jensen, Ole Spaak, Eelke |
author_facet | Drijvers, Linda Jensen, Ole Spaak, Eelke |
author_sort | Drijvers, Linda |
collection | PubMed |
description | During communication in real‐life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady‐state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower‐order auditory factors (clear/degraded speech) and higher‐order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f(visual) − f(auditory) = 7 Hz), specifically when lower‐order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech‐gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower‐order audiovisual integration and demonstrates that speech‐gesture information interacts in higher‐order language areas. Furthermore, we provide a proof‐of‐principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context. |
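The abstract's key signature is that a nonlinear interaction between two frequency-tagged signals produces power at the intermodulation (difference) frequency, here f(visual) − f(auditory) = 68 − 61 = 7 Hz, whereas a purely linear superposition does not. This can be illustrated with a toy simulation; it is not the authors' analysis pipeline, and the multiplicative nonlinearity, sampling rate, and duration are illustrative assumptions only:

```python
import numpy as np

fs = 1_000                     # sampling rate in Hz (illustrative choice)
t = np.arange(0, 2, 1 / fs)    # 2 s of signal -> 0.5 Hz frequency resolution
f_aud, f_vis = 61, 68          # tagging frequencies from the abstract

auditory = np.sin(2 * np.pi * f_aud * t)
visual = np.sin(2 * np.pi * f_vis * t)

linear = auditory + visual     # linear superposition: peaks only at 61 and 68 Hz
nonlinear = auditory * visual  # simple multiplicative nonlinearity

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec_lin = np.abs(np.fft.rfft(linear))
spec_nl = np.abs(np.fft.rfft(nonlinear))

def peak_freq(spec, lo, hi):
    """Frequency of the largest spectral peak between lo and hi Hz."""
    band = (freqs >= lo) & (freqs <= hi)
    return float(freqs[band][np.argmax(spec[band])])

# The product term equals 0.5*cos(2*pi*7*t) - 0.5*cos(2*pi*129*t),
# so the nonlinear spectrum peaks at the 7 Hz intermodulation frequency:
print(peak_freq(spec_nl, 1, 30))   # → 7.0
```

The linear mixture, by contrast, has essentially no energy at 7 Hz; only when the two inputs interact nonlinearly (as the brain is argued to do when integrating speech and gesture) does the difference-frequency component appear.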
format | Online Article Text |
id | pubmed-7856646 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | John Wiley & Sons, Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7856646 2021-02-05 Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information Drijvers, Linda Jensen, Ole Spaak, Eelke Hum Brain Mapp Research Articles John Wiley & Sons, Inc. 2020-11-18 /pmc/articles/PMC7856646/ /pubmed/33206441 http://dx.doi.org/10.1002/hbm.25282 Text en © 2020 The Authors. Human Brain Mapping published by Wiley Periodicals LLC. This is an open access article under the terms of the http://creativecommons.org/licenses/by-nc-nd/4.0/ License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial and no modifications or adaptations are made. |
spellingShingle | Research Articles Drijvers, Linda Jensen, Ole Spaak, Eelke Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information |
title | Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information |
title_full | Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information |
title_fullStr | Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information |
title_full_unstemmed | Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information |
title_short | Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information |
title_sort | rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information |
topic | Research Articles |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7856646/ https://www.ncbi.nlm.nih.gov/pubmed/33206441 http://dx.doi.org/10.1002/hbm.25282 |
work_keys_str_mv | AT drijverslinda rapidinvisiblefrequencytaggingrevealsnonlinearintegrationofauditoryandvisualinformation AT jensenole rapidinvisiblefrequencytaggingrevealsnonlinearintegrationofauditoryandvisualinformation AT spaakeelke rapidinvisiblefrequencytaggingrevealsnonlinearintegrationofauditoryandvisualinformation |