
Cross-Modal Multivariate Pattern Analysis

Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data(1-4). Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices(5) or, analogously, the content of speech from activity in early auditory cortices(6). Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies(7,8), we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio(9,10), according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.


Bibliographic Details
Main Authors: Meyer, Kaspar; Kaplan, Jonas T.
Format: Online Article Text
Language: English
Published: MyJove Corporation 2011
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3308596/
https://www.ncbi.nlm.nih.gov/pubmed/22105246
http://dx.doi.org/10.3791/3307
author Meyer, Kaspar
Kaplan, Jonas T.
author_facet Meyer, Kaspar
Kaplan, Jonas T.
author_sort Meyer, Kaspar
collection PubMed
description Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data(1-4). Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices(5) or, analogously, the content of speech from activity in early auditory cortices(6). Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies(7,8), we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio(9,10), according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
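The classification step at the heart of MVPA can be sketched in a few lines. The following is an illustrative simulation only, not the pipeline used in the article: the trial count, voxel count, noise level, and the correlation-based nearest-class-mean classifier are all assumptions chosen for demonstration; the studies cited above used real fMRI voxel patterns and standard linear classifiers.

```python
# Illustrative MVPA sketch (hypothetical data, NOT the authors' pipeline):
# simulate "voxel" activity patterns for two stimulus classes and test
# whether the class can be decoded from held-out trials.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_voxels = 40, 50                  # assumed trial/voxel counts
labels = np.repeat([0, 1], n_trials // 2)    # two stimulus classes

# Each class has a distinct mean activity pattern; trials add noise.
class_means = rng.normal(0.0, 1.0, (2, n_voxels))
data = class_means[labels] + rng.normal(0.0, 1.0, (n_trials, n_voxels))

def nearest_mean_classify(train_x, train_y, test_x):
    """Correlation-based nearest-class-mean classifier, a simple
    stand-in for the linear classifiers typically used in MVPA."""
    means = [train_x[train_y == c].mean(axis=0) for c in (0, 1)]
    corrs = [np.corrcoef(test_x, m)[0, 1] for m in means]
    return int(np.argmax(corrs))

# Leave-one-trial-out cross-validation.
correct = 0
for i in range(n_trials):
    mask = np.arange(n_trials) != i
    pred = nearest_mean_classify(data[mask], labels[mask], data[i])
    correct += int(pred == labels[i])

accuracy = correct / n_trials
print(f"cross-validated decoding accuracy: {accuracy:.2f}")  # chance = 0.5
```

In the cross-modal variant described above, the same logic applies, except that the classifier is trained and tested on patterns from an early sensory cortex of a modality other than the one through which the stimuli were presented (e.g., decoding sound-implying silent videos from auditory cortex activity).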
format Online
Article
Text
id pubmed-3308596
institution National Center for Biotechnology Information
language English
publishDate 2011
publisher MyJove Corporation
record_format MEDLINE/PubMed
spelling pubmed-3308596 2012-06-28 Cross-Modal Multivariate Pattern Analysis Meyer, Kaspar Kaplan, Jonas T. J Vis Exp Neuroscience MyJove Corporation 2011-11-09 /pmc/articles/PMC3308596/ /pubmed/22105246 http://dx.doi.org/10.3791/3307 Text en Copyright © 2011, Journal of Visualized Experiments http://creativecommons.org/licenses/by-nc-nd/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/
spellingShingle Neuroscience
Meyer, Kaspar
Kaplan, Jonas T.
Cross-Modal Multivariate Pattern Analysis
title Cross-Modal Multivariate Pattern Analysis
title_full Cross-Modal Multivariate Pattern Analysis
title_fullStr Cross-Modal Multivariate Pattern Analysis
title_full_unstemmed Cross-Modal Multivariate Pattern Analysis
title_short Cross-Modal Multivariate Pattern Analysis
title_sort cross-modal multivariate pattern analysis
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3308596/
https://www.ncbi.nlm.nih.gov/pubmed/22105246
http://dx.doi.org/10.3791/3307
work_keys_str_mv AT meyerkaspar crossmodalmultivariatepatternanalysis
AT kaplanjonast crossmodalmultivariatepatternanalysis