
Differences in Mismatch Responses to Vowels and Musical Intervals: MEG Evidence


Bibliographic Details
Main Authors: Bergelson, Elika; Shvartsman, Michael; Idsardi, William J.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2013
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3797141/
https://www.ncbi.nlm.nih.gov/pubmed/24143193
http://dx.doi.org/10.1371/journal.pone.0076758
author Bergelson, Elika
Shvartsman, Michael
Idsardi, William J.
collection PubMed
description We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses. Using magnetoencephalography (MEG), we compared the mismatch response (MMN/MMF, an early, pre-attentive difference-detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones, and vowels. In each condition, participants heard two-formant vowels or musical intervals whose frequencies were 11, 12, or 24 semitones apart. In music, 12 semitones and 24 semitones are perceived as highly similar intervals (one and two octaves, respectively), while in speech, 12-semitone and 11-semitone formant separations are perceived as highly similar (both variants of the vowel in ‘cut’). Our results indicate that the MMN response mirrors the perceptual one: larger MMNs were elicited for the 12–11 pairing in the music conditions than in the language condition; conversely, larger MMNs were elicited for the 12–24 pairing in the language condition than in the music conditions, suggesting that within 250 ms of hearing complex auditory stimuli, the neural computation of similarity, like the behavioral one, differs significantly depending on whether the context is music or speech.
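As a side note on the interval arithmetic in the abstract: in equal temperament, an interval of n semitones corresponds to a frequency ratio of 2^(n/12). The short sketch below (an illustration, not part of the study's materials) shows why 12 and 24 semitones are octave-related (ratios of exactly 2:1 and 4:1) while 11 semitones is close in size to 12 but yields a non-octave ratio.

```python
def semitone_ratio(n: int) -> float:
    """Frequency ratio of an equal-tempered interval of n semitones."""
    return 2 ** (n / 12)

# 11 semitones -> ~1.888 (a major seventh, not octave-related)
# 12 semitones -> 2.0    (one octave)
# 24 semitones -> 4.0    (two octaves)
for n in (11, 12, 24):
    print(n, round(semitone_ratio(n), 3))
```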
format Online
Article
Text
id pubmed-3797141
institution National Center for Biotechnology Information
language English
publishDate 2013
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-3797141 2013-10-18 Differences in Mismatch Responses to Vowels and Musical Intervals: MEG Evidence Bergelson, Elika; Shvartsman, Michael; Idsardi, William J. PLoS One Research Article
Public Library of Science 2013-10-15 /pmc/articles/PMC3797141/ /pubmed/24143193 http://dx.doi.org/10.1371/journal.pone.0076758 Text en © 2013 Bergelson et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
title Differences in Mismatch Responses to Vowels and Musical Intervals: MEG Evidence
topic Research Article