
The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

Bibliographic Details
Main Authors: LaCroix, Arianna N., Diaz, Alvaro F., Rogalsky, Corianne
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2015
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4531212/
https://www.ncbi.nlm.nih.gov/pubmed/26321976
http://dx.doi.org/10.3389/fpsyg.2015.01138
collection PubMed
description The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.
id pubmed-4531212
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-4531212 (2015-08-28). Front Psychol (Psychology). Frontiers Media S.A., published online 2015-08-11. /pmc/articles/PMC4531212/ /pubmed/26321976 http://dx.doi.org/10.3389/fpsyg.2015.01138
Copyright © 2015 LaCroix, Diaz and Rogalsky. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Psychology