Aggressive vocal expressions—an investigation of their underlying neural network

Bibliographic Details
Main Authors: Klaas, Hannah S., Frühholz, Sascha, Grandjean, Didier
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2015
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4426728/
https://www.ncbi.nlm.nih.gov/pubmed/26029069
http://dx.doi.org/10.3389/fnbeh.2015.00121
Description
Summary: Recent neural network models for the production of primate vocalizations are largely based on research in nonhuman primates. These models do not yet seem fully capable of explaining the neural network dynamics underlying different types of human vocalizations in particular. Unlike animal vocalizations, human affective vocalizations might involve higher levels of vocal control and monitoring demands, especially in the case of more complex vocal expressions of emotions superimposed on speech. Here we therefore investigated the functional cortico-subcortical network underlying the production of different types (evoked vs. repetition) of human affective vocalizations in terms of affective prosody, specifically examining the aggressive tone of voice while producing meaningless speech-like utterances. Functional magnetic resonance imaging revealed, first, that bilateral auditory cortices showed close functional interconnectivity during affective vocalizations, pointing to a bilateral exchange of relevant acoustic information about the produced vocalizations. Second, bilateral motor cortices (MC) that directly control vocal motor behavior showed functional connectivity to the right inferior frontal gyrus (IFG) and the right superior temporal gyrus (STG). Thus, vocal motor behavior during affective vocalizations seems to be controlled by a right-lateralized network that provides vocal monitoring (IFG), probably based on auditory feedback processing (STG). Third, the basal ganglia (BG) showed both positive and negative modulatory connectivity with several frontal (ACC, IFG) and temporal (STG) brain regions. Finally, the repetition of affective prosody, compared to evoked vocalizations, revealed a more extended neural network, probably reflecting higher control and vocal monitoring demands. Taken together, the functional brain network underlying human affective vocalizations revealed several features that have so far been neglected in models of primate vocalizations.