
On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common

Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning either of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow's pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of "the sound that something makes," in order to evaluate the system's auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects.
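The evaluation protocol summarized above — train a regressor for one affect dimension (arousal or valence) on acoustic features from one audio domain, test it on another, and score the predictions by their correlation with observer annotations — can be sketched in a few lines. Below is a minimal illustration in Python with scikit-learn; all data are synthetic stand-ins, and names such as X_sound or y_speech are hypothetical placeholders, not artifacts of the paper:

```python
# Sketch of cross-domain affect regression: fit on one domain (sound),
# evaluate on another (enacted speech), score by Pearson correlation
# with observer annotations. Data here are random placeholders, so the
# printed correlation is meaningless; real inputs would be acoustic
# descriptors (energy, F0, spectral statistics, ...) per audio clip.
import numpy as np
from scipy.stats import pearsonr
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical data: rows = clips, columns = acoustic descriptors;
# labels = mean observer ratings of one affect dimension in [-1, 1].
X_sound, y_sound = rng.normal(size=(200, 40)), rng.uniform(-1, 1, 200)
X_speech, y_speech = rng.normal(size=(150, 40)), rng.uniform(-1, 1, 150)

# Train on the source domain, predict on the target domain.
model = make_pipeline(StandardScaler(), SVR(kernel="linear"))
model.fit(X_sound, y_sound)
pred = model.predict(X_speech)

r, p = pearsonr(y_speech, pred)
print(f"cross-domain correlation r = {r:.2f} (p = {p:.3f})")
```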


Bibliographic Details
Main Authors: Weninger, Felix; Eyben, Florian; Schuller, Björn W.; Mortillaro, Marcello; Scherer, Klaus R.
Format: Online Article, Text
Language: English
Published: Frontiers Media S.A., 2013-05-27
Journal: Front Psychol
Subjects: Psychology
License: Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3664314/
https://www.ncbi.nlm.nih.gov/pubmed/23750144
http://dx.doi.org/10.3389/fpsyg.2013.00292