A Computational Model of Immanent Accent Salience in Tonal Music
Accents are local musical events that attract the attention of the listener, and can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. In the past, grouping, metrical and melodic accents were investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and by introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments involving 239 and 638 sonorities and 16 musicians and 5 experts in music theory, respectively. Average pairwise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when all raters were combined into a single measure expressing their consensus, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for the separate categories (r = 0.66). This suggests that raters may use strategies different from the individual metrical, melodic or harmonic accent models when marking musical events.
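The abstract reports average pairwise correlations between raters and correlations between a consensus rating and the model's predictions. The short Python sketch below illustrates how such statistics could be computed; it is not the authors' implementation, and the random data, the array names, and the use of the mean rating as the consensus measure are assumptions made purely for illustration.

```python
import numpy as np
from itertools import combinations

# Hypothetical data: one row per rater, one column per sonority
# (e.g., 16 musicians rating 239 sonorities, as in Experiment 1 of the abstract),
# and one model prediction of accent salience per sonority.
rng = np.random.default_rng(0)
ratings = rng.random((16, 239))
model_pred = rng.random(239)

# Average pairwise correlation between raters (cf. the reported 0.27-0.49).
pairwise = [np.corrcoef(ratings[i], ratings[j])[0, 1]
            for i, j in combinations(range(len(ratings)), 2)]
mean_pairwise_r = np.mean(pairwise)

# Combine all raters into a single consensus measure (here simply the mean rating)
# and correlate it with the model predictions (cf. the reported 0.43-0.62).
consensus = ratings.mean(axis=0)
consensus_vs_model_r = np.corrcoef(consensus, model_pred)[0, 1]

print(f"mean pairwise inter-rater r: {mean_pairwise_r:.2f}")
print(f"consensus vs. model r:       {consensus_vs_model_r:.2f}")
```

With real data, the choice of consensus measure and of correlation type would follow the paper's method section rather than this sketch.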
Main Authors: Bisesi, Erica; Friberg, Anders; Parncutt, Richard
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2019
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6449458/ https://www.ncbi.nlm.nih.gov/pubmed/30984047 http://dx.doi.org/10.3389/fpsyg.2019.00317
_version_ | 1783408851047940096 |
author | Bisesi, Erica; Friberg, Anders; Parncutt, Richard |
author_facet | Bisesi, Erica; Friberg, Anders; Parncutt, Richard |
author_sort | Bisesi, Erica |
collection | PubMed |
description | Accents are local musical events that attract the attention of the listener, and can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. In the past, grouping, metrical and melodic accents were investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and by introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments involving 239 and 638 sonorities and 16 musicians and 5 experts in music theory, respectively. Average pairwise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when all raters were combined into a single measure expressing their consensus, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for the separate categories (r = 0.66). This suggests that raters may use strategies different from the individual metrical, melodic or harmonic accent models when marking musical events. |
format | Online Article Text |
id | pubmed-6449458 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6449458 2019-04-12 A Computational Model of Immanent Accent Salience in Tonal Music Bisesi, Erica; Friberg, Anders; Parncutt, Richard Front Psychol Psychology Accents are local musical events that attract the attention of the listener, and can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. In the past, grouping, metrical and melodic accents were investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and by introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments involving 239 and 638 sonorities and 16 musicians and 5 experts in music theory, respectively. Average pairwise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when all raters were combined into a single measure expressing their consensus, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for the separate categories (r = 0.66). This suggests that raters may use strategies different from the individual metrical, melodic or harmonic accent models when marking musical events. Frontiers Media S.A. 2019-03-29 /pmc/articles/PMC6449458/ /pubmed/30984047 http://dx.doi.org/10.3389/fpsyg.2019.00317 Text en Copyright © 2019 Bisesi, Friberg and Parncutt. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology; Bisesi, Erica; Friberg, Anders; Parncutt, Richard; A Computational Model of Immanent Accent Salience in Tonal Music |
title | A Computational Model of Immanent Accent Salience in Tonal Music |
title_full | A Computational Model of Immanent Accent Salience in Tonal Music |
title_fullStr | A Computational Model of Immanent Accent Salience in Tonal Music |
title_full_unstemmed | A Computational Model of Immanent Accent Salience in Tonal Music |
title_short | A Computational Model of Immanent Accent Salience in Tonal Music |
title_sort | computational model of immanent accent salience in tonal music |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6449458/ https://www.ncbi.nlm.nih.gov/pubmed/30984047 http://dx.doi.org/10.3389/fpsyg.2019.00317 |
work_keys_str_mv | AT bisesierica acomputationalmodelofimmanentaccentsalienceintonalmusic AT friberganders acomputationalmodelofimmanentaccentsalienceintonalmusic AT parncuttrichard acomputationalmodelofimmanentaccentsalienceintonalmusic AT bisesierica computationalmodelofimmanentaccentsalienceintonalmusic AT friberganders computationalmodelofimmanentaccentsalienceintonalmusic AT parncuttrichard computationalmodelofimmanentaccentsalienceintonalmusic |