
Understanding Musical Predictions With an Embodied Interface for Musical Machine Learning

Machine-learning models of music often exist outside the worlds of musical performance practice and abstracted from the physical gestures of musicians. In this work, we consider how a recurrent neural network (RNN) model of simple music gestures may be integrated into a physical instrument so that predictions are sonically and physically entwined with the performer's actions. We introduce EMPI, an embodied musical prediction interface that simplifies musical interaction and prediction to just one dimension of continuous input and output. The predictive model is a mixture density RNN trained to estimate the performer's next physical input action and the time at which this will occur. Predictions are represented sonically through synthesized audio, and physically with a motorized output indicator. We use EMPI to investigate how performers understand and exploit different predictive models to make music through a controlled study of performances with different models and levels of physical feedback. We show that while performers often favor a model trained on human-sourced data, they find different musical affordances in models trained on synthetic, and even random, data. Physical representation of predictions seemed to affect the length of performances. This work contributes new understandings of how musicians use generative ML models in real-time performance backed up by experimental evidence. We argue that a constrained musical interface can expose the affordances of embodied predictive interactions.
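
The predictive model named in the abstract, a mixture density RNN estimating the performer's next one-dimensional input position and the time at which it will occur, can be illustrated with a short sketch. The PyTorch code below is not the authors' implementation; the network sizes, mixture count, and diagonal-Gaussian parameterization are hypothetical choices for illustration only, and the full article should be consulted for the model actually used in EMPI.

    # A minimal, hypothetical MDN-RNN sketch: it predicts the performer's next
    # (position, time-delta) pair as a Gaussian mixture, as the abstract describes.
    # All sizes and parameterization choices here are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.distributions as D

    class MDNRNN(nn.Module):
        def __init__(self, hidden=64, k=5, dim=2):   # dim = (position, dt)
            super().__init__()
            self.k, self.dim = k, dim
            self.rnn = nn.LSTM(dim, hidden, batch_first=True)
            # Per mixture component: one weight logit, plus a mean and a
            # log-standard-deviation for each output dimension.
            self.head = nn.Linear(hidden, k * (1 + 2 * dim))

        def forward(self, x, state=None):
            h, state = self.rnn(x, state)
            p = self.head(h)
            k, d = self.k, self.dim
            logits = p[..., :k]
            means = p[..., k:k + k * d].reshape(*p.shape[:-1], k, d)
            log_std = p[..., k + k * d:].reshape(*p.shape[:-1], k, d)
            return logits, means, log_std, state

    def mixture(logits, means, log_std):
        # Gaussian mixture distribution over the next (position, dt) pair.
        comp = D.Independent(D.Normal(means, log_std.exp()), 1)
        return D.MixtureSameFamily(D.Categorical(logits=logits), comp)

    # Training minimizes the negative log-likelihood of the next step;
    # prediction samples the next (position, dt) from the mixture.
    model = MDNRNN()
    seq = torch.rand(1, 32, 2)                      # toy (position, dt) sequence
    logits, means, log_std, _ = model(seq[:, :-1])
    loss = -mixture(logits, means, log_std).log_prob(seq[:, 1:]).mean()
    next_step = mixture(logits[:, -1], means[:, -1], log_std[:, -1]).sample()

In a setup like the one the abstract describes, a sampled (position, time-delta) pair would then drive the synthesized audio and the motorized output indicator.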


Bibliographic Details
Main Authors: Martin, Charles Patrick; Glette, Kyrre; Nygaard, Tønnes Frostad; Torresen, Jim
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861300/
https://www.ncbi.nlm.nih.gov/pubmed/33733126
http://dx.doi.org/10.3389/frai.2020.00006
Collection: PubMed
Record ID: pubmed-7861300
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Artif Intell
Published Online: 2020-03-03
Rights: Copyright © 2020 Martin, Glette, Nygaard and Torresen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). Use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.