New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music
Interactive music uses wearable sensors (i.e., gestural interfaces, GIs) and biometric datasets to reinvent traditional human–computer interaction and enhance music composition. In recent years, machine learning (ML) has become important for the artform, because ML helps process the complex biometric datasets produced by GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML tool amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models, but has neither informed optimal ML model choice within music nor compared model performance. Wekinator offers several ML models. To address this, we used Wekinator and the Myo armband GI to study three performance gestures for piano practice. Using these, we trained all models available in Wekinator and investigated their accuracy, how gesture representation affects model accuracy, and whether optimisation can arise. Results show that neural networks are the strongest continuous classifiers; mapping behaviour differs amongst continuous models; optimisation can occur; and gesture representation disparately affects model mapping behaviour, impacting music practice.
| Main authors: | Rhodes, Chris; Allmendinger, Richard; Climent, Ricardo |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2020 |
| Subjects: | Article |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7762429/ ; https://www.ncbi.nlm.nih.gov/pubmed/33297582 ; http://dx.doi.org/10.3390/e22121384 |
| Field | Value |
|---|---|
| _version_ | 1783627803613200384 |
author | Rhodes, Chris; Allmendinger, Richard; Climent, Ricardo |
author_facet | Rhodes, Chris; Allmendinger, Richard; Climent, Ricardo |
author_sort | Rhodes, Chris |
collection | PubMed |
description | Interactive music uses wearable sensors (i.e., gestural interfaces, GIs) and biometric datasets to reinvent traditional human–computer interaction and enhance music composition. In recent years, machine learning (ML) has become important for the artform, because ML helps process the complex biometric datasets produced by GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML tool amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models, but has neither informed optimal ML model choice within music nor compared model performance. Wekinator offers several ML models. To address this, we used Wekinator and the Myo armband GI to study three performance gestures for piano practice. Using these, we trained all models available in Wekinator and investigated their accuracy, how gesture representation affects model accuracy, and whether optimisation can arise. Results show that neural networks are the strongest continuous classifiers; mapping behaviour differs amongst continuous models; optimisation can occur; and gesture representation disparately affects model mapping behaviour, impacting music practice. |
format | Online Article Text |
id | pubmed-7762429 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-77624292021-02-24 New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music Rhodes, Chris Allmendinger, Richard Climent, Ricardo Entropy (Basel) Article Interactive music uses wearable sensors (i.e., gestural interfaces, GIs) and biometric datasets to reinvent traditional human–computer interaction and enhance music composition. In recent years, machine learning (ML) has become important for the artform, because ML helps process the complex biometric datasets produced by GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML tool amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models, but has neither informed optimal ML model choice within music nor compared model performance. Wekinator offers several ML models. To address this, we used Wekinator and the Myo armband GI to study three performance gestures for piano practice. Using these, we trained all models available in Wekinator and investigated their accuracy, how gesture representation affects model accuracy, and whether optimisation can arise. Results show that neural networks are the strongest continuous classifiers; mapping behaviour differs amongst continuous models; optimisation can occur; and gesture representation disparately affects model mapping behaviour, impacting music practice. MDPI 2020-12-07 /pmc/articles/PMC7762429/ /pubmed/33297582 http://dx.doi.org/10.3390/e22121384 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. 
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Rhodes, Chris Allmendinger, Richard Climent, Ricardo New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music |
title | New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music |
title_full | New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music |
title_fullStr | New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music |
title_full_unstemmed | New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music |
title_short | New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music |
title_sort | new interfaces and approaches to machine learning when classifying gestures within music |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7762429/ https://www.ncbi.nlm.nih.gov/pubmed/33297582 http://dx.doi.org/10.3390/e22121384 |
work_keys_str_mv | AT rhodeschris newinterfacesandapproachestomachinelearningwhenclassifyinggestureswithinmusic AT allmendingerrichard newinterfacesandapproachestomachinelearningwhenclassifyinggestureswithinmusic AT climentricardo newinterfacesandapproachestomachinelearningwhenclassifyinggestureswithinmusic |