
Time–frequency scattering accurately models auditory similarities between instrumental playing techniques

Instrumental playing techniques such as vibratos, glissandos, and trills often denote musical expressivity, both in classical and folk contexts. However, most existing approaches to music similarity retrieval fail to describe timbre beyond the so-called “ordinary” technique, use instrument identity a...

Full description

Bibliographic Details
Main Authors: Lostanlen, Vincent, El-Hajj, Christian, Rossignol, Mathias, Lafay, Grégoire, Andén, Joakim, Lagrange, Mathieu
Format: Online Article Text
Language: English
Published: Springer International Publishing 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7801324/
https://www.ncbi.nlm.nih.gov/pubmed/33488686
http://dx.doi.org/10.1186/s13636-020-00187-z
_version_ 1783635552196624384
author Lostanlen, Vincent
El-Hajj, Christian
Rossignol, Mathias
Lafay, Grégoire
Andén, Joakim
Lagrange, Mathieu
author_facet Lostanlen, Vincent
El-Hajj, Christian
Rossignol, Mathias
Lafay, Grégoire
Andén, Joakim
Lagrange, Mathieu
author_sort Lostanlen, Vincent
collection PubMed
description Instrumental playing techniques such as vibratos, glissandos, and trills often denote musical expressivity, both in classical and folk contexts. However, most existing approaches to music similarity retrieval fail to describe timbre beyond the so-called “ordinary” technique, use instrument identity as a proxy for timbre quality, and do not allow for customization to the perceptual idiosyncrasies of a new subject. In this article, we ask 31 human participants to organize 78 isolated notes into a set of timbre clusters. Analyzing their responses suggests that timbre perception operates within a more flexible taxonomy than those provided by instruments or playing techniques alone. In addition, we propose a machine listening model to recover the cluster graph of auditory similarities across instruments, mutes, and techniques. Our model relies on the joint time–frequency scattering transform to extract spectrotemporal modulations as acoustic features. Furthermore, it minimizes triplet loss in the cluster graph by means of the large-margin nearest neighbor (LMNN) metric learning algorithm. Over a dataset of 9346 isolated notes, we report a state-of-the-art average precision at rank five (AP@5) of 99.0% ± 1%. An ablation study demonstrates that removing either the joint time–frequency scattering transform or the metric learning algorithm noticeably degrades performance.
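The description above names three technical ingredients: spectrotemporal modulation features from the joint time–frequency scattering transform, a metric learned with LMNN by minimizing a triplet loss over the cluster graph, and evaluation by average precision at rank five (AP@5). As a rough, non-authoritative illustration of the pipeline's shape only, the Python sketch below substitutes a simplified 2-D Gabor filter bank over a log-magnitude spectrogram for the scattering transform, and plain Euclidean distance for the LMNN-learned metric; every function name and parameter here is an assumption for illustration, not the authors' code (joint time–frequency scattering implementations exist in libraries such as Kymatio).

```python
# Illustrative sketch only; assumed names and parameters throughout.
# Stage 1 approximates spectrotemporal modulation features with a 2-D
# Gabor filter bank (the paper uses joint time-frequency scattering).
# Stage 2 scores retrieval with average precision at rank five (AP@5),
# using Euclidean distance in place of the LMNN-learned metric.
import numpy as np
from scipy.signal import stft, fftconvolve

def gabor_2d(rate, scale, shape=(16, 16)):
    """Complex 2-D Gabor kernel tuned to a temporal modulation rate
    (cycles per STFT frame) and a spectral scale (cycles per bin)."""
    f = np.arange(shape[0]) - shape[0] // 2
    t = np.arange(shape[1]) - shape[1] // 2
    F, T = np.meshgrid(f, t, indexing="ij")
    envelope = np.exp(-(T ** 2 + F ** 2) / (2.0 * (shape[0] / 4.0) ** 2))
    carrier = np.exp(2j * np.pi * (rate * T + scale * F))
    return envelope * carrier

def modulation_features(x, sr, rates_hz=(2, 4, 8, 16), scales=(0.1, 0.2)):
    """Mean spectrotemporal modulation energy of signal x for each
    (rate, scale) pair -- a crude stand-in for scattering coefficients."""
    hop = 256
    _, _, Z = stft(x, fs=sr, nperseg=1024, noverlap=1024 - hop)
    logspec = np.log1p(np.abs(Z))               # log-magnitude spectrogram
    feats = []
    for r in rates_hz:
        for s in scales:
            kernel = gabor_2d(r * hop / sr, s)  # Hz -> cycles per frame
            response = np.abs(fftconvolve(logspec, kernel, mode="same"))
            feats.append(response.mean())
    return np.array(feats)

def ap_at_5(query_feat, query_cluster, db_feats, db_clusters):
    """Fraction of the query's five nearest neighbors that share its
    timbre cluster; the paper averages this score over all queries."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    top5 = np.argsort(dists)[:5]
    return float(np.mean(db_clusters[top5] == query_cluster))
```

Under this reading, the ablation study the description mentions corresponds to swapping out either stage: replacing the scattering-derived (here, Gabor-derived) features with simpler ones, or retrieving with an unlearned distance instead of the LMNN metric.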
format Online
Article
Text
id pubmed-7801324
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-7801324 2021-01-21 Time–frequency scattering accurately models auditory similarities between instrumental playing techniques Lostanlen, Vincent El-Hajj, Christian Rossignol, Mathias Lafay, Grégoire Andén, Joakim Lagrange, Mathieu EURASIP J Audio Speech Music Process Research Instrumental playing techniques such as vibratos, glissandos, and trills often denote musical expressivity, both in classical and folk contexts. However, most existing approaches to music similarity retrieval fail to describe timbre beyond the so-called “ordinary” technique, use instrument identity as a proxy for timbre quality, and do not allow for customization to the perceptual idiosyncrasies of a new subject. In this article, we ask 31 human participants to organize 78 isolated notes into a set of timbre clusters. Analyzing their responses suggests that timbre perception operates within a more flexible taxonomy than those provided by instruments or playing techniques alone. In addition, we propose a machine listening model to recover the cluster graph of auditory similarities across instruments, mutes, and techniques. Our model relies on the joint time–frequency scattering transform to extract spectrotemporal modulations as acoustic features. Furthermore, it minimizes triplet loss in the cluster graph by means of the large-margin nearest neighbor (LMNN) metric learning algorithm. Over a dataset of 9346 isolated notes, we report a state-of-the-art average precision at rank five (AP@5) of 99.0% ± 1%. An ablation study demonstrates that removing either the joint time–frequency scattering transform or the metric learning algorithm noticeably degrades performance. Springer International Publishing 2021-01-11 2021 /pmc/articles/PMC7801324/ /pubmed/33488686 http://dx.doi.org/10.1186/s13636-020-00187-z Text en © The Author(s) 2021 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Research
Lostanlen, Vincent
El-Hajj, Christian
Rossignol, Mathias
Lafay, Grégoire
Andén, Joakim
Lagrange, Mathieu
Time–frequency scattering accurately models auditory similarities between instrumental playing techniques
title Time–frequency scattering accurately models auditory similarities between instrumental playing techniques
title_full Time–frequency scattering accurately models auditory similarities between instrumental playing techniques
title_fullStr Time–frequency scattering accurately models auditory similarities between instrumental playing techniques
title_full_unstemmed Time–frequency scattering accurately models auditory similarities between instrumental playing techniques
title_short Time–frequency scattering accurately models auditory similarities between instrumental playing techniques
title_sort time–frequency scattering accurately models auditory similarities between instrumental playing techniques
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7801324/
https://www.ncbi.nlm.nih.gov/pubmed/33488686
http://dx.doi.org/10.1186/s13636-020-00187-z
work_keys_str_mv AT lostanlenvincent timefrequencyscatteringaccuratelymodelsauditorysimilaritiesbetweeninstrumentalplayingtechniques
AT elhajjchristian timefrequencyscatteringaccuratelymodelsauditorysimilaritiesbetweeninstrumentalplayingtechniques
AT rossignolmathias timefrequencyscatteringaccuratelymodelsauditorysimilaritiesbetweeninstrumentalplayingtechniques
AT lafaygregoire timefrequencyscatteringaccuratelymodelsauditorysimilaritiesbetweeninstrumentalplayingtechniques
AT andenjoakim timefrequencyscatteringaccuratelymodelsauditorysimilaritiesbetweeninstrumentalplayingtechniques
AT lagrangemathieu timefrequencyscatteringaccuratelymodelsauditorysimilaritiesbetweeninstrumentalplayingtechniques