Deep audio embeddings for vocalisation clustering
The study of non-human animals’ communication systems generally relies on the transcription of vocal sequences using a finite set of discrete units. This set is referred to as a vocal repertoire, which is specific to a species or a sub-group of a species. When conducted by human experts, the formal...
Main Authors: Best, Paul; Paris, Sébastien; Glotin, Hervé; Marxer, Ricard
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10332598/ ; https://www.ncbi.nlm.nih.gov/pubmed/37428759 ; http://dx.doi.org/10.1371/journal.pone.0283396
_version_ | 1785070468977393664 |
author | Best, Paul; Paris, Sébastien; Glotin, Hervé; Marxer, Ricard |
author_facet | Best, Paul; Paris, Sébastien; Glotin, Hervé; Marxer, Ricard |
author_sort | Best, Paul |
collection | PubMed |
description | The study of non-human animals’ communication systems generally relies on the transcription of vocal sequences using a finite set of discrete units. This set is referred to as a vocal repertoire, which is specific to a species or a sub-group of a species. When conducted by human experts, the formal description of vocal repertoires can be laborious and/or biased. This motivates computerised assistance for this procedure, for which machine learning algorithms represent a good opportunity. Unsupervised clustering algorithms are suited for grouping close points together, provided a relevant representation. This paper therefore studies a new method for encoding vocalisations, allowing for automatic clustering to alleviate vocal repertoire characterisation. Borrowing from deep representation learning, we use a convolutional auto-encoder network to learn an abstract representation of vocalisations. We report on the quality of the learnt representation, as well as of state-of-the-art methods, by quantifying their agreement with expert-labelled vocalisation types from 8 datasets of other studies across 6 species (birds and marine mammals). With this benchmark, we demonstrate that using auto-encoders improves the relevance of vocalisation representation, which serves repertoire characterisation using a very limited number of settings. We also publish a Python package for the bioacoustic community to train their own vocalisation auto-encoders or use a pretrained encoder to browse vocal repertoires and ease unit-wise annotation. A minimal sketch of this embed-and-cluster pipeline is given after the record below. |
format | Online Article Text |
id | pubmed-10332598 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-10332598 2023-07-11 Deep audio embeddings for vocalisation clustering Best, Paul; Paris, Sébastien; Glotin, Hervé; Marxer, Ricard PLoS One Research Article Public Library of Science 2023-07-10 /pmc/articles/PMC10332598/ /pubmed/37428759 http://dx.doi.org/10.1371/journal.pone.0283396 Text en © 2023 Best et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article; Best, Paul; Paris, Sébastien; Glotin, Hervé; Marxer, Ricard; Deep audio embeddings for vocalisation clustering |
title | Deep audio embeddings for vocalisation clustering |
title_full | Deep audio embeddings for vocalisation clustering |
title_fullStr | Deep audio embeddings for vocalisation clustering |
title_full_unstemmed | Deep audio embeddings for vocalisation clustering |
title_short | Deep audio embeddings for vocalisation clustering |
title_sort | deep audio embeddings for vocalisation clustering |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10332598/ https://www.ncbi.nlm.nih.gov/pubmed/37428759 http://dx.doi.org/10.1371/journal.pone.0283396 |
work_keys_str_mv | AT bestpaul deepaudioembeddingsforvocalisationclustering AT parissebastien deepaudioembeddingsforvocalisationclustering AT glotinherve deepaudioembeddingsforvocalisationclustering AT marxerricard deepaudioembeddingsforvocalisationclustering |
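The pipeline summarised in the description above (encode each vocalisation's spectrogram with a convolutional auto-encoder, then cluster the bottleneck embeddings) can be sketched in a few lines of Python. The sketch below is a minimal illustration under assumed settings: a 64x64 log-mel spectrogram input, a 16-dimensional latent space, k-means with 8 clusters, and random placeholder data in place of real recordings. It is not the authors' published package nor their exact architecture.

```python
# Minimal sketch of the embed-and-cluster idea: a convolutional auto-encoder
# compresses each vocalisation spectrogram to a latent vector, and the latent
# vectors are clustered. Sizes, architecture and the clustering algorithm are
# illustrative assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAutoEncoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        # Encoder: 1x64x64 spectrogram -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64x8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder: latent vector -> reconstructed spectrogram
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Placeholder batch of log-mel spectrograms; real data would come from
# segmented vocalisations. Shape: (n_vocalisations, 1, 64, 64).
specs = torch.rand(256, 1, 64, 64)

model = ConvAutoEncoder(latent_dim=16)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Unsupervised training on reconstruction error only.
for epoch in range(10):
    optimiser.zero_grad()
    recon, _ = model(specs)
    loss = loss_fn(recon, specs)
    loss.backward()
    optimiser.step()

# Embed every vocalisation and cluster the embeddings.
with torch.no_grad():
    _, embeddings = model(specs)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(embeddings.numpy())
print(labels[:20])
```

In a study like the one described above, the resulting cluster labels would then be compared against expert-labelled vocalisation types, for example with a clustering agreement score such as normalised mutual information, which is the kind of agreement the reported benchmark quantifies.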