
Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires

Increases in the scale and complexity of behavioral data pose a growing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.
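For readers unfamiliar with the method the abstract summarizes, the sketch below shows the skeleton of a variational autoencoder: an encoder that compresses each input (here, a flattened spectrogram) into a low-dimensional latent feature vector, and a decoder that reconstructs the input from that vector. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the input size (128 × 128 spectrograms), layer widths, 32-dimensional latent space, and Gaussian (mean-squared-error) reconstruction term are all assumptions made for the example.

    # Minimal VAE sketch (illustrative only; shapes, layer sizes, and latent
    # dimensionality are assumptions, not the paper's implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpectrogramVAE(nn.Module):
        def __init__(self, n_pixels=128 * 128, latent_dim=32):
            super().__init__()
            # Encoder maps a flattened spectrogram to the parameters of a
            # diagonal Gaussian over the latent features.
            self.encoder = nn.Sequential(
                nn.Linear(n_pixels, 1024), nn.ReLU(),
                nn.Linear(1024, 256), nn.ReLU(),
            )
            self.to_mu = nn.Linear(256, latent_dim)
            self.to_logvar = nn.Linear(256, latent_dim)
            # Decoder maps a latent sample back to a reconstructed spectrogram.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, 1024), nn.ReLU(),
                nn.Linear(1024, n_pixels),
            )

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(z), mu, logvar

    def elbo_loss(x, x_hat, mu, logvar):
        # Negative ELBO: reconstruction error plus KL divergence between the
        # approximate posterior and the standard normal prior.
        recon = F.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    # Usage: x is a batch of flattened 128x128 spectrograms, shape (B, 16384).
    model = SpectrogramVAE()
    x = torch.rand(8, 128 * 128)
    x_hat, mu, logvar = model(x)
    loss = elbo_loss(x, x_hat, mu, logvar)

After training on a corpus of vocalization spectrograms, the encoder means (mu) can serve as learned low-dimensional features; this is, roughly, the recipe the abstract describes evaluating against handpicked acoustic features.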

Bibliographic Details
Main Authors: Goffinet, Jack; Brudner, Samuel; Mooney, Richard; Pearson, John
Format: Online Article Text
Language: English
Published: eLife Sciences Publications, Ltd, 14 May 2021
Subjects: Computational and Systems Biology
Rights: © 2021, Goffinet et al. Distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use and redistribution provided that the original author and source are credited.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8213406/
https://www.ncbi.nlm.nih.gov/pubmed/33988503
http://dx.doi.org/10.7554/eLife.67855