Fast and accurate annotation of acoustic signals with deep neural networks
Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. We here introduce DeepAudioSegmenter (DAS), a method that annotates acoustic signals across species based on a deep-learning derived hierarchical presentation of sound. We demonstrate the accuracy, robustness, and speed of DAS using acoustic signals with diverse characteristics from insects, birds, and mammals. DAS comes with a graphical user interface for annotating song, training the network, and for generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. DAS annotates song with high throughput and low latency for experimental interventions in realtime. Overall, DAS is a universal, versatile, and accessible tool for annotating acoustic communication signals.
Main Authors: Steinfath, Elsa; Palacios-Muñoz, Adrian; Rottschäfer, Julian R; Yuezak, Deniz; Clemens, Jan
Format: Online Article Text
Language: English
Published: eLife Sciences Publications, Ltd, 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8560090/ https://www.ncbi.nlm.nih.gov/pubmed/34723794 http://dx.doi.org/10.7554/eLife.68837
_version_ | 1784592874715742208
author | Steinfath, Elsa; Palacios-Muñoz, Adrian; Rottschäfer, Julian R; Yuezak, Deniz; Clemens, Jan
author_facet | Steinfath, Elsa; Palacios-Muñoz, Adrian; Rottschäfer, Julian R; Yuezak, Deniz; Clemens, Jan
author_sort | Steinfath, Elsa |
collection | PubMed |
description | Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. We here introduce DeepAudioSegmenter (DAS), a method that annotates acoustic signals across species based on a deep-learning derived hierarchical presentation of sound. We demonstrate the accuracy, robustness, and speed of DAS using acoustic signals with diverse characteristics from insects, birds, and mammals. DAS comes with a graphical user interface for annotating song, training the network, and for generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. DAS annotates song with high throughput and low latency for experimental interventions in realtime. Overall, DAS is a universal, versatile, and accessible tool for annotating acoustic communication signals. |
format | Online Article Text |
id | pubmed-8560090 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | eLife Sciences Publications, Ltd |
record_format | MEDLINE/PubMed |
spelling | pubmed-8560090 2021-11-03 Fast and accurate annotation of acoustic signals with deep neural networks. Steinfath, Elsa; Palacios-Muñoz, Adrian; Rottschäfer, Julian R; Yuezak, Deniz; Clemens, Jan. eLife, Neuroscience. Abstract as above. eLife Sciences Publications, Ltd 2021-11-01 /pmc/articles/PMC8560090/ /pubmed/34723794 http://dx.doi.org/10.7554/eLife.68837 Text en © 2021, Steinfath et al. This article is distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use and redistribution provided that the original author and source are credited.
spellingShingle | Neuroscience; Steinfath, Elsa; Palacios-Muñoz, Adrian; Rottschäfer, Julian R; Yuezak, Deniz; Clemens, Jan; Fast and accurate annotation of acoustic signals with deep neural networks
title | Fast and accurate annotation of acoustic signals with deep neural networks |
title_full | Fast and accurate annotation of acoustic signals with deep neural networks |
title_fullStr | Fast and accurate annotation of acoustic signals with deep neural networks |
title_full_unstemmed | Fast and accurate annotation of acoustic signals with deep neural networks |
title_short | Fast and accurate annotation of acoustic signals with deep neural networks |
title_sort | fast and accurate annotation of acoustic signals with deep neural networks |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8560090/ https://www.ncbi.nlm.nih.gov/pubmed/34723794 http://dx.doi.org/10.7554/eLife.68837 |
work_keys_str_mv | AT steinfathelsa fastandaccurateannotationofacousticsignalswithdeepneuralnetworks AT palaciosmunozadrian fastandaccurateannotationofacousticsignalswithdeepneuralnetworks AT rottschaferjulianr fastandaccurateannotationofacousticsignalswithdeepneuralnetworks AT yuezakdeniz fastandaccurateannotationofacousticsignalswithdeepneuralnetworks AT clemensjan fastandaccurateannotationofacousticsignalswithdeepneuralnetworks |