Operator bias in software-aided bat call identification

Bibliographic Details
Main Authors: Fritsch, Georg; Bruckner, Alexander
Format: Online Article Text
Language: English
Published: BlackWell Publishing Ltd 2014
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4113294/
https://www.ncbi.nlm.nih.gov/pubmed/25077021
http://dx.doi.org/10.1002/ece3.1122
_version_ 1782328273524490240
author Fritsch, Georg
Bruckner, Alexander
author_facet Fritsch, Georg
Bruckner, Alexander
author_sort Fritsch, Georg
collection PubMed
description Software-aided identification facilitates the handling of large sets of bat call recordings, which is particularly useful in extensive acoustic surveys with several collaborators. Species lists are generated by “objective” automated classification. Subsequent validation consists of removing any species not believed to be present. So far, very little is known about the identification bias introduced by individual validation of operators with varying degrees of experience. Effects on the quality of the resulting data may be considerable, especially for bat species that are difficult to identify acoustically. Using the batcorder system as an example, we compared validation results from 21 volunteer operators with 1–26 years of experience of working on bats. All of them validated identical recordings of bats from eastern Austria. The final outcomes were individual validated lists of plausible species. A questionnaire was used to enquire about individual experience and validation procedures. In the course of species validation, the operators reduced the software's estimate of species richness. The most experienced operators accepted the smallest percentage of species from the software's output and validated conservatively with low interoperator variability. Operators with intermediate experience accepted the largest percentage, with larger variability. Sixty-six percent of the operators, mainly with intermediate and low levels of experience, reintroduced species to their validated lists which had been identified by the automated classification, but were finally excluded from the unvalidated lists. These were, in many cases, rare and infrequently recorded species. The average dissimilarity of the validated species lists dropped with increasing numbers of recordings, tending toward a level of ˜20%. Our results suggest that the operators succeeded in removing false positives and that they detected species that had been wrongly excluded during automated classification. Thus, manual validation of the software's unvalidated output is indispensable for reasonable results. However, although application seems easy, software-aided bat call identification requires an advanced level of operator experience. Identification bias during validation is a major issue, particularly in studies with more than one participant. Measures should be taken to standardize the validation process and harmonize the results of different operators.
format Online
Article
Text
id pubmed-4113294
institution National Center for Biotechnology Information
language English
publishDate 2014
publisher BlackWell Publishing Ltd
record_format MEDLINE/PubMed
spelling pubmed-41132942014-07-30 Operator bias in software-aided bat call identification Fritsch, Georg Bruckner, Alexander Ecol Evol Original Research Software-aided identification facilitates the handling of large sets of bat call recordings, which is particularly useful in extensive acoustic surveys with several collaborators. Species lists are generated by “objective” automated classification. Subsequent validation consists of removing any species not believed to be present. So far, very little is known about the identification bias introduced by individual validation of operators with varying degrees of experience. Effects on the quality of the resulting data may be considerable, especially for bat species that are difficult to identify acoustically. Using the batcorder system as an example, we compared validation results from 21 volunteer operators with 1–26 years of experience of working on bats. All of them validated identical recordings of bats from eastern Austria. The final outcomes were individual validated lists of plausible species. A questionnaire was used to enquire about individual experience and validation procedures. In the course of species validation, the operators reduced the software's estimate of species richness. The most experienced operators accepted the smallest percentage of species from the software's output and validated conservatively with low interoperator variability. Operators with intermediate experience accepted the largest percentage, with larger variability. Sixty-six percent of the operators, mainly with intermediate and low levels of experience, reintroduced species to their validated lists which had been identified by the automated classification, but were finally excluded from the unvalidated lists. These were, in many cases, rare and infrequently recorded species. The average dissimilarity of the validated species lists dropped with increasing numbers of recordings, tending toward a level of ˜20%. Our results suggest that the operators succeeded in removing false positives and that they detected species that had been wrongly excluded during automated classification. Thus, manual validation of the software's unvalidated output is indispensable for reasonable results. However, although application seems easy, software-aided bat call identification requires an advanced level of operator experience. Identification bias during validation is a major issue, particularly in studies with more than one participant. Measures should be taken to standardize the validation process and harmonize the results of different operators. BlackWell Publishing Ltd 2014-07 2014-05-30 /pmc/articles/PMC4113294/ /pubmed/25077021 http://dx.doi.org/10.1002/ece3.1122 Text en © 2014 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd. http://creativecommons.org/licenses/by/3.0/ This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Research
Fritsch, Georg
Bruckner, Alexander
Operator bias in software-aided bat call identification
title Operator bias in software-aided bat call identification
title_full Operator bias in software-aided bat call identification
title_fullStr Operator bias in software-aided bat call identification
title_full_unstemmed Operator bias in software-aided bat call identification
title_short Operator bias in software-aided bat call identification
title_sort operator bias in software-aided bat call identification
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4113294/
https://www.ncbi.nlm.nih.gov/pubmed/25077021
http://dx.doi.org/10.1002/ece3.1122
work_keys_str_mv AT fritschgeorg operatorbiasinsoftwareaidedbatcallidentification
AT bruckneralexander operatorbiasinsoftwareaidedbatcallidentification