
Automated location invariant animal detection in camera trap images using publicly available data sources


Bibliographic Details
Main Authors: Shepley, Andrew; Falzon, Greg; Meek, Paul; Kwan, Paul
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8093655/
https://www.ncbi.nlm.nih.gov/pubmed/33976825
http://dx.doi.org/10.1002/ece3.7344
author Shepley, Andrew
Falzon, Greg
Meek, Paul
Kwan, Paul
collection PubMed
description
1. A time‐consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale due to lack of location invariance when transferring models between sites. This prevents optimal use of ecological data, resulting in significant expenditure of time and resources to annotate and retrain deep learning models.
2. We present a method ecologists can use to develop optimized location invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability in training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high-accuracy, domain‐specific applications.
3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs from the image‐sharing websites FlickR and iNaturalist (FiN) to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out‐of‐sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training.
4. In all experiments, the mean Average Precision (mAP) of the FiN‐trained models was significantly higher (82.33%–88.59%) than that achieved by the models trained only on camera trap datasets (38.5%–66.74%). Infusion further improved mAP by 1.78%–32.08%.
5. Ecologists can use FiN images to train deep learning object detection solutions for camera trap image processing, yielding location invariant, robust, out‐of‐the‐box software. Models can be further optimized by infusing 5%–10% camera trap images into the training data. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available in this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
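The infusion step described in the abstract can be sketched as a simple dataset-mixing routine: start from the FiN (FlickR/iNaturalist) training set and add a small random sample of camera trap images. This is an illustrative sketch only; the function and argument names (`infuse_training_set`, `infusion_rate`) are hypothetical and not taken from the study's released code, and the sketch assumes the 5%–10% rate is applied to the available camera trap pool, whereas the paper may compute it relative to the full training set.

```python
import random

def infuse_training_set(fin_images, camera_trap_images, infusion_rate=0.10, seed=42):
    """Mix a small random subset of camera trap images into a FiN
    training set (illustrative sketch of the paper's 5%-10% infusion
    idea; names and rate interpretation are assumptions)."""
    if not 0.0 <= infusion_rate <= 1.0:
        raise ValueError("infusion_rate must be in [0, 1]")
    rng = random.Random(seed)  # fixed seed for a reproducible split
    k = int(len(camera_trap_images) * infusion_rate)
    infused = list(fin_images) + rng.sample(list(camera_trap_images), k)
    rng.shuffle(infused)  # interleave the two sources before training
    return infused

# Hypothetical file lists standing in for real annotated image sets.
fin = [f"fin_{i:04d}.jpg" for i in range(900)]
trap = [f"trap_{i:04d}.jpg" for i in range(200)]
train = infuse_training_set(fin, trap, infusion_rate=0.10)
print(len(train))  # 900 FiN images + 20 camera trap images = 920
```

The resulting list would then feed whatever object detection training pipeline is in use; only the composition of the training data changes.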
format Online
Article
Text
id pubmed-8093655
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-8093655 2021-05-10
Automated location invariant animal detection in camera trap images using publicly available data sources
Shepley, Andrew; Falzon, Greg; Meek, Paul; Kwan, Paul
Ecol Evol, Original Research. John Wiley and Sons Inc. 2021-03-10
/pmc/articles/PMC8093655/ /pubmed/33976825 http://dx.doi.org/10.1002/ece3.7344
Text en © 2021 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution, and reproduction in any medium, provided the original work is properly cited.
title Automated location invariant animal detection in camera trap images using publicly available data sources
topic Original Research