
Three critical factors affecting automated image species recognition performance for camera traps


Bibliographic Details
Main Authors: Schneider, Stefan; Greenberg, Saul; Taylor, Graham W.; Kremer, Stefan C.
Format: Online Article (Text)
Language: English
Published: John Wiley and Sons Inc., 2020
Subjects: Original Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7141055/
https://www.ncbi.nlm.nih.gov/pubmed/32274005
http://dx.doi.org/10.1002/ece3.6147
Collection: PubMed
Description: Ecological camera traps are increasingly used by wildlife biologists to unobtrusively monitor an ecosystem's animal population. However, manual inspection of the images produced is expensive, laborious, and time‐consuming. The success of deep learning systems using camera trap images has been previously explored in preliminary stages. These studies, however, are lacking in their practicality. They are primarily focused on extremely large datasets, often millions of images, and there is little to no focus on performance when tasked with species identification in new locations not seen during training. Our goal was to test the capabilities of deep learning systems trained on camera trap images using modestly sized training data, compare performance when considering unseen background locations, and quantify the gradient of lower bound performance to provide a guideline of data requirements in correspondence to performance expectations. We use a dataset provided by Parks Canada containing 47,279 images collected from 36 unique geographic locations across multiple environments. Images represent 55 animal species and human activity with high class imbalance. We trained, tested, and compared the capabilities of six deep learning computer vision networks using transfer learning and image augmentation: DenseNet201, Inception‐ResNet‐V3, InceptionV3, NASNetMobile, MobileNetV2, and Xception. We compare overall performance on "trained" locations, where DenseNet201 performed best with 95.6% top‐1 accuracy, showing promise for deep learning methods for smaller-scale research efforts. Using trained locations, classifications with <500 images had low and highly variable recall of 0.750 ± 0.329, while classifications with over 1,000 images had a high and stable recall of 0.971 ± 0.0137. Models tasked with classifying species from untrained locations were less accurate, with DenseNet201 performing best with 68.7% top‐1 accuracy. Finally, we provide an open repository where ecologists can insert their image data to train and test custom species detection models for their desired ecological domain.
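The abstract's central quantitative finding is that per-class recall stabilizes once a class has enough training images (roughly 1,000 or more in this study). A minimal sketch of how per-class recall can be computed and grouped by training-set size — the labels, class names, and the 1,000-image threshold default below are illustrative assumptions, not code from the paper's repository:

```python
from collections import defaultdict

def per_class_recall(y_true, y_pred):
    """Recall for each class: correctly predicted / actually present."""
    correct = defaultdict(int)
    actual = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        actual[t] += 1
        if t == p:
            correct[t] += 1
    return {c: correct[c] / actual[c] for c in actual}

def split_by_training_size(recalls, train_counts, threshold=1000):
    """Separate per-class recalls into well-sampled vs. sparse classes."""
    well_sampled = [r for c, r in recalls.items() if train_counts[c] >= threshold]
    sparse = [r for c, r in recalls.items() if train_counts[c] < threshold]
    return well_sampled, sparse

# Hypothetical test-set labels for three species classes
y_true = ["elk", "elk", "wolf", "wolf", "bear", "bear"]
y_pred = ["elk", "elk", "wolf", "bear", "bear", "wolf"]
recalls = per_class_recall(y_true, y_pred)  # elk: 1.0, wolf: 0.5, bear: 0.5
```

Grouping the resulting recalls by each class's training-image count is what lets one report size-conditioned statistics like the 0.750 ± 0.329 (under 500 images) versus 0.971 ± 0.0137 (over 1,000 images) figures above.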
ID: pubmed-7141055
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Ecol Evol (Original Research)
Published online: 2020-03-07
License: © 2020 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd. This is an open access article under the terms of the http://creativecommons.org/licenses/by/4.0/ License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.