
An approach to rapid processing of camera trap images with minimal human input

1. Camera traps have become an extensively utilized tool in ecological research, but the manual processing of images created by a network of camera traps rapidly becomes an overwhelming task, even for small camera trap studies.
2. We used transfer learning to create convolutional neural network (CNN) models for identification and classification. By utilizing a small dataset with an average of 275 labeled images per species class, the model was able to distinguish between species and remove false triggers.
3. We trained the model to detect 17 object classes with individual species identification, reaching an accuracy of up to 92% and an average F1 score of 85%. Previous studies have suggested the need for thousands of images of each object class to reach results comparable to those achieved by human observers; however, we show that such accuracy can be achieved with fewer images.
4. With transfer learning and an ongoing camera trap study, a deep learning model can be successfully created by a small camera trap study. A generalizable model produced from an unbalanced class set can be utilized to extract trap events that can later be confirmed by human processors.
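The record does not reproduce the authors' code, but the setup described in points 2 and 4 maps onto a standard fine-tuning recipe. The following is a minimal sketch, not the paper's implementation: the ResNet-50 backbone, the `camera_trap/train/<class>/*.jpg` directory layout, and all hyperparameters are illustrative assumptions; only the 17-class count and the class-imbalance concern come from the abstract.

```python
# Sketch of transfer learning for camera-trap classification (assumptions noted above).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 17  # from the abstract: 17 object classes (species + false triggers)

# Standard ImageNet preprocessing so the pretrained weights see familiar input.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: camera_trap/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("camera_trap/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Transfer learning: load a pretrained backbone, freeze its feature extractor,
# and replace the classifier head with a fresh 17-way layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # only this layer trains

# The abstract notes the class set is unbalanced; inverse-frequency class
# weights in the loss are one common mitigation (an assumption, not the paper's method).
counts = torch.bincount(torch.tensor(train_set.targets), minlength=NUM_CLASSES)
weights = counts.sum() / (NUM_CLASSES * counts.clamp(min=1).float())
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
criterion.to(device)

for epoch in range(10):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Per-class precision and recall on a held-out set would then yield the kind of averaged F1 score the abstract reports (85%), e.g., via `sklearn.metrics.f1_score` with `average="macro"`.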

Bibliographic Details
Main Authors: Duggan, Matthew T., Groleau, Melissa F., Shealy, Ethan P., Self, Lillian S., Utter, Taylor E., Waller, Matthew M., Hall, Bryan C., Stone, Chris G., Anderson, Layne L., Mousseau, Timothy A.
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2021
Subjects: Original Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8427629/
https://www.ncbi.nlm.nih.gov/pubmed/34522360
http://dx.doi.org/10.1002/ece3.7970
Journal: Ecology and Evolution (Ecol Evol)
Published online: 2 August 2021
License: © 2021 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution, and reproduction in any medium, provided the original work is properly cited.