The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application
Main Authors: Maile, Rachel E.; Duggan, Matthew T.; Mousseau, Timothy A.
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10477951/ https://www.ncbi.nlm.nih.gov/pubmed/37674649 http://dx.doi.org/10.1002/ece3.10454
_version_ | 1785101245346742272 |
author | Maile, Rachel E.; Duggan, Matthew T.; Mousseau, Timothy A. |
author_facet | Maile, Rachel E.; Duggan, Matthew T.; Mousseau, Timothy A. |
author_sort | Maile, Rachel E. |
collection | PubMed |
description | Camera traps have become in situ sensors for collecting information on animal abundance and occupancy estimates. When deployed over a large landscape, camera traps have become ideal for measuring the health of ecosystems, particularly in unstable habitats where it can be dangerous or even impossible to observe using conventional methods. However, manual processing of imagery is extremely time and labor intensive. Because of the associated expense, many studies have started to employ machine‐learning tools, such as convolutional neural networks (CNNs). One drawback for the majority of networks is that a large number of images (millions) are necessary to devise an effective identification or classification model. This study examines specific factors pertinent to camera trap placement in the field that may influence the accuracy metrics of a deep‐learning model that has been trained with a small set of images. False negatives and false positives may occur due to a variety of environmental factors that make it difficult for even a human observer to classify, including local weather patterns and daylight. We transfer‐trained a CNN to detect 16 different object classes (14 animal species, humans, and fires) across 9576 images taken from camera traps placed in the Chernobyl Exclusion Zone. After analyzing wind speed, cloud cover, temperature, image contrast, and precipitation, there was not a significant correlation between CNN success and ambient conditions. However, a possible positive relationship between temperature and CNN success was noted. Furthermore, we found that the model was more successful when images were taken during the day as well as when precipitation was not present. This study suggests that while qualitative site‐specific factors may confuse quantitative classification algorithms such as CNNs, training with a dynamic training set can account for ambient conditions so that they do not have a significant impact on CNN success. |
format | Online Article Text |
id | pubmed-10477951 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | John Wiley and Sons Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10477951 2023-09-06 The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application. Maile, Rachel E.; Duggan, Matthew T.; Mousseau, Timothy A. Ecol Evol, Research Articles. John Wiley and Sons Inc. 2023-09-05 /pmc/articles/PMC10477951/ /pubmed/37674649 http://dx.doi.org/10.1002/ece3.10454 Text en © 2023 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Articles; Maile, Rachel E.; Duggan, Matthew T.; Mousseau, Timothy A.; The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application |
title | The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application |
title_full | The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application |
title_fullStr | The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application |
title_full_unstemmed | The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application |
title_short | The successes and pitfalls: Deep‐learning effectiveness in a Chernobyl field camera trap application |
title_sort | successes and pitfalls: deep‐learning effectiveness in a chernobyl field camera trap application |
topic | Research Articles |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10477951/ https://www.ncbi.nlm.nih.gov/pubmed/37674649 http://dx.doi.org/10.1002/ece3.10454 |
work_keys_str_mv | AT mailerachele thesuccessesandpitfallsdeeplearningeffectivenessinachernobylfieldcameratrapapplication AT dugganmatthewt thesuccessesandpitfallsdeeplearningeffectivenessinachernobylfieldcameratrapapplication AT mousseautimothya thesuccessesandpitfallsdeeplearningeffectivenessinachernobylfieldcameratrapapplication AT mailerachele successesandpitfallsdeeplearningeffectivenessinachernobylfieldcameratrapapplication AT dugganmatthewt successesandpitfallsdeeplearningeffectivenessinachernobylfieldcameratrapapplication AT mousseautimothya successesandpitfallsdeeplearningeffectivenessinachernobylfieldcameratrapapplication |