Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images
Main Authors: | Jia, Liang; Tian, Ye; Zhang, Junguo |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8868309/ https://www.ncbi.nlm.nih.gov/pubmed/35203145 http://dx.doi.org/10.3390/ani12040437 |
_version_ | 1784656237447610368 |
---|---|
author | Jia, Liang Tian, Ye Zhang, Junguo |
author_facet | Jia, Liang Tian, Ye Zhang, Junguo |
author_sort | Jia, Liang |
collection | PubMed |
description | SIMPLE SUMMARY: Camera traps acquire visual data around the clock without disturbing wildlife, so they are popular with ecological researchers observing animals. Each camera trap may record thousands of images of diverse species, and together the traps can produce millions of images that need to be classified. Many methods have been proposed to classify camera trap images, but almost all of them rely on very deep convolutional neural networks that require intensive computational resources. Such resources may be unavailable, or their cost may become formidable, when the surveillance area is large or greatly expanded. We therefore turn our attention to camera traps organized into groups, where each group's images are processed by an edge device running lightweight networks tailored to that group's images. To achieve this goal, we propose a method that automatically designs networks deployable on edge devices for a given set of images. With the proposed method, researchers without any experience in designing neural networks can develop networks suitable for edge devices. Thus, camera trap images can be processed in a distributed manner through edge devices, lowering the costs of transferring and processing the data accumulated at camera traps. ABSTRACT: Camera traps provide a feasible way for ecological researchers to observe wildlife, and they often produce millions of images of diverse species requiring classification. This classification can be automated with edge devices running convolutional neural networks, but the networks may need to be customized per device because edge devices are highly heterogeneous and resource-limited. This can be addressed by neural architecture search, which designs networks automatically. However, search methods are usually developed on benchmark datasets that differ widely from camera trap images in many respects, including data distribution and aspect ratio. We therefore designed a novel search method that operates directly on camera trap images whose resolution is lowered while their aspect ratios are maintained; the search is guided by a loss function whose hyperparameter is theoretically derived for finding lightweight networks. The search was applied to two datasets and produced lightweight networks that were tested on an NVIDIA Jetson TX2 edge device. The resulting accuracies were competitive. In conclusion, researchers without expertise in network design can obtain networks optimized for edge devices and thus establish or expand surveillance areas in a cost-effective way. (An illustrative sketch of these ideas appears after this record.) |
format | Online Article Text |
id | pubmed-8868309 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8868309 2022-02-25 Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images Jia, Liang Tian, Ye Zhang, Junguo Animals (Basel) Article MDPI 2022-02-11 /pmc/articles/PMC8868309/ /pubmed/35203145 http://dx.doi.org/10.3390/ani12040437 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Jia, Liang Tian, Ye Zhang, Junguo Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images |
title | Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images |
title_full | Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images |
title_fullStr | Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images |
title_full_unstemmed | Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images |
title_short | Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images |
title_sort | domain-aware neural architecture search for classifying animals in camera trap images |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8868309/ https://www.ncbi.nlm.nih.gov/pubmed/35203145 http://dx.doi.org/10.3390/ani12040437 |
work_keys_str_mv | AT jialiang domainawareneuralarchitecturesearchforclassifyinganimalsincameratrapimages AT tianye domainawareneuralarchitecturesearchforclassifyinganimalsincameratrapimages AT zhangjunguo domainawareneuralarchitecturesearchforclassifyinganimalsincameratrapimages |
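The description above names two concrete ingredients of the approach: image resolution is lowered while aspect ratios are preserved, and the architecture search is guided by a loss whose hyperparameter steers it toward lightweight networks. The record does not reproduce the paper's actual preprocessing or loss, so the following Python sketch is only a rough illustration of how such pieces commonly look; the function names, the parameter budget, and the weight `lam` are assumptions, not values or APIs from the paper.

```python
# Minimal illustrative sketch (not the authors' implementation):
# (1) downscale a camera trap image while keeping its aspect ratio, and
# (2) combine classification loss with a size penalty so the search
#     favors lightweight candidate networks for edge devices.

from PIL import Image
import torch
import torch.nn.functional as F


def resize_keep_aspect(img: Image.Image, short_side: int = 256) -> Image.Image:
    """Lower the resolution so the shorter side equals `short_side`,
    preserving the original aspect ratio (no square cropping)."""
    w, h = img.size
    scale = short_side / min(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)


def search_loss(logits: torch.Tensor, labels: torch.Tensor,
                num_params: int, target_params: int = 3_000_000,
                lam: float = 0.1) -> torch.Tensor:
    """Cross-entropy plus a penalty for exceeding a parameter budget.
    Candidates larger than `target_params` are penalized, which biases
    the search toward networks deployable on a resource-limited device."""
    ce = F.cross_entropy(logits, labels)
    over_budget = max(num_params / target_params - 1.0, 0.0)
    return ce + lam * over_budget
```

In a generic search of this kind, each candidate architecture would be scored with such a loss and the best accuracy-versus-size trade-off kept; the 3-million-parameter budget and lam = 0.1 above are placeholders, not figures reported by the authors.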