
Insect recognition based on complementary features from multiple views

Insect pest recognition has always been a significant branch of agriculture and ecology. The slight variance among different kinds of insects in appearance makes it hard for human experts to recognize. It is increasingly imperative to finely recognize specific insects by employing machine learning methods. In this study, we proposed a feature fusion network to synthesize feature presentations in different backbone models. Firstly, we employed one CNN-based backbone ResNet, and two attention-based backbones Vision Transformer and Swin Transformer to localize the important regions of insect images with Grad-CAM. During this process, we designed new architectures for these two Transformers to enable Grad-CAM to be applicable in such attention-based models. Then we further proposed an attention-selection mechanism to reconstruct the attention area by delicately integrating the important regions, enabling these partial but key expressions to complement each other. We only need part of the image scope that represents the most crucial decision-making information for insect recognition. We randomly selected 20 species of insects from the IP102 dataset and then adopted all 102 kinds of insects to test the classification performance. Experimental results show that the proposed approach outperforms other advanced CNN-based models. More importantly, our attention-selection mechanism demonstrates good robustness to augmented images.


Bibliographic Details
Main Authors: An, Jingmin, Du, Yong, Hong, Peng, Zhang, Lei, Weng, Xiaogang
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9940688/
https://www.ncbi.nlm.nih.gov/pubmed/36806209
http://dx.doi.org/10.1038/s41598-023-29600-1
author An, Jingmin
Du, Yong
Hong, Peng
Zhang, Lei
Weng, Xiaogang
collection PubMed
description Insect pest recognition has always been a significant branch of agriculture and ecology. The slight variance among different kinds of insects in appearance makes it hard for human experts to recognize. It is increasingly imperative to finely recognize specific insects by employing machine learning methods. In this study, we proposed a feature fusion network to synthesize feature presentations in different backbone models. Firstly, we employed one CNN-based backbone ResNet, and two attention-based backbones Vision Transformer and Swin Transformer to localize the important regions of insect images with Grad-CAM. During this process, we designed new architectures for these two Transformers to enable Grad-CAM to be applicable in such attention-based models. Then we further proposed an attention-selection mechanism to reconstruct the attention area by delicately integrating the important regions, enabling these partial but key expressions to complement each other. We only need part of the image scope that represents the most crucial decision-making information for insect recognition. We randomly selected 20 species of insects from the IP102 dataset and then adopted all 102 kinds of insects to test the classification performance. Experimental results show that the proposed approach outperforms other advanced CNN-based models. More importantly, our attention-selection mechanism demonstrates good robustness to augmented images.
format Online
Article
Text
id pubmed-9940688
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-9940688 2023-02-21
Nature Publishing Group UK 2023-02-20 /pmc/articles/PMC9940688/ /pubmed/36806209 http://dx.doi.org/10.1038/s41598-023-29600-1 Text en © The Author(s) 2023. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
title Insect recognition based on complementary features from multiple views
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9940688/
https://www.ncbi.nlm.nih.gov/pubmed/36806209
http://dx.doi.org/10.1038/s41598-023-29600-1