
Image classification model based on large kernel attention mechanism and relative position self-attention mechanism

Bibliographic Details
Main Authors: Liu, Siqi, Wei, Jiangshu, Liu, Gang, Zhou, Bei
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2023
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280586/
https://www.ncbi.nlm.nih.gov/pubmed/37346614
http://dx.doi.org/10.7717/peerj-cs.1344
author Liu, Siqi
Wei, Jiangshu
Liu, Gang
Zhou, Bei
collection PubMed
description The Transformer has achieved great success in many computer vision tasks. As its exploration has deepened, researchers have found that Transformers capture long-range features better than convolutional neural networks (CNNs), but local feature details deteriorate when a Transformer extracts local features. Conversely, although a CNN is adept at capturing local feature details, it cannot easily obtain a global representation of features. To address both problems, this paper proposes a hybrid CNN-Transformer model inspired by Visual Attention Network (VAN) and CoAtNet. The model introduces Large Kernel Attention (LKA) into the CNN to compensate for its difficulty in capturing a global representation of features, while Transformer blocks with a relative position self-attention variant alleviate the deterioration of local feature details in the Transformer. The resulting model combines the advantages of both structures, obtaining local feature details more accurately and capturing relationships between distant features more efficiently over a large receptive field. Experiments on image classification without additional training data show that the proposed model achieves excellent results with fewer model parameters on the CIFAR-10, CIFAR-100, and Birds 400 datasets (the latter a public dataset on the Kaggle platform). Among the variants, SE_LKACAT achieves a Top-1 accuracy of 98.01% on CIFAR-10 with only 7.5M parameters.
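The record reproduces only the abstract, not the implementation. As a rough, non-authoritative orientation, the sketch below shows minimal PyTorch versions of the two mechanisms named in the title, assuming that LKA follows the decomposition published for VAN (a 5x5 depthwise convolution, a 7x7 depthwise dilated convolution with dilation 3, and a 1x1 convolution) and that the relative position self-attention uses a CoAtNet-style learned bias table added to the attention logits. Class and parameter names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LargeKernelAttention(nn.Module):
    """LKA as published for VAN: a 5x5 depthwise conv, a 7x7 depthwise
    dilated conv (dilation 3) and a 1x1 pointwise conv approximate a
    21x21 kernel; the resulting map gates the input features."""

    def __init__(self, dim: int):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size=5, padding=2, groups=dim)
        self.dw_dilated = nn.Conv2d(dim, dim, kernel_size=7, padding=9,
                                    groups=dim, dilation=3)
        self.pw = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # attention map gates the input feature map


class RelPosSelfAttention(nn.Module):
    """Single-head self-attention with a learned relative position bias
    added to the attention logits (CoAtNet-style variant), for a fixed
    size x size grid of tokens."""

    def __init__(self, dim: int, size: int):
        super().__init__()
        self.scale = dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # one learnable bias per possible 2-D relative offset
        self.rel_bias = nn.Parameter(torch.zeros((2 * size - 1) ** 2))
        coords = torch.stack(torch.meshgrid(
            torch.arange(size), torch.arange(size), indexing="ij")).flatten(1)
        rel = coords[:, :, None] - coords[:, None, :]   # 2 x N x N offsets
        rel += size - 1                                 # shift offsets to >= 0
        self.register_buffer("rel_index",
                             rel[0] * (2 * size - 1) + rel[1])  # N x N

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, dim) with N = size * size flattened grid tokens
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) * self.scale
        logits = logits + self.rel_bias[self.rel_index]
        return self.proj(F.softmax(logits, dim=-1) @ v)


if __name__ == "__main__":
    # smoke test on random data with illustrative shapes
    feats = torch.randn(2, 64, 32, 32)
    tokens = torch.randn(2, 8 * 8, 64)
    print(LargeKernelAttention(64)(feats).shape)     # (2, 64, 32, 32)
    print(RelPosSelfAttention(64, 8)(tokens).shape)  # (2, 64, 64)
```

In the hybrid design described in the abstract, blocks of the first kind would sit in the convolutional stages and blocks of the second kind in the Transformer stages; how SE_LKACAT actually combines them (including the squeeze-and-excitation component its name suggests) is specified only in the full article.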
format Online
Article
Text
id pubmed-10280586
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-10280586 2023-06-21 Image classification model based on large kernel attention mechanism and relative position self-attention mechanism. Liu, Siqi; Wei, Jiangshu; Liu, Gang; Zhou, Bei. PeerJ Comput Sci, Artificial Intelligence. PeerJ Inc. 2023-04-21 /pmc/articles/PMC10280586/ /pubmed/37346614 http://dx.doi.org/10.7717/peerj-cs.1344 ©2023 Liu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
title Image classification model based on large kernel attention mechanism and relative position self-attention mechanism
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280586/
https://www.ncbi.nlm.nih.gov/pubmed/37346614
http://dx.doi.org/10.7717/peerj-cs.1344