
A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions

Bibliographic Details
Main Authors: Hu, XiuJian, Sheng, Guanglei, Zhang, Daohua, Li, Lin
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2023
Subjects: Algorithms and Analysis of Algorithms
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280407/
https://www.ncbi.nlm.nih.gov/pubmed/37346580
http://dx.doi.org/10.7717/peerj-cs.1302
_version_ 1785060787402833920
author Hu, XiuJian
Sheng, Guanglei
Zhang, Daohua
Li, Lin
author_facet Hu, XiuJian
Sheng, Guanglei
Zhang, Daohua
Li, Lin
author_sort Hu, XiuJian
collection PubMed
description The residual structure has a strong influence on the design of neural network models, and models built on it perform well in computer vision tasks. However, the performance of classical residual networks is restricted by the size of the receptive field, channel information, spatial information, and other factors. In this article, a novel residual structure is proposed. We modify the identity mapping and the down-sampling block to obtain a larger effective receptive field, and ablation studies verify its performance in channel information fusion and spatial feature extraction. To further verify its feature extraction capability, a non-deep convolutional neural network (CNN) was designed and tested on the CIFAR-10 and CIFAR-100 benchmarks using a naive training method. Under the same training parameters, our model outperforms other mainstream networks: its accuracy is 3.08 percentage points higher than ResNet50 and 1.38 percentage points higher than ResNeXt50. Compared with SeResNet152, it is 0.29 percentage points higher while being trained for 50 fewer epochs.
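
The title and abstract describe a bottleneck-style residual block in which the 1×1 convolutions are replaced by 3×3 convolutions, additional convolutions are stacked, and the identity-mapping/down-sampling path is modified. The record does not give the exact layer counts, channel widths, or shortcut design, so the PyTorch sketch below is only a rough illustration of that idea under assumed choices: the Conv3x3ResidualBlock name, num_convs=3, and the 3×3 projection shortcut are assumptions for illustration, not the authors' published architecture.

# Hypothetical sketch of a residual block built only from 3x3 convolutions,
# as suggested by the title; layer counts and the shortcut design are assumed.
import torch
import torch.nn as nn


class Conv3x3ResidualBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1, num_convs: int = 3):
        super().__init__()
        layers = []
        channels = in_channels
        for i in range(num_convs):
            # Every convolution is 3x3 (instead of the 1x1 projections of a
            # classic bottleneck), which enlarges the effective receptive field.
            layers += [
                nn.Conv2d(channels, out_channels, kernel_size=3,
                          stride=stride if i == 0 else 1, padding=1, bias=False),
                nn.BatchNorm2d(out_channels),
            ]
            if i < num_convs - 1:
                layers.append(nn.ReLU(inplace=True))
            channels = out_channels
        self.body = nn.Sequential(*layers)

        # Modified shortcut (assumption): when the shape changes, down-sample
        # with a 3x3 convolution as well, rather than the usual 1x1 projection.
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          stride=stride, padding=1, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        else:
            self.shortcut = nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + self.shortcut(x))


# Quick shape check on a CIFAR-sized input (32x32 RGB feature map).
if __name__ == "__main__":
    block = Conv3x3ResidualBlock(64, 128, stride=2)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 128, 16, 16])

A 3×3 projection on the shortcut is one plausible way to also widen the receptive field of the down-sampling path; consult the article itself for the block the authors actually evaluated.
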
format Online
Article
Text
id pubmed-10280407
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-10280407 2023-06-21 A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions Hu, XiuJian Sheng, Guanglei Zhang, Daohua Li, Lin PeerJ Comput Sci Algorithms and Analysis of Algorithms The residual structure has a strong influence on the design of neural network models, and models built on it perform well in computer vision tasks. However, the performance of classical residual networks is restricted by the size of the receptive field, channel information, spatial information, and other factors. In this article, a novel residual structure is proposed. We modify the identity mapping and the down-sampling block to obtain a larger effective receptive field, and ablation studies verify its performance in channel information fusion and spatial feature extraction. To further verify its feature extraction capability, a non-deep convolutional neural network (CNN) was designed and tested on the CIFAR-10 and CIFAR-100 benchmarks using a naive training method. Under the same training parameters, our model outperforms other mainstream networks: its accuracy is 3.08 percentage points higher than ResNet50 and 1.38 percentage points higher than ResNeXt50. Compared with SeResNet152, it is 0.29 percentage points higher while being trained for 50 fewer epochs. PeerJ Inc. 2023-03-31 /pmc/articles/PMC10280407/ /pubmed/37346580 http://dx.doi.org/10.7717/peerj-cs.1302 Text en ©2023 Hu et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Algorithms and Analysis of Algorithms
Hu, XiuJian
Sheng, Guanglei
Zhang, Daohua
Li, Lin
A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions
title A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions
title_full A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions
title_fullStr A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions
title_full_unstemmed A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions
title_short A novel residual block: replace Conv1×1 with Conv3×3 and stack more convolutions
title_sort novel residual block: replace conv1×1 with conv3×3 and stack more convolutions
topic Algorithms and Analysis of Algorithms
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280407/
https://www.ncbi.nlm.nih.gov/pubmed/37346580
http://dx.doi.org/10.7717/peerj-cs.1302
work_keys_str_mv AT huxiujian anovelresidualblockreplaceconv11withconv33andstackmoreconvolutions
AT shengguanglei anovelresidualblockreplaceconv11withconv33andstackmoreconvolutions
AT zhangdaohua anovelresidualblockreplaceconv11withconv33andstackmoreconvolutions
AT lilin anovelresidualblockreplaceconv11withconv33andstackmoreconvolutions
AT huxiujian novelresidualblockreplaceconv11withconv33andstackmoreconvolutions
AT shengguanglei novelresidualblockreplaceconv11withconv33andstackmoreconvolutions
AT zhangdaohua novelresidualblockreplaceconv11withconv33andstackmoreconvolutions
AT lilin novelresidualblockreplaceconv11withconv33andstackmoreconvolutions