A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function

In its deeper layers, ResNet relies heavily on skip connections and ReLU. Although skip connections have proven useful, a major issue arises when the dimensions of adjacent layers do not match: techniques such as zero-padding or projection are then needed to reconcile them. These adjustments complicate the network architecture, increasing the number of parameters and the computational cost. Another problem is the vanishing gradient caused by ReLU. In our model, after making appropriate adjustments to the inception blocks, we replace the deeper layers of ResNet with modified inception blocks and replace ReLU with our non-monotonic activation function (NMAF). To reduce the number of parameters, we use symmetric factorization and [Formula: see text] convolutions. Together, these two techniques reduced the parameter count by around 6 M, which in turn cut the run time by 30 s/epoch. Unlike ReLU, NMAF addresses the deactivation problem for non-positive inputs by activating negative values and outputting small negative numbers instead of ReLU's zero, which improved convergence speed and raised accuracy by 5%, 15%, and 5% on the non-noisy datasets, and by 5%, 6%, and 21% on the noisy datasets.


Bibliographic Details
Main Authors: Yahya, Ali Abdullah, Liu, Kui, Hawbani, Ammar, Wang, Yibin, Hadi, Ali Naser
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10056718/
https://www.ncbi.nlm.nih.gov/pubmed/36991687
http://dx.doi.org/10.3390/s23062976
_version_ 1785016192261423104
author Yahya, Ali Abdullah
Liu, Kui
Hawbani, Ammar
Wang, Yibin
Hadi, Ali Naser
author_facet Yahya, Ali Abdullah
Liu, Kui
Hawbani, Ammar
Wang, Yibin
Hadi, Ali Naser
author_sort Yahya, Ali Abdullah
collection PubMed
description In its deeper layers, ResNet relies heavily on skip connections and ReLU. Although skip connections have proven useful, a major issue arises when the dimensions of adjacent layers do not match: techniques such as zero-padding or projection are then needed to reconcile them. These adjustments complicate the network architecture, increasing the number of parameters and the computational cost. Another problem is the vanishing gradient caused by ReLU. In our model, after making appropriate adjustments to the inception blocks, we replace the deeper layers of ResNet with modified inception blocks and replace ReLU with our non-monotonic activation function (NMAF). To reduce the number of parameters, we use symmetric factorization and [Formula: see text] convolutions. Together, these two techniques reduced the parameter count by around 6 M, which in turn cut the run time by 30 s/epoch. Unlike ReLU, NMAF addresses the deactivation problem for non-positive inputs by activating negative values and outputting small negative numbers instead of ReLU's zero, which improved convergence speed and raised accuracy by 5%, 15%, and 5% on the non-noisy datasets, and by 5%, 6%, and 21% on the noisy datasets.
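The two ideas in the abstract can be illustrated concretely. This record does not give NMAF's exact formula, so the sketch below uses a hypothetical Mish-style stand-in (x · tanh(softplus(x))), which, like the NMAF described, is non-monotonic and maps negative inputs to small negative outputs rather than ReLU's hard zero; it also counts the weights saved by factorizing a square convolution into two symmetric one-dimensional ones, Inception-style. All function names here are illustrative, not from the paper.

```python
import numpy as np

def relu(x):
    # ReLU clamps every non-positive input to exactly zero,
    # deactivating those units and zeroing their gradients.
    return np.maximum(x, 0.0)

def nmaf_like(x):
    # Hypothetical stand-in for the paper's NMAF (exact formula not in
    # this record): a Mish-style non-monotonic activation whose negative
    # inputs yield small negative outputs instead of zero.
    softplus = np.logaddexp(0.0, x)  # numerically stable log(1 + e^x)
    return x * np.tanh(softplus)

def conv_params(kh, kw, c_in, c_out):
    # Weight count of a kh x kw convolution layer (bias ignored).
    return kh * kw * c_in * c_out

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(relu(x))       # negatives become 0.0
print(nmaf_like(x))  # negatives stay small but nonzero

# Symmetric factorization: one n x n conv becomes a 1 x n conv followed
# by an n x 1 conv, shrinking the kernel weight count from n^2 to 2n
# per input/output channel pair.
full = conv_params(3, 3, 64, 64)
factored = conv_params(1, 3, 64, 64) + conv_params(3, 1, 64, 64)
print(full, factored)
```

For a 3 × 3 kernel with 64 input and 64 output channels, factorization cuts the weights by a third (36,864 to 24,576), which is the mechanism behind the roughly 6 M-parameter reduction the abstract reports.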
format Online
Article
Text
id pubmed-10056718
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-100567182023-03-30 A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function Yahya, Ali Abdullah Liu, Kui Hawbani, Ammar Wang, Yibin Hadi, Ali Naser Sensors (Basel) Article In its deeper layers, ResNet relies heavily on skip connections and ReLU. Although skip connections have proven useful, a major issue arises when the dimensions of adjacent layers do not match: techniques such as zero-padding or projection are then needed to reconcile them. These adjustments complicate the network architecture, increasing the number of parameters and the computational cost. Another problem is the vanishing gradient caused by ReLU. In our model, after making appropriate adjustments to the inception blocks, we replace the deeper layers of ResNet with modified inception blocks and replace ReLU with our non-monotonic activation function (NMAF). To reduce the number of parameters, we use symmetric factorization and [Formula: see text] convolutions. Together, these two techniques reduced the parameter count by around 6 M, which in turn cut the run time by 30 s/epoch. Unlike ReLU, NMAF addresses the deactivation problem for non-positive inputs by activating negative values and outputting small negative numbers instead of ReLU's zero, which improved convergence speed and raised accuracy by 5%, 15%, and 5% on the non-noisy datasets, and by 5%, 6%, and 21% on the noisy datasets. MDPI 2023-03-09 /pmc/articles/PMC10056718/ /pubmed/36991687 http://dx.doi.org/10.3390/s23062976 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Yahya, Ali Abdullah
Liu, Kui
Hawbani, Ammar
Wang, Yibin
Hadi, Ali Naser
A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function
title A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function
title_full A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function
title_fullStr A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function
title_full_unstemmed A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function
title_short A Novel Image Classification Method Based on Residual Network, Inception, and Proposed Activation Function
title_sort novel image classification method based on residual network, inception, and proposed activation function
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10056718/
https://www.ncbi.nlm.nih.gov/pubmed/36991687
http://dx.doi.org/10.3390/s23062976
work_keys_str_mv AT yahyaaliabdullah anovelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT liukui anovelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT hawbaniammar anovelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT wangyibin anovelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT hadialinaser anovelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT yahyaaliabdullah novelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT liukui novelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT hawbaniammar novelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT wangyibin novelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction
AT hadialinaser novelimageclassificationmethodbasedonresidualnetworkinceptionandproposedactivationfunction