Deep Residual Network in Network
The deep network in network (DNIN) model is an efficient instance and an important extension of the convolutional neural network (CNN), consisting of alternating convolutional and pooling layers. In this model, a multilayer perceptron (MLP), a nonlinear function, replaces the linear...
Main Authors: Alaeddine, Hmidi; Jihene, Malek
Format: Online Article Text
Language: English
Published: Hindawi, 2021
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7925065/ https://www.ncbi.nlm.nih.gov/pubmed/33679966 http://dx.doi.org/10.1155/2021/6659083
author | Alaeddine, Hmidi Jihene, Malek |
collection | PubMed |
description | The deep network in network (DNIN) model is an efficient instance and an important extension of the convolutional neural network (CNN), consisting of alternating convolutional and pooling layers. In this model, a multilayer perceptron (MLP), a nonlinear function, replaces the linear convolution filter. Increasing the depth of DNIN can also improve classification accuracy, but its training becomes more difficult, learning slows, and accuracy saturates and then degrades. This paper presents a new deep residual network in network (DrNIN) model, a deeper variant of DNIN. The model is an interesting architecture for on-chip implementation on FPGAs and can be applied to a variety of image recognition applications. It has a homogeneous, variable-length architecture governed by the hyperparameter "L", which defines the model depth. We apply the residual learning framework to DNIN, explicitly reformulating the convolutional layers as residual learning functions to mitigate the vanishing gradient problem and to ease and speed up training. We provide a comprehensive study showing that DrNIN models gain accuracy from significantly increased depth. On the CIFAR-10 dataset, we evaluate the proposed models with a depth of up to L = 5 DrMLPconv layers, 1.66x deeper than DNIN. The experimental results demonstrate the efficiency of the proposed method: the added depth gives the model a greater capacity to represent features and thus yields better recognition performance. (A minimal code sketch of this residual reformulation follows the record below.) |
format | Online Article Text |
id | pubmed-7925065 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-7925065, 2021-03-04. Deep Residual Network in Network. Alaeddine, Hmidi; Jihene, Malek. Comput Intell Neurosci, Research Article. Hindawi, published 2021-02-23. /pmc/articles/PMC7925065/ /pubmed/33679966 http://dx.doi.org/10.1155/2021/6659083. Text, English. Copyright © 2021 Hmidi Alaeddine and Malek Jihene. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
title | Deep Residual Network in Network |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7925065/ https://www.ncbi.nlm.nih.gov/pubmed/33679966 http://dx.doi.org/10.1155/2021/6659083 |
work_keys_str_mv | AT alaeddinehmidi deepresidualnetworkinnetwork AT jihenemalek deepresidualnetworkinnetwork |
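The record's description reformulates DNIN's convolutional ("MLPconv") layers as residual functions: each block learns a residual F(x) and outputs y = F(x) + x, so the identity shortcut gives gradients a direct path through the network. Below is a minimal, hypothetical PyTorch sketch of such a residual "DrMLPconv" block. The composition of F assumed here (one 3×3 convolution followed by two 1×1 convolutions, each with batch normalization and ReLU, in the style of the original network-in-network) is an illustration, not the paper's exact specification.

```python
# Hypothetical sketch of a residual mlpconv ("DrMLPconv") block.
# Assumption (not taken from the paper): F(x) is an NIN-style stack of
# one 3x3 convolution plus two 1x1 convolutions, wrapped in an identity
# shortcut y = F(x) + x as in standard residual learning.
import torch
import torch.nn as nn


class DrMLPConv(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2  # "same" padding keeps the spatial size fixed
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            # 1x1 convolutions act as a per-pixel MLP across channels:
            # the "network in network" replacement for a linear filter.
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual reformulation: output F(x) + x instead of F(x), giving
        # the gradient a direct path through the identity shortcut.
        return self.relu(self.body(x) + x)


def make_drnin_trunk(channels: int = 192, depth_l: int = 5) -> nn.Sequential:
    """Stack depth_l blocks, mirroring the paper's hyperparameter L."""
    return nn.Sequential(*[DrMLPConv(channels) for _ in range(depth_l)])


if __name__ == "__main__":
    x = torch.randn(1, 192, 32, 32)  # CIFAR-10-sized feature map; channel count assumed
    y = make_drnin_trunk()(x)
    print(y.shape)  # torch.Size([1, 192, 32, 32])
```

Because each block preserves its input shape, deepening the model is just stacking more blocks (a larger L), which is consistent with the abstract's evaluation at depths up to L = 5.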