Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks

This paper presents new theoretical results on the backpropagation algorithm with smoothing L(1/2) regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are more general than earlier ones, since we require the error function to be neither quadratic nor uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the novel algorithm yields a sparser network structure: it forces weights to become smaller during training, so that they can eventually be removed afterwards, which simplifies the network structure and lowers the operation time. Finally, two numerical experiments are presented to illustrate the main results in detail.
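For context, the training objective behind such a scheme typically has the form sketched below. The piecewise-quartic smoothing polynomial f is an assumption carried over from the authors' earlier work on smoothing L(1/2) regularization; this record does not state the exact construction.

```latex
% Regularized batch error: data-fit term plus smoothed L(1/2) penalty.
% The piecewise-quartic f is an assumed smoothing of |t|; it matches
% |t| in value and slope at t = +-a and is strictly positive at 0.
\[
  E(\mathbf{w}) \;=\; \tilde{E}(\mathbf{w})
    \;+\; \lambda \sum_{i} f(w_i)^{1/2},
  \qquad
  f(t) \;=\;
  \begin{cases}
    |t|, & |t| \ge a,\\[4pt]
    -\dfrac{t^{4}}{8a^{3}} + \dfrac{3t^{2}}{4a} + \dfrac{3a}{8}, & |t| < a,
  \end{cases}
\]
```

Because f(0) = 3a/8 > 0, the composite f(t)^(1/2) is continuously differentiable everywhere, removing the singularity of the raw L(1/2) penalty at the origin; the penalty can then enter a gradient-convergence analysis of the kind the abstract describes while still driving small weights toward zero.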


Bibliographic Details
Main Authors: Fan, Qinwei; Wu, Wei; Zurada, Jacek M.
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2016
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4783325/
https://www.ncbi.nlm.nih.gov/pubmed/27066332
http://dx.doi.org/10.1186/s40064-016-1931-0
_version_ 1782420085720219648
author Fan, Qinwei
Wu, Wei
Zurada, Jacek M.
author_facet Fan, Qinwei
Wu, Wei
Zurada, Jacek M.
author_sort Fan, Qinwei
collection PubMed
description This paper presents new theoretical results on the backpropagation algorithm with smoothing L(1/2) regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are more general than earlier ones, since we require the error function to be neither quadratic nor uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the novel algorithm yields a sparser network structure: it forces weights to become smaller during training, so that they can eventually be removed afterwards, which simplifies the network structure and lowers the operation time. Finally, two numerical experiments are presented to illustrate the main results in detail.
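To make the update rule concrete, here is a minimal runnable sketch of one batch-gradient step with the smoothed penalty and a momentum term. The smoothing functions mirror the form sketched earlier; the adaptive-momentum rule shown (shrinking the momentum coefficient when the gradient is large) and all constants are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    """Quartic smoothing of |w|: equals |w| for |w| >= a and a smooth
    polynomial on (-a, a), so smooth_abs(w)**0.5 is differentiable at 0."""
    poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) < a, poly, np.abs(w))

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs."""
    poly = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) < a, poly, np.sign(w))

def penalty_grad(w, lam=1e-3, a=0.1):
    """Gradient of lam * f(w)**(1/2); well defined since f(0) = 3a/8 > 0."""
    return lam * smooth_abs_grad(w, a) / (2.0 * np.sqrt(smooth_abs(w, a)))

def batch_update(w, grad_error, prev_step, lr=0.05, beta=0.9):
    """One batch update: descend the regularized error, then add a
    momentum term whose coefficient shrinks when the gradient is large
    (a hypothetical stand-in for the paper's adaptive momentum rule)."""
    g = grad_error + penalty_grad(w)
    beta_n = beta / (1.0 + np.linalg.norm(g))  # assumed adaptive scaling
    step = -lr * g + beta_n * prev_step
    return w + step, step

# Toy usage: drive a weight vector toward a sparse minimizer of
# 0.5 * ||w - target||^2 plus the smoothed L(1/2) penalty.
target = np.array([1.0, 0.0, -0.5, 0.02])
w = np.random.default_rng(0).normal(size=4)
step = np.zeros_like(w)
for _ in range(500):
    w, step = batch_update(w, grad_error=w - target, prev_step=step)
print(np.round(w, 3))  # components with small targets are pushed toward zero
```

Under these assumed constants, the toy loop should shrink the components whose targets are near zero, illustrating the sparsification behaviour the description claims for the smoothed L(1/2) penalty.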
format Online
Article
Text
id pubmed-4783325
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-4783325 2016-04-09 Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks Fan, Qinwei Wu, Wei Zurada, Jacek M. Springerplus Research This paper presents new theoretical results on the backpropagation algorithm with smoothing L(1/2) regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are more general than earlier ones, since we require the error function to be neither quadratic nor uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the novel algorithm yields a sparser network structure: it forces weights to become smaller during training, so that they can eventually be removed afterwards, which simplifies the network structure and lowers the operation time. Finally, two numerical experiments are presented to illustrate the main results in detail. Springer International Publishing 2016-03-08 /pmc/articles/PMC4783325/ /pubmed/27066332 http://dx.doi.org/10.1186/s40064-016-1931-0 Text en © Fan et al. 2016 Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
spellingShingle Research
Fan, Qinwei
Wu, Wei
Zurada, Jacek M.
Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
title Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
title_full Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
title_fullStr Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
title_full_unstemmed Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
title_short Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
title_sort convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4783325/
https://www.ncbi.nlm.nih.gov/pubmed/27066332
http://dx.doi.org/10.1186/s40064-016-1931-0
work_keys_str_mv AT fanqinwei convergenceofbatchgradientlearningwithsmoothingregularizationandadaptivemomentumforneuralnetworks
AT wuwei convergenceofbatchgradientlearningwithsmoothingregularizationandadaptivemomentumforneuralnetworks
AT zuradajacekm convergenceofbatchgradientlearningwithsmoothingregularizationandadaptivemomentumforneuralnetworks