
Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks


Bibliographic Details
Main Authors: Fan, Qinwei, Wu, Wei, Zurada, Jacek M.
Format: Online Article Text
Language: English
Published: Springer International Publishing 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4783325/
https://www.ncbi.nlm.nih.gov/pubmed/27066332
http://dx.doi.org/10.1186/s40064-016-1931-0
Description
Summary: This paper presents new theoretical results on the backpropagation algorithm with smoothing L_{1/2} regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are also more general, since we do not require the error function to be quadratic or uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the novel algorithm yields a sparser network structure: it forces weights to become smaller during training so that they can eventually be removed after training, which simplifies the network structure and lowers operation time. Finally, two numerical experiments are presented to illustrate the main results in detail.
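
The following Python sketch illustrates the general idea described in the abstract; it is not the authors' exact algorithm. It trains a single-hidden-layer network by batch gradient descent with a momentum term and a smoothed L_{1/2} penalty, using (w^2 + eps)^{1/4} as a simple smooth surrogate for |w|^{1/2}; the paper's specific smoothing function and adaptive momentum rule are replaced here by simpler stand-ins, and the toy data are assumed for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration only).
X = rng.normal(size=(100, 3))
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(100, 1))

n_hidden, lr, lam, eps, beta = 10, 0.05, 1e-3, 1e-4, 0.9
W1 = rng.normal(scale=0.5, size=(3, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)   # momentum buffers


def smooth_l_half_grad(w):
    # Gradient of the smooth surrogate (w**2 + eps)**0.25 of |w|**0.5.
    return 0.5 * w * (w ** 2 + eps) ** (-0.75)


for epoch in range(2000):
    # Forward pass over the whole batch (single hidden layer, tanh units).
    H = np.tanh(X @ W1)
    out = H @ W2
    err = out - y

    # Backpropagation of the mean squared error.
    g2 = H.T @ err / len(X)
    g1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)

    # Add the gradient of the smoothed L_{1/2} regularizer.
    g1 += lam * smooth_l_half_grad(W1)
    g2 += lam * smooth_l_half_grad(W2)

    # Momentum update; a fixed coefficient beta stands in for the paper's
    # adaptive momentum term.
    V1 = beta * V1 - lr * g1
    V2 = beta * V2 - lr * g2
    W1 += V1
    W2 += V2

# Weights driven toward zero by the penalty can be pruned after training,
# which is the sparsification effect the abstract refers to.
print("near-zero hidden-to-output weights:", int((np.abs(W2) < 1e-2).sum()))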