
Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks

This paper presents new theoretical results on the backpropagation algorithm with smoothing [Formula: see text] regularization and adaptive momentum for feedforward neural networks with a single hidden layer; specifically, we show that the gradient of the error function goes to zero and the weight sequence goes...
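The training scheme described in the abstract — full-batch gradient descent on a one-hidden-layer network, with a smoothed sparsity-inducing penalty and a momentum coefficient that adapts during training — can be sketched as follows. This is an illustrative reconstruction, not the paper's method: the exact smoothing function, the penalty exponent, and the adaptive momentum rule (here, a coefficient tied to the current gradient norm) are assumptions, as the record above truncates the abstract before those details.

```python
import numpy as np

def smoothed_penalty_grad(w, eps=1e-3):
    # Gradient of a smoothed |w|^(1/2)-style penalty, (w^2 + eps)^(1/4):
    # d/dw (w^2 + eps)^(1/4) = 0.5 * w * (w^2 + eps)^(-3/4).
    # The smoothing constant eps avoids the non-differentiability at w = 0
    # (the exact smoothing used in the paper is an assumption here).
    return 0.5 * w * (w * w + eps) ** -0.75

def train(X, y, hidden=8, lr=0.05, lam=1e-4, epochs=500, seed=0):
    """Batch gradient descent with adaptive momentum and a smoothed
    regularizer for a one-hidden-layer sigmoid network (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    V1, V2 = np.zeros_like(W1), np.zeros_like(W2)   # momentum terms
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W1)                 # hidden activations (full batch)
        out = sigmoid(H @ W2)               # network output
        # Backpropagated gradients of squared error + smoothed penalty
        d_out = (out - y) * out * (1 - out)
        g2 = H.T @ d_out + lam * smoothed_penalty_grad(W2)
        d_hid = (d_out @ W2.T) * H * (1 - H)
        g1 = X.T @ d_hid + lam * smoothed_penalty_grad(W1)
        # "Adaptive" momentum: coefficient shrinks with the gradient norm,
        # so the momentum term vanishes as a stationary point is approached
        # (one plausible rule; the paper's exact rule is not given here).
        mu = 0.9 * min(1.0, np.linalg.norm(g1) + np.linalg.norm(g2))
        V1 = mu * V1 - lr * g1
        V2 = mu * V2 - lr * g2
        W1 += V1
        W2 += V2
    return W1, W2

# Usage: fit a tiny XOR-like dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train(X, y)
```

Damping the momentum coefficient by the gradient norm is one way to keep the momentum term from interfering with convergence near a stationary point, which is the kind of condition convergence results for momentum methods typically need.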


Bibliographic Details
Main Authors: Fan, Qinwei; Wu, Wei; Zurada, Jacek M.
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4783325/
https://www.ncbi.nlm.nih.gov/pubmed/27066332
http://dx.doi.org/10.1186/s40064-016-1931-0