Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
This paper presents new theoretical results on the backpropagation algorithm with smoothing [Formula: see text] regularization and adaptive momentum for feedforward neural networks with a single hidden layer; specifically, we show that the gradient of the error function goes to zero and the weight sequence goes...
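The abstract describes batch (whole-dataset) gradient training of a single-hidden-layer network with a smoothed regularization penalty and an adaptive momentum coefficient. The sketch below is only an illustration of that general scheme, not the paper's method: the specific penalty is elided in the abstract ("[Formula: see text]"), so a smoothed absolute-value surrogate `sqrt(w^2 + eps)` stands in for it, and the gradient-norm-dependent momentum coefficient `mu` is a hypothetical heuristic.

```python
import numpy as np

def smoothed_penalty_grad(w, eps=1e-3):
    # Gradient of sqrt(w^2 + eps), a smooth surrogate for the
    # non-differentiable |w|; used here as a placeholder penalty.
    return w / np.sqrt(w**2 + eps)

def train_batch(X, y, hidden=8, lr=0.1, lam=1e-3, epochs=500, seed=0):
    """Batch gradient descent with smoothed regularization and
    an adaptive momentum term (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    v1, v2 = np.zeros_like(W1), np.zeros_like(W2)
    for _ in range(epochs):
        H = np.tanh(X @ W1)            # hidden-layer activations
        err = H @ W2 - y               # batch output error
        # full-batch gradients plus smoothed penalty gradient
        g2 = H.T @ err / n + lam * smoothed_penalty_grad(W2)
        g1 = X.T @ ((err @ W2.T) * (1 - H**2)) / n \
             + lam * smoothed_penalty_grad(W1)
        # adaptive momentum: coefficient shrinks as gradients grow
        # (a hypothetical rule, not the paper's formula)
        mu = 0.9 / (1.0 + np.sqrt(np.sum(g1**2) + np.sum(g2**2)))
        v1 = mu * v1 - lr * g1
        v2 = mu * v2 - lr * g2
        W1 += v1
        W2 += v2
    return W1, W2
```

Under the convergence results the abstract alludes to, one would expect the training error and gradient norm of such a scheme to decrease over epochs, with the smoothing term keeping the penalty differentiable at zero.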
Main authors: Fan, Qinwei; Wu, Wei; Zurada, Jacek M.
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2016
Online access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4783325/
- https://www.ncbi.nlm.nih.gov/pubmed/27066332
- http://dx.doi.org/10.1186/s40064-016-1931-0
Similar items
- Regularized adversarial learning for normalization of multi-batch untargeted metabolomics data
  by: Dmitrenko, Andrei, et al.
  Published: (2023)
- Extra Proximal-Gradient Network with Learned Regularization for Image Compressive Sensing Reconstruction
  by: Zhang, Qingchao, et al.
  Published: (2022)
- Backpropagation With Sparsity Regularization for Spiking Neural Network Learning
  by: Yan, Yulong, et al.
  Published: (2022)
- Streaming Batch Eigenupdates for Hardware Neural Networks
  by: Hoskins, Brian D., et al.
  Published: (2019)
- Survival Analysis with High-Dimensional Omics Data Using a Threshold Gradient Descent Regularization-Based Neural Network Approach
  by: Fan, Yu, et al.
  Published: (2022)