
Non-smooth Bayesian learning for artificial neural networks

Artificial neural networks (ANNs) are widely used in supervised machine learning to analyze signals or images in many applications. Given an annotated learning database, one of the main challenges is to optimize the network weights. Considerable work on solving optimization problems or improving...


Bibliographic Details
Main Authors: Fakhfakh, Mohamed, Chaari, Lotfi, Bouaziz, Bassem, Gargouri, Faiez
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9244188/
https://www.ncbi.nlm.nih.gov/pubmed/35789599
http://dx.doi.org/10.1007/s12652-022-04073-8
_version_ 1784738470812450816
author Fakhfakh, Mohamed
Chaari, Lotfi
Bouaziz, Bassem
Gargouri, Faiez
author_facet Fakhfakh, Mohamed
Chaari, Lotfi
Bouaziz, Bassem
Gargouri, Faiez
author_sort Fakhfakh, Mohamed
collection PubMed
description Artificial neural networks (ANNs) are widely used in supervised machine learning to analyze signals or images in many applications. Given an annotated learning database, one of the main challenges is to optimize the network weights. Considerable work has been devoted to solving optimization problems and improving optimization methods in machine learning, including gradient-based, Newton-type, and meta-heuristic methods. For the sake of efficiency, regularization is generally used. When non-smooth regularizers are used to promote sparse networks, such as the ℓ1 norm, this optimization becomes challenging because the target criterion is non-differentiable. In this paper, we propose an MCMC-based optimization scheme formulated in a Bayesian framework. The proposed scheme solves the above-mentioned sparse optimization problem using an efficient sampling scheme and Hamiltonian dynamics. The designed optimizer is evaluated on four datasets, and the results are verified by a comparative study with two CNNs. Promising results show that the proposed method allows ANNs, even with low complexity levels, to reach high accuracy rates of up to [Formula: see text]. The proposed method is also faster and more robust with respect to overfitting. More importantly, its training step is much faster than that of all competing algorithms.
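The abstract stays at a high level. As a rough illustration of the idea only — not the authors' actual sampler, whose non-smooth Hamiltonian scheme is given in the paper — the sketch below runs plain Hamiltonian Monte Carlo on a toy logistic-regression posterior with an ℓ1 prior, using a subgradient where the penalty is non-differentiable and tracking the lowest-energy (MAP-like) sample. All names, data, and settings here are illustrative assumptions.

```python
import numpy as np

# Toy data: 2-class logistic regression as a stand-in for a small network.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.0, 0.0, 0.5])          # sparse ground truth
y = (1 / (1 + np.exp(-X @ true_w)) > rng.uniform(size=200)).astype(float)

lam = 1.0  # strength of the sparsity-promoting l1 prior

def neg_log_post(w):
    z = X @ w
    nll = np.sum(np.logaddexp(0.0, z) - y * z)          # logistic loss
    return nll + lam * np.sum(np.abs(w))                # + l1 penalty

def grad(w):
    p = 1 / (1 + np.exp(-(X @ w)))
    return X.T @ (p - y) + lam * np.sign(w)             # subgradient of |w|

def hmc_step(w, eps=1e-3, n_leap=20):
    """One HMC proposal: leapfrog integration + Metropolis accept/reject."""
    p0 = rng.normal(size=w.size)
    wn, p = w.copy(), p0.copy()
    p -= 0.5 * eps * grad(wn)                           # half step (momentum)
    for _ in range(n_leap - 1):
        wn += eps * p                                   # full step (position)
        p -= eps * grad(wn)                             # full step (momentum)
    wn += eps * p
    p -= 0.5 * eps * grad(wn)                           # final half step
    h0 = neg_log_post(w) + 0.5 * p0 @ p0                # Hamiltonian before
    h1 = neg_log_post(wn) + 0.5 * p @ p                 # Hamiltonian after
    return wn if np.log(rng.uniform()) < h0 - h1 else w

w = np.zeros(5)
best = w
for _ in range(500):
    w = hmc_step(w)
    if neg_log_post(w) < neg_log_post(best):
        best = w                                        # keep MAP-like sample
```

In this simplified sketch, the subgradient stands in for the paper's proper handling of the non-smooth term; the Metropolis test keeps the chain targeting the posterior even when the leapfrog integration is inexact.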
format Online
Article
Text
id pubmed-9244188
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer Berlin Heidelberg
record_format MEDLINE/PubMed
spelling pubmed-9244188 2022-06-30 Non-smooth Bayesian learning for artificial neural networks Fakhfakh, Mohamed Chaari, Lotfi Bouaziz, Bassem Gargouri, Faiez J Ambient Intell Humaniz Comput Original Research
Springer Berlin Heidelberg 2022-06-25 /pmc/articles/PMC9244188/ /pubmed/35789599 http://dx.doi.org/10.1007/s12652-022-04073-8 Text en © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
spellingShingle Original Research
Fakhfakh, Mohamed
Chaari, Lotfi
Bouaziz, Bassem
Gargouri, Faiez
Non-smooth Bayesian learning for artificial neural networks
title Non-smooth Bayesian learning for artificial neural networks
title_full Non-smooth Bayesian learning for artificial neural networks
title_fullStr Non-smooth Bayesian learning for artificial neural networks
title_full_unstemmed Non-smooth Bayesian learning for artificial neural networks
title_short Non-smooth Bayesian learning for artificial neural networks
title_sort non-smooth bayesian learning for artificial neural networks
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9244188/
https://www.ncbi.nlm.nih.gov/pubmed/35789599
http://dx.doi.org/10.1007/s12652-022-04073-8
work_keys_str_mv AT fakhfakhmohamed nonsmoothbayesianlearningforartificialneuralnetworks
AT chaarilotfi nonsmoothbayesianlearningforartificialneuralnetworks
AT bouazizbassem nonsmoothbayesianlearningforartificialneuralnetworks
AT gargourifaiez nonsmoothbayesianlearningforartificialneuralnetworks