
The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm


Bibliographic Details
Main Authors: Su, Feng; Yuan, Peijiang; Wang, Yangzhen; Zhang, Chen
Format: Online Article Text
Language: English
Published: Higher Education Press 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5055486/
https://www.ncbi.nlm.nih.gov/pubmed/27502185
http://dx.doi.org/10.1007/s13238-016-0302-5
_version_ 1782458767436152832
author Su, Feng
Yuan, Peijiang
Wang, Yangzhen
Zhang, Chen
author_facet Su, Feng
Yuan, Peijiang
Wang, Yangzhen
Zhang, Chen
author_sort Su, Feng
collection PubMed
description Artificial neural networks (ANNs) are powerful computational tools that are designed to replicate the human brain and adopted to solve a variety of problems in many different fields. Fault tolerance (FT), an important property of ANNs, ensures their reliability when significant portions of a network are lost. In this paper, a fault/noise injection-based (FIB) genetic algorithm (GA) is proposed to construct fault-tolerant ANNs. The FT performance of an FIB-GA was compared with that of a common genetic algorithm, the back-propagation algorithm, and the modification of weights algorithm. The FIB-GA showed a slower fitting speed when solving the exclusive OR (XOR) problem and the overlapping classification problem, but it significantly reduced the errors in cases of single or multiple faults in ANN weights or nodes. Further analysis revealed that the fit weights showed no correlation with the fitting errors in the ANNs constructed with the FIB-GA, suggesting a relatively even distribution of the various fitting parameters. In contrast, the output weights in the training of ANNs implemented with the other three algorithms demonstrated a positive correlation with the errors. Our findings therefore indicate that a combination of the fault/noise injection-based method and a GA is capable of introducing FT to ANNs and imply that the distributed ANNs demonstrate superior FT performance. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s13238-016-0302-5) contains supplementary material, which is available to authorized users.
format Online
Article
Text
id pubmed-5055486
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher Higher Education Press
record_format MEDLINE/PubMed
spelling pubmed-5055486 2016-10-24 The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm Su, Feng Yuan, Peijiang Wang, Yangzhen Zhang, Chen Protein Cell Research Article (abstract as in the description field above) Higher Education Press 2016-08-09 2016-10 /pmc/articles/PMC5055486/ /pubmed/27502185 http://dx.doi.org/10.1007/s13238-016-0302-5 Text en © The Author(s) 2016 Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
spellingShingle Research Article
Su, Feng
Yuan, Peijiang
Wang, Yangzhen
Zhang, Chen
The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm
title The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm
title_full The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm
title_fullStr The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm
title_full_unstemmed The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm
title_short The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm
title_sort superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5055486/
https://www.ncbi.nlm.nih.gov/pubmed/27502185
http://dx.doi.org/10.1007/s13238-016-0302-5
work_keys_str_mv AT sufeng thesuperiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
AT yuanpeijiang thesuperiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
AT wangyangzhen thesuperiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
AT zhangchen thesuperiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
AT sufeng superiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
AT yuanpeijiang superiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
AT wangyangzhen superiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
AT zhangchen superiorfaulttoleranceofartificialneuralnetworktrainingwithafaultnoiseinjectionbasedgeneticalgorithm
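
The abstract in the record above describes the core technique only at a high level: candidate networks are evolved with a genetic algorithm whose fitness is evaluated under injected weight/node faults, so that surviving solutions spread information across parameters rather than relying on any single connection. The following is a minimal, illustrative Python sketch of that general fault/noise-injection idea, not the authors' implementation; the network size (2-4-1), the fault probability, and all GA settings are assumptions made for the example.

```python
# Minimal illustrative sketch (NOT the authors' implementation) of fault/noise-injection-based
# genetic-algorithm (FIB-GA) training as summarized in the abstract: a small 2-4-1 feedforward
# network is evolved on XOR, and each candidate's fitness is its error averaged over random
# weight faults (weights zeroed out), so surviving genomes tolerate missing connections.
# Network size, fault probability, and GA hyperparameters are all assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)

# XOR training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_HIDDEN = 4
# input->hidden weights + hidden biases + hidden->output weights + output bias
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1

def forward(genome, x, fault_mask=None):
    """Run the 2-N_HIDDEN-1 network; fault_mask (same shape as genome) zeroes faulty weights."""
    w = genome if fault_mask is None else genome * fault_mask
    w_ih = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b_h = w[2 * N_HIDDEN:2 * N_HIDDEN + N_HIDDEN]
    w_ho = w[2 * N_HIDDEN + N_HIDDEN:2 * N_HIDDEN + 2 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b_o = w[-1]
    h = np.tanh(x @ w_ih + b_h)
    return 1.0 / (1.0 + np.exp(-(h @ w_ho + b_o)))  # sigmoid output

def fitness(genome, n_fault_draws=20, fault_prob=0.1):
    """Mean squared XOR error averaged over randomly injected weight faults (lower is better)."""
    errs = []
    for _ in range(n_fault_draws):
        mask = (rng.random(genome.size) > fault_prob).astype(float)
        errs.append(np.mean((forward(genome, X, mask) - y) ** 2))
    return float(np.mean(errs))

def evolve(pop_size=60, generations=300, mut_sigma=0.3):
    """Simple GA: truncation selection, one-point crossover, Gaussian mutation."""
    pop = rng.normal(0, 1, size=(pop_size, N_WEIGHTS))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]   # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_WEIGHTS)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, mut_sigma, N_WEIGHTS) * (rng.random(N_WEIGHTS) < 0.2)
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmin([fitness(g) for g in pop])]

if __name__ == "__main__":
    best = evolve()
    print("fault-free XOR outputs:", forward(best, X).ravel().round(2))
    print("mean error under injected faults:", round(fitness(best), 4))
```

Under these assumptions, averaging the fitness over many random fault masks penalizes genomes whose output depends heavily on any single weight, which matches the intuition behind the evenly distributed weights and the improved fault tolerance reported in the abstract.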