
Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

Owing to the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős–Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
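
For orientation, the evolutionary step the abstract describes (prune the weakest connections, regrow the same number at random) can be sketched in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the authors' implementation: the function name set_rewire, the regrowth weight scale 0.01, and the example layer sizes are illustrative, and zeta stands in for the paper's pruning fraction.

    import numpy as np

    def set_rewire(W, zeta=0.3, rng=None):
        """One SET topology-evolution step (sketch): remove the fraction
        `zeta` of existing connections whose weights are closest to zero,
        then add the same number of new connections at random empty slots."""
        rng = np.random.default_rng() if rng is None else rng
        W = W.copy()
        alive = W != 0                          # current sparse topology
        n_prune = int(zeta * alive.sum())
        if n_prune == 0:
            return W

        # Prune: zero out the n_prune smallest-magnitude existing weights
        # (ties at the cutoff make this approximate).
        mags = np.abs(W[alive])
        cutoff = np.partition(mags, n_prune)[n_prune]
        W[alive & (np.abs(W) < cutoff)] = 0.0

        # Grow: re-enable n_prune connections at randomly chosen empty
        # positions, initialized with small random weights.
        empty = np.argwhere(W == 0)
        picks = rng.choice(len(empty), size=n_prune, replace=False)
        for i, j in empty[picks]:
            W[i, j] = rng.normal(0.0, 0.01)
        return W

    # Example: Erdős–Rényi sparse initialization with connection probability
    # eps*(n+m)/(n*m), following the paper's scheme, then one rewiring step.
    rng = np.random.default_rng(42)
    n, m, eps = 784, 300, 20                    # illustrative layer sizes
    p = eps * (n + m) / (n * m)
    W = np.where(rng.random((n, m)) < p, rng.normal(0.0, 0.01, (n, m)), 0.0)
    W = set_rewire(W, zeta=0.3, rng=rng)

In the full algorithm this rewiring step is applied after each training epoch, so the topology and the weights are learned jointly.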


Bibliographic Details
Main Authors: Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H.; Gibescu, Madeleine; Liotta, Antonio
Format: Online Article Text
Language: English
Published: Nat Commun, Nature Publishing Group UK, 2018-06-19
Subjects: Article
Collection: PubMed (record pubmed-6008460; MEDLINE/PubMed; National Center for Biotechnology Information)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6008460/
https://www.ncbi.nlm.nih.gov/pubmed/29921910
http://dx.doi.org/10.1038/s41467-018-04316-3

© The Author(s) 2018. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.